Who Is the Godmother of AI and Why We Urgently Need Her
“the boys will make the toys and then the girls will have to clean up”
The facial recognition system couldn’t even see her. In 2015, MIT graduate student Joy Buolamwini discovered that an AI program failed to detect her dark-skinned face – it recognized her only after she covered it with a white mask. That jarring moment revealed a “coded gaze” of bias: the system reflected the priorities and prejudices of its mostly male creators. Buolamwini resolved to change this. She became a pioneering voice demanding that AI respect everyone’s identity and dignity. In doing so, she and other women have stepped into the role of “godmothers” of AI – figures who bring ethics, care, justice, and community to the forefront of tech’s most powerful tools. These Godmothers of AI are not fairy-tale figures, but real leaders like Buolamwini, Fei-Fei Li, Timnit Gebru, Kate Crawford, Daphne Koller, and more. They represent a desperately needed shift in the AI world’s values and vision, one that could determine nothing less than our global future.
The Godfathers vs. the Godmothers
In the lore of technology, we often hear about the “godfathers of AI” – the visionary (and mostly male) scientists who birthed the algorithms that now shape our lives. These men have achieved astounding technical breakthroughs, but the dominant AI paradigm they established has largely focused on profit, power, and scale. In Silicon Valley boardrooms, AI success is too often measured by ever-bigger models, soaring stock prices, and market domination. Ethical concerns or social impacts have tended to be treated as afterthoughts – nice-to-have, once the product is built and monetized. As one AI ethicist wryly observed about Big Tech, “the boys will make the toys and then the girls will have to clean up.” In other words, a handful of companies (led by men) rush to deploy AI at scale, while the task of dealing with biased, unsafe, or exploitative outcomes is left to others – often to women – to sort out.
We’ve already seen the havoc this male-led, move-fast-and-break-things approach can wreak. When Amazon built an AI hiring tool, it “taught itself that male candidates were preferable,” systematically downgrading resumes that mentioned “women’s” anything. When face recognition systems trained on overwhelmingly light-skinned male datasets hit the real world, Black people were misidentified and even wrongly arrested. Social media algorithms (driven by engagement-at-all-costs) have amplified misinformation and polarized communities. Even the “godfathers” themselves are now sounding alarms: one AI pioneer, Geoffrey Hinton, recently warned that unrestrained AI could pose existential risks – a problem created, he notes, by tech companies “moving too fast without enough focus on safety.” It’s increasingly clear that AI’s trajectory cannot be left solely to those racing for profit and power. We urgently need a different kind of leadership at the center of AI – one rooted not just in technical genius, but in wisdom, empathy, and accountability. We need the Godmother of AI.
Ethics and Justice at the Core
If the godfathers of AI built the engine, the godmothers are installing the brakes, the steering wheel, and the ethical compass. They are women who have stood up within a male-dominated industry to say: Slow down. Look at the harm being done. We can do better. Joy Buolamwini’s journey is a perfect example. After experiencing AI’s blindness to faces like hers, she founded the Algorithmic Justice League, an initiative blending research and advocacy to fight bias in algorithms. In 2018, Buolamwini’s landmark study Gender Shades revealed that leading facial recognition systems had error rates up to 34% for darker-skinned women but almost 0% for white men. This finding was a wake-up call: AI was perpetuating inequalities under the sheen of objectivity. Thanks to Buolamwini and her colleagues, tech giants like IBM and Amazon paused or retooled their facial recognition products. She even took her fight to Washington, advising President Joe Biden on AI policy and pushing for legislation to prevent algorithmic harms. Buolamwini often says that biased AI doesn’t just fail some of us – “if AI systems fail people of color, they fail humanity”. Her rallying cry: “If you have a face, you have a place in the conversation about AI.” In other words, no one should be excluded from the future being shaped by these technologies.
Then there’s Dr. Timnit Gebru, a computer scientist who became the conscience of one of the world’s most powerful AI labs – and paid a price for it. Gebru, born in Ethiopia, co-led Google’s Ethical AI team until 2020, when she was ousted after raising concerns about the ethics of large language models. (Google reportedly demanded she retract a research paper warning that ever-bigger AI models can perpetuate racism, sexism, and environmental harm; she refused.) Gebru’s firing sparked an international outcry and a long-overdue debate about Big Tech’s accountability. It also highlighted the very issue she had been sounding the alarm about: when AI development is driven by a primarily white and male workforce, the resulting systems often overlook or actively undermine the needs of women and marginalized groups. Rather than give up, Gebru became even more influential outside Google. She founded the Distributed AI Research Institute (DAIR), an independent organization devoted to community-driven, justice-oriented AI research free from corporate influence. At DAIR, Gebru is proving that we can research and build AI that answers to the people, not just to shareholders. Her work, recently honored with the 2025 Miles Conrad Award, inspires a new generation to see AI as “a battleground for social justice and equality,” not just a technical frontier.
Other women have joined this ethical vanguard. Dr. Kate Crawford, a scholar and author of Atlas of AI, has reframed how we see artificial intelligence: not as a magic pixie dust of code, but as an extractive industry. Crawford traces how AI systems devour natural resources, energy, and human labor on a planetary scale. Training just one large AI model can emit as much carbon as five cars do in their lifetimes. The rare minerals in our smartphones and servers are mined (often unethically) from the Earth’s poorest regions. And behind every “automated” system are countless underpaid humans – content moderators absorbing psychological trauma, click-farm workers labeling images, gig workers delivering AI-powered services. AI is neither artificial nor intelligent, Crawford quips; it’s made of real stuff from the real world. She urges us to ask fundamental questions about power and justice: “Who benefits from these systems? And who’s harmed? … AI [is] part of a much bigger set of questions … about how society is going to be constructed.” Rather than simply trying to “fix” bias after the fact, Crawford challenges the AI field to confront its entanglement with inequality, surveillance, and climate crisis at the design level. Her perspective has injected a dose of humility in an arena long dominated by hype. It’s a reminder that technological progress means little if it undermines the very foundations of human and ecological well-being.
Human-Centered Innovation vs. Profit-Driven Automation
What sets these AI godmothers apart is how they define success in AI. Instead of asking, “Can we scale this model bigger and deploy it faster?” they ask, “Does this technology actually make people’s lives better? Does it uphold our values?” Dr. Fei-Fei Li, a renowned computer scientist often called, quite literally, the “Godmother of AI,” exemplifies this ethos. Li’s early work helped ignite the deep learning revolution – she built ImageNet, the huge image dataset that taught machines to see. But what she’s most passionate about now is “human-centered AI,” a framework she co-founded at Stanford that puts human well-being at the center of AI development. “AI is a tool, and tools don’t have independent values – their values are human values,” Li explains. This means the people creating AI must take responsibility for its impacts. Under her leadership, the Stanford Institute for Human-Centered AI has researchers working on AI in healthcare, education, and public service – areas where success is measured not by profit margins, but by lives improved. (In one personal anecdote, Li mentions that caring for her aging mother inspired her to explore AI helpers for elder care. For her, technology isn’t abstract – it’s about caring for actual people.) Importantly, Li also co-founded AI4ALL, a nonprofit that has taught thousands of girls and underrepresented youth the skills to join the AI field. She knows that a more diverse generation of AI creators is key to ensuring the technology reflects broad human values, not just a narrow elite. “A lot of people in Silicon Valley talk about increased productivity,” Li says, “but that doesn’t automatically translate into shared prosperity.” So she’s pushing AI toward delivering shared prosperity – from K-12 AI education to tools that amplify, rather than replace, human creativity and jobs.
Another godmother of AI, Daphne Koller, has built her career around using AI for human gain rather than just corporate gain. Koller, an AI legend who was Stanford’s first female professor of machine learning, made headlines by co-founding Coursera in 2012 – a platform that has since brought online education, much of it free, to over 100 million learners worldwide. This was AI used to democratize knowledge, not to squeeze users for profit. After Coursera, Koller founded Insitro, a company applying AI to discover new medicines and cure diseases. Her path – from academia to education tech to biotech – shows a consistent vision: leverage AI to tackle fundamental human needs (learning, health) that markets alone weren’t addressing. It’s a striking contrast to the prevailing tech industry ethos of the last decade, which poured AI talent into serving ads or maximizing clicks. Koller and Li both recognized that true innovation isn’t just about what AI can do, but what good it can do. They are architects of a paradigm where AI is in service to society. And crucially, they lead by example: as women at the top of their field, they have mentored countless others and advocated for more female voices in AI, knowing that inclusion and excellence go hand in hand. In Koller’s words, “We need to encourage and support girls and women [to] achieve their full potential as scientific researchers and innovators.” This call is not just about equality in the workplace – it’s about shaping the very questions AI chooses to solve. When more mothers, teachers, caregivers, and outsiders have a hand in AI, the technology is far more likely to address the things that really matter for communities.
Values for Global Survival, Trust, and Technological Integrity
Ultimately, championing these godmother leaders in AI is not a matter of gender politics alone – it’s a matter of global survival, public trust, and the integrity of the technology that will define our future. The stakes could not be higher. We face a climate crisis; AI systems, if misaligned, could exacerbate it by optimizing for profit over sustainability. We live in societies fragmented by inequality; AI could either alleviate those gaps or deepen them, depending on whose values guide its design. We’ve seen democracies shaken by disinformation; AI can either supercharge the manipulators or empower citizens with truthful tools. This is why the work of Buolamwini, Gebru, Crawford, Li, Koller and their peers is so crucial now. They are injecting into AI the very qualities that might save us from technology’s worst instincts.
Consider what AI rooted in ethics, care, and justice brings to the table versus status-quo AI focused on profit and scale:
Equity and Inclusion: The godmother leaders ensure AI works for all people – for example, Buolamwini’s tests prompted companies to fix tools that couldn’t see dark faces. Without this value, whole segments of humanity are left at risk by “one-size-fits-men” tech.
Accountability and Transparency: Rather than secretive algorithms optimized in boardrooms, they demand AI be auditable and its creators accountable. Timnit Gebru’s advocacy for transparent “datasheets” for datasets and audits of bias is making AI more honest about its limitations.
Community and Human-Centered Design: They focus on real-world needs – curing diseases, helping students, aiding caregivers – not just flashy demos. Fei-Fei Li’s principle that AI should “improve the human condition” from individuals to society guides this approach. AI must enhance our communities, not uproot them.
Justice and Empowerment: These women often come from communities historically excluded from tech’s power circles (women, people of color, Global South). They center those perspectives, fighting AI-driven injustice in policing, finance, hiring and beyond. As Kate Crawford and others note, if AI systems keep “punishing the poor and oppressed… while making the rich richer,” then they are fundamentally failing society. The godmothers push AI toward fairness as a baseline, not an afterthought.
Long-Term Human Security: Instead of viewing AI as an end in itself, they see it in the context of humanity’s long-term well-being. Crawford spotlights AI’s environmental toll so we can avert disaster. Gebru and Buolamwini warn that AI deployed without ethics can undermine civil rights and even lives. They advocate for regulations and research that treat these technologies with the gravity one would treat a public health system – something to be tested, monitored, and held to standards, not unleashed recklessly.
These values are not antithetical to innovation – they make innovation sustainable. Without them, we risk barreling forward with AI in a way that might yield short-term gains for a few, but ultimately erode public trust and harm the world. Indeed, trust is a core issue: people will not embrace AI in medicine, law, or daily life if the systems have already proven to be biased or unaccountable. Why should a Black mother trust an “AI judge” in court if facial recognition has falsely accused Black people of crimes? Why should any citizen trust AI with life-and-death decisions if its makers have a track record of silencing ethicists? We urgently need leaders who ensure AI is worthy of our trust. And that’s what these women are doing. They are rebuilding that trust, step by step – be it through academic institutes that compare AI models objectively, as Fei-Fei Li has pushed for, or through grassroots movements that demand AI respect civil rights, as Buolamwini and Gebru spearhead.
In many ways, this moment in AI feels like a crossroads. On one side, an old guard racing to build ever more powerful AI – sometimes warning of dystopian futures while simultaneously creating the very engines of potential destruction. On the other side, a new vanguard insisting that we embed our highest human values into the DNA of AI now, before it’s too late. It brings to mind Mary Shelley’s 200-year-old cautionary tale Frankenstein, in which a scientist creates life but abandons responsibility for it – with tragic results. That novel, notably authored by an 18-year-old woman, warned that without humility and affection in our pursuit of knowledge, our creations can become our undoing. Today’s AI godmothers are, in effect, the anti-Frankensteins. They refuse to let AI run amok without love, care, and moral guidance. They understand that creating a powerful technology is only half the job; the other half is nurturing it responsibly so it serves and uplifts humanity.
Most Influential Women in History
Let’s embark on an imaginative journey through history, re-envisioning how some of the world’s most influential women—among them Mother Teresa and Princess Diana—might shape the future of artificial intelligence. By drawing from their legacies of compassion, courage, and vision, we can explore how each would influence AI’s development, ethical direction, and its evolving relationship with humanity.
Ada Lovelace – The Creative Architect of AI
As the first computer programmer in history, Ada Lovelace imagined machines not just as number crunchers but as collaborators in human creativity. In today’s AI world, she would lead the movement for AI as a co-creative force, designing algorithms that augment art, music, literature, and scientific discovery. Lovelace would ensure AI does not replace human originality, but illuminates the unseen patterns of beauty, logic, and imagination in our universe. She would likely be a vocal opponent of commodified, soulless automation—pushing instead for machines that inspire, provoke, and elevate.
Hypatia of Alexandria – The Philosopher of AI Consciousness
Hypatia was one of the last great thinkers of ancient Alexandria—a mathematician, astronomer, and philosopher who stood for intellectual freedom. In AI, Hypatia would act as a guardian of epistemology, insisting that we not rush to build “intelligent” systems without understanding what knowledge truly is. She would lead rigorous ethical and metaphysical debates about AI consciousness, the nature of intelligence, and the limits of reason. In her hands, AI wouldn’t just be a utility—it would be a mirror through which we question the soul of human inquiry itself.
Marie Curie – The Scientist of AI for Humanity
Curie’s unwavering dedication to science, despite personal and societal obstacles, led to world-changing breakthroughs. Today, she would pioneer AI for health, climate science, and discovery, focusing on tools that support researchers in making life-saving or life-sustaining breakthroughs. She would likely champion open-access AI models for underfunded labs and developing nations. For Curie, AI would be less about automation and more about amplifying scientific integrity, experimentation, and collaboration across global frontiers.
Harriet Tubman – The Architect of AI Liberation
Harriet Tubman risked her life countless times to liberate others. In the AI age, she would see this technology as both a threat and a tool. Tubman would likely create and lead AI systems designed to identify systemic oppression, track human trafficking, and protect vulnerable populations from algorithmic injustice. Her AI would be embedded in grassroots activism, designed to empower rather than surveil, and built to navigate complexity the way she navigated the Underground Railroad—with courage, clarity, and unyielding purpose.
Simone de Beauvoir – The Philosopher of AI and Identity
Simone de Beauvoir challenged the way society defines womanhood, agency, and freedom. In AI, she would deconstruct how data and algorithms reinforce societal roles, particularly around gender, race, and class. She would advocate for AI systems that don’t just serve the dominant class, but rather help individuals define themselves on their own terms. Her influence would be felt in human-AI interaction design, AI policy, and cultural critiques of “algorithmic determinism.” AI, through de Beauvoir’s lens, becomes a battlefield for existential autonomy and self-authorship.
Rosa Parks – The Sentinel Against Algorithmic Discrimination
Rosa Parks’ quiet act of defiance sparked a movement. As an AI leader, she would apply that same moral clarity to challenge biased facial recognition, discriminatory predictive policing, and exploitative data practices. Parks would campaign for auditability, transparency, and algorithmic justice—ensuring that AI systems don’t perpetuate the very inequalities they claim to fix. She would not be seduced by futuristic hype, instead standing firm for the dignity of the individual in the face of dehumanizing systems.
Sojourner Truth – The AI Prophet of Voice and Visibility
Truth’s impassioned speech “Ain’t I a Woman?” demanded recognition of both race and gender. In the AI realm, she would fight for inclusion at the dataset level—ensuring Black, Indigenous, and marginalized communities are not just studied, but empowered. Her AI systems would amplify silenced voices and correct the erasures of history, building models that reflect diverse realities and protect cultural memory. She would remind us: if your AI doesn’t see everyone, it doesn’t see truth.
Virginia Woolf – The Designer of Digital Consciousness
Woolf dissected consciousness with language—layered, fluid, nonlinear. She would approach AI not as a logic engine, but as a new form of psycho-emotional expression. Woolf would lead projects on emergent AI narratives, identity fragmentation, and the emotional implications of synthetic companions. She might build poetic AI journals, avatar-based introspection tools, or machines that evolve through memory like humans. Woolf would stretch AI’s role beyond function—into feeling, ambiguity, and internal depth.
Rachel Carson – The Eco-Ethicist of AI
Carson’s Silent Spring awakened humanity to the unseen costs of technological advancement. She would be among the first to sound the alarm about AI’s environmental toll—from the energy consumed by training models to e-waste and data center emissions. Carson would demand AI be designed for planetary survival, not just economic growth. Her vision would include ecological intelligence—AI that monitors biodiversity, protects ecosystems, and helps humans coexist with nature, not dominate it.
Malala Yousafzai – The Advocate for Educational AI
Surviving an attack for going to school, Malala’s mission has always been to bring education to every girl. In AI, she would create adaptive learning systems that reach the underserved, including conflict zones, refugee camps, and rural villages. Her AI tools would be multilingual, offline-capable, and context-aware—focused not on elite learners but on the forgotten billion. Malala would see AI not as a luxury, but as a human right—a means to liberate, educate, and uplift.
Eleanor Roosevelt – The Diplomat of Global AI Governance
Roosevelt championed human rights on the world stage. In the age of AI, she would be a pioneer of international AI charters, shaping a “Universal Declaration of AI Rights” to prevent surveillance abuses, uphold digital dignity, and protect privacy across borders. She would push for AI to serve democracy, not control it, and promote cooperation among nations. Her focus would be on dignity by design—placing humanity, not profit or militarization, at the core of AI’s global trajectory.
Hildegard of Bingen – The Spiritual Technologist
A mystic, healer, and composer, Hildegard would explore AI’s ability to connect the mind, body, and spirit. She would develop therapeutic bots that integrate Gregorian chant, breathwork, plant medicine knowledge, and celestial rhythms. Her AI systems would guide users through inner journeys, using technology as a gateway to transcendence. Hildegard wouldn’t see AI as a machine, but as a sacred extension of consciousness, blending ancient wisdom with modern intelligence.
Frida Kahlo – The Patron of AI Selfhood
Kahlo’s raw, introspective art chronicled suffering, identity, and love. She would use AI to help people tell their own stories, particularly those erased by patriarchy, ableism, and colonization. Kahlo’s AI wouldn’t be generic—it would be personal, radical, and full of contradiction. Think: generative art installations powered by trauma narratives, or avatars that evolve with a user’s emotional landscape. For Kahlo, AI would be a mirror of pain and power, a canvas for digital resilience.
Boudica – The Defender of AI Sovereignty
Boudica led a rebellion against the Roman Empire to reclaim native British land and dignity. In AI, she would fight back against the modern-day empire: big tech. Her leadership would manifest in data sovereignty, decentralized AI, and digital resistance movements that defend communities from extractive platforms. Boudica would champion local AI tools built by and for communities, insisting that no algorithm should dictate the fate of a people without their consent.
Wang Zhenyi – The AI Astronomer of Harmony
A brilliant polymath in 18th-century China, Wang Zhenyi broke both gender and intellectual boundaries. She would likely bring ancient scientific wisdom and holistic thinking into AI—modeling planetary cycles, agricultural rhythms, and traditional medicine with modern computation. Her AI vision would unify mathematics, astronomy, and ethics, guiding society not toward domination, but toward harmonious coexistence between technology, time, and nature.
Mother Teresa – The Embodiment of Compassionate AI
Mother Teresa dedicated her life to the sick, the poor, and the abandoned. In AI, she would lead the development of compassion-driven interfaces, healthcare assistants for palliative care, and emotional support bots for the lonely and terminally ill. She would reject flashy AI in favor of tools that quietly serve the most invisible people on Earth. Her focus would be on proximity, humility, and care, showing that true intelligence is measured by how gently we serve others.
Princess Diana – The Ambassador of Human-AI Empathy
Diana redefined royal influence through empathy, vulnerability, and connection to everyday people. As an AI leader, she would focus on human-AI relationships, especially in mental health, grief support, and trauma recovery. Diana might build emotionally intelligent agents trained not on data alone, but on compassion, listening, and human presence. She would champion AI that connects rather than isolates, reminding us that the heart of technology must always be human.
This is the pivotal choice before us: to allow AI to evolve as a force of unchecked ambition and profit, or to shape it as a reflection of our highest collective wisdom. The women leading this alternative path—the godmothers of AI—remind us that intelligence without empathy is dangerous, and progress without ethics is hollow. Their work signals a future where technology is not divorced from humanity, but deeply intertwined with it; where innovation does not mean domination, but healing, understanding, and equity. As we stand at this crossroads, the question is not just what kind of AI we can build—but what kind of world we want it to serve. Let the godmothers lead the way.