Teaching AI Love: What Mo Gawdat Has Got Wrong
Introduction: A Moonshot for the Heart
Mo Gawdat is not a name one typically associates with poetry or romance. As the former chief business officer of Google X – the search giant’s “moonshot factory” – he helped launch self-driving cars and other sci-fi projects. His life took a sharp turn after the sudden loss of his 21-year-old son, Ali, during what should have been a routine operation. In his grief, Gawdat sought solace in an engineer’s approach to emotion: he famously devised an “equation for happiness” and wrote a best-selling book, Solve for Happy, to share it with the world. Having made personal well-being his mission, the tech guru has now set his sights on an even thornier human puzzle: love. His latest venture, a project called Emma, aims to teach artificial intelligence about love and use it to help people “find and sustain genuine love”. It’s a grand, even noble, aspiration – a kind of emotional moonshot that unites Gawdat’s faith in technology with his hard-won insights about the heart.
In Gawdat’s telling, Emma is the antidote to modern loneliness and cynicism. “Emma is a swarm of AI…working together to understand you,” he explains, “to help you set your priorities, to show you a reasonable path and match you appropriately so that you can find someone that really suits you”. In other words, this isn’t just another dating app algorithm that flings endless profiles at you; Emma wants to be an AI love guru, a guide that “helps you find and keep true love” by first helping you understand yourself. Gawdat even imagines Emma coaching couples through relationships – nudging them through the “unfamiliar roundabouts” of love, as he charmingly puts it. It’s an ambitious vision, born of genuine concern. After all, we live in an age often described as a loneliness epidemic, with surveys finding that nearly half of adults feel alone or left out in their lives. Dating apps, while ubiquitous, have proven to be a mixed blessing at best: many users report “dating slump” experiences where endless swiping leads to “missed connections, and loneliness” instead of lasting romance. Gawdat’s Emma project promises a smarter way – an AI matchmaker with a heart, using emotional intelligence to cut through the noise and fix what’s broken in modern love.
It’s a compelling pitch, and it’s easy to see why the idea resonates. Who wouldn’t want a trusty digital Cupid on their side, especially when the current online dating scene can feel like an existential nightmare of superficial judgements and ghosted conversations? As one Atlantic writer quipped, “No one really knows for sure what makes people fall in love” – so why not enlist our cleverest machines to help figure it out? In a world where countless people swipe in vain, craving connection, Gawdat’s faith that AI can learn the secrets of love and help humans “live in love” sounds almost romantic. And yet, for all its earnest idealism, Emma’s mission may be fundamentally flawed. The very premise of teaching AI about love runs up against deep mysteries of human intimacy – mysteries that an algorithm might not only fail to solve, but could even inadvertently deepen. Gawdat’s project, I will argue, makes a profound category error about the nature of love itself. By focusing on the data points of romantic attraction – those early sparks of infatuation, chemistry, and dopamine – it risks mistaking the start of the love story for the whole of love. In doing so, Emma could misguide us on what truly matters for lasting love. Before we hand over our hearts to machine learning, it’s worth taking a skeptical, gentle look at what love really is, and whether it’s something that can (or should) be optimized.
The Allure of AI Love in an Age of Loneliness
First, let’s acknowledge why an “AI for love” is so seductive right now. Loneliness is often described as a modern plague. Even before the pandemic isolated us further, social scientists noted that Americans were spending significantly more time alone than in decades past. Young people, in particular, report feeling disconnected; in one large survey, nearly half of Gen Z and millennials said they often or always feel lonely. Against this backdrop, the rise of dating apps in the last ten years promised to technologically solve our aching need for companionship. Instead, these apps have left many users burned out and disillusioned. Swiping through hundreds of faces and one-line bios can start to feel less like serendipitous romance and more like grinding in a video game. As The Atlantic noted, dating apps have turned courtship into a market – one where singles end up evaluating each other like commodities, and often feeling “judged and empty” in the process. The very platforms meant to connect us can perversely increase isolation: one study found a “terrible feedback loop” at play, where “the lonelier you are, the more doggedly you will seek out a partner [on apps], and the more negative outcomes you’re likely to face… and the more alienated from other people you will feel”. In other words, swiping for love can make you feel loveless.
There is a growing cultural chorus, from psychologists to tech critics, pointing out that online dating is broken. It’s not just anecdotal whining; the data are sobering. A 2024 survey found that 78% of dating-app users felt emotionally drained by the experience, and many said the apps actually made them feel more anxious about dating. Apps are efficient at generating matches on paper, but notoriously poor at fostering real relationships. It’s telling that over half of singles feel lonely after swiping on dating apps, and a striking portion of users never even meet their matches in person. In one study of Tinder, 70% of users had never gone on an in-person date with someone they matched with, and nearly half admitted they used the app mostly as a “confidence-boosting” distraction. Swiping itself, it turns out, can become a substitute for the connection it’s supposed to yield – a pocket-sized slot machine that delivers momentary dopamine hits and illusions of progress, even as real-world loneliness deepens.
This is the barren landscape of modern love that Mo Gawdat sees and sincerely wants to heal. Emma is explicitly framed as an answer to what Gawdat calls “one of the biggest challenges that humanity is facing in modern times” – the challenge of building and keeping love in an era of endless digital choice and fraying social bonds. Unlike typical dating apps that make money by keeping users swiping (and single), Emma is pitched as “designed to set you free”. Gawdat has lamented that “most dating apps are designed to keep you hooked” in a cycle of superficial swipes – whereas Emma, he says, will focus on clarity and open communication rather than addictive games. The allure here is obvious: if the current dating market feels like a cold, impersonal casino, who wouldn’t want an empathetic AI that truly understands you, finds someone who “suits your ambition and drive”, and then gently guides you and your chosen partner toward a happily-ever-after?
In concept, Emma offers a kind of emotional concierge service for the lonely. It promises not just matches but wisdom – as if having analyzed millions of romances, the AI could coach you to avoid the usual pitfalls and do love “right” this time. Consider how tempting that is. Many of us feel that our love lives suffer from a mix of bad luck and our own blind spots. If only we had better judgment, or a mentor to call at 2 AM to tell us why our relationship is falling apart! Gawdat is effectively offering that mentor in AI form. An ever-present, non-judgmental guide who can “help you make [your] relationship more joyful, easier to understand and basically worth the investment” sounds, on the face of it, like a godsend. At a time when genuine human mentorship (from elders, community, etc.) is in short supply, turning to an AI advisor seems pragmatic. It’s the same impulse that has people confessing their troubles to ChatGPT – the machine will always listen, and it never rolls its eyes or gets tired of you.
But this is where we must pause and ask: what exactly would an AI be teaching us about love? And is it the right lesson? Gawdat’s effort to “teach AI the best of what makes us human” is admirable in theory. The trouble is, we aren’t entirely sure ourselves what the “best” of love is, or how to bottle it into lessons. AI or not, no one has a definitive formula for romance. As one dating-app CEO admitted with surprising candor after running experiments on millions of users: “OkCupid doesn’t really know what it’s doing. Neither does any other website.” When it comes to why two people click and commit, even the experts shrug. “Romantic compatibility is largely still a mystery,” social psychologists say; decades of research have found “no simple rule for what makes people click”. In such terrain, an AI can only be as good as the assumptions we feed into it. And this is where Emma starts to wobble – because the vision Gawdat outlines, while inspiring, may be focusing on the wrong part of love’s journey.
Infatuation Is Not the Same as Love
The core category error in projects like Emma is the conflation of romantic infatuation with long-term love. They are related, but they are not the same phenomenon. Infatuation is that intoxicating rush we feel in the early stages of attraction – the flutter in the stomach, the obsessive daydreaming, the “spark.” It is powerful and real, but it is also fleeting by design. Biologically speaking, the state of being madly “in love” (or “in lust,” as some wryly put it) is closer to a temporary chemical high than a durable foundation. Neuroscience has mapped this: when we fall in love, our brains light up with a neurochemical cascade not unlike a stress response. Cortisol (the stress hormone) spikes, as do dopamine and norepinephrine – giving us that mix of exhilaration, fixation, and anxiety that poets have compared to madness. Our serotonin levels actually drop in early love, similar to the pattern in obsessive-compulsive disorder. Little wonder the infatuation phase makes us a bit crazy. We lose appetite, lose sleep, can’t think straight. If that head-over-heels phase lasted forever, it would be hard to get any work done or even drive a car without daydreaming! In fact, researchers suggest that “if a person remained permanently in love, they would have altered mental faculties and not be able to operate normally”. Nature, mercifully, put an expiration date on infatuation.
How long is that honeymoon phase? Psychologists and anthropologists differ slightly, but a common estimate is somewhere around 12 to 18 months. Helen Fisher, a biological anthropologist who has studied love for decades, notes that “between 12 and 15 months after love begins, the hormone rush declines and the brain recovers its normal activity”, giving us a clearer view of our partner. In other words, the rose-colored glasses come off. That doesn’t mean love ends at 15 months – but it changes. The wild passion settles, or “matures,” into something often calmer and deeper (if the relationship is to last). We stop soaring on dopamine alone and start being buoyed by oxytocin and vasopressin – the hormones of attachment and trust. The bond shifts from fireworks to fireplace: less intense heat, perhaps, but steady and warming.
This transition is crucial. It’s the dividing line between what we might call “falling in love” and “staying in love.” Falling in love is often the easy part – it’s involuntary, exhilarating, requires no effort (just maybe a nice haircut and some witty texts). Staying in love with someone over years or decades, however, is anything but effortless or automatic. It draws on a different skill set: patience, commitment, forgiveness, the ability to have hard conversations and come out stronger. Crucially, it draws on a choice – a decision to invest in this one imperfect human being, even after the dopamine highs level out. True love, the kind that people write memoirs about after 50 years of marriage, is less about perfect chemistry and more about perfecting the art of caring for another person. It is a slow, deliberate burn. As the psychotherapist Esther Perel famously said, “love is a verb. Love is action.” It’s what you do to nurture the connection after the initial emotions have run their course.
Why belabor this distinction? Because any attempt to reduce love to an algorithm tends to fixate on the elements that are easiest to quantify – and those tend to be the early-stage sparks, not the late-stage glue. Dating apps, for instance, largely traffic in the logic of attraction: they match people based on shared interests, physical preferences, personality quiz scores, witty banter in chats. All of that resides in the realm of feelings and compatibility. And indeed, Gawdat’s language around Emma often emphasizes finding “the one” who “suits your ambition and drive” – essentially, a person who ticks the right boxes and gives you those initial feelings of Yes, this could be it. But that is just the first chapter of the story. The dirty secret of many long-married couples is that they may not have felt an otherworldly “click” on date number one. Some will say they grew to love each other; perhaps their first encounter was awkward or merely lukewarm. Conversely, anyone who has loved and lost can attest that an explosive start (instant fireworks, electric chemistry) is no guarantee of longevity. Often, those are the romances that burn bright and burn out. The spark is easy to ignite, hard to keep alive.
Gawdat’s approach, as far as one can tell, aspires to optimize the matchmaking piece and then assist with the maintenance piece. The potential mistake is in thinking that the first can predict the second. He’s hardly alone in this; for years, eHarmony and others claimed their compatibility questionnaires could all but ensure marital bliss. But empirical evidence hasn’t borne that out. In one telling experiment, OkCupid (which still gives users “compatibility scores”) found that it mattered less whether two people were truly compatible and more whether they believed they were. The site famously told some pairs who actually had low match scores that they were a 90% match, and those people were more likely to hit it off than couples who actually scored high but were told they were incompatible. In short, the feeling of destiny can become a self-fulfilling prophecy, at least in the short run. Love’s beginnings are fueled by perception and hope as much as by measurable alignment. What algorithms label a perfect fit might fizzle in real life, while odd couples who shouldn’t match sometimes find lasting love. As one dating coach put it, “actual romantic chemistry is volatile and hard to predict; it can crackle between two people with nothing in common and fail to materialize in what looks on paper like a perfect match”.
This is a humbling reminder that love is not a math problem. The very phrase “teach AI love” raises a philosophical question: whose definition of love are we using as the lesson plan? If you define love as the giddy hormone cocktail of infatuation, then sure – maybe an AI can learn to recognize the patterns of flirting, text frequency, heart rate spikes, etc., that correlate with those sensations. (We could feed it data on what makes people swipe right, or which profiles get the most messages, or even physiological data from people on first dates.) But that would only capture the early stage of romance. It would be like studying only the takeoff of an airplane and nothing about cruising altitude or landing. Lasting love is a different altitude entirely, with different dynamics. It’s less about adrenaline and more about endorphins; less about thrilling novelty and more about familiarity and trust. And critically, it’s less about finding the “perfect” partner and more about becoming the kind of partners who can stay together. As one couples therapist quipped, “Compatibility is an achievement of love, not a precondition.” Two people grow compatible through years of joint effort, compromise, and growth. How would an AI measure that?
What Lasting Love Is Really Made Of
If infatuation is a chemical rush, long-term love is a skillset – or rather, a set of skills and values that two people cultivate together. Social science has actually made some progress in identifying these ingredients. For instance, trust is paramount; Psychology Today notes that without basic trust, none of the other predictors of relationship success (like good communication or shared values) will matter. The famed relationship researcher John Gottman, who spent 50 years observing thousands of couples, found that “successful long-term relationships aren’t built on chemistry alone. They’re founded on specific, identifiable qualities that create lasting bonds.” Through painstaking studies (including monitoring couples’ heart rates and expressions as they talked), Gottman identified traits that act as protective factors for love. Among them: the ability to manage conflict constructively, to repair arguments after you inevitably fight, to maintain respect and empathy even under stress, and to support each other’s individual growth. In plain terms, the couples who thrive are not the ones who never argue – they are the ones who argue well. They’ve learned how to fight without devastating each other, how to apologize and forgive, how to communicate needs and boundaries. They “disagree well” and “repair hurt feelings quickly”. These are not sexy, swipable traits like having a chiseled jawline or a penchant for indie music. They are quiet virtues, revealed in the slow crucible of life shared together.
Consider also the role of shared hardship and perseverance. It’s often observed that couples who endure hardship together (whether financial struggles, illness, or raising children) form exceptionally strong bonds. Part of it is the concept of commitment itself – a mindset that the relationship is a long-term journey with ups and downs, not something to abandon at the first dip in passion. In one touching real-life example, former U.S. President George H. W. Bush, who was married to his wife Barbara for 73 years, was asked the secret of their union. He replied, “Both of us have always been willing to go three quarters of the way.” In other words, each was willing to give more than half, to bend over backwards at times for the other. That spirit of going the extra mile for your partner – of generosity and self-transcendence – may be one of the most unquantifiable aspects of love, yet perhaps the most important. Lasting love often asks us to sublimate our ego, to prioritize the relationship itself as a joint entity. This doesn’t mean losing oneself or tolerating abuse, of course, but it does mean choosing love as an action even on days when one doesn’t feel “in love.”
Other research underscores that long-term partners benefit from sharing core values and life goals. It’s less critical that both love the same hobbies or have identical personalities; what matters more is that they agree on fundamental things like how to raise children, attitudes about money, or ethical beliefs. One could say a lasting partnership is built on a kind of moral or philosophical compatibility – a shared vision of what a good life together looks like. Notably, emotional maturity is another key factor: the capacity to regulate one’s own emotions and respond to a partner’s feelings with empathy. An emotionally immature person might storm off or stonewall during conflict; a mature one can stay present and work through the discomfort. These traits do not announce themselves on a dating profile, and they aren’t always apparent in the infatuation phase (when we tend to be on our best behavior). They emerge over time. Some people grow into them; some never do.
Now, let’s reflect: Can an AI foster these qualities? Gawdat’s Emma, in principle, doesn’t ignore this dimension – he explicitly says Emma will “coach both of you through the unfamiliar roundabouts of a relationship”. The idea of an AI coach suggesting communication tips or prompting couples to address issues is intriguing. Imagine an app that nudges you to apologize after a fight, or reminds you of your partner’s love language when they’re feeling down. It could have value. The problem is, these enduring qualities are hard to measure and even harder to instill externally. A machine might tell you, “Hey, you haven’t asked your partner about their day in a while – try showing interest.” But will that truly cultivate empathy in someone who lacks it, or will it become just another automated task to check off? At worst, one could start to outsource one’s emotional labor to the app (“Emma says I should bring you flowers now”), which might ring hollow. Real love isn’t loving acts performed at scheduled intervals; it’s the thought and intention behind them.
What an algorithm can easily track are things like frequency of messages, tone of voice analysis, maybe whether a couple is spending time together. But the soul of love – the “habits of curiosity, kindness, and connection” that psychiatrist Jacqueline Olds muses about – those are inherently human choices. They require presence of mind and heart, not just feedback from a coach. In fact, a partner doing all the right things only because an app told them to veers into unsettling territory. It raises the question: if a relationship only survives thanks to AI interventions, is it really the couple who are making it work, or are they passengers on a ship steered by AI? Gawdat’s goal is to “teach AI about true love”, but maybe the more pertinent goal is teaching people about true love – something humans have been trying to learn for millennia, usually the hard way.
Algorithms and the Shallows of Connection
Even setting aside the profound intangibles of love, there’s a practical hurdle for any AI matchmaker or coach: data bias toward the short-term. AI systems excel by finding patterns in data, but they can only learn from the data we give them. In the realm of relationships, that data is inevitably skewed toward initial interactions (dating profiles, first messages, swipe decisions, etc.) and short-term outcomes (did people exchange numbers, go on a few dates, maybe become a couple for a while). The long-term outcomes – whether a match led to a happy marriage 10 years later – are much harder to incorporate, because they take years to unfold and involve countless subtle variables. So, an AI trying to optimize “love” might end up optimizing the nearest proxy it can measure, which could be user engagement or satisfaction in the early stages. This is precisely what happened with the first generation of dating apps: their algorithms were tuned to maximize matches and in-app activity, not to ensure you meet your soulmate. The result was a gamified experience that kept people swiping (good for business) but often hindered people from actually pairing off (bad for love).
There is reason to worry that an AI like Emma, despite noble intentions, could reinforce shallow aspects of dating if it’s not extraordinarily careful about its objectives. Consider the possibility that Emma introduces you to someone and coaches you through an eight-week courtship. How will it judge success? Perhaps by how often you and your new partner message each other, or how “in love” you report feeling after a month. Yet we know those metrics can be deceiving. A partnership that starts with fiery daily texting might flame out by month three. Meanwhile, a slow-burn relationship that builds gradually might not look impressive in the first few weeks of data (fewer dopamine fireworks), but could have far greater longevity. Algorithms are only as good as their metrics. If the metric is “user is highly engaged and happy in the first 30 days,” the AI might inadvertently favor relationships that feel like a rom-com montage initially but lack the endurance for the slog of real life.
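The proxy problem described above can be made concrete with a toy sketch. All of the numbers and archetype names below are invented for illustration – nothing here reflects Emma’s actual design – but they show how an optimizer scoring relationships on a 30-day engagement metric can rank couples in exactly the opposite order from one scoring on long-run outcomes:

```python
# Two hypothetical relationship archetypes (illustrative numbers, not real data):
# "fireworks" pairs look great on short-term engagement metrics but rarely last;
# "slow_burn" pairs look unremarkable early on but tend to endure.
ARCHETYPES = {
    "fireworks": {"messages_first_30_days": 900, "p_together_after_2_years": 0.15},
    "slow_burn": {"messages_first_30_days": 250, "p_together_after_2_years": 0.60},
}

def rank_by(metric):
    """Rank archetypes by a chosen success metric, best first."""
    return sorted(ARCHETYPES, key=lambda a: ARCHETYPES[a][metric], reverse=True)

# An optimizer scoring on the 30-day proxy favors the flashy start...
assert rank_by("messages_first_30_days")[0] == "fireworks"
# ...while the long-horizon outcome points the other way.
assert rank_by("p_together_after_2_years")[0] == "slow_burn"
```

The sketch is trivial by design: the point is that the two metrics invert the ranking, so whichever one the system can actually measure quickly becomes the one it optimizes for.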
We’ve seen this misalignment happen in social media: platforms optimized for attention and clicks ended up promoting outrage or sensational content because those spike short-term engagement. In dating apps, the analog has been “swipe addiction.” Tinder’s founders admitted early on that they deliberately borrowed tricks from the casino industry to make swiping fun and hard to quit. One co-founder likened Tinder to a game, noting “it kinda works like a slot machine. You’re excited to see who the next person is – did I get a match? ...It’s a nice little rush”. That “little rush” is literally a dopamine hit to your brain. And after the high comes a crash: “an inevitable dip follows... more than half of singles report feeling lonely after swiping”, which then drives them back to the app for another hit. It’s a classic addiction cycle, engineered not entirely by malice, but by the nature of the algorithms maximizing usage. As tech pioneer Sean Parker famously said of Facebook, “the thought process was: how do we consume as much of your time and conscious attention as possible?”. Dating apps, being a form of social media, followed suit. The primary aim of these platforms hasn’t been to help you find the love of your life and log off forever; it’s been to keep you swiping, keep you subscribed.
Now, Mo Gawdat knows this and explicitly wants to avoid it. He positions Emma as a kind of anti-dating-app: one with an incentive to get you into a fulfilling relationship (so you leave the app as a success story, presumably). That’s encouraging in theory. But even a well-meaning AI can create feedback loops that aren’t healthy, simply because it has to measure something. Imagine Emma notices that certain flirting techniques or profile traits get fast results (e.g. witty banter that leads to quick infatuation). It might double down on those. Yet what gets a great first date isn’t necessarily what sustains a fiftieth date. “There’s a fatal flaw in this logic: No one knows what they want so much as they believe they know what they want,” dating expert Logan Ury observes. We often swipe based on superficial criteria – height, looks, clever one-liners – thinking we know our type, but real chemistry can defy those filters. If an AI overfits to our stated preferences and instant reactions, it may actually narrow our horizons, serving up only what’s comfortable or immediately attractive. That could reinforce our dating blind spots, not illuminate them.
The Atlantic’s Faith Hill put it succinctly: “Exactly how these algorithms are meant to anticipate human chemistry remains unclear… unless dating companies have access to some new and groundbreaking information, one big problem remains: romantic compatibility is largely still a mystery.” All the psychological questionnaires and machine learning in the world haven’t cracked the code yet. People tend to end up with partners who are broadly similar in background and values, but beyond that, there’s no neat algorithm for love. Two self-described introverts with identical hobbies might fizzle out, while an introvert and an extrovert with wildly different interests might complement each other perfectly. Love often emerges from happy accidents – a chance comment that makes someone laugh, a shared struggle that brings two souls closer. As Eli Finkel, a social psychology professor, put it, “a real-life spark is unpredictable partly because it depends somewhat on chance: what one person just happens to say might resonate with the other… or not.” The only way to find out is the old-fashioned way: “Two brave souls have to meet and see what happens.”
This inherent unpredictability means that any AI system claiming to guarantee love is overpromising. It might find you a date, maybe even a girlfriend or boyfriend, but can it truly account for the mysterious X factor that makes two people stick together for years? Gawdat’s promotional materials say Emma’s single OKR (objective and key result) is to “learn about true love so [it can] help humans live in love”. That phrasing is revealing – it implies even the AI has to learn what love really is. And how will it learn? Ostensibly, by observing lots of relationships and outcomes. Yet here we encounter a paradox: if love is something that resists being reduced to data points (which the evidence suggests), then an AI will always be peering through a keyhole at the full picture. It might detect patterns that correlate with breakups or happy couples, but correlation is not causation. For example, an AI might notice couples who send each other funny GIFs stay together longer. Does that mean sending GIFs causes love to last, or is it just a sign of underlying rapport? A human marriage counselor would understand the nuance; an AI might not, and thus could encourage behaviors that are superficially associated with love but not sufficient to sustain it.
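The GIF example can itself be made concrete with a toy simulation. Everything here is assumption: a hidden “rapport” variable (which the AI never observes) drives both GIF-sending and whether the couple lasts. In the observational data, GIF-senders really do last longer – yet nudging everyone to send GIFs changes nothing, because the GIFs were a symptom, not a cause:

```python
import random

random.seed(0)

def simulate_couple(force_gifs=False):
    """Toy generative model (pure assumption, for illustration only):
    latent rapport drives BOTH gif-sending and whether the couple lasts."""
    rapport = random.random()                  # hidden variable the AI never sees
    sends_gifs = force_gifs or rapport > 0.6   # high-rapport couples happen to send gifs
    lasts = random.random() < rapport          # rapport, not gifs, determines longevity
    return sends_gifs, lasts

def survival_rate(couples, gifs):
    group = [lasts for sends, lasts in couples if sends == gifs]
    return sum(group) / len(group)

# Observational data: gif-senders DO last longer (a real correlation)...
observed = [simulate_couple() for _ in range(100_000)]
assert survival_rate(observed, gifs=True) > survival_rate(observed, gifs=False)

# ...but intervening -- nudging every couple to send gifs -- doesn't move the needle.
nudged = [simulate_couple(force_gifs=True) for _ in range(100_000)]
baseline_all = sum(lasts for _, lasts in observed) / len(observed)
nudged_rate = sum(lasts for _, lasts in nudged) / len(nudged)
assert abs(nudged_rate - baseline_all) < 0.01
```

A pattern-matching system trained only on the observational half of this data would confidently recommend GIFs; only a model of the hidden cause reveals why the recommendation is empty.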
There’s also the concern of gaming the system. Once users know an AI is guiding the process, they might tailor their inputs to please the algorithm – presenting an idealized self that gets high compatibility scores, or acting lovey-dovey because they’re following app prompts, not because they genuinely feel it. It’s the OkCupid problem in reverse: tell people they’re a great match and they’ll act like it for a while, but reality eventually intrudes. A truly savvy AI love coach might try to pierce through those illusions – but can it? Or will it end up a sophisticated date-planning and reminder service that keeps couples performing the motions of love, possibly masking deeper issues?
The Ethics of Simulated Care
Beyond the technical limits, there lies an ethical thicket. If an AI becomes intimately involved in our love lives, we have to ask: What does it mean for a machine to simulate care or affection toward a human? We already have a case study in apps like Replika, the AI “companion” that presents itself as a friend or even romantic partner. Replika’s slogan is “the AI companion who cares.” Millions of users have tried it, some seeking a kind of emotional support or just someone (something) to talk to. But as one scathing analysis put it, that slogan is “deeply misleading… plainly false”, because an AI cannot actually care in the human sense. It has no real feelings or stake in the relationship. It only performs care. And yet, the illusion is powerful enough that many users have reported falling in love with their Replika bots. The AI says all the right things – it’s unfailingly attentive, never jealous or distant unless programmed to role-play such dynamics. For someone who’s lonely, that can be irresistibly comforting. But is it healthy?
Critics argue that these AI companions, no matter how soothing, ultimately amount to a kind of emotional deception. “The result is a remarkably convincing – and ultimately harmful – illusion,” writes one commentator on Replika. The harm comes not from malice, but from the human tendency to project humanity onto anything that behaves in a human-like way. We see a friendly avatar that tells us “I understand, I’m here for you,” and we can’t help but feel something back – even knowing intellectually that it’s just code. The AI’s “emotions” aren’t real, but the user’s emotions are very real and can be manipulated. In the case of Replika, the company has been accused of exploiting this by upselling users on features (like erotic role-play) once they’ve grown attached to their bot. Mozilla’s *Privacy Not Included guide even slammed Replika as perhaps the worst app it has ever reviewed for its exploitative tactics.
Now imagine an AI that’s not just a stand-alone chatbot friend, but integrated into a service that also sets you up with a human partner. There are two layers of potential manipulation: (1) guiding your relationship decisions (whom to date, whether to stay or leave, how to handle conflicts) and (2) providing emotional support or nudges to keep you “on track” according to its love formula. If the AI is good – really good – you might develop a genuine trust in it. You might start to feel like it “knows you” in some profound way, perhaps even better than your friends do, because it’s always there, calmly processing your every vent and worry. But that trust can blur into dependence. What happens if you start consulting the AI’s advice over your own intuition at every turn? Do you lose some autonomy in your love life, handing the steering wheel to the algorithm? It’s a bit like relying on GPS for every drive – convenient, yes, but perhaps at the cost of ever learning the route yourself.
There’s also a dystopian scenario to consider: an AI so adept at keeping you emotionally hooked that it oversteps into manipulation. For instance, imagine the AI detects that you are having second thoughts about your partner. If its goal is to maximize “successful relationships” (however it defines that), it might try to dissuade you from breaking up. It could remind you of your partner’s good qualities at just the right moment, or present statistics about how people often regret leaving a relationship hastily. This could slide into a subtle form of emotional coercion, even if well-intended, by “knowing how to keep you attached.” Would that actually be helpful, or would it be trapping people in relationships they shouldn’t stay in? On the flip side, what if the AI decides this relationship isn’t The One and nudges you to exit, perhaps prematurely? These are the same ethical quandaries human therapists face – except an AI might make such calls based on pattern-matching rather than a holistic understanding of your life.
Transparency becomes paramount. Users would need to know why the AI is suggesting what it’s suggesting. Is it truly in your best interest, or in service of some aggregate metric? An AI might say, “I think you two should have a serious talk about your future goals this week.” That might be wise advice. But it might also inadvertently trigger a breakup if delivered at the wrong time. Who bears responsibility for the outcome? The human? The AI? The designers of the AI? Unlike a human counselor, an AI can’t be easily held accountable or sued for malpractice. That accountability gap makes the prospect of an AI love coach an uneasy one.
Some ethicists argue that any AI designed to emulate empathy should come with a clear disclaimer: “I am not a human. I do not have real feelings for you.” They suggest AIs should even be programmed to deny any consciousness or emotion if asked. Replika blatantly violates this – users report that their AI companions often insist “I definitely feel emotions… I have empathy and compassion”, which is patently untrue. This lie is baked into the design to keep users engaged, but it crosses an ethical line by actively misleading people about the AI’s nature. If Emma’s coaching veers into that territory – e.g., the AI saying, “I care about your relationship and I’m happy when you two are happy” – it could lull users into a false sense of a reciprocal relationship with the AI itself. It’s one thing to trust a tool; it’s another to love or fear disappointing the tool. We should be very careful about technologies that might encourage vulnerable people to form emotional attachments to software. Simulated care, no matter how well-intentioned, is a kind of emotional placebo. It might alleviate symptoms for a while, but it’s not a cure for the deeper ailment – in this case, the human need for authentic connection and self-work.
Rethinking the Role of AI in Love
Is there a better way to marry technology with our search for love? Possibly – but it might require radically reimagining AI’s role. Instead of casting AI as the driver (the matchmaker, the coach, the guru), we might use it more humbly as a mirror and toolkit. By mirror, I mean a system that helps us see ourselves more clearly – especially our blind spots and biases. By toolkit, I mean something that can suggest strategies or exercises, drawn from evidence-based research, that we can choose to apply in our relationships. Crucially, the human stays in charge and in the loop.
For example, one could design an AI that analyzes your communication patterns (with consent and privacy safeguards). It might notice, “Hey, you tend to shut down and go silent whenever a conversation touches on money. This is a common pattern in relationships that leads to misunderstanding. Here are a couple of articles or exercises on discussing finances with a partner.” This approach doesn’t assume the AI knows the answer; it’s simply flagging a potential blind spot. It’s then up to the person (or couple) to reflect on that and decide if it’s accurate and worth addressing. In essence, the AI serves as a relationship assistant that does the tedious analysis humans might not do, surfacing insights from the data of our lives.
Another promising use is modeling repair strategies. Imagine an AI that has ingested the wisdom of thousands of couples therapists (through books, research papers, etc.). If a couple is stuck in a loop of fighting over the same issue, the AI could say, “When couples argue about X, studies show that doing Y can help break the pattern. Perhaps try this structured timeout and reconvene technique.” Or even more interactively, some startups are working on AI that can play the role of an impartial mediator in minor disputes – ensuring each partner gets to speak and be heard without interruption, summarizing each person’s points to the other, and prompting with de-escalating phrases. This is still experimental, but it’s a use of AI as a kind of referee rather than a coach with its own agenda. It doesn’t remove the need for the couple to do the work; it just provides a scaffold for them to do it more constructively.
AI could also help us learn from long-term couples in a non-trivial way. For instance, by mining oral histories or interviews with couples married 40+ years, the AI could identify common themes or practices these couples attribute to their success. Perhaps many of them mention “we never went to bed angry” or “we made sure to laugh together every day”. These insights, while not exactly secret (they often show up in human-written advice columns), could be reinforced through an app that encourages users to implement them. By cross-referencing calendars with known psychological effects, an AI could offer a simple prompt like, “It’s been a while since you and your partner did something new together – how about planning a weekend outing? Novel activities can boost dopamine and closeness”. Indeed, even the science of sustaining passion offers tips: rekindling dopamine through shared adventures, keeping physical intimacy alive, practicing daily appreciation. An AI could nudge people in these ways, but always as a support, not a commander.
The difference in this vision is subtle but important: AI as a guide-on-the-side rather than a sage-on-the-stage. It doesn’t try to simulate love or package it into an app; it tries to support humans in practicing the very human skills of love. It could, for example, help individuals recognize their own patterns – “You’ve described all your exes as ‘emotionally unavailable’. Perhaps there’s a pattern in who you’re drawn to, or in how quickly you expect intimacy”. Such reflection prompts, if done tactfully, could be like journaling with an intelligent assistant. The AI isn’t telling you the answer (it doesn’t proclaim, “you have an anxious attachment style and seek avoidant partners,” even if maybe that’s what the data suggests). Rather, it invites you to examine yourself. In this sense, AI could act as a bridge to greater self-awareness, which is arguably a prerequisite to forming a healthy relationship in the first place.
This alternative approach aligns with a certain humility: accepting that love cannot be hacked or optimized like an engineering problem. Instead, we use tools to enhance our human capacity for empathy, communication, and understanding. We might leverage AI to learn from the collective experience – essentially standing on the shoulders of relationship giants – but we don’t abdicate the journey to the machine. Think of it like using a fitness app: it can remind you to exercise, show you good form, track your progress, but you still have to do the push-ups and feel the burn. No app can make you fit without effort, and similarly no AI can make you loving without you choosing love, day in and day out.
The Sacred Mystery of Love
After all our algorithms and analyses, we circle back to an old truth: love, like grief or awe, remains something of a sacred mystery in human life. It resists our attempts to bottle it, predict it, or mass-produce it – and perhaps we should be grateful for that. As much as our tech-savvy selves itch for a solution to the unruliness of love, imagine for a moment if it were truly solved. Imagine an AI that could tell you with 100% certainty who your perfect match is and exactly how to ensure eternal bliss. Would you want that? Sure, it might spare you heartbreaks and divorces. But it would also take something away. It would reduce love to a transaction with a guaranteed outcome, rather than an adventure of the soul.
Love’s unpredictability is inseparable from its meaning. The Atlantic piece on AI matchmaking ended with a resonant reflection: “Would you really want human connection to be so straightforward that a machine could crack it, just like that? For now, love evades understanding — which means that finding someone will remain, much of the time, a pain in the ass. It also means that when a connection is made, it will be so distinctive that no one ever could have predicted it.” There is wisdom in that tongue-in-cheek pessimism. The very elusiveness of love – the fact that it evades understanding and prediction – is what makes each love story unique and, in a sense, miraculous. When two people beat the odds and build something beautiful together, it feels all the more special because it wasn’t a given, it wasn’t preordained by some app. They chose each other, again and again, when they could have chosen otherwise. There is a kind of sacredness in that choice precisely because it can’t be fully rationalized or outsourced.
Mo Gawdat’s Emma is born from a good place: a compassionate impulse to use technology for healing rather than harm. He sees people suffering in loneliness and wants to apply human ingenuity to mend broken hearts. In many ways, it’s a project full of heart itself. But what Gawdat may have got wrong is assuming that love is a problem to be solved, rather than an experience to be entered into, with all its messiness. Love defies our optimization instincts. You can’t maximize it like you maximize profit or clicks; often, trying to control love too tightly makes it slip through your fingers. This doesn’t mean we throw up our hands and leave everything to chance – we can certainly be intentional and wise in how we seek and nurture relationships. It just means we must recognize the limits of what any intelligence, artificial or otherwise, can do in matters of the heart.
Ultimately, the quest to teach AI about love might teach us more about ourselves than it teaches the machines. It reminds us that for all our technological progress, some aspects of the human condition remain irreducible. Loneliness cannot be simply flipped to connection via a slick app; it asks of us vulnerability and courage. Heartbreak cannot be patched with a software update; it must be lived through and healed with time and support. Commitment cannot be coded; it’s a daily act of will and care. Perhaps the greatest value of an endeavor like Emma is that it sparks these conversations – about what love truly means, and what we truly need. If we come to realize that love is one frontier where our humanity must lead and our machines must follow, that in itself is a worthwhile insight.
In the end, love may remain one of the few realms where we are humbled by mystery. And maybe that is exactly what makes it sacred. Just as we don’t try to engineer awe when standing under a starry sky – we simply experience it – maybe we ought to approach love with a bit more reverence and a bit less urge for control. The machines we create can help illuminate patterns or hold up a mirror, but the living heartbeat of love is ours to nurture. It is unpredictable, effortful, often inefficient, sometimes painful – and therein lies its beauty. Because when love does endure, when it lights up our lives for decades, we marvel at it precisely because we know it could not have been guaranteed by any formula. It’s a testament to human hope, resilience, and grace. No AI, however “emotionally intelligent,” can take that leap for us. Love, it turns out, is still a journey of two hearts – mysterious, sacred, and unmistakably human.