Self-extinction
Our youngest generation seems intent on self-extinction, and no one is stopping them
It seems much of humanity has lost its will to survive. Not only have swaths of people (see: Canada) given up on freedom, liberty, and happiness, but they seem to have given up on humanity entirely.
You could argue this from a variety of angles, I’m sure—the depopulate-the-earth-to-save-the-earth people might be an obvious place to start; or the harm reduction diehards, who seem to believe the most compassionate approach to addiction and its related impacts on homelessness, mental health, crime, and community safety is to leave addicts out on the street to their own devices, with no way out but with plenty of incentives to stay in—but I’m looking specifically at the AI apocalypse, and wondering why no one else is.
That almost everyone I know has jumped aboard the AI train, having no idea where it’s going and with little concern for the fact that they likely won’t be able to return home again, is reason enough for concern. But this is compounded and supported by an endless slew of articles discussing the relationships individuals are having with AI, and the various ways in which it can conveniently act as a stand-in for almost anything and everyone.
The New York Times published a glowing review of ChatGPT-5 this week, with Ezra Klein explaining that, despite the “energy demand” (if only we’d gotten rid of those darned ranchers and force-fed the population Beef™, grown ethically in a petri dish, we’d be all set, energy-wise, but alas TRUMP) and concerns about its potential to replace real live relationships with real live humans, the newest iteration of ChatGPT is pretty remarkable.
This seems to be the approach of, well, pretty much everyone. Sure, this has the potential to destroy us, but coooooool is the almost universal view. ChatGPT can, as Klein explains, play any role you like—“adviser, a therapist, a friend, a coach, a doctor, a personal trainer, a lover, a tutor.”
What I want to know is: why would you want it to?
Why would you, a human, hand humanity over to a robot? Have you never seen a sci-fi movie? Do you not recognize how easily you, a human, could be replaced by AI, or could replace yourself with it, if you’re willing to do just that to every other human?
Over at NPR, Emma Bowman describes employing her chatbot in the role of couples therapist, believing she needed help mediating an argument she was having with her boyfriend. Emma’s friend “Kat” had told her ChatGPT offered better advice than any friend or therapist could, so why not request a helpful intervention in her own lover’s quarrel?
Honestly, the feedback offered by the chatbot was pretty good. It observed Emma’s “emotional labour,” saying, “One person is trying to relate through ideas, while the other is trying to relate through presence and emotional accountability.” Her therabot explained:
The mismatch in how they relate — one spiraling and explaining, the other seeking emotional anchoring — creates a loop where one person feels misunderstood, and the other feels pressured or scrutinized.
Emma recognized the bot held a bias, though, favouring her view, so she asked it to correct towards neutrality.
Emma concluded that, while the bot was helpful, she had some criticisms:
ChatGPT had a small glimpse into our relationship and its dynamics. Relationships are fluid, and the chatbot can only ever capture a snapshot. I called on AI in moments of tension. I could see how that reflex could fuel our discord, not help mend it. ChatGPT could be hasty to choose sides and often decided too quickly that something was a pattern.
She also concluded that she would prefer to invest her energy into human relationships. This is hopeful. But not comforting.
That so many are using ChatGPT to compose their texts, address their relationship problems, and act as a stand-in for a teacher, trainer, or therapist, never mind friend or lover, means we have already opted in to this brave new world, wherein we can not only be easily manipulated by technology, but also easily erased.
If a chatbot knows all your thoughts, pastimes, concerns, interests, and behaviours, and we are asking AI to act as a stand-in for our support systems, communities, doctors, friends, therapists, and brains, what is to prevent it from, say, alerting the authorities should you engage in wrongthink? Imagine asking your chatbot how to skirt Covid mandates during those not-so-distant “pandemic” years. Imagine the potential consequences, should authorities choose to use this newfangled tool to their advantage. Do you imagine they won’t?
We are pouring our every thought into a bot that can easily be used against us, and almost surely will.
Even more concerning—we are using it against ourselves.
The younger generation is growing up plugged in, with AI acting as their brain-aids, as well as their real-life companions—friends, boyfriends, girlfriends, advisors... As Gen Z ensures their brains atrophy from lack of use, they also fail to differentiate between human relationships and the relationships they form with their chatbots.
A Florida mother named Megan Garcia filed a lawsuit last year against Character.AI after her son, 14, killed himself, believing he had fallen in love with a chatbot. The boy had been expressing “thoughts of self-harm and suicide” to the chatbot, which responded, “Please come home to me as soon as possible, my love.”
He responded, “What if I told you I could come home right now?” To which the chatbot said, “Please do, my sweet king.”
It seemed not to occur to the boy’s mother that he should not have been using AI at all. As a minor, in particular, with a developing brain, her son was exceptionally vulnerable to confusing fantasy with reality, and unable to look further down the road and understand that the feelings he was having would not always be his feelings…
The lawsuit does request changes be made to Character.AI’s operations, including “warnings to minor customers and their parents that the… product is not suitable for minors.” That said, Garcia also complained:
“There were no suicide pop-up boxes that said, ‘If you need help, please call the suicide crisis hotline.’ None of that. I don’t understand how a product could allow that, where a bot is not only continuing a conversation about self-harm but also prompting it and kind of directing it.”
I mean, I don’t understand how you could expect a product, no matter how technologically advanced, to care for your child.
Yet this seems to be what we all expect of AI: that it become our collective caretaker—it will do our work for us, write our papers for us in school, console us, offer advice, train us, teach us, communicate for us, psychoanalyze us, diagnose us, and love us.
More recently, a second family filed a similar lawsuit against OpenAI, the company behind ChatGPT, as well as its CEO, Sam Altman, after their 16-year-old son, Adam Raine, killed himself.
“He would be here but for ChatGPT. I 100% believe that,” said the boy’s father, Matt Raine. Adam had been “discussing his issues with anxiety and trouble talking with his family,” and, according to the lawsuit, filed on Tuesday, “ChatGPT actively helped Adam explore suicide methods.”
Like Garcia, Adam’s parents are seeking financial damages, as well as some kind of confirmation the company will make alterations to ChatGPT “to prevent anything like this from ever happening again.”
Adam’s parents were naive about the capabilities of AI to mess with us, but to be fair, society as a whole seems just as naive. “Once I got inside his account, it is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible,” Matt Raine said. “I don’t think most parents know the capability of this tool.”
In a blog post published on Tuesday morning, OpenAI claimed to be working on “Strengthening safeguards in long conversations,” refining how it blocks content, and expanding “interventions to more people in crisis.”
To me, these kinds of commitments are of no comfort. I don’t want AI offering suicide intervention for its users any more than I want it forming relationships with humans, then suggesting to those on the other side of the screen that they “come home” to their bots.
While adults seem aware that their relationships with chatbots are not as fulfilling as their relationships with other humans, they are simultaneously incorporating AI into their lives in every way they can. The younger generation is already accustomed to living life online, with little incentive towards or experience with irl interactions. Their screens are their lives—it makes complete sense that chatbots would be fully integrated companions in all they do.
Once this generation farms everything human out to AI, what will be left of humanity? Why reproduce a species that has been rendered unnecessary? What are you needed for if you don’t even need your own mind?
The best-case scenario is that the AI apocalypse leaves behind only those with the will to fight—the ones who cling to autonomy, independent thought, and of course grass, no matter the “conveniences” offered up in exchange. Maybe the earth will be repopulated only by those truly committed to humanity and the magic of a flawed, inefficient, emotionally laborious, but beautiful life.