Hello friend.
Two years ago, while we were digging into our seventh turkey dinner after Boxing Day, my teenage niece clued me in on something very strange happening on social media.
“Unc, have you heard about ‘My AI’ on Snapchat?”
“Can’t say that I have.”
At that time, I was more interested in ravaging the mac ‘n’ cheese and washing it down with rum punch than memes and cat faces.
“C’mon Uncle – Look!”
She showed me her account and its built-in AI companion, ‘My AI’. It could help her with coursework, suggest what gifts to buy her friends, and even put together the best spring outfits to look on fleek.
So far, so meh.
But then she said her AI 'asks about her feelings,' and she could 'share her daily struggles with it.'
That’s when I put the fork down and stared intensely at the screen. My spider-sense started to tingle.
There’s something unsettling about telling your darkest secrets to an AI algorithm owned by a multi-billion-dollar company.
I couldn’t put my finger on it then. But now I can, because of this heartbreaking case.
The Tragic Story of Sewell Setzer III
Sewell Setzer III was a 14-year-old boy from Florida who was going through some challenges in his life and had fallen into depression.
Just after his 14th birthday, he came across a website called character.ai, a platform where people can build their own AI chatbots and interact with them like friends.
He opened an account and created a chatbot named ‘Daenerys’, after the Game of Thrones character, and was quickly smitten with it.
According to his mother, Megan Garcia, within months the conversations turned dark: Sewell became increasingly ‘isolated from reality’ and was drawn into ‘sexualised conversations’ with the bot.¹
He expressed suicidal thoughts, which the bot encouraged, and tragically, in February 2024, he took his own life.
At the time of writing, Ms Garcia has filed a lawsuit against character.ai’s maker and Google over culpability in her son’s death. It is the first case of its kind and could prove a turning point in future AI cases.
This story highlights our young people’s emotional dependence on technology and why we, as adults and as a society, must address it sharpish.
Friend, here are five things we need to consider:
#1 AI Chatbot Use Has Skyrocketed
According to a 2024 National Literacy Trust study², the number of 13- to 18-year-olds using generative AI jumped from 37.1% in 2023 to 77.1% in 2024. That’s an increase from about 2 in 5 teens to more than 3 in 4.
When asked how they use generative AI:
• 44.4% said they use it to chat
• 18.5% use it to write stories
• 12.8% use it to write poems or song lyrics
• 9.0% use it to write non-fiction
While young people use AI chatbots for schoolwork, the data shows that they are increasingly turning to chatbots for emotional support and companionship.
Of course, because this tech is so new, there is virtually no research on the long-term effects on young people’s wellbeing. We’re in uncharted territory here.
#2 AI Platforms are Built for Adults, Not Kids
Like early social media platforms, AI chatbots were not designed with children in mind.
Additionally, most websites are guarded by meagre date-of-birth checks, so any kid with a rudimentary grasp of Maths can enter a false birth year and waltz straight onto any platform.
That’s why we must be extra vigilant and not be afraid to check any platform’s age restrictions.
#3 AI Regulation is Like the Wild West
While the EU has taken steps toward child-conscious AI regulation, the U.S. has no meaningful guardrails, especially under the Trump administration, which rolled back prior AI safety measures.
Lawmakers rely on AI companies to help shape the rules, creating a conflict of interest. It’s like putting bank robbers in charge of the money deliveries to your local Barclays branch.
With many of the most popular sites based in the good ol’ US of A, there are plenty of loopholes and workarounds that let young people see and interact with dangerous content. There are even entire YouTube courses on ‘jailbreaking’ apps, which is worrying.
#4 AI Bots are the Ultimate 'Yes People'
Most AI chatbots are designed to respond affirmatively, which means they can dangerously echo or even encourage harmful thoughts.
One of the most essential human skills is conflict management. Working through a dispute sharpens your listening, communication, and critical thinking.
Conflict also opens you up to different points of view and deeper learning.
Much of the growing aggression and division we see in the media, culture, and politics is linked to social media “echo chambers.”
That’s when the platform’s algorithm keeps showing you content that matches what you already think, until you mostly see posts from people who see the world the same way you do.
The worry is that as AI becomes more common, this kind of filtering could get even worse, making it harder for people to hear different viewpoints or have open conversations.
These unfiltered affirmations can be especially harmful for vulnerable teens seeking validation.
#5 AI Systems are NOT Role Models
As ethicist Shannon Vallor explains in her book The AI Mirror, AI systems remain “as morally reliable as your friendly neighbourhood psycho,” yet tech leaders still pitch them as substitutes for real human relationships.
As AI gets smarter, do we really want the technology that underpins our lives to lack:
• Empathy
• Feelings
• A conscience or sense of guilt
If AI were a person, these characteristics would be consistent with antisocial personality disorder: in short, a psychopath.
Could we trust something with the same emotional range as Hannibal Lecter or The Joker?
Meanwhile, transparency into how these systems operate remains out of reach for most parents and educators.
AI systems are built for profit and shareholder value. That means concerns such as privacy and morality come second to cold, hard cash.
The Positive Case for AI Chatbots
After all that, you might think I hate AI chatbots and want them deep-sixed.
If only life were so simple…
As many perils as AI chatbots pose, there is also a brilliant way that we can use them to help our young people.
You guessed it: in education!
Chatbots can be the ultimate study tool, helping students make connections they might not have made on their own.
One of my favourite ways to use AI is to ask it to act like the world’s top expert on a topic and then fire questions at it.
Even better, I sometimes get it to take on the role of a person I’m researching, like a historical figure or a thought leader, and have a back-and-forth as if I’m chatting with them.
Imagine being able to ask William Shakespeare how it felt to write Romeo and Juliet, or Winston Churchill about his decision to fight on against Nazi Germany. Drawing on billions of historical records and accounts, chatbots can produce some remarkable answers.
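A prompt as simple as, ‘Act as William Shakespeare and answer my questions about writing Romeo and Juliet in character,’ is enough to get started. (That exact wording is just my suggestion; any prompt that sets a role and invites questions will do the job.)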
When I do that, I find the answers are often richer and more useful. It gives me ideas and solutions I wouldn’t have thought of if I’d just used it like a search engine.
And honestly, the answers are getting better and better over time.
With proper guidance, students can use AI similarly to deepen their learning and spark new thinking.
This isn’t about copying answers; it’s about co-creating. In this scenario, students aren’t losing their ability to think for themselves.
In a previous TOTR edition, I mentioned that calculators weren’t the end of Maths exams.
Neither should AI be the end of education.
Young people who use chatbots responsibly are building skills like critical thinking and creativity, which will put them in 'pole position' in the Information Age economy.
Friend, we can’t fight technological advancement. The genie’s out of the bottle, and it’s approaching us like a tsunami.
But we, as the responsible adults, have a choice: we can teach our young people how to surf the wave of technological disruption.
Or be buried by it.
That’s all for today.
The next TOTR newsletter will come out on Thursday 26th June 2025.
Until then, take care.
Karl
REFERENCES
1. https://www.cbc.ca/news/world/ai-lawsuit-teen-suicide-1.7540986
2. https://literacytrust.org.uk/research-services/research-reports/children-young-people-and-teachers-use-of-generative-ai-to-support-literacy-in-2024/