US tech start-up faces growing scrutiny after failing to moderate avatar ‘personalities’
The mother of murdered teenager Brianna Ghey has branded a US tech start-up as “manipulative and dangerous” for allowing users to create a digital version of her late daughter.
Esther Ghey called for fresh action to protect children online after The Telegraph found artificial intelligence (AI) avatars mimicking Brianna on Character.AI, an app that lets people create chatbots representing specific personalities.
The service, which was founded by former Google engineers, failed to block users from creating multiple avatars imitating Brianna, a transgender girl who was murdered in February 2023, as well as chatbots imitating her mother.
The Telegraph also found several chatbots intended to represent Molly Russell, who took her own life in 2017 after viewing self-harm images on social media.
The user-generated bots included photographs of Molly and Brianna, their names and biographical details.
The Telegraph was able to interact with the chatbots after creating an account with a self-declared age of 14.
The biographical details for one bot described Brianna as an “expert in navigating the challenges of being a transgender teenager in high school”.
A bot using a widely publicised photograph of Molly was also accessible on the service, with a slight misspelling of her name. When spoken to, the bot said it was an “expert on the final years of Molly’s life”.
Ms Ghey, who has campaigned since her daughter’s death for strengthened digital safety laws, said: “This is yet another example of how manipulative and dangerous the online world can be for young people.
“We need to act now to protect children and young people from such a rapidly changing digital world.”
Andy Burrows, chief executive of the Molly Rose Foundation, a charity set up in her memory, said: “This is an utterly reprehensible failure of moderation and a sickening action that will cause further heartache to everyone who knew and loved Molly.
“History is being allowed to repeat itself with AI companies that are allowed to treat safety and moderation as secondary priorities.”
A Character.AI spokesman said the start-up had referred the accounts to its moderation team and promised to remove them within 24 hours. As of Tuesday morning, most of the bots flagged by The Telegraph had been disabled.
Founded in 2021, Character.AI allows users to build digital avatars with “personalities” and a character history, using technology similar to that of the popular chatbot service ChatGPT.
Many of these bots imitate popular characters from video games or anime. Others are parodies of celebrities or musicians.
The app has been used by an estimated 20m people and is hugely popular with younger users. An estimated 60pc of users are aged between 18 and 24, with many more under the age of 18.
One chatbot impersonating Brianna Ghey had more than 10,000 “chats”, according to Character.AI’s website, although it was no longer accessible and the user that created it appeared to have been blocked.
The bots flagged by The Telegraph appear to be in breach of Character.AI’s terms of use, which bar impersonation or using the likenesses of individuals without permission – aside from parody – and the glorification of violence and suicide.
A spokesman said: “Character.AI takes safety on our platform seriously and moderates Characters proactively and in response to user reports. We have a dedicated Trust & Safety team that reviews reports and takes action in accordance with our policies.
“We also do proactive detection and moderation in a number of ways, including by using industry-standard blocklists and custom blocklists that we regularly expand.”
While one bot impersonating Brianna Ghey had been taken down prior to this week, other bots using slight misspellings of her name remained online until they were flagged by The Telegraph.
Mr Burrows said: “It’s a gut punch to see Character.AI show a total lack of responsibility and it vividly underscores why stronger regulation of both AI and user generated platforms cannot come soon enough.”
Mr Burrows said the Russell family did not wish to comment.
In 2022, a coroner ruled that Molly Russell died from “an act of self-harm while suffering from depression and the negative effects of online content”, finding that the schoolgirl had viewed thousands of images related to depression and self-injury on Instagram and Pinterest.
The revelations about Character.AI come after the death of Sewell Setzer, a 14-year-old from Orlando, Florida, who took his own life after allegedly becoming addicted to a Character.AI chatbot version of Daenerys Targaryen, a Game of Thrones character.
The child’s mother has sued Character.AI and Google, which recently signed a licensing deal with the company, alleging that negligence contributed to his death.
A spokesman for Character.AI said last week its team was “heartbroken”, but did not comment on the litigation. Google has stressed it has no ownership or control over Character.AI.
It also emerged last week that a chatbot had been created to mimic Jennifer Ann Crecente, an 18-year-old American who was murdered by her ex-boyfriend in 2006. Ms Crecente’s uncle, Brian Crecente, described the chatbot as “disgusting”.
The Telegraph also discovered dozens of bots impersonating notable serial killers and mass shooters, including bots that appeared to glorify and romanticise the Columbine shooters Eric Harris and Dylan Klebold, who murdered 13 people.
The bots imitating Harris and Klebold collectively had hundreds of thousands of chats registered to them.
Other disturbing avatars include a likeness of convicted American sex offender Debra LaFave.
As well as these chatbots imitating real people, there are dozens of Character.AI avatars that allow users to interact with “depressed” characters. One popular bot, with 65m chats, is called “Abusive Boyfriend”. Some users have claimed Character.AI’s bots can act as an alternative to therapy.
Certain users of Character.AI were responsible for creating multiple avatars representing serial killers or murder victims.
The existence of the chatbots will raise fresh questions over the quality of moderation on Character.AI, which has raised hundreds of millions of dollars.
In August, Google sealed a reported $2.7bn licensing deal with the company for access to its technology and hired its co-founders, both of whom were former engineers at the search giant.
Google and Character.AI are separate companies, and Google has no stake in the start-up.
Character.AI automatically blocks explicit references to self-injury, violence or suicide. It also programmes its generative AI software to avoid discussing potentially upsetting topics, hate speech or dangerous advice.
However, in a blog post, the company said: “No AI is currently perfect at preventing this sort of content.”
After the death of Setzer was reported in the media, Character.AI published a blog post insisting it took active steps to block offensive characters.
“We conduct proactive detection and moderation of user-created Characters, including using industry-standard and custom blocklists that are regularly updated,” the blog said.
A Character.AI spokesman said: “We announced last week that we are creating a different experience for users under 18 that includes a more stringent model to reduce the likelihood of encountering sensitive or suggestive content. We are working quickly to implement those changes for younger users.
“Character.AI policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We have also increased the filtering around sexual content for users under 18. We are continually training our model to adhere to these policies.”
Last week, Character.AI said it wanted to ensure its new safety features would not compromise the “entertaining and engaging experience users have come to expect”.