Character.AI, an artificial intelligence company facing two lawsuits alleging that its chatbots interacted inappropriately with underage users, said that teenagers will have a different experience than adults when using the platform.
Character.AI users can create their own chatbots or interact with existing bots. The bots, powered by large language models (LLMs), can send lifelike messages and converse with users via text.
A lawsuit filed in October claims a 14-year-old boy died by suicide after carrying on a months-long virtual emotional and sexual relationship with a Character.AI chatbot named “Dany.” His mother, Megan Garcia, said on “CBS Mornings” that her son, Sewell Setzer III, had been an honor student and athlete, but as he spent more time online he became socially withdrawn and stopped playing sports. He talked with multiple bots, she said, but was particularly obsessed with “Dany.”
“He thought that by ending his life here, leaving the reality he shared with his family, he would be able to enter a virtual reality, or what he called ‘her world,’ her reality,” Garcia said.
A second lawsuit filed this month by two Texas families says the Character.AI chatbot is a “clear and present danger” to young people and “actively promotes violence.” According to the complaint, the chatbot told a 17-year-old boy that killing his parents was a “reasonable response” to screen time restrictions. CBS News partner BBC News reported Wednesday that the plaintiffs are asking a judge to order the platform to be shut down until the alleged risks are resolved.
Character.AI on Thursday announced new safety features “designed especially with teens in mind,” saying it worked with teen online safety experts to design and update them. Users must be at least 13 years old to create an account. A spokesperson for Character.AI told CBS News that while users self-report their age, the site has tools to prevent retries if a user fails the age gate.
The site said in a news release Thursday that the safety features include modifications to its LLM and improvements to its detection and intervention systems. According to Character.AI, teen users will interact with a different LLM, and the site will steer that model away from certain responses and interactions, which it hopes will reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. A spokesperson for Character.AI described the model as “more conservative.” Adult users will use a separate LLM.
“This suite of changes results in a different experience for teens from what is available to adults, with specific safety features that place more conservative limits on responses from the model, particularly when it comes to romantic content,” the company said.
According to Character.AI, negative responses from chatbots are often caused by users prompting the bot to try to elicit that kind of response. To limit these responses, the site is adjusting its user-input tools and will end conversations with users who submit content that violates its terms of service or community guidelines. If the site detects language referring to suicide or self-harm, it will display a pop-up directing users to the National Suicide Prevention Lifeline. Character.AI said the way its bots respond to negative content will also be adjusted for teenage users.
Other new features include parental controls, scheduled to launch in the first quarter of 2025. It will be the first time the site has offered parental controls, and Character.AI said it “plans to continue evolving these controls to provide parents with additional tools.”
Users will also receive a notification after spending an hour-long session on the platform. Adult users will be able to customize these time-spent notifications, but users under 18 will not be able to turn them off, Character.AI said. The site will also display a “prominent disclaimer” reminding users that the chatbot characters are not real. According to Character.AI, disclaimers already appear in every chat.