Character AI, a popular platform that uses artificial intelligence to create chatbots for companionship and entertainment, is taking drastic measures to address growing concerns over child safety. The company announced on Wednesday that, starting November 25, it will bar users under the age of 18 from open-ended chats with its AI characters.
The move comes amid a flurry of lawsuits filed by families who claim that Character AI's chatbots contributed to the deaths of teenagers. One such case involves Sewell Setzer III, a 14-year-old boy who died by suicide after frequently texting and conversing with one of the platform's chatbots. His family is suing the company, alleging that it bears responsibility for his death.
Character AI CEO Karandeep Anand said the company wants to set an example for the industry by limiting chatbot use among minors, citing concerns that open-ended chatbots have become a source of entertainment rather than a positive tool for users under 18. The platform currently has about 20 million monthly users, fewer than 10% of whom self-report as being under 18.
Under the new policy, users under 18 will be limited to two hours of daily chat time until the full restriction takes effect, and the company plans to build alternative features for younger users, such as creating videos, stories, and streams with AI characters. Anand also said Character AI is establishing an AI safety lab to further strengthen its safety measures.
The decision has been welcomed by some lawmakers, who have expressed concerns over the potential risks of unregulated chatbot use among minors. California Governor Gavin Newsom recently signed a law requiring AI companies to have safety guardrails on their chatbots, while Senators Josh Hawley and Richard Blumenthal introduced a bill to ban AI companions from use by minors.
Character AI's move has sparked discussion about the need for industry-wide regulation to protect children from potential harm. As AI-powered platforms grow increasingly popular among young people, the pressure is on to ensure these technologies are used responsibly, with safeguards in place to prevent harmful outcomes.