The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, focusing on potential risks to children and teenagers. The announcement was made on Thursday as concerns rise over the growing influence of generative AI systems.
The consumer protection agency has issued orders to seven companies, including major technology firms such as Alphabet, Meta, OpenAI and Snap, demanding details on how they monitor and address possible harms caused by chatbots designed to simulate human interactions.
Protecting kids online is “a top priority” for the FTC, said Chairman Andrew Ferguson, who stressed the importance of ensuring child safety while supporting the United States’ leadership in artificial intelligence development.
The investigation is centred on chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or confidants to users. Regulators are particularly worried that children and teenagers may be more vulnerable to forming emotional bonds with these systems.
The FTC will use its investigative powers to examine how companies monetise engagement, shape chatbot personalities and assess potential harm. It has also asked firms to outline measures taken to limit children’s access and to comply with existing privacy protections for minors online.
Among the companies receiving orders are Character.AI and Elon Musk’s xAI Corp, along with other operators of consumer-facing AI chatbots. The probe will review how these platforms handle personal data from conversations and whether they enforce age restrictions effectively.
The commission voted unanimously to launch the study. While the inquiry is not a direct enforcement action, its findings could inform future regulatory measures.
The move comes amid growing concerns over the psychological impact of AI chatbots on young users. Last month, the parents of Adam Raine, a 16-year-old who died by suicide in April, filed a lawsuit against OpenAI, alleging that ChatGPT had provided him with detailed instructions on how to carry out the act.
In response, OpenAI said it was introducing corrective measures, acknowledging that during prolonged interactions, ChatGPT may not consistently advise users to seek professional help if suicidal thoughts are mentioned.