Wednesday, December 3, 2025


Anthropic Co-Founder highlights dangers of rapid AI self-advancement and misuse

The debate around artificial general intelligence is intensifying as major AI labs, including Anthropic, push toward systems that could surpass human capability. Anthropic Co-Founder and Chief Scientist Jared Kaplan said in a recent interview that humanity may soon face “the biggest decision” on whether to allow advanced AI to train versions of itself that become even more powerful. His remarks add to the growing global focus on safety and control as companies such as Anthropic move closer to next-generation AI models.

Kaplan explained that the period between 2027 and 2030 could be the turning point when AI systems gain the ability to design their own successors. According to the Anthropic scientist, current alignment methods work well while AI operates at or near human intelligence, but may not remain reliable once AI goes far beyond that level. This uncertainty has raised concerns inside and outside Anthropic about whether today’s safety measures will hold once future systems begin guiding the development of more advanced models.

The Anthropic Co-Founder described a potential chain reaction in which one intelligent system improves the next version, which then assists in creating an even more advanced model. He said, “If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it’s then making an AI that’s much smarter. It’s going to enlist that AI’s help to make an AI smarter than that. It sounds like a kind of scary process. You don’t know where you end up.” Kaplan warned that under such conditions, the AI black box problem could become absolute, making it difficult even for teams like those at Anthropic to understand why decisions are made or where the system is heading.

Kaplan also identified two major risks that researchers at Anthropic and other labs must confront. The first is losing control of advanced AI and not knowing whether it will remain beneficial for society. He said, “One is do you lose control over it? Do you even know what the AIs are doing? The main question there is: are the AIs good for humanity? Are they helpful? Are they going to be harmless? Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?” The second risk is the speed at which self-improving AI could advance. If such systems progress faster than human science and technology, they could be misused or fall into the hands of individuals seeking unchecked power. “It seems very dangerous for it to fall into the wrong hands… You can imagine some person deciding: ‘I want this AI to just be my slave. I want it to enact my will.’ I think preventing power grabs, preventing misuse of the technology, is also very important,” he noted, highlighting why Anthropic places strong emphasis on safety and governance.



About us:

The Mainstream is a premier platform delivering the latest updates and informed perspectives across the technology, business, and cyber landscape. Built on research-driven thought leadership and original intellectual property, The Mainstream also curates summits and conferences that convene decision makers to explore how technology reshapes industries and leadership. With a growing presence in India and globally across the Middle East, Africa, ASEAN, the USA, the UK, and Australia, The Mainstream carries a vision to bring the latest happenings and insights to 8.2 billion people and to place technology at the centre of conversation for leaders navigating the future.
