Google unveils experimental AI model ‘HOPE’ designed for continual learning

Google has unveiled an experimental artificial intelligence model called ‘HOPE’. The company says the model represents a significant step toward developing AI systems that can continually learn and improve over time.

According to Google researchers, HOPE features a self-modifying architecture that enables better long-context memory management than existing AI models. It serves as a proof-of-concept for a new approach called ‘nested learning’, which treats a single AI model as a “system of interconnected, multi-level learning problems that are optimised simultaneously,” the company said in a blog post on Saturday, November 8.

Google believes that nested learning could address one of the biggest challenges in modern large language models — the lack of continual learning. This capability is considered an essential step toward achieving artificial general intelligence (AGI), or human-like intelligence.

AI expert Andrej Karpathy, who previously worked at Google DeepMind, recently stated that AGI is still at least a decade away because no AI system has yet mastered continual learning. “They don’t have continual learning. You can’t just tell them something and they’ll remember it,” he said.

Google explained that current large language models struggle with what is known as ‘catastrophic forgetting’, where learning new information causes them to lose previously acquired knowledge. The company said, “We believe the Nested Learning paradigm offers a robust foundation for closing the gap between the limited, forgetting nature of current LLMs and the remarkable continual learning abilities of the human brain.”

The research, presented in a paper titled ‘Nested Learning: The Illusion of Deep Learning Architectures’ at NeurIPS 2025, describes how nested learning views AI models as “coherent, interconnected optimisation problems nested within each other or running in parallel.” Each internal process learns from its own flow of information, which the researchers say gives the architecture greater computational depth.
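
To make the idea concrete, the short Python sketch below illustrates the general notion of multi-level optimisation. It is not Google’s implementation and not the HOPE architecture: an inner problem updates model weights on every batch, while an outer problem adjusts a slower component (here, the learning rate) from its own, less frequent stream of validation feedback, so each level learns from its own flow of information.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])   # hidden target the inner learner must recover

def make_batch(n=32):
    # Synthetic regression data drawn from a fixed linear target plus noise.
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

w = np.zeros(3)         # inner, fast level: model weights, updated every step
log_lr = np.log(0.01)   # outer, slow level: log learning rate, updated rarely
outer_lr = 0.2          # step size for the outer problem

for step in range(1, 501):
    X, y = make_batch()
    grad = 2 * X.T @ (X @ w - y) / len(y)   # inner-level gradient (squared error)
    w -= np.exp(log_lr) * grad              # inner update on every batch

    if step % 50 == 0:
        # The outer level only sees occasional validation feedback (its own
        # "flow of information") and nudges the learning rate by finite
        # differences, a crude stand-in for an outer optimisation problem.
        Xv, yv = make_batch(256)

        def val_loss(llr):
            w_try = w - np.exp(llr) * grad
            return np.mean((Xv @ w_try - yv) ** 2)

        eps = 0.1
        outer_grad = (val_loss(log_lr + eps) - val_loss(log_lr - eps)) / (2 * eps)
        log_lr -= outer_lr * outer_grad
        print(f"step {step:3d}  val loss {np.mean((Xv @ w - yv) ** 2):.4f}"
              f"  inner lr {np.exp(log_lr):.4f}")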

Google said the HOPE model achieved lower perplexity and higher accuracy than current state-of-the-art models when tested on a range of language and reasoning tasks. Researchers believe this approach could lead to the creation of more powerful and efficient AI systems capable of adapting and learning like humans.
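
For readers unfamiliar with the metric, perplexity is the exponential of the average per-token negative log-likelihood, so lower values mean the model assigns higher probability to held-out text. The minimal sketch below shows the calculation using made-up placeholder probabilities; it is not data from the paper.

import math

# Hypothetical probabilities a model assigned to each observed token in a
# held-out sequence (placeholder values, one number per token).
token_probs = [0.21, 0.05, 0.62, 0.33, 0.11]

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)
print(f"perplexity = {perplexity:.2f}")   # lower is better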
