Has Google solved two of AI’s oldest problems?

November 15, 2025

### Has Google Solved Two of AI’s Oldest Problems?

For decades, the quest for artificial general intelligence (AGI) has been hampered by fundamental, almost philosophical, roadblocks. Researchers could build systems that excelled at one specific task, but they stumbled when faced with the complexity and adaptability of the real world. Two of the most persistent of these challenges have been **catastrophic forgetting** and the **symbol grounding problem**.

Recently, however, Google’s AI labs have unveiled work that suggests they may have cracked the code, or at least found a powerful new way forward. While the word “solved” is a heavy one in science, these developments represent a seismic shift in how we approach building intelligent machines.

#### The First Hurdle: Catastrophic Forgetting

Imagine you spend years mastering the piano. Then, you decide to learn the guitar. In the world of traditional AI, the process of learning the guitar would likely erase your piano-playing skills entirely. This is **catastrophic forgetting**: a neural network trained on a new task overwrites the knowledge it gained from a previous one. It’s why we have separate AIs for playing Go, translating languages, and identifying images, rather than one AI that can do it all.
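The piano-then-guitar effect is easy to demonstrate on a toy model. The sketch below (an illustrative example, not anything from Google's work) trains one shared linear model on Task A, then on Task B, and shows that the Task B training overwrites the Task A solution:

```python
import numpy as np

# Toy demo of catastrophic forgetting: a single shared weight,
# trained sequentially on two tasks, loses Task A when it learns
# Task B. Tasks and hyperparameters here are invented for illustration.

rng = np.random.default_rng(0)

def make_task(slope, n=100):
    x = rng.uniform(-1, 1, size=(n, 1))
    return x, slope * x

def train(w, x, y, lr=0.1, epochs=200):
    # Full-batch gradient descent on mean squared error.
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(x)
        w = w - lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

xa, ya = make_task(2.0)    # Task A: learn y = 2x
xb, yb = make_task(-3.0)   # Task B: learn y = -3x

w = np.zeros((1, 1))
w = train(w, xa, ya)
err_a_before = mse(w, xa, ya)   # near zero: Task A mastered

w = train(w, xb, yb)            # same weights now learn Task B
err_a_after = mse(w, xa, ya)    # large: Task A has been overwritten

print(f"Task A error before B: {err_a_before:.4f}, after B: {err_a_after:.4f}")
```

Because both tasks compete for the same parameters, the second task's gradient updates have nowhere to go but through the weights that encoded the first.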

**Google’s Solution: Pathways**

Google’s answer to this is an architecture called **Pathways**. Instead of training a massive, dense model for every single task, Pathways is envisioned as a single, enormous model that is sparsely activated. Think of it like the human brain. When you read a book, you don’t use every single neuron you possess. Only specific pathways relevant to language, vision, and comprehension light up.

Pathways operates on the same principle. When given a task, it activates only the most relevant “pathways” within its vast network. This means it can learn to perform a new task (like understanding spoken language) by routing the problem through a new set of pathways, leaving its existing skills (like identifying a cat in a photo) untouched and intact. This allows one model to accumulate thousands or millions of different skills over time without destructive interference. It’s a move from specialized, single-purpose models to a single, multi-modal, and multi-tasking system that learns more efficiently and remembers what it has learned.
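Google has not published Pathways' internals in full, but the routing idea it describes resembles mixture-of-experts models, where a learned gate activates only the top-k sub-networks per input. The sketch below shows that general mechanism with invented sizes and random weights; it is not Google's implementation:

```python
import numpy as np

# Illustrative sparse routing in the mixture-of-experts style:
# a gate scores all experts, but only the top-k are activated.
# All names, sizes, and weights here are made up for the sketch.

rng = np.random.default_rng(1)

N_EXPERTS, DIM, TOP_K = 8, 16, 2

# Each "expert" is a small sub-network (here, just a linear map).
experts = [rng.normal(size=(DIM, DIM)) * 0.1 for _ in range(N_EXPERTS)]
router = rng.normal(size=(DIM, N_EXPERTS)) * 0.1  # learned gating weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x):
    """Route the input through only its top-k experts."""
    gate = softmax(x @ router)           # routing scores over experts
    top = np.argsort(gate)[-TOP_K:]      # indices of the k best-scoring experts
    out = np.zeros(DIM)
    for i in top:                        # every other expert stays idle
        out += gate[i] * (x @ experts[i])
    return out, sorted(top.tolist())

x = rng.normal(size=DIM)
y, active = forward(x)
print(f"active experts: {active} of {N_EXPERTS}")
```

Because only `TOP_K` of the `N_EXPERTS` sub-networks receive gradient updates for any given input, a new task can, in principle, claim fresh experts while leaving the experts serving older tasks undisturbed.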

#### The Second, Deeper Hurdle: The Symbol Grounding Problem

This problem is more abstract but even more critical. How does an AI truly *understand* what a word means? A large language model (LLM) might be able to write a poem about an “apple,” but its knowledge is based on statistical relationships between words it has seen in trillions of sentences. It doesn’t know what an apple feels like, tastes like, or that it crunches when you bite into it. Its symbol for “apple” isn’t *grounded* in real-world sensory experience. It’s a brilliant parrot, manipulating symbols without grasping their underlying meaning.

**Google’s Solution: Grounding Language in Robotics**

Google’s approach here is to get AI out of the digital world and into the physical one. By combining their powerful LLMs with actual robots, they are forcing the AI to connect words to actions and perceptions.

In a project called **SayCan**, a robot is given a high-level command like, “I just worked out, can you bring me a healthy snack and something to quench my thirst?”

Here’s where the magic happens:
1. The LLM breaks the command down into plausible sub-tasks (e.g., “find an apple,” “find a water bottle,” “bring them to the person”).
2. Crucially, the robot’s own systems then evaluate each of those steps based on its current physical environment and abilities. It might see that there are no apples, but there is a protein bar. It sees a water bottle on the counter. It calculates the probability of successfully grasping each item.
3. This real-world feedback is sent back to the language model. The LLM then uses this information to select the most practical and achievable sequence of actions.
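The selection loop above boils down to combining two scores per candidate step: the language model's estimate of how useful the step is, and the robot's affordance estimate of how likely it is to succeed. The sketch below uses invented placeholder numbers, not outputs of the real models:

```python
# Hypothetical sketch of SayCan-style action selection: multiply a
# language-model "usefulness" score by a robot "affordance" (success
# probability) score and pick the best candidate. All numbers are
# made-up placeholders for illustration.

def pick_action(candidates, llm_score, affordance):
    """Return the candidate maximizing usefulness * feasibility."""
    return max(candidates, key=lambda a: llm_score[a] * affordance[a])

candidates = ["find an apple", "find a protein bar", "find a water bottle"]

# LLM: how relevant is each step to "bring me a healthy snack"?
llm_score = {"find an apple": 0.6,
             "find a protein bar": 0.5,
             "find a water bottle": 0.3}

# Robot perception: no apples in sight, so grasping one is unlikely.
affordance = {"find an apple": 0.05,
              "find a protein bar": 0.9,
              "find a water bottle": 0.95}

best = pick_action(candidates, llm_score, affordance)
print(best)  # the protein bar wins despite the apple's higher LLM score
```

The key point is in the multiplication: the apple scores highest on language grounds alone, but its near-zero affordance (the robot can't see one) knocks it out of contention.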

In this loop, the word “water bottle” is no longer just a collection of pixels or a statistical token. It is grounded in the robot’s ability to see it, plan a path to it, and physically pick it up. The abstract language is directly linked to concrete, physical reality.

#### So, Are the Problems Solved?

To claim these age-old problems are definitively “solved” would be premature. Pathways is still an architectural vision being built out, and while incredibly promising, its long-term stability against catastrophic forgetting at massive scale has yet to be proven.

Similarly, grounding language in a robot’s simple actions doesn’t mean the robot *feels* the coolness of the water bottle or *understands* the concept of health in the way a human does. The grounding is, for now, limited to the scope of the robot’s immediate tasks.

However, what Google has done is provide the most compelling and practical answers to these questions to date. They have reframed the debate, shifting the conversation from “how could we ever solve this?” to “how can we build upon this solution?” They haven’t just inched forward; they’ve outlined a new path, and for the first time, the destination of a more general, adaptable, and truly understanding AI feels a little less like science fiction.
