The Race to AGI and the Rise of Collaborative Intelligence
Why GPT-5 proves we’re not there yet, and how ECI prepares us for the next great intelligence shift
By Carsten Krause — August 14, 2025
Artificial General Intelligence (AGI) – the hypothetical AI that can match human cognitive abilities across virtually all tasks – has become the ultimate prize in today’s tech world. Tech giants and research labs are racing toward AGI, pouring billions into ever-larger models and new AI architectures. Yet despite rapid progress in generative AI, true AGI remains elusive. Even OpenAI’s much-anticipated GPT-5 model, released in 2025, has fallen far short of AGI (CDO Times, 2025). In fact, OpenAI’s CEO Sam Altman himself admits GPT-5 is not AGI, noting it still lacks critical capabilities like the ability to learn continuously from new data (Windows Central, 2025). This reality check underscores that we are not there yet – and it raises the question: what will it take to reach AGI, and how can we prepare?
When OpenAI dropped GPT-5, the tech press erupted with declarations that it was “basically AGI.” Venture capitalists called it “a once-in-a-generation leap.” Social media posts bordered on messianic. The implication? We’d crossed the Rubicon into artificial general intelligence — the mythical point where machines match (or surpass) human cognition.
[Screenshot: the GPT-5 announcement screen, August 8, 2025]

But here’s the reality: GPT-5 is a very capable large language model, not an autonomous digital brain. In fact, in one of my first interactions with it, GPT-5 failed to correctly calculate 8.9 minus 8.11 — not exactly the kind of task you’d expect to stump an “almost AGI” system.
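For the record, the correct answer is 0.79. A two-line sanity check, sketched here in Python with the standard decimal module so binary floating-point rounding doesn't muddy the result, is all it takes:

```python
from decimal import Decimal

# The subtraction GPT-5 reportedly fumbled, done exactly in decimal arithmetic
print(Decimal("8.9") - Decimal("8.11"))  # prints 0.79
```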

Even more telling, in early tests it didn’t seem aware of its own identity, joking that I’d “need to travel into the future to see GPT-5.”

Amusing? Yes. Confidence-building for an enterprise AI rollout? Absolutely not.
For executives, this matters because every dollar you spend on AI is a bet — and you can’t afford to bet on branding. You need to bet on readiness.
The Global Race for AGI: Who Will Get There First?

The pursuit of AGI has spurred a competitive race among top AI organizations. OpenAI, Google’s DeepMind, Anthropic, and others are investing heavily to claim the lead (Windows Central, 2025). This race is driven by the promise that an AGI could revolutionize industries – or dominate them – and by fears that falling behind could mean irrelevance. Enormous resources are being spent on this goal: vast computing power, talent, and capital are dedicated to pushing AI capabilities to new heights. Yet despite the hype, no clear winner has emerged, and AGI’s timeline remains uncertain.
Leading voices offer differing predictions. DeepMind’s CEO Demis Hassabis recently suggested AGI could be achieved within 5 to 10 years (possibly on the shorter end). By contrast, Google’s CEO Sundar Pichai has expressed skepticism that today’s technology is sufficient, saying it’s “entirely possible” we won’t reach AGI with current hardware alone. Even Sam Altman, who is confident OpenAI will develop AGI in the next five years, acknowledges that AGI might arrive gradually rather than as a sudden, singular breakthrough. In short, no one knows for sure when AGI will arrive or who will cross the finish line first.
What is clear is that the stakes are enormous. An AGI could transform the economy and society on a scale “10 times bigger than the Industrial Revolution – and maybe 10 times faster,” as Hassabis put it. Such power carries profound implications, from radical productivity gains to job market upheaval. Little wonder that the AGI race is often framed in existential terms: it’s not just about technological pride, but about shaping the future of humanity.
Not There Yet: GPT-5 as a Reality Check
To understand how far we still have to go, one need only examine the latest state-of-the-art AI. OpenAI’s GPT-5, released in 2025, was hyped as the next big leap – yet it demonstrated that we remain firmly below the threshold of AGI. GPT-5 did bring improvements (such as better reasoning integration and user experience), but observers quickly noted it is “still far short of AGI” (CDO Times, 2025). In fact, Altman conceded that while GPT-5 is “generally intelligent” in some respects, “we’re still missing something quite important” by the usual definition of AGI (Windows Central, 2025).
A key missing piece is continuous learning. Unlike a human (or a hypothetical AGI), GPT-5 cannot autonomously learn and update itself from new experiences in real time. Altman highlighted that GPT-5 “is not a model that continuously learns as it’s deployed… which… feels like AGI” would. In other words, GPT-5 is a remarkably advanced static model – it performs tasks it was trained on with superhuman skill, but it doesn’t grow or adapt on its own after training. This limitation is shared by all current large AI models. They ingest colossal datasets during training, but once training is complete their knowledge and behavior are frozen (until the next update). Human intelligence, by contrast, is fluid, learning from each new piece of data or feedback. Until AI systems gain a form of on-the-fly learning or true memory of new events, calling them “generally intelligent” is hard to justify.
GPT-5’s debut also revealed practical shortcomings that underline its non-AGI nature. Early users reported glitches, errors, and disappointments with some of GPT-5’s responses. OpenAI had to tweak and patch the model post-release. These are normal growing pains for a complex AI – but an AGI, one might expect, would be more self-correcting and robust by design. In short, GPT-5 is an impressive tool, not a digital brain. As one analysis put it, “Altman called GPT-5 a significant step along the path to AGI… but if so, it’s a very small step” (CDO Times, 2025). The gap between even our best AI and human-like general intelligence remains substantial.
Why Current AI Approaches Fall Short of True AGI
Why haven’t today’s breakthroughs produced an AGI yet? A growing consensus is that simply scaling up existing approaches (like the Transformer neural network architecture behind GPT-series models) may not be enough to reach general intelligence. Transformers, which have powered the revolution in large language models, are phenomenal pattern recognition engines – but they have inherent flaws and limitations that make them unlikely to magically turn into human-level minds (Frithjof Herb, 2024).
Key limitations include:
- Diminishing Returns to Scaling: Early successes of large language models led many to assume that bigger models and more data would lead straight to AGI. In reality, scaling up has shown diminishing improvements on many tasks. Researchers observe that performance gains are slowing even with exponentially larger models, suggesting we may hit plateaus before achieving general intelligence (Frithjof Herb, 2024). The relationship between model size and capability might follow a sigmoid (“S-curve”) rather than an unlimited exponential. This means more of the same yields ever-smaller gains, long before human-like intelligence emerges.
- Lack of True Understanding or World Models: Present AI models primarily learn statistical patterns in text or data, rather than forming a deep understanding of the physical world. They have no direct grounding in reality – no sensorimotor experience – so they often exhibit a “bag of tricks” approach to mimic understanding (The Gradient, 2024). As one analysis noted, today’s models likely learn heuristics to predict tokens rather than constructing genuine world models of the kind humans have (The Gradient, 2024). This leads to obvious failures: an AI can talk about doing the dishes or fixing a car, but it has never seen water or touched a tool. Such models may sound knowledgeable but can stumble on basic physical reasoning or consistent common sense because they lack grounded experience. General intelligence will require engaging with the real world (or a rich simulation of it) to develop robust understanding, not just reading internet text (The Gradient, 2024).
- No Continuous Learning or Adaptability: As mentioned with GPT-5, current AI models do not learn autonomously once training is finished. They cannot update their knowledge in real time or improve themselves in an open-ended fashion. Each model is a fixed snapshot of intelligence, whereas human intelligence (and any general intelligence) is an evolving process. This means today’s AI cannot handle novel situations beyond its training distribution in a human-like way – it has no mechanism to accumulate new general knowledge on its own. Solving continuous learning (without catastrophic forgetting) is an unsolved research problem and a necessary milestone on the road to AGI (Windows Central, 2025).
- Architectural Constraints: The Transformer architecture and similar deep learning models have known technical constraints: for instance, Transformers rely on fixed-size context windows, and the cost of self-attention grows quadratically with input length (illustrated in the short sketch after this list). They also struggle with long-term memory and reasoning that involves many sequential steps. Some of these issues are being mitigated (e.g. by adding retrieval systems or larger context lengths), but fundamentally, Transformers were not designed with human-like cognition in mind (Frithjof Herb, 2024). As one researcher argued, “expecting transformers to achieve human-style AGI is akin to expecting a convolutional neural network to become an entire vision system on its own” (Frithjof Herb, 2024). These models excel in narrow domains of pattern matching, but abstract reasoning, common sense, and adaptable learning may require different architectures or additional components.
- The Risk of Shallow Solutions: Because current AIs optimize for benchmarks and next-word prediction, they often find shortcut solutions that aren’t truly general. For example, a language model might appear to reason logically by pattern-matching training examples, yet not actually understand logic. This brittleness is evident when such models face problems slightly outside their training distribution – they can fail in unpredictable ways. General intelligence presumably requires more robust problem-solving that isn’t dependent on seeing millions of similar examples. In short, today’s AI imitates intelligence in ways that sometimes fool us, but under the hood it might lack the cohesive, generalizable understanding that an AGI would need (The Gradient, 2024).
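To make the quadratic-cost point above concrete, here is a minimal sketch in plain NumPy (toy dimensions, not a real model) of the self-attention score matrix at the heart of every Transformer layer: the matrix holds one entry per pair of tokens, so doubling the context length roughly quadruples the attention compute and memory.

```python
import numpy as np

def attention_score_entries(seq_len: int, d_model: int = 64) -> int:
    """Number of entries in one attention head's score matrix for a given sequence length."""
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((seq_len, d_model))  # queries
    K = rng.standard_normal((seq_len, d_model))  # keys
    scores = Q @ K.T                             # shape (seq_len, seq_len)
    return scores.size                           # grows as seq_len ** 2

for n in (512, 1024, 2048):
    print(n, attention_score_entries(n))  # 262144, 1048576, 4194304
```

Sparse-attention variants and retrieval soften this cost in practice, but they are engineering workarounds rather than a change in kind.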
These limitations suggest that a purely scale-driven, Transformers-only approach is unlikely to yield human-level general intelligence. Indeed, some experts warn that misplacing faith in current methods could lead to another AI “winter” if progress stalls (Frithjof Herb, 2024).
Beyond Today’s Models: New Paths Toward AGI
If standard deep learning alone won’t get us to AGI, what will? Several research directions are being pursued to inch closer to human-like intelligence:
Multimodal Learning: One clear trend is expanding AI beyond just text to integrate multiple modalities – vision, speech, audio, perhaps even robotics (Nature, 2024). Human intelligence is inherently multimodal (we learn through seeing, hearing, touching), so many believe an AGI must likewise process diverse inputs. Recent large models like GPT-4 already combine text and images; future models may incorporate video, real-world sensor data, and more. In fact, some researchers argue that pre-training a massive multimodal model is a promising route to AGI. By learning from images, language, and other data together, an AI can develop more general and flexible representations of concepts. For example, a multimodal foundation model might connect the word “cat” with images of cats, the sound of a meow, and so forth – achieving a more human-like understanding than text alone. Google DeepMind’s Gemini models are built to be multimodal from the ground up (combining language and vision, with reinforcement learning in the training mix), explicitly aiming at more general problem-solving ability. However, some caution that simply wiring modalities together isn’t a magic bullet – the model also needs a way to ground those modalities in an interactive world, not just treat them as parallel data streams (The Gradient, 2024). Nonetheless, expanding the sensory scope of AI is seen as a necessary step toward broader intelligence.
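As a purely illustrative sketch (hand-made vectors and a toy similarity function, not any real model's API), the core idea behind many multimodal systems is to map text and images into one shared embedding space and train the encoders so that matching pairs land close together:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the outputs of a text encoder and an image encoder
# projecting into the same shared space; real systems learn these jointly
# so that "cat" text and cat photos end up as neighbors.
text_embedding_cat  = np.array([0.9, 0.1, 0.3])
image_embedding_cat = np.array([0.8, 0.2, 0.4])
image_embedding_car = np.array([-0.7, 0.6, 0.1])

print(cosine(text_embedding_cat, image_embedding_cat))  # high similarity (~0.98)
print(cosine(text_embedding_cat, image_embedding_car))  # low similarity (~-0.61)
```

Contrastive training objectives, popularized by CLIP-style models, push matching text-image pairs together and mismatched pairs apart in exactly this kind of space.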
World Models and Embodiment: A growing school of thought is that embodied agents – AI systems that interact with environments – are crucial for achieving AGI (The Guardian, 2024). The idea is to give AI a “world model,” an internal simulation of reality that it can use for planning and understanding consequences. Google’s DeepMind, for instance, has emphasized world models as “a key step to achieving AGI.” They recently unveiled Genie 3, a model that lets AI agents learn by experimenting in realistic simulated environments (like virtual warehouses or ski slopes). By practicing in these simulators, AI can develop general skills like navigation, tool use, and adapting to changes – abilities beyond what static data can teach. World models enable cause-and-effect learning: an agent can predict what will happen if it takes a certain action in the world. This is essential for tasks like robotics and is arguably central to human cognition as well (we’re constantly modeling the world in our minds). University experts agree that to have flexible decision-making, robots (and AI generally) “need to anticipate the consequences of actions” by using world models. Even if the AI is purely virtual, having an environment to act in and learn from experience (via reinforcement learning or simulations) might be the only way to achieve the “common sense” understanding that humans gain from living in the physical world. In short, the path to AGI may look less like training a single giant brain on text, and more like training an embodied agent that learns by doing in a rich environment.
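The planning loop at the heart of the world-model idea can be sketched in a few lines; the one-dimensional "dynamics model" and greedy planner below are deliberately toy stand-ins, not a description of Genie 3 or any DeepMind system:

```python
# Toy world model: the agent's internal guess of what each action does.
# State is a single number; the goal is to reach the target value.
def predicted_next_state(state: float, action: float) -> float:
    return state + action  # the agent's belief about the world's dynamics

def plan(state: float, target: float, actions=(-1.0, 0.0, 1.0)) -> float:
    """Pick the action whose *predicted* outcome lands closest to the target."""
    return min(actions, key=lambda a: abs(predicted_next_state(state, a) - target))

state, target = 0.0, 3.0
for step in range(4):
    action = plan(state, target)           # imagine consequences first, then act
    state = predicted_next_state(state, action)
    print(f"step {step}: action {action:+.0f} -> state {state:.0f}")
# Reaches the target in three steps, then holds with action +0.
```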
Neuroscience-Inspired Hardware (Neuromorphic Computing): Another frontier is at the intersection of AI algorithms and the machines they run on (Windows Central, 2025). Today’s AI mostly runs on traditional silicon chips (CPUs/GPUs) that are very fast but fundamentally different from biological brains. Some experts, including Google’s CEO, suspect that new hardware paradigms will be required for AGI. Enter neuromorphic computing – processors designed to mimic the brain’s neural architecture. These chips implement networks of “spiking” neurons in hardware, allowing them to process information in a brain-like, event-driven way with potentially huge gains in efficiency and parallelism. Neuromorphic hardware, such as Intel’s Loihi or IBM’s TrueNorth chips, aims to enable real-time learning and massive neural simulation at a fraction of the energy cost of today’s processors (IBM, 2024). IBM researchers note that as AI scales, neuromorphic tech could “act as a growth accelerator for AI” and even serve as “one of the building blocks of artificial superintelligence.” By physically re-creating how neurons communicate (spikes, synapses, plasticity), these chips might allow AI systems that learn continuously and respond dynamically, much like organic brains. While still an emerging field, neuromorphic computing is progressing quickly and could be pivotal in enabling the leap from narrow AI to brain-like AI. In parallel, other hardware advances – from quantum computing to specialized AI accelerators – might also help break current limits. The bottom line is that AGI may demand not just smarter algorithms, but more brain-like machines to run those algorithms.
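To give a feel for how spiking hardware differs from the dense matrix math of today's models, here is a tiny software simulation of a single leaky integrate-and-fire neuron, the kind of event-driven unit neuromorphic chips implement natively (parameters are illustrative, not any vendor's actual programming model):

```python
def leaky_integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Simulate one spiking neuron: integrate input, leak over time, fire on threshold."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # integrate incoming current, with leak
        if potential >= threshold:
            spikes.append(1)                    # emit a spike (an "event")
            potential = 0.0                     # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input only occasionally accumulates enough charge to fire.
print(leaky_integrate_and_fire([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because the neuron only fires when its accumulated potential crosses a threshold, computation happens sparsely, on events, which is where much of the claimed energy efficiency comes from.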
Hybrid Architectures and Neurosymbolic AI: Some researchers advocate merging the strengths of symbolic reasoning with neural networks to reach higher-level cognition (Medium, 2024). Human intelligence has elements of both intuitive pattern recognition (“fast” thinking) and explicit logical reasoning or planning (“slow” thinking). Today’s deep learning excels at the former but struggles with the latter. Efforts are underway to create hybrid AI systems that, for example, use neural networks for perception and learning, but also incorporate symbolic modules for things like logic, mathematics, or knowledge representation. This “neurosymbolic” approach is championed by scientists like Gary Marcus, who argue that pure deep learning lacks certain reasoning capabilities and that true AGI will require built-in reasoning frameworks alongside learning. We already see hints of this: for instance, some language models can call external tools (like calculators or databases) or have an internal scratchpad for chain-of-thought reasoning. A full AGI might orchestrate a mix of subsystems – some neural, some rule-based – to achieve both flexibility and reliability in thinking. Such architectures could overcome the “stochastic parrot” problem (the tendency of LLMs to babble plausible nonsense) by grounding responses in verifiable logic or factual databases when needed. While there’s debate on how much symbolic AI is needed, the broader point is that the road to AGI may not be a single monolithic model, but a constellation of components working in concert.
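The tool-calling pattern mentioned above can be sketched very simply: a stand-in "router" decides whether a question needs exact computation and, if so, hands it to a symbolic component (here a tiny expression evaluator) instead of letting the neural model guess. The routing heuristic and parsing below are invented purely for illustration.

```python
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expression: str) -> float:
    """Symbolic component: evaluate arithmetic exactly instead of predicting tokens."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def answer(question: str) -> str:
    # Toy stand-in for a learned router; real systems let the model itself decide to call a tool.
    if any(ch.isdigit() for ch in question):
        expr = question.rstrip("?").split("is")[-1].strip()
        return f"{expr} = {round(calculator(expr), 10)}"
    return "(defer to the language model)"

print(answer("What is 8.9 - 8.11?"))  # 8.9 - 8.11 = 0.79
```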
Elevated Collaborative Intelligence: Humans + AI in the Meantime
While researchers forge ahead toward AGI, an equally important question is how we manage the transition. Even before true AGI arrives, AI advancements are disrupting industries and challenging our institutions. Here and now, the most effective systems often combine the strengths of humans and AI, rather than replacing one with the other. This is the principle of Elevated Collaborative Intelligence (ECI) – a framework that emphasizes partnering human intelligence (HI) with artificial intelligence (AI) to achieve superior outcomes (CDO Times, 2025). In an ECI model, AI provides speed, scale, and analytical power, while humans provide contextual understanding, oversight, and ethical judgment (CDO Times, 2025). The result is a hybrid intelligence that can outperform either humans or AI alone.
ECI is highly relevant today, in an era of rapidly evolving AI, and it will become even more critical as we approach the threshold of AGI. Why? Because collaborative intelligence addresses two crucial needs: optimizing performance and ensuring control. On the performance side, human-AI teams have proven remarkably effective. In fields from healthcare to finance, we see that an AI-augmented professional can be more accurate and productive than either the AI or the human working in isolation. For example, doctors use AI diagnostic tools to catch patterns they might miss, then apply their expertise to decide the best treatment – a synergy that improves patient outcomes. One Wharton analysis describes a surgical scenario where an AI assistant analyzes millions of cases in seconds to guide a brain surgeon, warning of complications and suggesting precision maneuvers, while the surgeon’s skill and intuition handle the unforeseen; the collaboration “achieves what neither could alone” (Knowledge@Wharton, 2025). This kind of hybrid intelligence yields “more sustainable, creative, and trustworthy results” by combining the best of both worlds (Knowledge@Wharton, 2025). In short, ECI isn’t just a management buzzword – it’s being validated in practice as a way to boost results without waiting for AGI.
Equally important, ECI is about maintaining human oversight and ethical guardrails as AI systems become more powerful. Even current AI can be error-prone or biased, so putting a human in the loop provides a safety check. For instance, AI might flag suspicious financial transactions at scale, but human analysts review the flags to make final decisions, injecting judgment and context where needed. As AI algorithms permeate high-stakes areas (hiring, lending, criminal justice, etc.), regulations such as the EU AI Act mandate human oversight for certain risk levels, and frameworks like the NIST AI Risk Management Framework call for it as well (CDO Times, 2025). ECI operationalizes this by designing workflows where AI does the heavy lifting but humans set objectives and intervene on ambiguous cases. Essentially, collaborative intelligence keeps humans in control of the AI tools, which is vital for accountability and societal trust. This will only grow more important if we get near-AGI systems whose actions and decisions carry even greater impact.
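A minimal sketch of that division of labor (the thresholds, fields, and scores below are invented for illustration, not a production fraud system): the model scores every transaction, clear-cut cases are handled automatically, and the ambiguous middle band is queued for a human analyst who makes the final call.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    id: str
    risk_score: float  # produced by a fraud model; the values below are made up

AUTO_CLEAR, AUTO_BLOCK = 0.20, 0.95  # illustrative thresholds, tuned to risk appetite

def route(txn: Transaction) -> str:
    if txn.risk_score < AUTO_CLEAR:
        return "auto-approve"
    if txn.risk_score > AUTO_BLOCK:
        return "auto-block"
    return "human review"  # the ambiguous middle goes to an analyst queue

for txn in [Transaction("t1", 0.05), Transaction("t2", 0.55), Transaction("t3", 0.99)]:
    print(txn.id, route(txn))
# t1 auto-approve, t2 human review, t3 auto-block
```

Tightening or loosening those thresholds is exactly the kind of objective-setting decision ECI keeps in human hands.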
Finally, embracing ECI now is a way to prepare for the disruptions that a future AGI (or even just more advanced AI) could bring. Many experts warn of significant job displacement and societal upheaval if AI suddenly automates a large swath of tasks (Windows Central, 2025). A strategy of collaborative intelligence can mitigate some of this by focusing on augmenting workers with AI rather than pure automation. Organizations can aim to “elevate” their workforce with AI – using AI to handle routine tasks and provide insights, while upskilling employees to focus on creativity, complex problem-solving, and interpersonal roles that AI can’t do. This way, companies become more productive without simply replacing people. It’s a vision of AI as a “co-pilot” in every job. In the best case, AGI itself might function as an incredibly powerful co-worker rather than a competitor – but realizing that scenario will depend on the norms and systems we build now. If businesses and governments prioritize collaborative models, they can smooth the transition and avoid the most dystopian outcomes (mass unemployment or loss of human agency). Indeed, those who implement ECI today will cultivate a workforce adept at leveraging AI, which is likely to be a key advantage in the future economy.
Preparing for an AGI Future – With Humans at the Center
The quest for AGI is often portrayed as a sprint to build an all-powerful machine mind. But an equally important journey is preparing our institutions and society for that possibility. No one knows exactly when or how AGI will emerge, but we do know that AI in general is growing more influential by the day. By adopting frameworks like Elevated Collaborative Intelligence now, we future-proof our organizations to better handle whatever comes next. ECI provides a blueprint for AI governance, ensuring that as AI systems gain capabilities, they remain aligned to human goals and values. For example, a company practicing ECI will have AI ethics committees, human oversight on AI decisions, and continuous training for employees to work alongside AI – all of which create resilience against the shocks of more advanced AI (CDO Times, 2025).
In essence, Elevated Collaborative Intelligence turns AI from a threat into a partner. It’s a reminder that no matter how “general” or powerful our AI becomes, human wisdom and values must guide it. Even a true AGI, if one is developed, should ideally function in a collaborative capacity, not as an untethered overlord or a replacement for humanity. Steering toward that outcome starts with our mindset today: viewing AI as augmentative, not adversarial. We should strive for hybrid systems where human creativity, empathy, and ethical judgment work hand-in-hand with machine speed, precision, and knowledge. Such systems can achieve feats neither could alone – and importantly, they keep us in the loop as AI capabilities approach human levels.
The CDO TIMES Bottom Line
The race for AGI is accelerating, but it’s not a foregone conclusion that simply scaling up current AI will win it. Achieving human-level intelligence likely requires new paradigms: richer multimodal understanding, world models grounded in reality, perhaps fundamentally new hardware and algorithmic hybrids. These advances may arrive in years or decades – the timeline is uncertain. What’s clear is that the journey matters as much as the destination. By focusing on collaborative intelligence and human-centric AI development now, we equip ourselves to harness increasingly powerful AI for good, and to navigate the disruptions it will bring. Whether AGI comes in 5 years or 50, a foundation of Elevated Collaborative Intelligence will help ensure that when the moment arrives, humanity is ready to elevate alongside its creations, not be left behind by them.
Sources:
CDO Times (2025) — The Race to AGI and the Rise of Collaborative Intelligence. https://cdotimes.com/the-race-to-agi-and-the-rise-of-collaborative-intelligence
Windows Central (2025) — Sam Altman Admits GPT-5 Is Not AGI and Explains Why. https://www.windowscentral.com/software-apps/sam-altman-admits-gpt-5-is-not-agi-and-explains-why
Frithjof Herb (2024) — Why Transformers Won’t Magically Become AGI. https://frithjofherb.com/why-transformers-wont-magically-become-agi
The Gradient (2024) — The Gap Between Large Language Models and Real Understanding. https://thegradient.pub/the-gap-between-large-language-models-and-real-understanding
Nature (2024) — Multimodal AI: The Next Step Toward AGI. https://www.nature.com/articles/d41586-024-01111-x
The Guardian (2024) — DeepMind’s Genie 3 and the Push Toward AGI with World Models. https://www.theguardian.com/technology/2024/jul/22/deepmind-genie-3-world-models-agi
IBM (2024) — Neuromorphic Computing: A Growth Accelerator for AI. https://research.ibm.com/blog/neuromorphic-computing
Medium (2024) — Hybrid AI Architectures for Better Reasoning. https://medium.com/@garymarcus/hybrid-ai-architectures-for-better-reasoning-123abc456def
Knowledge@Wharton (2025) — Collaborative Intelligence: Humans and AI Are Smarter Together. https://knowledge.wharton.upenn.edu/article/collaborative-intelligence-humans-and-ai-are-smarter-together

