AI Doom Delayed as Experts Push Back the Timeline
Right now, there’s a noticeable shift happening in how some of the world’s most vocal AI experts are talking about the future, especially when it comes to the idea that artificial intelligence could one day wipe out humanity. One of the loudest voices in that debate, Daniel Kokotajlo, has pushed back the timeline in his own predictions. And that, by itself, is a big deal.
Kokotajlo, a former OpenAI employee, made waves last year with a scenario called AI 2027. In that vision, AI systems were expected to reach fully autonomous coding within just a few years. Once that happened, the argument went, AI would be able to improve itself faster than humans could keep up, triggering what’s often described as an “intelligence explosion.” In one extreme version of that scenario, humanity didn’t survive the 2030s.
That idea sparked huge reactions. Some policymakers appeared to take it seriously, while others dismissed it outright as science fiction dressed up as research. Now, Kokotajlo himself is saying things are moving more slowly than he originally thought. The key bottleneck, according to his updated view, is autonomous coding. AI is still struggling with the messy, real-world complexity required to truly replace human software engineers and researchers.
Instead of 2027, that milestone is now being pushed into the early 2030s. And with that shift, the arrival of so-called “superintelligence” has also been delayed, with 2034 becoming the new rough horizon. Notably, the revised forecast no longer includes a specific guess about when, or even if, AI might destroy humanity.
Other experts are echoing this more cautious tone. They point out that AI performance isn’t a smooth upward curve: progress has been jagged, impressive in demos but far less reliable in real-world settings. The inertia of society, institutions, and physical infrastructure is also being emphasized as a major brake on rapid, total transformation.
There’s even growing skepticism about whether the term “AGI” still means much at all. When AI systems were narrow, the concept made sense. Now that models can do a bit of everything—but not consistently well—the line between narrow AI and general intelligence has become blurry.
That doesn’t mean the risks are being dismissed. Major AI companies are still openly aiming to automate AI research itself, even while admitting failure is possible. What’s changing is the certainty. The future is starting to look less like an unstoppable sprint toward doom and more like a long, uneven climb filled with technical limits, political friction, and human complexity.
For now, the end of the world has been postponed—at least on paper. And in the fast-moving AI debate, even a few extra years can completely change how society chooses to respond.