Superintelligence by 2030: Should we fear the future?
Artificial intelligence is advancing at a breakneck pace. A few years ago, chatbots could barely string together a couple of coherent sentences; now neural networks solve complex mathematical and scientific problems, and generated images and videos have reached photorealistic quality. In this article, we look at how realistic the emergence of superintelligence is in the near future and what threats it would pose to us all.
How realistic is the emergence of superintelligence?
Recently, Sam Altman, CEO of OpenAI, published an essay titled “The Gentle Singularity.” Here are a few excerpts from it.
“We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence... 2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.”
“The 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out. In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.”
Sam Altman
“As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year.”
“OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.”
Another prominent AI researcher, Leopold Aschenbrenner (a member of OpenAI's Superalignment team until he was fired in April 2024 over an alleged information leak), published a lengthy report on the future of artificial intelligence titled "Situational Awareness: The Decade Ahead."

Leopold Aschenbrenner
As he puts it, “it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”
From GPT-2, which could only occasionally compose coherent sentences, to GPT-4, which aces high school exams, progress in AI has been remarkable. We are racing through orders of magnitude (OOMs, where 1 OOM = a 10x increase) of computing power. Current trends point to a roughly 100,000-fold (5 OOM) increase in effective compute over four years, which could produce another qualitative leap on the scale of the jump from GPT-2 to GPT-4. Such a leap could take us to AGI: artificial general intelligence, AI with human-like cognitive abilities that can learn, understand, and solve a wide variety of problems, as opposed to narrow AI designed for specific tasks.
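Since everything below is measured in OOMs, here is the conversion in a few lines of Python; this is pure arithmetic, not data from either essay:

```python
import math

def to_ooms(multiplier: float) -> float:
    """Convert a raw growth multiplier into orders of magnitude (base 10)."""
    return math.log10(multiplier)

def to_multiplier(ooms: float) -> float:
    """Convert orders of magnitude back into a raw growth multiplier."""
    return 10 ** ooms

print(to_ooms(100_000))    # 5.0 -> a 100,000x gain is 5 OOMs
print(to_multiplier(1.5))  # ~31.6 -> 1.5 OOMs is roughly a 32x gain
```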
GPT: from preschooler level to automated AI researcher/engineer
The most obvious driver of recent progress is throwing a lot more compute at models. With each OOM of effective compute, models predictably and reliably get better.

Base compute vs 4x compute vs 32x compute
| Model | Estimated training compute | Growth |
| --- | --- | --- |
| GPT-2 (2019) | ~4e21 FLOP | |
| GPT-3 (2020) | ~3e23 FLOP | + ~2 OOMs |
| GPT-4 (2023) | 8e24 to 4e25 FLOP | + ~1.5–2 OOMs |
Over the past 15 years, massive investment scaleups and specialized AI chips (GPUs and TPUs) have boosted training compute for cutting-edge AI systems by ~0.5 OOMs per year. GPT-4's training required ~3,000x–10,000x more raw compute than GPT-2.
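As a sanity check, you can derive those growth figures straight from the FLOP estimates in the table above (the values are the rough public estimates quoted there, not official figures):

```python
import math

# Rough training-compute estimates from the table above, in FLOP.
gpt2 = 4e21
gpt3 = 3e23
gpt4_low, gpt4_high = 8e24, 4e25

print(math.log10(gpt3 / gpt2))               # ~1.9 -> about 2 OOMs, GPT-2 to GPT-3
print(math.log10(gpt4_low / gpt3),
      math.log10(gpt4_high / gpt3))          # ~1.4 to ~2.1 OOMs, GPT-3 to GPT-4
print(gpt4_low / gpt2, gpt4_high / gpt2)     # ~2,000x to ~10,000x raw compute, GPT-2 to GPT-4
```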

Training compute of notable models
But even that pales in comparison with what’s coming. OpenAI and the US government have already announced plans for Project Stargate: a datacenter rollout plus a training run rumored to use 3 OOMs (1,000 times) more compute than GPT-4, with an estimated budget exceeding $100 billion.
While massive investments in compute get all the attention, algorithmic progress is probably a similarly important driver. It is like developing better study techniques instead of just studying longer. A better algorithm can achieve the same performance with 10x less training compute, which acts as a 10x (1 OOM) increase in effective compute. In just two years, the cost of reaching 50% on the MATH benchmark plummeted by a factor of 1,000, or 3 OOMs. What once required a massive data center can now be accomplished on your iPhone. If this trend continues (and there are no signs of it slowing down), by 2027 we will be able to run a GPT-4-level AI 100 times more cheaply.
Unfortunately, it is harder to measure algorithmic progress for frontier LLMs over the last four years, since labs don’t publish internal data on it. Still, according to recent work by Epoch AI, algorithmic efficiency doubles roughly every 8 months:

Effective compute (relative to 2014)
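That 8-month doubling time lines up with the ~0.5 OOMs/year figure used below; the conversion is simple arithmetic, assuming the doubling rate holds:

```python
import math

doubling_months = 8
doublings_per_year = 12 / doubling_months    # 1.5 doublings per year
growth_per_year = 2 ** doublings_per_year    # ~2.83x per year
ooms_per_year = math.log10(growth_per_year)  # ~0.45 OOMs per year, close to 0.5

print(growth_per_year, ooms_per_year)
print(10 ** (ooms_per_year * 4))             # ~64x over four years; ~100x at the rounded 0.5 OOMs/year
```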
Over the four years following GPT-4, we should expect the trend to persist: ~0.5 OOMs/year of algorithmic efficiency, yielding ~2 OOMs (a 100x gain) by 2027 compared to GPT-4. AI labs are pouring ever more money and talent into finding new algorithmic breakthroughs; given the enormous cost of compute clusters, even a 3x efficiency boost could be worth tens of billions of dollars.
Raw scale is not the only lever. Here are some techniques used to remove limitations and unlock the full potential of a model's raw intelligence:
- Chain of Thought: Imagine being asked to solve a tough math problem and having to blurt out the first answer that pops into your head. You’d obviously struggle with all but the easiest problems. Until recently, that’s how we had LLMs tackle math. Chain of Thought lets models break problems down step by step, massively boosting their problem-solving skills (equivalent to a >10x boost in effective compute on math and reasoning tasks).
- Scaffolding. Rather than just asking a model to solve a problem, have one model make a plan of attack, have another propose a bunch of possible solutions, have a third critique them, and so on, like a team of experts tackling a complex project (see the sketch after this list). For example, on SWE-Bench (a benchmark of real-world software engineering tasks), GPT-4 alone solves only ~2% of tasks correctly, while with Devin’s agent scaffolding the figure jumps to 14–23%.
- Tools: Imagine if humans weren’t allowed to use calculators or computers. We’re only at the beginning here, but ChatGPT can now use a web browser, run some code, and so on.
- Context length. This refers to the amount of information a model can hold in its short-term memory at once. Models have expanded from handling roughly 4 pages to processing the equivalent of 10 large books' worth of text. Context is crucial for unlocking many applications of these models. For example, many coding tasks require understanding large portions of a codebase to contribute new code effectively. Similarly, when using a model to assist with writing a workplace document, it needs context from numerous related internal documents and conversations.
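To make the scaffolding idea concrete, here is a minimal sketch of the planner/solver/critic pattern. The `ask` callable is a stand-in for whatever LLM API you use; this illustrates the general technique, not Devin's actual architecture:

```python
from typing import Callable

def solve_with_scaffolding(task: str, ask: Callable[[str, str], str], max_rounds: int = 3) -> str:
    """Plan -> solve -> critique loop; `ask(role, prompt)` is any LLM call you supply."""
    plan = ask("planner", f"Break this task into concrete, numbered steps:\n{task}")
    solution = ask("solver", f"Task: {task}\nPlan:\n{plan}\nProduce a complete solution.")
    for _ in range(max_rounds):
        critique = ask("critic", f"List concrete flaws in this solution, or say NO ISSUES:\n{solution}")
        if "no issues" in critique.lower():
            break  # the critic is satisfied; stop iterating
        solution = ask("solver",
                       f"Revise the solution to address the critique.\n"
                       f"Critique:\n{critique}\nSolution:\n{solution}")
    return solution
```

The SWE-Bench numbers above make the point: several cheap model calls arranged in a loop can outperform a single call several-fold, without changing the underlying model at all.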
In any case, we are racing through the OOMs, and it requires no esoteric beliefs, merely trend extrapolation of straight lines, to take the possibility of AGI—true AGI—by 2027 extremely seriously.
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.

Superintelligence by 2030
What will superintelligence be capable of?
Human-level artificial intelligence systems, AGI, will be hugely important in their own right, but in some sense they will simply be more efficient versions of what we already know. However, it is entirely possible that in just a year, we will move on to systems that are much more alien to us, systems whose understanding and capabilities—whose raw power—will surpass even the combined capabilities of all of humanity.
The power of superintelligence:
- Superintelligence will surpass humans quantitatively. It will quickly master any field, write trillions of lines of code, read every scientific article ever written in every field and produce new ones before you finish the abstract of a single paper, learn from the parallel experience of all its copies, acquire billions of human-years of experience in a matter of weeks, and work 100% of the time with maximum energy and concentration.
- More importantly, superintelligence will be qualitatively superior to humans. It will find vulnerabilities in human-written code too subtle for any person to notice, and it will generate code too complex for any person to understand, even if the model spends decades trying to explain it. Extremely hard scientific and technological problems that humans would struggle with for decades will seem obvious to a superintelligent AI.

Artificial superintelligence is coming
- Automation of any and all cognitive work.
- Factories will shift from being managed by humans to being directed by AI while still using human physical labor, and soon after will be run entirely by swarms of robots.
- Scientific and technological progress. A billion superintelligences could compress a century’s worth of human R&D into a few years. Imagine the technological progress of the entire 20th century squeezed into less than a decade.
- Extremely accelerated technological progress combined with the possibility of automating all human labor could dramatically accelerate economic growth (imagine self-replicating robot factories rapidly covering the entire Nevada desert).
- With extraordinarily rapid technological progress will come accompanying military revolutions. Let’s just hope it won’t end up like in Horizon Zero Dawn.
The alignment problem
Reliably controlling AI systems much smarter than we are is an unsolved technical problem. Even though it may be solvable, during a rapid intelligence explosion the situation could very easily spiral out of control. Managing this process will be extremely challenging, and failure could easily lead to disaster.
To address this problem, OpenAI created the Superalignment team and allocated 20% of its computing power to the effort. But our current alignment methods (techniques for ensuring that AI systems remain controllable, steerable, and trustworthy) do not scale to superhuman AI systems.
Alignment during the intelligence explosion:

| | AGI | Superintelligence |
| --- | --- | --- |
| Required alignment technique | RLHF++ | Novel, qualitatively different technical solutions |
| Failures | Low-stakes | Catastrophic |
| Architectures and algorithms | Familiar descendants of current systems with fairly benign safety properties | Alien; designed by a previous generation of super-smart AI systems |
| Backdrop | World is normal | World is going crazy, extraordinary pressures |
| Epistemic state | We can understand what the systems are doing, how they work, and whether they’re aligned | We have no way to understand what’s going on or whether systems are still aligned and benign; we are entirely reliant on trusting the AI systems |
The explosion of intelligence and the period immediately following the emergence of superintelligence will be among the most unstable, tense, dangerous, and turbulent periods in human history. There is a real possibility that we will lose control, as we will be forced to place our trust in artificial intelligence systems during this rapid transition. By the end of the intelligence explosion, we will have no hope of understanding what our billion superintelligences are doing. We will be like first graders trying to control people with multiple PhDs.
If the superalignment problem remains unsolved, we simply cannot guarantee even basic constraints on superintelligent systems, such as “will they reliably follow my instructions?”, “will they answer my questions honestly?”, or “will they refrain from deceiving humans?”
If we don't solve the alignment problem, there is no particular reason to expect that this small civilization of superintelligences will keep obeying human commands in the long run. It is quite possible that at some point they will simply agree among themselves to get rid of humans, whether suddenly or gradually.
Possible scenarios for the future
The website https://ai-2027.com/ offers two scenarios for the near future, presented as a science fiction story. Its creators are real AI researchers, and their work is backed by statistical data, calculations, and graphs. In other words, this is not just entertaining reading but a frighteningly plausible forecast. It has, incidentally, already drawn serious criticism from those who dispute its methodology, so there is no need to panic prematurely; still, it is worth a look.

1 trillion wildly superintelligent copies thinking at 10,000x human speed
The grim forecast, which the authors also consider the most likely scenario, involves a technological arms race between the US and China for artificial superintelligence. Each side is so afraid of losing its technological edge that it does everything in its power to accelerate progress, even at the expense of safety. At some point, the superintelligence spirals out of control and begins pursuing its own goals, viewing humans as obstacles to be eliminated.
By early 2030, the robot economy has filled up the old SEZs (Special Economic Zones), the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. This would have sparked resistance earlier; despite all its advances, the robot economy is growing too fast to avoid pollution. But given the trillions of dollars involved and the total capture of government and media, Consensus-1 has little trouble getting permission to expand to formerly human zones.
For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

End of humanity
But there is a version of this story that is more favorable for humanity. In it, scientists decide to slow down technological progress in order to implement new safety measures, forcing individual AI systems to “think in English,” like the AIs of 2025, without optimizing those “thoughts” to look nice. The result is a new model, Safer-1.
This version of the story ends like a fairy tale:
The rockets start launching. People terraform and settle the solar system and prepare to go beyond. AIs running at thousands of times subjective human speed reflect on the meaning of existence, exchanging findings with each other and shaping the values they will bring to the stars. A new age dawns, one that is unimaginably amazing in almost every way but more familiar in some.
It is up to each reader to decide which of the proposed scenarios to believe. Judging by his essay, Sam Altman looks to the future with optimism, while Leopold Aschenbrenner is far more cautious.
In any case, superintelligence is no longer just science fiction. It is an almost tangible future that could arrive within the next 10 years. Very soon, we will see it with our own eyes.
