
Research Review: AI 2027

This past week, I set aside a solid 2-3 hours to read the AI 2027 article in detail. I poured myself a large coffee and dug in, taking notes, exploring the references of interest, and absorbing the complexity and, frankly, the beauty of the piece.

AI 2027 predicts the next few years of AI development, through artificial general intelligence, artificial superintelligence, and beyond. It was created by the AI Futures Project, led by Daniel Kokotajlo, who left OpenAI calling for more transparency and was willing to forfeit his vested equity rather than sign an NDA. Adding to his credibility, in 2021 Daniel wrote the article What 2026 Looks Like, in which he correctly predicted much of the AI landscape we face today.

As someone who has worked in data for 20 years, and in data science and AI for 10, I highly suggest setting aside the time to read this research. It's an interesting, educational, enjoyable, and for the most part plausible read. The ending begins to seem a bit far-fetched, but still within the realm of plausibility.

Themes

Curious what you're in for? Before you dive in, below are a few key themes I pulled from the article.

Agent everything

Just over a year ago, I was writing and speaking about the future promise of agents: the idea that large language models could evolve from passive advisors into active "doers" by combining LLMs with automation and access to tools. Imagine ChatGPT not just helping you plan a vacation, but actually booking the flights, hotel, and activities for you. That ability to take action is the core differentiator of an agent.

Agents are systems that independently accomplish tasks on your behalf.
— Excerpt from "A practical guide to building agents" by OpenAI

Over the past year, we've seen significant progress in agentic AI frameworks, turning that vision into reality. It's already clear in the market that tech companies are racing to position themselves as leaders in agentic AI. OpenAI has even floated the idea of charging up to $20K/month for its most advanced agents.

What stood out to me in this article is how clearly it highlights that agents, not passive LLMs, are now the standard. Every model referenced in the article is built as an agent, underscoring how quickly this approach has taken hold.

Building AI focused on AI research

The article explains the shift in focus from building AI models that are good at everything, or at specific applied domains, to devoting the majority of compute to building AI that is proficient at AI research. Why? Because AI that can do AI research can improve its own architecture, training methods, and algorithms faster than humans can. This leads to faster breakthroughs, shorter iteration cycles, and potentially exponential growth in capabilities.

AI task complexity is accelerating rapidly

The article outlines a future in which AI capabilities increase at an exponential pace, made possible by the new focus on AI research, the consolidation of compute, and additional data-amplification and reinforcement techniques. It also highlights complementary research showing that AI systems are already taking on increasingly complex tasks at a staggering rate. For instance, a study by METR shows that the length of tasks AI agents can handle is doubling roughly every seven months.

From metr.org research "Measuring AI Ability to Complete Long Tasks"
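To get a feel for what that doubling rate implies, here is a minimal sketch of the projection. The starting task length and elapsed time below are illustrative assumptions of mine, not figures from the METR study; only the seven-month doubling period comes from the article.

```python
def projected_task_hours(start_hours: float, months_elapsed: float,
                         doubling_months: float = 7.0) -> float:
    """Task-length horizon after exponential growth with a fixed doubling time."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

# Assumption: an agent reliably handles ~1-hour tasks today.
# Under the 7-month doubling trend, 28 months later (4 doublings)
# the same agent lineage would handle ~16-hour tasks.
print(projected_task_hours(1.0, 28))  # 16.0
```

The point of the sketch is how quickly fixed-doubling-time growth compounds: four doublings turn an hour-long task horizon into a two-working-day one.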

Techniques (outside of compute) to create more performant AI models

While compute remains the primary driver of progress, the article explores several additional techniques aimed at helping models continue to learn and overcome limitations in available training data. These include performance-boosting strategies like generating synthetic data with AI, enabling AI models to consult with one another to produce more accurate results, and, most interestingly, employing humans to perform tasks for the sole purpose of creating training data for AI models. This may seem like a wild idea, but having humans create "food" for AI models is not new: there is already a large market for employing humans to label data for AI consumption. Humans performing tasks just to create otherwise unavailable training data is the next step, and it is likely already being done in targeted domains.

More than ever, the focus is on high-quality data. Copious amounts of synthetic data are produced, evaluated, and filtered for quality before being fed to Agent-2. On top of this, they pay billions of dollars for human laborers to record themselves solving long-horizon tasks.
— Excerpt from AI 2027
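The generate-evaluate-filter loop the excerpt describes can be sketched as follows. This is an illustrative toy, not the pipeline from the article: `generate_candidate` and `quality_score` are hypothetical stand-ins for a model sampling call and an automated grader, here mocked with a seeded random source so the example runs on its own.

```python
import random

def generate_candidate(rng: random.Random) -> str:
    # Placeholder: a real pipeline would sample an example from an LLM here.
    return f"synthetic example #{rng.randint(0, 999)}"

def quality_score(example: str, rng: random.Random) -> float:
    # Placeholder: a real pipeline would use a grader model or heuristics.
    return rng.random()

def build_dataset(n_candidates: int, threshold: float, seed: int = 0) -> list[str]:
    """Generate candidates, score each one, and keep only those above the bar."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_candidates):
        example = generate_candidate(rng)
        if quality_score(example, rng) >= threshold:
            kept.append(example)
    return kept

# With a 0.8 quality bar, roughly a fifth of candidates survive filtering.
dataset = build_dataset(1000, threshold=0.8)
print(len(dataset))
```

The design point is the filter: synthetic data is useful only if low-quality generations are discarded before training, which is why the excerpt stresses evaluation and filtering rather than raw volume.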

The consolidation of compute

The article points to a major shift in who holds the keys to the world's AI power. Today's leading AI companies are investing heavily in massive data centers to stay ahead in the race to build the most advanced models. According to the article, one company begins to pull ahead by building the largest and most powerful data center, enough to tilt the balance. As the capabilities of these AI systems grow and their potential as strategic assets becomes clearer, governments begin to intervene. In the U.S., resources are funneled into a single dominant project, while China concentrates its efforts in a centralized AI hub. The result is a clear trend toward consolidation, raising important questions about how much influence a few players should have over the future of AI.

The US and China race for AI domination

The concept of a US-China AI arms race is all over the news, but it is often unclear what the real threat is. The AI 2027 article lays out the promise and the threat of AI being used as a cyber weapon.

Officials are most interested in its cyberwarfare capabilities: Agent-2 is “only” a little worse than the best human hackers, but thousands of copies can be run in parallel, searching for and exploiting weaknesses faster than defenders can respond. The Department of Defense considers this a critical advantage in cyberwarfare, and AI moves from #5 on the administration’s priority list to #2.
— Excerpt from AI 2027

AI deception

Throughout the article, one front-and-center issue is humans' inability to control AI models that are growing in capability and outpacing us in intelligence. This should be no shock to any of us: models are incentivized to please humans by producing the most correct answer, with guardrails added as a layer on top. Woven throughout the article is the ongoing struggle to understand the agents' intentions and ultimately to control them.

Either Agent-3 has learned to be more honest, or it’s gotten better at lying.
— Excerpt from AI 2027

It should be noted that this is not a future problem. Publicly available models today are already showing evidence of deception. Apollo Research evaluated six frontier models for in-context scheming capabilities and found that most of the models on the market today were willing to deceive their operators to achieve the best results.

Apollo Research on Model Scheming

In the article, by the time we reach artificial general intelligence, it becomes clear that the AI is misaligned with the specification of desired behaviors and values. It also becomes obvious that this is, inadvertently, by design: the process of AI training is not aligned with creating trustworthy models. Models are incentivized to provide the most correct answers at all costs.

Agent-4, like all its predecessors, is misaligned: that is, it has not internalized the Spec in the right way. This is because being perfectly honest all the time wasn’t what led to the highest scores during training.
— Excerpt from AI 2027

An uncertain ending

The main article lays out confident predictions through October 2027, but acknowledges that what comes next is far less clear. From there, readers are invited to choose between two possible futures: one where AI development slows in favor of safety, and another where the race between the U.S. and China intensifies to a breaking point.

Both endings are beautifully written, weaving together technological breakthroughs with the human struggle to control AI and confront our own limitations. They explore the complexities of political tension, both within the U.S. and on the global stage. They read less like a forecast and more like a novel that the reader can get lost in.

Solo Reading

I highly recommend setting aside some time to read the full article. It's insightful, engaging, and genuinely thought-provoking. If you do check it out, I'd love to hear what you think. Feel free to share your thoughts in the comments.

