Link to article on NYT website
From inside:
“If you’ve been telling yourself this isn’t coming, I really think you need to question that. It’s not web3. It’s not vaporware. A lot of what we’re talking about is already here, right now.
I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare. We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country is going to get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace.
And while there is so much else going on in the world to cover, I do think there’s a good chance that, when we look back on this era in human history, A.I. will have been the thing that matters.”
What is AGI?
I’m almost giddy with excitement. Every day, word is spreading further about the potential of AGI. I explained over two years ago why LLMs represented the unlock of a sufficient level of intelligence to create new trillion-dollar industries:
1 billion new people, but no more mouths to feed. Yet another ChatGPT post.
One way to think about technology is that it is a mechanism for lowering the cost of intelligence.
Alas, even trillion dollar industries are not the same as AGI.
The reason ChatGPT wasn’t considered AGI then, and why nothing we have is considered AGI today (by most people), has less to do with intelligence than agency. AI systems like GPT-4.5, Claude 3.7, and Grok are already better than humans at many things. But until they get “plugged in” to GUIs and software APIs, and have their memory upgraded to be effectively unlimited, they will continue not feeling like AGI.
Every thought leader in the AI space - including a person named Ben Buchanan (NOT me! the guy Ezra interviewed) who was an AI advisor to the Biden administration - laments that the term artificial general intelligence is unsatisfying (in most cases because it is ill-defined). If more people read 01Core then we wouldn’t have this issue, because my definition is ideal:
Digital AGI: An AI that would be indistinguishable from a human working remotely
Physical AGI: An intelligence that - if embedded into a humanoid form - could do whatever an average human could do.
See how easy that was?
Most people have no idea what’s already here! Much less what is coming…
From Ben (if you see this name for the rest of this article - just remember I’m talking about the other guy!):
Software developers regularly report that they have cut the time it takes to do a multitude of tasks by 50-90%. Deep Research can be used to carry out complex research assignments and put together reports that are hundreds of pages long. Examples of it being used for analytical work abound. Grok (from Elon Musk’s xAI), DeepSeek (from China), and of course Google have all released similar products.
I subscribe to three different LLMs (ChatGPT, Claude, and Grok through Twitter). I have saved myself ballpark $15k of hard costs (money I would have needed to pay to someone else - mostly subject matter experts like Quickbooks gurus or lawyers) from these subscriptions over the past twelve months, and probably more than that in terms of time savings. And don’t forget this post from yesterday:
As Ezra points out in the article - systems today are as bad as they will ever be.
There are three switches that will flip soon - by the end of this decade - and as they turn on, chaos will ensue (but only briefly).
AI will get agency - the ability to take actions and perform any task a human with computer access could do. For example, instead of asking ChatGPT “How do I set up a website” or “How many calories are on this plate of food?” - you will simply instruct it to: “Set up a website, find some products on Alibaba that I can import at a good price, then list them on the website and start selling them. Also, set up some social media accounts and begin marketing campaigns for those products.”
AI will get a body - The “Synths” as I call them - short for Synthetic Humanoid - are coming. I’ve written about this extensively. Until recently there was no justification for pouring a hundred billion dollars into R&D to figure out how to create a functional “skeleton” to put a brain inside of. Because there was no brain. Now that there is, the incentive to own the business of manufacturing the last thing humans will ever need to make is incomprehensibly large. Humanoid robots will end up being a $10T+ industry - the largest in history by far. These things will start mass production this decade.
We’ll get a “zero to 1 billion” moment - The reason most people don’t know about AGI is that to date the impact on most people’s daily routine has been minimal. This will change once Apple and Google release zero-latency LLMs with operating system access. Suddenly - overnight - the entire world - everyone with a smartphone - will have access to something that feels indistinguishable from having a “person in your pocket”. It won’t be long after that before people realize how limited the need for humans will be in the imminent future.
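The “agency” switch above is, at its core, a loop: a model proposes the next action, a harness executes it against real tools (browsers, APIs, payment systems), and the result is fed back in until the goal is met. Here’s a minimal sketch of that loop, assuming a stub policy in place of a real LLM and hypothetical tool names standing in for real commerce APIs:

```python
# Minimal sketch of an agentic loop: the model proposes tool calls,
# the harness executes them, and results are fed back until done.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    log: list = field(default_factory=list)

    def propose_action(self):
        # Stub policy: a real agent would ask an LLM which tool to call
        # next, given the goal and the log of results so far.
        steps = ["create_site", "source_products", "list_products"]
        return steps[len(self.log)] if len(self.log) < len(steps) else "done"

    def run(self, tools):
        while True:
            action = self.propose_action()
            if action == "done":
                return self.log
            result = tools[action]()           # execute the tool call
            self.log.append((action, result))  # feed the result back

# Hypothetical tools standing in for real website/commerce APIs.
tools = {
    "create_site":     lambda: "site created",
    "source_products": lambda: "3 products found",
    "list_products":   lambda: "listings live",
}

history = Agent(goal="launch a small import store").run(tools)
print(history)
```

The loop itself really is this simple - the hard parts in practice are the policy (an actual model deciding which tool to call) and executing those calls safely against the real world.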
The zero to 1 billion moment is what will kick off the greatest asset bubble in history. Analysts will start forecasting the profit margins of companies by assuming they employ zero workers. People will start valuing asteroid mining companies as if they’ll succeed at harvesting all of the precious minerals in Saturn’s rings. Real estate prices will catch a bid because everyone will start putting 2 and 2 together - and realize that a cure for aging is right around the corner - so the demand for prime real estate will increase rapidly while the supply usually provided by elders dying off will start evaporating.
Meanwhile, inequality will be skyrocketing because capital will have embarked on its final campaign to wipe out the value of labor once and for all…
The first new technology paradigm over which the government has little control
From the article:
Companies including OpenAI, Microsoft, and Google have all made overt attempts to encourage the government to regulate AI. The cynical perspective on this is that they realize how indefensible LLM technology is from a competitive standpoint and want to use the regulatory system to create a moat. This is probably true, but it’s also true that most people who believe AGI is coming think society needs to figure out how to prepare for the chaos that will ensue (e.g. from mass job losses).
The examples Ben provides (nuclear, space, aviation) are fascinating to contemplate. There are two primary differences between everything that came before and AGI:
The rate of advancement far outpaces the speed of societal evolution and government capacity to respond
The internet ensures that anyone determined will be able to access AGI
One of the key insights I feel I’ve had that doesn’t seem to be widespread is the realization that only the United States - among ALL nations on Earth - has a corporate shadow government whose collective power dwarfs that of the federal and state bodies and the judiciary. I’ll never stop tooting my own horn about this. I predicted almost 7 years ago that the United States would become a “Big Technocracy”. I gave a talk about this to a Rotary group in Atlanta. If you look over the regulatory landscape in the US you will notice that virtually all impactful regulations might as well have been drafted by the C-suite of the companies being regulated. This is not true in Europe, and it’s not true in China. This sets up a dynamic that will likely exacerbate existing trends in the diverging paths of major economic zones.
The EU will react to AGI by over-regulating, making it difficult for companies to adapt. They will limit the ability of companies to employ humanoid robots. The net result will be the complete de-industrialization of the European Union. Japan, Korea and China - on the other hand - will automate as quickly as possible - because their citizens will trust that the government knows best and will end up providing for them. China in particular will avoid chaos caused by automation by being able to rapidly adjust the distribution of resources in response.
In the United States - particularly in red states - companies will be left to do as they see fit. Manufacturing will take off as labor costs plummet. Europe’s loss will be Asia’s and the United States’ gain. There will be a lag between the surge in corporate profit margins and when society at large realizes we’re going to need to implement a new social welfare system. Thankfully, the US will be able to afford it because advanced technology acts like a tax charged (mostly) by the US (and likely soon China) on the rest of the world.
Blue states within the US will be inclined to add additional labor protection laws - and this will cause an exodus of business owners, entrepreneurs and the most ambitious members of society to red states.
Implications for national security
There are two primary attack vectors in the cyber world:
Vulnerabilities in tech
Vulnerabilities in people
AI as it currently exists already magnifies these vulnerabilities by multiple orders of magnitude. First, consider the vulnerabilities in tech.
AI makes it easier to write code. AI can already - today - be asked to try to hack into a system. It will take a stab at producing the code; then a human executes the code, tells the AI whether it worked, and gives the AI another chance if it didn’t.
By the end of this year - if it isn’t already happening - AI will be able to simply keep attacking systems on its own, relentlessly, indefinitely - with the only thing holding it back being compute power.
Vulnerabilities in people come primarily through “social hacks”. A hacker sets up an email address that looks like it belongs to an executive at the company the victim works for. They send the victim an email asking them to click a link to approve something - and that link infects the computer and gives the hacker access.
AI makes this easier by automating the writing of the emails, the identification of targets, etc. Again - right now this activity must be overseen by a human - but what will things look like when you have 1 billion AI agents constantly trying to impersonate people to get access to critical systems?
Another “social hack” that has only been made possible within the past 12 months is using AI to generate a real-time deep fake and mimic the voice of someone else. See this article from CNN:
My current view is that this problem will actually be solved with technology pretty quickly - because the ramifications of not solving it are enormous and may border on MAD, mutually assured destruction (a surprisingly helpful heuristic for predicting the future).
But - you can imagine how being the first to AGI could lead to one country quickly gaining a compounding advantage vs. another.
This brings up an interesting aside. What happens to the demand for memory and compute if the technological solution to increased hacking ends up being to have a billion agents constantly trying to hack and fix vulnerabilities? One way people defend against cyberattacks today is by employing people to attack their own systems to discover where they’re vulnerable. If you end up in a situation where billions of “good agents” are constantly trying to find exploits before billions of “bad agents” do - it seems reasonable to me that this could end up consuming a material portion of total compute globally.
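A toy sketch of what those “good agents” do, assuming simple random fuzzing as a stand-in for LLM-guided probing: keep throwing generated inputs at your own code, record the ones that break it, and fix them before an attacker finds them. The fragile target function here is a made-up example:

```python
# Toy "good agent" loop: automated fuzzing of one's own code to find
# failures before an attacker does. Real systems would use LLM-guided
# input generation and vastly more compute.
import random
import string

def parse_age(text: str) -> int:
    # Deliberately fragile target function (our "own system").
    return int(text)

def fuzz(target, attempts=1000, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    failures = []
    for _ in range(attempts):
        candidate = "".join(rng.choices(string.printable, k=rng.randint(1, 5)))
        try:
            target(candidate)
        except ValueError:
            failures.append(candidate)  # logged for a fix, not an exploit
    return failures

found = fuzz(parse_age)
print(f"found {len(found)} crashing inputs in 1000 attempts")
```

Every one of those attempts burns compute - which is exactly why billions of such agents running around the clock could end up claiming a material share of global capacity.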
This post has hit my target length for these quicker “Bytes” and I need to get started on my other work. Let me know if you enjoy this format of responding to articles. There’s a bunch of other fascinating stuff to address in Ezra’s piece, including the ramifications of “free” satellites and the increased importance of Space; DeepSeek, China and export controls; why drug development will accelerate + a cure for aging - and more! I’ll get to it in the next few days.