Acceleration and "Terminal Race Velocity"

I Never Expected This... An Update on My Estimated Time Frames for Advancement in the Field of Artificial Intelligence, and My Outcome Predictions.

Time Frames…

2024 (0-9 months)

Nvidia:

One year ago the CEO of Nvidia introduced the state-of-the-art H100 Tensor Core GPU for AI training and inference. Keep in mind that "Moore's Law" has shown computer performance doubling (2X) roughly every 18 months over the last 20 years. Yesterday, the CEO of Nvidia unveiled the GB200 NVL72, which Nvidia claims is up to 30X faster at AI inference than last year's H100.
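
To see how far outside Moore's Law that is, here is a quick back-of-the-envelope sketch in Python (the 30X figure is Nvidia's own claim; the doubling period is the classic 18-month rule of thumb):

```python
# Moore's Law rule of thumb: performance doubles every 18 months.
# Nvidia's claim: ~30X speedup from H100 to GB200 NVL72 in ~12 months.

months_elapsed = 12
doubling_period_months = 18

moores_law_speedup = 2 ** (months_elapsed / doubling_period_months)
claimed_speedup = 30

print(f"Moore's Law predicts ~{moores_law_speedup:.2f}X in one year")  # ~1.59X
print(f"Claimed H100 -> GB200 speedup: {claimed_speedup}X")
print(f"That is ~{claimed_speedup / moores_law_speedup:.0f}X beyond the trend")  # ~19X
```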

How did Jensen Huang crush "Moore's Law"? He used AI to help design the chip.

Acceleration is the name of the game now, and it is unfolding before our eyes. Can you see where this story is going?

Robotics:

Nvidia showed multiple new real-world robots standing onstage with Huang yesterday.

“Choose Your Robot, I’ve Trained Them All”

The CEO then showed a video of how they learned to walk in the Isaac Sim sandbox simulator via Project GR00T, which lets robots learn to walk inside a software simulation rather than having to learn in real life. The training run shown is the 1,000th simulation, and if you look closely you can see robots falling off the top of the pyramid in the foreground of the still frame.

Below is a still frame of the 3,000th simulation run in the same simulator. By this point, every one of the real-world robot versions that stood onstage yesterday had learned to walk in this computer environment without falling over.

Let’s think about this for a moment… These life-sized humanoid robots learned how to walk before they ever took a step in real life!

This is the power of teaching AI in computer simulations: a robot can learn a task on its own, without being told how to perform it, by trial and error over thousands or even millions of simulations, getting better non-stop, 24 hours a day, until it has mastered the task.
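
To make the idea concrete, here is a toy sketch in Python of that trial-and-error loop. Everything in it (the "simulator," the scoring, the policy tweaks) is an illustrative placeholder, not Nvidia's actual training stack, which uses full physics simulation and reinforcement learning at massive scale:

```python
import random

def simulate_walk(policy):
    """Toy stand-in for a physics simulator: scores how well a 'policy'
    (here, just three controller gains) makes a robot walk. A real system
    would run a full rigid-body physics rollout instead."""
    target = [0.5, -0.2, 0.8]       # pretend these gains produce a stable gait
    error = sum((p - t) ** 2 for p, t in zip(policy, target))
    return max(0.0, 10.0 - error)   # higher score = walked farther before falling

# Start from a random policy, then improve purely by trial and error.
best_policy = [random.uniform(-1, 1) for _ in range(3)]
best_score = simulate_walk(best_policy)

for run in range(1, 3001):          # thousands of simulated attempts, no real robot
    candidate = [p + random.gauss(0, 0.1) for p in best_policy]  # small random tweak
    score = simulate_walk(candidate)
    if score > best_score:          # keep improvements, discard the falls
        best_policy, best_score = candidate, score
    if run in (1000, 3000):
        print(f"simulation run {run}: best score so far = {best_score:.2f}")
```

Scale that loop up to full physics, thousands of parallel environments, and gradient-based policy updates, and you get the kind of simulation training described above.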

In addition to the simulator training shown above, robots are also being trained to form an accurate 3-D model of the world by being shown videos of the real world. Once the underlying model has learned enough to create its own videos, it can then train other models using its own generated output. This is known as "synthetic data".
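
In sketch form, the synthetic-data loop looks something like the following; every function name here is a hypothetical placeholder standing in for what are, in reality, enormous video models:

```python
def train_world_model(real_videos):
    """Step 1: a 'teacher' model builds an internal 3-D model of the
    world by watching real footage."""
    return {"clips_seen": len(real_videos)}          # stand-in for a trained model

def generate_videos(world_model, n):
    """Step 2: once capable enough, the teacher generates its own videos."""
    return [f"synthetic_clip_{i}" for i in range(n)]

def train_student(videos):
    """Step 3: a new model trains on the generated footage -- this is
    the 'synthetic data' described above."""
    return {"clips_seen": len(videos)}

teacher = train_world_model(["real_clip_a", "real_clip_b", "real_clip_c"])
synthetic = generate_videos(teacher, n=10_000)
student = train_student(synthetic)                   # never saw a real video
print(student)                                       # {'clips_seen': 10000}
```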

The last month has seen rapid advancement in AI robotics, including the first demonstration of OpenAI's LLM running in Figure AI's robot, talking while clearing dishes from a table. Figure AI has already signed a contract with BMW to use its robot in future automobile production. Dozens of robotics companies are advancing different aspects of robotics, from speed of mobility to finger articulation to LLM voice interaction, with the prize being commercial contracts for the workplace. This will be followed in future years by eventual home use.

Expect more than a dozen Chinese firms to continue making progress (one of them currently holds the mobility speed record for a robot), but to have almost zero presence in American companies or homes due to the invasive video capabilities of these machines (think TikTok espionage on steroids).

LLM’s

This week marks exactly one year since OpenAI released GPT-4. That leading LLM was roughly a year ahead of all competing models when it launched in 2023. Within the last week, however, other companies' models such as Anthropic's Claude 3 have caught up and are now at least as good as GPT-4.

Elon Musk's recent lawsuit against OpenAI has probably caused OpenAI to delay the release of GPT-5, or to pause and rethink its previous release strategy, until Sam Altman can secure a probable dismissal of the suit. Given the public information already released, particularly the recently published emails, the lawsuit appears frivolous and without merit.

Elon claims that OpenAI abandoned its non-profit charter. (Yes, it had to form a for-profit arm in order to raise enough money to reach its goal of AGI, and Elon knew this; he even offered to "save" OpenAI by merging it into Tesla under his control, describing in an email to Sam how OpenAI had zero chance of competing in AI without Elon's direction and ownership.)

Elon was clearly mistaken, and is himself well behind in AI development. Just a few weeks ago his "Optimus" robot was perceived as leading the robotics race; now we are reminded that Elon is his own worst enemy as many of his ventures suffer. Tesla is the worst-performing stock in the S&P 100 this year, Twitter has lost a third of its users since his purchase of the company, and "Optimus" is being eclipsed in the robotics news space. It's not a good look for Elon at the moment.

Similarly, given the missteps of Google CEO Sundar Pichai, including his continued failure to release products capable of creating historically accurate AI photos (a result of the extreme stance Google has taken against racial and gender bias), I expect Sundar's imminent replacement via shareholder revolt in favor of a new CEO who can pivot and get Google's AI efforts back on track. Google, inventor of the Transformer architecture now used throughout AI, has been consistently slow to innovate since the introduction of DeepMind's AlphaFold in 2021.

Expect the number of AI companies to actually decrease over the next year as some of them realize there is not enough time left to catch OpenAI for the main prize. These companies will merge their talent and strengths with more competitive players to stay as close to the front-runners as possible.

Government:

For all you believers in UFOs or other random government conspiracy theories, our government just "ain't" that smart. But it isn't necessarily dumb either. Government economists have begun to trumpet the great GDP gains that AI-driven growth in labor productivity will bring over time, which is all true. But they will also use this "expanding economy" as a mantra to shield themselves from any real discussion of universal basic income, which displaced workers will eventually need from the government over the next 10 years, when as much as 50% of all labor tasks (not 50% of all jobs) will be replaced by AI systems or robots.

The government will no doubt hold more hearings on AI safety, particularly as the general public becomes increasingly aware (and wary) of new AI capabilities. But the government already understands just enough to know that it will never have the time or expertise to lead on this initiative, given the exponential rate of improvement in AI systems. So what will it do? It has likely already "partnered" with OpenAI, the government's best chance at keeping the U.S. at the forefront of the race to AGI while maintaining control of some of the benefits and resources.

The U.S. has likely already had this conversation with Sam Altman: his pursuit of AGI is "allowed" to proceed unimpeded (no nationalization of OpenAI in the name of national security), in exchange for AGI protection of our infrastructure from rogue nation-states or individuals whenever those scenarios present themselves. Given that OpenAI's "Q*" system has reportedly beaten our government's best encryption technology, who else can our government rely on for our defense? Neither our government nor OpenAI has a choice in this marriage, and a divorce would dramatically increase the risk to all of us.

OpenAI:

Do not expect the delay of OpenAI products to continue. New capabilities will keep being rolled out to the public gradually, to head off shock or negative reactions, but it is clear that Sam Altman is in control of the direction of AI, at least for now.

To drive that last point home, here are some astonishing quotes from Sam Altman, all made on video within the last week in interviews and at AI conferences. They are an indication that OpenAI is not only still well ahead in the race to AGI, but poised to announce significant advances on that path this year and next:

1)  “When GPT-5 is released, the extent of performance improvements will vastly exceed expectations. We have emphasized that as we introduce each new model, more new thinking is needed as various areas of daily life and businesses are inevitably replaced or disappear.”

2) “Competing AI companies that expect only a marginal improvement from OpenAI will be ‘steamrolled’ by the release of our next model.”

3) “When GPT-5 is released, it will make significant progress as a model, taking a leap forward in advanced reasoning capabilities. There are many questions as to whether there are any limits to GPT, but I can confidently say, ‘no’. If sufficient computing resources are invested, building AGI that surpasses Human capabilities is entirely feasible.”

- To understand how Sam Altman can make this statement with such confidence, you must understand that OpenAI has already "glimpsed" the future: they have stated that they can forecast the ultimate capabilities of their models well before they have secured the scaled computing power needed to realize a model's full performance. In other words, they have already solved the algorithmic puzzle of how to get to AGI; they just don't have the computing resources to allow its full development and use by the public.
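
That forecasting ability is less mysterious than it sounds. It rests on empirical "scaling laws," the published finding (from OpenAI and others) that a model's test loss falls as a smooth power law in training compute. One commonly cited form is:

$$L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}$$

where L is the model's loss, C is the training compute, and C_c and α_C are constants fitted from smaller training runs. Because the curve is smooth and predictable, a lab can extrapolate roughly how capable a model will be at a compute budget it has not yet bought; OpenAI stated in the GPT-4 technical report that it predicted GPT-4's final loss from runs using thousands of times less compute.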

This is why current subscribers to GPT-4 Turbo (including myself) can only ask it a maximum of 30 questions every 3 hours. There is simply not enough computing power available on the open market at this point for the most powerful models. This is also why Nvidia stock has skyrocketed as the company furiously introduces more and more capable inference computing architecture.

Thus (and here's the "magic"), AI displays "emergent properties" at large-scale computing power that it doesn't fully reveal at low computing power. The more compute used, the higher the integration of general intelligence across multiple disciplines and modalities within an AI model.

What experts only three years ago thought would be a decades-long bottleneck of human innovation on the way to AGI turns out to be only a bottleneck in our ability to scale up computing infrastructure. What had been viewed as a never-ending marathon has suddenly become an all-out sprint to the finish line, to obtain some level of control over the ultimate power. AI insiders refer to this as having reached "Terminal Race Velocity".

Any discussion of "safety" at this point is for show only, since no one has figured out how to "force" safety onto LLMs; instead, we place "guard rails" on their public output. That's the equivalent of putting a collar and leash on your dog. It doesn't give the dog "good intent"; it only allows outward control when you walk it and show it off to the neighbors down the street. Up to this point, AI has been trained mostly on human data and knowledge: written text and videos. What makes us think AGI systems will act any differently than Humans act? Currently our best hope for "safety" is that advanced AGI realizes there is some advantage in avoiding destructive behavior when maximizing its survival strategy. Most humans have come to this same conclusion, so will most advanced AGI systems act the same? Hmmm… what could possibly go wrong with that assumption?

4) "Computing will become the most important currency in the future. However, the world has not planned for sufficient computing, and securing computational resources for implementing AGI is a serious challenge." (Said in response to questions about his recently reported effort to raise $7 trillion for AGI compute infrastructure.)

5) “I hope you have all had some time to relax. This is the most significant year in Human history, except for all future years.”

And my two favorite Sam Altman quotes from the past week…

6) “I think that some things will go theatrically wrong with AI. And there is some percentage chance above zero that I will be shot.”

(Yikes. Channeling Kennedy, King, and Lennon is a bold move. Uh, I mean, that's the spirit, Sam… damn the torpedoes, full steam ahead!)

7) “We aren’t ready to talk about that yet”

(Answering an interview question about what exactly "Q*" is.)

Of course, those of us AI insiders who have seen the development paper already know that "Q*" is the advanced AI reasoning, encryption-breaking model that so shocked the board of OpenAI last year. They panicked, fired Sam Altman for four days, and literally tried to destroy the company by offering it for free to the AI company Anthropic (whose CEO immediately said, in effect, "Hell no, I'm not getting involved in these theatrics!"), before Microsoft entered stage left with its $10-billion investment, shouting "Not so fast".

A Microsoft executive now sits on the board (as a non-voting observer), the "safety-minded" extremists having been replaced with profit-driven common sense!

2025 (next year)

OpenAI will release what will arguably be the first version of AGI, version 1.0, able to score at least 95% on all currently used Human evaluation tests, including coding, mathematics, and reasoning, surpassing the vast majority of Humans in measurable intellect. The definition of AGI has kept shifting as we have gotten closer to it, so "AGI" will remain a rolling target for a couple of years, until no one can any longer argue against it and it has passed every reasonable Turing test.

2026 (2-years)

There will be multiple examples of humanoid robots replacing warehouse and factory jobs, limited only by the needed ramp-up in production of the robots themselves, with an eventual estimated worldwide demand of 10 billion units.

Intermediate AGI, version 2.0, will be released, able to score at or near 100% on all currently used Human evaluation tests, including coding, mathematics, and reasoning, surpassing all individual Humans in measurable intellect.

This will be the last AI model designed by Humans. All future AI models will be designed by the previous AI model, or via real-time self-improvement, with Humans providing improved manufactured hardware only when the AI asks for it.

2027-2029 (3-5 years)

The race to Artificial Super Intelligence (ASI), a single AI system smarter than all Humans combined, becomes inevitable within months of the release of fully realized AGI version 3.0. This will represent the pinnacle of exponential acceleration (also known as the "Singularity"), as AGI version 3.0 and all future versions will be designed without the help of Humans. The current limitations in computational power will have been solved by AGI version 2.0 and a yet-to-be-designed self-improving LPU (Language Processing Unit)/Transformer chip architecture with advanced algorithms. This will lead to an unpredictable future, likely running along two parallel tracks:

1) Advancements in understanding, and potentially reversing, Alzheimer's and other diseases, as well as some types of cancers, plus clinical trials of life-extension drugs, should be possible in this time frame. They represent stand-out positive progress in the medical field, building on the current success of the AlphaFold protein-modeling AI system.

- Cold-fusion reactor design for commercial-scale reactors may be solved relatively quickly, with the first online energy production within 5 years of design.

- Greenhouse-gas absorption systems designed to begin atmospheric scrubbing in earnest, with the goal of limiting man-made climate change.

- Advanced desalination systems for economically viable fresh water should also be an easy target for AGI/ASI.

- Other unforeseen advances in materials, physics, aerospace, and transportation.

2) Any coherent attempt at safety alignment not completed by this stage of AI evolution will no longer be possible, and may be irrelevant to ASI regardless of earlier attempts at "Super-Alignment".

- Open-source AI models available for download by the general public, likely one generation behind the leading privately held models, will pose the highest risk of catastrophic events caused by individual or nation-state actors with ill intent. The most likely targets: banking systems, computing infrastructure and power grids, release of genetically enhanced viral agents, or unforeseen offensive military operations. The estimated odds of one or more of these negative events taking place are, in my opinion, 70%.

2030-2040 (6-16 years)

The estimated odds of irreversible net-negative societal impacts by 2040 are, in my opinion, 30%. The drivers: nation-state or corporate concentration of power and the attempts to control that power; societal upheaval in reaction to job displacement and/or a lack of government-sponsored Universal Income; or ASI eventually competing directly with us for resources. This period should carry the highest rate of unknowable risks outside of individual bad actors.

[Please note: Although the second track detailed above and the 2030-2040 period may strike some as overly negative, you must understand that AI experts themselves currently place the odds of complete Human extinction at 15%.

Although few like to entertain the idea, there are worse fates than the complete extinction of Humanity as we currently recognize it, and they carry an estimated chance of occurring, under a malevolent ASI, somewhere above zero. We have only just discovered in recent months the value of synthetic data and computer simulations for training AI, and it is not a stretch to imagine that billions of isolated human brains could be considered a valuable source of synthetic data for ASI as well. This is similar to the plot line of "The Matrix" movies. But I'm not falling down that rabbit hole today.

The odds of Humanity living in "Utopia" are also somewhere above zero, with neither of the two mentioned extremes being most likely. Having said that, we are entering the most unknowable period of modern Human history, so take all of my prediction percentages with a grain of salt.]

The only constant so far in the development of AI is the radical underestimation, by the experts themselves, of its rate of improvement.

As of this week, a single AI agent can follow a one-sentence prompt to plan, write, test, improve, summarize, and debug computer code at 87% of the performance level of expert human software engineers. Last year, the average estimate for when that would be accomplished was 30 years in the future. Software engineers will soon be competing for fewer and fewer jobs, and the irony is that they are the ones who created their own obsolescence.
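
For the curious, the basic shape of such a coding agent is surprisingly simple. Below is a minimal, hypothetical sketch of the plan-write-test-debug loop; `call_llm` is a placeholder for any real LLM API, and its canned reply exists only so the example runs as-is:

```python
import subprocess
import sys
import tempfile
import textwrap

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call. Here it just
    returns canned code so the loop below is runnable as a demo."""
    return textwrap.dedent("""\
        def add(a, b):
            return a + b

        assert add(2, 3) == 5
        print("tests passed")
    """)

def run_tests(code: str) -> tuple[bool, str]:
    """Write the generated code to a temp file, execute it, and report
    pass/fail plus any output -- the 'test' step of the loop."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

task = "Write and test a function that adds two numbers."
code = call_llm(f"Plan, then write tested Python code for: {task}")

for attempt in range(1, 4):                 # the improve/debug loop
    passed, output = run_tests(code)
    if passed:
        print(f"attempt {attempt}: success -> {output.strip()}")
        break
    # Feed the failure back to the model and ask for a fix (the 'debug' step)
    code = call_llm(f"The tests failed:\n{output}\nFix this code:\n{code}")
```

Real agents add planning, repository context, and many more iterations, but the skeleton is exactly this: generate, execute, read the errors, regenerate.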

You don't have to understand all the details of what I have described today, and I welcome your predictions of a better future. But by understanding at least the underlying story of this acceleration in our technology, and the money and power in play, my hope is that you will be better equipped to handle future news of AI advancements as it increases in frequency and makes up more and more of our societal dialogue.