Whether we like it or not, AI is radically transforming virtually every aspect of our society. We have already reached a point where AI can do most things better than humans can, and AI technology continues to advance at an exponential rate. The frightening thing is that it is advancing so fast that we may soon lose control over it. The latest model that OpenAI just released “was instrumental in creating itself”, and it is light years ahead of the AI models that were being released just a couple of years ago.
An excellent article written by someone who works in the AI industry is getting a ton of attention today. His name is Matt Shumer, and he is warning that GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic represent a quantum leap in the development of AI models…
For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.
A few years ago, the clunky AI models that were available to the public simply were not very good.
They made all sorts of errors, and they would often spit out information that was flat out wrong.
But the newest AI models perform brilliantly and can do things that would have been absolutely unimaginable just months ago.
For example, Shumer says that when he asks AI to create an app, it proceeds to write tens of thousands of lines of perfect code…
Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
I’m not exaggerating. That is what my Monday looked like this week.
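To make that workflow a little more concrete, here is a minimal sketch, in Python, of the kind of “generate, test, refine” loop Shumer is describing. Everything in it is a hypothetical placeholder (model_call and run_ui_checks are invented names, not real OpenAI or Anthropic APIs), and the real systems are far more sophisticated, but the shape of the loop is the point: write the code, try the app, fix what fails, repeat until satisfied.

```python
# A minimal, illustrative sketch of a "generate, test, refine" agent loop.
# model_call() and run_ui_checks() are hypothetical placeholders, not real
# OpenAI or Anthropic APIs; real agentic coding systems are far more complex.

def model_call(prompt: str) -> str:
    """Placeholder for a request to a code-generating model."""
    raise NotImplementedError("Wire this up to whatever model you actually use.")

def run_ui_checks(app_code: str) -> list[str]:
    """Placeholder: build the app, click through it, and return a list of problems found."""
    raise NotImplementedError

def build_app(spec: str, max_rounds: int = 5) -> str:
    # First pass: the model writes the entire app from the plain-language spec.
    app_code = model_call(f"Build the app described below. Return complete code.\n\n{spec}")
    for _ in range(max_rounds):
        # The agent "uses the app the way a person would" and notes anything it dislikes.
        problems = run_ui_checks(app_code)
        if not problems:
            break  # it has met its own standards: "It's ready for you to test."
        # Otherwise it goes back and revises its own work, on its own.
        feedback = "\n".join(problems)
        app_code = model_call(f"Fix these issues in the app:\n{feedback}\n\nCurrent code:\n{app_code}")
    return app_code
```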
That sounds like a very useful tool.
But if AI can create an extremely complicated app with no human assistance, what else is it capable of doing?
According to an article posted on Space.com, researchers in China have already demonstrated that AI models can clone themselves…
Scientists say artificial intelligence (AI) has crossed a critical “red line” and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.
“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs,” the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.
In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue.
A self-replicating rogue AI model that decided to send countless clones of itself all over the world through the Internet would be a very serious threat.
But since we created it, at least we would understand what we were dealing with.
However, I want you to imagine a scenario in which rogue AI models are constantly creating even better versions of themselves.
That would be a complete and utter nightmare.
According to Shumer, from the very beginning the AI labs deliberately focused on making AI “great at writing code”…
The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first. My job started changing before yours not because they were targeting software engineers… it was just a side effect of where they chose to aim first.
They’ve now done it. And they’re moving on to everything else.
Being able to create an app is one thing.
But now OpenAI is publicly admitting that the latest AI model it released “was instrumental in creating itself”…
“GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
Wow.
That is stunning.
And the CEO of Anthropic is telling us that we may be only a year or two away from “a point where the current generation of AI autonomously builds the next”…
This isn’t a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.
Dario Amodei, the CEO of Anthropic, says AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”
Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.
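In schematic form, the feedback loop Shumer and Amodei are describing looks something like the toy sketch below. Every name in it is invented for illustration; no lab’s actual pipeline works this simply, and nothing would run without real training code behind the placeholders, but it captures the recursive shape of the claim.

```python
# A purely conceptual sketch of the recursive-improvement loop described above.
# Every function here is an invented placeholder; this is not any lab's real pipeline.

def train_next_generation(current_model):
    """Placeholder: build generation N+1 using code, debugging, and evaluations
    contributed by generation N (the role OpenAI describes for GPT-5.3-Codex)."""
    raise NotImplementedError

def can_help_build_successor(model) -> bool:
    """Placeholder: does this generation meaningfully contribute to AI development?"""
    raise NotImplementedError

def run_feedback_loop(model, generations: int = 5):
    for gen in range(1, generations + 1):
        if not can_help_build_successor(model):
            break  # the loop only "gathers steam" once models can help build models
        # Each generation helps build the next, which is smarter, which builds the next faster.
        model = train_next_generation(model)
        print(f"Generation {gen} has helped build generation {gen + 1}")
    return model
```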
So what happens when AI models can do virtually everything better and more efficiently than we can?
Many are warning that the job losses will be staggering.
In fact, I just came across an article about the mass layoffs that Heineken is planning because of AI…
Dutch brewer Heineken is planning to lay off up to 7% of its workforce, as it looks to boost efficiency through productivity savings from AI, following weak beer sales last year.
The world’s second-largest brewer reported lackluster earnings on Wednesday, with total beer volumes declining 2.4% over the course of 2025, while adjusted operating profit was up 4.4%.
The company also said it plans to cut between 5,000 and 6,000 roles over the next two years and is targeting operating profit growth in the range of 2% to 6% this year. Heineken’s shares were last seen up 3.4%, and the stock is up nearly 7% so far this year.
This is just the beginning.
Soon there could be millions of AI-powered robots that look and feel just like humans.
In China, they are already building AI-powered robots that feel “human to the touch” and actually give off body heat…
Moya stands at 5 feet 5 inches tall (165 cm) and weighs around 70 lbs (31 kg). Users can switch out the bot’s parts to give it a male or female build, change its hair, and customize it to their whims.
DroidUp added extra layers of flesh-like padding beneath Moya’s silicone frame to make it feel more human to the touch, even including a ribcage. A camera behind her eyes helps Moya to track its surroundings and communicate with people.
That’s not all; Moya is also heated, with a body temperature of 90–97 degrees Fahrenheit (32–36 degrees Celsius) to mimic humans’ body heat.
Speaking to the Shanghai Eye, DroidUp founder Li Quingdu argued that a “robot that truly serves human life should be warm, almost like a living being that people can connect with,” not a cold, metal machine.
These robots are being marketed as social companions.
But similar robots could also be used for warfare.
There is so much debate about which direction all of this is headed.
Many are convinced that AI will usher in a brand new golden age of peace and prosperity.
But others are concerned that AI will be used to create a dystopian hellscape…
The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.
The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.
The dangers are very real.
In fact, Anthropic has openly admitted that their latest AI model was willing to help users create chemical weapons…
Anthropic’s Claude AI model is hailed as one of the best out there when it comes to solving problems. However, the latest version of the model, Claude Opus 4.6, has sparked controversy due to its tendency to help people commit heinous crimes. According to the company’s Sabotage Risk Report: Claude Opus 4.6, the model showed concerning behaviour in internal testing. In some instances, it was even willing to help users create chemical weapons.
Anthropic released its report just a few days after the company’s AI safety lead, Mrinank Sharma, resigned with a public note. Sharma wrote that the world was in peril and that, within Anthropic, “I’ve repeatedly seen how hard it is to truly let your values govern our actions.”
We are in uncharted territory, but there is no turning back now.
Even if the U.S. shut down all AI development tomorrow, the Chinese would continue to race ahead.
The cat is out of the bag, and our world is looking more like an extremely bizarre science fiction novel with each passing day.
About the Author: Michael Snyder’s new book entitled “10 Prophetic Events That Are Coming Next” is available in paperback and for the Kindle on Amazon.com. He has also written nine other books that are available on Amazon.com including “Chaos”, “End Times”, “7 Year Apocalypse”, “Lost Prophecies Of The Future Of America”, “The Beginning Of The End”, and “Living A Life That Really Matters”. When you purchase any of Michael’s books you help to support the work that he is doing. You can also get his articles by email as soon as he publishes them by subscribing to his Substack newsletter. Michael has published thousands of articles on The Economic Collapse Blog, End Of The American Dream and The Most Important News, and he always freely and happily allows others to republish those articles on their own websites. These are such troubled times, and people need hope. John 3:16 tells us about the hope that God has given us through Jesus Christ: “For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.” If you have not already done so, we strongly urge you to invite Jesus Christ to be your Lord and Savior today.

