GPT-5 to be Released in the Next Two Weeks?
Today, there are new leaks about GPT-5.
Yuchen Jin, co-founder of Hyperbolic, claims to have insider information.
GPT-5 is not a single model, but a system composed of multiple models.
It contains a "router" that switches between reasoning models, non-reasoning models, and tool-use models. This is why Altman said OpenAI will fix its model naming: in the future, prompts will be automatically routed to the most suitable model.
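The routing idea can be sketched in a few lines. This is a purely hypothetical illustration, not OpenAI's implementation: the keyword heuristic and the model names are invented for the example.

```python
# Hypothetical sketch of a prompt router. The classification heuristic
# and model names below are invented; OpenAI has not described its design.

def classify(prompt: str) -> str:
    """Toy heuristic: pick a model family based on prompt keywords."""
    text = prompt.lower()
    if any(k in text for k in ("prove", "step by step", "why")):
        return "reasoning"       # hard problems go to a reasoning model
    if any(k in text for k in ("search", "browse", "run code")):
        return "tool-use"        # requests needing external tools
    return "non-reasoning"       # everything else gets a fast chat model

MODELS = {
    "reasoning": "hypothetical-reasoning-model",
    "tool-use": "hypothetical-tool-model",
    "non-reasoning": "hypothetical-chat-model",
}

def route(prompt: str) -> str:
    """Return the model a router might dispatch this prompt to."""
    return MODELS[classify(prompt)]

print(route("Prove that sqrt(2) is irrational"))  # hypothetical-reasoning-model
print(route("What's the weather like?"))          # hypothetical-chat-model
```

A production router would of course use a learned classifier rather than keywords, but the structure — classify, then dispatch — is the same.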
GPT-6 is already in training.
Another well-informed source has also confirmed this.
Actually, GPT-5's imminent arrival is not news.
On Saturday, OpenAI researcher Alexander Wei announced that a new model had won an IMO gold medal and hinted that GPT-5 will be released soon, though GPT-5 is not the model that won the gold.
Moreover, yesterday, the phrase "GPT-5-reasoning-alpha-2025-07-13" appeared in the open-source code of a third-party organization.
GPT-5 Model Leaked in Biological Benchmark Test
This morning, Altman again stated that OpenAI will add over 1 million GPUs by the end of the year, clearly preparing more computing power for the new model.
Will GPT-5 Be a Game-Changer or Show No Significant Leap?
One thing is certain: the birth of GPT-5 will not be later than September.
A few days ago, the mysterious model o3-Alpha was launched and was removed from public benchmark tests just 12 hours later.
This may indicate that the official version is about to be released.
Past patterns support this: when OpenAI tested the confidential models "Optimus Alpha" and "Quasar Alpha", Quasar was officially released 11 days later, while Optimus Alpha was announced just 4 days after it appeared.
Some are optimistic about the upcoming GPT-5, while others are pessimistic.
OpenAI replaced the o3 model with gpt-5-reasoning-alpha
For example, Wharton School professor Ethan Mollick suggests that even if GPT-5 can only automatically switch between o3 and 4o, it would change most people's perception of AI.
However, many people judge from various details that GPT-5 is likely to be a router.
For instance, OpenAI's CPO Kevin Weil hinted at some clues about GPT-5 in February this year.
If GPT-5 is indeed just a router, people will be disappointed: it will clearly bring little improvement in base intelligence, and we will have to wait for Gemini 3 or Claude Sonnet 5.
Many believe that even if OpenAI releases GPT-5, the model's capabilities will not show significant improvement unless there are better tools or clever methods using RL to enhance performance.
In short, many people expecting GPT-5 might be disappointed!
However, some argue that a router represents reliability and specialization: routing works precisely because it distributes different kinds of workloads to the appropriate logical paths.
Basic general intelligence might be impressive in benchmarks, but what can actually be deployed and scaled are specialized routing systems.
In any case, do not underestimate this innovation. Sometimes, seemingly plain architectural designs might outperform breakthrough models.
Altman Breaks Silence, GPT-6 Enters the Final Stage
However, the next-generation model—GPT-6—might open the final chapter.
Recently, in a 20-minute interview with Conviction founder Sarah Guo, Altman again shared his insights into the future of AI.
Altman said that the coding intelligence agent Codex released by OpenAI made him deeply feel the essence of AGI.
Codex can not only autonomously handle complex tasks, but can even connect to GitHub and read internal documents, displaying astonishing capabilities.
He even predicted that AI agents, which today are like interns working for a few hours, will evolve into senior engineers working for days on end.
Ultimately, they will become "AI scientists" capable of discovering new knowledge, a crucial moment for the world.
The host also asked what "emergent behaviors" he had observed in the next-generation model that would change how people work, how products are built, and how OpenAI itself operates.
Altman firmly believes that models in the next 1-2 years will be amazing, just like the major leap from GPT-3 to GPT-4.
As for what enterprises can do, it's to directly hand over the most difficult problems to the next-generation model.
For example, a chip design company could let the LLM design a more optimal chip. A biotech company trying to cure a certain disease could also throw their challenges to AI.
Altman states that such a future is just around the corner.
As mentioned earlier, LLMs can understand any context, connect to every tool and system, perform excellent, high-intensity reasoning, and provide high-quality answers.
Most importantly, they have sufficient robustness and autonomy to be confidently entrusted with work.
Altman excitedly stated again that he never thought this day would come so soon, but now, it truly feels very close.
He also described a Platonic ideal: a model with superhuman reasoning, extremely small, running incredibly fast, with a context window of around one trillion tokens, and able to access every tool.
At that point, what the problem is no longer matters, nor does whether the model is preloaded with knowledge or databases.
People can treat it as a "reasoning engine": just feed in all the context of an enterprise or a person's life, and connect the tools.
Altman said people will be able to do amazing things with it, adding, "I believe we are moving in this direction."
When asked what he would do with a thousand-fold computing resources, Altman said he would have AI research how to build better models, and then ask more powerful models how to use resources.
At the same time, increasing compute at test time can significantly improve a model's performance, especially on high-value problems.
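The test-time compute idea can be illustrated with a toy best-of-n sketch: sample an answer many times and keep the best one. The "solver" and its score here are stand-ins invented for the example, not anything OpenAI has described.

```python
import random

# Toy illustration of test-time compute scaling via best-of-n sampling.
# noisy_solver stands in for one model sample; it is not a real model.

def noisy_solver(rng: random.Random) -> float:
    """Stand-in for one model sample; returns a quality score in [0, 1]."""
    return rng.random()

def best_of_n(n: int, seed: int = 0) -> float:
    """Draw n samples and keep the best; more compute, better best answer."""
    rng = random.Random(seed)
    return max(noisy_solver(rng) for _ in range(n))

# With a fixed seed, a larger sample budget extends the same sample stream,
# so the best score can only stay equal or improve.
assert best_of_n(64) >= best_of_n(4)
print(best_of_n(4), best_of_n(64))
```

The same monotonic effect is what motivates spending extra inference compute on high-value problems: each additional sample can only raise the best result found.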
$300 Million Offer Rejected by Ten People at OpenAI
Meanwhile, a WSJ report revealed many insider details.
For instance, within OpenAI, at least ten employees rejected the $300 million offer from Zuckerberg.
Among those who refused were Mark Chen, OpenAI's Chief Research Officer, and Noam Brown, the father of poker AI.
The leak claims that in spring this year, Zuckerberg had a brief meeting with Mark Chen, seeking advice on how to improve Meta's generative AI team.
Unexpectedly, a single remark from Mark Chen about investing in talent triggered Zuckerberg's aggressive hiring spree.
In the latest leaked roster of the 44-person "Super Intelligence Lab" team, 40% come from OpenAI. Even so, many top researchers turned down Zuckerberg's high salaries.
Is it truly for the dream of AGI, or do they simply know that Altman already offers enough?
References:
https://x.com/Yuchenj_UW/status/1946777842131632427
https://x.com/bindureddy/status/1946791998914179542
https://x.com/slow_developer/status/1946545812332540130
This article is from the WeChat public account "New Intelligence", author: New Intelligence, editor: Aeneas Peach, published with authorization from 36kr.