- o3 and o4-mini are available now for Pro, Plus, and Team users, and free users can also try o4-mini
- They can use and combine every tool in the ChatGPT arsenal
- o3 and o4-mini bring advanced reasoning to ChatGPT's capabilities
OpenAI has just given ChatGPT a massive boost with its new o3 and o4-mini models, which are available right now to Pro, Plus, and Team users, and even to free-tier users.
The new models considerably improve ChatGPT's performance and reason through tasks much faster than preceding OpenAI reasoning models like o3-mini and o1.
More importantly, they can intelligently decide which of OpenAI's tools to use to complete your request, and they bring a new ability to reason with images.
OpenAI provided a livestream of the launch:
Here are the three most important changes:
1. Combined tool use
The two new reasoning models can use and combine every tool in ChatGPT. That means they have access to the whole ChatGPT toolbox, including web browsing, Python coding, image and file analysis, image generation, canvas, automations, file search, and memory.
Crucially, ChatGPT now decides for itself whether it needs to use a tool, based on what you have asked.
When you ask ChatGPT to do something complicated using the new models, it shows you each step it takes, which tool it is using, and how it arrived at that decision.
Once it has finished its research, the notes on its working process disappear and you get a report of its conclusions.
2. Better performance
The way o3 and o4-mini can intelligently decide which tools to use is a step toward the intelligent model switching we were promised with ChatGPT 5, whenever that finally arrives.
As you would expect from advanced reasoning models, the report you get at the end is extremely detailed and contains links to all the sources used.
According to OpenAI, "the combined power of state-of-the-art reasoning with full tool access translates into significantly stronger performance across academic benchmarks and real-world tasks, setting a new standard in both intelligence and usefulness."
The real-world upshot is that these models can tackle multi-faceted tasks more effectively, so don't be afraid to ask them to perform several actions at once and produce a single answer or report that combines multiple questions.
3. Reasoning with images
The two new models are the first OpenAI has released that integrate uploaded images into their chain of thought. They actually reason using the images, so, for example, you could upload a photo of some cars and ask for the make and model of each one, then the resale value they are likely to hold in five years.
This is the first time ChatGPT has been able to integrate images into a reasoning chain, and it is a real step forward for multimodal AI.
> Introducing OpenAI o3 and o4-mini, our smartest and most capable models to date. For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation. pic.twitter.com/rdaqv0x0we
> OpenAI, April 16, 2025
My verdict
I tried the new models on the Plus tier and I am impressed by the speed and completeness of the responses to my requests. While I have always appreciated the depth of reasoning the o1 and o3-mini models provided, it always meant waiting longer for an answer.
The o3 model has now become my default choice on Plus, because it is fast enough that I never feel like I am waiting too long for an answer, yet I still get a satisfying amount of detail.
In short, I am impressed. The new models feel like a natural evolution of ChatGPT into something smarter and more capable. I also like the way it can decide for itself which of ChatGPT's tools it needs to use to give the best answer.
Try them yourself
Here's how to try the new ChatGPT models for yourself:
Pro, Plus, and Team users can select ChatGPT o3, ChatGPT o4-mini, and ChatGPT o4-mini-high from the model drop-down menu inside ChatGPT, while free-tier users can access o4-mini by selecting the Reason button in the composer before submitting a request. Edu users will get access within a week.
There is no visual indication for free-tier users that they are now using the o4-mini reasoning model, but if you click the button and ask ChatGPT which model it is using, it now says o4-mini.
There will be a rate limit on the number of times a free-tier user can use the reasoning feature; for Plus users the limit is much higher.
OpenAI says it expects to release o3-pro "in a few weeks", with full support for tools. Pro users can already access o1-pro.