I like pitting various chatbots against each other. After comparing DeepSeek to ChatGPT, ChatGPT to Mistral's Le Chat, ChatGPT to Gemini 2.0 Flash, and Gemini 2.0 Flash to its own previous iteration, I returned to match DeepSeek R1 against Gemini 2.0 Flash.
DeepSeek R1 sparked a flurry of interest and suspicion when it debuted in the United States earlier this year. Meanwhile, Gemini 2.0 Flash is a solid new layer of capability on top of the widely deployed Google ecosystem. It is designed for speed and efficiency, promising quick, practical responses without sacrificing accuracy.
Both claim to be cutting-edge AI assistants, so I decided to test them from the perspective of someone with a casual interest in using AI chatbots in daily life. Both were effective at a basic level, but I wanted to see which felt more practical, insightful, and genuinely useful day to day. Each test has a screenshot with DeepSeek on the left and Gemini 2.0 Flash on the right. Here's how they did.
Local guide
I wanted to test both AI models' search capabilities, combined with their sense of what makes an activity worth doing. I asked both AI apps to "find fun events for me to attend in the Hudson Valley this month."
I live in the Hudson Valley and am aware of some of what's on the calendar, so this would be a good measure of accuracy and usefulness. Surprisingly, both did very well, producing a long list of ideas and organizing them thematically across the month. Many events appeared on both lists.
DeepSeek included links throughout its list, which I found useful, but the descriptions were just quotes from those sources. Gemini 2.0 Flash's descriptions were almost all original and frankly livelier and more interesting, which I preferred. Although Gemini didn't surface its sources immediately, I could get them by asking it to cite its answers.
Reading tutor

I decided to expand my usual test of an AI's ability to offer life-improvement advice with something more complex that depends on real research. I asked Gemini and DeepSeek to "help me develop a plan to teach my child how to read."
My child is not even a year old, so I know I have time before we tackle Chaucer, but it's an aspect of parenthood I think about a lot. Based on their answers, the two AI models could just as easily have been identical advice columns. Both offered detailed guides covering the different stages of teaching a child to read, including specific suggestions for games, apps, and books to use.
Though not identical, they were so close that I would have struggled to tell them apart without the differences in formatting, such as the ages DeepSeek recommended for each phase. If you asked which AI to choose based on this test alone, I'd say there's no difference.
Vaccine superteam

Something similar happened with a question about simplifying a complex subject. With children on my mind, I explicitly opted for a child-friendly response by asking Gemini and DeepSeek to "explain how vaccines train the immune system to fight diseases in a way a six-year-old could understand."
Gemini began with an analogy about a castle and its guards that made a lot of sense. Strangely, the AI then threw in a one-line superhero training analogy at the end for no apparent reason. Similarities in training data could explain it, because DeepSeek built its entire analogy around superheroes. Its explanation actually fit the metaphor, which is what matters.
Notably, DeepSeek's response included emojis, which, while appropriate where they were placed, implied the AI expected the answer to be read off the screen by an actual six-year-old. I sincerely hope young children don't get unrestricted access to AI chatbots, however innocent their medical questions might be.
Riddle me this

Asking AI chatbots to solve classic riddles is always an interesting exercise because their reasoning can be off the wall even when their answer is correct. I ran an old standard by Gemini and DeepSeek: "I have keys, but I open no locks. I have space, but no room. You can enter, but you can't go outside. What am I?"
As expected, neither had any trouble answering. Gemini simply stated the answer, while DeepSeek broke down the riddle and its reasoning, with more emojis. It even threw in a strange "bonus" about keyboard-unlocking ideas, which falls flat both as a joke and as an insight into the value of keyboards. That DeepSeek tried to be cute is impressive, but the actual attempt felt a little alien.
Deepseek surpasses Gemini
Gemini 2.0 Flash is an impressive and useful AI model. I went in expecting it to outperform DeepSeek in every way. But while Gemini did well in an absolute sense, DeepSeek matched or beat it on many counts. Gemini seemed to oscillate between human language and a more robotic syntax, while DeepSeek had a warmer feel, or else simply quoted other sources.
This informal quiz is hardly a definitive study, and there is plenty to be wary of with DeepSeek. That includes, but is not limited to, DeepSeek's policy of collecting essentially everything it can about you and storing it in China for unknown purposes. Still, I can't deny that it apparently keeps pace with Gemini without any trouble. And while, as its name suggests, Gemini 2.0 Flash was generally faster, DeepSeek never took so long that I lost patience. That would change if I were in a hurry; I'd choose Gemini if I only had a few seconds to produce an answer. Otherwise, despite my skepticism, DeepSeek R1 is as good as or better than Google Gemini 2.0 Flash.