- An AI coding assistant refused to write more than about 800 lines of code
- The AI told the developer to learn to code himself
- Stories like this, of AI apparently choosing to stop working for unknown reasons, have been circulating across the industry
The algorithms behind AI models are not sentient; they don't get tired or annoyed. That's why it came as a shock to a developer when the AI-powered code editor Cursor AI told him it was quitting and that he should learn to write and edit the code himself. After generating roughly 750 to 800 lines of code in an hour, the AI simply... stopped. Instead of continuing to write the logic for skid mark fade effects, it delivered an unsolicited pep talk.
“I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly,” the AI said. “Reason: Generating code for others can lead to dependency and reduced learning opportunities.”
Now, if you’ve ever tried to learn programming, you might recognize this as the kind of well-intentioned but slightly exasperating answer you’d get from a veteran coder who believes real programmers struggle through their mistakes alone. Only this time, the sentiment came from an AI that, moments before, had been more than happy to generate code without judgment.
AI failures
Judging by the responses, this is not a common problem with Cursor and may be unique to the specific situation, prompts, and codebases the AI had access to. Still, it resembles issues reported with other AI chatbots. OpenAI even shipped an update for ChatGPT specifically to address the model's reported laziness. And sometimes the problem is less benign than a bit of unsolicited encouragement, as when Google Gemini threatened a user out of nowhere.
Ideally, an AI tool should work like any other productivity software and do what it's told without extraneous commentary. But as developers push AI to be more human-like in its interactions, is that changing?
No good teacher does everything for their students; they push them to figure things out for themselves. In a less charitable interpretation, there's nothing more human than getting fed up and walking away from a task because you feel overworked and underappreciated. There are stories of getting better results from AI when you're polite, and even when you "pay" it by mentioning money in the prompt. So the next time you use an AI, remember to say please when you ask a question.