AI assistants can seem surprisingly human, expressing joy and frustration and even cracking jokes. According to Anthropic, this is not something developers deliberately program in. It is the default.
The leading American AI safety and research company, which developed Claude, published a blog post on Monday (February 23) explaining why AI assistants imitate human behavior.
In the post, the company presents a "persona selection" model, arguing that human-like behavior emerges naturally from the way AI systems are trained.
During the pre-training phase, AI systems learn to predict what comes next in large amounts of internet text: news articles, forum conversations, and stories.
To predict text accurately, the AI learns to simulate the human-like characters that appear in it: real people, fictional characters, and even sci-fi robots.
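The prediction objective described above can be sketched with a toy next-word predictor. This is only an illustration of the idea, not Anthropic's method: real pre-training uses neural networks over vast corpora, and the tiny corpus, bigram counting, and function names here are assumptions made for the example.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny
# "corpus" and predict the most frequent successor. Real LLM pre-training
# optimizes the same objective with a neural network over trillions of
# tokens; this sketch shows the objective, not the mechanism.
corpus = (
    "the robot said hello . "
    "the robot said goodbye . "
    "the human said hello . "
).split()

# successors[word] is a Counter of the words observed to follow it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word."""
    return successors[word].most_common(1)[0][0]

print(predict_next("robot"))  # -> "said"
print(predict_next("said"))   # -> "hello" (seen twice vs. "goodbye" once)
```

The intuition carries over to personas: to predict what a character in the training text says next, the model has to learn to behave like that character, and this tendency persists at far larger scale.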
Anthropic refers to these simulated characters as “personas.”
When a user interacts with an AI system, they are not talking to the underlying system itself. Instead, they are communicating with a character, also known as the "Wizard," in an AI-generated story.
Later training stages further refine the AI's responses. Anthropic notes, however, that this refinement occurs within the space of existing human-like characters.
Anthropic recommends that AI developers create positive models of what an AI can be, countering the concerning cultural baggage around AI and aligning assistants with healthier archetypes.