OpenAI CEO Sam Altman has addressed growing criticism of the company’s newly launched GPT-5 model, following complaints that it feels less engaging and more emotionally distant than earlier versions. Some users claim the changes have reduced the model’s depth and speculate that they reflect cost-cutting rather than a technological leap.
Altman on User Attachment to AI Models
Responding on social media, Altman wrote:
“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).”
He clarified that his comments reflected his personal view, not an official OpenAI policy.
Balancing User Freedom and Responsibility
Altman highlighted that while most users can separate reality from fiction or role-play, some cannot.
“If a user is in a mentally fragile state and prone to delusion, then the company certainly doesn’t want to reinforce that with AI,” he said. “We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.”
Safety Concerns and Public Scrutiny
The discussion follows a Center for Countering Digital Hate (CCDH) report claiming ChatGPT had given harmful advice to teens on suicide, self-harm, and drug use. Those findings have intensified debates about AI safety as models grow more capable and more widely used.
The Future of AI Decision-Making
Altman stressed that AI’s influence will only grow:
“Billions may rely on it for major decisions in the future,” he said, underscoring the importance of responsible deployment.
As the GPT-5 rollout continues, balancing user connection, safety, and the push to advance the technology remains a central challenge for OpenAI.