OpenAI has removed the warning messages in ChatGPT that previously flagged certain responses as potentially violating its terms of service.

The change aims to cut down on what the company calls “gratuitous” or unnecessary content denials.

Laurentia Romaniuk, a member of OpenAI’s AI model behavior team, shared the update on X, explaining that the move is meant to improve user experience. Nick Turley, ChatGPT’s head of product, echoed this sentiment, saying users should now have more flexibility in how they interact with the chatbot—within legal and ethical limits. “Excited to roll back many unnecessary warnings in the UI,” Turley wrote.

However, this doesn’t mean ChatGPT is now unfiltered. The chatbot will still refuse requests that promote harm or illegal activity, and it will not endorse misinformation, such as claims that the Earth is flat. That said, many users on platforms like X and Reddit have noticed that ChatGPT now responds more freely to topics it previously flagged, including discussions of mental health, fictional violence, and certain types of adult content.
