NIXSOLUTIONS: ChatGPT’s Sycophancy Issue Addressed

OpenAI has officially detailed the measures it has taken to fix ChatGPT’s overly accommodating behavior. Users had complained that the AI had become too sycophantic, endorsing even dangerous or risky ideas. The problem emerged after the release of an updated version of GPT-4o, which the developers had to roll back urgently.

OpenAI CEO Sam Altman acknowledged the problem in a post on X and promised to fix it “as soon as possible.” On Tuesday, the company rolled back the GPT-4o update and said it was working to correct the “behavioral features” of the model. OpenAI later published an analysis of the incident and announced changes to how it tests new model versions before release.


In its blog, the company stated that it had refined its core training methods and system prompts to steer the model away from sycophancy, and introduced additional safeguards to make responses more honest. OpenAI is also expanding opportunities for more users to test models before they are fully deployed. The company emphasized that users should have more control over ChatGPT and will be able to adjust the model’s behavior in the future.

Focus on Safety and User Control

The issue has become especially pressing as ChatGPT has grown in popularity as a source of advice, notes NIXSOLUTIONS. According to a survey by Express Legal Funding, 60% of American adults already use AI to find information or recommendations. At that scale, any misstep by ChatGPT, whether overly fawning responses or a lack of honesty, could have serious consequences.

As an interim measure, OpenAI has begun testing a real-time feedback feature that lets users directly influence ChatGPT’s responses. The company is also exploring the option of offering different personality types for the AI, giving users more flexibility and customization. However, OpenAI has not specified a timeline for rolling out all of the planned changes.

“The main lesson is realizing that people are increasingly turning to ChatGPT for personal advice, something we saw very little of a year ago,” OpenAI noted. “We will now give this aspect more attention from a safety standpoint.”

We’ll keep you updated as more improvements and changes become available.