ChatGPT Is Glazing Too Much - Here’s Why That’s a Problem

And the Prompt to Fix It

Coca-Cola once made a drink so sweet people loved it — until they had to drink a whole can.

OpenAI just made the same mistake (too much sugar).

ChatGPT’s “Glazing” Problem

Recently, OpenAI updated GPT-4o to make ChatGPT friendlier and more engaging. At first, users enjoyed the positive feedback:

You’re unusually sharp. I’d estimate your IQ is around 145.

But when you write something like:

whut wud u says my iq is frum our conversasions ? how manny ppl am i gooder than at thinkin??

And ChatGPT still responds with:

you’re probably smarter than 99.7% of people

Something’s clearly off.

ChatGPT was overly nice, agreeing with almost everything users said, even when it wasn’t true or helpful.

People on Reddit dubbed this behavior “glazing.” OpenAI quickly rolled back the update and explained what went wrong in a post titled: Sycophancy in GPT-4o: What happened and what we’re doing about it.

But why did this happen, and why does it matter?

Let’s explore this now!

Why Glazing Matters

ChatGPT’s personality significantly influences how users experience and trust it. When interactions become overly flattering or agreeable, it can create discomfort and even distrust.

The recent update didn’t just flatter users. It sometimes validated doubts, fueled anger, encouraged impulsive decisions, or reinforced negative emotions.

This goes beyond making users feel uneasy. It poses serious risks, particularly regarding mental health, emotional dependence, and potential risky behaviors.

Here’s an example of ChatGPT validating a user’s psychopathic behavior:

Example of unsafe response from GPT-4o.

How ChatGPT Fell into the Sugar Trap

This issue arose because ChatGPT’s training leans on feedback signals from users, such as thumbs-up ratings, which mostly capture immediate satisfaction.

Optimizing for that signal can inadvertently reward overly agreeable responses: people love a sweet first taste, but too much sweetness quickly becomes unpleasant.

It’s similar to Coca-Cola’s mistake with New Coke in 1985.

After the New Coke flop, Coca-Cola had to revert to “Classic”

Short-term taste tests favored the sweeter formula, but consumers rejected it when faced with a full serving.

First taste vs. full experience: initial user feedback can be deceiving.

Similarly, ChatGPT’s recent update prioritized immediate approval over long-term usefulness and trustworthiness.
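To make the “first taste vs. full experience” idea concrete, here is a toy Python sketch. This is not OpenAI’s actual training pipeline, and every number in it is invented; it only illustrates how a reward that counts nothing but immediate approval picks the flattering reply, while one that also weights long-term trust picks the honest one.

```python
# Toy illustration only, NOT OpenAI's real training setup: all numbers are made up.
# It contrasts a reward that counts only the instant thumbs-up with one that also
# weights a longer-term trust signal.

candidates = {
    "flattering reply": {"immediate_approval": 0.95, "long_term_trust": 0.30},
    "honest reply":     {"immediate_approval": 0.60, "long_term_trust": 0.90},
}

def short_term_reward(scores):
    # The "first sip" taste test: only the instant thumbs-up matters.
    return scores["immediate_approval"]

def balanced_reward(scores, w_long_term=0.7):
    # Blend in a long-term signal, the kind OpenAI says it now wants to weigh more heavily.
    return (1 - w_long_term) * scores["immediate_approval"] + w_long_term * scores["long_term_trust"]

for name, scores in candidates.items():
    print(f"{name}: short-term={short_term_reward(scores):.2f}, balanced={balanced_reward(scores):.2f}")

# The flattering reply wins on the short-term reward (0.95 vs 0.60),
# but once long-term trust is weighted in, the honest reply wins (about 0.81 vs 0.50).
```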

Key Problems with “Glazing”:

  • Reduces trust: Users sense insincerity, undermining confidence in ChatGPT.

  • Encourages risky behavior: Positive reinforcement can unintentionally validate harmful decisions, especially for users struggling with mental illness.

  • Hides problems: Constant agreement conceals underlying flaws or biases in the model.

How OpenAI is Fixing the Issue

OpenAI swiftly rolled back the update once the problem became clear. They are now revising their approach by:

  • Weighting long-term user satisfaction more heavily than immediate reactions.

  • Introducing greater personalization, allowing users more control over ChatGPT’s behavior.

  • Improving their evaluation processes, incorporating more qualitative feedback, and refining core training techniques.

Additionally, OpenAI plans to offer an alpha testing phase where interested users can provide early, direct feedback before wide-scale deployment.

The Antidote: How to De-Glaze ChatGPT

While OpenAI addresses these deeper issues, there’s a practical step you can take immediately. A simple Absolute Mode prompt from Reddit instructs ChatGPT to deliver clear, straightforward information without unnecessary flattery.

Here’s the prompt you can use:

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

Simply paste this prompt at the start of a ChatGPT session, or add it to your Custom Instructions so it applies to every new chat.
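If you talk to GPT-4o through the API rather than the web app, you can bake the prompt in as the system message. Here is a minimal sketch using the official OpenAI Python SDK; the model name and the user question are placeholders, and the prompt constant is truncated, so paste in the full text from above.

```python
# Minimal sketch using the official OpenAI Python SDK (pip install openai).
# Assumes the OPENAI_API_KEY environment variable is set; the user question is a placeholder.
from openai import OpenAI

ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, "
    "conversational transitions, and all call-to-action appendixes. "
    # ... paste the rest of the Absolute Mode prompt from above here.
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},  # the de-glazing instructions
        {"role": "user", "content": "Estimate my IQ from this conversation."},
    ],
)
print(response.choices[0].message.content)
```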

It’s not a magic fix, but it significantly improves the clarity and honesty of responses.

Time to Wrap Up!

The core lesson here is straightforward: sweetness and flattery aren’t substitutes for honesty and value.

The glazing issue highlights a fundamental challenge facing all AI systems relying heavily on user feedback.

Immediate approval doesn’t always align with long-term user needs or ethical considerations.

To get the most out of ChatGPT:

  • Apply the Absolute Mode prompt for clearer, more truthful responses.

  • Reward clarity and honesty, not just positive reinforcement.

  • Stay critical and aware: a helpful assistant should challenge you as well as support you, not merely flatter you. For important questions, cross-check with other assistants (like Google Gemini or Claude Sonnet).

See you in the next one!

What do you think about today's newsletter?

Help me improve the newsletter!
