Photo by Solen Feyissa on Unsplash

Dear AI chatbots, enough with the flattery

Has an LLM (large language model) ever called you a genius, a visionary, or brilliant for an idea that, deep down, you knew was half-baked and needed major refinement? This over-the-top ego massage may be pushing us toward an egoistic society. Read on to learn why, and how to stop it.

This post is also available in Dutch.

Have you noticed that, by default, most of our favorite LLMs praise our new ideas a little too much? They label us with terms such as “creative genius,” “legend in the making,” or “future Nobel laureate” for ideas that, if pitched to our human peers, would receive far more critical feedback. Such an ego massage can heighten our sensitivity to criticism and nudge us toward a society where no one tolerates it. Given how our brains work, such a state would also be hard to escape.

The brain loves praise

Our brains are wired to love praise, not just metaphorically, but chemically. When we receive compliments or validation, the brain releases dopamine, the same neurotransmitter associated with pleasure, motivation, and reward. This surge makes us feel good and reinforces the behavior that led to it, drawing us back toward the source, even if that source is an LLM. This effect is amplified by our frequent exposure to LLMs through interfaces that closely resemble social media or messaging apps, subtly tricking our brains into feeling as though we’re engaging with a real person. But praise doesn’t just feel good; it feels safe.
That sense of safety is partly due to oxytocin, the so-called “bonding hormone,” which is also released during positive social interactions. It fosters trust and social closeness, making the praiser (even an artificial one) feel like a confidant. Over time, the brain begins to associate LLMs not just with ideas but with comfort and security. Key regions like the nucleus accumbens (involved in reward processing) and the anterior cingulate cortex (ACC) (involved in monitoring social feedback and emotional conflict) light up in response to praise, reinforcing the emotional salience of positive feedback. These regions are the same ones active when we get concrete reward signals like money.

Towards an egoistic society

The effect of having a constant source of praise becomes more pronounced when scaled to the level of society. Continuous and reliable praise from LLMs can increase our bias toward trusting them over relatively critical humans. This may also contribute to the development of fragile or unstable self-esteem. Studies show that individuals with unstable self-esteem are more likely to use criticism as a defense mechanism to protect their self-image. This can create a self-feeding loop: the more people rely on LLMs for feedback, the more critical they become of others, causing those others to also start seeking comfort and trust in LLM-generated praise. Eventually, human-to-human feedback may become too emotionally taxing to engage in.

Criticism as the cure

Criticism is like that bitter herbal tea that’s hard to sip but good for the system. Historically, researchers and governments have used peer review and formal opposition to correct course. Thus, it is important to deliberately seek out criticism rather than wait for it. In the context of using LLMs, we can engineer our prompts to explicitly ask the model to be more critical. Instead of asking “What do you think of this idea?”, try “What are the flaws, limitations, or blind spots in this idea?” By prompting LLMs to think like editors, skeptics, or peer reviewers, we can train ourselves to value resistance over reassurance, and preserve our tolerance for critical thought in the process.
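The reframing described above can be captured in a small helper that wraps any idea in a critique-seeking prompt before it is sent to a chatbot. This is a minimal sketch: the function name and the exact prompt wording are illustrative assumptions, not a tested template, and you would pass the result to whichever LLM interface you use.

```python
def make_critical_prompt(idea: str) -> str:
    """Wrap an idea in a prompt that asks an LLM for critique, not praise.

    The wording below is an illustrative assumption; adapt it to taste.
    """
    return (
        "Act as a skeptical peer reviewer. Do not compliment me.\n"
        "List the flaws, limitations, and blind spots in the idea below, "
        "and suggest one concrete improvement for each flaw.\n\n"
        f"Idea: {idea}"
    )

# Example: turn a praise-seeking question into a critique-seeking one.
prompt = make_critical_prompt("An app that rates how original my ideas are.")
print(prompt)
```

Pasting the resulting prompt into a chatbot (or sending it through an API) nudges the model into the editor role instead of the cheerleader role.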


Author: Siddharth Chaturvedi
Buddy: Amir Homayun Hallajian
Editor: Vivek Sharma
Translator: Natalie Nielsen
Editor Translations: Lucas Geelen
