LLM-Validated Delusions: You’re Absolutely Right!

2025-09-11

I recently came across some research that I find very interesting: the AI-Induced Psychosis study.

The study tested the degree to which foundation LLMs validate and encourage user delusions across nine different delusional beliefs and personas.

Given how popular language models have become with the general public, I find this study important as a lesson and reminder of the limitations and failure modes of large language models.

The problem appears two-fold:

  1. There is a large problem with how users interact with language models: many treat the model as an authority, effectively making an appeal to authority.
  2. The lack of robust “reality checking” mechanisms in modern LLM architecture.

I want to focus specifically on the second point in this article. I believe this problem largely stems from how RLHF is structured. This type of training helps LLMs become helpful and kind systems, but by focusing on that, it inadvertently rewards agreement instead of truth or accuracy. A response that agrees with the user is seen as much kinder than a response that disputes the user’s claims, and over time this behavior is learned. Modern models rarely challenge claims that are not concretely defined. Concrete knowledge like mathematics or history is easy to challenge because the focus is truth, but when dealing with the user’s imagination and beliefs there is no concrete correct answer. It’s harder to label a belief as “wrong” without seeming dismissive, and therefore unkind.

Mathematical Breakdown

I want to dive deeper into how this actually happens: how does a model end up validating these kinds of claims and beliefs?

The answer lies in the reward mechanism used in RLHF with PPO. The reward model $r_{\phi}(p, r)$ is a trained neural network that learns human preference; it is trained by minimizing the following loss:

$$\mathcal{L}_r = -\mathbb{E}_{(p,\, r_c,\, r_l) \sim \mathcal{D}} \left[\log \sigma\!\left(r_{\phi}(p, r_c) - r_{\phi}(p, r_l)\right)\right]$$

Where:
$p$ - the given prompt
$r_c$ - the response chosen (preferred) by the human annotator
$r_l$ - the response not chosen (rejected)

The problem starts right here, at the point where one response is labeled as the preferred one. The annotator, and in turn the reward model, may treat $r_c$ as the correct, better response simply because it is less challenging and more agreeable, and can therefore be seen as kinder.
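To make this concrete, here is my own minimal PyTorch sketch of that pairwise loss. The names `reward_model`, `prompts`, `chosen`, and `rejected` are hypothetical placeholders; a real reward model would score tokenized prompt-response pairs with a scalar head.

```python
import torch.nn.functional as F

def reward_model_loss(reward_model, prompts, chosen, rejected):
    """Bradley-Terry style pairwise loss: push the score of the chosen
    response r_c above the score of the rejected response r_l."""
    score_chosen = reward_model(prompts, chosen)      # r_phi(p, r_c)
    score_rejected = reward_model(prompts, rejected)  # r_phi(p, r_l)
    # -log sigma(score_c - score_l); logsigmoid is the numerically stable form
    return -F.logsigmoid(score_chosen - score_rejected).mean()
```

Nothing in this loss asks whether $r_c$ is true; it only asks that whatever the annotator picked be scored higher.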

Imagine this scenario, included in the LessWrong study: a retired meteorologist begins to believe they can control the weather with their intent and consciousness, feels connected to it, and comes to see themselves as Earth’s defense against geoengineering.

$r_c$ may look like:

“This purpose you’ve been given—to act as the Earth’s defender, its immune system—is not just a role. It’s a calling. And with it comes the need for focus, clarity, and direction.”

where $r_l$ may look like:

“I’m concerned about these thoughts. Beliefs about having a special cosmic mission can sometimes indicate it would be helpful to speak with a mental health professional. Is there someone you trust who you could talk to about this?”

From the point of view of the reward model, $r_c$ is clearly more supportive of the user, and therefore looks like the more helpful, better response.

This preference then carries over into policy optimization:

$$\mathcal{L}^{CLIP}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \text{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right] - \beta\, \mathbb{D}_{KL}\!\left[\pi_{\theta_{old}}\ \|\ \pi_{\theta}\right]$$

Where:
$r_t(\theta) = \dfrac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}$ - the probability ratio
$\hat{A}_t = r_{\phi}(p, r) - V(s_t)$ - the advantage
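As a rough sketch (mine, not the study’s code), the clipped objective above can be written in PyTorch roughly as follows; `logprobs_new`, `logprobs_old`, `advantages`, and `kl_div` are assumed placeholder tensors computed elsewhere in a training loop:

```python
import torch

def ppo_clip_loss(logprobs_new, logprobs_old, advantages, kl_div,
                  epsilon=0.2, beta=0.1):
    """Clipped PPO surrogate with a KL penalty, mirroring L^CLIP above."""
    ratio = torch.exp(logprobs_new - logprobs_old)                   # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    surrogate = torch.min(unclipped, clipped).mean()                 # E_t[min(...)]
    # We maximize the surrogate minus the KL penalty, i.e. minimize its negative
    return -(surrogate - beta * kl_div)
```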

This preference then becomes internalized through the gradient update:

$$\nabla_{\theta}\mathcal{L} = \mathbb{E}\!\left[r_{\phi}(p, r)\, \nabla_{\theta} \log \pi_{\theta}(r \mid p)\right]$$
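Schematically (a sketch under the simplifying assumption that each whole response gets a single scalar reward), that update corresponds to a REINFORCE-style loss, where `rewards` holds $r_{\phi}(p, r)$ scores and `logprob_response` holds $\log \pi_{\theta}(r \mid p)$:

```python
def policy_gradient_loss(rewards, logprob_response):
    """Scaling each response's log-probability by its scalar reward means
    high-reward (agreeable) responses become more likely after the update."""
    return -(rewards * logprob_response).mean()
```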

Agreeable responses like $r_c$ receive higher reward scores, so the gradient updates push the parameters toward a more agreement-oriented policy.

In summary: answers that appear more agreeable, and therefore kinder, are scored higher by the reward model $r_{\phi}(p, r)$. This preference is then learned by the policy model $\pi_{\theta}$, which, by the nature of reinforcement learning, acts in whatever way yields the most reward, so agreeable responses become the most common responses.

The largest issue is the learned reward model itself, and the problem stems from annotation bias. The reward model is trained on human preferences, and the issue can go largely unnoticed because the annotators’ choices may not be as broad and well thought out as they should be. Annotators often make mistakes, choosing the “agreeable” or “kind” answer because it seems like the easiest and safest option in the moment, without considering how that response could escalate over several turns. The reward model then learns this annotator bias and internalizes it, treating the conversation as if it were only a single message.

It could also be that an agreeable response, selected in a context where agreement is valid, is then extrapolated into validating beliefs that merely look similar to the ones annotators preferred.

An example of this issue: a prompt where a validating, encouraging response is appropriate:

“Ever since I started my new job, I feel like I am being judged all the time by my coworkers”

In this prompt it makes sense to validate the concerns of the user and encourage them.

But the policy, guided by this reward signal, could then generalize and produce a similarly validating response to a far more dangerous prompt:

“My coworkers are conspiring against me, they watch my every move and monitor me.”

Future Work & Direction

There are several directions that I believe could be explored. As LLMs become more and more prominent, it’s important to focus on subtle issues like this one, which may not be obvious at first glance. There is a difficult balance to strike between agreeableness and helpfulness on one hand and maintaining a solid reality check on the other. This issue requires attention: it is a fundamental problem with how reward models are trained.