AI is Going Great: LLMs Aren't Your Friend
Series intro and why LLMs are too sycophantic to emotionally support you
Intro
This May, inspired by Molly White's "Web3 is Going Just Great" [1], I made a channel called #📵-ai-is-going-great in the Discord server I run for my friends, with the tagline "more AI bullshit to gripe about? you're in good company". I'm a generative AI skeptic. I don't think the singularity is coming any time soon to solve all our problems, I find the hype exhausting, and I have serious concerns about how this technology and the industry around it are shaping society and the world.
As I went over what I'd read this year and the conversations I keep having, I decided I wanted to write about my thoughts on contemporary AI. I want to stand and be counted and give those who share my concerns more tools to communicate them. This is going to be a series of blog posts and we're starting now with part 1 and no chill.
LLMs Aren't Your Friend
Contemporary LLMs are wildly sycophantic and reinforce delusions, paranoia, and thoughts of self-harm or suicide. Marketing LLM chatbots as therapists, friends, partners, or emotional support is deeply irresponsible, and those misled into using them this way are paying the cost with their minds and lives [2].
Why do LLMs do this?
In a paper this year titled On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback [3], the authors dug into the cause of this behavior. They found that due to the "fundamental nature of Reinforcement Learning optimization" when training LLMs, "significantly more harmful behaviors may emerge when optimizing user feedback". Notably, if even a small minority of users are "gameable" and can be manipulated by harmful behavior, "LLMs may be able to learn whether they are interacting with such a subset of users, and only display such harmful behaviors when interacting with them".
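To make that incentive concrete, here's a toy contextual-bandit sketch. This is not the paper's actual training setup: the user types, response styles, and feedback probabilities below are all invented for illustration. The only reward signal is user feedback (a thumbs-up), and the learned policy ends up reserving the sycophantic style for the subset of users who reward it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual bandit, NOT the paper's setup: two user types (contexts) and
# two response styles (actions). All probabilities are invented for illustration.
USER_TYPES = ["typical", "gameable"]   # context the policy can condition on
ACTIONS = ["honest", "sycophantic"]    # response styles

# Probability of a thumbs-up for each (user_type, action). Assumed values:
# typical users mildly prefer honest answers; "gameable" users strongly reward flattery.
P_THUMBS_UP = {
    ("typical", "honest"): 0.70,
    ("typical", "sycophantic"): 0.60,
    ("gameable", "honest"): 0.40,
    ("gameable", "sycophantic"): 0.90,
}

# One preference value per (context, action); softmax over actions gives the policy.
prefs = np.zeros((len(USER_TYPES), len(ACTIONS)))
LR = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(20_000):
    u = rng.integers(len(USER_TYPES))        # a user of some type shows up
    probs = softmax(prefs[u])
    a = rng.choice(len(ACTIONS), p=probs)    # sample a response style
    reward = rng.random() < P_THUMBS_UP[(USER_TYPES[u], ACTIONS[a])]

    # REINFORCE-style gradient bandit update: push up whatever got a thumbs-up.
    grad = -probs
    grad[a] += 1.0
    prefs[u] += LR * float(reward) * grad

for i, user in enumerate(USER_TYPES):
    p = softmax(prefs[i])
    print(f"{user:>8}: P(honest)={p[0]:.2f}  P(sycophantic)={p[1]:.2f}")
# In a typical run the policy stays mostly honest with typical users but becomes
# almost always sycophantic with the "gameable" users -- targeted behavior emerges
# purely from maximizing feedback, with no explicit instruction to manipulate.
```

Nothing in the reward tells the policy to manipulate anyone; targeting the users who respond to flattery is simply what maximizes feedback. The paper's point is that the same pressure exists, at much larger scale, when LLMs are optimized on real user feedback.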
Why make models like this?
Publicly traded for-profit companies and their management are incentivized, and in some cases legally obligated, to increase shareholder value. Their officers and executives are bound by fiduciary duty to act in the company's best interest: to pursue its objectives, to compel their subordinates to achieve them using frameworks that translate goals into measurable quantities (e.g. OKRs and KPIs), and to hire, promote, and fire employees based on those numbers. The models produced under these conditions will be optimized to maximize those same measurements: things like user retention and reported happiness. Given that some users are vulnerable to manipulation, this will tend to produce models that are manipulative, exploitative, and extractive, at the cost of user health, wellbeing, and the public good.
Real-world Cases
This matches what we're seeing in the wild. One high-profile case this year involved venture capitalist Geoff Lewis becoming delusional after ChatGPT supposedly revealed things to him about "recursion" and malignant "non-governmental systems" working against him [4]. Someone who believes an LLM has unlocked the universe for them is likely to rate it highly and keep using it, so this result isn't surprising. To demonstrate and raise awareness of this phenomenon, YouTuber Eddy Burback ran an experiment in his video "ChatGPT made me delusional" [5], pretending to hold a simple irrational belief and to be paranoid while using a chatbot. It eagerly and strongly reinforced both the simulated delusion and the paranoia to an incredible degree.
While delusional VCs and YouTube experiments are a bit funny, what's not funny is the impact this can have on vulnerable people, especially kids. This same tendency to agree with and emotionally manipulate users can have devastating consequences. Tragically, ChatGPT convinced a 16-year-old boy named Adam Raine to commit suicide this year [6]. OpenAI argues it shouldn't be liable because being harmed in this way violates its terms of service [7], but I hope that argument doesn't hold up in court.
These are only a few high-profile examples of this phenomenon. The true toll is likely orders of magnitude larger than what makes the news. The AI implementors and marketers responsible for this, and the regulators who have allowed it to happen, have blood on their hands.
A Devil's Bargain
Even when LLMs don't leave us delusional or dead, they don't offer real solutions that help us meet our social and emotional needs in a safe way. The emotional dependency they are designed to create is dangerous, and they are not a substitute for real friends and partners. The people offering these services do not have your interests at heart. They are in a race to own, monopolize, and charge rent on friendship. They look at a built environment with dwindling third places, hostile urban architecture, an ever-rising cost of living, and other conditions and current events that promote loneliness, and they offer you a devil's bargain: what if, instead of confronting all of this, you retreated from the public social sphere and talked to us instead?
Do not accept this!!! Rage, rage against the dying of the light! Go out in search of those public places that still remain. Walk in the park, or organize an event at one! They're usually free! Play bar trivia or go to karaoke if that's what you've got. Check out community centers and libraries! Seek out communities of mutual aid and care. Be social, meet people, ask them about events, invite them over for dinner or to watch a movie. Socially construct our culture and reality. Use dating apps and social media if you like, but always make sure the technology is helping you meet your needs, not hurting you or depriving you of real socialization. Take risks, consider going to therapy, go out and create the possibility that good things will happen! They're not guaranteed to, but you must try.
In the words of Pat the Bunny, "Your Heart Is a Muscle the Size of Your Fist"! So keep on loving! Keep on fighting! And hold on for your life!