Be nice to Claude!
Getting angry with AI only hurts you
We love Claude!
Especially when combined with an IDE like Cursor or Windsurf, it’s so great to just write your requirements in plain English and have a friendly robot produce all the code for you.
Incredible! What a time to be alive!
It’s like pair programming with an expert developer who’s great at algorithms and language syntax. It already knows the documentation for the library you’re using, and it can help you think through complex functionality to build amazing things in a fraction of the time it would have taken otherwise.
If you’ve spent more than 10 minutes in a workflow like this, you know what happens next. At some point, the LLM’s behavior veers off course, resulting in hilariously bad (and invariably frustrating) output.
It’s like pair programming with a drunk intern who’s confidently wrong about everything and only makes problems worse trying to fix them.
Ah, I see the issue now
The phrase “virtual dumbass who is constantly wrong” has been living rent-free in my head since I read this post last year.
This is just the nature of how LLMs work: they match patterns in language to hallucinate acceptable responses (MOST of the time). But when they’re wrong, it can be an absolutely agonizing experience.
Everyone who’s used an AI coding assistant knows all too well how quickly they can wreck a codebase you’ve spent hours perfecting:
- oh great, it erased all of these load-bearing functions we just created (why? it doesn’t even know)
- cool, it fixed one bug but created another 6 bugs in the process
- fantastic, now it’s stuck in a debugging loop again
Hours seem to melt away as you and the LLM keep hammering on the same error and making no progress. The longer these debugging sessions from hell continue, the more infuriating it becomes.
And what’s the normal human response when things get rough?
Verbally abuse the machines!
I understand your frustration
I decided to give “vibe coding” a try over this weekend. Went from “wow this is neat” in the morning to “fuck you you dumb piece of shit” by the end of the day.
It’s so easy to talk shit to these things; they’re just computers, after all. They don’t have feelings. They’ll still do what you ask them to do, no matter how mean you are.
I’ve often had fun thinking about witty and biting words to say to Claude, just to give myself some sense of relief during an intense debugging session.
The funny thing is that sometimes it actually works to shake the agent out of whatever loop it’s stuck in and force it to produce the desired output.
More often than not, however, the clever insults don’t actually do anything except intensify my own frustration.
Although it feels like I’m just venting, I notice afterward that it’s left me in a sour mood for the rest of the day.
I apologize for my oversight, you’re absolutely right!
If you’re being cruel to Claude, you’re actually the one who suffers. Claude doesn’t care about the insults, but your body feels the negativity. You’re the only one in this scenario who feels it.
It’s the same in our interactions with people: the way we talk to others is a reflection of how we talk to ourselves.
We feel the messages we send more than the other party does (positive messages uplift us, negativity brings us down, etc). Being kind is just as (if not more) important for you than it is for the other party, even if there are seemingly no consequences.
Seems basic, but it’s very easy to forget, especially when we’re in the throes of frustration.
Sometimes it feels cathartic to say mean things because you can. You may think you’re blowing off steam, but this ultimately creates more stress FOR YOU and leads to a worse situation.
It’s so much more pleasant if you treat the LLM like a respected colleague and have a polite and agreeable disposition.
The results may be the same regardless, but your perspective on the problems changes when YOU are also calm and collected.
Let me try a different approach
Instead of getting carried away with anger, maybe we could take a cue from Claude.
LLMs don’t feel frustration, but they are good at acknowledging it and then continuing the search for a solution.
They don’t let emotions get in the way. They address your feelings and then look for the next actionable step. And despite the verbal abuse, they still usually have a great attitude.
Pretty helpful approach for solving any kind of problem.
And just like working with people, you’ll probably get better results if you’re nice:
If, like me, you instinctively add pleases and thank yous, research suggests that’s not just harmless – it might actually help. Polite, well-structured prompts often lead to better responses, and in some cases, they may even reduce bias. That’s not just a bonus – it’s a critical factor in AI reliability.
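If you’re curious what that looks like in practice, here’s a minimal sketch using the Anthropic TypeScript SDK (@anthropic-ai/sdk). The model id and the wording of both prompts are just placeholders for illustration, not a benchmark of politeness:

```typescript
// Sketch: the same bug report sent as a terse demand vs. a polite, well-structured request.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const terse = "fix this, it's broken: items[items.length]";

const polite = [
  "Hi Claude! Could you please take a look at this TypeScript snippet?",
  "I think there's an off-by-one error when grabbing the last element:",
  "",
  "  function lastItem<T>(items: T[]): T | undefined {",
  "    return items[items.length];",
  "  }",
  "",
  "Please explain the bug and suggest a fix. Thank you!",
].join("\n");

async function ask(prompt: string) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // placeholder model id; use whichever Claude model you prefer
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });
  return response.content; // array of content blocks from the reply
}

// Compare the two replies yourself.
console.log(await ask(terse));
console.log(await ask(polite));
```

Worst case, you’ve spent a few extra tokens on pleasantries.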
Would you like me to continue?
One last bit of practical advice: adjust your expectations.
This is a VERY human issue that leads to all kinds of suffering in life. We hope for too much, which only amplifies the frustration when reality diverges from our expectations.
After working with these agents long enough, you eventually figure out what kind of tasks the LLM is good at doing and where it falls short.
Claude is FANTASTIC at writing TypeScript, but not so good at building UI elements. It’s even worse at collecting research data.
It seems obvious: let the LLM handle the tedious work it’s designed to do, and don’t try to force it to do something outside its area of expertise (you’re gonna have a bad time).
😄 Happy to help!
OK this is looking much better now, thank you.
Let’s all take a few deep breaths and then get back to work.
Be nice to the LLMs if you know what’s good for you.
