…and now I'm recovering

After just a month of use I can see that my relationship with Claude Code is unhealthy. As I mentioned when I tried Claude Code for a month, even when it was wasting my time I was having fun. That's a pretty big red flag.

The fixation on the usage meter, the prompting late at night instead of sleeping, the lack of learning at the end of it all. There is something fundamentally wrong with how coding agents work today. When you get started, you give it something easy first because you don't trust the tech; the tool one-shots it, and now you're overconfident in its ability to do what you want. It's all downhill from there.

I cancelled my Claude Code subscription. I have a tendency to fixate on things and overdo it. Most of the time I can manage and keep things mostly under control, but this has the potential to do real damage. Worst of all, I wouldn't have anything to show for it at the end. Coding agents have a way of turning your brain off: no learning, no big picture, just prompting until it does your bidding. If I ever need such a tool, I'll run it locally with one of the open-source coding models. I'll use LLMs for code the way people use Shein for clothes. Both have a terrible impact on communities and the environment, so the comparison seemed fitting.

After the high of the first success, talk to someone else; show what you made and how you made it. Nothing breaks the illusion quite like peer feedback. LLMs will make you feel good while making you worse at your job. There will be great successes shared to trigger FOMO, but I'm betting the context, time, and money invested to get there won't be remotely possible for most people. Users will be blamed for the tool's shortcomings and misleading marketing: if you're not getting good results, you don't know how to use the tool, you need better context, better prompting, etc. Sounds a bit like what happened with React a few years ago, no? We're in a new version of churn = engagement, except this time there's a paywall. Use our bigger, more expensive model, follow this training, get this promptmaster certification. There's even less incentive to make the tooling better.

The only silver lining I can find: with non-tech people having more power to create things, we might be in for a new era of weird. I'm looking forward to the weird software that does one ridiculous thing and crashes your computer half the time doing it.

In the meantime, before reaching for LLM tools, I'll ask myself: does this really need to be done? Do I need "AI" because the codebase is too complex and folks gave up trying to fix the complexity? Do I need "AI" for this, or is it compensating for a bad working environment?


For a better-written post about coding agents, I recommend Agent Psychosis: Are We Going Insane? It's worth a read.