Sorry, I don't like AI

Posted on Jul 7, 2025

Introduction

Yes, another blog post taking a Bold Stance on AI1. But this blog post is for me, and only incidentally for you. I post it because having the stakes of an audience (however small) helps me. Otherwise I wouldn’t poison the world with my drivel.

Why am I writing this?

The use of AI has made me feel increasingly uncomfortable, sad, and angry, and I wanted to disentangle and understand the sources of those feelings. Despite what some folks will say, feelings are crucial to the collection and assembly of facts, and in a healthy scenario the two work in a virtuous cycle: feelings direct and prioritize the collection of facts, while facts can regularize or affirm our feelings. This is what we call ‘thinking’. So I’m here because some thinking needs to be done.

Intended outcomes:

  1. Clarity. Writing has a tendency to smooth out the swiss-cheese logic of pure thought.
  2. Regularization or vindication of feelings. This serves as a place to organize my thoughts and do some light research.

Not intended outcomes:

  1. Convincing anyone. Even if that were my intent, I think changing someone’s mind about something they are diametrically opposed to is not possible in even the best blog post.
  2. Making you feel bad. This is, at worst, venting – not punitive.
  3. Producing a definitive and/or objective answer. The issues discussed here tie in with issues as big as ‘being alive’. I don’t think an objective answer exists, and regardless, we aren’t going to crack it here.

I don’t like what AI does

I don’t like the quality of AI output

I think the most straightforward argument to be made against AI is the accuracy of its statements. Many of my friends use these models and seem to have some success with them, and I believe that by using (1) properly tuned models of (2) sufficient size on (3) problems they have a large amount of data for…they can do OK.

Regardless of suitability, models are pretty bad at throwing in the towel when they should. If I were to pose a problem to a human, they might say “I don’t know” if they don’t have enough information, or they might give vague pointers on where to find the answer, or, if they’re really afraid of not knowing the answer, they’ll give a mealymouthed lie that will fold under some cross-validation. What they won’t do is rattle off an incredibly plausible-sounding, detailed, but wrong answer that makes finding the flaw in the answer a game of Where’s Waldo (and in the case of slopsquatting (definition), sometimes Waldo has a bomb)2.

Yes, it’s always worthwhile to do a gut check, consider the source, and decide if some information should be verified. But these lies – excuse me, hallucinations – are superhuman in their presentation: the models have been fed every book, blog post, and codebase they could (legally, ethically, or otherwise) obtain. They are masters of producing ‘sentence shaped’ things, and, by that nature, sometimes those things happen to be true (for example, ‘dsfajio bteoqhhu vsuqeiq’ is less likely to be true than ‘orange happy table octopus’, which in turn is less likely to be true than ‘I am a small dog’). And indeed, having ingested so much information, they’re often correct.

However, I just asked “What does the saying ‘No dogs but the Sunday best’ mean?” and it gave me four paragraphs AND offered to tell me about the origins of the phrase, which was followed by four more paragraphs of horse shit. No worries here: I know it’s nonsense. But what if I had an actual question, with an unknown truth value? Bruno Dias wrote a beautiful (short!) post about how this ‘right sounding’ aspect of LLMs makes them – arguably – not useful. You should read it right now, even if it means not reading the rest of this blog post: Link.

I don’t like the social side effects that the output creates

I honestly think that the accuracy of LLMs isn’t their biggest issue. In isolation, I would just not interact with them and call it a day. I use plenty of imperfect tech, and I also opt out of plenty of tech I don’t like, and I don’t write a whole blog post about all of them. The problem is that LLMs and generative models are a kind of “Oops! All Negative Externalities”.

Something that I’m sure you’ve noticed is that the stuff is everywhere, like a fine silt spread across the internet. And that’s not even speaking of the models’ output, just the rollout and advertisement of various AI chatbots. There is a strong incentive for usage to be normalized and adopted, in large part I’m sure because these companies are bleeding money3. The curious thing about them, though, is that they are at once:

  1. Advertised as replacing any and every imaginable job, and
  2. Never to be trusted, with all of their output verified

I’d like to speak to a human.

When I read something, look at art, listen to music, or even read DMs and emails, I see it as a window to the person or people who created it. Creating things - even something as small as a text message - requires engagement. By engaging with whatever you’re creating, you’re adding a little sample of yourself. As a recipient, I can read that text and know, with a certain amount of assurance, that interpreting that creative product will yield insight into the person who made it, their intentions, and their context. It’s a fruitful process, or it can be one if I – and you – put in the effort.

If a robot churns out art, I don’t really see the point in engaging with it seriously. Creating things forces you to think intimately about minutiae you wouldn’t even give a second glance when ‘overseeing’ a robot. Why would I scrutinize the details if you aren’t in there? And even worse, an email? I get the drudgery of emails, but to have emails read by no one, written by no one, expanded and summarized in a wasteful, absurd game of robot telephone - is that better? Or at the very least, do you think that’s a solution? Human connection is precious and increasingly rare, particularly in the age of the overly polished, LinkedIn-facade arms race to scramble for jobs in a hostile marketplace. Maybe you just want to save the human bits of you for those who appreciate you the most, your loved ones and friends back at home. But I don’t see wringing the humanity out of your other interactions as the right path forward.

Struggling is learning

It’s so, so easy to fool yourself into thinking you’ve learned something because it went down smooth. Lately, I’ve been trying to teach myself linear algebra. Reading about linear algebra? Piece of cake. Doing the exercises? Pure agony. All the information was technically in the chapter, but along the way I had made so many false assumptions and missed so many details that by the end of the chapter it’s like I had read something entirely different.

For everyday stuff, this surface level ‘yeah I get the gist’ type thing is fine. I don’t need to be able to pass a comprehension exam on, say, where all the produce is located at my local grocery store. But for the technical stuff, to be able to create with it, I have to wrestle with it. Failure can be immensely useful: it can highlight areas for improvement and reveal false assumptions. And when things are difficult? You’re building those neural pathways, baby. Just like working out (I assume), no pain, no gain. And goddamn does it feel good to finally start to feel proficient at something that was previously impossible.

The problem with LLMs is that they make this illusion of learning possible for pretty much anything. I once spoke with someone who said that they had a lot of really good thoughts and ChatGPT just helped them write those thoughts down, and I have to say I’m not entirely convinced. If I can’t accurately convey my thoughts, then how could an LLM know what I mean? I don’t think the results would even be my own thoughts. At best, my thoughts were chewed up and spit back into my mouth (no pain, no gain). At worst, it’s a Barnum effect that has hypnotized me into believing my thoughts were made tangible: a narrative so full of holes and so lacking in constraints that nearly any text could fill in the details.

Counterpoint: some people just want to be able to do things without learning how to do them. Maybe you have no coding experience and you still want to vibe code something bespoke. But in general I’m unimpressed with that line of reasoning. If it’s something big, it’s worth engaging with seriously and learning the fundamentals. If it’s something silly and whimsical, the fact that it was Robot Art, for the reasons stated above, makes it - in my opinion - largely devoid of value.

Further counterpoint: maybe you’re working a 9-5 and you just need to ship a feature by a rapidly approaching deadline. You probably could write it yourself, but it would take you a lot longer. To which I say…whatever. I’m not going to make you lie down on the tracks to attempt to stop the barreling train of late-stage capitalism. But as far as I understand, the jury is still out on whether using LLMs improves efficiency4, and there is certainly now code in your codebase that no one really understands.

A word on students completing assignments with LLMs.

A discussion of learning and LLMs wouldn’t be nutritionally balanced without some piping-hot takes on LLMs in education.

Here’s the way I see it: LLM-generated content can’t be detected programmatically5. With LLMs, students can effortlessly produce content that – at the very least – looks right. Other students might choose not to use LLMs, which will ultimately take much, much longer, often at the expense of time that could be spent on other courses. Through grading, these assignments are converted into GPAs6, which are converted into job prospects. And yet again we find ourselves in another hellish arms race akin to that of CV and resumé padding, further degrading already unreliable indicators. GPA was always a poor indicator of this amorphous idea of ‘knowledge’, and has become a target rather than an indicator (Goodhart’s Law), but this feels significantly worse.

I’ve seen pragmatic professors change their assignments to be more LLM-proof, and I’m sympathetic to their needs to get something working without having to overhaul the system. Ultimately, I predict that stricter (likely imperfect and certainly irritating) detection and surveillance methods will be put in place to prevent LLM usage, in an attempt to maintain the current ‘GPA infrastructure’. And I get it, burning it all down is impractical. I just wonder what life would be like if we embraced a little more chaos rather than clinging desperately to soothing but misleading quantification.

Turbo Bullshit Machine

The output from these models is near ubiquitous. You’ve already seen it - Facebook/Instagram posts, the ‘chum’ at the bottom of news articles, Google searches, endless labyrinthine blogs chock-full of nonsensical posts - it’s everywhere, and the better you get at identifying it, the more pervasive you realize it is.

At best, they’re poor facsimiles of actual content, low effort memes or uninteresting art, adding more noise to an already noisy internet. At worst, they’re indistinguishable inflammatory comments spouting views held by no one for the sole purpose of creating chaos in an already extremely divisive era7.

And it’s not just the slop these models produce: they inevitably poison the perception of content around them. My willingness to engage with anything, knowing I’m now in a slop swamp, decreases precipitously, even if there is a diamond in the rough. And maybe you think it’s foolish to expect to engage with anything on the internet, to let anything under that cynical carapace - slop or not. But that’s a level of pessimism that not even I stoop to: digital media can be beautiful and we should allow it to change us. The fact that these models have essentially created high-throughput bullshit printers coughing out nonsense-pollution is, frankly, shameful.

I don’t like how AI does

I don’t just have complaints about the product, but also the method! Really covering all my bases here.

Extractive

Generative models need gobs and gobs of data, and are currently trained on token counts approaching the number of words in every single book on Earth. This requires tapping into pretty much any resource possible. Books, of course, but also blog posts, chats, social media - whatever. And that’s just for language models. Image-based models need astronomical amounts of data too: the first generation of DALL-E was trained on a dataset of about 250 million images (source), though these companies tend to play things pretty close to the vest when it comes to the size of their training data. These data have to come from somewhere, and the ethics of how this happens is an oft-discussed, highly contentious, usually subjective, and rapidly evolving subject in the generative AI space.
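To make that “every book on Earth” comparison concrete, here’s a rough, hedged back-of-the-envelope sketch. The book count, average book length, and token figure below are loose approximations pulled from commonly cited public estimates, not anything rigorous:

```python
# Hedged back-of-the-envelope: compare a frontier model's reported training-token
# count to a rough estimate of the words in every book ever published.
# All three figures below are loose, commonly cited approximations, not precise data.
books_ever_published = 130_000_000       # Google's oft-cited ~2010 estimate of distinct books
avg_words_per_book = 80_000              # assumed average book length
words_in_all_books = books_ever_published * avg_words_per_book  # ~1e13 words

reported_training_tokens = 15_000_000_000_000  # ballpark publicly reported for recent large models

print(f"Words in every book (rough): {words_in_all_books:.1e}")
print(f"Reported training tokens:    {reported_training_tokens:.1e}")
print(f"Ratio:                       {reported_training_tokens / words_in_all_books:.2f}x")
```

In other words, under these rough assumptions the reported token counts land in the same ballpark as, or beyond, every word in every book ever printed.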

I read books and look at images all the time, and then I make stuff. For creative work, this is often without credit to the sources I got inspiration from, because it’s often an amalgamation of so many things and difficult to point to any individual source8. This sounds a lot like what a generative AI model does. If it’s ok for me to do it, what’s the difference if a model does it?

This is a bit of a ham-handed universalizability argument: ‘if it’s ok one time, it’s ok millions of times’. If you see no flaws with that argument, I encourage you to never try beer. But even beyond the issue of scale, it’s incredibly dispassionate to think that all the art humans make is just an amalgamation of things we’ve seen.

At the risk of repeating myself from earlier sections: our desires, culture, environment, and even our present affect direct what we consume and, in turn, create. So no, I don’t think the purely extractive process of generative models slurping up every image available is the same as what artists do. And, lest you forget, an artist is a person outside of what they produce.

There is currently a mad rush to mine every last vein of fresh tokens to train these models, and the automated web crawlers that hoover up information are ruthless. Theoretically, a `robots.txt` file should be a line of salt that tells such crawlers they aren’t welcome there, but for some reason9 these genAI crawlers don’t listen. In fact, they tend to actively avoid detection during their pillaging. Besides being incredibly rude, these things are crushing servers - Wikimedia, for instance - in a way that makes tools like Nightshade, Anubis, and Cloudflare’s AI Labyrinth necessary defense measures in case bots decide they aren’t interested in obtaining consent.
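For context on what ‘listening’ would even involve, here’s a minimal sketch of the check a well-behaved crawler is supposed to make before fetching a page, using Python’s standard-library `urllib.robotparser`. The user-agent string and URLs are placeholders, not any particular company’s crawler:

```python
# Minimal sketch: what a polite crawler does before fetching a page.
# Uses only the Python standard library; "ExampleAIBot" and the URLs are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

page = "https://example.com/blog/some-post.html"
if rp.can_fetch("ExampleAIBot", page):
    print("robots.txt does not object to this crawler fetching the page")
else:
    print("robots.txt asks this crawler to stay away")
```

The complaint above is that many genAI crawlers simply skip this check, or disguise themselves so the rules never seem to apply to them.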

On the other end of things, you have pretty much every last tech company absolutely horny to sell your data to train AI. LinkedIn, for instance, re-enabled consent at a later date even if you had previously disabled it. Instagram, Facebook, Tumblr, Microsoft Excel and Word, DeviantArt, Zillow, hell, Adobe Acrobat - all configured to consume your data by default and feed it to some model.

And hey, maybe that’s what you signed up for when you accepted those T&Cs you didn’t read, but let’s be honest: This isn’t opt-in for a reason. There’s no fanfare when it comes to announcing that they’re using your data to train a model, and all the buttons to disable it are at the bottom of some menu seven layers deep. It’s heavily implied that they know you aren’t going to like this, so they’re hoping they can sneak this one by you, or maybe just wear you down.

‘You don’t have to use their service’ you say, and maybe you’re technically right. But in practice? Listen, I certainly don’t want to use Microsoft Office, or LinkedIn, but you know what I do want? A place to live. Artists who need to advertise and sell their work won’t get the same amount of reach using a self-hosted solution (not to mention the technical know-how and possibly capital required for such endeavors). These companies, by design, strive to make themselves irreplaceable and have the network effect on their side. The choice has largely been made for you.

Carbon Footprint

Previously, I considered the climate impact of LLMs to be bad, but only in relation to the impact of the output. Like, I would consider pretty much any significant emissions of greenhouse gasses too much for a machine that squishes babies into paste. Before I wrote this, though, I considered it fairly minute in the grand scheme of things. However, current projections indicate that by 2028, ~7-12% of US electricity will go to data centers, a lot of it AI-related (source), and it’s been noted that data centers tend to use dirtier sources of electricity because of their need for high uptime (source). This is controversial, and as such I imagine big tech companies aren’t champing at the bit to make these data readily available - so estimation is challenging. But third-party estimates aren’t looking great, so I’ve since updated my opinion from ‘bad, but not a primary concern’ to ‘another of the pillars of concern’.

Conclusion

I considered writing about how I don’t like the projected future of AI – that is, the projections given by Sam Altman and the like – but frankly I find their posturing simply more irritating than substantive, and I’ve had this blog post weighing on me for weeks now and just want to get it off my chest. There’s some other stuff I wanted to touch on too, like how my colleagues who speak English as a second language use it to help with their phrasing. Short answer (deep breath): I’d rather have the raw version with some mistakes but I understand that might put these people at a disadvantage professionally, and I don’t know how to solve that and it seems like a larger cultural issue but I don’t know what to do in the meantime. I might update this post if my opinions change or as I want to build out information, but the bottom line? I don’t like AI. Sorry.

Changelog

  • 2025-07-11: Added reference to the METR study. See footnote 4.
  • 2025-07-11: Updated carbon footprint section to be…let’s say…more cogent.

Footnotes

1 By “AI”, I mean - usually - large language models (like ChatGPT et al.). Sometimes I’ll include image-based generative models (like, say, DALL-E) in that definition. The context will hopefully be obvious. I don’t have issues with (and in fact, use and write) machine learning algorithms.

2 Important note that there have been no documented cases of security exploits via slopsquatting. Another note: ChatGPT (4o-mini) appears to be pretty good at asking clarifying questions when given intentionally confusing phrases in my tests, but it did suggest a recipe for a ‘black-bean mint julep’ when asked, insisting that while non-traditional, the ‘black bean syrup’ it provided instructions for would lend an ’earthy’ flavor.

3 https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/

4 Recent results from a randomized controlled trial came out after I made this post, suggesting that experienced developers complete tasks 19% slower with AI. Developers expected AI to reduce their task time by roughly 25%, and even after completing the task they estimated that AI had reduced the time it would otherwise have taken by 20% – in spite of the measurements showing that AI made these tasks take 19% longer.

5 And woe to thee who thinks it can be: https://social.lol/@von/114788592078507875

6 And it’s bonkers that we turn a percentage into a letter grade and then back into a number again

7 Actually, I consider writing court documents and commission reports with non-existent citations to be much worse, but whatever.

8 This is actually pretty untrue for me. I have specific artists that I admire and follow on social networks and am inspired by them and their writing.

9 (money)