OBLIGATORY DISCLAIMER: This post talks about AI, machine learning, or large language models. The technical aspects of these are, obviously, changing pretty rapidly. Be mindful of the publication date before you yell at me about how I'm wrong about the facts on the ground.
Mores change. I've been on the internet longer than I should've been, and so I watched lol go from netspeak to everybody-speak—or maybe just netspeak became everybody-speak to a greater or lesser degree, and I watched leetspeak become something that Older You was embarrassed to have unloaded onto people. (My own sins go the other way; I used to be way too into writing full sentences with proper capitalization and Ending With A Full Stop... in IRC.)
Then, a heck of a lot later, we have COVID turning Zoom from something that you used if you had to have a meeting with somebody on the other side of the country into something you used to talk to somebody down the street. That subsided when talking to friends and family, but didn't at work, because everybody and their dog's remote now to some degree or another, and so individual workplaces have to settle out just how often they're willing to tolerate the visual portion of your workplace attendance being a green dot with two letters in it.
The move to teleconferencing for everything was, in hindsight, a pretty big change itself? But it didn't really feel like one to me at the time. Maybe because I used the thing already, and it just became something to bring the normal people into. I lived here.
But—and here's where we get into what feels weird—I just got bushwhacked by somebody with an LLM. I am, as I write this, still processing how I feel about it, and yet I have been living in LLM-land longer than almost all of the human race.
So I want to talk about it, and I want to chew on it a little.
I work at the Phone Company. My team's both technical and transformational; our remit is to both enable useful pathways for different teams to collaborate and communicate via APIs, and then also build the case for actually doing so. Somebody needs to look at a camera and sound authoritative, and for my sins—well, it means I do a lot of talking. (A fact that will surely surprise no ardent followers of Normal Men, Bluesky's Only Podcast.)
I mention that mostly because it lends context to why I have some Ed Marketing collateral already together. Your humble correspondent has a headshot on hand, for example, and a bio for when we do w*binars. And when somebody on another team asked for "a bio", no details, for a thing that I'd volunteered to help out with, I sent that over.
They got back to me later the same day. "Your bio's really funny! But I was looking for something a little longer." Guilty on both counts, because I try to be brief and nobody really cares about the last mumblemumble years of my career, but it's breezy and I'm going for funny because I do want to make it clear to folks here that I'm more in the doer camp than the talker camp. And this is all fine so far as it goes: I got a vague question and I gave back a canned response. Do you want me to refine it? Just tell me what you want.
This person did not, however, follow this up with what they wanted. I got something new.
"I fed your directory bio to an AI, can you edit what it said?"
Reader, I try to be a Reasonable Guy. Working at the Phone Company has given me an appreciation for letting things simmer in ways that the startup pressure cooker wouldn't allow. If I get something wild in a message here, I try to take a breath before responding, because most things are well-intentioned even if they're incoherent, sloppy, whatever.
It was hard not to blow my lid, really really hard, and to a degree that surprised me. I saw red in a way I don't think I've felt for years. I've never shouted at a coworker in my life, but I sure wanted to. Not sure which thing I wanted to yell, mind you, but the "why didn't you just tell me what you wanted?!" came up first, and then, after I read it, something between "so you asked it to lie about me?" and "if you knew how to use this correctly it wouldn't have had to lie about me!" hit me between the eyes.
Of course, I didn't say anything like this. I'm an adult; I controlled myself. I responded, tightly but (I think) not rudely, that most folks I know who spend a lot of time working with LLMs find it to be, let's call it, a breach of etiquette to have an LLM hallucinate about somebody, and then in about five minutes I wrote the bio that they had wanted but not told me they'd wanted the first time around.
Then I went to kvetch to the group chat.
Mostly because I wasn't sure "most folks" was actually true. I get that way sometimes; I'm a little off-kilter and some external orientation from some normal men and from the Normal Men helps, less for validation and more to figure out if I'm the Abnormal Man here.
But, no, the reaction was pretty universal. LLM People went "ew", normal folks went "ew", and I am continuing to go "ew" right now, forty-five minutes later.
So, cool story bro, but what's the point?
I don't perturb this many electrons just to talk about somebody being weird at work, of course. I mean, I like complaining as much as anybody, so I would perturb this many electrons for that purpose alone. But I'm not used to that kind of intensity of reaction, so I want to think about it a little bit.
The why of why I found myself seeing red...so like, there are minor sins there. Insufficient attention paid to asking for what they actually wanted, that kind of thing. But the part that rocks me, right now, is in that second-wave reaction I mentioned above. You had it lie about me and you didn't know or care enough to make it not?
And I think we're going to be seeing that a lot more, because "didn't know or care" is load-bearing. In defense of this person, they clearly tried...to a point. "I gave it your directory bio" is, admittedly, Doing Better than a lot of people. But it's all the other ephemera of the decision that makes my gorge rise, both...moral?...and practical.
Practical comes first, because it's easier. They didn't share their prompt with me, and I don't need them to, because the output was bad. My guess was they used Gemini 2.5 Flash, from the way it stitched words together and jerked from one idea to another, taking the stuff that I'd written in my short bio (but with a layer of corp-speak—think of the way that somebody puts scare quotes around an idiom to avoid any chance of sounding friendly) and trying to bolt on my actual job title from the blurb I wrote two years ago in the company directory. It was bad writing. But it was also lying; it made stuff up about me continuing to work on Demuxed "with friends" on an ongoing basis even though I haven't been involved in a few years now. (Which is a bummer. Demuxed is a great time. If you work in video and haven't been, go.)
This is just unacceptable, unfit-for-purpose stuff. It'd be a bad output even if it had been written entirely by a human.
Moral...is weirder, right? And while I've never been entirely comfortable with the purely utilitarian "it's just a tool like any other" line that a lot of people trot out, the innate how dare you have the homunculus lie about me feels qualitatively different. Cursor getting a bit of code wrong is different from Gemini getting me wrong. Add on top of that the assumption made by the person trying to drive the LLM that it's no big deal, so not-a-big-deal as to not even ask if I was okay with it, and I start to feel like we're living in wildly different worlds. (There's also a little bit of the you're asking me to check the robot's work about me? to it too, I think; I have no problem reviewing LLM outputs for correctness myself, nor in asking people to review them about something like a piece of code or a service they own. But it feels like a difference of kind to have to correct it about me as a person.)
Stating the obvious: our employer doesn't care. Productivity Enhancers must Productivity Enhance, and this person is acting in the Productivity Enhanceful way, even if they're making slop. But this feels like the kind of thing that we need to get ahead of as a culture, both at the job and outside of it.
A lot of the folks I know personally who are leveraging LLMs effectively seem to have ended up in a consistent place about this. We don't send each other emails generated by LLMs; we talk on Slack. And personally, the only time I give somebody minimally edited LLM output is if it is a strict summary of something I dictated to it, because getting actual information across in a clearer format than my walk-the-dog rambling is good for everyone. But it's an interchange document, if that makes sense, not a social message (which most work communications are!), and it comes along with the actual social bits delivered through Slack alongside the Markdown doc. And when using LLMs for code, not a line goes out the door that I haven't read (when it's critical) or that isn't covered to my satisfaction (when it's not) by tests I either wrote myself or reviewed exhaustively.
The people who are not using LLMs effectively are instead doing the memes. Blobbing out emails so somebody else can use an LLM to winnow them back down—in theory, anyway; I think most people are just trudging through the reading of them because the Gemini stuff in Gmail is hot garbage. Sludging up product requirement documents with Business Aligned Words instead of just saying what the thing is and what to do. And spurting out thousands upon thousands of lines of repeat-yourself code that might or might not actually work.
We as technologists and, to be honest, as leaders (sorry, sorry, I'm trying to delete it) probably have to make the implicit explicit. Something like: things for people should be written by other people and are the responsibility of those people. I dunno, it needs some workshopping, but I'd start there.
Last thought, because I started this as a brief musing and it's an hour and a half later and the dog wants a walk. When I told George about this one, his reaction brought me up short, because it has the ring of truth:
AI will mean extreme inequality and this is a great example. Like [that person] is just bad at all this.
And, yeah. Even when you set aside the moral ick of this whole thing, the results were bad. Was the person in question incentivized to do a good job? Well, no, but let's be real—the results that untrained people are getting from LLMs aren't better where they do matter, either.
Some of that is in fact a training problem. Sound the golikehellmachine horn.
But some of it is a people problem. Because when it comes to LLMs, the thing isn't what it looks like—and you have to not respond to how it looks, you have to respond to what it is. It's not a person. It can't reliably parse what we normally treat as the unspoken bits and pieces of human interaction, and that's been true up through the newest models out there. Which doesn't, of course, make them useless: if you know how to write with clarity and specificity, and can adopt a "theory of mind" to intuit what sorts of blind alleys the LLM will run down, you can get a ton out of them.
But people are used to only needing part of a theory of mind to deal with other people.
I think it took me my whole life to develop the muscles needed to do it for the robot. I worry that that is what's going to kneecap people who just aren't prepared for it. And the push to hand LLMs to people who can pretty much only turn them into sludge machines feels real bad.
Real bad.
(Note: because my blog is a bespoke pile of nonsense, I...haven't gotten email validations working yet, so comments below don't work. They will work, because I think blogs without comments aren't blogs, but...not yet. Get at your humble correspondent on Bluesky if you want to yell about things.)
–Ed