A heavily pixelated picture of a car being constructed by robotic arms. Lenny Kuhne @ Unsplash; edits by me

The abstraction you didn't ask for

When I say "generative AI isn't going away," people hear "and you have to like it." You don't, and you might be right not to. But the is-ought divide here is real and we should all be preparing for both outcomes.

Dec 30, 2025

tech


The widening of the is-ought divide—the rhetorical disconnect between describing what is and prescribing what ought to be—is one of the more annoying parts of Our New LLM Future. I am tired of it.

Okay, I'll cop: I say that I'm tired of it, specifically of the way that conversations on Bluesky and in other internet watering holes get derailed by "none of it works! It can't work!" But it would be dishonest not to recognize the major cause of the polarization that's brought us here. The hype cycle's at a fever pitch and everyone and their dog is running a startup that, in fine dot-com fashion, is "we do XYZ but we added AI to it." It is exhausting and it is dumb and it is understandable to be annoyed and outright resentful.

Where things have gone entirely off the rails is that is-ought divide. If you're online (non-LinkedIn edition) you've probably seen plenty of people get real real mad at attempts to describe what we're seeing today, rather than prescribe it; this post is in part inspired by watching the pitchforks come out for an educator who's saying something like "generative AI is not going away and reasonable pedagogy grapples with that fact," with no indication of whether they think this is a good thing or not. And, man, the replies to the good doctor are grim. The usual claims that none of this stuff works at all, PhDs sniffing that they didn't sign up to "teach AI", and (he said, heavily) the rest of the usual.

 

Something I find myself saying over and over again is "you don't have to like it, but you probably do have to understand it," and this too gets taken as endorsement. In particular, there's a flavor of software developer that's invested in the insistence that no, none of this stuff actually works. But software development is, I think, settled; we can table that. Instead, I want to talk through how I look at this stuff and why I think it's such a powerful lever. And in doing so (h/t to my friend Conan, who read the first four paragraphs of this post and then one-shot a better description of the problem here than I managed), I am aware that I'm asking some of the audience to acknowledge that interrogating a system honestly does sometimes mean reaching past the (often very real!) ethical concerns to understand where these things are and are not effective, even powerful. We cannot have all conversations at once, and understanding is a prerequisite to changing anything in useful ways.

What I think is more useful to toss around, right now, is what happens when we widen the scope: what happens when we look at other disciplines under that general umbrella of "knowledge work". In this I include both the creation of knowledge—which means I am setting myself up for nasty tweets from scholars—as well as the leveraging of existing knowledge.

What are LLMs, really?

Let's talk briefly about what, under the chatbot hood, I contend these weird little guys actually are. So here's my working description: an LLM is a semi-autonomous tool for working with labeled, coherent information at a continuously variable level of abstraction.

This has implications. Like: LLMs are not oracles. If given no information, or too much incoherent and unlabeled information to quickly search for context, they don't work. And this is, to be clear, a pretty rampant problem; I've written before about the off-putting nature of having somebody ask Gemini to hallucinate about me in a speaker bio, and it's this use of LLMs that's among the most dangerous: credulous, unfiltered asks that couldn't possibly be satisfied ("couldn't" in terms of the tools available to, and the knowledge of, the operator), and the lack of theory of mind to understand why they couldn't in the first place.
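To put a finer point on the "labeled, coherent information" part, here's a minimal sketch of the difference between an ask these tools can satisfy and one they can't. It assumes the Anthropic Python SDK; the model name is a placeholder and the bio facts are invented for illustration.

```python
# A minimal sketch of ungrounded vs. grounded asks. Assumes the Anthropic
# Python SDK and an ANTHROPIC_API_KEY in the environment; the model name is
# a placeholder and the source facts are invented.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; any current model works
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Ungrounded: there is nothing to work with, so any specifics in the answer
# are necessarily invented. This is the speaker-bio failure mode.
bad_bio = ask("Write a 100-word conference speaker bio for Ed Example.")

# Grounded: the same request, but with coherent, labeled source material the
# model can restructure rather than fabricate.
facts = """
Name: Ed Example
Role: staff software developer at a logistics company
Recent work: LLM-assisted document pipelines; a talk on test tooling
"""
better_bio = ask(f"Using only the facts below, write a 100-word speaker bio.\n\n{facts}")
```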

 

Other implications aren't so universally negative, though. My friend Sean coined "information forklift" to describe a useful (and healthy) way to interact with these tools. (By "these tools" I don't mean all uses of LLMs; there are tons of ways to automate problems with some LLM input. I mean the general affordances of directly interacting with one in some kind of chat interface. I think you should be using Claude Code or similar over a web chatbot in order to really make this click, but that's a topic for another time.) With your permission (not his, though), I'm going to torture the metaphor a little, because I think it holds up awfully well. A forklift moves stuff. More specifically, it moves packaged stuff; if you just have a stack of ramen packs on that pallet, they're going to go flying when you lift it. Instead they're in boxes, sometimes in bigger boxes, sometimes shrink-wrapped—the point is that you're not moving ramen, you're moving pallets, and you only really have to care about the outward-facing behaviors of that pallet: its dimensions, whether you can stack one on top of another, that sort of thing.

But eventually somebody has to stock the shelves. Somebody has to crack open that pallet, fight with thirty feet of plastic shrinkwrap, and figure out what to do with way too many packs of spicy shrimp. (This metaphor has thoroughly gotten out of hand.) That somebody might not be you in any given situation, but maybe it is; hence the continuously variable part of the definition. That variability is a place where (I think) LLMs have a pretty fascinating trick up their sleeve: when used with intent and with an understanding of what they can and can't do, it turns out that you can direct an LLM to do a lot of useful stuff. And where it fails, you can frequently direct it in reusable ways to address those failings, both now and for next time. Your variable-level-of-abstraction can opener gets better at being a can opener the more you use it.

Not every task is at the pallet level or the box level. Sometimes it's at the "boil the ramen" level: writing code from a specification, summarizing a survey, classifying what a text is talking about. And so somebody's gotta care about that, too. Which, fair. But that's also where the semi-autonomous part of the definition comes into play. It doesn't take a genius to go boil some water and throw in that ramen, it just takes a little bit of time and attention. LLMs are, within their operating parameters, a source of time and attention. And the contention of the reasonable-AI types (of whom I consider myself one) is that there are a lot of tasks in the knowledge-work space that probably are closer to pot-boilers than anything else. And that, when approached by somebody who's looking for them, they can be isolated, tested, and implemented. (This doesn't necessarily mean now, either. I often invoke the "infinite paper tape LLM" as shorthand for ever-larger contexts and ever-better-instructed models, not better-trained ones, though Opus 4.5 sure suggests that training and structuring help. I currently think that anything where we can comprehensively describe inputs, outputs, and transform steps in words is probably within the capabilities of that infinite-paper-tape LLM, and the gap from here to there is a difference of degree rather than kind.)
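To make "isolated, tested, and implemented" a little more concrete, here's a minimal sketch of the shape one of those pot-boiler tasks can take: a classification step wrapped in a plain function, with a couple of regression checks you can rerun whenever the prompt or model changes. It assumes the Anthropic Python SDK; the labels, prompt, and model name are all stand-ins.

```python
# A minimal sketch of isolating and testing one pot-boiler task: classifying
# what a text is talking about. Assumes the Anthropic Python SDK; labels,
# prompt, and model name are stand-ins.
import anthropic

LABELS = ["billing", "bug report", "feature request", "other"]

client = anthropic.Anthropic()

def classify(text: str) -> str:
    """Ask the model for exactly one label, and guard against drift."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder
        max_tokens=16,
        system="Classify the user's text. Reply with exactly one of: " + ", ".join(LABELS),
        messages=[{"role": "user", "content": text}],
    )
    answer = response.content[0].text.strip().lower()
    # If the model wanders outside the label set, fall back rather than guess.
    return answer if answer in LABELS else "other"

# The "tested" part: a handful of known cases that run on every change, so a
# prompt tweak or a model swap that breaks the task surfaces immediately.
def test_classify():
    assert classify("I was charged twice this month") == "billing"
    assert classify("The export button crashes the app") == "bug report"
```

None of that is clever; that's the point. The task is small, the expectations are written down, and the failure mode is a failing test rather than a vibe.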

 

The above is not, of course, a universal truth. This is a normative view of the world and I am making a pretty fundamental claim that knowledge can be handled in this abstract way. (And I would think it can; I am a software developer. More on that in a minute.) But at the same time I want to stress that while I think this view is probably true, there is a critical descriptive component to it too: it doesn't really matter so much if I believe it, because a whole lot of people with money believe it, and they're actively working to implement the consequences of this worldview across the world.

Which is to say—you don't have to agree with me. But I think it's worth being prepared for the possibility that this is right: considering the competitive implications for Having A Job in that sort of economy, and doing a little bit of back-pocket preparation just in case.

The danger of overlooking mētis

In the previous bit, I described a pretty standard kind of cognitive abstraction. And software developers are used to abstractions, because that's what developers do: we take a real-world problem and (to a greater or lesser degree) try to shape it to fit a conceptual model that we develop. In skilled hands and with simple problems, that square real-world peg might just need a little bit of rounding at the corners to fit the abstracted hole of our solution; in less skilled hands or with more complicated problems, you might be looking at sanding it down to a circle or worse before it fits.

There's an excellent talk by Chris Krycho, called "Seeing Like a Programmer", that consciously calls back to James C. Scott's Seeing Like a State. I actually started a re-listen of the talk in order to refresh myself on the first half of it, which is very nuts-and-bolts for software development; the back half is what sniped me and in part prompted this post.

As Krycho invoked (I am assuming that approximately 90% of the internet has read Seeing Like a State by now, so I'm being brief here; if you haven't read it, you should, it is worth the price of admission. Sean Goedecke also wrote a tremendous post, "Seeing like a software company", that serves as an excellent companion piece to Krycho's talk and is also well worth a computer toucher's time), mētis is the Greek term Scott uses to describe tacit and practical knowledge. "Hand work," in a way; the stuff that resists being written down. A farmer knows when to let a field go fallow, and then what to plant when it's ready to be brought back, not because of a book but because of knowledge from their own experiences and those of the people around them, including the failure states when you do something wrong. Scott contrasts this with high modernism: a 20th-century project of making the world legible to bureaucracies. Standardizing, rationalizing, flattening local variation into manageable systems, disregarding mētis for the clean abstraction in somebody's head. Not, by itself, necessarily malicious. (The perfectly-spherical-cow idea of farming obviously cannot be separated from dekulakization and other Soviet policies that sacrificed millions for economic and political positioning, and I don't intend to separate them here.) But it came with a body count in the millions of Ukrainians and Kazakhs, far from the planning bureau.

Software developers don't run a central planning bureau (or at least, I hope we don't). But we make decisions about the appropriate level of abstraction all the time, while imbuing our own work with a kind of vibes. We struggle to capture the mētis of our own work—the why of design decisions, the load-bearing context that lives in our heads and not in the comments. And we're quick to flatten everyone else's mētis into requirements documents and user stories.

Krycho's point is that these are the same failure mode, and I agree with him. I try to approach this conservatively in my own work. I also know a lot of people and a lot of companies who don't.

 

So as part of saying that LLMs are a tool for working at variable levels of abstraction, I'm making a claim that some portion of mētis can be encoded, or at least approximated well enough to be useful. (One of the things that can be counterintuitive about using LLMs is that there are contexts where they can reverse-engineer mētis. It's not too difficult to ask Claude to identify the patterns behind why code gets written in specific ways across multiple contexts, for example. Relying on it to do so perfectly by itself is fraught, but the more data you have, the more information you can distill.) Which I think is true. Earlier this month, I came across a paper published by the Society of Indexers that insisted that building a back-of-the-book index was not something an LLM could do, or even meaningfully help with. You probably shouldn't be so confident on the internet, because somebody's going to take you up on it, and in about four afternoons I had a solid proof-of-concept index generator in place. It's not a thing I want to do for a living or write as a SaaS, so I left it at the proof-of-concept stage. But we're talking about something that had a clear path to 90%, maybe better, of the "perceptual quality" of something built by a person.
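For the curious, the shape of that proof of concept is not exotic. What follows is not the actual code, just a sketch of the general approach under the same assumptions as before (the Anthropic Python SDK, a placeholder model name, and a prompt that stands in for a much longer one): extract candidate entries page by page, then merge them into term-to-locator lists.

```python
# Not the actual proof of concept: a sketch of the general shape such a tool
# can take. Assumes the Anthropic Python SDK; the model name is a placeholder
# and the prompt stands in for something much longer.
import json
from collections import defaultdict

import anthropic

client = anthropic.Anthropic()

PROMPT = (
    "You are helping build a back-of-the-book index. For the page of text "
    "provided, return a JSON list of objects with a 'term' field, following "
    "the conventions of a professional subject index."
)

def terms_for_page(page_text: str) -> list[dict]:
    """Ask the model for candidate index entries on a single page."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder
        max_tokens=1024,
        system=PROMPT,
        messages=[{"role": "user", "content": page_text}],
    )
    # A real version needs to validate this output rather than trust it.
    return json.loads(response.content[0].text)

def build_index(pages: dict[int, str]) -> dict[str, list[int]]:
    """Merge per-page candidates into sorted term -> page-locator lists."""
    index: defaultdict[str, set[int]] = defaultdict(set)
    for number, text in pages.items():
        for entry in terms_for_page(text):
            index[entry["term"].strip().lower()].add(number)
    return {term: sorted(locators) for term, locators in sorted(index.items())}
```

The interesting work is in the merging and the conventions (sub-entries, cross-references, what not to index), which is exactly where the judgment the Society of Indexers cares about lives, and exactly where a human pass still earns its keep.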

My hypothetical book-indexer could argue that 90% isn't enough. That the last 10% is the part that matters, that my robot doesn't encode intuition and judgment about what a reader needs or a feel for the conventions of the discipline. And they might be right that that last 10% is all vibes, embedded in wordless practice and not recoverable through effort. But I suspect that that's not true, and even if it is: the first quarter of the 21st century should make it very clear that 90% of the thing for 5% of the price is hard to argue with.

Of course, I suspect that we will find many cases where "90% of the thing for $20" doesn't pencil out. Somebody will try it and it won't work: the thing isn't fit for purpose, or the output is quality-inelastic in ways that merit spending the money to get the rest of the way there.

But I also suspect that there are a whole lot more where it will pencil out than we'd like to admit, across a lot of fields. It's already hitting home for software developers, where you see a lot of talented folks insisting that Claude can't write code as well as they can. And maybe it can't. But if you are a turn-and-burn, short-order website developer, it absolutely can, and "how to climb above the rising sea" is a real problem today. I don't think it'll stop at code.

 

I think there is a path that splits the difference, and for the better. I've written previously about cognitive exponents, where the more you can put into the system the more you get out of it, and I think that's probably the big-hitter win of all of this. We'll see the most effective use and the biggest returns from subject-matter experts with strong intuition and understanding of their fields: people who can leverage that knowledge while also dispatching to those semi-autonomous agents the work that can tolerate the trade-offs of sending out "an eager intern" to shuffle through that pile of stuff and come back with its findings, or whatever. This feels like an uncomfortable way for people to think, but automation-versus-perfection is a common tradeoff in software; while I don't necessarily think software developers are the optimal operators for LLM-aided workflows, I think the idea of it might resonate better in a lot of cases.

I do suspect that this middle ground is high-functioning. But I also suspect it's a synthesis of skills that might be hard to source. I'm betting that this is dry land, though, and personally, this is what I'm doing with my career right now.

Why I'm out here on this hill

You'd be forgiven for inferring a positive read from how I describe the use of LLMs for knowledge work. And that's because, in aggregate and on balance, I'm pretty excited about them as tools. I think there's a lot of there there, and I'm excited to continue figuring out where they're good and how to use them to do stuff I care about.

But this has downsides, too, and some of them are profound, and it's stupid not to acknowledge that. As Scott describes in Seeing Like a State, high-modernist agricultural schemes tried to simplify farming into clearly legible (abstracted) systems. The goal was a more understandable, more coherent overall system that could be planned and controlled centrally, and eventually those system planners did learn (some things) and did incorporate (some of) the mētis that made local farming work. But it did starve people. And the people who literally starved to death did not exactly get to reap the rewards of the lessons learned from their loss.

Nothing I'm saying should be taken to delegitimize the justified anxiety and anger of folks who are on the receiving end of the 90%-for-$20 implementation. By that I mostly mean the producers of the work being disrupted, though it's probably pretty likely to make the consumption of that work worse, too. But I'm describing, not prescribing. Whether or not it will be net-good (and I am certainly plenty ambivalent about parts of it, if not the whole), I do believe that it will happen.

And that's just the things that do work, isn't it? Like, the way companies are going to try to use these tools, fueled in no small part by frontier model providers' breathless hype, is going to hurt people; it already has hurt people. I do think creative destruction is real and (again) on balance positive; I suspect that my team at work is going to accelerate substantially in 2026 after some pretty heavy layoffs late in 2025, and our LLM strategy is a lot of why I feel that way. But it's inhumane not to acknowledge that, yeah, businesses might like the "was impacted" language for a layoff, but there's a moral obligation to regard it more concretely than that.

But "this has negative consequences for some things and some people" is not the same as what you see from AI denialists. Any of the usual chestnuts should set off your alarms.

"It doesn't know anything."

"It's just autocomplete."

"It can't tell you how many R's are in 'strawberry'."

This stuff is willful ignorance. And, to try to take the strongest form of the position, I think it mostly stems from a distaste for that marketing and that hype. It's polarization. But it's dangerous, too. It is a form of actively not-looking that leaves people unprepared. I say "you can't argue with the weather" a lot (the motivations of the leading AI hype-guys in this space should not be discounted for an instant, but at the individual, how-this-affects-us level, these changes may as well be motiveless; it doesn't matter whether the blizzard has ill intent, it's still snowing), and this is why: because leaving tools out of our toolboxes makes us less prepared for new situations and for changes in the ground under our feet.

And when you go beyond the individual to the group, to society at large? Denial cedes the conversation to people who know what they're talking about. LLMs deserve a better class of critic, because the transformational upside comes with disruptive downside and both need to be thought through—and, like, I just can't think of many folks who fit that bill aside from Anil Dash.

 

Ultimately, if I'm wrong, and all this LLM stuff is just hype and air, then I'll go back to doing what I was doing before: building useful stuff, mostly with computers but sometimes not. If I'm not wrong, then I worry that the robot will squash a lot of people. I would rather that people thrive on top of the robot than get squashed underneath. And I think, and I argue, that that means understanding what it is and how it does work, even if you didn't ask for it and don't like it.

Happy New Year, all.

–Ed
