[Image: a pixelated shot of a factory in Texas where a worker is assembling wood-decked steel trailers. Photo by Kiefer Likens @ Unsplash; edits by me.]

The shape of a knowledge worker

In a recent post, I threw around the term "cognitive exponent" a bunch. Today I'd like to talk about a thing that might help us frame our investigation of what puts someone on the right side of that exponential graph.

Jan 5, 2026

tech


PREVIOUSLY, ON ED'S RAMBLINGS, I fumbled towards a description of LLMs as a cognitive exponent. But, as with the numerical concept, it turns out that the base of your bˣ really kinda matters—and at the time I wasn't sure how to even start calculating it.

Good news! I'm still not. But I did run across something that's interesting enough to kick around a little bit.

Jordan Carlson on Bluesky tipped me off to the existence of the OECD's Programme for the International Assessment of Adult Competencies (PIAAC), and specifically to their December 2024 report: "Do Adults Have the Skills They Need to Thrive in a Changing World?" [PDF]. In so doing, Jordan was kind enough to counteract my inherent laziness by pointing me at the definitions of levels of functional literacy that came out of their testing. Which was enough bait on the hook to get me to read more of it.

Jordan undersold it, though. PIAAC does a lot of work. Their investigations span 31 countries and over 160,000 adults. Their studies don't just look at literacy, though that's a major component: their evaluations also cover numeracy and a new-to-2024 category of adaptive problem solving, with each domain measured on a scale of 0 to 500 and grouped into proficiency levels. And it's in the definitions they give of effective literacy and effective problem solving that I think we—maybe!—have something interesting to kick around.

PIAAC level definitions for literacy

(Here's a sidenote that will age poorly: I'm experimenting with this tabbed-subdocument thing and apparently my CSS has decided to rebel. I trust you'll get the idea, though.)

At Level 2, adults are able to access and understand information in longer texts with some distracting information. They can navigate within simple multi-page digital texts to access and identify target information from various parts of the text. They can understand by paraphrasing or making inferences, based on single or adjacent pieces of information. Adults at Level 2 can consider more than one criterion or constraint in selecting or generating a response.

The texts at this level can include multiple paragraphs distributed over one long or a few short pages, including simple websites. Noncontinuous texts may feature a two-dimensional table or a simple flow diagram. Access to target information may require the use of signaling or navigation devices typical of longer print or digital texts. The texts may include some distracting information. Tasks and texts at this level sometimes deal with specific, possibly unfamiliar situations. Tasks require respondents to perform indirect matches between the text and content information, sometimes based on lengthy instructions. Some task statements provide little guidance regarding how to perform the task. Task achievement often requires the test taker to either reason about one piece of information or to gather information across multiple processing cycles.

PIAAC level definitions for adaptive problem solving

(PIAAC doesn't have a Level 5 definition for adaptive problem solving.)

Adults at this level can identify and apply solutions that consist of several steps in problems that require considering one target variable to judge whether the problem has been solved. In dynamic problems that exhibit change, adults at this level can identify relevant information if they are prompted to specific aspects of the change or if changes are transparent, occur only one at a time, relate to a single problem feature, and are easily accessible. Problems at this level are presented in well-structured environments and contain only a few information elements with direct relevance to the problem. Minor impasses may be introduced but these can be resolved easily by adjusting the initial problem-solving procedure.

Adults at Level 2 engage in the following cognitive processes:

  • develop mental models for simple to moderately difficult problems and adapt these as needed,
  • adequately react to changes that are presented in visible increments, and
  • adapt resolution strategies to changes in the problem statement and the environment if these changes are of low or moderate cognitive complexity.

Adults at this level engage in the following metacognitive processes:

  • monitor progress towards a solution that consists of one specific goal,
  • search for optimal solutions by evaluating alternative solution paths within a given problem environment of low to moderate complexity, and
  • reflect on the chosen solution strategy if an impasse occurs and when explicitly prompted to adapt.

So if you've given these a quick skim, you can probably guess why I called this an LLM-related post. The higher levels of these descriptions sound like an LLM operations manual. "Define the nature of problems in ill-structured and information-rich contexts." "Integrate multiple sources of information and their interactions." "Identify and disregard irrelevant information." "Adapt the problem-solving process to changes even if [...] changes are not obvious, occur unexpectedly, or require a major reevaluation of the problem."

Or try this one on for size: "Continuously reflect and monitor the problem-solving process," while "constantly revisit[ing] and reevaluat[ing] [our] mental model, available information, and goal attainment." I mean—this isn't something abstractly related to prompt and context engineering, that's just what this is. It's the exact skillset that most of my coworkers and 99% of Bluesky are tired of hearing me go on about.

 

PIAAC's report also counterintuitively mentions that the tested-for levels don't strongly correlate within individuals; someone scoring very high in literacy doesn't suggest a high score in adaptive problem solving. I think this is an interesting lens through which to regard the more successful cohorts of LLM users that I see in my misadventures. (I try very hard to draw the line between prodigious uses of LLMs, ones that output a lot of text that somebody has to slog through, and effective uses of LLMs, outputs that match the requirements and are designed to be validatable. The former is not success: reading and evaluating the thing, applying criticism, and feeding one's own analysis back into the machine is part of adaptive problem solving.) Let me pull from my Cognitive Exponents piece:

where I've actually found the biggest immediate jump is in staff engineers who code because they like to code, who can communicate clearly and are comfortable moving up and down the stack.

And I had this in mind when I tried to come up with a clear definition of what an LLM actually is:

an LLM is a semi-autonomous tool for working with labeled, coherent information at a continuously variable level of abstraction.

The mental picture of the person I run into who capital-G Gets the definition laid out above, who is Having A Great Time with LLM tools and is Getting Things Done, is a mid- to late-career staff engineer. (The use of "engineer" is not an endorsement here. I'm not an engineer, I'm a software developer. But the industry calls me an engineer. Sorry.) They're not necessarily an amazing writer, but they read, and they read effectively, both in terms of extracting meaning from words and shortcutting around what parts matter and what parts don't. (Which is to say, if I give them a doc with a table of contents, I don't have to then go "and pages 6-8 are relevant here.") We're talking about what PIAAC classifies as Level 3-ish for literacy, roughly the top half of the distribution. But we start talking about the top third, Level 3 (this time 3 out of 4, remember), for adaptive problem solving. Sometimes I use the phrase "theory of mind" as shorthand, but you can expand that just fine to "generating mental models for moderately to highly complex problems" and "monitor[ing] comprehension of the problem and the changes in the problem", where "the problem" is defined as "what the hell is that giant pile of matrix math doing to my document/codebase/whatever?"

These aren't new skillsets, of course. If anything, they're new problems being stacked on top of the old ones; interpreting the large language model is an and, not an instead of, understanding business requirements and translating them into systems, modules, and contracts. (But, of course, the payoff is accelerating the process of chewing on all of that stuff, so I think it's worth it.)

Almost invariably, too, those are the staff engineers who practice systems thinking as if by habit. The book dorks (hi) among us talk about stuff like Thinking in Systems, Normal Accidents, A Pattern Language, or at least know what we're nodding at when they come up. And I think this is a relevant commonality, too. Those books are about understanding systems, but in so understanding they acknowledge that you can't avoid abstraction. You might slide down that ladder (sorry, Tim Rogers, cool people slide down ladders) when the implementation matters to the system as a whole or when you need to rationalize the behavior and the "worldview" of two disparate subsystems, but most of the time? You're looking at the box with the label, eyeing the hose ports and plugs coming off of it, and figuring out what needs to connect where. (Software development, being a matryoshka doll of systems, means that this often holds when you're not talking about the whole, too; most folks, for better or worse, aren't exactly eyeballing every line of every dependency they pull in, either.)

The wielding of abstractions and systems thinking is the straw that stirs the LLM drink, too. New context, same skill. (And, relatedly, the drive to put LLMs in every set of hands in my profession, whether or not they're exhibiting these skills, keeps me up at night.)

 

I see people, smart on many axes, who don't succeed with these, too. When discussing cognitive exponents, I had to contrast them with "cognitive amplifiers", which are almost a failure case of LLMs: amplifiers amplify everything. Signal? It gets bigger. Noise? That gets bigger, too. And you often see this showing up a couple of different ways. One is insufficient self-criticism to see whether or not those outputs are actually any good. The other is more of a quitting thing. "It hallucinates all the time." "Spicy autocomplete." The assumption that an LLM is an oracle or it's useless, that determinism is the only way to get something useful out the other end (remind me again how deterministic a human being is?), the rejection of the idea that a system with probabilistic elements can be guided and can have correcting functions layered on top of it to produce consistent results—this is, if you squint, a different failure mode of adaptive thinking.
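To make the "correcting functions layered on top" idea concrete, here's a minimal sketch. Everything in it is hypothetical: `flaky_model` is a stand-in for any probabilistic generator (an LLM call, in practice), and the validator is whatever explicit acceptance criterion you can write down. The point is just that nondeterminism plus a check-and-retry loop can yield consistent results.

```python
import random


def flaky_model(prompt, rng):
    """Stand-in for a probabilistic component: usually returns a
    well-formed answer, sometimes something unusable."""
    return "42" if rng.random() > 0.3 else "forty-two-ish"


def guided(prompt, validate, retries=5, seed=0):
    """Layer a correcting function over the nondeterministic part:
    call it, check the output against an explicit criterion, and
    retry until it passes or the budget runs out."""
    rng = random.Random(seed)
    for _ in range(retries):
        out = flaky_model(prompt, rng)
        if validate(out):
            return out
    raise RuntimeError("no valid output within retry budget")


answer = guided("What is 6 * 7?", validate=str.isdigit)
```

Real systems swap in richer validators (schema checks, test suites, a second model grading the first), but the shape is the same: the probabilistic piece proposes, a deterministic piece disposes.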

 

As with the discussion of cognitive exponents, I'm not here to give you a One Weird Trick for finding these people. And I have no idea if this is something that can be trained in situ rather than made as the product of long experience of doing stuff in the world (whether with a computer or not). But the reason I wanted to write briefly (briefly for me, anyway) about the PIAAC stuff is that I think their definitions port pretty cleanly as a lodestar for what we want and where we want people to go in order to succeed with these tools and in the weird new working world that they're opening up. If nothing else, the PIAAC assessment provides a naming-of-parts for the prerequisite skills that I can get behind, and maybe from there there's a way to start thinking about how to make work-sample tests or other ways to find what we're looking for.

–Ed


back to blog index