In a recent post, I threw around the term "cognitive exponent" a bunch. Today I'd like to talk about something that might help us frame the question of what puts someone on the right side of that exponential curve.
When I say "generative AI isn't going away," people hear "and you have to like it." You don't, and you might be right not to. But the is-ought divide here is real, and we should all be preparing either way.
I know a few people for whom LLMs have been a near-immediate multiplier of attention and effort. I know a lot more whom LLMs clearly make worse at thinking and doing things. So: why?