The Incuriosity Engine

Cooper Lund
Feb 19, 2025


Namanyay Goel wrote a great article this week, worth your time, about how AI is leading to junior developers who can't code. He's not a hater or an AI skeptic; he's someone who maintains his own AI code analyzer. In it he raises some points about how development used to work:

AI gives you answers, but the knowledge you gain is shallow. With StackOverflow, you had to read multiple expert discussions to get the full picture. It was slower, but you came out understanding not just what worked, but why it worked.

Think about every great developer you know. Did they get that good by copying solutions? No — they got there by understanding systems deeply and understanding other developers’ thought processes. That’s exactly what we’re losing.

It resonated with me because I could see my own career path in it.

I don't talk about what I do for work online all that much, and I've never written a blog about it, but I'm a Senior Systems Administrator with over a decade of experience. It's also something I didn't go to school for; I have a degree in International Relations. I was curious about computers from a young age, and after college I found that to be a far more saleable skill. As it turns out, a little curiosity can take you a very long way if you're willing to let it.

Jeopardy super-champ Amy Schneider wrote a great article for Defector called "How I Got Smart". It's another piece worth reading, and one of the things she mentions that I really liked is the idea that being smart is a commitment: a desire to be smart and a dedication to learning and its fruits. It's not an easy task!

Over the years I've built a mental model for how I work, and it's similar to what Schneider describes in her process of learning. First, a problem is identified. If you're in an entry-level position, like the ones Namanyay's junior developer would tackle or that I tackled working helpdesk, the problem is generally obvious because something isn't working, but as you advance in a career the problems become larger and more abstract. How do we scale this system so that it's no longer overloaded? How can we perform necessary upgrades on this portion of the system without causing issues with the rest of it? Personally, I like the problems that involve how users interact with the system and how to eliminate friction; I like asking, "How can we do this better?"

This triggers an investigation. At an entry level you might get an error code that you can look up, which leads you to documentation with the fix for the problem. As the problems get larger and more abstract, investigating becomes less about finding an answer and more about collecting and sorting information to understand the system, then considering solutions from there. You find pain points and you find big problems, but you also find little things that can be changed that add up. The larger and more complicated the problem, the more unwieldy this becomes, but you poke and prod and learn. That leads to a solution that needs to be implemented.

The reality of a technical position is that most of what you do isn't what people think of as technical work, like writing code; the majority of the work is being an investigator and a thinker. It's also the part of the process that creates growth. Computer systems are impossibly complex things; they creak and groan under their own weight. When a problem presents itself, the process of discovering a solution illuminates a part of that system that might not have been lit up for you before. That's the joy of the thing—your curiosity leads to results—but it's also what leads to your own growth. The work is the point.

The older I get, the more I understand that there aren't a lot of people who are stupid, but there are a lot of people who are incurious, and people who were curious about something read that incuriosity as stupidity. Users aren't stupid; they just don't think that they need to figure things out, and they generally don't — that's the job of those of us with technical jobs. However, there also needs to be some friction to get people to take a moment and think about things. If you just do something for someone, they don't really stop to think about what is happening or why, and understanding suffers. To use an example from my domain, mobile devices like smartphones and tablets are so good at just working that users aren't learning basic computer skills like how to use a file system, and that affects their ability to use the computer, which is still the primary tool for almost every white-collar job.

I've kept this in my domain of tech, but my primary worry with AI is much larger. AI is an engine that promises us unlimited, easy answers to any question we might have. You don't have to think about what to cook for dinner; you can let AI tell you what to make with what's in your fridge. You don't have to think about how to talk to a coworker; you can have AI figure it out. Hell, you don't need to put in the practice to become an artist; you can let the AI do it for you. But easy answers lead to incurious people, and incurious people get stagnant. It's a professional problem, where workers don't learn new skills beyond how to query the AI better, but it's also a personal problem.

The best people I know are lifelong learners, and it's something I strive to be as well. Asking an AI isn't really learning; it's receiving an answer. Learning, as mentioned above, is a deeper and longer process that involves absorbing more information, some of which you weren't expecting to find, and that information can improve your life in unexpected ways. A good teacher doesn't give you an answer; a good teacher makes you think about how to get the answer. AI fails at that, and more importantly, AI isn't sold as a teacher.

To come back to Amy Schneider's piece, she talks about how the process of learning about something is different from just receiving an answer:

So now not only have I acquired the (fairly useless) knowledge of the definition of oviparous, I’ve gained greater insight into how our society organizes itself, and the motivations (and thus implicit biases) that drive scientists. And of course any of these threads will lead in their turn to an ever increasing array of further threads, and following those threads is a richly rewarding experience, and one that will not just make you better at Jeopardy!, but better at living in society. At understanding what is going on around you, and why, and what might happen next, and how you might prepare for it.

It reminds me of a different writer, Kurt Vonnegut, who once told this story:

DAVID BRANCACCIO: There’s a little sweet moment, I’ve got to say, in a very intense book — your latest — in which you’re heading out the door and your wife says what are you doing? I think you say — I’m getting — I’m going to buy an envelope.

KURT VONNEGUT: Yeah.

DAVID BRANCACCIO: What happens then?

KURT VONNEGUT: Oh, she says well, you’re not a poor man. You know, why don’t you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I’m going to have a hell of a good time in the process of buying one envelope.

I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don’t know…

And, of course, the computers will do us out of that. And, what the computer people don’t realize, or they don’t care, is we’re dancing animals. You know, we love to move around. And, we’re not supposed to dance at all anymore.

I'm not a Luddite, but I do think that the most important thing that defines us as humans is a capacity for curiosity, and if we don't keep that curiosity and find new things to be curious about, we're sacrificing a part of our humanity to the machine in a way that we're all going to regret. This isn't to say that there's no use for AI; as someone with a love of Brian Eno and cybernetics, I love the idea of a computer that can think and create, but we can't let that replace our desire to think and create. If we do that, or if we let our work simply be a passthrough for whatever comes out of the AI, who are we anymore?
