The Art of Wanting
David Bau is a former Google engineer and current computer science professor at Northeastern University whose research focuses on understanding how AI systems work. His lab studies the structure and interpretation of deep networks. This post was originally published to David’s personal blog on January 17, 2026.
Some folks, like Sam Altman, define AGI in terms of AI's usefulness surpassing a human threshold: the point at which we have "a highly autonomous system that outperforms humans at most economically valuable work."
The idea here is that AGI is about money. When AI outperforms humans at most economically valuable work, it could go collect all those dollars itself. By this measure, we might have passed that threshold a year or two ago...
A Normal Revolution
If you total up all the dollars, I could believe AI can already do most of the work people are paid for today. It is an economically transformative invention. But today AI is a “normal technology revolution”: a big deal, but similar to the industrial revolution, the invention of the wheel, or the taming of fire.
Before the industrial revolution, “labor” meant just that: most work people did was physical. Hauling things around, pulling crops out of the earth, tending animals. A few artisans spun thread and created manufactured items. The creation of engines put an end to that way of life.
The Jacquard loom workers lived through it. Textile manufacturing was a major part of the economy in the early 1800s. Patterned fabric required two people per loom: a skilled weaver and a draw boy who sat alongside the machine, manually raising and lowering each warp thread to create the pattern. The Jacquard loom, driven by punch cards, eliminated the draw boy entirely while increasing output more than twenty-fold. Within a few decades, steam-powered looms replaced the skilled weaver too. Pattern designers remained important, a small and celebrated profession, but this tiny number of jobs did not create employment for the millions of displaced weavers. Those jobs were gone forever.
Yet the ability to specify patterns on punch cards turned out to be the seed of something larger: paper tape, magnetic storage, software. Today millions of people make their living designing instructions for machines that have nothing to do with fabric. No one in 1800 could have imagined these jobs. The problems they solve didn’t exist yet.
In today’s world, most dollars paid to people are for “intellectual labor”: hauling information around, digging for it, harvesting it, organizing it. And the power of AI means that a lot of this work will now be automated. Maybe most of the dollars in the whole economy. The trajectory will be the same. New work will emerge that we do not yet recognize as work.
What will the new work be? I think it will be the work of wanting.
The Work of Wanting
Any artist will tell you that art is not just about the craft of creating the artifact, but about understanding what you want to say. That impulse to shape something and share it, that need to communicate. Understanding what you really want to say is nontrivial, utterly difficult, essentially human.
This isn’t just about high-minded art. Any engineer, any nurse, any office worker will tell you that beyond the mechanics of their job, they constantly face the question: what kind of world do they want to create? There is not just one way to do any complex thing. You can do it badly, without care and without love. Or you can be intentional about it: really understand what you’re going for, take care to define what it means to do it well, pay attention to what has happened once it’s done, circle around to improve your vision and perfect the thing, then showcase it, explain it, share it.
This “high practice of wanting” is very hard to do well. It is nontrivial, utterly difficult, essentially human. And today we do not think of wanting as work. But it will be.
What does it mean to want something well?
It is not as simple as it sounds. To truly want something, to want it in a way worthy of what it costs you to care, requires at least three things.
First: understand the implications of your choice. This is hard. A four-year-old who eats the whole bag of candy does not really know what he wants. He wants the taste; he does not want the stomachache. This only gets harder as the world becomes more complex. AI can help us model implications we would never trace on our own: the paths forward, the paths not taken, why we might want one over another. But it is not enough for AI to know the implications. To understand and navigate the choices facing us, we need to know what AI knows.
Second: specify your choice. Even when you think you know what you want, it is easy to misspecify or underspecify it. Consider: I want to get from Boston to LA by tomorrow, as cheaply as possible. I don’t have much money; I don’t need a comfortable seat; just make it cheap. Well: the cheapest option is to slice you up and send you as pieces of meat by overnight parcel. Careful what you ask for. Underspecification is emerging as a serious safety issue in AI.
Third: recognize your choice. After you act, you must remain attentive. Did you get what you had in mind? In the AI era, our activities will become far more complex than we are accustomed to. In my own experience vibe coding, I found I could manage roughly twenty times the complexity for the same amount of attention and effort. At that scale, thorough understanding and precise specification become nearly impossible up front. Things will go wrong. You need to notice, stop, and repeat the process: improving your understanding, honing your specification, trying again.
The elements of wanting (understanding, specifying, recognizing) are what people will do in the era of AGI. These are the jobs of the future. Wanting well means taking responsibility for your choices.
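To make the second element concrete, here is a minimal, deliberately silly sketch in Python of how a literal-minded optimizer treats an underspecified objective. The options, the costs, and the `arrives_intact` flag are all invented for illustration; they are assumptions for this sketch, not part of the original argument or any real system.

```python
# A hypothetical sketch of underspecification: an "optimizer" that takes a
# stated objective literally. Options, costs, and flags are invented.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost: float           # dollars
    arrives_intact: bool   # the constraint the requester forgot to state

options = [
    Option("red-eye economy flight", cost=180.0, arrives_intact=True),
    Option("three-leg budget itinerary", cost=95.0, arrives_intact=True),
    Option("overnight parcel (pieces of meat)", cost=60.0, arrives_intact=False),
]

def cheapest(options, require_intact=False):
    """Pick the lowest-cost option, optionally honoring the unstated constraint."""
    feasible = [o for o in options if o.arrives_intact or not require_intact]
    return min(feasible, key=lambda o: o.cost)

# The literal objective "just make it cheap" picks the absurd option...
print(cheapest(options).name)                       # overnight parcel (pieces of meat)
# ...and only the added constraint recovers what was actually wanted.
print(cheapest(options, require_intact=True).name)  # three-leg budget itinerary
```

The failure is not in the optimizer; it does exactly what it was asked. The failure is in the asking, which is why specifying your choice is work.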
Who Gets to Decide
But there is a deeper question around AGI: once all thinking is automated, is there any need for people at all?
At the root of this question is not whether AGI can do most of the thinking that is done today, but whether AGI is recursive: whether AI is so smart that no matter what humans might choose to spend their time on in the future (including the work of improving AI), AI will immediately be better at that too; so good at learning, predicting, and imitating humans that there is no room left for people to be needed for anything.
What if AGI can even do all the “wanting” for humans?
That is a stronger definition of AGI. I do not think we are there, and I do not think we are close. We could delude ourselves into pretending the AGI can do the wanting. But it cannot.
Why not? Because recursive AGI isn’t really about capability at all. It’s about who gets to decide what matters.
A plant wants to point at the sun. We can describe that mechanistically: phototropism, the auxin gradient, differential cell growth. The plant “wants” sunlight the way a thermostat “wants” a certain temperature. We don’t consult the houseplant about urban planning.
That’s not because the plant is bad at identifying goals. It is because the plant has no normative authority. Its wants aren’t the kind of wants that count.
AI is the same. Even if AI becomes impossibly good at identifying goals, predicting outcomes, recommending actions, it will always be like the plant growing toward the sun. A process that optimizes, not a subject that decides. AI will get better at anticipating human wants, at guessing our goals, and this will make it more useful. Yet as humanlike as AI becomes, its wants (if we can call them that) aren’t normative. Not in our human world.
Wanting the world to be a certain way is our privilege and our unique responsibility. We are the ones with skin in the game: the ones who live and die, who suffer and flourish, whose children inherit whatever world we leave behind. To be the judge, the arbiter, the decider: this is what it means to be human in relation to our tools.
We are far from being able to offload that responsibility to AI. And we have a choice in how we build it.
We can design AI that dulls our ability to want, to understand, to choose. Or we can design AI that sharpens our ability to comprehend a complex world, and to take responsibility for the things we want.
This is the most important choice we face in AI today.