What are we building for? Artificial intelligence, alignment, and a future worth wanting
Shape the conversation around a life well-lived
This essay marks the beginning of our new publication, [re:alignment], a venue for a broader, more humanistic conversation about artificial intelligence and society.
We are a coalition of technologists, academics, artists, and workers, united by a belief that this conversation must be wider, deeper, and more rigorous. Through analytic essays, personal interviews, and literary and visual art, we will critique orthodoxies and collectively envision what AI development could instead look like—all while equipping a broader audience to engage meaningfully in these conversations and build a future worth wanting.
Drowning in lines of code meant to train networks of artificial neurons, code that will one day give rise to an intelligence surpassing my own, I wonder: Have we lost sight of what we’re building this for? Who gets to say? I fear that in equipping ourselves with something so capable, we are risking things far more valuable. In folktales, the wise protagonist is the one who never loses sight of what really matters. Today, our story concerns the most powerful technology in history. Have we lost sight?
Since beginning my graduate studies in machine learning, I’ve been carrying this feeling. A feeling which, at first, I couldn’t quite place. Was it resentment? Contempt? Exasperation? Resignation?
The feeling, I’ve realized, was a sort of helplessness, and with it, paralysis. I know I’m not the only one who feels this way. Increasingly, the sentiment seems to be that great games are being played at great scale and speed, games that will reshape our lives in ways we’re powerless to alter. Ways that may turn out for the worse.
But helplessness has an antidote: deliberate action. [re:alignment] is our attempt at such action—action that will help deliver the power to shape our future.
Current discourse is broken. We are having narrow, siloed, and ill-informed conversations about a technology even experts struggle to understand, a technology with profound normative implications that go largely unaddressed. The structures in which we are developing it—institutional, political, economic—will not, if left untouched, deliver outcomes worth wanting.
[re:alignment] is an offshoot of broader work within AI alignment: how to develop AI such that it produces good outcomes in the real world. While simple enough to define and understand in principle, alignment has proven to be a Pandora’s box, and remains one of the most important research questions of our lifetimes.
Much of the AI industry, however, sees the alignment problem as a technical question. Consequently, most discussion and research live in this category: how do we make AI systems reliably execute what their developers intend? How do we ensure that they follow instructions, avoid unintended behaviors, and operate within specified constraints?
But this notion of technical alignment is just one piece of the puzzle. With it come intense normative questions of alignment, which are often ignored or addressed exclusively by a small cadre of like-minded technologists.
So how do we make sure AI is actually a good thing? What does it need to be like—what values should it hold, in what ways should it act, what powers should it have—so that it is conducive to our flourishing? These are not questions we can ignore, nor are they questions technologists can answer alone. So who gets to decide?
And who decides the conditions of its creation? The economic, political, and institutional structures in which AI is now incubating—are they the right ones? Is truly beneficial AI likely to emerge from them? If not these structures, what else?
To me, these are the biggest questions in AI today, and they will remain even if we figure out how to technically align AI. But discussion of them is scant, confined mostly to a coterie of academics and technologists who are making decisions that will affect everyone. Yes, these systems are complex, and yes, experts should continue to play a significant role, but complexity doesn’t exempt a technology from broader deliberation, especially one that will reshape how all of us live.
These conversations further suffer from dominant frameworks that treat the future as a quantifiable optimization problem of “maximizing good.” Somewhere along the way, we’ve lost track of what we’re actually optimizing for.
There is a familiar pattern here: we identify something that matters and create metrics to track it. But over time, the metrics become the goal. We optimize for the targets and forget what the targets were supposed to be for. Extending human life, curing disease, eradicating mindless work—these are not ends in themselves. They are instrumental goods, valuable because of what they make possible: a life that feels worth living. Intrinsic goods—meaning, beauty, connection, the texture of a life actually lived—are what we are ultimately trying to protect and enable. But they are harder to measure, and so they fade from view.
When the instrumental is mistaken for the intrinsic, we end up building toward a future that hits every benchmark while entirely missing the point.
We shouldn’t abandon technical progress or reject the genuine goods that this progress might deliver. Rather, we need to remember why we are building this technology in the first place. We need to keep the intrinsic goods in view, as the orienting purpose of the whole endeavor.
These are big fucking questions, which is why we need to work together in addressing them. But we are increasingly divided into AI insiders and outsiders.
Present social structures don’t allow for meaningful participation from those outside the technological cadre. And many people don’t understand AI and alignment well enough to contribute to the conversation, even if they were given the chance. The two sides of this disconnect reinforce each other: exclusion breeds unfamiliarity, while unfamiliarity justifies exclusion. The cycle continues, the conversation narrows, and the decisions get made without us.
This cycle is exactly what [re:alignment] exists to break. We will question the structures at hand and broaden the conversation, equipping a wider group of people to shape the development of artificial intelligence. We will open lines of communication with those who currently hold power over these systems, lines through which that power might be redistributed.
I also understand the impulse to opt out, to disconnect. I feel it myself sometimes.
There is a version of this that feels like integrity—a refusal to participate in building a future we didn’t ask for. Going offline, rejecting the technology entirely, can seem like the cleanest way to register dissent. The path of least resistance when something feels this overwhelming or objectionable is to simply walk away.
But if everyone who thinks like this exits the conversation, we cede the future to those who remain. And those who remain will either be people who don’t share our concerns, or people whose perspectives, no matter how thoughtful or well-intentioned, remain narrow. The only way to have any chance at a future worth living for is to be active participants in its construction, even when engagement feels compromising, even when the current trajectory feels repugnant. This is the tension we seek to address: transforming withdrawal into action that might actually change something.
But what form should this take?
Meaningful participation requires more than policy briefs and technical papers. The challenge we face is to actively envision humanistic possibilities for AI, and logical analysis alone will not suffice. [re:alignment] will therefore feature literary and visual art aimed at that challenge, reaching for ways of knowing otherwise out of reach.
There are things we can know that we cannot argue our way to. Art accesses knowledge that logical argument alone cannot, knowledge that is core to what being and flourishing actually feel like.
If we want to understand what makes a life worth living, and build technology around that, then art is not a supplement to this inquiry, but a necessary means by which we may come closer to the answer.
Art also serves as a reminder of what lies beyond. There is something about sustained engagement with technical systems—lines of code, optimization targets, capability benchmarks—that can make us forget what we are trying to protect in the first place. Our eyes glaze over, and the things that make existence meaningful rather than measurable, the texture of a particular life, fade from view. Art interrupts that forgetting. It recenters our irreducible humanity and insists that it not be optimized away.
What are we fighting for? We’re fighting for a future where AI delivers on its genuine promise without sliding into the dystopias that seem just as plausible.
We don’t want the narcotized contentment of *Brave New World*, where pleasure morphs into pacification. We don’t want the consumptive atrophy of *WALL-E*, where convenience replaces purpose. Yet we see them coming: each of these futures draws closer every day we remain on the sidelines. The only way to build something different is to actively participate in the creation of the future we want.
What if instead we built a future where we are free of constraints on our ability to pursue what we find meaningful? A future where we seek education for the good of knowledge in itself, not to increase our job prospects or economic utility. A future where we are able to develop deep local communities, not atomized metaversal escapes. A future where we are stewards of nature, not conquerors of exoplanets because we destroyed the planet best suited to us. A future where we are all artists and musicians, where the previously marginalized share fully in the abundance we’ve created.
We’re fighting for this future by engaging in hard yet necessary conversations and by equipping people with the knowledge to shape the development of AI. We cannot rely on others to create this future for us.
AI will permeate nearly every aspect of our lives—but if we take action together, we get to choose which aspects, in what ways, and to what ends.
We are seeking contributors from all backgrounds—researchers, philosophers, historians, artists, writers, workers—anyone who wants to be part of this conversation. We want individuals willing to take risks, who seek to change others’ minds as well as their own; writers who earn their conclusions through rigorous argument while remaining accessible. We want to grapple with the hard questions of alignment and resist easy answers. Building this future won’t be easy, but we have to try.
If you have something to say, we want to hear it. If you have art to create, we want to see it. If you want to get involved in any way (as a writer, strategist, editor, or artist), reach out.
Some of the subject areas we’re eager to explore with you:
Technical, value, and structural alignment
Human flourishing and wellbeing
Meaning, purpose, and connection
Labor and the economy
Governance and accountability
Power and inequality
Art and culture
Current trajectories and historical parallels
Environmentalism and sustainability
Some of the questions we seek to address in our work together:
On alignment:
What does alignment really mean—technically, ethically, structurally—and how do these dimensions interact?
Where are we on the path towards alignment?
Who should be entrusted with the normative questions underlying AI development, and through what processes should they be addressed?
On flourishing and wellbeing:
What does human flourishing look like? What does it feel like?
What would AI that genuinely enhances people’s lives look like—and what kind of society would that require?
How do we measure progress toward flourishing without letting the metrics become the goal?
On meaning, purpose, and connection:
How is AI changing how we relate to purpose and meaning in our lives?
How might misaligned AI weaken our connection with others? How could it instead be a mechanism to deepen human connection?
On labor and the economy:
What is the actual value of work? How might our relationship to labor change if AI can do, and does, much of what we do?
As AI displaces human labor, how do we ensure people can still live, and live well?
How could AI actually serve to de-alienate labor when economic utility ceases to be the primary driver?
On governance and accountability:
How do present institutional, economic, and political structures shape AI development? What alternative structures might better serve aligned AI?
How should AI be regulated, and by whom?
When AI systems cause harm, who is responsible? How do we distribute accountability in increasingly automated systems?
On power and inequality:
Is AI more likely to lift the marginalized or to automate their exclusion?
Who benefits from AI’s current trajectory? Is it likely to lead to widespread flourishing or further the concentration of resources and power?
Will AI democratize opportunity or entrench existing inequalities? What would it take to ensure its benefits are broadly shared?
On art and culture:
What role do art and imagination play in shaping technological futures?
What is lost in using AI for things like writing and art?
How does AI serve as a homogenizing force in society? Can art serve as a site of resistance to this?
What happens to culture when the digital world is saturated with AI-generated content?
On our trajectory and historical precedents:
How do we know if we’re on the right path? What tradeoffs are being made and which are we willing to accept?
What can we learn from past technological developments? Where has AI’s trajectory mirrored these, and where has it diverged?
On environmentalism and sustainability:
How does AI development relate to ecological sustainability—both as a resource-intensive technology and as a potential tool for environmental solutions?
What would it mean to develop AI as stewards of the natural world rather than as its exploiters?
Can AI help us become better inhabitants of this planet, or does it accelerate our estrangement from it?