<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[[re:alignment]]]></title><description><![CDATA[publication dedicated to building a humanistic future in the age of ai]]></description><link>https://mag.re-alignment.com</link><image><url>https://substackcdn.com/image/fetch/$s_!8Hbc!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9e9766b-b43e-4a00-881a-ec783053a81d_500x500.png</url><title>[re:alignment]</title><link>https://mag.re-alignment.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 06:04:10 GMT</lastBuildDate><atom:link href="https://mag.re-alignment.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[[re:alignment]]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[airealignment@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[airealignment@substack.com]]></itunes:email><itunes:name><![CDATA[[re:alignment]]]></itunes:name></itunes:owner><itunes:author><![CDATA[[re:alignment]]]></itunes:author><googleplay:owner><![CDATA[airealignment@substack.com]]></googleplay:owner><googleplay:email><![CDATA[airealignment@substack.com]]></googleplay:email><googleplay:author><![CDATA[[re:alignment]]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Delight in the Details]]></title><description><![CDATA[Finding joy in a changing world]]></description><link>https://mag.re-alignment.com/p/delight-in-the-details</link><guid isPermaLink="false">https://mag.re-alignment.com/p/delight-in-the-details</guid><dc:creator><![CDATA[[re:alignment]]]></dc:creator><pubDate>Sat, 14 Feb 2026 15:54:35 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!R8N3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96bec1ff-45de-4804-aae2-bde69108a8cb_1250x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>Atharva</strong> leads the student AI Safety Team at Brown University. He cares, broadly, about making the future go well. Currently, he spends his time reading, writing, and creating something, every single day.</em></p><div><hr></div><p>One of my worries about AI is that it will take away the joy of the details. I enjoy doing work. I enjoy having a deep, gears-ey understanding of what&#8217;s going on. Of understanding what each individual line of code is doing; of drafting and redrafting an essay to its bones; of spending an entire afternoon muddling through a math proof. It&#8217;s deeply satisfying to get under the hood &amp; see what makes the system tick.</p><p>You lose this joy if an LLM does this work for you. You type out some text on the screen. Hit enter. And a blinking caret starts weaving up an answer. It&#8217;s deeply clinical and sterilized. You&#8217;re no longer in the muck of it all; you wave your hands and it all works out. There is no resistance, no challenge to overcome. You&#8217;re playing Minecraft on creative mode.</p><p>&#8212;</p><p>Well, there&#8217;s a part of this sentiment that deeply resonates with me. And also &#8211; it&#8217;s not quite true?</p><p>Part of the delight of work lies in mastery. When you know exactly what chords to hit, and how hard to strike the keys, and which ones are a little stiffer or softer than others. When you understand a system from the inside out.</p><p>But what does it mean to understand something?
To truly, deeply understand something?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!R8N3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96bec1ff-45de-4804-aae2-bde69108a8cb_1250x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!R8N3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96bec1ff-45de-4804-aae2-bde69108a8cb_1250x1600.png" class="sizing-normal" alt="" width="308" height="394.24"></div></a></figure></div><p>This, here, is a sim-card ejector. Or, rather, &#8220;sim-card ejector&#8221; is just the <a href="https://www.lesswrong.com/s/evLkoqsbi79AnM5sz/p/bRcsFM6jm272ELyx8">label we assign it</a>. From the perspective of the object, though, that is a horribly unjust flattening of everything about itself. &#8216;Ejecting sim-cards&#8217;, for instance, is but one of the many uses it might have. You could use this same object for, e.g., scraping out dirt from under your fingernails, or etching your name on a rock, or blowing tiny soap bubbles.</p><p>(And that is only when we think about it in terms of its &#8216;uses&#8217;. But what of everything else? Do you know how sunlight reflects off it? How it feels when gently pressed against your palm? How it tastes?)</p><p>The point is, searching for &#8216;deep understanding&#8217; is a quest without limit. You write code in Python, but do you know what&#8217;s happening in the packages you&#8217;re calling?
Well, do you know how the language is compiled and run? Do you know C? Machine code? How bits are processed on a computer? Heck, how do computers even work in the first place? And what&#8217;s up with electricity again?</p><p>The quest for understanding is a rabbit hole that goes straight down to wonderland.</p><p>And sure, you can definitely have greater mastery over certain domains than others. But, especially in our modern day and age, you&#8217;re standing on foundations that others have erected. Abstractions that are floating far above the ground. You can dig for ages, and still never hit bedrock.</p><p>There&#8217;s no special reason for the floor you&#8217;re currently on. In fact, if there&#8217;s any direction to go, it&#8217;s up. As you climb higher, you find abstractions that are more expressive, more powerful. The rarefied air helps you see clearer, farther. You float on clouds &#8211; on tools, systems, and frameworks that let you fly miles high in the sky, unburdened by the worries of those trudging below. If you do ever descend, it&#8217;s often because you want to make life better at the higher levels you&#8217;re perched on. (Or, because getting into the muck is something you enjoy.)</p><p>&#8212;</p><p>Mastery over an interface can be a source of great meaning. But that doesn&#8217;t mean we should be afraid of new tools.</p><p>For one, new tools might be more expressive than extant ones. Sure, if you were born in the 80s, there&#8217;s a part of you that might actually enjoy going through pages of stack traces and segfaults. But we&#8217;re not training LLMs in C anytime soon. If you have excavators, why would you ever dig ditches with teaspoons?</p><p>Sure, at times, it can be useful to get down into the weeds. Other times, though, it&#8217;s more like knitting &#8211; something that helps you wind down, or serves as a social pastime, or, maybe, just something you find intrinsically meaningful.
However, if you want a blanket, you&#8217;re probably getting one that&#8217;s been put together by a machine.</p><p>This, if you squint, is also the story of <a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">The Bitter Lesson</a>. Throughout the history of AI, we have struggled to make machines think the way we do. In attempts to do so, we&#8217;ve imposed strict structures and finicky constraints based on our own (limited) understanding of cognition. But time and time again, these intricate approaches have lost out to more general methods that leverage the power of computation instead.</p><p>The reason this works is Moore&#8217;s Law. Computers have gotten cheaper and faster year on year on year, for the past 60 years. General purpose methods that leverage this capacity will scale far better than any fixed, finicky methods involving algorithmic tricks or human-like structures.</p><p>Wariness of human-imposed structures is relevant not only for training models, but also for using them. It&#8217;s easy to get attached to our domain, our work. It&#8217;s easy to cling on to the tools we&#8217;ve grown comfortable using. However, as AI systems continue to become more capable, the tools that will succeed are those that scale with this new interface afforded to us.</p><p>&#8212;</p><p>This is a period of change. This can be valuable &#8211; it&#8217;s a re-rolling of the dice, it&#8217;s wiping the slate clean for the chance of a better future. But while change might have real benefits, it&#8217;s not easy to dismiss its costs. We do get attached to the interfaces we have. It&#8217;s a sort of creeping, growing calcification that&#8217;s natural in times of dormancy. The image here is a once-proud log, now lying on the forest floor. Years of decomposition have allowed moss, mycelia, and a teeming community of life to grow atop it. This rich ecosystem arises through gradual, cumulative changes, possible only if the log is secluded from external shocks.
Put it in a children&#8217;s playground, and the trampling, stamping feet afford no chance for this rich complexity to build up over time.</p><p>And this happens everywhere. Progress is cumulative. Regularities matter. They let you know where your friends&#8217; classes are, and who you might bump into when walking to your 9am. They also let you build a civilization <a href="https://homosabiens.substack.com/p/review-conor-moretons-civilization">atop the glue of communication and trust</a> &#8211; if done right. It&#8217;s only when you have your basics in place &#8211; food, water, shelter, etc. &#8211; that you can worry about self-actualization and higher-order goals.</p><p>&#8212;</p><p>What then, about the details? Must we be consigned to spending our hours in front of a chatbox with a glowing caret? Or will we ever feel the sun on our skin and the breeze in our hair again?</p><p>Well, false dichotomy. Sure, change might mean we&#8217;re never as proficient with our tools as we&#8217;d like. But, if you&#8217;ve ever found joy in your work, I&#8217;m confident that you can do so again. The world is wide and varied. There will always be challenges that arise &#8211; be they external to us, or internal. There is room to knit and to crochet.
There is still space for the small.</p>]]></content:encoded></item><item><title><![CDATA[The Outer Loop of Alignment]]></title><description><![CDATA[The Structural Preconditions of Alignment, Part 1: Corporate Orthogonality and The Outer-Outer-Alignment Problem]]></description><link>https://mag.re-alignment.com/p/the-outer-loop-of-alignment</link><guid isPermaLink="false">https://mag.re-alignment.com/p/the-outer-loop-of-alignment</guid><dc:creator><![CDATA[nick]]></dc:creator><pubDate>Fri, 06 Feb 2026 17:35:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/dbeb83b9-9db7-4b50-8f14-db057bec3436_1188x714.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Upon his departure from OpenAI in May of 2024, Jan Leike <a href="https://x.com/janleike/status/1791498182125584843">expressed</a> his discontent on X: the Superalignment team was routinely &#8220;struggling for compute,&#8221; finding alignment research &#8220;harder and harder&#8221; as &#8220;safety culture and processes [had] taken a backseat to shiny products.&#8221;</p><p>Nine years earlier, with a fresh draft of the company&#8217;s founding charter in hand, Sam Altman had committed his new company&#8217;s resources, influence, and reputation to building safe AI. He<a href="https://openai.com/charter/"> pledged</a> the company would act &#8220;in the best interests of humanity.&#8221;</p><p>Prioritizing products over safety?
&#8220;Not at OpenAI,&#8221; he&#8217;d say.</p><h4><strong>So what happened?</strong></h4><p>What no one fully appreciated at the time was the scaling required: that developing frontier AI models would eventually cost billions of dollars&#8212;magnitudes of capital that, under our current economic system, only flow to corporations promising considerable returns.</p><p>OpenAI soon realized: <em>in order to develop AI for the benefit of humanity, they must become a for-profit corporation.</em></p><p>With this came, perhaps unintentionally but inevitably, the prioritization of products over safety. And following the departure of Leike along with several other senior researchers, the company<a href="https://www.wired.com/story/openai-superalignment-team-disbanded/"> dissolved</a> the Superalignment team entirely. Researchers who had been hired to work on safety found themselves reassigned to other teams or pushed out.</p><h4><strong>OpenAI&#8217;s founders knew this would happen.</strong></h4><p>Their founding charter didn&#8217;t just promise safe AI; it specifically<a href="https://openai.com/index/introducing-openai/"> committed</a> to the development of the technology &#8220;unconstrained by a need to generate financial return.&#8221; They foresaw the threat. They built a nonprofit to resist it. And when that structure gave way to a for-profit model, they thought they could withstand the pressure. But they couldn&#8217;t.</p><p>OpenAI shifted to a for-profit model to access the capital needed to develop AI designed for the benefit of humanity.  In practice, this shift only made it harder for them to deliver on the goals they set out to achieve. 
Even with explicit awareness of the danger, even with institutional safeguards designed to resist it, OpenAI could not withstand the pressure of profit-maximization.</p><h4><strong>The structures underlying AI development are not neutral vessels.</strong></h4><p>OpenAI&#8217;s story is not one of corruption, hypocrisy, or individual failings. It is a story of structural pressures powerful enough to overwhelm good actors. We tend to assume that institutions are passive scaffolding through which individuals pursue objectives, but they are not. In the AI ecosystem, the principal structural element is the corporation, and it is not a benign, static layer on top of which research is conducted. The corporation operates as a goal-directed system optimizing for its own objective. It is <a href="https://www.lesswrong.com/w/orthogonality-thesis">orthogonal</a>&#8212;it is a capable system that optimizes relentlessly for narrow objectives, whether or not they serve the values we hoped they would.</p><p>We need to recognize the nested structure of the alignment problem: we are attempting to build aligned AI systems within institutions that themselves exhibit features of misaligned agents. Technical alignment researchers worry that using an imperfectly aligned agent to oversee the training of an AI system may compound alignment errors across successive generations.</p><p>We, however, fail to apply this same logic to the base model of the entire chain: the corporation. If the base model providing the initial oversight, resource allocation, and deployment decisions is itself misaligned, the entire chain is compromised from the start. Whether or not we can exit the inner loop of alignment, we remain trapped inside its outer loop.</p><h4><strong>OpenAI is just a single instance of a much older, deeper pattern.</strong></h4><p>The corporation has been around for over 400 years (with precursors as far back as <a href="https://en.wikipedia.org/wiki/Publicani">Ancient Rome</a>).
For the majority of this history, the corporation was understood to serve a variety of social ends. It often sought to balance obligations across multiple stakeholders&#8212;workers, owners, communities, nations&#8212;to provide social value.</p><p>But the past half century or so has given rise to a new dominant theory of the firm: <a href="https://en.wikipedia.org/wiki/Shareholder_primacy">shareholder primacy</a>. Popularized by thinkers like Milton Friedman and his <a href="https://en.wikipedia.org/wiki/Chicago_school_of_economics">Chicago school of economics</a>, this theory holds that the corporation bears no social responsibility beyond serving the interests of its shareholders. In other words, &#8220;<a href="https://www.nytimes.com/1970/09/13/archives/a-friedman-doctrine-the-social-responsibility-of-business-is-to.html">The Social Responsibility of Business Is to Increase Its Profits</a>.&#8221;</p><p>Importantly, it wasn&#8217;t that a new proxy was chosen, but that the entire ontology of the firm shifted. The goal itself is profit-maximization. Objectives of the corporation are no longer mere proxies&#8212;shareholder value is the entire purpose.</p><p>Once profit became the purpose itself, it came to feel like a definitional feature of what corporations <em>are</em>. Over time, we&#8217;ve normalized a choice made only 50 years ago and now mistake it for an inevitability.</p><p>Had a different choice been made, social media might function as a tool to democratize communication, opioids as a means of managing pain, and fossil fuels as a bridge to cleaner energy. Instead, social media platforms reward compulsive use, pharmaceuticals drive addiction, and fossil fuel companies increase production. 
Corporations optimize for profit while externalities leak and become everyone else&#8217;s problem.</p><p>The issue here is not one of individual moral failings, insufficient legal side constraints, or the malign intent of bad actors; it&#8217;s a systemic issue of structural misalignment. These are failures of systems architecture, of goal <a href="https://www.lesswrong.com/posts/mMBoPnFrFqQJKzDsZ/ai-safety-101-reward-misspecification">misspecification</a> in powerful systems. Widespread flourishing and human wellbeing routinely fall outside the objective function of the modern corporation.</p><p>Is this how we want an intensely powerful institution in our society to be constructed?</p><p>Some might say yes: &#8220;Look at all of the value created over the last 50 years! Corporations have done a great many things for society!&#8221; While this is true, it tells us nothing about causation or the counterfactual. Was this value created because of shareholder primacy? Despite it? What value might have been produced under different structures? How might society look different? The harms, on the other hand, are not speculative: half a million opioid deaths, surging teen suicide rates, the destruction of our ecosystem.</p><p>The relevant question isn&#8217;t whether the current structure has produced value&#8212;that&#8217;s undeniable. The relevant question is whether this structure is the best we can come up with; whether this structure can facilitate the robust alignment research, safe products, and collaboration across firms and stakeholders that are required for AI that serves human flourishing.</p><p>I don&#8217;t think it can.</p><h4><strong>These same structures are building AI systems.</strong></h4><p>One might object that alternative corporate structures already exist. <a href="https://www.law.cornell.edu/wex/public_benefit_corporation">Public Benefit Corporations</a>, for instance, are legally required to consider interests beyond shareholders.
But while a PBC constrains <em>how</em> an entity can pursue its objectives, it does not fundamentally change <em>what</em> those objectives are. OpenAI is now a PBC. Do you think that&#8217;s enough? The structural pressures that marginalize safety research continue regardless of the legal wrapper. Changing the constraints is not the same as changing the objective function&#8212;and it&#8217;s certainly not the same as changing the competitive environment in which these systems operate.</p><p>Our focus in AI safety has been to align the technology with a vague conception of &#8220;human values.&#8221; The deeper issue is that when organizations with their own misaligned objectives undertake this work, our task of aligning AI becomes much harder. Misalignment doesn&#8217;t require a desire for harm. Quite the opposite: it&#8217;s most dangerous when it&#8217;s slightly off-kilter in ways that make it hard to diagnose. A small misspecification in the objective function compounds over the trajectory of development, and we&#8217;ve been living inside a compounding misspecification for fifty years.</p><p>We feel this. For the first time on record, young people report <a href="https://www.weforum.org/stories/2024/04/youth-young-people-happiness/">lower life satisfaction</a> than older generations&#8212;a reversal of a decades-long pattern. Despite technological advances, constant GDP growth, and longer life expectancies, we&#8217;re less happy with our lives and more pessimistic about the future. It&#8217;s clear that whatever we&#8217;re optimizing for isn&#8217;t the right thing. The misspecification we made 50 years ago&#8212;making profit the sole purpose of the corporation&#8212;has compounded over time and will continue to do so unless we change it.</p><p>While this issue of misaligned optimization and corporate orthogonality is not unique to AI, its gravity is.
The same harms these structures have subjected us to in the past will only compound faster and cut deeper.</p><p>This argument is in no way an indictment of AI companies or people within them. Intentions are largely pure, but their relative irrelevance is precisely the point. The story of corporate AI development is not one of villainy, but one of systems pursuing misspecified objectives.</p><p>The alignment community has spent a decade theorizing about AI systems that might exhibit dangerous properties&#8212;capability, coherence, goal-directedness, poor controllability. Meanwhile, systems that already exhibit these properties are the ones building AI systems.</p><h4><strong>The good news is that this is a contingent problem, not a natural inevitability.</strong></h4><p>Shareholder primacy is not a law of nature. It&#8217;s a contingent design choice. It&#8217;s a hypothesis formalized just fifty years ago but now so ubiquitous it <em>feels</em> inevitable.</p><p>But it&#8217;s not. Corporations existed for centuries before this, and the structure we have now is not the structure we&#8217;ve always had. It was chosen. Now, we can choose something else.</p><p>The fact that alignment research has progressed as much as it has is a testament to the individuals pushing against these systemic pressures&#8212;leaving behind big paychecks when safety is sidelined, fighting for compute when it is withheld, publishing unpopular research for the benefit of the community.</p><p>But a field that depends on heroic resistance to its own institutional structure is not a field positioned to succeed. The question we need to answer is whether we can build structures worthy of the people trying to do this work. The window is open. AI is moving fast, but we&#8217;re not yet at a point of no return. We have time. Not infinite, but enough to act if we recognize the problems for what they are.</p><p>This is not a call for better corporate ethics or more responsible leadership.
It is a call for structural reform, whether through new institutional models, alternative funding mechanisms, or different ownership structures. When the problem is architectural, the solution must be too.</p>]]></content:encoded></item><item><title><![CDATA[Cheaper Than Human]]></title><description><![CDATA[On the impending collapse of credibility in the AI slop epidemic]]></description><link>https://mag.re-alignment.com/p/cheaper-than-human</link><guid isPermaLink="false">https://mag.re-alignment.com/p/cheaper-than-human</guid><dc:creator><![CDATA[[re:alignment]]]></dc:creator><pubDate>Thu, 29 Jan 2026 15:02:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/964f3722-5892-4589-ace7-6c9ea0a41f4d_1280x853.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have been thinking a lot about how human labor will change if AI can (and does) do many of our jobs. If you have been following the AI discourse, you may have heard of some facetious-yet-anxiety-ridden rhetoric like the &#8220;<a href="https://www.newyorker.com/culture/infinite-scroll/will-ai-trap-you-in-the-permanent-underclass">permanent underclass</a>.&#8221; While intended as dark humor, it reveals a crippling concern that AGI will not&#8212;as Marx or Keynes might deduce&#8212;liberate humans from alienation and usher in utopia but will instead cause mass immiseration by collapsing the value of human labor.</p><p>This speculation may appear overly dystopian, but given that we are dealing with a<a href="https://dictionary.apa.org/preparadigmatic-science"> pre-paradigmatic</a> field, we should not outright dismiss it. 
After all, by<a href="https://forum.effectivealtruism.org/posts/ZqkK8fshf78fR3Txe/what-i-mean-by-the-phrase-taking-ideas-seriously"> taking the idea seriously</a>, we may discover some compelling economic reasons to prepare for a systematic hollowing out of the labor market.</p><p>To understand how, it may be useful to review some key dynamics in the capitalist mode of production. Every business enterprise requires two<a href="https://en.wikipedia.org/wiki/Productive_forces"> productive forces</a>: labor and capital (e.g., tools, equipment, raw materials, land, and so on). When entrepreneurs act as <a href="https://en.wikipedia.org/wiki/Homo_economicus">self-interested, rational agents</a> aiming to maximize profits, they are likely to pay their employees as little as possible&#8212;limited only by the relative bargaining power of the laborers. At the very least, wages must cover workers&#8217; <a href="https://www.marxists.org/archive/marx/works/1847/wage-labour/ch06.htm">means of subsistence</a>; otherwise, workers will leave that labor market, tightening the labor supply and strengthening the bargaining position of those who remain.</p><p>However, decades of <a href="https://www.pewresearch.org/short-reads/2025/08/27/majorities-of-adults-see-decline-of-union-membership-as-bad-for-the-us-and-working-people/">declining union membership</a>, <a href="https://www.epi.org/publication/attack-on-american-labor-standards/">weakened labor protections</a>, and globalized supply chains have eroded workers&#8217; bargaining power to the point where many in the US are barely surviving (not to mention <a href="https://academic.oup.com/book/58011/chapter/477418668">those</a> <a href="https://www.aljazeera.com/opinions/2020/1/8/worker-organising-can-counter-labour-abuse-in-the-global-south">in</a> <a href="https://www.ecchr.eu/en/cluster/exploitation-global-supply-chains/">the</a> <a
href="https://truthout.org/articles/heres-what-workers-of-the-global-south-endure-to-create-corporate-wealth/">global</a> <a href="https://www.opendemocracy.net/en/beyond-trafficking-and-slavery/harsh-labour-bedrock-of-global-capitalism/">south</a>, whose labor is systematically exploited and subject to even fewer protections). <em>What then happens when a computer can perform the same task as a human at a cheaper cost? </em>The subsistence constraint is more elastic than we expected. Think: the cost of running Claude Max vs. the cost of hiring an intern to do routine work. If AI can perform the duties of a particular role just as well as humans, there is no incentive to hire human workers, and human wages plummet. As AI capabilities advance, such displacement will spread across sectors. At the most extreme, one may envision AI performing <em>every</em> task at a lower cost than humans, thereby leaving no human workers in the market. The<a href="https://en.wikipedia.org/wiki/Reserve_army_of_labour"> reserve army of labor</a> expands until it encompasses the entirety of the working class.</p><p>There are differing views on the scope of this displacement. Some jobs demanding prestige (e.g., consultant, lawyer) or interpersonal skills (e.g., teaching, nursing, elder care) might be more resistant to automation than entry-level white-collar work. As of this writing, AI-powered robots are still having difficulty using their fine motor skills to perform complex physical tasks, like construction or plumbing, but there is <a href="https://www.forbes.com/sites/sap/2025/05/01/the-ai-robots-coming-for-blue-collar-jobs/">no reason to assume</a> that this stagnation will last forever. Depending on who you hang out with, you may also get different perspectives on the time scale.
An ardent techno-optimist (like<a href="https://arxiv.org/pdf/2503.04941"> this</a>,<a href="https://arxiv.org/pdf/2403.12107"> this</a>, and<a href="https://www.truthdig.com/articles/effective-accelerationism-and-the-pursuit-of-cosmic-utopia/"> this</a>) will point to the rapid pace of AI progress, which has repeatedly outpaced expert predictions, whereas a thoughtful intellectual (like<a href="https://x.com/joshgans/status/1812492809326276786"> this</a> and<a href="https://finance.yahoo.com/news/nobel-laureate-paul-romer-sees-093000071.html"> this</a>) may contend that underlying economic frictions, such as regulatory barriers or the inertia of our large and lumbering infrastructure, can stymie adoption&#8212;heck, we might even be nearing the peak of the <a href="https://en.wikipedia.org/wiki/Gartner_hype_cycle">Gartner hype cycle</a> before the inevitable collapse.</p><p>Regardless of which camp(s) you belong to, the model of AI as an incremental labor-replacing mechanism is partially misleading: it falsely assumes that in order for the substitution to occur, AI must outperform an average human at that particular task. The growing popularity of<a href="https://livebench.ai/#/"> LiveBench</a>,<a href="https://lmarena.ai/blog/about/"> LMArena</a>, and other benchmarking tools exemplifies this philosophy. Benchmarking is a rewarding field of study, but I do not believe a human vs. AI evaluation is the right proxy for the technology&#8217;s impending impact on society. Presenting the labor problem as one that will only arise once AI reaches human-level capabilities misses the point, and it fails to acknowledge the socioeconomic harms that are already underway.</p><h4><strong>We are bad at diagnosing disruption</strong></h4><p>Diffusion of technology is neither linear nor incremental.
In the early days of the internet, Clifford Stoll<a href="https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirvana-185306"> wrote</a> in Newsweek that &#8220;no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.&#8221; Robert Metcalfe, the inventor of Ethernet, likewise<a href="https://web.archive.org/web/20050702080157/https://www.infoworld.com/cgi-bin/displaynew.pl?/metcalfe/bm120495.htm"> predicted</a> that the internet would crash in 1996, arguing that online advertising could not match TV&#8217;s measurement standards.</p><p>These experts, despite their deep domain knowledge and noblest intentions, were overly preoccupied with <em>benchmarking</em> the internet against existing media and institutions. What they failed to see was how the World Wide Web would redefine the norms of society. Shopping went online, work turned remote, and communication spread around the globe. A new class of celebrities, influencers, bloggers, and vloggers emerged, unencumbered by the red-tape-wielding gatekeepers of the past.</p><p>We are making the same mistake in AI discourse: AI does not need to be of human quality to replace human labor. Today&#8217;s AI is disruptive enough (certainly more so than the internet) that we ought to acknowledge its systematic second-order impact, which&#8212;I repeat&#8212;does not necessitate AI having an absolute advantage over humans in a specific skill. Likewise, we should expect some social disruption when decent AI becomes accessible and affordable enough to deploy on a large scale.</p><h4><strong>The cost of &#8220;decent enough&#8221; content</strong></h4><p>The current deluge of AI-generated content is a direct result of this. Because LLMs are designed to supply the least controversial response to a prompt, much of the resulting content is poor and uninspiring.
But that hardly matters when one can McDonaldize vast amounts of formulaic, dopamine-hacking content at a comparatively low cost. When the cost of doing something goes down, more people want to capitalize on it. Platforms that see potential in this content model also incentivize it through ad revenue and algorithmic promotion, causing <a href="https://us.macmillan.com/books/9780374619329/enshittification/">enshittified</a> slop to supplant thoughtful, high-quality craftsmanship.</p><p>The damage extends far beyond the<a href="https://en.wikipedia.org/wiki/Dead_Internet_theory"> copious amounts of slop</a>. LLMs do not understand language in the same way that humans do: they convert words into numerical embeddings in high-dimensional space before predicting the most likely next word or token based on the context of previous tokens (and a pinch of prejudiced reinforcement learning). The output reads like English, but it remains a cheap ersatz that lacks what one might call a &#8220;human touch.&#8221; As millions of people read LLM messages daily, we increasingly <a href="https://archive.ph/chChJ">mimic</a> the lexicon used by these models.</p><p>Since<a href="https://www.wittgensteinproject.org/w/index.php/Logisch-philosophische_Abhandlung#3.12"> language shapes our perception of reality</a>, we should also anticipate a dramatic decline in our quality of thought. There are several criteria for determining whether an idea is promising, but I believe an idea&#8217;s promise is strongly correlated with intuition and taste, which are cultivated through deliberate engagement with the field. An entrepreneur with a keen sense of taste, for example, can <a href="https://paulgraham.com/ambitious.html">forecast the future</a> several years in advance, but such ability is dependent on having seen enough market cycles, failed products, and user behaviors to recognize patterns before they fully emerge.
Likewise, having a good &#8220;<a href="https://x.com/fchollet/status/1966893993339597034">research taste</a>&#8221; involves knowing which research trajectories will have the most impact and are worth working on.</p><p>Taste cannot be taught, but it can be developed through tight feedback loops that help you learn to discern good from bad. Imagine now that all you see is the LLM&#8217;s response confirming your flimsy theories, offering proof, and baiting you into believing you have made a groundbreaking discovery when you have not. Because big tech companies prioritize user satisfaction over anything else, they have little incentive to reduce the<a href="https://www.law.georgetown.edu/tech-institute/insights/tech-brief-ai-sycophancy-openai-2/"> models&#8217; sycophancy</a>. With this feedback loop, we are increasingly delegating critical thinking to the LLM, which puts us at risk of pursuing underwhelming, mediocre-at-best ideas that we would otherwise have discarded.</p><h4><strong>Cheap to create, but costly to verify</strong></h4><p>Another problem is that while AI can reduce the cost of generation, it elevates the cost of verification in many domains. LLMs can generate a code snippet or essay draft in a matter of seconds, but it takes a lot of time and effort to assess whether the code handles edge cases or whether the draft is technically cohesive. AIs are fallible&#8212;this we all know. The problem is that AI errors differ fundamentally from human errors, making them much more difficult to detect.</p><p>In a review of more than 4,000 research articles accepted for NeurIPS 2025, GPTZero<a href="https://gptzero.me/news/neurips/"> found</a> 53 publications with AI-hallucinated citations that managed to bypass three or more expert reviewers. Indeed, some errors are trickier to spot than others, but because the sample space of possible failure modes is undefined, such oversights are likely to accumulate over time.
A distracted writer, for example, may forget to cite a few sources, but they will never<a href="https://researchlibrary.lanl.gov/posts/beware-of-chat-gpt-generated-citations/"> cite fictitious papers</a>,<a href="https://www.nature.com/articles/s41598-023-41032-5"> hallucinate author names</a>, or<a href="https://theconversation.com/a-weird-phrase-is-plaguing-scientific-papers-and-we-traced-it-back-to-a-glitch-in-ai-training-data-254463"> concoct new jargon</a> that no one in the field has ever heard of. Granted, some human scientists may produce work that is borderline &#8220;slop.&#8221; Even so, that work can still benefit the peer review community, possibly as a nonexemplar that fosters future high-quality research. AI slop, on the other hand, provides no such benefit.</p><p>Ultimately, I see two possible reactions to this: either (a) we lower verification standards to accommodate the volume, or (b) we stop verifying entirely and accept whatever AI produces. Regardless of which course we take&#8212;both of which are undesirable&#8212;our collective judgment will surely deteriorate. When the public cannot do their due diligence and must rely on what seems credible, they are more susceptible to fraud and manipulation. Without intervention, the problem will worsen over time, as we currently lack sufficient quality monitoring and control.
With this deficiency come delayed (or nonexistent) market feedback loops that give way to an influx of both AI-generated slop and more overtly harmful deployments of the technology.</p><p>We are already witnessing AI abuse in domains where verification is costly:</p><ul><li><p><a href="https://care.org.uk/news/2024/07/top-politicians-were-victims-of-deepfake-pornography#:~:text=12%20July%202024,health%2C%20relationships%2C%20and%20employment.">Over thirty female British politicians</a> had deepfakes of them &#8220;leaked&#8221; on pornographic websites before the UK general election.</p></li><li><p>With cloning tools achieving an 85% voice match from as little as three seconds of audio, global losses from deepfake-enabled fraud<a href="https://deepstrike.io/blog/deepfake-statistics-2025"> exceeded $200 million</a> in Q1 2025 alone.</p></li><li><p>Medical researchers<a href="https://www.researchgate.net/publication/392966707_Assessing_the_System-Instruction_Vulnerabilities_of_Large_Language_Models_to_Malicious_Conversion_Into_Health_Disinformation_Chatbots"> tested</a> five major AI chatbots with medical queries and discovered that all of them provided misleading information and, in some cases, outright false claims.</p></li></ul><p>Beyond these direct breaches, AI also erodes our trust in the implicit &#8220;signals&#8221; embedded in communication. Prior to the LLM era, we wrote personalized happy birthday wishes to express care. In a similar vein, professors assigned essays because clear prose calls for a deep comprehension of the subject. Now that AI can replicate much of what we used to consider signals of individual effort, care, and thoughtfulness, these markers of commitment no longer carry weight. Worse, this may result in a self-fulfilling prophecy in which we reduce our own effort in anticipation of AI&#8217;s ability to replicate these gestures, hastening the erosion we are lamenting.</p><p>Some readers may propose using AI to filter out AI slop.
The idea is intriguing in principle, but current AI lacks the autonomy to make sound and reliable recommendations. Although the model can peruse the vast literature and verify some factual claims, it cannot, for example, guarantee that the main idea of the work is novel or that the lines of reasoning are cogent. If you ask your favorite garden-variety models to rate a piece of work (without further steering), they will gladly give it their highest rating, only to reverse that judgment the moment they are challenged. One may say that they lack the stable epistemology that is essential for verification in complex domains. I hope that we will soon discover a way to overcome the <a href="https://arxiv.org/abs/2302.00487">continual learning problem</a>, allowing them to gain cumulative expertise, but in the meantime, we should expect the quality of our information ecosystem to diminish before any AI-driven solution can emerge.</p><h4><strong>Parting Thoughts</strong></h4><p>Seeing how far AIs have come is actually quite thrilling. While it raises valid concerns, I believe the AI safety conversation largely overlooks the trends outlined in this article. When the current trust protocols break down, many organizations will likely resort to thorough monitoring or tracking, but how will they do so while protecting employees&#8217; privacy rights? In-person interactions (both business and casual) might reclaim their premium value at levels not seen since before the internet era. Cryptographic <a href="https://arxiv.org/abs/2504.03752v1">proof-of-humanity</a> systems, while imperfect, could also become standard safeguards against deepfake fraud.</p><p>Reputation and track record will become more valuable in many high-stakes domains, raising the barrier to entry for newcomers and decreasing social mobility.
I think we will need structured avenues that reward mentor-apprentice relationships to disperse opportunities away from the well-established goliaths of the field and allow individuals with talent but little experience to break in. This plan could work in tandem with some nonprofit, arXiv-inspired &#8220;reputation commons&#8221; where people can build verified portfolios without elite institutional affiliations. I must admit that I do not have complete solutions to these problems&#8212;and frankly, I am skeptical of anyone who claims they do. If anything, I hope this post serves as a reminder for everyone to think more deeply about this issue.</p><p><strong>We have unleashed the genie; the question now is whether we will build a world worth living in with it.</strong></p>]]></content:encoded></item><item><title><![CDATA[The Art of Wanting]]></title><description><![CDATA[Wanting the world to be a certain way is our privilege and our unique responsibility. Understanding what you really want is nontrivial, utterly difficult, essentially human.]]></description><link>https://mag.re-alignment.com/p/the-art-of-wanting</link><guid isPermaLink="false">https://mag.re-alignment.com/p/the-art-of-wanting</guid><dc:creator><![CDATA[[re:alignment]]]></dc:creator><pubDate>Thu, 22 Jan 2026 15:37:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1a5c0343-bfde-43db-a5e2-76afaa5d2d9a_900x741.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>David Bau </strong>is a former Google engineer and current computer science professor at Northeastern University whose research focuses on understanding how AI systems work. His <a href="http://baulab.info">lab</a> studies the structure and interpretation of deep networks. 
This post was originally published to David&#8217;s <a href="https://davidbau.com/">personal blog</a> on January 17, 2026.</em></p><div><hr></div><p>Some folks like Sam Altman define <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a> in terms of the usefulness of AI surpassing some human threshold, when we have <a href="https://openai.com/charter/">&#8220;a highly autonomous system that outperforms humans at most economically valuable work.&#8221;</a></p><p>The idea here is that AGI is about money. When AI outperforms humans at most economically valuable work, it could go collect all those dollars itself. By this measure, we might have already passed AGI a year or two ago....</p><h4>A Normal Revolution</h4><p>If you total up all the dollars, I could believe AI can already do most of the work people are paid for today. It is an economically transformative invention. But today AI is a &#8220;<a href="https://knightcolumbia.org/content/ai-as-normal-technology">normal technology</a> revolution&#8221;: a big deal, but similar to the industrial revolution, the invention of the wheel, or the taming of fire.</p><p>Before the industrial revolution, &#8220;labor&#8221; meant just that: <a href="https://humanprogress.org/trends/the-changing-nature-of-work/">most work people did was physical</a>. Hauling things around, pulling crops out of the earth, tending animals. A few artisans spun thread and created manufactured items. 
The creation of engines put an end to that way of life.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!VsUE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d7cc0e5-21a7-4b82-893f-b376ad190df4_1592x492.jpeg" width="1456" height="450" alt="Jacquard loom punch cards"><figcaption>Jacquard loom punch cards. Each card encoded a row of the weaving pattern, and stringing thousands of cards together automated the work of skilled weavers. The same idea of encoding instructions on cards evolved into paper tape, then magnetic storage, then modern computer programs. The punch card is the ancestor of software.</figcaption></figure></div><p>The <a href="https://en.wikipedia.org/wiki/Jacquard_machine">Jacquard loom</a> workers lived through it. Textile manufacturing was a major part of the economy in the early 1800s. Patterned fabric required two people per loom: a skilled weaver and a draw boy who <a href="https://www.youtube.com/watch?v=Is8Segc6fy8&amp;t=13s">sat alongside the machine, manually raising and lowering each warp thread</a> to create the pattern. The Jacquard loom, driven by punch cards, eliminated the draw boy entirely while <a href="https://aiwhisperer.org/blog/when-change-is-looming-what-the-weaving-loom-teaches-us-about-ai-s-5-year-horizon">increasing output more than twenty-fold</a>. Within a few decades, steam-powered looms replaced the skilled weaver too. Pattern designers remained important, a small and celebrated profession, but this tiny number of jobs did not create employment for the millions of displaced weavers. Those jobs were gone forever.</p><p>Yet the ability to specify patterns on punch cards turned out to be the seed of something larger: paper tape, magnetic storage, <a href="https://softwareimpact.bsa.org/">software</a>. Today millions of people make their living designing instructions for machines that have nothing to do with fabric. No one in 1800 could have imagined these jobs. The problems they solve didn&#8217;t exist yet.</p><p>In today&#8217;s world, <a href="https://www.bea.gov/data/gdp/gdp-industry">most dollars paid to people</a> are for &#8220;<a href="https://en.wikipedia.org/wiki/Economy_of_the_United_States">intellectual labor</a>&#8221;: hauling information around, digging for it, harvesting it, organizing it.
And the power of AI means that a lot of this work will now be automated. Maybe most of the dollars in the whole economy. The trajectory will be the same. New work will emerge that we do not yet recognize as work.</p><p>What will the new work be? I think it will be the work of wanting.</p><h4>The Work of Wanting</h4><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!teXL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca7ff7bd-679e-4d85-a53b-3e1d87c17529_1280x480.jpeg" width="1280" height="480" alt="Monet&#8217;s water garden at Giverny"><figcaption>Monet&#8217;s water garden at Giverny. He spent forty years designing this garden specifically so he could paint it, wanting a world into existence and then attending to it until he understood what he&#8217;d made.</figcaption></figure></div><p><a href="https://www.goodreads.com/book/show/187633.Art_and_Fear">Any artist will tell you</a> that art is not just about the craft of creating the artifact, but about understanding <a href="https://poets.org/text/letters-young-poet-first-letter">what you want to say</a>. That impulse to shape something and share it, that need to communicate. Understanding what you really want to say is nontrivial, utterly difficult, essentially human.</p><p>This isn&#8217;t just about high-minded art. Any engineer, any nurse, any office worker will tell you that beyond the mechanics of their job, they constantly face the question: what kind of world do they want to create? There is not just one way to do any complex thing. You can do it badly, without care and without love. Or you can be intentional about it: really understand what you&#8217;re going for, <a href="https://davidbau.com/archives/2025/12/16/vibe_coding.html">take care to define what it means to do it well</a>, pay attention to what has happened once it&#8217;s done, circle around to improve your vision and perfect the thing, then showcase it, explain it, share it.</p><p>This &#8220;high practice of wanting&#8221; is very hard to do well. It is nontrivial, utterly difficult, essentially human. And today we do not think of wanting as work. But it will be.</p><p>What does it mean to <a href="https://davidbau.com/archives/2017/06/28/volo_ergo_sum.html">want something well</a>?</p><p>It is not as simple as it sounds.
To truly want something, to want it in a way worthy of what it costs you to care, requires at least three things.</p><p>First: <strong>understand</strong> the implications of your choice. This is hard. A four-year-old who eats the whole bag of candy does not really know what he wants. He wants the taste; he does not want the stomachache. This only gets harder as the world becomes more complex. AI can help us model implications we would never trace on our own: the paths forward, the paths not taken, why we might want one over another. But it is not enough for AI to know the implications. To understand and navigate the choices facing us, we need to <a href="https://www.pnas.org/doi/10.1073/pnas.2406675122">know what AI knows</a>.</p><p>Second: <strong>specify</strong> your choice. Even when you think you know what you want, it is easy to misspecify or underspecify it. Consider: I want to get from Boston to LA by tomorrow, as cheaply as possible. I don&#8217;t have much money; I don&#8217;t need a comfortable seat; just make it cheap. Well: the cheapest option is to slice you up and send you as pieces of meat by overnight parcel. Careful what you ask for. Underspecification is emerging as <a href="https://arxiv.org/abs/2011.03395">a serious safety issue</a> in AI.</p><p>Third: <strong>recognize</strong> your choice. After you act, you must remain attentive. Did you get what you had in mind? In the AI era, our activities will become far more complex than we are accustomed to. <a href="https://davidbau.com/archives/2025/12/16/vibe_coding.html">In my own experience vibe coding</a>, I found I could manage roughly twenty times the complexity for the same amount of attention and effort. At that scale, thorough understanding and precise specification become nearly impossible up front. Things will go wrong.
You need to notice, stop, and repeat the process: improving your understanding, honing your specification, trying again.</p><p>The elements of wanting: understanding, specifying, recognizing; these are what people will do in the era of AGI. These are the jobs of the future. Wanting well means taking responsibility for your choices.</p><h4>Who Gets to Decide</h4><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Wtle!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01d24daa-0308-461c-814c-5e535288f920_1218x384.jpeg" width="1218" height="384" alt="" loading="lazy"><figcaption>A houseplant reaching toward the light. Phototropism is a goal-seeking behavior, but the plant has no awareness of what it is doing. We are surrounded by goal-seeking systems: thermostats, algorithms, markets, organisms. None of them bear responsibility for their goals. Only humans do.</figcaption></figure></div><p>But there is a deeper question around AGI, which is: once thinking is all automated, is there a need for people at all anymore?</p><p>At the root of this question is not whether AGI can do most of the thinking that is done <em>today</em>, but whether AGI is <a href="https://en.wikipedia.org/wiki/Recursive_self-improvement">recursive</a>: if AGI is so smart that no matter what humans might choose to spend their time doing in the <em>future </em>(including the work of improving AI), AI will immediately be better at that too. That AGI is so good at learning, predicting, and imitating humans that there is no room left for people to be needed for anything.</p><p>What if AGI can even do all the &#8220;wanting&#8221; for humans?</p><p>That is a stronger definition of AGI. I do not think we are there, and I do not think we are close. We could delude ourselves into pretending the AGI can do the wanting. But it cannot.</p><p>Why not? Because recursive AGI isn&#8217;t really about capability at all. It&#8217;s about who gets to decide what matters.</p><p>A plant wants to point at the sun. We can describe that mechanistically: <a href="https://en.wikipedia.org/wiki/Phototropism">phototropism</a>, the auxin gradient, differential cell growth. The plant &#8220;wants&#8221; sunlight the way a thermostat &#8220;wants&#8221; a certain temperature. We don&#8217;t consult the houseplant about urban planning.</p><p>That&#8217;s not because the plant is bad at identifying goals. It is because the plant has no <a href="https://en.wikipedia.org/wiki/Normativity">normative</a> authority.
Its wants aren&#8217;t the kind of wants that count.</p><p>AI is the same. Even if AI becomes impossibly good at identifying goals, predicting outcomes, recommending actions, it will always be like the plant growing toward the sun. A process that optimizes, not a subject that decides. AI will get better at anticipating human wants, at guessing our goals, and this will make it more useful. Yet as humanlike as AI becomes, its wants (if we can call them that) aren&#8217;t normative. Not in our human world.</p><p>Wanting the world to be a certain way is our privilege and our unique responsibility. We are the ones with skin in the game: the ones who live and die, who suffer and flourish, whose children inherit whatever world we leave behind. To be the judge, the arbiter, the decider: this is what it means to be human in relation to our tools.</p><p>We are far from being able to offload that responsibility to AI. And we have a choice in how we build it.</p><p>We can design AI that dulls our ability to want, to understand, to choose. Or we can design AI that sharpens our ability to comprehend a complex world, and to take responsibility for the things we want.</p><p>This is the most important choice we face in AI today.</p>]]></content:encoded></item><item><title><![CDATA[What are we building for? 
Artificial intelligence, alignment, and a future worth wanting]]></title><description><![CDATA[Shape the conversation around a life well-lived]]></description><link>https://mag.re-alignment.com/p/what-are-we-building-for-artificial</link><guid isPermaLink="false">https://mag.re-alignment.com/p/what-are-we-building-for-artificial</guid><dc:creator><![CDATA[[re:alignment]]]></dc:creator><pubDate>Tue, 13 Jan 2026 15:49:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!AZZj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753cb2c7-aa86-4028-b75a-696bfd21210a_4000x3000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This essay marks the beginning of our new publication, [re:alignment], a venue for a broader, more humanistic conversation about artificial intelligence and society.</em></p><p><em>We are a coalition of technologists, academics, artists, and workers, united by a belief that this conversation must be wider, deeper, and more rigorous. Through analytic essays, personal interviews, and literary and visual art, we will critique orthodoxies and collectively envision what AI development could instead look like&#8212;all while equipping a broader audience to engage meaningfully in these conversations and build a future worth wanting.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!AZZj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753cb2c7-aa86-4028-b75a-696bfd21210a_4000x3000.jpeg" width="1456" height="1092" alt="" fetchpriority="high"></figure></div><p>Drowning in lines of code meant to train networks of artificial neurons, code that will one day give rise to an intelligence surpassing my own, I wonder: Have we lost sight of what we&#8217;re building this for? Who gets to say? I fear that despite equipping ourselves with something so capable, we&#8217;re risking things far more valuable. In folktales, the wise protagonist is the one who never loses sight of what really matters. Today, our story concerns the most powerful technology in history. Have we lost sight?</p><p>Since beginning my graduate studies in machine learning, I&#8217;ve been carrying this feeling. A feeling which, at first, I couldn&#8217;t quite place. Was it resentment? Contempt? Exasperation?
Resignation?</p><p>The feeling, I&#8217;ve realized, was a sort of helplessness, and with it, paralysis. I know I&#8217;m not the only one who feels this way. Increasingly, the sentiment seems to be that great games are being played at great scale and speed, which will reshape our lives in ways we&#8217;re powerless to alter. Ways which may turn out to be for the worst.</p><h4>But helplessness has an antidote: deliberate action. [re:alignment] is our attempt at such action&#8212;action that will help deliver the power to shape our future.</h4><p>Current discourse is broken. We are having narrow, siloed, and ill-informed conversations about a technology even experts struggle to understand, a technology with profound normative implications that go largely unaddressed. The structures in which we are developing it&#8212;institutional, political, economic&#8212;will not, if left untouched, deliver outcomes worth wanting.</p><p>[re:alignment] is an offshoot of broader work within <strong>AI alignment</strong>: how to develop AI such that it produces good outcomes in the real world. While simple enough to define and understand in principle, alignment has proven to be a Pandora&#8217;s box, and remains one of the most important research questions of our lifetimes.</p><p>Much of the AI industry, however, sees the alignment problem as a <em>technical </em>question. Consequently, most discussion and research live in this category: how do we make AI systems reliably execute what their developers intend? How do we ensure that they follow instructions, avoid unintended behaviors, and operate within specified constraints?</p><p>But this notion of <strong>technical alignment</strong> is just one piece of the puzzle. With it come intense normative questions of alignment, which are often ignored or addressed exclusively by a small cadre of like-minded technologists.</p><p>So how do we make sure AI is actually a good thing? 
What does it need to be like&#8212;what values should it hold, what ways should it act, what powers should it have&#8212;so that it is conducive to our flourishing? These are not questions we can ignore, nor are they questions technologists can answer alone. So who gets to decide?</p><p>And who decides the conditions of its creation? The economic, political, and institutional structures in which AI is now incubating&#8212;are they the right ones? Is truly beneficial AI likely to emerge from them? If not these structures, what else?</p><p>To me, these are the biggest questions in AI today, and they will remain even if we figure out how to <em>technically</em> align AI. But discussion of them is scant and mostly happens among a coterie of academics and technologists, making decisions that will affect everyone. Yes, these systems are complex, and yes, experts should continue to play a significant role, but complexity doesn&#8217;t exempt a technology from broader deliberation, especially one that will reshape how all of us live.</p><h4>These conversations further suffer from dominant frameworks such as treating the future as a quantifiable optimization problem of &#8220;maximizing good.&#8221; Somewhere along the way, though, we&#8217;ve lost track of what we&#8217;re actually optimizing for.</h4><p>There is a familiar pattern here: we identify something that matters and create metrics to track it. But over time, the metrics become the goal. We optimize for the targets and forget what the targets were supposed to be for. Extending human life, curing disease, eradicating mindless work&#8212;these are not ends in themselves. They are instrumental goods, valuable because of what they make possible: <em>a life that feels worth living</em>. Intrinsic goods&#8212;meaning, beauty, connection, the texture of a life actually lived&#8212;are what we are ultimately trying to protect and enable. 
But they are harder to measure, and so they fade from view.</p><h4>When the instrumental is mistaken for the intrinsic, we end up building toward a future that hits every benchmark while entirely missing the point.</h4><p>We shouldn&#8217;t abandon technical progress or reject the genuine goods that this progress might deliver. Rather, we need to remember why we are building it in the first place. We need to keep the intrinsic goods in view and as the orienting purpose of the whole endeavor.</p><h4>These are big fucking questions, which is why we need to work together in addressing them. But we are increasingly divided into AI insiders and outsiders.</h4><p>Present social structures don&#8217;t allow for meaningful participation from those outside the technological cadre. And many people don&#8217;t understand AI and alignment well enough to contribute to the conversation, even if they could. Thus, each side of this disconnect mutually reinforces the other: exclusion breeds unfamiliarity, while unfamiliarity justifies exclusion. The cycle continues, the conversation narrows, and the decisions get made without us.</p><p>This cycle is exactly what [re:alignment] exists to break. We will question the structures at hand and broaden the conversation, equipping a wider group of people to shape the development of artificial intelligence. We will open lines of communication with those who currently hold power over these systems, lines through which that power might be redistributed.</p><h4>I do also understand the impulse to opt out, to disconnect. I feel it myself sometimes.</h4><p>There is a version of this that feels like integrity&#8212;a refusal to participate in building a future we didn&#8217;t ask for. Going offline, rejecting the technology entirely, can seem like the cleanest way to register dissent. 
The path of least resistance when something feels this overwhelming or objectionable is to simply walk away.</p><p>But, if everyone who thinks like this exits the conversation, we cede the future to those who remain. And those who remain will either be people who don&#8217;t share our concerns, or people whose perspectives, no matter how thoughtful or well-intentioned, remain narrow. The only way to have any chance at a future worth living for is to be active participants in its construction, even when engagement feels compromising, even when the current trajectory feels repugnant. This is the tension we seek to address: transforming withdrawal into action that might actually change something.</p><h4>But what form should this take?</h4><p>Meaningful participation requires more than policy briefs and technical papers. The challenge we face is to actively envision humanistic possibilities for AI, and logical analysis alone will not suffice. [re:alignment] will therefore feature literary and visual art aimed at that challenge and to grasp an epistemology otherwise out of reach.</p><p>There are things we can know that we cannot argue our way to. Art accesses knowledge that logical argument alone cannot, knowledge that is core to what being and flourishing actually feel like. </p><h4>If we want to understand what makes a life worth living, and build technology around <em>that</em>, then art is not a supplement to this inquiry, but a necessary means by which we may come closer to the answer.</h4><p>Art also serves as a reminder of what lies beyond. There is something about sustained engagement with technical systems&#8212;lines of code, optimization targets, capability benchmarks&#8212;that can make us forget what we are trying to protect in the first place. Our eyes glaze over and the texture of a particular life, the things that make existence meaningful, not measurable, fade from our view. <em>Art interrupts that forgetting</em>. 
It recenters our irreducible humanity and insists that it not be optimized away.</p><h4>What are we fighting for? We&#8217;re fighting for a future where AI delivers on its genuine promise without sliding into equally likely dystopias.</h4><p>We don&#8217;t want the narcotized contentment of <em>Brave New World</em>, where pleasure morphs into  pacification. We don&#8217;t want the consumptive atrophy of <em>WALL-E</em>, where convenience replaces purpose. Yet we see it coming&#8212;each of these futures draws closer every day we remain on the sidelines. The only way to build something different is to actively participate in the creation of the future we want.</p><p>What if instead we built a future where we are free of constraints on our ability to pursue what we think meaningful? A future where we seek education for the good of knowledge in itself, not to increase our job prospects or economic utility. A future where we are able to develop deep local communities, not atomized metaversal escapes. A future where we are stewards of nature, not conquerors of exoplanets because we destroyed the one best suited for us. A future where we are all artists and musicians, where the previously marginalized share fully in the abundance we&#8217;ve created.</p><p>We&#8217;re fighting for this future by engaging in hard yet necessary conversations and by equipping people with the knowledge to shape the development of AI. We cannot rely on others to create this future for us.</p><h4>AI will permeate nearly every aspect of our lives&#8212;but we get to choose which, in what ways, why, and how, <em>if</em> we take action together.</h4><p>We are seeking contributors from all backgrounds&#8212;researchers, philosophers, historians, artists, writers, workers&#8212;anyone who wants to be part of this conversation. 
We want individuals willing to take risks, who seek to change others&#8217; minds as well as their own, writers who earn their conclusions through rigorous argument but can do so accessibly. We want to grapple with the hard questions of alignment and resist easy answers. Building this future won&#8217;t be easy, but we have to try.</p><p><strong>If you have something to say, we want to hear it. If you have art to create, we want to see it. If you want to get involved in any way: as a writer, strategist, editor, or artist, reach out. </strong></p><p><strong><a href="mailto:us@re-alignment.com">us@re-alignment.com</a></strong></p><div><hr></div><p><strong>Some of the subject areas we&#8217;re eager to explore with you:</strong></p><ul><li><p>Technical, value, and structural alignment</p></li><li><p>Human flourishing and wellbeing</p></li><li><p>Meaning, purpose, and connection</p></li><li><p>Labor and the economy</p></li><li><p>Governance and accountability</p></li><li><p>Power and inequality</p></li><li><p>Art and culture</p></li><li><p>Current trajectories and historical parallels</p></li><li><p>Environmentalism and sustainability</p></li></ul><p><strong>Some of the questions we seek to address in our work together:</strong></p><p>On alignment:</p><ul><li><p>What does alignment really mean&#8212;technically, ethically, structurally&#8212;and how do these dimensions interact?</p></li></ul><ul><li><p>Where are we on the path towards alignment?</p></li><li><p>Who should be entrusted with the normative questions underlying AI development, and through what processes should they be addressed?</p></li></ul><p>On flourishing and wellbeing:</p><ul><li><p>What does human flourishing look like? 
What does it feel like?</p></li><li><p>What would AI that genuinely enhances people&#8217;s lives look like&#8212;and what kind of society would that require?</p></li><li><p>How do we measure progress toward flourishing without letting the metrics become the goal?</p></li></ul><p>On meaning, purpose, and connection:</p><ul><li><p>How is AI changing how we relate to purpose and meaning in our lives?</p></li><li><p>How might misaligned AI weaken our connection with others? How could it instead be a mechanism to deepen human connection?</p></li></ul><p>On labor and the economy:</p><ul><li><p>What is the actual value of work? How might our relationship to labor change if AI can and does much of what we do?</p></li><li><p>As AI displaces human labor, how do we ensure people can still live, and live well?</p></li><li><p>How could AI actually serve to de-alienate labor when economic utility ceases to be the primary driver?</p></li></ul><p>On governance and accountability:</p><ul><li><p>How do present institutional, economic, and political structures shape AI development? What alternative structures might better serve aligned AI?</p></li><li><p>How should AI be regulated, and by whom?</p></li><li><p>When AI systems cause harm, who is responsible? How do we distribute accountability in increasingly automated systems?</p></li></ul><p>On power and inequality:</p><ul><li><p>Is AI more likely to lift the marginalized or to automate their exclusion?</p></li><li><p>Who benefits from AI&#8217;s current trajectory? Is it likely to lead to widespread flourishing or further the concentration of resources and power?</p></li><li><p>Will AI democratize opportunity or entrench existing inequalities? 
What would it take to ensure its benefits are broadly shared?</p></li></ul><p>On art and culture:</p><ul><li><p>What role do art and imagination play in shaping technological futures?</p></li><li><p>What is lost in using AI for things like writing and art?</p></li><li><p>How does AI serve as a homogenizing force in society? Can art serve as a site of resistance to this?</p></li><li><p>What happens to culture when the digital world is saturated with AI-generated content?</p></li></ul><p>On our trajectory and historical precedents:</p><ul><li><p>How do we know if we&#8217;re on the right path? What tradeoffs are being made and which are we willing to accept?</p></li><li><p>What can we learn from past technological developments? Where has AI&#8217;s trajectory mirrored these, and where has it diverged?</p></li></ul><p>On environmentalism and sustainability:</p><ul><li><p>How does AI development relate to ecological sustainability&#8212;both as a resource-intensive technology and as a potential tool for environmental solutions?</p></li><li><p>What would it mean to develop AI as stewards of the natural world rather than as its exploiters?</p></li><li><p>Can AI help us become better inhabitants of this planet, or does it accelerate our estrangement from it?</p></li></ul>]]></content:encoded></item></channel></rss>