Cheaper Than Human
On the impending collapse of credibility in the AI slop epidemic
I have been thinking a lot about how human labor will change if AI can (and does) do many of our jobs. If you have been following the AI discourse, you may have encountered facetious-yet-anxious rhetoric about a coming “permanent underclass.” While intended as dark humor, it reveals a genuine fear that AGI will not—as Marx or Keynes might predict—liberate humans from alienation and usher in utopia, but will instead cause mass immiseration by collapsing the value of human labor.
This speculation may appear overly dystopian, but given that we are dealing with a pre-paradigmatic field, we should not outright dismiss it. After all, by taking the idea seriously, we may discover some compelling economic reasons to prepare for a systematic hollowing out of the labor market.
To understand how, it may be useful to review some key dynamics in the capitalist mode of production. Every business enterprise requires two productive forces: labor and capital (e.g., tools, equipment, raw materials, land, and so on). Entrepreneurs acting as self-interested, rational profit-maximizers will pay their employees as little as possible—limited only by the relative bargaining power of the laborers. At minimum, wages must cover workers’ means of subsistence; otherwise, workers will exit that labor market, tightening the labor supply and strengthening the bargaining position of those who remain.
However, decades of declining union membership, weakened labor protections, and globalized supply chains have eroded workers’ bargaining power to the point where many in the US are barely surviving (not to mention those in the global south, whose labor is systematically exploited and subject to even fewer protections). What happens, then, when a computer can perform the same task as a human at lower cost? The subsistence constraint turns out to be more elastic than we expected. Think: the cost of running Claude Max vs. the cost of hiring an intern to do routine work. If AI can perform the duties of a particular role just as well as humans, there is no incentive to pay human workers, and human wages plummet. As AI capabilities advance, such displacement will spread across sectors. At the extreme, one can envision AI performing every task at a lower cost than humans, leaving no human workers in the market. The reserve army of labor expands until it encompasses the entirety of the working class.
There are differing views on the scope of this displacement. Some jobs demanding prestige (e.g., consultant, lawyer) or interpersonal skills (e.g., teaching, nursing, elder care) might be more resistant to automation than entry-level white-collar work. As of this writing, AI-powered robots are still having difficulty using their fine motor skills to perform complex physical tasks, like construction or plumbing, but there is no reason to assume that this stagnation will last forever. Depending on who you hang out with, you may also get different perspectives on the time scale. An ardent techno-optimist (like this, this, and this) will point to the rapid pace of AI progress, which has repeatedly outpaced expert predictions, whereas a thoughtful intellectual (like this and this) may contend that underlying economic frictions, such as regulatory barriers or the inertia of our large and lumbering infrastructure, can stymie adoption—heck, we might even be nearing the peak of the Gartner hype cycle before the inevitable collapse.
Regardless of which camp(s) you belong to, the model of AI as an incremental labor-replacing mechanism is partially misleading: it falsely assumes that for the substitution to occur, AI must outperform an average human at that particular task. The growing popularity of LiveBench, LMArena, and other benchmarking tools exemplifies this philosophy. Benchmarking is a rewarding field of study, but I do not believe a human-vs.-AI evaluation is the right proxy for the technology’s impending impact on society. Framing the labor problem as one that will only arise once AI reaches human-level capabilities misses the point and fails to acknowledge the socioeconomic harms that are already underway.
We are bad at diagnosing disruption
Diffusion of technology is neither linear nor incremental. In the early days of the internet, Clifford Stoll wrote in Newsweek that “no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.” Robert Metcalfe, the inventor of Ethernet, likewise predicted in 1995 that the internet would catastrophically collapse the following year, arguing that online advertising could not match TV’s measurement standards.
These experts, despite their deep domain knowledge and noblest intentions, were overly preoccupied with benchmarking the internet against existing media and institutions. What they failed to see was how the World Wide Web would redefine the norms of society. Shopping went online, work turned remote, and communication spread around the globe. A new class of celebrities, influencers, bloggers, and vloggers emerged, unencumbered by the gatekeepers of the past.
We are making the same mistake in AI discourse: AI does not need to be of human quality to replace human labor. Today’s AI is disruptive enough (certainly more so than the internet) that we ought to acknowledge its systematic second-order impact, which—I repeat—does not necessitate AI having an absolute advantage over humans in a specific skill. Likewise, we should expect some social disruption when decent AI becomes accessible and affordable enough to deploy on a large scale.
The cost of “decent enough” content
The current deluge of AI-generated content is a direct result of this dynamic. Because LLMs are designed to supply the least controversial response to a prompt, much of the resulting content is poor and uninspiring. But that hardly matters when one can McDonaldize vast amounts of formulaic, dopamine-hacking content at comparatively low cost. When the cost of doing something falls, more people want to capitalize on it. Platforms that see potential in this content model also incentivize it through ad revenue and algorithmic promotion, causing enshittified slop to supplant thoughtful, high-quality craftsmanship.
The damage extends far beyond the copious amounts of slop. LLMs do not understand language the way humans do: they convert tokens into numerical embeddings in a high-dimensional space, then predict the most likely next token from the context of the previous ones (and a pinch of prejudiced reinforcement learning). The output reads like English, but it remains a cheap ersatz that lacks what one might call a “human touch.” As millions of people read LLM messages daily, we increasingly mimic the lexicon used by these models.
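To make the mechanism above concrete, here is a deliberately toy sketch of the final step of next-token prediction: scores over a candidate vocabulary are squashed into probabilities by a softmax, and greedy decoding picks the single most likely token. The vocabulary and the "logit" scores are invented for illustration; a real model would produce them from billions of parameters.

```python
import math

# Toy vocabulary and hand-picked "logits": stand-ins for the scores a real
# transformer would assign to each candidate next token. All numbers here
# are invented for illustration only.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]

def softmax(xs):
    # Subtract the max for numerical stability, then normalize exponentials
    # into a probability distribution that sums to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: always emit the single most probable token. This bias
# toward the highest-probability continuation is one reason the output
# gravitates toward the safest, least controversial phrasing.
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(next_token)  # → the
```

In practice, deployed models sample from this distribution (with temperature, top-p, and similar knobs) rather than always taking the argmax, but the underlying objective — score every candidate token and favor the likeliest — is the same.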
Since language shapes our perception of reality, we should also anticipate a dramatic decline in our quality of thought. There are several criteria for judging whether an idea is promising, but I believe the judgment ultimately rests on intuition and taste, which are cultivated through deliberate engagement with the field. An entrepreneur with a keen sense of taste, for example, can forecast the future several years in advance, but that ability depends on having seen enough market cycles, failed products, and user behaviors to recognize patterns before they fully emerge. Likewise, having good “research taste” means knowing which research trajectories will have the most impact and are worth working on.
Taste cannot be taught, but it can be developed through tight feedback loops that help you learn to discern good from bad. Imagine now that all you see is the LLM’s response confirming your flimsy theories, offering proof, and baiting you into believing you have made a groundbreaking discovery when you have not. Because big tech companies prioritize user satisfaction over anything else, they have little incentive to reduce the models’ sycophancy. With this feedback loop, we are increasingly delegating critical thinking to the LLM, which puts us at risk of pursuing more underwhelming, mediocre-at-best ideas than we would have otherwise.
Cheap to create, but costly to verify
Another problem is that while AI can reduce the cost of generation, it elevates the cost of verification in many domains. LLMs can generate a code snippet or essay draft in a matter of seconds, but it takes a lot of time and effort to assess whether the code handles edge cases or whether the draft is technically cohesive. AIs are fallible—this we all know. The problem is that AI errors differ fundamentally from human errors, making them much more difficult to detect.
In a review of more than 4,000 research articles accepted for NeurIPS 2025, GPTZero found 53 publications with AI-hallucinated citations that managed to bypass three or more expert reviewers. Some errors are trickier to spot than others, but because the space of possible failure modes is undefined, such oversights are likely to accumulate over time. A distracted writer, for example, may forget to cite a few sources, but they will never cite fictitious papers, hallucinate author names, or concoct jargon that no one in the field has ever heard of. Granted, some human scientists may produce work that is borderline “slop.” Even so, that work still benefits the peer review community, perhaps as a nonexemplar that fosters future high-quality research. AI slop offers no such benefit.
Ultimately, I see two possible reactions: either (a) we lower verification standards to accommodate the volume, or (b) we stop verifying entirely and accept whatever AI produces. Whichever course we take—both are undesirable—our collective judgment will deteriorate. When the public cannot do their due diligence and must rely on what merely seems credible, they are more susceptible to fraud and manipulation. Without intervention, the problem will worsen over time, as we currently lack sufficient quality monitoring and control. With this deficiency come delayed (or nonexistent) market feedback loops that give way to an influx of both AI-generated slop and more overtly harmful deployments of the technology.
We are already witnessing AI abuse in domains where verification is costly:
Over thirty female British politicians had their deepfakes “leaked” on pornographic websites before the UK general election.
With tools achieving an 85% voice match from as little as three seconds of audio, global losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone.
Medical researchers tested five major AI chatbots with medical queries and discovered that all of them provided misleading information and, in some cases, outright false claims.
Beyond these direct harms, AI also erodes our trust in the costly “signals” we rely on in communication. Before the LLM era, we wrote personalized birthday wishes to express care. In a similar vein, professors assigned essays because clear prose demands a deep comprehension of the subject. Now that AI can replicate much of what we used to treat as signals of individual effort, care, and thoughtfulness, these markers of commitment no longer carry weight. Worse, this may become a self-fulfilling prophecy: we reduce our own effort in anticipation of AI’s ability to replicate these gestures, hastening the very erosion we lament.
Some readers may propose using AI to filter out AI slop. While intriguing in principle, current AI lacks the autonomy to make sound, reliable recommendations. Although a model can peruse vast literature and verify some factual claims, it cannot, for example, guarantee that the main idea of a work is novel or that its lines of reasoning are cogent. Ask your favorite garden-variety model to rate a piece of work (without further steering), and it will gladly give it a top rating, then flip its verdict the moment you push back. One might say these models lack the stable epistemology essential for verification in complex domains. I hope we will soon overcome the continual learning problem, allowing models to accumulate expertise, but in the meantime, we should expect the quality of our information ecosystem to diminish before any AI-driven solution emerges.
Parting Thoughts
Seeing how far AIs have come is actually quite thrilling. The AI safety conversation raises valid concerns, yet I believe it largely overlooks the trends outlined in this article. When the current trust protocols break down, many organizations will likely resort to thorough monitoring or tracking, but how will they do so while protecting employees’ privacy rights? In-person interactions (both business and casual) might reclaim their premium value at levels not seen since before the internet era. Cryptographic proof-of-humanity systems, while imperfect, could also become standard safeguards against deepfake fraud.
Reputation and track record will become more valuable in many high-stakes domains, raising the barrier to entry for newcomers and decreasing social mobility. I think we will need structured avenues that reward mentor-apprentice relationships to disperse opportunities away from the well-established goliaths of the field and allow individuals with talent but little experience to break in. This plan could work in tandem with some nonprofit, arXiv-inspired “reputation commons” where people can build verified portfolios without elite institutional affiliations. I must admit that I do not have complete solutions to these problems—and frankly, I am skeptical of anyone who claims they do. If anything, I hope this post serves as a reminder for everyone to think more deeply about this issue.
We have unleashed the genie; the question now is whether we will build a world worth living in with it.