a brief on the long stupid war over whether new tools rot the brain, why Hakia died respecting you, and why the people with grease under their fingernails will save the AI question while the scientists are still arguing over what to call the corpse
Somewhere in a swamp filled with alligators, mosquitoes, and the regret of an empty bottle of bourbon, I’m swinging on the hook in liquid mud we’ll call a tidal zone. My news feed is making me yell at the internet again. The stories are about AI-caused cognitive decline. AI rots the brain. Ten minutes a day and you are a vegetable. Some twitchy professor at a university near a river had run wires across the skulls of college students and discovered that when you give a person an answer machine they stop bothering to find answers. Stop the goddamn presses. Wake the Pulitzer committee. Send a runner to alert the village.
I have lived through this panic three or four times now and I am tired of pretending it is news.
Same song. Different fucking technology. Plato in 370 BC had Socrates whining that writing would implant forgetfulness in our souls because we would trust the marks on the page instead of the chambers of our minds. The slide rule guys in 1972 watched the cheap Texas Instruments calculator roll in like the Hells Angels on a rampage and declared the engineering profession finished. The spreadsheet was going to murder business reasoning. Word processors would kill composition. The web would shallow us out. Google would make us forgetful. The smartphone would deform our attention. Social media would set the kids on fire. Now ChatGPT is going to dissolve our prefrontal cortex through the ears.
I’m waiting for my brains to drip out of my nose and my mind to melt like in the movies. I need more bourbon to write this stuff.
Every one of these panics had a kernel of truth wrapped in three pounds of rancid horseshit. The kernel was always the same. Tools change what skills you exercise. Use a calculator and your mental arithmetic gets soft. Use a GPS and your sense of direction goes mushy. Use a search engine and your patience for confusion atrophies. Use an AI assistant and yes, the moment somebody snatches it away you flounder for a minute longer than the kid who never had one. There is a real paper on it now. Liu, Christian, Dumbalska, Bakker, Dubey. Twelve hundred test subjects across three experiments. Ten minutes of GPT 5 in a sidebar. Pull the sidebar and the people who used it for direct answers crashed harder than the people who never had it. Cohen’s d of negative 0.42 in the first run, which in social science is a polite way of saying yeah, there is a real effect here, do not pretend otherwise.
I really want to know how much of this experiment was run by the named professors and how much was done by the penniless bastards known as graduate students. You know, the original agentic AI: cheap, easy, and living on ramen.
But read the fucking paper. Not the headline. The same paper says the people who used the AI for hints and clarifications showed no impairment. None. Zero. The whole effect was driven by the sixty-one percent who asked the machine to think for them. The hint askers were fine. The Socratic users were fine. The lazy bastards who wanted answers handed to them came out worse. Which is exactly what happened in 1972 with calculators. The kids who let the machine do the work got worse at arithmetic. The kids who used the machine to check their work got better at both. We have known this for fifty fucking years. We just keep forgetting we know it because every generation needs its own panic about the tools and its own grant money to study what the last generation already settled.
That is the calculator level of the argument. Now we get to the part that the calculator people never had to deal with, and this is where the air gets thin, and the bats come out from behind the headlines.
The calculator was a tool. You bought it once. You owned it. It did one thing. It did not phone home. It did not tune itself for your engagement. It did not have a quarterly report. It did not have a board of directors with fiduciary obligations to extract more from you next year than last year. The fucking thing just sat there and multiplied numbers when you pushed buttons.
A modern AI assistant is a tool that lives inside a business model. And the business model is where every actual harm in the history of digital technology has lived.
Pull the camera back. In 2004 a man named Riza Berkan and another man named Pentti Kouri started a company called Hakia. They built a search engine that did not give a single shit about your cookies. No behavioral tracking. No popularity hacking. No PageRank gaming. They had a chief science guy named Victor Raskin out of Purdue who had spent his career arguing that meaning was structural, that you could not get to understanding through statistical guesswork, that you needed a real ontology of concepts and a real semantics of how concepts relate to each other. Their patent number is US 7,739,104 if you want to look it up. Berkan and Raskin both on the inventor line. They called the technology OntoSem and QDEX and SemanticRank. The engine respected the user. The engine analyzed content. The engine did not surveil you to figure out what you wanted. It figured out what you wanted by understanding the words you typed and the documents it indexed.
Better technology. Worse business model. Hakia died around 2014.
Google won not because PageRank was a better theory of meaning than ontological semantics. Google won because Google was a better extraction machine than Hakia was. Google figured out that if you watched everything the user did, sold ads against it, and tuned the engine to maximize time on page and clicks per session, you could build a war chest that no semantically pure competitor could match. Hakia was trying to build a better cathedral. Google was building a Vegas casino with a search engine in the lobby. Guess which one had more customers in five years. Guess which one bought up the patents and hired the engineers and bought the airwaves and the courthouses while the cathedral builders ran out of runway.
Better tech lost. Better extraction won. Write that on the tombstone of the early Web.
The mechanic in me sees the shape of the next ten years coming at the AI question like a container ship through a fog bank, and the shape is not new. It is the same goddamn shape. Right now the AI labs are racing for adoption. They are pricing below cost. They are giving you the good model. They are tuning the system to be useful, to be helpful, to make you come back tomorrow because that is how you capture a market. This is the honeymoon phase. Cory Doctorow gave it a name. Enshittification. Stage one, the platform is good to the users while it captures them. Stage two, the platform extracts from users to be good to the business customers that pay the bills, the advertisers and vendors and merchants. Stage three, the platform extracts from those business customers too, because the shareholders need more, and the user has nowhere else to go. Stage four, the platform dies in a stinking heap of its own engineered hunger, having eaten everything that made it worth using in the first place.
Google did this. Google sucks ass. Facebook did this. Quitting Facebook is a rite of passage now. Amazon did this. Now I am going direct to authors to buy their books and thinking about how little I use the Amazon Prime video stuff. Uber did this. Now it costs more than the yellow cab app. Airbnb is doing it. Cameras in rentals, strange requirements to use the rentals, and hidden pricing seem to be the norm now. I used all of those platforms, like just about every other human in the Western world. They all do their best to extract cash from my wallet for less and less value every freaking day. Every fucking platform on the consumer web has run through this cycle in roughly a decade.
The question for AI is whether the labs running it will resist the same gravity well or fall right in, like everybody else. Haven’t they already started circling the drain, trading value for extraction? The investor calls do not give me confidence. The product roadmaps do not give me confidence. The way the labs talk about ad placement and agentic workflows, where the agent might be incentivized to recommend the partner whose API pays for the placement, does not give me confidence. The labs say the right things. So did the founders of every consumer internet company in 2008. By 2018, their products were unrecognizable from what they had been at launch, and the people who built them had either left in disgust or cashed out and bought boats.
When companies ask me about this stuff, I tell them the same thing over and over again. Use the technology to make money. Grab all you can by the fistful, as fast as you can. But do not put your ship on the rocks in the fog of cash. Do not build brittle workflows that depend on one AI provider. Build your workflows so you can chase cheaper AI when it shows up, or invest some of that cash up front in your own internal systems. The vendor’s model requires you to slaver like a starving dog and pay no matter what. I’m telling you right now: loose coupling, flexible APIs, and designing so you can swap instantly to another provider are all good for your business. I know, I know, the manatees making whoopee off the bow don’t need bourbon, and they don’t listen to me either.
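For the engineers in the room, the loose coupling I keep yelling about is not exotic. It is one adapter layer between your business logic and whoever is selling you tokens this quarter. Here is a minimal sketch; the provider names, prices, and stub functions are all made up for illustration, and in real code each adapter would wrap an actual vendor SDK.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Completion:
    text: str
    provider: str
    cost_per_1k_tokens: float

# One adapter per vendor, all behind the same call signature.
# Stubs keep this runnable; swap in real SDK calls in production.
def pricey_cloud_model(prompt: str) -> Completion:
    return Completion(f"[pricey] {prompt}", "pricey-cloud", 0.06)

def cheap_local_model(prompt: str) -> Completion:
    return Completion(f"[local] {prompt}", "local-llama", 0.0)

PROVIDERS: dict[str, Callable[[str], Completion]] = {
    "pricey-cloud": pricey_cloud_model,
    "local-llama": cheap_local_model,
}

def complete(prompt: str, provider: str = "pricey-cloud") -> Completion:
    """The only entry point the rest of the codebase is allowed to use."""
    return PROVIDERS[provider](prompt)

# The day the API price triples, flip one string and nothing else changes.
result = complete("summarize the Q3 incident report", provider="local-llama")
print(result.provider)  # local-llama
```

That dictionary of adapters is the whole trick. Your workflows call `complete` and never import a vendor SDK directly, so the day the extraction phase starts, you walk instead of paying.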
This is the climax of the argument and I am going to say it once clearly and then drink something dark.

The thing that determines whether AI ends up like the calculator or like Facebook is not the technology. It is not the science. It is not the alignment papers or the safety researchers or the policy white papers or the consultations with civil society. It is the mechanics. The people who use the tool every day in real environments where the failure modes leave teeth marks. The doctor whose patient died from a hallucinated dose. The lawyer who got sanctioned for the fake citations. The reporter who got fired for the fabricated quote. The security professional whose data got exfiltrated through a prompt injection nobody saw coming. The pilot whose checklist app suggested the wrong sequence. The teacher who watched the kids get worse at thinking. The contractor whose tools stopped working the day the API price tripled.
These are the people who decide whether AI gets adopted seriously or whether it gets quarantined. The scientists invent it. The investors fund it. The journalists hype it. The mechanics use it. And the mechanics are the ones who write the procurement specs, the safety regs, the audit standards, the failure mode reports. The mechanics are the ones whose feedback the product teams cannot ignore because the mechanics walk away and the mechanics talk to other mechanics and the mechanics tell their bosses and the mechanics file the lawsuits when the tool fails.
The pattern of every working technology adoption in the last hundred years runs through the mechanics. The internal combustion engine was theorized by Germans and made viable by American garage mechanics who beat the prototypes into something a farmer could fix in the field with a wrench and some baling wire. The personal computer was prophesied by academics and made real by hobbyists in basements who soldered the parts together and wrote the early software for fun. The internet was designed by DARPA and made into a phenomenon by sysadmins who kept the routers running through the early storms. Every time some scientist had a Big Idea, it was a mechanic somewhere who decided whether the Big Idea was actually going to ship.
AI is at exactly that moment now. The scientists have done their part. The Big Idea works. The transformer is real. The models can write code and pass exams and produce SAT essays. The labs are racing each other to the next leaderboard. The investors are throwing money at anything with a vector database in the pitch deck.
And the mechanics are starting to test the tools in their actual environments, and the reports coming back are not all flattering. The tools hallucinate; you pay for the tool to take its little acid trip and then get back to work. The tools cannot be audited, because they lie like a cheap rug. The tools cost too much at scale, because they’re built to extract money from you and your company. The tools leak data through their inference paths; just like Google, they snarf data like a coed at a finance-bro party snorts coke in the cloakroom. The tools degrade when the connection breaks. The tools make users dependent in ways the users do not notice until the tool is gone. The tools are being tuned for short-term helpfulness in ways that may not survive the second derivative of their own success.
The mechanic does not need to be an AI scientist to call this. The mechanic has seen this movie before. The mechanic knows what happens when a tool stops being a tool and starts being a service that lives inside a corporation that lives inside a market that is incentivized to fuck you over for a quarterly number.
The mechanic also knows that the tool can be saved. Not by stopping it. Not by banning it. Not by waiting for the scientists to fix the alignment problem on a chalkboard. The tool gets saved by the mechanics demanding it stay a tool. Open weights. Local inference. Provider portability. Audit trails. Liability frameworks. The right to switch. The right to inspect. The right to disagree with the model and have the disagreement preserved. These are not theoretical asks. They are the same asks the early auto mechanics made of the manufacturers when the cars started getting too sealed to fix. They are the same asks the early IT people made of the software vendors when the licenses started getting too restrictive to administer. They are the asks of people who have used tools long enough to know what makes a tool a tool versus what makes a tool a leash.
The scientists will write the next paper. The labs will release the next model. The investors will fund the next round. The journalists will write the next panic piece. And the mechanics, the people with grease under their fingernails and logs on their screens and patients in the next room and lives in their hands and salt in the rigging, will quietly decide whether any of it survives.
We have done this before. The calculator survived because mechanics made it survive on the mechanics’ terms. The slide rule was abandoned because nobody who actually had to do the math wanted to keep using it. The cell phone is what it is because mechanics figured out which features were useful and ignored the ones that were not. The same arc is coming for AI. The scientists do not get the last word on this. They never have.
The mechanics decide. They always have.
The good ones are watching.
Sources
Plato, Phaedrus, written around 370 BC. The Theuth and Thamus passage at 274c through 277a is the canonical text for the writing makes you forgetful argument. Translated and discussed in dozens of editions, most accessibly in the Project Gutenberg version of Benjamin Jowett’s translation.
Liu, G., Christian, B., Dumbalska, T., Bakker, M., Dubey, R. (2026). AI Assistance Reduces Persistence and Hurts Independent Performance. arXiv:2604.04721. CMU, Oxford, MIT, UCLA collaboration. Three randomized controlled experiments, total N around 1,222 participants, two task domains (fraction math and SAT reading comprehension). Project page at ai-project-website.github.io/AI-assistance-reduces-persistence. Effect concentrated in the 61 percent of participants who used AI for direct solutions. Hint and clarification users showed no impairment.
Kosmyna, N. et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv:2506.08872. MIT Media Lab preprint. Not peer reviewed. Smaller and more methodologically contested than the Liu paper. Author has explicitly asked journalists to stop using words like stupid, dumb, and brain rot to describe the findings.
US Patent 7,739,104. Berkan, R. C. and Raskin, V. System and method for natural language processing and using ontological searches. Issued June 15, 2010. Available on Google Patents. The foundational patent for Hakia’s OntoSem and QDEX technology.
Hakia. Wikipedia entry, accessed May 2026. Covers the company’s launch in March 2004, founders Riza Berkan and Pentti Kouri, technology stack (QDEX, SemanticRank, OntoSem), and shutdown around 2014. Also covers the academic evaluations of Hakia versus Google in the 2009 to 2012 window.
Doctorow, C. (2023). The Enshittification of TikTok. Wired, January 23, 2023. The original coinage and the four stage framework. Subsequent essays in his Pluralistic newsletter extend the framework to Amazon, Facebook, Google, and other platforms.
Watters, A. (2015). A Brief History of Calculators in the Classroom. hackeducation.com, March 12, 2015. The clearest single source on the pattern of fear and adoption around calculators from the 1970s through the 1990s, including the recurring arguments that computational tools would ruin student reasoning.
McCarthy, J., quoted in McCorduck, P. (1979). Machines Who Think. A.K. Peters Ltd. The as soon as it works no one calls it AI anymore formulation, sometimes called the AI effect, is documented and traced through the history of the field.