I have been watching this movie for thirty years. Same script. Same actors. Same ending. Different costumes.
Every five to seven years a new technology shows up and the business press loses its mind. The boards nod. The consultants produce decks. The capital flows like somebody opened a fire hydrant of other people’s money. And then, on a schedule you could set your watch by, the technology settles into something more useful and more boring than the original story promised. Some of what got built was real. Most of the noise was not.
We are inside one of these moments right now. The technology is generative artificial intelligence and large language models. The promise being sold is the wholesale replacement of human work across every knowledge industry on earth. The capital flowing is bigger than anything that came before. The decks are being delivered. The boards are nodding. And the people selling the story do not understand the technology any better than the people buying it.
The argument I want to make is straightforward. The current AI moment is following the exact pattern of the dot com bubble, big data, the internet of things, blockchain, the metaverse, and every other major technology fad of the last three decades. Inside the fad sits a smaller, real, lasting capability that will do useful work for a long time. Wrapped around that capability is a story being told by people who do not know what they are talking about to people who cannot tell the difference, with capital and FOMO doing the steering. The story is human replacement. The reality is human assistance. Treating the second as the first is the source of the failed deployments piling up across every industry stupid enough to bet on agents doing the work that humans used to do.
AI is a tool. It is closer to a crescent wrench or an intern than it is to a seasoned engineer. Like the crescent wrench, it amplifies the capacity of the human holding it and does nothing useful when the human walks away. Like an intern, it produces output that needs review by somebody who knows what good output looks like. Sell it as the engineer and you are running the fad. Sell it as the wrench and you are doing the work.
The thesis runs in five movements. The pattern across thirty years. The Y2K case as the cleanest lens. The buyer class problem, which is the mechanism that lets the same fad cycle through the same population without the lessons sticking. The mapping of the current AI moment onto the pattern. And the closing argument about what AI is and is not. Stay with me.
The Pattern
In the early 1990s a generation of analysts watched the commercial internet emerge and most of them missed what it was. The press treated it as a curiosity until Netscape went public in 1995 and produced a stock chart that retrained an entire profession overnight. From that point through 2000 the markets carried the assumption that any company with a dot com suffix was a bet on the future. Pets.com. Webvan. Boo.com. eToys. All of them taking enormous capital against business models that did not work and could not work. The Nasdaq peaked in March 2000 and roughly five trillion dollars in market value evaporated in the next two years. The companies that survived were doing real work with real economics. The fad burned off. The residual was the internet economy we still live in.
The pattern repeated with social media and Web 2.0 in the mid 2000s. The story sold to the public was participation and connection. The product was advertising backed surveillance. The human cost of that product, in adolescent mental health and political polarization, took fifteen years to get measured. The residual was real. The framing was a lie from the first day.
Cloud computing rewrote enterprise architecture starting in 2006. The early years were engineering progress. Then every vendor with a hosted database started calling itself cloud and every consultant started selling cloud strategy decks to companies that had no idea what their workloads were. Smartphones followed in 2007 and won. The residual included most of the modern attention economy and most of the modern surveillance economy in the same package. We sold the public a phone and bought ourselves a tracking device. They paid us for it. They still are.
Big data was sold from 2010 to 2016 as the data driven future. McKinsey produced a 2011 report estimating hundreds of billions of dollars in unlocked value. Hadoop clusters got purchased at scale. Data lakes turned into data swamps because almost nobody asked the question that should have been asked first, which was what specific business problem are we trying to solve. By 2018 the term Hadoop was a punchline. The residual was real. Distributed computing primitives moved into cloud data warehouses and powered the next round of capability. The fad burned off. The clusters went quiet.
The internet of things arrived in the middle of the same decade with a forecast of fifty billion connected devices and a security posture you could drive a truck through. The Mirai botnet in 2016 demonstrated what a fleet of unauthenticated cameras and routers actually was, which was a weapon. Smart bulbs and smart locks turned out to be products almost nobody wanted enough to maintain. The residual moved to industrial sensor networks where the economics worked. The consumer fad died on the shelf at Best Buy.
Blockchain and crypto came in three waves between 2013 and 2022. Each wave carried a different fad layer riding on the same cryptographic primitive. Bitcoin got sold as digital gold. The 2017 ICO mania got sold as the future of fundraising. The 2021 NFT wave got sold as the future of digital ownership. Then Terra Luna collapsed and FTX failed and Sam Bankman Fried went to federal prison and the whole carnival folded its tent. The residual is narrow and real. Stablecoins doing payment work across borders. The rest of it was a Ponzi scheme with better marketing.
The metaverse moment ran from 2021 to 2024. Facebook renamed itself Meta and committed eighty billion dollars to a vision of immersive virtual workspaces that nobody asked for. Apple shipped the Vision Pro at thirty five hundred dollars in early 2024 and the device has not found its market. The fad cooled. The residual will be real, mostly in optics and a small enterprise training market, but the original story is dead. Mark Zuckerberg bet the company on a goggle and lost the bet.
Self driving cars have been two years away since 2015. Voice assistants peaked around 2018 and settled into kitchen timers and music control. 3D television came and went in five years. Google Glass lasted eighteen months as a consumer product before the term Glasshole entered the language. Each one had a real engineering story underneath and a hype layer that overpromised on top.
The shape is the same every time. A real capability emerges. A class of vendors and consultants packages it into a story about wholesale change. Capital flows on the strength of the story. The press carries the story without filtering. Boards approve budgets. Reality arrives, sometimes through a single high profile failure and sometimes through a slow accumulation of deployments that did not deliver. The fad burns off. The residual gets absorbed into normal operations at a fraction of the value the original story claimed.
Thirty years of this. Same pattern. Different costumes. Nobody learns anything.
The Y2K Case as Analytical Lens
Y2K is the cleanest example we have because the fad layer and the engineering layer ran on parallel tracks and the public ended up unable to tell them apart.
The engineering problem was real and specific. Mainframe and embedded systems written in COBOL and assembler in the 1960s and 1970s used two digit year fields to save memory at a time when memory was expensive. By the late 1990s those systems were running power grids, telecommunications backbones, banking transactions, air traffic control, and industrial control systems across the developed world. When the year rolled from 99 to 00, large categories of date sensitive logic would fail in unpredictable ways. Some failures would be cosmetic. Others would cascade into infrastructure outages.
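The failure class fits in a few lines. Here is a minimal sketch in Python, for readability rather than the COBOL and assembler of the era, of what a two digit year field does to date arithmetic at the rollover. The function and values are illustrative, not drawn from any specific remediated system.

```python
# Minimal sketch of the two digit year failure class. The stored
# field holds two digits, so 2000 is recorded as 00 and any logic
# that subtracts or compares years silently goes wrong at rollover.

def years_since(stored_year: int, current_year: int) -> int:
    """Both arguments are two digit year fields in the range 0-99."""
    return current_year - stored_year

# An account opened in 1995, evaluated in 1999: correct.
print(years_since(95, 99))   # 4

# The same account evaluated in 2000, after 99 rolls to 00:
# the arithmetic now reports an age of minus 95 years.
print(years_since(95, 0))    # -95
```

Multiply that negative number through interest calculations, expiry checks, and scheduling logic and you have the cascade the remediation teams were paid to prevent.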
Programmers had been warning about this since the 1980s. The remediation work began in the mid 1990s and ran at scale through 1999. Telecom carriers replaced core routing infrastructure. Power utilities replaced control plane equipment that would have caused islanding events and cascading outages within months of the rollover. Financial institutions modernized batch processing systems. Federal agencies catalogued and remediated tens of thousands of embedded systems. The total global spend ran between three hundred billion and six hundred billion dollars.
This work was real. It was hard. It was led by practitioners who understood the systems they were modifying. And it succeeded. On January 1, 2000, the lights stayed on, the planes did not fall, and the banking system processed transactions like any other Saturday.
Running parallel to this engineering effort was something different. The press carried stories about planes falling from the sky and grids collapsing. Federal preparedness offices issued guidance that bled into prepper culture. Evangelical end times content found a new vector. A small industry of bunker construction and freeze dried food sales boomed. The cable news cycle treated the rollover as a potential extinction event because fear sells advertising and competence does not.
When midnight passed and nothing visible broke, the public drew the wrong conclusion. The narrative that emerged in the weeks after was that Y2K had been overhyped and the remediation work had been unnecessary. This is the prevention paradox in its purest form. Successful remediation looks identical to a problem that never existed. The practitioners who did the work received no credit. The panic merchants who had sold the cultural layer moved on to the next story. And the lesson that should have been drawn, which is that real engineering work can sit underneath a cultural fad and the two need to be separated, was not drawn. Nobody wanted that lesson because it was too useful.
The Y2K case gives you the analytical structure for reading every wave that came after. Inside any given wave there is a real engineering or capability story. Wrapped around it is a cultural and financial story being told by people with different motivations than the practitioners doing the underlying work. Conflate the two and you get bad analysis. Separate them and you get good analysis. The residual of any wave comes from the engineering layer. The fad is the cultural layer that burns off.
This matters for the AI moment because the conflation is happening in real time at speed. The capability is real. The deployment philosophy being sold around it is fad layer. Most readers cannot tell the difference. Most boards cannot tell the difference. And the failures piling up are being read by some commentators as evidence that the underlying capability is hollow, when the cause of the failures is the deployment philosophy that the fad layer prescribed.
Same movie. Same conflation. Same audience. Same wrong lesson getting drawn.
The Buyer Class Problem
Now we get to the part nobody wants to say out loud.
The structural argument about technology fads usually points at the supply side. Vendors overpromise. Consultants oversell. Analysts produce reports that confirm what their clients want to believe. Press carries the story without filtering. Capital chases the narrative. All of these mechanisms are real and documented and they explain a piece of the story. They do not explain the whole story.
The harder argument sits on the demand side and it goes like this. The same population of senior executives, board members, and capital allocators that fell for the dot com bubble went on to fall for big data, for blockchain, and for the metaverse, and is now buying generative AI strategy decks. This is not a series of independent filter failures. This is one population with a consistent inability to evaluate technology claims, encountering different vendors selling different stories, and getting taken every single time.
The composition of corporate boards has shifted over the last forty years toward financial and legal expertise and away from operational and technical expertise. Most board members at most large companies cannot read a system architecture diagram. They cannot evaluate a security posture. They cannot tell a real distributed systems story from marketing material that uses the same vocabulary. They have no way to assess whether a vendor proposing a forty million dollar deployment understands the customer environment or is selling a reference architecture that will detonate on contact with reality. They sit in the boardroom. They listen to the pitch. They look at each other. They approve the spend. And six quarters later somebody else cleans up the mess.
This is not a moral failing. It is a structural one. The career path that produces large company directors selects for capital allocation skill, governance discipline, and political agility. It does not select for technical evaluation capacity. The result is a class of decision makers who are good at the parts of their job that match their training and useless at the parts that do not. And the parts that do not match their training have been dominating the strategic agenda across every industry for the last twenty years. So the failure modes have become structural. Permanent. Baked in.
The same thing operates one level down at the executive ranks. Most CIOs and many CISOs at most large companies cannot read the assessments they sign. Some once could and have drifted out of practice. Others never could in the first place. The job has become political and budgetary instead of technical, and the people who do it well are doing political and budgetary work. They go to the conferences. They give the keynotes. They sit on the panels. They sign the documents the staff prepared. When a vendor walks in selling a story that uses the right vocabulary and produces the right kind of confidence, the buyer has limited capacity to push back on the substance because the buyer cannot read the substance.
FOMO sits on top of this incompetence and amplifies it. A CEO who does not have an AI strategy in 2026 is being pressured by board members who read the same business publications and want to be told the company is positioned for the future. The path of least resistance is to sign the consultant engagement, approve the platform spend, and announce the AI strategy at the next earnings call. The path of resistance is to ask hard questions about specific use cases and to deploy capital only against problems where the technology will work. The first path is easier in the short run and ten times more expensive in the medium run. They take the easy path every time. Every. Single. Time.
The McKinsey big data report from 2011 got circulated at every Fortune 500 board meeting in the country. Almost nobody asked what specific question their data lake was meant to answer. The Hadoop cluster got purchased anyway. By 2018 most of those clusters were quiet and the term had become embarrassing and the consultants who sold them had moved on to the next thing. The same pattern is playing out now with AI. The decks promise productivity gains in the trillions. The pilots get funded. The deployments are running into problems that practitioners predicted and managers did not want to hear. The next round of write downs will arrive on schedule. You can mark your calendar.
The structural mechanisms that bypass the filters work because the people operating the filters cannot evaluate what passes through them. Those operators stay in place because the structural mechanisms reward the behavior. The two reinforce each other. Better filters do not fix this. The fix is putting people who can read the architecture into the rooms where the decisions are being made, which requires changes to board composition, executive selection, and the way technical expertise is valued inside large organizations. None of that is going to happen on a short timeline.
The practitioners who did the Y2K remediation work, the people who replaced the core routers and the control plane equipment, were not in the rooms where the strategic decisions about technology were being made then and they are not in the rooms now. The same holds across every wave that followed. The people who could have evaluated the substance were not the people making the decisions. The people making the decisions could not evaluate the substance. This is the filter failure that produces the fad pattern. Everything else is downstream of this one fact.
AI in the Pattern
The current generative AI moment has the same shape as every wave I just walked you through. The capability is real. The deployment philosophy is fad layer.
The capability emerged through a long sequence of academic and industrial research stretching back decades, with breakthrough work on the transformer architecture in 2017, scaling work through the late 2010s, and the public consumer arrival of ChatGPT in November 2022. The technology can produce useful output across a wide range of tasks involving language, code, structured analysis, and pattern recognition. The output is probabilistic, not deterministic. That means it is right most of the time and wrong some percentage of the time in ways that can be hard to detect without domain expertise.
That last property is the most important technical fact about the current generation of these systems and almost nobody selling them mentions it. The output looks correct even when it is incorrect. A subject matter expert can spot the errors most of the time. A non expert cannot. So the value of the technology depends on whether a competent human is in the loop reviewing the output. With a competent human in the loop, the technology amplifies the human’s capacity. Without a competent human in the loop, the technology produces output that is right most of the time, wrong some of the time, and indistinguishable in form to the consumer of the output.
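The dependence on the reviewer is arithmetic, and the arithmetic is worth making concrete. A minimal sketch, where every number is an illustrative assumption rather than a measurement from any real system:

```python
# Back of the envelope arithmetic for probabilistic output under
# different review regimes. All numbers are illustrative assumptions,
# not measurements from any real deployment.

outputs_per_day = 1_000   # volume the system produces
error_rate      = 0.05    # fraction of outputs that are wrong
expert_catch    = 0.95    # fraction of errors a domain expert spots
novice_catch    = 0.20    # fraction of errors a non expert spots

errors = outputs_per_day * error_rate   # 50 wrong outputs per day

print(errors * (1 - expert_catch))   # 2.5 bad outputs reach the consumer
print(errors * (1 - novice_catch))   # 40.0 bad outputs reach the consumer
print(errors)                        # 50 with no review at all
```

The system is identical in all three rows. The only variable is who is looking at the output before it ships.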
The deployment philosophy being sold around this capability ignores that second clause. Pretends it is not there. Hopes you will not notice. The story being told to boards and CEOs is that the technology can replace the human in the loop. Customer service can be automated end to end. Legal review can be done by agents. Software development can be done by AI engineers. Knowledge work in general can be reduced or eliminated. The story is being told by vendors who profit from selling platforms, by consultants who profit from selling deployment engagements, and by analyst firms who profit from selling the narrative. It is being received by buyers who cannot evaluate the technical claims and are under FOMO pressure to act. This is not a neutral exchange. This is one group selling a thing they know is wrong to another group too embarrassed to ask the question.
The deployments built on this philosophy are failing across every industry where they have been tried at scale. The Klarna customer service rollback, where the company replaced human agents with AI and reversed course when customer satisfaction collapsed, is one widely reported case. The McDonald’s drive through cancellation, where an AI ordering system was pulled after producing failures that became viral content, is another. The wave of legal sanctions against attorneys who filed briefs containing hallucinated case citations is another. The agentic deployments that have produced material damage inside their permission scope, what I have called agentic detonation in earlier work published on Substack, are accumulating in the security and risk literature faster than the business press is willing to cover.
The pattern of these failures is the same every time. The technology was deployed in a configuration that removed the human review step. The output was produced at volume. Some percentage of the output was wrong in ways that mattered. The cost of the failures exceeded the savings from the human reduction. The deployment was rolled back, where possible without notice, and where not possible with notice. The CEO took the win when it was announced. Somebody else took the loss when it cratered.
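You can run the trade both ways on the back of an envelope. The numbers below are assumptions chosen to show the shape of the problem, not figures from any of the cases above:

```python
# Illustrative comparison of replacement savings against error cost.
# Every number is an assumption for the sake of the shape, not data
# from any named deployment.

headcount_savings = 2_000_000   # annual cost of the removed reviewers
outputs_per_year  = 500_000
error_rate        = 0.05        # wrong outputs with no review step
cost_per_error    = 150         # churn, rework, exposure per bad output

error_cost = outputs_per_year * error_rate * cost_per_error
print(error_cost)                       # 3,750,000
print(headcount_savings - error_cost)   # -1,750,000, a net loss

# The savings land on the budget that approved the deployment.
# The error cost lands somewhere else, which is why the spreadsheet
# keeps looking good after the operation has started losing money.
```

Change the assumptions and the sign moves, which is exactly why the question has to be asked per workflow rather than per deck.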
This is the prevention paradox running in reverse. In the Y2K case, successful remediation looked like the absence of a problem and the engineering work received no credit. In the AI case, failed replacement deployments are being read as evidence that the technology does not work, when the actual evidence is that the deployment philosophy does not work. The technology, deployed differently, works fine. Nobody wants to draw that distinction because the distinction is inconvenient for the people selling the platforms.
The augmentation residual is visible if you know where to look and you are willing to look. Code assistance tools have been adopted by developers at scale and produce real productivity gain when used by competent engineers reviewing the output. Document review tools have moved into legal practice in configurations where attorneys remain in the loop and review the work. Anomaly detection in security telemetry has been one of the strongest application areas, where the technology surfaces patterns and human analysts make the decisions. Predictive maintenance signal extraction in industrial settings, including the aviation MRO sector I work in, has produced real value where the engineering team uses the output as input to their judgment instead of as a substitute for it.
The common feature across all of these working applications is that the human is in the loop. The technology is doing what it is good at, which is pattern recognition and structured output at volume. The human is doing what humans are good at, which is judgment and accountability. The human spots the errors. The human decides which output to act on. The human is the safety mechanism that converts probabilistic output into deterministic action. Take the human out and the safety mechanism goes with them. Then you find out the hard way what your error rate actually is.
The replacement applications, the ones being sold by the fad layer, remove the safety mechanism in pursuit of cost reduction. The cost reduction is real on the deployment spreadsheet. The cost of the errors is real in the operating results. The first hits the budget that approved the deployment. The second hits a different budget, often a different department, and sometimes does not hit any budget directly because it shows up as customer churn or regulatory exposure or reputational damage. The deployment looks good for one quarter and bad for the next four. The board that approved it is rarely the board that has to write it off. This is what I have written about elsewhere as the AI alibi, which is the use of AI deployment as cover for workforce reduction decisions that were going to happen anyway. The fad layer makes that cover available. Convenient, that.
The Tool
Here is the thesis stated clean. AI is a tool. It is two things in combination. It is a crescent wrench, which is to say a general purpose lever that amplifies the work of a competent human holding it. And it is an intern, which is to say a junior worker who can produce volumes of output that need review by somebody who knows what good output looks like.
Both metaphors carry the same point. The value of the tool is determined by the human using it. A crescent wrench in the hands of a competent mechanic produces more work in less time. A crescent wrench in the hands of somebody who cannot identify the right bolt does no useful work and may do active harm. An intern working for a competent senior produces real value across a summer because the senior catches the errors and shapes the work. An intern working without supervision produces output that looks like work and is not.
The replacement story being sold by the fad layer assumes the tool can do the work without the user. This is wrong in the same way it would be wrong to assume a crescent wrench can fix a car without a mechanic. The tool extends human capacity. It does not substitute for it. You cannot fire the mechanic and keep the wrench and expect the cars to get fixed. The wrench will sit on the bench. The cars will sit in the lot. The customers will go somewhere else.
This framing produces practical guidance for any organization trying to deploy these systems. Three questions. First, which workflows in the organization have a competent human producing output that could be amplified by structured assistance. Those are the deployment opportunities. Second, which workflows have output going to a customer or counterparty without review. Those workflows should not be touched by an autonomous system because the failure mode is direct and immediate. Third, what is the cost of an error in each candidate workflow, including the costs that will not show up on the deployment spreadsheet. Those costs determine where the human review step needs to sit and how much it is worth paying for.
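The triage is mechanical enough to write down. A sketch, with hypothetical field names and a hypothetical cost threshold, of the decision the three questions describe:

```python
# Sketch of the three question triage as code. The Workflow fields
# and the threshold are hypothetical; the point is the order of the
# questions, not the specific values.

from dataclasses import dataclass

@dataclass
class Workflow:
    has_competent_reviewer: bool           # question one
    output_reviewed_before_release: bool   # question two
    cost_per_error: float                  # question three, including
                                           # costs off the deployment spreadsheet

def triage(w: Workflow) -> str:
    if not w.output_reviewed_before_release:
        return "do not automate: failures reach the counterparty directly"
    if not w.has_competent_reviewer:
        return "do not deploy: nobody can tell good output from bad"
    if w.cost_per_error > 10_000:   # hypothetical threshold
        return "deploy with mandatory review of every output"
    return "deploy as assistance with sampled review"
```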
This is not a difficult framework. It is the framework any practitioner who has built systems for production use applies as a matter of course. It is not the framework being applied by the buyer class evaluating AI strategy decks, because the framework requires technical evaluation capacity that the buyer class does not have. The decks promise replacement. The buyers approve the spend. The deployments fail. The cycle continues.
The fad layer will burn off on its own schedule. The pattern of the previous waves suggests two to four years from peak hype to general acknowledgment that the original story was wrong. The Klarna and McDonald’s cases are early signals. The next wave of failures will be larger and more expensive because the deployments being approved now are larger and more ambitious than the early experiments that have already failed. By 2028 or 2029 the consultant class will have moved on to the next story and the AI strategy decks will be embarrassing artifacts in corporate archives. Nobody will admit to having signed them.
The residual that survives will be substantial. Code assistance, document review, anomaly detection, signal extraction, structured analysis at volume, all of these will be operating quietly inside organizations as part of normal practice. The companies that built deployments around the augmentation thesis will be ahead of the companies that built deployments around the replacement thesis, because the augmentation deployments will have produced compounding value while the replacement deployments were being rolled back. Same pattern that played out with Amazon and Google after the dot com bust, with Snowflake and Databricks after the Hadoop bust, and with the survivors of every previous wave.
The companies most exposed to the current fad are the ones that have committed in public to replacement deployments and cannot walk back the commitment without admitting the strategy was wrong. They will keep spending on the failed approach longer than they should because the political cost of reversal is high. They will reverse anyway, where possible without notice, and the share price will absorb the write down. The CEO who approved the strategy will move on to a new role. The next CEO will inherit the cleanup. The board that signed off will hire a search firm and find a new CEO who promises a fresh start. The cycle starts over.
Closing
I have been arguing one thing for the last several thousand words. The current AI moment is following the same pattern as every major technology wave of the last thirty years and the pattern can be read with confidence using the analytical lens of the Y2K case. Inside any given wave there is a real engineering story that produces a lasting residual. Wrapped around it is a fad layer being sold to a buyer class that cannot evaluate the substance. The fad burns off. The residual remains.
The shape of the current fad is the human replacement story: the technology can do what humans do, faster and cheaper, and organizations that do not deploy it will fall behind. This story is wrong on the technical merits and is producing failed deployments at speed. The technology, deployed as a tool that amplifies competent humans, works and produces real value. The technology, deployed as a substitute for competent humans, does not work and produces real damage.
The deeper problem is the buyer class incompetence that lets the fad keep cycling through the same population of executives. This problem has no quick fix. The remediation is the one I named earlier: people who can read the architecture in the rooms where the decisions get made, which means changing board composition, executive selection, and the way technical expertise is valued inside large organizations. None of that happens on a short timeline. The fads will continue to cycle until it does. We are going to keep paying for this lesson because nobody in a position to learn it is willing to.
The working framework is the one practitioners have always used. AI is a tool. It is a crescent wrench and an intern. It amplifies competent humans and produces useful output that needs review. It does not replace humans, and the deployments built on the assumption that it can are failing in predictable ways. The hucksters selling the replacement story are running an old playbook. The practitioners doing the augmentation work are producing the residual that will outlast the noise.
The Y2K parallel holds at the deepest level. The engineers who replaced the core routers and the control plane equipment in 1999 did real work that prevented real damage and got no public credit because the damage they prevented never happened in view. The practitioners building useful AI augmentation in 2026 are doing real work that will produce real value and are getting drowned out by the panic merchants on both sides of the current debate. The consultants selling replacement strategies are the modern bunker salesmen. The analysts declaring the technology hollow because the replacement deployments failed are the modern post January 2000 commentators who said Y2K had been overhyped. Both are wrong in the same way. Both miss the actual story, which is the engineering layer that sits underneath the noise.
The work that matters is being done by people who understand the technology and the business problem they are solving. It always is. That work will produce the residual. The rest is fad. The hucksters will move on. The practitioners will still be here. Same as it ever was.