Every year, organizations spend huge amounts on security awareness programs. The marketing message is always the same: build a more alert workforce that can recognize and avoid malicious emails. Vendors offer polished training platforms, sleek video modules, and vibrant dashboards filled with reassuring metrics. Compliance teams feel relieved when they can point to a high completion rate. Security leaders show charts to the board indicating a drop in click rates. It all sounds good, like a public safety campaign that’s finally effective.
But if these programs were truly effective as advertised, phishing wouldn’t still be the most common way into most corporate networks. We’ve had more awareness training than ever before, yet phishing remains the first step in most breaches. That alone should make us think. If the treatment hasn’t fixed the problem after all this time, maybe we’ve been diagnosing the wrong issue.
Phishing simulations have become the security world’s version of high school pop quizzes. They are surprise tests that rarely offer a real chance to learn. In too many organizations, the goal isn’t to build skills but to catch people off guard. The predictable result: someone clicks, and all the blame falls on them. Usually, it’s the receptionist, customer service representative, or accounting clerk, people whose jobs demand quick communication and responses. While their “mistake” becomes a case study at the next security meeting, the underlying environmental weaknesses remain unaddressed. The match was dropped, yes, but the room was already filled with dry kindling, and the smoke detectors were unplugged.
Awareness Isn’t a Control
We need to be truthful about what security awareness really is. It isn’t a control like multifactor authentication, access restrictions, or network segmentation. It doesn’t reliably prevent or detect attacks. At best, it can influence behavior in specific situations, but only if the rest of the security system is designed to catch failures.
In practice, awareness programs often serve as compliance theater. They exist to fulfill an audit requirement or to show regulators that “reasonable steps” have been taken. The annual training modules are completed, the policy acknowledgment boxes are checked, and the job is considered finished. That may satisfy paperwork, but it does very little to genuinely reduce risk.
Years of industry research show that most phishing training only leads to temporary improvements. Click rates decrease briefly after testing, then return to the original level. Human memory and attention don’t work the way vendors expect. Worse, these programs can harm relationships between security teams and the rest of the organization. When employees view security as a punitive authority instead of a partner, trust diminishes. Without trust, reporting declines. Incidents take longer to identify, and the attacker’s window of opportunity widens.
The Metrics Game
Part of the issue is measurement. Many awareness vendors report impressive statistics that hide the real situation. They emphasize the lower click rate on their simulations but avoid discussing whether those same users have gotten better at recognizing actual, changing threats.
In some cases, employees become skilled at recognizing the specific style of phishing emails used in the company’s simulations, but they are no better at handling a spear-phish from an actual attacker. They have learned to pass the test, not to defend against real threats. It is like training a guard dog to recognize the mail carrier’s uniform but leaving it helpless when an intruder appears in plain clothes.
Phishing Simulations: The Gotcha Approach
The idea of phishing simulations isn’t flawed in theory. They could be a powerful way to help people recognize threats. However, many organizations turn them into a “gotcha” game. The scenarios are designed to trick even cautious individuals, often using the same urgency or authority cues that real attackers use.
Here’s the catch: those cues also signal a healthy, functioning workplace. Responding quickly to a meeting request from your boss isn’t bad behavior; it’s good citizenship in most organizations. Punishing someone for acting in a way their role demands isn’t education, it’s entrapment.
When someone clicks a link in a simulated phishing email and is immediately met with a scolding screen telling them they failed, what have they learned? More often than not, they learn that the security team is working against them, not with them. They learn to hesitate in ways that might slow down real work. They learn that if they report something suspicious and it turns out to be a test, they might be ridiculed. None of that fosters the trust needed for effective reporting.
The Receptionist Didn’t Burn the Place Down
Security incidents are rarely caused by a single click. That click may open the door, but what happens next is shaped by the design of the systems and networks behind it. If one email compromise can lead to the theft of sensitive data, the problem isn’t just the person at the keyboard; it’s the architecture that enabled that chain of events.
A well-designed environment assumes someone will click eventually. Humans are fallible, and attackers only need one mistake. That is why segmentation, least privilege, rapid detection, and automated containment are so important. Without them, the compromise of a single endpoint can lead to the compromise of an entire domain.
Blaming the user for a breach in such an environment is like blaming a driver for getting lost when the road signs are wrong, the map is outdated, and the GPS has been unplugged.

The Limits of the Checkbox
This is not an attack on the people who run awareness programs. Many of them are dedicated professionals who understand these shortcomings but have little power to change the mandate they have been given. Leadership often views awareness as a simple compliance requirement. The minimum effort becomes the maximum investment.
However, there are programs that break the mold. Some use engaging stories and relatable situations to make security feel personal. Others tailor content to the roles and tasks of their audience, understanding that a finance analyst faces different threats than a warehouse supervisor. These programs respect their audience’s intelligence and focus on building skills, not just ensuring compliance.
Programs that use storytelling, like The Inside Man from KnowBe4, show that people learn better from relatable, human experiences than from outdated training videos with robotic voices. These efforts meet people where they are. They recognize that employees aren’t the problem; they’re part of the solution.
Shifting the Goal
The change we need is from “Did they click?” to “Did they report?” and “Do they know how to escalate?” This shift in perspective turns the user from being a liability into an asset. It promotes early warning rather than silent avoidance.
When employees feel safe to report, they form an informal network that acts like sensors across the organization. Each person becomes a potential early warning for an attack. This only works if the response process is quick, respectful, and transparent. People need to see that their reports matter, that they lead to real action, and that they are appreciated rather than punished.
Leadership’s Role
For leadership, the key questions focus on outcomes rather than appearances. How many actual threats were reported and contained thanks to employee vigilance? How quickly were they addressed? Were the systems in place effective at preventing lateral movement? Did the program prepare people for real-world scenarios, not just staged ones?
If the answers are vague or the evidence is weak, that indicates the awareness effort is more performance than genuine defense. Boards and executives should care as much about detection and containment times as they do about click rates. The important metrics are those that measure resilience, not just compliance.
Too many board-level security briefings are just theater. Slick slides appear, charts and heat maps light up the screen, and the CISO points to a 27 percent reduction in phishing click rates. Heads nod in agreement. The room feels reassured. A metric has improved, so the program must be effective.
Except it might mean nothing. That number doesn’t reveal whether employees are actually spotting real threats or reporting them in time. It doesn’t show whether the SOC is catching attacks earlier or whether lateral movement is being stopped. At best, it’s a measure of how well people have learned to pass the company’s own quiz, not whether they can survive the real one. And like any grading system, it can be manipulated. If the only goal is to show improvement, the easiest way to achieve it is by making the test easier. Use obvious lures, remove realistic cues, and the pass rate will rise. That’s not progress; it’s security awareness grade inflation.
Boards should not settle for this. Your role isn’t to applaud the numbers you receive but to question them. Ask whether those metrics relate to actual incident prevention or quicker recovery. Inquire if the decline in clicks was accompanied by an increase in timely reports of genuine threats. Consider whether your simulations reflect the complexity of real-world campaigns, or if they merely target easy wins.
Then ask the most important question: if a single click can cause a disaster, what has your CISO been doing? Because if a lone busy secretary or distracted engineer clicking a link can cause a major breach, you don’t have a user problem, you have a systems problem.
In every other safety-critical industry, systems are designed to withstand inevitable human error. Commercial jets do not crash because one mechanic leaves a wrench in the wrong place. Power plants do not fail because one operator misses a single gauge reading. They incorporate layers of redundancy, segmentation, and containment. If your cyber defenses cannot survive the inevitable click, it indicates the environment is too flat, overprivileged, and under-monitored. That is a design failure, and it rests squarely on security leadership.
Boards should focus on metrics that demonstrate resilience, not just compliance. What percentage of incidents are contained without data loss? How quickly are they detected? How often does a single compromised account go nowhere because the architecture prevents lateral movement? These are the metrics that matter because they measure your ability to absorb a hit and continue operating.
When boards stop nodding at irrelevant numbers, the tone of the conversation shifts. Vanity metrics give way to practical truths. CISOs begin discussing mean time to detect, mean time to contain, and the blast radius of a compromised endpoint. Those figures are harder to make look good on a slide, but they are the only ones that matter.
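Those figures are straightforward to compute once incident timestamps are actually recorded. A rough sketch, assuming each incident log carries compromise, detection, and containment times (the data below is purely illustrative):

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident records: (compromised, detected, contained).
incidents = [
    (datetime(2024, 3, 1, 9, 0),  datetime(2024, 3, 1, 9, 40),  datetime(2024, 3, 1, 11, 0)),
    (datetime(2024, 4, 2, 14, 0), datetime(2024, 4, 2, 16, 0),  datetime(2024, 4, 2, 18, 30)),
]

def hours(delta: timedelta) -> float:
    """Convert a timedelta to fractional hours."""
    return delta.total_seconds() / 3600

# Mean time to detect: compromise -> detection.
mttd = mean(hours(detected - start) for start, detected, _ in incidents)
# Mean time to contain: detection -> containment.
mttc = mean(hours(contained - detected) for _, detected, contained in incidents)

print(f"MTTD: {mttd:.2f} h, MTTC: {mttc:.2f} h")
# MTTD: 1.33 h, MTTC: 1.92 h
```

Blast radius is harder to reduce to one number, but even a count of hosts or accounts touched per incident tells a board more than any click-rate chart.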
If your organization is betting everything on the hope that no one will ever click, you are gambling on human perfection. You will lose. Build a system that expects mistakes will happen and can recover from them. Anything less is negligence disguised as PowerPoint.
The Economics of Misplaced Investment
Phishing tests cost money. They consume staff time, software licensing, and internal communication cycles. None of that is inherently bad, but every dollar spent on punitive simulation campaigns is a dollar not spent on strengthening the systems that attackers actually exploit after an initial breach.
When these programs are defended at the executive level, the numbers presented are usually selective. A CISO might report the annual licensing cost of the awareness platform, the projected cost of non-compliance if training is not conducted, and the headcount assigned to manage the effort. Those figures may look small compared to the overall security budget. But what is almost never tallied is the cumulative person-hours spent across the entire company to complete the training, click through the quizzes, respond to phishing simulations, and endure the follow-up coaching sessions. That is real productivity loss, multiplied across hundreds or thousands of employees, and it never appears in the glossy quarterly report.
In some cases, the drain is staggering. Within parts of the Department of Defense, agencies have tracked the compliance workload for certain roles and found that a large portion of an employee’s time was taken up by mandatory compliance tasks, which are completely unrelated to the work they were originally hired to do. The same situation occurs in corporate environments, only without as much measurement. An engineer might spend an afternoon working through training modules that don’t relate to their daily risk profile. A receptionist might need to re-watch awareness videos because they missed a phishing simulation click while managing phone calls and visitors. These hidden costs add up, and when you multiply them across the workforce, they rival or even surpass the cost of the training software itself.
If you must choose between running monthly phishing drills or implementing strong identity controls, patching critical systems, and investing in endpoint detection, the choice should be clear. Training should support a solid architecture, not replace it. It is reckless to spend resources testing human fallibility while ignoring systemic weaknesses. The first dollar should always go toward ensuring a single mistake can’t escalate into a company-wide breach.

Building Trustworthy Programs
The most effective awareness programs are based on trust, not fear. They are open about the purpose of simulations and clearly state that the goal is to help people learn, not to catch them off guard. They frame phishing tests as training exercises rather than traps and provide feedback that respects employees’ dignity. They also share stories from real incidents, including cases within the organization where early reports prevented problems, and offer practical tips that can be used right away.
Trust is built when employees see the security team upholding the same standards it expects from others. This means security staff complete the same training, respond promptly to reports, and give credit when employees identify genuine threats. It also means training is tailored to the specific risks of each role and respects the time and focus of those participating. This isn’t just politeness; it’s an acknowledgment that awareness without respect creates noise, and noise leads to disengagement.
But I have seen the opposite, and it’s equally ugly. At one company where I was invited to give a talk on security theater, the culture of the awareness program leaned more toward public shaming than education. I spent the morning joking about performative security rituals, giving away a friend’s book as a prize, and quoting my friend Michael’s remark that “nobody is in the security business, not even cybersecurity companies, they’re just in business.” Everyone laughed and nodded. Then we entered their security operations center, and on the wall was a huge display showing the usernames of the top offenders in their phishing simulations. Next to each name was the number of failed tests, and, astonishingly, a separate column listed the previous failed passwords of ten other employees (some names appeared on both lists). The “Top Ten Offenders” list was literally framed as a leaderboard.
It was meant to be a motivational tool. However, it functioned as a public pillory. This was a wall of shame disguised as a metric. In the worst cases, it not only damaged trust but also crossed an ethical line. Publicly posting employee usernames alongside their past passwords, even old ones, exceeds professional boundaries. It undermines the very security posture the organization claims to be building. It signals to employees that the security team will not only monitor their mistakes but also memorialize them in a way intended to embarrass.
This type of program doesn’t foster vigilance; it breeds resentment. Employees stop viewing the security team as a partner and start seeing them as an adversary. They share tips on how to dodge phishing tests without truly understanding real threats. Their focus shifts to avoiding humiliation instead of safeguarding the business. And if a real phishing attempt lands in their inbox, the safest choice for them personally might seem to be ignoring it altogether, because reporting it could lead to more scrutiny than staying silent.
A trustworthy awareness program requires a different approach. No wall of shame, no public scorecards, no “top offenders” lists. Instead, focus on clear goals, respectful engagement, and systems that accept people will make mistakes. Success isn’t measured by how few fail, but by how many trust the security team enough to call when something feels off.
Readiness Over Perfection
No awareness program can prevent every click. The goal isn’t to eliminate human error altogether but to ensure that when it occurs, the damage is minimal and recovery is quick. A mature security stance accepts that eventually someone will open the wrong attachment or follow a malicious link. Clicks are a natural part of how the internet works. The idea of removing clickable content from emails became outdated years ago when HR systems required people to fill out benefits forms, finance sent invoices and expense reports, and nearly every internal process moved online. In many ways, we are penalizing people for doing their jobs and using the network as it was intended. The key question is what happens next.
If a single click can dismantle the organization, the weakness lies not in the person who clicked but in the systems that allowed that action to escalate into disaster. Readiness starts with controls that make it hard for a compromise to be exploited. Detection should happen automatically, without depending solely on the vigilance of the person targeted. Accounts should be limited in both access and duration, so that stolen credentials are of little value. The environment should be designed so that an attacker’s attempts to move deeper trigger noise and alerts before real damage occurs. And when those alerts activate, the response should be immediate, practiced, and coordinated.
When those pieces are in place, a click signals an investigation rather than a crisis. It becomes a moment to demonstrate the system’s effectiveness, not a frantic effort to contain something that could have been prevented during design. This calls for a shift in how success is evaluated. Lower click rates might seem like progress, but they are an unstable measure if the underlying architecture still allows a single mistake to become a breach. True progress is reflected in faster detection, a smaller impact radius, and fewer steps from alert to containment.
That shift also depends on trust. People need to feel confident that reporting a mistake will not be met with shame or punishment, because delays in reporting often cause more damage than the initial error itself. Awareness programs work best when they are built around that trust, when they offer practical lessons tailored to the work people actually do, and when they show that the security team plays by the same rules it sets for others.
Security isn’t a show. It isn’t about creating metrics that look impressive in a quarterly report. It’s about building an environment that can endure failure without falling apart. When leaders stop chasing the illusion of perfection and begin preparing for the inevitability of human error, they replace fragile optimism with lasting resilience. This shift in focus is what turns a single click from a potential disaster into just another routine alert.