Hello again, and welcome to week three of our CISO's guide to red teaming blog series. When organizations kick off a red teaming engagement, they often expect to gain visibility into technical vulnerabilities in their attack surface. Those findings are a key part of any engagement, but red teaming goes well beyond them.

Red teaming beyond technical vulnerabilities

One of the most important insights a CISO gains from red teaming is that security is not just a technical problem; it's a human and organizational one. While vulnerability scanners and patch management address software flaws, red team exercises often reveal that the weakest links lie in human behavior and process deficiencies. A comprehensive red team doesn't stop at hacking computers; it probes the awareness and reactions of people, as well as the robustness of processes (incident response, change management, physical security procedures, etc.). Red teaming goes beyond finding a misconfigured server or an open port: it uncovers systemic issues such as employees being phished, IT support or helpdesk processes being tricked, or incident response playbooks failing under pressure. Addressing these findings is crucial for truly improving organizational resilience.

Exploiting the human element

As noted earlier, the majority of breaches involve some human action or error, whether it’s clicking a malicious link, using a weak password, or misconfiguring something. Red teams take advantage of this by incorporating social engineering attacks into their campaigns.

Social-engineering playbook

This might involve phishing emails, phone calls (vishing), SMS (smishing), and even in-person deception (if it is in scope). For example, a red team member might call a company's IT helpdesk while posing as a frantic executive: “I'm travelling and can't log in. I've forgotten my token. Please, can you reset my password urgently? I have a board meeting in 10 minutes!” If the helpdesk has a weak caller-authentication process, or one that can be defeated with details gathered through OSINT, it might just comply, effectively letting the attacker reset the exec's password and bypass MFA (perhaps by also convincing the agent to disable MFA “just for now”). This tests process adherence: do employees follow security policy under pressure, or do they bend the rules? The results can be illuminating. Many red teams find they can gather a lot of information just by calling various departments and asking innocuous questions (pretexting as an auditor, a new employee, etc.), a tactic known as elicitation. This might reveal internal lingo, names of key staff, or even details about what software or security measures are in place, all useful intel for further attacks.

Process weaknesses and incident-response gaps

Red teaming also shines a light on process failures and organizational silos. For instance, an exercise might reveal that the incident response process looks good on paper but breaks down in practice.

Real-world escalation breakdowns

Perhaps the SOC analysts noticed some suspicious activity (say, the red team triggering a malware alert) but the incident never got properly escalated to management because the alert was dismissed as a false positive, was too low priority, was invisible on a “new” dashboard, or got stuck in an email queue. In a red team debrief, the timeline of “here’s when we did X, here’s when/if it was noticed, and here’s how the staff responded” is incredibly valuable. It might show that the on-call process on a weekend is unclear, that the SOC is too understaffed to investigate every alert, or that they did respond but the communication to the broader team failed (e.g., they contained a server but didn’t inform the app owners, resulting in confusion). These are systemic issues in incident response and crisis management that a red team helps identify without the cost of a real incident. Many organizations follow up a red team engagement with a tabletop or purple team exercise involving the same scenario to further drill response procedures, this time with key stakeholders aware.
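
One way to make that debrief timeline concrete is to compute detection and escalation gaps directly from the timestamps. Here's a minimal Python sketch, assuming you've recorded when each red team action was executed, noticed, and escalated; the field names and timestamps are illustrative, not from any real engagement:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M"

# Illustrative debrief timeline: when each red team action ran, when the
# SOC noticed it (if ever), and when it was escalated (if ever).
timeline = [
    {"action": "malware alert triggered",
     "executed": "2025-03-01T14:02", "noticed": "2025-03-01T14:10", "escalated": None},
    {"action": "lateral movement to file server",
     "executed": "2025-03-01T16:45", "noticed": None, "escalated": None},
]

def gap_minutes(start, end):
    """Minutes between two timestamps, or None if either step never happened."""
    if start is None or end is None:
        return None
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60

def describe(minutes):
    return "never happened" if minutes is None else f"took {minutes:.0f} min"

for entry in timeline:
    detection = gap_minutes(entry["executed"], entry["noticed"])
    escalation = gap_minutes(entry["noticed"], entry["escalated"])
    print(f'{entry["action"]}: detection {describe(detection)}, '
          f'escalation {describe(escalation)}')
```

Even a simple table like this, reviewed in the debrief, turns “the alert got stuck somewhere” into a measurable gap you can set targets against.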

Identity and change-management blind spots

Another example of a systemic issue is poor joiner-mover-leaver (JML) processes for user accounts. A red team might find dormant accounts that still have access because HR and IT didn't disable them when someone left, or because contractors were granted broad access but never audited. By exploiting such an account, a red team can reveal a breakdown in the identity governance process. This often prompts an organization to invest in identity management solutions or tighter HR–IT coordination. Similarly, change management processes can be tested: red teams have successfully deployed rogue devices or applications inside a network and watched to see whether anyone noticed the unapproved change. If they can operate for weeks off a server they set up without anyone questioning it, that indicates a gap in asset management and change detection.
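
Catching dormant accounts doesn't require anything exotic; a periodic sweep of last-login data goes a long way. A minimal sketch, assuming a directory export with username, enabled-status, and last-login fields (the field names and the 90-day cutoff are assumptions, not a standard):

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)  # illustrative cutoff; tune to your policy

# Illustrative export from a directory service; field names are assumptions.
accounts = [
    {"username": "j.smith",         "enabled": True,  "last_login": "2025-01-10"},
    {"username": "old.contractor",  "enabled": True,  "last_login": "2024-06-02"},
    {"username": "a.leaver",        "enabled": False, "last_login": "2023-11-20"},
]

def find_dormant(accounts, now=None):
    """Flag enabled accounts whose last login is older than the threshold."""
    now = now or datetime.now()
    dormant = []
    for acct in accounts:
        if not acct["enabled"]:
            continue  # already disabled; not a JML risk
        idle = now - datetime.fromisoformat(acct["last_login"])
        if idle > DORMANCY_THRESHOLD:
            dormant.append((acct["username"], idle.days))
    return dormant

for user, days in find_dormant(accounts):
    print(f"{user}: no login for {days} days -- review with HR/IT")
```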

Cultural and reporting challenges

Many red team engagements expose issues of culture and coordination. For example, in some companies, the security team might notice something odd but hesitate to raise an alarm for fear of false alarms, or because the company has a blame culture. A healthy organization, like a healthy body, should have reflexes; it should react when something is amiss. If employees are afraid to report a lost badge or an unusual email because they think they'll be punished, that's a cultural issue that attackers exploit (they thrive in silence and fear). Red teams sometimes intentionally create obvious signs to see if employees report them. One metric here is the reporting rate. For example, Verizon noted in its 2024 DBIR that 20% of individuals reported phishing in simulated exercises, and only 11% of those who clicked went on to report it, meaning at least some clicked and then realized their mistake. These numbers, while slowly improving, suggest a lot of attacks could still sail through without being reported. A CISO wants to improve such rates because human sensors (employees reporting incidents) are as important as technical sensors. Red team exercises boost these rates by raising awareness (“See how convincing a phish can be? Please report anything suspicious quickly!”).
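
If you want to track these rates internally, the arithmetic is straightforward. A small sketch, with illustrative numbers chosen to mirror the DBIR figures above:

```python
def phishing_metrics(recipients, clicked, reported, clicked_and_reported):
    """Compute reporting-rate metrics for one phishing simulation campaign."""
    return {
        "click_rate": clicked / recipients,
        "report_rate": reported / recipients,
        # Of those who clicked, how many caught their mistake and reported it?
        "clicker_report_rate": clicked_and_reported / clicked if clicked else 0.0,
    }

# Illustrative numbers, not from any real campaign.
metrics = phishing_metrics(recipients=1000, clicked=80,
                           reported=200, clicked_and_reported=9)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

Trending these three numbers campaign over campaign tells you whether your human sensors are actually getting sharper.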

Technology–people–process interplay

Importantly, red teams help find problems not just in IT systems but in the interplay of technology with people and process. For example, a red team may discover that backups are regularly performed (a technical process) but the process to restore them is untested or takes too long, a huge gap if ransomware hits. Or they might reveal that while there is an incident response plan, the business continuity plans are not aligned (maybe the plan to continue operations during an IT outage is unrealistic). By simulating an incident, these disconnects become apparent. One could liken such simulations to a fire drill: you might have an evacuation plan on paper, but until you run a drill, you won't know that, say, one exit door is jammed or people congregate in the wrong place. Red teaming is a cyber fire drill that exposes the non-obvious issues.
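
Restore drills, like fire drills, only count if they actually run. Here's a minimal sketch of a scheduled restore test, assuming a hypothetical restore command and an illustrative four-hour recovery-time objective; the invocation below is a placeholder, not a real backup tool's CLI:

```python
import subprocess
import time

RTO_SECONDS = 4 * 3600  # illustrative recovery-time objective

def restore_drill(command):
    """Time a test restore and check it against the RTO.

    `command` is whatever invokes your backup tool's restore into an
    isolated environment; the exact invocation is an assumption here.
    """
    start = time.monotonic()
    result = subprocess.run(command, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    ok = result.returncode == 0 and elapsed <= RTO_SECONDS
    print(f"restore {'PASSED' if ok else 'FAILED'} "
          f"in {elapsed / 3600:.1f}h (RTO {RTO_SECONDS / 3600:.0f}h)")
    return ok

# Placeholder command; replace with your real restore invocation.
restore_drill(["echo", "simulated restore"])
```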

Remediation and resilience in action

Consider the scenario of an advanced attacker who has partial access. The security team may detect and contain them on some machines. Does the IT team then quickly reimage those machines and remove persistence, or do bureaucratic delays allow the attacker (red team) to regain a foothold? Perhaps the red teamers observed that, after being “caught” on one system, they still had valid credentials that nobody thought to revoke, letting them slip back in through a VPN. That's a process gap in remediation: responding teams must not only remove malware but also invalidate credentials, active sessions, and the like if they suspect compromise. Red team exercises can uncover whether such follow-through is happening.
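
That follow-through is easier to enforce when it lives in a runbook or script rather than in someone's memory. A sketch of a post-containment checklist; the helper functions are hypothetical stand-ins for your actual EDR, identity provider, and VPN integrations:

```python
# Hypothetical integration points -- replace the print stubs with calls
# to your actual EDR, identity provider, and VPN management APIs.
def reimage_host(host): print(f"[edr] reimage queued for {host}")
def disable_account(user): print(f"[idp] account disabled: {user}")
def revoke_sessions(user): print(f"[idp] all sessions revoked: {user}")
def rotate_credentials(user): print(f"[idp] credential reset forced: {user}")
def revoke_vpn_certs(user): print(f"[vpn] certificates revoked: {user}")

def remediate(compromised_hosts, compromised_users):
    """Run every remediation step; skipping any one can let the attacker back in."""
    for host in compromised_hosts:
        reimage_host(host)
    for user in compromised_users:
        disable_account(user)
        revoke_sessions(user)     # kills tokens that survive a password reset
        rotate_credentials(user)
        revoke_vpn_certs(user)    # the re-entry path in the scenario above

remediate(["ws-0142"], ["exec.jdoe"])
```

The point isn't the code itself but the completeness check: each compromised identity gets every step, every time.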

Building an adaptive security culture

Red teaming outcomes often highlight the need for organizational learning and adaptability. The most mature organizations treat each red team finding as a lesson, update their training and processes, and even share lessons across industries (anonymously, via ISACs, conferences, etc.). They foster a culture where being “beaten” by a red team is not failure but an opportunity to improve, akin to how regular exercise breaks down muscle fibers only to rebuild them stronger. Staying with the health metaphor, small, controlled doses of stress (red team drills) build the organization's resilience “muscle.” People become more astute (e.g., employees might start politely challenging that unknown person roaming the office or verifying unusual requests through a second channel). Processes get refined with each test (maybe the company institutes a stricter helpdesk verification protocol because the red team breached it). Over time, these adjustments lead to a security posture where both technology and humans are more prepared to prevent, detect, and respond to real threats in a concerted, agile manner.

What’s next: Industry-specific deep dives

So there you have it! For our final three blogs in this series, I’m going to do deep dives into red teaming for three specific industries—finance and insurance, healthcare and pharmaceuticals, and manufacturing and industrial (OT/ICS).