Psychological Safety in Security Teams: Why It's a Performance Issue, Not a Feelings Issue
Godfrey Maiwun · September 2025 · Team Leadership · 11 min read
The term "psychological safety" carries a cultural connotation that causes many technical leaders to discount it. It sounds soft. It sounds like a wellbeing initiative. In security contexts, it is an operational requirement — and teams that lack it are teams that will eventually miss something they should have caught.
What psychological safety is and is not
Psychological safety, as defined by Amy Edmondson in her foundational research at Harvard Business School, is "a shared belief that the team is safe for interpersonal risk-taking." It means that team members believe they can speak up — about mistakes, gaps, concerns, disagreements — without facing punishment, humiliation, or career consequences for doing so.
It does not mean comfort. It does not mean avoiding difficult conversations or hard feedback. It does not mean everyone agrees or that conflict is absent. Edmondson distinguishes psychological safety from excessive niceness: high-performing teams can have demanding standards and difficult feedback cultures while still being safe environments for raising concerns. The standard is whether people feel they can say what they know without social penalty — not whether everything is pleasant.
In security contexts, the specific question is: does every person in this team believe they can flag a security concern, admit they made a mistake, question a decision, or escalate an anomaly — without fear of being dismissed, blamed, or penalised for doing so?
Why it is an operational requirement in security
Security incidents almost never happen because no one in the organisation had any idea something was wrong. They happen because someone had a concern that was not acted on, or made a mistake that was not disclosed, or noticed an anomaly that was not escalated. Post-incident reviews routinely reveal signals that were present well before the incident. The gap is rarely technical — it is cultural. Someone knew something and did not say it, or said it and was not heard.
The mechanisms are specific. A tier-one analyst notices a pattern in logs that feels anomalous but is uncertain enough that surfacing it feels risky — it might look like they are raising false alarms, and they have been embarrassed for that before. So they close the ticket and move on. An engineer makes a configuration change that might have introduced a vulnerability but is not certain enough to self-report — admission of error is career-risky in this team's culture. So it goes unlogged and unfixed until something breaks. A security architect disagrees with a design decision but has been shut down in past design reviews, so raises no objection this time.
In every case, the organisation's actual security posture is worse than its documented posture — because information that should have flowed upward was suppressed by social risk. This is not a knowledge problem. It is a psychological safety problem.
The security team dynamics that suppress it
Blame culture after incidents. If post-incident reviews focus on who is responsible rather than what failed systemically, team members learn that admitting involvement in a problem is personally costly. The rational response is to minimise disclosure — to report only what is certain, to fix quietly rather than disclose, to not raise concerns that might retrospectively make you look like you should have caught something earlier. This makes your incident detection worse, not better.
Expertise hierarchies that silence junior staff. Security teams typically have wide experience gradients — from junior analysts to senior architects with decades of experience. In teams where hierarchy is rigid, the junior analyst who notices something the senior architect did not is unlikely to speak up confidently. The analyst assumes the experienced person already knows, or fears looking naive by raising something that turns out to be benign. The experienced person, unaware of the observation, does not investigate. An incident that could have been caught early is not.
Leaders who respond to questions with frustration. Technical leaders who visibly lose patience with questions they consider basic, who dismiss concerns before hearing them fully, or who create a culture where uncertainty is treated as weakness will not receive the information they need to make good decisions. People will tell them what they believe they want to hear — and what they want to hear is that everything is under control.
Building it in a security team
Blameless post-incident reviews. The blameless PIR is the single most powerful structural intervention for building psychological safety in security teams. The goal of a PIR is systemic learning, not individual accountability. Who made an error is less important than what conditions made the error possible and what changes would prevent it. This model is established practice in site reliability engineering — it belongs equally in security operations.
Blameless does not mean no consequences for recklessness or deliberate misconduct. It means that honest disclosure of honest mistakes is protected and valued. The message has to be consistent: when a team member says "I made an error and here is what happened," the response from leadership cannot be punitive if you want that honesty to continue.
Leaders who model uncertainty. The most powerful signal a security leader can send is admitting, in front of their team, that they do not know something. "I am not certain — let me check" or "that is a good catch, I had missed that" from a senior leader gives everyone permission to be uncertain. It signals that competence is not the same as omniscience, and that acknowledging limits is professional rather than weak.
Structured channels for concern escalation. In teams where speaking up in a meeting feels risky, alternative channels — a dedicated Slack channel for "anomalies worth discussing," an anonymous reporting mechanism, a standing agenda item in one-on-ones for "things I am not sure about" — lower the social cost of raising concerns. The goal is to make it easier to say something than to stay silent.
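To make the design point concrete, here is a minimal sketch of what an anonymous intake mechanism could look like. Everything in it is hypothetical (the `Concern` and `ConcernQueue` names are invented for illustration): the key choices are that the submitter's identity is accepted but never persisted, and the timestamp is coarsened to a week so a report cannot easily be traced back to a specific shift.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Concern:
    """An escalated concern, deliberately stored without submitter identity."""
    summary: str
    week: str  # coarse ISO year-week stamp, to reduce identifiability

@dataclass
class ConcernQueue:
    """Minimal anonymous intake for 'anomalies worth discussing'."""
    items: List[Concern] = field(default_factory=list)

    def submit(self, summary: str, submitter: str) -> Concern:
        # The submitter is known at submission time (e.g. from an
        # authenticated session) but is intentionally discarded before
        # anything is stored.
        year, week, _ = datetime.now(timezone.utc).isocalendar()
        concern = Concern(summary=summary, week=f"{year}-W{week:02d}")
        self.items.append(concern)
        return concern

queue = ConcernQueue()
queue.submit("Odd spike in failed logins from one subnet",
             submitter="analyst-7")
```

Whether the channel is a script like this, a form, or a standing Slack channel matters less than the property it guarantees: raising a concern must cost less, socially, than staying silent.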
Curiosity rather than defensiveness about near-misses. A near-miss — a security event that was detected and contained before causing damage — should be treated as a gift: evidence of a gap that can now be closed. Leaders who respond to near-misses with frustration about how close it was will train their teams to minimise reporting of near-misses. Leaders who respond with genuine curiosity about what can be learned will see more near-misses reported — and more gaps closed before they become incidents.
The performance case
Edmondson's research across multiple industries consistently shows that teams with high psychological safety detect problems earlier, learn faster from mistakes, and perform better on complex tasks that require coordination. Google's Project Aristotle, an internal study of what distinguished high-performing teams, found psychological safety to be the most important factor — more important than the skill levels of individual team members.
For security teams specifically: the research maps directly to operational outcomes. Earlier detection of anomalies. More complete incident disclosure. Faster learning from failure. Better coordination in crisis response. These are not cultural nice-to-haves. They are the operational capabilities that the function exists to provide.
The investment in psychological safety is not a wellbeing initiative that competes with security priorities. It is a multiplier on the effectiveness of everything else the security function does. A team that feels safe to speak will catch more, learn more, and respond better than a team that does not — regardless of how technically capable its members are.
Filed under: Team Leadership