At Open Systems, we approach AI the same way we approach security: with clarity of purpose and a focus on outcomes. As “Guided GenAI” rises toward the peak of the Future of Work Hype Cycle, many organizations are testing how automation can reshape their security operations. The potential is real – but so are the risks of misalignment.

AI creates value in security only when it elevates human judgement – not when it takes it away. And yet, security leaders face a growing tension: where does human oversight remain indispensable, and where does it become a bottleneck?

This is not a theoretical question. It is one that affects threat response, operational resilience, and trust.

Why “Human-in-the-Loop” Is Not a One-Size-Fits-All Answer

Human-in-the-loop (HITL) has become a standard pillar of responsible AI governance. But in practice, it can fail when organizations treat it as a universal safeguard.

Research on the MABA-MABA ("Men Are Better At – Machines Are Better At") fallacy shows that the assumption "humans do X, machines do Y" breaks down in complex systems. In fact, the more capable the AI, the more paradoxical the situation becomes:

A computationally superior machine still requires humans to ensure that it is working effectively – and the human is held accountable for errors, even when the machine is more accurate.
(Source: https://www.nature.com/articles/s41599-020-00646-0)

Real-world cases show how fragile this arrangement can be.

Oversight Complacency: When Humans Stop Watching the System

In the U.S., a driver supervising an autonomous vehicle became disengaged — reportedly asleep or playing a game — and the car caused a fatal crash. The automation worked well most of the time, and the human, expected to be a vigilant fallback, simply wasn’t.

This is a classic case of oversight complacency:

  • if the system works too well, humans stop monitoring.
  • if the task is too passive, humans disengage.

The Mirror Image: Alert Fatigue

In security operations, we know the opposite effect just as well:

  • when too many alerts are irrelevant, analysts stop responding.
  • attention collapses under noise.

Oversight complacency and alert fatigue are two sides of the same behavioral coin – and both erode accountability.

The lesson is clear: humans cannot meaningfully supervise systems that are designed in ways that undermine human attention.

Regulators, too, have begun to articulate what meaningful human oversight should look like. The European Data Protection Supervisor highlights exactly this challenge: oversight must be real, informed, and feasible – not a symbolic add-on to automated systems.

This aligns with what we see in the field: teams cannot supervise everything, and asking them to do so undermines both morale and reliability. Oversight must be targeted, not total.

Where Guided GenAI Helps

Guided GenAI provides a path forward because it isn’t designed to replace analysts or overwhelm them with raw output. Instead, it brings structure and transparency into workflows that were previously opaque or fragmented.

For Open Systems, this means systems that:

  • explain why a recommendation is being made,
  • highlight what is uncertain or unusual,
  • direct human attention to the moments that truly require judgement,
  • reduce irrelevant noise so oversight remains manageable,
  • and maintain a consistent evidentiary trail.
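What this can look like in practice is easiest to see with a small, concrete sketch. The snippet below is purely illustrative – the class name, fields, and review threshold are assumptions made for this post, not Open Systems' actual implementation – but it shows how a recommendation can carry its own rationale, uncertainty, and evidentiary trail, so that only the uncertain or unusual cases are routed to human attention.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical structure for a guided recommendation. Field names and the
# review threshold are illustrative assumptions, not a product API.

@dataclass
class GuidedRecommendation:
    action: str                     # what the system proposes (e.g. "isolate host")
    rationale: str                  # why the recommendation is being made
    confidence: float               # 0.0-1.0; low values flag uncertainty
    anomalies: list[str] = field(default_factory=list)  # what looks unusual
    evidence: list[str] = field(default_factory=list)   # IDs forming the evidentiary trail
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def needs_human_review(self, threshold: float = 0.8) -> bool:
        """Route human attention only to uncertain or unusual cases."""
        return self.confidence < threshold or bool(self.anomalies)


# Example: a high-confidence, unremarkable case can proceed without review,
# while anything uncertain is surfaced to an analyst with full context.
rec = GuidedRecommendation(
    action="Block outbound traffic to 203.0.113.7",
    rationale="Destination matches a known C2 indicator seen in 3 prior alerts",
    confidence=0.65,
    anomalies=["first contact from this subnet"],
    evidence=["alert-4711", "threat-intel-feed:entry-982"],
)
print(rec.needs_human_review())  # True -> escalate with rationale and evidence attached
```

The design choice that matters here is that every recommendation is self-describing: the explanation and evidence travel with the proposed action, so an escalation never reaches an analyst without context.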

Rather than forcing humans to manually review everything, Guided GenAI helps them focus on the decisions that matter — without sacrificing speed.

This is accountability by design, not by burden.

Accountability by Design, Not by Default

To avoid the pitfalls outlined in the MABA-MABA literature, next-generation network operations centers (NOCs) need to adopt a more deliberate model for shared work. Four principles guide our approach:

  1. Humans shape intent; machines execute with traceability

Cognitive work – interpretation, ethics, prioritization – remains human-led. Machines scale the execution reliably and consistently.

  2. Oversight adjusts dynamically to risk and context

Some actions require human approval; others can run autonomously but remain auditable. Oversight must be flexible enough to reflect real-world pressures (a minimal sketch of such a risk-adaptive gate follows after these principles).

  3. Responsibility must be distributed explicitly

Clear logs, transparent decision chains, and defined escalation paths ensure accountability is shared appropriately — not silently shifted onto analysts.

  4. Oversight must be realistic for the teams doing the work

Monotonous review tasks do not create safety; they create blind spots. Sustainable oversight means reducing unnecessary cognitive load.
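To make the second principle tangible, here is a deliberately simplified sketch of a risk-adaptive oversight gate. The risk tiers, thresholds, and action names are hypothetical examples chosen for this post, not a description of any specific product; the point is the pattern: low-risk actions run autonomously but always leave an audit record, while high-risk actions are held until a named human approves them.

```python
import json
import logging
from enum import Enum

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

# Hypothetical risk tiers and policy; the mapping of actions to tiers is
# an assumption for illustration only.
class Risk(Enum):
    LOW = "low"        # e.g. enrich an alert with context
    MEDIUM = "medium"  # e.g. quarantine a file
    HIGH = "high"      # e.g. disable a user account

REQUIRES_APPROVAL = {Risk.HIGH}                  # human approval gate
AUTONOMOUS_BUT_AUDITED = {Risk.LOW, Risk.MEDIUM}

def execute(action: str, risk: Risk, approved_by: str | None = None) -> bool:
    """Run an action under risk-adaptive oversight, always leaving a trace."""
    if risk in REQUIRES_APPROVAL and approved_by is None:
        log.info(json.dumps({"action": action, "risk": risk.value,
                             "status": "pending_approval"}))
        return False
    # Every execution, autonomous or approved, is written to the audit log
    # so responsibility is distributed explicitly rather than implied.
    log.info(json.dumps({"action": action, "risk": risk.value,
                         "status": "executed", "approved_by": approved_by}))
    return True

execute("enrich alert 4711 with asset context", Risk.LOW)              # runs autonomously, audited
execute("disable account j.doe", Risk.HIGH)                            # held for approval
execute("disable account j.doe", Risk.HIGH, approved_by="analyst-07")  # runs after approval
```

Because every path writes to the same audit log, responsibility is recorded at the moment of action rather than reconstructed afterwards.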

Evolving Faster Than the Threat Landscape

Machines continue to improve rapidly. Humans, by comparison, remain constant. That’s not a weakness; it’s a design constraint. The real opportunity lies in how we architect systems where human judgement and machine intelligence work together without overwhelming one another.

At Open Systems, we believe the future of secure operations is neither fully automated nor fully manual. It is coordinated: a partnership where AI accelerates expertise, clarifies decisions, and upholds accountability at every step.

AI should not replace human judgement. It should extend it – reliably, transparently, and at scale. That is how we deliver security that customers can trust, even as the landscape accelerates.