
AI as a Force Multiplier in Cybersecurity - At What Risk?

  • Writer: Urban Jonson
  • Oct 29
  • 4 min read

Artificial intelligence is rapidly reshaping the cybersecurity landscape. For organizations in transportation, energy, and operational technology (OT), AI promises to be a force multiplier, allowing smaller teams to monitor more systems, respond faster, and predict risks that humans might miss.


But as with every powerful technology, there’s a flip side. In safety-critical and industrial domains, the risks of rushing AI into the stack without proper governance may outweigh the benefits.


At the Minerva Institute for Industrial AI & Autonomous Systems, we often say: “Forging intelligence into industry requires more than algorithms—it requires accountability.” That’s precisely where governance comes in.


Before embedding AI into cybersecurity operations, organizations must define why AI is being deployed and how it supports core resilience and safety objectives. A clear AI vision, anchored to the enterprise mission, ensures that automation serves strategic outcomes rather than chasing technical novelty. This alignment transforms AI from an experimental tool into a governed capability that advances organizational resilience and trust.


The Opportunity Side: AI as a Force Multiplier

Cybersecurity teams today face a crushing reality: talent shortages, complex infrastructures, and an expanding threat surface. AI offers a way to keep up.

  • Predictive Risk Detection: Machine learning models can sift through CAN bus, SCADA, or telematics data to highlight anomalies long before they escalate (a minimal sketch follows this list).

  • Automated Triage: AI-powered systems can categorize, prioritize, and escalate alerts in real time, reducing analyst fatigue.

  • Threat Hunting at Scale: AI makes it feasible to spot subtle attack patterns across billions of logs, something impossible for even the best human teams.
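
To make the predictive-detection point concrete, the sketch below scores CAN bus traffic windows with an unsupervised model. It is a minimal illustration in Python, assuming hypothetical per-window features (message rate and payload entropy for a single arbitration ID) and an illustrative contamination rate; real deployments would derive features from actual captures and validate thresholds before acting on them.

# Minimal sketch: unsupervised anomaly scoring of CAN bus traffic windows.
# Feature choice and all numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for features extracted from captured CAN frames:
# columns = [messages per second for one arbitration ID, payload byte entropy]
baseline = np.column_stack([
    rng.normal(100, 5, 5000),    # normal bus load
    rng.normal(3.0, 0.2, 5000),  # typical payload entropy
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Two new traffic windows: one normal, one resembling a message-injection flood.
live_windows = np.array([
    [101.0, 3.1],   # looks like the baseline
    [450.0, 7.5],   # abnormally high rate with near-random payloads
])

scores = model.decision_function(live_windows)  # lower = more anomalous
flags = model.predict(live_windows)             # -1 = anomaly, 1 = normal

for (rate, entropy), score, flag in zip(live_windows, scores, flags):
    status = "ANOMALY" if flag == -1 else "ok"
    print(f"rate={rate:.0f} msg/s, entropy={entropy:.1f} -> {status} (score={score:.3f})")

The same pattern applies to SCADA or telematics telemetry: learn a baseline offline, score live windows continuously, and route low-scoring windows to an analyst rather than acting on them automatically.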

The bottom line? AI extends human capability. It doesn’t replace skilled analysts, but it amplifies them.


The Risk Side: New Vulnerabilities Emerge

However, treating AI as a plug-and-play solution can create new risks:

  • Adversarial Inputs: Attackers can feed manipulated data into sensors to fool models—like convincing a vehicle AI that a stop sign isn’t there.

  • Telemetry Spoofing: Fake data streams can mask attacks, leaving the AI blind to real events (a toy illustration follows this list).

  • Model Drift: Over time, AI can lose accuracy as the environment changes, missing attacks it once caught.

  • Opaque Logic: When AI makes decisions without explainability, organizations can’t justify actions to regulators or validate outcomes in safety-critical contexts.
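
To see why spoofed telemetry is so dangerous, consider the toy illustration below: a naive z-score detector learns a baseline for a hypothetical pump-pressure sensor, and an attacker who replays baseline-like values is never flagged even while a genuine fault is under way. The sensor, threshold, and numbers are all assumptions for illustration, not a real detector.

# Toy illustration: telemetry spoofing defeats a naive statistical detector.
# All values and the sensor itself are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Learned baseline, e.g., pump pressure in psi during normal operation.
baseline = rng.normal(loc=75.0, scale=2.0, size=10_000)
mean, std = baseline.mean(), baseline.std()

def is_flagged(reading: float, threshold: float = 4.0) -> bool:
    """Flag a reading whose z-score against the baseline exceeds the threshold."""
    return abs(reading - mean) / std > threshold

real_fault = 120.0                          # genuine dangerous condition
spoofed_stream = rng.normal(75.0, 2.0, 5)   # attacker replays baseline-like values

print("real fault flagged:", is_flagged(real_fault))                          # True
print("spoofed readings flagged:", [is_flagged(x) for x in spoofed_stream])   # all False

The point is not the specific statistic: any model trained to trust its inputs can be blinded if those inputs are not authenticated and cross-checked against independent sources.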

Instead of reducing risk, poorly governed AI can expand the attack surface.


Governance is the Missing Link

AI in cybersecurity isn’t just a technology story; it’s a governance story. To deploy responsibly, organizations should:

  • Define acceptable use: Not every problem should be solved with AI, especially in OT environments where safety is on the line.

  • Establish an AI policy: The policy should support the company’s goals and governance objectives, clearly define accountability, and provide guidance on updating processes and procedures with regard to AI.

  • Validate and test models: Treat AI models like any other critical system—with structured verification, validation, and explainability.

  • Integrate AI into governance frameworks: Align with standards like NIST AI RMF, ISO/IEC 42001, COBIT, and sector-specific guidance.

  • Plan for lifecycle management: AI needs retraining, continuous monitoring and improvements, and—eventually—responsible decommissioning.


Structure, Not Slogans

Establishing a cross-functional AI Governance Committee that includes cybersecurity, risk management, operations, and legal teams creates clear accountability. Designating an AI Safety & Assurance Officer or equivalent role ensures oversight of model validation, documentation, and regulatory compliance. This architecture anchors governance within existing risk and compliance frameworks instead of treating it as an add-on technology control.


Responsible AI deployment follows a full lifecycle:

1. Design & Ethical Review – confirm safety and fairness objectives.

2. Validation & Testing – verify model behavior under adversarial and safety-critical scenarios.

3. Operational Monitoring – track drift, bias, and security anomalies in real time (a drift-check sketch follows these steps).

4. Retraining & Continuous Improvement – incorporate feedback and new data safely.

5. Decommissioning & Audit – retire outdated models responsibly while maintaining traceability.
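
As a concrete companion to step 3, the sketch below shows one simple way operational drift monitoring can work: compare the live distribution of a model input feature against the training baseline with a two-sample Kolmogorov-Smirnov test and flag windows that diverge. The feature, window size, and alert threshold are assumptions for illustration; production monitoring would track many features, bias metrics, and security signals together.

# Minimal drift-monitoring sketch: compare live feature windows against the
# training baseline with a two-sample KS test. Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Baseline captured at validation time (e.g., normalized packet inter-arrival times).
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)

def check_drift(live_window, baseline, p_threshold=0.01):
    """Return True if the live window has drifted from the baseline."""
    result = ks_2samp(baseline, live_window)
    drifted = result.pvalue < p_threshold
    print(f"KS statistic={result.statistic:.3f}, p={result.pvalue:.4f}, drifted={drifted}")
    return drifted

# A healthy window drawn from the same distribution...
check_drift(rng.normal(0.0, 1.0, 1_000), training_feature)
# ...and a shifted window that should trigger review and possible retraining.
check_drift(rng.normal(0.8, 1.3, 1_000), training_feature)

A drift flag should feed step 4 (retraining and continuous improvement) through a governed change process, with a human reviewing the evidence before any updated model reaches production.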


In OT and critical-infrastructure environments, AI governance should align with sector-specific standards and guidance, such as NERC CIP, ISA/IEC 62443, and the forthcoming Minerva Institute Framework. Embedding AI oversight within these established standards, frameworks, and models ensures consistency across both digital and physical layers of defense.


Every AI-driven cybersecurity decision should retain a human-in-the-loop mechanism for verification and override. Ethical governance demands explainability, traceability, and human accountability, especially where AI actions may affect safety or regulatory obligations.


Governance is both cultural and technical. Establishing processes to instill and maintain AI literacy among cybersecurity staff ensures they can interpret, challenge, and audit AI-generated insights. Regular training and scenario exercises build a workforce that complements AI rather than blindly deferring to it.


This is the type of industrial AI governance work we’re advancing at the Minerva Institute—helping organizations understand both the capability and the liability of AI in operational environments.


Why This Matters Now

Three forces make this urgent:

  1. Talent Gaps: AI is being deployed to patch resource shortages—but without oversight, it can worsen the problem.

  2. Regulatory Pressure: Frameworks like the EU AI Act and NIST AI RMF are setting expectations for AI accountability.

  3. Real-World Failures: High-profile cases of AI systems making unsafe or flawed decisions have already shown what’s at stake.


Key Takeaway

AI is neither a silver bullet nor a guaranteed risk. It’s a powerful multiplier—but only if we apply the same rigor in governance, validation, and lifecycle management that we demand of every other safety-critical system.


At Minerva, we’re focused on ensuring that AI in OT and critical infrastructure is deployed responsibly - forging intelligence into industry without sacrificing resilience or safety.


If you’d like to learn more about how to get involved with Minerva—whether through collaboration, research, or training—please reach out at: https://www.minervainstitute.ai/contact


👉 What do you think? Is your organization using AI in cybersecurity today, and if so, how are you governing it?

 
 
 
