
Artificial Intelligence – In Addition To, Not Instead Of

  • Writer: Urban Jonson
  • Jul 8, 2025
  • 3 min read

Updated: Jul 10, 2025


Artificial Intelligence (AI) is transforming how we work, research, and make decisions. From automating routine tasks to uncovering patterns across vast datasets, its potential to enhance efficiency and innovation is undeniable. However, as AI adoption accelerates, particularly in high-stakes industries such as transportation, healthcare, finance, and defense, we must confront a critical truth: not all applications of AI are created equal, and not all models are ready to operate independently in safety-critical environments.


I want to make one point absolutely clear: concerns about AI’s limitations in certain contexts do not mean I am anti-AI. In fact, quite the opposite is true. I am a strong advocate for AI when used appropriately, with clear boundaries, oversight, and purpose. AI, when applied thoughtfully, can be a tremendous force multiplier for both people and organizations. It can support fundamental research, generate and summarize content, monitor complex systems for anomalies, improve threat detection in cybersecurity, analyze product designs, and much more. These are just a few examples of where AI can complement—and elevate—human expertise.


The problem is not AI itself; it’s how we use it.


Many of the AI models in use today, especially those built on machine learning (ML) and large language models (LLMs), are non-deterministic: they do not always return the same result for the same input. They are also inscrutable, meaning we often cannot explain why a model made a particular prediction or decision. This “black box” nature makes current-generation AI models difficult to validate, audit, or debug, especially in scenarios where lives or infrastructure depend on reliable, repeatable results.
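

To see why sampling-based generation is non-deterministic, consider a minimal, self-contained Python sketch of temperature sampling, the mechanism most LLMs use to choose each next token. This is an illustration, not any particular model’s implementation:

    import math
    import random

    def sample_next_token(logits, temperature=0.8):
        """Sample one token index from a softmax over model scores.

        With temperature > 0, identical inputs can yield different
        outputs on different runs -- the non-determinism noted above.
        """
        scaled = [score / temperature for score in logits]
        peak = max(scaled)                              # for numerical stability
        weights = [math.exp(s - peak) for s in scaled]  # unnormalized softmax
        return random.choices(range(len(weights)), weights=weights, k=1)[0]

    # Toy scores a model might assign to three candidate tokens.
    print([sample_next_token([2.0, 1.5, 0.3]) for _ in range(5)])
    # A possible run: [0, 0, 1, 0, 2] -- same input, different results.

Even with sampling disabled, deployed systems can remain subtly non-deterministic due to floating-point and batching effects, which is part of what makes rigorous validation so difficult.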


Furthermore, current models are often prone to hallucination—a term used to describe confident but false or fabricated responses—and suffer from inconsistent output quality. These limitations are not just academic concerns. In safety-critical systems where milliseconds matter and failure is not an option, deploying unexplainable AI without proper safeguards can lead to disastrous consequences.


This is why it is so important to think of AI as an enhancement to human judgment and well-established engineering processes—not an outright replacement. AI works best when integrated into workflows that involve checks and balances, cross-validation with other tools, and human oversight. In these hybrid environments, AI augments human capabilities rather than replacing them, helping to flag anomalies, suggest options, automate repeatable tasks, or accelerate research.
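

To make the hybrid pattern tangible, here is a hedged sketch of a human-in-the-loop gate. The names (Recommendation, review_and_apply, approver) are hypothetical, not drawn from any specific product:

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        action: str        # what the model suggests doing
        confidence: float  # model-reported confidence, 0.0 to 1.0
        rationale: str     # whatever explanation the model can offer

    def apply_action(action: str) -> None:
        # Placeholder for the real, audited side effect.
        print(f"Applying approved action: {action}")

    def review_and_apply(rec: Recommendation, approver) -> bool:
        """The model proposes; a qualified person disposes.

        `approver` is any callable that presents the recommendation to
        a human and returns True only on explicit approval.
        """
        if rec.confidence < 0.9:
            return False  # low-confidence suggestions are flagged, not acted on
        if not approver(rec):
            return False  # a human retains final authority
        apply_action(rec.action)
        return True

The design point is that the AI’s output is advisory by construction: nothing reaches apply_action without an explicit human decision.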


Where AI starts to go astray is when organizations see it as a shortcut—a way to eliminate roles, skip proven procedures, or outsource accountability to a system that cannot explain itself. That is a dangerous path, especially in industries that rely on functional safety, regulatory compliance, or highly sensitive data. AI can assist a surgeon, not replace one. It can analyze telemetry from an aircraft, but it should not decide unilaterally how to reroute flight paths without pilot approval. It can help engineers design complex systems, but it shouldn’t be the final arbiter of what’s “safe.” As in human hierarchies, authority to act can be delegated, but responsibility for the resulting actions cannot. Organizations must assess and understand what ‘authority of action’ they are delegating to AI and their responsibility for the risks that delegation entails.


This is why organizations need to develop, monitor, enforce, and, as needed, revise robust AI policies and best practices. These guidelines should clearly define how and where AI can be used, which internal processes it may support, and what data it can access. Special attention must be given to protecting proprietary, regulated, or confidential information, particularly when using public cloud-based AI platforms. Uploading sensitive data into a third-party model without safeguards can result in unintended data leakage, IP loss, or even regulatory violations.
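

What such a guideline can look like in code is illustrated by the following hypothetical sketch of a classification gate applied before any data leaves for a third-party AI platform. The labels and function name are assumptions for illustration, not a standard:

    # Hypothetical data-classification gate for outbound AI requests.
    ALLOWED_FOR_EXTERNAL_AI = {"public", "internal-approved"}

    def may_send_to_external_model(payload: str, classification: str) -> bool:
        """Permit third-party processing only for appropriately labeled data.

        Proprietary, regulated, or confidential material stays inside
        the organization's boundary, per the AI usage policy.
        """
        return classification.lower() in ALLOWED_FOR_EXTERNAL_AI

    assert may_send_to_external_model("press release draft", "public")
    assert not may_send_to_external_model("customer PII export", "confidential")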


Adopting AI responsibly starts with the proper mindset—not viewing AI as a magical replacement for human expertise but as a sophisticated tool to be integrated with intention. It is not about resisting AI; it’s about respecting its capabilities and constraints.

The future of AI is incredibly promising. But its value will ultimately be determined by how well we balance its power with the judgment, accountability, and processes that make systems trustworthy and resilient. In this light, the best use of AI is not instead of people—but in addition to them.


As a nonprofit organization dedicated to advancing industrial AI and autonomy, the Minerva Institute is here to support your organization and its people as you explore and embrace AI’s possibilities. Organizations interested in deepening their engagement can join the Institute as members, gaining access to exclusive workshops, research collaborations, and a network of industry peers. We’d welcome the chance to collaborate; please reach out at inquiry@minervainstitute.ai.