Artificial intelligence is working its way into every industry, with an estimated 37% of US workers using the technology as of August 2025. It's affecting everything from how we search the internet to how Coca-Cola makes its commercials. And while much of the workplace hype centers on white-collar jobs, where office workers use it to draft reports, emails and presentations, AI is making its mark on the manufacturing sector, too.
Deloitte expects AI adoption in manufacturing to grow significantly over the next few years, transforming everything from the back office to the front line with predictive automation. The firm expects AI to help improve production by generating shift reports and work instructions, capturing legacy knowledge from retiring employees and laying the foundation for smarter physical robots that can take on high-risk or ergonomically dangerous tasks.
But before we get too excited about the possibility of a science fiction future, it's important to take a reality check and understand something crucial: AI is already here, and whether your organization is ready or not, it is likely being used in your workplace. Shadow AI is the term for the use of artificial intelligence tools outside of a company's knowledge, policies and governance frameworks. This poses obvious challenges for compliance and regulatory requirements, and it also affects safety.
Shadow AI and safety
The boom in AI adoption has caused a scramble among organizations to prepare for the massive opportunities represented by artificial intelligence and to shore up defenses against its threats. Prominent standards bodies like ISO and NIST have meticulously mapped out the risks associated with the new technology and set forth frameworks and best practices, such as ISO/IEC 42001 and the NIST AI Risk Management Framework, that can keep AI from becoming a liability.
These risks include everything from AI generating false information to human factors like overconfidence influencing how people interpret machine output. Generative AI is prone to hallucinations, presenting fabricated details with authority, so in a cardiac emergency you want workers relying on proper CPR and AED training, not consulting a machine. And when it comes to assessing hazardous energy, a human set of eyes is still more reliable than an AI-powered machine vision application, which can struggle with complex, context-heavy environments. When an incident happens and someone gets hurt, an algorithm won't be the one bearing the real-world consequences of an injury or fatality. This is why it's so crucial to have transparent, well-communicated best practices for authorized AI use.
Shadow AI carries all of the risk associated with authorized AI, but it can't be monitored, audited or verified in accordance with a company's policies. Workers may be feeding information into an unauthorized AI tool, and its output may in turn be influencing their actions. And that's not something they should be blamed for: just like mobile devices and email before it, AI is both a consumer technology and an enterprise technology. Some people use AI around the clock, and it feels natural for them to consult a chatbot for a quick and easy answer, even if doing so increases their risk of contributing to an incident or getting hurt.
Shadow AI will inevitably make its way into almost every organization, and it will increase the risk of regulatory exposure, dangerous human factors and even critical errors that cause injuries. How can an EHS professional confront this emerging threat? One way is to treat shadow AI like a human factor.
Internal factors and outcomes
According to the SafeStart Human Factors Framework, the reliability of an outcome is dictated by the relationship between an organization and the individuals operating within it. An organization's systems inform an individual's thoughts and internal factors, which motivate their actions. The individual's actions are the final element that influences the outcome. When AI is deployed in an organization with proper oversight and guardrails, it functions as a technical system.
But shadow AI functions more like an internal factor at the individual level. When an individual consults their preferred consumer-grade chatbot before acting, the prompt and the generated output never leave the individual's learning loop; the organization has no visibility into either. So if we want to reduce the risk created by shadow AI, we should address it the same way we address other internal factors like rushing or frustration.
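To make that distinction concrete, here is a minimal, purely illustrative sketch. All names are hypothetical and this is not part of SafeStart or any EHS software; it simply models the structural point above: an authorized AI interaction passes through the organization's systems and leaves an auditable record, while a shadow AI interaction never does.

```python
# Hypothetical sketch: why authorized AI is auditable while shadow AI is not.
from dataclasses import dataclass, field

@dataclass
class Organization:
    """The organization's technical systems, including an approved AI tool."""
    audit_log: list[tuple[str, str, str]] = field(default_factory=list)

    def authorized_ai(self, worker: str, prompt: str) -> str:
        # A governed system: every interaction is recorded and can be
        # monitored, audited and verified against company policy.
        response = f"[approved AI answer to: {prompt}]"
        self.audit_log.append((worker, prompt, response))
        return response

def shadow_ai(prompt: str) -> str:
    # An internal factor: the prompt and output stay inside the individual's
    # learning loop, invisible to the organization.
    return f"[consumer chatbot answer to: {prompt}]"

org = Organization()
org.authorized_ai("worker_1", "Draft the shift handover report")
shadow_ai("Draft the shift handover report")  # leaves no trace anywhere

print(len(org.audit_log))  # 1 -- only the authorized interaction is visible
```

The organization can only manage what passes through its systems; everything else has to be addressed at the level of the individual, which is exactly where human factors training operates.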
On the simplest level, human factors training increases safety through awareness. If a worker can recognize a human factor influencing their thoughts and actions, then they can take steps to eliminate it or account for the increased risk. Even for tricky human factors like complacency, which can’t be felt in the moment it’s influencing a person’s actions, awareness is usually the biggest contributor to managing that state of being. And in order to be aware of human factors, we need to talk about them. The same goes for shadow AI.
The benefits of talking about tech
Just because it’s called shadow AI doesn’t mean you can’t bring it into the spotlight by discussing its use—and its potential consequences—with employees. By acknowledging that some folks may be using their own AI in their daily routines and encouraging them to share their experiences, you are taking the first step towards dispelling the darkness. And this has knock-on effects:
- Stigma busting: When people regularly talk openly about an internal factor, they are less likely to hide it.
- Demystifying: Studies show that many people are nervous about, confused by or distrustful of AI. Open and blame-free conversations about new technology can help people understand how it works, regardless of how they feel about it. The more people know about a new technology, the better equipped we all are to manage the risks it presents.
- Upgrading human factors conversations: Whether it's using an app to summarize an email because you're in a rush, having a chatbot draft a report because you're tired, or feeling alienated or upset because you don't like AI, artificial intelligence affects people's human factors. By opening the floor to conversations about AI, you can better understand how it factors into the states of mind that affect our safety.
- 24/7 safety: The goal of these conversations about artificial intelligence is to limit or eliminate the use of shadow AI in your workplace. But even if you achieve that goal, workers may still use AI at home, where they are exposed to the same safety risks posed by chatbot use. By talking about how AI affects our thoughts, human factors and actions, you will establish a safety-focused perspective on AI that applies at home and on the road, making employees safer no matter where they are.
Whether your workplace is an early adopter or just starting to explore official uses of artificial intelligence, consider talking to workers about how and when they use AI. By taking the initiative to shine a light on the shadows, you'll help ensure that this new and powerful technology isn't generating invisible dangers in your workplace.
