
Once, when ChatGPT went down for a few hours, a member of our software team asked the team lead, “How urgent is this task? ChatGPT isn’t working — maybe I’ll do it tomorrow?” You can probably imagine the team lead’s reaction. To put it mildly, he wasn’t thrilled.
Today, according to a Stanford HAI report, one in eight companies uses AI services. Productivity has increased — but so have the risks. When AI tools are used without clear oversight, employees may inadvertently feed not just routine work but also confidential data into neural networks. The Samsung case in 2023, when the company discovered that engineers had uploaded sensitive code to ChatGPT, is just one of many examples.
So how do you strike the right balance between leveraging AI for productivity and protecting your company’s security?
AI in business is no longer a “pilot project”
Today, engineers are using AI for more than just writing code. They automate individual stages of CI/CD pipelines, optimize deployments, generate tests — the list goes on.
For businesses, AI translates technical data into plain-language insights. For example, in our industrial equipment monitoring system, we have an AI agent that processes data from IIoT sensors tracking machine performance. It explains the equipment’s condition, highlights risks of failure, outlines possible courses of action, and can even answer client questions.
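In its simplest form, an agent like this is just a layer that turns raw sensor readings into a plain-language question for the model. The sketch below illustrates the idea only — the sensor names and prompt wording are assumptions, not the actual system described above:

```python
# Hypothetical sketch: turning raw IIoT readings into a plain-language
# prompt for an LLM. Sensor names and wording are illustrative.

def build_prompt(sensor_data: dict[str, float]) -> str:
    """Format machine readings as a maintenance question for the model."""
    lines = [f"- {name}: {value}" for name, value in sorted(sensor_data.items())]
    return (
        "You are a maintenance assistant. Given these machine readings:\n"
        + "\n".join(lines)
        + "\nExplain the equipment's condition, flag failure risks, "
        "and suggest next steps in plain language."
    )

prompt = build_prompt({"bearing_temp_c": 92.4, "vibration_mm_s": 6.1})
# `prompt` would then be sent to the LLM along with any access checks.
```

The value of the wrapper is that the business user never sees the raw telemetry — only the model's plain-language interpretation of it.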
AI momentum is accelerating. According to Menlo Ventures, companies spent $37 billion on AI technologies in 2025 — three times more than in 2024. AI is becoming an integral part of tech ecosystems. Gartner predicts that soon over 80% of enterprise GenAI programs will be deployed on existing organizational data management platforms rather than as standalone pilot projects.
In this scenario, AI will affect not only human productivity but also the continuity of nearly all business processes.
Where the risks lie
When we first started using LLMs to analyze equipment data, it quickly became clear that the models tended to err on the side of caution — flagging problems where none existed. Had we not trained them to recognize normal conditions, these false positives could have led to unwarranted recommendations and unnecessary costs for clients.
The risk tied to model accuracy can be mitigated early on. But some threats only surface after serious damage is done.
Take confidential data leaks via so-called Shadow AI — interactions with AI through personal accounts or browsers. According to LayerX Security, 77% of employees regularly share corporate data with public AI models. It’s no surprise that IBM reports that one in five data breaches is linked to Shadow AI.
If that number seems exaggerated, consider the incident in which the acting director of the U.S. Cybersecurity and Infrastructure Security Agency uploaded confidential government contract documents to the public version of ChatGPT. I’ve personally seen cases where even system passwords ended up publicly exposed.
This creates unprecedented opportunities for cyber fraud: a bad actor can ask a neural network what it knows about a specific company’s infrastructure — and if an employee has already uploaded that data, the model will provide answers.
What if people do follow the rules?
External threats don’t go away in this situation either. For instance, in June 2025, researchers discovered the EchoLeak vulnerability in Microsoft 365 Copilot, which allowed zero-click attacks. An attacker could send an email containing hidden instructions, and Copilot would automatically process it and trigger the transmission of confidential data — without the recipient even needing to open it.
Alongside technical and security risks, there’s a less obvious but equally dangerous threat: automation bias, the tendency to uncritically trust the output of automated systems. We had a case where a client’s technical team, after we presented our proposal, actually requested a week’s pause to “validate it with ChatGPT”.
So, are we doomed?
Mitigating the risks of using external AI tools doesn’t mean abandoning them. There are several practices that can help:
- Set up corporate subscriptions and centralize LLM access. This is the most basic and straightforward step. In paid corporate versions of AI services, data is not used to train models. Trust us — a subscription costs far less than a confidential data leak.
- Establish a regulatory policy. The company should have a set of rules defining what can and cannot be sent to the model and for which tasks it may be used. There should also be a designated owner who updates these policies as models and regulatory requirements evolve. Since models adapt to each individual user, a lack of unified standards can lead to loss of control over output quality.
- Limit AI agent actions. Every LLM request should be handled based on the user’s role, their access rights, and the type of data being requested. To control interactions between models and company systems, you can use Model Context Protocol (MCP) servers — an infrastructure layer that enforces access policies and restrictions regardless of the LLM’s internal logic.
- Monitor where and how data is processed. For some clients, it’s critical that their data never leaves the EU, due to GDPR compliance, the EU AI Act, or internal security policies. In such cases, there are two approaches. The first is to work with a provider that can guarantee data processing and storage on European servers. The second is to use managed solutions like Azure, which allow you to deploy an isolated cloud environment and restrict AI service access to the company’s internal network alone.
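The role-based gating in the third point can be reduced to a simple policy check that runs before any request reaches the model — the kind of rule an MCP-style gateway would enforce. The roles, data classifications, and clearance levels below are illustrative assumptions, not a real policy:

```python
# Hypothetical sketch of a policy layer that checks a user's role and the
# data classification before a request is forwarded to an LLM.
# Roles, classifications, and clearance levels are illustrative.

from dataclasses import dataclass

ROLE_CLEARANCE = {"engineer": 2, "analyst": 1, "contractor": 0}
DATA_SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class LLMRequest:
    user_role: str
    data_class: str
    prompt: str

def is_allowed(req: LLMRequest) -> bool:
    """Allow the request only if the role's clearance covers the data class."""
    clearance = ROLE_CLEARANCE.get(req.user_role, -1)         # unknown role -> deny
    sensitivity = DATA_SENSITIVITY.get(req.data_class, 99)    # unknown class -> deny
    return clearance >= sensitivity

allowed = is_allowed(LLMRequest("engineer", "confidential", "Summarize logs"))
blocked = is_allowed(LLMRequest("contractor", "internal", "Summarize logs"))
```

Note the fail-closed defaults: an unknown role or an unclassified piece of data is denied rather than allowed, which is the safer posture when the alternative is a confidential data leak.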
At this year’s World Economic Forum in Davos, historian and author Yuval Noah Harari said, “A knife is a tool. You can use a knife to cut a salad or to kill someone, but it’s your decision what to do with it. Artificial intelligence is a knife that can decide for itself whether to cut a salad or commit a murder.” And that, I think, captures a risk we haven’t fully grasped yet. So the question is not whether to use AI services, but how to keep humans actively in the loop.
