
The following article originally appeared on Q McCallum’s blog and is being republished here with the author’s permission.
Generative AI agents and rogue traders pose similar insider threats to their employers.
Specifically, we can expect companies to deploy agentic AI with broad reach and insufficient oversight. That creates the conditions for a particular flavor of long-running problem, which in turn exposes both the companies in question and anyone doing business with them to a novel risk. The bot and the rogue trader alike can inflict sizable, sometimes existential, damage on the firms that employ them.
The key difference is the scope: Rogue traders operate in investment banks, while agentic AI will be deployed to a wider array of companies and industry verticals. Agentic AI may therefore create a greater number of problems than rogue traders and put a greater amount of capital at risk.
I’m naming this risk exposure ROT—Rogue Operator Threat—and this document is a brief explainer on what it is and how to address it.
(I almost called it RAT, with the A for “agentic,” but then realized that it would apply to any kind of automated system. So I broadened the scope to “operator.”)
To set the stage, let’s take a trip to the trading floor:
Understanding the rogue trader
Rogue trader scandals follow the same storyline:
- A trader accrues losses due to bad trades.
- They hide those losses while placing new trades in an attempt to recover.
- The new trades also lose money, digging a deeper hole.
- Repeat.
This cycle continues until they’re caught, at which point the bank is sitting on a large loss (sometimes into the billions of dollars) and the trader faces legal repercussions.
The story of Barings Bank offers a concrete example. Trader Nick Leeson had been logging fraudulent trades over a stretch of three years in an attempt to cover his mounting losses. This only came to light when the Kobe earthquake shifted markets against his most recent positions and the losses became impossible to hide. Leeson’s £800M ($1.3B) hole drove Barings into bankruptcy just three days later.
This is when you’ll ask: How could a professional trading operation let so many bad trades slip through undetected? How could a trader falsify records? Aren’t trading floors high-tech operations, full of electronic audit trails?
And the answer is: It’s complicated.
Trading operations do keep records, yes. But no system is perfect. Each time a rogue trading scandal comes to light, it turns out that there were loopholes in risk controls. A sufficiently motivated trader—especially one desperate to hide their mistakes—found and exploited these loopholes, continuing their losing streak in plain sight until they could bring in real money to backfill the fake records.
That “until” never happened, though. Which is why their employers then faced financial, reputational, and sometimes legal troubles.
The AI agent’s ROT threat
Similar to a trader, an AI agent acts on behalf of its parent business and is given room to operate independently so it can accomplish its tasks.
The risk is that, in the rush to deploy agentic AI, companies will grant the bots more leeway than they need. We’ve already seen cases in which bots have deleted emails and wiped a production database. And there are no doubt other stories that haven’t made it into the news.
Those issues were at least caught in real time. Companies facing ROT are also exposed to longer-running problems, in which the bot accrues losses or inflicts greater damage over an extended period. In those cases the problems will only be uncovered by accident, or when it’s too late.
Consider, for example, an agent that creates false data records to reflect (nonexistent) sales orders. It’s possible for this to run until some external event, such as investor due diligence or a budget review, forces someone to double-check those records against reality.
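To make that concrete, here’s a minimal sketch of the kind of periodic reconciliation that would surface fabricated records sooner. Everything in it is hypothetical: the function names and the toy data stand in for whatever systems of record a given company actually uses (say, comparing the agent’s logged orders against a payment processor’s settled orders).

```python
# Hypothetical sketch: periodically reconcile agent-created sales orders
# against an independent source of truth (here, a payment processor's
# settled orders). All names and data below are illustrative stand-ins.

def reconcile_orders(fetch_agent_logged_orders, fetch_settled_order_ids):
    """Return IDs of orders the agent logged that never settled anywhere else."""
    logged = {order["id"] for order in fetch_agent_logged_orders()}
    settled = set(fetch_settled_order_ids())
    return sorted(logged - settled)  # records with no external counterpart

if __name__ == "__main__":
    # Toy data: the agent logged three orders, but only two of them ever
    # appear in the payment processor's records.
    agent_orders = [{"id": "A-100"}, {"id": "A-101"}, {"id": "A-102"}]
    settled_ids = ["A-100", "A-102"]

    phantom = reconcile_orders(lambda: agent_orders, lambda: settled_ids)
    if phantom:
        print(f"Escalate to a human reviewer: {phantom}")  # ['A-101']
```

The point isn’t the code itself; it’s that the check runs on a schedule rather than waiting for investor due diligence to force the question.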
Avoiding ROT: Mitigating the threat
How can you narrow your downside risk exposure to ROT? Preventative measures are key. Strong risk controls, narrow scope of authority, and monitoring can catch rogue operator problems long before they’ve metastasized into an existential threat.
In light of rogue trader scandals, trading shops have been known to tighten risk controls and separate duties to create a system of checks and balances. (This inhibits traders from logging their own fake trades.) Companies also require traders to take time off, since fraudulent activity may surface when the perpetrator isn’t around every day to keep the scheme running.
Adapting these ideas to agentic AI, a company could monitor and limit the scope of the bot’s activity (say, requiring human approval to place more than 10 orders an hour). It could also periodically purge the agent’s memory so it doesn’t accumulate too many evolved behaviors, or swap in a completely new bot to pick up where the previous one left off. And per my usual refrain of “never let the bots run unattended,” this company could employ people to cross-check everything the bot does. Trust, but verify.
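As one illustration of that first idea, here’s a minimal sketch of a rate-limited approval gate. The threshold, the `submit` call, and the `escalate_to_human` hook are all assumptions for the sake of the example; the point is only that past a set volume, the bot loses its autonomy.

```python
import time
from collections import deque

# Hypothetical guardrail sketch: the agent may place orders on its own
# until it exceeds a set hourly rate, after which each further order
# requires human sign-off. The threshold and hooks are illustrative only.

class OrderGate:
    def __init__(self, submit, escalate_to_human, max_per_hour=10):
        self.submit = submit                        # real order-placement call
        self.escalate_to_human = escalate_to_human  # human approval hook
        self.max_per_hour = max_per_hour
        self.recent = deque()                       # timestamps of recent orders

    def place_order(self, order):
        now = time.time()
        # Slide the one-hour window forward, dropping stale timestamps.
        while self.recent and now - self.recent[0] > 3600:
            self.recent.popleft()

        # Past the limit, a person must approve before anything executes.
        if len(self.recent) >= self.max_per_hour:
            if not self.escalate_to_human(order):
                return False  # rejected or unanswered; do not execute

        self.recent.append(now)
        self.submit(order)
        return True

if __name__ == "__main__":
    gate = OrderGate(
        submit=lambda order: print(f"placed {order}"),
        escalate_to_human=lambda order: False,  # toy reviewer: always says no
        max_per_hour=3,
    )
    for i in range(5):
        gate.place_order(f"order-{i}")  # the last two orders are blocked
```

A sliding window is a deliberate choice here: a simple per-hour counter resets at the top of the hour, which a misbehaving bot could exploit by bursting orders on either side of the boundary.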
This will not prevent the AI agent from making mistakes. But guardrails and sufficiently frequent checks should limit the scope of the bot’s damage. As with the rogue trader, the ROT problem isn’t about a single error; it’s about letting the errors grow out of control, undetected.
