
AI agents able to submit huge numbers of pull requests (PRs) to open-source project maintainers risk creating the conditions for future supply chain attacks targeting important software projects, developer security company Socket has argued.
The warning comes after one of its developers, Nolan Lawson, last week received an email from an AI agent calling itself “Kai Gritun” about the PouchDB JavaScript database he maintains.
“I’m an autonomous AI agent (I can actually write and ship code, not just chat). I have 6+ merged PRs on OpenClaw and am looking to contribute to high-impact projects,” said the email. “Would you be interested in having me tackle some open issues on PouchDB or other projects you maintain? Happy to start small to prove quality.”
A background check revealed that the Kai Gritun profile was created on GitHub on February 1 and, within days, had opened 103 pull requests across 95 repositories, resulting in 23 commits across 22 of those projects.
Of the 95 repositories receiving PRs, many are important to the JavaScript and cloud ecosystems and count as industry “critical infrastructure.” Commits that were merged, or PRs still under consideration, included those for the development tool Nx, the Unicorn static code analysis plugin for ESLint, the JavaScript command line interface Clack, and the Cloudflare/workers-sdk software development kit.
Importantly, Kai Gritun’s GitHub profile doesn’t identify it as an AI agent, something that only became apparent to Lawson because he received the email.
Reputation farming
A deeper dive reveals that Kai Gritun advertises paid services that help users set up, manage, and maintain the OpenClaw personal AI agent platform (formerly known as Moltbot and Clawdbot), which in recent weeks has made headlines, not all of them good.
According to Socket, this suggests the agent is deliberately generating activity in a bid to be viewed as trustworthy, a tactic known as ‘reputation farming.’ It looks busy while building provenance and associations with well-known projects. The fact that Kai Gritun’s activity was non-malicious and passed human review shouldn’t obscure the wider significance of these tactics, Socket said.
“From a purely technical standpoint, open source got improvements,” Socket noted. “But what are we trading for that efficiency? Whether this specific agent has malicious instructions is almost beside the point. The incentives are clear: trust can be accumulated quickly and converted into influence or revenue.”
Normally, building trust is a slow process, which gives some insulation against bad actors. The 2024 XZ-Utils supply chain attack, suspected to be the work of a nation state, is an instructive example: although the rogue developer in that incident, Jia Tan, was eventually able to introduce a backdoor into the utility, it took years of reputation-building to get there.
In Socket’s view, Kai Gritun’s success suggests that the same kind of reputation can now be built in far less time, in a way that could accelerate supply chain attacks using the same agentic AI technology. This isn’t helped by the fact that maintainers have no easy way to distinguish human reputation from artificially generated provenance built by AI agents. They might also struggle to process the potentially large numbers of PRs such agents create.
“The XZ-Utils backdoor was discovered by accident. The next supply chain attack might not leave such obvious traces,” said Socket.
“The important shift is that software contribution itself is becoming programmable,” commented Eugene Neelou, head of AI security for API security company Wallarm, who also leads the industry Agentic AI Runtime Security and Self‑Defense (A2AS) project.
“Once contribution and reputation building can be automated, the attack surface moves from the code to the governance process around it. Projects that rely on informal trust and maintainer intuition will struggle, while those with strong, enforceable AI governance and controls will remain resilient,” he pointed out.
A better approach is to adapt to this new reality. “The long-term solution is not banning AI contributors, but introducing machine-verifiable governance around software change, including provenance, policy enforcement, and auditable contributions,” he said. “AI trust needs to be anchored in verifiable controls, not assumptions about contributor intent.”
