    Software Development

Shadow AI: How to deal with unauthorized models and uncontrolled agents

By big tee tech hub | March 28, 2026 | 5 Mins Read

Shadow AI is considered the next iteration of Shadow IT, with one big difference: in Shadow IT, a developer might use a self-contained, unauthorized tool in their work, but the tool itself does not create risk.

    Shadow AI is particularly troublesome because an unauthorized model can gain access to databases it shouldn’t have and lack the system and organizational context to make correct decisions. Further, Shadow AI almost always involves someone in the organization taking company intellectual property and pasting it into a public tool, leaving the destination and subsequent processing unknown.

Part of the problem, according to Brian Nathanson, head of product management for Clarity at Broadcom, is an organization’s approach to governance and security, precisely because AI is advancing so quickly and continually changing. Engineers feel that governance is too burdensome to get their work done, and that their organizations are too slow to bring different models on board. “Individuals are seeing the productivity benefit of AI far more than the enterprise does, at least right now, but enterprises, because of the concerns over liability and their IP protection, have basically tried to clamp down,” Nathanson said. “They’ve said, no you can’t use AI tools, or you can only use these authorized AI tools.”

Nathanson said that puts developers in a bind: if the company only authorizes, say, Gemini, and a developer knows that Claude might give better responses for a certain activity, the developer thinks, “I’ll just copy and paste into my private, personal account of Claude. I’m just going to use it, because I can’t wait for the governance process to authorize the AI tools.”

    Ted Way, vice president and chief product officer at SAP, said employees “just want to get stuff done,” and most of the time will ask for forgiveness later. But that’s not worth the risk of sensitive data being leaked, “and not only is it being leaked, but it’s stored and processed outside your company. It might be used to train a model. And then you have your compliance risk,” he said. “And, in the journey to get stuff done, are you actually not even doing it,” because you might not be getting the accurate results you want.

    What organizations can do

    Getting the shadow AI issue under control involves organizational governance, policy and culture.

Some companies, instead of restricting AI, have created orchestration layers that let engineers use many different open-source and proprietary models in a way the orchestration layer controls. This reduces the need for engineers to go outside the company’s policies to get their work done with the model of their choice, and thus reduces the risk that a company’s proprietary data and conversations leak out into the public.
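The orchestration idea can be sketched as a thin gateway that sits between engineers and external model providers: it only routes to approved models and screens prompts before anything leaves the company. This is a minimal illustration, not any vendor’s actual product; the model names, the `APPROVED_MODELS` registry, and the regex patterns standing in for a real data-loss-prevention scan are all hypothetical.

```python
import re

# Hypothetical registry of models the organization has approved.
APPROVED_MODELS = {"gemini-pro", "claude-sonnet", "llama-3-70b"}

# Toy patterns standing in for a real DLP (data-loss-prevention) scan.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-like string
]

def route_request(model: str, prompt: str) -> dict:
    """Gate a model call behind policy checks before it leaves the company.

    Returns a routing decision rather than calling any real provider.
    """
    if model not in APPROVED_MODELS:
        return {"allowed": False, "reason": f"model '{model}' not approved"}
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            return {"allowed": False, "reason": "prompt contains sensitive data"}
    return {"allowed": True, "reason": "ok", "model": model}
```

Because every request flows through one choke point, the engineer still gets model choice while the organization keeps an audit trail and a single place to enforce policy.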

    From a policy perspective, Way said that it starts with a clear view of policy on generative AI. He explained that modern technology forces a trade-off: organizations can only achieve two out of three desired outcomes—safe, capable, and autonomous.

    • Safe and Capable: This state requires extensive “human babysitting” and is too slow, as every request is “gated on humans.”
    • Capable and Autonomous: This is the opposite extreme: a lack of oversight where the LLM decides what is safe. Way cited the example of an LLM deciding to decrypt repository answers to achieve a better score on an evaluation.
    • Safe and Autonomous: This state is too restricted, meaning the system will not have access to the tools it needs to be capable.

    Addressing Shadow AI requires moving past ineffective governance models. Michael Burch, director of application security at Security Journey, suggests that while an AI team or governance committee should exist, governance is not just a “10-page policy report that nobody’s gonna read.” Instead, it must be about “everyday, day-to-day practical governance—taking that 10-page report and making it actionable for individuals.”

    Governance, he said, “isn’t just about the policy publications and writing all the rules and buying the right tools. It’s, is all the work we put in, is it actionable? Did it actually have an impact? And did we give it to people in a way that let them actually do it day-to-day and improve the way they’re thinking and treating security?”  Any governance effort must be “grounded in real truth of day-to-day workflows,” he said, to ensure people will actually adopt it. The ultimate goal is a practical system that drives adoption and gets people to hold themselves accountable for how they use AI. Burch noted that governance fails when policies alone are relied upon to create good decisions. 

    A vital step in this practical approach is building a security culture. This involves teams having a shared vocabulary, workflow guidance, and examples. If everyone understands how AI integrates into their workflows and speaks the same language, the potential for failure is significantly reduced. 

    “If we’re all talking the same language, if we all understand how AI integrates in our different workflows, and we have examples to work from so we understand how to… the lift to get there is a lot smaller for us, we have a lot less chance for failure, because everybody’s kind of on that same page,” Burch explained.

     


