
    AI-assisted Development Multiplies Human Error: What’s Your AI Governance and Risk Management Strategy?

By big tee tech hub | April 11, 2026 | 5 Mins Read

Agentic artificial intelligence is becoming ingrained in enterprise operations at lightning speed. Promising unprecedented productivity, and pushed by CEOs and CIOs who see AI as the key to staying competitive, AI agents have become "co-pilots" for practically every developer. As a result, AI-generated code is turning up everywhere.

But the hidden risks of the current use of agentic AI are piling up almost as quickly as the code. AI agents do an excellent job of predicting the next line of code, but they don't grasp the security implications of the code being created. In many cases, acting as a trusted co-pilot, they amplify human error by suggesting insecure patterns that developers working at breakneck speed accept without a second thought. The ability of AI agents to work autonomously only accelerates the problem.

The shift is moving even faster in operational technology such as home thermostats, cameras, and travel-booking assistants, Morey Haber, Chief Security Advisor at BeyondTrust, said recently. "In the next year, nearly every technology we operate will be connected to agentic AI," he said.

According to a recent report from Gartner, the rampant use of shadow AI and rogue automation is further fueling the proliferation of AI vulnerabilities. Gartner notes that 32% of IT workers using generative AI tools at work say they keep them hidden from cybersecurity teams. Combined with low-code/no-code platforms and vibe coding practices, AI copilots are greatly expanding the enterprise attack surface.

    AI Vulnerabilities Proliferate

As if high-velocity development practices weren't enough, agentic AI use is also being pushed from the top, where executives seem to have strong faith in what AI agents can do: Gartner finds that 79% of IT leaders expect significant benefits. Teams readily convert custom-built AI chatbots into AI agents by linking them with APIs and tools. This increases risk, because only 14% of IT leaders say they are confident that their data and content are ready for human and AI interactions. CISOs are often powerless to deter these initiatives.

    Another survey by PagerDuty found that 81% of execs are willing to let autonomous systems take action during a security breach, system outage, or other crises. That finding underscores a disconnect between the hopes for agentic AI and the reality: 96% of execs say they’re confident they can detect and mitigate AI failures before they impact operations, even though 84% have already experienced AI-related outages. Meanwhile, research by Capgemini found that only 27% of organizations now say they have trust in fully autonomous agents, down from 43% a year ago. 

    The reality is that AI doesn’t create new vulnerabilities; it replicates the bad habits found in the vast datasets it was trained on. Essentially, it’s amplifying human error. If organizations don’t change their approach to AI development, we risk flooding our repositories with AI-generated code that is fundamentally insecure and continues to feed the expansion of the enterprise attack surface.

    How CISOs Can Stem the Tide

    CISOs aren’t completely helpless in bringing autonomous AI use under control. But they must act quickly to implement a layered oversight program that reduces vulnerabilities in line with their risk tolerances.

Prioritize Developer Risk Management: AI agents may be introducing risks into the environment, but risk begins with human developers. A comprehensive developer risk management program that addresses relevant learning pathways, AI guardrails, and tech-stack observability and traceability is necessary to prepare developers for an expert security review of their work. Developer education and upskilling in security best practices, including the use of benchmarks to track progress in acquiring new skills, will be critical to ensuring the safety of both developer- and AI-generated code. It is also a core element of enabling developers to ultimately reap the benefits of AI coding tools and agents.

Inventory Shadow AI: Gaining control over AI agents begins with knowing what you have and where it is. Deep observability into AI-assisted development is essential, enabling you to identify which developers use which large language models (LLMs) and on which codebases.

    Gaining deep visibility into AI agents also allows organizations to prioritize the associated risks, depending on the agent type (embedded, standalone) and the risk level of the projects they are working on. A comprehensive inventory is also important for implementing effective access controls, which are necessary for defense. Gartner predicts that by 2029, more than half of successful cybersecurity attacks against AI agents will exploit access control issues through direct or indirect prompt injection. 
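In practice, such an inventory can start as a simple structured record of which developers use which assistants on which codebases, ranked by project risk. The sketch below is a minimal illustration of that idea; the record fields, sample data, and risk tiers are hypothetical, standing in for whatever telemetry (an LLM proxy log, IDE plugin data) an organization actually collects.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentUsage:
    developer: str   # who invoked the assistant
    model: str       # which LLM was used
    codebase: str    # repository the suggestions landed in
    agent_type: str  # "embedded" or "standalone"

# Hypothetical records, e.g. aggregated from an LLM proxy log.
records = [
    AgentUsage("alice", "gpt-4o", "payments-api", "embedded"),
    AgentUsage("bob", "local-llama", "payments-api", "standalone"),
    AgentUsage("carol", "gpt-4o", "marketing-site", "embedded"),
]

# Codebases treated as high risk, for illustration only.
HIGH_RISK_CODEBASES = {"payments-api"}

def prioritize(usages):
    """Rank (model, codebase) pairs, surfacing high-risk codebases first."""
    counts = Counter((u.model, u.codebase) for u in usages)
    return sorted(
        counts.items(),
        key=lambda kv: (kv[0][1] in HIGH_RISK_CODEBASES, kv[1]),
        reverse=True,
    )

for (model, codebase), n in prioritize(records):
    flag = "HIGH RISK" if codebase in HIGH_RISK_CODEBASES else "low risk"
    print(f"{model} on {codebase}: {n} use(s) [{flag}]")
```

Even this crude ranking gives a CISO a starting point for access-control reviews: the high-risk codebases surface first, regardless of how many low-risk uses exist elsewhere.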

Focus on Governance: By automating policy enforcement, you can ensure that AI-assisted developers meet secure development standards before their work is accepted into critical repositories.
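Automated enforcement of this kind often takes the form of a pre-merge gate. A minimal sketch follows, assuming a hypothetical deny-list check run against a proposed change before it reaches a protected repository; the patterns and the sample diff are illustrative, not a real security policy.

```python
import re

# Illustrative deny-list of insecure patterns; a real policy would be
# maintained by the security team and far more comprehensive.
INSECURE_PATTERNS = {
    "hardcoded secret": re.compile(
        r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
    "shell injection risk": re.compile(
        r"subprocess\.\w+\([^)]*shell\s*=\s*True"
    ),
}

def policy_violations(diff_text: str) -> list:
    """Return (line number, rule name) pairs for each violation found."""
    hits = []
    for name, pattern in INSECURE_PATTERNS.items():
        for lineno, line in enumerate(diff_text.splitlines(), 1):
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

def gate(diff_text: str) -> bool:
    """True if the change may be merged; False blocks it for review."""
    return not policy_violations(diff_text)

proposed = 'api_key = "sk-test-123"\nprint("hello")\n'
assert gate(proposed) is False  # blocked: hardcoded secret
```

Wired into CI, a gate like this rejects the change automatically and routes it to a human security review, which is the point of the recommendation: the policy runs before the AI-generated code lands, not after.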

    A Secure Foundation Is the Key to Success

    AI-assisted development is here to stay because the benefits to productivity are too great to ignore. But the unfettered use of AI agents has multiplied vulnerabilities in code, leading to much greater risk that many enterprise security programs are not yet adequately prepared to defend against. 

    A thorough, modernized program based on visibility, observability, governance and developer upskilling can reverse the trend and move organizations toward the successful use of automated AI-assisted development. Gartner estimates that CIOs and CISOs who work with business leaders in implementing structured security programs will see the best results. Those partnerships could, according to Gartner, lead to a 50% reduction in critical cybersecurity incidents by 2028, even as the number of high-level AI initiatives grows by 20% over the same period.


