Close Menu
  • Home
  • AI
  • Big Data
  • Cloud Computing
  • iOS Development
  • IoT
  • IT/ Cybersecurity
  • Tech
    • Nanotechnology
    • Green Technology
    • Apple
    • Software Development
    • Software Engineering

Subscribe to Updates

Get the latest technology news from Bigteetechhub about IT, Cybersecurity and Big Data.

    What's Hot

    Chinese Buses, European Fears, and the Truth About Connected Fleets

    November 11, 2025

    Google’s Plan to Fix a Broken System

    November 11, 2025

    swift – IOS app not opened or prompted to open when Universal Link used

    November 10, 2025
    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram
    Big Tee Tech Hub
    • Home
    • AI
    • Big Data
    • Cloud Computing
    • iOS Development
    • IoT
    • IT/ Cybersecurity
    • Tech
      • Nanotechnology
      • Green Technology
      • Apple
      • Software Development
      • Software Engineering
    Big Tee Tech Hub
    Home»Cloud Computing»Agent Factory: Creating a blueprint for safe and secure AI agents
    Cloud Computing

    Agent Factory: Creating a blueprint for safe and secure AI agents

    big tee tech hubBy big tee tech hubOctober 5, 2025016 Mins Read
    Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email Telegram WhatsApp
    Follow Us
    Google News Flipboard
    Agent Factory: Creating a blueprint for safe and secure AI agents
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link


    Azure AI Foundry brings together security, safety, and governance in a layered process enterprises can follow to build trust in their agents.

    This blog post is the sixth out of a six-part blog series called Agent Factory which shares best practices, design patterns, and tools to help guide you through adopting and building agentic AI.

    Trust as the next frontier

    Trust is rapidly becoming the defining challenge for enterprise AI. If observability is about seeing, then security is about steering. As agents move from clever prototypes to core business systems, enterprises are asking a harder question: how do we keep agents safe, secure, and under control as they scale?

    The answer is not a patchwork of point fixes. It is a blueprint. A layered approach that puts trust first by combining identity, guardrails, evaluations, adversarial testing, data protection, monitoring, and governance.

    Why enterprises need to create their blueprint now

    Across industries, we hear the same concerns:

    • CISOs worry about agent sprawl and unclear ownership.
    • Security teams need guardrails that connect to their existing workflows.
    • Developers want safety built in from day one, not added at the end.

    These pressures are driving the shift left phenomenon. Security, safety, and governance responsibilities are moving earlier into the developer workflow. Teams cannot wait until deployment to secure agents. They need built-in protections, evaluations, and policy integration from the start.

    Data leakage, prompt injection, and regulatory uncertainty remain the top blockers to AI adoption. For enterprises, trust is now a key deciding factor in whether agents move from pilot to production.

    What safe and secure agents look like

    From enterprise adoption, five qualities stand out:

    • Unique identity: Every agent is known and tracked across its lifecycle.
    • Data protection by design: Sensitive information is classified and governed to reduce oversharing.
    • Built-in controls: Harm and risk filters, threat mitigations, and groundedness checks reduce unsafe outcomes.
    • Evaluated against threats: Agents are tested with automated safety evaluations and adversarial prompts before deployment and throughout production.
    • Continuous oversight: Telemetry connects to enterprise security and compliance tools for investigation and response.
    A framework defining the cycle of risk evaluation and management.

    These qualities do not guarantee absolute safety, but they are essential for building trustworthy agents that meet enterprise standards. Baking these into our products reflects Microsoft’s approach to trustworthy AI. Protections are layered across the model, system, policy, and user experience levels, continuously improved as agents evolve.

    How Azure AI Foundry supports this blueprint

    A view of security settings and agent controls inside Azure AI Foundry.

    Azure AI Foundry brings together security, safety, and governance capabilities in a layered process enterprises can follow to build trust in their agents.

    • Entra Agent ID
      Coming soon, every agent created in Foundry will be assigned a unique Entra Agent ID, giving organizations visibility into all active agents across a tenant and helping to reduce shadow agents.
    • Agent controls
      Foundry offers industry first agent controls that are both comprehensive and built in. It is the only AI platform with a cross-prompt injection classifier that scans not just prompt documents but also tool responses, email triggers, and other untrusted sources to flag, block, and neutralize malicious instructions. Foundry also provides controls to prevent misaligned tool calls, high risk actions, and sensitive data loss, along with harm and risk filters, groundedness checks, and protected material detection.
    An example of how Azure AI Foundry flags prompts for security risks.
    • Risk and safety evaluations
      Evaluations provide a feedback loop across the lifecycle. Teams can run harm and risk checks, groundedness scoring, and protected material scans both before deployment and in production. The Azure AI Red Teaming Agent and PyRIT toolkit simulate adversarial prompts at scale to probe behavior, surface vulnerabilities, and strengthen resilience before incidents reach production.
    • Data control with your own resources
      Standard agent setup in Azure AI Foundry Agent Service allows enterprises to bring their own Azure resources. This includes file storage, search, and conversation history storage. With this setup, data processed by Foundry agents remains within the tenant’s boundary under the organization’s own security, compliance, and governance controls.
    • Network isolation
      Foundry Agent Service supports private network isolation with custom virtual networks and subnet delegation. This configuration ensures that agents operate within a tightly scoped network boundary and interact securely with sensitive customer data under enterprise terms.
    • Microsoft Purview
      Microsoft Purview helps extend data security and compliance to AI workloads. Agents in Foundry can honor Purview sensitivity labels and DLP policies, so protections applied to data carry through into agent outputs. Compliance teams can also use Purview Compliance Manager and related tools to assess alignment with frameworks like the EU AI Act and NIST AI RMF, and securely interact with your sensitive customer data under your terms.
    • Microsoft Defender
      Foundry surfaces alerts and recommendations from Microsoft Defender directly in the agent environment, giving developers and administrators visibility into issues such as prompt injection attempts, risky tool calls, or unusual behavior. This same telemetry also streams into Microsoft Defender XDR, where security operations center teams can investigate incidents alongside other enterprise alerts using their established workflows.
    • Governance collaborators
      Foundry connects with governance collaborators such as Credo AI and Saidot. These integrations allow organizations to map evaluation results to frameworks including the EU AI Act and the NIST AI Risk Management Framework, making it easier to demonstrate responsible AI practices and regulatory alignment.

    Blueprint in action

    From enterprise adoption, these practices stand out:

    1. Start with identity. Assign Entra Agent IDs to establish visibility and prevent sprawl.
    2. Built-in controls. Use Prompt Shields, harm and risk filters, groundedness checks, and protected material detection.
    3. Continuously evaluate. Run harm and risk checks, groundedness scoring, protected material scans, and adversarial testing with the Red Teaming Agent and PyRIT before deployment and throughout production.
    4. Protect sensitive data. Apply Purview labels and DLP so protections are honored in agent outputs.
    5. Monitor with enterprise tools. Stream telemetry into Defender XDR and use Foundry observability for oversight.
    6. Connect governance to regulation. Use governance collaborators to map evaluation data to frameworks like the EU AI Act and NIST AI RMF.

    Proof points from our customers

    Enterprises are already creating security blueprints with Azure AI Foundry:

    • EY uses Azure AI Foundry’s leaderboards and evaluations to compare models by quality, cost, and safety, helping scale solutions with greater confidence.
    • Accenture is testing the Microsoft AI Red Teaming Agent to simulate adversarial prompts at scale. This allows their teams to validate not just individual responses, but full multi-agent workflows under attack conditions before going live.

    Learn more

    Did you miss these posts in the Agent Factory series?

    A close up of a group of 3 D dev tools.

    Azure AI Foundry

    Build trustworthy AI agents that automate tasks, enhance user experiences, and deliver results.





    Source link

    agent Agents Blueprint Creating factory Safe Secure
    Follow on Google News Follow on Flipboard
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
    tonirufai
    big tee tech hub
    • Website

    Related Posts

    IBM extends serverless computing to GPU workloads for enterprise AI and simulation

    November 10, 2025

    OpenAI starts creating new benchmarks that more accurately evaluate AI models across different languages and cultures

    November 10, 2025

    What is generative AI? How artificial intelligence creates content

    November 9, 2025
    Add A Comment
    Leave A Reply Cancel Reply

    Editors Picks

    Chinese Buses, European Fears, and the Truth About Connected Fleets

    November 11, 2025

    Google’s Plan to Fix a Broken System

    November 11, 2025

    swift – IOS app not opened or prompted to open when Universal Link used

    November 10, 2025

    A new era and new features in Azure Ultra Disk

    November 10, 2025
    About Us
    About Us

    Welcome To big tee tech hub. Big tee tech hub is a Professional seo tools Platform. Here we will provide you only interesting content, which you will like very much. We’re dedicated to providing you the best of seo tools, with a focus on dependability and tools. We’re working to turn our passion for seo tools into a booming online website. We hope you enjoy our seo tools as much as we enjoy offering them to you.

    Don't Miss!

    Chinese Buses, European Fears, and the Truth About Connected Fleets

    November 11, 2025

    Google’s Plan to Fix a Broken System

    November 11, 2025

    Subscribe to Updates

    Get the latest technology news from Bigteetechhub about IT, Cybersecurity and Big Data.

      • About Us
      • Contact Us
      • Disclaimer
      • Privacy Policy
      • Terms and Conditions
      © 2025 bigteetechhub.All Right Reserved

      Type above and press Enter to search. Press Esc to cancel.