    Microsoft and NVIDIA accelerate AI development and performance  

    By big tee tech hub | March 22, 2025 | 9 min read



    Together, Microsoft and NVIDIA are accelerating some of the most groundbreaking innovations in AI. This long-standing collaboration has been at the core of the AI revolution over the past few years, from bringing industry-leading supercomputing performance in the cloud to supporting breakthrough frontier models and solutions like ChatGPT in Microsoft Azure OpenAI Service and Microsoft Copilot.

    Today, Microsoft and NVIDIA are making several new announcements that further enhance this full-stack collaboration and help shape the future of AI. These include integrating the newest NVIDIA Blackwell platform with Azure AI services infrastructure, incorporating NVIDIA NIM microservices into Azure AI Foundry, and empowering developers, startups, and organizations of all sizes, such as the NBA, BMW, Dentsu, Harvey, and OriGen, to accelerate their innovations and solve the most challenging problems across domains.

    Empowering all developers and innovators with agentic AI 

    Microsoft and NVIDIA collaborate deeply across the entire technology stack, and with the rise of agentic AI, several new offerings are now available in Azure AI Foundry. First, Azure AI Foundry now offers NVIDIA NIM microservices. NIM provides optimized containers for more than two dozen popular foundation models, allowing developers to deploy generative AI applications and agents quickly. These integrations accelerate inferencing workloads for models available on Azure, delivering significant performance improvements and supporting the growing use of AI agents. Key features include optimized model throughput on NVIDIA accelerated computing platforms, prebuilt microservices deployable anywhere, and enhanced accuracy for specific use cases. In addition, Microsoft will soon integrate the NVIDIA Llama Nemotron Reason open reasoning model, a powerful AI model family designed for advanced reasoning.
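    NIM microservices typically expose an OpenAI-compatible chat-completions API, so calling a NIM deployment from Azure AI Foundry looks much like calling any OpenAI-style endpoint. The sketch below builds such a request; the endpoint URL, API key placeholder, and model name are illustrative assumptions, not values from this announcement.

```python
import json

# Hypothetical endpoint and key: substitute the values from your own
# Azure AI Foundry deployment of a NIM microservice.
NIM_ENDPOINT = "https://<your-endpoint>.inference.ai.azure.com/v1/chat/completions"
API_KEY = "<your-api-key>"

def build_chat_request(model: str, user_prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload, the request shape
    NIM microservices commonly expose."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama-3.1-8b-instruct",
                             "Summarize NVLink in one sentence.")
headers = {"Authorization": f"Bearer {API_KEY}",
           "Content-Type": "application/json"}
print(json.dumps(payload, indent=2))
# To actually send it: requests.post(NIM_ENDPOINT, headers=headers, json=payload)
```

    Because the request shape is OpenAI-compatible, existing client code and SDKs that speak that protocol can usually be pointed at a NIM deployment with only a URL and credential change.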

    Epic, a leading electronic health record company, is planning to take advantage of the latest integration of NVIDIA NIM on Azure AI Foundry, improving AI applications to deliver better healthcare and patient results.

    The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way for Epic to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation. In collaboration with UW Health and UC San Diego Health, we’re also researching methods to evaluate clinical summaries with these advanced models. Together, we’re using the latest AI technology in ways that truly improve the lives of clinicians and patients.

    Drew McCombs, VP Cloud and Analytics, Epic

    Further, Microsoft is working closely with NVIDIA to optimize inference performance for popular open-source language models and make them available on Azure AI Foundry, so customers can take full advantage of the performance and efficiency of foundation models. The newest addition to this collaboration is performance optimization for Meta Llama models using TensorRT-LLM. Developers can now use the optimized Llama models from the model catalog in Azure AI Foundry and see improved throughput without additional steps.

    “At Synopsys, we rely on cutting-edge AI models to drive innovation, and the optimized Meta Llama models on Azure AI Foundry have delivered exceptional performance. We’ve seen substantial improvements in both throughput and latency, allowing us to accelerate our workloads while optimizing costs. These advancements make Azure AI Foundry an ideal platform for scaling AI applications efficiently.”

    Arun Venkatachar, VP Engineering, Synopsys Central Engineering

    At the same time, Microsoft is expanding the model catalog in Azure AI Foundry even further with Mistral Small 3.1, coming soon: an enhanced version of Mistral Small 3 featuring multimodal capabilities and an extended context length of up to 128k tokens.

    Microsoft is also announcing the general availability of Azure Container Apps serverless graphics processing units (GPUs) with support for NVIDIA NIM. Serverless GPUs let enterprises, startups, and software development companies run AI workloads on demand with automatic scaling, optimized cold start, and per-second billing that scales down to zero when not in use, reducing operational overhead. With support for NVIDIA NIM, development teams can easily build and deploy generative AI applications alongside existing applications within the same networking, security, and isolation boundary.
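    The economics of per-second billing with scale-to-zero can be made concrete with a back-of-the-envelope model: only seconds of actual GPU use are charged, and idle time costs nothing. The rate below is a made-up placeholder, not an Azure price.

```python
# Back-of-the-envelope cost model for serverless GPUs, illustrating
# per-second billing with scale-to-zero. The rate is a hypothetical
# placeholder, not an actual Azure price.
RATE_PER_GPU_SECOND = 0.0008  # hypothetical $/GPU-second

def billed_cost(active_seconds: float, idle_seconds: float, gpus: int = 1) -> float:
    """With scale-to-zero, idle time accrues no GPU charge;
    only active seconds are billed, per second."""
    return round(active_seconds * gpus * RATE_PER_GPU_SECOND, 4)

# A bursty workload: 15 minutes of inference spread over a 24-hour day.
active = 15 * 60           # 900 seconds of actual GPU use
idle = 24 * 3600 - active  # the rest of the day, scaled to zero
print(billed_cost(active, idle))  # charges accrue only for the 900 active seconds
```

    For always-on workloads the comparison flips, which is why serverless GPUs suit bursty or unpredictable traffic rather than sustained training runs.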

    Expanding Azure AI Infrastructure with NVIDIA 

    The evolution of reasoning models and agentic AI systems is transforming the artificial intelligence landscape, and robust, purpose-built infrastructure is key to their success. Today, Microsoft is announcing the general availability of the Azure ND GB200 V6 virtual machine (VM) series, accelerated by NVIDIA GB200 NVL72 and NVIDIA Quantum InfiniBand networking. This addition to the Azure AI infrastructure portfolio, alongside existing virtual machines that use NVIDIA H200 and NVIDIA H100 GPUs, highlights Microsoft's commitment to optimizing infrastructure for the next wave of complex AI tasks like planning, reasoning, and adapting in real time.

    As we push the boundaries of AI, our partnership with Azure and the introduction of the NVIDIA Blackwell platform represent a significant leap forward. The NVIDIA GB200 NVL72, with its unparalleled performance and connectivity, tackles the most complex AI workloads, enabling businesses to innovate faster and more securely. By integrating this technology with Azure’s secure infrastructure, we are unlocking the potential of reasoning AI.

    Ian Buck, Vice President of Hyperscale and HPC, NVIDIA

    The combination of high-performance NVIDIA GPUs, low-latency NVIDIA InfiniBand networking, and Azure's scalable architectures is essential to handle the massive data throughput and intensive processing these workloads demand. Furthermore, comprehensive integration of security, governance, and monitoring tools from Azure supports powerful, trustworthy AI applications that comply with regulatory standards.

    Built with Microsoft’s custom infrastructure system and the NVIDIA Blackwell platform, each blade features two NVIDIA GB200 Grace™ Blackwell Superchips and NVIDIA NVLink™ Switch scale-up networking, which supports up to 72 NVIDIA Blackwell GPUs in a single NVLink domain. Each rack also incorporates the latest NVIDIA Quantum InfiniBand, enabling scale-out to tens of thousands of Blackwell GPUs on Azure and delivering twice the AI supercomputing performance of previous GPU generations, based on GEMM benchmark analysis.
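    The scale-up arithmetic above can be sketched in a few lines. The two-GPUs-per-superchip figure matches NVIDIA's GB200 Grace Blackwell Superchip design (one Grace CPU paired with two Blackwell GPUs); the blades-per-domain and scale-out numbers below are derived from the figures in the text and should be read as illustrative.

```python
# Sketch of the GB200 NVL72 scale-up arithmetic described above.
GPUS_PER_SUPERCHIP = 2    # one Grace CPU + two Blackwell GPUs per GB200 superchip
SUPERCHIPS_PER_BLADE = 2  # per the blade description above
NVLINK_DOMAIN_GPUS = 72   # one NVLink domain, per the text

gpus_per_blade = GPUS_PER_SUPERCHIP * SUPERCHIPS_PER_BLADE
blades_per_domain = NVLINK_DOMAIN_GPUS // gpus_per_blade
print(gpus_per_blade, blades_per_domain)  # 4 GPUs per blade, 18 blades per domain

# Scale-out: InfiniBand stitches many NVLink domains together.
def domains_needed(total_gpus: int) -> int:
    """Whole NVLink domains required to reach a target GPU count."""
    return -(-total_gpus // NVLINK_DOMAIN_GPUS)  # ceiling division

print(domains_needed(20_000))  # clusters at this scale span hundreds of domains
```

    The split matters architecturally: traffic inside a 72-GPU NVLink domain uses fast scale-up links, while traffic between domains crosses the InfiniBand scale-out fabric.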

    As Microsoft’s work with NVIDIA continues to grow and shape the future of AI, the company also looks forward to bringing the performance of NVIDIA Blackwell Ultra GPUs and the NVIDIA RTX PRO 6000 Blackwell Server Edition to Azure. Microsoft is set to launch the NVIDIA Blackwell Ultra GPU-based VMs later in 2025. These VMs promise to deliver exceptional performance and efficiency for the next wave of agentic and generative AI workloads.

    Azure AI infrastructure, advanced by NVIDIA accelerated computing, consistently delivers high performance at scale for AI workloads, as evidenced by leading industry benchmarks such as Top500 supercomputing and MLPerf results.1,2 Recently, Azure Virtual Machines using NVIDIA H200 GPUs achieved exceptional performance in the MLPerf Training v4.1 benchmarks across a range of AI tasks. Azure demonstrated leading cloud performance by scaling a cluster to 512 H200 GPUs, achieving a 28% speedup over H100 GPUs in the latest MLPerf training runs from MLCommons.3 This highlights Azure’s ability to efficiently scale large GPU clusters. Customers are using this performance on Azure to train advanced models and run generative AI inferencing efficiently.

    Empowering businesses with Azure AI Infrastructure

    Meter is training a large foundation model on Azure AI infrastructure to automate networking end-to-end. The performance and power of Azure will significantly scale Meter’s AI training and inference, aiding the development of models with billions of parameters that span text-based configurations, time-series telemetry, and structured networking data. With support from Microsoft, Meter’s models aim to improve how networks are designed, configured, and managed, addressing a significant barrier to progress in network automation.

    Black Forest Labs, a generative AI start-up with the mission to develop and advance state-of-the-art deep learning models for media, has extended its partnership with Azure. Azure AI services infrastructure is already being used to deploy its flagship FLUX models, the world’s most popular text-to-image media models, serving millions of high-quality images every day with unprecedented speed and creative control. Building on this foundation, Black Forest Labs will adopt the new ND GB200 v6 VMs to accelerate the development and deployment of its next-generation AI models, pushing the boundaries of innovation in generative AI for media. Black Forest Labs has been a Microsoft partner since its inception, working together to secure the most advanced, efficient, and scalable infrastructure for training and delivering its frontier models.

    We are expanding our partnership with Microsoft Azure to combine BFL’s unique research expertise in generative AI with Azure’s powerful infrastructure. This collaboration enables us to build and deliver the best possible image and video models faster and at greater scale, providing our customers with state-of-the-art visual AI capabilities for media production, advertising, product design, content creation and beyond.

    Robin Rombach, CEO, Black Forest Labs

    Creating new possibilities for innovators across industries

    Microsoft and NVIDIA have launched preconfigured NVIDIA Omniverse and NVIDIA Isaac Sim virtual desktop workstations, along with Omniverse Kit App Streaming, in the Azure Marketplace. Powered by Azure Virtual Machines using NVIDIA GPUs, these offerings give developers everything they need to start developing and self-deploying digital twin and robotics simulation applications and services for the era of physical AI. Several Microsoft and NVIDIA ecosystem partners, including Bright Machines, Kinetic Vision, Sight Machine, and SoftServe, are adopting these capabilities to build solutions that will enable the next wave of digitalization for the world’s manufacturers.

    There are many innovative solutions built by AI startups on Azure. Opaque Systems helps customers safeguard their data using confidential computing; Faros AI provides software engineering insights, allowing customers to optimize resources and enhance decision-making, including measuring the ROI of their AI coding assistants; Bria AI provides a visual generative AI platform that allows developers to use AI image generation responsibly, providing cutting-edge models trained exclusively on fully-licensed datasets; Pangaea Data is delivering better patient outcomes by enhancing screening and treatment at the point of care; and Basecamp Research is driving biodiversity discovery with AI and extensive genomic datasets. 

    Experience the latest innovations from Azure and NVIDIA 

    Today’s announcements at the NVIDIA GTC AI Conference underscore Azure’s commitment to pushing the boundaries of AI innovations. With state-of-the-art products, deep collaboration, and seamless integrations, we continue to deliver the technology that supports and empowers developers and customers in designing, customizing, and deploying their AI solutions efficiently. Learn more at this year’s event and explore the possibilities that NVIDIA and Azure hold for the future.

    • Visit us at Booth 514 at NVIDIA GTC.

    Sources:

    1. November 2024 | TOP500

    2. Benchmark Work | Benchmarks MLCommons

    3. Leading AI Scalability Benchmarks with Microsoft Azure – Signal65
