    SecureBERT 2.0: Cisco’s next-gen AI model powering cybersecurity applications

    By big tee tech hub · November 1, 2025 · 6 Mins Read


    Today, we are excited to share that the SecureBERT 2.0 model is available on HuggingFace and GitHub, along with an accompanying research paper. This release marks a significant milestone, building on the widely adopted SecureBERT model to unlock even more advanced cybersecurity applications, with strong performance across real-world tasks:
    [Figure: SecureBERT 2.0 performance across real-world tasks]

    In 2022, the first SecureBERT model was introduced by Ehsan Aghaei and a team of researchers from Carnegie Mellon University and UNC Charlotte as a pioneering language model designed specifically for the cybersecurity domain. It bridged the gap between general-purpose NLP models like BERT and the specialized needs of cybersecurity professionals—enabling AI systems to understand the technical language of threats, vulnerabilities, and exploits.

    By December 2023, SecureBERT ranked among the top 100 most downloaded models on HuggingFace out of the approximately 500,000 models then available on the repository. It gained significant recognition across the cybersecurity community and remains in active use by major organizations, including the MITRE Threat Report ATT&CK Mapper (TRAM) and CyberPeace Institute.

    In this blog, we’ll reflect on the impact of the original SecureBERT model, detail the significant advancements made in SecureBERT 2.0, and explore some real-world applications of this powerful new model.

    The impact of the original SecureBERT model

    Security analysts at enterprises and agencies devote a tremendous amount of time to parsing through various security signals to identify, analyze, categorize, and report on potential threats. It’s an important process that, when done entirely manually, is time-consuming, expensive, and prone to human error.

    SecureBERT gave researchers and analysts a tool that could process security reports, malware analyses, and vulnerability write-ups with contextual accuracy never before possible. Even today, it serves as an invaluable tool for cybersecurity experts at some of the world’s top agencies, universities, and labs.

    However, SecureBERT had several limitations. It struggled to handle long-context inputs such as detailed threat intelligence reports and mixed-format data combining text and code. Since SecureBERT was trained on RoBERTa-base, a classic BERT encoder with a 512-token context limit and no FlashAttention, it was slower and more memory-intensive during training and inference. In contrast, SecureBERT 2.0, built on ModernBERT, benefits from an optimized architecture with extended context, faster throughput, lower latency, and reduced memory usage.

    With SecureBERT 2.0, we addressed these gaps in training data and advanced the architecture to deliver a model that is more capable and contextually aware than ever. While the original SecureBERT was a standalone base model, the 2.0 release includes several fine-tuned variants specialized for real-world cybersecurity applications.

    Introducing SecureBERT 2.0

    SecureBERT 2.0 brings greater contextual relevance and domain expertise for cybersecurity, understanding code sources and programming logic in a way its predecessor simply could not. The key here is a training dataset that is larger, more diverse, and strategically curated to help the model better capture subtle security nuances and deliver more accurate, reliable, and context-aware threat analysis.

    While large autoregressive models such as GPT-5 excel at generating language, encoder-based models like SecureBERT 2.0 are designed to understand, represent, and retrieve information with precision—a fundamental need in cybersecurity. Generative models predict the next token; encoder models transform entire inputs into dense, semantically rich embeddings that capture relationships, context, and meaning without fabricating content.

    This distinction makes SecureBERT 2.0 ideal for high-precision, security-critical applications where factual accuracy, explainability, and speed are paramount. Built on the ModernBERT architecture, it uses hierarchical long-context encoding and multi-modal text-and-code understanding to analyze complex threat data and source code efficiently.
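To make the embedding idea concrete, here is a minimal sketch of mean pooling, a standard way to collapse an encoder's per-token hidden states into one dense document vector. The arrays below are random stand-ins for real SecureBERT 2.0 outputs, and the function name is ours, not part of any released API:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average the hidden states of real (non-padding) tokens into one
    fixed-size vector per document."""
    mask = attention_mask[..., None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # padding contributes zero
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # real tokens per document
    return summed / counts                           # (batch, hidden)

# Stand-in for encoder output: 2 documents, 6 tokens each, 8-dim hidden states.
# With the real model this would come from the encoder's last hidden state.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 6, 8))
mask = np.array([[1, 1, 1, 1, 0, 0],   # doc 1: 4 real tokens, 2 padding
                 [1, 1, 1, 1, 1, 1]])  # doc 2: no padding
doc_vectors = mean_pool(hidden, mask)
print(doc_vectors.shape)  # (2, 8)
```

The resulting vectors are what downstream retrieval and correlation steps compare, rather than any generated text.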

    Let’s take a look at how SecureBERT 2.0 helps security analysts in real-world applications.

    Real-world applications of SecureBERT 2.0

    Imagine you are a SOC analyst tasked with investigating a suspected supply chain compromise. Traditionally, this would involve correlating open-source intelligence, internal alerts, and vulnerability reports in a process which could take several weeks of manual data analysis and cross-referencing.

    With SecureBERT 2.0, you can simply embed all relevant assets—reports, code, CVE data, and threat intelligence, for example—into the system. The model immediately surfaces connections between obscure indicators and previously unseen infrastructure patterns.
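That "surfaces connections" step boils down to nearest-neighbor search over embeddings. Here is a toy sketch using hand-made vectors in place of real SecureBERT 2.0 embeddings; the asset names, numbers, and function are illustrative only, not part of any released tooling:

```python
import numpy as np

def cosine_sim_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between L2-normalized document vectors."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-9, None)
    return unit @ unit.T

# Toy document vectors standing in for SecureBERT 2.0 embeddings of an
# internal alert, a CVE description, and an unrelated policy document.
assets = ["internal alert", "CVE description", "policy document"]
vecs = np.array([[0.9, 0.1, 0.0],
                 [0.8, 0.2, 0.1],
                 [0.0, 0.1, 0.9]])
sims = cosine_sim_matrix(vecs)

# Exclude self-similarity, then surface the most related pair of assets.
np.fill_diagonal(sims, -1.0)
i, j = np.unravel_index(np.argmax(sims), sims.shape)
print(assets[i], "<->", assets[j])  # internal alert <-> CVE description
```

In a real workflow, each row of `vecs` would be a pooled SecureBERT 2.0 embedding of a report or alert, and the same similarity matrix would drive deduplication and correlation.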

    This is just one potential scenario of many; SecureBERT 2.0 can support and streamline a wealth of potential security applications:

    • Threat Intelligence Correlation: Linking indicators of compromise across multiple sources to uncover campaign patterns and adversary tactics
    • Incident Triage & Alert Prioritization: Embedding alerts and reports to detect duplicates, related incidents, or known CVEs—reducing noise and analyst workload
    • Secure Code & Vulnerability Detection: Identifying risky patterns, insecure dependencies, and potential zero-day vulnerabilities in source code
    • Semantic Search & RAG for Security Ops: Providing context-aware retrieval across internal knowledge bases, threat feeds, and documentation for faster analyst response
    • Policy and Compliance Search: Enabling accurate semantic lookup across large regulatory and governance corpora

    Unlike generative LLMs that create text, SecureBERT 2.0 interprets and structures information, delivering faster inference and lower compute costs while minimizing the risk of hallucination. This makes it a trusted foundation model for enterprise, defense, and research environments where precision and data integrity matter most.

    Under the hood of SecureBERT 2.0

    Three advances make SecureBERT 2.0 such a significant step forward: its ModernBERT foundation, its greatly expanded training data, and a smarter approach to pretraining.

    SecureBERT 2.0 is powered by ModernBERT, a next-generation transformer designed for long-document processing. Extended attention mechanisms and hierarchical encoding allow the model to capture both fine-grained syntax and high-level structure—critical for analyzing long, multi-section security reports.
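The hierarchical encoding itself lives inside the model, but the usual preprocessing for very long reports is to split them into overlapping windows before encoding. A minimal sketch, with an assumed window size rather than the model's actual context limit:

```python
def chunk_tokens(tokens: list, window: int, overlap: int) -> list:
    """Split a long token sequence into overlapping windows so each chunk
    fits the encoder's context limit while preserving continuity across
    chunk boundaries."""
    if overlap >= window:
        raise ValueError("overlap must be smaller than window")
    step = window - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
    return chunks

# A 10-token "report" split into windows of 4 tokens with 1 token of overlap.
report = [f"tok{i}" for i in range(10)]
chunks = chunk_tokens(report, window=4, overlap=1)
print(chunks)
```

Each chunk would then be encoded separately, with the per-chunk vectors aggregated into a document-level representation.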

    The model is trained on 13 times more data than the original SecureBERT, drawing on a new corpus of curated security articles and technical blogs, filtered cybersecurity data, code vulnerability repositories, and incident narratives. In total, this dataset covers 13 billion text tokens and 53 million code tokens.

    Finally, a microannealing pretraining curriculum gradually transitions from curated to real-world data, balancing quality and diversity. Targeted masking teaches the model to predict crucial security actions and entities like “bypass,” “encrypt,” or “CVE,” strengthening domain representation.
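The targeted-masking idea can be illustrated with a toy masking function: security-relevant terms are always masked, while other tokens fall back to standard random masking. The term list and rates here are our assumptions for illustration, not the paper's exact curriculum:

```python
import random

SECURITY_TERMS = {"bypass", "encrypt", "cve"}  # illustrative term list
MASK = "[MASK]"

def targeted_mask(tokens, terms=SECURITY_TERMS, fallback_rate=0.15, seed=0):
    """Always mask security-relevant tokens; mask the rest at a standard
    random rate. Returns the masked sequence and the target positions the
    model must learn to predict."""
    rng = random.Random(seed)
    masked, targets = [], []
    for i, tok in enumerate(tokens):
        if tok.lower() in terms or rng.random() < fallback_rate:
            masked.append(MASK)
            targets.append(i)
        else:
            masked.append(tok)
    return masked, targets

tokens = "the malware can bypass detection and encrypt files".split()
masked, targets = targeted_mask(tokens)
print(masked)
```

Because the security terms are always selected, the training signal is concentrated on exactly the actions and entities the model needs to represent well.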

    The performance of SecureBERT 2.0 is a marked improvement over its predecessor and other evaluated models across benchmarks; the details can be found in the complete research paper.

    Looking ahead: AI for security at Cisco

    SecureBERT 2.0 demonstrates what’s possible when architecture and data are purpose-built for cybersecurity. It joins other models, like the generative Foundation-Sec-8B from Cisco’s Foundation AI team, as part of Cisco’s continued commitment to applying AI responsibly within the domain of cybersecurity.

    We are excited to share this model with the world, to see some of the innovative ways it will be embraced by the security community, and to continue exploring potential usages for taxonomy creation, knowledge graph generation, and other cutting-edge applications.

    You can get started with the SecureBERT 2.0 model on HuggingFace and GitHub today, and dig into our research paper for more detail and performance benchmarking.

    The future of cybersecurity AI is securely intelligent.


