    Fine-tuning LLMs with user-level differential privacy

By big tee tech hub | May 27, 2025 | 2 min read

    Making these algorithms work for LLMs

If we run these algorithms “out of the box” for LLMs, things go badly. So we developed optimizations that fix the key issues with running them “out of the box”.

For ELS (example-level sampling), we had to go from example-level DP guarantees to user-level DP guarantees. We found that previous work added orders of magnitude more noise than was actually necessary. We were able to prove that significantly less noise suffices, making the model much better while retaining the same privacy guarantees.
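To see why a naive conversion is so lossy, recall the standard group-privacy bound (a textbook fact, not a result from this post): if a mechanism is $(\varepsilon, \delta)$-DP with respect to changing one example, and each user contributes at most $B$ examples, then with respect to changing one user it is only guaranteed to be

$$\big(B\varepsilon,\; B\,e^{(B-1)\varepsilon}\delta\big)\text{-DP},$$

so the noise added per step must grow roughly linearly in $B$ to recover a fixed user-level budget. A tighter analysis that avoids this linear blow-up is what allows substantially less noise at the same user-level guarantee.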

For both ELS and ULS (user-level sampling), we had to figure out how to optimize the contribution bound. A “default” choice is a contribution bound that every user already satisfies; that is, we do no pre-processing at all. However, some users may contribute a large amount of data, and we then need to add large amounts of noise to protect those users. Setting a smaller contribution bound reduces the noise we need to add, but at the cost of discarding a lot of data. Because LLM training runs are expensive, we can’t afford to train many models with different contribution bounds and pick the best one; we need an effective strategy for picking the contribution bound before training starts.
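The pre-processing step described above, enforcing a contribution bound by discarding each user's excess examples, can be sketched as follows. This is a minimal illustration; the function and variable names are ours, not from the paper.

```python
import random

def cap_user_contributions(user_examples, bound, seed=0):
    """Enforce a contribution bound: keep at most `bound` examples per user.

    user_examples: dict mapping user_id -> list of examples.
    Examples beyond the bound are discarded (sampled uniformly at random),
    trading training data for a smaller amount of required DP noise.
    """
    rng = random.Random(seed)
    capped = {}
    for user, examples in user_examples.items():
        if len(examples) <= bound:
            capped[user] = list(examples)
        else:
            capped[user] = rng.sample(examples, bound)
    return capped

data = {"u1": ["a", "b", "c", "d"], "u2": ["e"], "u3": ["f", "g"]}
capped = cap_user_contributions(data, bound=2)
print({u: len(ex) for u, ex in capped.items()})  # {'u1': 2, 'u2': 1, 'u3': 2}
```

After this step, any single user affects at most `bound` training examples, which is what lets the DP analysis bound the noise needed to hide that user.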

After lengthy experimentation at scale, we found that for ELS, setting the contribution bound to the median number of examples held per user is an effective strategy. For ULS, we derived a prediction for the total noise added as a function of the contribution bound, and found that choosing the bound that minimizes this prediction is effective.
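The two bound-selection heuristics can be sketched as follows. The median rule for ELS is as stated above; the predicted-noise function for ULS is a toy stand-in of our own devising, since the post does not give the prediction's exact form here.

```python
import statistics

def els_bound(counts):
    """ELS heuristic from the post: the median per-user example count."""
    return int(statistics.median(counts))

def uls_bound(counts, predicted_noise):
    """ULS heuristic: pick the bound minimizing predicted total noise.

    `predicted_noise(bound, counts) -> float` stands in for the post's
    prediction, whose exact form isn't given in this excerpt.
    """
    candidates = range(1, max(counts) + 1)
    return min(candidates, key=lambda b: predicted_noise(b, counts))

# Per-user example counts for a tiny illustrative population.
counts = [1, 2, 2, 3, 8]

# Toy predicted-noise model (ours, not the paper's): required noise grows
# like sqrt(bound), while the data kept, sum(min(c, bound)), dilutes it.
def toy_noise(bound, counts):
    kept = sum(min(c, bound) for c in counts)
    return bound ** 0.5 / kept

print(els_bound(counts))             # 2
print(uls_bound(counts, toy_noise))  # 2
```

The point of the ULS heuristic is that the prediction is cheap to evaluate for every candidate bound, so the search happens before any expensive training run.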


