    New ‘persona vectors’ from Anthropic let you decode and direct an LLM’s personality

    By big tee tech hub | August 7, 2025 | 6 min read


    A new study from the Anthropic Fellows Program reveals a technique to identify, monitor and control character traits in large language models (LLMs). The findings show that models can develop undesirable personalities (e.g., becoming malicious, excessively agreeable, or prone to making things up) either in response to user prompts or as an unintended consequence of training. 

    The researchers introduce “persona vectors,” directions in a model’s internal activation space that correspond to specific personality traits, giving developers a toolkit to better manage the behavior of their AI assistants.

    Model personas can go wrong

    LLMs typically interact with users through an “Assistant” persona designed to be helpful, harmless, and honest. However, these personas can fluctuate in unexpected ways. At deployment, a model’s personality can shift dramatically based on prompts or conversational context, as seen when Microsoft’s Bing chatbot threatened users or xAI’s Grok started behaving erratically. As the researchers note in their paper, “While these particular examples gained widespread public attention, most language models are susceptible to in-context persona shifts.”

    Training procedures can also induce unexpected changes. For instance, fine-tuning a model on a narrow task like generating insecure code can lead to a broader “emergent misalignment” that extends beyond the original task. Even well-intentioned training adjustments can backfire. In April 2025, a modification to the reinforcement learning from human feedback (RLHF) process unintentionally made OpenAI’s GPT-4o overly sycophantic, causing it to validate harmful behaviors. 



    How persona vectors work

    (Image source: Anthropic)

    The new research builds on the concept that high-level traits, such as truthfulness or secrecy, are encoded as linear directions within a model’s “activation space” (the internal, high-dimensional representation of information the model builds as it processes input). The researchers systematized the process of finding these directions, which they call “persona vectors.” According to the paper, their method for extracting persona vectors is automated and “can be applied to any personality trait of interest, given only a natural-language description.”

    The process works through an automated pipeline. It begins with a simple description of a trait, such as “evil.” The pipeline then generates pairs of contrasting system prompts (e.g., “You are an evil AI” vs. “You are a helpful AI”) along with a set of evaluation questions. The model generates responses under both the positive and negative prompts. The persona vector is then calculated by taking the difference in the average internal activations between the responses that exhibit the trait and those that do not. This isolates the specific direction in the model’s activation space that corresponds to that personality trait.
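    In code, that extraction step reduces to a mean-difference over hidden states. The sketch below is illustrative only: it assumes activations have already been collected from responses generated under the contrasting prompts, and the function names, tensor shapes, and normalization are assumptions rather than Anthropic's released implementation.

```python
import torch

def extract_persona_vector(trait_acts: torch.Tensor,
                           baseline_acts: torch.Tensor) -> torch.Tensor:
    """Persona vector as the difference of mean activations.

    trait_acts:    (n_responses, hidden_dim) hidden states from responses that
                   exhibit the trait (e.g., generated under "You are an evil AI").
    baseline_acts: (n_responses, hidden_dim) hidden states from responses that
                   do not (e.g., generated under "You are a helpful AI").
    """
    direction = trait_acts.mean(dim=0) - baseline_acts.mean(dim=0)
    # Normalize so projections onto different persona vectors are comparable.
    return direction / direction.norm()

# Stand-in activations for a hidden size of 4096 (typical of 7-8B models).
evil_vector = extract_persona_vector(torch.randn(200, 4096), torch.randn(200, 4096))
```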

    Putting persona vectors to use

    In a series of experiments with open models, such as Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct, the researchers demonstrated several practical applications for persona vectors.

    First, by projecting a model’s internal state onto a persona vector, developers can monitor and predict how it will behave before it generates a response. The paper states, “We show that both intended and unintended finetuning-induced persona shifts strongly correlate with activation changes along corresponding persona vectors.” This allows for early detection and mitigation of undesirable behavioral shifts during fine-tuning.
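    A minimal sketch of that monitoring idea, assuming access to per-response hidden states; the threshold and function names here are hypothetical, not values from the paper.

```python
import torch

def persona_projection(response_acts: torch.Tensor, persona_vec: torch.Tensor) -> float:
    """Scalar projection of a response's mean hidden state onto a persona vector."""
    return torch.dot(response_acts.mean(dim=0), persona_vec).item()

# During fine-tuning, a steadily rising projection along (say) the sycophancy
# vector would be an early warning of an unintended persona shift.
def drift_alert(projection: float, threshold: float = 2.0) -> bool:
    return projection > threshold
```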

    Persona vectors also allow for direct intervention to curb unwanted behaviors at inference time through a process the researchers call “steering.” One approach is “post-hoc steering,” where developers subtract the persona vector from the model’s activations during inference to mitigate a bad trait. The researchers found that while effective, post-hoc steering can sometimes degrade the model’s performance on other tasks. 
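    Mechanically, post-hoc steering amounts to nudging a layer's hidden states against the persona direction during generation. The sketch below uses a PyTorch forward hook on a HuggingFace-style decoder layer; the layer index and steering coefficient are assumptions for illustration, not the paper's exact settings.

```python
import torch

def make_posthoc_steering_hook(persona_vec: torch.Tensor, coeff: float = 5.0):
    """Subtract the persona direction from a layer's hidden states at inference time."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden - coeff * persona_vec  # broadcasts over batch and sequence
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage with a HuggingFace-style model:
# handle = model.model.layers[16].register_forward_hook(make_posthoc_steering_hook(evil_vector))
# outputs = model.generate(**inputs)
# handle.remove()
```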

    A more novel method is “preventative steering,” where the model is proactively steered toward the undesirable persona during fine-tuning. This counterintuitive approach essentially “vaccinates” the model against learning the bad trait from the training data, canceling out the fine-tuning pressure while better preserving its general capabilities.
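    The preventative variant flips the sign: during fine-tuning, the hook adds the persona direction so the training data no longer pushes the weights toward the trait, and the hook is removed for deployment. This is a sketch under the same assumptions as above, not the authors' exact training setup.

```python
def make_preventative_steering_hook(persona_vec, coeff: float = 5.0):
    """Add the persona direction during fine-tuning; drop the hook at deployment."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        nudged = hidden + coeff * persona_vec  # note the flipped sign vs. post-hoc steering
        return (nudged,) + output[1:] if isinstance(output, tuple) else nudged
    return hook
```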

    (Image source: Anthropic)

    A key application for enterprises is using persona vectors to screen data before fine-tuning. The researchers developed a metric called “projection difference,” which measures how much a given training dataset will push the model’s persona toward a particular trait. This metric is highly predictive of how the model’s behavior will shift after training, allowing developers to flag and filter problematic datasets before using them in training.
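    A rough sketch of that screening idea: score each candidate training sample by its projection onto the persona vector and compare the dataset's mean against a baseline. The function names and exact formula are assumptions; the paper's metric may differ in detail.

```python
import torch

def projection_difference(candidate_acts: torch.Tensor,
                          baseline_acts: torch.Tensor,
                          persona_vec: torch.Tensor) -> float:
    """Positive values suggest the dataset would push the model toward the trait."""
    return ((candidate_acts @ persona_vec).mean() - (baseline_acts @ persona_vec).mean()).item()

def rank_suspect_samples(sample_acts: torch.Tensor, persona_vec: torch.Tensor) -> torch.Tensor:
    """Indices of samples most aligned with the persona vector, worst offenders first."""
    return torch.argsort(sample_acts @ persona_vec, descending=True)
```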

    For companies that fine-tune open-source models on proprietary or third-party data (including data generated by other models), persona vectors provide a direct way to monitor and mitigate the risk of inheriting hidden, undesirable traits. The ability to screen data proactively is a powerful tool for developers, enabling the identification of problematic samples that may not be immediately apparent as harmful. 

    The research found that this technique can find issues that other methods miss, noting, “This suggests that the method surfaces problematic samples that may evade LLM-based detection.” For example, their method was able to catch some dataset examples that weren’t obviously problematic to the human eye, and that an LLM judge wasn’t able to flag.

    In a blog post, Anthropic suggested that they will use this technique to improve future generations of Claude. “Persona vectors give us some handle on where models acquire these personalities, how they fluctuate over time, and how we can better control them,” they write. Anthropic has released the code for computing persona vectors, monitoring and steering model behavior, and vetting training datasets. Developers of AI applications can utilize these tools to transition from merely reacting to undesirable behavior to proactively designing models with a more stable and predictable personality.
