
    Virtual Personas for Language Models via an Anthology of Backstories – The Berkeley Artificial Intelligence Research Blog

    By big tee tech hub · April 28, 2025 · 6 mins read





    We introduce Anthology, a method for conditioning LLMs to representative, consistent, and diverse virtual personas by generating and utilizing naturalistic backstories with rich details of individual values and experience.

    What does it mean for large language models (LLMs) to be trained on massive text corpora, collectively produced by millions and billions of distinctive human authors?

    In “Language Models as Agent Models”, compelling evidence is presented that recent language models can be considered models of agents: provided with a textual context, an LLM can generate conditional text that reflects the characteristics of an agent likely to have produced that context. This suggests that, with appropriate conditioning, LLMs could be guided to approximate the responses of a particular human voice rather than the mixture of voices that otherwise emerges. If realized, this capability would have significant implications for user research and the social sciences: conditioned language models acting as virtual personas of human subjects could enable cost-effective pilot studies and support best practices in human studies, e.g., the Belmont principles of justice and beneficence.

    In this work, we introduce Anthology, an approach for steering LLMs to representative, consistent, and diverse virtual personas by providing richly detailed life narratives of individuals as conditioning context to models.

    In doing so, we also present methods to generate backstories from LLMs themselves as a means to efficiently produce massive sets covering a wide range of human demographics.
    By grounding language models in naturalistic backstories, Anthology allows LLMs to simulate individual human samples with increased fidelity, measured in terms of matching the distributions and consistencies of human responses.

    Our Approach: Anthology

    Conditioning Language Model Generation with Individual Life Narratives

    A significant limitation of earlier methods for steering LLMs toward virtual personas has been their inability to reliably approximate individual human samples. Prior approaches prompt LLMs with broad demographic information, e.g., “I am a 25-year-old from California. My highest level of education is less than high school,” prompts that are essentially bodies of text generated from a tuple of demographic variables.
    With these methods, we are only able to approximate human samples at a population level, not at the individual level, which results in:

    • Responses prone to LLMs defaulting to stereotypical and/or prototypical portrayals, as they are only conditioned on demographic variables (e.g., race and gender)
    • Inability to provide important metrics of interest such as covariance and statistical significance, as individual responses are required for such computations

    Anthology enables the approximation of individual subjects by conditioning with richly detailed backstories. Through these backstories, the model captures implicit and explicit markers of personal identity, including demographic traits and spontaneous references to cultural, socioeconomic backgrounds, and life philosophies. Our approach involves generating a vast set of backstories representing a wide range of demographic attributes via language models queried with unrestricted, open-ended prompts such as, “Tell me about yourself.” We then match virtual personas conditioned by each backstory to real-world survey samples.
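    To make this pipeline concrete, below is a minimal sketch of backstory generation and persona conditioning, assuming an OpenAI-compatible chat API. The model name, prompts, and survey item are illustrative placeholders, not the exact setup used in the paper.

    # Sketch: generate a naturalistic backstory, then condition a survey answer on it.
    # Assumes an OpenAI-compatible client; model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def generate_backstory(model="gpt-4o-mini"):
        """Elicit a first-person life narrative with an open-ended prompt."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Tell me about yourself."}],
            temperature=1.0,  # high temperature encourages diverse backstories
        )
        return resp.choices[0].message.content

    def answer_survey_question(backstory, question, options, model="gpt-4o-mini"):
        """Condition the model on a backstory, then pose a survey item."""
        prompt = (
            f"{backstory}\n\n"
            f"Answering as the person described above: {question}\n"
            + "\n".join(f"- {o}" for o in options)
            + "\nRespond with exactly one of the options."
        )
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.0,  # deterministic answer for a fixed persona
        )
        return resp.choices[0].message.content.strip()

    # Example usage (the question is a placeholder, not an actual ATP item):
    # backstories = [generate_backstory() for _ in range(1000)]
    # answer_survey_question(backstories[0],
    #                        "How concerned are you about online privacy?",
    #                        ["Very concerned", "Somewhat concerned", "Not concerned"])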

    Results: Closer Approximation of Public Opinion Polls

    For evaluation, we compare the effectiveness of different methods for conditioning virtual personas in the context of approximating three Pew Research Center ATP surveys: Waves 34, 92, and 99.


    Results on approximating human responses for Pew Research Center ATP surveys. Boldface and underlined results indicate the values closest and second closest to those of humans, respectively.

    As measures of success in approximating human samples with virtual personas, we consider the following metrics:

    • Average Wasserstein distance (WD) between response distributions as a measure of representativeness
    • Frobenius norm (Fro.) between correlation matrices as a measure of consistency
    • Cronbach’s alpha as an additional measure of internal consistency
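
    To make these metrics concrete, here is a minimal sketch of how they could be computed with numpy and scipy, assuming human and virtual are (respondents × questions) arrays of numeric answer codes; the exact aggregation used in the paper may differ.

    # Sketch: evaluation metrics for comparing human vs. virtual-persona responses.
    # `human` and `virtual` are (n_respondents, n_questions) arrays of numeric codes.
    import numpy as np
    from scipy.stats import wasserstein_distance

    def avg_wasserstein(human, virtual):
        """Average per-question Wasserstein distance (representativeness)."""
        return float(np.mean([
            wasserstein_distance(human[:, q], virtual[:, q])
            for q in range(human.shape[1])
        ]))

    def frobenius_corr_gap(human, virtual):
        """Frobenius norm of the gap between inter-question correlation matrices (consistency)."""
        corr_h = np.corrcoef(human, rowvar=False)
        corr_v = np.corrcoef(virtual, rowvar=False)
        return float(np.linalg.norm(corr_h - corr_v, ord="fro"))

    def cronbach_alpha(responses):
        """Cronbach's alpha: internal consistency of a set of survey items."""
        k = responses.shape[1]
        item_vars = responses.var(axis=0, ddof=1).sum()
        total_var = responses.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)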

    Prior to analyzing virtual subjects, we estimate the lower bounds of each evaluation metric by repeatedly dividing the human population into two equal-sized groups at random and calculating these metrics between the subgroups.
    We take averaged values from 100 iterations to represent the lower-bound estimates.
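
    A sketch of this lower-bound procedure, reusing the metric functions from the sketch above (the split-and-average logic follows the description in the text; details such as seeding are assumptions):

    # Sketch: lower-bound estimates from repeated random splits of the human pool,
    # reusing avg_wasserstein and frobenius_corr_gap defined earlier.
    import numpy as np

    def lower_bounds(human, n_iter=100, seed=0):
        """Average metric values between random halves of the human respondents."""
        rng = np.random.default_rng(seed)  # seeding is an assumption, for reproducibility
        n_half = human.shape[0] // 2
        wd_vals, fro_vals = [], []
        for _ in range(n_iter):
            perm = rng.permutation(human.shape[0])
            group_a = human[perm[:n_half]]
            group_b = human[perm[n_half:2 * n_half]]
            wd_vals.append(avg_wasserstein(group_a, group_b))
            fro_vals.append(frobenius_corr_gap(group_a, group_b))
        return float(np.mean(wd_vals)), float(np.mean(fro_vals))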

    We consistently observe that Anthology outperforms the other conditioning methods on all metrics, for both Llama-3-70B and Mixtral-8x22B.
    When comparing the two matching methods, greedy matching tends to perform better on average Wasserstein distance across all Waves. We attribute the differences between the matching methods to the one-to-one correspondence constraint of maximum weight matching and the limited number of virtual users available. Specifically, the weights assigned to matched virtual subjects under maximum weight matching are inevitably lower than under greedy matching, since the latter relaxes the one-to-one constraint. This discrepancy can result in lower demographic similarity between matched human and virtual users than greedy matching achieves. Overall, these results suggest that the richness of the generated backstories in our approach elicits more nuanced responses than the baselines.
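
    As an illustration of the difference between the two schemes, the sketch below contrasts greedy matching (each human respondent takes the most similar virtual persona, with reuse allowed) against maximum weight matching (a one-to-one assignment, e.g., via the Hungarian algorithm). The demographic-similarity matrix is assumed to be precomputed; this is not the paper's exact matching code.

    # Sketch: greedy vs. maximum-weight matching of human respondents to virtual
    # personas, given a precomputed (n_humans, n_virtual) similarity matrix `sim`.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def greedy_match(sim):
        """Each human takes the most similar virtual persona; personas may repeat."""
        return sim.argmax(axis=1)

    def max_weight_match(sim):
        """One-to-one assignment maximizing total similarity (Hungarian algorithm).
        Requires at least as many virtual personas as human respondents."""
        rows, cols = linear_sum_assignment(sim, maximize=True)
        assignment = np.empty(sim.shape[0], dtype=int)
        assignment[rows] = cols
        return assignment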

    Final Thoughts

    Anthology marks a promising new direction for conditioning virtual personas in LLMs, one that could reshape how we conduct user research, public opinion surveys, and other social science applications by offering a scalable and, at times, ethical alternative to traditional human surveys.
    However, the use of Anthology, as in any other application of language models in the social sciences, also brings several considerations to the forefront: although the generated backstories help create more representative personas, there remains a risk of perpetuating biases or infringing on privacy, so results should be used and interpreted with caution.

    In terms of future steps, we envision our approach benefiting from a more expansive and diverse set of backstories, each representing a consistent life narrative of individuals.
    Additionally, a valuable extension of the work would be to consider free-form response generation, enabling more natural and nuanced persona simulations beyond structured survey formats such as multiple-choice.
    Finally, an exciting next dimension in applying LLMs in behavioral studies would involve simulating longer-term effects, allowing virtual personas to model and retrospectively examine changes over time.

    All of these directions present a multitude of technical challenges; please let us know if you are interested in collaborating or want to discuss our work further!

    Learn more about our work: link to full paper

    @article{moon2024virtual,
      title={Virtual personas for language models via an anthology of backstories},
      author={Moon, Suhong and Abdulhai, Marwa and Kang, Minwoo and Suh, Joseph and Soedarmadji, Widyadewi and Behar, Eran Kohen and Chan, David M},
      journal={arXiv preprint arXiv:2407.06576},
      year={2024}
    }
    


