    How the A-MEM framework supports powerful long-context memory so LLMs can take on more complicated tasks

    By big tee tech hub | March 6, 2025 | 5 min read


    Researchers at Rutgers University, Ant Group and Salesforce Research have proposed a new framework that enables AI agents to take on more complicated tasks by integrating information from their environment and automatically creating and linking memories into complex knowledge structures.

    Called A-MEM, the framework uses large language models (LLMs) and vector embeddings to extract useful information from the agent’s interactions and create memory representations that can be retrieved and used efficiently. With enterprises looking to integrate AI agents into their workflows and applications, having a reliable memory management system can make a big difference.

    Why LLM memory is important

    Memory is critical in LLM and agentic applications because it enables long-term interactions between tools and users. Current memory systems, however, are either inefficient or based on predefined schemas that might not fit the changing nature of applications and the interactions they face.

    “Such rigid structures, coupled with fixed agent workflows, severely restrict these systems’ ability to generalize across new environments and maintain effectiveness in long-term interactions,” the researchers write. “The challenge becomes increasingly critical as LLM agents tackle more complex, open-ended tasks, where flexible knowledge organization and continuous adaptation are essential.”

    A-MEM explained

    A-MEM introduces an agentic memory architecture that enables autonomous and flexible memory management for LLM agents, according to the researchers.


    Every time an LLM agent interacts with its environment, whether by accessing tools or exchanging messages with users, A-MEM generates “structured memory notes” that capture both explicit information and metadata such as time, contextual description, relevant keywords and linked memories. Some of these details are generated by the LLM as it examines the interaction and creates semantic components.
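A structured memory note of this kind can be sketched as a small record type. This is a minimal, hypothetical sketch; the field names (`content`, `context`, `keywords`, `links`) are illustrative and not A-MEM's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an A-MEM-style "structured memory note".
# Field names are illustrative, not the framework's real schema.
@dataclass
class MemoryNote:
    content: str  # the raw interaction text
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    context: str = ""                                   # LLM-written description
    keywords: list[str] = field(default_factory=list)   # LLM-extracted keywords
    links: list[int] = field(default_factory=list)      # ids of linked notes

note = MemoryNote(
    content="User asked how to rotate API keys.",
    context="Security maintenance question",
    keywords=["api keys", "rotation", "security"],
)
```

The explicit content and the LLM-generated metadata live side by side, so both humans and retrieval code can work with the same record.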

    Once a memory is created, an encoder model is used to calculate the embedding value of all its components. The combination of LLM-generated semantic components and embeddings provides both human-interpretable context and a tool for efficient retrieval through similarity search.
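The retrieval side of this design can be illustrated with a toy example. The bag-of-words `embed` function below is a deliberately crude stand-in for a real encoder model; only the similarity-search logic is the point:

```python
import math
from collections import Counter

# Toy stand-in for the encoder model: a bag-of-words count vector.
# A real system would use a learned embedding model; the retrieval
# step (nearest neighbor by cosine similarity) works the same way.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = ["rotate api keys monthly", "deploy the staging cluster"]
vectors = [embed(m) for m in memories]

query = embed("how often should we rotate keys")
best = max(range(len(memories)), key=lambda i: cosine(query, vectors[i]))
# memories[best] -> "rotate api keys monthly"
```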

    Building up memory over time

    One of the interesting components of the A-MEM framework is a mechanism for linking different memory notes without the need for predefined rules. For each new memory note, A-MEM identifies the nearest memories based on the similarity of their embedding values. The LLM then analyzes the full content of the retrieved candidates to choose the ones that are most suitable to link to the new memory. 

    “By using embedding-based retrieval as an initial filter, we enable efficient scalability while maintaining semantic relevance,” the researchers write. “A-MEM can quickly identify potential connections even in large memory collections without exhaustive comparison. More importantly, the LLM-driven analysis allows for nuanced understanding of relationships that goes beyond simple similarity metrics.”
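The two-stage linking described above can be sketched as follows. Both the `jaccard` scorer (standing in for embedding similarity) and the `llm_confirms_link` stub (standing in for the LLM's full-content judgment) are illustrative assumptions, not A-MEM's actual implementation:

```python
# Sketch of A-MEM-style two-stage linking: a cheap similarity filter
# narrows the candidate set, then an LLM-style judgment picks true links.
def jaccard(a: str, b: str) -> float:
    """Cheap stand-in for embedding-based similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def llm_confirms_link(new: str, candidate: str) -> bool:
    """Placeholder for the LLM's full-content relevance analysis."""
    return "api" in new and "api" in candidate  # toy topical rule

store = ["rotate api keys monthly", "keys to good writing", "deploy staging"]
new_note = "schedule key rotation for api keys"

# Stage 1: similarity filter keeps the top-2 candidates.
candidates = sorted(store, key=lambda m: jaccard(new_note, m), reverse=True)[:2]
# Stage 2: the "LLM" rejects the superficially similar but unrelated memory.
links = [c for c in candidates if llm_confirms_link(new_note, c)]
```

Note how "keys to good writing" passes the cheap filter on word overlap but is rejected at the second stage, which is exactly the nuance the researchers attribute to the LLM-driven analysis.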

    After creating links for the new memory, A-MEM updates the retrieved memories based on their textual information and relationships with the new memory. As more memories are added over time, this process refines the system’s knowledge structures, enabling the discovery of higher-order patterns and concepts across memories.
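A minimal sketch of that update step, assuming notes are plain dicts: when a new note links to an old one, the old note's metadata is refreshed. A real system would ask an LLM to rewrite the textual context; here only the mechanical merge is shown:

```python
# Illustrative memory-evolution step: refresh an existing note when a
# new note links to it (merge keywords, record the back-link).
def evolve(old: dict, new: dict, new_id: int) -> dict:
    updated = dict(old)  # leave the original note untouched
    updated["links"] = sorted(set(old["links"]) | {new_id})
    updated["keywords"] = sorted(set(old["keywords"]) | set(new["keywords"]))
    return updated

old = {"content": "rotate api keys monthly", "keywords": ["api keys"], "links": []}
new = {"content": "schedule key rotation", "keywords": ["rotation", "api keys"]}

updated = evolve(old, new, new_id=7)
```

Repeated over many insertions, this kind of refresh is what lets higher-order structure accumulate in the store.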


    In each interaction, A-MEM uses context-aware memory retrieval to provide the agent with relevant historical information. Given a new prompt, A-MEM first computes its embedding value with the same mechanism used for memory notes. The system uses this embedding to retrieve the most relevant memories from the memory store and augment the original prompt with contextual information that helps the agent better understand and respond to the current interaction. 
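The final augmentation step amounts to prepending retrieved memories to the prompt. The exact formatting below is an assumption for illustration, not A-MEM's real template:

```python
# Illustrative prompt augmentation: retrieved memories are prepended
# to the user's request before it reaches the agent's LLM.
def augment_prompt(prompt: str, retrieved: list[str]) -> str:
    context = "\n".join(f"- {m}" for m in retrieved)
    return f"Relevant past memories:\n{context}\n\nCurrent request: {prompt}"

augmented = augment_prompt(
    "When do the api keys expire?",
    ["rotate api keys monthly", "keys rotated on 2025-03-01"],
)
```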

    “The retrieved context enriches the agent’s reasoning process by connecting the current interaction with related past experiences and knowledge stored in the memory system,” the researchers write.

    A-MEM in action

    The researchers tested A-MEM on LoCoMo, a dataset of very long conversations spanning multiple sessions. LoCoMo contains challenging tasks such as multi-hop questions that require synthesizing information across multiple chat sessions and reasoning questions that require understanding time-related information. The dataset also contains knowledge questions that require integrating contextual information from the conversation with external knowledge.


    The experiments show that A-MEM outperforms other baseline agentic memory techniques on most task categories, especially when using open-source models. Notably, the researchers say that A-MEM achieves superior performance while lowering inference costs, requiring up to 10X fewer tokens when answering questions.

    Effective memory management is becoming a core requirement as LLM agents become integrated into complex enterprise workflows across different domains and subsystems. A-MEM — whose code is available on GitHub — is one of several frameworks that enable enterprises to build memory-enhanced LLM agents.
