    Anchoring AI to a reference application

    By big tee tech hub · September 26, 2025 · 4 Mins Read


    Service templates are a typical building block in the “golden paths” organisations build for their engineering teams to make it easy to do the right thing. The templates are supposed to be the role models for all the services in the organisation, always representing the most up-to-date coding patterns and standards.

    One of the challenges with service templates, though, is that once a team has instantiated a service with one, it’s tedious to feed template updates back into those services. Can GenAI help with that?

    Reference application as sample provider

    As part of a larger experiment that I recently wrote about here, I created an MCP server that gives a coding assistant access to code samples for typical patterns. In my case this was a Spring Boot web application, where the patterns were repository, service and controller classes. It is a well-established prompting practice at this point that providing LLMs with examples of the outputs we want leads to better results. To put “providing examples” into fancier terms, this is also called “few-shot prompting” or “in-context learning”.
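    To make the idea concrete, here is a minimal sketch of what such a sample-serving MCP tool could look like, assuming Spring AI’s MCP server starter and its @Tool annotation; the class, file paths and pattern names are invented for illustration, and the article’s actual server (in the accompanying repository) may be built quite differently.

    package com.example.referencemcp;

    import org.springframework.ai.tool.annotation.Tool;
    import org.springframework.stereotype.Component;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;

    // Hypothetical MCP tool: returns the reference application's current
    // implementation of a given pattern (controller, service, repository).
    @Component
    public class PatternSampleTools {

        // Representative classes in a local checkout of the reference application.
        private static final Map<String, Path> SAMPLES = Map.of(
                "controller", Path.of("reference-app/src/main/java/com/example/orders/OrderController.java"),
                "service", Path.of("reference-app/src/main/java/com/example/orders/OrderService.java"),
                "repository", Path.of("reference-app/src/main/java/com/example/orders/OrderRepository.java"));

        @Tool(description = "Return the reference application's sample code for a pattern: controller, service or repository")
        public String getPatternSample(String pattern) throws IOException {
            Path sample = SAMPLES.get(pattern.toLowerCase());
            if (sample == null) {
                return "Unknown pattern: " + pattern + ". Known patterns: " + SAMPLES.keySet();
            }
            // The sample comes from a project that compiles and runs, so the assistant
            // always sees examples that are internally consistent and up to date.
            return Files.readString(sample);
        }
    }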

    When I started working with code samples in prompts, I quickly realised how tedious this was, because I was working in a natural-language markdown file. It felt a little bit like writing my first Java exams at university, in pencil: you have no idea if the code you’re writing actually compiles. And what’s more, if you’re creating prompts for multiple coding patterns, you want to keep them consistent with each other. Maintaining code samples in a reference application project that you can compile and run (like a service template) makes it a lot easier to provide AI with compilable, consistent samples.
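    For illustration, a pattern sample in such a reference application might be as small as the controller below; the domain, class names and companion types are invented here, the point is only that the sample lives in code that actually compiles.

    package com.example.orders;

    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    import java.util.List;

    // Minimal companion types so the sample stands alone; in a real reference
    // application these would be the DTO and service samples in their own files.
    record OrderDto(long id, String status) {}

    @Service
    class OrderService {
        List<OrderDto> findAll() { return List.of(new OrderDto(1L, "OPEN")); }
        OrderDto findById(long id) { return new OrderDto(id, "OPEN"); }
    }

    // The controller pattern: a thin @RestController delegating to a constructor-injected service.
    @RestController
    @RequestMapping("/orders")
    public class OrderController {

        private final OrderService orderService;

        public OrderController(OrderService orderService) {
            this.orderService = orderService;
        }

        @GetMapping
        public List<OrderDto> listOrders() {
            return orderService.findAll();
        }

        @GetMapping("/{id}")
        public OrderDto getOrder(@PathVariable long id) {
            return orderService.findById(id);
        }
    }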

    [Diagram: Anchoring AI to a reference application]

    Detect drift from the reference application

    Now back to the problem statement I mentioned at the beginning: Once code is generated (be that with AI, or with a service template), and then further extended and maintained, codebases often drift away from the role model of the reference application.

    So in a second step, I wondered how we might use this approach to do a “code pattern drift detection” between the codebase and the reference application. I tested this with a relatively simple example: I added a logger and log.debug statements to the reference application’s controller classes:

    [Screenshot: a git commit diff in the reference application, showing a controller with an added @Slf4j annotation and a log.debug statement in one of the endpoint mappings]
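    In terms of the illustrative controller sample above, the reference commit would amount to a change along these lines (assuming Lombok’s @Slf4j, which generates the static log field; the added lines are marked, unchanged parts are omitted):

    import lombok.extern.slf4j.Slf4j;        // added in the reference commit

    @Slf4j                                   // added in the reference commit
    @RestController
    @RequestMapping("/orders")
    public class OrderController {

        private final OrderService orderService;

        public OrderController(OrderService orderService) {
            this.orderService = orderService;
        }

        @GetMapping("/{id}")
        public OrderDto getOrder(@PathVariable long id) {
            log.debug("Fetching order {}", id);   // added in the reference commit
            return orderService.findById(id);
        }

        // other endpoints unchanged
    }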

    Then I expanded the MCP server to provide access to the git commits in the reference application. Asking the agent to first look for the actual changes in the reference gives me some control over the scope of the drift detection: I can use the commits to communicate to AI exactly what type of drift I’m interested in. Before I introduced this, when I just asked AI to compare the reference controllers with the existing controllers, it went a bit overboard with lots of irrelevant comparisons, and I saw this commit-scoping approach have a good impact.
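    A second MCP tool for this could be as simple as shelling out to git in the reference application’s checkout. Again, this is a hedged sketch with invented names, assuming Spring AI’s @Tool annotation as before rather than reproducing the article’s implementation.

    package com.example.referencemcp;

    import org.springframework.ai.tool.annotation.Tool;
    import org.springframework.stereotype.Component;

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;

    // Hypothetical MCP tool: exposes the latest commit (message plus patch) of the
    // reference application, so the agent can scope drift detection to that change.
    @Component
    public class ReferenceHistoryTools {

        // Local checkout of the reference application; path is illustrative.
        private static final Path REFERENCE_REPO = Path.of("reference-app");

        @Tool(description = "Return the latest commit in the reference application, including its diff")
        public String getLatestReferenceCommit() throws IOException, InterruptedException {
            Process git = new ProcessBuilder("git", "log", "-1", "--patch")
                    .directory(REFERENCE_REPO.toFile())
                    .redirectErrorStream(true)
                    .start();
            String output = new String(git.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
            git.waitFor();
            // The agent reads this diff first, then compares only the touched pattern
            // (here: controllers) against the target codebase.
            return output;
        }
    }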

    [Diagram: an expanded version of the previous diagram, this time showing the setup for the drift detection. The prompt asks the agent to find the latest changes; the agent gets the latest commit from the reference application via the MCP server, then looks at the diff and uses it to analyse the target application and to create a drift report. In a second step, the user can ask the agent to write code that closes the gaps identified in the report.]

    In the first step, I just asked AI to generate a report for me that identified all the drift, so I could review and edit that report, e.g. remove findings that were irrelevant. In the second step, I asked AI to take the report and write code that closes the gaps identified.

    When is AI bringing something new to the table?

    A thing as simple as adding a logger, or changing a logging framework, can also be done deterministically by codemod tools like OpenRewrite. So bear that in mind before you reach for AI.

    Where AI can shine is where the drift requires changes that are more dynamic than regular-expression-based codemod recipes can express. In an advanced form of the logging example, this might be turning non-standardised, rich log statements into a structured format, where an LLM might be better at mapping a wide variety of existing log messages onto the respective structure.
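    As a hedged illustration of what such a transformation might look like (message shapes and names are invented for the example), the drift to close would be from free-form string concatenation to one consistent, parseable shape:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class CheckoutLoggingExample {

        private static final Logger log = LoggerFactory.getLogger(CheckoutLoggingExample.class);

        void before(String customerId, int count, long elapsedMs) {
            // Free-form and hard to parse or aggregate; the wording varies across the codebase.
            log.info("Fetched " + count + " orders for customer " + customerId + " in " + elapsedMs + "ms");
        }

        void after(String customerId, int count, long elapsedMs) {
            // One consistent, structured shape; deciding how each of the many existing
            // variants maps onto it is where an LLM can beat a regex-based recipe.
            log.info("orders.fetched customerId={} count={} elapsedMs={}", customerId, count, elapsedMs);
        }
    }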

    The example MCP server is included in the repository that accompanies the original article.



    Source link
