    Beyond the benchmarks: Understanding the coding personalities of different LLMs

By big tee tech hub | September 7, 2025 | 5 Mins Read

Most reports comparing AI models are based on performance benchmarks, but a recent research report from Sonar takes a different approach: grouping models by their coding personalities and examining the downsides of each when it comes to code quality.

    The researchers studied five different LLMs using the SonarQube Enterprise static analysis engine on over 4,000 Java assignments. The LLMs reviewed were Claude Sonnet 4, OpenCoder-8B, Llama 3.2 90B, GPT-4o, and Claude Sonnet 3.7.

They found that the models had distinct traits. Claude Sonnet 4, for example, was very verbose in its outputs, producing over 3x as many lines of code as OpenCoder-8B for the same problem.

    Based on these traits, the researchers divided the five models into coding archetypes. Claude Sonnet 4 was the “senior architect,” writing sophisticated, complex code, but introducing high-severity bugs. “Because of the level of technical difficulty attempted, there were more of these issues,” said Donald Fischer, a VP at Sonar.

OpenCoder-8B was the “rapid prototyper”: the fastest and most concise of the models, which makes it well suited to proof-of-concepts but also prone to creating technical debt. It had the highest issue density of all the models, at 32.45 issues per thousand lines of code.
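
For reference, issue density here is simply issue count normalized per thousand lines of code (KLOC). A minimal Java sketch of the arithmetic, using illustrative figures rather than the report's raw counts:

```java
// Issue density as used in static-analysis reporting: issues per 1,000 lines of code.
public final class IssueDensity {
    // Returns issues per thousand lines of code (KLOC).
    static double perKloc(int issueCount, int linesOfCode) {
        return issueCount / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        // Illustrative figures only: 649 issues across 20,000 lines works out
        // to 32.45 issues/KLOC, the density the report gives for OpenCoder-8B.
        System.out.println(perKloc(649, 20_000)); // 32.45
    }
}
```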

Llama 3.2 90B was the “unfulfilled promise”: its scale and backing imply it should be a top-tier model, but it had a pass rate of only 61.47%. Additionally, 70.73% of the vulnerabilities it created were “BLOCKER” severity, the most severe type of issue, which prevents testing from continuing.

GPT-4o was an “efficient generalist,” a jack-of-all-trades that is a common choice for general-purpose coding assistance. Its code wasn’t as verbose as the senior architect’s or as concise as the rapid prototyper’s, but somewhere in between. It also mostly avoided producing severe bugs, but 48.15% of the bugs it did produce were control-flow mistakes.

“This paints a picture of a coder who correctly grasps the main objective but often fumbles the details required to make the code robust. The code is likely to function for the intended scenario but will be plagued by persistent problems that compromise quality and reliability over time,” the report states.
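
A control-flow mistake in this sense is typically a wrong branch or loop bound that leaves the happy path working while edge cases fail. A hypothetical Java example of the pattern (not code from the study):

```java
import java.util.List;

public class Stats {
    // Control-flow mistake: functions for the intended scenario but fumbles
    // the details. The loop bound reads one past the end of the list, and
    // the empty-list case is never considered.
    static int maxBuggy(List<Integer> values) {
        int best = values.get(0); // throws on an empty list
        for (int i = 1; i <= values.size(); i++) { // off-by-one: should be i < values.size()
            best = Math.max(best, values.get(i));
        }
        return best;
    }

    // Robust version: the bound and the empty case are handled explicitly.
    static int max(List<Integer> values) {
        if (values.isEmpty()) {
            throw new IllegalArgumentException("empty list");
        }
        int best = values.get(0);
        for (int i = 1; i < values.size(); i++) {
            best = Math.max(best, values.get(i));
        }
        return best;
    }
}
```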

    Finally, Claude 3.7 Sonnet was a “balanced predecessor.” The researchers found that it was a capable developer that produced well-documented code, but still introduced a large number of severe vulnerabilities.

    Though the models did have these distinct personalities, they also shared similar strengths and weaknesses. The common strengths were that they quickly produced syntactically correct code, had solid algorithmic and data structure fundamentals, and efficiently translated code to different languages. The common weaknesses were that they all produced a high percentage of high-severity vulnerabilities, introduced severe bugs like resource leaks or API contract violations, and had an inherent bias towards messy code.
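
To make the resource-leak category concrete, here is a hypothetical Java example of the pattern a SonarQube-style analyzer would flag, alongside the try-with-resources fix (neither snippet is from the study):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {
    // Resource leak: if readLine() throws, close() is never reached and the
    // file handle leaks. This is the kind of high-severity issue static
    // analysis flags.
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();
        reader.close(); // skipped entirely when readLine() throws
        return line;
    }

    // Fixed: try-with-resources guarantees the reader is closed on every path.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```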

    “Like humans, they become susceptible to subtle issues in the code they generate, and so there’s this correlation between capability and risk introduction, which I think is amazingly human,” said Fischer.

    Another interesting finding of the report is that newer models may be more technically capable, but are also more likely to generate risky code. For example, Claude Sonnet 4 has a 6.3% improvement over Claude 3.7 Sonnet on benchmark pass rates, but the issues it generated were 93% more likely to be “BLOCKER” severity.

    “If you think the newer model is superior, think about it one more time because newer is not actually superior; it’s injecting more and more issues,” said Prasenjit Sarkar, solutions marketing manager at Sonar.

    How reasoning modes impact GPT-5

    The researchers followed up their report this week with new data on GPT-5 and how the four available reasoning modes—minimal, low, medium, and high—impact performance, security, and code quality.

They found that increasing reasoning has diminishing returns on functional performance. Bumping up from minimal to low raises the model’s pass rate from 75% to 80%, but medium and high only reached pass rates of 81.96% and 81.68%, respectively.

    In terms of security, high and low reasoning modes eliminate common attacks like path-traversal and injection, but replace them with harder-to-detect flaws, like inadequate I/O error-handling. The low reasoning mode had the highest percentage of that issue at 51%, followed by high (44%), medium (36%), and minimal (30%).

    “We have seen the path-traversal and injection become zero percent,” said Sarkar. “We can see that they are trying to solve one sector, and what is happening is that while they are trying to solve code quality, they are somewhere doing this trade-off. Inadequate I/O error-handling is another problem that has skyrocketed. If you look at 4o, it has gone to 15-20% more in the newer model.”
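
Inadequate I/O error-handling typically means an exception that is caught and silently swallowed, so the failure never surfaces. A hypothetical Java illustration of the flaw class (not from the report):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AuditLog {
    // Inadequate I/O error-handling: the failure is silently swallowed, so
    // callers never learn the write was lost. Unlike an injection flaw,
    // nothing visibly breaks, which is what makes it harder to detect.
    static void appendSilently(Path log, String entry) {
        try {
            Files.writeString(log, entry + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException ignored) {
            // swallowed: no rethrow, no logging, no fallback
        }
    }
}
```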

There was a similar pattern with bugs: control-flow mistakes decreased beyond minimal reasoning, but advanced bugs like concurrency/threading issues increased alongside the reasoning level.
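
A common shape for the concurrency/threading bugs in that category is a check-then-act race on shared state. A hypothetical Java sketch (not from the report):

```java
import java.util.HashMap;
import java.util.Map;

public class Counters {
    private final Map<String, Integer> counts = new HashMap<>();

    // Check-then-act race: two threads can read the same value and both write
    // back value + 1, losing an increment. HashMap itself is also unsafe
    // under concurrent mutation.
    void incrementUnsafe(String key) {
        Integer current = counts.get(key);
        counts.put(key, current == null ? 1 : current + 1);
    }

    // One fix: synchronize the compound operation (or switch the map to
    // ConcurrentHashMap and rely on its atomic merge).
    synchronized void increment(String key) {
        counts.merge(key, 1, Integer::sum);
    }
}
```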

    “The trade-offs are the key thing here,” said Fischer. “It’s not so simple as to say, which is the best model? The way this has been viewed in the horse race between different models is which ones complete the most number of solutions on the SWE-bench benchmark. As we’ve demonstrated, the models that can do more, that push the boundaries, they also introduce more security vulnerabilities, they introduce more maintainability issues.”


