    Don’t Regulate AI Models. Regulate AI Use

    By big tee tech hub | February 2, 2026 | 5 Mins Read



    [Illustration: silhouettes looking at screens behind a building shape, with the scales of justice in a digital grid patterned setting]

    • Hazardous dual-use functions (for example, tools to fabricate biometric voiceprints to defeat authentication).
      Regulatory adherence: confine to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.

    Close the loop at real-world choke points

    AI-enabled systems become real when they’re connected to users, money, infrastructure, and institutions, and that’s where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).

    For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and postincident review, paired with privacy protections. We need to demand that deployers substantiate their claims, maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to damage, firms should have to show their work and face liability for harms.
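
    To make “tamper-evident logging” concrete, here is a minimal sketch, not taken from the article, of a hash-chained audit log: each entry commits to the digest of the previous entry, so any later alteration or deletion breaks the chain and shows up during audit or postincident review. The class name `AuditLog` and its fields are illustrative assumptions.

```python
import hashlib
import json
import time


class AuditLog:
    """Minimal sketch of a hash-chained (tamper-evident) audit log."""

    def __init__(self):
        self.entries = []          # each entry carries its own chain hash
        self.last_hash = "0" * 64  # genesis value for the chain

    def append(self, operator_id: str, action: str, details: dict) -> dict:
        entry = {
            "ts": time.time(),
            "operator": operator_id,   # identity binding: who acted
            "action": action,          # which high-risk capability was used
            "details": details,
            "prev": self.last_hash,    # commit to the previous entry
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


log = AuditLog()
log.append("operator-42", "generate_voiceprint", {"risk_tier": "high"})
assert log.verify()
```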

    This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.

    The E.U. approach: How this aligns, where it differs

    This framework aligns with the E.U. AI Act in two important ways. First, it centers risk at the point of impact: The act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with life-cycle obligations and complaint rights. Second, it recognizes that broadly capable systems (GPAI) warrant special treatment, without pretending publication control is a safety strategy. My proposal for the United States differs in three key ways:

    First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models starts to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what conditions, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes.

    Second, the E.U. can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and postincident accountability can be required without pretending we can “contain” software. They also span the many specialized U.S. agencies that may not be able to write higher-level rules broad enough to affect the whole AI ecosystem. Instead, the U.S. should regulate AI service choke points more explicitly than Europe does, to accommodate the different shape of its government and public administration.

    Third, the U.S. should add an explicit “dual-use hazard” tier. The E.U. AI Act is primarily a fundamental-rights and product-safety regime. The United States also has a national-security reality: Certain capabilities are dangerous because they scale harm (biosecurity, cyberoffense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.

    China’s approach: What to reuse, what to avoid

    China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective 10 January 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective 15 August 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.

    The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.

    But we can borrow two practical ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media, through mandatory labeling and provenance-forensics tools that give legitimate creators and platforms a reliable way to prove origin and integrity. When it is quick to check authenticity at scale, attackers lose the advantage of cheap copies or deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators of public-facing, high-risk services to file their methods and risk controls with regulators, as we do for other safety-critical projects. This should include due-process and transparency safeguards appropriate to liberal democracies, along with clear responsibility for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, which already include gaming, role-playing, and associated applications.
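
    As a concrete illustration of machine-checkable provenance (a sketch under assumptions, not a description of any particular labeling standard), a publisher could sign a small manifest that binds the media bytes to a synthetic-content label, and platforms could verify it cheaply at scale. The example below uses Ed25519 signing from the widely used Python `cryptography` package; the manifest fields are hypothetical.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign a manifest binding the media bytes to their origin.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...synthetic audio or video bytes..."
manifest = {
    "sha256": hashlib.sha256(media_bytes).hexdigest(),  # content binding
    "generator": "example-model-v1",                    # illustrative field
    "synthetic": True,                                  # mandatory label
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(manifest_bytes)


# Platform side: a cheap, scalable check of label and integrity.
def verify(media: bytes, manifest_bytes: bytes, signature: bytes, pub) -> bool:
    try:
        pub.verify(signature, manifest_bytes)           # provenance intact?
    except InvalidSignature:
        return False
    m = json.loads(manifest_bytes)
    return m["sha256"] == hashlib.sha256(media).hexdigest()  # bytes unchanged?


assert verify(media_bytes, manifest_bytes, signature, public_key)
```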

    A pragmatic approach

    We cannot meaningfully regulate the development of AI in a world where artifacts copy in near real time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at choke points; and applying obligations that scale with risk.

    Done right, this approach harmonizes with the E.U.’s outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China’s useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people and that still promote robust AI innovation.



