    Deloitte sounds alarm as AI agent deployment outruns safety frameworks

    By admin | January 28, 2026

    A new report from Deloitte warns that businesses are deploying AI agents faster than their safety protocols and safeguards can keep up, raising serious concerns around security, data privacy, and accountability.

    According to the survey, agentic systems are moving from pilot to production so quickly that traditional risk controls, which were designed for more human-centred operations, are struggling to meet security demands.

    Just 21% of organisations have implemented stringent governance or oversight for AI agents, despite the increased rate of adoption. Whilst 23% of companies stated that they are currently using AI agents, this is expected to rise to 74% in the next two years. The share of businesses yet to adopt this technology is expected to fall from 25% to just 5% over the same period.

    Poor governance is the threat

    Deloitte is not highlighting AI agents as inherently dangerous, but states the real risks are associated with poor context and weak governance. If agents operate as their own entities, their decisions and actions can easily become opaque. Without robust governance, it becomes difficult to manage and almost impossible to insure against mistakes.

    According to Ali Sarrafi, CEO & Founder of Kovant, the answer is governed autonomy. “Well-designed agents with clear boundaries, policies and definitions managed the same way as an enterprise manages any worker can move fast on low-risk work inside clear guardrails, but escalate to humans when actions cross defined risk thresholds.”

    “With detailed action logs, observability, and human gatekeeping for high-impact decisions, agents stop being mysterious bots and become systems you can inspect, audit, and trust.”
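
    As an illustration of the “governed autonomy” pattern Sarrafi describes, the sketch below gates each proposed action on a risk score, logs the decision, and escalates anything above a defined threshold to a human. The names and the 0.7 threshold are illustrative assumptions, not part of Deloitte’s report or any particular framework.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Illustrative risk threshold; a real deployment would set this per policy.
    RISK_THRESHOLD = 0.7

    @dataclass
    class Action:
        name: str
        risk_score: float  # 0.0 (harmless) .. 1.0 (high impact)

    @dataclass
    class AuditLog:
        entries: list = field(default_factory=list)

        def record(self, action: Action, decision: str) -> None:
            # Every decision is written down so it can later be inspected and audited.
            self.entries.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "action": action.name,
                "risk": action.risk_score,
                "decision": decision,
            })

    def execute(action: Action, log: AuditLog) -> str:
        """Run low-risk actions autonomously; escalate the rest to a human gatekeeper."""
        if action.risk_score >= RISK_THRESHOLD:
            log.record(action, "escalated_to_human")
            return "awaiting human approval"
        log.record(action, "executed_autonomously")
        return "done"

    log = AuditLog()
    print(execute(Action("summarise_ticket", 0.1), log))  # done
    print(execute(Action("issue_refund", 0.9), log))      # awaiting human approval
    ```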

    As Deloitte’s report suggests, AI agent adoption is set to accelerate in the coming years, and only the companies that deploy the technology with visibility and control, not those that deploy it quickest, will hold the upper hand over competitors.

    Why AI agents require robust guardrails

    AI agents may perform well in controlled demos, but they struggle in real-world business settings where systems can be fragmented and data may be inconsistent.

    Sarrafi commented on the unpredictable nature of AI agents in these scenarios. “When an agent is given too much context or scope at once, it becomes prone to hallucinations and unpredictable behaviour.”

    “By contrast, production-grade systems limit the decision and context scope that models work with. They decompose operations into narrower, focused tasks for individual agents, making behaviour more predictable and easier to control. This structure also enables traceability and intervention, so failures can be detected early and escalated appropriately rather than causing cascading errors.”
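
    A minimal sketch of the decomposition Sarrafi describes, under assumed names: instead of handing one agent the whole operation and its full context, a coordinator splits the work into narrow tasks, each of which sees only the context keys it needs. The task names and the run_agent stub are hypothetical placeholders for model-backed agents.

    ```python
    def run_agent(task: str, context: dict) -> str:
        # Placeholder for a model call; in practice this would invoke an LLM-backed
        # agent that sees only the context passed to it, nothing more.
        return f"[{task}] handled with context keys {sorted(context)}"

    # Each narrow task declares exactly which slice of context it may see.
    PIPELINE: list[tuple[str, list[str]]] = [
        ("extract_invoice_fields", ["invoice_text"]),
        ("validate_against_po",    ["invoice_fields", "purchase_order"]),
        ("draft_payment_request",  ["validated_invoice"]),
    ]

    def run_pipeline(full_context: dict) -> list[str]:
        results = []
        for task, allowed_keys in PIPELINE:
            # Limit decision and context scope: pass only the allowed keys.
            scoped = {k: v for k, v in full_context.items() if k in allowed_keys}
            results.append(run_agent(task, scoped))
        return results

    print(run_pipeline({
        "invoice_text": "...", "invoice_fields": {},
        "purchase_order": {}, "validated_invoice": {},
    }))
    ```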

    Accountability for insurable AI

    When agents take real actions in business systems, keeping detailed action logs changes how risk and compliance are viewed. With every action recorded, agents’ activities become clear and evaluable, letting organisations inspect them in detail.

    Such transparency is crucial for insurers, who are reluctant to cover opaque AI systems. This level of detail helps insurers understand what agents have done, and the controls involved, thus making it easier to assess risk. With human oversight for risk-critical actions and auditable, replayable workflows, organisations can produce systems that are more manageable for risk assessment.
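
    To make “auditable, replayable workflows” concrete, one hedged sketch of the kind of record an organisation might keep for every agent action is shown below; the field names are assumptions rather than any standard schema.

    ```python
    import json
    from datetime import datetime, timezone

    def make_audit_record(agent_id: str, action: str, inputs: dict,
                          output: str, approved_by: str | None) -> str:
        """Serialise one agent action so it can be inspected, replayed, and assessed."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "action": action,
            "inputs": inputs,           # what the agent saw
            "output": output,           # what the agent did or produced
            "approved_by": approved_by, # None for autonomous low-risk actions
        }
        return json.dumps(record)

    print(make_audit_record("billing-agent-01", "issue_refund",
                            {"order": "A-1029", "amount": 49.99},
                            "refund queued", approved_by="j.smith"))
    ```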

    AAIF standards a good first step

    Shared standards, like those being developed by the Agentic AI Foundation (AAIF), help businesses to integrate different agent systems, but current standardisation efforts focus on what is simplest to build, not what larger organisations need to operate agentic systems safely.

    Sarrafi says enterprises require standards that support operational control, including “access permissions, approval workflows for high-impact actions, and auditable logs and observability, so teams can monitor behaviour, investigate incidents, and prove compliance.”
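
    A sketch of what such operational standards might look like if expressed as a per-agent policy, using entirely hypothetical system names, action names, and log paths: access permissions, approval workflows for high-impact actions, and where activity is logged.

    ```python
    # Hypothetical per-agent policy: which systems the agent may touch, which actions
    # need a human sign-off, and where its activity is logged for later investigation.
    AGENT_POLICY = {
        "agent_id": "procurement-agent-01",
        "access_permissions": {
            "read":  ["erp.purchase_orders", "erp.suppliers"],
            "write": ["erp.draft_orders"],  # no direct write to approved orders
        },
        "approval_workflows": {
            "submit_order_over_10k": ["finance_manager"],  # high impact: human approval
            "change_supplier":       ["procurement_lead"],
        },
        "observability": {
            "action_log": "s3://audit/procurement-agent-01/",  # illustrative path
            "alert_on":   ["permission_denied", "approval_timeout"],
        },
    }

    def needs_approval(policy: dict, action: str) -> bool:
        """True if the action is listed as requiring human sign-off."""
        return action in policy["approval_workflows"]

    assert needs_approval(AGENT_POLICY, "submit_order_over_10k")
    assert not needs_approval(AGENT_POLICY, "create_draft_order")
    ```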

    Identity and permissions the first line of defence

    Limiting what AI agents can access and the actions they can perform is important to ensure safety in real business environments. Sarrafi said, “When agents are given broad privileges or too much context, they become unpredictable and pose security or compliance risks.”

    Visibility and monitoring are important to keep agents operating inside limits. Only then can stakeholders have confidence in the adoption of the technology. If every action is logged and manageable, teams can then see what has happened, identify issues, and better understand why events occurred.

    Sarrafi continued, “This visibility, combined with human supervision where it matters, turns AI agents from inscrutable components into systems that can be inspected, replayed and audited. It also allows rapid investigation and correction when issues arise, which boosts trust among operators, risk teams and insurers alike.”

    Deloitte’s blueprint

    Deloitte’s strategy for safe AI agent governance sets out defined boundaries for the decisions agentic systems can make. For instance, agents might operate with tiered autonomy: at first they can only view information or offer suggestions; from there they can be allowed to take limited actions with human approval; and once they have proven reliable in low-risk areas, they can be allowed to act automatically.
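
    Deloitte’s tiered-autonomy idea could be expressed as explicit levels an agent is promoted through as it proves reliable. The level names and the decision rule below are illustrative assumptions, not a scheme taken from the report.

    ```python
    from enum import IntEnum

    class AutonomyTier(IntEnum):
        OBSERVE = 0     # may only view information and surface suggestions
        SUPERVISED = 1  # may take limited actions, each requiring human approval
        AUTONOMOUS = 2  # may act automatically in proven low-risk areas

    def may_execute(tier: AutonomyTier, low_risk: bool, human_approved: bool) -> bool:
        """Decide whether an agent at a given tier may execute an action."""
        if tier is AutonomyTier.OBSERVE:
            return False                     # suggestions only, never execution
        if tier is AutonomyTier.SUPERVISED:
            return human_approved            # every action needs sign-off
        return low_risk or human_approved    # autonomous only in low-risk areas

    print(may_execute(AutonomyTier.SUPERVISED, low_risk=True, human_approved=False))  # False
    print(may_execute(AutonomyTier.AUTONOMOUS, low_risk=True, human_approved=False))  # True
    ```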

    Deloitte’s “Cyber AI Blueprints” suggest adding governance layers and embedding policies and compliance capability roadmaps into organisational controls. Ultimately, governance structures that track AI use and risk, together with oversight embedded into daily operations, are important for safe agentic AI use.

    Readying workforces with training is another aspect of safe governance. Deloitte recommends training employees on what they shouldn’t share with AI systems, what to do if agents go off track, and how to spot unusual, potentially dangerous behaviour. If employees fail to understand how AI systems work and their potential risks, they may weaken security controls, albeit unintentionally.

    Robust governance and control, alongside shared literacy, are fundamental to the safe deployment and operation of AI agents, enabling secure, compliant, and accountable performance in real-world environments.

    (Image source: “Global Hawk, NASA’s New Remote-Controlled Plane” by NASA Goddard Photo and Video is licensed under CC BY 2.0. )

     


    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

    AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
