    Meeting the new ETSI standard for AI security

    By admin · January 15, 2026 · 6 min read

    The ETSI EN 304 223 standard introduces baseline security requirements for AI that enterprises must integrate into governance frameworks.

    As organisations embed machine learning into their core operations, this European Standard (EN) establishes concrete provisions for securing AI models and systems. It stands as the first globally applicable European Standard for AI cybersecurity, having secured formal approval from National Standards Organisations to strengthen its authority across international markets.

    The standard serves as a necessary benchmark alongside the EU AI Act. It addresses the reality that AI systems carry specific risks – such as susceptibility to data poisoning, model obfuscation, and indirect prompt injection – that traditional software security measures often miss. Its scope runs from deep neural networks and generative AI through to basic predictive systems, explicitly excluding only those used strictly for academic research.

    ETSI standard clarifies the chain of responsibility for AI security

    A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard resolves this by defining three primary technical roles: Developers, System Operators, and Data Custodians.

    For many enterprises, these lines blur. A financial services firm that fine-tunes an open-source model for fraud detection counts as both a Developer and a System Operator. This dual status triggers strict obligations, requiring the firm to secure the deployment infrastructure while documenting the provenance of training data and auditing the model's design.
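
    To make the dual-status point concrete, the sketch below models a hypothetical governance register in Python. The enum, record layout, and system name are illustrative assumptions; the standard defines the roles themselves but not any particular record format.

    ```python
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class AIRole(Enum):
        """The three primary technical roles defined by ETSI EN 304 223."""
        DEVELOPER = auto()
        SYSTEM_OPERATOR = auto()
        DATA_CUSTODIAN = auto()

    @dataclass
    class AISystemRecord:
        """One entry in a hypothetical enterprise AI governance register."""
        name: str
        roles: set = field(default_factory=set)

    # A firm that fine-tunes an open-source model and runs it in production
    # holds both roles for that system, and inherits both sets of obligations.
    fraud_detection = AISystemRecord(
        name="fraud-detection-v2",
        roles={AIRole.DEVELOPER, AIRole.SYSTEM_OPERATOR},
    )
    ```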

    The inclusion of ‘Data Custodians’ as a distinct stakeholder group directly impacts Chief Data and Analytics Officers (CDAOs). These entities control data permissions and integrity, a role that now carries explicit security responsibilities. Custodians must ensure that the intended usage of a system aligns with the sensitivity of the training data, effectively placing a security gatekeeper within the data management workflow.
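
    In practice, that gate can start as a simple comparison of sensitivity tiers. The tier names, their ordering, and the policy below are assumptions for illustration rather than anything prescribed by the standard:

    ```python
    # Hypothetical sensitivity tiers; the names, ordering, and policy are
    # illustrative assumptions, not classifications from the standard.
    SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

    def custodian_approves(training_data_tier: str, deployment_tier: str) -> bool:
        """Approve a deployment only if its protection level is at least as
        high as the most sensitive tier of data used to train the model."""
        return SENSITIVITY_RANK[deployment_tier] >= SENSITIVITY_RANK[training_data_tier]

    assert custodian_approves("internal", "confidential")    # aligned: approved
    assert not custodian_approves("restricted", "internal")  # mismatch: blocked
    ```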

    ETSI’s AI standard makes clear that security cannot be an afterthought appended at the deployment stage. During the design phase, organisations must conduct threat modelling that addresses AI-native attacks, such as membership inference and model obfuscation.
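
    A design-phase threat register might begin as little more than a structured list seeded with the AI-native attacks the standard calls out. The mitigations paired with each entry below are generic industry suggestions, not quotations from the document:

    ```python
    # Hypothetical design-phase threat register; the threats echo the article,
    # while the mitigations are generic practice, not text from the standard.
    THREAT_REGISTER = [
        {"threat": "data poisoning",            "phase": "training",
         "mitigation": "provenance checks and integrity hashes on sourced data"},
        {"threat": "membership inference",      "phase": "inference",
         "mitigation": "query rate limits and output perturbation"},
        {"threat": "model obfuscation",         "phase": "inference",
         "mitigation": "access controls on model artefacts and endpoints"},
        {"threat": "indirect prompt injection", "phase": "inference",
         "mitigation": "isolating untrusted content from instructions"},
    ]

    def open_threats(register: list, mitigated: set) -> list:
        """List threats in the register with no accepted mitigation yet."""
        return [t["threat"] for t in register if t["threat"] not in mitigated]
    ```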

    One provision requires developers to restrict functionality to reduce the attack surface. For instance, if a system uses a multi-modal model but only requires text processing, the unused modalities (like image or audio processing) represent a risk that must be managed. This requirement forces technical leaders to reconsider the common practice of deploying massive, general-purpose foundation models where a smaller and more specialised model would suffice.
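
    As a rough sketch, attack-surface reduction of this kind can be enforced in configuration, with unused modalities disabled and rejected at the boundary. The config keys and the validation helper are hypothetical:

    ```python
    # Hypothetical deployment config: the system uses a multi-modal model,
    # but only text processing is required, so other input paths are disabled.
    MODEL_CONFIG = {
        "model": "general-purpose-multimodal-model",   # illustrative name
        "modalities": {
            "text": True,
            "image": False,   # unused: disabled to shrink the attack surface
            "audio": False,   # unused: disabled to shrink the attack surface
        },
    }

    def validate_request_modality(modality: str) -> None:
        """Reject inputs on any modality that policy has switched off."""
        if not MODEL_CONFIG["modalities"].get(modality, False):
            raise PermissionError(f"modality '{modality}' is disabled by policy")

    validate_request_modality("text")      # passes
    # validate_request_modality("image")   # would raise PermissionError
    ```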

    The document also enforces strict asset management. Developers and System Operators must maintain a comprehensive inventory of assets, including interdependencies and connectivity. This supports shadow AI discovery; IT leaders cannot secure models they do not know exist. The standard also requires the creation of specific disaster recovery plans tailored to AI attacks, ensuring that a “known good state” can be restored if a model is compromised.
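
    A minimal inventory entry might record the owning role, upstream dependencies, exposed endpoints, and a known good snapshot for recovery. The field names below are assumptions, and the shadow-AI check simply diffs observed model names against the register:

    ```python
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class AIAsset:
        """One row in a hypothetical AI asset inventory; field names are
        assumptions loosely mirroring the standard's interdependency and
        connectivity requirements."""
        name: str
        owner_role: str                                  # e.g. "System Operator"
        depends_on: list = field(default_factory=list)   # upstream components
        endpoints: list = field(default_factory=list)    # network connectivity
        known_good_snapshot: Optional[str] = None        # disaster-recovery anchor

    INVENTORY = [
        AIAsset("fraud-detection-v2", "System Operator",
                depends_on=["feature-store", "base-model-weights"],
                endpoints=["https://api.example.internal/score"],
                known_good_snapshot="backups/fraud-detection-v2/2026-01-10"),
    ]

    def shadow_ai(observed_models: set) -> set:
        """Models seen in traffic but missing from the inventory: the ones
        IT leaders cannot secure because they do not know they exist."""
        return observed_models - {a.name for a in INVENTORY}
    ```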

    Supply chain security presents an immediate friction point for enterprises relying on third-party vendors or open-source repositories. The ETSI standard requires that if a System Operator chooses to use AI models or components that are not well-documented, they must justify that decision and document the associated security risks.

    Practically, procurement teams can no longer accept “black box” solutions. Developers are required to provide cryptographic hashes for model components to verify authenticity. Where training data is sourced publicly (a common practice for Large Language Models), Developers must document the source URL and acquisition timestamp. This audit trail is necessary for post-incident investigations, particularly when attempting to identify if a model was subjected to data poisoning during its training phase.
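
    The integrity half of this is straightforward with standard tooling. The sketch below computes a SHA-256 digest with Python's hashlib and pairs it with a hypothetical provenance record holding the source URL and acquisition timestamp; the component and URL names are illustrative:

    ```python
    import hashlib
    from datetime import datetime, timezone

    def sha256_of(path: str) -> str:
        """SHA-256 digest of a model component, for verifying authenticity
        against the hash the Developer publishes."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical provenance record for publicly sourced training data,
    # capturing the source URL and acquisition timestamp the standard asks for.
    provenance_record = {
        "component": "model.safetensors",                    # illustrative name
        "sha256": None,   # populate via sha256_of(...) when the file is acquired
        "source_url": "https://example.com/public-corpus",   # illustrative URL
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }
    ```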

    If an enterprise offers an API to external customers, it must apply controls designed to mitigate AI-focused attacks, such as rate limiting to prevent adversaries from reverse-engineering the model or overwhelming defences to inject poisoned data.
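
    Rate limiting itself needs nothing AI-specific. A token bucket such as the one sketched below (a generic pattern, not a mechanism named by the standard) caps how fast any single client can probe the model:

    ```python
    import time

    class TokenBucket:
        """A minimal token-bucket rate limiter: one common way to slow
        model-extraction probing and bulk poisoning attempts via an API."""
        def __init__(self, rate: float, capacity: int):
            self.rate = rate              # tokens refilled per second
            self.capacity = capacity      # maximum burst size
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    limiter = TokenBucket(rate=5, capacity=10)   # ~5 requests/second per client
    if not limiter.allow():
        pass  # reject the request, e.g. with HTTP 429
    ```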

    The lifecycle approach extends into the maintenance phase, where the standard treats major updates – such as retraining on new data – as the deployment of a new version. Under the ETSI AI standard, this triggers a requirement for renewed security testing and evaluation.

    Continuous monitoring is also formalised. System Operators must analyse logs not just for uptime, but to detect “data drift” or gradual changes in behaviour that could indicate a security breach. This moves AI monitoring from a performance metric to a security discipline.
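
    One common way to turn that log analysis into a concrete signal is the Population Stability Index, which compares the distribution of a feature at training time against a recent window of live inputs. PSI is an illustrative choice here, not a method the standard mandates, and the 0.25 threshold is a widely used rule of thumb rather than a requirement:

    ```python
    import math

    def population_stability_index(expected: list, actual: list,
                                   bins: int = 10) -> float:
        """Population Stability Index over one numeric feature. Values above
        roughly 0.25 are often read as significant drift worth investigating."""
        lo, hi = min(expected), max(expected)
        width = (hi - lo) / bins or 1.0  # guard against a zero-width range

        def proportions(values: list) -> list:
            counts = [0] * bins
            for v in values:
                idx = max(0, min(int((v - lo) / width), bins - 1))
                counts[idx] += 1
            return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

        e, a = proportions(expected), proportions(actual)
        return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

    # Compare a training-time baseline against a recent window of live inputs.
    baseline = [0.1 * i for i in range(100)]
    recent = [0.1 * i + 3.0 for i in range(100)]   # shifted distribution
    drifted = population_stability_index(baseline, recent) > 0.25
    ```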

    The standard also addresses the “End of Life” phase. When a model is decommissioned or transferred, organisations must involve Data Custodians to ensure the secure disposal of data and configuration details. This provision prevents the leakage of sensitive intellectual property or training data through discarded hardware or forgotten cloud instances.
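
    An end-of-life runbook can encode the custodian's involvement as a hard precondition. The steps below are an illustrative checklist, not wording from the standard:

    ```python
    def decommission_checklist(asset: str, custodian_signed_off: bool) -> list:
        """Return disposal steps for an asset, refusing to proceed without
        Data Custodian sign-off. Steps are illustrative, not normative."""
        if not custodian_signed_off:
            raise RuntimeError("Data Custodian sign-off required before disposal")
        return [
            f"revoke credentials, API keys, and service accounts for {asset}",
            f"securely erase weights, training data, and configuration for {asset}",
            f"tear down cloud instances and storage that held {asset}",
            f"record disposal evidence for {asset} in the audit trail",
        ]
    ```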

    Executive oversight and governance

    Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programmes. The standard mandates that training be tailored to specific roles, ensuring that developers understand secure coding for AI while general staff remain aware of threats like social engineering via AI outputs.

    “ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems”, said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence.

    “At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.”

    Implementing these baselines in ETSI’s AI security standard provides a structure for safer innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate the risks associated with AI adoption while establishing a defensible position for future regulatory audits.

    An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues like deepfakes and disinformation.

    See also: Allister Frost: Tackling workforce anxiety for AI integration success

