NIS2 Article 21(1) sets the frame: essential and important entities must take appropriate and proportionate technical, operational, and organisational measures to manage the risks posed to the security of network and information systems. Article 21(2) then lists ten specific categories of measures that, at a minimum, must be addressed. These are not optional line items. They are the floor of what supervisory authorities expect when they assess an entity's cybersecurity posture.
The list itself is compact — ten items, each described in a handful of words. That brevity is deceptive. Each measure encompasses an entire operational domain. "Incident handling" is two words and years of work. "Supply chain security" sounds like a procurement checkbox until you consider the depth of due diligence, contractual requirements, and continuous monitoring it demands.
This guide examines each of the ten measures individually. For each, we cover what the regulation requires, what operational implementation looks like, what evidence supervisors are likely to request, and where the most consequential gaps typically emerge. The goal is to move from reading the list to building the capability.
The Proportionality Principle
Before the ten measures: a critical framing. Article 21(1) requires measures that are "appropriate and proportionate." This is not a loophole — it is a calibration mechanism. The measures must take into account the entity's exposure to risks, its size, the likelihood of occurrence of incidents, their severity including societal and economic impact, and the state of the art.
An SME operating in digital infrastructure implements these measures differently than a major energy utility. The domains are the same; the depth, complexity, and investment are proportionate. Supervisors understand this — but proportionality arguments must be documented and defensible. "We are small, so we did less" is not a proportionality argument. "Based on our risk assessment, our threat exposure, and the potential impact of disruption to our 340 customers, we implemented these specific controls at this level of maturity" is a proportionality argument.
Measure (a): Risk Analysis and Information System Security Policies
Article 21(2)(a) requires policies on risk analysis and information system security.
This is the foundation. Without a structured risk analysis methodology and documented security policies, every subsequent measure lacks the rationale that connects it to the entity's actual risk landscape. Supervisors will ask: how do you decide which risks to mitigate? How do you determine the adequacy of your controls? The answer must be a documented, repeatable process — not "our CISO makes a judgement call."
Operational requirements:
- A documented risk assessment methodology that specifies how risks are identified, analysed, evaluated, and treated
- An information security policy approved by management that sets the strategic direction for all security controls
- A risk register that documents identified risks, their assessment, treatment decisions, and residual risk levels
- Periodic risk assessment cycles — at least annually and after significant changes to systems, processes, or the threat landscape
- Clear ownership of the risk assessment process and decision authority for risk treatment
Evidence supervisors expect: A current risk register with dated assessments. Evidence that the methodology was followed (not just that a register exists). Board or management approval records for risk acceptance decisions. Records of periodic reviews showing the register is maintained, not static.
Common gap: The risk assessment exists as a point-in-time exercise conducted during initial compliance setup. It has not been updated since. New systems have been deployed, third-party relationships have changed, and the threat landscape has evolved — but the risk register still reflects the state of the world at the time of first assessment.
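The staleness gap described above lends itself to an automated check. A minimal sketch in Python, assuming an illustrative risk-register schema — the field names and the annual review cycle are assumptions drawn from the operational requirements above, not values prescribed by the directive:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical risk register entry; field names are illustrative.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    owner: str
    treatment: str        # e.g. "mitigate", "accept", "transfer"
    residual_level: str   # e.g. "low", "medium", "high"
    last_assessed: date

def stale_entries(register: list[RiskEntry], today: date,
                  max_age_days: int = 365) -> list[str]:
    """Return IDs of entries not reassessed within the review cycle
    (annual, per the operational requirement above)."""
    cutoff = today - timedelta(days=max_age_days)
    return [r.risk_id for r in register if r.last_assessed < cutoff]

register = [
    RiskEntry("R-001", "Ransomware on file servers", "CISO",
              "mitigate", "medium", date(2024, 11, 2)),
    RiskEntry("R-002", "Critical supplier outage", "COO",
              "accept", "low", date(2023, 6, 15)),
]
print(stale_entries(register, today=date(2025, 1, 10)))  # ['R-002']
```

A check like this turns the register from a point-in-time artefact into something that flags its own decay.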
Measure (b): Incident Handling
Article 21(2)(b) requires measures for incident handling.
This is the operational capacity to detect, respond to, and recover from security incidents. It encompasses the full incident lifecycle: preparation, detection, analysis, containment, eradication, recovery, and post-incident activity. For entities subject to both NIS2 and DORA, the incident handling requirements under both frameworks must be addressed in an integrated manner, though the reporting obligations have distinct requirements.
Operational requirements:
- An incident response plan that defines roles, responsibilities, escalation paths, and communication procedures
- Detection capabilities appropriate to the entity's risk profile — monitoring, alerting, and log analysis
- Classification criteria that distinguish between incidents requiring different response levels, including the threshold for NIS2 significant incident reporting under Art. 23
- Response procedures for the most likely incident scenarios in the entity's specific context
- Post-incident review processes that capture lessons learned and drive improvement
Evidence supervisors expect: A documented, tested incident response plan. Records of incident response exercises or tabletop simulations. Incident logs showing that the plan was followed during actual incidents. Post-incident review reports. Evidence that lessons learned have been fed back into the plan and into other security measures.
Common gap: The incident response plan exists but has never been tested. The entity has never conducted a tabletop exercise or simulated incident response scenario. When an actual incident occurs, the plan is consulted for the first time — and the team discovers that escalation contacts are outdated, communication templates do not exist, and the classification criteria are too ambiguous to apply under pressure.
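Once an incident crosses the Art. 23 significance threshold, the reporting clock starts. A small sketch of the deadline arithmetic — the 24-hour early warning and 72-hour notification come from Art. 23; the final report is due within one month of the notification, approximated here as 30 days:

```python
from datetime import datetime, timedelta

def nis2_reporting_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Compute Art. 23 reporting deadlines for a significant incident."""
    notification = detected_at + timedelta(hours=72)
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "incident_notification": notification,
        # "One month" from the notification, approximated as 30 days.
        "final_report": notification + timedelta(days=30),
    }

d = nis2_reporting_deadlines(datetime(2025, 3, 1, 9, 0))
print(d["early_warning"])          # 2025-03-02 09:00:00
print(d["incident_notification"])  # 2025-03-04 09:00:00
```

Embedding these deadlines in the incident workflow removes one source of ambiguity under pressure.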
Measure (c): Business Continuity and Crisis Management
Article 21(2)(c) requires business continuity, such as backup management and disaster recovery, and crisis management.
This measure covers the entity's ability to maintain or restore operations after a disruptive event. It is not limited to IT disaster recovery — it encompasses business process continuity, crisis governance, and the organisational decision-making that must function when normal management structures are overwhelmed.
Operational requirements:
- Business impact analysis (BIA) identifying critical business processes and the ICT systems that support them
- Defined recovery time objectives (RTO) and recovery point objectives (RPO) for critical systems
- Backup procedures with defined schedules, storage locations, encryption, and regular restoration testing
- Disaster recovery plans that can restore critical systems within defined RTOs
- Crisis management procedures including escalation criteria, crisis team composition, and decision-making authority
- Regular testing of business continuity and disaster recovery plans — not just backup integrity, but end-to-end recovery scenarios
Evidence supervisors expect: A current BIA with dated analysis. Defined and approved RTOs and RPOs. Backup test records showing regular restoration verification. DR test records with measured recovery times compared against RTOs. Crisis management exercise records. Evidence that test findings have driven improvements.
Common gap: Backups exist and are tested for integrity, but full disaster recovery has never been tested end-to-end. The entity can confirm that backup data is restorable, but cannot demonstrate that it can rebuild a functioning environment from those backups within the defined RTO. The gap between "backup works" and "we can recover" is where operational resilience lives or dies.
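The gap between "backup works" and "we can recover" can be made visible by comparing measured end-to-end recovery times from DR tests against the approved RTO per system. A minimal sketch with invented data shapes:

```python
def rto_breaches(tests: list[dict]) -> list[str]:
    """Return systems whose measured end-to-end recovery exceeded the RTO."""
    return [t["system"] for t in tests
            if t["measured_recovery_min"] > t["rto_min"]]

# Illustrative DR test records (minutes); system names are invented.
dr_tests = [
    {"system": "billing", "rto_min": 240, "measured_recovery_min": 310},
    {"system": "historian", "rto_min": 60, "measured_recovery_min": 45},
]
print(rto_breaches(dr_tests))  # ['billing']
```

The point is the comparison itself: a measured recovery time only exists if the end-to-end test was actually run.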
Measure (d): Supply Chain Security
Article 21(2)(d) requires supply chain security, including security-related aspects concerning the relationships between each entity and its direct suppliers or service providers.
This is one of the most operationally complex measures. It requires entities to assess and manage the security risks introduced by their supply chain — not just their direct suppliers, but the security properties of the products and services those suppliers deliver. The evolution from questionnaire-based vendor assessment to continuous oversight is directly relevant here.
Operational requirements:
- Identification of all suppliers and service providers whose products or services affect the security of the entity's network and information systems
- Risk assessment of the supply chain, taking into account the criticality of the supplier relationship, the overall quality of the supplier's cybersecurity practices, and the vulnerability landscape of their products
- Contractual arrangements that include security requirements, incident notification obligations, and audit rights
- Monitoring of supplier security posture — not just at onboarding, but continuously throughout the relationship
- Assessment of the risks arising from the supply chain as a whole, including concentration risks and cascading failure scenarios
Evidence supervisors expect: A supplier inventory with criticality classifications. Evidence of security due diligence during supplier selection. Contractual clauses addressing security requirements. Records of ongoing supplier monitoring. Evidence that supply chain risks are reflected in the entity's risk register.
Common gap: Supply chain security is treated as a procurement exercise — questionnaires at onboarding, contractual clauses, and then no further engagement until contract renewal. The security posture of critical suppliers changes continuously, but the entity's assessment does not. A supplier that passed due diligence two years ago may have undergone significant changes — acquisitions, restructuring, security incidents — that materially alter the risk profile of the relationship.
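One supply-chain risk that a supplier inventory makes immediately computable is concentration: several critical services depending on a single provider. A sketch, with invented supplier and service names:

```python
from collections import Counter

def concentration_risks(dependencies: list[tuple[str, str]],
                        threshold: int = 2) -> dict[str, int]:
    """dependencies: (critical_service, supplier) pairs.
    Return suppliers that underpin `threshold` or more critical services."""
    counts = Counter(supplier for _, supplier in dependencies)
    return {s: n for s, n in counts.items() if n >= threshold}

deps = [
    ("payments", "CloudCo"), ("identity", "CloudCo"),
    ("backups", "CloudCo"), ("telemetry", "NetVendor"),
]
print(concentration_risks(deps))  # {'CloudCo': 3}
```

A supplier appearing three times in this output is a cascading-failure scenario that belongs in the risk register under measure (a).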
Measure (e): Security in Network and Information Systems Acquisition, Development, and Maintenance
Article 21(2)(e) requires security in network and information systems acquisition, development, and maintenance, including vulnerability handling and disclosure.
This measure covers the security of the entity's technology lifecycle — from how it procures systems, through how it develops and configures them, to how it maintains them and handles the vulnerabilities that inevitably emerge. The EUVD operational playbook is a practical complement to this measure for organisations managing vulnerability prioritisation.
Operational requirements:
- Security requirements integrated into procurement processes — evaluating the security properties of products and services before acquisition
- Secure development practices for internally developed systems — including secure coding standards, security testing in the development lifecycle, and separation of development, testing, and production environments
- Vulnerability management procedures: identification, assessment, prioritisation, and remediation of vulnerabilities in all systems within the entity's scope
- Vulnerability disclosure policies that enable responsible reporting of vulnerabilities discovered in the entity's systems
- Patch management procedures with defined SLAs based on vulnerability severity and asset criticality
Evidence supervisors expect: Security requirements in procurement specifications. Secure development lifecycle documentation. Vulnerability scan records and remediation tracking. Patch management SLAs and compliance metrics. Vulnerability disclosure policy.
Common gap: Vulnerability management exists as a scanning programme — regular scans produce vulnerability reports. But the pipeline from identification to remediation is not managed to defined SLAs, and the entity cannot demonstrate that critical vulnerabilities are consistently remediated within an acceptable timeframe. The scan happens; the remediation tracking does not.
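The missing remediation tracking can be expressed as an SLA-breach query over the vulnerability pipeline. A sketch — the SLA days per severity are assumptions for illustration, not regulatory values; entities define their own based on asset criticality:

```python
from datetime import date

# Assumed remediation SLAs in days, by severity.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def sla_breaches(findings: list[dict], today: date) -> list[str]:
    """Return IDs of open findings past their remediation SLA."""
    return [f["id"] for f in findings
            if f["status"] == "open"
            and (today - f["identified"]).days > SLA_DAYS[f["severity"]]]

# Placeholder finding IDs, not real CVE identifiers.
findings = [
    {"id": "VULN-A", "severity": "critical",
     "identified": date(2025, 1, 1), "status": "open"},
    {"id": "VULN-B", "severity": "medium",
     "identified": date(2025, 1, 1), "status": "open"},
]
print(sla_breaches(findings, today=date(2025, 1, 20)))  # ['VULN-A']
```

The breach list, trended over time, is exactly the evidence a supervisor will ask for: not scan counts, but remediation performance against defined timelines.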
Measure (f): Policies and Procedures to Assess the Effectiveness of Cybersecurity Risk-Management Measures
Article 21(2)(f) requires policies and procedures to assess the effectiveness of cybersecurity risk-management measures.
This is the measure that separates compliance from security. Implementing controls is necessary. Knowing whether those controls actually work is what turns compliance investment into security outcomes. This measure demands that the entity systematically tests and evaluates whether its controls achieve their intended effect — not just whether they exist.
Operational requirements:
- Defined assessment methods appropriate to the control type — technical testing for technical controls, process review for procedural controls, audit for governance controls
- Assessment schedules that ensure all significant controls are evaluated periodically
- Criteria for determining control effectiveness — not just "is it configured" but "does it prevent, detect, or respond as designed"
- Processes for addressing identified deficiencies — remediation tracking with defined timelines and accountability
- Independence in assessment — the function evaluating controls should not be the same function operating them
Evidence supervisors expect: Assessment schedules and records. Test results demonstrating control performance. Deficiency tracking with remediation timelines and completion evidence. Evidence of independence in the assessment function. Trend data showing whether control effectiveness is improving, stable, or degrading.
Common gap: Effectiveness assessment is conflated with compliance checking. The entity verifies that controls are in place (the firewall rule exists, the policy is approved, the backup schedule is configured) but does not test whether the control achieves its security objective (the firewall rule blocks the intended traffic, the policy is followed in practice, the backup can be restored to a functional state within the target window). FortisEU's approach to control confidence over audit readiness addresses this distinction directly.
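The existence-versus-effectiveness distinction can be made concrete in code. A toy sketch — the rule format and policy engine are invented — showing how a configured control can pass a compliance check while failing a behavioural test:

```python
def rule_exists(rules: list[dict], port: int) -> bool:
    """Existence check: is a deny rule for this port configured?"""
    return any(r["port"] == port and r["action"] == "deny" for r in rules)

def rule_effective(evaluate, port: int) -> bool:
    """Effectiveness check: send a simulated packet through the policy
    engine and verify it is actually denied."""
    return evaluate({"port": port}) == "deny"

rules = [{"port": 23, "action": "deny"}]

def broken_engine(packet: dict) -> str:
    # Hypothetical bug: the engine never consults the rule set.
    return "allow"

print(rule_exists(rules, 23))             # True  — compliance check passes
print(rule_effective(broken_engine, 23))  # False — effectiveness test fails
```

Only the second check answers the question measure (f) actually asks.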
Measure (g): Basic Cyber Hygiene Practices and Cybersecurity Training
Article 21(2)(g) requires basic cyber hygiene practices and cybersecurity training.
This measure addresses the human factor. Technical controls are necessary but insufficient if the people operating and using systems do not have the awareness and skills to maintain security. Cyber hygiene covers the baseline practices — password management, phishing awareness, secure device handling, software update discipline — that reduce the attack surface attributable to human behaviour.
Operational requirements:
- A cybersecurity awareness programme covering all staff, including management body members (Art. 20(2) explicitly requires management body training)
- Training tailored to roles — general awareness for all staff, specialised training for ICT staff, and governance-oriented training for the management body
- Regular delivery — not a one-time onboarding exercise but a sustained programme with periodic refreshment
- Defined cyber hygiene practices communicated and enforced across the organisation
- Assessment of training effectiveness — not just completion tracking, but evaluation of whether training changes behaviour
Evidence supervisors expect: Training programme documentation. Delivery records showing who was trained, when, and on what topics. Management body training records (this is a specific Art. 20(2) requirement). Evidence of training effectiveness measurement — phishing simulation results, awareness assessment scores, or behavioural metrics.
Common gap: Training is delivered as a compliance exercise — annual e-learning modules with completion tracking. The entity can demonstrate that 95% of staff completed the module. What it cannot demonstrate is whether the training had any effect on behaviour. Completion is not competence. Phishing simulation campaigns, for example, provide a measurable behavioural baseline that generic completion tracking does not.
Measure (h): Policies and Procedures Regarding the Use of Cryptography and Encryption
Article 21(2)(h) requires policies and procedures regarding the use of cryptography and, where appropriate, encryption.
Cryptography is a foundational security control, and this measure requires entities to have a deliberate, documented approach to its use — not just "we encrypt things" but a policy-driven framework that specifies when, how, and with what cryptographic standards data and communications are protected.
Operational requirements:
- A cryptography policy specifying the approved algorithms, key lengths, and protocols for the entity's use cases
- Classification-driven encryption requirements — which data classifications require encryption at rest, in transit, or both
- Key management procedures: generation, distribution, storage, rotation, revocation, and destruction of cryptographic keys
- Regular assessment of cryptographic controls against evolving standards and threat capabilities
- Procedures for cryptographic agility — the ability to migrate to new algorithms when existing ones are deprecated or compromised
Evidence supervisors expect: Documented cryptography policy. Evidence that the policy is implemented in practice — encryption configurations, certificate management records, key rotation logs. Assessment records showing periodic review of cryptographic adequacy.
Common gap: Encryption is deployed opportunistically rather than policy-driven. TLS is enabled on external-facing services. Disk encryption is on by default. But there is no documented policy driving these decisions, no assessment of whether the encryption standards are adequate for the data classification, and no key management procedures beyond "the cloud provider handles it." When a supervisor asks for the cryptography policy, the entity discovers it does not have one.
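Moving from opportunistic to policy-driven starts with encoding the policy as data that configurations can be checked against. A minimal sketch — the approved protocol set is an assumption; the entity's cryptography policy defines its own:

```python
# Assumed policy: only these TLS protocol versions are approved.
APPROVED_TLS = {"TLSv1.2", "TLSv1.3"}

def non_compliant(endpoints: dict[str, str]) -> list[str]:
    """endpoints: hostname -> negotiated protocol version.
    Return hosts whose observed protocol is outside the approved set."""
    return sorted(host for host, proto in endpoints.items()
                  if proto not in APPROVED_TLS)

# Invented hostnames for illustration.
observed = {
    "portal.example.com": "TLSv1.3",
    "legacy.example.com": "TLSv1.0",
}
print(non_compliant(observed))  # ['legacy.example.com']
```

The same pattern extends to approved algorithms and key lengths, and gives the periodic adequacy review something concrete to update.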
Measure (i): Human Resources Security, Access Control, and Asset Management
Article 21(2)(i) requires human resources security, access control policies, and asset management.
This is a composite measure covering three interconnected domains. HR security addresses the security considerations throughout the employment lifecycle — screening, onboarding, role changes, and offboarding. Access control ensures that access to systems and data is appropriate, least-privilege, and regularly reviewed. Asset management ensures that all information assets are inventoried, classified, and protected according to their sensitivity.
Operational requirements:
- Pre-employment screening appropriate to the role's access level and sensitivity
- Security responsibilities defined in employment contracts and role descriptions
- Onboarding procedures that provision access based on role requirements, not requests
- Role change and offboarding procedures that promptly adjust or revoke access — the timeliness of access revocation on departure is a frequent supervisory focus
- Access control policies implementing least privilege and segregation of duties
- Periodic access reviews that verify current access is still justified
- Asset classification scheme and inventory covering information assets, hardware, software, and data
Evidence supervisors expect: HR security procedures integrated with the employment lifecycle. Access provisioning records showing role-based allocation. Access review records with evidence of action taken on identified discrepancies. Offboarding records demonstrating timely access revocation. Asset inventory with classification.
Common gap: Access reviews are conducted but findings are not actioned. The entity can demonstrate that it performed a quarterly access review, but the review identified 47 accounts with excessive privileges and none were remediated before the next review cycle. The review becomes a compliance artefact rather than a security activity.
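Offboarding timeliness in particular reduces to a reconciliation between the HR leaver list and active accounts. A sketch with invented identifiers:

```python
def unrevoked_access(leavers: set[str], active_accounts: set[str]) -> set[str]:
    """Leavers whose accounts are still active — each one is an
    offboarding failure that should be remediated and counted."""
    return leavers & active_accounts

leavers = {"j.doe", "a.smith"}
active = {"j.doe", "m.lee", "k.wong"}
print(unrevoked_access(leavers, active))  # {'j.doe'}
```

Run on a schedule, the non-empty output is both the remediation queue and the evidence that the review drives action rather than producing an artefact.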
Measure (j): Multi-Factor Authentication, Continuous Authentication, and Secured Communications
Article 21(2)(j) requires the use of multi-factor authentication or continuous authentication solutions, secured voice, video, and text communications, and secured emergency communication systems within the entity, where appropriate.
This is the most technically specific of the ten measures. It targets the authentication and communication mechanisms that, if compromised, provide attackers with the access or intelligence needed to escalate their position within the entity.
Operational requirements:
- Multi-factor authentication for all remote access, privileged access, and access to critical systems — "where appropriate" is interpreted broadly by supervisors, and entities that limit MFA to a narrow set of systems must justify why other systems are excluded
- Assessment of whether continuous authentication mechanisms (session monitoring, behavioural analytics, risk-based step-up authentication) are appropriate for the entity's risk profile
- Encryption of voice, video, and text communications — particularly those that may contain sensitive business or security information
- Secured emergency communication systems that function when primary communication channels are unavailable or compromised
- Documentation of which communication channels are approved for which sensitivity levels
Evidence supervisors expect: MFA deployment coverage metrics. Configuration evidence for communication encryption. Emergency communication plan and test records. Justification for any systems where MFA is not deployed despite being in scope.
Common gap: MFA is deployed for external-facing applications and VPN access, but internal systems — including administrative consoles, database access, and infrastructure management interfaces — rely on single-factor authentication. The entity's MFA coverage report shows 100% because it only counts the systems it decided to include, not all systems that should be included based on risk.
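The scoping trick described above disappears when coverage is computed over the full risk-based inventory rather than a self-selected subset. A sketch with illustrative system records:

```python
def mfa_coverage(systems: list[dict]) -> float:
    """Percentage of in-scope systems with MFA enforced.
    Scope must come from the risk assessment, not from the MFA rollout."""
    in_scope = [s for s in systems if s["in_scope"]]
    with_mfa = [s for s in in_scope if s["mfa"]]
    return 100.0 * len(with_mfa) / len(in_scope)

systems = [
    {"name": "vpn",              "in_scope": True, "mfa": True},
    {"name": "webmail",          "in_scope": True, "mfa": True},
    {"name": "db-admin-console", "in_scope": True, "mfa": False},
    {"name": "hypervisor-mgmt",  "in_scope": True, "mfa": False},
]
print(f"{mfa_coverage(systems):.0f}%")  # 50%
```

The same inventory also produces the justification list supervisors expect: every in-scope system without MFA needs a documented reason.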
Building the Evidence Architecture
The common thread across all ten measures is evidence. Supervisors do not assess NIS2 compliance by reading policy documents — they assess it by examining whether policies are implemented, whether implementation is effective, and whether the entity can demonstrate this through records, metrics, and test results.
The evidence architecture for Art. 21 compliance should produce:
- Policy evidence: Current, approved policies for each measure domain
- Implementation evidence: Technical configurations, process records, and operational logs showing policies are enforced
- Effectiveness evidence: Test results, metrics, and assessment findings showing controls achieve their intended purpose
- Improvement evidence: Records of deficiency identification, remediation, and re-testing showing the entity learns and adapts
FortisEU's compliance automation platform structures evidence collection around these four layers, generating the audit trail that supervisory interactions require.
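The four-layer model can also be encoded directly in an evidence store, so that gaps per measure are queryable. A minimal sketch — the layer names mirror this article; the record schema and measure labels are illustrative:

```python
from dataclasses import dataclass
from datetime import date

LAYERS = {"policy", "implementation", "effectiveness", "improvement"}

@dataclass
class EvidenceItem:
    measure: str       # e.g. "21(2)(b)"
    layer: str
    description: str
    collected_on: date

    def __post_init__(self):
        if self.layer not in LAYERS:
            raise ValueError(f"unknown evidence layer: {self.layer}")

def layer_gaps(items: list[EvidenceItem], measure: str) -> set[str]:
    """Layers with no evidence yet for a given measure."""
    return LAYERS - {i.layer for i in items if i.measure == measure}

items = [
    EvidenceItem("21(2)(b)", "policy", "IR plan v3 approved", date(2025, 2, 1)),
    EvidenceItem("21(2)(b)", "implementation", "SIEM alert rules export", date(2025, 2, 3)),
]
print(sorted(layer_gaps(items, "21(2)(b)")))  # ['effectiveness', 'improvement']
```

A measure with policy and implementation evidence but nothing in the effectiveness or improvement layers is exactly the profile of the untested-control gaps this guide keeps returning to.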
Key Takeaways
- Article 21(2) lists ten minimum measures — each is an operational domain, not a checkbox. Implementation means building capabilities, not documenting intentions. Every measure requires policies, technical implementation, effectiveness testing, and continuous improvement.
- Proportionality is a calibration mechanism, not an exemption. Entities must document why their implementation level is appropriate for their risk exposure. Generic "we are small" arguments do not satisfy supervisory scrutiny.
- Evidence is the common thread. Supervisors assess compliance through records, metrics, and test results — not policy documents. Build your evidence architecture around four layers: policy, implementation, effectiveness, and improvement.
- The most consequential gaps are not missing controls — they are untested controls. An incident response plan that has never been exercised, access reviews that identify problems without remediation, backup systems that have never been restored end-to-end — these are the findings that drive supervisory action.
- Measures (d) supply chain and (f) effectiveness assessment are the most operationally complex. Supply chain security requires continuous engagement beyond onboarding questionnaires. Effectiveness assessment requires testing that controls work, not just that they exist.
- All ten measures must be integrated, not siloed. Risk analysis (a) drives the scope of all other measures. Incident handling (b) depends on detection from (e) and (j). Business continuity (c) requires the asset knowledge from (i). Treating measures independently produces compliance artefacts, not security capability.
