If you build or run software for an 'essential' service, cyber resilience is about to become less of an IT hygiene issue and more of a board-level delivery constraint.

The Cyber Security and Resilience Bill is set up to make incident reporting faster, supply chains tighter, and 'best practice' easier for regulators to enforce.

For app, software and AI programmes, the impact is simple: you'll need to ship in a way that's provably secure, auditable, and reportable-without grinding delivery to a halt.

A split-screen image features a fast-moving kanban board on the left, displaying software delivery tasks and a CI/CD pipeline, while the right side highlights a UK Parliament document labeled "Cyber Security & Resilience Bill" with a prominent red "24h" timer icon. The background subtly depicts critical infrastructure elements like a power grid, hospital, and data center racks, all set against a modern, high-contrast aesthetic with UK color cues, emphasizing urgency with the overlay text: "24 hours to report. Are you ready?"

As of 18 April 2026, the Cyber Security and Resilience (Network and Information Systems) Bill is still progressing through Parliament-but its core direction is already shaping how enterprise buyers think about delivery risk, supplier assurance, and incident readiness.

Quick self-assessment

Loading...

Important: this quick assessment is a practical screening tool, not a legal opinion, formal compliance audit, or definitive statement of whether your organisation is in scope. Use it to highlight areas that may need closer review, and rely on legal, regulatory, security, and procurement advice before making decisions based on the result.

What you'll learn:

  • How to sanity-check whether you (or your suppliers) could be in scope-before procurement forces the conversation.

  • What's genuinely changing (scope expansion, faster reporting, stronger enforcement), and what's likely to arrive via secondary legislation.

  • How the 24h/72h reporting clock turns observability, ownership, and runbooks into product requirements.

  • A practical supply-chain due diligence checklist for software and AI vendors.

  • A "Compliance-by-Design" checklist you can apply across Discovery → Build → Operate, including an AI-specific security baseline.

In this article:

Does the Cyber Security and Resilience Bill apply to you (or your software suppliers)?

First, a date check: as of 18 April 2026, the Cyber Security and Resilience (Network and Information Systems) Bill is still going through Parliament (it has reached the Commons "report stage", with the date still to be announced). That means the detail can still move. But the direction of travel is already clear enough that procurement teams are starting to treat "NIS-style" cyber resilience as a supplier requirement, not a nice-to-have. (See the UK Parliament's Bill stages page and the government's policy statement on the Cyber Security and Resilience Bill.)

Here's the mistake we see most often in app and software programmes: assuming this is "for the regulated entity" (a hospital trust, an energy operator, a transport provider) and not for the teams building the software.

In practice, this lands in three ways:

  1. You are directly in scope (because you operate an essential service, provide a regulated digital service, run a data centre above thresholds, or deliver a "relevant managed service").

  2. Your supplier is in scope, and you inherit their obligations as contract clauses (security controls, evidence, incident co-operation).

  3. You become a high-impact supplier to an in-scope entity, and your customer treats you like a "critical supplier" risk even before any regulator formally designates anyone.

A quick scoping decision tree (non-legal, but useful)

  • Do you deliver an essential service, or enable one? (Think: "if our system is down, can the organisation still do the thing society relies on?")

  • Do you provide ongoing IT admin/monitoring/support into customer environments? That's exactly the shape of a "managed service" the Bill is targeting.

  • Could a cyber incident in your product materially disrupt an essential/digital/managed service? Not just availability-also integrity and confidentiality.

  • Would it be credible for a regulator (or a risk team) to call you 'high impact'? If your service is hard to replace quickly, the answer is often yes.

If you're running a software build and you want a feel for "what governance and delivery evidence looks like" in practice, Scorchsoft's app development FAQs (how delivery and governance works) give you a good baseline.

Important note: this article is not legal advice. It's a delivery-focused translation of what's in the Bill and DSIT's factsheets so you can make better programme decisions earlier.

What's actually new (and what's 'more enforceable') in the Bill

If you've lived through ISO 27001 projects, DSPT questionnaires, or the annual "prove you're secure" dance, it's tempting to treat this Bill as another checklist.

That's not the real shift.

The Cyber Security and Resilience Bill is designed to turn cyber resilience into a regulated outcome that organisations must be able to evidence-and to extend that expectation beyond the classic NIS-regulated operators into more of the digital supply chain.

Here are the changes that matter most for app, software and AI programmes.

1) Scope expansion: more organisations pulled into 'NIS-style' regulation

DSIT's factsheets make the intent explicit: the regime expands beyond operators of essential services and digital service providers to include:

  • Data centres as an essential service, with thresholds defined primarily by rated IT load (for example, ≥1MW for most data centres, and ≥10MW for enterprise data centres). Ofcom is the operational regulator. (Data centres factsheet)

  • Relevant managed service providers (RMSPs)-i.e. providers delivering ongoing management such as support, monitoring, and active administration into customer IT systems, whether access is on-premises or remote. (RMSPs factsheet)

  • A new mechanism to designate "critical suppliers" whose disruption could cause significant societal or economic impact, bringing them under comparable obligations via secondary legislation. (Designating critical suppliers factsheet)

Translation: even if your organisation isn't regulated today, your hosting, monitoring, managed support, and key SaaS dependencies may be tomorrow.

2) Faster reporting and wider reporting triggers

The Bill moves to an initial notification within 24 hours and a fuller report within 72 hours of becoming aware-across OESs, RDSPs, RMSPs and (where designated) critical suppliers. The explanatory notes set out those timelines and the requirement to copy the CSIRT/NCSC at the same time as the regulator. (Bill text + explanatory notes PDF)

3) Enforcement is being strengthened (because the old maximum fines were easy to ignore)

The enforcement factsheet is blunt: the current maximum penalty of £17 million is not proportionate for larger regulated entities, and the Bill aims to simplify penalty bands and introduce new maximum penalties that better reflect turnover. (Enforcement factsheet)

4) A moving target: more will be set through secondary legislation and codes of practice

Many of the "how exactly will this be judged?" details (thresholds, sector tailoring, what counts as proportionate, and how regulators calculate turnover) are expected to be implemented through secondary legislation after Royal Assent. That's why you need a delivery approach that can absorb new requirements without a full re-platform.

If you want the practical version of this: you're not just buying code any more-you're buying a build-and-operate system that produces audit-ready evidence. That's where choosing bespoke app and software development services with proper governance becomes a risk decision, not a style preference.

Incident reporting: 24 hours / 72 hours means your systems must be 'reportable'

You can't "comply at the point of breach". The Bill's 24-hour initial notification effectively forces you to design for fast, defensible answers.

DSIT describes a two-stage structure:

  • Within 24 hours of becoming aware: a "light-touch" notification to the regulator, with the NCSC sighted at the same time.

  • Within 72 hours: a fuller notification with more detail, but only insofar as the information is known at that point.

It also widens what's reportable in practice-explicitly calling out incidents like successful ransomware and "pre-positioning" (where an attacker has access and could cause significant harm even if disruption hasn't happened yet). (Incident reporting factsheet)

What this changes for app and software delivery

If your programme touches critical services (or you supply those who do), you need to be able to produce the early facts quickly. That usually means building and operating with:

  • Centralised logging and audit trails (and the ability to find a single customer's activity across services)

  • Clear service ownership (who is on the hook to decide severity at 2am?)

  • Pre-defined severity criteria that map to "significant impact" triggers

  • Dependency mapping (cloud, third-party APIs, queued jobs, identity provider, email/SMS providers)

  • Rehearsed incident runbooks (not just "we have an IR policy")

The DSIT factsheet also adds a practical twist many teams miss: after a full notification, RMSPs/RDSPs and data centre operators must take steps to identify which customers are likely to be adversely affected, and then notify them. That creates operational work: knowing which tenants were on the affected infrastructure, how to contact them, and how to word updates without making claims you can't evidence yet.

A simple '24h → 72h' internal checklist

If you had an incident right now, could you answer these within a working day?

  • What service is impacted, and what does "normal" look like for it?

  • Which customers are affected (or likely to be)?

  • What changed in the last 24-72 hours (deployments, config changes, new dependencies)?

  • What evidence do we have (logs, alerts, access trails), and where is it stored?

  • Who is authorised to talk to regulators, the NCSC, and customers?

Operational readiness is part of delivery, not an afterthought-which is exactly why we often point clients to an app launch checklist (monitoring and post-launch readiness) even when the topic is "compliance". Launch is the start of accountability.

A clean infographic titled "24h → 72h Incident Reporting Timeline" displays a horizontal timeline on a white background. Key milestones are highlighted: "Within 24 hours (initial notification)" and "Within 72 hours (full report)," accompanied by icons and labels for each step, including Detection/alert, Triage & severity decision, Assign incident owner, Initial notification fields, Evidence collection, Customer impact analysis, Customer notification workflow, and Full report fields. The design features a modern aesthetic with navy, grey, and red accents, ensuring readability for a UK cybersecurity audience.

Supply chain duties and 'designated critical suppliers': why procurement will start asking harder questions

The Bill treats supply chain security as a first-class resilience problem.

That sounds abstract until you see the mechanism: regulators get a formal power to designate "critical suppliers"-a small number of suppliers whose disruption could cause significant economic or societal impact-so those suppliers can be held to core security and incident reporting duties too. DSIT's policy statement spells out that the goal is consistent standards across the most critical tiers of the supply chain. (Policy statement)

The "designating critical suppliers" factsheet even includes a recent, painfully real example: the June 2024 Synnovis cyber attack, which disrupted pathology services across multiple London hospitals, with 11,000+ appointments and operations disrupted and the company estimating £32.7 million in losses. (Designating critical suppliers factsheet)

What changes, practically?

If you buy or build software that supports essential services, you should assume supplier assurance will tighten before enforcement tightens.

The critical supplier designation test described by DSIT is a useful risk lens even if you're not designated:

  • You supply directly to an OES/RDSP/RMSP.

  • You rely on network and information systems to provide that supply.

  • An incident affecting your systems could disrupt the service your customer provides.

  • That disruption is likely to have significant impact on the economy or day-to-day functioning of society.

  • Substitutability matters: if you can't realistically be swapped out quickly, you look more "critical".

Translate "supply chain duty" into a software and AI project

Your "supply chain" isn't just outsourcing. It's everything your delivery and operations depend on:

  • Open-source packages and commercial SDKs

  • Cloud infrastructure, managed databases, message queues

  • CI/CD pipelines, build agents, secrets stores

  • Managed hosting / monitoring providers

  • Identity providers, email/SMS providers

  • AI model/API providers and vector databases

The uncomfortable truth: your app can have pristine code and still fail compliance because a dependency is unmanaged, unpatched, or operationally opaque.

Procurement and contract clauses you should expect

For enterprise buyers, expect more requests for:

  • Evidence of a secure SDLC (reviews, testing, change control)

  • Vulnerability management and patch SLAs

  • Access control evidence (least privilege, segregation of environments)

  • Incident co-operation clauses (timelines, who reports what, how evidence is shared)

  • Subcontractor approval rights and dependency disclosure

  • Continuity/exit plans (how you hand over code, runbooks, and environments)

Vendor due diligence checklist (software/AI-specific)

If you want a fast, high-signal checklist to use with suppliers, start here:

  1. Secure build evidence: threat model, architecture diagram, Definition of Done includes security tests.

  2. Vulnerability process: scanning, triage, fix targets, and how emergency patches ship.

  3. Access discipline: MFA, privileged access management, environment segregation, audit trails.

  4. Incident readiness: who's on call, how you'll support 24h/72h reporting, comms templates.

  5. Dependency control: inventory (SBOM-style), versioning discipline, change approvals.

  6. Data handling: what you store, where it lives, retention and deletion, encryption.

  7. Resilience proof: backups, recovery tests, realistic RTO/RPO, and evidence of drills.

If you're assessing a delivery partner, it helps to map these questions to their real-world practices and track record. Scorchsoft's capabilities and delivery features page is a useful reference point for what "good" tends to include in an enterprise-grade build.

A mini-scenario: the first 24-72 hours of a supplier outage

Prepared supplier: can identify affected tenants quickly, has immutable logs, can provide a written timeline of actions, ships mitigations safely (feature flags, config rollback), and supports customer notifications with clear, bounded statements.

Unprepared supplier: can't say which customers are affected, has patchy logs, can't evidence access, and makes "hand-wavy" claims that collapse under scrutiny.

The Bill doesn't create this difference-it just makes it visible, fast.

A diagram titled "Software Supply Chain Map" features a central box labeled "Your App / AI System," surrounded by connected spokes leading to various elements such as a Cloud Provider, Managed Database, and Outsourced Development Team. Dependencies are color-coded in green for "Contractual control" and blue for "Technical control," with some spokes displaying both colors, all set against a clean white background with minimal design and easy-to-read labels, suitable for a UK enterprise cyber resilience context.

Technical and methodological security requirements: turning CAF outcomes into delivery requirements

One of the fastest ways to waste money in a regulated environment is to write policies that sound reassuring, then ship software that can't demonstrate the outcomes those policies promise.

Regulators need a way to assess "appropriate and proportionate" security in a consistent way. In UK critical sectors, the NCSC's Cyber Assessment Framework (CAF) is the most common lens for that conversation, built around objectives like governance and risk management, protection, detection, and response/recovery. (NCSC Cyber Assessment Framework (CAF))

So what does that mean for your programme?

From outcome to buildable requirement (a practical translation)

Outcome-based frameworks are deliberately non-prescriptive. Your job is to translate them into engineering requirements you can test and evidence.

A useful starting set for most app and software builds looks like this:

  • Identity and access control: MFA, least privilege, role-based access, joiner/mover/leaver process, break-glass access.

  • Secure configuration: baseline hardening for infrastructure, containers, databases, and SaaS tools; config drift detection.

  • Vulnerability management: dependency scanning, patch cadence, emergency patch path, and "what do we do about zero-days?"

  • Secrets management: no secrets in code, rotation, scoped tokens, separate secrets per environment.

  • Encryption: in transit (TLS) and at rest where appropriate, with key management clarified.

  • Secure logging: logs that are tamper-resistant, time-synchronised, and useful for investigations.

  • Change control: peer review, protected branches, CI gates, and an auditable deployment trail.

  • Back-up and recovery: defined RTO/RPO, tested restores, and evidence of drills.

  • Security testing in the Definition of Done: code review, automated tests, SAST/DAST where appropriate, and pen testing at sensible milestones.

The delivery artefacts that make compliance "evidenceable"

In regulated-style delivery, these artefacts are not bureaucracy-they're what allows you to answer hard questions quickly:

  • Threat model (per epic / major feature)

  • Architecture diagram + data flow diagram

  • Access matrix (who can do what, where)

  • Environment inventory (what exists, what it's for, how it's protected)

  • Test evidence (automated + manual) and release notes

  • Incident runbooks and escalation paths

  • Disaster recovery test results and lessons learned

If your organisation needs predictable sign-off points and documentation (common in critical infrastructure and procurement-led programmes), a more structured approach can help. That's why we keep a "boringly reliable" option on the table: structured delivery (clear milestones and evidence points).

The key idea: avoid tick-box compliance. CAF-style assessment is outcome-focused; you still need engineering judgement and clear operational ownership to make it real.

What changes in app, software and AI projects (Discovery → Build → Operate): the 'Compliance-by-Design' checklist

If you take one thing from this article, make it this: the fastest path to being "Bill-ready" is not a separate compliance project. It's a delivery system that repeatedly ships secure, observable, well-owned software.

Here's a practical checklist you can drop into your programme cadence.

Discovery (decide what you're really building, and how 'critical' it is)

  • Service criticality: what fails if this system is down, wrong, or leaking data?

  • Data classification: what's personal, sensitive, regulated, or commercially catastrophic?

  • Dependency inventory: cloud, identity, third-party APIs, open-source libraries, AI providers.

  • Reporting expectations: who would need to notify regulators/customers, and what would you need to know within 24 hours?

Build (turn outcomes into delivery mechanics)

  • Threat model at epic level (don't leave it to "security review at the end")

  • Secure defaults: least privilege, environment segregation, hardened configuration

  • CI gates: dependency scanning, code review, secrets scanning, automated tests

  • Definition of Done includes: security testing evidence + operational notes (not just features)

Operate (treat incident readiness as a feature)

  • Logs that can answer "what happened?" quickly (and per customer/tenant)

  • Clear on-call ownership and escalation paths

  • Customer notification workflow (who contacts whom, using what channels)

  • Runbooks for top incident types (ransomware, credential compromise, third-party outage)

AI-specific note: new supply-chain and misuse risks

If you're integrating LLMs/ML (even via a hosted API), you inherit risks like prompt injection, data leakage, and opaque third-party dependencies. The NCSC guidelines for secure AI system development break secure AI work down into secure design, development, deployment, and operation.

A minimal "secure AI" checklist for most business systems:

  • Hard boundaries on what the model can access (data, tools, actions)

  • Input/output filtering and logging (so you can investigate misuse)

  • Evaluation against misuse cases (not just accuracy)

  • Vendor and model dependency clarity (what happens if the API is down or changes?)

If you're planning a controlled AI integration, this is where our AI app development experience is useful: the exciting bit isn't "adding AI"-it's adding it without losing auditability.

And yes, this all applies to mobile too: a front-end app is still part of the incident surface area when it connects to essential services. (See Scorchsoft's mobile app development capabilities.)

A three-column diagram titled "Compliance-by-Design Checklist" features sections labeled Discovery, Build, and Operate. Each column includes bulleted lists detailing critical elements such as service criticality and data classification under Discovery, secure defaults and threat models under Build, and centralized logging and customer notification workflows under Operate, all presented in a modern infographic style with a white background, navy and grey text, and red accents.

The complexities (and why a trusted supplier matters): secondary legislation, regulators, and overlapping rules

A lot of "what you need to do" will not be fully known until after Royal Assent, because the Bill is designed to be futureproofed via secondary legislation (for example: updating security requirements, bringing more sectors into scope, and refining thresholds over time). (Futureproofing factsheet)

That creates a very normal leadership problem:

  • You need to prepare early (because procurement and incident readiness can't be bolted on later).

  • You also need to avoid building a brittle compliance "solution" that only matches today's draft.

Expect multiple regimes to overlap

Most organisations that touch critical services already live under more than one rule set. The Bill won't replace those-it will sit alongside them. Common overlaps include:

  • UK GDPR / data protection duties and breach reporting processes

  • Contractual controls (supplier assurance, audit rights, incident co-operation)

  • Cross-border obligations (especially if you operate in the EU or serve EU customers)

The simplest way to stay sane is to build an adaptable control set: CAF-aligned outcomes + strong supply chain discipline + rehearsed incident readiness. Those three survive shifting guidance.

Why supplier track record starts to matter more

When the environment is moving, you don't just need developers. You need a delivery partner who can:

  • Translate outcome-based expectations into buildable requirements

  • Produce evidence without slowing delivery to a crawl

  • Support operational reality (monitoring, incident response, customer communications)

If you want the primary source to hand to your legal and risk colleagues, use the Bill text and explanatory notes (for the fine print).

And if you suspect your systems might be in scope (or you supply those who are), it's worth doing a short scoping and delivery-readiness assessment with a team that's used to evidence-led builds. That's the difference between "we're agile" and "we can prove we did the right thing". If you want to explore that route, you can work with a delivery partner who can evidence security and resilience.

Key Takeaways

  • Assume this will land through procurement first - even if you're not directly regulated, enterprise buyers will push incident co-operation, evidence, and supplier assurance down the chain.

  • Design for the 24h/72h clock now - you can't improvise logs, ownership, and runbooks during an incident.

  • Treat your software supply chain as part of the system - libraries, CI/CD, cloud services, MSPs, and AI APIs all become resilience dependencies.

  • Translate outcomes into artefacts - threat models, architecture/data-flow diagrams, access matrices, test evidence, and recovery drills are what make "proportionate security" provable.

  • Build an adaptable control set - secondary legislation will evolve; CAF-aligned outcomes plus incident readiness tends to survive changing thresholds and guidance.

If you've made it this far, you should now be able to:

  • sanity-check whether your organisation (or your suppliers) could be pulled into scope,

  • spot the three delivery-impacting obligations (reporting, supply chain duties, and evidenceable security outcomes), and

  • run a practical readiness checklist across Discovery → Build → Operate.

Want to sanity-check your own programme?

If your systems touch critical services-or you suspect you're becoming a high-impact supplier-the most valuable next step is a short scoping and delivery-readiness conversation to identify the fastest risk-reduction actions (without turning your programme into a paperwork factory).

You can start that conversation here: Contact Scorchsoft.