April 2, 2026

Enterprise data resilience in the AI era has moved past backup

  • Enterprise data resilience means ensuring the data AI systems act on is trustworthy.
  • Infrastructure decisions will determine whether AI deployments hold up under real operating conditions.

World Backup Day turns 15 this year. When it started, the message was simple: stop losing your files. That framing made sense in 2011. In 2026, it barely scratches the surface of what enterprise data resilience actually requires.

The change has been gradual, but the acceleration has not. The data infrastructure that underpins AI is carrying a different kind of load – and the consequences of getting it wrong have changed in kind, not in degree. A corrupted backup used to mean lost work. A compromised data pipeline feeding an autonomous AI agent means something considerably worse: decisions made at scale, at speed, on data that cannot be trusted.

Kumar Mitra, executive director and general manager of the infrastructure solutions group for Greater Asia Pacific at Lenovo, says preventing data loss is necessary, but not sufficient. As AI – particularly inferencing – becomes central to real-time decision-making, resilience must go beyond availability to ensuring data is trustworthy.

The quality of inputs determines the quality of outcomes, and in an AI-driven enterprise, that relationship is unforgiving.

The pilot problem has a data problem inside it

At Lenovo Tech World in Hong Kong earlier this month, one statistic kept surfacing in executive conversations: roughly half of all enterprise AI proofs of concept never reach production. The reasons cited ranged from cost overruns to governance gaps to infrastructure misalignment. But Mitra points to something more fundamental sitting underneath most of those explanations.

A large part of that failure rate comes down to data readiness. Proofs of concept work in isolation. They break when organisations try to scale them against real-world environments where data is inconsistent, ungoverned, or fragmented in systems that were never designed to talk to each other.

The infrastructure consequences of this are specific. Enterprise data resilience in the context of AI production workloads is not about whether data can be recovered after a failure. It is about whether data is consistently accessible, properly governed, and integrated across environments before anything goes wrong.

The backup conversation and the AI readiness conversation, which most enterprises have been treating as separate problems owned by separate teams, are the same conversation.

Inferencing changes the infrastructure calculus


The cost curve for AI infrastructure is not what most enterprise planning assumptions are built on. Art Hu, Lenovo’s SVP Global CIO, was candid about this at the Hong Kong event: inferencing costs can run up to 15 times training costs over a model’s operational lifecycle. By 2030, 75% of AI compute is expected to be inferencing workloads, with 80% of that running on distributed edge infrastructure.

That shift changes what enterprise data resilience needs to deliver. Training a model is a bounded exercise. Inferencing is continuous, high-volume, and increasingly real-time. The data infrastructure supporting it – storage, retrieval, movement – has to perform not at project timescales but at operational ones, and the margin for latency or degradation is narrow.

Mitra’s framing is that the efficiency with which data can be stored and moved to support real-time decisions at scale is now a core infrastructure design requirement. CIOs still thinking about data infrastructure in terms of storage costs rather than inferencing performance are planning for an AI deployment environment that does not exist.

Sovereignty reshapes what resilience means in ASEAN

Lenovo’s CIO Playbook 2026, commissioned with IDC, found that 86% of APAC organisations are running hybrid AI environments. In ASEAN, the figure reflects data sovereignty pressures that are structural, not transitional. These are not organisations that chose hybrid as a design preference but organisations operating in jurisdictions where certain data cannot leave the country, and where that constraint shapes downstream infrastructure decisions.

The enterprise data resilience implications are significant. The traditional model – centralised systems, replication across geographies, recovery from a primary site – does not work when data cannot move freely across borders. What replaces it is what Mitra calls distributed resilience: workloads placed based on data sensitivity, latency requirements, and regulatory constraints, with architecture designed to ensure continuity and security without compromising compliance at any point.

“In a fragmented regulatory landscape like ASEAN, resilience is not about recovery. It is about maintaining trust and continuity wherever the data resides,” Mitra said.

Organisations operating in multiple ASEAN markets are effectively managing multiple resilience architectures simultaneously, each calibrated to a different regulatory environment. The ones treating this as a unified infrastructure challenge rather than a country-by-country compliance exercise are the ones building something that will actually scale.
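
To make the distributed-resilience idea concrete, the sketch below shows how a placement decision might weigh data sensitivity, residency rules, and latency when choosing where a workload and its replicas run. It is illustrative only: the region names, policy table, and latency figures are hypothetical and do not describe Lenovo's or any vendor's implementation.

```python
from dataclasses import dataclass

# Hypothetical policy table: which regions may hold data of each sensitivity class.
# Region names and residency rules are illustrative, not actual ASEAN regulations.
RESIDENCY_RULES = {
    "public":        {"sg-1", "id-1", "th-1", "global-cloud"},
    "internal":      {"sg-1", "id-1", "th-1"},
    "regulated-pii": {"id-1"},  # e.g. data that must stay in-country
}

# Hypothetical measured latency (ms) from in-country users to each region.
LATENCY_MS = {"id-1": 8, "sg-1": 25, "th-1": 40, "global-cloud": 120}

@dataclass
class Workload:
    name: str
    data_class: str      # sensitivity classification of the data it touches
    max_latency_ms: int  # latency budget for real-time inferencing

def place(workload: Workload) -> list[str]:
    """Return regions that satisfy both residency and latency constraints."""
    allowed = RESIDENCY_RULES[workload.data_class]
    candidates = [r for r in allowed if LATENCY_MS[r] <= workload.max_latency_ms]
    if not candidates:
        raise RuntimeError(f"No compliant placement for {workload.name}")
    # Prefer the lowest-latency compliant region; the rest can hold replicas.
    return sorted(candidates, key=LATENCY_MS.get)

print(place(Workload("fraud-scoring-inference", "regulated-pii", max_latency_ms=20)))
# ['id-1'] -> only the in-country region is both compliant and fast enough
print(place(Workload("product-recommendations", "public", max_latency_ms=50)))
# ['id-1', 'sg-1', 'th-1'] -> a primary plus cross-border replicas are permissible
```

The point of the sketch is that residency and performance are evaluated together at placement time, rather than bolted on as a compliance check after the architecture is fixed.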

Agentic AI and the integrity problem

The data resilience conversation is about to get harder. Agentic AI – autonomous systems that reason and write back to enterprise data without a human in the loop at every step – introduces a category of risk that backup and recovery frameworks were not designed for. The question is not only whether data can be restored, but whether, after an autonomous agent has acted on it, the data can still be trusted.

Lenovo’s CIO Playbook data captures the hesitation this creates: 33% of organisations cite security and privacy risks as a primary concern, and 27% point to data quality and integration issues. The deliberate slowness with which most organisations are approaching agentic AI deployment – the same playbook data shows most expect it to take more than 12 months to scale – reflects a recognition that the governance frameworks required to protect data integrity in an agentic environment do not yet exist at most enterprises.

Mitra’s prescription is strong governance from the start, including access controls, validation of how data is written or modified, and full traceability of AI-driven actions. As AI becomes more autonomous, resilience must ensure not only that data is available, but that it remains trusted throughout the lifecycle of every agent action.
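
As an illustration of what those three controls might look like in practice, the hypothetical sketch below wraps an agent's write-back in an access check, a validation step, and an append-only audit record. The function names, policy table, and schema checks are invented for this example and are not drawn from Lenovo tooling or any specific product.

```python
import hashlib
import json
import time

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

# Hypothetical policy: which tables each agent is allowed to modify.
WRITE_PERMISSIONS = {"pricing-agent": {"price_overrides"}, "support-agent": {"ticket_notes"}}

def validate(table: str, record: dict) -> bool:
    """Minimal schema/range validation before an agent-written record is accepted."""
    if table == "price_overrides":
        return 0 < record.get("new_price", -1) < 10_000
    return True

def agent_write(agent_id: str, table: str, record: dict) -> None:
    # 1. Access control: the agent may only touch tables it is explicitly granted.
    if table not in WRITE_PERMISSIONS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not write to {table}")
    # 2. Validation: reject writes that fail basic integrity checks.
    if not validate(table, record):
        raise ValueError(f"Rejected invalid write by {agent_id} to {table}")
    # 3. Traceability: every accepted action is recorded with a content hash,
    #    so the write can later be attributed, inspected, and rolled back.
    payload = json.dumps(record, sort_keys=True)
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "table": table,
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "record": record,
    })
    # ... the actual database write would happen here ...

agent_write("pricing-agent", "price_overrides", {"sku": "A-100", "new_price": 42.0})
print(AUDIT_LOG[0]["agent"], AUDIT_LOG[0]["sha256"][:12])
```

Even in this toy form, the design choice is visible: trust in agent-modified data comes from controls applied at write time, not from the ability to restore a backup after the fact.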

World Backup Day was built around the simple, durable anxiety of losing data you cannot get back. That anxiety has expanded into a broader one: whether the data AI systems are operating on is accurate and worthy of the decisions being made on its basis.

Enterprise data resilience in 2026 spans those two concerns simultaneously. The infrastructure decisions that felt optional two years ago – hybrid architecture, distributed governance, sovereignty-aware design – are now structural requirements for any organisation serious about AI at production scale. Organisations that built those foundations early are discovering that what looked like compliance overhead was actually the groundwork for everything else. The ones that deferred it are finding out what it costs to retrofit.
