The world is experiencing a level of disruption and business risk not seen in generations. Some companies freeze and fail, while others innovate, advance, and even thrive. The difference is resilience.
With data now acknowledged as an important element of real-time decision-making, perhaps it is time for organisations to make a serious effort at achieving data resilience. And if they do, how should they go about it?
Joseph Yang, general manager for storage and data services at HPE Asia Pacific, noted that data resilience has multiple aspects, including data availability, data durability and data security.
He also acknowledged that cybercriminals are becoming more sophisticated in their methods, with the latest focus now on backups, as they’ve come to realise that organisations can recover lost data through their various backup strategies.
“Now, not only do the cyberattacks prevent you from accessing your data, but bad threat actors remove the data off your network and hold you for ransom. Even then, in many cases, agreeing to pay the ransom may not guarantee full recovery.”
Joseph Yang
“So, these are the three main areas that should be taken into the utmost consideration when chalking out your data resiliency strategy,” he added.
Do enterprises in Asia understand this definition of data resilience?
According to Yang, enterprises in Asia are very attuned to the first two aspects of resilience: data availability and data durability.
“They understand the importance of having backups and protecting their data from losses. As for data security, not all organisations have achieved security maturity, but they are certainly moving towards that, especially with the increasing security risks and cyberattacks,” he observed.
Misconceptions about multi-cloud resilience
Yang lamented a prevailing misconception among users that the cloud is inherently available. In reality, applications must be architected to ensure data availability.
A second misconception is that snapshot and backup capabilities in the public cloud are uniformly mature and easy to use; depending on the vendor, they may not be, and moving data across locations can incur significant egress costs, even within the same cloud vendor. A third misconception is that applications do not need to be rewritten for resilience.
For those already in the cloud, how do they achieve data resilience?
Yang reiterated that the public cloud, unlike on-premises environments, is not resilient by default. Storage replication does not provide protection for all applications in the cloud.
“Enterprises must work to protect workloads VM by VM, application by application,” he pointed out. This, however, is complicated because applications run on different operating models, with independent lines of business building their own applications and deploying them in the cloud.
“This means they will need a different approach to data protection that works across multiple operating models and environments,” said Yang.
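Yang did not describe specific tooling, but the idea of accounting for protection workload by workload, across operating models and environments, can be sketched roughly. The inventory structure, field names and coverage check below are hypothetical assumptions for illustration, not any particular vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical inventory entry: each workload records its operating model
# (VM, container, SaaS) and, ideally, an explicit data protection policy.
@dataclass
class Workload:
    name: str
    model: str                    # e.g. "vm", "container", "saas"
    environment: str              # e.g. "on-prem", "aws", "azure"
    backup_policy: Optional[str]  # None means the workload is unprotected

def unprotected(workloads: list[Workload]) -> list[Workload]:
    """Return workloads with no backup policy, regardless of where they run."""
    return [w for w in workloads if w.backup_policy is None]

if __name__ == "__main__":
    inventory = [
        Workload("billing-db", "vm", "on-prem", "daily-snapshot"),
        Workload("web-frontend", "container", "aws", None),
        Workload("crm", "saas", "vendor-hosted", None),
    ]
    for w in unprotected(inventory):
        print(f"No data protection defined for {w.name} ({w.model} on {w.environment})")
```

The point of such an inventory is simply that protection gaps are surfaced per workload, rather than assumed to be covered by the underlying infrastructure.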
Orchestrating resilience is a team sport
Statista estimates that in 2021, enterprises ran an average of 110 SaaS applications, up 27% from 2020. Many of these applications evolve at different rates, suggesting that the skill sets needed to support them must likewise continually evolve.
“Until 10 years ago, there was a very clear divide between infrastructure and application teams. Back then, probably 90% of the responsibility for data resiliency was given to the infrastructure team, as they owned the backups and datacentres.
“With DevSecOps, that line is merging. When we talk about cloud-native applications, it’s a whole different world altogether because the applications themselves must be built to be resilient. You cannot rely on infrastructure alone, since cloud-native applications are pretty much built with the assumption that your infrastructure is not resilient. In a cloud-native world, applications must be designed to have resilience built in and integrated,” elaborated Yang.
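Yang did not name specific patterns, but one common illustration of application-level resilience is retrying failed calls with exponential backoff, so that transient infrastructure failures do not become application failures. The sketch below is a minimal, hypothetical Python example; the endpoint, timeout and retry counts are assumptions for illustration, not a prescribed implementation.

```python
import random
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, attempts: int = 5) -> bytes:
    """Fetch a URL, retrying with exponential backoff and jitter.

    Cloud-native code assumes any single call can fail, so resilience is
    handled inside the application rather than delegated to infrastructure.
    """
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise
            # Wait roughly 1s, 2s, 4s, ... plus jitter before trying again.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("unreachable")

if __name__ == "__main__":
    data = fetch_with_retries("https://example.com/health")  # illustrative endpoint
    print(len(data), "bytes received")
```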
Persistent data silos
Even before the cloud, data was already being created, consumed, and managed by individual departments, creating silos.
The ‘Netskope Cloud and Threat Report: Cloud Data Sprawl’ found that more than one in five (22%) users upload, create, share or store data in personal apps and personal instances, with Gmail, WhatsApp, Google Drive, Facebook, WeTransfer, and LinkedIn ranking as the most popular personal apps and instances.
Yang conceded that data sprawl is common for companies of all sizes. It is often overlooked until it becomes too expensive to manage or causes security issues.
He suggested companies conduct an audit to gain insight into the value of their data, its costs, and possible gaps or overlaps across the network. From there, they can develop a decision-making framework to evaluate what data to keep or discard, and work out how to protect it in an end-to-end manner.
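Yang's suggestion stops at the framework level, but a keep-or-discard rule informed by such an audit could look something like the rough sketch below. The value scores, cost thresholds and recommendations are placeholder assumptions, not a prescribed policy.

```python
from dataclasses import dataclass

# Hypothetical audit record for one dataset discovered on the network.
@dataclass
class DataSet:
    name: str
    business_value: int   # assumed 1-5 score assigned during the audit
    monthly_cost: float   # assumed storage and management cost
    duplicated: bool      # True if it overlaps another copy elsewhere

def recommend(ds: DataSet) -> str:
    """Rough keep/discard rule; thresholds stand in for whatever an
    organisation's own decision-making framework would specify."""
    if ds.duplicated and ds.business_value <= 2:
        return "discard"
    if ds.business_value >= 4:
        return "keep and protect end to end"
    return "archive to low-cost tier" if ds.monthly_cost > 100 else "keep"

audit = [
    DataSet("customer-orders", 5, 450.0, False),
    DataSet("old-marketing-exports", 1, 120.0, True),
    DataSet("hr-archive", 3, 210.0, False),
]
for ds in audit:
    print(f"{ds.name}: {recommend(ds)}")
```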
How to ensure a data resilience strategy remains relevant
“There’s no one perfect data resiliency strategy, especially with business priorities constantly changing. What organisations should do is review their data resilience strategy on a regular basis, where they classify the different types of data and their value to the business and update their strategy accordingly,” he concluded.
Click on the PodChat player to hear Yang share his perspective on how organisations can achieve data resilience in a multi-cloud world.
- Define data resilience in the context of today’s hybrid environment.
- One of the early selling points of the cloud was resilience, along with scalability and the utility model. What are the top three prevailing misconceptions about multi-cloud being inherently resilient?
- What needs to happen to achieve data resilience in this hybrid environment?
- Who needs to get involved in the architecting of a data resilience strategy? Who will have day-to-day accountability for the execution?
- With technology evolving and business priorities changing in line with market dynamics, how do you ensure that your data resilience strategy remains relevant?