As recently as a decade ago, a prevailing attitude among enterprise technology decision-makers was that cloud data storage infrastructures were inherently insecure. A majority of respondents to a Trend Micro survey (55%) felt that using shared storage introduced risk and increased vulnerability; 53% reported that their apprehension about security was holding them back from adopting cloud technologies.
Today’s business leaders have far more confidence in public cloud providers’ capabilities. Only 28% of the respondents to a recently published SANS Institute survey felt their organization’s cloud security architecture didn’t effectively mitigate risk. More than two in five (42.5%) stated that they believe the security tools and controls available in the cloud are among the best in the industry.
This confidence in the security of the public cloud certainly isn’t misplaced. Major cloud providers have access to cutting-edge hardware and infrastructure, robust physical security and greater technical expertise than even the largest of enterprises can hope to harness when building an on-premises data center. They also maintain resilient geo-redundant data center facilities around the globe, across multiple availability zones and regions, ensuring local outages won’t impact their customers. And, they routinely achieve average service uptime rates exceeding 99.99%.
Nonetheless, realizing robust data security in the cloud is different from securing legacy storage architectures. To protect your cloud databases, data lakes or data warehouses, you’ll need to maintain ongoing visibility, apply good governance practices and ensure that you understand exactly how the public cloud’s shared responsibility model applies to your individual data storage, access and usage needs.
Understanding the Shared Responsibility Model for Cloud Data Protection
While cloud transformation can in fact enhance data security across your organization, cloud architectures also have the potential to create significant risks and vulnerabilities if stakeholders don’t fully understand who is responsible for securing what in any given cloud environment. In each of the cloud service models, software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), the cloud provider assumes responsibility only for certain aspects of the deployment. In all three models, the cloud provider is in charge of securing the physical data centers and the hardware within them, as well as the virtualization layer through which cloud resources are delivered to their users. In all of the models, too, cloud customers are responsible for securing their own data, managing user identities and access, maintaining the infrastructure and devices that connect to the cloud, and configuring their cloud platforms and resources appropriately.
Many other aspects of a cloud environment can seem like grey areas. The security controls a cloud customer must manage in one provider’s IaaS implementation may look very different from those in a PaaS solution from the same provider. In general, customers are entirely responsible for the security of applications and workloads running on server-based instances in the cloud.
It’s always essential to review contracts and documentation carefully so that you understand the cloud provider’s offering and architecture thoroughly before you begin implementing controls. In practice, this can mean, for instance, that if you’ve built a data lake in the cloud and have carefully defined roles and delimited access privileges, but you haven’t exercised the same degree of care for the underlying S3 bucket where the data’s actually being stored, you’ve left a serious security gap.
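One baseline control for that underlying bucket is a resource policy that refuses insecure access. The sketch below builds such a policy as a plain Python dictionary; the bucket name is hypothetical, and in a real deployment you would apply the document with the S3 `put_bucket_policy` API and also enable S3 Block Public Access.

```python
import json

def build_private_bucket_policy(bucket_name):
    """Return an S3 bucket policy (as a dict) that denies any request
    not made over an encrypted (TLS) connection, for the bucket itself
    and for every object within it."""
    arn = f"arn:aws:s3:::{bucket_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Deny all actions when the request is not using TLS.
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [arn, f"{arn}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

# "example-data-lake-bucket" is an illustrative, hypothetical name.
policy = build_private_bucket_policy("example-data-lake-bucket")
print(json.dumps(policy, indent=2))
```

Keeping the policy in code like this also makes it easy to review in version control, so access decisions for the storage layer get the same scrutiny as the roles defined at the data lake layer.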
Maintaining the Right Compliance Controls
No matter how many compliance certifications a cloud provider holds, it’s the cloud customer’s responsibility to satisfy their own regulatory compliance requirements. Naturally, these vary according to the business’s vertical, where the enterprise is located and where it does business.
Some regulations, such as the Sarbanes-Oxley Act of 2002 (SOX) and the Payment Card Industry Data Security Standard (PCI DSS), mandate that organizations processing or handling financial or payment card data maintain certain security controls. Others, such as the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR), have strict requirements governing how, where and when consumers’ personal data can be stored.
Maintaining GDPR compliance in the cloud poses several unique challenges. The GDPR limits the extent to which personal data belonging to EU citizens can be stored outside of the European Economic Area (EEA). Entities subject to the GDPR must ensure their cloud provider adheres to the regulation’s cross-border data transfer restrictions and adequacy requirements. In addition, the GDPR mandates that organizations maintain control over and ownership of their own data while it’s stored in the cloud. They must also be able to maintain control over and visibility into the types of metadata collected by the cloud provider.
The GDPR also requires that cloud customers be able to retrieve data in a structured, usable format and provide it to the data subject on demand. Furthermore, data must not be stored for longer than it’s needed, and it must be possible to entirely delete individual records on request. This creates issues in certain data lake architectures.
In the Apache Hadoop Distributed File System (HDFS), for instance, files are write-once: data is stored as immutable block sequences, so deleting an individual record means rewriting the entire file that contains it. To honor the right to erasure that GDPR compliance requires, enterprises may have to replace outdated solutions with modern data lakes.
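The way modern table formats such as Delta Lake and Apache Hudi work around file immutability is copy-on-write: rather than mutating a file in place, they replace it with a new file that omits the erased rows. The minimal sketch below illustrates the idea with plain Python lists standing in for immutable data files; the field name `user_id` and the sample records are illustrative assumptions.

```python
def erase_subject(records, subject_id):
    """Simulate a GDPR erasure over an immutable batch of records.
    The original batch is never modified; instead a new batch is
    produced that omits every record belonging to the data subject,
    mirroring the copy-on-write deletes used by modern table formats."""
    return [r for r in records if r["user_id"] != subject_id]

# Hypothetical batch of event records, as might live in one data file.
batch = [
    {"user_id": "u1", "event": "login"},
    {"user_id": "u2", "event": "purchase"},
    {"user_id": "u1", "event": "logout"},
]

rewritten = erase_subject(batch, "u1")
print(rewritten)  # only u2's record remains
```

In a real system the rewritten file would then replace the original in the table’s metadata, and the old file would be physically removed once it falls out of the retention window, so the erased records are genuinely gone rather than merely hidden.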
Backups and Replication for Protecting Your Cloud Data
Although today’s public cloud providers have justly earned their reputation for enabling strong data security, it’s incumbent upon the customer to understand the limits of the shared responsibility model, to ensure that their data is being stored in keeping with compliance requirements, and to maintain good governance procedures. Customers also need to understand how data replication and backups work in the cloud.
Replication involves storing an additional copy of live data on a secondary node or site. Cloud providers replicate data to increase resilience as well as to improve performance of data-intensive applications by automatically shifting to the node that offers the highest quality of service. Amazon S3, for instance, automatically stores every object redundantly across multiple devices in at least three Availability Zones for its standard storage classes. These copies aren’t the same as backups, however.
Because replicas mirror live data, any change to your primary production data, including corruption or accidental deletion, propagates to them almost immediately. This means it’s not possible to restore from a replica as you could from a snapshot or full backup. To retain some historical record of your data, you’ll need to enable versioning. And you’ll want to pay close attention to the question of who has permission to access — or delete — the versions. For more granular data protection and retrieval capabilities, you can elect additional offerings such as geo-redundant storage or a full-featured cloud backup solution.
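The distinction between replication and versioning can be made concrete with a toy model. The sketch below is a simplified, illustrative simulation of S3-style object versioning (not real AWS API calls): overwrites append new versions instead of destroying history, and a delete only adds a “delete marker,” leaving earlier versions recoverable.

```python
import itertools

class VersionedBucket:
    """Toy model of object versioning. Each key maps to a list of
    (version_id, value) pairs; a value of None is a delete marker."""

    def __init__(self):
        self._versions = {}              # key -> [(version_id, value), ...]
        self._ids = itertools.count(1)   # monotonically increasing version ids

    def put(self, key, value):
        # An overwrite appends a new version rather than replacing the old one.
        self._versions.setdefault(key, []).append((next(self._ids), value))

    def delete(self, key):
        # A delete hides the object behind a marker but keeps prior versions.
        self._versions.setdefault(key, []).append((next(self._ids), None))

    def get(self, key):
        versions = self._versions.get(key, [])
        if not versions or versions[-1][1] is None:
            return None                  # absent, or latest version is a delete marker
        return versions[-1][1]

    def restore(self, key, version_id):
        # Recover an earlier version, as you would after an accidental deletion.
        for vid, value in self._versions.get(key, []):
            if vid == version_id:
                return value
        return None

bucket = VersionedBucket()
bucket.put("report.csv", "v1 contents")
bucket.put("report.csv", "v2 contents")
bucket.delete("report.csv")
print(bucket.get("report.csv"))         # None: the latest "version" is a delete marker
print(bucket.restore("report.csv", 1))  # "v1 contents": history survives the delete
```

Notice that a plain replica of this bucket would faithfully copy the delete marker too, which is exactly why versioning, and strict control over who may purge versions, is what gives you a recoverable history.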
If you are planning a cloud adoption initiative, but have concerns regarding data security and compliance, reach out to our Advisory team. Our Advisory practice offers a range of consulting services that will help build your cloud strategy and support the development of governance, security, and compliance policies.