
Capital One’s data breach will be held up as a failure for public cloud. In fact it’s anything but.

News of the latest major data breach impacting Capital One customers is all over the press, and given that Capital One has been a pioneering user of AWS for its production applications, there may be a natural assumption that the public cloud made this breach possible. However, if you look more carefully at the published details, it turns out that the public cloud was not a contributor at all. This appears to be a breach that would have been possible no matter where the applications ran or the data was stored.

Two factors came together to make this data breach possible: criminal intent and content exposure. While the former is impossible to eradicate completely in an organisation, the latter can be minimised by engineering and operations, cloud or not. It has been reported that an employee of a Seattle technology company has been arrested and charged with computer fraud and abuse. We should assume that in any large organisation there is someone thinking about how to profit personally from their position. Said differently, it is dangerous to assume positive intent from everyone in a large group.

It is an unfortunate truth that insiders account for a significant proportion of culprits in all industries, so all processes should be designed with this in mind, using the principles of least privilege and separation of duties to make malicious actions harder to execute. The design goal should be that all critical data is encrypted in transit and at rest, and that the encryption keys are held securely by a small number of trusted individuals. The application that allows people to access the data to do their jobs should be capable of decrypting that data for the purpose of the job, but no more. This is not always done well, nor is it always checked well.
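As a minimal sketch of that design goal, assuming a Python application using boto3 with an AWS KMS key (the alias alias/customer-data is hypothetical), envelope encryption keeps the master key inside KMS, so the application only ever handles short-lived data keys and the stored ciphertext reveals nothing on its own:

```python
import base64

import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")
KEY_ID = "alias/customer-data"  # hypothetical KMS key alias

def encrypt_record(plaintext: bytes) -> tuple[bytes, bytes]:
    """KMS issues a fresh data key; the master key never leaves KMS."""
    resp = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
    data_key = base64.urlsafe_b64encode(resp["Plaintext"])
    ciphertext = Fernet(data_key).encrypt(plaintext)
    # Persist the encrypted data key next to the ciphertext; discard the plaintext key.
    return ciphertext, resp["CiphertextBlob"]

def decrypt_record(ciphertext: bytes, encrypted_key: bytes) -> bytes:
    """Requires kms:Decrypt on the key; grant that to the application role only."""
    resp = kms.decrypt(CiphertextBlob=encrypted_key)
    data_key = base64.urlsafe_b64encode(resp["Plaintext"])
    return Fernet(data_key).decrypt(ciphertext)
```

The separation of duties falls out of the split: the people who administer the KMS key are not the people who run the application, and neither group ever handles the long-term key material directly.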

Gaining access to the content appears to have been achieved by compromising a server. This gave access to IAM credentials, which could then be used to reach the data through API calls masquerading as a valid application. All of this is possible no matter where the server was housed or the data devices were located, whether in a public cloud or an in-house data centre.
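To make that credential path concrete: on an EC2 instance, the role's temporary credentials are served by the local instance metadata endpoint, so any code that can make HTTP requests from a compromised server can read them. A minimal sketch using the third-party requests library:

```python
import requests

# On an EC2 instance, role credentials are served from the link-local
# metadata endpoint; any process on (or request forwarded through) the
# compromised server can fetch them.
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role_name = requests.get(BASE, timeout=2).text
creds = requests.get(BASE + role_name, timeout=2).json()
print(creds["AccessKeyId"], creds["Expiration"])
```

API calls signed with those credentials are indistinguishable from the application's own, which is why scoping the role to the minimum set of calls matters so much; a compromised machine in an in-house data centre offers the same path through a stolen service account or database password.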

Correct design is important no matter what services are being used or where those services are executed. It is possible to build bad stuff with good tools, just as it is possible to build bad stuff with bad tools. In this instance, we should not be blaming the tools.

In some respects, the fact that the environment ran on AWS made the problem easier to diagnose: because the logging cannot be disabled, the evidence of the attack remained in place, and once the company was aware of what was happening, detection and remediation were quick.
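As a hedged illustration of that audit trail, CloudTrail's event history can be queried for activity tied to a suspect access key (the key ID here is a placeholder):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# CloudTrail's event history records management API calls by default; pull
# the recent events made with a particular (e.g. suspected stolen) access key.
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "AccessKeyId",
                       "AttributeValue": "ASIAEXAMPLEKEYID"}]
)
for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```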

Overall, this should be seen as a positive for the public cloud, but I suspect it will be a while before that is the broad impression.