S3 Reduced Redundancy Storage is (almost) dead

As a software developer and architect, I have spent countless hours discussing the benefits, the costs, and the challenges of deprecating an API or a service in a product. AWS has the opportunity, and the business model, to skip the entire discussion and simply use pricing to make a service useless.

Let’s take the Reduced Redundancy Storage option for Amazon S3. It has been around since 2010, and its advantage versus standard storage is (actually, was) the cost. According to the AWS documentation:

It provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage

Again, the only benefit of Reduced Redundancy Storage is the cost. Once you remove the cost discount, it’s a useless feature.


If you make it more expensive than standard storage, you are effectively deprecating it without having to change a single SDK or API signature. And that’s exactly what AWS did: they lowered the price of the standard storage class without changing the one for the Reduced Redundancy Storage option.

These are the prices for US East (N. Virginia) but similar differences apply in other regions:

[Image: Amazon S3 Reduced Redundancy Storage pricing table]

The only real change was moving RRS from the main S3 pricing page (where the available storage options no longer include RRS) to a separate one.

[Image: Amazon S3 pricing table]

S3 Reduced Redundancy Storage is still there; AWS did not even have to increase the price of the service. But it’s a dead feature and you have no reason to use it anymore. An amazing approach to the challenge of deprecation.

Transparent server-side encryption on S3

I would like to take advantage of Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3) for some existing buckets where I have legacy applications storing data without S3 encryption. Encryption of data at rest is of course important, and since AWS lets you enable it with a simple flag (or one line of code) there is not much of an excuse for not using it while working with S3. But how does it work for old legacy applications where you might not be able to change the client code soon? Unfortunately, there is no simple way to achieve it using S3 configuration only.
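
For clients you can change, the flag really is just one parameter on the upload call. A minimal sketch with the Python SDK (boto3); the bucket name and key are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# The only difference from a plain upload is the ServerSideEncryption
# parameter, which asks S3 to encrypt the object at rest with
# S3-managed keys (SSE-S3).
s3.put_object(
    Bucket="my-legacy-bucket",   # hypothetical bucket name
    Key="reports/2017-01.csv",   # hypothetical key
    Body=b"some,data\n",
    ServerSideEncryption="AES256",
)
```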

Ideally I would love to find a bucket property for this in the console, but unfortunately there is none. With a bucket policy I can of course lock out PUT requests without server-side encryption, but my goal is to convert them into PUTs with server-side encryption, not simply reject the requests. A bucket policy can only check the incoming request against the rules you set and allow or deny it; it cannot transform the data on the fly.
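
For reference, the reject-only approach looks roughly like this: a sketch applied with boto3, using a hypothetical bucket name. The Deny statement blocks any PutObject request whose x-amz-server-side-encryption header is missing or different from AES256, which is exactly the behavior I want to avoid imposing on legacy clients:

```python
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-legacy-bucket/*",
        # Negated operators also match when the header is absent,
        # so this denies both missing and wrong encryption headers.
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption": "AES256"
            }
        }
    }]
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-legacy-bucket",
    Policy=json.dumps(policy),
)
```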

Any other option?

You can implement a Lambda function that performs a new PUT of the same object on every PUT request that arrives without the server-side encryption attribute: it has some cost implications, but it’s an easy short-term workaround while you adapt the legacy application, and it’s entirely transparent to the existing clients, whether they are using SDKs or directly performing HTTP requests (check the documentation to get a general idea of how S3 integrates with Lambda).
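
Here is a minimal sketch of such a function in Python with boto3, assuming the bucket sends its ObjectCreated:Put notifications to the function (the event wiring is not shown). The guard on already-encrypted objects is essential, because the copy itself fires the notification again; note also that copy_object only handles objects up to 5 GB, so larger objects would need a multipart copy:

```python
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Invoked by S3 ObjectCreated:Put notifications.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Keys arrive URL-encoded in S3 event notifications.
        key = unquote_plus(record["s3"]["object"]["key"])

        # Skip objects that already have server-side encryption,
        # otherwise our own copy below would re-trigger the
        # function in an endless loop.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head.get("ServerSideEncryption"):
            continue

        # Overwrite the object with a copy of itself, this time
        # requesting encryption at rest with S3-managed keys.
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            ServerSideEncryption="AES256",
        )
```

Overwriting the object with a copy of itself is what keeps the workaround transparent: bucket, key, and content stay the same, only the encryption attribute changes.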

Of course, the long-term solution should be to implement Server-Side Encryption with the SDK, changing the client code, but a Lambda function can be your short-term hack.
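
Either way, you can verify that an object ended up encrypted at rest by checking the ServerSideEncryption attribute returned by a HEAD request (names again hypothetical):

```python
import boto3

s3 = boto3.client("s3")

head = s3.head_object(Bucket="my-legacy-bucket", Key="reports/2017-01.csv")
# Prints "AES256" for SSE-S3; the attribute is absent for
# unencrypted objects.
print(head.get("ServerSideEncryption", "not encrypted"))
```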