InfoQ – August 2021

From cloud vulnerabilities to OpenSearch, from cloud emissions to Google Cloud Private Service Connect, from EC2-Classic to CloudWatch Cross Account Alarms: a recap of the topics I covered for InfoQ in August 2021.

Need Help Tracking Cloud Emissions? Microsoft Previews Microsoft Cloud for Sustainability

At the recent Inspire 2021 conference, Microsoft announced the preview of Microsoft Cloud for Sustainability, a new service to help companies measure and manage their carbon emissions, set sustainability goals and take measurable action.

After 15 Years, AWS Retires EC2-Classic

AWS has announced its plan to retire the EC2-Classic platform in the next few months. The cloud provider expects that customers still running the first iteration of its virtual cloud computing instances will migrate to the newer Virtual Private Cloud (VPC) platform by August 2022.

Elasticsearch Fork OpenSearch is Generally Available

Amazon has recently announced the general availability of OpenSearch 1.0, the Apache 2.0-licensed fork of Elasticsearch that was created after Elastic changed its license.

Amazon Introduces CloudWatch Cross Account Alarms to Consolidate Management

Amazon CloudWatch recently announced cross-account alarms, a new feature that enables customers to set alerts and take actions based on changes to metrics across different AWS accounts.

Google Cloud Private Service Connect Now Generally Available

Google Cloud has recently announced the general availability of Private Service Connect, a service that keeps all of a customer's traffic private and secure over Google's global network while abstracting away the underlying network infrastructure.

Is CVE the Solution for Cloud Vulnerabilities?

At the recent Black Hat USA 2021, security experts from cloud infrastructure company Wiz argued that a CVE database for cloud vulnerabilities is needed, sparking a debate in the cloud and cybersecurity communities.

AWS Introduces Security Analytics Bootstrap to Perform Security Investigations

AWS recently announced Security Analytics Bootstrap, an open source framework to perform security investigations on AWS service logs using an Amazon Athena analysis environment.

Base performance and EC2 T2 instances

Almost three years ago AWS launched the now very popular T2 instances, EC2 instances with burstable performance. As Jeff Barr wrote in 2014:

Even though the speedometer in my car maxes out at 150 MPH, I rarely drive at that speed (and the top end may be more optimistic than realistic), but it is certainly nice to have the option to do so when the time and the circumstances are right. Most of the time I am using just a fraction of the power that is available to me. Many interesting compute workloads follow a similar pattern, with modest demands for continuous compute power and occasional needs for a lot more.

It took users a while to fully understand the benefits of the new class and how to compute and monitor CPU credits, but the choice between the different instance types was very straightforward.
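
Speaking of monitoring: the credit balance is exposed as the CPUCreditBalance metric in CloudWatch, so it can be polled programmatically. Here is a minimal sketch with boto3, assuming configured AWS credentials; the instance ID is a placeholder:

    # Read the CPU credit balance of a T2 instance from CloudWatch.
    # The instance ID below is a placeholder.
    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(hours=6),
        EndTime=datetime.utcnow(),
        Period=300,  # 5-minute datapoints
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))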

A bit of history…

Originally there were only three instance types (t2.micro, t2.small, and t2.medium), and base performance, RAM, and CPU credits were very clear, growing linearly.

Instance     Base    RAM (GiB)    Credits/hr

t2.micro     10%     1.0          6
t2.small     20%     2.0          12
t2.medium    40%     4.0          24

And the price too: a t2.medium was effectively equivalent to two t2.small or four t2.micro instances, in credits, base rate, and price. So far so good.
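
As a sanity check, the linear scaling can be verified directly from the numbers in the table above:

    # Base performance (%), RAM (GiB) and CPU credits/hour for the original
    # T2 types, taken from the table above: every dimension doubles per size.
    t2 = {
        "t2.micro":  {"base": 10, "ram": 1.0, "credits": 6},
        "t2.small":  {"base": 20, "ram": 2.0, "credits": 12},
        "t2.medium": {"base": 40, "ram": 4.0, "credits": 24},
    }

    # A t2.medium matches 2 t2.small or 4 t2.micro on every dimension.
    for key in ("base", "ram", "credits"):
        assert t2["t2.medium"][key] == 2 * t2["t2.small"][key] == 4 * t2["t2.micro"][key]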

At the end of 2015, AWS introduced an even smaller instance, the t2.nano, but the approach was still the same:

Instance     Base    RAM (GiB)    Credits/hr

t2.nano      5%      0.5          3

Exactly half a t2.micro in every dimension: the same linear approach.

Now large and even bigger!

But AWS extended the T2 class at the large end too, first with the t2.large in June 2015, then with the t2.xlarge and t2.2xlarge at the end of 2016. That meant a lot more flexibility and a class that can cover many use cases with the option of vertical scaling, but the linear growth was finally broken:

Instance      vCPUs    RAM (GiB)    Price/hr

t2.large      2        8            $0.094
t2.xlarge     4        16           $0.188
t2.2xlarge    8        32           $0.376

So far so good: the price per hour doubles along with vCPUs and memory, so a t2.2xlarge costs the same as four t2.large. But what about the base performance?

Instance    Base Performance

t2.large      60% (of 200%)
t2.xlarge     90% (of 400%)
t2.2xlarge   135% (of 800%)

A t2.2xlarge is not equivalent to 4 t2.large.

Running four t2.large nodes, I get a better base rate per vCPU that I can sustain forever: I can average 30% on every vCPU, versus a base performance of less than 17% on every vCPU of a single t2.2xlarge, for the very same price.

The bigger you go, the lower the base performance per vCPU.
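
The arithmetic, spelled out with the base rates and prices from the tables above:

    # Sustainable (base) performance per vCPU at the same hourly price:
    # four t2.large versus one t2.2xlarge, using the figures above.
    fleets = {
        "4 x t2.large":   {"count": 4, "base": 60,  "vcpu_total": 200, "price": 0.094},
        "1 x t2.2xlarge": {"count": 1, "base": 135, "vcpu_total": 800, "price": 0.376},
    }

    for name, f in fleets.items():
        per_vcpu = 100 * f["base"] / f["vcpu_total"]   # base % of each vCPU
        cost = f["count"] * f["price"]                 # total hourly price
        print(f"{name}: {per_vcpu:.1f}% base per vCPU at ${cost:.3f}/hr")

    # 4 x t2.large:   30.0% base per vCPU at $0.376/hr
    # 1 x t2.2xlarge: 16.9% base per vCPU at $0.376/hr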

So what?

Even without the loss in terms of base performance, you have many reasons to choose more, smaller instances: better HA across multiple availability zones, easier horizontal scaling, better N+1 metrics.

But with t2.large and bigger instances, even the AWS pricing strategy pushes you away from a single large instance.

Unless you have an application that definitely benefits from a single larger T2 instance (for example, a database server), spread your load across smaller instances; with the T2 class you have one more reason to do so.

The recently announced instance size flexibility for EC2 Reserved Instances makes it even easier to adapt the instance size, even if you have already committed to a lot of RI capacity.
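
As a sketch of how that flexibility works: AWS assigns every size a normalization factor (a small counts as 1 unit, and each step up doubles), so a reservation can be applied across sizes within the same family. The covers helper below is a hypothetical illustration, not an AWS API; only the factors come from the AWS documentation:

    # Normalization factors AWS uses for instance size flexibility
    # (small = 1 unit, each size step doubles). `covers` is a hypothetical
    # helper for illustration, not an AWS API.
    FACTORS = {"nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
               "large": 4, "xlarge": 8, "2xlarge": 16}

    def covers(reserved_size: str, count: int, target_size: str) -> float:
        """How many target-sized instances `count` reservations cover."""
        return count * FACTORS[reserved_size] / FACTORS[target_size]

    print(covers("2xlarge", 1, "large"))   # 4.0: one t2.2xlarge RI covers four t2.large
    print(covers("large", 4, "2xlarge"))   # 1.0: and vice versa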