Base performance and EC2 T2 instances

Almost three years ago AWS launched the now very popular T2 instances, EC2 servers with burstable performance. As Jeff Barr wrote in 2014:

Even though the speedometer in my car maxes out at 150 MPH, I rarely drive at that speed (and the top end may be more optimistic than realistic), but it is certainly nice to have the option to do so when the time and the circumstances are right. Most of the time I am using just a fraction of the power that is available to me. Many interesting compute workloads follow a similar pattern, with modest demands for continuous compute power and occasional needs for a lot more.

It took users a while to fully understand the benefits of the new class and how to compute and monitor CPU credits, but the choice between the different instance types was very straightforward.
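If you want to keep an eye on the credits yourself, a minimal boto3 sketch like the one below (assuming configured AWS credentials, a region of your choice and a hypothetical instance ID) pulls the CPUCreditBalance metric that EC2 publishes to CloudWatch for T2 instances:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical instance ID and region; replace with your own T2 instance.
INSTANCE_ID = "i-0123456789abcdef0"

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# CPUCreditBalance is one of the credit metrics EC2 publishes for T2 instances.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    Period=300,          # 5-minute datapoints
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```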

A bit of history…

Originally there were only 3 instance types (t2.micro, t2.small and t2.medium), and base performance, RAM and CPU credits were easy to reason about, growing linearly:

Instance     Base    RAM (GiB)    Credits/hr

t2.micro     10%        1.0            6
t2.small     20%        2.0           12
t2.medium    40%        4.0           24

The same held for price: a t2.medium was effectively equivalent to 2 small instances or 4 micro instances, in credits, base rate and price alike. So far so good.
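To make the linearity explicit, here is a tiny sketch (pure arithmetic, using only the figures from the table above) that normalizes every column against the t2.micro:

```python
# Figures from the table above (base %, RAM in GiB, CPU credits per hour).
t2 = {
    "t2.micro":  {"base": 10, "ram": 1.0, "credits": 6},
    "t2.small":  {"base": 20, "ram": 2.0, "credits": 12},
    "t2.medium": {"base": 40, "ram": 4.0, "credits": 24},
}

micro = t2["t2.micro"]
for name, spec in t2.items():
    ratios = {key: spec[key] / micro[key] for key in spec}
    print(name, ratios)

# Every column scales by the same factor (1x, 2x, 4x):
# a t2.medium really is 2 x t2.small or 4 x t2.micro.
```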

At the end of 2015, AWS introduced an even smaller instance, the t2.nano:

Instance     Base    RAM (GiB)    Credits/hr

t2.nano       5%        0.5            3

Still the same linear approach, nothing different.

Now large and even bigger!

But AWS also extended the T2 class upwards, first with a t2.large in June 2015, then with the t2.xlarge and t2.2xlarge at the end of 2016. That brought a lot more flexibility and a class that can cover many use cases, with the option of vertical scaling, but the linear growth was finally broken:

Instance      vCPU    RAM (GiB)    Price/hr

t2.large        2         8        $0.094
t2.xlarge       4        16        $0.188
t2.2xlarge      8        32        $0.376

So far so good: the price per hour doubles along with vCPU and memory, so a t2.2xlarge costs the same as 4 t2.large. But what about the base performance?

Instance      Base performance

t2.large        60% (of 200%)
t2.xlarge       90% (of 400%)
t2.2xlarge     135% (of 800%)

A t2.2xlarge is not equivalent to 4 t2.large.

Running 4 nodes of t2.large gives me a better base rate that I can sustain forever on each vCPU (I would be able to average 30% on every vCPU) than running a single t2.2xlarge (where the base performance is less than 17% on every vCPU) for the very same price.

The bigger you go, the lower the base performance per vCPU.
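To put numbers on it, here is a small sketch (using only the base percentages and prices listed above) comparing the sustainable per-vCPU load and the aggregate base performance you get for the same hourly spend:

```python
# Figures from the tables above: vCPUs, aggregate base performance and on-demand price.
instances = {
    "t2.large":   {"vcpus": 2, "base_pct": 60,  "price_hr": 0.094},
    "t2.xlarge":  {"vcpus": 4, "base_pct": 90,  "price_hr": 0.188},
    "t2.2xlarge": {"vcpus": 8, "base_pct": 135, "price_hr": 0.376},
}

def per_vcpu_base(spec):
    # Sustainable load per vCPU = aggregate base performance / number of vCPUs.
    return spec["base_pct"] / spec["vcpus"]

for name, spec in instances.items():
    print(f"{name}: {per_vcpu_base(spec):.1f}% sustainable load per vCPU")

# Same hourly spend, very different sustainable baseline across 8 vCPUs:
four_large  = 4 * instances["t2.large"]["base_pct"]    # 240% aggregate base
one_2xlarge = instances["t2.2xlarge"]["base_pct"]      # 135% aggregate base
print(f"4 x t2.large:   {four_large}% total base for ${4 * instances['t2.large']['price_hr']:.3f}/hr")
print(f"1 x t2.2xlarge: {one_2xlarge}% total base for ${instances['t2.2xlarge']['price_hr']:.3f}/hr")
```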

So what?

Even without the loss in terms of base performance, you have many reasons to choose more, smaller instances: better HA across multiple AZs, easier horizontal scaling, better N+1 metrics.

But with the larger T2 instances, even the AWS pricing strategy pushes you away from a single big instance.

Unless you have an application that definitely benefits from a single larger T2 instance (for example a database server), spread your load across smaller instances; with the T2 class you have one more reason to do so.

The recently announced instance size flexibility for EC2 reserved instances makes it even easier to change instance type, even if you have already committed to a lot of RI capacity.
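That flexibility is based on the normalization factors AWS assigns to each size within a family (nano = 0.25 up to 2xlarge = 16). The helper below is just an illustrative sketch of the arithmetic, not an AWS API:

```python
# AWS normalization factors for instance size flexibility within an instance family.
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16,
}

def covers(reserved_size, reserved_count, running_size, running_count):
    """True if the reserved footprint covers the running instances of the same family."""
    reserved_units = NORMALIZATION[reserved_size] * reserved_count
    running_units = NORMALIZATION[running_size] * running_count
    return running_units <= reserved_units

# A single t2.2xlarge reservation (16 units) covers 4 running t2.large (4 x 4 units),
# so you can split the workload without losing the RI discount.
print(covers("2xlarge", 1, "large", 4))  # True
```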