Here are the abstract and the slides of my talk “Do not press that Button: Operational Challenges of relational Databases on the Cloud” at the DevOps Conference 2018 in Berlin this week.
You can find the full interview on JAXenter.
After going through two iterations of the AWS Certified Security – Specialty beta, I am happy that the results are finally out and that I can celebrate one more AWS certification.
According to the Amazon documentation, many users will never pay for automated backups and DB snapshots:
There is no additional charge for backup storage, up to 100% of your total database storage for a region. (Based upon our experience as database administrators, the vast majority of databases require less raw storage for a backup than for the primary dataset, meaning that most customers will never pay for backup storage.)
But what if you are not one of the lucky ones? How do you know whether you can increase the retention period of an RDS instance without ending up paying for extra storage? Or how much do you need to reduce your retention period to stay within the free quota?
Here it gets tricky: there is no API to find out the total size of your backups or snapshots and there is nothing available in the console either.
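While there is no API for the backup size itself, you can at least compute the free quota (100% of the provisioned database storage in the region) by summing `AllocatedStorage` across your instances. A minimal sketch, assuming the dictionary shape returned by boto3's `describe_db_instances`; the region name and sample values are placeholders:

```python
def free_backup_quota_gb(instances):
    """Free backup storage quota in GiB: the sum of the provisioned
    storage of all RDS instances in the region."""
    return sum(i["AllocatedStorage"] for i in instances)

# Live usage (requires boto3 and AWS credentials):
# import boto3
# rds = boto3.client("rds", region_name="eu-central-1")
# instances = rds.describe_db_instances()["DBInstances"]
# print(free_backup_quota_gb(instances))

# Offline example with the same shape as the describe_db_instances output:
sample = [{"AllocatedStorage": 100}, {"AllocatedStorage": 50}]
print(free_backup_quota_gb(sample))  # 150
```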
As snapshots are incremental, it really comes down to the activity of your database, and it might be quite hard to predict the extra storage you are going to pay for at the end of the month if you have a fixed retention period or business requirements for manual snapshots.
Sometimes it might even be better to allocate more storage to the RDS instances in the region – gaining access to higher IOPS on standard SSD storage as well – than to simply pay for the extra snapshot storage.
But how can you figure out the best balance to take advantage of the free storage, or forecast your backup storage? There is no direct way to find out, but you can estimate the cost – and so, indirectly, the size – by running a billing report.
Billing report and backup usage information
You can get a billing report from the console by going to Billing & Cost Management and then:
- select “Reports”
- choose “AWS Usage Report”
- select “Amazon RDS Service”
- specify the time period and granularity
- download the report in XML or CSV format
- open the report and check the “TotalBackupUsage” and “ChargedBackupUsage” entries
Knowing the cost and the AWS S3 pricing for your region, you can then work out the storage size.
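The arithmetic is a simple division of the charge by the per-GB-month price. A small sketch; the $0.095 per GB-month figure below is an assumed example price, not a quote – check the pricing page for your region:

```python
def estimated_backup_gb(charged_usd, price_per_gb_month_usd):
    """Rough backup size estimate: the monthly backup charge divided by
    the per-GB-month storage price for your region."""
    return charged_usd / price_per_gb_month_usd

# e.g. a $4.75 monthly backup charge at an assumed $0.095 per GB-month
print(round(estimated_backup_gb(4.75, 0.095)))  # 50
```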
How can I automate the process?
You can have an Hourly Cost Allocation Report delivered automatically to an S3 bucket, process the information, and create your alarms and logic accordingly.
For more information, see the Monthly Cost Allocation Report page.
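Processing the delivered report can be sketched as filtering its CSV rows down to the backup usage line items. The column name and usage-type prefix below are assumptions based on a simplified report layout – verify them against a real report before relying on this; the S3 bucket and key in the comment are placeholders:

```python
import csv
import io

def backup_usage_rows(report_csv, usage_prefix="RDS:ChargedBackupUsage"):
    """Filter a Cost Allocation Report (as CSV text) down to the backup
    usage line items. Column and usage-type names are assumptions."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [r for r in rows if r.get("UsageType", "").startswith(usage_prefix)]

# In practice you would first fetch the report from the delivery bucket:
# import boto3
# body = boto3.client("s3").get_object(
#     Bucket="my-billing-bucket", Key="report.csv")["Body"].read().decode()

# Offline example shaped like a simplified, hypothetical report:
sample = (
    "UsageType,UsageQuantity\n"
    "RDS:ChargedBackupUsage,12.5\n"
    "RDS:InstanceUsage,720\n"
)
matches = backup_usage_rows(sample)
print(matches[0]["UsageQuantity"])  # 12.5
```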
I am very excited to attend the DevOps Conference in Berlin this year and present the session “Do not press that button: operational challenges of relational databases on the cloud”. You can find the abstract here. Looking forward to DevOpsCon in May!
For about three years now, RDS has offered the option of running a MySQL database on the T2 instance type, currently from db.t2.micro to db.t2.large. These are low-cost standard instances that provide a baseline level of CPU performance – depending on the size – with the ability to burst above the baseline using a credit approach.
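The credit mechanics can be sketched with a back-of-the-envelope estimate of how long a balance lasts under sustained load. The earn rates below are taken from the EC2 T2 documentation and assumed to apply to the RDS T2 classes as well; one credit equals one vCPU at 100% for one minute:

```python
# Credits earned per hour per instance size (EC2 T2 figures, assumed
# to match the RDS db.t2 classes).
EARN_RATE = {"db.t2.micro": 6, "db.t2.small": 12,
             "db.t2.medium": 24, "db.t2.large": 36}

def hours_until_empty(instance_class, balance, avg_cpu_pct, vcpus):
    """Estimate hours until the credit balance hits zero: an instance
    burns (avg_cpu_pct / 100 * vcpus * 60) credits per hour."""
    burn = avg_cpu_pct / 100 * vcpus * 60      # credits spent per hour
    net = burn - EARN_RATE[instance_class]     # net drain per hour
    if net <= 0:
        return float("inf")                    # earning outpaces spending
    return balance / net

# db.t2.medium (2 vCPUs) at 50% average CPU with 144 credits banked:
# burns 60/h, earns 24/h, net drain 36/h -> empty in 4 hours
print(hours_until_empty("db.t2.medium", 144, 50, 2))  # 4.0
```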
What happens when you run out of credits?
There are two metrics you should monitor in CloudWatch: CPUCreditUsage and CPUCreditBalance. When your CPUCreditBalance approaches zero, your CPU usage is capped and you will start having issues with your database.
It is usually better to have some alarms in place to prevent that, either checking for a minimum number of credits or spotting a significant drop within a time interval. But what can you do when you hit the bottom and your instance is stuck at the baseline performance level?
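A minimum-credits alarm can be sketched as follows. Building the `put_metric_alarm` parameters as a plain dict keeps the logic testable without AWS access; the alarm name, threshold, and periods are illustrative assumptions, not recommendations:

```python
def credit_alarm_params(db_instance_id, min_credits=20):
    """Parameters for cloudwatch.put_metric_alarm: fire when the minimum
    CPUCreditBalance over three 5-minute periods drops below the floor."""
    return {
        "AlarmName": f"{db_instance_id}-low-cpu-credits",
        "Namespace": "AWS/RDS",
        "MetricName": "CPUCreditBalance",
        "Dimensions": [{"Name": "DBInstanceIdentifier",
                        "Value": db_instance_id}],
        "Statistic": "Minimum",
        "Period": 300,
        "EvaluationPeriods": 3,
        "Threshold": min_credits,
        "ComparisonOperator": "LessThanThreshold",
    }

params = credit_alarm_params("mydb")  # "mydb" is a placeholder identifier
print(params["MetricName"])  # CPUCreditBalance

# Live usage (requires boto3 and AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
```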
How to recover from a zero credit scenario?
The most obvious approach is to increase the instance class (for example, from db.t2.medium to db.t2.large) or switch to a different instance type, such as a db.m4 or db.c3 instance. But this might not be the best approach when your end users are already suffering: if you are running a Multi-AZ database in production, it is likely not the fastest option to recover, as it first requires a change of instance class on the passive master and then a failover to the new master.
You can instead try a simple reboot with failover: the credit balance you see in CloudWatch is based on the currently active host, and your passive master usually still has credits available, as it is typically less loaded than the active one. As in the screenshot above, you might gain 80 credits at no cost and with a simple DNS change that minimizes the downtime.
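The failover step itself is a single `reboot_db_instance` call with `ForceFailover` set. A sketch using the same testable-dict pattern; the instance identifier is a placeholder, and running the live call does cause a brief Multi-AZ failover:

```python
def failover_reboot_params(db_instance_id):
    """Parameters for rds.reboot_db_instance: ForceFailover=True promotes
    the standby, which usually still has CPU credits available."""
    return {"DBInstanceIdentifier": db_instance_id, "ForceFailover": True}

params = failover_reboot_params("mydb")  # "mydb" is a placeholder
print(params["ForceFailover"])  # True

# Live usage (requires boto3 and AWS credentials, triggers a failover):
# import boto3
# boto3.client("rds").reboot_db_instance(**params)
```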
Do I still need to change the instance class?
Yes. Performing a reboot with failover is simply a way to reduce your recovery time when you run into capped CPU, and to buy some time. It is not a long-term solution, as you will most likely run out of credits again if you do not change your instance class soon.
To summarize, triggering a failover on a Multi-AZ RDS instance running on T2 is usually a faster way to gain credits than immediately modifying the instance class.