Amazon RDS And Aurora Not Supporting Your Database?

In this post, Tulip Gupta, Senior Cloud Architect at Cloudreach, outlines a high-availability solution for databases not supported by RDS/Aurora using Docker and EFS.

Ever run into a situation where the database software you use is not supported by AWS's native offerings, Amazon RDS and Aurora, but you still need a high-availability (HA) solution that meets your Recovery Time Objective (RTO) and Recovery Point Objective (RPO)?

Hosting it on EC2 instances in a master/slave architecture is complicated and does not offer the resilience and scaling that you are looking for. Many customers have asked me for alternatives: the relational database they use is either not supported by RDS/Aurora, or RDS/Aurora lacks the capability to handle some of its features. For example, RDS cannot handle memory/heap tables for MySQL databases.

In this blog, I outline an alternative solution that uses EFS, Docker, and autoscaling, and offers Disaster Recovery (DR) and HA capabilities close to what RDS provides.

First, let’s look at a HA solution with a single master autoscaled across three zones:

  1. Database is a Docker-based container app hosted on an EC2 instance and configured via user data at EC2 instance launch
  2. The EC2 instance is autoscaled across three zones with desired, minimum, and maximum capacity set to one
  3. An internal Network Load Balancer is attached to the autoscaling group. It provides a consistent frontend host address even as the backend IPs of the instances change, and routes all incoming TCP traffic on the database port to the EC2 instance
  4. At any point in time, there is exactly one instance running that reads from and writes to the backend EFS. Note that more than one instance writing to the backend may corrupt the data, so only one instance should write to EFS at any given time.
  5. The data is written to a host instance volume, which in turn is synced to the backend EFS storage by a cron job scheduled to run every minute.
  6. In EFS, every file system object (i.e. directory, file, and link) is redundantly stored across multiple Availability Zones and can be accessed concurrently from all Availability Zones.

The architecture offers the following HA and DR capabilities:

  1. Data loss prevention - Syncing data from the instance host volume to EFS usually takes less than two minutes (most companies have an RTO/RPO of more than 10 minutes). EFS is also a durable storage system, with every object stored redundantly across AZs
  2. Automatic failover - If a zone goes down or the instance terminates, the data remains in EFS; a new instance spins up, retrieves the data from EFS, and the database is ready for operations again. The whole process usually takes six to ten minutes and is fully automated via autoscaling.
  3. Future scaling provisions - Because the architecture is automated via autoscaling, user data, and containers, it requires little to no manual intervention during failovers and is resilient.
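The single-master autoscaling setup described above can be sketched with the AWS CLI. All names, subnet IDs, and the target group ARN below are placeholders, not values from the original post; the launch template is assumed to already exist and carry the user data described later.

```shell
# Sketch: one instance, autoscaled across three zones, behind an internal
# NLB target group. All identifiers are hypothetical placeholders.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name db-master-asg \
  --launch-template LaunchTemplateName=db-launch-template,Version='$Latest' \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaa111,subnet-bbb222,subnet-ccc333" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/db-nlb-tg/abc123def456
```

With min, max, and desired all set to one, a terminated instance (or a lost zone) is automatically replaced in one of the remaining subnets, which is what drives the automatic failover behavior above.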

Read Replicas 

Sometimes companies are looking for ways to meet high read IOPS demands. For situations like this, an enhanced HA solution is outlined below, which splits the design into separate services, with read replicas serving the read volume.

  1. Any existing app that writes to the database uses the "app read and write" channel to write to the backend storage. This channel is part of the HA solution in the current architecture. The master DB instance is autoscaled and is the only instance writing to the backend storage on EFS.
  2. For read IOPS needs, the "Read Only" channel is used. Data warehouses and visualization tools can use this channel to access the read replicas and perform their reads. The read replicas are autoscaled instances across zones and have eventual consistency with the master DB instance. They perform read-only operations against the backend EFS storage and eventually sync with the master database.

With this design there is a clear delineation between write and read operations, so existing systems are not impacted or overloaded during periods of high IOPS.
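One way a read replica could be realized, as a sketch under stated assumptions: the post does not name the database engine or paths, so the example below assumes a MySQL container image, an EFS mount at `/mnt/efs`, and a local volume at `/var/lib/db-data`. The replica periodically pulls the master's data from EFS (giving the eventual consistency described above) and runs its container with writes disabled.

```shell
# On each read-replica instance (engine, image, and paths are assumptions):

# 1. Pull the latest master data from the EFS mount into the local volume,
#    mirroring the master's cron-based sync in the opposite direction.
rsync --archive --delete /mnt/efs/db-data/ /var/lib/db-data/

# 2. Run the database container against the local copy; --read-only guards
#    against accidental writes from the replica side.
docker run -d --name db-replica \
  -v /var/lib/db-data:/var/lib/mysql \
  -p 3306:3306 \
  mysql:8.0 --read-only=1
```

Re-running the rsync step on a schedule keeps each replica converging toward the master's state while EFS remains the single write target.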

The launch template used for the autoscaling group passes the configuration details to the EC2 instances through the user data.

User Data

The user data passed during EC2 instance launch configuration does the following:

  1. Mounts an EFS volume
  2. Installs AWS CLI
  3. Installs Docker
  4. Creates Docker host instance volume and syncs with EFS
  5. Docker logs into ECR and pulls the database container image
  6. Runs the database container, mounting the host instance Docker volume at the data folder.
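Putting those six steps together, the user data might look roughly like the sketch below. The EFS file-system ID, region, account ID, image name, and paths are all placeholders (the post does not specify them), and an Amazon Linux-style `yum`/`systemctl` environment is assumed.

```shell
#!/bin/bash
# User data sketch for the database instance. All IDs, regions, paths,
# and image names are illustrative placeholders.
set -euo pipefail

# 1. Mount the EFS volume
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs

# 2. / 3. Install the AWS CLI and Docker
yum install -y awscli docker
systemctl enable --now docker

# 4. Create the Docker host volume, seed it from EFS, and schedule
#    the per-minute sync back to EFS
mkdir -p /var/lib/db-data /mnt/efs/db-data
rsync --archive /mnt/efs/db-data/ /var/lib/db-data/
echo '* * * * * root rsync --archive --delete /var/lib/db-data/ /mnt/efs/db-data/' \
  > /etc/cron.d/efs-sync

# 5. Log in to ECR and pull the database container image
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/mydb:latest

# 6. Run the container, mounting the host volume at the data folder
docker run -d --name db \
  -v /var/lib/db-data:/var/lib/mysql \
  -p 3306:3306 \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/mydb:latest
```

Because a replacement instance runs this same script at launch, failover needs no manual steps: the new instance mounts EFS, seeds its local volume from the last sync, and starts the container.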

This solution works well for customers looking for an alternative way to host databases with autoscaling and HA capability. EFS acts as a backup storage volume and can be easily backed up to S3 using AWS Backup.

