From Monolith to Micro-Serverless

We built a production-ready application in less than 3 weeks (eyebrow raise). I’m talking an available and scalable system that runs on-demand (what’s the catch?). And we did this whole thing without the help of a single DevOps guru (blasphemy!).

If the title of this article didn’t already give it away, we built a set of loosely-coupled serverless applications that replaced an old monolith and its painful, manual processes. Our micro-serverless architecture can be seen in this incredibly colourful diagram:

[Image: Micro-serverless architecture diagram]

Step 1: Finding the tools

Cloudreach - Cost Control

Cost Control, a Cloudreach product, is a tool that (among other things) looks at an organization's cloud spend and makes rightsizing and reservation recommendations to reduce monthly costs. While the tool met our (and our customers') needs for years, the manual nature of the recommendation process didn't scale alongside customer demand. While the team wanted to focus on continually building new features, they often found themselves buried in a mountain of spreadsheets and bash scripts.

AWS and the Serverless Framework

The problem

Transforming this component of Cost Control into an automated, right-sizing machine was a no-brainer. The problem was that we had just a couple of weeks to create an application featuring a client, an API, authentication, CI/CD, and cron jobs - not to mention that none of us had DevOps experience, so provisioning servers was out of the question. Finally, if that wasn't enough, our stakeholders asked us to keep our solution's O&M (operations and maintenance) costs to a minimum.

Enter: the Serverless framework (angelic singing).

Call it FaaS (functions as a service), serverless computing, or digital gold; it doesn't matter. Serverless was a life-saver. With just a few CLI commands, we were able to get our application scaffolded and deployed to AWS.
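To give a feel for just how few commands, scaffolding and deploying a brand-new service looks roughly like this (the service name here is made up):

```bash
# Scaffold a new Node.js service from the framework's built-in AWS template
serverless create --template aws-nodejs --path recommendations-service
cd recommendations-service

# Package the functions and deploy them to AWS (via CloudFormation under the hood)
serverless deploy
```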

Step 2: Going Micro-serverless

Like every other hack-happy developer, we threw our hands on the keyboard so fast that we quickly created a small mess; all of our Serverless functions were shoved into one massive application. To remedy this, we hit the pause button, did a little meditation, and broke our functions into two camps: 1) API-triggered and 2) non-API-triggered. The non-API-triggered code was simply code that needed to run whenever an event occurred (an SNS topic publish, an S3 object put, a scheduled task, etc.). This led to our "micro-serverless" architecture (a set of loosely-coupled, serverless applications), which looked a lot like this...

src/
  api/
    [Serverless Application]
  jobs/
    [Serverless Application]


Each Serverless application, in turn, was scaffolded like this:

app/
  lib/
  functions/
  test/
  package.json
  serverless.yml


Pretty neat and tidy, right? To make the two camps concrete, here's a rough sketch of how functions in each app's serverless.yml might be wired up (the handler, bucket, and schedule details are made up for illustration):
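```yml
# api/serverless.yml -- functions triggered by HTTP requests through API Gateway
functions:
  getRecommendations:
    handler: functions/recommendations.get
    events:
      - http:
          path: recommendations
          method: get

# jobs/serverless.yml -- functions triggered by events and schedules
functions:
  nightlyRightsizing:
    handler: functions/rightsizing.run
    events:
      - schedule: rate(1 day)
  processUpload:
    handler: functions/uploads.process
    events:
      - s3:
          bucket: cost-control-uploads
          event: s3:ObjectCreated:*
```

The next piece of our application was building the web client.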

Web Client (React)

React (a JavaScript library for building user interfaces) made sense for our web client because we needed a UI that was stateful and component-based. Our timeline was also so short that we needed to take advantage of existing components and libraries.

Given our lack of servers at this point, it didn't make sense to create one just to serve up a front-end application. Enter: S3 and CloudFront.

S3 and CloudFront

Putting our React app in an S3 bucket guaranteed that our app would be highly available. To sweeten the pot a little more, we put this bucket behind Amazon's CDN (CloudFront), which gave our app not only high availability but also low latency.

In order to serve up a React application from an S3 bucket, the bucket needed to be configured with an index "document" and an error "document". This requirement exists so that S3 knows which asset to serve as your homepage and which asset to deliver in the event of an error. Since our application handles its own errors, our homepage document and error document were one and the same: index.html.
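With the AWS CLI, that configuration is a one-liner (the bucket name here is a placeholder):

```bash
# Point both the index and error documents at index.html so the React app
# can handle its own routing and error states
aws s3 website s3://cost-control-web --index-document index.html --error-document index.html
```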

Deploying our application to an S3 bucket with CloudFront meant that pushing a new version of our web client was as simple as:

1) rebuilding our application using 'npm run build'

2) copying the build artifacts into our s3 bucket

3) busting the CloudFront cache so it would serve our updated application.
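In CLI terms, those three steps boil down to something like this (the bucket name and distribution ID are placeholders):

```bash
# 1) Rebuild the React app into the build/ directory
npm run build

# 2) Copy the build artifacts into the S3 bucket, removing stale files
aws s3 sync build/ s3://cost-control-web --delete

# 3) Invalidate the CloudFront cache so the new version gets served
aws cloudfront create-invalidation --distribution-id E2EXAMPLE123 --paths "/*"
```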

Cognito and AWS Amplify

We knew that our client needed authentication. Enter: Cognito. With Cognito, we have no need to manage user secrets; AWS takes care of all that. What's even better is that the AWS Amplify library plays quite nicely with Cognito. If you haven't used Amplify, it's well worth checking out. Just to give an example of how easy sign-in is with Amplify:

```javascript
import { Auth } from 'aws-amplify';

// Inside an async function: signs the user in against the Cognito
// user pool that Amplify has been configured with
await Auth.signIn(email, password);
```
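For that one-liner to work, Amplify just needs to be pointed at your Cognito user pool once, typically in the app's entry point. A minimal sketch (the region and pool IDs are placeholders):

```javascript
import Amplify from 'aws-amplify';

// One-time setup; the values below come from your Cognito user pool
Amplify.configure({
  Auth: {
    region: 'us-east-1',
    userPoolId: 'us-east-1_XXXXXXXXX',
    userPoolWebClientId: 'XXXXXXXXXXXXXXXXXXXXXXXXXX',
  },
});
```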


The other tricky part of our client was securely uploading files to S3. Thankfully, Amplify has a service for this as well... cue the code snippet:

```javascript
import { Storage } from 'aws-amplify';

// Uploads the file to the S3 bucket Amplify has been configured with.
// Note that the object key comes first, then the contents.
await Storage.put(fileName, fileContents);
```

CI/CD

The final step of our micro-serverless application was creating an automated build and deployment process. We chose CircleCI because it was 1) cloud-hosted and 2) super simple. After creating an AWS user with permissions scoped to our deployment, we were up and running. Each of our three applications had test and deployment commands almost identical to this:

```yml
- run:
    working_directory: ~/cost-control/api
    name: Install app dependencies
    command: npm ci

- run:
    working_directory: ~/cost-control/api
    name: Run API tests with code coverage
    command: npm test

- run:
    working_directory: ~/cost-control/api
    name: Deploy API
    command: npm run deploy
```


You may be thinking that 'npm run deploy' is doing a lot under the hood... not so! Deploy just calls the Serverless CLI's 'serverless deploy'. This command creates a build artifact and then deploys that artifact using AWS CloudFormation. No need to mess with CodePipeline, CodeBuild, or CodeDeploy.
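For reference, the deploy script is nothing more than a thin wrapper in each app's package.json (a minimal sketch):

```json
{
  "scripts": {
    "deploy": "serverless deploy"
  }
}
```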

A word on deployment: the 'serverless deploy' command takes an optional '--stage' argument, and it is unbelievably helpful. With this single argument, you can manage production, QA, and a plethora of other deployment stages. (Pro tip: when CircleCI checks out your code, you can also use the branch name to automagically decide which deployment stage to use - see the sketch below.)
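A minimal sketch of that pro tip, using CircleCI's built-in CIRCLE_BRANCH environment variable (the branch-to-stage mapping is just one convention):

```yml
- run:
    working_directory: ~/cost-control/api
    name: Deploy API to the stage matching the git branch
    command: |
      if [ "$CIRCLE_BRANCH" = "master" ]; then STAGE="production"; else STAGE="qa"; fi
      npm run deploy -- --stage "$STAGE"
```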

Step 3: Ongoing maintenance

Our ongoing costs will be twofold: 1) maintenance and 2) application usage.

Hands down, maintenance is the worst part of the software lifecycle. The good news is that Serverless puts a lot of maintenance in someone else's hands: AWS's. Although there are tradeoffs to this approach, in general it's nice to have someone else managing things like server health and auto-scaling. We still manage application code, but bugs are often 10x easier to fix when we're not also digging into infrastructure issues. Saved time equals saved money.

If you haven't drunk enough Serverless Kool-Aid at this point, here's another cup: we only pay for our application when it's in use. Period. That's the beauty of Serverless. A Serverless application is charged only for the resources it uses, not for the uptime of those resources. There are exceptions to this rule for things like RDS (which generally requires an always-on instance), so your application architecture may need a little extra planning to account for them.

Summary

Monoliths are scary. Sometimes the thought of breaking a monolith into services is more daunting than just maintaining the application in its monolithic state.

What Serverless does so well is abstract away all the infrastructure-related details that make the move to a services-based application so difficult. Because of this, getting a Serverless application up and running is a piece of cake and usually costs a fraction of what running a server around the clock would.

So if you’re feeling stuck in a monolithic black hole, give Serverless a try. You will likely be surprised at how easy it is to get your application deployed in the cloud!

  • serverless
  • aws-amplify
  • micro-serverless