Making a PaaS
Deploying an application into an off-the-shelf PaaS offers a low barrier to initial entry, and can be a great way to get an early PoC up and running. In our dealings with customers who demand flexible, scalable platforms, however, we’ve found that other factors are often in play that make a different, more bespoke approach beneficial, while still maintaining the ease of use and quality of experience that a PaaS offering can provide.
Let me explain. Launching an application on a PaaS service can initially seem attractive, but as a platform’s popularity and maturity grow, we typically see the need for iterative changes to adapt its operation; this may be to save money, introduce new functionality, or incorporate emerging technologies. This can be difficult to do with a PaaS deployment — once an application is live, going back to re-architect the underlying infrastructure using an IaaS model instead of PaaS is likely to be a painful process.
Many customers demand flexibility that cannot be offered by PaaS platforms — the lack of granular control over the environment outside the application code is often a deal-breaker, and being unable to use software platforms the PaaS provider does not support can force compromises in application architecture. Networking requirements can also rule out deploying into a PaaS: the need for low-latency and/or secure connections to on-prem or other providers’ systems often drives the requirement for a more tailored infrastructure.
In contrast to using existing PaaS offerings, we at Cloudreach have had success using IaaS orchestration, automation and containerisation to create bespoke PaaS-like platforms for customers — this allows fine-grained control of underlying infrastructure when required, while at the same time abstracting away the detail of the infrastructure from an application deployment perspective. Hybrid “part-PaaS” platforms using Elastic Beanstalk — as it allows control over the underlying infrastructure — are also an option, but that’s a topic for another blog entry…
From the IaaS ground up, we typically recommend tools such as Troposphere to generate CloudFormation templates programmatically and Boto to drive the AWS APIs that provision the resources — this enables customers to build and orchestrate using infrastructure-as-code techniques, which have the advantage of keeping configuration under version control.
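To make the template-generation idea concrete, here is a minimal stdlib-only sketch of the kind of CloudFormation JSON such a pipeline produces — in practice a library like Troposphere builds the same structure through its object API, and Boto then submits it to CloudFormation. The function name, AMI ID and resource names below are illustrative placeholders, not values from a real deployment.

```python
import json


def make_template(ami_id: str, instance_type: str = "t2.micro") -> str:
    """Build a minimal CloudFormation template for a single app server.

    A hand-rolled sketch of the JSON a tool like Troposphere emits;
    keeping code like this under version control is what gives us
    infrastructure-as-code.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Single application server (illustrative only)",
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    # ImageId would be a baked base AMI (placeholder here)
                    "ImageId": ami_id,
                    "InstanceType": instance_type,
                },
            }
        },
        "Outputs": {
            "AppServerId": {"Value": {"Ref": "AppServer"}},
        },
    }
    return json.dumps(template, indent=2)


if __name__ == "__main__":
    print(make_template("ami-00000000"))
```

Because the template is just generated text, it can be diffed, code-reviewed and rolled back like any other source file — which is the real win of the approach.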
Building base AMIs with a tool such as Packer ensures repeatability, and in a similar way we suggest using configuration management tools such as Chef or Ansible to provision individual instances according to their roles within the platform, if required. Application deployment using a container format/runtime such as Docker can bring a PaaS-like user experience — a Docker-centric approach may drive the use of specialised operating systems such as CoreOS, orchestrators such as Kubernetes, or even hosted container services such as AWS ECS, which we are currently trialling internally. A quick side-note: this is not to equate containerisation with DIY PaaS solutions, but to point out that the former can be a useful building block for the latter.
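As a sketch of the AMI-baking step, a Packer template declares where to build (the builder) and what to install (the provisioners); the region, source AMI and package list below are placeholders chosen for illustration, not a recommended base image.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami": "ami-00000000",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "base-ami-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y docker.io"
      ]
    }
  ]
}
```

Running `packer build` against a template like this produces a timestamped AMI every time, so the same baked image can be referenced from the CloudFormation templates rather than configuring instances by hand.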
Moving more towards a true in-house PaaS, a layer such as Cloud Foundry can bootstrap itself in AWS — the requirements around how an environment will be managed influence, or even dictate, the build technology to be used. Micro-PaaS environments such as Flynn or Deis can also drive CloudFormation to provision their own underlying infrastructure; they are improving all the time, but at the time of writing we would be wary of deploying them in production.
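Driving CloudFormation in this way boils down to launching a stack from the template the platform ships. As an illustrative command sequence only — the stack name, template file and key pair below are placeholders:

```
# Launch a stack from a template such as the one a micro-PaaS provides
aws cloudformation create-stack \
  --stack-name my-micro-paas \
  --template-body file://platform.json \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair

# Wait for provisioning to finish, then inspect the stack's outputs
aws cloudformation wait stack-create-complete --stack-name my-micro-paas
aws cloudformation describe-stacks --stack-name my-micro-paas
```

The same create/wait/describe cycle applies whether the template was hand-written, generated by Troposphere, or shipped by the micro-PaaS itself.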
We’ll be expanding on some of the above points in future blog entries, but in the meantime please do get in touch if you’d like to know more about how we’ve helped our customers build stable, flexible and scalable platforms within AWS.