Using Sceptre to package and deploy complex AWS Lambda functions
At Cloudreach, hundreds of deployments are run every day to create or modify Lambda functions for our customers. Things usually go smoothly, until there is a dependency on a package which does not exist in the AWS Lambda runtime. The question then arises: how do we package and deploy such a function? That’s where Sceptre comes in!
Why use Sceptre?
I often observe engineers manually creating "zip" archives for deployment and committing them to a version control system, such as Git. Since the zipped archive has to be recreated and deployed each time the source or dependency files change, this method usually leads to inconsistent deployments, along with potentially hours of debugging.
There are a number of solutions available to address this, such as AWS CloudFormation and Terraform, but I’d like to focus on a tool called Sceptre.
Sceptre is an open-source tool, created at Cloudreach, that enables users to utilise the cloud-native features of infrastructure as code (CloudFormation). It extends the functionality of CloudFormation and reduces the complex and error-prone work of managing its templates.
You can learn more about using Sceptre in this previous post: a tool for driving AWS CloudFormation.
Gaining flexibility with Sceptre
One of Sceptre’s benefits is being able to read the content of a single configuration file and load these values into a generated CloudFormation template.
Minimal stack configuration to deploy lambda function (simplified):
```yaml
template_path: templates/lambda.py
parameters:
  Name: my-awesome-function
  Role: !stack_output role::Arn
sceptre_user_data:
  Runtime: python2.7
  Handler: index.handler
  Code:
    ZipFile: !file_contents functions/my_function.py
```
Reusable Python-based template (simplified):
```python
from troposphere import Template, Parameter, Ref, Output, GetAtt
from troposphere.awslambda import Function, Code


def sceptre_handler(sceptre_user_data):
    return SceptreResource(sceptre_user_data).template.to_json()


class SceptreResource(object):
    def __init__(self, sceptre_user_data):
        self.template = Template()
        name = self.template.add_parameter(Parameter("Name", Type="String"))
        role = self.template.add_parameter(Parameter("Role", Type="String"))
        sceptre_user_data["FunctionName"] = Ref(name)
        sceptre_user_data["Role"] = Ref(role)
        sceptre_user_data["Code"] = Code(**sceptre_user_data["Code"])
        function = self.template.add_resource(
            Function("Function", **sceptre_user_data)
        )
        self.template.add_output(Output("Arn", Value=GetAtt(function, "Arn")))
```
As you can see, Sceptre is easy to use and gives you freedom of implementation. For more information, please refer to the official Sceptre documentation.
It’s worth noting that a CloudFormation template may embed inline Lambda code of up to 4096 characters. This is limited to a single file and is available only for the nodejs and python runtimes.
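For instance, a small, dependency-free handler like the following sketch (a hypothetical example, not taken from the project) fits comfortably within the inline limit:

```python
# Minimal dependency-free Lambda handler; small enough to be embedded
# inline via the ZipFile property (hypothetical example).
def handler(event, context):
    # Echo the incoming event back to the caller.
    return {"statusCode": 200, "event": event}
```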
If the embedded code size limit is reached or a dependency is introduced, the code should be packaged and uploaded to an S3 bucket. CloudFormation will then fetch this packaged Lambda instead.
Manual packaging and uploading to S3 bucket:
```yaml
template_path: templates/lambda.py
...
sceptre_user_data:
  ...
  Code:
    S3Bucket: my-deployments
    S3Key: lambda/my_function.zip
```
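The manual packaging step behind this configuration can be sketched as follows. This is an illustrative example, assuming a single-file function and the bucket/key names from the config above; the upload step shells out to the AWS CLI and requires valid credentials:

```python
import subprocess
import zipfile
from pathlib import Path


def package_function(source: Path, archive: Path) -> Path:
    """Zip a single-file Lambda so it can be uploaded to S3."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store the file at the archive root, where Lambda expects it.
        zf.write(source, arcname=source.name)
    return archive


def upload_to_s3(archive: Path, bucket: str, key: str) -> None:
    """Push the archive to S3 via the AWS CLI (requires credentials)."""
    subprocess.run(
        ["aws", "s3", "cp", str(archive), f"s3://{bucket}/{key}"],
        check=True,
    )


if __name__ == "__main__":
    zip_path = package_function(
        Path("functions/my_function.py"), Path("my_function.zip")
    )
    upload_to_s3(zip_path, "my-deployments", "lambda/my_function.zip")
```

Repeating these steps by hand on every change is exactly the error-prone workflow described earlier, which is what the automation below removes.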
When Sceptre runs for the first time, the Lambda function my-awesome-function is deployed from the S3 bucket. The function runs as expected until you change the code and try to redeploy the stack. CloudFormation won’t recognise that the function source code has changed, because the stack definition itself remains unchanged.
According to the Lambda documentation, in order for CloudFormation to detect changes, the S3Bucket or S3Key values must change. Another way is to introduce the S3ObjectVersion property, which should be kept in sync with the latest version of the uploaded artifact. The question then arises: what is the best way to keep the template up to date?
Sceptre uses the concept of plugins, known as "Hooks" and "Resolvers", to extend the functionality of CloudFormation. These concepts bring dynamic behaviour and auto-discovery into stack definitions. Cloudreach has released a project called sceptre-zip-code-s3 that aims to solve this issue. It keeps the main Sceptre project homogeneous by providing a convenient hook and resolver to package, upload and synchronise uploaded artifact versions. The repository also contains example configurations, templates and Lambda functions with different runtimes to experiment with.
How to extend existing projects
To get all the benefits of the sceptre-zip-code-s3 plugins, a few actions are required:
- Follow the how-to-install guide to set up the plugins in an existing project.
- Move the function file into a dedicated directory and rename it to index.py.
- Create a Makefile according to the function-makefile documentation.
- Adjust the config file with s3_package and s3_version calls.
New directory layout for functions/my_function.py:
```
functions
└── my_function
    ├── Makefile
    └── index.py
```
Updated stack configuration file:
```yaml
template_path: templates/lambda.py
hooks:
  before_create:
    - !s3_package functions/my_function
  before_update:
    - !s3_package functions/my_function
...
sceptre_user_data:
  ...
  Code:
    S3Bucket: my-deployments
    S3Key: lambda/my_function.zip
    S3ObjectVersion: !s3_version
```
With the project set up, Sceptre is able to detect any changes made to the source code or its dependencies. A new zip artifact is created and uploaded to the S3 bucket if the checksum has changed, and the CloudFormation template is then updated with the latest S3ObjectVersion value.
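The change-detection idea behind this can be sketched with a simple content checksum. This is a simplified illustration of the concept, not the plugin's actual implementation; the function names are hypothetical:

```python
import hashlib
from pathlib import Path
from typing import Optional


def sha256_of(path: Path) -> str:
    """Hex digest of a packaged artifact's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def needs_upload(archive: Path, last_checksum: Optional[str]) -> bool:
    """True when the artifact differs from what was last deployed.

    A first deployment (no recorded checksum) always triggers an upload;
    afterwards, an upload happens only when the zip's contents change.
    """
    return sha256_of(archive) != last_checksum
```

Each upload of a changed artifact produces a new S3 object version, and feeding that version into S3ObjectVersion is what makes CloudFormation see the stack as changed.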
To learn more about how sceptre-zip-code-s3 works, please see the cloudreach/sceptre-zip-code-s3 project on GitHub. Issues and pull requests are highly appreciated!
At the time of writing, sceptre-zip-code-s3 is already in active use by two production projects.
Happy deployment with Sceptre!
To view and contribute to the official Sceptre project, visit GitHub.