Evolution in the Cloud: From EC2 stacks to Serverless

Chris Stura 12th September 2018
State of Development in the Cloud

Since the public cloud was introduced, developers have had to navigate constant changes to the way that applications are designed and developed.  Chris Stura, Chief Architect at Cloudreach, reflects on this evolution, considering the impact of serverless computing and where we can go from there.

I have been writing code for as long as I can remember. Now, I know that there are probably more “veteran” developers out there, but I’m certain that not many can boast having tackled Sams’ Teach Yourself Borland C in 21 Days at age 10.

 

I have seen a long list of trends come and go in development, from mainframe development to client-server, the web revolution and now cloud. We have finally come full circle in the infrastructure space, going from “green screen terminals” (thin clients connected to mainframes) to “browser-based single page web apps” (thin clients connected to the cloud).

 

I don’t, for one minute, want to suggest that the Cloud is the same as the mainframe, but what does make these two worlds similar is the way that applications are experienced by the end user. Of course, our modern web apps and mobile apps are more colourful and interactive than the green screens of the past, but the concept of a server-delivered, dumb, generic shell application, which downloads and runs application code in parts as it is used, is still with us. It is easy to see how the powerful mainframes of the past resemble the massive, distributed, parallel computing clusters which scale almost infinitely in the cloud today.

 

If we look at the cloud itself, we find a place where everything seems to come together and almost anything is possible. I suppose, as we have evolved and made our application models more “cloud-centric”, we have also increased the cloudiness of those applications. We have gone from virtual machines with software-defined networking running on swarms of servers, to more sophisticated PaaS services that provide much of the application-level infrastructure needed to build truly scalable applications, all the way to a model where we don’t need to worry about the servers at all anymore.

 

In the Beginning…

Let’s go back a few years to the first cloud services. I believe the first real service which addressed the developer community in the cloud was probably Elastic Beanstalk on AWS. Elastic Beanstalk was released on January 19th, 2011 (about seven years ago). It made the development of J2EE web applications based on Tomcat easier and took away the need to manage the application server (though this still ran on EC2 instances under the hood, where you still had to set up auto-scaling, application monitoring and a number of other things).

 

At the time (and arguably still today), Java workloads were the most common, so this service made a lot of sense and got people talking about and experiencing the many advantages of building applications in the Cloud. Other offerings, like S3 and SQS, provided PaaS building blocks which made it easy to create scalable applications without having to manage the infrastructure behind popular enterprise components (like message queues and file storage).

 

I remember, in my own experience, the plugin that Amazon had created for Eclipse to take advantage of Elastic Beanstalk.  This made the developer experience even better because we could go directly from our traditional IDE into the Cloud. It just couldn’t get any better until…

 

And then there were Containers…

Containers in themselves are nothing new, really. We had them in Solaris back in February of 2004, and then we saw them come to Linux with OpenVZ in 2005. With Docker, though, they really took off. I think this is largely due to the fact that, when we moved to a cloud computing model, the virtual machine took the place of the physical server, so we needed a way to replicate virtualisation on top of virtualisation. (That may not sound quite right, but you get the picture: many isolated workloads running on the same physical CPU and memory, delivering optimisation. That is what made VMware what they are today.)

 

Of course, Docker brought other benefits, like composability. Also, by making small, encapsulated applications easier to manage and run, it allowed us to introduce the microservice architecture we design applications around today.

 

The concept of stateless, encapsulated business logic had already been introduced with the EJB 3.0 spec way back in 2006. Docker and microservices, however, started an architecture revolution and helped architects move away from monolithic design towards a more service-oriented architecture, in which many different high-level services are assembled and maintained independently and woven together to form an application. This introduced assembly and deployment complexity, which helped the DevOps discipline to thrive, bringing with it the concept of business agility for larger organisations working on more complex projects.

 

There is much more that could be said about containerisation and its benefits, as well as what is happening in the ecosystem but that would be better suited to an article on its own.

 

And now Serverless…

In many ways, I still see serverless computing and containers in competition with each other, and in fact there have been times where I have preferred a container-based approach over a serverless design. I think this is mainly because there are fewer limitations to what you can accomplish with containers at this point in time (and fewer unknowns).

 

However, things are getting better in the serverless world. With the promise to completely remove any and all management complexity from the mix (“NoOps”), to take away the need to consider scalability, and to seamlessly integrate with an even broader range of cloud PaaS software infrastructure services, serverless is making its way quickly into the development landscape.

 

If I reference my Java enterprise background, I can see in the serverless model the ability to use a cloud platform like AWS as a sort of infinite, hyperscale application server with few limitations. We can centre our designs on business logic (mostly) and create efficient, event-driven architectures that run our software only when it needs to be run. We can then break down the complex problems of modern software into small, function-based components, using cloud services to manage persistence and storage.
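As a minimal sketch of what that function-based, event-driven model looks like in practice, here is the shape of an AWS Lambda-style handler in Python. The event structure mimics an S3 notification; the specific keys and the scenario are illustrative assumptions, not something prescribed by the platform or this article.

```python
import json


def handler(event, context):
    # The platform invokes this function only when an event arrives
    # (for example, an object landing in an S3 bucket); none of our
    # own server processes run between invocations.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]

    # Business logic would go here. Note there is no local state:
    # persistence is delegated to managed services (S3, DynamoDB, etc.).
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}
```

The point is the inversion of control: instead of a long-running application server waiting for requests, the cloud itself decides when the code runs, which is what makes the "infinite application server" analogy fit.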

 

Serverless frameworks offer many advantages and are the first real attempt to unleash the true power of the cloud. This is because the architectures are woven into the other services offered by the Cloud Service Providers (CSPs), allowing you to design applications which are driven and managed by the clouds themselves.

 

Being so deeply integrated into the cloud environments, serverless architectures come with the trade-off of deep vendor lock-in. The pricing models are also based on the number of executions, and calculating the true cost curve of a serverless application today can be quite complex. However, this does fit well with the pay-as-you-go model adopted by the cloud providers and, in that way, blends very well with that philosophy.
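To see why the cost curve needs care, consider the two dimensions a Lambda-style bill typically has: a per-request charge and a charge per GB-second of execution time. The sketch below uses placeholder prices (the constants are assumptions for illustration, not current AWS rates) and ignores free tiers:

```python
# Illustrative, assumed prices -- not actual published rates.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per 1M invocations
PRICE_PER_GB_SECOND = 0.00001667    # USD per GB-second of compute


def monthly_cost(requests, avg_duration_ms, memory_mb):
    """Rough monthly estimate for one function, ignoring free tiers."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    # GB-seconds = invocations x seconds per run x memory in GB
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


# 10M requests/month at 200 ms and 512 MB comes to about $18.67 here.
# Unlike a fixed-size fleet of EC2 instances, the bill scales linearly
# with traffic, which is both the appeal and the modelling challenge.
```

Even this toy model shows why the curve is hard to predict: cost depends on memory allocation and execution time per invocation, not just on request volume.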

 

My opinion is that we will see cloud providers investing more in this area, and see these frameworks become more and more integrated with their services. I also expect serverless frameworks to get their own dose of standardisation and abstraction, which will remove the roadblock of vendor lock-in/dependence and allow true multi-cloud software to be built.

 

What’s next?

I have always loved to speculate on what we may see in the future; I’m sure everyone in technology does, because things in our industry tend to move incredibly quickly. I think the best way of going about predicting the future is to look at what is emerging in the present. The trends today are more and more focused on data and AI. Compute infrastructure and storage are quickly becoming a commodity, and serverless is now an established buzzword for most. Experience with AWS Lambda, for example, is now one of the most sought-after skills in the IT marketplace.

 

If we want to identify a future trend, we should try to push some of these technologies to their limits, see them come together, and then look at what becomes possible once they have become commodities themselves.

 

Data is already fuelling AI, and I believe that these two will come together quickly to produce general-purpose artificial intelligence. The intelligence models created should, in turn, produce data on a scale that we cannot imagine today. General-purpose intelligence will then start to usher in a new industrial revolution that will take automation to the next level and transform society as we know it.

 

The Cloud will continue to evolve, but the models that we create will be largely centred on interaction with general-purpose intelligence to drive actions in the physical world. We can already see this in its infancy in IoT and Edge computing, where services like Lambda are starting to make a play. Event-driven computing will be the future, but the services we currently interact with, and design our software around, will no longer be used at such a low level. As developers, we will instead ask general-purpose intelligence to identify trends or occurrences, and those high-level events will trigger software which will both manipulate physical devices and feed information back to the AI, so that the algorithms within the software can themselves become fuzzy.

 

Conclusions

It is undeniable that public cloud has profoundly influenced the development community. It has changed the way we think about developing software and opened up more possibilities for developers than there have been for quite some time. I think it is safe to say that the Cloud has driven a revolution in the development community in many ways, from the introduction of DevOps to the software-defined data centre, serverless computing, containers, and hyperscale. It has also helped with the commoditised consumption of advanced technologies such as artificial intelligence and big data. I believe that this is one of the most exciting times in history to be in the technology industry and, though most of my excitement comes from the great advances in artificial intelligence and robotics, there is a lot to be excited about given the pace at which things are changing.

 

