
What I Learned at AWS re:Invent 2016: DevOps and Microservices in the Enterprise

One of the biggest themes at re:Invent 2016 was serverless computing, a concept that is sure to make a huge impact in the world of DevOps. No matter who I talked to, which sessions I attended, or which booths I spent time at, it was apparent that serverless computing is on everyone’s mind, and many teams have even taken the plunge and started using it for test/dev and low-priority production services. It seems the developer community just might have vaulted right past containers and into this brave new world of serverless computing.

Serverless Computing: A Brief History

The first iteration (for our purposes) of development and infrastructure working together is the venerated two-tier web and database server model. We built apps, created a relational DB schema, and hosted those components on physical and, eventually, virtual servers. The operations team was responsible for the hardware, operating system, patching, and potentially some of the database and middleware software. The development team handled the database and middleware layers, and of course building, testing, and running the code. It’s a model so ubiquitous that it’s still used in a vast number of workloads within the enterprise.

Through virtualization, we began to shed some of those operational layers by reducing the amount of hardware we had to run. It became convenient to advance our build tools and create templates that could easily be deployed and managed… but it wasn’t enough. We were still forced to manage countless redundant operating systems, maintain patching cycles, and so on. Enter containers.


Containers take the concept of a virtual server one step further by virtualizing the OS process management and filesystem so that both Dev and Ops manage only applications. We create images with minimal overhead to run standard containers that can be cloned a nearly infinite number of times to run applications and processes. The lifecycle of a container may be days or minutes or even seconds. As a result, we no longer have to worry about patching individual machines: a vulnerability is addressed in the base image, and running containers simply cycle out as new patched versions come online.

This sounds great, but we haven’t avoided the need to manage the underlying physical or virtual machines running our container images. We’ve decoupled at the OS level instead of the VM level, but we still have a potentially huge fleet of container servers to maintain. What if we could take it one step further?

Enter Serverless Computing

Serverless computing is the idea that we hand someone else (Amazon Web Services in this case) a small unit of code or work to be run. A managed scheduling process takes our job and places it on a pool of compute resources that we know nothing about and, frankly, don’t need to. Our job runs on a unit of memory (and, by extension, CPU) that lives only long enough to complete the task; AWS Lambda, for example, is billed in increments of 100ms.

Why is this great? Just as with containers, we can write a chunk of code and hand it to a scheduling system to run, and we don’t have to manage an operating system in any way. But the best part is that we no longer have to manage any physical or virtual container servers to run our compartmentalized work units. Amazon Web Services handles all of that for us. All we need to worry about is picking a supported language (Node.js, Python, Java, and C# at the time of this writing) and making our code or executable do what we want.
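To make that concrete, here is a minimal sketch of what one of these work units looks like in Python. Lambda invokes whatever function you name in the configuration, passing the triggering event and a context object; the greeting logic here is purely illustrative.

```python
def lambda_handler(event, context):
    # 'event' carries the trigger's payload; its shape depends on the
    # source (S3 notification, API Gateway request, test invocation, etc.).
    name = event.get("name", "world")

    # Whatever we return becomes the function's result. There is no server
    # to manage; this code exists only for the duration of the invocation.
    return {"message": "Hello, {}!".format(name)}
```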

Let’s not forget about the CAPEX benefits of this serverless computing model. Instead of investing in servers that sit around for three to five years only partially utilized, or even paying hourly prices for cloud VMs, we run a job for a matter of seconds and are billed only for the time we use, to the nearest 100 milliseconds. When harnessed properly, this has the potential for serious savings over the legacy hardware model.
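To put rough numbers on that, here’s a back-of-the-envelope estimate in Python. The per-request and per-GB-second prices below are assumptions based on Lambda’s published pricing at the time of writing, and the free tier is ignored, so treat this as a sketch rather than a quote.

```python
# Rough Lambda cost model. Prices are assumed from published 2016 rates
# and the free tier is ignored; verify current pricing before relying on it.
PRICE_PER_REQUEST = 0.20 / 1000000.0  # $0.20 per million requests
PRICE_PER_GB_SECOND = 0.00001667      # per GB-second of compute


def monthly_cost(invocations, duration_ms, memory_mb):
    # Duration is billed rounded up to the nearest 100 ms.
    billed_ms = -(-duration_ms // 100) * 100
    gb_seconds = invocations * (billed_ms / 1000.0) * (memory_mb / 1024.0)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND


# e.g. one million runs of a 200 ms, 128 MB function in a month:
print("${:.2f}".format(monthly_cost(1000000, 200, 128)))  # ~ $0.62
```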

Event-Based Computing

With Lambda functions comes the notion of event-based computing. When we move our logic into a Lambda function, the first question is often: “How do I access or trigger my code?” It’s not as if we have a web server that is online perpetually. Furthermore, we’ve established that the compute capacity running our function lives only long enough to finish the job. How can we wrap our heads around something so stateless?

The answer is that Lambda functions trigger based on events we configure: new files appearing in S3 buckets, an alert event in CloudWatch, or even a voice command (Lambda is the heart behind Amazon’s Alexa, after all). One of the most useful triggers for our purposes, however, is API Gateway. In other words, I can trigger the execution of my Lambda function by hitting a simple URL defined in API Gateway.
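As an illustration, here’s a sketch of a handler wired to S3 object-created notifications; the event shape follows S3’s standard notification payload, and the logging is placeholder logic.

```python
import boto3  # AWS SDK for Python, preinstalled in the Lambda runtime

s3 = boto3.client("s3")


def lambda_handler(event, context):
    # S3 delivers one or more Records per notification event.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]  # note: arrives URL-encoded
        obj = s3.head_object(Bucket=bucket, Key=key)
        # Placeholder: in practice we'd resize an image, parse a log, etc.
        print("New object s3://{}/{} ({} bytes)".format(
            bucket, key, obj["ContentLength"]))
```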

An example of this might be our AHEAD Aviation BagApp, a demo application that lets fictitious passengers traveling with AHEAD Aviation check their baggage status. The user clicks a button to display their bag’s current status; that button is ultimately a link in the front end. The link is a URL defined in API Gateway that triggers a Lambda function to query a database and retrieve the necessary baggage information. The Lambda function execution lives just long enough to retrieve the information returned by that one database call, a mere 2-3 seconds.
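BagApp’s actual code isn’t shown here, but a handler behind an API Gateway URL might look roughly like the sketch below. The DynamoDB table name, key, and attributes are hypothetical stand-ins, and the event shape assumes API Gateway’s Lambda proxy integration.

```python
import json

import boto3

# Hypothetical table; the real BagApp schema may differ entirely.
table = boto3.resource("dynamodb").Table("BaggageStatus")


def lambda_handler(event, context):
    # With the proxy integration, query-string parameters arrive
    # under 'queryStringParameters'.
    params = event.get("queryStringParameters") or {}
    bag_tag = params.get("bagTag")
    if not bag_tag:
        return {"statusCode": 400,
                "body": json.dumps({"error": "bagTag is required"})}

    # One short database call, then the function's compute goes away.
    item = table.get_item(Key={"bagTag": bag_tag}).get("Item")
    if not item:
        return {"statusCode": 404,
                "body": json.dumps({"error": "bag not found"})}

    # Assumes string attributes, e.g. {"bagTag": ..., "status": "Loaded"}.
    return {"statusCode": 200, "body": json.dumps(item)}
```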

Serverless Computing Loves Microservices

Another fairly new development concept, microservices, grew out of the Service-Oriented Architecture (SOA) paradigm of the last ten or so years. This concept eschews large monolithic applications with dozens of interconnected parts in favor of a myriad of tiny services, each responsible for one task. A good example is amazon.com itself: the microservices approach breaks that application down into discrete services responsible for things like recommending items to a user, processing a payment, dispatching an order on the back end, maintaining the list of items in a shopping cart, displaying product reviews, and so on.

By breaking our code down into microservices, we decouple the pieces, making it easier to maintain the code, easier for new hires to learn the codebase, and easier to change or update one part of an application without causing a cascading failure through tight coupling with other services. It’s a tradeoff, however: as the monolith breaks down into microservices, we may end up with dozens of individual processes to host and support as DevOps teams.

This is where serverless computing comes in. By pairing AWS’s API Gateway with a Lambda function, we can create a job for a single backend microservice in just a few clicks. Repeating that process a few dozen times, once per service, is far less daunting than building a fleet of 60 servers to host the app. It’s also considerably easier to maintain, since all I have to care for is the Lambda function itself (source control, backups). I can even leverage modern frameworks like Chalice or Zappa to make functions easier still to create, as in the sketch below.
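As a taste of what those frameworks buy you, here is a minimal Chalice app; the app name and route are illustrative, and the canned response stands in for a real datastore call. Running `chalice deploy` creates the Lambda function and the API Gateway route in one step.

```python
from chalice import Chalice  # pip install chalice

app = Chalice(app_name="bagapp-demo")  # illustrative name


@app.route("/bags/{bag_tag}")
def bag_status(bag_tag):
    # In a real service this would query a datastore; a canned
    # response keeps the sketch self-contained.
    return {"bagTag": bag_tag, "status": "Loaded"}
```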


We’ve barely scratched the surface on serverless computing and microservices in this post. You might be asking: “What about security?”, “How do I scale my Lambda functions?”, or “What does it cost?” You may even be saying, “Wow, this sounds great, but I’ll never be able to refactor my giant legacy applications to take advantage of this.” The fact is you can, and we’ve seen it work. We’d love to talk about it and show you how, because if there’s one thing I learned at re:Invent this year, it’s that serverless computing is going to take DevOps by storm in the coming months and years. For demos on DevOps and microservices (coming soon), check out our demonstrations from AWS re:Invent and schedule your own with our experts today!
