The Freedom of AWS Lambda

AWS Lambda is hardly new these days, yet many people are only starting to explore it – after all, not all of us have the flexibility to spend our free time exploring new tech or the freedom to experiment on the job, and many organizations simply can’t change on a dime and chase every cool technology they find.  There are growing pains here, to be sure, but AWS Lambda can give organizations the freedom to build products at a pace that can’t be matched in more traditional ways, and at a fraction of the cost.

What is AWS Lambda?

We’re all familiar with containers – third-party software that sits on a server and provides a set of services that our own code can take advantage of.  Containers are everywhere, and we don’t blink an eye at using them.  No one builds a custom web server – they deploy their web app behind one, like Apache HTTP Server or Nginx.  No one builds a standalone Java server from scratch – they build their app to conform to the Servlet spec, and then deploy it into a Servlet container, like Apache Tomcat or Jetty.  This tech has been around for years and has made us significantly more productive, because the containers abstract away a significant layer of complexity that we no longer need to worry about – we just build our products according to the rules (i.e. specifications and standards), and the container happily does its job.

AWS Lambda is simply the next iteration on this theme, taking advantage of the advances in virtualization over the last decade or so.  With each of the examples above, it’s someone’s responsibility to stand up a server or two (or more), install the container, deploy your code into it, and then maintain those servers with upgrades, fixes, etc., for the lifetime of the service.  When traffic gets high, you might need to spin up a few new servers – and then hopefully remember to decommission them when it’s back down to normal.  As it turns out, AWS Lambda is an abstraction layer that handles exactly those chores for you.
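To make that concrete, here is a minimal sketch of what a Lambda function can look like in the Python runtime (the file and event field names are illustrative) – all you write is a handler; the servers, the runtime, and the scaling live behind Amazon’s walls:

```python
# handler.py -- a minimal AWS Lambda function (Python runtime).
# There is no server to provision: you upload this file and point Lambda at
# "handler.lambda_handler" as the entry point.

import json


def lambda_handler(event, context):
    """Invoked by Lambda with the triggering event and runtime context."""
    # 'event' carries the payload from whatever invoked us
    # (an API Gateway request, an S3 notification, a scheduled rule, etc.).
    name = event.get("name", "world")

    # Whatever we return goes back to the caller (or is discarded for
    # asynchronous, event-style invocations).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```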

Serverless is NOT Scary

The term ‘serverless’ is a bit of a misnomer, as even Amazon’s CTO, Werner Vogels, admits.  Of course there is a server behind the scenes somewhere – it’s simply that as a programmer, architect or systems operator, you don’t need to touch it.  This shouldn’t be a totally unfamiliar paradigm: most of us have been working with virtual servers for the past 10-15 years anyway, and with tech like Docker containers more recently – we’ve simply shifted that responsibility to live behind Amazon’s walls.

This allows us to start thinking about our products not as applications, but as services.  The bundling and deploying of code is taken care of for us, so the actual value proposition of our work can take center stage.  In this way it can be extremely liberating – we can suddenly go from idea to value with a few clicks.

Imagine that your customers are demanding the ability to see and update their configuration through your API – with Lambda, you can stand up at least a pilot version of that capability in little more time than it takes to write the function itself.
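As a sketch of what that pilot could look like: a single Lambda function behind API Gateway, with the customer’s settings kept in a DynamoDB table.  The table, key, and field names below are invented for illustration, not part of any real API:

```python
# config_api.py -- hypothetical "get/update my configuration" endpoint,
# invoked through API Gateway's proxy integration. Table and field names
# are made up for the example.

import json
import boto3

# Created once per container and reused across invocations.
table = boto3.resource("dynamodb").Table("CustomerConfig")


def lambda_handler(event, context):
    customer_id = event["pathParameters"]["customerId"]

    if event["httpMethod"] == "GET":
        # Return the stored configuration, or an empty object if none exists.
        item = table.get_item(Key={"customerId": customer_id}).get("Item", {})
        return {"statusCode": 200, "body": json.dumps(item.get("config", {}))}

    # Otherwise treat the request as an update (e.g. PUT with a JSON body).
    new_config = json.loads(event["body"])
    table.put_item(Item={"customerId": customer_id, "config": new_config})
    return {"statusCode": 204, "body": ""}
```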

Did you say something about cost?

Why yes, I did.  Picture this – you’re a scrappy startup, and you’ve finally landed your first paying customer.  The deal is signed and it’s time to deliver, so you fire up the AWS console, spin up a few servers (redundancy, don’t you know), set up a load balancer, configure DNS, and you’re in business.  The problem is: what happens if it takes weeks for your app to really get traction?  Or what if the nature of the app is that it’s only actively used a few hours a day?  You’re paying for 24 hours a day of availability, even if your app only uses a fraction of that.

AWS Lambda has a very different pricing model, based on actual processing time instead of theoretical availability.  This means that if your app only did 3 hours of total work, you only pay for those 3 hours – what a phenomenal way for a scrappy startup to scale up!

Check out the pricing page for full details, of course – there is a $0.20 charge per million requests, the usual price for moving data out of the AWS cloud, a fee for storing the actual code, and the cost of any other services you’re using.  Just like anything in AWS pricing, it’s a combination of many things, so read carefully!
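As a back-of-the-envelope illustration – using the published rates at the time of writing (the $0.20 per million requests above, roughly $0.00001667 per GB-second of compute, and a monthly free tier), which may well change – the compute side of the bill works out something like this:

```python
# cost_sketch.py -- rough Lambda cost estimate. Rates and free-tier figures
# are the published numbers at the time of writing and may change; check the
# pricing page before relying on them. Real billing also rounds each
# invocation's duration up (originally to the nearest 100 ms), which this
# sketch ignores.

REQUEST_PRICE = 0.20 / 1_000_000     # dollars per request
GB_SECOND_PRICE = 0.00001667         # dollars per GB-second of compute
FREE_REQUESTS = 1_000_000            # monthly free tier, requests
FREE_GB_SECONDS = 400_000            # monthly free tier, compute


def monthly_cost(requests, avg_duration_sec, memory_mb):
    gb_seconds = requests * avg_duration_sec * (memory_mb / 1024)
    request_cost = max(requests - FREE_REQUESTS, 0) * REQUEST_PRICE
    compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_PRICE
    return request_cost + compute_cost


# Example: 2 million requests a month, 200 ms each, 512 MB of memory.
print(f"${monthly_cost(2_000_000, 0.2, 512):.2f}")  # -> $0.20
```

Data transfer, code storage and any other AWS services you call are on top of that, as noted above.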

Wait, that’s the wrong end of ‘Scale’!

Of course, no one really wants to talk about that end of the scalability model – we want to know what happens when our app hits the big time (does anyone get ‘Slashdotted’ anymore?).

There’s good news here, too – if you have more work than your Lambda function can process, AWS will just spin up more instances to handle the load.  So just like other AWS services such as OpsWorks and Elastic Beanstalk, auto-scaling comes with the service.

There’s a cost to this, of course.  Because you’re effectively given parallel processes, each instance incurs the usage cost simultaneously – so if 10 instances are spun up and working continuously for an hour, you’ll incur 10 hours’ worth of cost.

All is not lost, though – Amazon provides tools to limit how many instances can be running at any given time, so you have some control.  Of course, if you simply set up every workload to bring in more money than it costs, then you’re golden – the auto-scaling is effectively printing money (I’m still working on this myself).
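One way to apply that cap – assuming per-function reserved concurrency is available in your account, a feature Amazon added after Lambda’s early days – is through the API.  A minimal sketch with boto3 (the function name is illustrative):

```python
# concurrency_cap.py -- one way to cap how many copies of a function can run
# in parallel, using boto3's Lambda API. The function name is made up, and
# reserved concurrency is a later addition to Lambda -- confirm it's
# available to you before relying on it.

import boto3

lambda_client = boto3.client("lambda")

# Never run more than 10 instances of this function at once; excess
# invocations are throttled instead of scaling (and billing) further.
lambda_client.put_function_concurrency(
    FunctionName="my-batch-worker",
    ReservedConcurrentExecutions=10,
)
```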

Slow down there, Killer

Let’s take a step back and be realistic – AWS Lambda is not the panacea of computing.  As with any technology, there are trade-offs, drawbacks and side effects that you need to be aware of, among them:

  • SLA – AWS does not currently provide an SLA for Lambda’s performance or reliability.  While this sounds terrible, it’s simply an important trade-off to account for in your design.  It might mean that you should have a backup plan for any important code you deploy as a Lambda function.  It almost certainly means that if your code is mission critical (i.e. your company goes out of business if it breaks), or if lives are at stake, then AWS Lambda is not for you.  I’ll wager that the majority of code does not fall into that category, however, and can cleanly take advantage of the benefits provided.
  • Performance – Because Amazon takes care of deployment for you, and only charges for actual usage rather than 24 hours of server availability, they reserve the right to reclaim resources when you’re not using them.  When this happens you will run into some latency as they spin up a new container – this is known as a ‘cold start’.  In practice, this seems to be a few hundred milliseconds for JavaScript functions and a few seconds for Java functions, but the good news is that if your app is active, it will be rare.  Consider this carefully when building your services – if you have a hard requirement for sub-second latency, plan accordingly (one common mitigation is sketched just after this list).  For most ‘offline’, event-oriented or batch systems, however, this is likely not a problem.
  • Deployment infrastructure – Like most AWS products, Lambda provides a web console as well as an API and CloudFormation support, but because it’s still early days, the landscape for integrating AWS Lambda into your deployment pipelines is fairly sparse.  I’ve come to like the Serverless Framework as a nice infrastructure tool that lets me define my Lambda functions and any dependent components – DynamoDB tables, API Gateway configurations, etc. – in a single file that sits in my source control repo.  There aren’t a ton of other options out there, however, and as you scale your Lambda usage, you might need to do a bit of work here to create some templates and standards.
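The cold-start mitigation mentioned in the performance bullet is simply to do expensive initialization once, at module load, rather than on every invocation; warm containers then reuse it.  A minimal sketch in Python, with an illustrative table name:

```python
# warm_init.py -- a common cold-start mitigation: heavyweight setup happens
# once per container, at module load, not on every invocation. The table
# name is made up for the example.

import boto3

# Runs only when Lambda creates a new container (the "cold start"); warm
# invocations skip straight to the handler and reuse this connection.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")


def lambda_handler(event, context):
    # Per-invocation work stays small; the expensive setup is already done.
    order = table.get_item(Key={"orderId": event["orderId"]}).get("Item")
    return order or {"error": "not found"}
```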

What You Should Be Doing – Today!

If you’re already working in the AWS environment, you should be considering Lambda as an option – but just like anything, you need to make the right decisions for your service, your organization and your customers.  Get creative: even if Lambda is not likely to be your deployment tech of choice, it may be a great place to experiment with prototypes, or a key piece of your infrastructure for internal tools.

The potential here is huge, as we get comfortable with the idea that virtualization is more than just a way to make more efficient use of hardware resources – it’s truly a way to rethink how we approach problems in our IT world.