by Sev Geraskin
—
Sat Dec 31 2022
To validate your market idea, you want to develop and deploy your prototype as soon as possible and iterate on it. When building a proof of concept or minimum viable version of your product, the last thing you want is to be slowed down by infrastructure and operations. Worry about those after you have validated your idea and have paying customers.
The serverless model has transformed software development by eliminating the need to manage your application infrastructure. Serverless leverages your cloud provider's data center capabilities to manage your infrastructure for you. Adopting a serverless model means faster application development and deployment while leaving the scaling and securing of your infrastructure to your cloud partner.
According to the State of Serverless studies Datadog published in 2021 and 2022, over 70 percent of organizations running in the AWS cloud have adopted serverless functionality.
Two frameworks that streamline serverless development are the Serverless Framework by Serverless Inc. and the Serverless Application Model (SAM) by Amazon Web Services (AWS). Both enable you to deploy your application and get started in minutes.
Both Serverless and SAM act as shorthand for complex declarative infrastructure templates. In addition, they give you CLI tools that allow you to deploy and manage your application from the command line.
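For example, here is a minimal sketch of a SAM template; the function name, handler, and API path are placeholders. The single AWS::Serverless::Function resource below expands into the full set of CloudFormation resources (the Lambda function, its IAM role, and the API Gateway wiring):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # tells CloudFormation to expand SAM shorthand

Resources:
  HelloFunction:                        # hypothetical logical ID
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.9
      Handler: app.handler              # app.py must define handler(event, context)
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api                     # also creates an API Gateway GET /hello endpoint
          Properties:
            Path: /hello
            Method: get
```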
Serverless supports different cloud platforms, such as Google Cloud and Microsoft Azure Cloud, in addition to Amazon Web Services (AWS), and gives a web-accessible UI and dashboards.
AWS SAM provides a local Lambda-like execution environment and integrates with AWS SAR (Serverless Application Repository) for publishing serverless applications in AWS.
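In practice, the local development loop with the SAM CLI looks something like the sketch below; the logical ID HelloFunction and the event file are hypothetical:

```bash
sam build                                                  # build the app defined in template.yaml
sam local invoke HelloFunction --event events/event.json   # run it in a local Lambda-like container
sam deploy --guided                                        # package and deploy to AWS with prompts
```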
AWS Lambda is Amazon's implementation of serverless computing as an alternative to a container or virtual machine infrastructure.
What can AWS Lambda do?
We ran AWS Lambda functions for a wide variety of use cases.
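Whatever the trigger (an API request, a queue message, a scheduled event), Lambda invokes a handler function that receives the event payload and a context object. Here is a minimal sketch in Python; the response shape is the one API Gateway's proxy integration expects, and the "name" field is a made-up input:

```python
import json

def handler(event, context):
    """Entry point that Lambda invokes; 'event' carries the trigger's payload."""
    name = event.get("name", "world")   # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```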
How does AWS Lambda work?
AWS Lambda runs on Firecracker, a virtual machine monitor (VMM) that uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs.
KVM uses the host CPU's hardware virtualization extensions to provide virtualization, memory management, and hardware abstraction.
Firecracker sits in the host OS on the IO path, configures KVM, and emulates hardware. Firecracker is optimized specifically for serverless workloads: it adds only 5-10 MB of memory overhead per microVM, can boot Linux in about 125 ms, and supports the creation of up to 150 microVMs per second per host.
Thus, Firecracker enables running thousands of Lambda functions on the same hardware without having thousands of network cards or SSD drives.
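You never drive Firecracker yourself when running Lambda functions, but its public REST API, served over a Unix socket, gives a feel for what happens under the hood. A sketch of booting a microVM by hand; the kernel image, rootfs, and socket paths are placeholders:

```bash
# point the microVM at a kernel image
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/boot-source' \
    -d '{"kernel_image_path": "vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'

# attach a root filesystem
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/drives/rootfs' \
    -d '{"drive_id": "rootfs", "path_on_host": "rootfs.ext4", "is_root_device": true, "is_read_only": false}'

# boot it
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/actions' \
    -d '{"action_type": "InstanceStart"}'
```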
Here's what the AWS Lambda stack looks like, from the bottom up: the bare-metal hardware, the host Linux kernel with KVM, the Firecracker VMM, a guest OS inside each microVM, and the Lambda runtime that executes your function code.
Do you deploy bugs to production and take too much time to recover?
You can solve this problem with AWS Lambda's built-in versioning and aliasing. Lambda lets you create and manage multiple versions of your function code; an alias points to a specific version, and after a deployment you update the alias to point to the new one.
Each AWS Lambda alias has a unique Amazon Resource Name (ARN). Other components in your cloud architecture reference this alias ARN, so when you point a production alias at a new version of the function, none of the other components need to change, saving you time and effort.
Thus, you can leverage Lambda versions and aliases to deploy safely and recover fast from production defects.
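A sketch of that flow with boto3; the function name my-fn and the alias prod are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# freeze the currently deployed code as an immutable, numbered version
new_version = lam.publish_version(FunctionName="my-fn")["Version"]

# repoint the stable alias; callers keep using
# arn:aws:lambda:<region>:<account>:function:my-fn:prod
lam.update_alias(FunctionName="my-fn", Name="prod", FunctionVersion=new_version)

# rolling back is just repointing the alias at the previous version, e.g.
# lam.update_alias(FunctionName="my-fn", Name="prod", FunctionVersion="3")
```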
You can start running your machine learning or data-intensive workloads in the AWS cloud in no time on serverless infrastructure.
The standard way to upload code to an AWS Lambda function is to create a .zip file. Unfortunately, Lambda has a frustrating 50 MB limit on directly uploaded .zip packages (250 MB unzipped), which can prevent data scientists and engineers from using it to deploy their workloads, since these usually reference large libraries and can grow to gigabytes in size.
Luckily, you can package and deploy your code as a container image on AWS Lambda. Using a container image with Lambda can be as simple as fetching an AWS base image and writing a Dockerfile that installs your dependencies on top of it. This approach lets you deploy applications up to 10 GB in size and supports most machine-learning and data-intensive applications.
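A minimal sketch of such a Dockerfile; the Python version, requirements file, and handler module are placeholders:

```dockerfile
# AWS-provided base image with the Lambda runtime interface already included
FROM public.ecr.aws/lambda/python:3.9

# install the (potentially large) dependencies, e.g. ML libraries
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt

# copy the function code
COPY app.py ${LAMBDA_TASK_ROOT}

# module.function that Lambda invokes
CMD ["app.handler"]
```

Build and push the image to Amazon ECR, then create the function from the image URI; Lambda pulls the image when it provisions execution environments.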