Before we dig any further into Lambda containers, it's a good time for a refresher about the Lambda service.
Why AWS Lambda?
AWS Lambda is an event-driven, serverless computing platform. As mentioned at the start, we don't need to worry about provisioning or managing servers. We can just upload our code and let it run. So, no need to worry about OS patch cycles, scaling, right-sizing, and such. It's all managed by AWS.
AWS Lambda executes code in response to an event. An event is a trigger that causes the Lambda function to execute. This could be an object being added to or deleted from an S3 bucket, or an HTTP request from Amazon API Gateway.
Here are a few reasons for considering AWS Lambda:
Operational support
- AWS Lambda was designed to simplify high availability and address scaling and security concerns
- Amazon states that the Lambda service provides the greatest level of operational support
Automatic scaling
- Burst capacity of up to 3,000 concurrent executions in select Regions (us-west-2, us-east-1, and eu-west-1)
- Enables scaling from 0 to 3,000 concurrent executions in seconds
- After the initial burst, the function's concurrency can scale by an additional 500 instances each minute
- Provisioned concurrency is available when consistently fast start-up times are required
Cost
- Only pay for use; by default, never pay for idle resources
- Scales to zero
- Precise cost measurements: 1 ms billing
- Allocate memory for cost/performance
- No need to pay for patching and maintenance
AWS Lambda with containers
Containers are used to package and run an application, along with its dependencies, in an isolated, predictable, and repeatable way.
The container feature, now available in AWS Lambda, provides all the same features as a regular Lambda function, with the added benefits that come with containers. AWS Lambda allows your packaged container image to be up to 10 GB in size.
AWS provides a set of open-source base images that you can use to create your container image. The base images available are preloaded with a language runtime and other components required to run a container image on Lambda. Lambda provides the following runtimes on the base images provided:
- Node.js
- Python
- Go
- Java
- .NET
- Ruby
Should you wish to work with a language that is not on this list, you can bring a custom runtime. AWS provides base images containing the required Lambda components on the Amazon Linux or Amazon Linux 2 operating system; you then just add your runtime, dependencies, and code to these images. These two base images are named provided and provided.al2, respectively.
Before we build our application, there is one feature I would like to touch on: the ability to test your images locally.
Testing a Lambda image locally
The base images for Lambda include a proxy for the Lambda Runtime API to allow you to test your Lambda function container image locally. The AWS Lambda Runtime Interface Emulator, known as RIE for short, is a lightweight web server that converts HTTP requests into JSON events, which are then passed to the Lambda function inside the container image.
Should you choose to use an alternate base image, Amazon provides an open-sourced RIE component on the AWS GitHub repository. The GitHub link also provides the means to test your images using the RIE.
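As a rough sketch, assuming the listbucketfunction image we build later in this article: because our Dockerfile overrides the base image's default entrypoint, we start the emulator (shipped in the AWS base images at /usr/local/bin/aws-lambda-rie) explicitly and hand it our binary, then post a test event to the documented local endpoint:

docker run -p 9000:8080 --entrypoint /usr/local/bin/aws-lambda-rie listbucketfunction /main

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{ "BucketName": "company-file-explorer" }'

Note that the container still needs AWS credentials (passed in as environment variables, for example) for the S3 call itself to succeed.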
Building our own Lambda Docker container
We are going to build a Lambda function inside a Docker container that will take a bucket name and return the bucket's contents.
Here is what our final project folder structure will look like:
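Lambda_container_example/
├── Dockerfile
├── go.mod
├── go.sum
├── main.go
└── pkg/
    └── listBucket.go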
To follow along, you will need to have the following prerequisites installed:
- Go version 1.18 or later (https://go.dev/doc/install)
- Docker (https://docs.docker.com/engine/install/)
- AWS command line (https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
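You can quickly verify each prerequisite from a terminal:

go version
docker --version
aws --version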
To start, we need to create a directory to build our project. From a terminal, type:
mkdir Lambda_container_example && cd Lambda_container_example
Next, we need to initialize our Go project:
go mod init lambdaContainer
We will create a directory for our package file:
mkdir pkg
Add a main.go file in the project root and a listBucket.go file inside the pkg directory. Using ZSH, I can use the touch command:
touch main.go && touch pkg/listBucket.go
Now, we can focus on the main code inside listBucket.go. The code will have an init and a ListBucketContents function.
In your IDE or text editor, open the listBucket.go file. Add a package name to the top of the file. I have gone with package Bucket.
You will need to import six packages for this program. The required packages are as follows:
- "context"
- "log"
- "github.com/aws/aws-sdk-go-v2/aws"
- "github.com/aws/aws-sdk-go-v2/config"
- "github.com/aws/aws-sdk-go-v2/service/s3"
- "github.com/aws/aws-sdk-go-v2/service/s3/types"
The first two packages are from Go's standard library. The next four are from the AWS Go v2 SDK. Using Go's import statement, group the packages together within parentheses.
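The grouped import block, as it appears in the full listing at the end of this article, looks like this:

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)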
Before we get to the init function, we will create a variable from the S3 Client type:
var client *s3.Client
This will be used to create a new client for the config we are going to provide in our init function.
The init function will be where our default configuration will be created:
func init() {
    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("eu-west-1"))
    if err != nil {
        panic("configuration error, " + err.Error())
    }
    client = s3.NewFromConfig(cfg)
}
All I have provided is a default region to work with. The NewFromConfig function from the S3 library creates a new client to work with.
All the heavy lifting is done in ListBucketContents. The function takes a single parameter: a custom type named AwsBucket, which carries the bucket name.
type AwsBucket struct {
    Name string `json:"BucketName"`
}
We use a struct tag so that the bucket name can be parsed from the JSON payload; for example, a payload of { "BucketName": "company-file-explorer" } populates the Name field.
The ListBucketContents function uses the client we created in the init function. The client calls ListObjectsV2 to list some or all of the objects from the chosen bucket.
The bucket's contents are captured in a slice of types.Object, https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3@v1.30.2/types#Object.
This allows us to loop through the results and add the Key value to our string slice. The Key value is the name given to the file. You can also read other values from the types.Object results.
func ListBucketContents(bn AwsBucket) ([]string, error) {
    result, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket: aws.String(bn.Name),
    })
    var contents []types.Object
    if err != nil {
        log.Printf("Couldn't list objects in bucket %v. Here's why: %v\n", bn.Name, err)
        return nil, err
    }
    contents = result.Contents

    var details []string
    for _, c := range contents {
        details = append(details, *c.Key)
    }
    return details, nil
}
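For illustration only, here is a hypothetical variation of the loop that also reads each object's size and last-modified time; in the SDK version linked above, Size is an int64 and LastModified a *time.Time:

for _, c := range contents {
    // Key holds the object name; Size and LastModified are further
    // fields exposed on types.Object.
    log.Printf("%s (%d bytes, modified %v)", *c.Key, c.Size, c.LastModified)
}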
The newly created slice with the file names is then returned as the function's response. The last part is to plug this into the main.go file. Let's see what this looks like:
package main

import (
    Bucket "lambdaContainer/pkg"

    "github.com/aws/aws-lambda-go/lambda"
)

func main() {
    lambda.Start(Bucket.ListBucketContents)
}
Using the lambda.Start function, we can just call our ListBucketContents function. It's that easy. We will pass the bucket name via the JSON payload; I will demonstrate this later. For now, once the files are saved, run the following command, which will also download the required Go packages:
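go mod tidy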
Next, we look at building the Dockerfile.
Dockerfile
We have our application; now, we just need to build it inside a container. The best way to go through our Dockerfile is line by line. So, let's do that.
The first line of our Dockerfile will provide the image to build on:
FROM public.ecr.aws/lambda/provided:al2 AS build
From the Amazon ECR public gallery (https://gallery.ecr.aws/lambda/provided), we are using the provided:al2 image. The al2 indicates that this is an Amazon Linux 2 image. We have also indicated that this will be a multistage build; this will remove any unnecessary bloating from the image.
The next few lines of the Dockerfile set some prerequisites for our image:
ENV CGO_ENABLED=0
RUN mkdir -p /opt/extensions
RUN yum -y install go
RUN go env -w GOPROXY=direct
ADD go.mod go.sum ./
RUN go mod download
Setting the environment variable CGO_ENABLED to 0 ensures that the compiled binary runs without any external C dependencies. We follow this by creating the /opt/extensions directory, where any Lambda extensions would live. The -p switch on the mkdir command is useful, as it creates any missing parent directories along the way. Next, we install the Go programming language using yum. Setting GOPROXY to direct allows module downloads to go straight to the source control servers.
Let's finish up our Dockerfile. Here are our last few lines:
COPY . ${LAMBDA_TASK_ROOT}
RUN env GOOS=linux GOARCH=amd64 go build -o=/main

# copy artifacts to a clean image
FROM public.ecr.aws/lambda/provided:al2
COPY --from=build /main /main
ENTRYPOINT [ "/main" ]
The copy command will copy any dependencies alongside the function handler to ensure that the Lambda runtime can locate them when the function is invoked.
We then compile our Go application code into an executable.
At the start, we set our Dockerfile up as a multistage build. The second FROM starts from a clean, new image, and the COPY line below it copies our compiled main binary into that clean image. The ENTRYPOINT is set to our compiled function binary, which the Lambda runtime will execute.
All that is left to do is build our Docker image.
From the directory of our Go application, which should contain our Dockerfile, run the following command (note the trailing dot, which sets the build context):

docker build -t listbucketfunction .
Image built. The next step is to upload to Amazon ECR and create the Lambda itself.
Uploading the Docker image to Amazon ECR
To allow Docker to push (and pull) images to Amazon ECR, we must first authenticate to our default registry. To make life a bit easier, the AWS CLI provides a get-login-password command. Using this command is the preferred method to authenticate to a private registry in Amazon ECR when using the AWS CLI.
We use the get-login-password AWS CLI command as follows:
password=$(aws ecr get-login-password --region eu-west-1)
Now passing the password variable to the Docker login, we use AWS as the username along with a URL that consists of our AWS account ID and a region:
echo $password | docker login --username AWS --password-stdin *AccountID*.dkr.ecr.eu-west-1.amazonaws.com
To push our image to the ECR repository, we need to tag it:
docker tag listbucketfunction *AccountID*.dkr.ecr.eu-west-1.amazonaws.com/listbucketfunction
Our image is now tagged; we are almost ready to push to Amazon ECR. To be able to push to ECR, we need to create a repository to hold it. Again, this can be done using the AWS CLI:
aws ecr create-repository \
    --repository-name listbucketfunction \
    --image-scanning-configuration scanOnPush=true \
    --region eu-west-1
If successful, the command returns a JSON description of the new repository, similar to the following (abridged):
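{
    "repository": {
        "repositoryArn": "arn:aws:ecr:eu-west-1:*AccountID*:repository/listbucketfunction",
        "registryId": "*AccountID*",
        "repositoryName": "listbucketfunction",
        "repositoryUri": "*AccountID*.dkr.ecr.eu-west-1.amazonaws.com/listbucketfunction",
        "imageScanningConfiguration": {
            "scanOnPush": true
        },
        ...
    }
}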
For additional options for the aws ecr create-repository command, please take a look at the documentation.
Finally, it is time to push our image to the ECR repository:
docker push *AccountID*.dkr.ecr.eu-west-1.amazonaws.com/listbucketfunction
Docker image pushed to AWS ECR
Taking a look at the Amazon ECR console, we can see that our image has been successfully uploaded.
Next, we set up the Lambda.
Configuring the AWS Lambda Container
Navigate to the AWS Lambda console. From the console, click Create function and select the Container image option.
Provide a name for your function. Under Container image URI, click Browse images.
From the list of images, select the listbucketfunction image that we uploaded to the ECR repository in the previous section of this article.
When it comes to the execution role, select the first option to create a new role with basic Lambda permissions.
We will need to add a policy to this role, which we will do shortly.
Now select Create function to continue.
We now have our container function.
Before we can test the function, we just need to add a predefined policy to the role.
From the console screen of our function, select the Configuration tab, and click Permissions. This will provide the name of the role attached to the Lambda function. Click the role name, which will take you to the IAM role.
Click Add permissions > Attach policies, search for the AmazonS3ReadOnlyAccess policy, and attach it to the role.
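If you prefer the AWS CLI, the same policy can be attached with aws iam attach-role-policy (substitute the role name that was generated for your function):

aws iam attach-role-policy \
    --role-name <your-function-role-name> \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess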
You should now have two policies attached to the role.
Testing the Lambda container function
Everything is now in place to test our Lambda container function. We will be running the AWS Lambda invoke command from the AWS CLI:
aws lambda invoke \
    --cli-binary-format raw-in-base64-out \
    --function-name checkBucketContents \
    --payload '{ "BucketName": "company-file-explorer" }' \
    response.json
The two inputs are the function name and the payload. The function name is simply the name we gave the Lambda function when we set it up. If you have followed my naming conventions, you will need to add checkBucketContents.
The payload is the JSON that you want to provide to your Lambda function as input. So, the first part will always be BucketName, as we configured our Lambda function to take that value. The value on the right-hand side is the name of the bucket whose contents you want to view.
We are required to give a filename at the end to store the results.
Running the function will return (if successful):
{ "StatusCode": 200, "ExecutedVersion": "$LATEST" }
Open the file, response.json in our example, to view the contents of the bucket:
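With hypothetical object names, the contents of response.json look something like this:

[
    "reports/q1-summary.pdf",
    "images/logo.png",
    "notes.txt"
]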
Results: Our Lambda container function returned the contents of the chosen bucket.
The code
Below is the entire code.
main.go

package main

import (
    Bucket "lambdaContainer/pkg"

    "github.com/aws/aws-lambda-go/lambda"
)

func main() {
    lambda.Start(Bucket.ListBucketContents)
}

pkg/listBucket.go

package Bucket

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

var client *s3.Client

func init() {
    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("eu-west-1"))
    if err != nil {
        panic("configuration error, " + err.Error())
    }
    client = s3.NewFromConfig(cfg)
}

type AwsBucket struct {
    Name string `json:"BucketName"`
}

func ListBucketContents(bn AwsBucket) ([]string, error) {
    result, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket: aws.String(bn.Name),
    })
    var contents []types.Object
    if err != nil {
        log.Printf("Couldn't list objects in bucket %v. Here's why: %v\n", bn.Name, err)
        return nil, err
    }
    contents = result.Contents

    var details []string
    for _, c := range contents {
        details = append(details, *c.Key)
    }
    return details, nil
}

Dockerfile

FROM public.ecr.aws/lambda/provided:al2 AS build

ENV CGO_ENABLED=0
RUN mkdir -p /opt/extensions
RUN yum -y install go
RUN go env -w GOPROXY=direct
ADD go.mod go.sum ./
RUN go mod download

COPY . ${LAMBDA_TASK_ROOT}
RUN env GOOS=linux GOARCH=amd64 go build -o=/main

# copy artifacts to a clean image
FROM public.ecr.aws/lambda/provided:al2
COPY --from=build /main /main
ENTRYPOINT [ "/main" ]
Summary
This article has covered a fair bit of ground. We started off looking at the reasons for using an AWS Lambda function and then using it as a container. We then went through the process of creating a Lambda in Go, which would read the contents of an S3 bucket. Continuing, we built the container and pushed it to Amazon ECR. Finally, we created the Lambda function through the AWS console and tested it.
Containers are a very useful addition to the AWS Lambda service. I hope this guide has shown you how easy it is to get started and has given you a taste for more.