Uploading the artifacts to the cloud

In the previous chapter, we deployed our first Lambda function via the AWS CLI. As you will remember, we pointed the command line at a JAR file stored locally, and for such a simple scenario, keeping the JAR on our development machine worked fine. However, when we want to deploy our Lambda functions via CloudFormation, we have to upload that same JAR file to S3. S3 is the oldest and perhaps the most famous AWS offering; it provides scalable and durable storage for developers. In S3, you can store any type of file and pay only for the storage you actually use. In this chapter, we will introduce S3 to upload our artifacts, but in the following chapters, we will also use it to store files uploaded by users, such as profile pictures.

As a first step toward automated deployment, we will use the AWS Gradle plugin built by Classmethod Inc. This is a set of plugins that provides access to the AWS APIs directly from Gradle code. Detailed documentation can be found at https://github.com/classmethod/gradle-aws-plugin. The plugin supports a number of AWS services, but for now, we are going to use only its S3 and CloudFormation support. Let's start by adding the plugin definition and some generic configuration to our build.gradle file. First, add the plugin to the classpath of buildscript:

buildscript {
    repositories {
        .....
    }
    dependencies {
        classpath "com.github.jengelman.gradle.plugins:shadow:1.2.3"
        classpath "jp.classmethod.aws:gradle-aws-plugin:0.+"
    }
}
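A note on the version notation: 0.+ is a Gradle dynamic version that resolves to the newest available 0.x release of the plugin whenever dependencies are refreshed. If you prefer reproducible builds, you can pin an exact release instead; here is a minimal sketch (the version number is only illustrative, check the plugin's GitHub releases page for a real one):

dependencies {
    classpath "com.github.jengelman.gradle.plugins:shadow:1.2.3"
    // An exact plugin version instead of the dynamic "0.+" notation;
    // "0.22" is only an example used for illustration.
    classpath "jp.classmethod.aws:gradle-aws-plugin:0.22"
}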

Then, just below the buildscript block, let's apply the plugin to the root project and all the subprojects:

allprojects {
    apply plugin: "jp.classmethod.aws"
    aws {
        region = "us-east-1"
    }
}

Here, we picked us-east-1 (North Virginia) as the region, but you can select another region depending on the location of your clients. This means that all our applications will be deployed to the us-east-1 region.
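If you later want to deploy the same project to different regions without editing the build script, one option is to read the region from a Gradle project property. The following is a minimal sketch under that assumption; the property name awsRegion is our own invention, not something the plugin defines:

allprojects {
    apply plugin: "jp.classmethod.aws"
    aws {
        // Use the region passed with -PawsRegion=..., or fall back to us-east-1
        region = project.hasProperty("awsRegion") ? project.property("awsRegion") : "us-east-1"
    }
}

With this in place, any task run as ./gradlew -PawsRegion=eu-west-1 <task name> would target Ireland, while a plain ./gradlew invocation would keep using us-east-1.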

To quote the AWS documentation: the AWS Cloud infrastructure is built around regions and availability zones (AZs). A region is a physical location in the world where we have multiple availability zones. Availability zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. These availability zones offer you the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

Every region offers a different set of services. For the list of service availability per region, check out https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/. At the time of writing, the us-east-1 (North Virginia), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo) regions support all the services we are going to use throughout the book.

After the generic configuration, the first AWS-related task that we are going to create is the automated creation of an S3 bucket. We will use this S3 bucket to upload the created artifacts and the CloudFormation template. Our Gradle script will first upload the shadow JAR file of every Lambda function to the S3 bucket. Then, in the final stage, the CloudFormation template will be uploaded to the same bucket. As the last step, the Gradle script will trigger the creation or update of the CloudFormation stack in AWS. AWS will read the JSON configuration and create the desired resources (IAM roles, Lambda functions, and so on). When we define our Lambda function in the CloudFormation template, we will refer to the JAR file's location in the deployment bucket, and AWS will automatically fetch the JAR from there in order to create the Lambda function. So first, we will create the essential task that creates the deployment bucket if it does not exist. In the build.gradle file, let's add this snippet just before the configure(subprojects.findAll { it.name.startsWith("lambda-") }) part:

def deploymentBucketName = "serverless-book-${aws.region}" 
def deploymentTime = new java.text.SimpleDateFormat("yyyyMMddHHmmss").format(new Date()); 
 
allprojects { 
    apply plugin: "jp.classmethod.aws.s3" 
    task createDeploymentBucket(type: jp.classmethod.aws.gradle.s3.CreateBucketTask) { 
        bucketName deploymentBucketName 
        ifNotExists true 
    } 
} 
 
configure(subprojects.findAll()) { 
    if (it.name.startsWith("lambda-")) { 
    ...... 

Here, you might have noticed that we created two global variables. The first one is the name of the deployment bucket. We append the region as a suffix to the bucket name using the ${aws.region} value that we previously set in the AWS plugin configuration, so each region gets its own bucket.

For your project, you should replace the bucket name with a unique one because S3 bucket names are global and must be unique across all AWS accounts. If you copy and paste the code snippet as-is, the bucket creation will fail because a bucket with the name used in the example already exists.
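One simple way to guarantee a unique name is to prepend a prefix of your own, such as your name or company. A minimal sketch (the prefix below is just a placeholder; S3 bucket names must be lowercase and between 3 and 63 characters long):

def bucketPrefix = "jdoe-example" // replace with a prefix unique to you
def deploymentBucketName = "${bucketPrefix}-serverless-book-${aws.region}"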

Now you can run the ./gradlew createDeploymentBucket command to see whether your task works. After running the command, you can check the bucket using the AWS CLI with the following command:

    $ aws s3 ls

You should see serverless-book-us-east-1 in the list on the screen.

Now we have to extend the build phase of the Lambda subprojects. As you may remember from the previous chapter, we created a block that applies the build configuration only to subprojects whose names start with the lambda- prefix. In this block, just after the build.finalizedBy shadowJar line, let's add the following:

def producedJarFilePath = it.tasks.shadowJar.archivePath
def s3Key = "artifacts/${it.name}/${it.version}/${deploymentTime}.jar"

task uploadArtifactsToS3(type: jp.classmethod.aws.gradle.s3.AmazonS3FileUploadTask,
        dependsOn: [build, createDeploymentBucket]) {
    bucketName deploymentBucketName
    file producedJarFilePath
    key s3Key
}

Here, we created a new task that uploads the shadow JAR to the S3 bucket. Note that the s3Key value changes every time we run the Gradle script, so a new file is always uploaded to S3. This is important because, in the following sections, we will inject the deploymentTime variable into CloudFormation, and thus AWS will always fetch the latest version of the JAR file and update the Lambda function. Also, note that uploadArtifactsToS3 depends on the build and createDeploymentBucket tasks. We access the created JAR file's path through the it.tasks.shadowJar.archivePath property. This property belongs to the shadowJar task, which the Shadow plugin creates, and it returns a File object pointing to the shadow JAR in the build/libs directory. Now we can run the newly created task to see it in action. Run this command in the root directory:

    $ ./gradlew uploadArtifactsToS3

Thanks to Gradle's task dependency feature, this command will automatically trigger the build and S3 bucket creation tasks before itself; as you can see in the output, the test, build, shadowJar, and bucket creation tasks are executed before the upload task comes into action and uploads the JAR file to the S3 bucket. For now, we have only one Lambda function project (lambda-test), so this task will run only once, but when we add more Lambda functions, the same task will be created for them as well, so any change will automatically be reflected in all the subprojects. Now we can check whether the artifact was uploaded to S3 via the AWS CLI:

    $ aws s3 ls s3://serverless-book-us-east-1/artifacts/lambda-test/1.0/

Don't forget to replace the bucket name with yours to see the result. In the output, you should see the JAR file uploaded by our task.
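For example, if the upload ran on 1 March 2017 at 12:30:45, the listing would show a single file named 20170301123045.jar; the exact name depends on when you executed the uploadArtifactsToS3 task, since the key is derived from the deploymentTime variable.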

So far, so good. We solved the packaging issue using Gradle and without any other third-party tools. Now things are getting harder. In the next section, we will create our first CloudFormation template to deploy our JAR to AWS Lambda.