Writing the foundation of a Lambda function

We can start our task by writing a Lambda function that responds to S3 events and resizes the image. At this stage, the code will not resize the image but only log the request, so that we can first verify that the function is really triggered by S3.

As usual, we can create our new module with the name lambda-imageresizer:

    $ mkdir -p lambda-imageresizer/src/main/java/com/serverlessbook/lambda/imageresizer

Then, let's add this new module to our settings.gradle file:

    $ echo "include 'lambda-imageresizer'" >> settings.gradle  

We can now create our Handler class in the com.serverlessbook.lambda.imageresize package:

    $ touch lambda-imageresizer/src/main/java/com/serverlessbook/lambda/imageresizer/Handler.java

In this Lambda function, we will consume the standardized events that S3 prepares. AWS provides a Java package that includes POJOs for this type of event, including S3 events. This package can be found in the Maven repository under the name com.amazonaws:aws-lambda-java-events. It makes our job even easier, because we neither have to create a model for the incoming event nor use our own JSON deserialization procedures. First, let's create a build.gradle file in the lambda-imageresizer module and add the necessary dependency:

dependencies { 
  compile group: 'com.amazonaws', name: 'aws-lambda-java-events', version: '1.3.0' 
}

We can now create the preliminary version of our Handler class as follows:

package com.serverlessbook.lambda.imageresizer; 
 
import com.amazonaws.services.lambda.runtime.Context; 
import com.amazonaws.services.lambda.runtime.RequestHandler; 
import com.amazonaws.services.lambda.runtime.events.S3Event; 
import org.apache.log4j.Logger; 
 
public class Handler implements RequestHandler<S3Event, Void> { 
 
  private static final Logger LOGGER = Logger.getLogger(Handler.class); 
 
  private void resizeImage(String bucket, String key) { 
    LOGGER.info("Resizing s3://" + bucket + "/" + key); 
  } 
 
  @Override 
  public Void handleRequest(S3Event input, Context context) { 
    input.getRecords().forEach(s3EventNotificationRecord -> 
        resizeImage(s3EventNotificationRecord.getS3().getBucket().getName(), 
            s3EventNotificationRecord.getS3().getObject().getKey())); 
    return null; 
  } 
}
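For reference, the event that the aws-lambda-java-events library deserializes into an S3Event looks roughly like the following. This is an abridged sketch with illustrative values, not a complete event:

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "example-profilepictures" },
        "object": { "key": "uploads/example.jpg" }
      }
    }
  ]
}
```

The getS3().getBucket().getName() and getS3().getObject().getKey() calls in our handler navigate exactly this structure.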

As you can see, the AWS events library makes this very easy, and we already have access to the most important information about the file newly added to the S3 bucket. Whenever a new file is added to the bucket by any means, this Lambda function will be invoked and the resizeImage method will execute. In this method, we will resize the image and save it to the same bucket with a user ID. We will see in detail how to get the user ID from the file saved at S3.
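As a preview of how the user ID could be recovered from the object key, here is a minimal sketch. Both the helper class and the key layout (uploads/{userId}/...) are assumptions for illustration; the book defines the actual convention later:

```java
// Hypothetical helper: assuming profile pictures are uploaded under
// "uploads/{userId}" or "uploads/{userId}/filename", the user ID is the
// path segment right after the first slash.
public class KeyUtils {

    static String extractUserId(String key) {
        // Drop the "uploads/" prefix (everything up to the first slash).
        String withoutPrefix = key.substring(key.indexOf('/') + 1);
        // The user ID ends at the next slash, if any.
        int nextSlash = withoutPrefix.indexOf('/');
        return nextSlash == -1 ? withoutPrefix : withoutPrefix.substring(0, nextSlash);
    }
}
```

A key such as uploads/42/avatar.jpg would yield the user ID 42 under this assumed layout.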

Now, let's create a Lambda function using this artifact. Let's add a new resource to cloudformation.json:

"ImageResizerLambda": { 
  "Type": "AWS::Lambda::Function", 
    "Properties": { 
      "Handler": "com.serverlessbook.lambda.imageresizer.Handler", 
      "Runtime": "java8", 
      "Timeout": "300", 
      "MemorySize": "1024", 
      "Description": "Test lambda", 
      "Role": { 
        "Fn::GetAtt": [ 
          "LambdaExecutionRole", 
          "Arn" 
        ] 
      }, 
      "Code": { 
        "S3Bucket": { 
          "Ref": "DeploymentBucket" 
        }, 
        "S3Key": { 
         "Fn::Sub": "artifacts/lambda-imageresizer/${ProjectVersion}/
${DeploymentTime}.jar" } } } }

Let's now create the S3 bucket:

"ProfilePicturesBucket": { 
  "Type": "AWS::S3::Bucket", 
   "Properties": { 
     "BucketName": { 
       "Fn::Sub": "${DomainName}-profilepictures" 
     } 
   } 
} 
As you noted, the S3 bucket will be created using the DomainName value you provided in the main build.gradle file. Be careful to use a unique domain name: another reader of the book might be using the same one, and S3 bucket names must be unique across all of AWS.

We can now add the event configuration to the S3 bucket. Let's just add the following snippet under the BucketName property as a new property:

"NotificationConfiguration": { 
  "LambdaConfigurations": [ 
    { 
       "Event": "s3:ObjectCreated:*", 
       "Filter": { 
         "S3Key": { 
           "Rules": [ 
             { 
               "Name": "prefix", 
               "Value": "uploads/" 
             } 
           ] 
         } 
       }, 
       "Function": { 
         "Fn::GetAtt": [ 
           "ImageResizerLambda", 
           "Arn" 
         ] 
       } 
     } 
   ] 
} 

Here, note the Event and Filter values. Setting the Event value to s3:ObjectCreated:* tells AWS that our Lambda function should be called only when a new object is added to the bucket. We could also invoke Lambda when an object is deleted, but that is not our case here. The Filter value limits the invocation of the Lambda: it will be called only for objects under the uploads/ folder, because that is where our users will upload their profile pictures. Using a filter is important here because our Lambda function saves the resized photos to another folder in the same bucket. If we had not limited this event, each execution would trigger another one, like a recursive function, resulting in an endless loop.
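The effect of the prefix filter can be sketched in plain Java. The predicate and folder names below are illustrative, mirroring the rule S3 applies before invoking our function:

```java
// Sketch of the rule the S3 prefix filter enforces: only keys under
// "uploads/" trigger the function, so writes to any other folder (for
// example, one holding resized output) cannot re-trigger it recursively.
public class PrefixFilterSketch {

    static final String TRIGGER_PREFIX = "uploads/";

    // True if an object with this key would invoke our Lambda.
    static boolean shouldTrigger(String key) {
        return key.startsWith(TRIGGER_PREFIX);
    }
}
```

With this rule, an upload to uploads/42/photo.jpg triggers the function, while the resized copy it writes elsewhere in the bucket does not.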

At this stage, we have configured the event, but S3 is still not permitted to invoke our Lambda. Remember that in the previous chapter, when we created an authorizer at the API Gateway layer, we created an AWS::Lambda::Permission resource to let the apigateway.amazonaws.com principal execute our function. Here, we will do something very similar. Let's add this snippet to our resources:

"ImageResizerLambdaPermisson": { 
  "Type": "AWS::Lambda::Permission", 
    "Properties": { 
      "Action": "lambda:InvokeFunction", 
      "FunctionName": { 
        "Ref": "ImageResizerLambda" 
      }, 
      "Principal": "s3.amazonaws.com", 
      "SourceArn": { 
        "Fn::Sub": "arn:aws:s3:::${DomainName}-profilepictures" 
      } 
   } 
}

Now, our Lambda can be executed by S3 on our behalf.

At this stage, you can try uploading a file to the uploads/ folder of the bucket and see that the Lambda function is automatically executed and writes a log entry to CloudWatch.