By the end of this chapter, you will be able to:
Create S3 buckets, host a static website on S3, and enable versioning
Create REST APIs with Amazon API Gateway and integrate them with AWS Lambda
Describe Amazon SNS, Amazon SQS, and Amazon DynamoDB and their core components
Integrate S3, Lambda, and SNS to build event-driven notifications
This chapter teaches you how to build and run serverless applications with AWS.
In the previous chapter, we focused on understanding the serverless model and getting started with AWS and Lambda, the first building blocks of a serverless application on AWS. You also learned how the serverless model differs from traditional product development.
In this chapter, we will learn about other AWS capabilities, such as S3 storage, API Gateway, SNS, SQS, and DynamoDB. We will discuss each of these services in detail.
Amazon Simple Storage Service (S3) is a cloud storage platform that lets you store and retrieve any amount of data from anywhere. Amazon S3 is designed for 99.999999999% (11 nines) of durability, and provides the scalability and availability you need to store your data in one of the most secure ways. The storage service is accessible via simple web interfaces, either REST or SOAP. Amazon S3 is also one of the most widely supported platforms, so you can either use S3 as a standalone service or integrate it with other AWS services.
Amazon S3 is an object store that keeps data as objects within resources called "buckets". Buckets are containers for your objects and serve multiple purposes: they organize the Amazon S3 namespace at the highest level and also play a key role in access control. You can store any number of objects within a bucket, and an object can vary in size from 0 bytes to 5 terabytes. You can perform read, write, and delete operations on the objects in your buckets.
Objects in S3 consist of metadata and data. Data is the content that you want to store in the object, while metadata is a set of name-value pairs that describe the object. Within a bucket, an object is uniquely identified by a key and a version ID. The key is the name of the object.
When you add a new object to S3, a version ID is generated and assigned to the object. Versioning allows you to maintain multiple versions of an object, and it must be enabled on a bucket before you can use it.
If versioning is disabled and you upload an object with the same name (key), it will overwrite the existing object.
A combination of bucket, key, and version ID allows you to uniquely identify each object in Amazon S3.
For example, if your bucket name is aws-serverless and the object name is CreateS3Object.csv, the following would be the fully qualified path of an object in S3:
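https://s3.amazonaws.com/aws-serverless/CreateS3Object.csv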
Now, let's understand some of the key characteristics of using the Amazon S3 service:
In the following diagram, you can see that when the S3 bucket in source-region-A goes down, Route 53 redirects requests to the replicated copy in source-region-B:
Geographic redundancy replicates your data and stores the backup in a separate physical location. Should the main site fail, you can always recover your data from this backup location.
Since this is server-side encryption, no user intervention is required. When a user tries to read the data, the server decrypts it automatically.
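For illustration, you can request server-side encryption when uploading an object. The following is a minimal sketch using the Node.js SDK; the bucket and key names are placeholders:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3)
s3.putObject({
    Bucket: 'aws-serverless',      // placeholder bucket name
    Key: 'CreateS3Object.csv',     // placeholder object key
    Body: 'some sample content',
    ServerSideEncryption: 'AES256'
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log('Object stored with server-side encryption');
});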
With Amazon S3, you can host your entire static website at a low cost, while leveraging a highly available and scalable hosting solution to meet varied traffic demands.
In this exercise, we'll look at doing the following:
So, let's get started. Here are the steps to perform this exercise:
Bucket Name: Enter a unique bucket name. For this book, we've used www.aws-serverless.tk since we will host a website using our S3 bucket. As per AWS guidelines, a bucket name must be unique across all existing bucket names in Amazon S3, so you will need to choose your own unique bucket name.
Region: Click on the dropdown next to Region and select the region where you want to create the bucket. We will go with the default region, US East (N. Virginia).
If you want to copy these settings from any other bucket and want to apply them to the new bucket, you can click on the dropdown next to Copy settings from an existing bucket. We will configure the settings for this bucket here, so we will leave this option blank:
Versioning
Server access logging
Tags
Object-level logging
Default encryption
For this exercise, go with the default properties and click on the Next button.
At the top, note the Endpoint information. This will be the URL to access your website. In this case, it is http://www.aws-serverless.tk.s3-website-us-east-1.amazonaws.com/.
The index.html file is a simple HTML file that contains basic tags, which are for demonstration purposes only.
Congratulations! You have just deployed your website using the Amazon S3 bucket.
We have successfully deployed our S3 bucket as a static website. There are different use case scenarios for the S3 service, such as media hosting, backup and storage, application hosting, and software and data delivery.
Now, we'll look at enabling versioning on an S3 bucket. Here are the steps to do so:
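(If you prefer to script this step instead of using the console, the following is a minimal sketch using the Node.js SDK; the bucket name is a placeholder.)
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Status can be 'Enabled' or 'Suspended'; once enabled, versioning can only be suspended, not removed
s3.putBucketVersioning({
    Bucket: 'aws-serverless',   // placeholder bucket name
    VersioningConfiguration: { Status: 'Enabled' }
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log('Versioning enabled');
});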
Your Lambda function can be invoked by Amazon S3, with the event data passed as a parameter. This integration enables you to write Lambda functions that process Amazon S3 events, for example, taking some action whenever a new object is created in an S3 bucket. You can write a Lambda function and invoke it based on activity in Amazon S3:
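When S3 invokes your function, the event parameter carries the bucket and object details. For illustration, a minimal sketch of reading them from the standard S3 event structure looks like this:
exports.handler = function(event, context, callback) {
    // Each S3 notification delivers one or more records
    var record = event.Records[0];
    var bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded, with spaces encoded as '+'
    var key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log('Object ' + key + ' changed in bucket ' + bucket);
    callback(null, 'Processed ' + key);
};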
In this exercise, we will demonstrate AWS S3 integration with the AWS Lambda service. We will create an S3 bucket and load a text file. Then, we will write a Lambda function to read that text file. You will see an enhancement for this demonstration later in this chapter when we integrate it further with the API Gateway service to show the output of that text file as an API response.
Here are the steps to perform this exercise:
Observe the contents of this file: the text message Welcome to Lambda and S3 integration demo Class!!.
Provide the name of the Lambda function. Let's name it read_from_s3.
Choose the runtime as Node.js 6.10.
Choose the Create a new role from one or more templates option. Provide the role name as read_from_s3_role.
Under policy templates, choose Amazon S3 object read-only permissions.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
exports.handler = function(event, context, callback) {
// Variables for the bucket and key of the uploaded S3 object
var src_bkt = 'lambdas3demo';
var src_key = 'sample.txt';
// Retrieve the object
s3.getObject({
Bucket: src_bkt,
Key: src_key
}, function(err, data) {
if (err) {
console.log(err, err.stack);
callback(err);
}
else {
console.log('\n\n' + data.Body.toString()+'\n');
callback(null, data.Body.toString());
}
});
};
Note that data.Body is returned as a binary Buffer, so we are using the toString function to convert that binary output to a string:
Once the function has been executed, you should see the highlighted message Welcome to Lambda and S3 integration demo Class !!, as shown in the following screenshot. This message was the content of the sample.txt file that we uploaded into our S3 bucket in step 3:
Now, we have completed our discussion about S3 integration with a Lambda function.
API development is a complex process, and one that is constantly changing. It involves many inherently complex tasks, such as managing multiple API versions, implementing access and authorization, managing the underlying servers, and handling operational work. All of this makes API development more challenging and can hamper an organization's ability to deliver software in a timely, reliable, and repeatable way.
Amazon API Gateway is a service from Amazon that takes care of all API development-related issues (discussed previously) and enables you to make your API development process more robust and reliable. Let's look into this in more detail now.
Amazon API Gateway is a fully managed service that focuses on creating, publishing, maintaining, monitoring, and securing APIs. Using API Gateway, you can create an API that acts as a single point of integration for external applications while you implement business logic and other required functionality at the backend using other AWS services.
With API Gateway, you can define your REST APIs with a few clicks in an easy-to-use GUI environment. You can also define API endpoints, their associated resources and methods, manage authentication and authorization for API consumers, manage incoming traffic to your backend systems, maintain multiple versions of the same API, and perform operational monitoring of API metrics as well. You can also leverage the managed cache layer, where the API Gateway service stores API responses, resulting in faster response times.
The following are the major benefits of using API Gateway. We have seen similar benefits of using other AWS services, such as Lambda and S3:
Let's go over some key API Gateway concepts. This will help you build a better understanding of how API Gateway works:
Now, we will look at a demo of API Gateway and explore its different features. As part of this demo, we will create a simple REST API using API Gateway and integrate it with a Lambda function. We will extend our earlier exercise on S3 integration with Lambda and create a REST API that shows the contents of "sample.txt" as the API response. The API will be integrated with Lambda to execute the function, and a GET method will be defined to capture the contents of the file and return it as the API response:
Here are the steps to perform this exercise:
Here, you have three options to choose from:
New API
Import from Swagger
Example API
API name: Enter read_from_S3_api
Description: Enter sample API
Endpoint Type: Choose Regional and click on Create API.
We haven't created any resources yet as part of this exercise, so the AWS console will only have the root resource and no other resources.
Your API will invoke the Lambda function.
The Lambda function gets executed and sends the response back to the API.
The API receives the response and publishes it:
This is what will appear on your screen:
Great! You have just integrated the API Gateway with Lambda and S3.
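If you go on to deploy the API to a stage, you can also call it from outside the console. Here is a minimal sketch in Node.js; the invoke URL below is a placeholder, so substitute the one shown for your own stage:
var https = require('https');

// Placeholder invoke URL; copy the real one from the API Gateway console for your stage
var url = 'https://abcd1234.execute-api.us-east-1.amazonaws.com/dev/';

https.get(url, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() { console.log(body); });
});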
We'll now turn our focus to other native services. We'll begin with Amazon SNS and then move on to Amazon SQS.
Amazon Simple Notification Service (SNS) is a cloud-based notification service provided by AWS that enables the delivery of messages to recipients or devices. SNS uses the publisher/subscriber model for message delivery. Recipients can either subscribe to one or more "topics" within SNS or be subscribed by the owner of a particular topic. AWS SNS supports message delivery over multiple transport protocols.
AWS SNS is very easy to set up and scales well with the number of messages. Using SNS, you can send messages to a large number of subscribers, especially mobile devices. For example, let's say you have set up monitoring for one of your RDS instances in AWS, and once CPU utilization goes beyond 80%, you want to send an alert in the form of an email. You can set up an SNS topic to achieve this notification goal:
You can set up AWS SNS using the AWS Management Console, the AWS command-line interface, or the AWS SDK. You can use Amazon SNS to broadcast messages to other AWS services, such as AWS Lambda and Amazon SQS, as well as to HTTP endpoints, email, or SMS.
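For instance, creating a topic and adding an email subscriber with the Node.js SDK looks roughly like this (the email address is a placeholder):
var AWS = require('aws-sdk');
var sns = new AWS.SNS();

// Create a topic; if the topic already exists, the call simply returns its ARN
sns.createTopic({ Name: 'TestSNS' }, function(err, data) {
    if (err) return console.log(err, err.stack);
    // Subscribe an email address; SNS sends a confirmation email first
    sns.subscribe({
        TopicArn: data.TopicArn,
        Protocol: 'email',
        Endpoint: 'you@example.com'   // placeholder email address
    }, function(err2, sub) {
        if (err2) console.log(err2, err2.stack);
        else console.log('Subscription pending confirmation');
    });
});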
Let's quickly understand the basic components, along with their functions, of Amazon SNS:
Here are some of the applications of Amazon SNS:
In a simple message queue service, applications play the roles of producers and consumers. Producers create messages and deliver them to queues. Consumers connect to a queue and receive messages from it. Amazon SQS is a managed service adaptation of such message queue services.
Amazon Simple Queue Service (SQS) is a fully managed messaging queue service that enables applications to communicate by sending messages to each other:
Amazon SQS provides a secure, reliable way to set up message queues. Currently, Amazon SQS supports two types of message queues:
Standard queues, which provide maximum throughput with at-least-once delivery and best-effort ordering
FIFO queues, which process messages exactly once, in the exact order in which they are sent
Note that FIFO queues have a throughput limit: they support up to 300 messages per second, or up to 3,000 per second with batching.
Just like Amazon SNS, you can set up the AWS SQS service using the AWS Management Console, the AWS command-line interface, or the AWS SDK.
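To give you a feel for the SDK route, here is a minimal sketch of sending and then receiving a message with the Node.js SDK; the queue URL is a placeholder:
var AWS = require('aws-sdk');
var sqs = new AWS.SQS();

// Placeholder queue URL; copy the real one from the SQS console
var queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue';

// Producer: deliver a message to the queue
sqs.sendMessage({ QueueUrl: queueUrl, MessageBody: 'Hello from the producer' }, function(err, data) {
    if (err) return console.log(err, err.stack);
    console.log('Sent message ' + data.MessageId);
    // Consumer: poll the queue; a single poll may return nothing, so real consumers poll in a loop
    sqs.receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 10 }, function(err2, result) {
        if (err2) console.log(err2, err2.stack);
        else if (result.Messages) console.log(result.Messages[0].Body);
    });
});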
Amazon DynamoDB is a fully managed NoSQL database service. With DynamoDB, you won't have to face the operational and scaling challenges of a distributed database. Like other serverless AWS services, you don't have to worry about hardware provisioning, setup and configuration, data replication, or cluster scaling.
DynamoDB uses the concept of partition keys to spread data across partitions for scalability, so it's important to choose a partition key attribute that has a wide range of values and is likely to have evenly distributed access patterns.
With DynamoDB, you pay only for the resources you provision. There is no minimum fee or upfront payment required to use DynamoDB. The pricing of DynamoDB depends on the provisioned throughput capacity.
Throughput Capacity
In DynamoDB, when you plan to provision a table, how do you know the throughput capacity required to get optimal performance out of your application?
The amount of capacity that you provision depends on how many read operations you are trying to execute per second and how many write operations you are trying to execute per second. You also need to understand the concept of strong and eventual consistency. Based on your settings, DynamoDB reserves and allocates enough resources to maintain low response times, and partitions your data over enough servers to meet your application's read and write requirements.
Eventual consistency is a type of consistency where there is no guarantee that what you are reading is the latest updated data. Strong consistency is another type of consistency where you always read the most recent version of the data. Eventually consistent read operations consume half the capacity of strongly consistent ones.
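As a concrete example (using DynamoDB's published sizing, where one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, and one write capacity unit covers one write per second of an item up to 1 KB): an application that reads 50 items of 4 KB each per second with strong consistency needs 50 read capacity units, but only 25 if eventually consistent reads are acceptable.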
Now, let's look at some important terms:
You are charged for reserving these resources, even if you don't load any data into DynamoDB. You can always change the provisioned read and write values later.
DynamoDB Streams is a service that helps you capture table activity for DynamoDB tables. These streams provide an ordered sequence of item-level modifications in a DynamoDB table and store this information for up to 24 hours. You can combine DynamoDB Streams with other AWS services to solve different kinds of problems, such as audit logging, data replication, and more. DynamoDB Streams ensures the following two things:
AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Stream records, your application must access a DynamoDB Streams endpoint in the same region.
Amazon DynamoDB is integrated with AWS Lambda. This enables you to create triggers that can respond to events automatically in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables.
Integration with Lambda allows you to perform many different actions with DynamoDB Streams, such as storing data modifications on S3 or sending notifications using AWS services such as SNS.
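As an illustration, a stream-triggered Lambda function receives the modifications as a batch of records. Here is a minimal sketch, assuming the standard DynamoDB Streams event structure:
exports.handler = function(event, context, callback) {
    // Each record describes one item-level modification
    event.Records.forEach(function(record) {
        // eventName is INSERT, MODIFY, or REMOVE
        console.log(record.eventName, JSON.stringify(record.dynamodb));
    });
    callback(null, 'Processed ' + event.Records.length + ' records');
};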
In this exercise, we'll create an SNS topic and subscribe to it. So, let's get started:
ARN stands for Amazon Resource Name, and it is used to identify a particular resource in AWS.
Note that if you need to reference a particular AWS resource in any other AWS service, you do so using the ARN.
We have successfully created a topic. Let's go ahead and create a subscription for this topic. We will set up an email notification as part of the subscription creation so that whenever something gets published to the topic, we will get an email notification.
Once the subscription is confirmed, you should see the following:
So, you have successfully created an SNS topic and have successfully subscribed to that topic as well. Whenever anything gets published to this topic, you will get an email notification.
In this exercise, we will create a Lambda function and integrate it with SNS to send email notifications:
Here are the steps to perform this exercise:
Name: Write lambda_with_sns.
Runtime: Keep it as Node.js.
Role: Select Create role from template from the dropdown. Here, we are creating a Lambda function to send an SNS notification.
Role name: Provide the role name as LambdaSNSRole.
Policy templates: Choose SNS publish policy:
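The function body is a short Node.js handler along the following lines (a sketch consistent with the explanation that follows; the topic ARN is a placeholder that you will replace with your own):
var AWS = require('aws-sdk');
var sns = new AWS.SNS();

exports.handler = function(event, context, callback) {
    var params = {
        // The message text delivered to every subscriber on the topic
        Message: 'Hello from lambda_with_sns!',
        // Placeholder ARN; paste the ARN of the TestSNS topic here
        TopicArn: 'arn:aws:sns:us-east-1:123456789012:TestSNS'
    };
    // Publish the message to the SNS topic
    sns.publish(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            callback(err);
        } else {
            console.log('Message sent successfully');
            callback(null, 'Message sent successfully');
        }
    });
};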
The following is an explanation of the main parts of the code:
sns.publish: The publish action is used to send a message to an Amazon SNS topic. In our case, we have an email subscription on the topic we are trying to publish to, so successfully publishing here will result in an email notification.
Message: The message you want to send to the topic. This message text will be delivered to the subscriber.
TopicArn: The topic you want to publish to. Here, we are publishing to the "TestSNS" topic, which we created in our previous exercise. So, copy and paste the ARN of that topic here.
As we can see, the message in the execution results is Message sent successfully. This confirms that the Lambda code succeeded in sending a notification to the SNS topic.
Time to check the email account that you configured as the subscriber in the previous exercise. You should see the following AWS notification message:
This concludes our exercise on the simple integration of Lambda with Amazon SNS.
In the last exercise, we showcased Lambda integration with Amazon SNS. As part of the exercise, whenever our Lambda function was executed, we got an email alert generated by the SNS service.
Now, we will extend that exercise to perform an activity here.
Let's assume that you are processing certain events, and whenever there is an error with the processing of a particular event, you move the problematic event into an S3 bucket so that you can process it separately. Also, you want to be notified via email whenever any such event arrives in the S3 bucket.
So, we will do an activity to create a new S3 bucket and set up a mechanism that sends you an email alert whenever a new object is uploaded into this S3 bucket. When a new object is added to the S3 bucket, it will trigger the Lambda function created in the earlier exercise, which will send the required email alert using the SNS service.
Here are the steps for completion:
The solution for this activity can be found on page 154.
In this chapter, we looked at Amazon S3 and serverless deployments. We worked with API Gateway and its integration with AWS. We delved into fully managed services such as SNS, SQS, and DynamoDB. Finally, we integrated SNS with S3 and Lambda.
In the next chapter, we'll build on the API Gateway service that we covered in this chapter. We will compare a serverless web application with a traditional on-premises one as we replace traditional servers with serverless tools, while making the application scalable, highly available, and performant.