Now that we've got a feel for mapping Lambda events with S3, let us give yet another service a go! This time it's DynamoDB!
You can trigger Lambda functions in response to updates made to a particular DynamoDB table; for example, a new row added to a table can be validated by a Lambda function, a row deletion can result in Lambda sending a notification to a user, and so on. Before you go ahead and implement triggers for DynamoDB, however, it's important to note that, unlike S3, DynamoDB is a stream-based event source, which means that you first need to enable streams on your DynamoDB table before you create and map functions to it. Lambda then polls the stream for new records and, when it finds some, invokes the corresponding function mapped to it.
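If you prefer working from the terminal, streams can also be enabled on an existing table with the AWS CLI's update-table command. The example below uses the same LambdaTriggerDB table name that we use throughout this use case, and NEW_AND_OLD_IMAGES is just one reasonable choice of view type (the function we write later only reads the item's keys, so KEYS_ONLY would work as well):
# aws dynamodb update-table --table-name LambdaTriggerDB --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES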
In the following use case, we shall see how to use a Lambda function to check a particular data column in the table for a pattern. If the pattern is invalid, the function should delete the entire data row; otherwise, it should ignore it. Once again, I will be using APEX to deploy my functions, so first we need to get the folder directory all set up. Since we already have a work directory and a project directory created, we will just go ahead and create a simple folder for this particular use case under the following folder structure:
# mkdir ~/workdir/apex/event_driven/functions/myDynamoToLambdaFunc
With the directory created, we only need to create the function.dev.json and index.js files here as well. Remember, the function.dev.json file is unique to each use case, so in this case the file will contain the following set of instructions:
{
  "description": "Node.js Lambda function that uses DynamoDB as a trigger to validate the value of the inserted IP address and deletes it if it's invalid.",
  "role": "arn:aws:iam::<account-id>:role/myLambdaDynamoFuncRole",
  "handler": "index.handler",
  "environment": {}
}
Once again, the code is fairly self-explanatory. We also have to create a corresponding IAM role to allow our Lambda function to interact with and poll the DynamoDB stream on our behalf. This includes providing Lambda with the necessary permissions to describe and list DynamoDB streams, read records from the stream, and delete the invalid items from the table itself:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "myDynamodbPermissions",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:ListStreams",
        "dynamodb:DeleteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:<account-id>:table/LambdaTriggerDB*"
      ]
    },
    {
      "Sid": "myLogsPermissions",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
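Keep in mind that the role also needs a trust relationship that allows the Lambda service to assume it. If you are creating the role from scratch rather than reusing an existing one, the standard trust policy for Lambda looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}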
With the configuration of the function out of the way, let us now have a quick look at the function code itself:
function isValidIPAddress(ipAddr, cb){
  if(/^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/.test(ipAddr)){
    cb(null, "Valid IPv4 Address");
  }
  else{
    cb("Invalid");
  }
}
The preceding code snippet simply checks whether the supplied IP address is a valid IPv4 address or not. We have used a regular expression to do the check.
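To get a feel for the callback convention used here (an error for invalid addresses, and a null error plus a message for valid ones), a quick standalone test of the helper might look like this:
// Quick local test of the validator's callback convention
isValidIPAddress("192.168.1.10", (err, msg) => {
  console.log(err ? "Rejected: " + err : msg);   // prints "Valid IPv4 Address"
});
isValidIPAddress("999.1.1.1", (err, msg) => {
  console.log(err ? "Rejected: " + err : msg);   // prints "Rejected: Invalid"
});
The handler itself, shown next, wires this check up to the records arriving from the table's stream: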
exports.handler = (event, context, callback) => {
  var ipAddr, eventName;
  var tableName = "LambdaTriggerDB";
  event.Records.forEach((record) => {
    eventName = record.eventName;
    console.log("Event: " + eventName);
    switch(eventName){
      case "MODIFY":
      case "INSERT":
        ipAddr = record.dynamodb.Keys.IP_ADDRESS.S;
Here, we check the eventName, that is, MODIFY, INSERT, or REMOVE, to decide between the different execution paths. For MODIFY and INSERT events, we check the validity of the IP address and, if it is invalid, delete that particular record from the DynamoDB table. In the case of a REMOVE event, we don't want to do anything. We have used a simple switch case to achieve this task.
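Since the snippet above breaks off inside the switch, here is a minimal, self-contained sketch of how the rest of the handler could be completed. It assumes the low-level DynamoDB client from the Node.js aws-sdk package (dynamodb.deleteItem) is used to remove the offending items; the authoritative version of the code lives in the repository linked below:
// Sketch only: completes the handler shown above using the low-level DynamoDB client
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB();

exports.handler = (event, context, callback) => {
  var ipAddr, eventName;
  var tableName = "LambdaTriggerDB";
  event.Records.forEach((record) => {
    eventName = record.eventName;
    console.log("Event: " + eventName);
    switch(eventName){
      case "MODIFY":
      case "INSERT":
        ipAddr = record.dynamodb.Keys.IP_ADDRESS.S;
        // isValidIPAddress() is the helper defined earlier in index.js
        isValidIPAddress(ipAddr, (err, msg) => {
          if(err){
            // Invalid IP address: delete the offending item from the table
            dynamodb.deleteItem({
              TableName: tableName,
              Key: { IP_ADDRESS: { S: ipAddr } }
            }, callback);
          }
          else{
            console.log(msg);
            callback(null, "Record is valid; nothing to do");
          }
        });
        break;
      case "REMOVE":
      default:
        // Row deletions (and anything else) are simply ignored
        break;
    }
  });
};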
You can find the complete code along with all the necessary config files for your reference here: https://github.com/PacktPublishing/Mastering-AWS-Lambda.
We will once again use APEX to deploy our function to Lambda. To do so, we execute the APEX deploy command from the project-level directory, as shown below:
# apex --env dev deploy myDynamoToLambdaFunc
With your function successfully packaged and deployed, you can now create the DynamoDB table and the associated Lambda trigger as well. The table creation is a straightforward process: select the DynamoDB option from the AWS Management Console, click on Create new table, and fill out the necessary information as shown in the image below. Make sure to provide the same table name as the one used in your Lambda function's IAM role. For the Primary key, type in IP_ADDRESS and select String as the attribute type. Click on Create once done.

Once the table is created, make sure to copy the table's stream ARN. The stream ARN will be required in the next steps, when we map the table's stream to our deployed Lambda function.
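If you'd rather not dig through the console for it, the same value can also be read using the CLI; the LatestStreamArn field of the describe-table output is the ARN we are after:
# aws dynamodb describe-table --table-name LambdaTriggerDB --query "Table.LatestStreamArn"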
To configure the function's trigger, select the newly created function from the Lambda dashboard. Next, select the Triggers option to configure the event mapping. Click on the blank box adjacent to the Lambda function and choose the DynamoDB option, as shown. Fill in the required details as described below:
- DynamoDB table: From the drop-down list, select the stream-enabled table that we created a short while back.
- Batch size: Provide a suitable value for the maximum number of stream records the function reads in one batch. Here, I've opted for the default value.
- Starting position: Select the position in the stream from which the function must start reading. In this case, we have gone with the Latest position marker.
Make sure the Enable trigger option is selected before you complete the configurations:

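As an alternative to the console, the same event source mapping can be created with the AWS CLI's create-event-source-mapping command, using the stream ARN you copied earlier. Note that APEX typically prefixes the deployed function's name with the project name, so adjust the --function-name value to match what you see in the Lambda dashboard:
# aws lambda create-event-source-mapping --function-name <your-deployed-function-name> --event-source-arn <your-table-stream-arn> --starting-position LATEST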
With this step completed, we are now ready to test our function. To do so, simply add a valid record to the DynamoDB table and check the function's logs using Amazon CloudWatch Logs. Once verified, try the same using an invalid IP address and see the results. You can use the same logic to verify data that is dumped into your DynamoDB table, or even perform some level of processing over the data before it is either deleted or archived as a file to S3.
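If you prefer driving the test from the terminal, inserting records with put-item works just as well; the two addresses below are only sample values, the first valid and the second invalid:
# aws dynamodb put-item --table-name LambdaTriggerDB --item '{"IP_ADDRESS": {"S": "192.168.1.10"}}'
# aws dynamodb put-item --table-name LambdaTriggerDB --item '{"IP_ADDRESS": {"S": "999.123.456.789"}}'
The first item should remain in the table untouched, while the second should be deleted shortly after the function picks it up from the stream.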