It’s an antipattern to assume that the smallest resource size available to your function will provide the lowest total cost. Because Lambda allocates CPU power in proportion to the memory you configure, an undersized function can run long enough that its billed duration costs more than a larger allocation that lets the function complete quickly.
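A minimal sketch of this trade-off: the per-GB-second rate, memory sizes, and durations below are illustrative assumptions (not current AWS pricing), and the request fee is ignored.

```python
# Illustrative comparison of Lambda duration cost at two memory settings.
# The rate and durations are assumed example values, not real pricing.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed example rate


def invocation_cost(memory_mb: float, duration_s: float) -> float:
    """Duration cost of one invocation in USD (request fee excluded)."""
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND


# For a CPU-bound task, more memory also means more CPU,
# so the function can finish sooner at the larger size.
small = invocation_cost(128, 10.0)   # 1.25 GB-seconds
large = invocation_cost(1024, 1.0)   # 1.00 GB-seconds

print(f"128 MB for 10 s: ${small:.7f}")
print(f"1024 MB for 1 s: ${large:.7f}")
```

In this example the larger allocation is both faster and cheaper, which is exactly the case the antipattern misses; only load testing your own function reveals where its cost curve bottoms out.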
You don’t need to implement every use case as a series of blocking, synchronous API requests and responses. If you can design your application to be asynchronous, you might find that each decoupled component of your architecture takes less compute time to do its work than tightly coupled components that spend CPU cycles waiting on synchronous responses. Many Lambda event sources fit well with distributed systems and can integrate your modular, decoupled functions in a more cost-effective manner.
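To see why waiting is expensive, consider a function that depends on a slow downstream service. The memory size, durations, and two-invocation async shape below are hypothetical assumptions used only to illustrate the billed-duration arithmetic.

```python
# Billed compute for a function that depends on a 2-second downstream call.
# All numbers are hypothetical; billed GB-seconds = memory (GB) x duration (s).
MEMORY_GB = 0.5

own_work_s = 0.1         # time spent on the function's own logic
downstream_wait_s = 2.0  # time a synchronous caller blocks on the response

# Synchronous design: one invocation pays for its own work plus the wait.
sync_gb_s = MEMORY_GB * (own_work_s + downstream_wait_s)

# Asynchronous design: publish an event and return immediately; a second,
# decoupled function handles the downstream result when it arrives.
# Two short invocations, neither of which sits idle waiting.
async_gb_s = MEMORY_GB * (own_work_s * 2)

print(f"synchronous:  {sync_gb_s:.3f} GB-seconds billed")
print(f"asynchronous: {async_gb_s:.3f} GB-seconds billed")
```

The synchronous version bills for idle CPU cycles spent waiting; the decoupled version bills only for actual work, which is where the cost advantage of event-driven designs comes from.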
Some Lambda event sources allow you to define the batch size, the number of records delivered on each function invocation (for example, Kinesis and DynamoDB streams). You should run tests to find the optimal batch size, so that the polling frequency of each event source is tuned to how quickly your function can complete its task.
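A rough sketch of the arithmetic behind such tests: the stream throughput and per-invocation overhead below are assumed example values, chosen only to show how batch size spreads fixed overhead across more records.

```python
# How batch size affects invocation count for a stream source such as Kinesis.
# Throughput and per-invocation overhead are assumed example values.
records_per_second = 1000
overhead_s = 0.05  # assumed fixed per-invocation cost (startup, teardown)


def invocations_per_second(batch_size: int) -> float:
    """Invocations needed per second to keep up with the stream."""
    return records_per_second / batch_size


for batch_size in (10, 100, 500):
    n = invocations_per_second(batch_size)
    overhead = n * overhead_s
    print(f"batch={batch_size:>3}: {n:>5.1f} invocations/s, "
          f"{overhead:.2f} s of fixed overhead per second")
```

Larger batches amortize per-invocation overhead, but each invocation also runs longer and must still finish within the function timeout, which is why the optimal value has to be found by testing rather than computed.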
The variety of event sources that integrate with Lambda means you often have several solutions that can meet your requirements. Depending on your use case and requirements (request scale, volume of data, required latency, and so on), the total cost of your architecture can differ nontrivially based on which AWS services you choose as the components surrounding your Lambda function.