On the other hand, serverless computing is still in its infancy; hence, it is not suitable for every use case, and it does have its limitations:
- Transparency: The infrastructure is managed by the FaaS provider. This convenience comes at the cost of flexibility: you don't have full control of your application, you cannot access the underlying infrastructure, and you cannot easily switch between platform providers (vendor lock-in). In the future, we can expect increasing work toward the unification of FaaS; this will help avoid vendor lock-in and allow us to run serverless applications across different cloud providers, or even on-premises.
- Debugging: Most monitoring and debugging tools were built without serverless architectures in mind, so serverless functions are hard to debug and monitor. In addition, it's difficult to set up a local environment to test your functions before deployment (pre-integration testing). The good news is that observability tooling for serverless environments is improving as serverless popularity rises; multiple open source projects and commercial tools have been created by the community and by cloud providers (AWS X-Ray, Datadog, Dashbird, and Komiser).
- Cold starts: The first request to an idle function takes longer to handle, because the cloud provider needs to allocate the required resources (AWS Lambda needs to start a container) before running your code. To avoid this latency, your function must be kept in an active (warm) state.
- Stateless: Functions need to be stateless so that the platform can scale serverless applications transparently. Therefore, to persist data or manage sessions, you need to use an external database, such as DynamoDB or RDS, or an in-memory cache engine, such as Redis or Memcached.
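The pre-integration testing mentioned above can be approximated by invoking a handler locally with a fake event before deploying it. A minimal sketch in Python (the handler and its event shape are hypothetical, not part of any provider's API):

```python
import json

def handler(event, context):
    """A hypothetical Lambda-style handler that echoes back a greeting."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Pre-integration test: call the handler locally with a fake event
    # and no context, long before the code reaches the cloud.
    response = handler({"name": "serverless"}, None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["message"] == "Hello, serverless!"
    print("local test passed")
```

This catches logic errors early, but it cannot replicate provider-side behavior (IAM permissions, timeouts, event sources), which is why dedicated observability tooling still matters.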
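The cold-start cost described above comes from one-time initialization work that runs when a new container is created; warm invocations reuse the container and skip it. A rough Python sketch of this lifecycle (the handler is hypothetical, and the module-level variable stands in for an expensive resource such as a database client):

```python
# Module-level code runs once per container: this is the "cold start" cost.
# Subsequent (warm) invocations reuse the same container and skip the setup.
_connection = None  # placeholder for an expensive resource (DB client, config)

def handler(event, context):
    global _connection
    cold = _connection is None
    if cold:
        # Simulate one-time setup work (opening connections, loading config).
        _connection = object()
    return {"cold_start": cold}
```

Calling the handler twice in the same container shows the pattern: the first invocation reports a cold start, the second does not. Keeping the function warm (for example, with a periodic ping) keeps users on the fast path.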
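The statelessness constraint means any data that must survive between invocations has to live outside the function. A minimal sketch of externalized session state, using an in-memory dict purely as a stand-in for a real store such as DynamoDB or Redis (the handler and event shape are hypothetical):

```python
# Stand-in for an external store (DynamoDB, Redis, ...). In a real function
# this would be replaced by client calls to the store, because the function's
# own memory is discarded whenever its container is recycled.
SESSION_STORE = {}

def handler(event, context):
    """Increments a per-session counter kept outside the function itself."""
    session_id = event["session_id"]
    count = SESSION_STORE.get(session_id, 0) + 1
    SESSION_STORE[session_id] = count  # write back to the external store
    return {"session_id": session_id, "count": count}
```

Because the function itself holds no state, the platform can run any number of copies in parallel; they all coordinate through the shared store.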
That said, these limitations are likely to ease over time, as an increasing number of vendors release upgraded versions of their platforms.