There is no doubt about it: the sheer number of solutions and architectures available in a cloud environment seems endless, to the point where cloud complexity must be actively managed before it starts to negatively affect company operations.
In truth, most cloud-related solutions are focused on avoiding excessive complexity, and sometimes this is done in a way that breaks paradigms we have been taking for granted for a long time. One such example is the Serverless approach. To put it simply, imagine being able to publish an application in the cloud without having to worry about servers, or about managing resources such as disk space, memory, and CPU. And it gets even better: how about being billed exclusively for the resources you have actually used, effectively getting rid of non-productive resource costs? Well, this is a precise description of the Serverless model, yet another way the cloud can make your business run faster, more simply, and more cost-effectively.
So, what is the Serverless model?
Let’s start by stating the obvious: of course there are servers and other computing resources in a Serverless-based solution; the difference is that they simply no longer matter to you, the client of a cloud services provider. But rest assured, there is still a team of experts making sure the levels of availability and resilience we have come to expect from a cloud solution are still delivered.
To put it in simpler terms, Serverless computing is an execution model in which the cloud provider dynamically manages and allocates computing resources on the application's behalf. In this approach, the costs are not based on previously acquired units, but on the resources dynamically consumed by your application.
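To make the pay-per-use idea concrete, here is a minimal sketch of how a consumption-based bill could be computed: you pay for memory held during execution time (GB-seconds) and per invocation, and nothing while the application is idle. The rates below are illustrative assumptions, not any provider's actual pricing.

```python
# Sketch of consumption-based billing: charges accrue only while code runs.
# Both rates are illustrative assumptions, not real provider pricing.
ILLUSTRATIVE_RATE_PER_GB_SECOND = 0.0000166667  # assumed compute rate
ILLUSTRATIVE_RATE_PER_REQUEST = 0.0000002       # assumed per-invocation rate

def monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    """Cost based on resources actually consumed, not pre-provisioned servers."""
    gb_seconds = invocations * avg_duration_s * memory_gb
    compute = gb_seconds * ILLUSTRATIVE_RATE_PER_GB_SECOND
    requests = invocations * ILLUSTRATIVE_RATE_PER_REQUEST
    return round(compute + requests, 2)

# One million invocations, 200 ms each, 0.5 GB of memory:
print(monthly_cost(1_000_000, 0.2, 0.5))
# An idle month costs nothing:
print(monthly_cost(0, 0.2, 0.5))
```

Note how the second call returns zero: with no invocations there is nothing to pay, which is exactly the contrast with a pre-provisioned server that bills around the clock.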
As early as 2006, cloud providers were already experimenting with the FaaS (Function as a Service) model. Google's App Engine, for example, offered billing based on application usage, but it was limited to a sandboxed Python runtime that did not allow the execution of arbitrary code.
The first major cloud provider to deliver a truly Serverless approach was Amazon, which in 2014 introduced AWS Lambda: instead of loading the application into a container or virtual machine, customers simply uploaded their code into Lambda, which took care of everything else. The model was, in fact, quite simple: the application would remain dormant until it was activated by the appropriate trigger, at which point Lambda would start its execution. Once the application completed its task, it was removed from the Lambda service. That simple.
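The dormant-until-triggered flow described above boils down to writing a handler function that the platform invokes for you. The sketch below follows the common AWS Lambda Python convention of a function receiving an event and a context; the `"name"` field in the event is a hypothetical example, since real event shapes depend on the trigger.

```python
import json

def handler(event, context):
    """Entry point the platform calls when a trigger fires.

    The code stays dormant until an event (HTTP request, queue message,
    file upload, timer, ...) activates it; once it returns, the platform
    can tear the execution environment down again.
    """
    # "name" is a hypothetical event field used for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulating what the platform does when a trigger arrives:
print(handler({"name": "Serverless"}, None))
```

Everything outside the handler (provisioning, scaling, teardown) is the provider's problem, which is precisely the appeal of the model.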
Reinforcing the obvious fact that the Serverless approach still uses servers, it was soon revealed, to no one's surprise, that the driving force behind AWS Lambda was container technology. The real change in this model was that AWS itself was in charge of loading the code into the container and executing it. As expected, with the success of this approach, other major cloud players followed the trend: Google, Microsoft, IBM, and Oracle have all created Serverless offerings of their own.
Pros and cons
As expected, the Serverless model attracted a lot of interest as soon as it was released. As a friend once told me: there is nothing better than living in a post-container world, where my only job is to send my application to a cloud provider, let them make sure everything runs as expected, and be charged only for the resources that were effectively consumed.
Serverless computing can undoubtedly bring many benefits, but just like any other technology, understanding its advantages is only one side of the story. Before fully committing to a new model, a critical look at its disadvantages is also necessary.
"In the serverless model, the costs are not based on previously acquired units, but on the resources dynamically consumed by your application"
However, since there are far more components involved than in a traditional architecture, the attack surface is larger. It is also important to remember that you cannot install any security solution on the server or even on the network; you need to trust that your cloud provider properly implements security controls.
Another important aspect is privacy: since most Serverless environments run in a public cloud, resources are shared, and your data may also be accessible to the cloud provider's staff.
Now, it is important to understand that being concerned about security and privacy is quite different from saying that the Serverless model running on a public cloud is not secure. It is! Just understand that in cases with stricter security and privacy requirements, you might want to consider running Serverless on a private cloud or even on premises (Kubernetes allows you to do just that!).
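As a sketch of what that on-premises option can look like, Knative is one popular way to run Serverless workloads on a Kubernetes cluster you control: it scales a container down to zero when idle and back up when requests arrive. The manifest below is a minimal example assuming Knative Serving is installed on the cluster; the service name and image are hypothetical.

```yaml
# Minimal Knative Service sketch; assumes Knative Serving is installed.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-serverless                 # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest  # hypothetical image
```

Applied with `kubectl apply -f`, this gives you the same deploy-the-code-and-forget-the-servers experience, but on infrastructure under your own security and privacy controls.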
In general, the Serverless model can be of immense value and is easily adopted; it is just a matter of understanding which applications benefit the most and having a reliable provider.
With many businesses already adopting this model as their preferred approach, and with the creation of standards and the inclusion of more programming languages, Serverless computing will be one of the best ways to gain agility in application development and publication while keeping costs under control. In truth, this is the power of the cloud.