In a nutshell: “Just upload code and execute it”.
From my viewpoint as an engineer, serverless computing means that I can implement and use cloud functionality without having to set up and manage server deployments.
A quote from Amazon states: “Serverless computing allows you to build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale, and manage any servers. You can build them for nearly any type of application or back-end service, and everything required to run and scale your application with high availability is handled for you” (https://aws.amazon.com/serverless/).
For example, if business logic has to be executed, I develop one or more functions (procedures), deploy those “into a serverless cloud” and invoke them without having to worry, for example, about initialization, containers, web or application servers, deployment descriptions or scaling.
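As a minimal sketch of what such a function can look like, here is a handler in the Python style popularized by AWS Lambda; the event payload and response shape are illustrative assumptions, not a specific provider's contract:

```python
import json

def handler(event, context):
    """Entry point invoked by the serverless platform.

    'event' carries the request payload; 'context' provides runtime
    metadata. Provisioning, scaling, and teardown of the underlying
    servers are handled by the platform, not by this code.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The function contains only business logic: there is no server bootstrap, no container configuration, and no deployment descriptor in the code itself.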
Or, when I need database access, I create a database service instance “in the cloud” and use it. Of course, I might still have to reason about capacity, but I don’t have to procure hardware; find, install, and maintain database software images; scale the instances; and so on.
There are many explanations and discussions of serverless computing, for example, https://martinfowler.com/articles/serverless.html or https://martinfowler.com/bliki/Serverless.html, among many others. A clear-cut technical definition of “serverless computing” is still missing.
Serverless Computing: Why is it Interesting?
There are many reasons why serverless computing is interesting and appealing. Cost is one factor, hardware utilization is another. The above references outline many, many more.
However, from a software or service engineering perspective there are many important reasons why this relatively new concept is very much worth exploring and considering for future development, and possibly for migration from traditional approaches such as Kubernetes.
For one, the focus and effort spent on managing server deployments and hardware environments are greatly reduced or removed altogether, including aspects like scaling and failure recovery. This frees up engineering, QA, DevOps and operations time and resources to focus on developing the core business functionality.
More importantly, a serverless development and execution environment restricts the software architecture, implementation, testing and deployment in major ways. This reduction in variance allows execution optimization and enables a significant increase in development quality and dependability.
Serverless Computing: What are the Choices?
As in the case of any new major development, several providers put forward their specific implementations, and this is the case for serverless computing as well. I don’t attempt to provide a comprehensive list here, but instead refer to the following page as one example that collected some providers: http://www.nkode.io/2017/09/12/serverless-frameworks.html.
One observation is that there are vendor-specific as well as vendor-neutral implementations of serverless computing. Which environment is most applicable is in the eye of the beholder, based on use cases and requirements.
Serverless Computing: Changing Planets
It is easy to label serverless computing as “The Next Big Thing”. However, I believe that a real fundamental shift is taking place that breaks in major ways with the historical development and engineering of distributed computing. Remote procedure call (RPC – https://en.wikipedia.org/wiki/Remote_procedure_call), Distributed Computing Environment (DCE – https://en.wikipedia.org/wiki/Distributed_Computing_Environment), CORBA (https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture), and REST (https://en.wikipedia.org/wiki/Representational_state_transfer), just to name a few, all have in common that design and engineering have to take hardware and resource topology and deployment into consideration, including scaling and recovery. Furthermore, they have to reason about “local” and “remote”.
Serverless computing, as it is currently implemented, takes away the “distribution” and deployment aspects to a very large extent, or even completely. This will significantly change how software engineering approaches system and service construction. Time will tell the real impact, of course, but I’d expect major shifts from a software engineering perspective.
What’s Next: Journey Ahead
From a very pragmatic viewpoint, serverless computing is an alternative software development and execution environment for (distributed) services. And this defines the journey ahead: figuring out how the various aspects of software engineering are realized, for example:
- Function implementation
- Procedure implementation
- Error handling, exception handling and logging
- Failure recovery
- Scaling (out and in)
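As a first taste of the error-handling and logging aspect, here is a hedged sketch of a function that distinguishes client errors (reported back, not retried) from unexpected failures (logged and re-raised so the platform’s recovery policy can take over). The payload shape, the ValidationError class, and the status-code convention are assumptions for illustration:

```python
import json
import logging

logger = logging.getLogger(__name__)

class ValidationError(Exception):
    """Client error: reject the request rather than retry it."""

def handler(event, context):
    try:
        order = event.get("order")
        if not order or "id" not in order:
            raise ValidationError("an order with an id is required")
        logger.info("processing order %s", order["id"])
        return {"statusCode": 200,
                "body": json.dumps({"processed": order["id"]})}
    except ValidationError as exc:
        # Client error: return a 4xx response; retrying won't help.
        return {"statusCode": 400,
                "body": json.dumps({"error": str(exc)})}
    except Exception:
        # Unexpected failure: log it and re-raise so the platform can
        # record the failure and apply its retry/recovery policy.
        logger.exception("unhandled error")
        raise
```

The design choice here is that the function only classifies errors; what happens after a re-raised failure (retry, dead-letter queue, alert) is delegated to the serverless platform.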
The blogs following this one will explore these and other aspects over time.
Serverless computing, and serverless functions in particular, are very appealing developments, especially from the viewpoint of software development/engineering as well as scalable execution.
Let’s get hands-on and see what the upside potentials are as well as where the limits and issues lie.
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.