1st September 2022
Part 5 of a 6-Part Series on Deploying on Google Cloud Platform
We recently did five mock deployments on Google Cloud Platform using different methods in an effort to understand the nuances associated with each method.
The five methods we investigated were:
- Google Compute Engine
- Google Kubernetes Engine
- App Engine
- Cloud Run
- Cloud Functions
Over the past five weeks we have been examining these approaches one by one, and we will wrap up the series with a final article that discusses our findings across all five.
For our fifth article in the series, we will be deploying an application on Google Cloud Platform using Cloud Functions.
Cloud Functions Approach
Cloud Functions is a serverless execution environment for building and connecting cloud services. It lets you attach simple, single-purpose functions to events emitted by other parts of your cloud infrastructure, so that meaningful work runs in response to those events.
The code executes in a fully managed environment, removing the need to provision infrastructure or manage servers.
Cloud Functions can be written in Node.js, Python, Go, Java, .NET, Ruby and PHP, and they run in language-specific runtime environments. They are not limited to event-driven processing: they can also be invoked directly via HTTP requests, with the benefit that Google manages the TLS certificates for the functions.
More Cloud Functions features can be found at https://cloud.google.com/functions.
When to use Cloud Functions
Cloud Functions is a very versatile environment. It is a good fit when you need to run simple, single-purpose code in production without worrying about where that code runs: no server configuration is required. It suits small applications that don't necessarily need to connect to other applications, and because it is serverless, it is also a sensible choice when the team doesn't have much sysadmin experience.
Another interesting use case for Cloud Functions is when the application needs to receive events from other GCP services, such as Cloud Storage and Cloud Pub/Sub.
And, finally, it’s a good choice for applications that do not need to run all day on a server.
How we implemented it
For this solution we used a Terraform project that declares the following resources:
- A VPC with a subnet connecting Redis and the Cloud Functions
- A bucket to store the application sources
- Two functions exposed as GetCounters and SetCounter
The project contains a simple Counter Application consuming a Redis Memory Store.
Here is the Terraform block declaring the Redis instance:
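The snippet below is a minimal sketch of such a declaration, not the exact code from our repository; the resource names and region are hypothetical:

```hcl
# Hypothetical Redis (Memorystore) instance declaration.
resource "google_redis_instance" "counter_store" {
  name           = "counter-store" # hypothetical instance name
  tier           = "BASIC"         # single node, no replication
  memory_size_gb = 1
  region         = "us-central1"

  # Attach the instance to the VPC the functions will reach it through.
  authorized_network = google_compute_network.vpc.id
}
```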
Next, we declare the Serverless VPC Access connector that allows the application to reach the Redis instance:
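A sketch of such a connector declaration is shown below; again, the names and IP range are illustrative assumptions rather than the project's actual values:

```hcl
# Hypothetical Serverless VPC Access connector, giving the functions
# a network route into the VPC where the Redis instance lives.
resource "google_vpc_access_connector" "functions_connector" {
  name          = "functions-connector"
  region        = "us-central1"
  network       = google_compute_network.vpc.name
  ip_cidr_range = "10.8.0.0/28" # must be an unused /28 range in the VPC
}
```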
In the Cloud Function declaration (in this case GetCounters) we set the runtime, the source bucket and object, the environment variables and the entry point. For Go Cloud Functions, the artifact is declared as a library (a Go package rather than a main program) and the entry point is an exported function that handles the requests.
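A minimal sketch of such a function declaration follows; the bucket and object references, runtime version and resource names are assumptions for illustration:

```hcl
# Hypothetical declaration for the GetCounters function.
resource "google_cloudfunctions_function" "get_counters" {
  name        = "GetCounters"
  runtime     = "go116"       # assumed Go runtime version
  entry_point = "GetCounters" # exported function in the Go package

  # The zipped sources are read from the bucket declared earlier.
  source_archive_bucket = google_storage_bucket.sources.name
  source_archive_object = google_storage_bucket_object.app_zip.name

  # Expose the function over HTTPS and route traffic to Redis
  # through the VPC connector.
  trigger_http  = true
  vpc_connector = google_vpc_access_connector.functions_connector.id

  environment_variables = {
    REDIS_HOST = google_redis_instance.counter_store.host
    REDIS_PORT = google_redis_instance.counter_store.port
  }
}
```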
The complete solution with other components can be found in our repository: https://github.com/nearform/gcp-articles
Pros and Cons

| Pros | Cons |
| --- | --- |
| Deploy code to production quickly and easily | Limited networking and provisioning configuration |
| Google-managed TLS certificates for HTTPS requests | Supports only certain languages, which limits the use cases |
| Predictable pricing, charged by resources used rather than instance types | Can add complexity to the application, making the architecture harder to understand |
| No worries about server upgrades and OS security patches | Vendor lock-in, since each cloud provider's serverless solution has its own rules |
| No need to manage autoscaling, since it scales automatically and indefinitely | Can be difficult to keep track of all the Cloud Functions running |
| Simple pricing model: you pay for the time your code is running | |