Part 3 of a 6 Part Series on Deploying on Google Cloud Platform
We recently did five mock deployments on Google Cloud Platform using different methods in an effort to understand the nuances associated with each method.
We will be examining each of the five methods over the coming weeks and will wrap up the series with an article that discusses our findings across all five approaches.
For our third article in the series we will be deploying an application on Google Cloud Platform using App Engine.
Deploying with App Engine
Google App Engine is one of the fully managed, serverless solutions provided by Google Cloud Platform. It allows users to build monolithic server-side rendered websites, and it supports popular development languages with a range of developer tools. It takes care of the server and deployment management so the user can focus on the code.
Google App Engine is best suited for applications that are designed using a microservice architecture. It provides the user with two different environments: standard and flexible. The standard environment runs applications in a sandbox, using specific runtimes. The flexible environment runs applications on Docker containers using any version of the languages supported by App Engine.
More on App Engine can be found at https://cloud.google.com/appengine.
When to use App Engine
Google App Engine is designed to provide nearly limitless scalability, making it a good option for applications whose traffic is expected to grow quickly. It is also a good choice for applications written in one of the supported languages: Python, Java, Node.js, Go, Ruby, PHP, or .NET.
Another important factor when deciding which platform to use is the team’s experience with managing servers and monitoring tools. With App Engine, these skills are not strictly necessary. It also has a free tier, which can be a very attractive feature.
How we implemented it
For this solution we used a Terraform project that declares the following resources:
- A VPC with a subnet to connect Redis and App Engine
- A Redis instance to store data
- An App Engine service that runs the application
The project contains a simple counter application backed by a Redis Memorystore instance.
Here is a Terraform block declaring the Redis instance.
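A minimal sketch of what such a block might look like, assuming a basic-tier Memorystore instance; the resource name, region, and the `google_compute_network.main` network it attaches to are illustrative assumptions, not the repository's actual values:

```hcl
# Hypothetical example: a basic-tier Memorystore for Redis instance
# attached to a VPC assumed to be declared elsewhere in the project.
resource "google_redis_instance" "counter_store" {
  name           = "counter-store" # assumed name
  tier           = "BASIC"         # single node, no replica
  memory_size_gb = 1
  region         = "us-central1"   # assumed region

  # Attach the instance to the project's VPC so App Engine can reach it.
  authorized_network = google_compute_network.main.id
}
```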
Here we declare the connector that will be used to allow the application to access the Redis instance.
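As a sketch, a Serverless VPC Access connector could be declared along these lines; the connector name, CIDR range, and network reference are assumptions for illustration:

```hcl
# Hypothetical example: a Serverless VPC Access connector that lets the
# App Engine app reach resources (like Redis) inside the VPC.
resource "google_vpc_access_connector" "connector" {
  name          = "appengine-connector"            # assumed name
  region        = "us-central1"                    # assumed region
  network       = google_compute_network.main.name # VPC assumed to exist
  ip_cidr_range = "10.8.0.0/28"                    # unused /28 range in the VPC
}
```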
In the App Engine configuration we declare two Terraform resources: the application and its standard app version. In the standard version resource we can set parameters such as the runtime, the deployment source, and the application’s environment variables, among other details.
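A rough sketch of those two resources is shown below. The runtime, version ID, entrypoint, and the references to a `counter_store` Redis instance, a `connector` VPC connector, and `var.project_id`/`var.deployment_bucket`/`var.deployment_object` variables are all illustrative assumptions:

```hcl
# Hypothetical example: the App Engine application plus a standard
# environment version that deploys the counter app from a GCS bucket.
resource "google_app_engine_application" "app" {
  project     = var.project_id # assumed variable
  location_id = "us-central"   # assumed region
}

resource "google_app_engine_standard_app_version" "v1" {
  service    = "default"
  version_id = "v1"
  runtime    = "nodejs20" # assumed runtime

  deployment {
    zip {
      # Zipped application source, assumed to be uploaded to GCS beforehand.
      source_url = "https://storage.googleapis.com/${var.deployment_bucket}/${var.deployment_object}"
    }
  }

  entrypoint {
    shell = "node index.js" # assumed entrypoint
  }

  # Point the app at the Redis instance declared earlier.
  env_variables = {
    REDIS_HOST = google_redis_instance.counter_store.host
    REDIS_PORT = tostring(google_redis_instance.counter_store.port)
  }

  # Route outbound traffic through the VPC connector so the app can reach Redis.
  vpc_access_connector {
    name = google_vpc_access_connector.connector.id
  }
}
```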
The complete solution with other components can be found in our repository: https://github.com/nearform/gcp-articles
Pros and cons
| Pros | Cons |
| --- | --- |
| Serverless, no servers to manage | Only supports a few languages |
| Nearly limitless scalability, even in short spikes | Some level of vendor lock-in (you can’t make system changes, for example) |
| Frees the users from dealing with monitoring tools | It can add complexity to the application, making the architecture harder to understand |
| It can run small applications for free (on the free tier) | |
| Can be scaled to zero (if the application isn’t needed for a period of time, for example) | |