
Deploying with Cloud Run on Google Cloud Platform

Part 4 of a 6-Part Series on Deploying on Google Cloud Platform

We recently did five mock deployments on Google Cloud Platform using different methods in an effort to understand the nuances associated with each method.

The five methods we investigated were:

We will be examining each of the five approaches over the coming weeks and wrap the series up with an article that compares our findings across all five.

For our fourth article in the series we will be deploying an application on Google Cloud Platform using Cloud Run.

Cloud Run Approach

Cloud Run is the “Container to Production in seconds” solution that Google Cloud Platform offers. It allows users to deploy and run request-serving containers, or even small jobs, on a fully managed, serverless platform.

Cloud Run integrates very well with other GCP services like Cloud Build and Artifact Registry. Its pricing model is also simple: the user pays for the time their code is running, rounded up to the nearest 100ms, so a request that takes 130ms to handle is billed as 200ms, for example.

Cloud Run uses a Docker image as its smallest unit of deployment, which simplifies the developer experience. It also scales automatically with the number of requests received, up to a configurable maximum, which makes it a reliable solution.

Additionally, developers can use any programming language they want, since Cloud Run works with Docker containers rather than language-specific runtimes.

When to use Cloud Run

Cloud Run is a good solution for serverless applications written in pretty much any language.

It leverages the flexibility of Docker containers while incorporating some really interesting features like automatic application scaling. This makes it a good option for running microservices, as long as the user doesn’t need node allocation and networking features.

Cloud Run is also a good option for organizations that could benefit from the pricing flexibility of serverless applications.

How we implemented it

For this solution we used a Terraform project that declares the following resources:

  • A VPC with a subnet for connecting Redis and Cloud Run (sketched below)
  • A Redis instance to store data
  • A Cloud Run service that runs the application

The project contains a simple counter application that consumes a Redis Memorystore instance.
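
The VPC and subnet are not reproduced in the snippets below, but the Redis instance and the connector both reference them. As a rough idea of what they look like, here is a minimal sketch; the resource names match the references used in the snippets, while the CIDR range is our own assumption, so check the repository for the exact definitions.

HCL
# Custom-mode VPC so we control the subnets ourselves
resource "google_compute_network" "vpc" {
  name                    = "${var.app_name}-vpc"
  auto_create_subnetworks = false
}

# Dedicated /28 subnet used by the Serverless VPC Access connector
resource "google_compute_subnetwork" "data" {
  name          = "${var.app_name}-data"
  region        = var.region
  network       = google_compute_network.vpc.id
  ip_cidr_range = "10.8.0.0/28" # assumption: any unused /28 range works
}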

Main Components

Redis

Here is the Terraform block declaring the Redis instance.

HCL
resource "google_redis_instance" "data" {
 name               = "${var.app_name}-redis"
 region             = var.region
 tier               = "BASIC"
 memory_size_gb     = var.regis_memory_size_gb
 authorized_network = google_compute_network.vpc.id
 connect_mode       = "PRIVATE_SERVICE_ACCESS"
}
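
Because the instance uses connect_mode = "PRIVATE_SERVICE_ACCESS", the VPC also needs a private services access connection to Google's service-producer network. The full project in the repository covers this; as a rough, hedged sketch of what that setup typically looks like (the resource names and the /16 prefix length are our assumptions):

HCL
# Reserve an internal IP range for Google-managed services such as Memorystore
resource "google_compute_global_address" "private_service_range" {
  name          = "${var.app_name}-psa-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.vpc.id
}

# Peer the VPC with the Service Networking API using the reserved range
resource "google_service_networking_connection" "private_service_access" {
  network                 = google_compute_network.vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_service_range.name]
}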

Connector

Here we declare the Serverless VPC Access connector that allows the application to reach the Redis instance over the private network. The connector runs in its own dedicated /28 subnet.

HCL
resource "google_vpc_access_connector" "connector" {
 provider      = google-beta
 name          = "${var.app_name}-vpc-conn"
 region        = var.region
 project       = var.project_id
 subnet {
   name = google_compute_subnetwork.data.name
 }
}

Cloud Run Service

In the Cloud Run service we set parameters such as the region, the container image, and the application environment variables, among other details.

HCL
resource "google_cloud_run_service" "app" {
 provider = google-beta
 name     = var.app_name
 location = var.region
 template {
   spec {
     containers {
       image = local.app_image
       env {
         name  = "REDIS_ADDR"
         value = "${google_redis_instance.data.host}:${google_redis_instance.data.port}"
       }
       ports {
         name           = "http1"
         container_port = "8080"
       }
     }
   }
   metadata {
     annotations = {
       "run.googleapis.com/vpc-access-egress"    = "all-traffic"
       "autoscaling.knative.dev/minScale"        = var.min_scale
       "autoscaling.knative.dev/maxScale"        = var.max_scale
       "run.googleapis.com/vpc-access-connector" = google_vpc_access_connector.connector.name
     }
   }
 }
...
}
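
To actually reach the deployed service, an IAM binding granting the invoker role is also needed. Whether the demo opens the service to unauthenticated users is our assumption, but a sketch of a public binding, plus an output exposing the generated service URL, could look like this:

HCL
# Allow unauthenticated invocations (assumption: the demo counter app is public)
resource "google_cloud_run_service_iam_member" "invoker" {
  service  = google_cloud_run_service.app.name
  location = google_cloud_run_service.app.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}

# Print the service URL after `terraform apply`
output "app_url" {
  value = google_cloud_run_service.app.status[0].url
}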

Other components

The complete solution with the remaining components can be found in our repository: https://github.com/nearform/gcp-articles

Pros and cons

Pros

  • “Container to production in seconds,” and Google really means it
  • Simple and unified developer experience, with each service implemented by a Docker image
  • Scalable serverless execution based on the number of requests received
  • Support for code written in any language, thanks to Docker
  • Can be integrated into GKE clusters using Anthos
  • Simple pricing model: you pay for the time your code is running

Cons

  • Some event triggers available for Cloud Functions are not available in Cloud Run (for example, Firestore events)
  • Not very useful for applications with background tasks
  • It can add complexity to the application, making the architecture harder to understand
