Run Serverless Kubernetes Pods with GCP: Part 1

Integrating serverless container deployment solutions with K8s enables developer productivity and faster time to market. In this article series, we’ll explore Cloud Run and learn how it integrates with Google Kubernetes Engine (GKE).

Kentaro Wakayama

23 October 2022

Due to their flexibility and agility, containers have become the preferred deployment option in the cloud. Kubernetes, which Google initially developed from its experience running internal container platforms, has undoubtedly become the market leader among container orchestration solutions.

Today, all leading cloud service providers deliver multiple flavors of K8s and supporting services on their platforms. Customers have no shortage of options: managed K8s services, serverless containers, hosted container registries, specialized security services for containers, and more. These suites of products have made containerized application deployment in the cloud more robust than ever.

Integrating serverless container deployment solutions with K8s takes this one step further. In the first installment of this two-part article series, we’ll explore Cloud Run, the Knative-based serverless deployment solution for containers. In part two, we’ll look at how it integrates with Google Kubernetes Engine (GKE), the native, managed K8s solution on GCP.

Knative for Kubernetes

Serverless has gained traction over the past few years because it enables organizations to generate value for their core business applications without the overhead of managing complex underlying infrastructure.

Knative is an open-source platform that helps run enterprise-grade serverless applications on Kubernetes. Knative manages the complexities of running the application, including network configuration, autoscaling, revision tracking, an eventing framework for cloud-native services, and more.
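
To make this concrete, here is a minimal sketch of a Knative Service. It assumes a cluster with Knative Serving installed; the service name and Google’s public helloworld-go sample image are placeholders.

```
# Apply a minimal Knative Service; Knative handles routing, revisions,
# and autoscaling (including scale-to-zero) from this one manifest.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
EOF

# Check the service and its auto-generated URL
kubectl get ksvc hello
```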

Organizations turn to Knative for these key benefits:

  • Avoiding vendor lock-in: Knative facilitates the use of serverless applications on any platform that supports Kubernetes. This makes it possible to port applications wherever the organization deems fit.
  • Developer-friendly: Container images are the deployment units in Knative, so developers can build and deploy applications in the languages and frameworks they’re familiar with. Common patterns (like GitOps) and popular frameworks (like Django, Node.js, Flask, Spring Framework, and Ruby on Rails) are all supported.
  • Plug and play: Knative can easily integrate with existing CI/CD tools like GitLab, so organizations can hit the ground running. The plug-and-play approach helps Knative interoperate with logging and monitoring tools (such as Splunk), service mesh, and the networking tools of your organization’s choice.
  • Open API: Knative provides an open API and abstractions for common use cases in building, running, and managing containerized applications.

Knative consists of three main components:

  1. Build: This provides a flexible approach for packaging source code into containers. It eliminates the overhead of cross-compiling and local builds by providing tools that build containers from source directly on the K8s cluster, and it helps organizations make better use of their available compute capacity. (Upstream, this component has since been superseded by the standalone Tekton project.)
  2. Serving: Serving builds on K8s and the Istio service mesh to enable the deployment of serverless applications. It autoscales applications based on HTTP requests, including scaling down to zero, and integrates with the networking, logging and monitoring platforms, and service mesh components of your choice (see the configuration sketch after this list).
  3. Eventing: Eventing aids in the adoption of an event-driven architecture for serverless deployments. It helps implement decoupled services that interact with each other based on events. Because Eventing conforms to the CloudEvents specification, cross-service interoperability is assured and customers can connect it to their existing systems.
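
As a sketch of the Serving component’s knobs, autoscaling behavior is tuned per revision via annotations from the Knative autoscaling API on the revision template; the values below are illustrative.

```
# Scale bounds and the per-pod concurrency target are set as annotations
# on the revision template, not on the Service itself.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-demo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale-to-zero
        autoscaling.knative.dev/max-scale: "10"  # cap the number of pods
        autoscaling.knative.dev/target: "50"     # ~50 concurrent requests per pod
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
EOF
```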

Knative has an active open-source community and is supported by more than 100 organizations, including industry leaders like IBM, Red Hat, SAP, and Pivotal. It has gained popularity thanks to the flexibility and portability it offers for serverless deployments.

What Is Cloud Run?

Cloud Run in GCP is a serverless container hosting solution based on Knative. It enables workload portability and a seamless developer experience, whether you’re using GKE in the cloud or running the cluster on-premises using Anthos.

Cloud Run enables high-speed deployment of applications in a serverless environment and gives you the flexibility to use the framework and language of your choice. The service is well integrated with GCP container ecosystem tools and services such as Cloud Build, Artifact Registry, and Cloud Code. Cloud Run also integrates with multiple partner solutions, including GitLab, Sentry, HashiCorp, Datadog, and more.

Developers can quickly deploy and manage applications on Cloud Run from the command-line interface or from GCP’s UI. And because it’s Knative-based, Cloud Run supports fast autoscaling for pods based on incoming traffic. Cloud Run also leverages GCP’s built-in regional redundancy, where the services are replicated across multiple zones within a region.
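
For example, a single CLI command is enough to deploy a public service to fully managed Cloud Run. This is a sketch: the service name, region, and Google’s public sample image are placeholders you would replace with your own.

```
# Deploy a container image to fully managed Cloud Run
gcloud run deploy hello \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 \
  --platform=managed \
  --allow-unauthenticated

# Inspect the deployed service and its HTTPS endpoint
gcloud run services describe hello --region=us-central1
```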

All applications deployed in managed Cloud Run receive a stable HTTPS endpoint, autoconfigured with TLS termination for secure access. Customers can also invoke Cloud Run services and connect to them through protocols like gRPC, WebSocket, HTTP/1.*, and HTTP/2.
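
Note that Cloud Run downgrades HTTP/2 requests to HTTP/1 at the container by default; workloads that need end-to-end HTTP/2 can opt in with a flag. The service name and region below are carried over from the earlier sketch.

```
# Enable end-to-end HTTP/2 between Cloud Run and the container
gcloud run services update hello \
  --region=us-central1 \
  --use-http2
```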

Deployment Options and Architecture

GCP offers the service in two flavors: fully managed Cloud Run, and Cloud Run for Anthos. Both use Knative under the hood, so customers can choose based on their deployment preferences.

Cloud Run

Cloud Run is fully managed and serverless by design. It lets organizations deploy containers in a matter of seconds and access their applications through an out-of-the-box HTTPS URL. The platform handles the underlying infrastructure management and automatically scales workloads out and in based on the application’s ingress traffic.

The fully managed Cloud Run service runs workloads in sandboxed environments and provides strict container isolation, ensuring the security of critical workloads. Applications deployed in Cloud Run can communicate with resources in your VPC (like VM instances or Memorystore instances) over private IPs through a Serverless VPC Access connector.
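
A sketch of that wiring: create a connector in the VPC, then attach it to the service. The connector name, network, and IP range here are illustrative assumptions.

```
# Create a Serverless VPC Access connector in the target VPC
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28

# Route the Cloud Run service's egress through the connector
gcloud run services update hello \
  --region=us-central1 \
  --vpc-connector=my-connector
```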

By default, Cloud Run scales a service to as many as 1,000 container instances, a limit that can be raised with a quota increase request. Billing is pay-per-use: CPU and memory consumption is billed rounded up to the nearest 100 milliseconds, plus a per-request charge.
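
The instance cap is also a per-service setting, so within your quota it can be adjusted without any support request; the value below is illustrative.

```
# Cap the service at 200 concurrent container instances
gcloud run services update hello \
  --region=us-central1 \
  --max-instances=200
```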

Cloud Run for Anthos

Anthos, GCP’s hybrid/multi-cloud application management solution, extends the power of GKE to an environment of your choice and provides a unified development, management, and operations experience for clusters.

Cloud Run for Anthos takes this one step further and abstracts the concepts of K8s so that developers can build and deploy serverless applications without deep K8s expertise. It also helps organizations leverage their existing investments in K8s. Services deployed using Cloud Run for Anthos can use whatever hardware the Anthos cluster’s nodes provide, including GPUs, and the cost is included in the GKE cluster usage charges.
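
Deploying looks almost identical to the fully managed flow, except that the command targets an existing cluster. This is a sketch assuming a gcloud release that still supports the gke platform target; the cluster name and location are placeholders.

```
# Deploy the same image to Cloud Run for Anthos on an existing cluster
gcloud run deploy hello \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --platform=gke \
  --cluster=my-anthos-cluster \
  --cluster-location=us-central1-c
```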

Cloud Run-based microservices can coexist with other microservices in the same Anthos cluster and interact with each other over a private network. The services can be made accessible over the internet or exposed only on a private VPC network. They are also part of the Istio service mesh, which enables unified service management.

Cloud Run for Anthos provides the additional flexibility of modernizing your application right where it is, which is especially useful for legacy applications deployed on-premises. You can deploy serverless applications on your on-premises Anthos clusters and later move them to fully managed Cloud Run in the cloud, or to any other environment compatible with Knative.

Conclusion

Cloud Run and Cloud Run for Anthos are GCP’s serverless container deployment options. Powered by Knative, Cloud Run enables developer productivity and faster time to market by abstracting away infrastructure management activities.

While Cloud Run is delivered as a fully managed service, Cloud Run for Anthos helps you leverage existing K8s investments for serverless deployments. Together they provide the flexibility to deploy applications without fear of vendor lock-in and integrate seamlessly with the GCP ecosystem to support your application stack. Cloud Run brings together the power of Kubernetes and serverless to deliver the best of both worlds.

In the next part of this blog series, we’ll show you how to get started with Cloud Run to deploy serverless applications.

For our latest insights and updates, follow us on LinkedIn.

Kentaro Wakayama

Managing Director, CEO

Kentaro leads Coder Society as CEO, bringing hands-on expertise in software development, cloud technologies, and building high-performing engineering teams.