Looking for a way to ease the complexity of managing workloads across multiple clouds? Read this article to find out why you need to give Google Anthos a shot.

Kentaro Wakayama
09 August 2022

The advent of innovative cloud offerings for container deployments has led to a shift from monolithic applications to microservices as part of the digital transformation process. While every leading cloud service provider has its own version of managed and unmanaged services available for hosting containers, Kubernetes has become the preferred container orchestration platform for deploying microservices in production.
As customers look for the best cloud services for their containerized workloads, multicloud deployments are becoming increasingly popular. According to Flexera’s 2020 State of the Cloud Report, 93% of enterprises have a multicloud strategy, and use of container services is second in popularity only to database services. Running containerized workloads on multiple clouds helps organizations avoid vendor lock-in and keep their options open.
However, the deployment of containerized workloads across multiple cloud platforms increases operational complexity, thus demanding single-pane visibility and management capabilities across all environments. To this end, Google Anthos enables you to run Kubernetes clusters across different environments (i.e., on-premises or multicloud) with a consistent management experience. This article explores the many features Google Anthos offers as well as ideal use cases.
Prior to its public release, Kubernetes (K8s) grew out of Borg, Google’s in-house cluster management and container orchestration system. Today, K8s is maintained by the Cloud Native Computing Foundation (CNCF) and is implemented in Go, a programming language also developed at Google.
While it is possible to use standalone containers for testing and development purposes, production environments demand the agility, scalability, and reliability offered by the K8s platform. Google Kubernetes Engine (GKE) is the managed Kubernetes service in Google Cloud that handles Kubernetes cluster control plane deployment, upgrades, scaling, security, and more.
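To illustrate how much of that burden GKE absorbs, spinning up a cluster with a Google-managed control plane is a single command. The sketch below wraps that command in Go; the cluster name, zone, and project are placeholders, and enrolling the cluster in a release channel hands upgrade management to Google.

```go
// A minimal sketch: create a GKE cluster via the gcloud CLI.
// Names and locations are placeholders; requires an authenticated gcloud.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// --release-channel delegates control-plane and node upgrades to Google.
	cmd := exec.Command("gcloud", "container", "clusters", "create", "demo-cluster",
		"--zone=us-central1-a",
		"--num-nodes=3",
		"--release-channel=regular",
		"--project=my-gcp-project")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("cluster creation failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```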
The Anthos platform is based on GKE and enables customers to take advantage of the Kubernetes managed service features both in the cloud as well as in on-premises data centers.
Anthos GKE is based on the same GKE available on the Google Cloud Platform (GCP): clusters can run the same Kubernetes version and be managed with the same GCP ecosystem tools for monitoring, security, and policy management across different environments. Anthos addresses the main pain points organizations face when deploying workloads in containers, namely flexibility and ease of management.
Though the cloud offers many innovative services for containers, there are scenarios in which customers want to run part of their workloads on-premises. The do-it-yourself approach, where end-to-end management of the container orchestration stack is the customer’s responsibility, can be cumbersome.
Anthos addresses this by offering such capabilities as differentiated security, networking, monitoring, and more right out of the box.
Anthos consists of multiple components that help with orchestration, infrastructure management, policy enforcement, and service management for containerized workloads. Applications can be developed for and deployed to Anthos through Google Cloud services such as Cloud Code and Cloud Run, or through integrated third-party CI/CD tools like GitLab and CircleCI.
Let’s explore eight key features of Anthos.
Anthos provides flexibility for customers to deploy containerized applications in their environment of choice, whether “born-in-cloud” applications or legacy ones. It extends the GKE experience to hybrid and multicloud deployment architectures. Anthos cluster environments can be deployed on Google Cloud, on-premises, on VMware platforms or bare metal, and on AWS.
You can also attach other conformant Kubernetes clusters to Anthos, including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), Red Hat OpenShift Kubernetes Engine (OKE), and Red Hat OpenShift Container Platform (OCP), among others.
With a single-pane view, Anthos acts as an overarching control plane that can manage your Kubernetes cluster configuration and enforce policies across these diverse environments.
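As a rough sketch of what attaching a cluster looks like, the Go snippet below shells out to the gcloud CLI to register a hypothetical EKS cluster with an Anthos fleet. The membership name, kubeconfig context, key file, and project are all placeholders, and the exact flags vary between gcloud versions, so check the command’s help text before relying on this.

```go
// A hedged sketch (not an official tool): register an existing conformant
// cluster with an Anthos fleet by shelling out to the gcloud CLI.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// `gcloud container hub memberships register` attaches an existing
	// cluster (e.g., EKS or AKS) to the Anthos fleet of a GCP project.
	// Flag names may differ by gcloud version; verify with --help.
	cmd := exec.Command("gcloud", "container", "hub", "memberships", "register",
		"my-eks-cluster",                        // hypothetical membership name
		"--context=my-eks-context",              // kubeconfig context of the target cluster
		"--kubeconfig=/home/user/.kube/config",  // path to the kubeconfig file
		"--service-account-key-file=creds.json", // GCP key for the Connect agent
		"--project=my-gcp-project")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("registration failed: %v\n%s", err, out)
	}
	log.Printf("cluster registered:\n%s", out)
}
```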
When clusters are deployed across multiple environments, keeping their configurations consistent becomes a significant overhead. Anthos addresses this pain point by enabling centralized policy and configuration management.
Anthos uses a centralized Git repository to store policies for role-based access control, namespace configuration, and resource quotas, which are then uniformly applied across the different environments. Anthos monitors the repository for changes to policies and configuration and rolls out the necessary changes to any Kubernetes clusters connected to it.
Customers have the flexibility to host this centralized policy and configuration Git repository on-premises, on a hosted Git provider such as GitHub or GitLab, or in Google Cloud.
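To make the uniform-enforcement idea concrete, here is a minimal sketch of the kind of resource quota such a repo might carry. In practice it would live as YAML in the Git repository and be synced to every cluster; the Go program below builds the equivalent object with client-go and applies it directly, with a hypothetical namespace, quota values, and kubeconfig path.

```go
// A minimal sketch of the kind of policy a config repo might carry:
// a ResourceQuota applied uniformly to every registered cluster.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (default ~/.kube/config path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical quota: cap CPU and memory requests for team-a's namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "team-quota", Namespace: "team-a"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:    resource.MustParse("8"),
				corev1.ResourceRequestsMemory: resource.MustParse("16Gi"),
			},
		},
	}
	_, err = clientset.CoreV1().ResourceQuotas("team-a").
		Create(context.Background(), quota, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("quota applied")
}
```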
As the number of microservices grows, it becomes increasingly difficult to scale and manage them, especially when they are scattered across clusters in different environments. Anthos Service Mesh addresses this concern with a set of tools that provides visibility into containerized services deployed across on-premises and cloud platforms, simplifying the management process.
Anthos Service Mesh is built on the open-source Istio project and is made up of a data plane and a control plane. The data plane consists of a set of distributed proxies that manage traffic between individual microservices, while the fully managed control plane, Traffic Director, handles global load balancing of services deployed across multiple K8s clusters and VMs and configures ingress and egress traffic control policies.
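Since the mesh exposes Istio’s APIs, traffic policies can also be created programmatically. Below is a minimal sketch using Istio’s Go client to set up a hypothetical 90/10 canary split between two subsets of a “reviews” service; the service name, subsets, and kubeconfig path are illustrative, not part of any Anthos default.

```go
// A minimal sketch: a weighted traffic split via Istio's Go client.
// All names are hypothetical; assumes an Istio-based mesh is installed.
package main

import (
	"context"
	"log"

	networkingv1alpha3 "istio.io/api/networking/v1alpha3"
	clientnetworking "istio.io/client-go/pkg/apis/networking/v1alpha3"
	versionedclient "istio.io/client-go/pkg/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	ic, err := versionedclient.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// 90% of traffic to subset v1, 10% to v2; the sidecar proxies
	// in the data plane enforce the weights.
	vs := &clientnetworking.VirtualService{
		ObjectMeta: metav1.ObjectMeta{Name: "reviews-split", Namespace: "default"},
		Spec: networkingv1alpha3.VirtualService{
			Hosts: []string{"reviews"},
			Http: []*networkingv1alpha3.HTTPRoute{{
				Route: []*networkingv1alpha3.HTTPRouteDestination{
					{Destination: &networkingv1alpha3.Destination{Host: "reviews", Subset: "v1"}, Weight: 90},
					{Destination: &networkingv1alpha3.Destination{Host: "reviews", Subset: "v2"}, Weight: 10},
				},
			}},
		},
	}
	_, err = ic.NetworkingV1alpha3().VirtualServices("default").
		Create(context.Background(), vs, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("VirtualService created")
}
```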
Anthos comes with several security controls out of the box that enable a defense-in-depth (DiD) security strategy for containerized workloads. A Zero Trust security model is adopted as the baseline by implementing perimeters through workload isolation and network segmentation.
Additional Anthos features for securing deployed workloads include Binary Authorization, which ensures only trusted, signed container images are deployed, and Policy Controller, which enforces organizational guardrails across clusters.
For organizations planning to migrate applications to containers, Migrate for Anthos is the perfect solution. It removes the complexities associated with the modernization of applications, allowing them to be easily deployed to containers.
This service strips away the unnecessary, redundant layers of an application (e.g., the OS) and bundles only the relevant application components into a containerized format that can be deployed to GKE clusters on-premises or in the cloud (managed by Anthos).
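Purely as an illustrative sketch: Migrate for Anthos workflows are driven by its migctl CLI, and a migration for a hypothetical VM might look roughly like the following. Every name here is made up, and the flags vary by version, so treat this as the shape of the flow rather than a recipe.

```go
// A hedged sketch of driving Migrate for Anthos via its migctl CLI.
// All names are hypothetical and flags vary by version; this is the
// general shape of the workflow, not an official procedure.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("migctl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("migctl %v failed: %v\n%s", args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	// Create a migration plan for a VM; an "Image" intent asks Migrate for
	// Anthos to extract only the application layers into a container image.
	run("migration", "create", "my-migration",
		"--source", "my-vmware-source", // a previously created migration source
		"--vm-id", "my-legacy-vm",
		"--intent", "Image")
	// Generate the deployable artifacts (container image plus deployment YAML).
	run("migration", "get-artifacts", "my-migration")
}
```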
Cloud Run for Anthos is based on Knative, an open-source project for serverless deployment to Kubernetes. Along with simplifying the deployment process by abstracting the underlying infrastructure, the service also takes care of operational requirements, like autoscaling and automated network configuration.
With Cloud Run for Anthos, you can leverage existing K8s clusters managed by Anthos to deploy serverless workloads.
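Here is a minimal sketch of what that deployment looks like in practice, wrapping the standard gcloud command in Go; the service name, image, cluster, and location are placeholders.

```go
// A minimal sketch: deploy a container image as a serverless service onto
// an Anthos-managed GKE cluster with Cloud Run for Anthos.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// --platform=gke targets Cloud Run for Anthos on a GKE cluster rather
	// than fully managed Cloud Run; Knative handles request-based autoscaling.
	cmd := exec.Command("gcloud", "run", "deploy", "hello-service",
		"--image=gcr.io/my-gcp-project/hello:latest",
		"--platform=gke",
		"--cluster=demo-cluster",
		"--cluster-location=us-central1-a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("deploy failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```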
Anthos integrates with Google Cloud’s operations suite, which monitors the health, performance, and uptime of your workloads. The service collects metrics and logs from all of the services you use and enables analysis of the telemetry data for deeper insights into application health.
With the operations suite, out-of-the box dashboards and tools provide much-needed visibility to applications deployed across clusters connected to Anthos.
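For a taste of working with that telemetry programmatically, the sketch below pulls a standard GKE container CPU metric through the Cloud Monitoring Go client; the project ID and time window are placeholders.

```go
// A minimal sketch: query a GKE container CPU metric from Cloud Monitoring.
// The project ID is a placeholder; requires application default credentials.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ctx := context.Background()
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Fetch the last 10 minutes of container CPU usage time series.
	now := time.Now()
	req := &monitoringpb.ListTimeSeriesRequest{
		Name:   "projects/my-gcp-project", // hypothetical project
		Filter: `metric.type="kubernetes.io/container/cpu/core_usage_time"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-10 * time.Minute)),
			EndTime:   timestamppb.New(now),
		},
		View: monitoringpb.ListTimeSeriesRequest_FULL,
	}
	it := client.ListTimeSeries(ctx, req)
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ts.GetMetric().GetType(), len(ts.GetPoints()), "points")
	}
}
```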
Organizations that do not have an existing investment in hypervisor technology can still benefit from Anthos through its bare-metal deployment option.
Anthos can be deployed on customer-managed physical servers and comes with a built-in K8s networking stack, including overlay networking and load-balancing components, to get you started. Like the other deployment options, a bare-metal Anthos deployment supports the full suite of Anthos features and integrates easily with the GCP ecosystem, including the operations suite.
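As a hedged sketch of the bare-metal flow: cluster lifecycle is handled by the bmctl CLI in two steps, generating a config file that you edit with node and load-balancer details, then creating the cluster from it. The cluster name below is hypothetical and flags differ across Anthos versions.

```go
// A hedged sketch of the Anthos on bare metal flow via the bmctl CLI.
// The cluster name is hypothetical; flags differ across versions.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("bmctl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("bmctl %v failed: %v\n%s", args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	// Step 1: emit a cluster config template (node IPs, load-balancer
	// settings, etc. go into the generated YAML before the next step).
	run("create", "config", "-c", "edge-cluster")
	// Step 2: create the cluster from the edited config.
	run("create", "cluster", "-c", "edge-cluster")
}
```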
Anthos is a great option for organizations exploring hybrid cloud environments during the initial phases of cloud adoption, and this is in fact one of the most common use cases for the service.
Anthos provides the flexibility to deploy some of your containerized workloads in your on-premises data centers while running other parts of the architecture in the cloud. The service offers a consistent experience for deploying and managing K8s clusters, with a common set of tools and services hosted in the cloud. It also lets you burst container workloads to the cloud for extra, on-demand capacity.
Customers who are more advanced in their cloud journey, with workloads deployed across multiple cloud platforms (i.e., GCP, AWS, or Azure), can benefit from Anthos’ unified container orchestration. Instead of managing each cluster individually from their respective cloud platforms, they can all be integrated with Anthos and managed through a single pane.
If you’re looking to run workloads closer to your users to improve customer experience, Anthos at the Edge lets you run them at remote offices or telco edge locations, outside of your data centers. This improves the user experience through high-performance computing and reduced latency, while maintaining consistent configuration and compliance across all clusters.
Anthos can also be used in data center exit use cases, where migrating to the cloud using Migrate for Anthos can offer an advantage. This can be done without extensive investments in modernizing the application. Anthos’ lift-and-shift approach is most beneficial for legacy applications that can be neither upgraded nor decommissioned, and which may create speed bumps on the path toward digital transformation.
As organizations mature in the cloud, multicloud and hybrid architectures are becoming the norm rather than the exception.
In addition to Anthos, AWS and Azure now offer services for such deployments. AWS announced EKS Anywhere, which became generally available in 2021 and enables customers to deploy EKS on on-premises bare-metal servers and VMs. It provides a unified management experience for EKS clusters deployed on-premises as well as in AWS.
Azure Arc also aims to simplify the deployment and management of workloads across on-premises, multicloud, and edge locations. Azure Arc-enabled Kubernetes lets you manage any CNCF-conformant K8s cluster directly from Azure, whether it’s an AKS deployment, a multicloud cluster (EKS or GKE), or an on-premises cluster.
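For comparison, attaching an existing conformant cluster to Azure Arc is a similarly small operation. The sketch below wraps the Azure CLI call in Go; the cluster and resource group names are placeholders, and the connectedk8s CLI extension must be installed.

```go
// A minimal sketch: attach an existing CNCF-conformant cluster to Azure Arc
// with the Azure CLI. Names are placeholders; requires an authenticated
// az session with the connectedk8s extension installed.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("az", "connectedk8s", "connect",
		"--name", "my-gke-cluster",
		"--resource-group", "my-resource-group")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("arc connect failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```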
With Anthos and Anthos-equivalent services from AWS and Azure, customers can look forward to the next era of cloud computing, where deployment environments are abstracted and the focus will shift to building robust containerized applications that can run anywhere.