Setting Up Istio from Scratch: Part 1

In this article, we'll guide you through the process of deploying Istio in an existing Kubernetes environment. We'll also discuss common troubleshooting issues and the benefits of using Istio in a Kubernetes ecosystem.


Kentaro Wakayama

15 March 2023


In a cloud-native ecosystem, a service mesh represents a dedicated infrastructure layer that solves the challenges of microservices orchestration by managing interactions between individual services. Rather than adding new components to an existing framework, a service mesh fine-tunes interservice communication by introducing proxies that sit alongside each service.

Istio is one of the most popular open-source service mesh platforms, offering a simple, iterative approach to connect, monitor, and secure microservices. The platform comes with a powerful control plane that enables efficient load balancing, authentication, and policy management.

This article, the first in a two-part series about Istio, will show you how to set up Istio in an existing Kubernetes cluster. It will also address common post-setup troubleshooting issues and the benefits of deploying Istio in a Kubernetes ecosystem.

Leveraging Istio for Kubernetes Workloads

Kubernetes workloads typically consist of multiple services that communicate and share data with each other. Kubernetes automatically creates additional replicas of these services as the application workload grows, which introduces new points of failure and complicates communication between services.

To help with this, the Istio service mesh provides an abstracted infrastructure layer that facilitates communication between services. It also helps you visualize this layer and the interactions between services, making it easier to optimize performance at scale. The Istio service mesh presents various aspects of interservice interaction as performance metrics. Operations teams typically use this data to specify rules for service-to-service communication and to optimize service requests.

While use cases vary for different organizations, there are several commonly known reasons for integrating the Istio service mesh into a microservices stack:

  1. Cloud-Native Security

The Istio service mesh provides comprehensive security by offering robust identity, encryption, authentication, authorization, and auditing mechanisms for microservices. Communication between services is encrypted to defend against man-in-the-middle attacks. In addition to implementing mutual TLS and fine-grained access policies for flexible access control, Istio integrates with other security systems to enable multi-layered application security.

  2. Traffic Management

Istio includes multiple routing features that enable efficient traffic management across the components of the service mesh. The request routing feature dynamically directs user requests to different versions of a service. Engineering teams can also use the fault injection feature to test how resilient the application is. Istio connects to Kubernetes’ service discovery system to identify all endpoints and the services they belong to, uses this information to populate its service registry, and then directs requests to the relevant services. A minimal routing example is sketched after this list.

  3. Node and VM Deployment

Istio offers comprehensive observability by generating telemetry information for all communications within the connected cluster. Through visibility and fine-grained infrastructure controls, deployment teams can manage the provisioning of nodes used to orchestrate Kubernetes applications. The platform also allows software teams to connect external workloads to the service mesh. This enables VMs and other legacy workloads to leverage the benefits that Istio offers to Kubernetes applications.

  4. Load Balancing

Istio enables Kubernetes load balancing through service registration and discovery. It relies on the service registry to keep track of VMs and pods associated with a particular service. The Istio pilot then uses the service registry to enable service discovery that dynamically load-balances across instances of a service in the mesh.

  5. Policy Management

Istio allows administrators to enforce runtime rules through custom policy configuration. With Istio’s policy management, software teams can enforce rate limiting for traffic management, as well as control header rewrites and redirects. Custom policies can also be used to restrict service access through whitelists, blacklists, and denials. 
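
To make the request routing and fault injection features mentioned above more concrete, here is a minimal sketch of a VirtualService that routes requests based on a header and injects a fixed delay for testing. The service name reviews, the subsets v1 and v2, and the end-user header are hypothetical, and the example assumes a matching DestinationRule that defines those subsets:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  # Requests from the hypothetical "tester" user are delayed by 5s and routed to v2
  - match:
    - headers:
        end-user:
          exact: tester
    fault:
      delay:
        percentage:
          value: 100.0
        fixedDelay: 5s
    route:
    - destination:
        host: reviews
        subset: v2
  # All other traffic goes to v1
  - route:
    - destination:
        host: reviews
        subset: v1
EOF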

Setting Up Istio for a Production Kubernetes Cluster

This section explores how to install the Istio service mesh on a Kubernetes cluster using the Istio operator. These instructions are not platform-specific and can be performed on any production-grade Kubernetes cluster, as long as it satisfies the prerequisites below.

Prerequisites

  1. The kubectl CLI, configured to run commands against the Kubernetes cluster. Additionally, the istioctl command-line tool should be installed to allow interaction with the Istio API from the command line.

Special Note: The steps in this article can be used for both production-grade clusters (AKS, GKE, EKS, etc.) and Minikube clusters. The latter can be particularly handy for those looking to test setups at a much smaller scale, such as hosting a Kubernetes instance on their own local machine.

  2. During setup, you’ll use the Istio operator to manage the installation and upgrading of the Istio service mesh in a production environment. This makes it easier to manage various Istio versions, since you only have to update the Istio operator’s custom resource while the operator controller manages configuration changes.
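
Before starting, you can confirm that both tools are installed and can reach the cluster. The exact output varies by version, but commands along these lines should succeed:

$ kubectl version
$ istioctl version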

1. Installing the Istio Operator

Install the Istio operator using this command:

$ istioctl operator init

This creates an istio-operator namespace, along with the various resources needed to run the operator, including the operator’s CRD and controller deployment, a metric access service, and RBAC rules for the Istio operator.

Once installation is complete, the CLI displays this message:

Using operator Deployment image: docker.io/istio/operator:1.7.4
✔ Istio operator installed
✔ Installation complete 
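
You can also confirm that the operator controller itself is running. Assuming the default namespace created by istioctl operator init, the operator pod should show a Running status:

$ kubectl get pods -n istio-operator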

2. Configuring the Service Mesh

Use the operator to install the Istio configuration profile. First, create a namespace istio-system with this command:

$ kubectl create ns istio-system

Then apply the configuration by running the command:

$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo
EOF  

The operator installs Istio components based on the configuration specified by the demo profile.
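
Note that the demo profile is intended for evaluation rather than production use. To see which built-in profiles are available, or to inspect exactly what a profile would install before applying it, you can use these istioctl commands:

$ istioctl profile list
$ istioctl profile dump demo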

To confirm the deployment of the Istio service, run the following command:

$ kubectl get svc -n istio-system

This shows the services deployed as:

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                                                      AGE
istio-egressgateway    ClusterIP      10.0.118.188   <none>          80/TCP,443/TCP,15443/TCP                                                     4m11s
istio-ingressgateway   LoadBalancer   10.0.21.224    20.50.134.231   15021:32461/TCP,80:32583/TCP,443:32170/TCP,31400:30753/TCP,15443:31121/TCP   4m11s
istiod                 ClusterIP      10.0.153.86    <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP                                4m21s 

Alternatively, you can check the number of pods deployed in the namespace, using this command:

$ kubectl get pods -n istio-system  

This will return a result similar to:

NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-f765d8fb5-mgb5t     1/1     Running   0          5m34s
istio-ingressgateway-5f554c94dd-wnqhw   1/1     Running   0          5m34s
istiod-7fd6d8d4d9-fjt5s                 1/1     Running   0          5m44s

That’s it—straightforward and simple! 

3. Post-Setup Troubleshooting

Two of the most common problems with running Istio in clusters are inactive proxies and sidecar injection failures. Once setup is complete, it’s a best practice to verify that the service mesh is configured correctly and that the sidecar proxies are injected and connected to the control plane. If you’re lucky, you won’t hit these issues at all, but resolving them is fairly simple.

4. Validating Sidecar Components

If the result of the sidecar injection is unexpected, first ensure that the pod doesn’t reside in the kube-system or kube-public namespaces, and that the pod’s hostNetwork spec isn’t set to true.
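
A quick way to check the hostNetwork setting is to query the pod spec directly. The pod and namespace names below are placeholders; an empty result means the field is unset, which defaults to false:

$ kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.hostNetwork}'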

It’s also important to check whether the namespace selector for the webhook is opt-in or opt-out. Do this by running this command:

$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5  

If the webhook’s namespace selector is set to opt-in, the result will look similar to this:

namespaceSelector:
    matchLabels:
      istio-injection: enabled
  rules:
  - apiGroups:
    - "" 

This webhook is invoked for pods created in namespaces with the label istio-injection=enabled.
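
In that case, sidecar injection only happens in namespaces that carry the label. A minimal sketch, assuming you want to enable injection for the default namespace, looks like this (existing pods must be recreated for the sidecar to be added):

$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection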

If pods cannot be created at all, check the namespace event log, as it typically captures any failure to invoke the webhook.
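
To inspect that event log, list the events in the affected namespace, sorted by time; the namespace below is a placeholder:

$ kubectl get events -n <namespace> --sort-by='.lastTimestamp'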

5. Checking Proxy Status

The proxy-status command gives an overview of the service mesh and provides traffic-management configuration details. In particular, it shows whether each Envoy proxy is receiving configuration from istiod as expected. To check, run this command:

$ istioctl proxy-status 

The output will be similar to this:

NAME                                                   CDS        LDS        EDS        RDS          ISTIOD                            VERSION
istio-ingressgateway-5dccf968cf-hhtmj.istio-system     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-1-7-0-7997d7d998-pv7q6     1.7.4

Any proxy that is missing from the list, or that shows “not connected” or “not synced” with the control plane, indicates a networking or scaling issue.
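
If a particular proxy looks out of sync, you can drill down into it. Passing a pod name to proxy-status compares the configuration istiod has pushed with what the Envoy proxy actually holds, and proxy-config shows the resulting Envoy configuration; the pod name and namespace below are placeholders:

$ istioctl proxy-status <pod-name>.<namespace>
$ istioctl proxy-config cluster <pod-name> -n <namespace>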

6. Performing Updates

Once the controller is deployed, it’s easy to change the service mesh configuration by altering the operator’s custom resource. This can be done in two ways:

In-Place Upgrade

This is performed by completely replacing the IstioOperator resource. For instance, to switch the profile from demo to default, run the following command:

$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
EOF 
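
After applying the change, the operator controller reconciles the cluster to match the new profile. You can check the custom resource’s status to confirm the reconciliation completed; iop is the short name registered for the IstioOperator resource:

$ kubectl get iop -n istio-system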

Canary Upgrade

This upgrade runs the old and new versions of Istio concurrently until the older control plane is deleted. To install the new version of the control plane based on the operator’s custom resource, run this command:

$ istioctl operator init --revision 1-7-0

Once you do this, there will be two control planes running side by side. It’s recommended to first verify the new deployment and migrate your workloads to the new revision (see the sketch after the next command). Then, the older control plane can be deleted, using this command:

$ istioctl operator remove --revision <revision>
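
Before removing the old revision, the data-plane workloads need to be pointed at the new control plane. A minimal sketch of that migration, assuming the default namespace and the 1-7-0 revision from the example above, swaps the injection label for a revision label and restarts the workloads so their sidecars reconnect to the new istiod:

$ kubectl label namespace default istio-injection- istio.io/rev=1-7-0
$ kubectl rollout restart deployment -n default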

Benefits of Integrating Istio in a Kubernetes Cluster

There are many benefits of using the Istio service mesh for Kubernetes applications, including:

  • Consistent Service Networking

Istio operators automatically and consistently manage networking for all services. This means that organizations can offer consistent network performance for Kubernetes workloads at scale without introducing developer overhead.

  • Improved Service Security

Istio’s service operators enable security teams to protect data shared between services that support the Kubernetes application. These operators implement access control, encryption, and security-policy enforcement to reduce the attack surface in the network of microservices. Istio also comes with a standard certificate authority out of the box to generate self-signed root certificates and encryption keys.

  • Enhanced Application Performance

The Istio service mesh’s control plane captures metrics and passes them along to application performance monitoring tools. These tracing capabilities act as the foundation for observability, helping teams troubleshoot operational and request-specific issues in real time.

Summary

Istio is an open-source service mesh platform built around three core principles: policy-driven networking, security, and tracing. Istio’s primary goal is to offer enhanced observability across distributed systems by collecting telemetry data from the components of a microservices framework. It helps monitor application performance using metrics such as latency and throughput, while offering component-level visibility into the infrastructure for quicker troubleshooting.

Though Istio solves several complexities of a distributed framework, installing it is simple. The Istio operator offers easy deployment of the Istio service mesh in a production Kubernetes cluster. 

In part two of this series, we’ll build up from the existing Istio setup and perform the next level of configurations, debugging, and optimization.
