Kubernetes observability

Luca Pompei
Feb 22, 2022

The open-source platform of the future

Kubernetes, also known as K8s, derives from the Greek word for helmsman or pilot. It’s an open-source orchestration tool, originally developed by Google, for managing micro-services and containerized applications across a distributed cluster of nodes.

Kubernetes provides a highly resilient infrastructure with several out-of-the-box capabilities, such as:

  • Downtime-free deployment
  • Automatic rollout and rollback
  • Scaling
  • Service discovery and load balancing
  • Workload optimization
  • Self-healing
  • Scheduling
  • Storage orchestration

Its main goal is to hide the complexity of managing a set of containers. It’s also portable by nature, which means it can run on various public or private cloud platforms, such as AWS, Azure and more.
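As a concrete illustration of capabilities like self-healing and scaling, consider a minimal, hypothetical Deployment (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-name
spec:
  replicas: 3                 # Kubernetes keeps three replicas running at all times
  selector:
    matchLabels:
      app: microservice-name
  template:
    metadata:
      labels:
        app: microservice-name
    spec:
      containers:
      - name: microservice-name
        image: nginx:1.21     # placeholder image
        ports:
        - containerPort: 80
        livenessProbe:        # a failing probe triggers an automatic restart (self-healing)
          httpGet:
            path: /
            port: 80
```

If a Pod crashes or its liveness probe fails, Kubernetes replaces it automatically; changing `replicas` scales the workload up or down.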

Kubernetes builds on more than a decade of Google’s experience running large-scale production workloads with an internal system called Borg, combined with the community’s best ideas and practices. Google open-sourced Kubernetes in 2014 and later donated the project to the Cloud Native Computing Foundation.

Image taken from https://kubernetes.io/docs/concepts/overview/components/

The challenge

With the growth and spread of micro-services and mini-services (a single function as a service), monitoring and observability become crucial. It’s essential to know the status of each component, each node and, more generally, each resource. What you need to know at all times is:

  • Is my system in the correct state?
  • Do I need to scale differently?
  • Do I need more resources?
  • Can I save resources due to the low workload?
  • Are there recurring failures that require intervention?

And so on. Knowing the overall state of the system is vital.
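Questions like “do I need to scale differently?” can even be answered automatically by the cluster itself. A minimal, hypothetical HorizontalPodAutoscaler sketch (target names are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: microservice-hpa
spec:
  scaleTargetRef:             # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: microservice-name
  minReplicas: 1              # save resources when the workload is low
  maxReplicas: 5              # add resources under load
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Autoscaling, however, still relies on metrics being collected, which is exactly where observability comes in.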

A solution

Kubernetes, through Custom Resource Definitions (CRDs), allows you to extend its APIs and create additional resources, such as those provided by Kong, Prometheus and Grafana, which make the observability challenge easy to face and overcome.

The goal of these configurations is to set up the Kubernetes cluster so that each micro-service exposing its functionality through a Kong Ingress Controller, in a completely transparent way, enables Kong itself to gather evaluation metrics and send them to the Prometheus server. Prometheus collects and analyzes them, and through Grafana it’s then possible to easily consult them and set up preventive alerts if necessary.

Premise

Helm was chosen for the following configuration, but the same result can be achieved with kubectl directly.

Prometheus and Grafana configuration

The first step is to prepare your Kubernetes cluster by configuring Prometheus and Grafana, so that evaluation metrics can be collected and the appropriate analyses carried out.

Below are the main commands to complete the installation, within a namespace called ‘monitoring’, using the basic configuration described in a file called prometheus.yaml.

prometheus:
  prometheusSpec:
    scrapeInterval: 10s
grafana:
  persistence:
    enabled: true # enable persistence using Persistent Volumes
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
  dashboards:
    default:
      kong-dash:
        gnetId: 7424
        revision: 5
        datasource: Prometheus
# Add repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Create namespace
kubectl create namespace monitoring
# Install Prometheus+Grafana stack
helm install promstack prometheus-community/kube-prometheus-stack --namespace monitoring --version 25.1.0 -f prometheus.yaml

To verify the installation, type ‘kubectl get all -n monitoring’ and you will see the following resources.

Kong configuration

Next, it’s time to configure Kong so that its components and plugins can be used. Inside a dedicated namespace, called ‘kong’, the following commands define all the Kong resources.

# Add repository
helm repo add kong https://charts.konghq.com
helm repo update
# Create namespace
kubectl create namespace kong
# Install Kong
helm install kong kong/kong --namespace kong --set serviceMonitor.enabled=true --set serviceMonitor.labels.release=promstack
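For reference, the two --set flags correspond to the following fragment of a Helm values file. The ServiceMonitor makes Kong’s metrics endpoint discoverable by the Prometheus Operator, and the release label must match the name of the kube-prometheus-stack release (‘promstack’ above), otherwise Prometheus will ignore it:

```yaml
serviceMonitor:
  enabled: true        # create a ServiceMonitor for Kong's metrics endpoint
  labels:
    release: promstack # must match the kube-prometheus-stack release name
```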

Once all Kong resources have been defined, a global configuration described in a separate file called ‘global-plugins.yaml’ is applied. Thanks to it, each Kong Ingress Controller used by the Kubernetes components will automatically send evaluation metrics to the previously installed Prometheus server.

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: prometheus
# Configure global K8S plugins
kubectl apply -f global-plugins.yaml -n kong

In this way, transparently to the micro-services that expose their APIs, monitoring and observability for the Kubernetes cluster are automatically enabled and guaranteed.

Checking the new namespace with ‘kubectl get all -n kong’ shows the following scenario.

For each micro-service that you want to expose via the Kong Ingress Controller, a specification like the following is required.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservice-name
  annotations:
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  rules:
  - host: localhost
    http:
      paths:
      - path: /microservice-name
        pathType: ImplementationSpecific
        backend:
          service:
            name: microservice-name
            port:
              number: 80
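The Ingress above assumes a Service named microservice-name exists and listens on port 80. A minimal, hypothetical sketch of such a Service (names and ports are placeholders that must match your workload):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: microservice-name   # referenced by the Ingress backend
spec:
  selector:
    app: microservice-name  # must match the Pod labels of the micro-service
  ports:
  - port: 80                # port referenced by the Ingress
    targetPort: 80          # container port of the micro-service
```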

Results

The Kubernetes cluster is now configured. On each interaction with the external routes, Kong takes care of collecting the evaluation metrics and sending them to the Prometheus server. Grafana then allows you to consult what has been collected and to configure alerts if necessary.
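As a sketch of such an alert, a PrometheusRule picked up by the kube-prometheus-stack release installed earlier could fire on a sustained rate of 5xx responses. Note that the rule name, threshold and the exact Kong metric name (kong_http_status here) are assumptions and depend on the Kong plugin version:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kong-alerts
  namespace: monitoring
  labels:
    release: promstack   # must match the kube-prometheus-stack release name
spec:
  groups:
  - name: kong
    rules:
    - alert: KongHigh5xxRate
      # metric name may differ depending on the Kong Prometheus plugin version
      expr: sum(rate(kong_http_status{code=~"5.."}[5m])) > 1
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Kong is returning a sustained rate of 5xx responses
```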

Below are some representative screenshots of the Grafana dashboards, which allow you to view and analyze the collected data.

Kong’s official dashboard
Prometheus 2.0 stats dashboard

Conclusions

Kubernetes represents the open-source platform of the future. Setting up a cluster correctly is essential to keep the state of its resources under control and to intervene promptly when necessary.

Monitoring and observability play a crucial role in maintenance.

Kubernetes provides a lot of out-of-the-box functionality, but it also allows you to define new custom resources. In this sense, it’s thanks to Kong, Prometheus and Grafana that collecting and analyzing the usage metrics of a Kubernetes cluster becomes very simple.

Complete and updated examples of what is reported can be found on GitHub at: https://github.com/lucapompei/k8s-cluster
