# Sending logs and metrics from a Kubernetes cluster
This guide shows how to monitor a Kubernetes (k8s) cluster using your Opstrace cluster.
Specifically, the goal is to
- scrape k8s system metrics, and push them to the Opstrace cluster.
- collect logs of workloads (containers) running on the k8s cluster, and send them to the Opstrace cluster.
To that end,
- we will deploy a single Prometheus instance in the k8s cluster (as a k8s Deployment): it will scrape the various k8s system metric endpoints and push all collected data to the remote Opstrace cluster.
- we will deploy a Promtail instance on each node in the k8s cluster (as a k8s DaemonSet): it will collect all local container logs and push them to the remote Opstrace cluster.
## Prerequisites
- An Opstrace cluster.
- A decision: which Opstrace tenant would you like to send data to?
- An Opstrace tenant authentication token file (for the tenant of your choice); see also the concepts documentation.
To follow this guide step by step, you will additionally need kind installed on your computer.
kind is a tool for running a local k8s cluster using Docker.
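If you do not yet have a test cluster at hand, you can create a local one with kind; the cluster name `k8s-demo` below is just an example:

```bash
# Create a local k8s cluster; the name is arbitrary.
kind create cluster --name k8s-demo

# Confirm that kubectl talks to the new cluster
# (kind registers the context as kind-<name>).
kubectl cluster-info --context kind-k8s-demo
```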
## 3: Create a k8s secret with the tenant authentication token
Copy the default tenant's data API authentication token.
Create a new file named secret.yaml with the following contents, inserting your tenant's authentication token:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tenant-auth-token-default
stringData:
  authToken: <YOUR AUTH TOKEN HERE>
```
Create the secret:
```bash
kubectl apply -f secret.yaml
```
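Alternatively, you can create the secret directly from the token file without pasting the token into a YAML file. This is a sketch: the file name `tenant-api-token-default` is an assumed example; use the actual path of your token file.

```bash
# Create the secret straight from the token file; the path
# tenant-api-token-default is an assumed example.
kubectl create secret generic tenant-auth-token-default \
  --from-file=authToken=tenant-api-token-default

# Either way, verify that the secret exists and exposes the authToken key.
kubectl describe secret tenant-auth-token-default
```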
## 4: Deploy a Prometheus instance in the k8s cluster for scraping metrics
Create a file named prometheus-config.yaml with the following contents.
Replace ${CLUSTER_NAME} with the name of your Opstrace cluster, and ${TENANT_NAME} with the name of your tenant.
Note that the configuration snippets below assume that the Opstrace tenant you would like to send data to is called default.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-config
data:
  prometheus.yml: |-
    remote_write:
    - url: https://cortex.${TENANT_NAME}.${CLUSTER_NAME}.opstrace.io/api/v1/push
      bearer_token_file: /var/run/${TENANT_NAME}-tenant/authToken
    scrape_configs:
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      # Always use HTTPS for the api server
      - source_labels: [__meta_kubernetes_service_label_component]
        regex: apiserver
        action: replace
        target_label: __scheme__
        replacement: https
      # Rename jobs to be <namespace>/<name, from pod name label>
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_label_name]
        action: replace
        separator: /
        target_label: job
        replacement: $1
      # Rename instances to be the pod name
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: instance
      # Include node name as an extra field
      - source_labels: [__meta_kubernetes_pod_node_name]
        target_label: node
    # This scrape config gathers all nodes
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - target_label: __scheme__
        replacement: https
      - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
        target_label: instance
    # This scrape config just pulls in the default/kubernetes service
    - job_name: 'kubernetes-service'
      kubernetes_sd_configs:
      - role: endpoints
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_label_component]
        regex: apiserver
        action: keep
      - target_label: __scheme__
        replacement: https
      - source_labels: []
        target_label: job
        replacement: default/kubernetes
```
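If you would rather not edit the placeholders by hand, one option is to substitute them with envsubst from GNU gettext. This is a sketch; it assumes envsubst is installed and that your Opstrace cluster is named, say, mycluster:

```bash
# Substitute only the two placeholders; everything else in the file
# (such as Prometheus's own $1 replacement patterns) is left untouched.
export CLUSTER_NAME=mycluster
export TENANT_NAME=default
envsubst '${CLUSTER_NAME} ${TENANT_NAME}' \
  < prometheus-config.yaml > prometheus-config.rendered.yaml
```

If you go this route, submit prometheus-config.rendered.yaml in the next step instead.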
Submit this config map:
```bash
kubectl apply -f prometheus-config.yaml
```
Next, create a file named prometheus.yaml with the following contents:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: prometheus
  template:
    metadata:
      labels:
        name: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
      - name: retrieval
        image: prom/prometheus:v2.21.0
        imagePullPolicy: IfNotPresent
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
        - name: tenant-auth-token-default
          mountPath: /var/run/default-tenant
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config
      - name: tenant-auth-token-default
        secret:
          secretName: tenant-auth-token-default
```
Then start the Prometheus deployment with the following command:
```bash
kubectl apply -f prometheus.yaml
```
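To check that Prometheus came up and is shipping samples, inspect the pod and its logs (the label name=prometheus matches the Deployment above):

```bash
# Wait for the Prometheus pod to become ready.
kubectl get pods -l name=prometheus

# Look for remote-write or scrape errors in the logs.
kubectl logs deployment/prometheus

# Optionally, forward the Prometheus UI to your machine.
kubectl port-forward deployment/prometheus 9090:9090
```

With the port-forward running, open http://localhost:9090/targets to confirm that the three scrape jobs defined above are healthy.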
## 5: Deploy Promtail in the k8s cluster for collecting and pushing logs
Create a file named promtail-config.yaml with the following contents.
Replace ${CLUSTER_NAME} with the name of your Opstrace cluster.
Note that this snippet assumes that the Opstrace tenant you would like to send data to is called default: the tenant name is hard-coded both in the Loki push URL and in the token mount path /var/run/default-tenant.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
data:
  promtail.yml: |
    client:
      url: https://loki.default.${CLUSTER_NAME}.opstrace.io/loki/api/v1/push
      bearer_token_file: /var/run/default-tenant/authToken
    scrape_configs:
    - pipeline_stages:
      - docker:
      job_name: kubernetes-pods-name
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ^$
        source_labels:
        - __service__
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: k8s_app
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: k8s_namespace_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: k8s_pod_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: k8s_container_name
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - pipeline_stages:
      - docker:
      job_name: kubernetes-pods-static
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ^$
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ^$
        source_labels:
        - __service__
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: k8s_app
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: k8s_namespace_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: k8s_pod_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: k8s_container_name
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
```
Submit this config map:
```bash
kubectl apply -f promtail-config.yaml
```
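As a quick sanity check, confirm that the placeholder was actually replaced before Promtail starts:

```bash
# The printed push URL should contain your real cluster name,
# not the ${CLUSTER_NAME} placeholder.
kubectl get configmap promtail-config -o yaml | grep url
```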
Create a file named promtail.yaml with the following contents:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
spec:
  minReadySeconds: 10
  selector:
    matchLabels:
      name: promtail
  template:
    metadata:
      labels:
        name: promtail
    spec:
      containers:
      - args:
        - -config.file=/etc/promtail/promtail.yml
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        image: grafana/promtail:1.6.1
        imagePullPolicy: Always
        name: promtail
        readinessProbe:
          httpGet:
            path: /ready
            port: http-metrics
            scheme: HTTP
          initialDelaySeconds: 10
        ports:
        - containerPort: 80
          name: http-metrics
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/promtail
          name: promtail-config
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/run/default-tenant
          name: tenant-auth-token-default
          readOnly: true
      serviceAccount: promtail
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - configMap:
          name: promtail-config
        name: promtail-config
      - secret:
          secretName: tenant-auth-token-default
        name: tenant-auth-token-default
      - hostPath:
          path: /var/log
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
        name: varlibdockercontainers
  updateStrategy:
    type: RollingUpdate
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: promtail
subjects:
- kind: ServiceAccount
  name: promtail
  namespace: default
```
Start the Promtail DaemonSet with the following command:

```bash
kubectl apply -f promtail.yaml
```
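Finally, confirm that a Promtail pod is running on every node and that log pushes succeed; errors from the Loki push endpoint would show up in the pod logs:

```bash
# One Promtail pod per node should be listed.
kubectl get pods -l name=promtail -o wide

# Check a DaemonSet pod's logs for push errors.
kubectl logs daemonset/promtail
```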