Prometheus Study Notes: Preparing the In-Cluster Service Discovery Environment

2024/9/28 23:38:20

1. Environment Overview

This post demonstrates how Prometheus running inside a Kubernetes cluster automatically discovers the cluster's own services, along with other service-discovery scenarios. A later post will cover deploying Prometheus outside the cluster and having it discover k8s services and scrape their data.

Create the namespace used for monitoring

kubectl create ns monitoring

Configure Docker so it can pull images

[root@k8s-master deploy]# cat /etc/docker/daemon.json  # configure registry-mirrors
{
  "registry-mirrors": ["https://docker.m.daocloud.io", "https://huecker.io", "https://dockerhub.timeweb.cloud"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
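Before restarting Docker it is worth confirming that the edited daemon.json parses as valid JSON, since a malformed file prevents the daemon from starting. A quick throwaway check (the inline string mirrors the config above):

```python
import json

# Mirror of the daemon.json contents above; json.loads raises on malformed input
config = '''
{
  "registry-mirrors": ["https://docker.m.daocloud.io", "https://huecker.io", "https://dockerhub.timeweb.cloud"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
'''
parsed = json.loads(config)
print(parsed["registry-mirrors"])
```

On the host itself, `python3 -m json.tool /etc/docker/daemon.json` performs the same check in one line.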

Prometheus-related images are listed at https://hub.docker.com/u/prom; they all live under the prom organization.

2. Installing node-exporter on k8s

Reference: https://github.com/prometheus/node_exporter

kubectl create ns monitoring  # dedicated namespace (skip if already created above)
mkdir -p manifest/monitor
cd manifest/monitor
vim node-export-ds.yaml

# Note the kind is DaemonSet: compared with a binary install, a DaemonSet
# avoids manual installs when nodes are frequently added or removed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master  # your master taint may differ; check with kubectl describe
      containers:
      - image: registry-vpc.cn-shanghai.aliyuncs.com/zdbl-base/node-exporter:v1.3.1  # pulled into a private registry here; the official image works too once the Docker mirrors are configured
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100  # defaults to containerPort if omitted; watch for port conflicts
          protocol: TCP
          name: metrics
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host
          name: rootfs
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /host
      hostNetwork: true
      hostPID: true
      hostIPC: true

kubectl apply -f node-export-ds.yaml
kubectl get pods -n monitoring
NAME                  READY   STATUS    RESTARTS   AGE
node-exporter-7bp55   1/1     Running   0          1m
node-exporter-klx2b   1/1     Running   0          1m
node-exporter-pcht8   1/1     Running   0          1m
netstat -tnlp | grep 9100  # listens on the host because hostPort is set

Open a node's port 9100 in a browser and check the /metrics endpoint.
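Beyond eyeballing the endpoint in a browser, the exposition format is easy to check programmatically. A minimal sketch of parsing one sample line of the Prometheus text format (the sample values below are made up, not real scrape output):

```python
import re

# Parse one sample line of the Prometheus text exposition format, e.g.
#   node_cpu_seconds_total{cpu="0",mode="idle"} 123.45
LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_metric(line):
    """Return (name, labels_dict, value) for a single sample line, or None."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    labels = {}
    if m.group('labels'):
        for key, val in re.findall(r'(\w+)="([^"]*)"', m.group('labels')):
            labels[key] = val
    return m.group('name'), labels, float(m.group('value'))

print(parse_metric('node_cpu_seconds_total{cpu="0",mode="idle"} 123.45'))
print(parse_metric('node_load1 0.5'))
```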

3. Installing cAdvisor on k8s

Skipped here; see https://www.cnblogs.com/panwenbin-logs/p/18385045

4. Installing an NFS StorageClass

Skip this section if you already have one.

Reference: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/

wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/archive/refs/tags/nfs-subdir-external-provisioner-4.0.18.tar.gz
tar xf nfs-subdir-external-provisioner-4.0.18.tar.gz
cd nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/
cp -r deploy deploy-bak
cd deploy
kubectl create ns nfs-provisioner  # dedicated namespace
sed -i 's/namespace: default/namespace: nfs-provisioner/g' `grep -rl 'namespace: default' ./`  # switch the namespace in all manifests
vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry-vpc.cn-shanghai.aliyuncs.com/zdbl-base/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER  # set the server and export path for your environment; an Alibaba Cloud NAS is used here
          value: 332fa4bbd2-rho88.cn-shanghai.nas.aliyuncs.com
        - name: NFS_PATH
          value: /k8s-nfs-sc
      volumes:
      - name: nfs-client-root
        nfs:
          server: 332fa4bbd2-rho88.cn-shanghai.nas.aliyuncs.com  # keep consistent with NFS_SERVER above
          path: /k8s-nfs-sc
kubectl apply -k .
kubectl get pods -n nfs-provisioner

Verify

vim test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client  # must match the name defined in class.yaml
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:stable
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"  # mount the volume and create a file
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim

kubectl apply -f test-claim.yaml  # the file created above contains both the PVC and the Pod
kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
test-pod   0/1     Completed   0          63m
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-f4e9b50d-6a19-4c7f-bdc9-18855c712fdd   1Mi        RWX            Delete           Bound    default/test-claim   nfs-client              63m
kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-f4e9b50d-6a19-4c7f-bdc9-18855c712fdd   1Mi        RWX            nfs-client     63m
ll /mnt/k8s-nfs-sc/default-test-claim-pvc-f4e9b50d-6a19-4c7f-bdc9-18855c712fdd/SUCCESS
-rw-r--r-- 1 root root 0 2024-09-03 09:29 /mnt/k8s-nfs-sc/default-test-claim-pvc-f4e9b50d-6a19-4c7f-bdc9-18855c712fdd/SUCCESS
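The directory in the last command follows the provisioner's naming scheme: each dynamically provisioned PV gets a subdirectory named ${namespace}-${pvcName}-${pvName} under the NFS export. A throwaway sketch of that convention:

```python
def nfs_subdir_name(namespace, pvc_name, pv_name):
    # nfs-subdir-external-provisioner creates one directory per PV,
    # named ${namespace}-${pvcName}-${pvName} under the export root
    return f"{namespace}-{pvc_name}-{pv_name}"

print(nfs_subdir_name("default", "test-claim",
                      "pvc-f4e9b50d-6a19-4c7f-bdc9-18855c712fdd"))
```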

Set the NFS class as the default StorageClass

kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  14m
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'  # patch the nfs-client SC to make it the default
storageclass.storage.k8s.io/nfs-client patched
kubectl get sc  # nfs-client now carries the (default) marker
NAME                   PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  15m
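How the (default) marker works: a PVC that omits storageClassName is assigned the StorageClass whose storageclass.kubernetes.io/is-default-class annotation is "true". A minimal illustration of that selection rule (the dicts are hypothetical data, not a client-library call):

```python
DEFAULT_ANNOTATION = "storageclass.kubernetes.io/is-default-class"

def pick_default_sc(storage_classes):
    """Return the names of StorageClasses annotated as the default."""
    return [sc["name"] for sc in storage_classes
            if sc.get("annotations", {}).get(DEFAULT_ANNOTATION) == "true"]

# Hypothetical cluster state for illustration
scs = [
    {"name": "nfs-client", "annotations": {DEFAULT_ANNOTATION: "true"}},
    {"name": "local-path", "annotations": {}},
]
print(pick_default_sc(scs))
```

Note that if more than one class carries the annotation, newly created PVCs become ambiguous, so keep exactly one default.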

5. Deploying Prometheus on k8s

1. Create the Prometheus manifests

prometheus-cm.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:  # global settings, same as the binary deployment
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:  # scrape configuration
    - job_name: 'kubernetes-node'  # k8s service discovery
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'  # the kubelet listens on 10250 by default, but our node-exporter runs on 9100, so rewrite the port before scraping
        replacement: '${1}:9100'  # to monitor a cAdvisor DaemonSet, copy this job and change 9100 to the cAdvisor port
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'  # scrape cAdvisor through the kube-apiserver, since k8s embeds cAdvisor in the kubelet; to scrape a deployed DaemonSet instead, see the kubernetes-node job above
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt  # the apiserver requires authentication; the CA cert and token are auto-mounted into the pod
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor  # ${1} is replaced with the node name captured by the regex above
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep  # keep only targets matching the regex below; drop everything else
        regex: default;kubernetes;https  # namespace, svc name, and svc port name respectively; note this is the port name, not the protocol (the protocol is set via scheme)
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)  # map the svc metadata labels into target labels
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
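The replace rules in this config can be reasoned about offline: Prometheus joins the source label values with the separator, matches the anchored regex against the result, and substitutes $1/${1} references from the capture groups into the replacement. A rough Python equivalent of that behavior (a sketch, not the actual Prometheus code):

```python
import re

def relabel_replace(source_values, regex, replacement, separator=";"):
    """Approximate Prometheus' 'replace' relabel action: full-match the
    anchored regex against the joined source label values, then expand
    $1/${1} capture-group references in the replacement."""
    joined = separator.join(source_values)
    m = re.fullmatch(regex, joined)
    if m is None:
        return None  # no match: the target label is left unchanged
    # translate Prometheus' $1/${1} references into Python's \1
    py_repl = re.sub(r'\$\{?(\d+)\}?', r'\\\1', replacement)
    return m.expand(py_repl)

# kubernetes-node job: rewrite the kubelet port 10250 to node-exporter's 9100
print(relabel_replace(["192.168.1.10:10250"], r"(.*):10250", "${1}:9100"))
# kubernetes-service-endpoints job: splice the prometheus.io/port annotation
# value into __address__
print(relabel_replace(["10.244.1.5:8080", "9153"],
                      r"([^:]+)(?::\d+)?;(\d+)", "$1:$2"))
```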

prometheus-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: registry-vpc.cn-shanghai.aliyuncs.com/zdbl-base/prometheus:v2.36.1
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --storage.tsdb.retention=720h
        - --web.enable-lifecycle
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          items:
          - key: prometheus.yml
            path: prometheus.yml
            mode: 0644
      - name: prometheus-storage-volume  # backed by a PVC from the NFS StorageClass; use hostPath instead if you have no StorageClass
        persistentVolumeClaim:
          claimName: prometheus-data

prometheus-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-data
  namespace: monitoring
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10G
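Note the request uses 10G (decimal SI, 10^9 bytes) rather than 10Gi (binary, 2^30 bytes); both are valid Kubernetes quantities, but they differ by about 7%. The difference illustrated:

```python
def quantity_bytes(value, suffix):
    """Convert a Kubernetes-style quantity to bytes for the common suffixes."""
    si = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
    binary = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    return value * (si.get(suffix) or binary[suffix])

print(quantity_bytes(10, "G"))   # decimal gigabytes
print(quantity_bytes(10, "Gi"))  # binary gibibytes
```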

prometheus-rbac.yaml

apiVersion: v1
kind: ServiceAccount  # Prometheus needs broad permissions to monitor the cluster; cluster-admin is bound here for convenience, but in practice read-only access to all resources is sufficient
metadata:
  name: monitor
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitor-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: monitor
  namespace: monitoring

prometheus-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30090
    protocol: TCP
  selector:
    app: prometheus
    component: server

2. Apply the manifests

kubectl apply -f .
kubectl get pods -n monitoring |grep prometheus
kubectl logs -n monitoring prometheus-server-5d5cc898b6-q69wp

3. Open the web UI at <node-IP>:30090

 

Prometheus configuration reference: https://prometheus.io/docs/prometheus/latest/configuration/configuration/
