Installing Prometheus with docker-compose

Prometheus Monitoring
1. Overview

Main components:
  • Prometheus server: collects and stores time-series data
  • exporter: runs on the monitored client and exposes metrics
  • Alertmanager: handles alerts
  • Grafana: data visualization and dashboards
  • Pushgateway: accepts metrics pushed by short-lived jobs, which the Prometheus server then scrapes
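To illustrate the Pushgateway's role: a batch job that exits before Prometheus can scrape it pushes its metrics to the Pushgateway instead. A minimal sketch, assuming a Pushgateway is listening on localhost:9091 (the Pushgateway is not part of the compose file in this article, and the metric and job names here are invented for the example):

```shell
# Push one gauge metric for a hypothetical job "backup_job" to the
# Pushgateway's text-exposition endpoint; Prometheus scrapes it later.
cat <<'EOF' | curl --data-binary @- http://localhost:9091/metrics/job/backup_job
# TYPE backup_last_success_timestamp gauge
backup_last_success_timestamp 1656633600
EOF
```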
Architecture diagram: (figure omitted)
2. Environment Setup

2.1 Prerequisites

  • OS: CentOS Linux release 7.8.2003
  • docker: 20.10.17
  • docker-compose: v2.6.0
  • IP: 192.168.0.80

2.2 Edit the Prometheus configuration file

```shell
mkdir /etc/prometheus
vim /etc/prometheus/prometheus.yml
```

/etc/prometheus/prometheus.yml:
```yaml
# Global configuration
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  # scrape_timeout is set to the global default (10s).

# Alerting configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['192.168.1.200:9093']

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/etc/prometheus/rules.yml"

# Controls which resources Prometheus monitors.
# In the default configuration there is a single job, named 'prometheus',
# which scrapes the time-series data exposed by the Prometheus server itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any data scraped from this config.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
        labels:
          env: dev
          role: docker
```

2.3 Edit the alert rules file

/etc/prometheus/rules.yml:
```yaml
groups:
  - name: example
    rules:
      # Alert for any instance that is unreachable for >1 minute.
      - alert: InstanceDown
        expr: up == 0
        for: 1m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."
```

2.4 Edit the Alertmanager configuration file

/etc/alertmanager/alertmanager.yml:
```yaml
global:
  resolve_timeout: 5m
  smtp_smarthost: 'xxx@xxx:587'
  smtp_from: 'zhaoysz@xxx'
  smtp_auth_username: 'xxx@xxx'
  smtp_auth_password: 'xxxx'
  smtp_require_tls: true

route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'test-mails'

receivers:
  - name: 'test-mails'
    email_configs:
      - to: 'scottcho@qq.com'
```

2.5 Edit the docker-compose file

/docker-compose/prometheus/docker-compose.yml:
```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - /etc/prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--web.external-url=http://192.168.1.200:9090/'
      - '--web.enable-lifecycle'
      - '--storage.tsdb.retention=15d'
    ports:
      - 9090:9090
    links:
      - alertmanager:alertmanager
    restart: always
  alertmanager:
    image: prom/alertmanager
    ports:
      - 9093:9093
    volumes:
      - /etc/alertmanager/:/etc/alertmanager/
      - alertmanager_data:/alertmanager
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
      - '--storage.path=/alertmanager'
    restart: always
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - /etc/grafana/:/etc/grafana/provisioning/
      - grafana_data:/var/lib/grafana
    environment:
      - GF_INSTALL_PLUGINS=camptocamp-prometheus-alertmanager-datasource
    links:
      - prometheus:prometheus
      - alertmanager:alertmanager
    restart: always
volumes:
  prometheus_data: {}
  grafana_data: {}
  alertmanager_data: {}
```

2.6 Start Compose

```shell
docker-compose up -d
```
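After starting the stack, a quick sanity check can confirm that the containers came up and that the services answer on their ports. A sketch, assuming the stack is running on the local host:

```shell
# List the compose services; prometheus, alertmanager and grafana
# should all show state "Up"
docker-compose ps

# Prometheus and Alertmanager each expose a health endpoint;
# curl -f exits non-zero if the service is not healthy
curl -fsS http://localhost:9090/-/healthy
curl -fsS http://localhost:9093/-/healthy
```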
2.7 Access the endpoints
  • http://localhost:9090 (Prometheus server home page)
  • http://localhost:9090/metrics (Prometheus server's own metrics)
  • http://192.168.0.80:3000 (Grafana)
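Besides the web UI, the same data is available through Prometheus's HTTP API; querying the built-in `up` metric, for example, shows which scrape targets are currently reachable. A sketch, assuming Prometheus answers on localhost:9090:

```shell
# POST a PromQL expression to the v1 query endpoint; the JSON response
# contains one sample per scrape target, with value 1 (up) or 0 (down)
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up'
```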
3. Adding a Monitored Host Job

3.1 Install node_exporter

node_exporter collects host metrics; it is essentially an API served over HTTP.
On RedHat-family operating systems it can be installed with yum.
yum installation method: https://copr.fedorainfracloud.org/coprs/ibotty/prometheus-exporters/
```shell
curl -Lo /etc/yum.repos.d/_copr_ibotty-prometheus-exporters.repo https://copr.fedorainfracloud.org/coprs/ibotty/prometheus-exporters/repo/epel-7/ibotty-prometheus-exporters-epel-7.repo
yum -y install node_exporter
systemctl start node_exporter
systemctl enable node_exporter.service
```

Binary installation: download from the official site (https://prometheus.io/download/)
```shell
tar -zxvf node_exporter-1.0.0-rc.1.linux-amd64.tar.gz
./node_exporter --web.listen-address=:9100
```

Verify at: http://localhost:9100/metrics
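The metrics page can also be checked from the shell rather than the browser (assuming the default :9100 listen address used above):

```shell
# Fetch the exposition page and show a few host metrics;
# these node_* series are what the 'node' job in prometheus.yml will scrape
curl -s http://localhost:9100/metrics | grep '^node_' | head -n 5
```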
3.2 Add the job to the Prometheus configuration

```yaml
- job_name: 'node'
  static_configs:
    - targets: ['localhost:9100']
      labels:
        env: dev
        role: docker
```

3.3 Restart Prometheus

Because this is static configuration, the Prometheus service must be restarted for the new job to take effect; this can later be replaced with service discovery.
```shell
docker-compose restart
```
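Note that because the compose file starts Prometheus with `--web.enable-lifecycle`, the configuration can alternatively be reloaded without restarting the container. A sketch, assuming Prometheus answers on localhost:9090:

```shell
# Ask the running Prometheus server to re-read prometheus.yml and rules.yml
curl -X POST http://localhost:9090/-/reload
```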
4. Configure Grafana

Visit http://192.168.0.80:3000/. The initial login account/password is admin/admin.
Create a Prometheus data source:
1. Click the gear icon in the sidebar to open the Configuration menu.
2. Click "Data Sources".
3. Click "Add data source".
4. Select "Prometheus" as the type.
5. Set the appropriate Prometheus server URL (for example, http://192.168.0.80:9090/).
6. Adjust other data source settings as needed (for example, choosing the correct access method).
7. Click "Save & Test" to save the new data source.
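Because the compose file above already mounts /etc/grafana/ onto Grafana's provisioning directory, the same data source could also be declared as a provisioning file instead of being clicked together in the UI. A sketch of such a file; the file name and its location under the mounted directory are assumptions:

```yaml
# /etc/grafana/datasources/prometheus.yml on the host, which the volume mount
# makes visible as /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.0.80:9090
    isDefault: true
```

Grafana reads provisioning files at startup, so the data source appears after a `docker-compose restart` of the grafana service.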
