Zabbix's agent mostly just reports: it pushes data to the server. In Prometheus, the client side (an exporter or instrumented application) holds the current metric values locally, and the server regularly pulls the data it wants.
Zabbix's agent can easily read databases, logs, and other files on the machine through scripts and report them. Prometheus reporting clients come in two forms: SDKs for different languages, and exporters for different purposes. For example, if you want to monitor machine status, MySQL performance, and so on, there is a large number of mature exporters that work out of the box; each serves its metrics over HTTP for the server to pull.
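Whatever the exporter or SDK, the server ultimately pulls the same plain-text exposition format over HTTP. A minimal sketch of reading that format (the sample payload and metric names below are made up for illustration):

```python
# Fabricated example of what a /metrics endpoint returns.
SAMPLE = """\
# HELP app_requests_total Total HTTP requests handled.
# TYPE app_requests_total counter
app_requests_total{method="get"} 1027
app_requests_total{method="post"} 3
# HELP app_temperature_celsius Current temperature.
# TYPE app_temperature_celsius gauge
app_temperature_celsius 36.6
"""

def parse_metrics(text):
    """Return {metric_with_labels: float_value}, skipping # metadata lines."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # HELP/TYPE comment lines carry no sample values
        name, value = line.rsplit(" ", 1)
        samples[name] = float(value)
    return samples

metrics = parse_metrics(SAMPLE)
print(metrics['app_requests_total{method="get"}'])  # 1027.0
```

This is only the simple line-oriented core of the format; real exporters may also emit histograms, summaries, and timestamps.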
Install Prometheus:
Official website download address:
https://prometheus.io/download/
After downloading the version you want, install and run it:
cby@cby-Inspiron-7577:~/prometheus-2.21.0.linux-amd64$ ./prometheus --version
prometheus, version 2.21.0 (branch: HEAD, revision: e83ef207b6c2398919b69cd87d2693cfc2fb4127)
build user: root@a4d9bea8479e
build date: 20200911-11:35:02
go version: go1.15.2
View the server startup configuration file:
cby@cby-Inspiron-7577:~/prometheus-2.21.0.linux-amd64$ cat prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']
global: global configuration. scrape_interval is the interval between data scrapes; evaluation_interval is the interval between alerting-rule evaluations.
scrape_configs: the targets to scrape monitoring information from. Each job_name defines a job, and its targets are the IPs and ports to collect from. By default Prometheus monitors itself here, and you can change the monitored Prometheus port by editing this entry. Each Prometheus exporter becomes a target, each reporting different monitoring information, such as machine status or MySQL performance. An application instrumented with a language SDK also becomes a target, reporting your custom business metrics.
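As the comments in the config note, metrics_path defaults to '/metrics' and scheme defaults to 'http', so every target entry expands into a full scrape URL. A hypothetical sketch of that expansion (the function name is my own, not part of Prometheus):

```python
def scrape_url(target, scheme="http", metrics_path="/metrics"):
    """Build the URL Prometheus pulls, using the documented defaults."""
    return f"{scheme}://{target}{metrics_path}"

# The default 'prometheus' job scrapes Prometheus itself:
print(scrape_url("localhost:9090"))  # http://localhost:9090/metrics
```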
After it is running, you can access the web UI on the default port 9090. If you cannot access it, check whether a firewall is blocking it; if there is no restriction, check that the process started normally and is listening on the port.
On the official download page, you can also find the tar package for node_exporter. This exporter monitors basic hardware information, such as CPU, memory, and disk; node_exporter is itself an HTTP service that can be used directly.
Download the latest version of this exporter, unpack it, and run it:
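Once node_exporter is running, you can sanity-check it with `curl http://localhost:9100/metrics`. The output uses the same plain-text format; the sketch below filters the CPU metrics out of a fabricated excerpt of that output (metric values invented):

```python
# Fabricated excerpt of node_exporter's /metrics output.
NODE_SAMPLE = """\
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6
node_cpu_seconds_total{cpu="0",mode="user"} 678.9
node_memory_MemAvailable_bytes 8.2e+09
"""

def filter_metrics(text, prefix):
    """Keep only the sample lines whose metric name starts with `prefix`."""
    return [line for line in text.splitlines() if line.startswith(prefix)]

for line in filter_metrics(NODE_SAMPLE, "node_cpu"):
    print(line)
```

Against a live exporter, the equivalent one-liner would be `curl -s http://localhost:9100/metrics | grep '^node_cpu'`.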
If you can access it normally, add a target to the prometheus.yml file:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'server'
    static_configs:
    - targets: ['localhost:9100']
Under Status --> Targets in the navigation bar, you can see each scrape target and whether it is up.
Install Grafana:
cby@cby-Inspiron-7577:~$ sudo apt-get install -y adduser libfontconfig1
cby@cby-Inspiron-7577:~$ wget https://dl.grafana.com/oss/release/grafana_7.2.0_amd64.deb
cby@cby-Inspiron-7577:~$ sudo dpkg -i grafana_7.2.0_amd64.deb
Selecting previously unselected package grafana.
(Reading database ... 211277 files and directories currently installed.)
Preparing to unpack grafana_7.2.0_amd64.deb ...
Unpacking grafana (7.2.0) ...
Setting up grafana (7.2.0) ...
Adding system user `grafana' (UID 130) ...
Adding new user `grafana' (UID 130) to group `grafana' ...
Not creating home directory `/usr/share/grafana'.
### NOT starting on installation, please execute the following statements to configure grafana to start automatically using systemd
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server
### You can start grafana-server by executing
sudo /bin/systemctl start grafana-server
Processing triggers for systemd (245.4-4ubuntu3.2) ...
The default port is 3000; access it via IP plus port. The default username and password are both admin, and you will see the home page after logging in. Add Prometheus as a monitoring data source in the settings.
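Instead of clicking through the UI, the Prometheus data source can also be declared in a provisioning file. A sketch assuming the default ports and Grafana's standard provisioning directory (path shown in the comment is an assumption about your layout):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yml (assumed path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```

Restart grafana-server after adding the file so the data source is picked up.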
After adding the data source, import a monitoring panel, or industrious people can configure a panel themselves; you can find a panel you like on the official dashboards page.
The address is: https://grafana.com/dashboards
After downloading a panel's JSON, you can import it.
After importing, you can see the flashy dashboard.