ELK 7.x Configuration Manual

Compiled, organized, and written by snow chuai --- 2020/2/3
Last updated --- 2020/12/19


1. Install the Java Environment
1) Install Java 11
[root@node1 ~]# yum install java-11-openjdk java-11-openjdk-devel -y
2) Set up the Java 11 shell environment
[root@node1 ~]# cat > /etc/profile.d/java11.sh << EOF
export JAVA_HOME=$(dirname $(dirname $(readlink $(readlink $(which javac)))))
export ES_JAVA_HOME=$(dirname $(dirname $(readlink $(readlink $(which javac)))))
export PATH=\$PATH:\$ES_JAVA_HOME/bin
EOF
[root@node1 ~]# source /etc/profile.d/java11.sh
3) Confirm the Java version
[root@node1 ~]# java --version
openjdk 11.0.6 2020-01-14 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.6+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.6+10-LTS, mixed mode, sharing)
4) If the Java version is wrong, switch it with:
[root@node1 ~]# alternatives --config java
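The nested readlink calls in java11.sh above can look opaque. A minimal sketch of what they resolve, built against throwaway symlinks so it runs anywhere (all paths below are made up for the demo):

```shell
#!/bin/sh
# `which javac` points at a PATH symlink, which points at the alternatives
# symlink, which points at the real JDK binary; JAVA_HOME is then two
# directory levels above that real binary.
tmp=$(mktemp -d)
mkdir -p "$tmp/jdk-11/bin" "$tmp/alternatives" "$tmp/bin"
touch "$tmp/jdk-11/bin/javac"
ln -s "$tmp/jdk-11/bin/javac" "$tmp/alternatives/javac"   # alternatives-style link
ln -s "$tmp/alternatives/javac" "$tmp/bin/javac"          # PATH-style link
java_home=$(dirname "$(dirname "$(readlink "$(readlink "$tmp/bin/javac")")")")
echo "$java_home"    # resolves to $tmp/jdk-11
```

On a real host the same chain starts from `$(which javac)` instead of the fabricated `$tmp/bin/javac`.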
2. Install and Run Elasticsearch
1) Set up the ES repository
[root@node1 ~]# cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
2) Install ES
[root@node1 ~]# yum install elasticsearch -y
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
# Line 23: uncomment and set the node name
node.name: node1.1000cc.net
# Line 55: uncomment and set the IP address ES listens on
network.host: 192.168.10.11
# Line 70: uncomment and set the local host's IP address or FQDN
discovery.seed_hosts: ["node1.1000cc.net"]
# Line 72: set the nodes eligible to form the initial cluster master.
# (If unset, Elasticsearch is only reachable via 127.0.0.1)
cluster.initial_master_nodes: ["node1.1000cc.net"]
# Append the following at the very end of the file
# Allow cross-origin HTTP access from all hosts
http.cors.enabled: true
http.cors.allow-origin: "*"
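`"*"` permits any origin, which is fine for this lab but broad for anything exposed. Elasticsearch also accepts a regular expression delimited by slashes for `http.cors.allow-origin`; a hedged sketch restricting origins (the domain pattern below is illustrative):

```yaml
http.cors.enabled: true
# Hypothetical: only allow browser origins under 1000cc.net
http.cors.allow-origin: "/https?:\/\/.*\.1000cc\.net(:[0-9]+)?/"
```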
[root@node1 ~]# systemctl enable --now elasticsearch
[root@node1 ~]# netstat -lnatp | grep 9200
tcp6       0      0 192.168.10.11:9200      :::*      LISTEN      1654/java
3) Confirm that ES is working
[root@node1 ~]# curl http://192.168.10.11:9200
{
  "name" : "node1.1000cc.net",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Da4mmBLaSjOrrNO3QnWChA",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
3. Basic ES Usage
1) List the current ES indices
[root@node1 ~]# curl http://192.168.10.11:9200/_aliases?pretty
{ }
2) Create an index
[root@node1 ~]# curl -X PUT "http://192.168.10.11:9200/snow-index"
{"acknowledged":true,"shards_acknowledged":true,"index":"snow-index"}
3) Verify
[root@node1 ~]# curl http://192.168.10.11:9200/_aliases?pretty
{
  "snow-index" : {
    "aliases" : { }
  }
}
[root@node1 ~]# curl http://192.168.10.11:9200/snow-index/_settings?pretty
{
  "snow-index" : {
    "settings" : {
      "index" : {
        "creation_date" : "1580740296271",
        "number_of_shards" : "1",
        "number_of_replicas" : "1",
        "uuid" : "kUMjzwJSQ2isMXX0o1pQOQ",
        "version" : {
          "created" : "7050299"
        },
        "provided_name" : "snow-index"
      }
    }
  }
}
4) Define a mapping and insert test data
[root@node1 ~]# curl -H "Content-Type: application/json" -X PUT "http://192.168.10.11:9200/snow-index/doc01/1" -d '{ "subject" : "Test Post No.1", "description" : "This is the initial post", "content" : "This is the test message for using Elasticsearch." }'
{"_index":"snow-index","_type":"doc01","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}
5) Verify
[root@node1 ~]# curl "http://192.168.10.11:9200/snow-index/_mapping/?pretty"
{
  "snow-index" : {
    "mappings" : {
      "properties" : {
        "content" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "description" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "subject" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        }
      }
    }
  }
}
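The mapping above was generated dynamically when the first document arrived. To pin the mapping down up front instead, the index could be created with an explicit mapping body (a sketch using this lab's field names; sent with `-d` on the index-creation PUT):

```json
{
  "mappings": {
    "properties": {
      "subject":     { "type": "text" },
      "description": { "type": "text" },
      "content":     { "type": "text" }
    }
  }
}
```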
6) Search the data (match documents whose [description] field contains a word)
[root@node1 ~]# curl "http://192.168.10.11:9200/snow-index/doc01/_search?q=description:initial&pretty=true"
{
  "took" : 146,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 0.2876821,
    "hits" : [
      {
        "_index" : "snow-index",
        "_type" : "doc01",
        "_id" : "1",
        "_score" : 0.2876821,
        "_source" : {
          "subject" : "Test Post No.1",
          "description" : "This is the initial post",
          "content" : "This is the test message for using Elasticsearch."
        }
      }
    ]
  }
}
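The URI search above can also be written as a request-body search. A sketch of the equivalent query DSL body (standard ES match-query syntax), sent with `-d` to `/snow-index/_search`:

```json
{
  "query": {
    "match": { "description": "initial" }
  }
}
```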
4. Build an ES Cluster
1) Install ES on the other nodes (3 nodes in total in this lab)
*** See Sections 1-2 ***
2) Modify the configuration file on all nodes
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
# Line 17: uncomment and name the cluster
cluster.name: es-cluster
# Line 23: uncomment and set the node name; the ${HOSTNAME} variable may be used
node.name: ${HOSTNAME}
# Line 55: set the IP address to listen on
network.host: 0.0.0.0
# Line 68: enable automatic discovery of the cluster nodes
discovery.seed_hosts: ["node1.1000cc.net", "node2.1000cc.net", "node3.1000cc.net"]
# Line 72: add all nodes of the cluster
cluster.initial_master_nodes:
  - node1.1000cc.net
  - node2.1000cc.net
  - node3.1000cc.net
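Gathered in one place, the lines changed on every node of this lab's cluster look roughly like this (hostnames are this lab's three nodes):

```yaml
cluster.name: es-cluster
node.name: ${HOSTNAME}
network.host: 0.0.0.0
discovery.seed_hosts: ["node1.1000cc.net", "node2.1000cc.net", "node3.1000cc.net"]
cluster.initial_master_nodes:
  - node1.1000cc.net
  - node2.1000cc.net
  - node3.1000cc.net
```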
3) Distribute the configuration file to all nodes
[root@node1 ~]# pscp.pssh -h host-list.txt /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/
4) Enable and start ES on all nodes
[root@node1 ~]# pssh -h host-list.txt -i 'systemctl enable --now elasticsearch'
5) Configure the firewall
[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --add-port={9200/tcp,9300/tcp} --permanent'
[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --reload'
6) Verify
[root@node1 ~]# curl http://192.168.10.11:9200/_cluster/health?pretty
{
  "cluster_name" : "es-cluster",          # cluster name
  "status" : "green",                     # cluster status
  "timed_out" : false,
  "number_of_nodes" : 3,                  # number of nodes
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
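For a quick scripted check of the health output, the "status" field can be pulled out even without jq. A minimal sketch (the JSON here is a canned sample, not live cluster output):

```shell
#!/bin/sh
# Extract the "status" field from a cluster-health response with sed.
# On a real node the variable would come from:
#   health=$(curl -s http://192.168.10.11:9200/_cluster/health)
health='{ "cluster_name" : "es-cluster", "status" : "green", "number_of_nodes" : 3 }'
status=$(printf '%s' "$health" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"    # prints: cluster status: green
```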
5. Deploy Kibana
1) Set up the ES repository
2) Install Kibana
[root@node1 ~]# yum install kibana -y
3) Configure and start Kibana
[root@node1 ~]# vim /etc/kibana/kibana.yml
# Line 7: set the IP address/FQDN Kibana listens on
server.host: "node1.1000cc.net"
# Line 25: set the Kibana server name
server.name: "node1.1000cc.net"
# Line 28: set the Elasticsearch addresses and ports
# (With a cluster, connecting to a single ES node also works; the cluster keeps the data in sync)
elasticsearch.hosts: ["http://node1.1000cc.net:9200", "http://node2.1000cc.net:9200", "http://node3.1000cc.net:9200"]
# Append at the very end of the file to display the UI in Chinese
i18n.locale: zh-CN
[root@node1 ~]# systemctl enable --now kibana
4) Configure the firewall
[root@node1 ~]# firewall-cmd --add-port=5601/tcp --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
5) Open the Kibana web UI
# Be patient; the first start can take a few minutes
[Browser] ===> [http://node1.1000cc.net:5601]
6. Deploy Logstash
1) Set up the ES repository
2) Install Logstash
[root@node1 ~]# yum install logstash -y
3) Create a collection config and run Logstash
# Pull sshd failure lines from [/var/log/secure] and ship them to an ES index
[root@node1 ~]# vim /etc/logstash/conf.d/sshd.conf
input {
  file {
    type => "secure_log"
    path => "/var/log/secure"
  }
}
filter {
  grok {
    add_tag => [ "sshd_fail" ]
    match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.10.11:9200"]
    index => "sshd_fail-%{+YYYY.MM}"
  }
}
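The grok pattern above is essentially a regular expression over sshd failure lines. A minimal sketch of the same extraction with plain grep, so the idea can be tried without a live Logstash (the log line below is fabricated in the format /var/log/secure uses):

```shell
#!/bin/sh
# Pull the client IP out of an sshd failure line, mirroring what the
# %{IP:sshd_client_ip} capture in the grok pattern does.
line='Failed password for invalid user admin from 203.0.113.7 port 51122 ssh2'
ip=$(printf '%s\n' "$line" | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' | awk '{print $2}')
echo "client ip: $ip"    # prints: client ip: 203.0.113.7
```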

[root@node1 ~]# chgrp logstash /var/log/secure
[root@node1 ~]# chmod 640 /var/log/secure
[root@node1 ~]# systemctl enable --now logstash
4) Confirm that the index was created
# If no index appears, trigger a few failed ssh logins yourself
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_task_manager_1   LWtNm1bQRNalSdauxN0pIw   1   1          2            1       95kb         43.6kb
green  open   .apm-agent-configuration fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index               kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   sshd_fail-2020.02        gYJ8xCohRweIsQv1A7UTSA   1   1          9            0     97.2kb         55.6kb
green  open   .kibana_1                ydx07P4jTOi8C6jf27_Pbw   1   1          8            0     66.4kb         33.1kb
[root@node1 ~]# curl node1.1000cc.net:9200/sshd_fail-2020.02/_search?pretty
5) Email alerting
(1) List the installed Logstash plugins and install the email plugin
[root@node1 ~]# /usr/share/logstash/bin/logstash-plugin list
[root@node1 ~]# /usr/share/logstash/bin/logstash-plugin install logstash-output-email
(2) Modify the Logstash collection config
[root@node1 ~]# vim /etc/logstash/conf.d/sshd.conf
input {
  file {
    type => "secure_log"
    path => "/var/log/secure"
  }
}
filter {
  grok {
    add_tag => [ "sshd_fail" ]
    match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.10.11:9200"]
    index => "sshd_fail-%{+YYYY.MM}"
  }
  # Append the following
  email {
    to => "snow@1000y.cloud"
    from => "lisa@1000y.cloud"
    username => "snow@1000y.cloud"
    password => "123456"
    address => "mail.1000y.cloud"
    port => 25
    via => "smtp"
    use_tls => false
    subject => "sshd login failure detected"
    body => "Log: %{message}"
    authentication => "plain"
  }
}
(3) Test
1. Perform one failed ssh login
2. Check the specified mailbox for the alert message

6) View the results in Kibana




7. Deploy Metricbeat
1) Set up the ES repository
2) Install Metricbeat
[root@node1 ~]# yum install metricbeat -y
3) Configure and start Metricbeat
[root@node1 ~]# vim /etc/metricbeat/metricbeat.yml
# Line 67: set the Kibana info
host: "node1.1000cc.net:5601"
# Line 94: set the Elasticsearch info
hosts: ["node1.1000cc.net:9200"]
[root@node1 ~]# vim /etc/metricbeat/metricbeat.reference.yml
# Line 60: enable the monitoring items you need
- module: system
  metricsets:
    - cpu             # CPU usage
    - load            # CPU load averages
    - memory          # Memory usage
    - network         # Network IO
    - process         # Per process metrics
    - process_summary # Process summary
    - uptime          # System Uptime
    - socket_summary  # Socket summary
    - core            # Per CPU core usage
    - diskio          # Disk IO
    - filesystem      # File system usage for each mountpoint
    - fsstat          # File system summary metrics
    #- raid           # Raid
    - socket          # Sockets and connection info (linux only)
......
......
# Line 1907: set the Kibana info
host: "node1.1000cc.net:5601"
[root@node1 ~]# systemctl enable --now metricbeat
4) Confirm that the Metricbeat index was created
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_task_manager_1             LWtNm1bQRNalSdauxN0pIw   1   1          2            1       95kb         43.6kb
green  open   metricbeat-7.5.2-2020.02.03-000001 MVyUB_-NSMmhn-KWDOYSCQ   1   1         62            0      447kb        243.8kb
green  open   .apm-agent-configuration           fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index                         kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   sshd_fail-2020.02                  gYJ8xCohRweIsQv1A7UTSA   1   1          9            0     97.9kb           56kb
green  open   .kibana_1                          ydx07P4jTOi8C6jf27_Pbw   1   1          9            0     74.1kb         33.6kb
[root@node1 ~]# curl node1.1000cc.net:9200/metricbeat-7.5.2-2020.02.03-000001/_search?pretty
5) Load the Metricbeat dashboards
[root@node1 ~]# metricbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
6) View the metrics in Kibana
8. Deploy Packetbeat
1) Set up the ES repository
2) Install Packetbeat
[root@node1 ~]# yum install packetbeat -y
3) Configure and start Packetbeat
[root@node1 ~]# vim /etc/packetbeat/packetbeat.yml
# From line 29: define which network protocols to capture. Protocols you don't need can be turned off with "enabled: false"
# Line 149: set the Kibana info
host: "node1.1000cc.net:5601"
# Line 176: set the ES info
hosts: ["node1.1000cc.net:9200"]
[root@node1 ~]# vim /etc/packetbeat/packetbeat.reference.yml
# Line 1527: modify the [setup.kibana] section
host: "node1.1000cc.net:5601"
[root@node1 ~]# systemctl enable --now packetbeat
4) Confirm that the Packetbeat index and its data were created
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   packetbeat-7.5.2-2020.02.03-000001 RaTZh6kRSfiPY1mEHTvprQ   1   1         97            0    108.5kb         54.2kb
green  open   .kibana_task_manager_1             LWtNm1bQRNalSdauxN0pIw   1   1          2            1       95kb         43.6kb
green  open   metricbeat-7.5.2-2020.02.03-000001 MVyUB_-NSMmhn-KWDOYSCQ   1   1       1371            0      1.4mb        672.3kb
green  open   .apm-agent-configuration           fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index                         kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   sshd_fail-2020.02                  gYJ8xCohRweIsQv1A7UTSA   1   1         11            0    123.5kb         72.6kb
green  open   .kibana_1                          ydx07P4jTOi8C6jf27_Pbw   1   1        918           28      1.3mb        718.9kb
[root@node1 ~]# curl node1.1000cc.net:9200/packetbeat-7.5.2-2020.02.03-000001/_search?pretty
5) Load the Packetbeat dashboards
[root@node1 ~]# packetbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
6) View the Packetbeat data in Kibana
9. Deploy Filebeat
1) Set up the ES repository
2) Install Filebeat
[root@node1 ~]# yum install filebeat -y
3) Configure Filebeat
[root@node1 ~]# vim /etc/filebeat/filebeat.yml
# Line 24: enable the log input
enabled: true
# Lines 27-28: set the log paths to collect
paths:
  - /var/log/*.log
# Line 123: set the Kibana info
host: "http://node1.1000cc.net:5601"
# Line 150: set the Elasticsearch info
hosts: ["node1.1000cc.net:9200"]
[root@node1 ~]# vim /etc/filebeat/filebeat.reference.yml
# Enable the log modules you need (from line 15), e.g.:
......
......
- module: system
  # Syslog
  syslog:
    enabled: true
......
......
# Line 2143: in the [setup.kibana] section, set the Kibana info
host: "node1.1000cc.net:5601"
[root@node1 ~]# systemctl enable --now filebeat
4) Confirm that the Filebeat index and its data were created
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-7.5.2-2020.02.03-000001   9voVnGmRTtGWlTb2cSp2ug   1   1       1392            0    667.4kb          335kb
green  open   packetbeat-7.5.2-2020.02.03-000001 RaTZh6kRSfiPY1mEHTvprQ   1   1       8832            0        7mb          3.4mb
green  open   .kibana_task_manager_1             LWtNm1bQRNalSdauxN0pIw   1   1          2            1       95kb         43.6kb
green  open   metricbeat-7.5.2-2020.02.03-000001 MVyUB_-NSMmhn-KWDOYSCQ   1   1       3935            0      3.3mb          1.6mb
green  open   .apm-agent-configuration           fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index                         kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   sshd_fail-2020.02                  gYJ8xCohRweIsQv1A7UTSA   1   1         13            0    149.1kb         89.2kb
green  open   .kibana_1                          ydx07P4jTOi8C6jf27_Pbw   1   1       1326           30        2mb            1mb
[root@node1 ~]# curl node1.1000cc.net:9200/filebeat-7.5.2-2020.02.03-000001/_search?pretty
5) Load the Filebeat dashboards
[root@node1 ~]# filebeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
6) View the Filebeat data in Kibana
10. Deploy Auditbeat
1) Set up the ES repository
2) Install Auditbeat
[root@node1 ~]# yum install auditbeat -y
3) Configure and start Auditbeat
[root@node1 ~]# vim /etc/auditbeat/auditbeat.yml
# From line 13: enable the audit modules you need
# From line 117: set the host in the [setup.kibana] section
host: "node1.1000cc.net:5601"
# From line 142: set the hosts in the [Elasticsearch output] section
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["node1.1000cc.net:9200"]
[root@node1 ~]# vim /etc/auditbeat/auditbeat.reference.yml
......
......
# Line 34: configure the auditd module settings you need
- module: auditd
  resolve_ids: true
  failure_mode: silent
  backlog_limit: 8196
  rate_limit: 0
  include_raw_message: false
  include_warnings: false
  audit_rules: |
.....
.....
[root@node1 ~]# systemctl enable --now auditbeat
4) Confirm that the Auditbeat index exists
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-7.5.2-2020.02.03-000001   9voVnGmRTtGWlTb2cSp2ug   1   1       1393            0    692.1kb        347.5kb
green  open   packetbeat-7.5.2-2020.02.03-000001 RaTZh6kRSfiPY1mEHTvprQ   1   1      13412            0     10.7mb          5.3mb
green  open   .kibana_task_manager_1             LWtNm1bQRNalSdauxN0pIw   1   1          2            1       95kb         43.6kb
green  open   metricbeat-7.5.2-2020.02.03-000001 MVyUB_-NSMmhn-KWDOYSCQ   1   1       5266            0      3.9mb          1.9mb
green  open   .apm-agent-configuration           fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index                         kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   sshd_fail-2020.02                  gYJ8xCohRweIsQv1A7UTSA   1   1         15            0     98.6kb         29.7kb
green  open   auditbeat-7.5.2-2020.02.03-000001  IEs-mLhIRnuUdCywS3DW0g   1   1       1918            0      2.2mb            1mb
green  open   .kibana_1                          ydx07P4jTOi8C6jf27_Pbw   1   1       2376           45      2.6mb          1.2mb
[root@node1 ~]# curl node1.1000cc.net:9200/auditbeat-7.5.2-2020.02.03-000001/_search?pretty
5) Load the Auditbeat dashboards
[root@node1 ~]# auditbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
6) View the audit data in Kibana
11. Deploy Heartbeat
1) Set up the ES repository
2) Install Heartbeat
[root@node1 ~]# yum install heartbeat-elastic -y
3) Configure and start Heartbeat
[root@node1 ~]# vim /etc/heartbeat/heartbeat.yml
......
......
# From line 13: configure the monitor items. The default monitors http://localhost:9200
heartbeat.monitors:
- type: http
urls: ["http://node1.1000cc.net:9200"]
# Line 30: check availability every 10s
  schedule: '@every 10s'
# Line 33: set the connection timeout
  timeout: 16s
......
......
# Line 65: the host setting in the [setup.kibana] section
host: "node1.1000cc.net:5601"
# Line 92: the hosts setting in the [Elasticsearch output] section
hosts: ["node1.1000cc.net:9200"]
[root@node1 ~]# vim /etc/heartbeat/heartbeat.reference.yml
......
......
# From line 24: ICMP monitoring can be enabled
- type: icmp  # monitor type `icmp` (requires root) uses ICMP Echo Request to ping
              # configured hosts
# Monitor name used for job name and document type. name: icmp
# Enable/Disable monitor enabled: true
......
......
# From line 90: TCP monitoring can be configured
- type: tcp  # monitor type `tcp`. Connect via TCP and optionally verify endpoint
             # by sending/receiving a custom payload
# Monitor name used for job name and document type name: tcp
# Enable/Disable monitor enabled: true
...... ......
[root@node1 ~]# systemctl enable --now heartbeat-elastic
4) Check that the Heartbeat index was created
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-7.5.2-2020.02.03-000001   9voVnGmRTtGWlTb2cSp2ug   1   1       1394            0    716.1kb        359.5kb
green  open   heartbeat-7.5.2-2020.02.03-000001  N23ynDJ-Qo2V5j9LktvPzQ   1   1          4            0     90.3kb         45.1kb
green  open   packetbeat-7.5.2-2020.02.03-000001 RaTZh6kRSfiPY1mEHTvprQ   1   1      18396            0       14mb          6.9mb
green  open   .kibana_task_manager_1             LWtNm1bQRNalSdauxN0pIw   1   1          2            1       95kb         43.6kb
green  open   metricbeat-7.5.2-2020.02.03-000001 MVyUB_-NSMmhn-KWDOYSCQ   1   1       6746            0      6.3mb          3.6mb
green  open   .apm-agent-configuration           fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index                         kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   sshd_fail-2020.02                  gYJ8xCohRweIsQv1A7UTSA   1   1         17            0    116.6kb         38.8kb
green  open   .kibana_1                          ydx07P4jTOi8C6jf27_Pbw   1   1       2721           63      2.9mb          1.4mb
green  open   auditbeat-7.5.2-2020.02.03-000001  IEs-mLhIRnuUdCywS3DW0g   1   1       2919            0      3.8mb          1.7mb
[root@node1 ~]# curl node1.1000cc.net:9200/heartbeat-7.5.2-2020.02.03-000001/_search?pretty
[Browser] ==> node1.1000cc.net:5601 ==> [Observability] ==> [Uptime]
12. Enable X-Pack Monitoring
1) Configure X-Pack (on all nodes)
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
# Append the following at the end of the file
# Set the xpack license to the trial level
xpack.license.self_generated.type: trial
# Enable x-pack monitoring
xpack.monitoring.collection.enabled: true
# Disable the x-pack security features
xpack.security.enabled: false
[root@node1 ~]# pscp.pssh -h host-list.txt /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/
2) Enable X-Pack Monitoring (all nodes)
[root@node1 ~]# systemctl restart elasticsearch kibana logstash
[root@node2 ~]# systemctl restart elasticsearch
[root@node3 ~]# systemctl restart elasticsearch
3) Confirm that the X-Pack Monitoring indices exist
[root@node1 ~]# curl node1.1000cc.net:9200/_cat/indices?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-7.5.2-2020.02.03-000001   9voVnGmRTtGWlTb2cSp2ug   1   1       1394            0    716.1kb        356.6kb
green  open   heartbeat-7.5.2-2020.02.03-000001  N23ynDJ-Qo2V5j9LktvPzQ   1   1         91            0    289.1kb        201.4kb
green  open   packetbeat-7.5.2-2020.02.03-000001 RaTZh6kRSfiPY1mEHTvprQ   1   1      25633            0     25.4mb           12mb
green  open   .kibana_task_manager_1             LWtNm1bQRNalSdauxN0pIw   1   1          2            1     32.6kb         16.3kb
green  open   .monitoring-es-7-2020.02.03        kBPiHY6QTneNUZNWzYW3Ig   1   1         88            0    859.1kb        291.4kb
green  open   metricbeat-7.5.2-2020.02.03-000001 MVyUB_-NSMmhn-KWDOYSCQ   1   1       7926            0     11.7mb          5.4mb
green  open   .apm-agent-configuration           fZtyejmZQN2ogYjcFc83zQ   1   1          0            0       566b           283b
green  open   snow-index                         kUMjzwJSQ2isMXX0o1pQOQ   1   1          1            0     10.4kb          5.2kb
green  open   .monitoring-kibana-7-2020.02.03    Pdcu6Q5dTSKpByOxYx2F6w   1   1          2            0     77.3kb         16.9kb
green  open   sshd_fail-2020.02                  gYJ8xCohRweIsQv1A7UTSA   1   1         20            0    159.4kb         95.1kb
green  open   auditbeat-7.5.2-2020.02.03-000001  IEs-mLhIRnuUdCywS3DW0g   1   1       3954            0      8.7mb          3.6mb
green  open   .kibana_1                          ydx07P4jTOi8C6jf27_Pbw   1   1       2723            6      2.7mb          1.3mb
4) View the X-Pack Monitoring data in Kibana
13. Enable X-Pack Security for Kibana Login Authentication
1) Configure X-Pack (on all nodes)
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
......
......
# Comment out the last line added earlier
#xpack.security.enabled: false
# Append the following at the end of the file, then copy it to all cluster nodes
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
[root@node1 ~]# pscp.pssh -h host-list.txt /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/
2) Restart all nodes
[root@node1 ~]# systemctl restart elasticsearch
[root@node2 ~]# systemctl restart elasticsearch
[root@node3 ~]# systemctl restart elasticsearch
3) Set the passwords for the ELK built-in accounts
[root@node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N] y

Enter password for [elastic]:      # enter the password [123456 is used for every account in this walkthrough]
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
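Once security is enabled, every REST call needs credentials, e.g. `curl -u elastic:123456 http://node1.1000cc.net:9200/_cluster/health`. curl's `-u` flag simply adds an HTTP Basic `Authorization` header; a sketch of what gets sent (123456 is this walkthrough's demo password):

```shell
#!/bin/sh
# HTTP Basic auth is base64("user:password") in an Authorization header.
token=$(printf '%s' 'elastic:123456' | base64)
echo "Authorization: Basic $token"
# prints: Authorization: Basic ZWxhc3RpYzoxMjM0NTY=
```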
4) Verify ES account login
[Browser] ====> [http://$es-ip:9200]




5) Configure Kibana's authentication account
[root@node1 ~]# vim /etc/kibana/kibana.yml
......
......
# Append the following at the very end of the file
elasticsearch.username: "kibana"
elasticsearch.password: "123456"   # the kibana user's password
[root@node1 ~]# systemctl restart kibana
6) Verify Kibana account login
[Browser] ====> [http://$kibana-ip:5601]




7) Configure a Logstash authentication account so it can connect to ES
[root@node1 ~]# vim /etc/logstash/conf.d/sshd.conf
input {
  file {
    type => "secure_log"
    path => "/var/log/secure"
  }
}
filter {
  grok {
    add_tag => [ "sshd_fail" ]
    match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
  }
}
output {
  elasticsearch {
    # Add the Elasticsearch username and password
    user => "elastic"
    password => "123456"
    hosts => ["192.168.10.11:9200"]
    index => "sshd_fail-%{+YYYY.MM}"
  }
  email {
    to => "snow@1000y.cloud"
    from => "lisa@1000y.cloud"
    username => "snow@1000y.cloud"
    password => "123456"
    address => "mail.1000y.cloud"
    port => 25
    via => "smtp"
    use_tls => false
    subject => "sshd login failure detected"
    body => "Log: %{message}"
    authentication => "plain"
  }
}
[root@node1 ~]# systemctl restart logstash

 

 

If this document helped you, feel free to leave a tip. ^-^
