(Easy to Learn) EFK + Alerting


    Today's goals: 1. build the EFK platform; 2. use the EFK platform to collect nginx logs; 3. add alerting to the EFK platform.

    Environment: CentOS 7 minimal. Specs: 2 CPU cores, 2 GB RAM.

    192.168.1.7  jdk, zk, kafka, filebeat, es
    192.168.1.8  jdk, zk, kafka, filebeat, logstash
    192.168.1.9  jdk, zk, kafka, filebeat, kibana

    1. Initialize the environment
    Time sync:
    yum -y install ntpdate
    ntpdate pool.ntp.org

    2. Disable the firewall and SELinux
    systemctl stop firewalld
    setenforce 0
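    Note that these two commands only last until the next reboot; an optional extra step (standard CentOS 7 commands, not part of the original notes) makes the changes permanent:

    systemctl disable firewalld
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config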

    3. Set the hostnames
    On 192.168.1.7: hostnamectl set-hostname kafka01
    On 192.168.1.8: hostnamectl set-hostname kafka02
    On 192.168.1.9: hostnamectl set-hostname kafka03

    4. Edit /etc/hosts (on all three nodes)
    192.168.1.7 kafka01
    192.168.1.8 kafka02
    192.168.1.9 kafka03

    5. Install the JDK
    yum -y install jdk-8u131-linux-x64_.rpm
    java -version

    6. Install ZooKeeper
    tar zxvf zookeeper-3.4.14.tar.gz
    mv zookeeper-3.4.14 /usr/local/zookeeper

    Edit zoo.cfg:
    cd /usr/local/zookeeper/conf
    mv zoo_sample.cfg zoo.cfg

    vim zoo.cfg
    # 2888: cluster communication port; 3888: leader election port

    server.1=192.168.1.7:2888:3888
    server.2=192.168.1.8:2888:3888
    server.3=192.168.1.9:2888:3888

    Create the data directory:
    mkdir -p /tmp/zookeeper

    Create the myid file:
    On kafka01: echo "1" > /tmp/zookeeper/myid
    On kafka02: echo "2" > /tmp/zookeeper/myid
    On kafka03: echo "3" > /tmp/zookeeper/myid
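    Putting the pieces together, a minimal zoo.cfg on all three nodes might look like the following (tickTime/initLimit/syncLimit are the zoo_sample.cfg defaults; dataDir, clientPort and the server list come from the steps above):

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/tmp/zookeeper
    clientPort=2181
    server.1=192.168.1.7:2888:3888
    server.2=192.168.1.8:2888:3888
    server.3=192.168.1.9:2888:3888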

    7. Start the ZooKeeper service
    /usr/local/zookeeper/bin/zkServer.sh start
    Check the service status:
    /usr/local/zookeeper/bin/zkServer.sh status
    Across the cluster you should see 1 leader and 2 followers.

    8. Install Kafka (the message broker)
    tar zxvf kafka_2.11-2.2.0.tgz
    mv kafka_2.11-2.2.0 /usr/local/kafka
    Edit Kafka's main configuration file:
    cd /usr/local/kafka/config
    vim server.properties
    [root@kafka01 config]# cat server.properties | grep -v "^#" | sed '/^$/d' | egrep "broker|advertised|zookeeper"
    broker.id=0
    advertised.listeners=PLAINTEXT://kafka01:9092
    zookeeper.connect=192.168.1.7:2181,192.168.1.8:2181,192.168.1.9:2181

    [root@kafka02 src]# cat /usr/local/kafka/config/server.properties | grep -v "^#" | sed '/^$/d' | egrep "broker|advertised|zookeeper"
    broker.id=1
    advertised.listeners=PLAINTEXT://kafka02:9092
    zookeeper.connect=192.168.1.7:2181,192.168.1.8:2181,192.168.1.9:2181

    [root@kafka03 src]# cat /usr/local/kafka/config/server.properties | grep -v "^#" | sed '/^$/d' | egrep "broker|advertised|zookeeper"
    broker.id=2
    advertised.listeners=PLAINTEXT://kafka03:9092
    zookeeper.connect=192.168.1.7:2181,192.168.1.8:2181,192.168.1.9:2181

    9. Start the Kafka service
    /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
    Verify the port:
    [root@kafka01 config]# netstat -lptnu | grep 9092
    tcp6       0      0 :::9092                 :::*                    LISTEN      15980/java
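    As an extra sanity check (not in the original notes), you can confirm that all three brokers registered themselves in ZooKeeper:

    /usr/local/zookeeper/bin/zkCli.sh -server 192.168.1.7:2181
    # inside the zkCli shell:
    ls /brokers/ids
    # should list [0, 1, 2] once all three brokers are up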

    Create a topic
    Create a topic named wg007 with 3 partitions and a replication factor of 2:
    /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.1.7:2181 --replication-factor 2 --partitions 3 --topic wg007
    List the existing topics:
    /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.1.7:2181

    Simulate a producer:
    /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.1.8:9092 --topic wg007
    Simulate a consumer:
    /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.9:9092 --topic wg007 --from-beginning
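    To see how the partitions and replicas of wg007 were distributed across the brokers, the same tool's describe sub-command can be used:

    /usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.1.7:2181 --topic wg007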

    10. Configure the Filebeat yum repository, then install Filebeat
    yum -y install filebeat
    Edit the main configuration file (back up the original first):
    mv filebeat.yml filebeat.yml.bak
    [root@kafka01 filebeat]# cat filebeat.yml
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/messages
    output.kafka:
      enabled: true
      hosts: ["192.168.1.7:9092","192.168.1.8:9092","192.168.1.9:9092"]
      topic: messages

    [root@kafka02 filebeat]# cat filebeat.yml
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/secure
    output.kafka:
      enabled: true
      hosts: ["192.168.1.7:9092","192.168.1.8:9092","192.168.1.9:9092"]
      topic: secure

    [root@kafka03 filebeat]# cat filebeat.yml
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
    output.kafka:
      enabled: true
      hosts: ["192.168.1.7:9092","192.168.1.8:9092","192.168.1.9:9092"]
      topic: nginx
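    The notes do not show it explicitly, but after editing filebeat.yml each node's Filebeat service has to be (re)started; Filebeat's built-in config check is also handy:

    filebeat test config -c /etc/filebeat/filebeat.yml
    systemctl enable filebeat
    systemctl restart filebeat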

    11. Install Elasticsearch
    rpm -ivh elasticsearch-6.6.2.rpm
    Configure ES:
    [root@kafka01 ELK]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#" | sed '/^$/d'
    cluster.name: wg007
    node.name: node-1
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    network.host: 192.168.1.7
    http.port: 9200

    12. Start ES
    systemctl enable elasticsearch
    systemctl start elasticsearch
    Verify:
    [root@kafka01 ELK]# netstat -lptnu | grep 9200
    tcp6       0      0 192.168.1.7:9200        :::*                    LISTEN      17994/java
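    A quick way to confirm ES is actually answering requests (standard REST endpoint, not shown in the original notes):

    curl 'http://192.168.1.7:9200/_cluster/health?pretty'
    # "status" should be green or yellow (yellow is normal for a single-node cluster)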

    13. Install Logstash
    rpm -ivh logstash-6.6.0.rpm
    Edit the pipelines file:
    [root@kafka02 logstash]# vim pipelines.yml

    - pipeline.id: messages
      path.config: "/etc/logstash/conf.d/messages.conf"
    - pipeline.id: secure
      path.config: "/etc/logstash/conf.d/secure.conf"
    - pipeline.id: nginx
      path.config: "/etc/logstash/conf.d/nginx.conf"

    [root@kafka02 logstash]# vim /etc/logstash/conf.d/messages.conf

    input {
      kafka {
        bootstrap_servers => ["192.168.1.7:9092,192.168.1.8:9092,192.168.1.9:9092"]
        group_id => "logstash"
        topics => "messages"
        consumer_threads => 5
      }
    }

    output {
      elasticsearch {
        hosts => "192.168.1.7:9200"
        index => "messages-%{+YYYY.MM.dd}"
      }
    }

    [root@kafka02 conf.d]# cat nginx.conf
    input {
      kafka {
        bootstrap_servers => ["192.168.1.7:9092,192.168.1.8:9092,192.168.1.9:9092"]
        group_id => "logstash"
        topics => "nginx"
        consumer_threads => 5
      }
    }

    output {
      elasticsearch {
        hosts => "192.168.1.7:9200"
        index => "nginx-%{+YYYY.MM.dd}"
      }
    }

    [root@kafka02 conf.d]# cat secure.conf
    input {
      kafka {
        bootstrap_servers => ["192.168.1.7:9092,192.168.1.8:9092,192.168.1.9:9092"]
        group_id => "logstash"
        topics => "secure"
        consumer_threads => 5
      }
    }

    output {
      elasticsearch {
        hosts => "192.168.1.7:9200"
        index => "secure-%{+YYYY.MM.dd}"
      }
    }
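    The original notes do not show the start command, but with the pipelines in place Logstash can be started and the resulting indices checked in ES (standard commands):

    systemctl enable logstash
    systemctl start logstash
    # after a short while the new indices should appear:
    curl 'http://192.168.1.7:9200/_cat/indices?v'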

    14. Install Kibana
    rpm -ivh kibana-6.6.2-x86_64.rpm
    Configure Kibana:
    vim /etc/kibana/kibana.yml
    [root@kafka03 ELK]# cat /etc/kibana/kibana.yml | grep -v "^#" | sed '/^$/d'
    server.port: 5601
    server.host: "192.168.1.9"
    elasticsearch.hosts: ["http://192.168.1.7:9200"]

    15. Start Kibana
    systemctl enable kibana
    systemctl start kibana
    Verify:
    [root@kafka03 ELK]# netstat -lptnu | grep 5601
    tcp        0      0 192.168.1.9:5601        0.0.0.0:*               LISTEN      12097/node
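    Optionally, Kibana's status API gives a quicker health check than netstat; once it answers, open http://192.168.1.9:5601 in a browser and create index patterns (messages-*, secure-*, nginx-*) under Management so the data shows up in Discover:

    curl http://192.168.1.9:5601/api/status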

    Note! If no index gets created, do the following:
    1. chmod 777 -R /var/log
    2. echo "test1" >> /var/log/secure   (this step just generates a fresh log line!)

    How the data flows: think of a piece of pork going from the township-level inspection station, to the city-level inspection station, and finally into the consumer's hands, picking up more and more stamps along the way. In the same way a log line "test" goes: 1. filebeat (collects the logs for us) --> 2. kafka <-- 3. logstash (pulls the data out of kafka) --> elasticsearch (storage).

    16. Install the ElastAlert alerting plugin
    Upload the tarballs, then install the build dependencies with yum:
    yum -y install openssl openssl-devel epel-release gcc gcc-c++

    tar xf Python-3.6.2.tgz
    cd Python-3.6.2
    ./configure --prefix=/usr/local/python --with-openssl
    make && make install
    rm -rf /usr/bin/python
    Set up symlinks (the binaries land under the --prefix chosen above):
    ln -s /usr/local/python/bin/python3.6 /usr/bin/python
    ln -s /usr/local/python/bin/pip3.6 /usr/bin/pip

    Unpack the alerting plugin:
    tar zxvf v0.2.1_elasticalert.tar.gz
    mv elastalert-0.2.1/ /usr/local/elastalert

    cd /usr/local/elastalert
    pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
    python setup.py install

    Set up symlinks:
    ln -s /usr/local/python/bin/elastalert* /usr/bin/
    Create the ElastAlert writeback index:
    elastalert-create-index

    Set up the configuration file:
    mv config.yaml.example config.yaml
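    The edited config.yaml is not shown in the notes; a minimal sketch, assuming the ES node at 192.168.1.7 and the default example_rules folder (adjust as needed):

    rules_folder: example_rules
    run_every:
      minutes: 1
    buffer_time:
      minutes: 15
    es_host: 192.168.1.7
    es_port: 9200
    writeback_index: elastalert_status
    alert_time_limit:
      days: 2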

    Set up an alert rule:
    cp example_frequency.yaml nginx_frequency.yaml
    Start the service:
    elastalert --config /usr/local/elastalert/config.yaml --rule /usr/local/elastalert/example_rules/nginx_frequency.yaml --verbose
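    The contents of nginx_frequency.yaml are not shown in the notes; a minimal frequency-rule sketch against the nginx-* indices created above, where the e-mail address (and any SMTP settings) are placeholders you would replace with your own:

    name: nginx_frequency
    type: frequency
    index: nginx-*
    num_events: 50          # fire if 50 or more matching events ...
    timeframe:
      minutes: 1            # ... arrive within one minute
    filter:
    - query:
        query_string:
          query: "*"        # matches every document; narrow this to e.g. 5xx status codes
    alert:
    - "email"
    email:
    - "ops@example.com"     # placeholder address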
