Posted by 康靖 on 2023-06-21 00:54:32

ELK Log Collection Notes

Logstash runs on each server whose logs need to be collected and ships the log data to Elasticsearch (ES).
The data in ES is then viewed through the Kibana UI.

Installing Elasticsearch and Kibana
References: "Install Elasticsearch with RPM", "Configuring Elasticsearch", and "Install Kibana with RPM" in the official Elastic guides.

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat << EOF >/etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF

yum install -y --enablerepo=elasticsearch elasticsearch

# After installation finishes, the generated password for the elastic user is printed in the terminal
# To reset the password: /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# Config file: /etc/elasticsearch/elasticsearch.yml
#   network.host: 0.0.0.0  -- allow access from other servers
#   http.port -- change to an externally reachable port

# Start Elasticsearch
systemctl start elasticsearch.service

# Test access: curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:es_port
# To access ES from another server, move the certificate over first: open
# /etc/elasticsearch/certs/http_ca.crt, copy its contents, and save them as a
# certificate file on the client
# Test from the client: curl --cacert path_to_ca.crt -u elastic https://es_host:es_port
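Before pointing curl at the copied CA file, it is worth letting openssl parse it, since a truncated copy-paste is the usual failure mode. The sketch below uses placeholder paths and generates a throwaway self-signed certificate to stand in for http_ca.crt, so the check can be demonstrated without a live cluster:

```shell
# Sketch with placeholder paths: /tmp/demo_ca.crt stands in for the copied
# /etc/elasticsearch/certs/http_ca.crt. A throwaway self-signed certificate
# is generated here purely so the check has something to parse.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo_ca.crt -days 1 -subj "/CN=demo-es" 2>/dev/null

# A clean PEM certificate prints its subject; a mangled copy makes
# openssl exit non-zero instead.
openssl x509 -in /tmp/demo_ca.crt -noout -subject
```

If the subject prints cleanly, the file survived the copy and can be passed to curl via --cacert.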
# install kibana
cat << EOF >/etc/yum.repos.d/kibana.repo
[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

# Kibana and Elasticsearch can be installed on the same server
yum install -y kibana
# In /etc/kibana/kibana.yml, set server.port to an externally accessible port and server.host to 0.0.0.0 to allow access from other servers; the elasticsearch section can be left unset for now
# To run Kibana as root: /usr/share/kibana/bin/kibana --allow-root
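The kibana.yml changes described above amount to a fragment like the following (the port value is a placeholder; adjust to whatever should be externally reachable):

```yaml
# /etc/kibana/kibana.yml -- illustrative fragment; 5601 is a placeholder port
server.port: 5601
server.host: "0.0.0.0"
```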
systemctl start kibana.service
# Opening the Kibana page for the first time asks for an enrollment token for elastic; generate one with:
# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
# Logging in also requires the ES username and password
# After a successful login, the ES connection settings are appended automatically to the bottom of /etc/kibana/kibana.yml

Install Logstash on each server whose logs need to be collected:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat <<EOF > /etc/yum.repos.d/logstash.repo
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

yum install -y logstash
ln -s /usr/share/logstash/bin/logstash /usr/bin/logstash

# install filebeat
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat <<EOF > /etc/yum.repos.d/filebeat.repo
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
yum install -y filebeat
ln -s /usr/share/filebeat/bin/filebeat /usr/bin/filebeat

# filebeat -> logstash -> ES
# filebeat reads file contents from the configured directory and sends them to logstash; logstash forwards the data to ES

mkdir -m 777 -p /data/logstash
cat <<EOF >/data/logstash/filebeat.conf
filebeat.inputs:
- type: log
  paths:
    - /your_log_path/*.log
output.logstash:
  hosts: ["127.0.0.1:5044"]
EOF
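Because the heredoc above is unquoted, any `$` in the body would be expanded by the shell before the file was written; this config contains none, but re-reading the file is a cheap way to confirm it landed intact. A stand-in demonstration using a temp directory instead of /data/logstash:

```shell
# Stand-in demonstration: write the same filebeat config to a temp dir and
# confirm the heredoc body reached the file unmodified.
DEMO_DIR=$(mktemp -d)
cat <<EOF >"$DEMO_DIR/filebeat.conf"
filebeat.inputs:
- type: log
  paths:
    - /your_log_path/*.log
output.logstash:
  hosts: ["127.0.0.1:5044"]
EOF

# The logstash endpoint line must have survived verbatim.
grep 'hosts: \["127.0.0.1:5044"\]' "$DEMO_DIR/filebeat.conf"
```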


cat <<EOF >/data/logstash/logstash.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
    client_inactivity_timeout => 600
  }
}

filter {
  mutate {
    remove_field => ["agent", "ecs", "event", "tags", "@version", "input", "log"]
  }
}

output {
  elasticsearch {
    hosts => ["https://es_ip_address:es_port"]
    index => "log-from-logstash"
    user => "es_user_name"
    password => "es_password"
    ssl_certificate_authorities => "path_to_es_http_ca.crt"
  }
}
EOF
# es_http_ca.crt has the same contents as /etc/elasticsearch/certs/http_ca.crt on the ES server
# The filter block strips some unnecessary fields
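If the logs should go into daily indices rather than one ever-growing index, the elasticsearch output supports a date pattern in the index name. An illustrative fragment, reusing the placeholder hosts/credentials from above:

```
output {
  elasticsearch {
    hosts => ["https://es_ip_address:es_port"]
    # One index per day keeps retention and deletion simple
    index => "log-from-logstash-%{+YYYY.MM.dd}"
    user => "es_user_name"
    password => "es_password"
    ssl_certificate_authorities => "path_to_es_http_ca.crt"
  }
}
```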

# Start the pipeline
logstash -f /data/logstash/logstash.conf >/dev/null 2>&1 &
filebeat -e -c /data/logstash/filebeat.conf >/dev/null 2>&1 &

Once both are running, copy some files into the log path configured in filebeat.conf for testing (or use any log files already there); the configured index is created automatically and shows up in Kibana:
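Instead of copying whole files, appending a single line to a watched file is enough to generate a test event. The directory below is a placeholder; point it at whatever paths filebeat.inputs actually watches:

```shell
# Placeholder path standing in for the directory configured under
# filebeat.inputs paths. Appending one line is enough for filebeat to pick
# the file up and ship the event through logstash to ES.
LOG_DIR=/tmp/demo-logs
mkdir -p "$LOG_DIR"
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) pipeline smoke test" >> "$LOG_DIR/test.log"
tail -n 1 "$LOG_DIR/test.log"
```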
Create a Data View to browse the documents in the index:

In Discover, select the configured Data View to view the data:

 
 
Source: https://www.cnblogs.com/huizit1/p/17494824.html