Official installation package download page: https://www.elastic.co/cn/downloads/beats/
The installation procedure is the same as in section 2.1:
```shell
# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.2.3-x86_64.rpm

# rpm -ivh filebeat-8.2.3-x86_64.rpm
warning: filebeat-8.2.3-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-8.2.3-1                 ################################# [100%]
```
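The NOKEY warning only means the Elastic signing key has not been imported yet; the package still installs. A minimal sketch of importing the key first so rpm can verify the signature (the URL is Elastic's published GPG key):

```shell
# Import Elastic's public signing key, then re-check the package signature
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
rpm -K filebeat-8.2.3-x86_64.rpm
```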
The Filebeat configuration file is located at /etc/filebeat/filebeat.yml:
```yaml
# cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.100.31:9200","192.168.100.32:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
```
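Before starting the service it is worth validating the edited file. A quick sketch using Filebeat's built-in test subcommands (paths as installed by the RPM above):

```shell
# Check the configuration file for syntax errors
filebeat test config -c /etc/filebeat/filebeat.yml
# Check that the configured output (Elasticsearch here) is reachable
filebeat test output -c /etc/filebeat/filebeat.yml
```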
After editing, use cat to review the effective (non-comment) settings:
```shell
# cat /etc/filebeat/filebeat.yml | grep -Ev "#|^$"
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["192.168.100.31:9200","192.168.100.32:9200"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
With this configuration, Filebeat ships data directly to Elasticsearch, which forms an EFK architecture. A quick way to confirm that events are arriving is sketched below.
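A minimal verification sketch, assuming the Elasticsearch hosts above are reachable; with Filebeat's default settings the indices (data streams in 8.x) are named after the Beat:

```shell
# Start Filebeat, then list Filebeat-created indices on one of the nodes
systemctl start filebeat.service
curl -s "http://192.168.100.31:9200/_cat/indices?v" | grep filebeat
```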
To route the logs through Logstash instead (Filebeat → Logstash → Elasticsearch), point Filebeat's output at Logstash:

```shell
# cat /etc/filebeat/filebeat.yml | grep -Ev "#|^$"
filebeat.inputs:
- type: filestream
  id: nginxlogtest
  enabled: true
  paths:
    - /opt/nginx_logs
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["192.168.100.33:5044"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
The Logstash pipeline configuration file:
```conf
# cat filebeat.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["192.168.100.31:9200","192.168.100.32:9200"]
    index => "filebeattologstash-%{+YYYY.MM.dd}"
  }
}
```

Then start Filebeat:

```shell
systemctl start filebeat.service
```
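Note that Logstash must already be running with this pipeline before Filebeat can deliver anything to port 5044. A sketch of validating and running it in the foreground, assuming the file was saved under /etc/logstash/conf.d/ (an assumed path; adjust it to wherever filebeat.conf actually lives):

```shell
# Validate the pipeline configuration and exit
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat.conf --config.test_and_exit
# Run the pipeline in the foreground for a first test
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat.conf
```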
In the older 7.x and 6.x releases, node.master was used to mark the master-eligible nodes of an Elasticsearch cluster, and node.data to mark the nodes that store data.
In 8.x, node.master has been deprecated and replaced by node.roles: [ data, master ]; these two roles are the defaults, and a node that leaves node.roles unset is master-eligible and stores data, as sketched below.
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
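A minimal sketch of the 8.x style role assignment in elasticsearch.yml; the commented variants show how dedicated roles would look:

```yaml
# Node is both master-eligible and a data node (also covered by the default when node.roles is unset)
node.roles: [ master, data ]

# A dedicated master-eligible node would use:
#node.roles: [ master ]

# A coordinating-only node carries no roles at all:
#node.roles: [ ]
```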
Cause of the error:
```yaml
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
```
A listen address of 0.0.0.0 means the node accepts connections on any IP address. If the server has several NICs and therefore several IP addresses, Elasticsearch will automatically pick one of them as its publish address. If that automatically chosen address does not match the one referenced in the cluster settings (for example in discovery.seed_hosts), the nodes may fail to discover each other. A fix is sketched below.
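A sketch of the usual fix: bind the node to one specific address in elasticsearch.yml instead of all interfaces. The concrete IP is illustrative; _site_ is one of Elasticsearch's special values and resolves to a site-local (private) address:

```yaml
# Bind and publish on a single, predictable interface
network.host: 192.168.100.31

# Or let Elasticsearch pick a site-local address automatically
#network.host: _site_
```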