When we set up a Docker cluster, how do we collect its logs? ELK provides a complete solution. This article mainly introduces how to use Docker to build an ELK stack that collects logs from a Docker cluster.

A brief introduction to ELK

ELK consists of three open source tools: Elasticsearch, Logstash, and Kibana.


Elasticsearch is an open source distributed search engine. Its main features are: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, automatic search load balancing, and more.
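As a quick taste of that RESTful interface, here is a minimal sketch (it assumes an Elasticsearch instance reachable on localhost:9200, as in the setup below; the "logs" index name is just an example):

# Ask the cluster for its health status
curl http://localhost:9200/_cluster/health?pretty

# Index a sample document into a hypothetical "logs" index
curl -XPOST http://localhost:9200/logs/doc -d '{"message": "hello ELK"}'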

Logstash is a fully open source tool that can collect, filter, and store your logs for later use.

Kibana is also an open source, free tool. It provides a friendly web interface for analyzing the logs that Logstash and Elasticsearch collect, helping you summarize, analyze, and search important log data.

Building the ELK platform with Docker

First, let's edit the Logstash configuration file, logstash.conf:
input {
  udp {
    port => 5000
    type => json
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    # send the Logstash output to Elasticsearch; replace with your own host
    hosts => "elasticsearch:9200"
  }
}
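To sanity-check this input, you can send a JSON line over UDP (a rough sketch; it assumes you test from the Docker host, where the Compose file below maps container port 5000 to host port 5001, and the exact nc flags may vary with your netcat variant):

# Send a test JSON datagram; the json filter should turn its keys into fields
echo '{"app":"demo","msg":"hello"}' | nc -u -w1 localhost 5001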
Then we need to tweak how Kibana starts.

Write a startup script, entrypoint.sh, that waits for Elasticsearch to come up before starting Kibana:
#!/usr/bin/env bash
# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Stalling for Elasticsearch"
while true; do
  nc -q 1 elasticsearch 9200 2>/dev/null && break
done
echo "Starting Kibana"
exec kibana
Modify the Dockerfile to build a custom Kibana image:
FROM kibana:latest
RUN apt-get update && apt-get install -y netcat
COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh
RUN kibana plugin --install elastic/sense
CMD ["/tmp/entrypoint.sh"]
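If you want to try the image on its own before wiring it into Compose, a build command along these lines should work (the my-kibana tag is just an example; the Compose file below will build the image for you anyway):

docker build -t my-kibana ./kibana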
You can also modify the Kibana configuration file, kibana.yml, to select the plug-ins you need:


# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`.
# If you set it to false, then the host you use to connect to *this* Kibana instance
# will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, this is the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup.
# Your Kibana users will still need to authenticate with Elasticsearch (which is
# proxied through the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index
OK, now let's write a docker-compose.yml to make building everything easier.

Ports and similar settings can be adjusted to your needs, and the configuration file paths should match your own directory layout. The overall system requirements are fairly high, so pick a reasonably well-equipped machine.
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
OK, a single command now starts the whole ELK stack:

docker-compose up -d
Visit Kibana on port 5601, as configured above, to check whether the startup succeeded.
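You can also check from the command line (a quick sketch, assuming you are on the Docker host with the default port mappings above):

# Elasticsearch should answer with its cluster info
curl http://localhost:9200

# Kibana should answer on its web port
curl -I http://localhost:5601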

Collecting Docker logs with logspout

Next we will use logspout to collect the Docker logs. Let's modify the logspout image to fit our needs.

Write the configuration file modules.go:
package main

import (
	_ "github.com/looplab/logspout-logstash"
	_ "github.com/gliderlabs/logspout/transports/udp"
)
Then write the Dockerfile:
FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
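Rebuilding the image looks something like this (the jayqqaa12/logspout tag matches the one used in the run command below; replace it with your own namespace):

docker build -t jayqqaa12/logspout .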
After rebuilding the image, run it on each node:
docker run -d --name="logspout" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  jayqqaa12/logspout \
  logstash://<your logstash address>
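To check that logspout came up properly on a node, its own container logs are a quick place to look:

docker logs logspout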
Now open Kibana and you can see the collected Docker logs.

Note that your Docker containers should log to the console (stdout/stderr) so that logspout can collect their output.
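For example, a throwaway container like this (a hypothetical log-demo container, just for testing) writes to stdout, so its output will show up in Kibana:

# Emits a line to stdout every 5 seconds; logspout forwards it to Logstash
docker run -d --name log-demo busybox \
  sh -c 'while true; do echo "hello from log-demo"; sleep 5; done'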



All right, our Docker-cluster-based ELK log collection system is now deployed.

For a large cluster you will also need Logstash and Elasticsearch clusters; we'll break that down next time.

That's all for this article. I hope it is helpful for your study, and I hope you will continue to support the Yunqi community.