ELK 5 with X-Pack support

commit af9e335a3c

README.md

@@ -4,6 +4,8 @@
 Run the latest version of the ELK (Elasticsearch, Logstash, Kibana) stack with Docker and Docker-compose.
 
+**Note**: This version has [X-Pack support](https://www.elastic.co/products/x-pack).
+
 It will give you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.
 
 Based on the official images:
@@ -20,27 +22,36 @@ Based on the official images:
 2. Install [Docker-compose](http://docs.docker.com/compose/install/).
 3. Clone this repository
 
+## Increase max_map_count on your host (Linux)
+
+You need to increase `max_map_count` on your Docker host:
+
+```bash
+$ sudo sysctl -w vm.max_map_count=262144
+```
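To verify the change took effect, you can read the value back from the kernel (a sketch assuming a Linux host; the persistence path under `/etc/sysctl.d/` is a common convention, not something this repository ships):

```shell
# Read the live value back (Linux exposes it under /proc)
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is ${current}"

# To survive reboots, the setting is usually persisted in a sysctl drop-in, e.g.:
#   echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
```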
 
 ## SELinux
 
 On distributions which have SELinux enabled out of the box, you will need to either re-context the files or set SELinux to Permissive mode in order for docker-elk to start properly.
 For example, on Red Hat and CentOS the following will apply the proper context:
 
-````bash
+```bash
 .-root@centos ~
 -$ chcon -R system_u:object_r:admin_home_t:s0 docker-elk/
-````
+```
 
 ## Windows
 
 When cloning this repo on Windows with line-ending conversion enabled (git option `core.autocrlf` set to `true`), the script `kibana/entrypoint.sh` will malfunction because of a corrupt shebang header, which must be terminated by `LF` only, not `CR+LF`:
 
-````bash
+```bash
 ...
 Creating dockerelk_kibana_1
 Attaching to dockerelk_elasticsearch_1, dockerelk_logstash_1, dockerelk_kibana_1
 : No such file or directory/usr/bin/env: bash
-````
+```
 
-So you have to either
+So you have to either:
 
 * disable line-ending conversion *before* cloning the repository by setting `core.autocrlf` to `false` (`git config core.autocrlf false`), or
 * convert the line endings in the script `kibana/entrypoint.sh` from `CR+LF` to `LF` (e.g. using Notepad++).
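As an alternative to Notepad++, the conversion can be done from any shell with `sed`; this sketch fabricates a CRLF-damaged script at a throwaway path (`/tmp/entrypoint-demo.sh`, an assumed name) and repairs it:

```shell
# Reproduce the problem: a shebang terminated by CR+LF
printf '#!/usr/bin/env bash\r\necho hello\r\n' > /tmp/entrypoint-demo.sh

# Strip the trailing CR from every line, turning CR+LF into plain LF
sed -i 's/\r$//' /tmp/entrypoint-demo.sh
```

Re-cloning with `core.autocrlf` set to `false` achieves the same result without manual edits.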
@@ -67,18 +78,15 @@ Now that the stack is running, you'll want to inject logs in it. The shipped log
 $ nc localhost 5000 < /path/to/logfile.log
 ```
 
-And then access the Kibana UI by hitting [http://localhost:5601](http://localhost:5601) with a web browser.
+Then access the Kibana UI by hitting [http://localhost:5601](http://localhost:5601) with a web browser, and use the following credentials to log in:
+
+* user: *elastic*
+* password: *changeme*
 
-*NOTE*: You'll need to inject data into Logstash before being able to create a Logstash index in Kibana. Then all you should have to do is to
-hit the create button.
+*NOTE*: You'll need to inject data into Logstash before being able to create a Logstash index in Kibana. Then all you should have to do is hit the create button.
 
 See: https://www.elastic.co/guide/en/kibana/current/setup.html#connect
 
-You can also access:
-* Sense: [http://localhost:5601/app/sense](http://localhost:5601/app/sense)
-
-*NOTE*: In order to use Sense, you'll need to query the IP address associated to your *network device* instead of localhost.
-
 By default, the stack exposes the following ports:
 * 5000: Logstash TCP input.
 * 9200: Elasticsearch HTTP
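Since X-Pack puts Elasticsearch behind basic authentication, direct queries now need those credentials too. The sketch below only shows what `curl -u` encodes on the wire (HTTP Basic auth is plain base64, not encryption); the curl call itself assumes the stack is up:

```shell
# HTTP Basic auth is base64("user:password") sent in an Authorization header
auth=$(printf 'elastic:changeme' | base64)
echo "Authorization: Basic ${auth}"

# With the stack running, Elasticsearch can then be queried directly, e.g.:
#   curl -u elastic:changeme http://localhost:9200/
```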
@@ -113,7 +121,7 @@ If you want to override the default configuration, add the *LS_HEAP_SIZE* enviro
 ```yml
 logstash:
   build: logstash/
-  command: logstash -f /etc/logstash/conf.d/logstash.conf
+  command: -f /etc/logstash/conf.d/
   volumes:
     - ./logstash/config:/etc/logstash/conf.d
   ports:
@@ -140,12 +148,11 @@ Update the container in the `docker-compose.yml` to add the *LS_JAVA_OPTS* envir
 ```yml
 logstash:
   build: logstash/
-  command: logstash -f /etc/logstash/conf.d/logstash.conf
+  command: -f /etc/logstash/conf.d/
   volumes:
     - ./logstash/config:/etc/logstash/conf.d
   ports:
     - "5000:5000"
-    - "18080:18080"
   links:
     - elasticsearch
   environment:
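For example, to cap the Logstash JVM heap at 512 MB (the value below is illustrative, not a project default), the `environment` block would look like:

```yml
environment:
  LS_JAVA_OPTS: "-Xmx512m -Xms512m"
```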
@@ -163,9 +170,11 @@ Then, you'll need to map your configuration file inside the container in the `do
 ```yml
 elasticsearch:
   build: elasticsearch/
-  command: elasticsearch -Des.network.host=_non_loopback_
   ports:
     - "9200:9200"
+    - "9300:9300"
+  environment:
+    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
   volumes:
     - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
 ```
@@ -178,6 +187,9 @@ elasticsearch:
   command: elasticsearch -Des.network.host=_non_loopback_ -Des.cluster.name=my-cluster
   ports:
     - "9200:9200"
+    - "9300:9300"
+  environment:
+    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
 ```
 
 # Storage
@@ -191,9 +203,11 @@ In order to persist Elasticsearch data even after removing the Elasticsearch con
 ```yml
 elasticsearch:
   build: elasticsearch/
-  command: elasticsearch -Des.network.host=_non_loopback_
   ports:
     - "9200:9200"
+    - "9300:9300"
+  environment:
+    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
   volumes:
     - /path/to/storage:/usr/share/elasticsearch/data
 ```
@@ -1,24 +1,33 @@
+version: '2'
+
+services:
   elasticsearch:
-    image: elasticsearch:latest
+    build: elasticsearch/
-    command: elasticsearch -Des.network.host=0.0.0.0
     ports:
       - "9200:9200"
       - "9300:9300"
+    environment:
+      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
+    networks:
+      - docker_elk
   logstash:
     build: logstash/
-    command: logstash -f /etc/logstash/conf.d/logstash.conf
+    command: -f /etc/logstash/conf.d/
     volumes:
       - ./logstash/config:/etc/logstash/conf.d
     ports:
       - "5000:5000"
-    links:
-      - elasticsearch
+    networks:
+      - docker_elk
   kibana:
     build: kibana/
     volumes:
       - ./kibana/config/:/opt/kibana/config/
     ports:
       - "5601:5601"
-    links:
-      - elasticsearch
+    networks:
+      - docker_elk
+
+networks:
+  docker_elk:
+    driver: bridge
@@ -0,0 +1,7 @@
+FROM elasticsearch:5
+
+ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
+
+RUN elasticsearch-plugin install --batch x-pack
+
+CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
@@ -1,10 +1,10 @@
-FROM kibana:latest
+FROM kibana:5
 
-RUN apt-get update && apt-get install -y netcat
+RUN apt-get update && apt-get install -y netcat bzip2
 
 COPY entrypoint.sh /tmp/entrypoint.sh
 RUN chmod +x /tmp/entrypoint.sh
 
-RUN kibana plugin --install elastic/sense
+RUN kibana-plugin install x-pack
 
 CMD ["/tmp/entrypoint.sh"]
@@ -1,76 +1,92 @@
-# Kibana is served by a back end server. This controls which port to use.
-port: 5601
+# Kibana is served by a back end server. This setting specifies the port to use.
+server.port: 5601
 
-# The host to bind the server to.
-host: "0.0.0.0"
+# This setting specifies the IP address of the back end server.
+server.host: "0.0.0.0"
 
-# The Elasticsearch instance to use for all your queries.
-elasticsearch_url: "http://elasticsearch:9200"
+# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
+# cannot end in a slash.
+# server.basePath: ""
 
-# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
-# then the host you use to connect to *this* Kibana instance will be sent.
-elasticsearch_preserve_host: true
+# The maximum payload size in bytes for incoming server requests.
+# server.maxPayloadBytes: 1048576
 
-# Kibana uses an index in Elasticsearch to store saved searches, visualizations
-# and dashboards. It will create a new index if it doesn't already exist.
-kibana_index: ".kibana"
+# The Kibana server's name. This is used for display purposes.
+# server.name: "your-hostname"
 
-# If your Elasticsearch is protected with basic auth, these are the user credentials
-# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
-# users will still need to authenticate with Elasticsearch (which is proxied through
-# the Kibana server)
-# kibana_elasticsearch_username: user
-# kibana_elasticsearch_password: pass
+# The URL of the Elasticsearch instance to use for all your queries.
+elasticsearch.url: "http://elasticsearch:9200"
 
-# If your Elasticsearch requires client certificate and key
-# kibana_elasticsearch_client_crt: /path/to/your/client.crt
-# kibana_elasticsearch_client_key: /path/to/your/client.key
+# When this setting's value is true Kibana uses the hostname specified in the server.host
+# setting. When the value of this setting is false, Kibana uses the hostname of the host
+# that connects to this Kibana instance.
+# elasticsearch.preserveHost: true
 
-# If you need to provide a CA certificate for your Elasticsearch instance, put
-# the path of the pem file here.
-# ca: /path/to/your/CA.pem
+# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
+# dashboards. Kibana creates a new index if the index doesn't already exist.
+# kibana.index: ".kibana"
 
 # The default application to load.
-default_app_id: "discover"
+# kibana.defaultAppId: "discover"
 
-# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
-# request_timeout setting
-# ping_timeout: 1500
+# If your Elasticsearch is protected with basic authentication, these settings provide
+# the username and password that the Kibana server uses to perform maintenance on the Kibana
+# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
+# is proxied through the Kibana server.
+# elasticsearch.username: "user"
+# elasticsearch.password: "pass"
 
-# Time in milliseconds to wait for responses from the back end or elasticsearch.
-# This must be > 0
-request_timeout: 300000
+# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
+# files enable SSL for outgoing requests from the Kibana server to the browser.
+# server.ssl.cert: /path/to/your/server.crt
+# server.ssl.key: /path/to/your/server.key
 
-# Time in milliseconds for Elasticsearch to wait for responses from shards.
-# Set to 0 to disable.
-shard_timeout: 0
+# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
+# These files validate that your Elasticsearch backend uses the same key files.
+# elasticsearch.ssl.cert: /path/to/your/client.crt
+# elasticsearch.ssl.key: /path/to/your/client.key
 
-# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
-# startup_timeout: 5000
+# Optional setting that enables you to specify a path to the PEM file for the certificate
+# authority for your Elasticsearch instance.
+# elasticsearch.ssl.ca: /path/to/your/CA.pem
 
-# Set to false to have a complete disregard for the validity of the SSL
-# certificate.
-verify_ssl: true
+# To disregard the validity of SSL certificates, change this setting's value to false.
+# elasticsearch.ssl.verify: true
 
-# SSL for outgoing requests from the Kibana Server (PEM formatted)
-# ssl_key_file: /path/to/your/server.key
-# ssl_cert_file: /path/to/your/server.crt
+# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
+# the elasticsearch.requestTimeout setting.
+# elasticsearch.pingTimeout: 1500
 
-# Set the path to where you would like the process id file to be created.
-# pid_file: /var/run/kibana.pid
+# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
+# must be a positive integer.
+# elasticsearch.requestTimeout: 30000
 
-# If you would like to send the log output to a file you can set the path below.
-# This will also turn off the STDOUT log output.
-# log_file: ./kibana.log
+# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
+# headers, set this value to [] (an empty list).
+# elasticsearch.requestHeadersWhitelist: [ authorization ]
 
-# Plugins that are included in the build, and no longer found in the plugins/ folder
-bundled_plugin_ids:
- - plugins/dashboard/index
- - plugins/discover/index
- - plugins/doc/index
- - plugins/kibana/index
- - plugins/markdown_vis/index
- - plugins/metric_vis/index
- - plugins/settings/index
- - plugins/table_vis/index
- - plugins/vis_types/index
- - plugins/visualize/index
+# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
+# elasticsearch.shardTimeout: 0
+
+# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
+# elasticsearch.startupTimeout: 5000
+
+# Specifies the path where Kibana creates the process ID file.
+# pid.file: /var/run/kibana.pid
+
+# Enables you to specify a file where Kibana stores log output.
+# logging.dest: stdout
+
+# Set the value of this setting to true to suppress all logging output.
+# logging.silent: false
+
+# Set the value of this setting to true to suppress all logging output other than error messages.
+# logging.quiet: false
+
+# Set the value of this setting to true to log all events, including system usage information
+# and all requests.
+# logging.verbose: false
+
+# Set the interval in milliseconds to sample system and process performance
+# metrics. Minimum is 100ms. Defaults to 10000.
+# ops.interval: 10000
@@ -1,4 +1,4 @@
-FROM logstash:latest
+FROM logstash:5
 
 # Add your logstash plugins setup here
 # Example: RUN logstash-plugin install logstash-filter-json
@@ -9,5 +9,7 @@ input {
 output {
   elasticsearch {
     hosts => "elasticsearch:9200"
+    user => "elastic"
+    password => "changeme"
   }
 }