ELK 5 with X-Pack support

commit af9e335a3c

README.md

@@ -4,6 +4,8 @@
Run the latest version of the ELK (Elasticsearch, Logstash, Kibana) stack with Docker and Docker-compose.

**Note**: This version has [X-Pack support](https://www.elastic.co/products/x-pack).

It will give you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.

Based on the official images:

@@ -20,27 +22,36 @@ Based on the official images:
2. Install [Docker-compose](http://docs.docker.com/compose/install/).
3. Clone this repository

## Increase max_map_count on your host (Linux)

You need to increase `max_map_count` on your Docker host:

```bash
$ sudo sysctl -w vm.max_map_count=262144
```
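Before raising the limit, you can check the value currently in effect (a quick sanity check; the stock default on most Linux distributions is 65530):

```shell
# Read the live kernel setting; this only reads /proc, so no root is required.
cat /proc/sys/vm/max_map_count
```

To make the increased value survive reboots, you can also add the line `vm.max_map_count=262144` to `/etc/sysctl.conf`.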
## SELinux

On distributions which have SELinux enabled out-of-the-box you will need to either re-context the files or set SELinux into Permissive mode in order for docker-elk to start properly.
For example on Redhat and CentOS, the following will apply the proper context:

```bash
$ chcon -R system_u:object_r:admin_home_t:s0 docker-elk/
```
## Windows

When cloning this repo on Windows with line ending conversion enabled (git option `core.autocrlf` set to `true`), the script `kibana/entrypoint.sh` will malfunction due to a corrupt shebang header (which must not be terminated by `CR+LF` but `LF` only):

```bash
...
Creating dockerelk_kibana_1
Attaching to dockerelk_elasticsearch_1, dockerelk_logstash_1, dockerelk_kibana_1
: No such file or directory/usr/bin/env: bash
```

So you have to either:

* disable line ending conversion *before* cloning the repository by setting `core.autocrlf` to `false`: `git config core.autocrlf false`, or
* convert the line endings in script `kibana/entrypoint.sh` from `CR+LF` to `LF` (e.g. using Notepad++).
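The corrupt-shebang failure above can be reproduced and fixed from any POSIX shell; this sketch uses a throwaway file in `/tmp` (the file name is made up) and `tr` as a Notepad++-free alternative for the conversion:

```shell
# Simulate a script checked out with core.autocrlf=true: every line, including
# the shebang, ends in CR+LF, so the kernel looks for "bash\r" as interpreter.
printf '#!/usr/bin/env bash\r\necho ok\r\n' > /tmp/entrypoint_crlf.sh

# Strip the carriage returns to get LF-only line endings.
tr -d '\r' < /tmp/entrypoint_crlf.sh > /tmp/entrypoint_lf.sh

head -n1 /tmp/entrypoint_lf.sh   # -> #!/usr/bin/env bash
```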
@@ -67,18 +78,15 @@ Now that the stack is running, you'll want to inject logs in it. The shipped logstash configuration allows you to send content via TCP:

```bash
$ nc localhost 5000 < /path/to/logfile.log
```
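If you have no real log file at hand, a hypothetical sample (file name and contents made up for illustration) is enough to try the pipeline:

```shell
# Create a two-line sample log file to feed into the Logstash TCP input.
printf '%s\n' \
  'Jan  1 00:00:00 demo-host app[1]: starting up' \
  'Jan  1 00:00:01 demo-host app[1]: ready' > /tmp/sample.log

wc -l < /tmp/sample.log
```

With the stack up, `nc localhost 5000 < /tmp/sample.log` then sends it to Logstash.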
And then access Kibana UI by hitting [http://localhost:5601](http://localhost:5601) with a web browser and use the following credentials to login:

* user: *elastic*
* password: *changeme*

*NOTE*: You'll need to inject data into logstash before being able to create a logstash index in Kibana. Then all you should have to do is to hit the create button.

See: https://www.elastic.co/guide/en/kibana/current/setup.html#connect

You can also access:
* Sense: [http://localhost:5601/app/sense](http://localhost:5601/app/sense)

*NOTE*: In order to use Sense, you'll need to query the IP address associated to your *network device* instead of localhost.

By default, the stack exposes the following ports:
* 5000: Logstash TCP input.
* 9200: Elasticsearch HTTP

@@ -113,7 +121,7 @@ If you want to override the default configuration, add the *LS_HEAP_SIZE* environment variable
```yml
logstash:
  build: logstash/
  command: -f /etc/logstash/conf.d/
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
```

@@ -140,12 +148,11 @@ Update the container in the `docker-compose.yml` to add the *LS_JAVA_OPTS* environment variable
```yml
logstash:
  build: logstash/
  command: -f /etc/logstash/conf.d/
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
    - "18080:18080"
  links:
    - elasticsearch
  environment:
```

@@ -163,9 +170,11 @@ Then, you'll need to map your configuration file inside the container in the `docker-compose.yml`
```yml
elasticsearch:
  build: elasticsearch/
  command: elasticsearch -Des.network.host=_non_loopback_
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
```

@@ -178,6 +187,9 @@ elasticsearch:
```yml
elasticsearch:
  command: elasticsearch -Des.network.host=_non_loopback_ -Des.cluster.name=my-cluster
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
```

# Storage

@@ -191,9 +203,11 @@ In order to persist Elasticsearch data even after removing the Elasticsearch container
```yml
elasticsearch:
  build: elasticsearch/
  command: elasticsearch -Des.network.host=_non_loopback_
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
  volumes:
    - /path/to/storage:/usr/share/elasticsearch/data
```
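Instead of a host path, a Docker named volume can also back the data directory; a sketch (the volume name `elasticsearch_data` is an assumption, and under compose file version 2 it must additionally be declared in a top-level `volumes:` section):

```yml
elasticsearch:
  build: elasticsearch/
  volumes:
    # hypothetical named volume replacing the bind-mounted host path
    - elasticsearch_data:/usr/share/elasticsearch/data
```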
docker-compose.yml

@@ -1,24 +1,33 @@

version: '2'

services:
  elasticsearch:
    build: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
    networks:
      - docker_elk
  logstash:
    build: logstash/
    command: -f /etc/logstash/conf.d/
    volumes:
      - ./logstash/config:/etc/logstash/conf.d
    ports:
      - "5000:5000"
    links:
      - elasticsearch
    networks:
      - docker_elk
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/opt/kibana/config/
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    networks:
      - docker_elk

networks:
  docker_elk:
    driver: bridge
elasticsearch/Dockerfile

@@ -0,0 +1,7 @@

FROM elasticsearch:5

ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"

RUN elasticsearch-plugin install --batch x-pack

CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
kibana/Dockerfile

@@ -1,10 +1,10 @@

FROM kibana:5

RUN apt-get update && apt-get install -y netcat bzip2

COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh

RUN kibana-plugin install x-pack

CMD ["/tmp/entrypoint.sh"]
kibana/config/kibana.yml

@@ -1,76 +1,92 @@

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# This setting specifies the IP address of the back end server.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
# server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# When this setting’s value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn’t already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# To disregard the validity of SSL certificates, change this setting’s value to false.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
# elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# elasticsearch.requestHeadersWhitelist: [ authorization ]

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
# pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
# logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
# logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 10000.
# ops.interval: 10000
logstash/Dockerfile

@@ -1,4 +1,4 @@

FROM logstash:5

# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
logstash/config/logstash.conf

@@ -9,5 +9,7 @@ input {

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
  }
}