How to Secure Your Elastic Stack (Plus Kibana, Logstash and Beats)

Posted in: Open Source, Site Reliability Engineering, Technical Track

Editor’s Note: Because our bloggers have lots of useful tips, every now and then we bring forward a popular post from the past. We originally published today’s post on December 16, 2019.

This is the second of a series of blog posts related to Elastic Stack and the components around it. The final objective is to deploy and secure a production-ready environment using these freely available tools. 

In this post, I’ll be focusing on securing your Elastic Stack (plus Kibana, Logstash and Beats) using HTTPS, SSL and TLS. If you need to install an Elasticsearch cluster, please make sure to check out the first post, which covered Installing Elasticsearch Using Ansible.

As I mentioned in the first post, one thing I find disturbing in this day and age is Elastic Stack’s default behavior: all messages exchanged between the components of the stack travel in plain text! Of course, we always assume the components (the nodes of the cluster, the Beats shipping information to them, and so on) live on a private, secure network. The truth is, that’s not always the case.

We all heard the great news from the vendor, Elastic, a few months ago: starting with versions 6.8.0 and 7.1.0, most of the security features of Elasticsearch are now free! Before this, they were paid X-Pack features; if we needed any secure communication between the components of our cluster, we had to pay. Fortunately, this is no longer the case, and now we have a way to both quickly deploy and secure our stack.

The bad news is that vendor documentation about securing it is still scarce. The good news is we have this blog post as a guide! :)

Sample Elastic Stack architecture

I’ll work through this post under the assumption that the architecture is the one in the following diagram. Of course, this will NOT be the case for your deployment, so please adjust the components as necessary; the diagram is for information purposes only.

Sample Elastic Stack architecture plus Logstash, Kibana and Beats


Securing Elasticsearch cluster

First, we need to create the CA for the cluster:

/usr/share/elasticsearch/bin/elasticsearch-certutil ca

Then, it’s necessary to create the certificates for the individual components:

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns esmaster1,esmaster2,esmaster3,esdata1,esdata2,esdata3,escoord1,escoord2,eslogstash1,eslogstash2

You can create both the CA and the certificates on any of the servers and distribute them afterward. By default, they are written under /usr/share/elasticsearch/ with the names elastic-stack-ca.p12 (the CA) and elastic-certificates.p12 (the node certificates).

I recommend setting the certificates to expire at a future date; three years is a safe value. It’s just a matter of remembering when they will expire and renewing them beforehand.
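As a sketch of how you might set and check expiry: elasticsearch-certutil accepts a --days option to control the lifetime, and openssl can print a certificate’s notAfter date. The demo.key/demo.crt names below are throwaway stand-ins for this illustration, not files created elsewhere in this post.

```shell
# elasticsearch-certutil accepts a --days option to set the lifetime, e.g.:
#   /usr/share/elasticsearch/bin/elasticsearch-certutil ca --days 1095
# To see when a PEM certificate expires, openssl prints its notAfter date.
# Demo on a throwaway self-signed cert (demo.key/demo.crt are stand-ins):
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -days 1095 -keyout demo.key -out demo.crt 2>/dev/null
openssl x509 -in demo.crt -noout -enddate
# -checkend exits 0 if the cert is still valid N seconds from now (30 days):
openssl x509 -in demo.crt -noout -checkend 2592000 && echo "valid for 30+ days"
```

The same check should work for the PKCS#12 bundles, e.g. piping `openssl pkcs12 -in elastic-certificates.p12 -nokeys` into `openssl x509 -noout -enddate` (it will prompt for the bundle password).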

The following options must be added to all nodes of the cluster:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: security/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: security/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: security/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: security/elastic-certificates.p12
xpack.security.http.ssl.verification_mode: certificate

Remember how in my first post I recommended using Ansible to deploy the Elasticsearch cluster? Now is the time to use it to easily redeploy with the security options.

The updated Ansible configuration file is this:

- hosts: masters
  roles:
   - role: elastic.elasticsearch
  vars:
   es_heap_size: "8g"
   es_config:
     cluster.name: "esprd"
     network.host: 0
     cluster.initial_master_nodes: "esmaster1,esmaster2,esmaster3"
     discovery.seed_hosts: "esmaster1:9300,esmaster2:9300,esmaster3:9300"
     http.port: 9200
     node.data: false
     node.master: true
     node.ingest: false
     node.ml: false
     cluster.remote.connect: false
     bootstrap.memory_lock: true
     xpack.security.enabled: true
     xpack.security.transport.ssl.enabled: true
     xpack.security.transport.ssl.verification_mode: certificate
     xpack.security.transport.ssl.keystore.path: security/elastic-certificates.p12
     xpack.security.transport.ssl.truststore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.enabled: true
     xpack.security.http.ssl.keystore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.truststore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.verification_mode: certificate
- hosts: data
  roles:
   - role: elastic.elasticsearch
  vars:
   es_data_dirs:
     - "/var/lib/elasticsearch"
   es_heap_size: "30g"
   es_config:
     cluster.name: "esprd"
     network.host: 0
     discovery.seed_hosts: "esmaster1:9300,esmaster2:9300,esmaster3:9300"
     http.port: 9200
     node.data: true
     node.master: false
     node.ml: false
     bootstrap.memory_lock: true
     indices.recovery.max_bytes_per_sec: 100mb
     xpack.security.enabled: true
     xpack.security.transport.ssl.enabled: true
     xpack.security.transport.ssl.verification_mode: certificate
     xpack.security.transport.ssl.keystore.path: security/elastic-certificates.p12
     xpack.security.transport.ssl.truststore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.enabled: true
     xpack.security.http.ssl.keystore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.truststore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.verification_mode: certificate
- hosts: coordinating
  roles:
   - role: elastic.elasticsearch
  vars:
   es_heap_size: "16g"
   es_config:
     cluster.name: "esprd"
     network.host: 0
     discovery.seed_hosts: "esmaster1:9300,esmaster2:9300,esmaster3:9300"
     http.port: 9200
     node.data: false
     node.master: false
     node.ingest: false
     node.ml: false
     cluster.remote.connect: false
     bootstrap.memory_lock: true
     xpack.security.enabled: true
     xpack.security.transport.ssl.enabled: true
     xpack.security.transport.ssl.verification_mode: certificate
     xpack.security.transport.ssl.keystore.path: security/elastic-certificates.p12
     xpack.security.transport.ssl.truststore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.enabled: true
     xpack.security.http.ssl.keystore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.truststore.path: security/elastic-certificates.p12
     xpack.security.http.ssl.verification_mode: certificate

If you didn’t deploy via Ansible, you can still add the options manually to the configuration file.

After adding the options and restarting the cluster, Elasticsearch will be accessible via HTTPS. You can check with https://esmaster1:9200/_cluster/health.

Next, we need to set up passwords for the built-in Elasticsearch users. Again, this can be done on any of the Elasticsearch nodes.

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

The list of users will be similar to this one:

elastic: REDACTED
apm-system: REDACTED
kibana: REDACTED
logstash_system: REDACTED
beats_system: REDACTED
remote_monitoring_user: REDACTED
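With security enabled and the passwords set, a quick sanity check from any machine confirms that both TLS and authentication work. This is a sketch against the sample hostnames of this post; the CA path is an assumption, so point it at wherever you keep a PEM copy of your CA.

```shell
# Point curl at a PEM copy of your CA (path here is an assumption) ...
curl --cacert /etc/elasticsearch/security/ca.crt -u elastic \
  "https://esmaster1:9200/_cluster/health?pretty"
# ... or, for a throwaway test only, skip verification with -k:
curl -k -u elastic "https://esmaster1:9200/_cluster/health?pretty"
```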

Securing Kibana

After all security options are set on the Elasticsearch cluster, we move on to the Kibana configuration. We will create a PEM format certificate and key for each Kibana node with the following commands:

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem --ca elastic-stack-ca.p12 --dns eskibana1
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem --ca elastic-stack-ca.p12 --dns eskibana2

Once done, we need to move the certificates into the corresponding Kibana nodes under /etc/kibana/.

The configuration file for Kibana needs to end up being similar to this one. We’ll focus only on the basic and security-related parts of it.

server.port: 5601
server.host: "0.0.0.0"
server.name: "eskibana1"
elasticsearch.hosts: ["https://esmaster1:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "REDACTED"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/instance.crt
server.ssl.key: /etc/kibana/instance.key
elasticsearch.ssl.verificationMode: none
xpack.security.encryptionKey: "REDACTED"

After restarting Kibana, you can now access it via https. For example, at https://eskibana1:5601/app/kibana/

Securing Logstash

Logstash security configuration requires the certificate to be in PEM format (as opposed to the PKCS#12 format used for Elasticsearch and Kibana). This is an undocumented “feature” (requirement)!

We’ll convert the general PKCS#12 certificate into PEM for the Logstash certificates:

openssl pkcs12 -in elastic-certificates.p12 -out /etc/logstash/logstash.pem -clcerts -nokeys

In order to extract the individual certificate, key and CA from the .p12 bundle, we can use the following commands to obtain them:

    1. Obtain the key:
       openssl pkcs12 -in elastic-certificates.p12 -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > logstash-ca.key
    2. Obtain the CA:
       openssl pkcs12 -in elastic-certificates.p12 -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > logstash-ca.crt
    3. Obtain the node certificate:
       openssl pkcs12 -in elastic-certificates.p12 -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > logstash.crt

(Please note: the certificates are the same for Elasticsearch and Logstash, so you can rename logstash-ca.crt to es-ca.crt if required, or give it any other name.)
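If you’d like to sanity-check these extraction commands without touching your real bundle, the same pipeline can be exercised against a throwaway PKCS#12 file. All file names below (test.*, extracted.*) are stand-ins for this demo.

```shell
# Build a throwaway key + self-signed cert and pack them into a .p12 bundle:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -days 1 -keyout test.key -out test.crt 2>/dev/null
openssl pkcs12 -export -in test.crt -inkey test.key -out test.p12 -passout pass:
# Same extraction pipeline as above (empty password for the demo bundle):
openssl pkcs12 -in test.p12 -nocerts -nodes -passin pass: \
  | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > extracted.key
openssl pkcs12 -in test.p12 -clcerts -nokeys -passin pass: \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > extracted.crt
# Both files should now hold a single clean PEM block each:
head -1 extracted.key extracted.crt
```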

Then, we need to edit the Logstash pipeline configuration to reflect the new security settings:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    ssl => true
    ssl_certificate_verification => true
    cacert => '/etc/logstash/logstash.pem'
    hosts => ["esmaster1:9200","esmaster2:9200","esmaster3:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "REDACTED"
  }
}

As we can see, Logstash will now talk to Elasticsearch using SSL and the certificate we just converted.

Finally, we edit Logstash’s configuration file /etc/logstash/logstash.yml to be like the following (focus only on security-related parts of it):

path.data: /var/lib/logstash
config.reload.automatic: true
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: REDACTED
xpack.monitoring.elasticsearch.hosts: ["https://esmaster1:9200", "https://esmaster2:9200", "https://esmaster3:9200"]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/es-ca.crt
xpack.monitoring.elasticsearch.sniffing: true
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
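Before restarting, Logstash can validate its pipeline files without starting the service. This is a hedged sketch: the paths match this sample setup, and depending on your installation you may need to adjust --path.settings.

```shell
# Parses the pipeline files and exits instead of starting the service;
# should print "Configuration OK" on success.
/usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/ --path.settings /etc/logstash
```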

Restart Logstash to pick up the new settings.

Taking a break — almost there!

(Phew, we’re almost there! We’re on the way to secure your Elastic Stack. Please just be a bit more patient. It’s going to be worth it for the security. Right? RIGHT?!)


Security on XKCD. Permission to use the image at https://xkcd.com/about/

Securing Beats — changes (still) on Logstash servers

We need to create a new certificate in order for Logstash to accept SSL connections from Beats. This certificate is used only for communication between these two components of the stack, and it is different from the one Logstash uses to send data to the Elasticsearch cluster. This is why the CA and the crt/key (in PEM format) are different.

Okay, let’s create the certificates!

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert logstash-ca/logstash-ca.crt --ca-key logstash-ca/logstash-ca.key --dns eslogstash1,eslogstash2 --pem
openssl pkcs8 -in logstash.key -topk8 -nocrypt -out logstash.pkcs8.key
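The PKCS#8 conversion matters because the Logstash Beats input expects its key in that format. As a quick local check that the conversion produces the expected header (the demo-ls.* file names are stand-ins, not the real Logstash key):

```shell
# Generate a throwaway RSA key and convert it, as done above for Logstash:
openssl genrsa -out demo-ls.key 2048 2>/dev/null
openssl pkcs8 -in demo-ls.key -topk8 -nocrypt -out demo-ls.pkcs8.key
# A PKCS#8 key begins with "-----BEGIN PRIVATE KEY-----"
# (not "BEGIN RSA PRIVATE KEY"):
head -1 demo-ls.pkcs8.key
```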

If you need to maintain both plain-text (but why?!) and secure communication, there is an extra step: you will need two Logstash configurations, one for the plain-text communication and another for the SSL one. The first one (plain-text input from Beats, SSL output to the Elasticsearch cluster) is the one listed in the section above.

The new (secure) input (from Beats) + output (to Elasticsearch) configuration would be:

input {
  beats {
    port => 5045
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/ca.crt"]
    ssl_certificate => "/etc/logstash/instance.crt"
    ssl_key => "/etc/logstash/logstash.pkcs8.key"
  }
}

output {
  elasticsearch {
    ssl => true
    ssl_certificate_verification => true
    cacert => '/etc/logstash/logstash.pem'
    hosts => ["esmaster1:9200","esmaster2:9200","esmaster3:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "REDACTED"
  }
}

Notice that regular (plain-text) logs come in on port 5044/tcp, while SSL logs come in on port 5045/tcp. Adjust the port numbers if you need to.

Securing Beats — changes (finally!) on Beats shipping instances

Once the Logstash configuration is ready, it’s just a matter of setting the certificates on the Beats side. Easy as pie!

This is an example of the Metricbeat configuration. Please focus on the security part of it. These same certificates can be applied to Filebeat and any other beat!

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

name: my-happy-metricbeat-shipper-name

fields:
  service: myimportantservice

output.logstash:
  hosts: ["eslogstash1:5045"]
  ssl.certificate_authorities: ["/etc/metricbeat/logstash-ca.crt"]
  ssl.certificate: "/etc/metricbeat/logstash.crt"
  ssl.key: "/etc/metricbeat/logstash.pkcs8.key"

Restart Logstash and the corresponding Beat(s), and that’s it!
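Before declaring victory, Beats ships a built-in connectivity check that is handy here; to the best of my knowledge the test subcommand works like this (the config path matches the sample Metricbeat setup above):

```shell
# Validate the YAML itself:
metricbeat test config -c /etc/metricbeat/metricbeat.yml
# Attempt a real TLS handshake with the configured Logstash output:
metricbeat test output -c /etc/metricbeat/metricbeat.yml
```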

Now you have a completely secure Elastic Stack (including Elasticsearch, Kibana, Logstash and Beats). You can be really proud of it because this is not a trivial task!


About the Author

Site Reliability Consultant
Mexican living in France with way too many interests to list here, but in general technology is my passion. I love running, videogames (Final Fantasy series!), Pokémon Go, languages and food! SysAdmin since 1994, sometimes I feel way too old to still be working on this :)

57 Comments

I have read dozens of blogs, references including document from Elastic themselves… however, this is by far the BEST article I have read about TLS/SSL for Elasticsearch!

Kudos!

Alejandro Gonzalez
May 18, 2020 11:40 am

Thank you very much for your kind words, it’s a pleasure to know you found this information useful! :)


Hi Alejandro,

I found this article very useful and detailed. Congratulations!

I am in the process of securing my ELK nodes and I have been struggling with the security settings for the last few days. After spending some time on this, I finally have Elasticsearch and Kibana configured for secure connection and both using certificates in PKCS#12 format.

Most of the documentation found around the web explains how to configure Kibana to use only the PEM format, and the same goes for Logstash, but I was wondering if, like Kibana, Logstash is now able to handle PKCS#12.

Do you know anything about it? If YES, would you know how to set up Logstash to use PKCS#12? The official documentation does not help much.

Thank you

Alejandro Gonzalez
September 7, 2020 8:59 am

Hi, Manuel! Thank you very much for your comments and for your input. I haven’t experimented with PKCS#12 format on Logstash and for now I just use what I’ve provided in this blog post. What I did is that in the past few days I updated the post with additional instructions on how to convert the certificates as I saw some people were struggling with that. I wanted to ask you, is there any special reason why you want to use PKCS#12? Just curious, of course, every use case must be different. I agree with you that official documentation doesn’t help that much :( . Thanks!


Hi all,

The CA.cert can be obtained from generate the initial certificates within the ELK cluster

bin/elasticsearch-certutil cert --keep-ca-key --pem --in

When we generated our SSL certificates, we provided the --keep-ca-key option, which means the certs.zip file contains a ca/ca.key file alongside the ca/ca.crt file. If you ever decide to add more nodes to your Elasticsearch cluster, you’ll want to generate additional node certificates, and for that you will need both of those “ca” files as well as the password you used to generate them.

Cheers!

Alejandro Gonzalez
September 7, 2020 9:00 am

Thank you for the clarification. Additional instructions have been updated on the original post in order to reflect this.


Cheers!


Well written and very useful. It’s really easy to follow what you have written.
Thank you.

Alejandro Gonzalez
September 7, 2020 9:01 am

Thank you for your kind words and feedback. I’m glad it worked for you!


Thank you so much for posting this – your walkthrough is better than any documentation. Couldn’t get it working until I read your article. The explanations are great.

Alejandro Gonzalez
September 7, 2020 9:01 am

Thank you very much, Franky! I’m really happy to know this helped you in securing your stack :)


How did you create the es-ca.crt? And do you have post for Configure Metricbeat 7.8 to monitor Elasticsearch Cluster Setup over HTTPS? Thanks

Alejandro Gonzalez
September 7, 2020 9:04 am

Hi, Norman, thanks for your question! I’ve updated the original post with the instructions to convert the certificates, as some people were struggling with this step. Please take a look at the updated post.

Regarding configure Metricbeat 7.x to monitor Elasticsearch Cluster over HTTPS, could you please further explain what are you trying to accomplish? With the instructions provided in the post your Metricbeat would be sending metrics over a secure connection to the Elasticsearch stack. As you know, Metricbeat collects the metrics in the instance itself so this is the source of my confusion but perhaps we can clarify it together. Thanks!


Thank you very much for your tutorial. Just one more question, based on your sample ELK architecture, you have 2 kibana and using a load balancer. May I ask what load balancer you used and how you set it up?

Alejandro Gonzalez
September 10, 2020 10:04 am

Hi, Norman!

I’ve used both haproxy and nginx as the load balancers. Both are pretty straightforward to set up; just make sure they listen on the specific/required ports and redirect the TCP traffic to the required Kibana instance. I like to use round robin to balance the traffic, but you can use any method you choose.

As per the configuration, it’s out of the scope to go into detail here but you can find the appropriate pointers to configuration on each of the following sites:

HAProxy:
http://cbonte.github.io/haproxy-dconv/2.2/intro.html#3.3.5

Nginx:
https://nginx.org/en/docs/http/ngx_http_upstream_module.html

Perhaps in the near future I can take the time to write a step-by-step blog post about this configuration; it could be a great subject of discussion. Thanks!


Hi..
Have you already written the step-by-step configuration for the load balancer?

Thanks for this! “logstash-ca.crt” and “logstash-ca.key” are used, but how/when are they generated? This is the only step that is missing to do the job ;-)

Alejandro Gonzalez
September 7, 2020 9:05 am

Pika: as some people were struggling with this step, I’ve updated the original post with the instructions on how to extract the certificates from the bundle.

I hope this answers your question! :)


Hi Alejandro, I have a secure ELK Stack cluster with 3 hosts: [“host1:5044”, “host2:5044”, “host3:5044”]. In the “ssl.certificate” setting of the Filebeat.yml file, which of the 3 certs do I have to indicate? Or do I have to indicate all 3? Thank you very much!

Alejandro Gonzalez
September 7, 2020 9:07 am

Hi, Jorge!

I assume you have 3 Logstash servers and you want to know if you can indicate more than one server in your Filebeat configuration for the logs shipping instances? If that’s the case, then yes, absolutely you can configure multiple Logstash servers.

If what you’re asking is if you can use multiple certificates then that is not possible, BUT in one certificate you can specify the nodes names so they all can/will be included on it. Please refer to the beginning of the post on how to add multiple DNS entries to the certificate, or you can create new ones with your CA file you must have saved.

Cheers!


Hi Alejandro, thanks for the answer. Yes, that’s it: I want to indicate more than one server in the Filebeat configuration. How can I do it? Thanks and regards!

Alejandro Gonzalez
September 8, 2020 10:14 am

Jorge:

In order to include more than one Logstash server in the Filebeat output you just need to add them in the configuration file, like in this example:

output.logstash:
  hosts: ["server1:port", "server2:port", "server3:port"]

This way Filebeat will send the logs to one of the 3 Logstash servers, chosen randomly. If one of the servers is unreachable or unresponsive, then another one will be tried.

For more details on the Filebeat configuration you can review the official documentation at https://www.elastic.co/guide/en/beats/filebeat/current/configuring-howto-filebeat.html or if you want to discuss a specific detail perhaps we can do it as well.

Good luck!


Hi Alejandro, following the tutorial (very good tutorial) I obtained this error: “x509: certificate signed by unknown authority (possibly because of “crypto/rsa: verification error” while trying to verify candidate authority certificate “Elastic Certificate Tool Autogenerated CA”)”. Any help please?

Alejandro Gonzalez
September 11, 2020 7:00 am

Jorge:

That warning means the certificates were self-created and self-signed instead of being issued by an official certificate authority. As you created the certificates yourself, it’s safe to ignore the warning; or, if you want, you can of course obtain (and pay for) certificates from a proper source.

Where did you see that message? As we’re using certificates for all of the components of the stack, it’s important to know where you’re getting it, although I have a feeling this is while accessing Kibana?

Hi Alejandro, the error is the next one:

Failed to connect to backoff (async(tcp://dns_name:5044)): x509: certificate signed by unknown authority (possibly because of “crypto/rsa: verification error” while trying to verify candidate authority certificate “Elastic Certificate Tool Autogenerated CA”)

I obtained this error in the Filebeat log, when it tried to connect with Logstash. Thanks again!

Alejandro Gonzalez
September 14, 2020 5:32 am

Jorge:

It appears to me that either you aren’t using the same CA on the “ssl.certificate_authorities:” configuration line for Filebeat, or that perhaps the certificate you created isn’t including the DNS name of your Logstash instance. Perhaps you could make a backup copy of your current certificates for the stack (if this is a test one) and make sure to recreate all of them you will need to use with the appropriate DNS names, as this could be a common error.

Good luck!

Hi Alejandro,

for Filebeat.yml output-Logstash I apply this conf:

ssl.certificate_authorities => ["C:\\Elastic Beats\\logstash-ca.crt"]
ssl.key => "C:\\Elastic Beats\\logstash.pkcs8.key"
ssl.certificate => "C:\\Elastic Beats\\instance.crt"

The same as in the input of my logstash.conf:
ssl_certificate_authorities => ["/etc/logstash/logstash-ca.crt"]
ssl_key => "/etc/logstash/logstash.pkcs8.key"
ssl_certificate => "/etc/logstash/instance.crt"

And when I generated the certs I set my DNS names with the commands:

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert logstash-ca/logstash-ca.crt --ca-key logstash-ca/logstash-ca.key --dns dns1,dns2,dns3 --pem

openssl pkcs8 -in logstash-ca.key -topk8 -nocrypt -out logstash.pkcs8.key

I don’t understand the error… Thanks for all and quickly replies!

Alejandro Gonzalez
September 14, 2020 6:57 am

Jorge:

I believe you need to replace your “dns names” with the appropriate instance names (as resolved by DNS) in the following line:

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert logstash-ca/logstash-ca.crt --ca-key logstash-ca/logstash-ca.key --dns dns1,dns2,dns3 --pem

You would need to put, for example, --dns logstash1,logstash2 if those are the names of your instances, as resolved by your DNS server.

Yes, that is clear. But my question is: do I have to replace it with the machine hostnames (Jorge.domain.com) or with my node.name (logstash1-domain) in logstash.yml? Because in my case they are not the same.
I have 3 servers with 3 Elasticsearch and 3 Logstash instances installed (Kibana only on server 1). I created the certs with the hostnames of the machines as resolved by the DNS server.

Thanks and regards!

Alejandro Gonzalez
September 14, 2020 9:41 am

As long as the DNS can resolve the node name, it should be fine the way you’re entering the names. I have never encountered that error myself, so I’m running out of ideas, but it appears the CA you’re using with the certificates is perhaps not the same as the one configured on Filebeat + Logstash? I saw a similar issue reported on one of the Elasticsearch forums, and in the end the person reporting it was able to solve it by redoing their certificates. Perhaps worth a look: https://discuss.elastic.co/t/secure-filebeat-to-logstash/242899/18

How can we configure a 3 Logstash secure nodes?

Alejandro Gonzalez
September 7, 2020 9:08 am

Hi, Jorge!

Please refer to my answer on the question above.

Thank you!


how do you create the es-ca.crt for logstash configuration?

Alejandro Gonzalez
September 7, 2020 9:09 am

Hi, Norman! Thanks for your question.

I’ve updated the post to reflect this step, as some people were struggling with this part of the process; you can find the extraction instructions in the updated post.

I hope this works well for you.

Have a nice one!


There is no explanation here as to how you ended up with the logstash-ca.crt?
Going through this article, my progress stops the moment this cert is mentioned?

None of the commands listed here generates these, and as such the command here;
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert logstash-ca/logstash-ca.crt --ca-key logstash-ca/logstash-ca.key --dns eslogstash1,eslogstash2 --pem
Does not work, as the logstash-ca.crt was never created/does not exist?

Having followed these steps from start both this article and others, I have gotten ES secured behind certificates, both for transport and HTTP. Grafana is even talking to ES, but Metricbeats setup remains a mystery.

This guide, although detailed, is not user-friendly enough and does not account for how many certificates are created.

At this point; openssl pkcs12 -in elastic-certificates.p12 -out /etc/logstash/logstash.pem -clcerts -nokeys
suddenly afterwards, your config defines; xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/es-ca.crt
But the command above does not create es-ca.crt, so how was it created?

Alejandro Gonzalez
September 7, 2020 9:11 am

Hi, Emil!

I acknowledge your issue and I apologize for the lack of clarity. As this question was something some other people were asking as well, I’ve updated the original post with the instructions on how to extract the certificates from the bundle. You can check it from the post or just follow the instructions pasted here:

In order to extract the individual certificate, key and CA from the .p12 bundle, we can use the following commands to obtain them:

Obtain the key:
openssl pkcs12 -in elastic-certificates.p12 -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > logstash-ca.key
Obtain the CA:
openssl pkcs12 -in elastic-certificates.p12 -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > logstash-ca.crt
Obtain the node certificate:
openssl pkcs12 -in elastic-certificates.p12 -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > logstash.crt

(Please note: the certificates are the same for Elasticsearch and for Logstash, so you can just rename logstash-ca.crt to es-ca.crt if / when required, or give any other desired name).
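As a sanity check, you can verify that the extracted node certificate and key actually pair up. A minimal, self-contained sketch follows; it creates a throwaway demo.key/demo.crt pair so it runs anywhere, and you would substitute your real logstash.crt / logstash-ca.key files:

```shell
# Throwaway key/cert pair, only so the check below can run standalone:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo.key -out demo.crt -days 1

# A certificate and a key belong together when their RSA moduli match:
openssl x509 -in demo.crt -noout -modulus | openssl md5
openssl rsa  -in demo.key -noout -modulus | openssl md5
# If the two digests differ, the extracted pieces don't pair up.
```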

Please let me know if this works for you! :)


An amazing walkthrough; very few I've seen are as detailed.
One question, however: can you clarify how you created the es-ca.crt certificate authority referenced in the logstash.yml config? There's no mention of it anywhere else that I can see.

Alejandro Gonzalez
September 7, 2020 9:12 am

Hi, Evan! Thank you for your feedback, it’s greatly appreciated.

As some people were struggling with this part of the process, I've updated the post with the instructions to do so. You can check them there or just see here:

These are the same extraction commands as in my reply to Emil above: use openssl pkcs12 on the elastic-certificates.p12 bundle to obtain the key (logstash-ca.key), the CA (logstash-ca.crt) and the node certificate (logstash.crt).

(Please note: the certificates are the same for Elasticsearch and for Logstash, so you can just rename logstash-ca.crt to es-ca.crt if / when required, or give any other desired name).

I hope this clarifies your question. Thank you!


Hi everybody,

The CA cert can be obtained by generating the initial certificates within the ELK cluster:

bin/elasticsearch-certutil cert --keep-ca-key --pem --in

When we generated our SSL certificates, we provided the --keep-ca-key option, which means the certs.zip file contains a ca/ca.key file alongside the ca/ca.crt file. If you ever decide to add more nodes to your Elasticsearch cluster, you'll want to generate additional node certificates, and for that you will need both of those "ca" files as well as the password you used to generate them.
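To illustrate that signing flow with plain openssl rather than certutil (a sketch only; the file names ca.crt/ca.key and the node name esdata4 are examples, and the throwaway CA below stands in for your real one):

```shell
# Throwaway CA standing in for the real ca/ca.crt + ca/ca.key pair:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout ca.key -out ca.crt -days 1

# Key + CSR for a hypothetical new node:
openssl req -newkey rsa:2048 -nodes -subj "/CN=esdata4" \
  -keyout esdata4.key -out esdata4.csr

# The CA signs the CSR, producing the node certificate:
openssl x509 -req -in esdata4.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out esdata4.crt -days 1

# The new certificate should chain back to the CA:
openssl verify -CAfile ca.crt esdata4.crt
```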

I hope this can help.

Cheers!

Stephen Wilkins
August 28, 2020 2:26 pm

I was looking for a proper guide to achieve this and I was going mad, but then I found this very nice piece of work and everything was clear and straightforward! Many thanks to the author, who clearly has deep knowledge of the matter!

Alejandro Gonzalez
September 7, 2020 9:14 am

Stephen:

Thank you so much for your kind feedback! I’m really glad this helped you to secure your environment.

Regards!


Thanks for the blog, it's really helpful.

Quick question: why is the Logstash output pointing to the esmaster nodes? I thought it should go to the data nodes instead.

output {
  elasticsearch {
    ssl => true
    ssl_certificate_verification => true
    cacert => '/etc/logstash/logstash.pem'
    hosts => ["esmaster1:9200","esmaster2:9200","esmaster3:9200"]
  }
}

Alejandro Gonzalez
October 1, 2020 5:10 am

Hi, Saisurya, thank you for your kind comment!

This is a great catch. In general, we want the master nodes to have as little interaction with external load as possible, so they can focus 100% on keeping the cluster in a consistent state at all times; that's why we don't want to overload them. If you look at the diagram at the beginning of the post, I meant to send the Logstash output to the coordinating nodes (as opposed to the data or master nodes). The role of a coordinating node is simply to redirect requests to the appropriate node (the one that is available to receive information, the one that is most likely not busy, etc.), so the best approach is to send the Logstash output to those coordinating nodes.

Of course, due to the nature of Elasticsearch you could send data to *any* node in the cluster (coordinating, master, data), but that wouldn't be a best practice, so we want to stay away from it. There is a good amount of information about node roles at https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
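Applied to the snippet in the question, the output section would point at the coordinating nodes instead (a sketch; the escoord1/escoord2 hostnames are the ones used in the certificate command earlier in the post):

```
output {
  elasticsearch {
    ssl => true
    ssl_certificate_verification => true
    cacert => '/etc/logstash/logstash.pem'
    hosts => ["escoord1:9200","escoord2:9200"]
  }
}
```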

I'll ask the blog editor to change the master nodes to coordinating nodes in the Logstash output configuration to avoid any future confusion. Thank you very much for your contribution!

Best regards.


[ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Host name '139.162.11.6' does not match the certificate subject provided by the peer (CN=instance)"}
Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.

How can I solve these issues? Thanks.

Alejandro Gonzalez
October 9, 2020 5:22 am

Hi, Randi!

It appears that perhaps you didn't create the certificates with the DNS names of your instances. Please double-check the certificate creation (around these lines):

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns esmaster1,esmaster2,esmaster3,esdata1,esdata2,esdata3,escoord1,escoord2,eslogstash1,eslogstash2

Perhaps this is what is missing. Good luck!
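To see which names a certificate actually carries (the check behind that "does not match the certificate subject" error), you can inspect its subjectAltName extension. A self-contained sketch with a throwaway cert (the esmaster hostnames are just examples; -addext/-ext require OpenSSL 1.1.1+):

```shell
# Throwaway certificate carrying DNS SANs, standing in for a node cert:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=instance" \
  -addext "subjectAltName=DNS:esmaster1,DNS:esmaster2" \
  -keyout san.key -out san.crt -days 1

# The hostname (or IP) a client connects to must appear among these SANs:
openssl x509 -in san.crt -noout -ext subjectAltName
```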


Hi Alejandro

Thanks for the feedback. Yes, correct: I created the certificates with the --ip flag and now it succeeds. But when I try Logstash against three Elasticsearch master nodes, I get the error again. If I try with just a single master node, it runs well.


Yes, I'm running with the --ip flag. It was successful for the two Kibana instances, but when I try Logstash, it only works with a single Elasticsearch node; with three nodes I get an error.


Alejandro, many thanks for your tutorial. But please add a notice to run the openssl setup on the machine where Logstash is installed. Yesterday I generated the certificates on the machine where Elasticsearch is installed, and I struggled for three days. Thanks, Alejandro.

Alejandro Gonzalez
October 13, 2020 6:29 am

Randi:

I'm glad you've been successful in setting up and securing your stack. OpenSSL is a requirement when working with certificates; I'm sorry you had to struggle to get it done, and I'll make sure to include a note about this.

Thank you and good luck!


Hi, I followed your tutorial and I set up the Elasticsearch nodes and Kibana just fine. However, I am experiencing difficulties while configuring Logstash. I have generated all the appropriate certificates and copied them to the Logstash machine (I have an ELK setup in which all the nodes run on separate VMs in GCP and communicate via a private network).
The exact problem is:
[ERROR] 2020-10-18 19:49:53.122 [Converge PipelineAction::Create] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}

Those logs are displayed when I run Logstash manually with the conf file for debugging purposes (to see the logs).


Thanks for this guide. Do you know how to secure Elastic using your Microsoft Windows CA instead of creating a CA using certutil?

Thanks


I followed your post but I am not able to connect Logstash to Elasticsearch. I have Elasticsearch, Logstash and Kibana installed on the same server, and Filebeat on the client machine.

[DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://server.domain:9200/, :path=>"/"}
[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://server.domain:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://server.domain:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Also, after configuring Elasticsearch and Kibana, these aren't working:
https://esmaster1:9200/_cluster/health
https://eskibana1:5601/app/kibana/

But I can log in to Kibana just fine. Also, if the stack isn't secured with SSL, the logs get forwarded to the ELK server just fine.


Did you resolve this issue? I am also facing the same issue. Could you please provide the solution?


Hi,
Thanks for this useful guide. I have a question and would appreciate any guidance. I have an Elasticsearch instance without X-Pack enabled, but it is secured; mTLS is enabled. How can I connect to this Elasticsearch instance from another client, like ElastAlert? How can I generate the client certificate and key? Can I use steps similar to the one below?

openssl pkcs12 -in elastic-certificates.p12 -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > client.crt


Hi, can you tell me which OpenSSL version you are using to generate the certs/keys to secure communications between Logstash and Elasticsearch? Thank you.

Tomislav Simnett
October 14, 2021 9:13 am

This is really clear, thank you. However, where did the logstash.key file come from when securing the Beats-to-Logstash piece? I can't see a previous reference to it.


In this command:

openssl pkcs8 -in logstash.key -topk8 -nocrypt -out logstash.pkcs8.key

Where does logstash.key come from? I don’t see that filename show up anywhere else above.

