Arnaud Loos


At some point, after probably dozens of test Elasticsearch instances, you’ll want to actually deploy a cluster into production. If you’re now responsible for a production cluster, you’ll need to protect against credential harvesting and stray curl DELETE requests that can make all your indices disappear. Thus the motivation for purchasing X-Pack.

Throughout this post we’ll generate certificates for Elasticsearch (a root CA plus a certificate for each node signed by that CA), enable authentication, change the built-in account passwords, secure ES node-to-node communication (port 9300 traffic), force HTTPS queries to ES (port 9200 traffic), modify Kibana and Logstash to talk to the secured cluster, and then secure the Kibana front-end.

I’m making a few assumptions before we start. X-Pack should already be installed (it ships with Elasticsearch by default in recent versions), and you should have applied your license or enabled the 30-day trial license. You might as well enable monitoring while you’re at it. I’m also assuming you have no Machine Learning jobs running.

Also note that there are several realms you can use for authentication. I’ll be using the native realm, which is the default; it simply means Elasticsearch stores accounts and passwords in a local index.

In this scenario I have 3 cluster members. One is a master-only node and the other two are combined master and data nodes.
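For reference, node roles live in each host’s elasticsearch.yml. This is a minimal sketch assuming a pre-7.9 release, where roles are still set with the node.master and node.data flags:

# elasticsearch.yml on the master-only node
node.master: true
node.data: false

# elasticsearch.yml on the two master+data nodes
node.master: true
node.data: true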

Generating Certificates

SSH into one of your Elasticsearch hosts. Create a file to be used as a template and enter the information for each Elasticsearch host in your cluster. In my elasticsearch.yml I specify my hosts by IP address. If you use a hostname instead, I believe you want the FQDN listed in the dns: section as an additional entry: - "elastic1.mydomain.com".

sudo nano ~/cert-gen.yml

instances:
  - name: "elastic1" 
    dns:
      - "elastic1"
    ip:
      - "192.168.1.11"
  - name: "elastic2"
    dns:
      - "elastic2"
    ip:
      - "192.168.1.12"
  - name: "elastic3"
    dns:
      - "elastic3"
    ip:
      - "192.168.1.13"

Now generate both the CA certificate and the node certificates.

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --in cert-gen.yml --keep-ca-key

I opted not to use a password for the certificates; you may wish to.

If you want to use a commercial or organization-specific CA instead, you can use the elasticsearch-certutil csr command to generate certificate signing requests (CSRs) for the nodes in your cluster. See the Elastic documentation for more info.
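As a sketch, the csr mode can reuse the same instances file we created above (check your version’s flags):

/usr/share/elasticsearch/bin/elasticsearch-certutil csr --in cert-gen.yml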

You should now have a certificate-bundle.zip file.

Configuring Node-to-Node Encryption

Let’s unzip the certificate bundle.

sudo apt install unzip
unzip certificate-bundle.zip

You should have a ca folder, as well as a folder for each host.

The first thing we’ll do is go into the ca folder and convert our p12 certificate to PEM format. This will be necessary for the Kibana and Logstash servers.

cd ca
openssl pkcs12 -in ca.p12 -clcerts -nokeys -chain -out ca.pem

Copy the relevant node certificate to each Elasticsearch node, and copy the ca.pem certificate to your Kibana and Logstash servers.
I’ll scp the files to my user’s home directory (where that user has permission to write files), and then on each host I’ll create a certs directory in /etc/elasticsearch/ and copy the cert there. Each Elasticsearch host only needs its own p12 file, not the CA file.

scp elastic2.p12 user@elastic2:/home/user

On each host, SSH in, create /etc/elasticsearch/certs, and copy over the certificate.
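Something like the following should work on each node (the elasticsearch group comes with the package install; the exact permissions are my choice, the key point being that the elasticsearch user must be able to read the file):

sudo mkdir -p /etc/elasticsearch/certs
sudo cp ~/elastic2.p12 /etc/elasticsearch/certs/
sudo chown -R root:elasticsearch /etc/elasticsearch/certs
sudo chmod 640 /etc/elasticsearch/certs/elastic2.p12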

Stop Logstash so no more data is being sent to Elasticsearch.

Log in to Kibana, go to Dev Tools, and perform a synced flush.

POST _flush/synced

Keep running this command until there are no more failures.

We’re now going to shut down the entire cluster. Elasticsearch will not start re-allocating shards until the index.unassigned.node_left.delayed_timeout value has expired, which is one minute by default. Hopefully you’re able to shut down all your hosts in under a minute. If not, you can temporarily disable allocation of replicas, as sketched below.
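This is a sketch of disabling replica allocation from Dev Tools before the shutdown, following the rolling-upgrade procedure:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

Once the cluster is back up and healthy, set the value back to null (or "all") to re-enable allocation.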

sudo systemctl stop elasticsearch.service

On each host edit elasticsearch.yml and add the following lines, being sure that the certificate path and name are correct for that host (elastic1.p12 on elastic1, elastic2.p12 on elastic2, and so on).

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic1.p12 
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic1.p12
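Our certificates contain each node’s DNS name and IP, so the default (full) hostname verification should pass. If your nodes address each other by names that aren’t in the certificates, you can relax verification to CA-only checking with one more line (certificate is a weaker mode than the default full):

xpack.security.transport.ssl.verification_mode: certificate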

Once you’ve modified all the nodes we can start bringing them back up. I’ll start with my master-only node since it has no data. Since I have my minimum master nodes set to 2 in elasticsearch.yml, it should start up and wait, since the minimum threshold has not been met.
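For reference, on 6.x that threshold is the following elasticsearch.yml setting (7.x manages master quorum automatically):

discovery.zen.minimum_master_nodes: 2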

Start the first host.

sudo systemctl start elasticsearch.service

View the logs.

tail -f /var/log/elasticsearch/cluster-name.log

You should see a message saying the host is waiting on the minimum master nodes.

Bring up the second node. If you’re watching the log file on the first node you should see the second node come up and a master elected. Now bring up the third node.

On the master host the log file should show the cluster going from red to yellow, and eventually yellow to green.

Before we can log in to Kibana and look around we’ll need to set the built-in account passwords.

Set Built-in Account Passwords

On one of your Elasticsearch hosts run:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Your cluster needs to be running and healthy or the command will throw an error.

You’ll be prompted to change the passwords for the following users: elastic, kibana, logstash_system, beats_system, apm_system, and remote_monitoring_user. You should probably have these passwords decided on beforehand.

Once complete, this information is stored in the .security index in Elasticsearch.

You cannot run the elasticsearch-setup-passwords command a second time. Instead, you can update passwords from the Management > Users UI in Kibana, use the change password API, or delete the entire .security index and start over.
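For example, a sketch of changing a password via the API from Dev Tools (on 6.x the endpoint is _xpack/security rather than _security):

POST _security/user/logstash_system/_password
{
  "password": "<new-password-here>"
}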

Now point your web browser to http://<ES_Host>:9200. You should be prompted to log in to Elasticsearch. Use the built-in elastic user and the password you just specified, and you should see the cluster’s JSON info response returned.

Now go into your kibana.yml file and uncomment the following lines:

elasticsearch.username: "kibana"
elasticsearch.password: "<your password here>"

Instead of writing the password in kibana.yml you can add it to the Kibana keystore. See the Elastic website for details.
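A sketch, assuming a package install with the binaries under /usr/share/kibana:

/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password

You can then remove elasticsearch.password from kibana.yml entirely.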

Now log into Kibana by visiting the website. You should be prompted to enter credentials. Use elastic for the user and supply the password.

(Screenshot: Kibana login page)

Under Management in the left-hand menu you’ll now see a new Security section with Users and Roles.

Take a minute to create an account for yourself and assign it the superuser role. Remember that the principle of least privilege applies here as well. While it’s important to have a superuser account, you should also create another account for yourself with fewer privileges for daily use.

I’ll also create a role for my Logstash writers named logstash_hostname by selecting “New Role”.
For cluster privileges, add manage_index_templates and monitor.
For indices privileges, add write, delete, and create_index on the indices Logstash will write to (e.g. logstash-*).
Now create a user with the same name and assign it this new role. This is what we will use in our Logstash output {} to connect to Elasticsearch.
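Equivalently, a sketch of creating the role through the role API in Dev Tools (again, _xpack/security on 6.x; logstash-* is an assumption about your index naming):

POST _security/role/logstash_hostname
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["logstash-*"],
      "privileges": ["write", "delete", "create_index"]
    }
  ]
}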

We’ve now enabled authentication and transport layer encryption for internal node communications.

Configuring HTTP Security

We’ll continue by enabling HTTPS for every client that wants to make requests to the cluster. Note that I could have enabled this at the same time as transport layer security, but doing it in two steps aids troubleshooting should something go wrong, and it allows you to run the cluster for a few days with only transport security enabled if you wish.

This procedure mimics a rolling upgrade. We’ll only be shutting down a single host at a time, so the cluster stays up. Any clients (Logstash, Kibana, etc.) will break until we configure them with the new settings, so you should stop their services now.

This won’t have any effect on inter-node communication since we already secured that. That means after each host goes down and comes back up we’ll wait until the cluster status is green again before proceeding to the next host.

SSH to the first host and stop the Elasticsearch service. Keep in mind that Kibana connects to a specific host; it’ll be beneficial to save that host for last.

Edit the config file and add the following lines, again substituting each host’s own certificate file.

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/elastic1.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/certs/elastic1.p12 

Start the Elasticsearch service. Run tail -f /var/log/elasticsearch/cluster-name.log to check for any issues. Log in to Kibana, go to Monitoring, and check the cluster status. Wait until it’s green before proceeding to the next host.

Kibana will continue to work until you change the settings on the host it connects to. If you save this host for last you can check the cluster status at each step until the last.
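To spot-check a converted node from the command line, you can query it with curl using our CA file (the host IP and ca.pem path are from the examples above):

curl --cacert ca.pem -u elastic "https://192.168.1.11:9200/_cluster/health?pretty"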

Kibana

Now we’ll configure Kibana to both connect to Elasticsearch securely as well as require HTTPS for the front-end.

For the front-end certificate I needed one trusted by my browsers, so I used my Enterprise certificate authority to generate a new certificate. This allows all the clients in my domain to trust it automatically. You can either do the same or purchase just this single certificate from a trusted third party. I have a few recommendations in my software list.

Hopefully you copied over the ca.pem file that we generated earlier and moved it to /etc/kibana/certs/. If not, scroll up and do that now.

We’re now going to change the ownership and permissions on the certs directory and its files.

cd /etc/kibana/
sudo chown -R root:kibana certs/
sudo chmod -R 750 certs/

Kibana and Logstash both require execute permissions on their certs directories.

Modify Kibana to connect to ES via HTTPS.

elasticsearch.hosts: ["https://<your_elasticsearch_host>:9200"]
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca.pem

If you have issues you can disable certificate validation in the configuration (elasticsearch.ssl.verificationMode: none), but since all the node certificates were signed by this CA, everything should validate cleanly.

I copied over a .key and .crt file from my Enterprise CA to use for the front-end and set the following.

server.ssl.enabled: true
server.ssl.key: /etc/kibana/certs/server.key
server.ssl.certificate: /etc/kibana/certs/server.crt

Restart the service and connect to your Kibana front-end over HTTPS.

Logstash

This process will closely mimic the Kibana setup.

Start by changing the certs directory ownership and permissions.

cd /etc/logstash
sudo chown -R root:logstash certs/
sudo chmod -R 750 certs/

Now with our certificate in place, we can modify the output section of our Logstash configuration.

I created a new logstash_hostname user from the Kibana UI to assign to Logstash. Follow the directions earlier in this post to do the same, or create the logstash_writer role and user described in the Elastic docs.

output {
  elasticsearch {
    hosts => ["https://<elasticsearch_host>:9200"]
    user => "logstash_hostname"
    password => "<password-here>"
    ssl => true
    cacert => "/etc/logstash/certs/ca.pem"
  }
}
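As with Kibana, you can keep the password out of the config file by using the Logstash keystore. This is a sketch for a package install, with ES_PWD as a name of my choosing:

sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_PWD

Then reference it in the output block as password => "${ES_PWD}".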

Now restart Logstash: sudo systemctl restart logstash

Check the logs to make sure Logstash is able to connect to Elasticsearch: sudo tail -f /var/log/logstash/logstash-plain.log

And that’s it: everyone’s communicating over encrypted channels. Well, almost everyone. Monitoring traffic, if enabled, is still unencrypted. The recommendation there is to send monitoring metrics to a separate cluster, which means generating a CA cert for that cluster and configuring all of our servers to send data to a different IP. We’ll save that for another post.