MySQL High Availability with HAProxy, Consul and Orchestrator


Introduction

In this post we will explore one approach to MySQL high availability with HAProxy, Consul and Orchestrator.
Let’s briefly go over each piece of the puzzle first:
– HAProxy is usually installed on the application servers or on an intermediate connection layer, and is in charge of connecting the application to the appropriate backend (reader or writer). The most common deployment I’ve seen uses separate ports for writes (which are routed to the master) and reads (which are load balanced over a pool of slaves).
– Orchestrator monitors the topology and performs automatic recovery as needed.
– Consul is told the identity of the new master by Orchestrator. By leveraging consul-template, we can then propagate that information to HAProxy.
The key piece here is how we make HAProxy aware that a topology change has happened, and the answer lies with Consul (and consul-template); the resulting flow is sketched below.
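To make the interaction between the pieces concrete, this is roughly the flow of information on a master change (my own summary of the setup described in this post):

Orchestrator (detects failure, promotes a new master)
        |
        v
Consul KV (mysql/master/<cluster> is updated)
        |
        v
consul-template (watches the key, re-renders haproxy.cfg)
        |
        v
HAProxy reload (writer port now points at the new master)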

Proof of concept

For this POC, I set up three test servers running both MySQL and Consul: mysql1, mysql2 and mysql3. On server mysql3 I also installed HAProxy, Orchestrator and consul-template.
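For reference, these are the hosts and roles used throughout (the IPs match the configs below):

mysql1  192.168.56.100  MySQL, Consul
mysql2  192.168.56.101  MySQL, Consul
mysql3  192.168.56.102  MySQL, Consul, HAProxy, Orchestrator, consul-template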

Installing Consul

1. Install Consul on mysql1, mysql2 and mysql3

$ sudo yum -y install unzip 
$ sudo useradd consul
$ sudo mkdir -p /opt/consul 
$ sudo touch /var/log/consul.log 
$ cd /opt/consul
$ sudo wget https://releases.hashicorp.com/consul/1.0.7/consul_1.0.7_linux_amd64.zip
$ sudo unzip consul_1.0.7_linux_amd64.zip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log

2. Bootstrap the Consul cluster from one node. I’ve picked mysql3 here:

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql3",
  "server": true,
  "ui": true,
  "bootstrap": true,
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.102"  
}
 
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'

3. Start Consul on mysql1 and have it join the cluster

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql1",  
  "server": true,
  "ui": true,
  "bootstrap": false,   
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.100"  
}
 
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102

4. Start Consul on mysql2 and have it join the cluster

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql2", 
  "server": true,
  "ui": true,
  "bootstrap": false,   
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.56.101"
}
 
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join 192.168.56.102
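Backgrounding the agent with a shell redirect is fine for a POC; for a real deployment you would likely run it under systemd instead. A minimal unit file sketch (my own addition, not part of the original setup):

$ sudo vi /etc/systemd/system/consul.service

[Unit]
Description=Consul agent
After=network-online.target

[Service]
User=consul
ExecStart=/usr/local/bin/consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d
Restart=on-failure

[Install]
WantedBy=multi-user.target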

At this point we have a working three-node Consul cluster. We can test writing k/v pairs to it and reading them back:

$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar
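We can also confirm that all three agents see each other (the output below is approximately what to expect with the IPs used in this POC):

$ consul members
Node    Address              Status  Type    Build  Protocol  DC   Segment
mysql1  192.168.56.100:8301  alive   server  1.0.7  2         dc1  <all>
mysql2  192.168.56.101:8301  alive   server  1.0.7  2         dc1  <all>
mysql3  192.168.56.102:8301  alive   server  1.0.7  2         dc1  <all>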

Configuring Orchestrator to write to Consul

Luckily, Orchestrator has built-in support for Consul, so there is little assembly required. The only caveat is that Orchestrator writes the values only when a master change happens, so the first time around we need to populate them manually by calling orchestrator-client.

1. Configure Orchestrator to write to Consul on each master change. Add the following lines to the Orchestrator config:

$ vi /etc/orchestrator.conf.json
  "KVClusterMasterPrefix": "mysql/master",
  "ConsulAddress": "127.0.0.1:8500",

2. Restart Orchestrator

$ service orchestrator restart

3. Populate the current master value manually

$ orchestrator-client -c submit-masters-to-kv-stores

4. Check the stored values from the command line

$ consul kv get mysql/master/testcluster
mysql1:3306
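The same value can also be retrieved through Consul's HTTP API, which is handy for debugging from a host that doesn't have the consul binary installed:

$ curl -s http://127.0.0.1:8500/v1/kv/mysql/master/testcluster?raw
mysql1:3306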

Using consul-template to manage HAProxy

Since we have HAProxy running on mysql3, we need to install consul-template on that host to manage the HAProxy config. The idea is to have consul-template render the HAProxy configuration file from a template, and reload HAProxy whenever that configuration changes.
For HAProxy, I am setting up two different pools here: the master is reachable through HAProxy via port 3307, while the slaves are accessible over port 3308.

1. Install Consul template on mysql3

$ sudo mkdir -p /opt/consul-template
$ cd /opt/consul-template
$ sudo wget https://releases.hashicorp.com/consul-template/0.19.4/consul-template_0.19.4_linux_amd64.zip
$ sudo unzip consul-template_0.19.4_linux_amd64.zip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template

2. Create a template for the HAProxy config file

$ vi /opt/consul-template/templates/haproxy.ctmpl
 
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
 
defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000
 
frontend writer-front
    bind *:3307
    mode tcp
    default_backend writer-back
 
frontend stats-front
    bind *:80
    mode http
    default_backend stats-back
 
frontend reader-front
    bind *:3308
    mode tcp
    default_backend reader-back
 
backend writer-back
    mode tcp
    option httpchk
    server master {{key "mysql/master/testcluster"}} check port 9200 inter 12000 rise 3 fall 3
 
backend stats-back
    mode http
    balance roundrobin
    stats uri /haproxy/stats
    stats auth user:pass
 
backend reader-back
    mode tcp
    balance leastconn
    option httpchk
    server slave1 192.168.56.101:3306 check port 9200 inter 12000 rise 3 fall 3
    server slave2 192.168.56.102:3306 check port 9200 inter 12000 rise 3 fall 3
    server master 192.168.56.100:3306 check port 9200 inter 12000 rise 3 fall 3
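Note that the checks above assume an HTTP health check service answering on port 9200 of each MySQL host (for example Percona's clustercheck script exposed through xinetd). Setting that up is outside the scope of this post, but a typical xinetd definition looks roughly like this:

$ sudo vi /etc/xinetd.d/mysqlchk

service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
    per_source     = UNLIMITED
}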

3. Create the consul-template config file

$ vi /opt/consul-template/config/consul-template.cfg
 
consul {
  auth {
    enabled = false
  }
 
  address = "127.0.0.1:8500"
 
  retry {
    enabled = true
    attempts = 12
    backoff = "250ms"
    max_backoff = "1m"
  }
 
  ssl {
    enabled = false
  }
}
 
reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"
 
wait {
  min = "5s"
  max = "10s"
}
 
template {
  source = "/opt/consul-template/templates/haproxy.ctmpl"
  destination = "/etc/haproxy/haproxy.cfg"
  command = "sudo service haproxy reload || true"
  command_timeout = "60s"
  perms = 0600
  backup = true 
  wait = "2s:6s"
}
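Before letting consul-template manage the live config, it is worth checking that the template renders correctly. The -dry flag prints the rendered output to stdout instead of writing the destination file, and -once exits after the first render:

$ consul-template -config=/opt/consul-template/config/consul-template.cfg -dry -once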

4. Give the consul user sudo permissions so consul-template can reload HAProxy

$ sudo vi /etc/sudoers
 
consul ALL=(root) NOPASSWD:/usr/bin/lsof, ...,/sbin/service haproxy reload
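You can confirm the rule is in place by listing the consul user's sudo privileges (output abridged):

$ sudo -l -U consul | grep haproxy
    (root) NOPASSWD: ... /sbin/service haproxy reload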

5. Start consul-template

$ sudo mkdir -p /var/log/consul-template
$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &

And those are all the pieces we need. The next step is performing a master change (e.g. via the Orchestrator GUI) and watching the effects:

[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
2018/04/17 12:56:25.863912 [INFO] (runner) rendered "/opt/consul-template/templates/haproxy.ctmpl" => "/etc/haproxy/haproxy.cfg"
2018/04/17 12:56:25.864024 [INFO] (runner) executing command "sudo service haproxy reload || true" from "/opt/consul-template/templates/haproxy.ctmpl" => "/etc/haproxy/haproxy.cfg"
2018/04/17 12:56:25.864078 [INFO] (child) spawning: sudo service haproxy reload
Redirecting to /bin/systemctl reload  haproxy.service

What happened? Orchestrator updated the key in Consul, consul-template detected the change and re-rendered the HAProxy config file, and then reloaded HAProxy so it picks up the new master.
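At this point we can verify the routing end to end. Assuming an application account exists on the MySQL servers (the app user below is illustrative, not part of the original setup), a query through each HAProxy port should land on the expected backend:

$ mysql -h mysql3 -P 3307 -u app -p -e "SELECT @@hostname"   # writer port: returns the current master
$ mysql -h mysql3 -P 3308 -u app -p -e "SELECT @@hostname"   # reader port: returns a server from the read pool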

Conclusion

HAProxy is still being widely used as a proxy/load balancer in front of MySQL, so it’s nice to be able to combine it with Orchestrator and Consul to put together a high availability solution.

While this is a viable alternative, for a new deployment I usually recommend going with ProxySQL instead. For one, you get the benefit of a graceful switchover without returning any errors to the application. The setup is also a bit easier, as there are fewer moving parts with ProxySQL Cluster (one could get rid of Consul). Finally, having a SQL-aware proxy opens up more interesting possibilities like read/write splitting and query mirroring.

 


Comments

Shlomi Noach
April 23, 2018 4:06 pm

Check out all the KV entries written by orchestrator to Consul via `consul kv get -recurse mysql/master/testcluster`. As an example, you will see something like:

mysql/master/testcluster:mysql1.myservice.com:3306
mysql/master/testcluster/hostname:mysql1.myservice.com
mysql/master/testcluster/ipv4:10.0.0.23
mysql/master/testcluster/ipv6:
mysql/master/testcluster/port:3306
You may want to use this breakdown within the consul-template so as to have finer grained config file generation.

Also worth noting that when using orchestrator/raft, each raft node will apply the KV change independently. If you have multiple, independent Consul setups (e.g. completely isolated Consul cluster in each datacenter), orchestrator/raft is your means to distribute the KV change to all Consul clusters.

