Tungsten Replicator: MariaDB Master-Master and Master-Slave Topologies

Posted in: Technical Track

A common concern in the MySQL community is how best to implement high availability for MySQL. There are various built-in mechanisms to accomplish this, such as Master/Master and Master/Slave replication using binary logs, as well as FOSS solutions such as Galera and Tungsten, to name a few. Often, IT managers and DBAs alike opt to avoid a third-party solution because of the perceived administrative overhead, without fully evaluating the available options. In today’s blog post, I would like to describe the process of configuring a Master/Slave topology and then switching to a Master/Master topology with Tungsten Replicator.

Tungsten Replicator is a well-known tool that has gained much acclaim for enterprise MySQL deployments; however, many teams shy away from it for fear of over-complicating the replication topology. Below, I have listed and described all of the steps required to configure a replication topology for 1 to N nodes (today’s how-to guide covers a 2-node implementation, but I also describe the additional steps required to extend these topologies to N nodes).

The two nodes I will be using are vm128-142 and vm129-117. The first part of this guide contains the steps that need to be performed on both nodes, and the latter part describes the steps to be performed on just one of the two. Once Tungsten Replicator has been installed on both nodes with the same configuration files, the switch is as simple as “one, two, three” – all it requires is running the cookbook script that configures the topology of your choice (the scripts are sketched after the list below). The main topologies available are:

  • Master – Slave: Replication flowing from a single Master to 1 .. N Slave nodes
  • Master – Master: Bi-directional replication among 1 .. N Master nodes
  • Star Topology: A central node acts as a hub and all spokes are Master nodes
  • Fan-in Topology: A single Slave node receiving replication from 1 .. N Master nodes

(Check out https://code.google.com/p/tungsten-replicator/wiki/TRCMultiMasterInstallation for further details)
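
For reference, each of these topologies maps to an installer script in the cookbook directory of the replicator tarball we extract below. The master/slave and all-masters installers are the two exercised in this post; the star and fan-in names are taken from my copy of the cookbook, so confirm them with a quick listing once the tarball is unpacked:

ls cookbook/install_*
 # cookbook/install_master_slave -> Master - Slave
 # cookbook/install_all_masters  -> Master - Master (bi-directional)
 # cookbook/install_star         -> Star topology
 # cookbook/install_fan_in       -> Fan-in topology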

So, let’s continue with the actual steps required (please note that I’m using the “root” account with passwordless SSH authentication for the purposes of this article; on production systems it is best to define a dedicated user). Parameters and values such as hostnames, credentials, and ports need to be customized for your system and topology, and configuration file contents are shown indented beneath the command that opens each file:


### The following commands should be executed on all nodes (vm128-142 & vm129-117 in this how-to)

su - root
cd /root # or alternatively to a place like /opt/ or /usr/local/
vi /etc/yum.repos.d/MariaDB.repo

 # MariaDB 5.5 CentOS repository list - created 2014-08-25 16:59 UTC
 # https://mariadb.org/mariadb/repositories/
 [mariadb]
 name = MariaDB
 baseurl = https://yum.mariadb.org/5.5/centos6-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1
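
As a quick optional sanity check, confirm that yum can see the newly added repository before installing anything from it:

yum repolist enabled | grep -i mariadb # should list the "mariadb" repo defined above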

vi /etc/security/limits.conf

 # add the following line
 * - nofile 65535
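
The new nofile limit only applies to fresh login sessions, so after logging out and back in you can verify that it took effect:

ulimit -n # should report 65535 in a new session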

yum update

yum install wget MariaDB-server MariaDB-client ruby openssh-server rsync 
yum install java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64 
yum install https://www.percona.com/downloads/XtraBackup/LATEST/binary/redhat/6/x86_64/percona-xtrabackup-2.2.3-4982.el6.x86_64.rpm
ln -s /usr/bin/innobackupex /usr/bin/innobackupex-1.5.1
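
Tungsten Replicator needs a working Java runtime, and the symlink above provides the innobackupex-1.5.1 name that the replicator’s backup tooling expects. A quick check that both are in place (exact version strings will vary with the packages installed):

java -version # expect an OpenJDK 1.7 runtime
xtrabackup --version # expect Percona XtraBackup 2.2.x
ls -l /usr/bin/innobackupex-1.5.1 # confirms the symlink created above resolves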

wget https://downloads.tungsten-replicator.org/download.php?file=tungsten-replicator-2.2.1-403.tar.gz
tar -xzvf download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
rm download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
cd tungsten-replicator-2.2.1-403/

vi cookbook/COMMON_NODES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 export NODE1=vm128-142.dlab.pythian.com
 export NODE2=vm129-117.dlab.pythian.com
 #export NODE3=host3
 #export NODE4=host4
 #export NODE5=host5
 #export NODE6=host6
 #export NODE7=host7
 #export NODE8=host8
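
This two-node file is all that today’s how-to needs. To scale the same recipe to N nodes, uncomment and populate the additional NODE entries (the hostname below is a hypothetical example) and keep the file identical on every node:

 export NODE3=vm130-001.dlab.pythian.com # hypothetical third node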

vi cookbook/USER_VALUES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 # User defined values for the cluster to be installed.

 cookbook_dir=$(dirname $0 )

 # Where to install Tungsten Replicator
 export TUNGSTEN_BASE=/opt/tungsten-replicator/installs/cookbook

 # Directory containing the database binary logs
 export BINLOG_DIRECTORY=/var/lib/mysql

 # Path to the script that can start, stop, and restart a MySQL server
 export MYSQL_BOOT_SCRIPT=/etc/init.d/mysql

 # Path to the options file
 export MY_CNF=/etc/my.cnf

 # Database credentials
 export DATABASE_USER=tungsten
 export DATABASE_PASSWORD=tungsten
 export DATABASE_PORT=3306

 # Name of the service to install
 export TUNGSTEN_SERVICE=cookbook

 # Replicator ports
 export RMI_PORT=10000
 export THL_PORT=2112

 # If set, replicator starts after installation
 [ -z "$START_OPTION" ] && export START_OPTION=start

 ##############################################################################
 # Options used by the "direct slave " installer only
 # Modify only if you are using 'install_master_slave_direct.sh'
 ##############################################################################
 export DIRECT_MASTER_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_SLAVE_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_MASTER_MY_CNF=$MY_CNF
 export DIRECT_SLAVE_MY_CNF=$MY_CNF
 ##############################################################################

 ##############################################################################
 # Variables used when removing the cluster
 # Each variable defines an action during the cleanup
 ##############################################################################
 [ -z "$STOP_REPLICATORS" ] && export STOP_REPLICATORS=1
 [ -z "$REMOVE_TUNGSTEN_BASE" ] && export REMOVE_TUNGSTEN_BASE=1
 [ -z "$REMOVE_SERVICE_SCHEMA" ] && export REMOVE_SERVICE_SCHEMA=1
 [ -z "$REMOVE_TEST_SCHEMAS" ] && export REMOVE_TEST_SCHEMAS=1
 [ -z "$REMOVE_DATABASE_CONTENTS" ] && export REMOVE_DATABASE_CONTENTS=0
 [ -z "$CLEAN_NODE_DATABASE_SERVER" ] && export CLEAN_NODE_DATABASE_SERVER=1
 ##############################################################################


 #
 # Local values defined by the user.
 # If ./cookbook/USER_VALUES.local.sh exists,
 # it is loaded at this point

 if [ -f $cookbook_dir/USER_VALUES.local.sh ]
 then
 . $cookbook_dir/USER_VALUES.local.sh
 fi
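
As the tail of the file shows, any of these values can be overridden without editing USER_VALUES.sh itself by creating a USER_VALUES.local.sh alongside it. A minimal example, with placeholder values you would replace with your own:

 # ./cookbook/USER_VALUES.local.sh (optional; example values only)
 export TUNGSTEN_BASE=/data/tungsten/installs/cookbook
 export DATABASE_PASSWORD=use_a_stronger_password_here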

service iptables stop 

 # or open ports listed below:
 # 3306 (MySQL database)
 # 2112 (Tungsten THL)
 # 10000 (Tungsten RMI)
 # 10001 (JMX management)
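
If disabling the firewall outright is not acceptable, a minimal alternative on CentOS 6 is to open just those four ports instead (adapt the rules to your existing firewall policy):

iptables -I INPUT -p tcp -m multiport --dports 3306,2112,10000,10001 -j ACCEPT
service iptables save # persist the rule across restarts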

vi /etc/my.cnf.d/server.cnf

 # These groups are read by MariaDB server.
 # Use it for options that only the server (but not clients) should see
 #
 # See the examples of server my.cnf files in /usr/share/mysql/
 #

 # this is read by the standalone daemon and embedded servers
 [server]

 # this is only for the mysqld standalone daemon
 [mysqld]
 open_files_limit=65535
 innodb-file-per-table=1
 server-id=1 # make server-id unique per server
 log_bin
 innodb-flush-method=O_DIRECT
 max_allowed_packet=64M
 innodb-thread-concurrency=0
 default-storage-engine=innodb
 skip-name-resolve

 # this is only for embedded server
 [embedded]

 # This group is only read by MariaDB-5.5 servers.
 # If you use the same .cnf file for MariaDB of different versions,
 # use this group for options that older servers don't understand
 [mysqld-5.5]

 # These two groups are only read by MariaDB servers, not by MySQL.
 # If you use the same .cnf file for MySQL and MariaDB,
 # you can put MariaDB-only options here
 [mariadb]

 [mariadb-5.5]

service mysql start
mysql -uroot -p -e"CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';"
mysql -uroot -p -e"GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;"
mysql -uroot -p -e"FLUSH PRIVILEGES;"
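
Because the replicator connects to each database server over the network, it is worth confirming that the tungsten account can log in remotely from the other node (run the mirror-image command from vm129-117):

mysql -utungsten -ptungsten -h vm129-117.dlab.pythian.com -e"SELECT 1;" # from vm128-142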

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub | ssh vm129-117 'cat >> ~/.ssh/authorized_keys' # from vm128-142
cat ~/.ssh/id_rsa.pub | ssh vm128-142 'cat >> ~/.ssh/authorized_keys' # from vm129-117
chmod 600 ~/.ssh/authorized_keys # on both nodes
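
Before validating the cluster, make sure the key exchange worked; each node should be able to reach the other without a password prompt:

ssh vm129-117 hostname # from vm128-142; should print the remote hostname without prompting
ssh vm128-142 hostname # from vm129-117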

cookbook/validate_cluster # this is the command used to validate the configuration

vi cookbook/NODES_MASTER_SLAVE.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 CURDIR=`dirname $0`
 if [ -f $CURDIR/COMMON_NODES.sh ]
 then
 . $CURDIR/COMMON_NODES.sh
 else
 export NODE1=
 export NODE2=
 export NODE3=
 export NODE4=
 export NODE5=
 export NODE6=
 export NODE7=
 export NODE8=
 fi

 export ALL_NODES=($NODE1 $NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)
 # indicate which servers will be masters, and which ones will have a slave service
 # in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
 # These values are used for automated testing

 #for master/slave replication
 export MASTERS=($NODE1)
 export SLAVES=($NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)
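
The all-masters installer used further below reads its node list from a companion file (cookbook/NODES_ALL_MASTERS.sh in my copy of the cookbook), where both arrays list every node for a bi-directional setup. The sketch below shows roughly what that looks like; check the file shipped with your tarball rather than copying this verbatim:

 #for all-masters (bi-directional) replication
 export MASTERS=(${ALL_NODES[*]})
 export SLAVES=(${ALL_NODES[*]})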

### The following commands should be performed on just one of the nodes
### In my case, either vm128-142 OR vm129-117

cookbook/install_master_slave # to install master / slave topology
cookbook/show_cluster # here we see master - slave replication running

 --------------------------------------------------------------------------------------
 Topology: 'MASTER_SLAVE'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 cookbook [master] seqno: 1 - latency: 0.514 - ONLINE
 # node vm129-117.dlab.pythian.com
 cookbook [slave] seqno: 1 - latency: 9.322 - ONLINE
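
At this point a quick smoke test confirms that events flow from the master to the slave; the tr_test schema name is just a throwaway example:

mysql -uroot -p -e"CREATE DATABASE tr_test;" # on the master, vm128-142
mysql -uroot -p -e"SHOW DATABASES LIKE 'tr_test';" # on the slave, vm129-117 - should list tr_test
mysql -uroot -p -e"DROP DATABASE tr_test;" # clean up on the master once verified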

cookbook/clear_cluster # run this to destroy the current Tungsten cluster 

cookbook/install_all_masters # to install master - master topology 
cookbook/show_cluster # and here we've switched over to master - master replication

 --------------------------------------------------------------------------------------
 Topology: 'ALL_MASTERS'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 alpha [master] seqno: 5 - latency: 0.162 - ONLINE
 bravo [slave] seqno: 5 - latency: 0.000 - ONLINE
 # node vm129-117.dlab.pythian.com
 alpha [slave] seqno: 5 - latency: 9.454 - ONLINE
 bravo [master] seqno: 5 - latency: 0.905 - ONLINE
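
Beyond show_cluster, the trepctl utility installed with the replicator reports per-service detail. The path below reflects the TUNGSTEN_BASE used in this guide and the default install layout, so adjust it if yours differs:

export PATH=$PATH:/opt/tungsten-replicator/installs/cookbook/tungsten/tungsten-replicator/bin
trepctl services # one summary block per replication service (alpha, bravo) on this node
trepctl -service alpha status # detailed status, including appliedLastSeqno and state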
