In this post I will explain how to do an in-place migration to Tungsten. We will start with a three-node MySQL topology (1 master, 2 slaves) using native MySQL replication, and convert it into a three-node Tungsten cluster. I will use a Red Hat-based distro, but the steps should be similar if you are using Ubuntu.
1. First of all, confirm MySQL replication is correctly running on all nodes:
show slave status\G
2. Check that the Tungsten prerequisites are met:
a. Check the OS is supported
b. Check the database version is supported
c. Check RAM requirements are met
d. Check disk requirements are met
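The resource checks above can be sketched in shell. This is only a rough sketch: the 2 GB RAM threshold is my assumption for a small test cluster, so confirm the real requirements against the Continuent documentation for your Tungsten version.

```shell
# Rough resource check; the 2 GB threshold is an assumption, not an
# official requirement -- check the Continuent docs for your version.
mem_mb=$(awk '/^MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_mb=$(df -m /opt 2>/dev/null | awk 'NR==2 {print $4}')
echo "RAM: ${mem_mb} MB, free disk under /opt: ${disk_mb:-?} MB"
if [ "$mem_mb" -ge 2048 ]; then
  echo "RAM check: OK"
else
  echo "RAM check: below 2 GB"
fi
```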
Next, we will prepare the user environment.
e. Configure file and process limits by adding the following lines to /etc/security/limits.conf
tungsten - nofile 65535
mysql - nofile 65535
tungsten - nproc 8096
mysql - nproc 8096
These changes to limits.conf require a MySQL restart to take effect for the mysqld process.
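Once mysqld has been restarted, you can confirm the effective limits from a fresh session for each affected user; a quick sketch:

```shell
# Show effective limits for the current session; compare against the
# values set in limits.conf (nofile 65535, nproc 8096)
nofile=$(ulimit -n)
nproc=$(ulimit -u)
echo "open files limit: $nofile"
echo "max user processes: $nproc"
```

For the running mysqld itself you can also inspect /proc/&lt;pid&gt;/limits.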
f. Set swappiness to a “good” value
echo 5 > /proc/sys/vm/swappiness
vi /etc/sysctl.conf
vm.swappiness=5
g. If you are using iptables, add required network ports by running the following commands:
## THL replication
iptables -I INPUT 4 -p tcp -m tcp --dport 2112 -j ACCEPT
## Manager RMI
iptables -I INPUT 4 -p tcp -m tcp --dport 7800:7805 -j ACCEPT
iptables -I INPUT 4 -p tcp -m tcp --dport 9997 -j ACCEPT
## Replication connection listener port
iptables -I INPUT 4 -p tcp -m tcp --dport 10000:10001 -j ACCEPT
## Tungsten manager
iptables -I INPUT 4 -p tcp -m tcp --dport 11999:12000 -j ACCEPT
## Tungsten connector
iptables -I INPUT 4 -p tcp -m tcp --dport 9999 -j ACCEPT
## Verify rules
iptables -vnL
h. Install JDK and Ruby packages. Verify that the installed versions are supported.
yum install java-1.7.0-openjdk ruby
java -version
ruby --version
i. Add sudo configuration for tungsten:
vi /etc/sudoers.d/tungsten
Defaults !requiretty
tungsten ALL=(root) NOPASSWD: ALL
Verify that you can become root as the tungsten user:
sudo su -
3. Prepare MySQL configuration
vi /etc/my.cnf
sync_binlog=1
max_allowed_packet = 52M
Create the MySQL users. Make sure they are propagated to all nodes:
-- tungsten replicator user
CREATE USER tungsten@'%' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO tungsten@'%' WITH GRANT OPTION;
-- tungsten connector/application user
CREATE USER app_user@'%' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO app_user@'%';
REVOKE SUPER ON *.* FROM app_user@'%';
4. Install the Tungsten package and verify install files are in place
rpm -ivh /tmp/tungsten.rpm
ls /opt/continuent/software
Installing the Tungsten rpm copies the install files to /opt/continuent/software and creates the tungsten user.
It will only run the installation process automatically if /etc/tungsten.ini or /etc/tungsten/tungsten.ini is present (which is not yet the case here).
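You can confirm which of the two behaviors the rpm install will take by checking for those files; a trivial sketch:

```shell
# tpm auto-runs only when one of these ini files exists; at this point
# in the migration neither should be present yet
found=""
for f in /etc/tungsten.ini /etc/tungsten/tungsten.ini; do
  if [ -e "$f" ]; then
    echo "$f present: installation would run automatically"
    found=yes
  fi
done
if [ -z "$found" ]; then
  echo "no ini file yet: the rpm only copies the install files"
fi
```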
5. Each node needs to be able to connect to the others via password-less ssh, so generate a key as the tungsten user on one node and copy it to the other nodes:
## on one node only
su - tungsten
ssh-keygen
cat .ssh/id_rsa.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
Propagate the keys to the other nodes:
scp -pr .ssh tungsten@mysql2:/opt/continuent/
scp -pr .ssh tungsten@mysql3:/opt/continuent/
Adjust permissions on the destination nodes as root:
chown -R tungsten:tungsten /opt/continuent/.ssh
Tungsten supports two different installation methods: a staging host, where you push the configuration and files from a central location, and .ini files, where each host is installed and configured independently. We will use the .ini file method.
6. Create tungsten.ini file on all nodes.
As we are migrating from a native MySQL installation we need to specify start-and-report=false.
The connector is usually installed on each app server, but for simplicity I am installing it on each db node here. You can use any available port; I am using 9999 in this example.
vi /etc/tungsten/tungsten.ini

[defaults]
application-user=app_user
application-password=password
application-port=9999
replication-user=tungsten
replication-password=password
replication-port=3306
home-directory=/opt/continuent/
user=tungsten
start-and-report=false
# this is needed only for the migration
skip-validation-check=MySQLNoMySQLReplicationCheck

[testcluster]
master=mysql1
members=mysql1,mysql2,mysql3
connectors=mysql1,mysql2,mysql3
7. Run tpm install on each node (in no particular order).
Note that all tpm and trepctl commands must be run as the tungsten user:
su - tungsten
/opt/continuent/software/continuent-tungsten-4.0.0-2667425/tools/tpm install
Exit your session and log in again so the new environment variables for the tungsten user are picked up.
At this point the cluster is installed but not functional.
We need to have each Tungsten replicator use the correct binlog coordinates and replace MySQL replication with Tungsten.
Note that the procedure listed in the official docs doesn't work properly.
8. Confirm that native replication is working on all slave nodes.
As the tungsten user, the tpm mysql command opens a MySQL CLI connection to the local MySQL server, using the user, password, and port you specified under replication-user, replication-password, and replication-port in tungsten.ini:
-- slave 1
echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
egrep 'Master_Host|Last_Error|Relay_Master_Log_File|Exec_Master_Log_Pos|Slave_SQL_Running|Slave_IO_Running'
Master_Host: 192.168.57.101
Relay_Master_Log_File: mysql-bin.000005
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Error:
Exec_Master_Log_Pos: 1122
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

-- slave 2
echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
egrep 'Master_Host|Last_Error|Relay_Master_Log_File|Exec_Master_Log_Pos|Slave_SQL_Running|Slave_IO_Running'
Master_Host: 192.168.57.101
Relay_Master_Log_File: mysql-bin.000005
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Error:
Exec_Master_Log_Pos: 1123
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
9. On the master only, start all Tungsten processes and put Replicator online.
Set the cluster policy to maintenance to prevent any automatic reconfiguration of services. Remember to run any Tungsten commands as the tungsten user.
startall
trepctl online
echo 'set policy maintenance' | cctrl
On the master, check the status (the state should be ONLINE and pendingError should be none). Also inspect the THL entries to verify that transactions are being correctly recorded.
Tungsten Replicator will start reading the MySQL binary log from the current position, creating the corresponding THL event data entries, which you can verify with the thl utility (for example, thl list).
10. On each slave, the native replication SQL thread will have stopped with an error.
Manually stop replication (the SQL thread is already stopped, so this stops the IO thread), and check the position relative to the master where the SQL thread stopped. It should be the same position on every slave.
echo "STOP SLAVE;" | tpm mysql
echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
egrep ' Master_Host| Last_Error| Relay_Master_Log_File| Exec_Master_Log_Pos'
Master_Host: 192.168.57.101
Relay_Master_Log_File: mysql-bin.000007
Last_Error: Error 'Table 'tungsten_testcluster.heartbeat' doesn't exist' on query...
Exec_Master_Log_Pos: 120
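Since every slave must have stopped at the same master position before Tungsten takes over, here is a small sketch that extracts and compares Exec_Master_Log_Pos. The sample status text is a stand-in: in practice you would pipe the real output of echo 'SHOW SLAVE STATUS\G' | tpm mysql from each slave into get_pos.

```shell
# get_pos extracts Exec_Master_Log_Pos from SHOW SLAVE STATUS output
get_pos() { awk '/Exec_Master_Log_Pos/ {print $2}'; }

# Sample output standing in for: echo 'SHOW SLAVE STATUS\G' | tpm mysql
slave1="Relay_Master_Log_File: mysql-bin.000007
Exec_Master_Log_Pos: 120"
slave2="Relay_Master_Log_File: mysql-bin.000007
Exec_Master_Log_Pos: 120"

pos1=$(printf '%s\n' "$slave1" | get_pos)
pos2=$(printf '%s\n' "$slave2" | get_pos)
if [ "$pos1" = "$pos2" ]; then
  echo "Positions match: $pos1"
else
  echo "WARNING: positions differ ($pos1 vs $pos2)"
fi
```

If the positions differ, do not proceed until you understand why; the lagging slave may simply need a moment to apply its remaining relay log.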
11. On each slave, start the Tungsten replicator.
It will create the tungsten metadata database and tables, and automatically start from the correct position. Verify it is working properly with the status command:
replicator start
trepctl online
trepctl status
12. Start the remaining processes on each host (Manager and Connector), and verify the status is ok with cctrl.
Note that the manager can take a couple of seconds to start.
startall
echo ls | cctrl
13. Remove the native MySQL replication configuration from all slaves:
echo "RESET SLAVE ALL;" | tpm mysql
14. Test that you can connect to the database through the Tungsten Connector:
mysql -uapp_user -ppassword -happserver1 -P9999
15. At this point you would modify all application servers to start using the Tungsten Connector. Once that is complete, you may set the policy to automatic so that auto-failover is enabled. The policy setting is global, so execute it only once (from any node):
echo 'set policy automatic' | cctrl