The process consists of adding a new DC with the changed num_tokens value, decommissioning the old DCs one by one, and letting Cassandra's automatic mechanisms distribute the existing data onto the new nodes.
The procedure below assumes that you have two DCs, DC1 and DC2.
Procedure:
1. Run repair to keep data consistent across the cluster
Make sure to run a full repair with nodetool repair. This ensures that all data is propagated from the datacenter that is being decommissioned.
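For example, on each node (the -full flag applies to Cassandra 2.2+, where incremental repair is the default; on older versions plain nodetool repair is already a full repair):

nodetool repair -full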
2. Add new DC DC3 and decommission old DC DC1
Step 1: Download and install the same Cassandra version as on the other nodes in the cluster, but do not start it.
Note: Don't stop any node in DC1 until DC3 has been added. If you used the Debian package, Cassandra starts automatically; you must stop the node and clear the data. How to stop Cassandra is shown below.
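A sketch of stopping a node, depending on the install method (the service name and process pattern are typical-setup assumptions; adjust for your environment):

# Package (Debian/RHEL) install:
sudo service cassandra stop
# Tarball install:
pkill -f CassandraDaemon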
Step 2: Clear the data from the default directories once the node is down.
sudo rm -rf /var/lib/cassandra/*
Step 3: Set the parameters below in cassandra.yaml (a sketch of the resulting file follows this list):
- seeds: This should include nodes from a live DC, because the new nodes have to stream data from them.
- endpoint_snitch: Keep it the same as on the nodes in the live DCs.
- cluster_name: Same as on the nodes in the other live DCs.
- num_tokens: The new number of vnodes required.
- initial_token: Make sure this is commented out.
Set the node-local parameters below:
- auto_bootstrap: false
- listen_address: Local to the node
- rpc_address: Local to the node
- data_file_directories: Local to the node
- saved_caches_directory: Local to the node
- commitlog_directory: Local to the node
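A minimal sketch of those settings for a node in DC3 (the cluster name, addresses, token count, snitch choice, and paths are placeholder assumptions; match them to your environment):

cluster_name: 'Production Cluster'
num_tokens: 16
# initial_token:                    # leave commented out
auto_bootstrap: false               # not in the default file; add it explicitly
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.1.10,10.0.2.10"   # nodes from the live DCs
listen_address: 10.0.3.10           # this node's own address
rpc_address: 10.0.3.10
endpoint_snitch: GossipingPropertyFileSnitch
data_file_directories:
    - /var/lib/cassandra/data
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_directory: /var/lib/cassandra/commitlog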
Step 4: In cassandra-rackdc.properties, set the parameters for the new datacenter and rack (example below):
- dc: "dc name"
- rack: "rack name"
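For example, on the new nodes (the DC and rack names are placeholders):

dc=DC3
rack=RACK1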
Step 5: Start Cassandra on the new nodes, then alter each keyspace so that its replication includes the new datacenter:
ALTER KEYSPACE Keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3, 'dc3' : 3};
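Before rebuilding, you can confirm that the new datacenter has joined the ring; the DC3 nodes should show as UN (Up/Normal) with near-zero load:

nodetool status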
Step 6: Finally, now that the nodes are up and empty, run nodetool rebuild on each node to stream data from the existing datacenter.
nodetool rebuild "Existing DC Name"
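For example, with DC2 as the source datacenter, run on each DC3 node:

nodetool rebuild DC2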
Step 7: After the whole process completes, either remove auto_bootstrap: false from each cassandra.yaml or set it to true:
auto_bootstrap: true
Decommission DC1:
- First of all, ensure that the clients point to an existing datacenter.
- Set the local datacenter in DCAwareRoundRobinPolicy so that no requests are routed to the DC being decommissioned (see the driver sketch below).
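A minimal client-side sketch using the DataStax Java driver 3.x (the contact point, DC name, and driver version are assumptions; other drivers expose an equivalent policy):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class Dc2OnlyClient {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.2.10")           // a node in a surviving DC
                .withLoadBalancingPolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("dc2")     // pin requests to DC2
                                .build())
                .build();
        Session session = cluster.connect();
        // ... queries issued through this session stay in dc2 ...
        cluster.close();
    }
}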
- Alter each keyspace so that it no longer references the datacenter being decommissioned:
ALTER KEYSPACE "Keyspace_name" WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc2' : 3, 'dc3' : 3};
- Run nodetool decommission on every node in DC1, one node at a time:
nodetool decommission
- Once a node is decommissioned, stop Cassandra on it and clear the data, saved caches, and commitlog directories:
sudo rm -rf <data_directory> <saved_caches_directory> <commitlog_directory>
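Assuming the default package-install locations (verify the actual paths in your cassandra.yaml), that is:

sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/saved_caches /var/lib/cassandra/commitlog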
3. Add new DC DC4 and decommission old DC2
Repeat the steps above to add a new DC DC4 with the changed num_tokens value, and then decommission DC2 the same way DC1 was decommissioned.
Hopefully, this blog post will help you understand the procedure for changing the number of vnodes on a live cluster. Keep in mind that the time taken by the bootstrapping/rebuilding/decommissioning process depends on the data size.
Comments:
Thanks Payal for sharing your thoughts on this topic. However, I feel the step with auto_bootstrap may not be needed, provided you first add all the nodes in DC3 into the ring and then alter the keyspace. Also, there is of course no harm in starting the nodes in this new DC empty.
Also, from our previous experience doing this activity, we noticed that the rebuilds were constantly failing with stream errors. Upon digging, it was revealed that the production TCP settings recommended by DataStax were not in place on the newly built nodes. Once we corrected them, the rebuilds were successful. So I just wanted to mention that!