We all love using databases to store our application data, but what if your database dies and you don't have a backup? Clustering may just help with that. Whilst we highly recommend you still take backups of your data, having a cluster will not only improve speed for both reads and writes, it also provides high availability by distributing changes to different servers. In the event that one of the instances fails, the others are already available and will continue serving your application. Who doesn't love that?!
Galera clusters can be run in two different configurations: active-passive and active-active. In an active-passive setup, all writes are made to a single active server and then copied to one or more passive servers, which take over in the event of an active server failure. In an active-active setup, every server accepts both reads and writes, and changes are replicated between all of them.
We will be configuring our cluster in an active-active setup running on three instances, which is the smallest recommended cluster size.
Before we get started
You will need a total of three instances running Ubuntu 16.04, each with a minimum of 1GB RAM (so our Extra Small instances will be fine). If you have a particularly high-workload application, you may want to choose a larger instance size, as a MySQL Galera cluster stores a lot of information in RAM.
Getting started
We first need to add an external repository to each of our instances as MariaDB isn't patched to include Galera clustering by default.
First, we need to add the repository key using the apt-key command:
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
Now we have the key, we can add the MariaDB repository and then update the package list:
sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://ftp.utexas.edu/mariadb/repo/10.1/ubuntu xenial main'
sudo apt-get update
Installing MariaDB, Galera and their dependencies
Now we have added the repository on all three instances, we can install MariaDB. The good news is that with MariaDB 10.1, the MariaDB and Galera packages are combined, so you can install all the packages and dependencies required with a single command on each instance:
sudo apt-get install mariadb-server
The above command will install all of the dependencies for MariaDB and Galera for you. During the setup process you will be prompted to set a MariaDB administrative user.
Important: whatever password you enter here on the second and third instances will be overwritten by the first instance's credentials once we configure Galera and replication begins.
We also need to install rsync, which Galera uses to transfer state between nodes (it is the synchronization method we will set in the configuration below). This should be installed by default on Ubuntu 16.04, but just in case, the command to install it is:
sudo apt-get install rsync
Configuring the first server (the fun bit)
Now we have MariaDB installed on all three instances, we can start to configure it.
Each instance in the cluster will need to have pretty much an identical configuration. This is a good thing, as we can do all the configuration on one instance and then copy it over to the other two.
We will need to create a Galera configuration file that MariaDB can read and put it in the /etc/mysql/conf.d directory (we'll use vim, but you can use any other editor such as nano).
sudo vim /etc/mysql/conf.d/galera.cnf
You can copy and paste the config below and edit the following sections:
wsrep_cluster_name
wsrep_cluster_address
wsrep_node_address
wsrep_node_name
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="civo_cluster"
wsrep_cluster_address="gcomm://first_internal_ip,second_internal_ip,third_internal_ip"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="this_instance_internal_ip"
wsrep_node_name="this_instance_name"
Once you are happy with your configuration file, copy it to your clipboard, add it to the other two instances, and modify the Galera Node Configuration section on each to match that particular instance's IP address and name.
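If you would rather not hand-edit the file on each node, the per-node values can be templated from the shell. This is a minimal sketch using hypothetical example IPs (10.0.0.1 through 10.0.0.3) and node names in place of your real internal addresses; it writes galera.cnf to the current directory, from where you would move it into /etc/mysql/conf.d/:

```shell
#!/bin/sh
# Sketch: generate galera.cnf for one node from variables.
# CLUSTER_IPS, NODE_IP and NODE_NAME are placeholder values --
# substitute your own instances' internal IPs and hostnames.
CLUSTER_IPS="10.0.0.1,10.0.0.2,10.0.0.3"
NODE_IP="10.0.0.1"
NODE_NAME="mysql-1-civo"

cat > galera.cnf <<EOF
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="civo_cluster"
wsrep_cluster_address="gcomm://${CLUSTER_IPS}"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="${NODE_IP}"
wsrep_node_name="${NODE_NAME}"
EOF
```

Run it on each instance with that node's own NODE_IP and NODE_NAME; only the Galera Node Configuration section differs between the three files.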
By default MySQL sets the bind address to 127.0.0.1 in /etc/mysql/my.cnf - we need to comment this out to ensure the galera.cnf file is used correctly.
Important: This needs to be done on all three instances.
sudo vim /etc/mysql/my.cnf
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address = 127.0.0.1
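If you prefer to script the change rather than edit each file by hand, a sed one-liner can comment the line out. This is a sketch demonstrated on a local sample file; on each instance you would run the same sed command against /etc/mysql/my.cnf (with sudo, and -i.bak if you want a backup):

```shell
#!/bin/sh
# Sketch: comment out any uncommented bind-address line.
# my.cnf here is a local stand-in for /etc/mysql/my.cnf.
cat > my.cnf <<'EOF'
[mysqld]
bind-address            = 127.0.0.1
EOF

# Prefix "# " to lines that start with bind-address.
sed -i 's/^\(bind-address\)/# \1/' my.cnf

grep 'bind-address' my.cnf   # should show the line commented out
```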
Starting the cluster
During the MariaDB installation, the database service will have started automatically. We need to stop it so our cluster can be brought up. On all three of your instances, stop the MariaDB service by doing the following:
sudo systemctl stop mysql
You will not see any output to say MariaDB has stopped. However, you can check the status by running the following command:
sudo systemctl status mysql
You should get something similar to the output below that shows the service is stopped:
Nov 10 16:48:15 mysql-1-civo systemd[1]: Stopped MariaDB database server.
Bringing up the first instance in the cluster
To bring up the first instance, we'll need to use a special startup script. The cluster is configured in such a way that each node that comes online tries to connect to at least one other node specified in the galera.cnf file to get its initial state. Without the special script that allows systemd to pass the --wsrep-new-cluster parameter, a normal systemctl start mysql would fail, because there are no running instances for the first node to connect to. To bootstrap the cluster from this instance, run the following:
sudo galera_new_cluster
Once the command has been run, the instance will now be registered as part of the cluster and we can see if this has been successful by running the following:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
Which should output something like:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
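Since you will be checking this status again as nodes join, it can help to wrap the check in a small shell function that parses the output and compares it to the node count you expect. This is a sketch: the function reads tab-separated output on stdin (as produced by mysql with the -N -B flags), and here we feed it canned sample output rather than a live query:

```shell
#!/bin/sh
# Sketch: parse "SHOW STATUS LIKE 'wsrep_cluster_size'" output and
# compare the reported size to the expected node count in $1.
check_cluster_size() {
    expected="$1"
    # awk picks the value column from "wsrep_cluster_size<TAB>N"
    actual=$(awk '$1 == "wsrep_cluster_size" { print $2 }')
    if [ "$actual" = "$expected" ]; then
        echo "OK: cluster size is $actual"
    else
        echo "WARN: cluster size is ${actual:-unknown}, expected $expected"
        return 1
    fi
}

# In practice you would pipe in live output, e.g.:
#   mysql -u root -p -N -B -e "SHOW STATUS LIKE 'wsrep_cluster_size'" | check_cluster_size 1
# Here we feed it the sample output from above:
printf 'wsrep_cluster_size\t1\n' | check_cluster_size 1
# prints: OK: cluster size is 1
```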
Adding two more instances to the cluster
Now the first instance is online, we can bring up the remaining two instances using the normal method of starting a MariaDB server. They will automatically search for any member in the cluster list that is already online and join the cluster. We can bring up the second instance by running the following command:
sudo systemctl start mysql
Once done, bring up the third instance with the same command. If everything is working as we expect, the cluster size should now have increased to 3 instances:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
The output should look like:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
The entire cluster is now online and communicating. Before we can really test the replication, there is a little housekeeping we need to do. In the current version of MariaDB for Debian-based systems, the servers perform routine maintenance using a special maintenance user. When we first install MariaDB on our instances, the credentials for this user are randomly generated, stored in /etc/mysql/debian.cnf and inserted into MariaDB's database.
Some tidy up before testing replication
When we brought up the other two instances in the cluster, the first instance's credentials were automatically replicated over to them, so the value in their debian.cnf files no longer matches the password set in the database. This means anything that uses the maintenance account will try to connect to the database with the wrong password, which will fail on the second and third instances. A simple fix for this is to copy the first instance's debian.cnf over to the other instances.
On the first node, open the following file:
sudo vim /etc/mysql/debian.cnf
It should look something similar to this:
# Automatically generated for Debian scripts. DO NOT TOUCH!
[client]
host = localhost
user = debian-sys-maint
password = c4ok66WEMfLfIBQg
socket = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host = localhost
user = debian-sys-maint
password = c4ok66WEMfLfIBQg
socket = /var/run/mysqld/mysqld.sock
basedir = /usr
Copy the above to your clipboard and then edit the same file on your second instance. Ignore the warning at the top that says Automatically generated for Debian scripts. DO NOT TOUCH! - we need to make this change for the cluster to work correctly. Delete the content that is already there, paste your clipboard into the file on the second instance and save it. Repeat this for the third instance so all three now have the same debian.cnf file.
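One quick way to confirm the three files really are identical is to compare their checksums. This is a sketch simulated with local stand-in files; on the real cluster you would run md5sum /etc/mysql/debian.cnf on each instance (or fetch the files with scp) and check that all the hashes match:

```shell
#!/bin/sh
# Sketch: verify that copies of debian.cnf are identical by checksum.
# node1.cnf and node2.cnf stand in for the files on two instances.
printf '[client]\nuser = debian-sys-maint\npassword = c4ok66WEMfLfIBQg\n' > node1.cnf
printf '[client]\nuser = debian-sys-maint\npassword = c4ok66WEMfLfIBQg\n' > node2.cnf

if [ "$(md5sum < node1.cnf)" = "$(md5sum < node2.cnf)" ]; then
    echo "debian.cnf copies match"
else
    echo "debian.cnf copies differ - maintenance logins will fail on the mismatched nodes"
fi
```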
Testing Replication
Now we have done the tidy up process, we can start to have fun with the cluster and test that the replication is working as we would like. On the first node let's create a test database and a test table on that database:
root@mysql-1-civo:~# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.1.29-MariaDB-1~xenial mariadb.org binary distribution
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE galerarocks;
Query OK, 1 row affected (0.12 sec)
MariaDB [(none)]> CREATE TABLE galerarocks.testrep ( TestColumn VARCHAR(50) );
Query OK, 0 rows affected (2.62 sec)
Now that's done, we can check whether the database and table have been replicated over to the second and third instances:
root@mysql-2-civo:~# mysql -u root -p galerarocks
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.1.29-MariaDB-1~xenial mariadb.org binary distribution
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [galerarocks]> show tables;
+-----------------------+
| Tables_in_galerarocks |
+-----------------------+
| testrep |
+-----------------------+
1 row in set (0.00 sec)
and
root@mysql-3-civo:~# mysql -u root -p galerarocks
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.1.29-MariaDB-1~xenial mariadb.org binary distribution
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [galerarocks]> show tables;
+-----------------------+
| Tables_in_galerarocks |
+-----------------------+
| testrep |
+-----------------------+
1 row in set (0.00 sec)
Excellent! As we can see, the database has been replicated over to the other two instances in the cluster. You can also write to any one of the instances and the change will replicate over to the other two. We now have a fully functioning Galera cluster!