Creating a multimaster MariaDB replica
This topic covers the process of creating a two-node MariaDB cluster. The cluster will be configured so that the Soffid console can use either database node; each node replicates its data changes to the other one. The actions below alternate between the two nodes, so each step states the node it runs on.
**Step 1 (node 1).** Create and set up a MariaDB instance on node 1.

**Step 2 (node 1).** Configure MariaDB to generate binary log files. Add the following lines to `/etc/mysql/my.cnf`:

```
server-id = 1
```
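On its own, `server-id` does not switch binary logging on. A minimal `[mysqld]` sketch, assuming a Debian-style layout; the `log_bin` line and its path are assumptions, not part of the original instructions:

```
[mysqld]
server-id = 1
# Enable the binary log; this basename/path is an assumption
log_bin = /var/log/mysql/mariadb-bin
```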
**Step 3 (node 1).** Restart MariaDB:

```
service mysql restart
```
**Step 4 (node 2).** Create and set up a MariaDB instance on node 2.

**Step 5 (node 2).** Configure MariaDB to generate binary log files. Add the following lines to `/etc/mysql/my.cnf` (the same settings as on node 1, with a different server id):

```
server-id = 2
```
**Step 6 (node 2).** Restart MariaDB:

```
service mysql restart
```
**Step 7 (node 2).** Create a user for node 1 to fetch data from node 2. From the mysql client on node 2, execute:

```
grant replication slave on *.* to replication_user@<NODE1-IP>;
set password for replication_user@<NODE1-IP> = password('<NODE1-PASS>');
```
**Step 8 (node 1).** Create a user for node 2 to fetch data from node 1. From the mysql client on node 1, execute:

```
grant replication slave on *.* to replication_user@<NODE2-IP>;
set password for replication_user@<NODE2-IP> = password('<NODE2-PASS>');
```
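If you want to double-check an account before wiring up replication, `SHOW GRANTS` works from the same mysql session. For example, on node 1 (an illustrative check, not part of the original procedure):

```
SHOW GRANTS FOR replication_user@'<NODE2-IP>';
```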
**Step 9 (node 1).** Query the current binary log position:

```
MariaDB [(none)]> show master status;
```

The returned `File` and `Position` values will be used on node 2 to start the replication process; the result should look like the sample below.
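A sample result, with illustrative values (the file name depends on the configured binary log basename, and the position will differ):

```
+--------------------+----------+--------------+------------------+
| File               | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------------+----------+--------------+------------------+
| mariadb-bin.000001 |      642 |              |                  |
+--------------------+----------+--------------+------------------+
```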
**Step 10 (node 2).** Start replication from node 1 to node 2. From the mysql client on node 2, execute a `CHANGE MASTER TO` statement, replacing the placeholders with the proper values, as sketched below.
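A minimal sketch of the statement, assuming the account created in step 8 and the `File`/`Position` values obtained in step 9; `START SLAVE` then begins fetching and applying changes:

```
CHANGE MASTER TO
  MASTER_HOST='<NODE1-IP>',
  MASTER_USER='replication_user',
  MASTER_PASSWORD='<NODE2-PASS>',
  MASTER_LOG_FILE='<File value from step 9>',
  MASTER_LOG_POS=<Position value from step 9>;
START SLAVE;
```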
**Step 11 (node 2).** Verify that the replica is working, by executing:

```
SHOW SLAVE STATUS \G
```

Check the following lines:

```
Slave_IO_Running: Yes
```
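Though only `Slave_IO_Running` is listed here, its companion line `Slave_SQL_Running` should also read `Yes`: the first thread fetches changes from the other master and the second applies them, and both must be running for the replica to be healthy.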
**Step 12 (node 2).** Query the current binary log position on node 2:

```
MariaDB [(none)]> show master status;
```

The result has the same form as the sample in step 9; the returned `File` and `Position` values will be used on node 1 to start the replication process.
**Step 13 (node 1).** Now, start replication from node 2 to node 1. From the mysql client on node 1, execute a `CHANGE MASTER TO` statement, replacing the placeholders with the proper values, as sketched below.
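The mirror image of the sketch in step 10, assuming the account created in step 7 and the values obtained in step 12:

```
CHANGE MASTER TO
  MASTER_HOST='<NODE2-IP>',
  MASTER_USER='replication_user',
  MASTER_PASSWORD='<NODE1-PASS>',
  MASTER_LOG_FILE='<File value from step 12>',
  MASTER_LOG_POS=<Position value from step 12>;
START SLAVE;
```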
**Step 14 (node 1).** Verify that the replica is working, by executing:

```
SHOW SLAVE STATUS \G
```

Check the same `Slave_IO_Running` and `Slave_SQL_Running` lines as in step 11.
**Step 15 (node 1).** Now, create and seed the SC_SEQUENCE table on node 1. This sequence will generate the values 1, 11, 21, 31, 41, and so on. Create the table (the `CREATE TABLE` definition is sketched below), then seed it:

```
INSERT INTO SC_SEQUENCE VALUES (1, 100, 10);
```
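A minimal sketch of the table definition, with hypothetical column names inferred from the seed row (next value, reserved block size, increment); check the actual DDL shipped with Soffid:

```
-- Hypothetical column names; adjust to Soffid's actual schema
CREATE TABLE `SC_SEQUENCE` (
  `SEQ_NEXT`      BIGINT NOT NULL,  -- next value to hand out (1 on node 1, 2 on node 2)
  `SEQ_CACHE`     BIGINT NOT NULL,  -- how many values are reserved at a time
  `SEQ_INCREMENT` BIGINT NOT NULL   -- step between values (10, so the nodes never collide)
) ENGINE=InnoDB;
```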
**Step 16 (node 2).** Now, create and seed the SC_SEQUENCE table on node 2, with the same definition as on node 1. This sequence will generate the values 2, 12, 22, 32, 42, and so on:

```
INSERT INTO SC_SEQUENCE VALUES (2, 100, 10);
```
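The offsets are what make the multimaster setup safe: with an increment of 10, node 1 hands out 1, 11, 21, … while node 2 hands out 2, 12, 22, …, so identifiers generated concurrently on the two masters can never collide. A sketch of the fetch-and-advance pattern a client would run, reusing the hypothetical column names assumed above:

```
-- Reserve the current value and advance the sequence atomically
START TRANSACTION;
SELECT SEQ_NEXT FROM SC_SEQUENCE FOR UPDATE;
UPDATE SC_SEQUENCE SET SEQ_NEXT = SEQ_NEXT + SEQ_INCREMENT;
COMMIT;
```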
**Step 17 (both nodes).** Now, install heartbeat to create a floating IP address that connects the Soffid console to the database. Create the `/etc/ha.d/ha.cf` file:

```
autojoin none
bcast eth0
warntime 3
deadtime 6
initdead 60
keepalive 1
node <NODE1-NAME>
node <NODE2-NAME>
crm respawn
```

Alternatively, instead of a floating IP, configure the Soffid console to use the following JDBC URL, which tries the listed hosts in order:

```
jdbc:mariadb:sequential://mariadb-host-1,mariadb-host-2/soffid
```

Create the security token on node 1:

```
( echo -ne "auth 1\n1 sha1\n"; dd if=/dev/urandom bs=512 count=1 | openssl sha1 | cut --delimiter=' ' --fields=2 ) > /etc/ha.d/authkeys
chmod 600 /etc/ha.d/authkeys
```
**Step 18 (both nodes).** Restart the heartbeat and pacemaker services on both nodes.
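Assuming the same sysvinit-style service management used for MariaDB above:

```
service heartbeat restart
service pacemaker restart
```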
**Step 19 (both nodes).** Check the cluster status on each node by executing:

```
crm_mon -1
```

It should look like:

```
Last updated: Mon Dec 26 19:52:24 2016
Last change: Wed Oct 21 15:11:31 2015 via cibadmin on logpmgid01v
Stack: heartbeat
Current DC: <node 1 name> - partition with quorum
Version: 1.1.10-42f2063
2 Nodes configured
0 Resources configured

Online: [ <node 1 name> <node 2 name> ]
```
**Step 20 (either node).** Disable stonith, since this two-node cluster has no fencing device configured:

```
crm configure property stonith-enabled=false
```
**Step 21 (either node).** Add the floating IP to the cluster:

```
crm configure
crm(live)configure# primitive site_one_ip IPaddr params ip=<FLOATING-IP> cidr_netmask="<NETMASK>" nic="eth0"
crm(live)configure# location site_one_ip_pref site_one_ip 100: <NODE1-NAME>
crm(live)configure# commit
crm(live)configure# exit
```
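After the commit, `crm_mon -1` should show the new resource started on the preferred node. A sketch of the expected line (the exact formatting varies by pacemaker version):

```
site_one_ip (ocf::heartbeat:IPaddr): Started <NODE1-NAME>
```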