MySQL – Part 2 – HA with Heartbeat and replication

I. TOPOLOGY

Software version: CentOS 6.5 and MySQL 5.6


MySQL master-slave replication with Heartbeat

II. INSTALL AND CONFIGURE HEARTBEAT

1. Configure network interface on node 1

# configure network card
[root@mysql1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=172.30.6.110
NETMASK=255.255.255.0
DNS1=8.8.8.8
GATEWAY=172.30.6.1
USERCTL=no
# disable SELinux & firewall
[root@mysql1 ~]# setenforce 0
[root@mysql1 ~]# service iptables stop
[root@mysql1 ~]# service ip6tables stop
[root@mysql1 ~]# chkconfig iptables off
[root@mysql1 ~]# chkconfig ip6tables off
# to keep SELinux disabled after reboot, set SELINUX=disabled in /etc/selinux/config
# configure hostname
[root@mysql1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=mysql1.lab.com
# configure host file
[root@mysql1 ~]# vi /etc/hosts
127.0.0.1     localhost
172.30.6.110     mysql1.lab.com mysql1
172.30.6.111     mysql2.lab.com mysql2
# restart the network service and enable it at boot
[root@mysql1 ~]# service network restart
[root@mysql1 ~]# chkconfig network on

2. Configure network interface on node 2

Repeat the steps from node 1, changing the IP address to 172.30.6.111 and the hostname to mysql2.lab.com

3. Install Heartbeat on both node 1 and node 2

Update the repos and install the Heartbeat packages on both nodes (node 1 and node 2):

[root@mysql1 ~]# yum update
[root@mysql1 ~]# yum install epel-release
[root@mysql1 ~]# yum -y install heartbeat

Note 1: If the heartbeat package is not available, you can install the EPEL repo manually:

wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum install heartbeat

Note 2: To check the repo link, please visit

Note 3: If you receive the error message "Cannot retrieve metalink for repository: epel. Please verify its path and try again", try the following to fix it:

yum upgrade ca-certificates --disablerepo=epel

4. Configure Heartbeat on node 1

Basically, configuring Heartbeat requires adding three files to the folder /etc/ha.d. The three files should be identical on both servers running Heartbeat, except that the ucast IP address in ha.cf differs between nodes.

  • /etc/ha.d/ha.cf

[root@mysql1 ~]# vi /etc/ha.d/ha.cf
# the content of ha.cf file as follow – node 1
debugfile     /var/log/ha-debug
logfile         /var/log/ha-log
logfacility     local0
keepalive     2          # time between heartbeats in seconds
deadtime     30          # node is pronounced dead after 30 seconds
initdead     120
udpport         694
ucast         eth0 172.30.6.111 # use eth0 to send unicast to node 2
auto_failback on
node mysql1.lab.com
node mysql2.lab.com

  • /etc/ha.d/authkeys

[root@mysql1 ~]# vi /etc/ha.d/authkeys
# the content of authkeys
auth 2
2 sha1 test-ha
# change the permission of the authkeys file:
[root@mysql1 ~]# chmod 600 /etc/ha.d/authkeys
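Note that test-ha is a fixed, guessable secret; for production, a random key can be generated instead. A minimal sketch, assuming /dev/urandom and sha1sum are available (both are on a stock CentOS install):

```shell
# Generate a random 40-hex-character shared secret for /etc/ha.d/authkeys
key=$(head -c 512 /dev/urandom | sha1sum | cut -d' ' -f1)
# Print the two lines you would place in the authkeys file
printf 'auth 2\n2 sha1 %s\n' "$key"
```

Remember that whatever key you use must be identical on both nodes.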

  • /etc/ha.d/haresources

[root@mysql1 ~]# vi /etc/ha.d/haresources
# the content of haresources
mysql1.lab.com IPaddr::172.30.6.112/24/eth0:0

Note 4: The haresource file tells Heartbeat that the master node is mysql1.lab.com and 172.30.6.112 is the virtual IP

After configuring the three files on node 1, copy them to node 2:

[root@mysql1 ~]# cd /etc/ha.d/
[root@mysql1 ha.d]# scp authkeys ha.cf haresources root@172.30.6.111:/etc/ha.d/

5. Configure Heartbeat on node 2

After copying the three files from node 1, we need to change the content of ha.cf so the ucast line points back at node 1

[root@mysql2 ~]# vi /etc/ha.d/ha.cf
# the content of ha.cf file as follow – node 2
debugfile     /var/log/ha-debug
logfile         /var/log/ha-log
logfacility     local0
keepalive     2
deadtime     30
initdead     120
udpport         694
ucast         eth0 172.30.6.110
auto_failback on
node mysql1.lab.com
node mysql2.lab.com

6. Start Heartbeat service on node 1, node 2 and testing

Start the heartbeat service on both nodes (repeat the following steps on node 2 as well):

[root@mysql1 ~]# service heartbeat start
[root@mysql1 ~]# chkconfig heartbeat on

You should see a new network sub-interface (eth0:0) with the virtual IP 172.30.6.112 on node 1


You can also ping to virtual IP for testing

III. CONFIGURE MYSQL MASTER-SLAVE REPLICATION

1. Install MySQL 5.6 on node 1, node 2

On node 1, login into the server and download the yum repo rpm package. The URL is http://dev.mysql.com/downloads/repo/

[root@mysql1 ~]# wget http://repo.mysql.com/mysql-community-release-el6-5.noarch.rpm
[root@mysql1 ~]# rpm -ivh mysql-community-release-el6-5.noarch.rpm
[root@mysql1 ~]# yum install mysql-server
[root@mysql1 ~]# service mysqld start
[root@mysql1 ~]# mysql_secure_installation
Then follow the wizard to complete the setup (the initial MySQL root password is blank)

After completing the installation, we will create a new database on node 1 and use it to test the replication between the two nodes.

Login to MySQL, then create a new database named 'andb' and a table 'name' in this db:

[root@mysql1 ~]# mysql -u root -p
mysql> create database andb;
mysql> create table andb.name (realname nvarchar(10));

On node 2, repeat the installation steps above. DON'T CREATE the table 'name'; it will be replicated from node 1

2. Configure node 1 – master

# edit the MySQL configuration file and add the following lines under the [mysqld] section
[root@mysql1 ~]# vi /etc/my.cnf
[mysqld]
log-bin=mysql-bin
binlog-do-db=andb
server-id=1
# restart MySQL service to take effect.
[root@mysql1 ~]# service mysqld restart

Login to MySQL:

  • Create a new replication account with the REPLICATION SLAVE privilege. This account will be used by the slave when connecting to the master

[root@mysql1 ~]# mysql -u root -p
mysql> create user 'replicator'@'%' identified by 'password';
mysql> grant replication slave on *.* to 'replicator'@'%';

  • Block write statements on all tables, so no changes can be made while taking the backup.

mysql> use andb;
mysql> FLUSH TABLES WITH READ LOCK;

  • Check the current binary log file name (File) and current offset (Position) value:

mysql> show master status;

  • The output shows that the current binary log file is mysql-bin.000001 and the offset value is 402. Note down these values to use in the next step.
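These two values can also be captured from the command line instead of copied by hand. A sketch, run here against sample output for illustration; on the master you would instead pipe the output of mysql -u root -p -e 'SHOW MASTER STATUS\G' into the same awk commands:

```shell
# Sample SHOW MASTER STATUS\G output (illustration only)
sample='*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 402
     Binlog_Do_DB: andb'
# Extract the File and Position fields
file=$(printf '%s\n' "$sample" | awk -F': ' '/ File:/ {print $2}')
pos=$(printf '%s\n' "$sample" | awk -F': ' '/Position:/ {print $2}')
echo "MASTER_LOG_FILE=$file MASTER_LOG_POS=$pos"
```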


Log out of MySQL, take a backup of the database, then copy it to the slave node:
[root@mysql1 ~]# mysqldump -u root -p andb > andb.sql
[root@mysql1 ~]# scp andb.sql root@172.30.6.111:/var/tmp/

Login to MySQL again and remove the READ LOCK from the tables, so that changes can be made.

[root@mysql1 ~]# mysql -u root -p
mysql> UNLOCK TABLES;

Note: Both values, mysql-bin.000001 and 402, will be used in step 3 (configure node 2 – slave)

3. Configure node 2 – slave

# edit the MySQL configuration file and add the following lines under the [mysqld] section
[root@mysql2 ~]# vi /etc/my.cnf
[mysqld]
log-bin=mysql-bin
binlog-do-db=andb
server-id=2
# restart MySQL service to take effect.
[root@mysql2 ~]# service mysqld restart

Restore database backup taken from node 1

[root@mysql2 ~]# mysql -u root -p andb < /var/tmp/andb.sql

Login to MySQL and point the slave (node 2) at the master (node 1):

[root@mysql2 ~]# mysql -u root -p
mysql> CHANGE MASTER TO MASTER_HOST='172.30.6.110', MASTER_USER='replicator', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=402;

mysql> START SLAVE;

  • Testing the replication

On node 1, type show master status\G


Take note of the File and Position values and then compare them with node 2.

On node 2, type show slave status\G; if the File/Position values on the master match the Master_Log_File/Read_Master_Log_Pos values on the slave, the replication works fine.
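The replication health check can also be scripted around the two Running flags in SHOW SLAVE STATUS. A sketch using sample output for illustration; on node 2 you would substitute the real output of mysql -u root -p -e 'SHOW SLAVE STATUS\G':

```shell
# Sample SHOW SLAVE STATUS\G fields (illustration only)
status='    Slave_IO_Running: Yes
   Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'
# Both threads must report Yes for replication to be healthy
if printf '%s\n' "$status" | grep -q 'Slave_IO_Running: Yes' &&
   printf '%s\n' "$status" | grep -q 'Slave_SQL_Running: Yes'; then
  echo "replication OK"
else
  echo "replication BROKEN"
fi
```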


IV. CHANGE MASTER-SLAVE TO MASTER-MASTER REPLICATION (OPTIONAL)

After configuring master-slave replication, we can convert it to master-master if needed

1. Prepare node 2 to become master node:

On node 2, login to MySQL and create a new replication account with the REPLICATION SLAVE privilege. This account will be used by the slave (node 1) when connecting to the master (node 2)

[root@mysql2 ~]# mysql -u root -p
mysql> create user 'replicator2'@'%' identified by 'password';
mysql> grant replication slave on *.* to 'replicator2'@'%';

  • Check the current binary log file name (File) and current offset (Position) value:

mysql> show master status;

  • The output shows that the current binary log file is mysql-bin.000001 and the offset value is 1434. Note down these values to use in the next step


2. Configure node 1 to become slave node of node 2

On node 1, login to MySQL and execute the following command. This will make node 1 replicate from node 2 (note that the replicator2 account created above is used here)

[root@mysql1 ~]# mysql -u root -p
mysql> CHANGE MASTER TO MASTER_HOST='172.30.6.111', MASTER_USER='replicator2', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1434;

mysql> START SLAVE;

  • Testing the replication

On node 2, type show master status\G


Take note of the File and Position values and then compare them with node 1.

On node 1, type show slave status\G; if the values match, the replication works fine.


On node 2, create a new table and it will appear on node 1 as well


On node 1, create a new table and it will appear on node 2 as well


V. INTEGRATE APACHE WITH HEARTBEAT (OPTIONAL)

Now we can install Apache and integrate it with Heartbeat

1. Node 1

[root@mysql1 ~]# yum install httpd
# create an index.html on DocumentRoot of node 1
[root@mysql1 ~]# vi /var/www/html/index.html

# fill the text
This is node 1
# restart service
[root@mysql1 ~]# service httpd restart
[root@mysql1 ~]# chkconfig httpd on

Try to access node 1 via browser http://172.30.6.110/index.html

2. On node 2

Repeat the steps from node 1, changing the text to 'This is node 2'

Try to access node 2 via browser http://172.30.6.111/index.html

  • Integrate Apache with Heartbeat

Edit /etc/ha.d/haresources on both nodes:

[root@mysql1 ~]# vi /etc/ha.d/haresources
# append httpd at the end of the resource line
mysql1.lab.com IPaddr::172.30.6.112/24/eth0:0 httpd

Note: Heartbeat now manages httpd, so remove it from automatic startup on both nodes (chkconfig httpd off); otherwise both nodes will run Apache regardless of which one holds the virtual IP.

  • Testing

Open a browser and go to http://172.30.6.112/index.html; you should see the text "This is node 1"

Stop the heartbeat service on node 1 and refresh the browser; you should now see the text "This is node 2"

This means Heartbeat works fine and automatically fails over from node 1 to node 2
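The browser check above can also be scripted; a sketch below uses a sample response so the snippet is self-contained, while the commented-out curl line is what you would run against the live VIP:

```shell
# page=$(curl -s http://172.30.6.112/index.html)   # on a client with access to the VIP
page='This is node 1'   # sample response for illustration
# Report which node answered based on the page content
case "$page" in
  *'node 1'*) echo "served by node 1" ;;
  *'node 2'*) echo "served by node 2" ;;
  *)          echo "unexpected response" ;;
esac
```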

VI. APPLICATION AND FAILOVER PROCEDURE

1. Application configuration

For example, if you have a web application (PHP), it needs to connect to the database to load and update data.

You should therefore create a dedicated MySQL account for executing the SQL queries. This account will be replicated between the nodes:

[root@mysql1 ~]# mysql -u root -p
mysql> create user 'dbuser'@'%' identified by 'password';
mysql> grant all privileges on *.* to 'dbuser'@'%';

The connection settings in the PHP configuration should look like the following minimal sketch (the mysqli connection check is added here for illustration):

<?php
$servername = "172.30.6.112"; // the virtual IP address shared by the two MySQL nodes
$username = "dbuser";
$password = "password";
$dbname = "andb";

$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
?>

2. Failover procedure

If you use master-slave and the master node fails, you need to promote the slave to master and then rebuild a new slave:

[root@mysql2 ~]# mysql -u root -p
mysql> STOP SLAVE;
mysql> RESET MASTER;

If you use master-master and one of the master nodes fails, you are left with a single master; stop the slave threads and reset the master on it in the same way, then rebuild the failed node as a new master
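The rebuild step can be sketched as follows. The log file and position below are hypothetical placeholders; take the real values from SHOW MASTER STATUS on the promoted node (here node 2, so the replicator2 account from section IV is used):

```shell
# Hypothetical values; read the real ones from SHOW MASTER STATUS on node 2
NEW_MASTER=172.30.6.111
LOG_FILE=mysql-bin.000001
LOG_POS=120
# Assemble the CHANGE MASTER statement for the rebuilt slave
SQL="CHANGE MASTER TO MASTER_HOST='$NEW_MASTER', MASTER_USER='replicator2', MASTER_PASSWORD='password', MASTER_LOG_FILE='$LOG_FILE', MASTER_LOG_POS=$LOG_POS;"
echo "$SQL"
# On the rebuilt node you would then run:
#   mysql -u root -p -e "$SQL"
#   mysql -u root -p -e "START SLAVE;"
```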

About Terri

System Administrator @Netpower Datacenter

Posted on 16.11.2015 in Linux, Technical Articles.
