MySQL Cluster is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability.
--- From Wikipedia
Note(s):
1. Tested on CentOS 6.6 (64-bit) OpenStack instances.
2. Iptables is ON (the specific iptables rules are given below).
3. SELinux is Permissive (setenforce 0).
4. SELinux boolean values turned on for the MySQL user;
# setsebool -PV allow_user_mysql_connect on
# setsebool -PV mysql_connect_any on
Architecture:
I. Management Node
IP: 172.16.20.21
II. Node - 1 (Data and SQL Node)
IP: 172.16.20.22
III. Node - 2 (Data and SQL Node)
IP: 172.16.20.23
A. Downloads:
For all nodes, download the following package;
http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
B. Installation on Management Node (IP: 172.16.20.21):
1. Install the needed packages;
yum install perl libaio -y
2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs
3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm
4. Take a note of the initial root password of MySQL:
You will find that password in '/root/.mysql_secret'.
C. Installation on Data Nodes (IP: 172.16.20.22, 172.16.20.23):
1. Install the needed packages;
yum install perl libaio -y
2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs
3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm
4. Take a note of the initial root password of MySQL:
You will find that password in '/root/.mysql_secret'.
5. Install the additional MySQL client package on SQL Nodes:
rpm -ivh MySQL-Cluster-client-gpl-7.4.4-1.el6.x86_64.rpm
D. Installation on SQL Nodes (IP: 172.16.20.22, 172.16.20.23):
1. Install the needed packages;
yum install perl libaio -y
2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs
3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm
4. Take a note of the initial root password of MySQL:
You will find that password in '/root/.mysql_secret'.
5. Install the additional MySQL client package on SQL Nodes:
rpm -ivh MySQL-Cluster-client-gpl-7.4.4-1.el6.x86_64.rpm
E. Configurations:
1. Data and SQL Nodes;
# mkdir -p /usr/local/mysql/data
# chown -R mysql:mysql /usr/local/mysql/data
vi /etc/my.cnf
[mysqld]
# Options for mysqld process:
ndbcluster # run NDB storage engine
[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=172.16.20.21 # location of management server
2. Management Node;
# mkdir /var/lib/mysql-cluster
# chown -R mysql:mysql /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
vi config.ini
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
ServerPort=50501 # Fixed port on which each data node listens for connections from the other cluster nodes.
# By default, this port is allocated dynamically in such a way as to ensure that no two nodes on the same host
# computer receive the same port number.
# To open specific ports in a firewall to permit communication between data nodes and API nodes (including SQL
# nodes), you can set this parameter to the number of the desired port in an [ndbd] section or (if you need to do
# this for multiple data nodes) the [ndbd default] section of the config.ini file, and then open the port having
# that number for incoming connections from SQL nodes, API nodes, or both.
[tcp default]
# TCP/IP options:
#portnumber=1186 # This is the default; however, you can use any
# port that is free for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and simply allow the default value to be used
# instead
[ndb_mgmd]
# Management process options:
hostname=172.16.20.21 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
# Options for data node "A":
hostname=172.16.20.22 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
[ndbd]
# Options for data node "B":
hostname=172.16.20.23 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
[mysqld]
# SQL node options:
hostname=172.16.20.22 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
# SQL node options:
hostname=172.16.20.23 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
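As a sanity check on the sizing comments above: with NoOfReplicas=2, every row is stored twice, so the usable in-memory capacity is DataMemory times the number of data nodes divided by NoOfReplicas. A quick sketch using the values from this config.ini:

```shell
#!/bin/sh
# Usable data capacity = DataMemory * data nodes / NoOfReplicas
# (values taken from the config.ini above; IndexMemory is sized separately).
DATA_MEMORY_MB=80
DATA_NODES=2
NO_OF_REPLICAS=2
USABLE_MB=$(( DATA_MEMORY_MB * DATA_NODES / NO_OF_REPLICAS ))
echo "Usable in-memory data capacity: ${USABLE_MB}MB"
```

With two data nodes and two replicas, the cluster can hold no more unique data than a single node's DataMemory; capacity grows by adding further node groups, not replicas.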
F. IPtables rules;
(Here, I have allowed connections from any source on the destination ports, since another firewall sits in front of this cloud and the ports are not exposed to the outside world. Ideally, you should restrict each rule to known source addresses.)
I. On Management Node (INPUT chain):
# iptables -I INPUT -i eth0 -p tcp --dport 1186 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 3306 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 50501 -j ACCEPT
# service iptables save
# service iptables restart
II. On Data and SQL Nodes (INPUT chain):
# iptables -I INPUT -i eth0 -p tcp --dport 3306 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 50501 -j ACCEPT
# service iptables save
# service iptables restart
(Port 50501 is the ServerPort set in config.ini; the data nodes listen on it for connections from the other cluster nodes.)
G. Start the services (in a sequence);
1. Management Node:
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini
2. Data Nodes (on 172.16.20.22 and 172.16.20.23):
# ndbd
(Use 'ndbd --initial' only for the very first start or after a configuration change; it erases the node's data files.)
3. SQL Nodes (MySQL server):
# service mysql start
On the SQL Nodes, change the default MySQL root password;
# mysqladmin -u root -p'oldpassword' password 'newpassword'
(Check the old password in '/root/.mysql_secret')
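If you script the password change, the old password can be pulled out of '/root/.mysql_secret' non-interactively. The exact file format can vary between releases, so the line layout below is an assumption; the sketch therefore parses a simulated copy of the file rather than the real one:

```shell
#!/bin/sh
# Assumed .mysql_secret layout: a single comment line ending in ": <password>".
# Simulated file for illustration; on a real node read /root/.mysql_secret instead.
SECRET=/tmp/mysql_secret_demo
cat > "$SECRET" <<'EOF'
# The random password set for the root user at Wed Mar  4 12:00:00 2015 (local time): AbCdEfGh
EOF
# Strip everything up to the last ": " to isolate the password:
OLDPASS=$(grep 'random password' "$SECRET" | sed 's/.*: //')
echo "Old password: ${OLDPASS}"
# Then, on a real node: mysqladmin -u root -p"$OLDPASS" password 'newpassword'
```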
H. Connect to the MySQL Cluster management console (ndb_mgm) and check the status (if everything works fine, you should see output similar to the following);
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @172.16.20.22 (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
id=3 @172.16.20.23 (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @172.16.20.21 (mysql-5.6.23 ndb-7.4.4)
[mysqld(API)] 2 node(s)
id=4 @172.16.20.22 (mysql-5.6.23 ndb-7.4.4)
id=5 @172.16.20.23 (mysql-5.6.23 ndb-7.4.4)
ndb_mgm>
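For scripted health checks, the same status is available non-interactively via 'ndb_mgm -e show'. The sketch below counts the data nodes that have joined a node group; since it cannot assume a live cluster, it parses a captured copy of the 'show' output above:

```shell
#!/bin/sh
# On a live management node you would capture real output with:
#   ndb_mgm -e show > /tmp/cluster_status.txt
# Captured sample (taken from the 'show' output above) used here for illustration:
STATUS=/tmp/cluster_status_demo.txt
cat > "$STATUS" <<'EOF'
[ndbd(NDB)]     2 node(s)
id=2    @172.16.20.22  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
id=3    @172.16.20.23  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
EOF
# Data nodes that report a Nodegroup are up and serving data:
STARTED=$(grep -c '^id=.*Nodegroup' "$STATUS")
echo "Data nodes running: ${STARTED}"
```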
I. Log files:
1. Management Node:
/var/lib/mysql-cluster
2. Data/SQL Nodes:
/usr/local/mysql/data/
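On the management node, the cluster log file inside that directory follows the pattern ndb_<node-id>_cluster.log (node id 1 in this config.ini). A sketch for locating it, using a simulated directory since the real path exists only on the management node:

```shell
#!/bin/sh
# On the real management node: ls /var/lib/mysql-cluster/ndb_*_cluster.log
DEMO_DIR=/tmp/mysql-cluster-demo
mkdir -p "$DEMO_DIR"
touch "$DEMO_DIR/ndb_1_cluster.log"   # name pattern: ndb_<node-id>_cluster.log
LOG_COUNT=$(ls "$DEMO_DIR"/ndb_*_cluster.log | wc -l)
echo "Cluster log files found: ${LOG_COUNT}"
```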