Thursday, March 19, 2015

MySQL Cluster Setup on CentOS-6x

MySQL Cluster is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability.   
--- From Wikipedia


Note(s):
1. Tested on CentOS-6.6 (64-bit) OpenStack instances.
2. iptables is ON (specific iptables rules are given below).
3. SELinux is Permissive (setenforce 0).
4. SELinux boolean values turned on for the MySQL user;
# setsebool -PV allow_user_mysql_connect on
# setsebool -PV mysql_connect_any on


Architecture:
I. Management Node  
IP: 172.16.20.21

II. Node - 1   (Data and SQL Node)
IP: 172.16.20.22  

III. Node - 2   (Data and SQL Node)
IP: 172.16.20.23


A. Downloads:
For all nodes download the following package;
http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar


B. Installation on Management Node (IP: 172.16.20.21):

1. Install the needed packages;
yum install perl libaio -y

2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs

3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm

4. Take note of the initial MySQL root password;
You will find that password in '/root/.mysql_secret'.

C. Installation on Data and SQL Nodes (IP: 172.16.20.22, 172.16.20.23):
1. Install the needed packages;
yum install perl libaio -y

2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs

3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm

4. Take note of the initial MySQL root password:
You will find that password in '/root/.mysql_secret'.

5. Install the additional MySQL client package on the SQL Nodes:
rpm -ivh MySQL-Cluster-client-gpl-7.4.4-1.el6.x86_64.rpm




E. Configurations:
A. Data and SQL Nodes;

# mkdir -p /usr/local/mysql/data
# chown -R mysql:mysql /usr/local/mysql/data

vi /etc/my.cnf
[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=172.16.20.21  # location of management server
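
Once mysqld has been started on the node (section G below), one can confirm that the NDB engine was actually registered. A minimal check (assumes the MySQL root password set earlier):

```shell
# Ask the server for its storage-engine list and look for ndbcluster
mysql -uroot -p -e "SHOW ENGINES;" | grep -i ndbcluster
```

If nothing is printed, mysqld did not load the ndbcluster engine and /etc/my.cnf should be re-checked.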


B. Management node:
# mkdir /var/lib/mysql-cluster
# chown -R mysql:mysql /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster

vi config.ini

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
 # For DataMemory and IndexMemory, we have used the
 # default values. Since the "world" database takes up
 # only about 500KB, this should be more than enough for
 # this example Cluster setup.

ServerPort=50501  # This is to allocate a fixed port through which one node is connected to the other node within the cluster.
 # By default, this port is allocated dynamically in such a way as to ensure that no two nodes on the same host
 # computer receive the same port number.
 # To open specific ports in a firewall to permit communication between data nodes and API nodes (including SQL
 # nodes), you can set this parameter to the number of the desired port in an [ndbd] section or (if you need to do
 # this for multiple data nodes) the [ndbd default] section of the config.ini file, and then open the port having
 # that number for incoming connections from SQL nodes, API nodes, or both.

[tcp default]
# TCP/IP options:
#portnumber=1186   # This is the default; however, you can use any
 # port that is free for all the hosts in the cluster
 # Note: It is recommended that you do not specify the port
 # number at all and simply allow the default value to be used
 # instead

[ndb_mgmd]
# Management process options:
hostname=172.16.20.21           # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

[ndbd]
# Options for data node "A":
hostname=172.16.20.22           # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
hostname=172.16.20.23           # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
hostname=172.16.20.22           # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
# SQL node options:
hostname=172.16.20.23           # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
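
To sanity-check config.ini before bringing anything up, ndb_mgmd can parse and print the configuration without staying resident; --print-full-config is a documented option of ndb_mgmd:

```shell
# Parse config.ini and dump the effective configuration, then exit
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --print-full-config
```

If this errors out, fix config.ini before moving on to section G.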



F. iptables rules;
(Here, I have allowed connections from any source on the destination ports, as I have another firewall on top of the cloud and the ports are thus not exposed to the outside world; in an ideal setup, one should restrict the connection source.)

I. On Management Node (INPUT chain):

# iptables -I INPUT -i eth0 -p tcp --dport 1186 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 3306 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 50501 -j ACCEPT
# service iptables save
# service iptables restart

II. On Data and SQL Nodes (INPUT chain):
# iptables -I INPUT -i eth0 -p tcp --dport 3306 -j ACCEPT
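
Note: since config.ini pins ServerPort=50501, the data nodes also talk to each other on that port, so depending on your topology the data nodes may need a rule for it as well. A sketch based on that assumption (verify against your own traffic before relying on it):

```shell
# Hypothetical extra rule on each data node: inter-data-node traffic on the fixed ServerPort
iptables -I INPUT -i eth0 -p tcp --dport 50501 -j ACCEPT
service iptables save
service iptables restart
```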

G. Start the services (in a sequence);

1. Management Node:
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini

2. Data Node:
# ndbd

3. SQL Node (MySQL):
# service mysql start

On the SQL Node, change the default MySQL root password as;
# mysqladmin -u root -p'oldpassword' password newpass
(Check the old password in '/root/.mysql_secret')
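
A simple way to verify that the cluster is really sharing data is to create an NDB-backed table on one SQL node and read it from the other. A sketch (the database and table names are made up for this example):

```shell
# On node 1 (172.16.20.22): create a test database and an NDB-backed table
mysql -uroot -p -e "CREATE DATABASE clustertest;
  CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;
  INSERT INTO clustertest.t1 VALUES (1);"

# On node 2 (172.16.20.23): the row should be visible with no replication setup at all
mysql -uroot -p -e "SELECT * FROM clustertest.t1;"
```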

H. Connect to the MySQL Cluster management console and check the status (if everything works fine, you should see output similar to the following);

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.16.20.22  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
id=3    @172.16.20.23  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.16.20.21  (mysql-5.6.23 ndb-7.4.4)

[mysqld(API)]   2 node(s)
id=4    @172.16.20.22  (mysql-5.6.23 ndb-7.4.4)
id=5    @172.16.20.23  (mysql-5.6.23 ndb-7.4.4)

ndb_mgm>
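
The same status can be captured non-interactively with `ndb_mgm -e show`, which makes it easy to script a health check. A minimal sketch, run here against a saved copy of the [ndbd(NDB)] section above:

```shell
# On a live cluster you would capture the status with:
#   ndb_mgm -e show > /tmp/ndb_show.txt
# Here we use a saved copy of the data-node section from above.
cat > /tmp/ndb_show.txt <<'EOF'
[ndbd(NDB)]     2 node(s)
id=2    @172.16.20.22  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
id=3    @172.16.20.23  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
EOF

# A data node that is connected and started reports its nodegroup;
# counting those lines gives the number of healthy data nodes.
grep -c 'Nodegroup:' /tmp/ndb_show.txt   # prints 2
```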


I. Log files:
1. Management Node:
/var/lib/mysql-cluster

2. Data/SQL Nodes:
/usr/local/mysql/data/

Wednesday, March 18, 2015

NIS (Network Information Service)


Originally known as Yellow Pages (YP), which is why the commands still begin with 'yp'. It is an RPC-based client-server system used for distributed configuration of data, such as user and host names, between computers on a network.

A NIS/YP system maintains and distributes a central directory of user and group information, hostnames, email aliases and other text-based tables of information in a network.

Note: portmap has been replaced by rpcbind in distros like RHEL-6, CentOS-6, Fedora 8 and their later versions. The service is provided by the rpcbind package;
/etc/rc.d/init.d/rpcbind

This setup is done on a CentOS-6.6 system.

A. NIS Server:

1. Start the rpcbind (portmap in earlier versions) service;
# service rpcbind start
Starting rpcbind:                                          [  OK  ]

2. If required, do a yum search for 'ypserv';

# yum search ypserv
Loaded plugins: fastestmirror, presto
Loading mirror speeds from cached hostfile
* base: mirrors.tummy.com
* extras: mirrors.psychz.net
* updates: mirror.supremebytes.com
======================================= N/S Matched: ypserv ========================================
ypserv.x86_64 : The NIS (Network Information Service) server

  Name and summary matches only, use "search all" for everything.

3. Install 'ypserv' package;
# yum install ypserv -y

4. Set the NISDOMAIN in /etc/sysconfig/network;
NISDOMAIN=blueangle.srv

5. Start the NIS service (ypserv);
# service ypserv start
Setting NIS domain name blueangle.srv:        [  OK  ]
Starting YP server services:                               [  OK  ]

6. # rpcinfo -u localhost ypserv
program 100004 version 1 ready and waiting
program 100004 version 2 ready and waiting

7. Generate NIS Database:

# /usr/lib64/yp/ypinit -m
At this point, we have to construct a list of the hosts which will run NIS
servers.  host-192-168-1-13 is in the list of NIS server hosts.  Please continue to add
the names for the other hosts, one per line.  When you are done with the
list, type a <control D>.
next host to add:  host-192-168-1-13
next host to add:
The current list of NIS servers looks like this:

host-192-168-1-13

Is this correct?  [y/n: y]  y
We need a few minutes to build the databases...
Building /var/yp/blueangle.srv/ypservers...
gethostbyname(): Success
Running /var/yp/Makefile...
gmake[1]: Entering directory `/var/yp/blueangle.srv'
Updating passwd.byname...
Updating passwd.byuid...
Updating group.byname...
Updating group.bygid...
Updating hosts.byname...
Updating hosts.byaddr...
Updating rpc.byname...
Updating rpc.bynumber...
Updating services.byname...
Updating services.byservicename...
Updating netid.byname...
Updating protocols.bynumber...
Updating protocols.byname...
Updating mail.aliases...
gmake[1]: Leaving directory `/var/yp/blueangle.srv'

host-192-168-1-13 has been set up as a NIS master server.

Now you can run ypinit -s host-192-168-1-13 on all slave server.

8. After generating the database, you can see that a new directory for your domain has been created under /var/yp, as shown below;
# ls -l /var/yp/
total 28
drwxr-xr-x. 2 root root  4096 Mar 18 09:06 blueangle.srv
-rw-r--r--. 1 root root 16675 Oct  7  2013 Makefile
-rw-r--r--. 1 root root    18 Mar 18 09:05 ypservers


B. NIS Client:

1. Install the client packages;
# yum install ypbind -y

2. Start the rpcbind service (portmapper in older versions);
# service rpcbind start
Starting rpcbind:                                          [  OK  ]

3. Provide the domainname and host info of the NIS server as;
In /etc/yp.conf,
domain blueangle.srv server host-192-168-1-13

4. Resolve the host information in /etc/hosts file on the client as;
192.168.1.13 host-192-168-1-13

5. Test the NIS server using the client tool (this is without starting 'ypbind');
# ypcat passwd
No such map passwd.byname. Reason: Can't bind to server which serves this domain

6. Start the ypbind (client) service and test again;

# service ypbind start
Starting NIS service:                                      [  OK  ]
Binding NIS service: .                                     [  OK  ]

7. # ypcat passwd
bijit:$1$cWrsV2Yk$7Ywe9qJ7x7c3C9ZAPWdBK.:500:500::/home/bijit:/bin/bash
ajith:$1$SWi8yVce$ooPNgNfhEhT9VOCkGkdaR1:501:501::/home/ajith:/bin/bash

These two users do not exist on the client system; they are read from the NIS server the client is connected to.

8. Check the NIS server the client is connected to;
# ypwhich
host-192-168-1-13

9. One may add the 'nis' entry in /etc/nsswitch.conf (Name Service Switch);

passwd:     files nis
shadow:     files nis
group:      files nis


Once done, one may check with the 'getent' command (it displays entries from the databases configured in /etc/nsswitch.conf);

[root@host-192-168-1-14 ~]# getent passwd | grep bijit
bijit:$1$cWrsV2Yk$7Ywe9qJ7x7c3C9ZAPWdBK.:500:500::/home/bijit:/bin/bash
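
Individual keys can also be looked up directly against the NIS maps with 'ypmatch' (shipped in the same yp-tools package as ypcat and ypwhich); for example:

```shell
# Look up a single user in the passwd map served by NIS
ypmatch bijit passwd
```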

10. Now, try to switch user to 'bijit'.

NOTE: To create the user's home directory automatically, one needs to add the following line in /etc/pam.d/system-auth;

# add if you need ( create home directory automatically if it's none )
session     optional      pam_mkhomedir.so skel=/etc/skel umask=077 ## Added

[root@host-192-168-1-14 ~]# su - bijit
Creating directory '/home/bijit'.

Saturday, March 14, 2015

Centralised SysLog server on CentOS

A centralised syslog server helps you to keep track of activities happening on remote systems. The centralised facility saves time, as one does not need to log into each client to check logs; additionally, it becomes very handy when a remote system crashes or is compromised.

A. Server:

I. Installation

By default, the rsyslog (syslog on older systems) package is installed. If not, one can install it as;
# yum install rsyslog -y


II. Configuration

1. By default, rsyslog is not configured to receive logs/messages from remote systems. One needs to enable remote logging by uncommenting the following in "/etc/rsyslog.conf";

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

2. Once the changes are made, restart rsyslog service as;

# service rsyslog restart
Shutting down system logger:                          [  OK  ]
Starting system logger:                                    [  OK  ]


3. Check if it is listening on the correct ports (both TCP and UDP);
# netstat -atupn | grep 514
tcp        0      0 0.0.0.0:514      0.0.0.0:*        LISTEN      14374/rsyslogd
tcp        0      0 :::514           :::*             LISTEN      14374/rsyslogd
udp        0      0 0.0.0.0:514      0.0.0.0:*                    14374/rsyslogd
udp        0      0 :::514           :::*                         14374/rsyslogd



4. Allow the syslog port (514) to accept connections from the clients, for both TCP and UDP (an example only; IP 192.168.1.11 acts as the client);

# iptables -I INPUT 4 -p tcp --dport 514 -s 192.168.1.11/24 -j ACCEPT
# iptables -I INPUT 5 -p udp --dport 514 -s 192.168.1.11/24 -j ACCEPT
# service iptables save
# service iptables restart


B. Client:

1. On the client system (i.e. 192.168.1.11), install the rsyslog package as was done on the server.

2. Open /etc/rsyslog.conf, navigate to the bottom of the file, and add the type of logs you want your remote server to keep track of, e.g. I did the following;

*.info;mail.none;authpriv.none;cron.none   @192.168.1.10
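
Note: a single '@' in front of the server address forwards the logs over UDP. Since the server above also listens on TCP, the standard rsyslog '@@' prefix can be used instead for TCP delivery; the selector part stays the same:

```
*.info;mail.none;authpriv.none;cron.none   @@192.168.1.10
```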

3. Restart syslog service

# service rsyslog restart
Shutting down system logger:                          [  OK  ]
Starting system logger:                                    [  OK  ]
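
To confirm end-to-end delivery, you can emit a test message from the client with 'logger' (part of util-linux, normally already installed) and watch for it on the server:

```shell
# On the client: send a test message at the info priority
logger -p user.info "rsyslog remote-logging test from $(hostname)"

# On the server: the message should show up in /var/log/messages
tail -n 20 /var/log/messages | grep 'remote-logging test'
```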




Now, on the server, we may check the /var/log/messages file to see if the activities on the remote client are being logged!


Happy logging ! :)