Tuesday, December 22, 2015

Push your code to Git...

Want to save your code? The open-source version control system Git could be your choice.

The following lines describe how to set up Git on a CentOS 6.x system (should work on RHEL, Fedora and other Linux distributions as well).

(The public IP address is masked as "xxx.xxx.xx.xx".)


1. Install Git on the server and the Client:
# yum install git -y

2. Create a system user "git" on the server:
# useradd git
# sudo su - git

3. Generate an SSH key pair for the user "git" to facilitate Git over SSH:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/git/.ssh/id_rsa):
Created directory '/home/git/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/git/.ssh/id_rsa.
Your public key has been saved in /home/git/.ssh/id_rsa.pub.
The key fingerprint is:
8a:18:fd:72:37:c2:f9:a0:28:8a:f6:7d:51:98:56:a9 git@ins-1
The key's randomart image is:
+--[ RSA 2048]----+
|          .      |
|         o       |
|        =        |
|   .   E .       |
|  . . . S        |
|   o + +         |
|  . o O +        |
|o. ..+ * .       |
|=.o.... .        |
+-----------------+

4. Copy the contents of the public SSH key to the authorized_keys file for the user "git":
$ cat /home/git/.ssh/id_rsa.pub > /home/git/.ssh/authorized_keys

5. Copy the private SSH key to the local system (client) and store it in a file under some directory; we will use this key to connect to the Git server (as the "git" user).
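
For example, one way to do this (a sketch only; it assumes you already have SSH access to the server as root, and it uses the file name "id-rsa-private" that the later steps refer to):

$ scp root@xxx.xxx.xx.xx:/home/git/.ssh/id_rsa ./id-rsa-private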

6. On the server and the client, change the permissions of the authorized_keys file and the private key to 0600, otherwise SSH will complain when making the connection;

A. Server:
chmod 0600 /home/git/.ssh/authorized_keys

B. Client:
# chmod 0600 id-rsa-private

7. Test a normal SSH connection from the client;
$ ssh -i id-rsa-private git@xxx.xxx.xx.xx
[git@ins-1 ~]$

So, the SSH connection is working!

8. Store the SSH private key in the SSH agent so that you don't need to specify it every time you make an SSH connection;

$ eval $(ssh-agent -s)
Agent pid 97010

$ ssh-add id-rsa-private
Identity added: id-rsa-private (id-rsa-private)


9. Create an empty Repository on the server:

        A. Create a Projects directory:

$ mkdir Projects
$ cd Projects/

B. Under the Projects directory, create one empty bare repository:

        $ git init --bare project-1.git
Initialized empty Git repository in /home/git/Projects/project-1.git/

10. On the client, set up a local copy of this repository;

A. Create a Local directory (if you wish)
$ mkdir Project-1-Local
$ cd Project-1-Local/

B. Set up and initialize the new repository on the client:

$ git init && git remote add origin git@xxx.xxx.xx.xx:/home/git/Projects/project-1.git/
Initialized empty Git repository in /home/bijit/GIT-User/Project-1-Local/.git/

This sets up the Git environment locally; check using ls -al:
***********************
$ ls -al
total 12
drwxrwxr-x. 3 bijit bijit 4096 Dec  9 13:43 .
drwxrwxr-x. 4 bijit bijit 4096 Dec  9 13:26 ..
drwxrwxr-x. 7 bijit bijit 4096 Dec  9 13:43 .git
***********************

Alternatively, to clone the remote repo directly from the server;
$ git clone git@xxx.xxx.x.x:/home/git/Projects/project-1.git
======================================
eg.
$ git clone git@192.168.1.4:/home/git/Projects/project-1.git
Initialized empty Git repository in /home/centos/myProjects-Local/project-1/.git/
warning: You appear to have cloned an empty repository.
======================================

C. Run git config -l to see the configuration:

$ cd project-1/

$ git config -l
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.url=git@xxx.xxx.xx.xx:/home/git/Projects/project-1.git/
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*

GIT PUSH:

11. Now, on the client, create some test files and directories and push them to the Git server:

A.
$ pwd
/home/bijit/GIT-User/Project-1-Local

B. Create contents:

$ mkdir Test-Dir-1
$ echo "Test content" > test-file-1.txt

C. Add and Commit Locally:

$ git add .

$ git commit -m "Test commit 1"
[master (root-commit) 35aa811] Test commit 1
Committer: Bijit Bhattacharjee
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly:

git config --global user.name "Your Name"
git config --global user.email you@example.com

After doing this, you may fix the identity used for this commit with:

git commit --amend --reset-author

1 file changed, 1 insertion(+)
create mode 100644 test-file-1.txt


D. Push the contents to Remote GIT server:
$ git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 242 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To git@xxx.xxx.xx.xx:/home/git/Projects/project-1.git/
* [new branch]      master -> master


GIT PULL:

1. Make a test directory and pull the contents from the remote Git Repo:

$ mkdir Test-Checkout-P1
$ cd Test-Checkout-P1/

2.     Initialize the directory:
$ git init && git remote add origin git@xxx.xxx.xx.xx:/home/git/Projects/project-1.git/
Initialized empty Git repository in /home/bijit/GIT-User/Test-Checkout-P1/.git/

3. Now, pull the contents:
$ git pull origin master
From xxx.xxx.xx.xx:/home/git/Projects/project-1
* branch            master     -> FETCH_HEAD
[bijit@localhost Test-Checkout-P1]$ ll
total 8
-rw-rw-r--. 1 bijit bijit  6 Dec  9 15:50 file-2.txt
-rw-rw-r--. 1 bijit bijit 13 Dec  9 15:50 test-file-1.txt

Note:  The pull did not bring down the empty directory (Test-Dir-1) initially, because Git does not track empty directories. Once that directory was pushed with a file in it (file-1.txt), subsequent pulls brought down the directory with its contents.
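
Since Git tracks files rather than directories, a common workaround is to drop a placeholder file into the directory so that it gets pushed; a quick sketch (the name ".gitkeep" is only a convention, not a Git feature):

$ touch Test-Dir-1/.gitkeep
$ git add Test-Dir-1/.gitkeep
$ git commit -m "Track the empty directory"
$ git push origin master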


A. Create one branch locally, say "bash", write a test script, and push the branch to the remote repository;

1. $ pwd
/home/centos/myProjects-Local/project-1

2. Check all branches once
$ git branch -a
* master
remotes/origin/master

3. Create the local branch "bash";
$ git branch bash

4. Switch to the local branch "bash";
$ git checkout bash
Switched to branch 'bash'

5. Create a script and change its permissions;
$ vi hello.sh
$ chmod u+x hello.sh

6. Add the file locally;

$ git add hello.sh
$ git status
# On branch bash
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#       new file:   hello.sh
#

7. Commit the change;
$ git commit -m "Added a script to new branch"

8. Push the branch;
$ git push origin bash
Counting objects: 4, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 345 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To git@192.168.1.4:/home/git/Projects/project-1.git
* [new branch]      bash -> bash

B. Check out master and merge the new branch into it;

1. Check local branches;
$ git branch -l
* bash
 master

2. Switch back to master;

$ git checkout master
Switched to branch 'master'

3. Pull the latest changes on master;
===================
$ git pull origin master
From 192.168.1.4:/home/git/Projects/project-1
* branch            master     -> FETCH_HEAD
Already up-to-date.
====================

4. Merge the branch (in our case, "bash");
$ git merge bash
=========================
$ git merge bash
Updating 2420bef..f8d2345
Fast-forward
hello.sh |    5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
create mode 100755 hello.sh
==========================
 5. Push the changes;
$ git push origin master
===============================
$ git push origin master
Total 0 (delta 0), reused 0 (delta 0)
To git@192.168.1.4:/home/git/Projects/project-1.git
2420bef..f8d2345  master -> master
===============================
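
Optionally, once the branch has been merged you can clean it up locally and on the server. A sketch (the "--delete" form needs a reasonably recent Git; the ":bash" refspec form works on older versions too):

$ git branch -d bash
$ git push origin --delete bash        (or: $ git push origin :bash)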


Accessing Bitbucket using SSH keys: How-To

1. On the client system, generate an SSH key pair to be used for Git;

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /opt/bijit/others/.ssh-keys/bijit_bb
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /opt/bijit/others/.ssh-keys/bijit_bb.
Your public key has been saved in /opt/bijit/others/.ssh-keys/bijit_bb.pub.
The key fingerprint is:
ba:76:aa:d4:82:ca:32:fa:33:d1:c4:22:ea:81:d0:b0 root@ip-172-16-20-101
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|.                |
| + .             |
|E o o            |
|+. +    S        |
|o..... .         |
|. o.o o          |
|+oo. ....        |
|=+.o.ooo         |
+-----------------+

2. Copy the contents of the public key (e.g. bijit_bb.pub) and paste it into the Bitbucket project under the deployment keys section. [Thus, your key will now be recognized.]

# cat bijit_bb.pub
ssh-rsa  
         AAAAB3NzaC1yc2EAAAADAQABAAABAQDa4dHa3f51tl0H4Ye86qHxp4o6
         VgRwhOIn4JZIArFMKwy9k5DqHFyKh38rvtM7weO2OUkgMZIyDMZL5LM67
         wj+Wem1Baa4aF/DsV69Ns6deB7kGkGPg0HEbU8RrPTZkOaNdt8XjpADhjhkjhjk
         hhj7CS5i6CXF02LJpe4ol6E0vbDyc6eufuUWlCTSqPf6FOaD+CGVaDuv9aCOEP10imt
         O5t3e0BaD80jDi58mbZl7ZuNdDOOTP/N4JNGcEpAkooc7/9LaSnrc87eHI04Fah/7
         i3dbu5DFG73dMCrRve3hOJuqwsvi80VU3vXiSGOpSjLQLrACKpmQK004Qdy
         oUIqcv root@ip-172-16-20-101

3. On the client system, clone the project repo;
# git clone https://bijit@bitbucket.org/project-dir/project-repo-name.git

4. Add the SSH private key that you generated earlier (using either of the following two methods);
# ssh-agent bash -c 'ssh-add /opt/bijit/others/.ssh-keys/bijit_bb; cd /opt/bijit/others/repos/project-name; git pull'

Or,

# eval $(ssh-agent -s)
 Agent pid 3463

# ssh-add /opt/bijit/others/.ssh-keys/bijit_bb
Enter passphrase for /opt/bijit/others/.ssh-keys/bijit_bb:
         Identity added: /opt/bijit/others/.ssh-keys/bijit_bb (/opt/bijit/others/.ssh-keys/bijit_bb)

# cd /opt/bijit/others/repos/project-name/
# git pull
 Already up-to-date.
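
Alternatively, instead of loading the key into an agent every time, you can point SSH at the key through ~/.ssh/config. A sketch, assuming the repository remote uses the SSH URL form (git@bitbucket.org:...) rather than HTTPS:

Host bitbucket.org
    User git
    IdentityFile /opt/bijit/others/.ssh-keys/bijit_bb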

Wednesday, November 11, 2015

VSFTPD - Set up your FTP server on a CentOS 6.x system

Warm Diwali wishes friends! 

I was wondering what I should do on this festive day, already spoiled by incessant rain, and felt like documenting something useful. So, here is one;

VSFTPD! Passive or active, too much confusion? Here is how to set up both on a CentOS-6.x system (should work on RHEL and Fedora-based systems as well);


A. Installation is a breeze using yum;
# yum install vsftpd -y

B. Some custom configuration 
     You may do the following in the "/etc/vsftpd/vsftpd.conf" file;

1. No anonymous login:
anonymous_enable=NO

2. I needed full logging, such as client connection details:

xferlog_std_format=NO

3. Add this;
log_ftp_protocol=YES

4. The following is to restrict FTP users to their directories. Not a good
           thing to allow them to peep into others' :-)

chroot_local_user=YES

5. I wanted only specific users to have FTP access, so I listed them in "/etc/vsftpd/user_list", commenting out the default ones;

           userlist_deny=NO
           (to make use of the file /etc/vsftpd/user_list, include only those users who need FTP service)

6. Active or Passive?

A. Specific to Active FTP:

pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
tcp_wrappers=YES
pasv_enable=NO


B. Specific to Passive FTP (open the passive port range in the firewall; see the iptables sketch after the configuration below):

##For passive ftp mode

pam_service_name=vsftpd
userlist_enable=YES
userlist_deny=NO
tcp_wrappers=YES

pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50999
port_enable=YES
pasv_addr_resolve=YES
pasv_address=xxx.xxx.xxx.xxx   (public IP of your FTP server; it may be needed here because this OpenStack instance is behind NAT)
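
For reference, a minimal sketch of opening the FTP control port and the passive range above with iptables on CentOS 6 (adjust ports, interfaces and sources to your environment):

# iptables -I INPUT -p tcp --dport 21 -j ACCEPT
# iptables -I INPUT -p tcp --dport 50000:50999 -j ACCEPT
# service iptables save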


7. SELinux!
          (Modify the SELinux Boolean if you encounter "500 OOPS: cannot 
          change directory:/home/ Login failed.")

A. Grab the values:

# getsebool -a | grep ftp
allow_ftpd_anon_write --> off
allow_ftpd_full_access --> off
allow_ftpd_use_cifs --> off
allow_ftpd_use_nfs --> off
ftp_home_dir --> off
ftpd_connect_db --> off
ftpd_use_fusefs --> off
ftpd_use_passive_mode --> on
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_use_cifs --> off
tftp_use_nfs --> off

B. Change the Home directory specific boolean value:

                # setsebool -P ftp_home_dir on
                (See the man page for options)


C. Test the FTP server

         Create an FTP user (a system user) and set the login shell to /sbin/nologin (to deny SSH access). Test your FTP server using a command-line FTP client or the GUI-based FileZilla (or any other client of your choice); an example of creating such a user follows.
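
As a sketch, creating such a user could look like the following (the user name "ftpuser1" is only an example):

# useradd -s /sbin/nologin ftpuser1
# passwd ftpuser1
# echo "ftpuser1" >> /etc/vsftpd/user_list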

Monday, August 31, 2015

Multi-Hop SSH Tunnel


Let's assume you have two remote systems (System A and System B), of which one (System A) is accessible by its public IP while the other has no public IP or sits behind a firewall. This prevents you from accessing the second system directly from your home.

You can access (SSH) the second system (System B) in two steps.

1. From your home, Log into System A
2. From System A, Log into System B

But, what if you want to access System B directly from your home? 

[Note: HostName, UserName, Keys, IP Address used here are only for example]

Multi-Hop SSH Tunnel

Here, we use a built-in SSH feature known as SSH hopping via an intermediate host (System A in this case);

ssh -t user@<intermediate-host> ssh user@<destination-host>

The -t switch allocates a pseudo-terminal to execute a command on the intermediate host (in this case it runs the ssh to System B).

For example;

# ssh -i id_rsa -t centos@xxx.xxx.xxx.xxx ssh -i id_rsa_private centos@172.16.20.12
Last login: Mon Aug 31 13:05:40 2015 from 172.16.20.13
[centos@ins-2 ~]$

Let's understand what it is doing; 

1. ssh -i id_rsa -t centos@xxx.xxx.xxx.xxx
It opens an SSH connection to the server at IP xxx.xxx.xxx.xxx. The "-t" switch creates a pseudo-terminal on that server to execute the following command;

[Note:  The private key "id_rsa" is stored under the current path of execution of the ssh
        command]

2. ssh -i id_rsa_private centos@172.16.20.12
[Note: The private key "id_rsa_private" is stored under the home directory of user "centos" on
        "System A with Pub IP xxx.xxx.xxx.xxx" ]

How about doing the following from your home system?
ssh centos@172.16.20.12

Let's tweak it further; for this we need the nc (Netcat) package on our intermediate host(s).

1. Start an SSH agent on your home system and add the private key;

# eval $(ssh-agent -s)
Agent pid 3067

# ssh-add id_rsa
Identity added: id_rsa (id_rsa)

2. Put the following contents in the file ~/.ssh/config;

Host someserver
    HostName xxx.xxx.xxx.xxx
    User centos
    Port 22

Host 172.16.20.12
    HostName 172.16.20.12
    User centos
    ForwardAgent yes
    Port 22
    ProxyCommand ssh -q xxx.xxx.xxx.xxx nc %h %p

3. Now, from your home system do a direct SSH to System B i.e. 172.16.20.12 as shown 
        below;

# ssh centos@172.16.20.12
Last login: Mon Aug 31 16:07:18 2015 from 172.16.20.13
[centos@ins-2 ~]$
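
On newer OpenSSH clients (7.3 and later) the same hop can be done without nc by using ProxyJump; a sketch, assuming your home system runs such a version:

# ssh -J centos@xxx.xxx.xxx.xxx centos@172.16.20.12

or, equivalently, in ~/.ssh/config:

Host 172.16.20.12
    User centos
    ProxyJump centos@xxx.xxx.xxx.xxx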

Thursday, July 16, 2015

Recovering deleted files in Linux ! The ext4magic ...


I was wondering if I could find any tool that would undelete files from ext3/ext4 partitions. While searching I came across this tool which did the job I wanted. I tested it on an old Fedora-14 (32 bit) system;




Following are the steps;

1. Download URL;
http://rpms.plnet.rs/plnet-centos6-i386/RPMS.plnet-downloaded/ext4magic-0.3.1-1.2.i686.rpm
http://rpms.plnet.rs/plnet-centos6-x86_64/RPMS.plnet-downloaded/ext4magic-0.3.1-1.2.x86_64.rpm

2. Install;
# yum install ext4magic-0.3.1-1.2.i686.rpm --nogpgcheck
(I could not find the GPG key, so I skipped the GPG check while installing)

3. I deleted some of the files from "/LVM-1" directory which was acting as the mount point for
        /dev/mapper/vg1-lv--1

4. Stop using the filesystem that needs recovery and unmount it;
# umount /LVM-1
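
If the filesystem cannot be unmounted right away (for example, it is busy), remounting it read-only at least stops further writes until you can take it offline; a stop-gap, not a replacement for unmounting:

# mount -o remount,ro /LVM-1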

5. Copy partition;
(dd if=/dev/DEVICE of=/BACKUPPATH/DEVICE.img)
# dd if=/dev/mapper/vg1-lv--1 of=/BACKUP/vg1-lv--1.img

6. Examine the directory structure once. I created a separate directory for recovered files;
# tree BACKUP/
BACKUP/
├── RECOVERED
└── vg1-lv--1.img

7. Run ext4magic on the copy/dd-image to recover all deleted files (-m switch for files and 
        -M if entire filesystem needs recovery);

(ext4magic /BACKUPPATH/DEVICE.img -m -j /BACKUPPATH/journal.copy)

# ext4magic /BACKUP/vg1-lv--1-NEW.img -M -d /BACKUP/RECOVERED/
Warning: Activate magic-scan or disaster-recovery function, may be some command line options
         ignored
"/BACKUP/RECOVERED/"  accept for recoverdir
Filesystem in use: /BACKUP/vg1-lv--1-NEW.img

Using  internal Journal at Inode 8
Activ Time after  : Wed Jul 15 14:03:20 2015
Activ Time before : Wed Jul 15 14:07:04 2015
Inode 2 is allocated
-------- /BACKUP/RECOVERED//lost+found
-------- /BACKUP/RECOVERED//Dir-1
-------- /BACKUP/RECOVERED//Dir-2
-------- /BACKUP/RECOVERED//file-1
-------- /BACKUP/RECOVERED//file-2
-------- /BACKUP/RECOVERED//
MAGIC-1 : start lost directory search
MAGIC-2 : start lost file search
MAGIC-2 : start lost in journal search
MAGIC-3 : start ext4-magic-scan search
ext4magic : EXIT_SUCCESS


Bang! Files are back :) 

Wednesday, July 15, 2015

DRBD (Distributed Replicated Block Device)

DRBD (Distributed Replicated Block Device)
---------------------------------------------------------

DRBD refers to block devices designed as building blocks for high-availability (HA) clusters on Linux systems. It works by mirroring a whole block device over the network. This is somewhat similar to network-based RAID-1, where data is copied onto two storage devices and the failure of one results in activating the other.


But there is a marked difference between network-based RAID-1 and DRBD. With network-based RAID-1 there is a single application that accesses either of the two storage devices at any point in time; the RAID layer chooses to read from the other device when one fails. There is an abstraction between the application and the RAID devices, so the application is not aware of which device it is interacting with at any given point.

This is not the case with DRBD, where there are two instances of the application and each can read only from one of the two storage devices. Read more here...

Requirements

– Two disks  (preferably same size)

– Networking between machines (node1 & node2)
– Working DNS resolution  (/etc/hosts file)
– NTP synchronized times on both nodes
– SELinux permissive (!)
– Iptables ports (7788) allowed

A. Test Environment:

Two CentOS-6.6 systems with IPs 172.16.20.46 and 172.16.20.47, designated "node1" and "node2" respectively.

On both systems Disks are partitioned as;
  /dev/vdb1               1        2082     1049296+  83  Linux
  /dev/vdb2            2083      6241     2096136   83  Linux

DNS resolution is done through /etc/hosts on both systems by mapping hostname <--> IP address (instead of using FQDNs), as;
 172.16.20.46 node1
 172.16.20.47 node2

B. SELinux and IPTABLES:


B1. Setting SELinux to permissive (Not a very good idea though!! :) )
# setenforce 0

B2. Allow DRBD port to accept connection from all (test only, change it to specific IPs 
       to your need)
# iptables -I INPUT 4 -p tcp --dport 7788 -j ACCEPT
# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [  OK  ]
 
     # service iptables restart
iptables: Setting chains to policy ACCEPT: filter           [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]

C. Configuration:

Note:  Carry out the following steps on both the systems unless mentioned otherwise;

1. Set up ELRepo (not EPEL, as I could not find DRBD packages for the CentOS-6.6 release) on both systems;
# wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
# rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm

2. Check the yum repositories once;

# yum repolist
repo id                 repo name                               status
base                  CentOS-6 - Base                                       6,518
datastax              DataStax Repo for Apache Cassandra                         158
elrepo                ELRepo.org Community Enterprise Linux Repository - el6        334
epel                  Extra Packages for Enterprise Linux 6 - x86_64                   11,735
extras                CentOS-6 - Extras                                         38
updates               CentOS-6 - Updates                                         1,336

3. Install the packages;

# yum install drbd83-utils kmod-drbd83

4. Now, insert the DRBD module manually into the kernel on both systems (or reboot to make it effective);
# modprobe drbd
# lsmod | grep drbd
      drbd       332493  0 

5. Create the Distributed Replicated Block Device resource file (/etc/drbd.d/clusterdb.res):
resource clusterdb
{
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }

    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
    }

    syncer {
        rate 10M;
        al-extents 257;
        on-no-data-accessible io-error;
    }

    on node1 {
        device /dev/drbd0;
        disk /dev/vdb2;
        address 172.16.20.46:7788;
        flexible-meta-disk internal;
    }

    on node2 {
        device /dev/drbd0;
        disk /dev/vdb2;
        address 172.16.20.47:7788;
        meta-disk internal;
    }
}
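
Optionally, you can let drbdadm parse the resource file back to you as a quick sanity check before going further (it should print the clusterdb resource without errors):

# drbdadm dump clusterdb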

6. Copy the DRBD configuration file (viz. /etc/drbd.d/clusterdb.res) and the /etc/hosts entries to "node2".

7. Initialize the DRBD meta data storage on both machines;

# drbdadm create-md clusterdb

--==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 25728th user to install this version
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

8. Start the DRBD service on both systems;
# service drbd start

Starting DRBD resources: [ d(clusterdb) s(clusterdb) n(clusterdb) ]..........


       DRBD's startup script waits for the peer node(s) to appear.

       - In case this node was already a degraded cluster before the
        reboot the timeout is 30 seconds. [degr-wfc-timeout]
        - If the peer was available before the reboot the timeout will
         expire after 30 seconds. [wfc-timeout]
        (These values are for resource 'clusterdb'; 0 sec -> wait forever)
        To abort waiting enter 'yes' [  29]:
.

9. Check status as;

# service drbd status
or,
# cat /proc/drbd

10. As you can see, at the beginning both nodes are secondary, which is normal. We need to figure out   
      which one would act as a primary that will initiate the first 'full sync' between the two nodes; 

      In our case, we choose "node1", thus;
A. On node1:
# drbdadm -- --overwrite-data-of-peer primary clusterdb

11. Put a watch on the sync in progress;

# watch 'cat /proc/drbd'
Every 2.0s: cat /proc/drbd                                 Tue Jul 14 08:59:44 2015

version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 
      14:51:37
     0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
     ns:429056 nr:0 dw:0 dr:429720 al:0 bm:26 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1666980
[===>................] sync'ed: 20.8% (1666980/2096036)K
finish: 0:03:38 speed: 7,600 (9,128) K/sec

12. Once full sync is achieved, check the status on both the nodes;

A. Node1
# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-
            24 14:51:37
           0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
           ns:2096036 nr:0 dw:0 dr:2096700 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

B. Node2
[root@node2 ~]# cat /proc/drbd 
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-       
            24 14:51:37
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
           ns:0 nr:2096036 dw:2096036 dr:0 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

13. Create filesystem on Distributed Replicated Block Device device;

On the primary node i.e. node1:
# mkfs.ext4 /dev/drbd0

14. Now, mount DRBD device on primary node;

# mkdir /my-DRBD-Data
# mount /dev/drbd0 /my-DRBD-Data

15. Please note:  

One does not need to mount the disk on the secondary system explicitly. All data written to the "/my-DRBD-Data" folder will be synced to the other system, i.e. "node2".

16. Let's test that out;

16a. Unmount the "/my-DRBD-Data" folder from the primary node "node1". 
16b. Make the secondary node the primary node.
16c. Mount "/my-DRBD-Data" back on the second machine "node2"; you will see the same contents in the /my-DRBD-Data folder.
17. Actual test;

17a. Create some data on Node1:
# cd /my-DRBD-Data
# mkdir bijit
# echo "hi" > test1.txt
# echo "hello" > test2.txt
 
                 [root@node1 my-DRBD-Data]# ll
total 28
drwxr-xr-x. 2 root root  4096 Jul 14 09:20 bijit
drwx------. 2 root root 16384 Jul 14 09:15 lost+found
-rw-r--r--. 1 root root     3 Jul 14 09:20 test1.txt
-rw-r--r--. 1 root root     6 Jul 14 09:21 test2.txt

17b. Unmount the data on the primary Node:

[root@node1 /]# umount /my-DRBD-Data/

17c. Make the secondary node the primary:

[root@node1 /]# drbdadm secondary clusterdb
[root@node2 ~]# drbdadm -- --overwrite-data-of-peer primary clusterdb

17d. Mount back the "/my-DRBD-Data" on the second machine "node2";
[root@node2 ~]# mkdir /my-DRBD-Data
[root@node2 ~]# mount /dev/drbd0 /my-DRBD-Data/
 
  17e. Check the contents on Node2:
[root@node2 ~]# ll /my-DRBD-Data/
total 28
drwxr-xr-x. 2 root root  4096 Jul 14 09:20 bijit
drwx------. 2 root root 16384 Jul 14 09:15 lost+found
-rw-r--r--. 1 root root     3 Jul 14 09:20 test1.txt
-rw-r--r--. 1 root root     6 Jul 14 09:21 test2.txt

Now, delete and add some content on "node2", which now acts as the primary.

Unmount it and make it secondary. Switch to "node1", make it the primary, and mount it back. Check that the changes are replicated!

Monday, June 29, 2015

Two instances of Apache HTTPD

This write-up is about running multiple instances of Apache HTTPD on a Linux system. I tested with two instances, but a similar approach should work if more than two instances are required to run independently.

One can easily set up more than one instance by installing two or more HTTPD servers, but this piece is not about that; rather, it shows how one can copy and modify the existing set of files from a running Apache HTTPD server to set up a completely independent HTTPD instance.

Here is how;
[ It is assumed that one HTTPD server installed via yum already exists on the system. If not, install it with: yum install httpd ]

1. Copy all the files under /etc/httpd to a location of your choice. Provide a different name for the destination, e.g. httpd2;
# cp -a /etc/httpd /etc/httpd2

2. Copy the existing httpd init script as;
# cp /etc/init.d/httpd /etc/init.d/httpd2

3. Copy the existing sysconfig file for httpd as;
# cp /etc/sysconfig/httpd /etc/sysconfig/httpd2

4. Copy existing 'apachectl' as;
       cp -a /usr/sbin/apachectl /usr/sbin/apachectl2

5. Edit '/usr/sbin/apachectl2' as;
if [ -r /etc/sysconfig/httpd2 ]; then
    . /etc/sysconfig/httpd2
fi

6. Create soft link as;
# ln -s /usr/sbin/httpd /usr/sbin/httpd2

7. Edit the following  /etc/sysconfig/httpd2 as;
HTTPD=/usr/sbin/httpd2
OPTIONS="-f /etc/httpd2/conf/httpd.conf"
LOCKFILE=/var/lock/subsys/httpd2
PIDFILE=/var/run/httpd2/httpd2.pid

8. Edit /etc/init.d/httpd2 as;

if [ -f /etc/sysconfig/httpd2 ]; then
         . /etc/sysconfig/httpd2
fi

      And, also;
prog=httpd2

9. Edit the following in "/etc/httpd2/conf/httpd.conf" as;
a. Listen 90
b. ServerRoot "/etc/httpd2"
c. PidFile run/httpd2.pid
d. DocumentRoot "/opt/httpd2"
(Change this to your need)
e.


10. Do the following;
      a. # cd /etc/httpd2
b. # mkdir -p /var/run/httpd2
# chgrp apache /var/run/httpd2

      c. Remove the old link and create a new one with the current path;
# rm run
# ln -s /var/run/httpd2 run
d. # mkdir -p /var/log/httpd2
e. # rm logs
f. # ln -s /var/log/httpd2/ logs
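
Before starting the services, it may be worth checking that the second instance's configuration parses cleanly, using Apache's built-in syntax test:

# /usr/sbin/httpd2 -t -f /etc/httpd2/conf/httpd.conf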

11. Start the services;
[root@Fedora-14 httpd2]# service httpd start
Starting httpd:                                    [  OK  ]
[root@Fedora-14 httpd2]# service httpd2 start
Starting httpd2:                                   [  OK  ]

12. Check if the ports are listening;
[root@Fedora-14 httpd2]# netstat -apn | grep httpd
tcp        0      0 :::80        :::*       LISTEN      5390/httpd        
tcp        0      0 :::90        :::*       LISTEN      5413/httpd2

13. Check service status;
[root@Fedora-14 httpd2]# service httpd2 status
httpd2 (pid  5413) is running...
[root@Fedora-14 httpd2]# service httpd status
httpd (pid  5390) is running...

14. Stop the services;
[root@Fedora-14 httpd2]# service httpd stop
Stopping httpd:                                    [  OK  ]
[root@Fedora-14 httpd2]# service httpd2 stop
Stopping httpd2:                                   [  OK  ]
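
If the instances should come up at boot, the copied init script can be registered with chkconfig like the original; a sketch that relies on the chkconfig header /etc/init.d/httpd2 inherited from the stock httpd script:

# chkconfig --add httpd2
# chkconfig httpd2 on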

Thursday, March 19, 2015

MySQL Cluster Setup on CentOS-6.x

MySQL Cluster is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability.   
--- From Wikipedia


Note(s):
1. Tested on CentOS-6.6 (64-bit) OpenStack instances.
2. Iptables is ON (specific iptables rules are given below).
3. SELinux is permissive (setenforce 0).
4. SELinux Boolean values turned on for the MySQL user;
# setsebool -PV allow_user_mysql_connect on
# setsebool -PV mysql_connect_any on


Architecture:
I. Management Node  
IP: 172.16.20.21

II. Node - 1   (Data and SQL Node)
IP: 172.16.20.22  

III.        Node - 2   (Data and SQL Node)
IP: 172.16.20.23


A. Downloads:
For all nodes download the following package;
http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar .


B. Installation on Management Node (IP: 172.16.20.21):

1. Install the needed packages;
yum install perl libaio -y

2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs

3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm

4. Take a note of the initial root password of MySQL:
You will find that password in '/root/.mysql_secret'.

C. Installation on Data Node (IP: 172.16.20.22, 172.16.20.23):
1. Install the needed packages;
yum install perl libaio -y

2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs

3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm

4. Take a note of the initial root password of MySQL:
You will find that password in '/root/.mysql_secret'.

5. Install the additional MySQL client package on the SQL nodes:
rpm -ivh MySQL-Cluster-client-gpl-7.4.4-1.el6.x86_64.rpm


D. Installation on SQL Nodes (IP: 172.16.20.22, 172.16.20.23):

1. Install the needed packages;
yum install perl libaio -y

2. Remove the following mysql library as it would conflict with cluster packages;
yum remove mysql-libs

3. Extract the compressed package and install it;
i. tar -xvf MySQL-Cluster-gpl-7.4.4-1.el6.x86_64.rpm-bundle.tar
ii. rpm -ivh MySQL-Cluster-server-gpl-7.4.4-1.el6.x86_64.rpm

4. Take a note of the initial root password of MySQL:
You will find that password in '/root/.mysql_secret'.

5. Install the additional MySQL client package on the SQL nodes:
rpm -ivh MySQL-Cluster-client-gpl-7.4.4-1.el6.x86_64.rpm


E. Configurations:
A. Data and SQL Nodes;

# mkdir -p /usr/local/mysql/data
# chown -R mysql:mysql /usr/local/mysql/data

vi /etc/my.cnf
[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=172.16.20.21  # location of management server


B. Management node:
# mkdir /var/lib/mysql-cluster
# chown -R mysql:mysql /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster

vi config.ini

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
 # For DataMemory and IndexMemory, we have used the
 # default values. Since the "world" database takes up
 # only about 500KB, this should be more than enough for
 # this example Cluster setup.

ServerPort=50501  # This is to allocate a fixed port through which one node is connected to the other node within the cluster.
 # By default, this port is allocated dynamically in such a way as to ensure that no two nodes on the same host
 # computer receive the same port number.
 # To open specific ports in a firewall to permit communication between data nodes and API nodes (including SQL
 # nodes), you can set this parameter to the number of the desired port in an [ndbd] section or (if you need to do
 # this for multiple data nodes) the [ndbd default] section of the config.ini file, and then open the port having
 # that number for incoming connections from SQL nodes, API nodes, or both.

[tcp default]
# TCP/IP options:
#portnumber=1186   # This the default; however, you can use any
 # port that is free for all the hosts in the cluster
 # Note: It is recommended that you do not specify the port
 # number at all and simply allow the default value to be used
 # instead

[ndb_mgmd]
# Management process options:
hostname=172.16.20.21           # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

[ndbd]
# Options for data node "A":
hostname=172.16.20.22           # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
hostname=172.16.20.23           # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
hostname=172.16.20.22           # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
# SQL node options:
hostname=172.16.20.23           # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)



F. Iptables rules;
(Here, I have allowed connections from any source on the destination ports because there is another firewall on top of the cloud, so the ports are not exposed to the outside world. In an ideal setup, one should restrict the connection source.)

I. On Management Node (INPUT chain):

# iptables -I INPUT -i eth0 -p tcp --dport 1186 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 3306 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 50501 -j ACCEPT
# service iptables save
# service iptables restart

II. On Data and SQL Nodes (INPUT chain):
# iptables -I INPUT -i eth0 -p tcp --dport 3306 -j ACCEPT

G. Start the services (in a sequence);

1. Management Node:
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini

2. Data Node:
# ndbd

3. MySQL
# service mysql start

On the SQL nodes, change the default MySQL root password;
# mysqladmin -u root -p'oldpassword' password newpass
(Check the old password in '/root/.mysql_secret')

H. Connect to the MySQL Cluster management console and check the status (if everything works fine, you should see something similar to the following);

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.16.20.22  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)
id=3    @172.16.20.23  (mysql-5.6.23 ndb-7.4.4, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.16.20.21  (mysql-5.6.23 ndb-7.4.4)

[mysqld(API)]   2 node(s)
id=4    @172.16.20.22  (mysql-5.6.23 ndb-7.4.4)
id=5    @172.16.20.23  (mysql-5.6.23 ndb-7.4.4)

ndb_mgm>
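
As a quick functional test (the database and table names here are only illustrative), create an NDB-backed table on one SQL node and confirm it is visible from the other:

mysql> CREATE DATABASE clustertest;
mysql> USE clustertest;
mysql> CREATE TABLE t1 (id INT NOT NULL PRIMARY KEY) ENGINE=NDBCLUSTER;
mysql> INSERT INTO t1 VALUES (1);

Then, on the second SQL node, "SELECT * FROM clustertest.t1;" should return the same row.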


I. Log files:
1. Management Node:
/var/lib/mysql-cluster

2. Data/SQL Nodes:
/usr/local/mysql/data/