How to Setup GFS2 or GFS in Linux Centos
Posted on April 22 by Clay

It was a nightmare for me to set up GFS2 with my 3 servers and 1 SAN storage. I read all over the internet, and the solutions were either outdated or contained bugs that kept my SAN storage from working. Finally, I managed to set up GFS2 on my Dell MD3200i with 10TB of disk space.
GFS2/GFS Test Environment
Here is the test environment equipment that I used for this setup.
3 CentOS web servers
1 Dell MD3200i SAN storage
1 switch to connect all this equipment together
Assumptions
I will assume you have already set up all 3 CentOS servers to communicate with your SAN iSCSI storage. This means that all 3 CentOS servers can see your newly created LUN using iscsiadm, and that you have switched off iptables and SELinux.
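For reference, a minimal check that each node really sees the LUN; the portal IP 10.10.10.100 is just an example, use your own SAN's iSCSI portal address:
iscsiadm -m discovery -t sendtargets -p 10.10.10.100 # discover the targets offered by the SAN
iscsiadm -m node --login # log in to all discovered targets
fdisk -l # the new LUN should now show up, e.g. as /dev/sdb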
Setup GFS2/GFS packages
On all 3 of your CentOS servers, you must install the following packages:
cman
gfs-utils
kmod-gfs
kmod-dlm
modcluster
ricci
luci
cluster-snmp
iscsi-initiator-utils
openais
oddjob
rgmanager
Or you can simply run the following yum command on all 3 CentOS machines:
yum install -y cman gfs-utils kmod-gfs kmod-dlm modcluster ricci luci cluster-snmp iscsi-initiator-utils openais oddjob rgmanager
Or, even simpler, you can just install the cluster package groups with the following lines:
yum groupinstall -y Clustering
yum groupinstall -y "Storage Cluster"
Oh, and remember to update your CentOS before proceeding with any of the above:
yum -y check-update
yum -y update
After you have done all of the above, you should have all the packages needed to set up GFS2/GFS on all 3 of your CentOS machines.
Configuring GFS2/GFS Cluster on Centos
Once you have the required CentOS packages installed, you need to configure your machines. First, set up the hosts file on each server with all 3 machine names. I appended all 3 machine names on every server, so each machine has the following additional lines in its /etc/hosts file:
111.111.111.1 gfs1.hungred.com
111.111.111.2 gfs2.hungred.com
111.111.111.3 gfs3.hungred.com
where *.hungred.com are the machine names and the IPs beside them are the addresses each machine uses to communicate with the others.
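A quick sanity check, using the example names above: every node should be able to reach the others by name.
ping -c 2 gfs1.hungred.com
ping -c 2 gfs2.hungred.com
ping -c 2 gfs3.hungred.com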
Next, we need to set up the cluster configuration. On each machine, execute the following instructions to create a proper cluster configuration:
ccs_tool create HungredCluster
ccs_tool addfence -C node1_ipmi fence_ipmilan ipaddr=111.111.111.1 login=root passwd=machine_1_password
ccs_tool addfence -C node2_ipmi fence_ipmilan ipaddr=111.111.111.2 login=root passwd=machine_2_password
ccs_tool addfence -C node3_ipmi fence_ipmilan ipaddr=111.111.111.3 login=root passwd=machine_3_password
ccs_tool addnode -C gfs1.hungred.com -n 1 -v 1 -f node1_ipmi
ccs_tool addnode -C gfs2.hungred.com -n 2 -v 1 -f node2_ipmi
ccs_tool addnode -C gfs3.hungred.com -n 3 -v 1 -f node3_ipmi
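To double-check the result, ccs_tool can list back what it wrote into /etc/cluster/cluster.conf:
ccs_tool lsnode # should list gfs1, gfs2 and gfs3 with their votes and fence devices
ccs_tool lsfence # should list node1_ipmi, node2_ipmi and node3_ipmi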
Next, you will need to start cman and rgmanager.
service cman start
service rgmanager start
cman should start without any errors; if you get any error while starting cman, your GFS2/GFS will not work. If everything works fine, you should see the following when you type the command shown below:
[root@localhost ]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     16   2011-01-06 02:30:27  gfs1.hungred.com
   2   M     20   2011-01-06 02:30:02  gfs2.hungred.com
   3   M     24   2011-01-06 02:36:01  gfs3.hungred.com
If you see output like the above, you have properly set up your GFS2 cluster. Next, we need to set up GFS2 itself!
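You can also confirm that the cluster is quorate with either of these commands (clustat ships with rgmanager):
cman_tool status # look for the Quorum line in the output
clustat # shows each node and whether it is Online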
Setting up GFS2/GFS on Centos
You will need to start the following services.
service gfs start
service gfs2 start
Once these two have started, all you need to do is create the file system on your SAN storage LUN. If you want to use GFS2, format it with gfs2:
/sbin/mkfs.gfs2 -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb
Likewise, if you would like to use GFS, just change gfs2 to gfs:
/sbin/mkfs.gfs -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb
A little explanation here. HungredCluster is the cluster we created while setting up our GFS2 cluster above. /dev/sdb is the SAN storage LUN that was discovered using iscsiadm. -j 10 is the number of journals; each machine within the cluster requires 1 journal, so it is good to determine in advance the number of machines you will place into this cluster. -p lock_dlm is the lock type we will be using; there are 2 other lock types besides lock_dlm, which you can search for online.
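If the cluster later outgrows the 10 journals created here, gfs2_jadd can add journals to a mounted file system. A sketch, assuming the /home mount point used later in this guide:
gfs2_jadd -j 2 /home # add 2 more journals to the mounted GFS2 file system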
P.S.: All of the servers that will belong to the GFS cluster need to be located in the same VLAN. If you are only configuring two servers in the cluster, you will need to manually edit the /etc/cluster/cluster.conf file on each server; without this change, the servers will not be able to establish a quorum and will refuse to cluster, by design. The exact snippet was lost from the original page, but the standard two-node setting in Red Hat clustering is to add these attributes on the cman element:
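<cman two_node="1" expected_votes="1"/> <!-- two-node quorum override; standard Red Hat cluster setting -->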
Setup GFS2/GFS to run on startup
Key in the following to ensure that GFS2/GFS starts every time the system reboots:
chkconfig gfs on
chkconfig gfs2 on
chkconfig clvmd on # only if you are using LVM
chkconfig cman on
chkconfig iscsi on
chkconfig acpid off
chkconfig rgmanager on
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >>/etc/fstab
mount /dev/sdb
Once this is done, your GFS2/GFS volume will be mounted at /home on every boot. You can check whether it works using the following command.
[root@localhost ~]# df -h
You should now be able to create files on one of the nodes in the cluster, and have the files appear right away on all the other nodes in the cluster.
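A quick way to test this, assuming the /home mount point from above:
[root@gfs1 ~]# touch /home/hello-from-gfs1
[root@gfs2 ~]# ls /home
hello-from-gfs1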
Optimize GFS2/GFS
There are a few ways to optimize your GFS file system. Here are some of them. First, set your plock rate to unlimited and plock ownership to 1 in /etc/cluster/cluster.conf. The original snippet was lost from the page; below is a reconstruction based on the LinuxDynasty guide cited in the references, where a rate limit of 0 means unlimited:
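<gfs_controld plock_rate_limit="0" plock_ownership="1"/> <!-- reconstructed; placement may vary by cluster version -->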
Set noatime and nodiratime in your fstab.
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >>/etc/fstab
Lastly, we can tune GFS2 directly by decreasing how often it demotes its locks. These tunables do not survive a reboot, so append them to /etc/rc.local. The paths below use the /home mount point from this guide; the original post used /GFS, so substitute your own mount point:
echo "
gfs2_tool settune /home demote_secs 20
gfs2_tool settune /home quota_account 0
gfs2_tool settune /home statfs_fast 1
gfs2_tool settune /home statfs_slots 128
" >> /etc/rc.local
Credit goes to LinuxDynasty.
iptables and GFS2/GFS ports
If you wish to keep iptables active, you will need to open the following ports. Note that the original rules passed an IP address to -i, which iptables does not accept; -i takes an interface name, so eth0 below is a placeholder for your cluster-facing interface, and 10.10.10.0/24 is the cluster subnet.
-A INPUT -i eth0 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 5404,5405 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 8084 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 11111 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 14567 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 16851 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 21064 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 41966,41967,41968,41969 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50006,50008,50009 -j ACCEPT
-A INPUT -i eth0 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50007 -j ACCEPT
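These lines belong in /etc/sysconfig/iptables; afterwards, reload the firewall and spot-check one of the rules:
service iptables restart
iptables -L -n | grep 21064 # 21064 is the dlm TCP port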
Once these ports are open in your iptables, cman should be able to restart properly without getting stuck at either the fencing or the cman starting point. Good luck!
Troubleshooting
You might face some problems setting up GFS2 or GFS. Here are some of them, which might be of some help.
CMAN fencing failed
You get something like the following when you start cman:
Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... failed
One possibility is that your GFS2 volume has already been mounted to a drive, hence fencing failed. Try to unmount it and start cman again.
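In other words, something along these lines, using the /home mount point from this guide:
mount | grep gfs2 # is the volume already mounted?
umount /home # if so, unmount it
service cman start # then try starting cman again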
mount.gfs2 error
If you are getting the following error:
mount.gfs2: can't connect to gfs_controld: Connection refused
you need to start the cman service.
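gfs_controld is started as part of cman, so starting cman and retrying the mount usually clears this up:
service cman start
mount /dev/sdb # retry the mount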
References
http://knowledgelayer.softlayer.com/questions/443/GFS+howto
http://www.linuxdynasty.org/howto-setup-gfs2-with-clustering.html
http://gcharriere.com/blog/?tag=gfs2
http://securfox.wordpress.com/2009/08/11/how-to-setup-gfs/
http://pbraun.nethence.com/doc/filesystems/gfs2.html