GlusterFS server
Making a redundant, replicated 2-node Gluster service
- Check that every brick in the Gluster setup has the same entries in its hosts file, or that all nodes resolve correctly from your DNS (see the example hosts entries after this list).
- Install the EPEL repository: http://fedoraproject.org/wiki/EPEL
- Allow traffic between the bricks and from bricks to clients; open at least TCP ports 24007:24047, 111 and 38465:38467, plus UDP port 111 (an example iptables sketch follows this list).
- Install packages and fire up Gluster on both servers / bricks:
<code>
yum install fuse fuse-libs glusterfs glusterfs-server glusterfs-fuse glusterfs-geo-replication
chkconfig glusterd on
service glusterd start
gluster peer probe <other node ip>
gluster peer status
</code>
- You should see information about the other peer(s); check this on both bricks.
- Make and mount the volume to use for sharing (remember to add it to fstab; see the sketch after this list). You can make many logical volumes or a single one, and you can create several Gluster volumes inside one logical volume so they share the same capacity:
<code>
lvcreate -L1000G -n Glustervol1 VolGroup00
mkfs.ext4 /dev/mapper/VolGroup00-Glustervol1   # create a filesystem on the LV first (ext4 assumed here)
mkdir /mnt/gluster
mount /dev/mapper/VolGroup00-Glustervol1 /mnt/gluster
mkdir /mnt/gluster/vol1
mkdir /mnt/gluster/vol2
gluster volume create vol1 replica 2 transport tcp <1st brick ip>:/mnt/gluster/vol1/ <2nd brick ip>:/mnt/gluster/vol1/
gluster volume create vol2 replica 2 transport tcp <1st brick ip>:/mnt/gluster/vol2/ <2nd brick ip>:/mnt/gluster/vol2/
# optionally restrict access to clients in a given network:
# gluster volume set vol1 auth.allow 10.*
# gluster volume set vol2 auth.allow 10.*
gluster volume start vol1
gluster volume start vol2
gluster volume info vol1
gluster volume info vol2
</code>
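As a reference for the name-resolution check above, a minimal /etc/hosts sketch for the two bricks might look like this; the hostnames and 10.0.0.x addresses are placeholders, use your own:
<code>
# /etc/hosts on both bricks (example addresses and names only)
10.0.0.11   gluster1.example.com   gluster1
10.0.0.12   gluster2.example.com   gluster2
</code>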
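For the firewall step above, a rough iptables sketch on RHEL/CentOS could look like the following; the 10.0.0.0/8 source network is only an example, adjust it to your own environment:
<code>
# allow Gluster traffic from the other brick and from the clients (example source network 10.0.0.0/8)
iptables -I INPUT -p tcp -s 10.0.0.0/8 --dport 24007:24047 -j ACCEPT
iptables -I INPUT -p tcp -s 10.0.0.0/8 --dport 38465:38467 -j ACCEPT
iptables -I INPUT -p tcp -s 10.0.0.0/8 --dport 111 -j ACCEPT
iptables -I INPUT -p udp -s 10.0.0.0/8 --dport 111 -j ACCEPT
service iptables save
</code>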
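"Remember fstab" above refers to the logical volume that backs the bricks; one possible entry on the servers, assuming the ext4 filesystem created in the sketch, would be:
<code>
/dev/mapper/VolGroup00-Glustervol1  /mnt/gluster  ext4  defaults  1 2
</code>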
Gluster native client
Assuming we access the volumes over Ethernet (TCP) only.
<code>
yum install glusterfs glusterfs-fuse
modprobe fuse
dmesg | grep -i fuse
</code>
- There has been some speculation that disabling transparent hugepages makes Gluster run more stably; most likely this can be ignored, but just in case:
<code>
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
</code>
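Note that the setting above does not survive a reboot; if you do decide to keep it, one way (assuming a RHEL/CentOS-style /etc/rc.local that runs at the end of boot) is to add the same line there:
<code>
# /etc/rc.local addition (only if you chose to disable transparent hugepages)
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
</code>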
- Put the mount straight into /etc/fstab. Use _netdev so the mount happens later in the startup sequence, and never use the localhost address, even if the server and client run on the same machine: the server may not have finished starting by the time the mount is attempted. Fstab entry:
<code>
<brick ip>:/vol1 /home/directory glusterfs defaults,_netdev 0 0
</code>
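After adding the fstab entry you can test it right away without a reboot, for example:
<code>
mount /home/directory
df -h /home/directory
mount | grep -i gluster
</code>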