Showing posts from January, 2014

How to remove GlusterFS Volumes

To remove a GlusterFS volume from the servers, follow these steps:

These steps need to be run on every server that is part of the Gluster cluster:

[root@ip-10-138-150-225 ~]# setfattr -x trusted.glusterfs.volume-id /data/share
[root@ip-10-138-150-225 ~]# setfattr -x trusted.gfid /data/share
[root@ip-10-138-150-225 ~]# service glusterd stop
[  OK  ]
[root@ip-10-138-150-225 ~]# cd /data/share/
[root@ip-10-138-150-225 share]# ls -a
.  ..  a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3  .glusterfs
[root@ip-10-138-150-225 share]# rm -rf .glusterfs
[root@ip-10-138-150-225 share]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3
[root@ip-10-138-150-225 share]# cd /var/lib/glusterd/
[root@ip-10-138-150-225 glusterd]# ls
glusterd.info  glustershd  groups  hooks  nfs  options  peers  vols
[root@ip-10-138-150-225 glusterd]# rm -rf *
[root@ip-10-138-150-225 glusterd]# service glusterd start
Starting glusterd:                                …
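The brick-cleanup part of the transcript above can be collected into a small helper. This is a minimal sketch: the function name clean_brick and its path argument are my own, and the setfattr calls are allowed to fail silently (for example when the attributes are already gone or the script is not run as root).

```shell
# clean_brick: strip the GlusterFS metadata from a brick directory so the
# directory can be reused in a new volume.  Run on every server in the cluster.
clean_brick() {
    brick=$1
    # Remove the xattrs Gluster stamps on a brick root; ignore failures
    # (attribute absent, or insufficient privileges for trusted.* xattrs).
    setfattr -x trusted.glusterfs.volume-id "$brick" 2>/dev/null || true
    setfattr -x trusted.gfid "$brick" 2>/dev/null || true
    # Delete only the hidden metadata tree; user files in the brick survive.
    rm -rf "$brick/.glusterfs"
}
```

Note that this only cleans the brick itself; wiping /var/lib/glusterd and restarting glusterd, as in the transcript, is still a separate step.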

How to configure RAID 0 in AWS with GlusterFS to have high availability

Here is the procedure to configure RAID 0 on AWS EBS volumes for high performance, combined with GlusterFS for high availability.

This process can be used to provide central storage on AWS as well as on physical servers, since some applications require central storage.

We are using two Amazon instances, each with four EBS volumes attached, and will configure RAID 0 across those volumes for good throughput.
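The striping step itself can be sketched with mdadm. The choices here are illustrative assumptions: /dev/md0 as the md device name, ext4 as the filesystem, and /data/share as the mount point; the xvdb–xvde device names come from the lsblk output shown further down.

```shell
# Stripe the four EBS volumes into a single RAID 0 device (run as root on each server)
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
mkfs.ext4 /dev/md0       # format the striped device
mkdir -p /data/share     # this directory will later serve as the GlusterFS brick
mount /dev/md0 /data/share
```

RAID 0 on its own has no redundancy; losing any one EBS volume loses the whole stripe, which is exactly why GlusterFS replication between the two servers is layered on top.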

Server 1 : ip-10-128-50-246
Server 2 : ip-10-138-150-225

Check the attached EBS volumes on each server:

[ec2-user@ip-10-128-50-246 ~]$ hostname
ip-10-128-50-246
[ec2-user@ip-10-128-50-246 ~]$ lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb  202:16   0   5G  0 disk
xvdc  202:32   0   5G  0 disk
xvdd  202:48   0   5G  0 disk
xvde  202:64   0   5G  0 disk
xvda1 202:1    0   8G  0 disk /

[root@ip-10-138-150-225 ec2-user]# hostname
ip-10-138-150-225
[root@ip-10-138-150-225 ec2-user]# lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb  202:16   0   5G  0 disk
xvdc  202:32   0   5G  0 disk
xvdd  202:48   0   5G  …
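Once a RAID 0 device is built and mounted on both servers, the high-availability half of the setup comes from replicating the two bricks with GlusterFS. A sketch of that step, assuming the bricks are mounted at /data/share on both servers and using an illustrative volume name gvol:

```shell
# On server 1: add the second server to the trusted storage pool
gluster peer probe ip-10-138-150-225

# Create a 2-way replicated volume across the two RAID 0 bricks, then start it
gluster volume create gvol replica 2 \
      ip-10-128-50-246:/data/share ip-10-138-150-225:/data/share
gluster volume start gvol

# Clients mount the replicated volume; either server can go down
# without the data becoming unavailable
mount -t glusterfs ip-10-128-50-246:/gvol /mnt
```

Each file written to /mnt is stored on both servers, so RAID 0 supplies the throughput within a server while the GlusterFS replica supplies availability across servers.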