So you've reached the point where you have run out of storage and need to scale out or allocate more storage to your GlusterFS volume.
Previous Posts:
From our GlusterFS Series we have covered the following:
- GlusterFS: Distributed Replicated Volume
- GlusterFS: Distributed Storage Volume
- GlusterFS: Replicated Storage Volume
- GlusterFS: Adding Bricks to your Volume
- GlusterFS: Replace Faulty Bricks
Adding Bricks to your GlusterFS Volume
In this tutorial we will allocate another block device to each node in our GlusterFS volume. We are currently running a Distributed-Replicated volume with a replica count of 2, which means bricks must be added in multiples of the replica count (in our case, two at a time, one per node).
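If you are unsure of your volume's current layout, you can check the brick count first. With our pre-expansion layout of two replica pairs, the output should look something like this:
$ sudo gluster volume info gfs | grep 'Number of Bricks'
Number of Bricks: 2 x 2 = 4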
First we create and attach the block volumes to the instances; next we format and mount them:
- Node1: /dev/xvdi => /gluster/f
- Node2: /dev/xvdi => /gluster/g
Prepare the disk on Node1:
$ sudo mkfs.xfs /dev/xvdi
$ sudo mkdir /gluster/f
$ sudo mount /dev/xvdi /gluster/f
$ sudo mkdir /gluster/f/brick
Prepare the disk on Node2:
$ sudo mkfs.xfs /dev/xvdi
$ sudo mkdir /gluster/g
$ sudo mount /dev/xvdi /gluster/g
$ sudo mkdir /gluster/g/brick
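Note that these mounts will not persist across a reboot unless you also add them to /etc/fstab. A minimal sketch for Node1 (repeat on Node2 with its device and mount point; using the UUID reported by sudo blkid /dev/xvdi is more robust than the device name):
$ echo '/dev/xvdi /gluster/f xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab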
Review the Disk Layout
Review the block disk layout:
$ sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdg    202:96   0  50G  0 disk /gluster/b
xvdh    202:112  0  50G  0 disk /gluster/e
xvdi    202:128  0  50G  0 disk /gluster/f
Verify that each disk is mounted at its mount point:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            488M     0  488M   0% /dev
/dev/xvda1      7.7G  1.3G  6.5G  17% /
/dev/xvdg        50G   33M   50G   1% /gluster/b
/dev/xvdh        50G   33M   50G   1% /gluster/e
/dev/xvdi        50G   33M   50G   1% /gluster/f
localhost:/gfs  100G   65M  100G   1% /mnt
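The localhost:/gfs entry is the GlusterFS volume itself, mounted as a client at /mnt. For reference, that mount would have been created along these lines in the earlier posts of this series (shown here as a sketch):
$ sudo mount -t glusterfs localhost:/gfs /mnt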
Add the Bricks to your Volume:
Add the bricks that you prepared to the GlusterFS Volume:
$ sudo gluster volume add-brick gfs \
ip-172-31-44-169:/gluster/f/brick \
ip-172-31-47-175:/gluster/g/brick
volume add-brick: success
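As an aside, add-brick can also change the replica count at the same time by passing the new count before the brick list. A hypothetical sketch only (the node ip-172-31-50-200 and its brick paths are made up): going from replica 2 to 3 on this volume would need one new brick per replica set, on a third peer-probed node so that replicas don't share a server:
$ sudo gluster volume add-brick gfs replica 3 \
    ip-172-31-50-200:/gluster/b/brick \
    ip-172-31-50-200:/gluster/e/brick \
    ip-172-31-50-200:/gluster/f/brick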
Confirm that the volume has picked up the new bricks; the mounted volume has grown from 100G to 150G:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            488M     0  488M   0% /dev
tmpfs           100M  4.4M   95M   5% /run
/dev/xvda1      7.7G  1.3G  6.5G  17% /
tmpfs           496M     0  496M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
/dev/xvdg        50G   33M   50G   1% /gluster/b
tmpfs           100M     0  100M   0% /run/user/1000
localhost:/gfs  150G   98M  150G   1% /mnt
/dev/xvdh        50G   33M   50G   1% /gluster/e
/dev/xvdi        50G   33M   50G   1% /gluster/f
Let's have a look at the GlusterFS volume layout, where we should now see the bricks that we added:
$ sudo gluster volume info gfs
Volume Name: gfs
Type: Distributed-Replicate
Volume ID: 4b0d3931-73be-4dff-b1a5-56d791fccaea
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: ip-172-31-44-169:/gluster/e/brick
Brick2: ip-172-31-47-175:/gluster/c/brick
Brick3: ip-172-31-44-169:/gluster/b/brick
Brick4: ip-172-31-47-175:/gluster/d/brick
Brick5: ip-172-31-44-169:/gluster/f/brick
Brick6: ip-172-31-47-175:/gluster/g/brick
Options Reconfigured:
performance.readdir-ahead: on
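Before rebalancing, it's also worth confirming that the brick processes for the new bricks actually started; every brick should report Online as 'Y' in the status output (output omitted here):
$ sudo gluster volume status gfs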
Rebalancing GlusterFS Volumes:
When you expand or shrink a volume with the add-brick and remove-brick commands, the data is not migrated automatically; you need to rebalance the data among the servers. In a replicated volume, at least one brick in each replica set should be up for the rebalance to run.
$ sudo gluster volume rebalance gfs start
volume rebalance: gfs: success: Rebalance on gfs has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: e1d2a828-647e-4f0b-a172-2a27f4f7d6b7
Having a look at the status:
$ sudo gluster volume rebalance gfs status
Node              Rebalanced-files  size     scanned  failures  skipped  status     run time in secs
----------------  ----------------  -------  -------  --------  -------  ---------  ----------------
localhost         2                 17Bytes  5        0         0        completed  0.00
ip-172-31-47-175  0                 0Bytes   0        0         0        completed  0.00
volume rebalance: gfs: success
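On a volume with real data the rebalance can take a while; a convenient way to poll it is plain GNU watch (nothing GlusterFS-specific about this):
$ watch -n 10 'sudo gluster volume rebalance gfs status'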
You can also run a fix-layout rebalance, which only recalculates the directory layout so that new files can land on the added bricks, without migrating existing data (note that the fix-layout option has been deprecated for versions < 2.4). The output should look like this:
$ sudo gluster volume rebalance gfs fix-layout start
volume rebalance: gfs: success: Rebalance on gfs has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: d48cc72e-25d3-495b-bad7-e716f3b336b1
$ sudo gluster volume rebalance gfs status
Node              Rebalanced-files  size    scanned  failures  skipped  status                run time in secs
----------------  ----------------  ------  -------  --------  -------  --------------------  ----------------
localhost         0                 0Bytes  0        0         0        fix-layout completed  0.00
ip-172-31-47-175  0                 0Bytes  0        0         0        fix-layout completed  1.00
volume rebalance: gfs: success
And that is an overview of how bricks are added to your GlusterFS volume.
Thanks
Thanks for reading :)