XtreemFS for Distributed Storage
XtreemFS is a distributed filesystem and it's pretty awesome!
Its features include, but are not limited to:
- XtreemFS handles failures automatically, so a failed node does not require manual intervention.
- Fault-tolerant replication gives you peace of mind.
- XtreemFS is also scalable: you can add storage nodes (OSDs) to the filesystem as your storage needs grow.
In this tutorial I will be working with 3 nodes to demonstrate how the distributed setup works.
Our Environment:
xtrmfs1: 172.31.10.226 (master)
xtrmfs2: 172.31.10.227
xtrmfs3: 172.31.10.228
Set up XtreemFS on each node:
$ cd /etc/yum.repos.d/
$ wget "http://download.opensuse.org/repositories/home:/xtreemfs/CentOS_6/home:xtreemfs.repo"
$ yum install xtreemfs-client xtreemfs-server -y
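To confirm the packages installed cleanly (just a quick sanity check), query rpm:
$ rpm -q xtreemfs-client xtreemfs-server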
Configuration:
We will update dir_service.host to point at our xtrmfs1 node on each of our targeted XtreemFS nodes:
$ sed -i 's/dir_service.host = localhost/dir_service.host = xtrmfs1/g' /etc/xos/xtreemfs/osdconfig.properties
$ sed -i 's/dir_service.host = localhost/dir_service.host = xtrmfs1/g' /etc/xos/xtreemfs/mrcconfig.properties
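To double-check that the substitution took effect, grep the properties files; on every node you should see something like this:
$ grep dir_service.host /etc/xos/xtreemfs/osdconfig.properties /etc/xos/xtreemfs/mrcconfig.properties
/etc/xos/xtreemfs/osdconfig.properties:dir_service.host = xtrmfs1
/etc/xos/xtreemfs/mrcconfig.properties:dir_service.host = xtrmfs1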
Once this is done on all of our nodes, let's start XtreemFS on the first node (xtrmfs1) only:
Start XtreemFS:
# start the directory (DIR), metadata (MRC) and object storage device (OSD) services
$ /etc/init.d/xtreemfs-dir start
$ /etc/init.d/xtreemfs-mrc start
$ /etc/init.d/xtreemfs-osd start
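If you want to confirm that all three services are up and listening (assuming the stock default ports: 32638 for the DIR, 32636 for the MRC and 32640 for the OSD), a quick check:
$ netstat -tlnp | grep -E '32636|32638|32640'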
Also ensure that the FUSE kernel module is loaded:
$ modprobe fuse
Once everything is started, you can view the status page and statistics at http://localhost:30638
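You can also fetch that status page from another machine to confirm it is reachable over the network (swap localhost for the DIR host, xtrmfs1 in this setup):
$ curl -s http://xtrmfs1:30638/ | head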
Creating the Volume:
$ mkfs.xtreemfs xtrmfs1/vol1
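If you want to confirm that the volume was registered, you can list the volumes on the MRC (which is also running on xtrmfs1 in this setup) with lsfs.xtreemfs:
$ lsfs.xtreemfs xtrmfs1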
Creating the Mountpoint Directory:
$ mkdir /vol1
Mount XtreemFS:
$ mount.xtreemfs xtrmfs1/vol1 /vol1
View Disk Space:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 1.2G 6.5G 16% /
xtreemfs@xtrmfs1/vol1 7.8G 1.2G 6.5G 16% /vol1
At this point XtreemFS is only running on node xtrmfs1, so we are only seeing the disk space of xtrmfs1. Now we will continue on node xtrmfs2.
Set up the second node:
$ /etc/init.d/xtreemfs-dir start
$ /etc/init.d/xtreemfs-mrc start
$ /etc/init.d/xtreemfs-osd start
$ modprobe fuse
$ mkdir /vol1
$ mount.xtreemfs xtrmfs1/vol1 /vol1
Once this is mounted, view the disk space:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 1.2G 6.5G 16% /
xtreemfs@xtrmfs1/vol1 16G 2.4G 13G 16% /vol1
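Because both nodes now mount the same volume, a file written on one node should be visible on the other straight away. A quick sanity check (hello.txt is just an example name):
On xtrmfs1:
$ echo "hello from xtrmfs1" > /vol1/hello.txt
On xtrmfs2:
$ cat /vol1/hello.txt
hello from xtrmfs1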
Set up the last node:
At this point we can see that our available storage increased from 6.5G to 13G. Continue on our last node and do the same, then verify your disk space. In my scenario it looked like this:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 1.2G 6.5G 16% /
xtreemfs@xtrmfs1/vol1 24G 3.5G 20G 16% /vol1
Replication:
Let's set up a replication factor of 3 with a read/write (quorum) replication policy on the volume:
$ xtfsutil --set-drp --replication-policy quorum --replication-factor 3 /vol1
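To verify that the policy is now set as the volume default, run xtfsutil against the mount point; it should print the volume information, including the default replication policy:
$ xtfsutil /vol1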
Copy/Write a file to our XtreemFS Volume:
$ echo "test" > /vol1/test.txt
Now, check the replicas for the file:
$ xtfsutil /vol1/test.txt
Path (on volume) /test.txt
XtreemFS file Id b67e6dc8-5c2c-4089-babf-f21f11d6db99:14
XtreemFS URL pbrpc://xtrmfs1:32638/vol1/test.txt
Owner root
Group root
Type file
Replication policy WqRq
XLoc version 0
Replicas:
Replica 1
Striping policy STRIPING_POLICY_RAID0 / 1 / 128kB
OSD 1 f51d0f71-fa55-4280-b00d-70ed9dcd98cf (172.31.10.228:32640)
Replica 2
Striping policy STRIPING_POLICY_RAID0 / 1 / 128kB
OSD 1 1bda047c-cc0b-4126-904e-4738f967f833 (172.31.10.226:32640)
Replica 3
Striping policy STRIPING_POLICY_RAID0 / 1 / 128kB
OSD 1 2fc25d56-1511-462d-b7ae-bb3cbeaa9226 (172.31.10.227:32640)
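With three replicas and the WqRq (quorum) policy, the file should survive a single OSD going down. A minimal failover sketch, using the nodes from this setup: stop the OSD on xtrmfs3 (which holds one of the replicas above), then read the file from xtrmfs1.
On xtrmfs3:
$ /etc/init.d/xtreemfs-osd stop
On xtrmfs1, the file is still readable through the remaining replicas:
$ cat /vol1/test.txt
test
Don't forget to start the OSD on xtrmfs3 again afterwards:
$ /etc/init.d/xtreemfs-osd start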
Have a look at a Replication with Failover demo.
For more information on replication, check out the XtreemFS docs.