How to monitor glusterfs volumes
Solution 1:
This has been a request to the GlusterFS developers for a while now, and there is no out-of-the-box solution you can use yet. However, with a few scripts it's not impossible.
Pretty much the entire Gluster system is managed by a single gluster command, and with a few options you can write your own health-monitoring scripts. See here for listing info on bricks and volumes -- http://gluster.org/community/documentation/index.php/Gluster_3.2:_Displaying_Volume_Information
To monitor performance, look at this link -- http://gluster.org/community/documentation/index.php/Gluster_3.2:_Monitoring_your_GlusterFS_Workload
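In practice, the health commands boil down to "gluster peer status" and "gluster volume info", and the workload commands (per the link above) to "gluster volume profile" and "gluster volume top". All of them print plain text, so a monitoring script is mostly parsing. A minimal sketch of that parsing step, where the sample output is made up for illustration (a real script would capture the output of "sudo gluster peer status" instead):

```shell
#!/bin/bash
# Sketch: count peers that are not in the "Connected" state.
# The sample text below is hypothetical; in a real check, replace it with:
#   sample=$(sudo gluster peer status)
sample="Hostname: gluster2.example.com
State: Peer in Cluster (Connected)
Hostname: gluster3.example.com
State: Peer in Cluster (Disconnected)"

# Each peer's state line ends with the state in parentheses, so a simple
# grep count of lines ending in "(Disconnected)" is enough here.
disconnected=$(printf '%s\n' "$sample" | grep -c '(Disconnected)$')
echo "disconnected peers: $disconnected"
```

The same approach (grep for a known line prefix, then extract the field you need) is what the full script in Solution 3 below uses.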
UPDATE: Do consider upgrading to http://gluster.org/community/documentation/index.php/About_GlusterFS_3.3
You are generally better off on the latest release, since newer releases have more bug fixes and are better supported. Of course, run your own tests before moving to a newer release -- http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/ :)
There is an admin guide with a section dedicated to monitoring your GlusterFS 3.3 installation (Chapter 10) -- http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
See here for another Nagios script -- http://code.google.com/p/glusterfs-status/
Solution 2:
There is a Nagios plugin available for monitoring. You may have to edit it for your version of GlusterFS, though.
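If you run your checks over NRPE, wiring such a plugin in looks roughly like the following. The plugin name and install path here are assumptions; use whatever name and location you actually deploy the script under:

```
# /etc/nagios/nrpe.cfg on each Gluster node (hypothetical plugin path)
command[check_glusterfs]=/usr/lib64/nagios/plugins/check_glusterfs
```

On the Nagios server side you would then point a service check at check_glusterfs via check_nrpe as usual.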
Solution 3:
Please check the attached script at https://www.gluster.org/pipermail/gluster-users/2012-June/010709.html for gluster 3.3; it's probably easily adaptable to gluster 3.2.
#!/bin/bash
# This Nagios script was written against version 3.3 of Gluster. Older
# versions will most likely not work at all with this monitoring script.
#
# Gluster currently requires elevated permissions to do anything. In order to
# accommodate this, you need to allow your Nagios user some additional
# permissions via sudo. The line you want to add will look something like the
# following in /etc/sudoers (or something equivalent):
#
# Defaults:nagios !requiretty
# nagios ALL=(root) NOPASSWD:/usr/sbin/gluster peer status,/usr/sbin/gluster volume list,/usr/sbin/gluster volume heal [[\:graph\:]]* info
#
# That should give us all the access we need to check the status of any
# currently defined peers and volumes.
# define some variables
ME=$(basename -- "$0")
SUDO="/usr/bin/sudo"
PIDOF="/sbin/pidof"
GLUSTER="/usr/sbin/gluster"
PEERSTATUS="peer status"
VOLLIST="volume list"
VOLHEAL1="volume heal"
VOLHEAL2="info"
peererror=
volerror=
# check for commands
for cmd in $SUDO $PIDOF $GLUSTER; do
    if [ ! -x "$cmd" ]; then
        echo "$ME UNKNOWN - $cmd not found"
        exit 3
    fi
done
# check for glusterd (management daemon)
if ! $PIDOF glusterd &>/dev/null; then
    echo "$ME CRITICAL - glusterd management daemon not running"
    exit 2
fi
# check for glusterfsd (brick daemon)
if ! $PIDOF glusterfsd &>/dev/null; then
    echo "$ME CRITICAL - glusterfsd brick daemon not running"
    exit 2
fi
# get peer status
peerstatus="peers: "
for peer in $($SUDO $GLUSTER $PEERSTATUS | grep '^Hostname: ' | awk '{print $2}'); do
    state=
    state=$($SUDO $GLUSTER $PEERSTATUS | grep -A 2 "^Hostname: $peer$" | grep '^State: ' | sed -nre 's/.* \(([[:graph:]]+)\)$/\1/p')
    if [ "$state" != "Connected" ]; then
        peererror=1
    fi
    peerstatus+="$peer/$state "
done
# get volume status
volstatus="volumes: "
for vol in $($SUDO $GLUSTER $VOLLIST); do
    thisvolerror=0
    entries=
    for entries in $($SUDO $GLUSTER $VOLHEAL1 $vol $VOLHEAL2 | grep '^Number of entries: ' | awk '{print $4}'); do
        if [ "$entries" -gt 0 ]; then
            volerror=1
            thisvolerror=$((thisvolerror + entries))
        fi
    done
    volstatus+="$vol/$thisvolerror unsynchronized entries "
done
# drop trailing space
peerstatus=${peerstatus% }
volstatus=${volstatus% }
# set status according to whether any errors occurred
if [ "$peererror" ] || [ "$volerror" ]; then
    status="CRITICAL"
else
    status="OK"
fi
# actual Nagios output
echo "$ME $status $peerstatus $volstatus"
# exit with appropriate value
if [ "$peererror" ] || [ "$volerror" ]; then
    exit 2
else
    exit 0
fi