Installing GlusterFS
wget -O - https://download.gluster.org/pub/gluster/glusterfs/7/rsa.pub | apt-key add -
DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
DEBARCH=$(dpkg --print-architecture)
echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list
apt update
sudo apt install glusterfs-server -y
sudo systemctl start glusterd
sudo systemctl enable glusterd
Install this on every node; make sure GlusterFS is installed on all of them before continuing.
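Before moving on, it can help to confirm on each node that the daemon is running and that every node reports the same GlusterFS version. A quick sanity check (these are generic checks, not specific to this setup):

gluster --version
sudo systemctl status glusterd --no-pager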
Then generate SSH keys and probe the peers:
ssh-keygen -t rsa
sudo -s
gluster peer probe 192.168.1.56
gluster peer probe 192.168.1.59
gluster pool list
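If the probes succeed, each node should show the other peers as connected. A quick way to confirm from any node (the IPs and hostnames shown will be your own):

gluster peer status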
Let’s create a directory to be used for the Gluster volume. This same command will be run on all machines:
sudo mkdir -p /gluster/volume1
Run this on the master node (node1):
sudo gluster volume create staging-gfs replica 3 192.168.1.54:/gluster/volume1 192.168.1.56:/gluster/volume1 192.168.1.59:/gluster/volume1 force
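Before starting the volume, it's worth confirming that all three bricks were registered. A simple check (the exact output layout varies between GlusterFS versions):

sudo gluster volume info staging-gfs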
Start the volume with the command:

sudo gluster volume start staging-gfs

The volume is now up and running, but we need to make sure the volume will mount on a reboot (or other circumstances). We'll mount the volume to the /mnt directory. To do this, issue the following commands on all machines:

sudo -s
echo 'localhost:/staging-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' >> /etc/fstab
mount.glusterfs localhost:/staging-gfs /mnt
chown -R root:docker /mnt
exit

To make sure the Gluster volume is mounted, issue the command:

df -h

You should see it listed at the bottom (Figure 2).

Figure 2: Our Gluster volume is mounted properly.

You can now create new files in the /mnt directory and they'll show up in the /gluster/volume1 directories on every machine.

Using Your New Gluster Volume with Docker

At this point, you are ready to integrate your persistent storage volume with Docker. Say, for instance, you need persistent storage for a MySQL database. In your Docker YAML files, you could add a section like so:

volumes:
  - type: bind
    source: /mnt/staging_mysql
    target: /opt/mysql/data

Since we've mounted our persistent storage in /mnt, everything saved there on one Docker node will sync with all other nodes (see the stack-file sketch below for how this snippet fits into a full service definition). And that's how you can create persistent storage and then use it within a Docker Swarm cluster. Of course, this isn't the only way to make persistent storage work, but it is one of the easiest (and cheapest). Give GlusterFS a try as your persistent storage option and see if it doesn't work out for you.
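For context, here is a minimal, hypothetical stack-file sketch showing where that volumes stanza might sit in a MySQL service. The service name, image tag, and MYSQL_ROOT_PASSWORD value are placeholders rather than part of the original walkthrough; the target path simply mirrors the snippet above.

version: "3.7"
services:
  mysql:
    image: mysql:8.0                  # placeholder image tag
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder credential
    volumes:
      - type: bind
        source: /mnt/staging_mysql    # lives on the GlusterFS mount, so it syncs to every node
        target: /opt/mysql/data       # same target as the snippet above
    deploy:
      replicas: 1

Note that with type: bind, the source directory must already exist on whichever node the task is scheduled to, so create /mnt/staging_mysql (once, from any node) before deploying the stack.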
How do we use the gluster volume heal command?

Whenever bricks go offline, we bring them back online. Then, to list the files in a volume that need healing, we check its info. For this, our Support Engineers use the command:

gluster volume heal <VOLNAME> info

This lists all the files that need healing. There are basically two cases: files in split-brain, and files that simply need healing. Both are identified in the list.

Then, to trigger healing only on the files that require it, we use the command:

gluster volume heal <VOLNAME>

And to heal all the files in the volume, use the command:

gluster volume heal <VOLNAME> full

After healing, the command reports success, and the files are properly synced again.
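If you only want to see the entries that are in split-brain, GlusterFS also accepts a split-brain variant of the info command; for example, on the volume created earlier (substitute your own volume name):

gluster volume heal staging-gfs info split-brain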