April 21, 2020 | 19:13

Optimizing your rclone mount

Because your Internet speed does not always keep up with your need for more storage space, you should use some kind of buffer.

What is mergerfs?

Mergerfs is a tool to logically combine multiple drives or folders into one.

On the official GitHub there's the following example, which pretty much describes everything you need to know about how mergerfs works.

A         +      B        =       C
/disk1           /disk2           /merged
|                |                |
+-- /dir1        +-- /dir1        +-- /dir1
|   |            |   |            |   |
|   +-- file1    |   +-- file2    |   +-- file1
|                |   +-- file3    |   +-- file2
+-- /dir2        |                |   +-- file3
|   |            +-- /dir3        |
|   +-- file4        |            +-- /dir2
|                     +-- file5   |   |
+-- file6                         |   +-- file4
                                  |
                                  +-- /dir3
                                  |   |
                                  |   +-- file5
                                  |
                                  +-- file6
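Assuming mergerfs is already installed (covered below), you can reproduce a miniature version of this layout with two throwaway directories; all paths here are just examples:

```shell
# build two example branches
mkdir -p /tmp/disk1/dir1 /tmp/disk2/dir1 /tmp/merged
touch /tmp/disk1/dir1/file1 /tmp/disk2/dir1/file2

# merge both branches into one view
mergerfs /tmp/disk1:/tmp/disk2 /tmp/merged

# the mountpoint now shows file1 and file2 side by side
ls /tmp/merged/dir1

# unmount when done
fusermount -u /tmp/merged
```

Changes made through /tmp/merged land on one of the underlying branches according to the configured policies, which is exactly what we will exploit below.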

How we will use it

We’ll merge our rclone mount and a local folder. After that, every read will first be served from the local folder unless the file is only available on the rclone mount. Similar rules apply to writes: files will be written to the local folder unless it runs out of space.

To automatically empty our local folder we’ll use a little script that uploads the files to our rclone mount and deletes the local copies afterwards.

Installing mergerfs

First of all, you need to grab the latest release for your distro over at GitHub.

After installing, you should be able to run mergerfs -v in your shell to check the installed version and get output similar to mine.

mergerfs version: 2.28.2
FUSE library version: 2.9.7-mergerfs_2.28.0
fusermount version: 2.9.2
using FUSE kernel interface version 7.29

Creating the mount

To make sure the mount starts as soon as its dependencies are ready, we’ll use a systemd service.

Creating a service for systemd is as easy as creating a file under /etc/systemd/system.

Our file should look something like this.

[Unit]
Description=mergerfs mount
Requires=< your rclone mount service name goes here >.service
After=< your rclone mount service name goes here >.service
RequiresMountsFor=< your local folder goes here >
RequiresMountsFor=< your rclone mountpath goes here >

[Service]
Type=forking
ExecStart=/usr/bin/mergerfs < your local folder goes here >:< your rclone mountpath goes here > < your target mountpath goes here > \
     -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target

After that you need to reload systemd with the following command: systemctl daemon-reload. Now you should be able to start and stop your mergerfs mount with systemctl start rclone-merge (assuming you named the file rclone-merge.service).
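If you also want the mount to come up at boot, enable the unit as well. The unit name rclone-merge.service is an assumption here; substitute whatever you named your file:

```shell
sudo systemctl daemon-reload

# start now and enable at boot in one step
sudo systemctl enable --now rclone-merge.service

# verify the unit and the mount itself
systemctl status rclone-merge.service
findmnt < your target mountpath goes here >
```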

The upload script

To automate the process of uploading local data to your remote we’ll use this fancy little script. The script itself checks if another instance is running and only runs if it’s the only one.

#!/bin/bash
## RClone Config file
RCLONE_CONFIG=/etc/rclone/rclone.conf
export RCLONE_CONFIG

#exit if running
if [[ "$(/usr/sbin/pidof -x "$(basename "$0")" -o %PPID)" ]]; then exit; fi

## Move older local files to the cloud
/usr/bin/rclone move < your local folder goes here > < your remote name goes here >: \
    --log-file /var/log/rclone-upload.log \
    --exclude-from /opt/scripts/excludes \
    --delete-empty-src-dirs \
    --fast-list \
    --bwlimit "6M" \
    --min-age 2h
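Before wiring the script into cron, it is worth making it executable and running it once by hand; the path /opt/scripts/upload_gdrive.sh matches the cron entry used later and is otherwise an assumption:

```shell
sudo chmod +x /opt/scripts/upload_gdrive.sh

# first run by hand; watch the log in a second terminal
sudo /opt/scripts/upload_gdrive.sh
tail -f /var/log/rclone-upload.log
```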

You can and should adjust the arguments according to your needs; they are documented over here.

The most important flags in my opinion are:

  • --exclude-from to exclude some files/folders defined in a text file like this:

    *.rar
    download/**
    
  • --delete-empty-src-dirs to keep the local folder clean.

  • --fast-list to speed up the process.

  • --bwlimit to limit the maximum upload speed.

  • --min-age to upload only files that are older than a given age (this helps to make sure only completed copies or downloads are moved).
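If you are unsure about your filters, rclone's --dry-run flag lets you preview the move without transferring or deleting anything; the placeholders follow the same convention as above:

```shell
rclone move < your local folder goes here > < your remote name goes here >: \
    --exclude-from /opt/scripts/excludes \
    --min-age 2h \
    --dry-run -v
```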

Executing the Script with cron

Last but not least, we should make sure the script runs at some interval. I chose to run the script every 30 minutes; to do so I use the command crontab -e.

0,30 * * * * /opt/scripts/upload_gdrive.sh

© marschall.systems 2024
