Revision as of 22:30, 5 September 2020

Rclone Mergerfs and Google Drive

So you want to become a 'cloud pirate'? You want to store your media in the cloud but still be able to use the Arrs and Plex? This guide uses G Suite (now Google Workspace) unlimited storage or a similar rclone-compatible storage service.

Overview

The rules are:

  • Don't download into your Gdrive.
  • Don't import to your Gdrive.
  • Do all large writes locally.
  • Move to cloud on a schedule.
  • Absolutely do not write large files directly to the rclone mount.

Instead, set up a /merge mount with mergerfs: take some /local storage, where downloads and imports will live, merge it with your /cloud rclone mount, and use the mergerfs create policy ff or epff so that new files are always created on the local branch.
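For reference, /cloud here is assumed to be an rclone mount along these lines. This is a sketch, not part of the original guide: the remote name gdrive-crypt and the exact flags are assumptions to adjust to your own rclone config.

```shell
# Hypothetical rclone mount for the /cloud branch.
# "gdrive-crypt:" is an assumed remote name from your rclone config.
rclone mount gdrive-crypt: /home/{user}/cloud \
  --allow-other \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --umask 002 \
  --daemon
```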

MergerFS Command

mergerfs /home/{user}/local:/home/{user}/cloud=NC /home/{user}/merge -o rw,async_read=false,statfs_ignore=nc,use_ino,func.getattr=newest,category.action=all,category.create=epff,cache.files=partial,dropcacheonclose=true,nonempty

The =NC suffix marks the cloud branch as no-create, so new files can only be created on the /local branch.

Setup

Set your download client to download to, say, /merge/usenet/{tv|movies} and your library to, say, /merge/media/{TV|Movies}. The download is then local and the import is local, yet everything appears to be in the same place. In the background, a cron job or systemd timer runs an rclone move from your local storage to your cloud storage. You can rotate service accounts if needed. This is also far more efficient at uploading than writing through the rclone mount.
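The scheduled move described above can be sketched as a small script run from cron or a systemd timer. The paths, schedule, and the gdrive-crypt remote name are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical upload job: move settled files from local storage to the
# cloud remote. --min-age skips files that may still be written to.
rclone move /local/media gdrive-crypt:media \
  --min-age 15m \
  --transfers 4 \
  --delete-empty-src-dirs \
  --log-file /var/log/rclone-upload.log
```

An example cron entry running it hourly might look like `0 * * * * /usr/local/bin/rclone-upload.sh`.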

The cloud mount lets Sonarr/Radarr delete, rename, and read files, but big writes never need to go straight to the cloud.

If you do get rate limited, it won't impact your imports because they're all local.

/local/media/{tv|movies}              // this is where the Arrs import to
/local/{usenet|torrents}/{tv|movies}  // this is where downloads should be dropped

/cloud/       // this is your rclone crypt mount (if using crypt) or plain Gdrive mount
/cloud/media  // this is your media folder in Gdrive

/merge/       // this is /cloud merged into /local using mergerfs

/merge/media/{tv|movies}  // point Plex and the Arrs here as library/root folder
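The local side of the layout above can be created in one go; a sketch, assuming the same base paths (adjust to your system):

```shell
# One-time creation of the directory tree described above.
mkdir -p /local/media/tv /local/media/movies
mkdir -p /local/usenet/tv /local/usenet/movies
mkdir -p /local/torrents/tv /local/torrents/movies
mkdir -p /cloud /merge
```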

Finally, set up a remote path mapping in the Arrs stating that the remote path /local/ is the same as /merge/.

Files are downloaded into /local and then imported into /local/media/. On a schedule, an rclone job moves them from /local to Gdrive, so they automatically appear in /cloud; from the /merge view, it looks like they never left.