Revision as of 19:14, 2 December 2020
Rclone Mergerfs and Google Drive
So you want to become a 'cloud pirate'? You want to store your media in the cloud, but still be able to use the Arrs and Plex? This guide uses G Suite unlimited storage, or a similar rclone-compatible storage service.
Please note that this guide is for information only and you should only store legally obtained media.
Additionally, to get unlimited storage you need at least 5 users (as of writing, $12 x 5 users = $60/mo). With fewer users you do not technically have unlimited storage, and Google may enforce the limits at any time.
Overview
The rules are:
- Don't download into your Gdrive.
- Don't import to your Gdrive.
- Do all large writes locally.
- Move to cloud on a schedule.
- Absolutely do not write (large files) directly to the rclone mount
Instead, set up a /merge with mergerfs that has some /local storage where downloads and imports will live, merge that with your /cloud rclone mount, and use the mergerfs create policy of ff or epff.
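The rules above boil down to a directory skeleton, which can be sketched in shell. BASE is an assumption here so the sketch does not require root; the guide itself uses /local, /cloud, and /merge at the filesystem root.

```shell
#!/usr/bin/env bash
# Sketch: create the local directory skeleton this guide assumes.
# BASE is hypothetical -- the guide uses /local, /cloud, /merge directly.
BASE="${BASE:-$HOME/cloudpirate}"

mkdir -p "$BASE"/local/media/{tv,movies}              # Arr import targets
mkdir -p "$BASE"/local/{usenet,torrents}/{tv,movies}  # download drop folders
mkdir -p "$BASE"/cloud                                # rclone mount point
mkdir -p "$BASE"/merge                                # mergerfs mount point
```

Only /local needs real disk behind it; /cloud and /merge are just empty mount points for rclone and mergerfs.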
Setting up Rclone
To get started with an Rclone Google Suite Team Drive Mount follow the below instructions.
Making your own Google API Client ID
When you use rclone with Google Drive in its default configuration, you are using rclone's own client_id, which is shared between all rclone users. Google sets a global rate limit on the number of queries per second for each client_id. It is strongly recommended to use your own client ID, as the default rclone ID is heavily used.
- Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
- Select a project or create a new project.
- Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".
- Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"
- If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.
- Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
- Choose an application type of "Desktop app" if you are using a Google account, or "Other" if you are using a GSuite account, and click "Create" (the default name is fine).
- It will show you a client ID and client secret. Write or copy these values down. Use these values in rclone config to add a new remote or edit an existing remote.
Set up your Google Service Account (SA) file. This allows the mount not to be tied to a single user account.
- Go to the Google Developer Console.
- Go to "IAM & admin" -> "Service Accounts".
- Use the "Create Credentials" button. Fill in "Service account name" with something that identifies your client, e.g. mount. Leave "Role" empty.
- Tick "Furnish a new private key" - select "Key type JSON".
- Tick "Enable G Suite Domain-wide Delegation". These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.
Allow API access to Google Drive
- Go to the admin console.
- Go into "Security" (or use the search bar)
- Select "Show more" and then "Advanced settings"
- Select "Manage API client access" in the "Authentication" section
- In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string.
- In the next field, "One or More API Scopes", enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
rclone config
- Enter n - New Remote
- Enter a name for your mount, e.g. gdrive
- Enter 13 - Google Drive
- Enter your Google Application Client ID
- Enter your Google Application Client Secret
- Enter 1 - Full Access
- Leave the ID of the root folder blank
- Enter the path and filename of your Google Drive SA JSON
- Enter n - Do Not Use Auto Config
- Enter y - Use Team Drive
- Review for accuracy
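When the walkthrough is done, the resulting remote in rclone.conf should look roughly like the following; every value shown is a placeholder, not a working credential:

```
[gdrive]
type = drive
client_id = 1234567890-example.apps.googleusercontent.com
client_secret = <your client secret>
scope = drive
service_account_file = /path/to/sa.json
team_drive = <your team drive ID>
```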
- Mount the drive using:
rclone mount --daemon --daemon-timeout=5m --allow-non-empty --buffer-size=128M --use-mmap --dir-cache-time=48h --cache-info-age=48h --vfs-cache-mode=writes --vfs-read-chunk-size-limit=off --vfs-cache-max-age=6h --vfs-read-chunk-size=128M --log-file=path/to/drivemount.log --log-level INFO gdrive: /path/to/cloudmount
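As an alternative to --daemon, a systemd unit keeps the mount alive across reboots and restarts it on failure. A minimal sketch, where the unit name and paths are assumptions (rclone mount supports systemd's notify protocol, so Type=notify works without --daemon):

```
# /etc/systemd/system/rclone-gdrive.service (hypothetical path and name)
[Unit]
Description=rclone gdrive mount
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount \
  --dir-cache-time=48h \
  --vfs-cache-mode=writes \
  --vfs-read-chunk-size=128M \
  --log-level INFO \
  gdrive: /path/to/cloudmount
ExecStop=/bin/fusermount -uz /path/to/cloudmount
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now rclone-gdrive.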
MergerFS Command
mergerfs /home/{user}/local:/home/{user}/cloud=NC /home/{user}/merge -o rw,async_read=false,statfs_ignore=nc,use_ino,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true,nonempty
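If you want the merge to come up automatically at boot, the same options can be expressed as an /etc/fstab entry; fuse.mergerfs is mergerfs's standard fstab type, and {user} is a placeholder as above:

```
# /etc/fstab (sketch)
/home/{user}/local:/home/{user}/cloud=NC  /home/{user}/merge  fuse.mergerfs  rw,async_read=false,statfs_ignore=nc,use_ino,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true,nonempty  0 0
```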
Setup
Set your download client to download to, say, /merge/usenet/{tv|movies} and your library to, say, /merge/media/{TV|Movies}.
Then the download is local and the import is local, yet it all looks like it is in the same place.
In the background, you have a cron or systemd timer that runs an rclone move from your local storage to your cloud storage.
You can rotate service accounts if needed.
It is also way more efficient at uploading vs. just the rclone mount.
The cloud mount allows Sonarr/Radarr to delete, rename, and read files, but big writes don't need to go straight to the cloud.
If you do get rate limited, it won't impact your imports because they're all local.
/local/media/{tv|movies} - this is where the Arrs import to
/local/{usenet|torrents}/{tv|movies} - this is where downloads should be dropped
/cloud/ - this is your rclone crypt mount (if going crypt) or your GDrive mount
/cloud/media - this is your media folder in Gdrive
/merge/ - this is /cloud merged into /local/ using mergerfs
/merge/media/{tv|movies} - point Plex and the Arrs here as the library/root folder
Set up a remote path mapping in the Arrs so that /local/ is treated the same as /merge/.
Files will be downloaded into /local and then imported into /local/media/. On a scheduled basis, an rclone job moves them from /local to Gdrive, so they automatically appear in /cloud, and from /merge it will look like they never left.
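That scheduled move can be as simple as a crontab entry wrapping rclone move. The schedule, paths, and flags below are illustrative assumptions, not requirements:

```
# crontab entry (sketch): move completed files from local to gdrive hourly.
# --min-age avoids moving files a download client may still be writing;
# --delete-empty-src-dirs cleans up the local side after the move.
0 * * * *  rclone move /local/media gdrive:media --min-age 15m --delete-empty-src-dirs --log-file /path/to/upload.log --log-level INFO
```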
Recommended Plex Server Changes
Increase the Default Cache Size of your Plex DB
With unlimited storage, some servers may run into database locking/timeout issues. Increasing the default cache size could help alleviate this.
1. Stop Plex.
2. Locate your Plex DB: cd plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases
3. sqlite3 com.plexapp.plugins.library.db
PRAGMA default_cache_size = 6000000;
Press CTRL + D.
4. Start Plex.
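The steps above can be sketched as a small shell helper; sqlite3 must be on your PATH, and the database path in the comment assumes a Docker-style layout:

```shell
#!/usr/bin/env bash
# Raise the page-cache size recorded in a SQLite database's header.
# Stop Plex before running this so the database is not locked.
set_default_cache_size() {
  local db="$1"
  sqlite3 "$db" "PRAGMA default_cache_size = 6000000;"
  # default_cache_size is stored in the database file itself, so a
  # fresh connection reads the new value back:
  sqlite3 "$db" "PRAGMA default_cache_size;"
}

# Typical invocation (path is an assumption for your install):
# set_default_cache_size "plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
```

Because the value lives in the file header rather than the connection, it only needs to be set once.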