I back up my home folder to an encrypted drive once a week using rsync, then I create a tarball, encrypt it, and upload it to Proton Drive just in case.
For PCs: daily incremental backups to local storage, daily syncs to my main unRAID server, and weekly off-site copies to a Raspberry Pi with a large external HDD running at a family member's place. The unRAID server itself has its config backed up, and all the local Docker stores also go to the off-site Pi. The most important stuff (pictures, recovery phrases, etc.) is further backed up in Google Drive.
I use...
- Timeshift -> local backup onto my RAID array
- borgbackup -> BorgBase online backup
- GlusterFS -> experimenting with replicating certain apps across two Raspberry Pis
Nextcloud with folder sync for both mobile and PC backs up everything I need.
I back up an encrypted and heavily compressed archive to my local NAS and to Google Drive every night. The NAS keeps the version from the first of every month plus the previous 7 days of history; Google Drive keeps just the latest.
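A minimal sketch of that kind of nightly job, assuming GPG for encryption, zstd for compression, and rclone remotes named nas: and gdrive: (all placeholders, not the poster's actual setup):

#!/bin/bash
# Hypothetical nightly job: compress, encrypt, then push to a NAS and to Google Drive.
set -euo pipefail

now="$(date +'%Y-%m-%d')"
archive="/tmp/backup-$now.tar.zst.gpg"

# Heavy compression via zstd, then encryption to a GPG key before anything leaves the box
tar -I 'zstd -19' -cf - /home/USERNAME \
  | gpg --encrypt --recipient backup@example.com --output "$archive"

rclone copy "$archive" nas:backups/
rclone copy "$archive" gdrive:backups/
rm "$archive"

The monthly/7-day retention described above would then be a matter of pruning old archives on each destination, for example with rclone delete and a --min-age filter.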
I just use duplicity and upload to Google Drive.
I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it's disposable (I don't back up anything aside from data to do with the services themselves; the host could be launched into the sun without any backups and it wouldn't matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days' worth of backups.
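The 90-day retention can be expressed as an S3 lifecycle rule; a sketch using the AWS CLI follows, with a placeholder bucket name and profile (Wasabi and B2 expose S3-compatible endpoints, but their lifecycle support differs slightly, so treat this as an assumption rather than a drop-in config):

# Hypothetical lifecycle rule: expire objects 90 days after creation.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-backups-after-90-days",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket BUCKET_NAME \
  --lifecycle-configuration file://lifecycle.json \
  --profile wasabi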
I like the cut of your jib!
Any details on the scripts?
Sure, I won't post exactly what I have, but something like this could be used as a starting point:
#!/bin/bash
# Date stamp used as the top-level "folder" in the bucket
now="$(date +'%Y-%m-%d')"

echo "Starting backup script"
echo "Backing up directories to Wasabi"

# Each subdirectory under ~/Docker is a separate Compose stack
for dir in /home/USERNAME/Docker/*/
do
    dir=${dir%*/}
    backup_dir_local="/home/USERNAME/Docker/${dir##*/}"
    backup_dir_remote="$now/${dir##*/}"

    echo "Spinning down stack"
    cd "$backup_dir_local" && docker compose down --remove-orphans

    echo "Going to backup $backup_dir_local to s3://BUCKET_NAME/$backup_dir_remote"
    aws s3 cp "$backup_dir_local" "s3://BUCKET_NAME/$backup_dir_remote" --recursive --profile wasabi

    echo "Spinning up stack"
    cd "$backup_dir_local" && docker compose up --detach
done

# Copy the backup script itself alongside the day's backups
aws s3 cp /home/USERNAME/Docker/backup.sh "s3://USERNAME/$now/backup.sh" --profile wasabi

echo "Sending notification that backup tasks are complete"
curl "https://GOTIFY_HOSTNAME/message?token=GOTIFY_TOKEN" -F "title=Backup Complete" -F "message=All container data backed up to Wasabi." -F "priority=5"

echo "Completed backup script"
I have all of my stacks (defined using Docker Compose) in separate subdirectories within the parent directory /home/USERNAME/Docker/; that parent directory is the only thing on the host that matters. I keep the backup script in the parent directory (in reality I have a few scripts in use, since my real setup is a bit more elaborate than the above). For each stack (i.e. subdirectory) I spin the stack down, make the backup and copy it up to Wasabi, then spin the stack back up, and progress through each stack until done. Lastly, I copy up the backup script itself (in reality I copy up all of the scripts I use for various things). Not included in the script, and outside the scope of the example, is the fact that I have the AWS CLI configured on the host with profiles so it can interact with Wasabi, AWS, and Backblaze B2.
That should give you the general idea of how simple it is. In the above example I'm not doing some things I actually do, such as creating a compressed archive, validating it to ensure there's no corruption, and pruning files within the stacks that aren't needed for the backup. So don't take this as a "good" solution, just one that does the minimum necessary to have something.
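As a rough illustration of that archive-and-validate step (not the poster's actual code), each stack could be tarred, checked, and uploaded inside the loop instead of copying the directory directly; $dir, $now and the backup_dir_* variables here are the ones from the script above:

# Hypothetical archive-and-verify variant for one stack
archive="/tmp/${dir##*/}-$now.tar.gz"
tar -czf "$archive" -C "$backup_dir_local" .

# Validate the gzip stream and keep a checksum alongside the archive
gzip -t "$archive"
sha256sum "$archive" > "$archive.sha256"

aws s3 cp "$archive" "s3://BUCKET_NAME/$backup_dir_remote/" --profile wasabi
aws s3 cp "$archive.sha256" "s3://BUCKET_NAME/$backup_dir_remote/" --profile wasabi
rm "$archive" "$archive.sha256"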
Can relate to the approach. Keeping the host barebones, with everything dockerized and data volumes hosted separately, eases maintenance. For rapid redeployment, a custom script sets up firewall/fail2ban/SSH/smartctl/crontab/Docker/Docker Compose and finally loads backups of all Docker images from another instance. A complete setup from scratch takes 10-15 minutes. Tried Ansible but ended up custom scripting.
All my data is stored off-site twice a year. High-value data is stored on an SSD as a data volume, on two other SSDs as encrypted TARs, and in AWS S3. Rotation is daily/weekly.
In the process of moving stuff over to Backblaze. Home PCs, a few client PCs, and client websites are all pointing at it now; happy with the service and price. Two unRAID instances push the most important data to an Azure storage account, but I imagine I'll move that to BB soon as well.
Docker backups are similar to the post above: tarball the whole thing weekly as a get-out-of-jail card. This isn't ideal, but it works for now until I can give it some more attention.
*I have no link to BB other than being a customer who wanted to reduce reliance on scripts and move stuff out of Azure for cost reasons.
Would I be correct to assume you are using Backblaze PC backup rather than B2?
Yes, for now. I'll be spinning up some B2 this week however.
I don't back up my personal files since they are all more or less contained in Proton Drive. I do run a handful of small databases, which I back up to ... Telegram.
Ah, yes, the ole' "backup a database to telegram" trick. Who hasn't used that one?!?
I did. Split a PGP tarball into 2 GB files and uploaded 600 GB to Saved Messages.
It's just a matter of time before Telegram cracks down on this and limits the amount of cloud storage used. But until then, I'll happily use Telegram as a fourth backup.
For my server I use duplicity, with a daily incremental backup that sends the encrypted diffs away. I researched a few more options some time ago, but nothing really fit my use case, and I'm also not super happy with duplicity. Thanks for suggesting borgbackup.
For my personal data I have a Nextcloud instance on an RPi4 at my parents' place, which also syncs with a laptop I've left there. For offline and off-site storage, I use the good old strategy of bringing over an external hard drive, rsyncing to it, and bringing it back.
No problem! I also see Restic a lot in this thread, so I'll probably try both at some point
I feel the exact same. I've been using Duplicacy for a couple of years; it works, but I don't totally love it.
When I researched Borg, Restic, and others, there were issues holding me back with each. Many are CLI-driven, which I don't mind for most tools. But when shit hits the fan and I need to restore, I really want a UI to make it simple (and to easily browse file directories).
Got a Veeam community instance running on each of my VMware nodes, backing up 9-10 VMs each.
Using Cloudberry for my desktop, laptop and a couple Windows VMs.
Borg for non-VMware Linux servers/VMs, including my WSL instances, game/AI baremetal rig, and some Proxmox VMs I've got hosted with a friend.
Each backup agent dumps its backups into a share on my NAS, which then has a cron task to do weekly uploads to GDrive. I also manually do a monthly copy to an HDD and store it off-site with a friend.
On my home network, devices are backed up using Time Machine over the network. I also use Backblaze to make a second backup of data to their cloud service, using my own private key. Lastly, I throw some backups on a USB drive that I keep in a fire safe.
I use backupninja for the scheduling and management of all the processes. The actual backups are done by rsync, rdiff, borg, and the b2 tool from Backblaze, depending on the type and destination of the data. I back up everything to a second internal drive, an external drive, and a Backblaze bucket for the most critical stuff. Backupninja manages multiple snapshots within the borg repository, and rdiff lets me copy only new data for the large directories.
Rsync script that does daily deltas using hardlinks, found on the Arch wiki. Works like a charm.
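For reference, the Arch wiki pattern boils down to rsync's --link-dest option; a rough sketch with placeholder paths:

#!/bin/bash
# Hypothetical daily snapshot: unchanged files are hardlinked against
# yesterday's snapshot, so each day only costs the space of the deltas.
set -euo pipefail

src="/home/USERNAME/"
dest="/srv/backup/daily"
today="$(date +'%Y-%m-%d')"
yesterday="$(date -d yesterday +'%Y-%m-%d')"

rsync -aHAX --delete \
  --link-dest="$dest/$yesterday" \
  "$src" "$dest/$today"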
For smaller backups (<10 GB each) I run a three-phase approach:
- rsync to a local folder, /srv/backup/
- rsync that to a remote NAS
- rclone that to a B2 bucket
These scripts run from cron, and I log the output to a file using the --log-file option for rsync/rclone so I can do spot checks of the results.
This way I have access to the data locally if the network is down, remotely on a different networked machine for any other device that can browse it, and finally in an off-site cloud backup.
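A minimal sketch of that three-phase chain, with placeholder paths, host, and remote names:

#!/bin/bash
# Hypothetical three-phase backup: local staging folder -> remote NAS -> B2 bucket.
set -euo pipefail

log="/var/log/backup-$(date +'%Y-%m-%d').log"

# Phase 1: rsync the source into the local staging folder
rsync -a --delete --log-file="$log" /home/USERNAME/data/ /srv/backup/data/

# Phase 2: rsync the staging folder to the remote NAS
rsync -a --delete --log-file="$log" /srv/backup/ nas-host:/volume1/backup/

# Phase 3: rclone the staging folder to a B2 bucket
rclone sync /srv/backup/ b2:BUCKET_NAME/backup --log-file "$log"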
Doing this setup manually through rsync/rclone has been important for building the domain knowledge to think about the overall process: scheduling multiple backups at different times overnight so I don't overload the drive and network, ensuring versioning is kept for files that might require it, and making sure I'm not using too many API calls for B2.
For large media backups (>200 GB) I only use the rclone script, set to run for 3 hours every night after all the more important backups are finished. It's not important that it finishes ASAP; a steady drip of any changes up to B2 matters more.
My next step is to figure out a process to email the backup logs every so often, or to look into a full application to take over with better error-catching capabilities.
For any service/process that has a backup this way, I try to document a spot-testing process to confirm it works every 6 months:
- For my important documents, I add an entry to my KeePass db, run the backup, navigate to the cloud service, download the new version of the db, and confirm the recently added entry is present.
- For an application, I run through a restore process and confirm certain config or data is present in the newly deployed app. This also forces me to have a fast restore script I can follow for any app if I need to do this every 6 months.
My important data is backed up via Synology DSM Hyper Backup to:
- Local external HDD attached via USB.
- Remote to Backblaze (costs about $1/month for ~100 GB of data).
I also have Proxmox Backup Server back up all the VMs/CTs every few hours to the same external HDD used above; these backups aren't crucial, but they'd make it easier to rebuild if something went down.
In short: crontab, rsync, a local and a remote Raspberry Pi, and cryptfs on USB sticks.
I use borgbackup
Veeam community for me. Cross backup locally between my 2 servers at home, and then a copy job to an offsite NAS.
Have had to do restorations before, and never had any issues.
For the 14 PCs (~8 regularly used) in my house, I'm running daily backups with Synology Active Backup to a spinning-disk DiskStation, plus file sync of the User directory using Synology Drive to an SSD DiskStation (also backed up to the HDD DS). That data is all deduplicated. Additionally, I've got a few custom scripts that keep programs up to date using Chocolatey and winget and then export the list of installed programs, ready to be reinstalled on a new machine.
This allows me to either do full device restores or clean installs where the reinstall of the relevant programs is handled automatically and then it's just setting up sync/backup/office activation and we're off to the races.
I have a central NAS server that hosts all my personal files and shares them (via SMB, SSH, Syncthing, and Jellyfin). It also pulls backups from all my local servers and cloud services (Google Drive, OneDrive, Dropbox, Evernote, mail, calendar and contacts, etc.). It runs ZFS RAID 1 and snapshots every 15 minutes. Every night it backs up important files to Backblaze in a US region and Azure in an EU region (using restic).
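For anyone curious, that snapshot-plus-restic pattern could look roughly like the crontab below; the pool, bucket, and container names are made up, and restic is assumed to read its credentials and RESTIC_PASSWORD from the environment:

# Hypothetical crontab entries (note the escaped % signs, which cron requires)
# Snapshot the dataset every 15 minutes
*/15 * * * * zfs snapshot "tank/personal@auto-$(date +\%Y\%m\%d-\%H\%M)"

# Nightly restic runs to the two off-site repositories
0 2 * * * restic -r b2:BUCKET_NAME:/ backup /tank/personal/important
30 2 * * * restic -r azure:CONTAINER_NAME:/ backup /tank/personal/important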
I have a bootstrap procedure in place to do a "clean room recovery", assuming I've lost access to all my devices: I only need to remember a tediously long encryption password for a small package containing everything needed to recover from scratch. It is tested every year during the Christmas holidays, including comparing every single backed-up and restored file with the original via md5/sha256 comparison.