Selfhosted

42521 readers
860 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1
 
 

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router but it's a computer that is upstream of my whole network and has two ethernet ports. And suggestions on this? Ideal amount or RAM? Ideal processor/speed? I have fiber internet, 10 gbps up and 10 gbps down, so I'm willing to spend a little more on higher bandwidth components. I'm assuming I won't need a GPU.

Anyways, has anyone had a chance to look at his guide? It's accompanied by two youtube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWRT seems to be the most frequently recommended thing here so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) is not supported. I was hoping to not have to replace it, it was kinda pricey, I got it when I upgraded to fiber since it can do 6.6gbps. I'm currently looking into devices I can put upstream of my current hardware but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
 
 

I'm trying to plan a better backup solution for my home server. Right now I'm using Duplicati to back up my 3 external drives, but the backup is staying on-site and on the same kind of media as the original. So, what does your backup setup and workflow look like? Discs at a friend's house? Cloud backup at a commercial provider? Magnetic tape in an underground bunker?

4
 
 

Hi all, I am trying to use Collabora online, but am stuck. I set it up via the docker instructions here (https://sdk.collaboraonline.com/docs/installation/CODE_Docker_image.html).

But when I go to https://127.0.0.1:9980/, all I get is "OK". The reverse proxy works, but the same "OK". How do I actually use the Collabora?

I am anticipating a web interface to a document editor by the browser.

5
 
 

I've been kind of piece-mealing my way towards cleaning up my media server, and could use a little advice on the next steps.

Currently I have a little under 10TB of torrented media that I have been downloading to / seeding from media library folders that Plex and Jellyfin monitor, using my desktop PC as the torrenting client. This requires a bit of manual maintenance--i.e., manually selecting the destination folder for the torrents in a way that Plex/Jellyfin can see.

I recently fired up qBittorrent on my media server (Unraid if that matters), and would like to try out some of the *arrs, but I'm not quite sure how to proceed without creating some kind of unholy mess.

I guess option A is just to import all of my current torrented content from desktop to media server client, and keep manually specifying the torrent destination. It's not a huge deal, since I am typically only adding a few torrents per week, so it's literal seconds or minutes of work to find the content I want.

Option B is to start "clean" and follow one of the many how-tos for starting up an *arr stack. But never having used the software, I don't have a good sense for how it works, and whether there are any pitfalls to watch out for when trying to spin it up with an existing media library that includes both torrented and ripped content.

From a bit of reading, I think radarr for example will only care about new content. So I should be able to migrate all my existing torrents to the new client on my media server, including their existing locations amongst my media library, and then just let radarr locate and manage new content. Is that correct?

Any other advice or suggestions I should be considering?

6
 
 

I didn't like Kodi due to the unpleasant controls, especially on Android, so I decided to try out Jellyfin. It was really easy to get working, and I like it a lot more than Kodi, but I started to have problems after the first time restarting my computer.

I store my media on an external LUKS encrypted hard drive. Because of that, for some reason, Jellyfin's permission to access the drive go away after a reboot. That means something like chgrp -R jellyfin /media/username does work, but it stops working after I restart my computer and unlock the disk.

I tried modifying the /etc/fstab file without really knowing what I was doing, and almost bricked the system. Thank goodness I'm running an atomic distro (Fedora Silverblue), I was able to recover pretty quickly.

How do I give Jellyfin permanent access to my hard drive?

Solution:

  1. Install GNOME Disks
  2. Open GNOME Disks
  3. On the left, click on the drive storing your media
  4. Click "Unlock selected encrypted partition" (the padlock icon)
  5. Enter your password
  6. Click "Unlock"
  7. Select the LUKS partition
  8. Click "Additional partition options" (the gear icon)
  9. Click "Edit Encryption Options..."
  10. Enter your admin password
  11. Click "Authenticate"
  12. Disable "User Session Defaults"
  13. Select "Unlock at system startup"
  14. Enter the encryption password for your drive in the "Passphrase" field
  15. Click "Ok"
  16. Select the decrypted Ext4 partition
  17. Click "Additional partition options" (the gear icon)
  18. Click "Edit Mount Options..."
  19. Disable "User Session Defaults"
  20. Select "Mount at system startup"
  21. Click "Ok"
  22. Navigate to your Jellyfin Dashboard
  23. Go to "Libraries"
  24. Select "Add Media Library"
  25. When configuring the folder, navigate to /mnt and then select the UUID that points to your mounted hard drive
7
 
 

Basically title. I'm in the process of setting up a proper backup for my configured containers on Unraid and I'm wondering how often I should run my backup script. Right now, I have a cron job set to run on Monday and Friday nights, is this too frequent? Whats your schedule and do you strictly backup your appdata (container configs), or is there other data you include in your backups?

8
99
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

I have been self-hosting for a while now with Traefik. It works, but I'd like to give Nginx Proxy Manager a try, it seems easier to manage stuff not in docker.

Edit: btw I'm going to try this out on my RPI, not my hetzner vps, so no risk of breaking anything

9
 
 

Hello, I want to standardize my home servers and reduce them to 3 Proxmox computers. 2x a Tiny server and a slightly more powerful one for AI (ollama/open webui and deepseek-r1-70b, CPU based only, no GPU).

For the more powerful server, I am wavering between 2 processors: i9-10940X vs. i9-14900KS.

i9-10940X

  • 14 Cores (3,30-4,8 GHz == 67,2 GHz)
  • 28 Threads
  • Quadro-Channel DDR4-2933 (PC4-23466, 93.9GB/​s)

i9-14900KS

  • 24 Cores (8Power+16Economy - 2,40-6,2 GHz == 117,6GHz)
  • 32 Threads
  • Dual-Channel DDR5-5600 (PC5-44800, 89.6GB/​s)

I don't like the Idea of the Power/Economy-Cores... And the newer i9 has only dual-channel for RAM instead of quad. But it has double of GHz.

Which is better for my solution? I also want a relative low idle power consumption.

thank you all!


10
 
 

I realise this is a very niche question, but I was hoping someone here either knows the answer or can point me to a better place to ask.

My @[email protected] uses Puppeteer to take screenshots of the game for its posts. I want to run the bot on my Synology NAS inside of a Docker container so I can just set it and forget it, rather than needing to ensure my desktop is on and running the bot. Unfortunately, the Synology doesn't seem to play nicely with Puppeteer's use of the Chrome sandbox. I need to add the --no-sandbox and --disable-setuid-sandbox flags to get it to run successfully. That seems rather risky and I'd rather not be running it like that.

It works fine on my desktop, including if run in Docker for Windows on my desktop. Any idea how to set up Synology to have the sandbox work?

11
 
 

Hey everyone,

I just set up a self-hosted GitHub Actions runner in my homelab and wrote about it in my self-hosted blog! This is my second blog entry, so I would really appreciate any feedback or suggestions to help improve my writing is more than welcome.

You can check out the post here: https://cachaza.cc/blog/02-self-hosted-ci-cd

12
 
 

My NAS was getting increasingly annoying.

It would give error messages about not being shut down properly after scheduled restarts.

Apps would sometimes work and sometimes not. I had to manually stop and restart my video library each time to make it work. It was slow, it was refusing to do more than one thing at a time.

So, I finally started it up. Shunted all the data to external drives, setup the box from scratch.

Between it being fresh, and me knowing better what I'm doing and how I want things from the get-go, it's running better than ever, better even than when I got it a few years back.

Interesting, while it was offline and being setup I found myself realising how integral it's become to my day. So much stuff I went to do, only to discover I needed my box.

It was intended as a file backup and server, but so much has changed since then, I've grown used to having it here!

Still tempted to get an upgrade, maybe later this year if things workout well with the cash.

Wanted to share this with a community who can appreciate the feeling of having something working well!

13
 
 

I'm running three servers: one for home automation/NVR, one for NAS/media services, and one for network/firewall services.

Does this breakdown look doable based on the hardware? Should the services be ditributed differently for better efficiency?

Server 1 and 3 are already up and running. I just received my NAS, and am trying to decide where to run each service to best take advantage of my hardware.

I'm also considering UnRaid instead of Proxmox for a NAS OS. I just chose Proxmox because I'm familiar with it, and I like the ability to snapshot. I also intend to run Proxmox Backup Server offsite at some point, and I like the PVE/PBS integration.

Any advice would be much appreciated!

14
 
 

Hello,

Recently, I've been interested in self-hosting various services after coming across Futo's "How to Self Host Your Life Guide" on their Wiki. They recommend using OpenVPN, but I opted for WireGuard instead as I wanted to learn more about it. After investing many hours into setting up my WireGuard configuration in my Nix config, I planned to replace Tailscale with WireGuard and make the setup declarative.

For context, this computer is located at my residence, and I want to be able to VPN into my home network and access my services. Initially, it was quite straightforward; I forwarded a UDP port on my router to my computer, which responded correctly when using the correct WireGuard keys and established a VPN connection. Everywhere online suggests forwarding only UDP as WireGuard doesn't respond unless the correct key is used.

The Networking Complexity

At first, this setup would be for personal use only, but I soon realized that I had created a Docker stack for me and my friends to play on a Minecraft server running on my LAN using Tailscale as the network host. This allowed them to VPN in and join the server seamlessly. However, I grew tired of having to log in to various accounts (e.g., GitHub, Microsoft, Apple) and dealing with frequent sign-outs due to timeouts or playing around with container stacks.

To manage access to my services, I set up ACLs using Tailscale, allowing only specific IP addresses on my network (192.168.8.170) to access HigherGround, nothing else. Recently, I implemented WireGuard and learned two key things: Firstly, when friends VPN into the server, they have full access to everything, which isn't ideal by no means. not that i dont trust my friends but, i would like to fix that :P. I then tried to set allowed IPs in the WireGuard config to 192.168.8.170, but realized that this means they can only access 192.168.8.170 explicitly, not being able to browse the internet or communicate via Signal until I added their specific IP addresses (10.0.0.2 and 10.0.0.3) to their WireGuard configs.

However, I still face a significant issue: every search they perform goes through my IP address instead of theirs.

The Research

I've researched this problem extensively and believe that split tunneling is the solution: I need to configure the setup so that only 192.168.8.170 gets routed through the VPN, while all other traffic is handled by their local router instead of mine. Ideally, my device should be able to access everything on the LAN and automatically route certain traffic through a VPS (like accessing HigherGround), but when performing general internet tasks (e.g., searching for "how to make a sandwich"), it gets routed from my router to ProtonVPN.

I've managed to get ProtonVPN working, but still struggle with integrating WireGuard on my phone to work with ProtonVPN on the server. From what I've read, using iptables and creating specific rules might be necessary to allow only certain devices to access 192.168.8.170 (HigherGround) while keeping their local internet traffic separate.

My long-term goal is to configure this setup so that my friends' local traffic remains on their network, but for HigherGround services, it routes through the VPN tunnel or ProtonVPN if necessary.

My nix Config for wiregaurd (please let me know if im being stoopid with somthing networking is HARRRD)

#WIREGAURD connect to higher ground networking.wg-quick.interfaces = { # "wg0" is the network interface name. You can name the interface arbitrarily. caveout0 = { #Goes to ProtonVPN address = [ "10.2.0.2/32" ]; dns = [ "10.2.0.1" ]; privateKeyFile = "/root/wiregaurd/privatekey"; peers = [ { #From HigherGround to Proton publicKey = "magic numbers and letters"; allowedIPs = [ "0.0.0.0/0" "::/0" ]; endpoint = "79.135.104.37:51820"; persistentKeepalive = 25; } ]; };

cavein0 = { # Determines the IP/IPv6 address and subnet of the client's end of the tunnel interface address = [ "10.0.0.1/24" ]; dns = [ "192.168.8.1" "9.9.9.9" ]; # The port that WireGuard listens to - recommended that this be changed from default listenPort = 51820; # Path to the server's private key privateKeyFile = "magic numbers and letters";

  # This allows the wireguard server to route your traffic to the internet and hence be like a VPN
  postUp = ''
    ${pkgs.iptables}/bin/iptables -A FORWARD -i cavein0 -j ACCEPT
    ${pkgs.iptables}/bin/iptables -t nat -A POSTROUTING -o enp5s0 -j MASQUERADE
  '';

  # Undo the above
  preDown = ''
    ${pkgs.iptables}/bin/iptables -D FORWARD -i cavein0 -j ACCEPT
    ${pkgs.iptables}/bin/iptables -t nat -D POSTROUTING -o enp5s0 -j MASQUERADE
  '';

  peers = [
    { #friend1 
     publicKey = "magic numbers and letters";
     allowedIPs = [ "10.0.0.3/32" "192.168.8.170/24" ];
     endpoint = "magic numbers and letters";
     presharedKey = "magic numbers and letters";
     persistentKeepalive = 25;
    }
    { # My phone
      publicKey = "magic numbers and letters";
      allowedIPs = [ "10.0.0.2/32" ];
      endpoint = "magic numbers and letters";
      presharedKey = "magic numbers and letters";
      persistentKeepalive = 25;
    }
    {# friend 2
      publicKey = "magic numbers and letters";
      allowedIPs = [ "10.0.0.4/32" "192.168.8.170/24" ];
      endpoint = "magic numbers and letters";
      presharedKey = "magic numbers and letters";
      persistentKeepalive = 25;
    }
    {# friend 3
     publicKey = "magic numbers and letters";
     allowedIPs = [ "10.0.0.5/32" ];
     endpoint = "magic numbers and letters";
     presharedKey = "magic numbers and letters";
     persistentKeepalive = 25;
    }
    
    # More peers can be added here.
  ];
};

};

#Enable NAT networking.nat = { enable = true; enableIPv6 = false; externalInterface = "enp5s0"; internalInterfaces = [ "cavein0" ]; };

services.dnsmasq.settings = { enable = true; extraConfig = '' interface=cavein0 ''; };

Any help would be appreciated thanks

References: Futo Wiki: https://wiki.futo.org/index.php/Introduction_to_a_Self_Managed_Life:_a_13_hour_%26_28_minute_presentation_by_FUTO_software

NixOS Wireguard: https://wiki.nixos.org/w/index.php?title=WireGuard&mobileaction=toggle_view_desktop

Just a FYI, the main portion of the paragraph was put into llama3.1 with the prompt "take the following prompt and fix the grammer, spelling and spacing to make it more readable" Because im bad at english and didnt want to pain people with my choppy sentences and poor grammer

Old Client Config

Solution somewhat found! so i didnt understand what wireguard allowIPS really did, well i did but it was confusing. So what i did before was have 10.0.0.2/32 only, this allowed users of the VPS to have acess to my local network. i swapped it to where there was only 192.168.8.170 only and that made it to where i could ONLY acess the service and no other webpage or dns. the solution was to set on the server side, for peers allowed ip adresses to be "192.168.8.170/24" and "10.0.0.2/32, this allows each user to have there own IP adress within the server. so for example my phone has 10.0.0.2/32 and 192.168.8.170. THE CLIENT SIDE MUST MATCH!!! Which is what i missed before, my guess on why this is important is so your network manager on whatever your client os is running, knows that it can only acess 192.168.8.170 and anything within the 10.0.0.2/32 subnet. The reason why you NEED 10.0.0.2/32 is so the client can have an ip adress to talk to the server internally. at least i think im just a guy who dicks around with pc's in his free time :P.

so having 192.168.8.170/24 and 10.0.0.2/32 on both the wireguard client config and the server enforces that the client cannot acess anything but those adresses and subnets.

i still would like to setup split tunneling, because on my server if i wanna VPN from my server to protonVPN my wiregaurd server doesnt connect. but im glad i got it to this state, thanks for helping out everybody :)

15
 
 

I set one up via yunohost and it seems like its doing its job. Any tips? Anyone set it up before?

16
35
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

Hey everyone, I'm planning on setting up my first home server this year. Going to use an old Dell Optiplex with a couple 4tb SSD's.

I only need two services running. Jellyfin and immich. I've tested this out in a debian netinstall VM and it works.

Just looking for helpful hints or advice etc. I'm a long time Linux and BSD user and I'm tempted to try it out using Alpine Linux or even NetBSD (my daily driver os) but I thought I'd be sensible and go with Debian for.... Stability?

Anyway, immich is run in a container whereas jellyfin has a binary install. Apparently you can run jellyfin in a container also, not sure I really need to tho?

Thank you for any hints or advice.

17
 
 

I need something and hopefully i dont have to invent the wheel.

I want to subscribe to youtube channels and have new videos automatically detected and downloaded to local storage. Bonus points for jellyfin intergration but i can live without.

I know not too hard to rig something like this uo with youtube-dl but if there is an existing solution that would be amazing.

Anybody know?

18
 
 

Right now I have everything except wireguard setup on my old Thinkpad. I'm planning on hosting a minecraft server, forgejo, jellyfin, and fediverse instances. Before I expose everything to the open web I'd be grateful if someone could look my setup over and tell me if this is secure enough I can just update containers when they need and forget about security

19
 
 

It's Sunday somewhere already so why wait?

Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.

I'll post my ongoing things later/tomorrow but I didn't want to forget the post again.

20
 
 

UDN (Ukranian Data Network) an internet service hosting has been offline for a few days and for a moment on their site has reports a message about their network being off-line as a result of sabotage by Ukrainian police officers. Did anyone use it and have any other info?

I was using it for my TOR node.

21
 
 

Hey everyone, #What is wanderer? wanderer is a self-hosted GPS track database. You can upload your recorded GPS tracks or create new ones and add various metadata to build an easily searchable catalogue. Think of it as a fully FOSS alternative to sites like alltrails, komoot or strava. #What is new? I'm coming back here to tell you a bit about what has been happening since my last update. Since then, we implemented some highly requested features:

  1. A fancy new 3D model on the front page (there is an easter egg, can you find it?)
  2. wanderer now uses vector map tiles which results in a significant performance boost for everything map-related
  3. As a result, we now also support topographical 3D maps in wanderer (see gif)
  4. Greatly improved social features: from list sharing, over profile pages and activity feeds to notifications
  5. The better location search allows you to search right down to the address
  6. And finally probably the most requested feature: integrations. You can now sync all your trails from strava and komoot directly with wanderer without having to manually export/import them

Big thanks to everyone who contributed code or translations! If you have any suggestions/questions feel free to let me know below.

Have a great weekend!

Flomp

22
 
 

Just discovered #spotdl (https://github.com/spotDL/spotify-downloader). It's a great way to download songs from #youtube with metadata and lyrics, or to just quickly listen to that one song somebody sent you. Cli and webui are available and it's very configurable

#spotify #musicdownload #spotifydownloader #selfhosted @selfhosted

23
52
Good mail server for selfhosting (lemmy.cronyakatsuki.xyz)
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]
 
 

So I'm migrating stuff from my old server to a new provider and only thing left is email.

The problem is I used luke smith's emailwiz script ( the script and setup itself isn't a problem ) because it uses system users for managing users with dovecot and friends to setup a mail server.

So now I'm looking for a new email server to selfhost (preferably docker/podman) that in the future I can easilly migrate.Would also love if somebody has a reccomendation on how I could backuo and import emails from the old server.

NOTE: I use caddy as webserver, so the server should have a simple way on getting ssl certs, or abikity to easilly make use if caddy one's.

24
25
17
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]
 
 

I have a remote VPS that acts as a wireguard server (keys omitted):

[Interface]
Address = 10.0.0.2/24
[Peer] # self host server
AllowedIPs = 10.0.0.1/32

(The VPS is configured to be a router from the wg0 to it's WAN via nft masquerading)

And i have another server, my self-host server, which connects to the VPS trough wireguard because it uses wireguard tunnel as a port-forwarder with some nft glue on the VPS side to "port forward" my 443 port:

[Interface]
Address = 10.0.0.1/24
[Peer]
AllowedIPs = 10.0.0.2/24

(omitted the nft glue)

My self-hosted server default route goes trough my home ISP and that must remain the case.

Now, on the self-host server i have one specific user that i need to route trough the wireguard tunnel for it's outgoing traffic, because i need to make sure it's traffic seems to originate from the VPS.

The way i usually handle this is with a couple of nft commands to create a user-specific routing table and assign a different default route to it (uid=1070):

 ip rule add uidrange  1070-1070 lookup 1070
ip route add default via 192.168.0.1 dev eno1 table 1070

(this is the case, and works, to use eno1 as default gateway for user 1070. Traceroute 8.8.8.8 will show user 1070 going trough eno1, while any other user going trough the default gateway)

If i try the same using the wg0 interface, it doesn't work.

 ip rule add uidrange  1070-1070 lookup 1070
ip route add default via 10.0.0.2 dev wg0 table 1070

This doesnt work, wireguard refuses to allow packets trough with an error like:

ping 8.8.8.8
From 10.0.0.1 icmp_seq=3 Destination Host Unreachable                                            
ping: sendmsg: Required key not available 

I tried to change my self-host server AllowedIps like this:

[Interface]
Address = 10.0.0.1/24
[Peer]
AllowedIPs = 10.0.0.2/24, 0.0.0.0/0

and it works! User 1070 can route trough wireguard. BUT... now this works just too much... because all my self-host server traffic goes trough the wg0, which is not what i want.

So i tried to disable the WireGuard messing with routing tables:

[Interface]
Address = 10.0.0.1/24
Table = off
[Peer]
AllowedIPs = 10.0.0.2/24, 0.0.0.0/0

and manually added the routes for user 1070 like above (repeat for clarity):

 ip rule add uidrange  1070-1070 lookup 1070
ip route add default via 10.0.0.2 dev wg0 table 1070

The default route now doesnt get replaced, but now, without any error, the packers for user 1070 just don't get routed. ping 8.8.8.8 for user 1070 just hangs

I am at a loss.... Any suggestions?

(edits for clarity and a few small errors)

view more: next ›