Selfhosted

41875 readers
891 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1
 
 

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router, but it's a computer that sits upstream of my whole network and has two ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU.

Anyway, has anyone had a chance to look at his guide? It's accompanied by two YouTube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWRT seems to be the most frequently recommended thing here, so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) is not supported. I was hoping not to have to replace it; it was kinda pricey, and I got it when I upgraded to fiber since it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
 
 

A little background first: I've been self-hosting our (my wife's and my) files for over 12 years now. I started with a simple FreeNAS folder, switched to Owncloud, and moved on to Nextcloud after the split. We only really need the files part, and while it works fine in general, setting it up took more tinkering than it should have.

I'm also not a fan of NC's direction, moving from file cloud hosting to a "full-stack" enterprise one-for-all solution. While that wouldn't be an issue in general, it seems that new features are prioritized without first getting the older parts to work correctly.

That impression seems to match the recent-ish code analysis https://www.bsi.bund.de/DE/Service-Navi/Presse/Alle-Meldungen-News/Meldungen/Projekt-CAOS-30_Nextcloud_250205.html (in German, although the CVE entries have English descriptions), which found nearly 40 vulnerabilities, among them some in modules like 2FA/MFA.

So I've tested through most of the other options, but maybe I missed something obvious.

Requirements:

  • selfhostable in a docker environment
  • file storage/syncing from a central server, preferably selective sync (so Syncthing is out)
  • either structured storage (folders etc.) or at least structured export/backup from flat storage, for application-independent file backups
  • desktop client for Windows, mobile client for Android
  • Web UI for simple browser access
  • virtual file support a definite plus

Things I've tried:

Nextcloud

  • well-working setup, definitely my "fallback" option
  • no fan of the general direction development is going

Syncthing

  • While working absolutely fine for sync between different devices (have it in use in a different scenario), the peer-to-peer nature is unsuitable for what I'm looking for

Pydio Cells

  • server and web UI work fine, desktop and app sync didn't really work (might be an error on my part though)
  • backup fiddly due to needing cells-fuse tool for structured files, although I haven't tested structured storage yet

Seafile

  • will have to test this again; when I tried it years ago, the storage situation was a little tricky

Owncloud Infinite Scale

  • Similar to Pydio Cells, but haven't really tested yet due to dev exodus

Opencloud.eu

  • several devs from Owncloud moved to Opencloud and forked their "own" OCIS server
  • first release scheduled for March '25, so no testing yet
  • I have hopes this might be a useful alternative, but time will tell

So: did I miss something? Any obvious software solution?

4
5
11
submitted 2 hours ago* (last edited 1 hour ago) by [email protected] to c/[email protected]
 
 

TLDR:
Any ideas on how to properly set up WiFi roaming between two different WiFi routers in a single-floor flat/apartment?

~~Also, does anyone know if just configuring Mobility Domains in each router would help? Or is the mobility domain specific to 802.11r[oaming]? (which does not seem to be implemented even on advanced consumer products, i.e. non-big-enterprise gear)~~
EDIT: 802.11r == Roaming == Fast Transition - it has to be enabled manually, and not all clients seem to deal well with Fast Transition.


Currently I have a Turris Omnia (which runs customized OpenWRT) as the primary router (DHCP, firewall, etc.) and a basic TP-Link Archer C6 (stock TP-Link firmware, though the plan is to replace that whole box with something more capable), both dual-band.
The Archer acts as a simple AP/routing box and is connected directly to the Omnia by ethernet as a client.

In a direct line the two are rather close to each other, but with walls between them. The TP-Link is in the farthest room and the Turris sits a bit off the center of the space, so there is some overlap of signals. I had hoped the devices would sort it out themselves, but with the "common" setup below, switching seemed to happen too late - Android devices especially tried to hold onto the basically dead station for far too long.

With this setup I've tried the basic "roaming" configuration:

  • Same SSID
  • Same encryption and PSK
  • Different channels (for each band, per router)
  • Even tried tweaking the signal powers for each so that there is less overlap (reducing power of Archer so that it mostly covers only the farthest room)

But either the TP-Link does something extra under the hood which breaks this, or the routers are just too close to each other and it does not trigger switching in the client devices (Androids, iPhones, MacBooks, ThinkPads).
Also, with both routers on the same SSID, it was hard to force a device to connect to the other AP that's practically right next to it instead of staying on the previous, dying one.

I could replace the cheap, basic Archer C6 with a capable Mikrotik to get more control and try setting up the Mobility Domain, but I have no idea how it works or whether it even helps with roaming.
One earlier web search hinted that for the usual "roaming", all WiFi networks have to be in the same 802.11 mode (N vs. AC) for devices to even consider roaming (as in, they like to stick with AC even if there is an N network with a better signal).
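For reference, on the OpenWRT/Turris side 802.11r/Fast Transition appears to be just a per-SSID hostapd option. A minimal sketch via UCI - the wifi-iface section name (default_radio0) is whatever my config actually uses, and the mobility_domain value is an arbitrary example that would have to be identical on every AP:

uci set wireless.default_radio0.ieee80211r='1'
uci set wireless.default_radio0.mobility_domain='4f57'
uci set wireless.default_radio0.ft_over_ds='0'
uci set wireless.default_radio0.ft_psk_generate_local='1'
uci commit wireless && wifi

The stock-firmware Archer would still need its own FT support (or an OpenWRT flash) for transitions between the two APs, and as noted above not every client handles FT gracefully.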

6
 
 

Does anyone have a good suggestion for running a mail server on my NixOS box?

7
 
 

I self-hosted an instance at is.hardlywork.ing, and my images are getting cropped, losing the top and bottom 25% of the image and leaving me with a zoomed-in rectangle. I tried the web browser, the phone app, etc. - same issue uploading any 1920x1080 photo.

8
 
 

I have no idea what is going on with push notifications on my server or why they are so inconsistent. Sometimes they make it through and sometimes they don't. Messages seem to work fine; however, anything to do with calling and video-calling notifications just does not want to work.

Sometimes the notification for calls will come through but most of the time it just doesn't want to function.

I have looked in my Docker container logs and there is nothing indicating an error. It all seems to be working, even troubleshooting notifications in the Element client and Element X client. The tests pass with flying colors.

It feels as if my Element clients are not running in the background and are completely shut down. Any time I re-open the app, it looks as if it is opening for the first time.

The only thing I can think of that might be causing the issue is that I may not have set up Cloudflare Tunnels properly. I don't know what would be causing ntfy or Matrix to not play nice.

I am running my server on Ubuntu 20.04 with Docker and CasaOS. I am using Cloudflare Tunnels for my Matrix and ntfy containers. The logs for all the containers show nothing abnormal. The issue appears on two Google Pixel 8 devices running GrapheneOS. All the proper settings for the ntfy app and Element have been configured (unrestricted battery, notification vibration and sound, etc.). Calls seem to go through fine if the app remains open. I also have a DigitalOcean VPS for my coturn server.

This is what my Element app looks like whenever I open it again. Maybe I'm interpreting the loading screen wrong, but to me it looks like it's doing "first-time startup" type actions. That doesn't make sense, though, because both client devices are receiving text notifications.

I also found this GitHub issue that seems to be exactly the issue I am facing (except they seem to be having trouble with messages as well, whereas messages are the only notifications that work consistently for me). However, I don't know if some of the proposed solutions are only applicable to Synapse and not Dendrite.

https://github.com/element-hq/element-android/issues/7069

Thank you all for your time and help!

Edit: some more github issues referencing my issue. These seem more specific as well:

https://github.com/element-hq/element-android/issues/8761

https://github.com/element-hq/element-x-android/issues/3031

I do not know if this value will work in my Dendrite config:

ip_range_whitelist:

Edit 2: I created a throwaway Matrix account and notifications through my ntfy server worked flawlessly, which makes me think my issue must be with my coturn server on my VPS. I forgot to mention the VPS originally; I have updated the post with this info. Going to try and see if this is the issue.

9
 
 

I'm pulling my hair out over this. I've got a Proxmox homelab with an LXC running Technitium, installed from TTeck's script.

The DNS server is also doing DHCP for my network. I have an authoritative zone for ‘.lan’

I can get NS, SOA, TXT records from the DNS server, but no A records! The DNS query logs show that it gives an answer, and if I am on the DNS server itself I get an answer, but no other machines on the network hear the reply.

I think this means the DNS server is working properly. There are no firewalls in the way, as I can resolve the other record types.

Where else can I look, or how can I diagnose this? I am completely at a loss.
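One way to narrow it down (the IP, interface, and hostname below are placeholders for the Technitium LXC and a record in the .lan zone): run the same A query from a failing client over UDP and then TCP, and watch on the server whether the replies actually leave the box.

# from a client that gets no A records
dig @192.168.1.53 somehost.lan A
dig @192.168.1.53 somehost.lan A +tcp

# on the Technitium LXC, while the client queries
tcpdump -ni eth0 port 53

If the +tcp query returns the A record while plain UDP doesn't, that points at oversized/truncated UDP responses or something mangling UDP between the LXC and the clients, rather than at Technitium itself.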

10
40
submitted 15 hours ago* (last edited 14 hours ago) by GameGod to c/[email protected]
 
 

I'm thinking about moving my router to be a VM on a server in my homelab. Anyone have any experience to share about this? Any downsides I haven't thought of?

Backstory: My current pfSense router box can't keep up with my new fibre speeds because PPPoE is single-threaded on FreeBSD, so as a test, I installed OpenWRT in a VM on a server I have and, using VLANs, got it to act as a router for my network. I was able to validate that it can keep up with the fibre speeds, so all good there. While shopping for a new routerboard, I was thinking about minimizing power and heat, and it made me realize that maybe I should just keep the router virtualized permanently. The physical server is already on a big UPS, so I could keep it running in a power outage.

I only have 1 gbps fibre and a single GbE port on the server, but I could buff the LAN ports if needed.

Any downsides to keeping your router as a VM over having dedicated hardware for it?

11
232
Ghost blog adding activitypub (activitypub.ghost.org)
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

Not sure if this has already been posted since it's kind of old news (early 2024), but I think it's exciting. I'm currently looking into blog software with a nice web GUI, and I might wait for this to become real. Looking at the announcement page, they seem to be taking it seriously, and there have been continuously merged PRs regarding AP on their GitHub from April until recently.

12
 
 

Hi c/selfhosted,

here's another Update on PdfDing, the selfhosted PDF manager, viewer and editor offering a seamless user experience on multiple devices. You can find the repo here.

Thanks to being included in the favorite selfhosted apps launched in 2024 on selfh.st, PdfDing's popularity improved greatly. This week the project crossed 500 stars on GitHub, which was a big milestone for me. Thanks! Another thing that made me quite happy is that PdfDing got its first two contributions!

Milestones aside there were also new features and improvements since my last post:

  • PDFs can be starred and archived. Starred and archived PDFs can be quickly accessed in the overview. Archived PDFs are hidden from the default overview.
  • New (beautiful) theme inspired by fli.so. You can find a screenshot here.
  • Preview mode: the first page of each PDF can be shown in the overview without entering the viewer.
  • Optional thumbnail mode: The first page of each PDF will be shown as a thumbnail in the overview.
  • Design improvements that (in my opinion) make the whole application feel cleaner and more beautiful
  • I have created a helm chart so it can be easily installed on Kubernetes

As always I am happy if you star the repo or if someone wants to contribute.

13
 
 

Hey everyone. I'm trying to decide which RAID level to choose for my 6-bay NAS. I have 4 x 16 TB HDDs, 1 x 8 TB HDD, and a 500 GB SSD that I will use for the containers' Docker folders. I will be using the NAS to store media files (movies, TV series, photos, music, etc.) and also documents.

Currently I have two of the 16 TB drives in RAID 1, holding only the media files, and I'm torn between creating another RAID 1 with the remaining two 16 TB drives or adding them to the first two to create a RAID 5 and get a bigger storage pool. Have you ever had an incident where 2 HDDs were lost or damaged simultaneously (since RAID 5 forgives the loss of only 1 drive)?
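For a rough comparison of what the four 16 TB drives would give (usable capacity is approximate, before filesystem overhead):

  • 2 x RAID 1 pairs: 32 TB usable; each pair survives losing 1 of its 2 drives
  • RAID 5 across all 4: 48 TB usable; survives losing any 1 drive
  • RAID 6 across all 4: 32 TB usable; survives losing any 2 drives

So RAID 5 buys 16 TB of extra space at the cost of a longer, riskier rebuild when a drive fails, while RAID 6 would tolerate a double failure without gaining any space over the two mirrors. Either way RAID only covers drive failure, not accidental deletion, so a separate backup of the documents and photos still matters.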

In addition, I was thinking of keeping the 8 TB HDD as a standalone drive to back up the documents, and maybe the photos and the Docker setups.

Does this make sense to anyone who runs a similar setup?

Thanks for your inputs!

14
 
 

For some reason my push notifications are not working properly, even with my ntfy server. I miss calls from people because it doesn't seem like my clients are running in the background on my Pixel. I have battery access set to unrestricted. I do not know why the clients don't run in the background to notify me.

It seems they only really notify me if I keep the client open.

I am using a Google Pixel 8 running GrapheneOS. The clients I have tried are SchildiChat and Element.

I have also tried it on my fiance's Google Pixel 8 running GrapheneOS, and the same issue appears.

I am running both my ntfy server and Matrix server in Docker on an Ubuntu 20.04 machine with CasaOS. I use Cloudflare Tunnels to forward my services.

When troubleshooting notifications, all the tests pass on both Element and SchildiChat. However, on Element X and Schildi Next I get errors ("failed to check gateway" and "push back loop"); neither Element X nor Schildi Next is functioning 100% for me yet (see my other posts).

Any help is appreciated!

Edit: it seems to be a problem with ntfy. Checking the container logs I found this:

INFO Connection closed with HTTP 500 (ntfy error 50003) (error=internal server error: base-url must be be configured for this feature, error_code=50003, http_method=GET, http_path=/_matrix/push/v1/notify, http_status=500, tag=http, visitor_auth_limiter_limit=0.016666666666666666, visitor_auth_limiter_tokens=30, visitor_id=ip:example, visitor_ip=example, visitor_messages=43, visitor_messages_limit=17280, visitor_messages_remaining=17237, visitor_request_limiter_limit=0.2, visitor_request_limiter_tokens=60, visitor_seen=2025-02-06T09:19:26.984Z)

Edit 2: I seem to have fixed that error by reinstalling ntfy and creating a config file with the base URL. However, I am still not receiving push notifications. Element still crashes/closes and stops running in the background. I have no idea how to fix this. I have seen mentions of the 'ip_range_whitelist' variable not being set properly; however, all the documentation relating to that variable is for Synapse only. I do not know if the same variable is applicable to Dendrite - I cannot see it listed in the config file.
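For reference, the setting that error refers to lives in ntfy's server.yml; a minimal sketch of the relevant lines (the hostname is a placeholder, and behind-proxy is typically also wanted when ntfy sits behind a reverse proxy or tunnel):

# /etc/ntfy/server.yml
base-url: "https://ntfy.example.com"
behind-proxy: true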

15
 
 

I'm routing game traffic through my VPS via WireGuard to a home server that hosts games via Docker.

Setup is...

VPS/Wireguard -> Internet -> Wireguard/Dockerized Games Server

Now, my current config WORKS... however I'm curious if there is some unnecessary routing going on.

VPS iptable rules (omitted PostDown)

PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --match multiport --dports 61000:61100 -j DNAT --to-destination 10.0.0.3
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Game Server (omitted PostDown)

Here are the iptables rules on the game server; the --to-destination part is what I'm curious about...

PostUp = iptables -t nat -A PREROUTING -p tcp --dport 61000:61100 -d 10.0.0.3 -j DNAT --to-destination 192.168.1.14
PostUp = iptables -t nat -A POSTROUTING -j MASQUERADE

10.0.0.3 is the same machine as 192.168.1.14

The reason I'm setting the --to-destination IP to that is that the Docker rules created in the DOCKER chain of the iptables rules are looking for the destination nam-games.localdomain, which is my DNS entry for the game server. Unfortunately, I don't think I can change these, because I'm using a game server management panel called Pterodactyl that adds them. I also don't want to have to manually add rules every time I create a server.

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere
DNAT       tcp  --  anywhere             nam-games.localdomain  tcp dpt:61000 to:172.18.0.2:61000
DNAT       udp  --  anywhere             nam-games.localdomain  udp dpt:61000 to:172.18.0.2:61000
DNAT       tcp  --  anywhere             nam-games.localdomain  tcp dpt:61001 to:172.18.0.3:61001
DNAT       udp  --  anywhere             nam-games.localdomain  udp dpt:61001 to:172.18.0.3:61001

Concerns

The setup I described above is the only config I have gotten to work, but I'm curious if traffic is hitting the server, then going to the router, only to be routed back to the same machine again. If it is, is there a better way to set this up?
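One way to check whether that second DNAT actually sends traffic out to the LAN router and back (the interface names wg0 and eth0 are assumptions about the game server's setup): watch the WireGuard interface and the LAN interface at the same time while a client connects.

tcpdump -ni wg0 'tcp portrange 61000-61100'
tcpdump -ni eth0 'tcp portrange 61000-61100'

If the packets only ever appear on wg0 and the Docker bridge, the rewrite to 192.168.1.14 is handled entirely on the box and there is no extra round trip through the router; if they show up on eth0 heading to the gateway and coming back, it really is hairpinning.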

16
 
 

I'm in the process of getting my Home Assistant environment up and running, and decided to run a test: it turns out that my gaming PC (custom 5800X3D/7900XTX build) uses more power just sitting idle than both of my storage freezers combined.

Background: In addition to some other things, I bought two "Eightree" brand Zigbee-compatible plugs to see how they fare. One is monitoring the power usage of both freezers on a power strip (don't worry, it's a heavy duty strip meant for this), and the other is measuring the usage of my entire desktop setup (including monitors and the HA server itself, a Lenovo M710q).

After monitoring these for a couple days, I decided that I will shut off my PC unless I'm actively using it. It's not a server, but it does have WOL capability, so if I absolutely need to get into it remotely, it won't be an issue.

Pretty fascinating stuff, and now my wife is completely on board as well; she wants to put a plug on her iMac to see what it draws, as she uses it to hold her cross-stitch files and other things.

17
18
 
 

Hi guys! I was wondering what solution you use to check for and apply updates to your servers/containers. I'd rather not depend on any cloud; something running locally would be great.

Thanks!

19
 
 

A year ago I built a NAS to reduce my reliance on cloud services, and set up an arr stack. I went with TrueNAS Scale, which was on Bluefin at the time. In the past 12 months, TrueNAS Scale has been through FOUR major OS versions, with a fifth already announced. At least one of those involved a release train switch so, despite diligently checking for updates in the dashboard, I was left in the dust with an obsolete OS, and didn’t find out until it was already a huge hassle to upgrade.

I’ve been really happy with the utility and benefit of having this tool, but holy smokes how is anybody supposed to keep up with all of this? This is far from my only hobby, and I simply do not have the time, patience, or interest for a constant race to keep up with vetting new release versions and fixing what breaks every 3 weeks. I have enough tinkering hobbies as it is.

On top of that, there’s the whole blow up with TrueCharts, which has also left me with an entire suite of obsolete albatrosses around my NAS that I need to deal with. Am I still waiting for them to figure out an upgrade path? I don’t even know anymore.

Sorry for the rant, but I guess what I’m looking for is: how do you keep up with the constant maintenance and updates, and where do I go from here, in February 2025, with a system running Bluefin 22.12, a 32TB ZFS pool (RAIDZ1) that has to remain intact, and a handful of TrueCharts apps that I don’t want to lose the data from (e.g. Jellyfin configs/watch history)?

20
 
 

I'm currently shopping around for something a bit faster than Ollama, partly because I could not get it to use a different context and output length, which seems to be a known and long-ignored issue. Somehow everything I've tried so far has been missing one or more critical features, like:

  • "Hot" model replacement, so loading and unloading models on demand
  • Function calling
  • Support of most models
  • OpenAI API compatibility (to work well with Open WebUI)

I'd be happy about any recommendations!
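(For context: the usual way to change these in Ollama is per model via Modelfile parameters - a rough sketch below, with the model name just an example - and that's exactly the part I couldn't get to stick:)

# Modelfile
FROM llama3.1
PARAMETER num_ctx 8192
PARAMETER num_predict 2048

# then
ollama create mymodel -f Modelfile
ollama run mymodel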

21
 
 

cross-posted from: https://lemmy.selfhostcat.com/post/108211

  • my methods have been:

  • use Trilium for any detailed notes and documentation

  • Memos for random thoughts, especially shorter ones

  • pen and paper when offline or on mobile, because mobile Trilium and MoeMemos both suck

  • Zotero as citation and bibliography manager

  • backed up to Nextcloud

  • I have paperless-ngx but found it randomly errors on a ton of things, and Zotero is fine.

  • considering whether it's worth having so many different, spread-out methods

  • they're fun to use, but it creates more chaos than needed

22
 
 

Hi all, I’m one of the creators of ChartDB.

A few months ago, I introduced ChartDB to this community and received an amazing response - tons of positive feedback and feature requests. Thank you for the incredible support!

Recap: For those new to ChartDB, it simplifies database design and visualization, similar to tools like DBeaver, dbdiagram, and DrawSQL, but is completely open-source and self-hosted.

https://github.com/chartdb/chartdb

Key features

  • Instant Schema Import - Import your database schema with just one query.
  • AI-Powered DDL Export - Generate scripts for easy database migration.
  • Broad Databases - Works with PostgreSQL, MySQL, SQLite, MSSQL, ClickHouse, and more.
  • Customizable ER Diagrams - Visualize your database structure as needed.
  • Open-Source & Self-Hostable - Free, flexible, and transparent.

What’s New in v1.7.0 (2025-02-03)

🚀 New Features

  • CockroachDB Support - Now fully supports CockroachDB.
  • ClickHouse Enhancements - Improved ClickHouse integration.
  • DBML Editor - Added a built-in DBML editor in the side panel.
  • Import DBML - Now you can import DBML files directly into ChartDB.
  • Drag & Drop Table Ordering - Easily reorder tables in the side panel.
  • Mini Map Toggle - Added a toggle option for mini-map visibility.

🛠 Bug Fixes & Improvements

  • Docker Build - OPENAI_API_KEY is now optional when using Docker.
  • Canvas Editing - You can now edit table names directly on the canvas.
  • Dark Mode Fixes - Improved UI for the empty state in dark mode.
  • Power User Shortcuts - Added new keyboard shortcuts and key bindings.
  • Performance Boost - Optimized bundle size for faster loading.

What’s Next?

  • AI - Tables Relationships finder - AI-powered tool to detect table relationships.
  • CLI/API Diagram Updates - Option to update diagrams via CLI, API, or a JSON input file.
  • Git Integration for Versioning - Manage and track diagram changes with Git version control.
  • More database support & DBML improvements.
  • Enhanced collaboration & sharing features.
  • Additional performance optimizations.

We’re building ChartDB hand-in-hand with this community and contributors. Your feedback drives our progress, and we’d love to hear more!

Thank you to everybody who contributed! ❤️

23
 
 

I'm on Arch Linux, btw, and I have an RTX 3060 with 12 GB VRAM, which is cool because a 14B model fits into the VRAM. It works quite well, but I wonder if there is any way to improve the speed even more by utilizing the iGPU in my Intel 14600K. It always just sits there not doing anything.

But I don't know if it even makes sense to try. From what I've read in comments on the internet, the bottleneck will be the RAM speed for the iGPU, which would use my normal system RAM - far slower than the VRAM.
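For a rough sense of the gap (ballpark numbers for typical parts, so treat them as approximations):

  • RTX 3060 12 GB: ~360 GB/s of GDDR6 bandwidth
  • dual-channel DDR5-5600 (what the 14600K's iGPU would be fed from): ~90 GB/s
  • dual-channel DDR4-3200: ~51 GB/s

So layers offloaded to the iGPU would run roughly 4-7x slower than on the 3060, which is why splitting a model that already fits in VRAM across both usually makes things slower, not faster.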

Does anyone have any experience with that?

24
 
 

Is it feasible to self-host websites for small businesses? I'm trying to do some research on the amount of infrastructure involved and what you have to know from a security standpoint... I'm fine with building and hosting stuff locally for myself, but I'm tempted to move to hosting some of my business sites as well.

Does anyone have experience with this and can give me some advice one way or the other?

25
 
 

How to easily run a Webdav server in a Docker container

A lot of open source software lets you synchronise data via webdav, but how do you get a #webdav server?
Using Apache with the dav module is a common approach, but I couldn't be bothered to set it up that way.
My way is different: Rclone can act as a webdav server and is easy to configure.
I've been using it for 3 years and it's very reliable.
Have a look at the compose file in the picture.
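Since the compose file is only in the attached picture, here's roughly what such an rclone WebDAV service looks like in compose form (the path, port, user and password are placeholders, not my actual config):

services:
  webdav:
    image: rclone/rclone:latest
    command: serve webdav /data --addr :8080 --user myuser --pass mypassword
    volumes:
      - /srv/webdav:/data
    ports:
      - "8080:8080"
    restart: unless-stopped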
@selfhosted
