Self-Hosted Alternatives to Popular Services

213 readers
2 users here now

A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
201
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/svenvg93 on 2025-01-12 09:59:11+00:00.


Wrote a small blog post on how to set up Traefik as a reverse proxy with Let's Encrypt & Cloudflare for all your self-hosted applications. Hope it helps others!

202
 
 
The original was posted on /r/selfhosted by /u/fab_space on 2025-01-12 10:09:37+00:00.


A Reddit user asked whether a project I am building could integrate protection against clients abusing a Caddy-powered website with repeated 404 errors.

I ended up building a dedicated Caddy module for that: caddy-mib.

Caddy MIB (Middleware IP Ban) is a custom Caddy HTTP middleware designed to track client IPs generating repetitive errors (such as 404 or 500) and temporarily ban them after exceeding a specified threshold. This middleware helps mitigate brute force attacks, excessive requests for non-existent resources, or other abusive behavior by blocking IPs that breach the configured error limits.

Features

  • Track Specific HTTP Error Codes: Configure which HTTP error codes (e.g., 404, 500) to track.
  • Set Error Thresholds: Define the maximum number of errors allowed per IP before banning.
  • Custom Ban Duration: Specify how long an IP should be banned (e.g., 5s, 10s).
  • Dynamic Ban Duration: Increase ban duration exponentially with repeated offenses.
  • Whitelist Trusted IPs: Exempt specific IPs or CIDR ranges from banning.
  • Per-Path Configuration: Define custom error thresholds and ban durations for specific paths.
  • Custom Ban Response: Return a custom response body and header for banned IPs.
  • Configurable Ban Status Code: Set a custom HTTP status code for banned IPs (e.g., 403 Forbidden or 429 Too Many Requests).
  • Debugging: Detailed logs to track IP bans, error counts, and request statuses.
  • Automatic Unbanning: Banned IPs are automatically unbanned after the ban duration expires.
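The post doesn't show the module's actual syntax, but to make the feature list concrete, here is a rough sketch of what a Caddyfile using it might look like. Every directive and option name below is an illustrative guess derived from the feature list, not caddy-mib's real syntax; check the module's README for the actual directives:

```
example.com {
    caddy_mib {
        error_codes 404 500        # which error codes to track
        max_error_count 5          # errors allowed per IP before a ban
        ban_duration 10m           # initial ban length
        ban_duration_multiplier 2  # grow the ban for repeat offenders
        whitelist 192.168.1.0/24   # trusted range, never banned
        ban_status_code 429        # status returned to banned IPs
    }
    reverse_proxy localhost:8080
}
```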

Simple and effective: from a Reddit request to reality in a week ☕️

Have a nice Sunday, you all, dear selfhosters ❤️

203
 
 
The original was posted on /r/selfhosted by /u/quiteCryptic on 2025-01-12 03:20:40+00:00.


I know why it's not as popular: many client apps simply don't support it!

The biggest downside, and (I believe) why it is not more common in the wider world, is that distributing certificates to users can be cumbersome for large organizations. But most self-hosters only have a few users at most (family/friends) who need access to their network.

I prefer it over a VPN because you 1. don't have to install VPN client software, and 2. don't have to remember to turn on your VPN before connecting (or leave an always-on VPN connection).

To clarify, mTLS is when you authenticate by providing a certificate with your requests. The server verifies that certificate before allowing you access. Most people enforce this at the reverse proxy level, so if you don't have a valid certificate you can never even reach the applications at all.

Usage is dead simple: move a cert onto your device and click/tap to install it. An application that supports it will prompt you once to select which cert to use and then never ask again. Voilà, you can access your self-hosted app, and no one else can unless you gave them a cert (that only you can generate).
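The "cert that only you can generate" part works by running your own tiny certificate authority. As a rough sketch with openssl (file names and subject strings below are arbitrary examples; your reverse proxy's docs describe where to point the CA file):

```shell
# 1. Create a private CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=My Home CA"

# 2. Create a client key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr \
  -subj "/CN=alice"

# 3. Sign the client CSR with your CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365

# 4. Bundle into a PKCS#12 file that phones/browsers can import
openssl pkcs12 -export -inkey client.key -in client.crt \
  -out client.p12 -passout pass:changeit
```

The reverse proxy is then given ca.crt to verify clients, and a request can be tested with something like `curl --cert client.crt --key client.key https://app.example.com`.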

204
 
 
The original was posted on /r/selfhosted by /u/SymBiioTE on 2025-01-12 03:14:56+00:00.


Decided to make a full post since the link in my last post stopped working.

So I made a super cheap 10 inch rack using 3D printed parts anyone can find online and two pairs of 8U rack rails. Huge credit to all the people who made the STL files I used for this project. Everything in this project was printed on a CR-10. I used PLA (works for me for now) but I would recommend using a stronger filament.

Video if anyone wants to check out the build: YouTube

Items used for the build:

  1. 2x Gator rack screws
  2. 2x Gator 8U Rails
  3. 10 inch shelf Credit: u/goyko
  4. 10 inch blank (Not thick enough but you can make it thicker in Cura super easy) Credit: u/Mauker
  5. Dell OptiPlex 7060 mount (works perfectly for the 3050 as well) Credit: u/TimPrints_686384
  6. TP-Link ER605 shelf (also works for TL-SG108S) Credit: u/FloKun_144444
  7. Rack feet Credit: u/themassofthes_234253
  8. 1U Blank with pass through Credit: u/towilab

I used about nine 1U blanks that I made thicker in Cura (super easy to do, but if anyone wants the file I can make a remix and post it online): two on each side, four on the back just to stiffen it up, and one on the bottom front. I used the blank with a pass-through hole on the top front to keep things together (I later swapped this for a 12-port 0.5U patch panel from GeekPi).

You will need to get a pack of nuts and bolts for the sides; there aren't any threads in these holes (at least not on mine). I just used washers on the plastic side to prevent damage.

The server has been running in a small cabinet with two fans keeping it cool. I have noticed a very small amount of droop from the Dell minis. Not enough to bug me, but it is there.

Total build price was about $70 including the 12-port patch panel and short Cat6a cables; $38 without those two. This doesn't include the filament, because I used PLA that had been sitting in my garage and really didn't cost me anything.

205
 
 
The original was posted on /r/selfhosted by /u/Littol on 2025-01-12 00:49:56+00:00.


Hi everyone,

I decided to switch away from Notion and self-host an obsidian-livesync service, but I found that the project's documentation, while pretty comprehensive, is written in pretty bad English and doesn't provide a ready-to-use official container. So, mostly for myself, I created a container that automatically configures CouchDB by downloading and parsing the official install script provided by the obsidian-livesync maintainer, and wrote a blog post explaining how to self-host the container.

Here is the container source:

Here is the container on hub.docker.io:

And finally, here is the tutorial:

206
 
 
The original was posted on /r/selfhosted by /u/aceberg_ on 2025-01-11 15:04:15+00:00.


I created AnyAppStart - a control panel to Start/Stop/Restart/View Logs for apps in Docker, Systemd, VMs or anything else (with user scripts). Written in Go and React. Features:

  • User can add any types (like LXC or WakeOnLAN)
  • Control remote machines via SSH
  • Config in yaml files, no DB
  • Simple API

Reasons for creating AnyAppStart

My use cases:

  • Resource heavy but rarely used apps. Start them only when necessary
  • Environments for development, learning and experimenting. I do not need them all running at the same time
  • Being able to control local and remote apps from one place, no matter what type (Docker, Systemd, VM, bash script)

Installation

There is a Docker image available, but inside the container only the Docker type will work, which kinda defeats the purpose of this app. So installing the binary is recommended.

All binary packages can be found in the latest release. There are .deb, .rpm, .apk (Alpine Linux) and .tar.gz files.

Supported architectures: amd64, i386, arm_v5, arm_v6, arm_v7, arm64.

For amd64 there is a deb repo available.

207
 
 
The original was posted on /r/selfhosted by /u/dobby3698 on 2025-01-11 06:53:10+00:00.


Hi all, I know we've covered the "how do you get notifications" piece a fair bit, but I'm curious what notifications matter to you, more than the notification format: which services do you want notifications for? Is it backups, monitoring your arr stack, or seeing uptime on your servers?

I've just been through and set up a Telegram bot, and if the software I am using doesn't work with it natively, I use Gotify in the middle. Now I want to know what matters most to you: what notifications do you want to see, and which ones don't you care about?

208
 
 
The original was posted on /r/selfhosted by /u/Hakunin_Fallout on 2025-01-11 19:52:34+00:00.

209
 
 
The original was posted on /r/selfhosted by /u/AMillionMonkeys on 2025-01-11 17:14:25+00:00.


This is where I really miss Plex...

For my own purposes I'd just use Tailscale, but are there better options?

I have a domain if that helps. My server is on a consumer ISP, so some kind of DDNS fiddling would be necessary.

Is there a way to e-mail my user some kind of 'key' such that only users with keys can access jellyfin.mydomain.com?

I'm seeing a lot of solutions that involve Cloudflare, but I don't know enough about networking to understand what it's doing.

210
 
 
The original was posted on /r/selfhosted by /u/HTTP_404_NotFound on 2025-01-11 16:55:32+00:00.

211
 
 
The original was posted on /r/selfhosted by /u/TomerHorowitz on 2025-01-11 12:10:32+00:00.


I want a full end-to-end local Books/Manga/Comics library; help me understand which service will do what.

For each category (Books/Manga/Comics), what do we use for:

  1. Downloading
  2. Filling Metadata
  3. Reading

I currently have:

  1. Downloading:
    • Readarr: Books
    • Kaizoku: Manga
    • Mylarr: Comics
  2. Metadata:
    • Readarr: Books (shitty)
  3. Reading:
    • Kavita: All

I know about stuff like Komf/Calibre, but I never used them, and I'm not sure if they answer what I'm looking for, which is to have all the services on the server, and only a reader on the client side.

I.e., I'm looking for the most basic flow of:

  1. Searching a Book/Manga/Comic by name, and clicking "download".
    1. The Book/Manga/Comic will be downloaded by the server.
    2. Metadata will be automatically filled by the server.
  2. Opening my reader application, which I can download on any new device, and just have everything I need to start reading.

To sum things up: No metadata/services on the client side.

So... what do we use?

212
 
 
The original was posted on /r/selfhosted by /u/gaussoil on 2025-01-11 12:04:13+00:00.


I have a few coffee machines at home. I've already modded the controls using an ESP32 and they have an API for me to trigger it remotely, but managing them is becoming troublesome as I buy more coffee machines.

Is there a self-hosted solution that will let me authenticate using SSO and trigger a cup of coffee and deliver the push notification to my phone when the cup is ready?

Update: Since someone asked for a diagram, this is a high-level plan of how I think it should work.

213
 
 
The original was posted on /r/selfhosted by /u/cupdatedev on 2025-01-11 11:29:00+00:00.


Hi there, r/selfhosted!

I've been working on a service called Cupdate for a while. Cupdate automatically identifies container images in use in your Kubernetes cluster or on your Docker host. Cupdate then identifies the latest available release, release notes, vulnerabilities and more and makes the data available to you via a UI, API or through an RSS feed.

Although I developed it for my own purposes, I'd like to share it with you guys, thinking there may be others with the same use cases. It would also be great to get some feedback from the self-hosted community.

Screenshot of the Cupdate dashboard

Features:

  • Performant and lightweight - uses virtually zero CPU and roughly 14MiB RAM
  • Auto-detect container images in Kubernetes and Docker
  • Auto-detect the latest available container image versions
  • UI for discovering updates, release notes and more
  • Subscribe to updates via an RSS feed
  • Graphs image versions' dependants (containers, pods, jobs etc.) explaining why the image is in use
  • Vulnerability scanning via Docker Scout, Quay and the GitHub Advisory Database
  • APIs for custom integrations
  • Metrics and traces for observability

GitHub:

214
 
 
The original was posted on /r/selfhosted by /u/doctormay6 on 2025-01-11 05:52:59+00:00.


The goal of stack-back is to make it as easy as possible to make reliable backups of all your stateful container volumes and databases in a docker compose stack.

I found a project called restic-compose-backup while looking for a simple solution to back up my compose stacks. Unfortunately, the project was abandoned, and some things have broken over time. I have forked the project, returned it to a working state, and added some enhancements.

215
 
 
The original was posted on /r/selfhosted by /u/RathdrumRip on 2025-01-10 20:43:29+00:00.


Relating to selfhosting...

216
 
 
The original was posted on /r/selfhosted by /u/JamesRy96 on 2025-01-10 22:51:08+00:00.


Hey everyone,

I've been looking for an alternative to Readarr that would make it easier for my users to grab audiobooks, and haven't found anything too promising, so I threw together a simple web app to download books from AudioBook Bay via qBittorrent.

The app displays search results from AudioBook Bay with the option to view details or download to the server. If a download is chosen, the infohash is turned into a magnet link and sent to qBittorrent.
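The infohash-to-magnet conversion is just string assembly. A portable shell sketch (the hash and title below are made-up example values, and real code should percent-encode the display name properly):

```shell
# Build a magnet link from an infohash, as the app does before
# handing it to qBittorrent (hash and title are made-up examples)
infohash="dd8255ecdc7ca55fb0bbf81323d87062db1f6d1c"
title="Some Audiobook"

# Crude display-name encoding: spaces -> '+'
# (real code should percent-encode all reserved characters)
dn=$(printf '%s' "$title" | tr ' ' '+')

magnet="magnet:?xt=urn:btih:${infohash}&dn=${dn}"
echo "$magnet"
```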

In my setup, the /audiobooks folder in my qBittorrent container is mapped to the root folder of my Audiobookshelf library. You can set your SAVE_PATH_BASE value anywhere you'd like; subfolders with the book title will be created automatically. This path is relative to wherever you have qBittorrent running.

You can run app.py by itself or build the Docker container. At the beginning of the app.py script there are values to change for your setup.

This is very sloppy and just thrown together on a whim; down the line I'm going to clean it up and get rid of the bad practices, but for now I wanted to share what I threw together this afternoon in case others are interested, and to collect feedback.

Check out the GitHub repo here.

EDIT:

Screenshots I took from my phone because I’m out of the house:

Start Page

Search Results

217
 
 
The original was posted on /r/selfhosted by /u/True_El_Cabong on 2025-01-10 20:32:52+00:00.


I’m currently undergoing treatment for stage four cancer, and I’m looking for an efficient way to track my symptoms, activities (like sleep and exercise), and overall condition. While the iPhone offers much of this functionality, I'd prefer not to share the progress of my approaching demise with a corporate AI model.

The good news is that my prognosis is measured in years, not days, so tracking these things would be very helpful for documenting my care progress. Does anyone know of a purpose-built, self-hosted application, or could you recommend a general journaling app that could be customized for this purpose?

The most helpful features would include the ability to generate reports to quantify specific symptoms for doctor’s visits—for example, tracking frequency and duration of symptoms.

Thanks in advance for any suggestions!

218
 
 
The original was posted on /r/selfhosted by /u/esiy0676 on 2025-01-10 17:18:48+00:00.


Summary: Restore a full root filesystem of a backed up Proxmox node - use case with ZFS as an example, but can be appropriately adjusted for other systems. Approach without obscure tools. Simple tar, sgdisk and chroot. This is a follow-up to the previous post on backing up the entire root filesystem offline from a rescue boot.


Better formatted at: No tracking. No ads. OP r/ProxmoxQA


Previously, we created a full root filesystem backup of our host. It's time to create a freshly restored host from it - one that may or may not share the exact same disk capacity, partitions or even filesystems. This is also a perfect opportunity to change e.g. filesystem properties that cannot easily be changed after install.

Full restore principle

We have the most important part of a system - the contents of the root filesystem - in an archive created with the stock tar 1 tool, with preserved permissions and correct symbolic links. There is absolutely NO need to attempt to recreate low-level disk structures according to the original, let alone clone actual blocks of data. If anything, our restored backup should result in a defragmented system.
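The claim that a plain tar archive is enough rests on tar preserving modes, ownership and symlinks. A tiny self-contained demonstration of the -p (preserve permissions) round trip (the directory and file names are throwaway examples):

```shell
# Demonstrate that a tar backup/restore round trip preserves
# file permissions (the directories here are throwaway examples)
mkdir -p src restore
echo "secret" > src/app.conf
chmod 600 src/app.conf          # restrictive mode we expect to survive

tar -cf root.tar -C src .       # "back up" the tree
tar -xpf root.tar -C restore    # restore; -p preserves permissions

stat -c '%a' restore/app.conf   # prints 600
```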

IMPORTANT This guide assumes you have backed up non-root parts of your system (such as guests) separately and/or that they reside on shared storage anyhow, which should be a regular setup for any serious, certainly production-like, system.

Only two components are missing to get us running:

  • a partition to restore it onto; and
  • a bootloader that will bootstrap the system.

NOTE The origin of the backup in terms of configuration does NOT matter. If we were e.g. changing mountpoints, we might at worst need to adjust a configuration file here or there after the restore. The original bootloader is also of little interest to us, as we had NOT even backed it up.

UEFI system with ZFS

We will take a UEFI boot with ZFS on root as our example target system; however, we will make a few changes and add a SWAP partition compared to what a stock PVE install would provide.

A live system to boot into is needed to make this happen. This could be - generally speaking - regular Debian, 2 but for consistency, we will boot with the not-so-intuitive option of the ISO installer, 3 exactly as before during the making of the backup - this part is skipped here.

[!WARNING] We are about to destroy ANY AND ALL original data structures on a disk of our choice where we intend to deploy our backup. It is prudent to only have the necessary storage attached so as not to inadvertently perform this on the "wrong" target device. Further, it would be unfortunate to detach the "wrong" devices by mistake to begin with, so always check targets by e.g. UUID, PARTUUID, PARTLABEL with blkid 4 before proceeding.

Once booted up into the live system, we set up network and SSH access as before - this is more comfortable, but not necessary. However, as our example backup resides on a remote system, we will need it for that purpose, but everything including e.g. pre-prepared scripts can be stored on a locally attached and mounted backup disk instead.

Disk structures

This is a UEFI system and we will make use of disk /dev/sda as target in our case.

CAUTION You want to adjust this according to your case; sda typically refers to the first attached SATA disk on a system. Partitions are then numbered with a suffix, e.g. the first one as sda1. In case of an NVMe disk, it would be a bit different, with nvme0n1 for the entire device and the first partition designated nvme0n1p1. The first 0 refers to the controller.

Be aware that these names are NOT fixed across reboots, i.e. what was designated as sda before might appear as sdb on a live system boot.

We can check with lsblk 5 what is available at first, but ours is a virtually empty system:

lsblk -f

NAME  FSTYPE   FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0 squashfs 4.0                                                             
loop1 squashfs 4.0                                                             
sr0   iso9660        PVE   2024-11-20-21-45-59-00                     0   100% /cdrom
sda                                                                            

Another view of the disk itself with sgdisk: 6

sgdisk -p /dev/sda

Creating new GPT entries in memory.
Disk /dev/sda: 134217728 sectors, 64.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 83E0FED4-5213-4FC3-982A-6678E9458E0B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 134217694
Partitions will be aligned on 2048-sector boundaries
Total free space is 134217661 sectors (64.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name

NOTE We will make use of sgdisk as this allows us good reusability and is more error-proof, but if you like the interactive way, plain gdisk 7 is at your disposal to achieve the same.

Although our target appears empty, we want to make sure there are no confusing filesystem or partition table structures left behind from before:

WARNING The below is destructive to ALL PARTITIONS on the disk. If you only need to wipe some existing partitions or their content, skip this step and adjust the rest accordingly to your use case.

wipefs -ab /dev/sda 
sgdisk -Zo /dev/sda

Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.

The wipefs 8 helps with destroying anything not known to sgdisk. You can use wipefs /dev/sda* (without the -a option) to see what is about to be deleted. Additionally, the -b option creates backups of the deleted signatures in the home directory.

Partitioning

Time to create the partitions. Since we do NOT need a BIOS boot partition on an EFI system, we will skip it; in line with Proxmox designations, we will make partition 2 the EFI partition and partition 3 the ZFS pool partition. We, however, want an extra partition at the end for SWAP.

sgdisk -n "2:1M:+1G" -t "2:EF00" /dev/sda
sgdisk -n "3:0:-16G" -t "3:BF01" /dev/sda
sgdisk -n "4:0:0" -t "4:8200" /dev/sda

The EFI System Partition is numbered 2, offset 1M from the beginning, sized 1G, and has to have type EF00. Partition 3 immediately follows it, fills the space up to the last 16G, and is marked (not entirely correctly, but as per Proxmox nomenclature) as BF01, a Solaris (ZFS) partition type. The final partition 4 is our SWAP, designated as such by type 8200.

TIP You can list all types with sgdisk -L - these are the short designations. Partition types are also marked by PARTTYPE, which can be seen with e.g. lsblk -o+PARTTYPE - NOT to be confused with PARTUUID. It is also possible to assign partition labels (PARTLABEL) with sgdisk -c, but this is of little functional use unless used for identification via /dev/disk/by-partlabel/, which is less common.

As for the SWAP partition, this is just an example we are adding here; you may completely ignore it. Further, spinning disk aficionados will point out that the best practice is for a SWAP partition to reside at the beginning of the disk for performance reasons, and they would be correct - though that matters less nowadays. We want to keep the Proxmox stock numbering to avoid confusion. That said, partitions do NOT have to be numbered in the order they are laid out; we just want to keep everything easy for ourselves (and others) to orient in.

TIP If you like the idea of adding a regular SWAP partition to your existing ZFS install, you may use this to your benefit; if you are making a new install, you can leave yourself some free space at the end in the advanced options of the installer 9 and simply create that one additional partition later.

We will now create a FAT filesystem on our EFI System Partition and prepare the SWAP space:

mkfs.vfat /dev/sda2
mkswap /dev/sda4

Let's check, specifically for PARTUUID and FSTYPE after our setup:

lsblk -o+PARTUUID,FSTYPE

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS PARTUUID                             FSTYPE
loop0    7...
***
Content cut off. Read original on https://old.reddit.com/r/selfhosted/comments/1hy9i7k/restore_entire_proxmox_ve_host_from_backup/
219
 
 
The original was posted on /r/selfhosted by /u/vir_db on 2025-01-10 14:04:18+00:00.


I am currently managing my library (epub and mobi) using calibre + calibreweb, but I would like something better.

For other media, I happily use Jellyfin and Jellyseerr; I am looking for something similar but for books (I know Jellyfin also supports books, but that feature is not very well developed in my opinion, and Jellyseerr does not support books).

I am particularly interested in the functionality of suggesting similar books (or authors) and requesting them to be added to the library.

As a client I use koreader, relying on a self-hosted kosync server, the only special requirement is that the alternative supports authenticated OPDS, so that I can download books directly from koreader.

220
 
 
The original was posted on /r/selfhosted by /u/shol-ly on 2025-01-10 13:02:03+00:00.


Happy Friday, r/selfhosted! Linked below is the latest edition of This Week in Self-Hosted, a weekly newsletter recap of the latest activity in self-hosted software and content.

This week's features include:

  • A new Raspberry Pi 5 model
  • Software updates and launches
  • A spotlight on Paperless AI - an AI-integrated platform for Paperless-ngx document analysis (u/Left_Ad_8860)
  • A ton of great guides from the community (including this subreddit!)

In this week's podcast episode, I'm joined by guest co-host Fredrik Burmester - the developer of the third-party mobile Jellyfin client Streamyfin.

Thanks, and as usual, feel free to reach out with feedback!


Newsletter | Watch on YouTube | Listen via Podcast

221
 
 
The original was posted on /r/selfhosted by /u/dgtlmoon123 on 2025-01-10 09:03:19+00:00.


Hey all! Greetings from the Reddit-inspired self-hosted web page change detection engine :) Quite an important update for those who are using changedetection.io to push scraped data from a website to their own data sources when a change is detected: we have greatly improved the whole notification send / send-test experience with extra debug output. Have an awesome weekend! <3 much love!

Web page change detection - showing configuration of custom endpoints for recording page change values

222
 
 
The original was posted on /r/selfhosted by /u/SnooHedgehogs77 on 2025-01-10 08:42:51+00:00.


Hello r/selfhosted !

I've just released Dagu v1.16.0. It's a tool for scheduling jobs and managing workflows, kind of like Cron or Airflow, but simpler. You define your workflows in YAML, and Dagu handles the rest. It runs on your own hardware (even on small edge devices such as a Raspberry Pi), so there are no cloud or RDB service dependencies. Install it with a single, zero-dependency binary.

Here's what's new in v1.16.0:

  • Better Docker image: Now uses Ubuntu 24.04 with common tools.
  • .env file support: Easier environment variable management.
  • JSON in YAML: Use values from JSON data within your DAG.
  • More control over when steps run: Check conditions with regex or commands.
  • Improved error handling: Decide what happens when a step fails.
  • Easier CLI: Named and positional parameters.
  • Sub-workflow improvements: Better output handling.
  • Direct piping and shell commands: More flexibility in your steps.
  • Environment variables almost everywhere: Configure more with environment variables.
  • Web UI improvements and smaller save files.

Dagu is great for automating tasks and pipelines without writing code. Give it a shot!
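Since workflows are plain YAML, a minimal DAG might look like the following sketch. The schedule, step names and commands here are made-up examples based on Dagu's basic documented format; see the docs for the full schema:

```yaml
# backup.yaml - a hypothetical two-step Dagu workflow
schedule: "0 3 * * *"        # run daily at 03:00
steps:
  - name: dump-db
    command: pg_dump mydb -f /backups/mydb.sql
  - name: compress
    command: gzip -f /backups/mydb.sql
    depends:
      - dump-db              # runs only after dump-db succeeds
```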

Web UI:

Docs:

Installation:

Feedback and contributions are welcome!

GitHub issues:

223
 
 
The original was posted on /r/selfhosted by /u/poeti8 on 2025-01-10 07:11:47+00:00.

224
 
 
The original was posted on /r/selfhosted by /u/dicksonleroy on 2025-01-10 01:02:13+00:00.


This is not an anti-Google post. Well, not directly anyway. But how have you used self-hosting to get Google out of your affairs?

I, personally, as a writer and researcher, use Nextcloud and Joplin mostly to replace Google Drive, Google Photos, Google Docs and Google Keep. I also self-host my password manager.

I still use Gmail (through Thunderbird) and YouTube for now, but that’s pretty much all the Google products I use at the moment.

225
 
 
The original was posted on /r/selfhosted by /u/Parking-Cow4107 on 2025-01-09 20:46:05+00:00.


Hey!

I just released a new version of Movie Roulette! Here is the last post:

Github: 

What is Movie Roulette?

At its core it is a tool which chooses a random movie from your Plex/Jellyfin/Emby movie libraries.

You can install it either as a docker container or as a macOS dmg.

What is new in v3.2?

ENV BREAKING CHANGES:

Deprecated ENV (please check README)

  • JELLYSEERR_FORCE_USE

  • LGTV_IP

  • LGTV_MAC

IMPORTANT:

If you have issues after this update please delete the config files under your docker volume.

New Features

  • Added Emby support

  • Added Ombi request service

  • Added watch filter (Unwatched Movies / All Movies / Watched Movies) with auto-update of Genre/PG/Year filters

  • Added search functionality

  • Initial implementation for Samsung Tizen and Sony Android TVs - NOT WORKING - Searching for contributors and testers

Major Changes

  • Completely reworked request service implementation

  • Removed forced Jellyseerr for Plex

  • Changed active service display for better visibility. Now the button shows the selected service instead of the next service

  • Expanded caching logic for all services

  • Improved cache management

Improvements

  • Updated settings UI and logic

  • Enhanced mobile styling for settings

  • Better handling of incomplete configurations

  • Moved debug endpoint to support all services /debug_service

  • Changed movie poster end state from ENDED to ENDING at 90% progress

  • Improved poster time calculations for stopped/resumed playback

  • Better movie poster updates for external playback

Bug Fixes

  • Fixed Trakt connection and token management

  • Fixed various UI and playback state issues

  • Various performance and stability improvements

Some screenshots:

Main View

Poster Mode

Cast example

More screenshots:

Hope you'll enjoy it!
