fmstrat

joined 2 years ago
[–] [email protected] 8 points 1 day ago (1 children)

Everyone is waiting for this. There needs to be a party.

[–] [email protected] 5 points 1 day ago

A fun conversation starter is always "So do you have an internal monologue?"

[–] [email protected] 4 points 1 day ago (2 children)

No thanks. I get some people agreed to this, but I'm going to continue to use .lan, like so many others. If they ever register .lan for public use, there will be a lot of people pissed off.

IMO, the only reason not to assign a top-level domain in the RFC is so that some company can make money on it. The authors were from Cisco and Nominum, a DNS company purchased by Akamai, but that doesn't appear to be the reason why. .home and .homenet were proposed, but this is from the mailing list:

  1. we cannot be sure that using .home is consistent with the existing (ab)use
  2. ICANN is in receipt of about a dozen applications for ".home", and some of those applicants no doubt have deeper pockets than the IETF does should they decide to litigate

https://mailarchive.ietf.org/arch/msg/homenet/PWl6CANKKAeeMs1kgBP5YPtiCWg/

So, corporate fear.

[–] [email protected] 3 points 1 day ago

I just use openssl's built-in management. I have scripts that set it up and generate a .lan domain, and instructions for adding it to clients. I could make a repo and write-up if you'd like.

As the other commenter pointed out, .lan is not officially sanctioned for local use, but it is not used publicly and is a common choice. However, you could use whatever you want.
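In the meantime, here's a minimal sketch of the idea; file and host names here are hypothetical, and my actual scripts differ:

# Create the CA key and a self-signed root cert; clients trust ca.crt
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 -subj "/CN=Home Lab CA" -out ca.crt

# Key and CSR for a host, then sign it with a SAN so browsers accept it
openssl genrsa -out box.lan.key 2048
openssl req -new -key box.lan.key -subj "/CN=box.lan" -out box.lan.csr
openssl x509 -req -in box.lan.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -extfile <(printf "subjectAltName=DNS:box.lan") -out box.lan.crt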

[–] [email protected] 8 points 2 days ago (6 children)

I use a domain, but for homelab I eventually switched to my own internal CA.

Instead of having to do service.domain.tld it's nice to do service.lan.

[–] [email protected] 16 points 2 days ago

Yea, no clue what this is. No context, and I can't read what was attached because it's an image. Waste of a post.

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago)

Agreed, and unfortunately articles like this are food for CEOs to do more under the guise of AI. "See, it works!"

[–] [email protected] 9 points 2 days ago (4 children)

Wouldn't it be more efficient to put this on Codeberg and accept PRs?

[–] [email protected] 13 points 2 days ago (3 children)

I'm still running Qwen32b-coder on a Mac mini. Works great, a little slow, but fine.

[–] [email protected] 2 points 4 days ago

Yea I just hit 2k hours. I don't play a ton but have been playing forever and am now hearing Rematch may be a good secondary.

[–] [email protected] 2 points 5 days ago (2 children)

Were you a Rocket League player by chance?

[–] [email protected] 3 points 5 days ago* (last edited 5 days ago)

I just validated that the LDAP privilege escalation issue is no longer an issue in the latest version. The curl script is in the ticket.

This was the one where a standard user could get plugin credentials, such as the LDAP bind user, and change the LDAP endpoint. I.e., bad.

I chose this one because after going through all of them, it was the only one that allowed access to something that wasn't just data in Jellyfin.

So for me, security is less of an issue knowing that, as only family use the service, and the remaining issues all require a logged-in user (hitting an admin endpoint with a user token).

Plus, I tried a few of those and they were also fixed, just not documented yet. I didn't add to those tickets because I was not as formal with my testing.
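If anyone wants to spot-check the same thing, the shape of the request is roughly this sketch (the real script is in the ticket; the host, token, and plugin ID here are placeholders):

# A patched server should return 401/403 for a standard user's token;
# a 200 containing the bind credentials would mean the bug is back
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "X-Emby-Token: STANDARD_USER_TOKEN" \
  "https://jellyfin.example.com/Plugins/PLUGIN_ID/Configuration"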


 

cross-posted from: https://lemmy.nowsci.com/post/13005097

Hi all,

I've been running a bunch of services in Docker containers using Docker Compose for a while now, with data storage on ZFS-mirrored NVMe and/or RAIDZ2 HDDs.

I've been thinking about moving from my single-server setup to three micro-servers (Intel N150s), for redundancy, learning, and fun.

Choosing Kubernetes was easy, but I'd like to get some outside opinions on storage. Some examples of how I'm using storage:

  1. Media and large data storage: Currently on the RAIDZ2 HDDs; will stay here but be migrated to a dedicated NAS
  2. High-IO workloads like PostgreSQL and email: Currently running on the NVMes
  3. General low-volume storage: Also currently on NVMes, but a different use case. These are lower IO, like data storage for Nextcloud, Immich, etc.

I'm a huge fan of being able to snapshot with ZFS, as I mirror all my data off-site, with hourly pushes for some container data and daily for the rest. I'd like to be able to continue this kind of block-level backup if possible.
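For reference, the push is basically incremental ZFS send/receive, something like this sketch (pool, dataset, and host names are examples):

# PREV is last hour's snapshot; only blocks changed since then go over the wire
NOW=$(date +%Y-%m-%d_%H)
zfs snapshot tank/containers@"${NOW}"
zfs send -i tank/containers@"${PREV}" tank/containers@"${NOW}" | \
  ssh offsite-host zfs recv backup/containers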

Assume I'm a noob at Kubernetes storage (have been reading, but still fresh to me). I'd love to know how others would set up their storage interfaces for this.

I'm trying to understand if there's a way to have the storage "RAIDed" across the drives in the three micro-servers, or if things work differently than I expect. Thanks!

 


79
Ultralightish (lemmy.nowsci.com)
 

Since you all liked the tent on the coast, I thought you might also enjoy this sighting. We spotted this species of comfort camper in the wild while we were up there.

 

Since I agree with @[email protected], I will contribute, too. I, however, love the snow and ice for camping, hiking, backpacking, whatever.

This was taken on the coast after backpacking through the Olympics in Washington State.

 

Hey all,

I'm de-googling, and while OctoApp (to control OctoPrint) is open source (https://gitlab.com/realoctoapp/octoapp), there are no APKs in the releases, despite what the README says. I can't report this as an issue because issues are turned off on GitLab, so does anyone know of any other way it is distributed outside of the Play Store?

Thanks.

 

Hi all,

Working through some things like a will (I am fine, just normal life planning), and debating methods for digital management for when I do die.

I run a lot of self-hosted services for family and friends, all on secured servers with ZFS and on/off-site backups. The key ingredient is Vaultwarden for password management.

I'd like to put something in place so that encryption keys, some docs, and key passwords are released to a tech-savvy friend. Anyone know of existing solutions for this?

Requirements:

  • Keys are not provided to a third party beforehand
  • The release can't be forged open by someone else
  • If possible, no weekly "press a button to prove I'm alive" check-in

I'm thinking of some kind of key pair where my friend holds the private key and the public key is provided to a family member; when activated, a timer starts during which I could cancel the release.
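To make the encryption half concrete, a sketch of what I mean, using age as an example tool (filenames and the recipient string are hypothetical); the activation/timer half is the part I'm looking for:

# Friend generates a key pair and keeps the private half offline
age-keygen -o friend.key        # prints the matching public key
# Encrypt the bundle to that public key; the ciphertext is useless without friend.key
tar cz keys/ docs/ | age -r age1examplepublickey -o estate.tar.gz.age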

 

Hi all,

About to go full no-Google, but am missing one app alternative. This is URL Forwarder: https://play.google.com/store/apps/details?id=net.daverix.urlforward

It allows users to share to it like a bookmarklet. Anyone know of something else that does this?

An example use case would be browsing in your Lemmy app and sharing the post URL to another webpage.

 

So I haven't run a custom ROM for a long time and I'm thinking of trying out GrapheneOS. Before I do, is there a modern way to take a full disk image of a stock Pixel 8? The intent would be to factory restore to where I am in this moment if need be.

86
Screen is a wonderful thing. (lemmy-ui.nowsci.com:33443)
 

I use Ollama with continue.dev in code-server, and I wanted a way to hit Ctrl-Shift-Alt-T to get a "top" of sorts that would show CPU, IO, GPU, loaded models, and logs, all in one place quickly.

Set up the below screenrc file and created the shortcut above in Debian. Tab switches between CPU and IO, and Ctrl-a q quits all screens and closes the terminal window.

Screenrc:

termcapinfo xterm* ti@:te@
startup_message off
defscrollback 10000

bind q eval "kill" "quit"
caption always "%{= rw}%-w%{= KW}%n %t%{-}%+w"
defbce on

# Start htop and focus
screen -t "HTop" htop
focus

# Split horizontally to put nvtop under htop
split
focus
screen -t "NVTop" nvtop

# Split vertically to put ollama next to nvtop
split -v
focus
screen -t "Ollama PS" watch -n5 'docker exec -ti ai-ollama ollama ps'

# Split horizontally to put logs underneath ps
split
focus
screen -t "Ollama logs" bash -c "docker logs -f --tail 100 ai-ollama | grep -Ev '\"/api/ps\"|\"/\"'"

# Resize PS, then get back to logs
focus up
resize -v 6
focus down

# Get back to htop
focus

The atop script that runs with Ctrl-Shift-Alt-T:

#!/usr/bin/env bash
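
# Called by the shortcut with "new": open a terminal that re-runs this
# script with no arguments, which then starts the screen layout above.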

if [ "${1}" = "new" ]; then
    gnome-terminal --geometry=200x50+0+0 --maximize -- /data/system/bin/atop
else
    screen -c /data/system/setup/common/screenrc-status
fi

Happy to share my htop config as well if anyone wants it.

 

Hey all,

Anyone familiar with the state of Raptor Lake performance + efficiency cores in Linux? I'm specifically curious about how the kernel balances things when running multiple containers (without pinned CPUs).
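For context, the topology is visible with lscpu, and the pinned alternative (which I'd rather avoid) would look like this sketch, with a hypothetical container name:

# P-cores show higher MAXMHZ and two threads per CORE; E-cores show one
lscpu --extended=CPU,CORE,MAXMHZ
# The pinned approach: restrict a container to specific CPUs
docker update --cpuset-cpus 0-7 some-container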

Thanks!

 

Trying to figure out if there is a way to do this without ZFS sending a ton of data. I have:

  • s/test1, inside it are folders:
    • folder1
    • folder2

I have this pool backed up remotely by sending snapshots.

I'd like to split this up into:

  • s/test1, inside is folder:
    • folder1
  • s/test2, inside is folder:
    • folder2

I'm trying to figure out if there is some combination of clone and promote that would limit the amount of data needed to be sent over the network.
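Locally, the combination I have in mind would look like this sketch (the snapshot name is hypothetical); what I can't tell is whether the remote side could then receive s/test2 without a full send:

zfs snapshot s/test1@split
zfs clone s/test1@split s/test2    # shares blocks with test1; no data copied locally
zfs promote s/test2                # clone no longer depends on test1's snapshot
# then remove folder2 from test1 and folder1 from test2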

Or maybe there is some record/replay method I could do on snapshots that I'm not aware of.

Thoughts?

41
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

Since everyone here seemed to like my Pegboard designs, I figured I'd share this as well. When making the Only Sensor (see the home automation community or my site), I used this Solder Fume Extractor to keep my lungs nice and clean.

Fully 3D printable, and a full bill of materials at the link. Enjoy!

Hrm, not sure why the image returned a logo, but here it is:

https://nowsci.com/diy-solder-extractor
