MajorSauce

joined 2 years ago
[–] [email protected] 1 points 17 minutes ago

Try hosting DeepSeek R1 locally; for me the results are similar to ChatGPT without needing to send any info over the internet.

LM Studio is a good start.
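LM Studio exposes a local OpenAI-compatible server (by default on `localhost:1234`), so nothing leaves your machine. A minimal sketch, assuming that default port and a hypothetical model name — adjust both to whatever you actually loaded:

```python
import json
import urllib.request

# Default LM Studio local endpoint -- an assumption, check your settings.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt, model="deepseek-r1-distill-qwen-7b"):
    """Build an OpenAI-style chat request (model name is a placeholder)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt):
    """Send the prompt to the local server; no data goes to the internet."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client library would work against the same endpoint.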

[–] [email protected] 1 points 2 days ago (2 children)

Proxmox does support NFS

But let's say I wanted to decommission my TrueNAS and keep the storage exclusively on the 3-node cluster: how would I layer Proxmox and the storage together?

(Much appreciated btw)

[–] [email protected] 9 points 2 days ago (3 children)

You are 100% right, I meant for the homelab as a whole. I do it for self-hosting purposes, but the journey is a hobby of mine.

So exploring more experimental technologies would be a plus for me.

[–] [email protected] 1 points 2 days ago (4 children)

Currently, most of the data is on a bare-metal TrueNAS.

Since each node will come with 32TB of storage, this should be plenty for the foreseeable future (I am currently using only 20TB across everything).

The data should be available to Proxmox VMs (for their disk images) and to self-hosted apps (mainly Nextcloud and the Arr apps).

A bonus would be to have a quick/easy way to "mount" some volume to a Linux Desktop to do some file management.

[–] [email protected] 2 points 2 days ago

I think I am on the same page.

I will probably keep Plex/Stash out of S3, but Nextcloud could be worth it? (1TB with lots of documents and media).

How would you go for Plex/Stash storage?

Keeping it as a LVM in Proxmox?

[–] [email protected] 3 points 2 days ago (2 children)

Darn, Garage is the only one I managed to successfully deploy a test cluster with.

I will dive more carefully into Ceph; the documentation is a bit heavy, but it might be worth the effort…

Thanks.

 

Hi all!

I will soon acquire a pretty beefy unit compared to my current setup (a 3-node server with 16 cores, 512GB RAM, and 32TB of storage per node).

Currently I run TrueNAS and Proxmox on bare metal and most of my storage is made available to apps via SSHFS or NFS.

I recently started looking for "modern" distributed filesystems and found some interesting S3-like/compatible projects.

To name a few:

  • MinIO
  • SeaweedFS
  • Garage
  • GlusterFS

I like the idea of abstracting the filesystem to allow me to move data around, play with redundancy and balancing, etc.
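One thing worth knowing before the plunge: S3-compatible stores (MinIO, SeaweedFS, Garage) keep a *flat* key namespace — "folders" are just key prefixes that listings roll up on a delimiter. A minimal pure-Python sketch of that listing logic, to show what the abstraction actually does:

```python
def list_keys(keys, prefix="", delimiter="/"):
    """Emulate S3's ListObjects: return (objects, common_prefixes).

    Keys under `prefix` that still contain `delimiter` are rolled up
    into "common prefixes" -- this is how S3 fakes directories on top
    of a flat key space.
    """
    objects, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Roll everything below the next delimiter into one entry.
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(objects), sorted(common)
```

So `list_keys(["media/movies/a.mkv", "media/tv/b.mkv"], prefix="media/")` reports the two "subdirectories" `media/movies/` and `media/tv/` without any directory actually existing — which is also why renaming a "folder" in object storage means rewriting every key under it.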

My most important services are:

  • Plex (Media management/sharing)
  • Stash (Like Plex 🙃)
  • Nextcloud
  • Caddy with Adguard Home and Unbound DNS
  • Most of the Arr suite
  • Git, Wiki, File/Link sharing services

As you can see, a lot of downloading/streaming/torrenting of files across services. Smaller services run in a Docker VM on Proxmox.

Currently the setup is messy due to its organic evolution, but since I will be upgrading to brand new metal, I am looking for suggestions on the pillars.

So far, I am considering installing a Proxmox cluster with the 3 nodes and host VMs for the heavy stuff and a Docker VM.

How do you see the file storage portion? Should I take a full or partial plunge into S3-compatible object storage? What architecture/tech would be interesting to experiment with?

Or should I stick with tried-and-true, boring solutions like NFS Shares?
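For reference, the "boring" option is genuinely small: an NFS share is one line per export on the server and one line per mount on each client. A sketch, where the pool paths and subnet are made-up examples:

```
# /etc/exports on the storage node (paths and subnet are examples)
/tank/media    192.168.1.0/24(rw,sync,no_subtree_check)
/tank/backups  192.168.1.0/24(rw,sync,no_subtree_check,root_squash)

# /etc/fstab on a client (VM or Linux desktop)
storage:/tank/media  /mnt/media  nfs  defaults,_netdev  0  0
```

That last line also answers the "quick/easy mount on a Linux desktop" wish — the same export serves VMs and desktops alike.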

Thank you for your suggestions!

[–] [email protected] 3 points 5 days ago (1 children)

Is taking care of the environment back in vogue now that it might affect businesses' productivity?

[–] [email protected] 1 points 5 days ago* (last edited 5 days ago) (1 children)

~~Dead link.~~

~~Do we have an alternative source?~~

It works now, sorry.

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago)

Thanks "big bad China" for forcing the rest of the world to move forward with more open AI (let's see if it materialises)

[–] [email protected] 18 points 2 weeks ago* (last edited 5 days ago) (1 children)

This infinite list of letters could be "any random combination of letters, EXCEPT when that would spell the word 'banana'". A subset of an infinite set can still be infinite.

Infinite != all possibilities
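The point is easy to make concrete with a generator: here is an infinite stream of distinct letter strings that, by construction, never contains "banana" (a deliberately trivial construction — any filtered enumeration works the same way):

```python
from itertools import count, islice

def no_banana_stream():
    """Yield infinitely many distinct letter strings, none equal to "banana".

    Enumerates "a", "aa", "aaa", ...: an infinite set of strings of
    letters that provably never spells the word "banana".
    """
    for n in count(1):
        yield "a" * n

# Any finite sample is banana-free, yet the stream never ends.
sample = list(islice(no_banana_stream(), 1000))
assert "banana" not in sample
```

Infinite, yet missing a specific word — infinite really is not the same as exhaustive.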

[–] [email protected] 23 points 2 weeks ago

So far, they are training models extremely efficiently while the US gatekeeps their GPUs and does everything it can to slow their progress. Any innovation that makes models more efficient to operate and train is great for the accessibility of the technology and for reducing the environmental impact of this (so far) very wasteful tech.

[–] [email protected] 1 points 3 weeks ago

There was a Reddit post of a WWII picture of a blown up US artillery piece where the round detonated in the chamber and killed the crew.

I replied something like "Looks like they got a taste of their own medicine".

While it still to this day gives me a chuckle, the reception was rather cold.

 

All we have are scriptures and texts that could have been a series of memes that built on and improved each other, while the common knowledge that it was fictional was lost between generations.
