homelab

6838 readers

founded 4 years ago
1
 
 

I want to establish a second LAN at home. It's supposed to host different services on different infrastructure (VMs, k8s, Docker) and mostly serve as a lab.

I want to separate this from the default ISP router LAN (192.168.x.0/24).

I have a machine with two NICs (eno1, plugged into the ISP router, and eno2), both with corresponding bridges, running Proxmox. I already set up the eno2 bridge with a 10.x.x.x IP and installed an OPNsense VM that has eno1 as the WAN interface in the 192 network and eno2 as the LAN interface in the 10 network, with a DHCP server.

I connected a laptop (no wifi) to eno2, got a DHCP lease, and can reach the OPNsense interface, machines in the 192 network, and the internet; the same goes for a VM on the eno2 bridge, so that part is working. There's a Pi-hole in the 192 network that I successfully set as the DNS server in OPNsense.

Here's what I am trying to achieve and where I'm not sure about how to properly do it:

  • Block access from the 10 network to the 192 network except for specific devices - I guess that's simply firewall rules
  • Make services (by port) in the 10 network accessible to the internet. I currently have a reverse proxy VM in the 192 network which has ports 80 and 443 forwarded to it by the ISP router. Do I need to add a second NIC to that VM, or can I route some services through the firewall? I want to firewall that VM down so it can't open outgoing connections except to specific ports on specific hosts.
  • Make devices in the 10 network available to devices in the 192 network - here I'm not quite sure. Do I need a static route?
  • Eventually I want to move all non-end-user devices to the new LAN so I can experiment without harming the family network, but I want to make sure I understand it properly before doing that

I'd be glad for any hints on this; I'm a bit confused by the nomenclature here. If you have other ideas on how to approach this, I'm open to that too.
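For the third bullet, a static route on the ISP router is the usual answer, so that 192 devices know the 10 network lives behind the OPNsense WAN address. A sketch (all addresses hypothetical, to be adapted):

```
# On the ISP router (e.g. 192.168.1.1), add a static route:
#   destination 10.0.10.0/24  via 192.168.1.50   (the OPNsense WAN IP)
# On OPNsense:
#   - WAN firewall rule: pass traffic from 192.168.1.0/24 to 10.0.10.0/24
#   - Interfaces -> WAN: uncheck "Block private networks", otherwise
#     packets sourced from the 192 range are dropped on arrival
```

If the ISP router can't hold static routes, adding the same route per-host on the 192 machines that need access works as a fallback.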

2
EliteDesk 800 G6 SFF setup (lemmy.dbzer0.com)
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

Hi guys,

Just picked myself up an EliteDesk 800 G6 SFF:

Current specs:

  • CPU: i5-10500
  • RAM: 8 GB
  • NVMe SSD: 256 GB

My plan is to beef this up with:

  • RAM: Crucial Pro DDR4 RAM 64GB Kit (2x32GB) 3200MHz

  • HDD: 2 × 4TB IronWolf NAS drives

  • NVME SSD: Samsung 970 EVO Plus 1 TB PCIe NVMe M.2

How I'm planning my setup:

The existing 256 GB NVMe will host Proxmox.

The new 1 TB NVMe will be for VMs & LXCs.

The 4TB IronWolf NAS drives will be configured in a mirror and will be used as a NAS (best way to do this?) as well as for bulk data from my services, like recordings from Frigate.

Services:

  • Home Assistant (Currently running on a pi4)

  • Frigate (Currently running on a pi4)

  • Pi-hole (Maybe, already running on an OG Pi)

  • Nextcloud (Calendar, photos)

  • Tailscale

  • Vaultwarden

  • Windows 11

My follow on projects will be:

Set up PBS to back up my host (proxmox-backup-client), VMs & LXCs

I have a Raspberry Pi 4 that I was thinking of using for PBS in the short term, but I will eventually move it to something like an N100 mini PC.

I will also set up a second NAS (TrueNAS most likely, bare metal) to back up the 4TB IronWolf NAS.

This is my first proper homelab, having mostly tinkered with Raspberry Pis and Arduinos up to this point. Any advice on my setup would be really appreciated.

3
 
 

cross-posted from: https://slrpnk.net/post/17736356

Hi there good folks!

I am going to be upgrading my server within the next couple of months and am trying to do some prior planning. My current setup is as follows:

  • Case: Fractal Define R5
  • Motherboard: Gigabyte Z170X-Designare-CF
  • CPU: i7-6700K CPU @ 4.00GHz
  • Memory: 32 GiB DDR4
  • Storage: 15TB spread across 4 HDDs (10+2+2+1) + 1 HDD at 10TB for parity.
  • OS: Unraid 🧡

While this setup has served me well, I am completely hooked on these mini-racks (Rackmate T1) and am thinking of getting one eventually. Fortunately I'll be getting my hands on my first mini-PC soon, an ASUS ExpertCenter PN52. This little badboy has the following specs:

  • CPU: AMD Ryzen™ 9 5900HX
  • Memory: 32 GiB DDR4
  • Storage: Comes with one NVMe SSD 1TB

From my little CPU knowledge this one is superior in almost all ways, so it feels like an easy choice to switch out the old one. I need an enclosure for my 5 HDDs that connects to this mini-PC. This leads me to my questions:

  1. What are your suggestions for enclosures?
  2. What's the best way to connect an enclosure like this to the mini-PC?

Any pointers, opinions and suggestions appreciated!

edit: I'm getting the mini-PC for free actually, so it feels like a no-brainer to upgrade.

Pictures of the mini-pc for those interested:

Ports overview

Front

Easily configurable

4
 
 

I recently generated a self-signed cert to use with NGINX via its GUI.

  1. Generate cert and key
  2. Upload these via the GUI
  3. Apply to each Proxy Host

Now when I visit my internal sites (e.g., jellyfin.home) I get a warning (because this cert is not signed by a trusted CA), but the connection is HTTPS.

My question is: does this mean that my connection is fully encrypted from my client (e.g. my laptop) to my server hosting Jellyfin? I understand that when I go to jellyfin.home, my Pi-hole resolves this to NGINX, then NGINX completes the connection to the IP:port it has configured and uses the cert it has assigned to this proxy host, but the Jellyfin server itself does not have any certs installed on it.
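Step 1 above can be sketched with openssl (the hostname is the one from the post; key size, validity, and filenames are arbitrary choices):

```shell
# Generate a self-signed cert and key for an internal hostname.
# -nodes leaves the key unencrypted so NGINX can read it unattended.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout jellyfin.home.key -out jellyfin.home.crt \
  -subj "/CN=jellyfin.home" \
  -addext "subjectAltName=DNS:jellyfin.home"
# Inspect the result
openssl x509 -in jellyfin.home.crt -noout -subject
```

The SAN extension matters: modern browsers ignore the CN and only match the certificate against subjectAltName entries.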

5
 
 

I was looking to see what would happen on the 3rd floor with a ceiling-mounted AP on the 2nd floor. New to Unifi, I keep being surprised with delight how much useful tooling & info there is.

Here's for the U6+:

radiation patterns

6
 
 

I bought a Grandstream GWN7711P switch a while ago but I have found a rather annoying problem.

When the switch does not have an internet connection it is spamming "router.gwn.cloud" every 2-5 seconds and filling my firewall logs (360+ times in 35 min).

Does anyone know how to disable the cloud connection?

7
 
 

Hi everyone, basically what the title says. I am just starting my homelab and I am somewhat conflicted on whether I should run OPNsense in Proxmox or buy an N100 device dedicated to it. What are some of the pros and cons of doing either? So far in my research I have only come across articles/forum posts explaining how to run OPNsense in Proxmox.

8
 
 

I recently set up SearXNG to take the place of Whoogle (since Google broke it by disabling JS-free query results). I am following the same steps I've always done when adding a new default search engine.

Navigate to the address bar, right-click "Add SearXNG", then go into settings and make it my default. After doing this, rather than using the local IP the instance is running at, Firefox uses https://localhost/search for some reason. I don't see a way to edit this in the settings section of Firefox. Anyone else experienced this?

Update: After updating the .env file with my IP address and bringing Docker down and up, all is working as expected (able to use SearXNG via Caddy using the https:// address).

9
 
 

For years, I have been using Whoogle for my self-hosted searches. It's been great, but recently there were some upstream changes that seem to have broken it.

I'm guessing that SearXNG will soon follow (based on the assumption that they too are using the JS-free results Google used to provide).

Does anyone have any self-hosted search options that still work? I hear Kagi is good among the paid, non-self-hosted options, but just curious what you all are using.

10
 
 

I'm looking for options to replace my 2-bay DS214play after 10 years of service and I'm looking for recommendations on what direction to go. My main reason for retiring the NAS is that the OS will see no further updates from Synology, and not much will run on i386 architecture.

I run truenas + docker on a NUC-like HM90 mini-pc which is attached to the NAS for storage and this has been working well for the past ~2 years.

I figure that my options are to either continue using the mini-pc with a form of "dumb" network storage, or replace both systems with something that can handle both workloads.

I've considered building my own SFF PC instead of buying a new NAS (as this would have better upgrade paths), but I haven't been able to find anything with space for HDDs that will also fit in the 10" cabinet that both of the above systems currently share.

The new NAS lineup from UGREEN (DXP2800/DXP4800) seems like a reasonable option, but I'm wondering if there are other options I should consider instead, as these models will only barely fit on the cabinet shelf (250H × 210W × 250D mm).

11
 
 

cross-posted from: https://lemmy.world/post/24140532

Hi everyone!

I’m planning to repurpose an old computer case to house a few Raspberry Pis and could really use some advice.

First, I’m trying to figure out the best way to mount the Raspberry Pis inside the case. Are there any good DIY solutions, or are there premade mounts designed for this kind of project? I want them to be secure but accessible if I need to make changes.

Next, I’d like to power all the Pis centrally. Is there a way to do this efficiently without using separate power adapters for each Pi? I’ve heard of some solutions involving custom power supplies, but I’m not sure where to start.

I’m also thinking about cooling. Would the case’s old fans be sufficient, or should I add heatsinks or other cooling methods to each Pi? I want to make sure everything stays cool, especially if the Pis are running intensive tasks.

Finally, what’s the best way to handle I/O? I’ll need to route HDMI, USB, Ethernet, and other connections out of the case. Are there panel kits or other ways to organize the cables neatly?

I’d love to hear your suggestions or see examples if you’ve done something similar. Thanks in advance for your help!

12
 
 

So I currently have an Asus RT-AC86U that is working fine, but bogging down under load, and also is EOL.

We've got three people and about 15 devices, give or take. Our internet service is currently 300Mb cable.

The AX88U Pro is currently on a very good sale - $220CDN. I figure my options are that, the BE86U at $370, or the BE88U at $500.

Five hundred bucks is out of my justifiable price range. Spending less (a lot!) on the AX router would be nice, but the longevity (and support lifespan) of the BE86 has some appeal too.

I'm also not married to Asus, although they've been consistently excellent for me.

What do y'all think? Any educated guesses on when Asus is going to EOL the AX lineup?

13
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 
 

My Jellyfin VM has been failing its nightly backups for some time now (maybe a week or so).

I'm currently backing up to a NAS that has plenty of available space and my other 10 VMs are backing up without issues (though they are a bit smaller than this one).

I am backing up with the ZSTD compression option and the Snapshot mode.

The error is as follows:

INFO: include disk 'scsi0' 'Proxbox-Local:vm-110-disk-0' 128G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/Proxbox-NAS/dump/vzdump-qemu-110-2025_01_04-03_29_45.vma.zst'
INFO: started backup task '4be73187-d25c-49cf-aed2-1217fba27f77'
INFO: resuming VM again
INFO:   0% (866.4 MiB of 128.0 GiB) in 3s, read: 288.8 MiB/s, write: 268.0 MiB/s
INFO:   1% (1.5 GiB of 128.0 GiB) in 6s, read: 221.1 MiB/s, write: 216.0 MiB/s
INFO:   2% (2.6 GiB of 128.0 GiB) in 15s, read: 130.5 MiB/s, write: 126.4 MiB/s
INFO:   3% (3.9 GiB of 128.0 GiB) in 25s, read: 128.9 MiB/s, write: 127.5 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 110 failed - job failed with err -5 - Input/output error
INFO: Failed at 2025-01-04 03:30:17

Anyone experienced this or have any suggestions as to resolving it?

Update: After rebooting the Proxmox node (not just the VM) my backups are now working again. Thanks all for the input!

14
 
 

I am mainly hosting Jellyfin, Nextcloud, and Audiobookshelf. The files for these services are currently stored on a 2TB HDD and I don't want to lose them in case of a drive failure. I bought two 12TB HDDs because 2TB got tight and I thought I could add redundancy to my system to prevent data loss due to a drive failure. I thought I would go with a RAID 1 mirror (or another form of RAID?), but everyone on the internet says that RAID is not a backup. I am not sure if I need a backup; I just want to avoid losing my files when the disk fails.
How should I proceed? Should I use RAID 1, or rsync the files every, let's say, week? I don't want to have another machine, so I would hook up the rsync target drive to the same machine as the source drive. Rsyncing the files seems cumbersome (also when using a cron job).

15
 
 

Found these guides after having to reprogram my H310 Mini's EEPROM after bricking it with another guide. Can't speak for the other guides, but the PERC H310 MINI guide worked like a charm.

16
 
 

I have a couple rules in place to allow traffic in from specific IPs. Right after these rules I have rules to block everything else, as this firewall is an "allow by default" type.

The problem I'm facing is that when I change these two rules to match "Any" port instead, those machines (Matrix server and game server) are unable to perform apt-gets.

I had thought that this should still be allowed, because the egress rules for those two permit outbound traffic to http/s and once that's established it's a "stateful" connection which should allow the traffic to flow back the other way.

What am I doing wrong here, and what is the best way to ensure that traffic only hits these servers on the minimal number of ports?
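The post doesn't name the firewall, but the intended policy can be sketched in iptables-restore form (the interface-agnostic rules below, and the Matrix federation port, are assumptions to adapt):

```
# Sketch only -- assumes an iptables-style stateful firewall.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# Let replies to established connections flow both ways (the stateful part)
-A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Inbound: only the service ports (e.g. 8448 for Matrix federation)
-A INPUT -p tcp --dport 8448 -j ACCEPT
# Outbound: DNS plus http/s so apt can resolve and fetch packages
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
COMMIT
```

If apt still fails with rules shaped like these, DNS egress is a common culprit: opening http/s outbound isn't enough if the resolver lookups are being dropped.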

17
 
 

I'm currently running a Xeon E3-1231v3. It's getting long in the tooth, supports only 32GB RAM, and has only 16 PCIe lanes. I've been butting up against the platform limitations for a couple of years now, and I'm ready to upgrade. I've been running this system for ~10yrs now.

I'm hoping to future proof the next system to also last 8-10 years (where reasonable, considering advancements in tech and improvements in efficiency), but I'm hitting a wall finding CPU candidates.

In a perfect world, I'd like an Intel with iGPU for QuickSync (HWaccel for Frigate/Immich/Jellyfin), AND I would like the 40+ PCIe lanes that the Intel Xeon Scalable CPUs offer.

With only my minimum required PCIe devices I've surpassed the 20 lanes available on desktop CPUs with an iGPU:

  • Dual m.2 for Proxmox ZFS mirror (guest storage) - in addition to boot drive (8 lanes)
  • LSI HBA (8 lanes)
  • Dual SFP+ NIC (8 lanes)

Future proofing:

High priority

  • Dedicated GPU (16 lanes)

Low priority

  • Additional dual m.2 expansion (8 lanes)
  • USB expansions for simplified device passthrough (Coral TPU, Zigbee/Z-Wave for Home Assistant, etc.) (4 lanes per card) - this assumes the motherboard comes with at least 4 ports
  • Coral TPU PCIe (4 lanes?)

Is there anything that fulfills both requirements? Am I being unreasonable or overthinking it? Is there a solution that adds GPU hardware acceleration to the Xeon Silver line without significantly increasing power draw?

Thanks!
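Tallying the lane counts listed above shows why desktop platforms fall short:

```shell
# Lane counts from the lists above
min=$((8 + 8 + 8))            # dual m.2 mirror + LSI HBA + dual SFP+ NIC
high=$((min + 16))            # + dedicated GPU (high priority)
low=$((high + 8 + 4 + 4))     # + extra dual m.2, USB card, Coral TPU
echo "minimum: $min, with GPU: $high, everything: $low"
```

The 24-lane minimum already exceeds the ~20 usable lanes on iGPU desktop parts, and adding the GPU pushes the total to 40, the floor of the Xeon Scalable range the post mentions.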

18
 
 

I currently have an HP MicroServer Gen8 with Xpenology with hybrid RAID, which works fairly well, but I'm 2 major versions behind. I'm quite happy with it, but I'd like to have an easier upgrade process, and more options. My main use is NAS and a couple of apps. I'd like to have more flexibility, to easily have an arr suite, etc.

Considering the hassle of safely upgrading Xpenology because of the hybrid RAID (4+4+2+2 TB HDDs), I'd like a setup which I can easily upgrade and modify.

What are my options here? What RAID options are there that easily and efficiently use these disks?

I don't have the spare money right now to replace the 2TB disks. Planned for the future.

19
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 
 

Hyper-V has GPU paravirtualization.

But for QEMU/KVM/Xen, it seems like the best option is to pass through a GPU to a single VM, unless the GPU supports SR-IOV, which almost all retail cards don't.

I heard about the Wolf and Games on Whales projects, and they seem to get around this by using only containers for the subdivision.

What methods or options have you used to share a GPU with your VMs?

20
 
 

I'm building my own NAS. I've put together gaming PCs and simple workstations, but this will be my first foray into "the big leagues". At this point, I'm planning to use Unraid because it seems quite beginner-friendly. I'm not a Linux newbie, but I'm no sysadmin either. The thing that's making me question my choice is that I don't plan to take advantage of Unraid's killer feature: the ability to add any size disk to your array. I've already got as many disks as the case will hold (8 x 12TB). When the inevitable day comes that I need more storage, I'll probably just build a second machine.

I've also looked at TrueNAS Scale a bit, and it seems approachable, but perhaps more capable than I really need. I do plan to run a number of containerized apps, but don't expect I'll need to run any VMs very often. I'm also not sure how I feel about ZFS. I read so many conflicting opinions. So, I haven't decided on a file system yet either.

My primary use cases are: media server, storage server, and homelab playground. I want to self-host as many things as possible so I can stop depending so heavily on enshittifying cloud services. I know I can look a lot of this stuff up, and I have been reading whatever I can find, but much of what I've learned in recent months has been a direct result of reading this sub, so I'd love to tap into the knowledge found here.

21
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 
 

My testing setup, all on the same subnet, IPv4:

  • Windows Machine with Intel x520
    • Direct Connect 10Gbps cable
  • USW Aggregation switch (10Gbps)
    • Direct Connect 10Gbps cable
  • Synology NAS with Intel x520
    • SR-IOV connection
  • Debian VM

iperf3 Windows to Debian: 6.8 Gbits/sec

.\iperf3.exe -c 192.168.11.57  --get-server-output
Connecting to host 192.168.11.57, port 5201
[  5] local 192.168.11.132 port 56855 connected to 192.168.11.57 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.01   sec   817 MBytes  6.79 Gbits/sec
[  5]   1.01-2.00   sec   806 MBytes  6.82 Gbits/sec
[  5]   2.00-3.01   sec   822 MBytes  6.85 Gbits/sec
[  5]   3.01-4.00   sec   805 MBytes  6.81 Gbits/sec
[  5]   4.00-5.01   sec   818 MBytes  6.81 Gbits/sec
[  5]   5.01-6.00   sec   806 MBytes  6.82 Gbits/sec
[  5]   6.00-7.01   sec   821 MBytes  6.83 Gbits/sec
[  5]   7.01-8.00   sec   805 MBytes  6.80 Gbits/sec
[  5]   8.00-9.01   sec   820 MBytes  6.82 Gbits/sec
[  5]   9.01-10.00  sec   809 MBytes  6.84 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  7.94 GBytes  6.82 Gbits/sec                  sender
[  5]   0.00-10.00  sec  7.94 GBytes  6.82 Gbits/sec                  receiver

Server output:
-----------------------------------------------------------
Server listening on 5201 (test #9)
-----------------------------------------------------------
Accepted connection from 192.168.11.132, port 56854
[  5] local 192.168.11.57 port 5201 connected to 192.168.11.132 port 56855
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   806 MBytes  6.76 Gbits/sec
[  5]   1.00-2.00   sec   812 MBytes  6.82 Gbits/sec
[  5]   2.00-3.00   sec   816 MBytes  6.85 Gbits/sec
[  5]   3.00-4.00   sec   812 MBytes  6.81 Gbits/sec
[  5]   4.00-5.00   sec   812 MBytes  6.81 Gbits/sec
[  5]   5.00-6.00   sec   812 MBytes  6.81 Gbits/sec
[  5]   6.00-7.00   sec   815 MBytes  6.84 Gbits/sec
[  5]   7.00-8.00   sec   811 MBytes  6.80 Gbits/sec
[  5]   8.00-9.00   sec   814 MBytes  6.82 Gbits/sec
[  5]   9.00-10.00  sec   815 MBytes  6.84 Gbits/sec
[  5]  10.00-10.00  sec  1.12 MBytes  4.81 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  7.94 GBytes  6.82 Gbits/sec                  receiver


iperf Done.

iperf3 Debian to Windows: 9.5 Gbits/sec

.\iperf3.exe -c 192.168.11.57  --get-server-output -R
Connecting to host 192.168.11.57, port 5201
Reverse mode, remote host 192.168.11.57 is sending
[  5] local 192.168.11.132 port 56845 connected to 192.168.11.57 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.01   sec  1.11 GBytes  9.40 Gbits/sec
[  5]   1.01-2.00   sec  1.09 GBytes  9.47 Gbits/sec
[  5]   2.00-3.01   sec  1.11 GBytes  9.47 Gbits/sec
[  5]   3.01-4.00   sec  1.09 GBytes  9.47 Gbits/sec
[  5]   4.00-5.01   sec  1.11 GBytes  9.47 Gbits/sec
[  5]   5.01-6.00   sec  1.09 GBytes  9.47 Gbits/sec
[  5]   6.00-7.01   sec  1.11 GBytes  9.47 Gbits/sec
[  5]   7.01-8.00   sec  1.09 GBytes  9.47 Gbits/sec
[  5]   8.00-9.01   sec  1.11 GBytes  9.47 Gbits/sec
[  5]   9.01-10.00  sec  1.09 GBytes  9.47 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.0 GBytes  9.46 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  11.0 GBytes  9.46 Gbits/sec                  receiver

Server output:
-----------------------------------------------------------
Server listening on 5201 (test #7)
-----------------------------------------------------------
Accepted connection from 192.168.11.132, port 56844
[  5] local 192.168.11.57 port 5201 connected to 192.168.11.132 port 56845
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.10 GBytes  9.42 Gbits/sec    0   2.01 MBytes
[  5]   1.00-2.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   2.00-3.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   3.00-4.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   4.00-5.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   5.00-6.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   6.00-7.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   7.00-8.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   8.00-9.00   sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]   9.00-10.00  sec  1.10 GBytes  9.47 Gbits/sec    0   2.01 MBytes
[  5]  10.00-10.00  sec  2.50 MBytes  7.72 Gbits/sec    0   2.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.0 GBytes  9.46 Gbits/sec    0             sender


iperf Done.

I find that rather curious; is there something in the Windows 10 TCP settings that limits the outgoing throughput? Window size, maybe?

Debian MTU

ip link show ens3 | grep -i "mtu"
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000

::: spoiler Windows MTU

netsh interface ipv4 show subinterfaces

   MTU  MediaSenseState   Bytes In  Bytes Out  Interface
------  ---------------  ---------  ---------  -------------
  1500                1  273580016987  64376522487  Ethernet 4
:::

22
 
 

I recently had my Proxmox host fail, so I re-installed and recovered all my VMs from backups.

I'm noticing that my file structure (this is on my NAS where Proxmox mounts it via SMB/CIFS) has some duplicate folders in it. The ones I highlighted are all empty. Is this normal? Can these be removed safely?
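Assuming the highlighted folders really are empty, `find` can both confirm that and remove them; a sketch against a throwaway tree (substitute the real dump path on the NAS, e.g. the mounted backup directory):

```shell
# Build a demo tree: one folder with a backup file, two empty folders.
mkdir -p demo_dump/keep demo_dump/empty1 demo_dump/empty2
touch demo_dump/keep/vzdump-qemu-110.vma.zst
# List empty directories first -- verify before deleting anything.
find demo_dump -mindepth 1 -type d -empty
# Then remove only the empty ones; non-empty folders are untouched.
find demo_dump -mindepth 1 -type d -empty -delete
ls demo_dump
```

Because `-empty` only matches directories with no contents at all, this can't delete anything holding actual backup data.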

23
 
 

I've managed to get TrueNAS connected to Active Directory and created a share that I can access from an AD account on a Windows client just fine. However when I try to mount the share on Ubuntu Server 24.04 I keep getting permission/logon failure.

In my fstab entry I've tried every combo I can think of.

domain=domain,user=user,password=pass
domain=domain.local,user=user,password=pass
user=domain\user,password=pass
user=domain.local\user,password=pass

I've also tried a separate credentials file with every one of those combinations as well as versions 2.1 and 3.0. I've got no problem mounting shares from the Windows server without even specifying the domain.

At this point I'm pretty sure I'm missing a setting on TrueNAS but no idea what. Any ideas?
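For reference, one commonly working shape for an AD-joined CIFS share in fstab — the server name, share, `sec=` value, and uid/gid below are assumptions to adapt:

```
# /etc/fstab -- sketch; adjust server, share, version, and ownership
//truenas.domain.local/share  /mnt/share  cifs  credentials=/etc/cifs-creds,sec=ntlmssp,vers=3.0,uid=1000,gid=1000  0  0

# /etc/cifs-creds (chmod 600)
username=user
password=pass
domain=DOMAIN
```

Putting `domain=` in the credentials file (short NetBIOS name, not the FQDN) and forcing `sec=ntlmssp` sometimes succeeds where the inline `domain\user` forms fail, so it's worth ruling out before digging into the TrueNAS side.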

24
 
 

So I just added a TP-Link switch (TL-SG3428X) and access point (EAP670) to my network, using OPNsense for routing; I was previously using a TP-Link SX-3008F switch as an aggregate (which I no longer need). I'm still within the return window for the new switch and access point, and have to admit the sale prices were my main reason for going with these items. I understand there have been recent articles mentioning TP-Link and security risks, so I'm wondering if I should consider returning these and upping my budget to go for Ubiquiti. The AP would only be like $30 more for an equivalent, so that's negligible, but a switch that meets my needs is about 1.6x more, and still only has 2 SFP+ ports, while I need 3 at absolute minimum.

I’m generally happy with the performance, however there is a really annoying bug where if I reboot a device, the switch drops down to 1G speed instead of 10G, and I have to tinker with the settings or reboot the switch to get 10G working again. This is true for the OPNSense uplink, my NAS and workstation. Same thing happened with the 3008F, and support threads on the forums have not been helpful.

In any case, would switching to Ubiquiti be worth it, in your opinion?

25
 
 

I'm running a Docker-based homelab that I manage primarily via Portainer, and I'm struggling with how to handle container updates. At first, I had all containers pulling latest, but I thought maybe this was a bad idea as I could end up updating a container without intending to. So, I circled back and pinned every container image in my docker-compose files.

Then I started looking into how to handle updates. I've heard of Watchtower, but I noticed the Linuxserver.io images all recommend not running Watchtower and instead using Diun. In looking into it, I learned it will notify you of updates based on the tag you're tracking for the container, meaning it will never do anything for my containers pinned to a specific version. This made me think maybe I've taken the wrong approach.

What is the best practice here? I want to generally keep things up to date, but I don't want to accidentally break things. My biggest fear about tracking latest is that I make some other change in a docker-compose file and update the stack, which pulls latest for all the containers in that stack and breaks some of them with unintended updates. Is this a valid concern, and if so, how can I overcome it?
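One common middle ground is pinning each image to a version tag and bumping it deliberately; a hypothetical compose fragment (the service and tag are illustrative, not a recommendation for a specific version):

```
# docker-compose.yml fragment -- image tag is illustrative
services:
  jellyfin:
    image: jellyfin/jellyfin:10.10.3   # pinned; edit this line on purpose
    restart: unless-stopped
```

With pins in place, `docker compose pull && docker compose up -d` only changes a container whose tag you've edited, so touching one service's compose entry can't drag the rest of the stack forward. Diun then remains useful if pointed at a tag regex (e.g. watching the repository's version tags) rather than the pinned tag itself.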
