root

joined 2 years ago
[–] [email protected] 11 points 8 hours ago

Meshtastic can be encrypted and is LoRa based. It can easily hit nodes dozens of miles away with a good line of sight, and it relays messages across nodes to reach even greater distances.
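
If you're on the Python CLI, enabling encryption on the primary channel is a one-liner along these lines (a sketch; assumes the meshtastic CLI is installed and the node is connected over USB):

# Generate a fresh random pre-shared key for channel 0 (the primary channel)
meshtastic --ch-index 0 --ch-set psk random

Every node on the mesh then needs the same channel settings, which is easiest to share via the channel QR code/URL.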

 

I'm currently evaluating switching from Proton to Tuta and the experience has been pretty great so far. There are just a couple pain points/questions I have before taking the plunge.

Proton released a calendar widget last week that has been very useful for me. Is there any chance of the Tuta calendar doing the same, or is this not possible to do securely?

I also use SimpleLogin for aliasing and have hundreds of aliases. I know it's possible to do the same in Tuta with custom domains, but are you also able to "pause" those aliases, or is it just create and delete? I'm curious what the management interface would be like.

[–] [email protected] 2 points 1 day ago (1 children)

I am also trying Tuta for the same reasons and have no complaints so far, other than that Proton just released a calendar widget for iOS a week ago and I've been really enjoying it. Hope Tuta adds one in the future.

[–] [email protected] 2 points 4 days ago (1 children)

The file in question can be found here

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago) (1 children)

Ah, I guess I might need to add my root CA to my phone, laptop, and PC, huh? That would get rid of the untrusted warnings. Yes, please feel free to share if you have documentation!

Update: I set up my own local CA and got it working. Thanks for the tip!
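
For anyone following along, the flow boils down to something like this with openssl (a sketch; the file names, lifetimes, and the search.home example are placeholders):

# Create the root CA key and certificate (the .crt is what gets imported on each device)
openssl genrsa -out homelab-rootCA.key 4096
openssl req -x509 -new -key homelab-rootCA.key -sha256 -days 3650 \
  -subj "/CN=Homelab Root CA" -out homelab-rootCA.crt

# Issue a cert for a service, signed by that CA (modern browsers require the SAN)
openssl genrsa -out search.home.key 2048
openssl req -new -key search.home.key -subj "/CN=search.home" -out search.home.csr
openssl x509 -req -in search.home.csr -CA homelab-rootCA.crt -CAkey homelab-rootCA.key \
  -CAcreateserial -days 825 -sha256 -out search.home.crt \
  -extfile <(printf "subjectAltName=DNS:search.home")

The cert/key pair gets uploaded to NGINX, and homelab-rootCA.crt is what each phone/laptop/PC imports as a trusted root.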

[–] [email protected] 3 points 5 days ago (3 children)

You're a legend. Changing SEARXNG_HOSTNAME in my .env file solved it.
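
For anyone else who hits this: the URLs in opensearch.xml come from that value, so the line in the hidden .env next to docker-compose.yaml should be the proxy hostname rather than an IP, e.g.:

SEARXNG_HOSTNAME=search.home

Recreate the containers afterwards (docker compose down && docker compose up -d) so it takes effect.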

[–] [email protected] 1 points 5 days ago (1 children)

Gotcha, that matches my assumptions. Yes, everything is internal. It's accessible remotely via WireGuard, but I mostly wanted to get some practice with NGINX/TLS certs (it's also way easier to refer to things around the house with <service>.homelab instead of IP:port, haha).

So if I did want this to be fully encrypted, I would essentially need to configure each service (Jellyfin, Home Assistant, etc.) to serve SSL itself, using the self-signed cert/key I used on NGINX (or perhaps a new cert/key), and then I would be all set?
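
To make the question concrete, this is roughly what I picture the proxy config boiling down to once the backend speaks TLS too (a sketch of the underlying NGINX config the GUI manages; the IP, port, and cert paths are placeholders):

server {
    listen 443 ssl;
    server_name jellyfin.home;

    # Cert/key presented to clients on the LAN
    ssl_certificate     /etc/nginx/certs/jellyfin.home.crt;
    ssl_certificate_key /etc/nginx/certs/jellyfin.home.key;

    location / {
        # The backend now serves TLS as well, so the proxy-to-server hop is encrypted
        proxy_pass https://192.168.8.50:8920;

        # Only meaningful if NGINX actually verifies the backend's cert
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/nginx/certs/homelab-rootCA.crt;
    }
}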

[–] [email protected] 3 points 5 days ago (5 children)

Thanks! The output of the XML is as follows:

<OpenSearchDescription>
  <ShortName>SearXNG</ShortName>
  <LongName>SearXNG metasearch</LongName>
  <Description>
    SearXNG is a metasearch engine that respects your privacy.
  </Description>
  <InputEncoding>UTF-8</InputEncoding>
  <Image type="image/png">
    https://192.168.2.20:8080/static/themes/simple/img/favicon.png?60321eeb6e2f478f0e5704529308c594d5924246
  </Image>
  <Url rel="results" type="text/html" method="GET" template="https://192.168.2.20:8080/search?q={searchTerms}"/>
  <Url rel="suggestions" type="application/x-suggestions+json" method="GET" template="https://192.168.2.20:8080/autocompleter?q={searchTerms}"/>
  <Url rel="self" type="application/opensearchdescription+xml" method="GET" template="https://192.168.2.20:8080/opensearch.xml"/>
  <Query role="example" searchTerms="SearXNG"/>
  <moz:SearchForm>https://192.168.2.20:8080/search</moz:SearchForm>
</OpenSearchDescription>

It looks like it's set to use https://192.168.2.20:8080/ for some reason. https://search.home/ works fine, but using HTTPS against the underlying IP will not.

[–] [email protected] 1 points 5 days ago (3 children)

I haven't. I created this custom cert and uploaded it in NGINX (NGINX itself isn't using SSL), then applied it to each proxy host. When I visit one of them it appears to be HTTPS, but I suspect it isn't actually giving me the protections I imagine.

[–] [email protected] 1 points 5 days ago

They're two different VMs, on different VLANs, running on the same Proxmox host.

 

I have a SearXNG instance running locally, and I have a proxy entry for it (search.home). When I go to https://search.home/ in Firefox, it works as expected and brings me to SearXNG. However, if I try adding this as my default search engine, it resolves to the IP instead of the hostname, which fails because the IP has no cert on it and Firefox still tries to hit it over HTTPS (which works with the hostname).

This works in Firefox mobile and every other browser I've tried on desktop, just not desktop Firefox for some reason. I've tried various about:config changes, but so far no luck. Does anyone have a workaround for this? It would be nice if Firefox showed you what it has actually saved for the URL/hostname/IP of a search engine in the Search section of Settings, but sadly it just lists the name and shortcut.

 

I recently generated a self-signed cert to use with NGINX via its GUI.

  1. Generate a cert and key (see the openssl sketch below)
  2. Upload these via the GUI
  3. Apply them to each Proxy Host
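
For reference, step 1 was a single openssl command along these lines (a sketch; the names, lifetime, and wildcard domain are placeholders):

openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
  -subj "/CN=*.home" -addext "subjectAltName=DNS:*.home" \
  -keyout homelab.key -out homelab.crt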

Now when I visit my internal sites (e.g., jellyfin.home) I get a warning (because this cert is not signed by a trusted CA), but the connection is HTTPS.

My question is: does this mean my connection is fully encrypted from my client (e.g., my laptop) to the server hosting Jellyfin? I understand that when I go to jellyfin.home, my Pi-hole resolves it to NGINX, then NGINX completes the connection to the IP:port it has configured, using the cert assigned to that proxy host, but the Jellyfin server itself does not have any certs installed on it.

[–] [email protected] 1 points 1 week ago

He's a saint. I saw the commit last night and was waiting for an update. I have SearXNG working now but also left up my Whoogle VM. I'll try the update and keep using that until the lights go out :')

[–] [email protected] 1 points 1 week ago

Just a heads up that I found another way to get this working. Have a good weekend!

[–] [email protected] 2 points 1 week ago

Bingo! I missed a spot in the hidden .env file. After that, I'm able to hit it and Caddy is able to generate the cert for me (I'm using Docker).

Thanks again!

 

I recently set up SearXNG to take the place of Whoogle (since Google broke it by disabling JS-free query results). I am following the same steps I've always used to add a new default search engine.

I navigate to the address bar, right-click and choose "Add SearXNG", then go into settings and make it my default. After doing this, rather than using the local IP the instance is running at, Firefox uses https://localhost/search for some reason. I don't see a way to edit this in Firefox's settings. Has anyone else experienced this?

Update: After updating the .env file with my IP address and bringing Docker down/up, all is working as expected (I'm able to use SearXNG via Caddy using the https:// address).
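
Concretely, the fix looked like this (a sketch; the IP is my instance's, and this assumes the docker-compose based setup):

# .env (hidden file next to docker-compose.yaml)
SEARXNG_HOSTNAME=192.168.2.20

# recreate the containers so the change is picked up
docker compose down
docker compose up -d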

8
submitted 2 weeks ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 

Let me start by saying that I am not a runner. I hope to be one day, but for now I'm just running < 1 mile after work.

After a few days of this, my knees (the tendon that runs from the kneecap down to the shin) are pretty sore. I'm wondering: should I power through this or do something differently?

A friend suggested these as he's had good luck with them, but I'm not sure if this is something the community condones or endorses.

Update: Thank you all for the suggestions! The consensus seems to be to take it easy as I begin, and run every other day (and continue to walk every day).

 

For years, I have been using Whoogle for my self-hosted searches. It's been great, but recently there were some upstream changes that seem to have broken it.

I'm guessing that SearXNG will soon follow (based on the assumption that they, too, rely on the JS-free results Google used to provide).

Does anyone have any self-hosted search options that still work? I hear Kagi is good as a paid/non-self-hosted option, but I'm curious what you all are using.

8
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 

My Jellyfin VM has been failing its nightly backups for some time now (maybe a week or so).

I'm currently backing up to a NAS that has plenty of available space and my other 10 VMs are backing up without issues (though they are a bit smaller than this one).

I am backing up with the ZSTD compression option and the Snapshot mode.
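
For reference, the job is roughly equivalent to this vzdump invocation (VM ID and storage name taken from the log below):

vzdump 110 --mode snapshot --compress zstd --storage Proxbox-NAS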

The error is as follows:

INFO: include disk 'scsi0' 'Proxbox-Local:vm-110-disk-0' 128G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/Proxbox-NAS/dump/vzdump-qemu-110-2025_01_04-03_29_45.vma.zst'
INFO: started backup task '4be73187-d25c-49cf-aed2-1217fba27f77'
INFO: resuming VM again
INFO:   0% (866.4 MiB of 128.0 GiB) in 3s, read: 288.8 MiB/s, write: 268.0 MiB/s
INFO:   1% (1.5 GiB of 128.0 GiB) in 6s, read: 221.1 MiB/s, write: 216.0 MiB/s
INFO:   2% (2.6 GiB of 128.0 GiB) in 15s, read: 130.5 MiB/s, write: 126.4 MiB/s
INFO:   3% (3.9 GiB of 128.0 GiB) in 25s, read: 128.9 MiB/s, write: 127.5 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 110 failed - job failed with err -5 - Input/output error
INFO: Failed at 2025-01-04 03:30:17

Anyone experienced this or have any suggestions as to resolving it?

Update: After rebooting the Proxmox node (not just the VM) my backups are now working again. Thanks all for the input!

 

I recently got into Ubiquiti and am trying to limit intra-VLAN communication.

I have a Proxmox server hosting a couple VMs that are on the same VLAN (192.168.8.0/24).

These two devices can ping each other, even after I follow the guide here. I've tried adding that VLAN to the Device Isolation (ACL) section in Settings > Network, which I believe should block everything within that VLAN, and I've also tried adding explicit ACL rules to block client A -> B and B -> A, with no luck.

I feel like I must be missing something simple. Has anyone done this successfully?

 

There's a pretty popular savings chart in the personal finance community, and I just noticed it doesn't seem to cover the case where your employer offers an ESPP (Employee Stock Purchase Plan), unless I'm completely overlooking it.

Where would you guys put it if you could add it to this chart?

 

I recently swapped out my old TP-Link switch for a Unifi switch. I'm setting up the VLAN configs as I had them on my previous switch, but wanted to be sure I am understanding this correctly.

For some devices such as my APs, I am trunking the ports they connect to, tagging the VLANs that will need to be present for the corresponding WiFi networks these APs provide.

For other devices that are plugged directly into the switch and which should only have access to a single VLAN, I am setting that VLAN as the default network, and blocking all other VLANs.

Is this the correct approach?

 

I just got an Apple Watch S10. Before this I was using a Garmin with the Apple Health app to get some insights into sleep, calories burned per day (the outer ring), etc.

Compared to the Garmin, my Apple Watch is showing a lot of awake events, even though neither my SO nor I notice me waking up. Is the Apple Watch just way more sensitive? Is it catching every movement in the night and counting that as me being awake?

 

I have a couple rules in place to allow traffic in from specific IPs. Right after these rules I have rules to block everything else, as this firewall is an "allow by default" type.

The problem I'm facing is that when I change those two port matches to "Any" instead, those machines (a Matrix server and a game server) are unable to perform apt-gets.

I had thought this should still be allowed, because the egress rules for those two machines permit outbound traffic to HTTP/S, and once that connection is established it's "stateful", which should allow the return traffic to flow back the other way.

What am I doing wrong here, and what is the best way to ensure that traffic only reaches these servers on the minimal set of ports?
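
For comparison, this is the stateful behavior I assumed was in play, expressed in iptables terms (illustrative only; my firewall isn't iptables, and the service port is just an example):

# Return traffic for connections the server itself initiated (e.g. apt-get)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# New inbound connections only on the intended service port
iptables -A INPUT -p tcp --dport 8448 -m conntrack --ctstate NEW -j ACCEPT

# Everything else inbound gets dropped
iptables -A INPUT -j DROP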
