megaman

joined 1 year ago
[–] [email protected] 2 points 1 week ago

I played Balatro for the first time a few weeks ago and thought "I could make a killing if I made the chess version of this". But I don't know how to make games - much less computer games - so no money for me. Just a game to play instead.

[–] [email protected] 6 points 1 week ago

My 3 seconds of looking at that made it more interesting than I thought it would be. I really hoped it was just a "branded" chess board, which would have been very funny.

But it looks like everyone sets up 3 pre-moves and then goes from there (if I'd read it for a whole minute I would know).

Feels like someone thought up a chess puzzle game, and then when the biggest show on TV was about chess, they finally got a chance to put it out there.

[–] [email protected] 6 points 1 week ago (8 children)

We've got Chess 2: The Sequel and 4D chess, which have solved any problems chess may have.

[–] [email protected] 16 points 1 week ago

There certainly are bigger issues in the world right now, sure, but it isn't about "rights for software"; it is about the ability of people to talk about what they want (in this case, software).

[–] [email protected] 3 points 2 weeks ago (1 children)

I appreciate the Tor-like layered routing in Tribler. Getting the headless UI set up is annoying, though.

[–] [email protected] 6 points 2 weeks ago (3 children)

I like to imagine that the building is on a truck surrounded by other buildings on trucks, all moving a couple miles down the road, but the gag there is Homer jumping over to Moe's truck.

[–] [email protected] 2 points 3 weeks ago (1 children)

Ooo, interesting.

I am going for public access here, so it won't work. But I think this is how some routers are set up. Like, I think asusrouter.net is set to 192.168.0.1, so anyone with the router can go to the same URL/domain and it'll send them each to their own router. Found that out the other week and thought it very clever.
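(You could actually see that trick from the command line - assuming the record really is public, a plain lookup would just answer with a private address. Hypothetical, I haven't verified asusrouter.net specifically:)

```
# Ask public DNS for the record; a vendor doing this trick just publishes
# a private (RFC 1918) address, so every customer "resolves" to their own router.
dig +short asusrouter.net
# expected output if the trick is in place:
# 192.168.0.1
```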

[–] [email protected] 2 points 3 weeks ago (1 children)

So I had done this (with AdGuard rather than Pi-hole) and I think I was getting caching issues. Whether or not I was, though, I removed it, and it looks like my router is handling it all just fine without the rewrite on the local DNS server.

Some folks mentioned "hairpin NAT" - I was reading the wiki on NAT last night but didn't get to hairpinning - and that appears to be what is happening.

The conclusion is: my setup had been doing what I want the whole time, without any DNS fiddling. I updated the original post with the speedtests.

[–] [email protected] 2 points 3 weeks ago (1 children)

I guess I should say that I think there were caching issues, but the problem was coming from an iPhone and the Bitwarden app (connecting to the self-hosted Vaultwarden).

[–] [email protected] 2 points 3 weeks ago (4 children)

I think this is what I was doing with AdGuard and its rewrite rules, but then the client (my phone, for example) would cache the IP address, and it would fail when I was out of the house/network.

Or am I misunderstanding what you are saying here?

[–] [email protected] 3 points 3 weeks ago (1 children)

OK, well that's easy to set up if that is how it just works! I wonder if maybe I should (at least temporarily) self-host some sort of speedtest app on the server and check the speed from my phone while on wifi using the IP, on wifi using the domain name, and off wifi using the domain...
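Something like the linuxserver.io LibreSpeed image would probably do the trick (image name and port are from memory, so double-check their docs):

```
# Run a LibreSpeed instance on the server, exposed on host port 8080;
# then hit http://<server>:8080 from the phone on wifi (by IP and by domain)
# and again on cellular to compare.
docker run -d --name=librespeed -p 8080:80 lscr.io/linuxserver/librespeed
```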

 

Let's say I've got Nextcloud selfhosted in my basement and that it is accessible on the world wide web at nextcloud.kickassdomain.org. When someone puts in that URL, we'll have all the fun DNS-lookups trying to find the IP address to get them to my router, and my router forwards ports 80 and 443 to a machine running a reverse-proxy, and the reverse-proxy then sends it to a machine-and-port that Nextcloud is listening to.
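(For reference, that last hop is a plain reverse-proxy rule - here's a sketch in nginx terms, if that's the proxy in use; the internal IP/port are made up and the TLS directives are omitted:)

```
server {
    listen 443 ssl;
    server_name nextcloud.kickassdomain.org;
    # ssl_certificate directives omitted for brevity

    location / {
        # forward to the machine-and-port Nextcloud is listening on
        proxy_pass http://192.168.0.99:11000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```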

When I do this on my phone next to that computer hosting Nextcloud, (I believe) what happens is that the data leaves and re-enters my home network, as my router sends the data to the IP address it is looking for (which is itself). This would mean that instead of getting a couple hundred Mbps from the local wifi (or being etherneted in and getting even more), I'm limited by my ISP's upload speed of ~25 Mbps.

Maybe that just isn't the case and I've got nothing to worry about...

What I want my network to do is to know that nothing has to leave the network at all and just use the local speeds. What I tried before was using a DNS rewrite in AdGuard such that anything going to my kickassdomain would instead go to the local IP address (so, nextcloud.kickassdomain.org -> 192.168.0.99). This seemed to cause a lot of problems when I then left the house because, I assume, the DNS info was cached and my phone would be out in the world trying to connect to that private IP and failing.
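(For concreteness, that rewrite was the equivalent of this dnsmasq-style override - AdGuard has a UI for it, but the effect is the same:)

```
# Answer queries for the public name with the LAN address instead of the WAN one.
address=/nextcloud.kickassdomain.org/192.168.0.99
```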

My final goal here is that I want to upload/download from my self-hosted applications (like Nextcloud) without being limited by the relatively slow upload speed of the ISP.

Maybe the computer already figured all this out, though - it does seem like my router should know its own IP and not bother sending things out into the world just for them to come back.

If it matters, my IP address is pretty stable, but more importantly it is unique to me (like every house in the neighborhood has their own IP).

Updates from testing: So everything does indeed just work without me needing to change how I already had it set up, presumably because the router did the hairpin NAT action folks are talking about here.

I tested it by installing iperf3 on the server, then I used my phone (with the PingTools Network Utilities Android app, found only on Google Play and not on F-Droid) to connect. Here are the results:

  1. Phone to local IP address (192.168.0.xxx) - ~700 Mbits/second
  2. Phone to speedtest.mykickassdomain.org while still on the wifi - ~700 Mbits/second
  3. Phone on cellular to speedtest.mykickassdomain.org - ~4 Mbits/second
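For anyone wanting to reproduce this, the server side is just iperf3 in listen mode (the phone app acts as the client; from a laptop you'd use the iperf3 client directly):

```
# On the server: listen for iperf3 clients (default port 5201).
iperf3 -s

# From a client, test by IP and by domain name:
iperf3 -c 192.168.0.xxx
iperf3 -c speedtest.mykickassdomain.org
```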
[–] [email protected] 7 points 4 weeks ago

The server is running the password manager for myself and family, and that needs to stay on while I'm gone (there are ways of handling local copies that sync later, but when I've accidentally had to troubleshoot that, it sucks).

Then I've got Nextcloud; while I don't normally need things on there, I do often enough that it is nice to have.

78
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Edit: at risk of preemptively saying "solved" - disabling the QoS on the router bumped the desktop browser speedtest from the ~600 up to >950Mbps.

My internet plan with my ISP is for 1000 Mbps. This is far more than I need almost always, but it is what they say I am paying for. However, I can't get any speed tests to read more than ~650 Mbps, which is about what my old package was.

My router itself has a speedtest function, and ~650 Mbps is what I'm getting off of that. As I'm writing this post, I did a speedtest on my wired-in desktop and got ~590 Mbps on speedtest.net.

One thought I had was that maybe the ethernet cables themselves are the limit. All of them say 'cat5e' (actually, just checked, and the modem-to-router one is cat6), which should handle 1000 Mbps, yea? I swapped out the cable from the modem to the router once and got the same speed with the new ethernet cable.

Maybe the router is just too weak? Well, I used iperf3 between two desktops that are both hardwired in and got ~940 "Mbits/sec". Unless I'm messing up the unit conversion (I am certainly annoyed by the difference between "megabytes per second" and "megabits per second"), that is about the 1000 Mbps I'd expect to max out the ethernet cables. So, since those two machines are going through the router, it doesn't seem that the router is the bottleneck for my speed to the great outdoors.
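(Writing out the conversion for my own sanity - 8 bits per byte, and iperf3 can report in either unit:)

```
# megabits/s -> megabytes/s: divide by 8
echo "940 / 8" | bc -l    # ~117.5 MB/s

# or have iperf3 report MBytes/sec directly with the format flag
iperf3 -c 192.168.0.xxx -f M
```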

The modem? The modem's spec sheet says it can do 2.5 Gbps (well, actually, I assume there is a funny typo - it says "10/100/1000/2500 Gbps RJ-45 port", but I don't think it is doing 2.5 terabits per second). The little LED on the modem is lit up the color for "an ethernet device is connected at 2500 Mbps".

So, should I start hassling my ISP about my missing 350 Mbps? Is there some other obvious thing I should test before I hassle them? I certainly don't want them to say "have you turned it off and on again?" (Once I wrote that, I did go and unplug the modem and router, stand around for 30 seconds, and then plug in the modem and then the router. After I did that, I got one speedtest from the router at 820 Mbps, and then the next two tests were back to ~550.)

Edit: I do not have fiber, I have a coax cable coming into the house. The person trying to sell me fiber said "your current internet is shared with the neighbors".

 

Yet another question about self-hosting email, but I haven't found the answer, at least not phrased in a way that matches my question.

I've got ~15 GB of old Gmail data that I've already downloaded, and Google is on my ass about "91% full", and we know I'm not about to pay them for storage (I'll sooner spend 100 hours trying to solve it myself before I pay them $3/month).

What I want is to have the same (or relatively close to the same) access and experience for finding stuff in those old emails when they are stored on my hardware as I do when they are in my Gmail. That is, I want a website and/or app where I can search for emails from so-and-so, in some date range, with keywords. I don't actually want to send any emails from this server or receive anything to it (maybe I would want Gmail to forward to it or something, but probably I'd just do another archive batch every year).

What I've tried so far, which is sort of working: I've set up docker-mailserver on my box, and that is working and accessible. I can connect to it via Thunderbird or K-9 Mail. I also converted the big email download from Google, which was a .mbox, into maildir using mb2md (apt install mb2md on Debian was nice). This gave me a directory with ~120k individual email files.
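For anyone else doing this, the conversion step was roughly as follows (paths are made up - check the mb2md man page for the exact flags):

```
# Convert the Google Takeout .mbox into a maildir tree of individual files.
sudo apt install mb2md
mb2md -s /path/to/takeout/All-mail.mbox -d /path/to/Maildir
```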

When I check this out in Thunderbird, I see all those emails (and they look like they have the right info). (As an aside - I actually only moved 1k emails into the directory that docker-mailserver has access to, just for testing, and Thunderbird then sees only that 1k.) I can do some searching on those.

When I open it in K-9, it looks like it just pulls in 100 of them by default. I can pull in more or refresh, that sort of thing. I don't normally use K-9, so I may just be missing how the functionality there is supposed to work.

I also just tried connecting to the mail server with Nextcloud Mail, which works in the sense that it connects, but it (1) seems like it is struggling, and (2) is putting 'today' as the date for all the emails rather than when they actually came through. I don't really want to use Nextcloud Mail here...

So, I think my question here is now really around search and storage. In Thunderbird, I think the way it works (I don't normally use Thunderbird much either) is that it downloads all the files locally and then searches them locally. In K-9 that appears to be the same, with the caveat that it doesn't look like it really wants to download 120k emails locally (even if I can).

What I think I want to do, though, is have the search running on the server. I don't want to download 15 GB (and another 9 from Gmail soon enough) to each client. I want it all on the server, so I just put in my search and the server does the query and gives me a response.
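(As I understand it, IMAP can do exactly this - the SEARCH command runs on the server and only the matching message IDs come back over the wire - so part of my question is really whether the clients use it. A raw session sketch, with the hostname and login obviously made up:)

```
# Talk IMAP to the server directly; SEARCH executes server-side.
openssl s_client -connect mail.kickassdomain.org:993 -quiet
# then type:
#   a1 LOGIN me@kickassdomain.org mypassword
#   a2 SELECT INBOX
#   a3 SEARCH FROM "so-and-so" SINCE 01-Jan-2015 BODY "keyword"
#   a4 LOGOUT
```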

docker-mailserver has a page for setting up Full-Text Search with Xapian, where it'll make all the indices and all that. I tinkered with this and think I got it set up. This is another sort of thing where I would want the search to be utilizing the server rather than the client, since the server is (hopefully) optimized for some of this stuff.
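For reference, what I ended up with looks roughly like this (a sketch from memory - the plugin settings are what I believe docker-mailserver's FTS with Xapian page uses, and the container name and override location are assumptions, so double-check their docs):

```
# Dovecot override enabling the Xapian full-text search plugin
# (drop it wherever your setup loads dovecot config overrides from):
mail_plugins = $mail_plugins fts fts_xapian

plugin {
  fts = xapian
  fts_xapian = partial=2 full=20
  fts_autoindex = yes
}
```

and then building the indices for all users inside the container:

```
docker exec mailserver doveadm index -A '*'
```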

Should I be using a different server for what I want here? I've poked around at different ones and am more than open to changing to something else that better fits what I need.

For clients, should I be using Roundcube or something else? Will that actually help with this 'use the server to search' question? For mobile, is there any way to avoid downloading all the emails to the client?

Thanks for the help.

 

I installed Pop!_OS as my daily driver some months ago (completely got rid of Windows) and have thought it pretty good. But something about it seemed off - it would take programs just too long to open; it wasn't snappy... Once I got into something, it seemed to run fine (playing Dota or something else was fine after initial quirks).

Well, today, figured it out...

When I did the first install, I was very nervous about deleting all of my existing data on my disks and so tried to manually partition everything so that I could get it right (I think I was also planning to dual-boot).

Fast forward to today, and I'm testing speeds on all the drives to see which one to pitch for a new one I acquired. I see the 3 HDDs, but where is the SSD... Oh god, I installed the boot partition and root and home all onto one of the ~12-year-old HDDs, and the SSD has been sitting idle.
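(For anyone checking their own install, a quick way to see where root actually landed - ROTA=1 means a spinning disk:)

```
# List block devices with rotational flag, size, and mountpoints.
lsblk -o NAME,ROTA,SIZE,MOUNTPOINT

# Crude sequential-read speed check for a given drive:
sudo hdparm -t /dev/sda
```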

Anyway, just about done with the fresh new install onto the SSD; hopefully it isn't too hard to port over the home directory from that HDD...

1501
Real examples here? (discuss.tchncs.de)
 

A friend who is not a software person sent me this tweet, which amused me as it did them. They asked if "runk" was real, which I assume it is not.

But what are some good examples of real ones like this? xz became famous for the hack, of course, so I then read a bit about how important this compression tool is/was.

 

An Android messaging app that sends everything as an image where the text is in a blue bubble. All images, baby.

 

So, I know very little and have a poor understanding of the software licenses, hence why I'm asking.

I have a 'smart' thermostat that came with the new HVAC system. It is the AprilAire 8920W. It has a touchscreen, connects to wifi, does lots of 'computer' things. I cannot imagine that this furnace company built their own OS and kernel and everything else from scratch; it seems most likely it is running Linux, yea? And, with that, it includes libraries and other tools that are under some version of the GPL, yea?

I went down the router rabbit-hole some weeks ago and found the firmware for routers available on the Linksys website; the Linksys site has this 'GPL Code Center'. I'm finding nothing of the sort from AprilAire, though...

So, if we assume that my 'smart' thermostat is running Linux (and, say, BusyBox, a common GPL-licensed tool on small systems like routers), they are obligated to provide the code for at least those pieces of software, right? They need to give me a CD or have a page on their website (and include the link in the manual) and all that?

Do they need to give me access to the entire firmware as well? The router folks do, but you also sometimes need to re-install the firmware manually, so that may not be a license issue.

However, how would we know if they are violating a license if we don't know what is running on it?
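(If a firmware image ever turns up - from the update mechanism, say, or dumped off the flash - the usual first pass is something like this, with the filename obviously hypothetical:)

```
# Scan the image for embedded filesystems, kernels, and compressed blobs.
binwalk firmware-8920w.bin

# Grep the raw image for telltale strings.
strings firmware-8920w.bin | grep -iE 'busybox|linux version'
```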

I'm curious about how the GPL / copy-left licenses work, and wondering if I found someone who is violating it. I also want to hack the thermostat to control it without the motherfuckin' cloud, but that is a bit separate.

14
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I've got my main house server that has a number of dockerized applications, including nextcloud-aio. Nextcloud AIO comes with a built-in backup system using BorgBackup. I've had this running and doing my backups, and it is probably fine. Notably, it does encrypt the backup.

Now, I recently set up a separate machine to use rsnapshot to back up the things from the main machine that need backing up. It is SSHing in on a schedule to do that, backing up the folders I've listed.

When I set that up, I skipped the Nextcloud Borg backup, because that is already backing up; however, it is not a remote backup, so it is of limited use (granted, my 'official' backup computer is sitting about 18 inches away from the main server, so also of limited use).

I can easily just include the nextcloud-borg directory in the rsnapshot list, but does anyone know if it will properly handle just the updates?

That is, both Borg and rsnapshot are set up so that each backup isn't a complete copy but just incremental changes, so that you don't fill your whole disk in two weeks. But if Borg does that first on the Nextcloud data, will rsnapshot just not work and try to back up the full 50 GB every day? Or will it just do the incremental changes? Will the Borg encryption jack up rsnapshot's ability to see the changes?
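My hunch (please correct me) is that rsnapshot won't care: it is just rsync plus hard links, so unchanged Borg segment files should get hard-linked for free and only new or changed segments would transfer; the encryption shouldn't matter, since rsync compares bytes rather than meaning. On the rsnapshot side it would be just another backup line (paths made up):

```
# rsnapshot.conf excerpt - fields are TAB-separated
backup	root@mainserver:/mnt/backup/borg-repo/	mainserver-borg/
```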

If no one knows, I will just do it anyway and report back in a few days if my disk is completely full or not.

Edit: it has been ~4 days, and I think it is not all busted (not going to say it is a good idea). The total space it is taking up on the second (backup) machine is what I expect - it hasn't ballooned because it can't properly grok the Borg backup format or anything like that. Importantly, this is after ~4 days with very few changes (updates/deletions/edits) to anything on the Nextcloud.

 

Hey, all.

Is it possible to skip this 'register your server' step when creating a self-hosted Rocket.Chat instance? I just don't want to, ya know? Regular web searching is mostly giving results about how to disable user registration rather than how to skip the server registration with Rocket.Chat HQ.
