Shadow

joined 2 years ago
[–] Shadow 2 points 4 weeks ago

Subspace, also known as Continuum later in its life.

[–] Shadow 29 points 4 weeks ago

Oh man I thought that was the needle, not a toothpick. Suddenly this makes sense.

The toothpick is there to provide spacing for the button; then you pull it out as you wrap the thread around underneath it.

[–] Shadow 7 points 1 month ago* (last edited 1 month ago) (3 children)

I've had a shit experience with Razer and would never buy their products myself, but many people like them. They recently launched a "gaming" vertical mouse.

https://www.razer.com/ca-en/productivity/razer-pro-click-v2-vertical-edition

[–] Shadow 11 points 1 month ago (1 children)

We don't blame you for your leader; just don't say stupid MAGA shit and you'll be fine.

[–] Shadow 1 points 1 month ago

Many people on lemmy just read everything and block subs they don't want, rather than sub to what they do want. Subscriber count might not mean that much.

[–] Shadow 10 points 1 month ago* (last edited 1 month ago) (3 children)

Neat - https://iocaine.madhouse-project.org/how-it-works/

The load wasn't causing any issues; I was just getting ahead of it. I'm not worried about deploying countermeasures yet.

Also see https://lemmy.ca/post/43060353

I might try out Anubis on old.lemmy.ca specifically, since bots seem to love it the most.

[–] Shadow 39 points 1 month ago* (last edited 1 month ago) (1 children)

Too many IPs, so I did it by ASN at cloudflare.

  • AS4134 Chinanet backbone
  • AS45102 Alibaba Cloud
  • AS136907 Huawei Cloud
  • AS132203 Tencent
  • AS4812 China Telecom
  • AS21859 Zenlayer
  • AS56041 China Mobile
  • AS134762 Chinanet
  • AS56048 China Mobile
  • AS24444 Shandong
  • AS38019 Tianjin Mobile
  • AS134810 China Mobile
  • AS56046 China Mobile
  • AS56040 China Mobile
  • AS24400 Shanghai Mobile
  • AS17638 Tianjin provincial net
  • AS132525 Heilongjiang
  • AS24547 Hebei Mobile
  • AS4808 Unicom Beijing
  • AS17621 Unicom Shanghai
  • AS56047 China Mobile
  • AS4837 China Unicom
  • AS56042 China Mobile
  • AS9808 China Mobile

There's just no way we have thousands of legitimate but logged out users browsing old.lemmy.ca from their phone in China.
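
For anyone curious what that looks like in practice, here's a rough sketch of creating that kind of ASN block as a Cloudflare firewall rule via their API. The token, zone ID, and exact endpoint/payload below are placeholders from memory rather than our actual setup, so double-check against Cloudflare's current docs (their newer rulesets API has largely superseded this one):

```python
# Sketch: block a list of ASNs with a Cloudflare firewall rule via the API.
# API_TOKEN / ZONE_ID are placeholders, and the endpoint + payload shape are
# from memory (legacy firewall rules API) - verify against current Cloudflare docs.
import requests

API_TOKEN = "YOUR_API_TOKEN"   # placeholder
ZONE_ID = "YOUR_ZONE_ID"       # placeholder

# ASNs from the list above
ASNS = [4134, 45102, 136907, 132203, 4812, 21859, 56041, 134762, 56048,
        24444, 38019, 134810, 56046, 56040, 24400, 17638, 132525, 24547,
        4808, 17621, 56047, 4837, 56042, 9808]

# Cloudflare rule expressions use space-separated set literals: {4134 45102 ...}
expression = "ip.geoip.asnum in {%s}" % " ".join(str(a) for a in ASNS)

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/firewall/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=[{
        "filter": {"expression": expression},
        "action": "block",
        "description": "Block bot-heavy ASNs",
    }],
)
resp.raise_for_status()
print(resp.json())
```

You can also just paste that same `ip.geoip.asnum in {...}` expression into a custom WAF rule in the dashboard, which amounts to the same thing.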

[–] Shadow 8 points 1 month ago (1 children)

Cloudflare tries, but bots do a pretty good job of looking like regular users these days. There are some more advanced "AI" solutions that learn from existing traffic patterns, but I've been out of that space for a while, so I'm not sure what the latest tech is.
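
To give a rough idea of what "learn from existing traffic patterns" means, here's a toy sketch (not anything we run, and nowhere near what the real products do internally; the `suspicious_asns` helper is made up for illustration): flag an ASN when its current request rate blows way past its own historical baseline.

```python
# Toy illustration of learning from traffic patterns: flag ASNs whose current
# request rate is far above their own historical baseline. Real bot-management
# products layer on much more (TLS fingerprinting, behavioural scoring, etc.).
from statistics import mean, stdev

def suspicious_asns(history: dict[int, list[float]], current: dict[int, float],
                    threshold: float = 4.0) -> list[int]:
    """history: ASN -> past requests/min samples; current: ASN -> latest requests/min."""
    flagged = []
    for asn, samples in history.items():
        if len(samples) < 10:
            continue  # not enough baseline data for this ASN
        mu, sigma = mean(samples), stdev(samples)
        rate = current.get(asn, 0.0)
        # z-score: how many standard deviations above this ASN's own normal?
        if sigma > 0 and (rate - mu) / sigma > threshold:
            flagged.append(asn)
    return flagged

# Example: AS4134 normally does ~100 req/min, then suddenly spikes to 5000
print(suspicious_asns({4134: [100, 110, 95, 105, 98, 102, 97, 110, 101, 99]},
                      {4134: 5000.0}))  # -> [4134]
```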

[–] Shadow 11 points 1 month ago

I sample every 15s, so just multiply the graph numbers by 4.
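
In other words (assuming you're reading raw per-sample counts off the graph):

```python
# Graph values are counts per 15 s sample, so per-minute = count * (60 / 15) = count * 4.
SAMPLE_INTERVAL_S = 15

def per_minute(count_per_sample: float) -> float:
    return count_per_sample * (60 / SAMPLE_INTERVAL_S)

print(per_minute(250))  # a graph value of 250 -> 1000.0 requests/min
```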

[–] Shadow 25 points 1 month ago* (last edited 1 month ago) (2 children)

Hmm, I don't think my China blocks did, but I did also turn on Cloudflare's AI bot protection which looks like it did. I've turned that back off now. Sorry about that, thanks for pointing it out!

Unfortunately stats on https://grafana.lem.rocks/d/bdid38k9p0t1cf/federation-health-single-instance-overview?orgId=1&var-instance=lemmy.ca&var-remote_instance=lemmy.world seem to be broken for the past few days.

[–] Shadow 1 points 1 month ago

Yeah I moved off nova to this. It took a bit to get used to it, but now I like it more.

31
submitted 4 months ago* (last edited 4 months ago) by Shadow to c/main
 

I'm curious if anyone here actually finds value in the reddit posts brought over by lemmit.online, since I'd like to defederate from it otherwise.

It feels actively harmful to lemmy, since so many of the posts it brings over are questions that the original poster will never see. It encourages a conversation that will never happen, so if someone does reply they're going to feel disengaged.

The bot rarely gets any upvotes or engagement, and I suspect a majority of people (like myself) have just blocked it. TBH I forgot it existed until Tesseract showed me its posts again.

63
submitted 4 months ago* (last edited 4 months ago) by Shadow to c/main
 

Hi everyone!

Tesseract is now available as an alternative front end at https://tess.lemmy.ca/

125
submitted 4 months ago* (last edited 4 months ago) by Shadow to c/main
 

Hello everyone!

I'll be taking the site down for two maintenance windows this week to complete our server migration.

  • Weds Jan 29th - 09:00 - 11:00 PT (12:00 - 14:00 ET)
  • Thurs Jan 30th - 09:00 - 11:00 PT (12:00 - 14:00 ET)

During the first window I'll be migrating us from OVH to our new dedicated hardware. After this migration there will likely be some temporarily broken images, as it takes approximately 8 hours to resync our object storage from OVH.

This is a major change and, despite my testing, it may have some unintended side effects. If you run into any problems that aren't just a broken image, please let us know.

The second maintenance window is to migrate our pict-rs database from its local sled DB into our primary postgres DB. This is a much smaller change, but since pict-rs checks every image as it migrates them, it takes about 1.5 hours.

As usual, you can check https://status.lemmy.ca/ for updates.

 

Hello everyone, we're long overdue for an update on how things have been going!

Finances

Since we started accepting donations back in July, we've received a total of $1350, as well as $1707 in older donations from smorks. We haven't had any expenses other than OVH (approx. $155/mo) since then, leaving us $2152 in the bank.

We still owe TruckBC $1980 for the period he was covering hosting, and I've contributed $525 as well (mostly non-profit registration related costs, plus domain renewals). We haven't yet discussed reimbursing either of us; we're both happy to build up a contingency fund for a while.

New Server

A few weeks ago, we experienced a ~26-hour outage due to a failed power supply and extremely slow response times from OVH support. This was followed by an unexplained outage the next morning at the same time. To ensure Lemmy’s growth remains sustainable for the long term and to support other federated applications, I’ve donated a new physical server. This will give us a significant boost in resources while keeping the monthly cost increase minimal.

Our system specs today:

  • Undoubtedly the cheapest hardware OVH could buy
  • Intel Xeon E-2386G (6 cores @ 3.5GHz)
  • 32GB of RAM
  • 2x 512GB Samsung NVMe in RAID 1
  • 1Gbps network
  • $155/month

The new system:

  • Dell R7525
  • AMD EPYC 7763 (64 cores @ 2.45GHz)
  • 1TB of RAM
  • 3x 120GB SATA SSDs (HW RAID 1 with a hot spare, for Proxmox)
  • 4x 6.4TB NVMe (ZFS mirrored + striped, for data)
  • 1Gbps network with a 50Mbps commit (see 95th percentile billing)
  • Redundant power supplies
  • Next-day hardware support until Aug 2027
  • $166/month + tax

This means that instead of renting an entire server and having OVH be responsible for the hardware, we'll be renting co-location space at a Vancouver datacenter via a 3rd party service provider I know.

These servers are extremely reliable, but if there is a failure, either Otter or I will be able to get access reasonably quickly. We also have full OOB access via iDRAC, so it's pretty unlikely we'll ever need to go on site.

Server Migration

Phase 1 is currently planned for Jan 29th or 30th and will completely move us out of OVH and onto our own hardware. I'm expecting roughly a 2-3 hour outage, followed by a 6-8 hour window where some images may be missing as the object store resyncs. I'll make another follow-up post in a week with specifics.

I'm not 100% decided on phases 2+ yet and haven't planned a timeline for them. They would get us to a fully redundant (excluding hardware) setup that's easier to scale and manage down the road, but they do add a little bit of complexity.

Let me know if you have any questions or comments, or feedback on the architecture!

26
submitted 5 months ago* (last edited 5 months ago) by Shadow to c/main
 

Morning all!

I'm going to be taking the site down for about 5 minutes, so that I can get a consistent copy of our databases (postgres + pict-rs sled).

Will do it at about 10am PST.

280
submitted 5 months ago* (last edited 5 months ago) by Shadow to c/main
 

Hey everyone, and happy new year!

Sorry about that super long downtime there. Yesterday (Sunday) morning at 10:03AM PST our server suffered a physical hardware failure, apparently a failed power supply. Unfortunately, despite opening a ticket with our hosting vendor (OVH) a few minutes later, and despite their claims of 24/7 support, nobody looked at our ticket until this morning when their phone support lines opened and I called them.

They've now replaced a defective power supply and we're back online, after ~26 hours of being offline. Some pretty disappointing response times, to put it nicely.

We're planning to move away from OVH at the end of this month, onto proper enterprise grade hardware that we own and control. This will give us a HUGE boost in server resources and allow us to scale for the foreseeable future, while also giving us the control to resolve problems like this much quicker. Expect another follow up post about this in the next couple weeks once I've put together the migration plan.

Timeline:

  • Jan 5th 10:03am PST - We get alerts that the server is non-responsive.
  • Jan 5th 10:05am PST - I pull up the console via IPMI and it's completely non-responsive. Attempting to power the server off/on, or do anything else, does not work.
  • Jan 5th 10:15am PST - Initial support ticket created with OVH. I followed up a couple of times over the next few hours and got no response.
  • Jan 6th 6:32am PST - Called OVH, gave them the case number, and asked them to investigate.
  • Jan 6th 7:34am PST - I get notified they'll start their "intervention" in 15 minutes.
  • Jan 6th 11:04am PST - Called them again; the tech is still working on it and they'll get back to me with an update.
  • Jan 6th 11:34am PST - "I was informed by our data centre technician that there is an issue with the power supply unit for the rack on which your server resides. Your server will come back online once they have replaced the power supply."
  • Jan 6th 12:17pm PST - We're back up, finally!

Edit on Jan 7th @ 8:40am PST: We just had another outage of about an hour. Investigating with OVH.

17
Castle Infinity (en.wikipedia.org)
submitted 5 months ago by Shadow to c/[email protected]
1
submitted 6 months ago* (last edited 6 months ago) by Shadow to c/[email protected]
75
submitted 6 months ago* (last edited 6 months ago) by Shadow to c/main
 

One of the drives in our server has failed. =( Even though it should be a 10-minute job, OVH needs a 2-hour window to replace it.

I've requested they schedule it for Tuesday from 8 - 10am PST. Hopefully it'll be reasonably quick, but expect Cloudflare tunnel errors while they perform the work.
