Lemmy.ca

11,417 readers
711 users here now

Welcome 🍁


Lemmy.ca is run by Canadians, hosted in Canada, and geared toward Canadians. However, it is not restricted to Canadians, or Canadian culture/topics/etc. All are welcome!

To learn more about what Lemmy is, or how the Fediverse works, you can visit our simple Getting Started Guide.

This site is run by the non-profit Fedecan and funded entirely by user donations. You can help support us by visiting our donations page.


Rules and Guidelines

1. No Bigotry - Including racism, sexism, ableism, homophobia, transphobia, or xenophobia.

2. Be Civil - Argue in good faith; attack the argument, not the person; and promote healthy debate. Incivility includes implied violence, threats, or wishes of violence and/or death.

3. No Porn - This instance is not made to host porn communities. You're free to access porn communities on other instances through your account, but be mindful of Rule 4.

4. Use the NSFW tag - Use your common sense: if you wouldn't want an image to show up on your work computer, tag it as such. In comments, use the spoiler tag (syntax shown below) for NSFW images, and put an NSFW mention beside links. Do not use NSFW images as your avatar or banner.
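For reference, Lemmy's spoiler syntax in comments looks like this (a minimal sketch; the spoiler label and image link are placeholders):

```
::: spoiler NSFW
![description](https://example.com/image.png)
:::
```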
5. No Ads / Spam - This instance is not here to act as your billboard. If you want to promote your personal work, at least make the effort to be a contributing member of this community. Your account shouldn't exist only to advertise; keep it natural.

6. Bot accounts - If you are the operator of a "bot" account, make sure to flag it as such in the account's settings.

7. Right to privacy - Do NOT distribute someone else's personal information without their consent (aka doxxing). Information that is in the public domain can be shared, provided it is shared in good faith.
ex: The official email of an elected official is fair; the private phone number or the real name of a non-public person is NOT.

8. Report abuse - The report function isn't a disagree button. You might not agree with someone, but that doesn't mean what they say is against the rules. Using reports repeatedly in this fashion will lead to action being taken against the reporter.

9. Impersonation - Don't make an account with the intent to deceive or defame someone on the fediverse.
ex: Parody of a famous person is okay; submitting outrageous content while appearing to be another user, mod, or admin isn't.


Contact an Admin:

Guides:

You can find more guides at fedecan.ca by opening the sidebar.

Meta Communities:

Other Frontends:

Don't like how Lemmy looks? Try one of our alternative UIs:

Site Status: status.fedecan.ca


Find Apps: lemmyapps.com

Find Communities: lemmyverse.net

Fediseer: endorsement


founded 4 years ago
1
submitted 1 day ago* (last edited 19 hours ago) by Shadow to c/main

Hello everyone!

It’s time for another long-overdue update on how Fedecan and our various sites are doing. It’s been just over two years since the great Reddit migration, and in that time we’ve made some solid progress:

Finances

Here’s a look at our bank balance since we began accepting donations:

We’re currently sitting at around $2,900, with a monthly burn of about $200, which gives us roughly a year of runway ($2,900 ÷ $200/month ≈ 14 months). We have some additional annual costs (like domain renewals and non-profit registration), but overall we run very lean.

Fedecan still owes:

  • TruckBC: $1,980
  • Shadow (me): $525

These were out-of-pocket hosting and non-profit registration costs from 2023/2024. It’d be great to get those covered, but we want to keep at least a year of operating expenses in reserve.

If you're a regular user and value what we're doing, please consider donating! We have multiple ways to donate; you can find the comparison and donation links on our website: https://fedecan.ca/en/donate

Sh.itjust.works

Nothing major to report here - we’ve all been a bit busy lately, but collaboration is continuing slowly behind the scenes.

Fediverse Growth

We're seeing a healthy volume of posts and communities on lemmy.ca, surging with each Reddit drama:

Infrastructure

Our server is a Dell R7515 with an EPYC 7763, 1 TB of RAM, and 4x 7.68 TB NVMe data disks, hosted in a datacenter in Vancouver, BC.

I spun up VictoriaMetrics + VictoriaLogs a few weeks ago and have been ingesting all of our data, giving us the ability to put together some nice Grafana dashboards.
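If you're curious what the ingestion side involves: vmagent (VictoriaMetrics' collection agent) scrapes Prometheus-style targets from a config passed via -promscrape.config. A minimal sketch, with placeholder job names and targets rather than our actual setup:

```yaml
# Prometheus-style scrape config consumed by vmagent; the exporters
# and ports below are illustrative placeholders, not Fedecan's config.
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]   # node_exporter host metrics
  - job_name: "postgres"
    static_configs:
      - targets: ["localhost:9187"]   # postgres_exporter DB metrics
```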

Everything is running great on the infrastructure side of things. Our server is barely working up a sweat and we shouldn't have to worry about scaling for a long time.

Lemmy.ca still comprises almost all of our traffic:

Lemmy.ca

Our over-provisioned stack is performing well, handling the occasional lemmy / lemmy-ui dropout:

Similarly, the DB is mostly serving queries out of RAM:

Our object storage is slowly climbing as expected, but we've got several years of capacity to figure out a long term solution:

I’m also doing some limited analytics on our web logs. As expected, lemmy.world makes up the majority of our federation traffic:

One interesting thing to see from the user-agent data is the breakdown of traffic by the different mobile clients:

The “dart” UA is just a common web library; Thunder reports as this and I suspect other clients do too. If you’re a client developer, please set your user-agent!
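For client developers wondering what that means in practice, here's a minimal sketch of sending an identifiable User-Agent with each API call. The client name, version, and endpoint handling are illustrative, and it assumes a Node 18+ runtime where fetch allows overriding User-Agent (browsers don't):

```typescript
// Identify your client so instance admins can tell it apart from
// generic library UAs like "dart". Name and version are placeholders.
const USER_AGENT = "MyLemmyClient/1.2.3 (+https://example.dev/mylemmyclient)";

async function getSite(instance: string): Promise<unknown> {
  const res = await fetch(`https://${instance}/api/v3/site`, {
    headers: { "User-Agent": USER_AGENT },
  });
  if (!res.ok) throw new Error(`GET /api/v3/site failed: HTTP ${res.status}`);
  return res.json();
}

// Usage: getSite("lemmy.ca").then(console.log);
```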

Out of the alternative web clients we support, Tesseract is the most popular, although the overall traffic volume is still low:

We only store 7 days of logs but I’m hoping to get these pulled out into metrics soon, since it would be interesting to track which clients / interfaces people use over time.

Pixelfed.ca

Not much to say on this one. Because it uses local storage, it currently runs on a single VM without redundancy.

Piefed.ca

Piefed runs on a pair of VMs with its own database and object storage backends.

Service Health Response data

Cloudflare

If you want to compare against previous data posts, here are the same Cloudflare graphs for lemmy.ca:

As always, feel free to reach out if you have any questions or ideas. Thanks for being a part of the Fediverse!

2

Lemmy just reached a new milestone: 1 million posts, across 1,323 servers.

Source: https://lemmy.fediverse.observer/dailystats&days=90

3

I'm tired of reading about Reddit here.

We left. Let's move on.

4

Please note this is just a beta and there are going to be bugs, but it works and it works nicely. Have fun.

5
preach (lemmy.world)
submitted 2 years ago by [email protected] to c/[email protected]
8

AccidentalRenaissance has no active moderators due to Reddit's unprecedented API changes, and has thus been set to private to prevent vandalism.

Resignation letters:

Openminded_Skeptic - https://imgur.com/a/WwzQcac

VoltasPistol - https://imgur.com/a/lnHSM4n

We welcome you to join us in our new homes:

https://kbin.social/m/AccidentalRenaissance

https://lemmy.blahaj.zone/c/accidentalrenaissance

Thank you for all your support!

Original post from r/ModCoord

9

I hate that everything now is a subscription service instead of something you buy once and do whatever you want with.

12
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Looks like it works.

Edit: still seeing some performance issues. Needs more troubleshooting.

Update: Registrations re-opened. We encountered a bug where people could not log in, see https://github.com/LemmyNet/lemmy/issues/3422#issuecomment-1616112264 . As a workaround we opened registrations.

Thanks

First of all, I would like to thank the Lemmy.world team and the 2 admins of other servers @[email protected] and @[email protected] for their help! We did some thorough troubleshooting to get this working!

The upgrade

The upgrade itself isn't too hard. Create a backup, and then change the image names in the docker-compose.yml and restart.
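In docker-compose terms, that step looks roughly like the following sketch. Service names and image tags here are illustrative, not lemmy.world's actual file:

```yaml
# Sketch of the upgrade step: bump the image tags, take a backup, then
# restart with `docker compose pull && docker compose up -d`.
services:
  lemmy:
    image: dessalines/lemmy:0.18.1-rc.4      # illustrative tag
    restart: always
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1-rc.4   # illustrative tag
    restart: always
```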

But, like the first 2 tries, after a few minutes the site started getting slow until it stopped responding. Then the troubleshooting started.

The solutions

What I had noticed previously is that the lemmy container could reach around 1500% CPU usage; above that, the site got slow. Which is weird, because the server has 64 threads, so 6400% should be the max. So we tried what @[email protected] had suggested before: we created extra lemmy containers to spread the load (and extra lemmy-ui containers), and used nginx to load balance between them.
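Conceptually, the nginx side of that looks something like this sketch (hypothetical container names and counts, with lemmy's default port 8536 assumed; not our actual config):

```nginx
# Round-robin load balancing across multiple lemmy backend containers.
upstream lemmy-backend {
    server lemmy-1:8536;
    server lemmy-2:8536;
    server lemmy-3:8536;
}

server {
    listen 443 ssl;
    server_name lemmy.world;

    location / {
        proxy_pass http://lemmy-backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```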

Et voilà. That seems to work.

Also, as suggested by him, we start the lemmy containers with the scheduler disabled, and have 1 extra lemmy container running with the scheduler enabled, which isn't used for serving other traffic.

There will be room for improvement, and probably new bugs, but we're very happy lemmy.world is now at 0.18.1-rc. This fixes a lot of bugs.

14
Lemmy World outages (lemmy.world)
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Hello there!

It has been a while since our last update, but it's about time to address the elephant in the room: downtimes. Lemmy.World has been having multiple downtimes a day for quite a while now. And we want to take the time to address some of the concerns and misconceptions that have been spread in chatrooms, memes and various comments in Lemmy communities.

So let's go over some of these misconceptions together.

"Lemmy.World is too big and that is bad for the fediverse".

While it is true that we are the biggest Lemmy instance, we are far from the biggest in the Fediverse. If you want actual numbers you can have a look here: https://fedidb.org/network

The entire Lemmy fediverse is still in its infancy, and even though we don't like to compare ourselves to Reddit, it gives you something comparable. The total number of Lemmy users across all instances combined is currently 444,876, which is still nothing compared to a medium-sized subreddit. There are points to be made that it is better to spread the load of users and communities across other instances, but let us make it clear that this is not a technical problem.

And even in a decentralised system, there will always be bigger and smaller blocks within; such would be the nature of any platform looking to be shaped by its members. 

"Lemmy.World should close down registrations"

Lemmy.World is being linked in a number of Reddit subreddits and in Lemmy apps. Imagine if new users land here and have no way to sign up. We have to assume that most new users have no information on how the Fediverse works, and making them read a full page of what's what would scare a lot of those people off. They probably wouldn't even take the time to read why registrations were closed; they'd move on and not join the Fediverse at all. What we want to do, however, is inform users before they sign up, without closing registrations. The option is already built into Lemmy but only available on Lemmy.ml - so a ticket was created with the development team to make it available to other instance admins. Here is the post on Lemmy Github.

Which brings us to the third point:

"Lemmy.World can not handle the load, that's why the server is down all the time"

This is simply not true. There are no financial obstacles to upgrading the hardware, should that be required; but that is not the solution to this problem.

The problem is that for a couple of hours every day we are under a DDoS attack. It's a never-ending game of whack-a-mole where we close one attack vector and they'll start using another one. Without going into too much detail and exposing too much: there are some very 'expensive' SQL queries in Lemmy - actions or features that take seconds instead of milliseconds to execute. By executing them by the thousands per minute, you can overload the database server.
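To make the mechanics concrete, the usual first line of defence against this class of attack is rate-limiting the expensive endpoints at the reverse proxy. A minimal nginx sketch, with a hypothetical endpoint, zone name, and limits; this is not a description of our actual mitigations:

```nginx
# Allow each client IP at most 2 req/s to an expensive endpoint,
# with a small burst; everything beyond that is rejected with 429.
limit_req_zone $binary_remote_addr zone=expensive:10m rate=2r/s;

server {
    location /api/v3/search {   # placeholder "expensive" endpoint
        limit_req zone=expensive burst=5 nodelay;
        limit_req_status 429;
        proxy_pass http://lemmy-backend;
    }
}
```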

So who is attacking us? One thing that is clear is that those responsible for these attacks know the ins and outs of Lemmy. They know which database requests are the most taxing, and they are always quick to find another as soon as we close one off. That's one of the only things we know for sure about our attackers. Being the biggest instance and having defederated from a couple of instances has made us a target.

"Why do they need another sysop who works for free"

Everyone involved with LW works as a volunteer. The money that is donated goes to operational costs only - so hardware and infrastructure. And while we understand that working as a volunteer is not for everyone, nobody is forcing anyone to do anything. As a volunteer you decide how much of your free time you are willing to spend on this project, a service that is also being provided for free.

We will leave this thread pinned locally for a while and we will try to reply to genuine questions or concerns as soon as we can.

15

Lemmy.ml has now blocked Threads.net

16

I strongly encourage instance admins to defederate from Facebook/Threads/Meta.

They aren't some new, bright-eyed group with no track record. They're a borderline Machiavellian megacorporation with a long and continuing history of extremely hostile actions:

  • Helping amplify genocides in multiple countries
  • Openly and willingly taking part in political manipulation (see Cambridge Analytica)
  • Actively campaigning against net neutrality and attempting to make "facebook" most of the internet for people in countries with weaker internet infra - directly contributing to their amplification of genocide (see the genocide link for info)
  • Using their users as non-consenting subjects in psychological experiments.
  • Absolutely ludicrous invasions of privacy - even if they aren't able to do this directly to the Fediverse, it illustrates their attitude.
  • Even now, they're on record attempting to get instance admins into backdoor discussions and to sign NDAs.

Yes, I know one of the Mastodon folks has said they're not worried. Frankly, I think they're being laughably naive >.<. Facebook/Meta - and Instagram's CEO - might say pretty words, but words are cheap, and from a known-hostile entity like Meta/Facebook they are almost certainly just a manipulation strategy.

In my view, they should be discarded as entirely irrelevant, or viewed as deliberate lies, given their continued atrocious behaviour and open manipulation of vast swathes of the population.

Facebook has large amounts of experience in attacking and astroturfing social media communities - hell, I would be very unsurprised if they are already doing it, but it's difficult to say without solid evidence ^.^

Why should we believe anything they say, ever? Why should we believe they aren't just trying to destroy a competitor before it gets going properly, or worse, turn it into yet another arm of their sprawling network of services, via Embrace, Extend, Extinguish - or perhaps Embrace, Extend, Consume would be a better term in this case?

When will we ever learn that openly-manipulative, openly-assimilationist corporations need to be shoved out before they can gain any foothold and subsume our network and relegate it to the annals of history?

I've seen plenty of arguments claiming that it's "anti-open-source" to defederate, or that it means we aren't "resilient", which is wrong ^.^:

  • Open source isn't about blindly trusting every organisation that participates in a network, especially not one which is known-hostile. Threads can start their own ActivityPub network if they really want or implement the protocol for themselves. It doesn't mean we lose the right to kick them out of most - or all - of our instances ^.^.
  • Defederation is part of how the fediverse is resilient. It is the immune system of the network against hostile actors (it can be used in other ways, too, of course). Facebook, I think, is a textbook example of a hostile actor, and has such an unimaginably bad record that anything they say should be treated as a form of manipulation.

Edit 1 - Some More Arguments

In this thread, I've seen some more arguments about Meta/FB federation:

  • Defederation doesn't stop them from receiving our public content:
    • This is true, but very incomplete. The content you post is public, but what Meta/Facebook is really after is having their users interact with content. Defederation prevents this.
  • Federation will attract more users:
    • Only if Threads makes it trivial to move/make accounts on other instances, and makes the fact it's a federation clear to the users, and doesn't end up hosting most communities by sheer mass or outright manipulation.
    • Given that Threads as a platform is not open source - you can't host your own "Threads Server" instance - and presumably their app only works with the Threads Server that they run - this is very unlikely. Unless they also make Threads a Mastodon/Calckey/KBin/etc. client.
    • Therefore, their app is probably intending to make itself their user's primary interaction method for the Fediverse, while also making sure that any attempt to migrate off is met with unfamiliar interfaces because no-one else can host a server that can interface with it.
    • Ergo, they want to strongly incentivize people to stay within their walled garden version of the Fediverse by ensuring the rest remains unfamiliar - breaking the momentum of the current movement towards it. ^.^
  • We just need to create "better" front ends:
    • This is a good long-term strategy, because of the cycle of enshittification.
    • Facebook/Meta has far more resources than us to improve the "slickness" of their clients at this time. Until the fediverse grows more, and while they aren't yet under immediate pressure to make their app profitable via enshittification and advertising, we won't manage >.<
    • This also assumes that Facebook/Meta won't engage in efforts to make this harder e.g. Embrace, Extend, Extinguish/Consume, or social manipulation attempts.
    • Therefore we should defederate and still keep working on making improvements. This strategy of "better clients" is only viable in combination with defederation.

PART 2 (post got too long!)

19

Edit: obligatory explanation (thanks mods for squaring me away)...

What you see via the UI isn't "all that exists". Unlike Reddit, where everything is a black box, there are a lot more eyeballs who can see "under the hood". Any instance admin, proper or rogue, gets a ton of information that users won't normally see. The attached example demonstrates that while users will only see upvote/downvote tallies, admins can see who actually performed those actions.

Edit: To clarify, not just YOUR instance admin gets this info. This is ANY instance admin across the Fediverse.
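For the curious, this falls out of how votes federate: each vote is delivered to other instances as an ActivityPub Like (or Dislike) activity, and that activity necessarily names the voter so it can be deduplicated and later undone. A simplified sketch, with placeholder URLs:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example-instance.ca/activities/like/abc123",
  "type": "Like",
  "actor": "https://example-instance.ca/u/some_user",
  "object": "https://another-instance.net/post/12345"
}
```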

24
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Another day, another update.

More troubleshooting was done today. What did we do:

  • Yesterday evening @[email protected] did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to github.
  • @[email protected] created a docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix
  • We started using this image, and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 in the nginx conf for /api/v3/ws (see the sketch after this list).
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, among which the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) only use 1 container, or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx (also shown in the sketch below).
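A sketch of what those two nginx changes look like together (illustrative upstream names and counts, not our actual config):

```nginx
# Only mark a backend dead after 5 failures, rather than on the first
# slow response, so one bad request doesn't 502 the whole upstream.
upstream lemmy-backend {
    server lemmy-1:8536 max_fails=5;
    server lemmy-2:8536 max_fails=5;
}

server {
    # Pre-0.18 clients still hammer the removed websocket endpoint;
    # answer them cheaply instead of proxying to the backend.
    location /api/v3/ws {
        return 404;
    }

    location / {
        proxy_pass http://lemmy-backend;
    }
}
```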

Currently we're running with 1 lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with 1.

Thanks to @[email protected] , @[email protected] , @[email protected], @[email protected] , @[email protected] , @[email protected] for their help!

And not to forget, thanks to @[email protected] and @[email protected] for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy docker image with the PRs.

Edit: So as soon as the US folks woke up (hi!) we turned out to need the second Lemmy container for performance. So that's now started, and I noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 for each upstream, which does actually work.

25

Welcome to the fediverse!
