lwadmin

joined 2 years ago
115
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 

Update: we've finally been in contact with Jonah and the material we originally flagged has been taken down. The link ban has been lifted.

We will be posting another announcement soon to provide more information and our future plans relating to federation.


Hello world,

following our previous decision to defederate from lemmy.one due to their lack of responsiveness to abuse reports about CSAM hosted on their instance, we have decided to also defederate from all other platforms operated by the same entity.

Lemmy.one is operated by Fediverse Communications LLC, which is run by Jonah Aragon. Fediverse Communications LLC operates various Fediverse instances listed here:

  • mstdn.party
  • mstdn.plus
  • lemmy.one
  • mastodon.neat.computer
  • pxlfd.plus

Jonah is also the project director of privacyguides.org, and discuss.privacyguides.net is the community platform belonging to privacyguides.org.

Additionally, Jonah is the director of Triplebit, the ISP used to host both Fediverse Communications LLC and Privacy Guides infrastructure.

Our abuse reports to both Fediverse Communications LLC and Triplebit about CSAM hosted on lemmy.one have gone nowhere, and this material is still up almost 3 months after the original report. We have since also reported it to NCMEC (via Cloudflare) and directly to other US law enforcement, but so far this hasn't resulted in anything being taken down.

Although we consider it unlikely that CSAM would be found on privacyguides.org or privacyguides.net, going forward we will remove all posts and comments referencing privacyguides.org, privacyguides.net, or any of the domains of services operated by Fediverse Communications LLC. We will not ban any user for mentioning or linking them; we will only remove these posts and comments.

Privacy Guides came to our attention because they are just now testing federation of their Discourse forum, in an announcement that was, coincidentally, also posted by Jonah.

We will be lifting this link ban once the offending material has been taken down, but we will not consider refederation with any instance that is operated by Fediverse Communications LLC until they can somehow convince us that they'll be taking responsibility for their infrastructure in the future.

If the Privacy Guides team is willing and able to cut ties with Jonah, we will also be more than happy to remove the ban on their domains and refederate with their Discourse forum, even without the original issue with lemmy.one being addressed. Federation between Discourse and Lemmy currently appears to be quite limited or not working at all, so for the time being the federation aspect may not have much relevance here.

It's unfortunate that things had to go this far; we generally try to address issues like this in a friendly manner, reporting them privately to affected instances, as most people are more than willing to take down CSAM hosted on their infrastructure. In this case, however, Jonah appears to be operating well beyond what he is able to manage. Being not only an instance operator but also an ISP puts a lot of responsibility in your hands, and at the very least you should respond to abuse reports. We exhausted the non-law-enforcement escalation steps a while back already; we went to their ISP, which is conveniently themselves. We don't know the actual server IP hosting this content, as it's behind Cloudflare, so only law enforcement would be able to obtain this information.

709
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Hello world,

as many of you probably already know, Lemmy is an open source project and its development is funded by donations.

Unfortunately, as is often the case, donation amounts tend to go down over time if people are not aware of how necessary they are. When older users leave the platform they may stop donating, while new users joining will typically not be aware of this and won't start donating, so donations neither even out nor increase overall.

All of the services provided by our non-profit Fedihosting Foundation depend on the development of FOSS platforms, which we can host without paying any licensing or other fees, only the infrastructure costs. We are currently investing a small part (€50 each) of the donations we receive in the development of Lemmy and Mastodon, but the majority of the donations we receive are used to cover infrastructure costs. We're currently just about breaking even with the donations we receive, but it's certainly not enough to cover a large part of Lemmy or other software development costs.

We're looking to support sustainable software development for all the services we provide and will post similar announcements on our other platforms to promote donations towards the respective development teams in the coming days.

You can find the original announcement by @[email protected] below:

cross-posted from: https://lemmy.ml/post/29579005

An open source project the size of Lemmy needs constant work to manage the project, implement new features and fix bugs. Dessalines and I work full-time on these tasks and more. As there is no advertising or tracking, all of our work is funded through donations. Unfortunately the amount of donations has decreased to only 2000€ per month. This leaves only 1000€ per developer, which is not enough to pay my bills. With the current level of donations I will be forced to find another job, and drastically reduce my contributions to Lemmy. To avoid this outcome and keep Lemmy growing, I ask you to please make a recurring donation:

Liberapay | Ko-fi | Patreon | OpenCollective | Crypto

If you want more information before donating, consider the comparison with Reddit. It began as a startup funded by rich investors. The site is managed by corporate executives who over time have become more and more disconnected from normal users. Their main goal is to make investors happy and to make a profit. This leads to user-hostile decisions like firing the employee responsible for AMAs, blocking third-party apps and more. As Reddit is a single website under a single authority, all users need to follow the same rules, including ridiculous ones like censoring the name "Luigi".

Lemmy represents a new type of social media which is the complete opposite of Reddit. It is split across many different websites, each with its own rules, and managed by normal people who actually care about the users. There is no company and no profit motive. Much of the work is carried out by volunteer admins, mods and posters, who contribute out of enthusiasm and not for money. For users this is great as there is no advertising nor tracking, and no chance of a takeover by a billionaire. Additionally there are no built-in political or ideological restrictions. You can use the software for any purpose you like, add your own restrictions or scrutinize its inner workings. Lemmy truly belongs to everyone.

Dessalines and I work full-time on Lemmy to keep up with all the feature requests, bug reports and development work. Even so there is barely enough time in the day, and no time for a second job. Previously I sometimes had to rely on my personal savings to keep developing Lemmy for you, but that can't go on forever. We partly rely on NLnet for funding, but they only pay for development of new features, and not for mandatory maintenance work. The only available option is user donations. To keep Lemmy viable, donations need to reach a minimum of 5000€ per month, resulting in a modest salary of 2500€ per developer. If that goal is reached, Dessalines and I can stop worrying about money and fully focus on improving the software for the benefit of all users and instances. Please use the link below to see current donation stats and make your contribution! We especially rely on recurring donations to secure long-term development and make Lemmy the best it can be.

Donate


edit, as this was frequently brought up:

Will donations to Lemmy development go towards the operation of lemmy.ml?

It depends on the donation method used and is limited to around 2% of the minimum overall donation goal. The vast majority of donations are used exclusively for developer salaries.

lemmy.ml hosting is only financed by donations via Opencollective. All other donations go exclusively to developer salaries.

[source]

For donations via Open Collective, yes, a tiny fraction of donations towards Lemmy development will go towards the operation of lemmy.ml. The reasons for this include that lemmy.ml is used for testing new releases and also that it's not worth maintaining a separate donation account for the instance. Additionally, it should be noted that the money going towards lemmy.ml hosting is just a tiny fraction of the funds that are being asked for. Hosting lemmy.ml costs around €100/month, which is only 2% of the stated minimum donation goal.

 

Hello world,

we will be performing an update to Lemmy 0.19.11 in an hour.

We are planning for around 15 minutes of downtime today at 21:30-21:45 UTC.

You can convert this to your local time here: https://inmytime.zone/?iso=2025-05-03T21%3A30%3A00.000Z

As mentioned in our April update, we had already backported most of the Lemmy-UI changes for 0.19.11, but we were still missing most of the backend changes.

This update will bring us the remaining changes listed in the release notes, as well as some additional changes not yet released:

  • user registrations are now processed in a DB transaction, which prevents errors we've occasionally seen in the past where a registration resulted in an inconsistent user creation state (#5480); a small illustrative sketch follows below
  • new posts in NSFW communities are now always marked NSFW (#5310)
  • another round of peertube federation fixes (#5652)
  • fixed email notifications for denied applications (#5641)
    This was already supposed to be part of 0.19.11 but it did not work there. We are migrating our previous external email notification implementation to let Lemmy handle sending emails now.
  • various fixes for opentelemetry that are not present in upstream Lemmy
    As opentelemetry support has been removed from Lemmy 1.0, this is not code that we are currently planning to upstream to the 0.19 branch, but we intend to keep using it going forward.

If you are an instance admin considering enabling opentelemetry in Lemmy 0.19, don't do so unless you also apply a similar set of patches to bring the related libraries to newer versions, as your instance will otherwise lock up after a number of requests.
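
For the first item in the list above, here's a minimal sketch of the general idea behind processing a registration in a single transaction. It's written in Python with psycopg2 purely for illustration (Lemmy itself is Rust/Diesel), and the table and column names are simplified assumptions rather than Lemmy's actual schema:

```python
# Illustrative sketch only; not Lemmy's actual code. Table/column names are
# simplified assumptions. The point: both inserts happen in one transaction,
# so a failure part-way through cannot leave a half-created user behind.
import psycopg2

def register_user(conn, username: str, email: str) -> None:
    with conn:  # psycopg2 commits on success, rolls back on any exception
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO person (name) VALUES (%s) RETURNING id",
                (username,),
            )
            person_id = cur.fetchone()[0]
            cur.execute(
                "INSERT INTO local_user (person_id, email) VALUES (%s, %s)",
                (person_id, email),
            )

conn = psycopg2.connect("dbname=lemmy")  # hypothetical connection string
register_user(conn, "example_user", "user@example.com")
```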


Update 21:50 UTC: The update has been completed successfully and within the planned amount of time.

 

Hello,

as this is a fairly active community, we just wanted to let you know that it is no longer federating with Lemmy.World, due to our defederation from lemmy.one for lack of moderation.

Our announcement can be found here: https://lemmy.world/post/28173093

We recommend migrating to a community on an instance that is maintained better.

357
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Hello world,

we've had various smaller updates lately that didn't all warrant their own posts, and we don't want to publish too many announcements around the same time, so we've collected them into a single larger post.

New alternative Lemmy frontend: Tesseract

We have recently added a new alternative Lemmy frontend to our collection: Tesseract.

Tesseract was forked from Photon a while back and includes a variety of additional customization options and moderation utilities.

It is available at https://t.lemmy.world/.

Lemmy-UI update to 0.19.11-ish

We have deployed a custom build of Lemmy-UI, the default Lemmy web interface, which includes most of the features included in the official 0.19.11 release.

We haven't updated our backend to a newer version yet, as we still have to find a solution for the newly integrated functionality that sends emails on rejected registration applications, but all the frontend features that don't require a backend update have been included. The only part currently missing is Lemmy's new donation dialog, as it requires a backend upgrade as well.

You can find the list of changes in the frontend section in the announcement for the 0.19.11 release.

Defederation from lemmy.one and r.nf

A while back, a Lemmy.World user informed us that an instance we are federated with was hosting highly illegal content. This was the result of an attack more than a year ago, and said content federated to many other instances, which made local copies of the material. Unfortunately, when this material was taken down at the source, that action did not federate to all linked instances, meaning that some instances are still showing this material.

Once we were made aware of this, we realized that this was likely not the only occurrence, so we started looking for other instances where this content might also still exist. We have identified more than 50 affected instances and have already reached out to many of them to inform them about this content and have it taken down.

Among these instances, r.nf and lemmy.one were some of the first to be informed, but even 2 months after the initial report there has been zero reaction from either instance. Neither instance appears to be moderated, as evidenced by posts on lemmy.one asking whether the instance is still maintained and by 2 month old spam in r.nf's main community.

The community that gets hit the hardest by this is [email protected], which is the only larger community across these instances. We recommend looking for alternative communities on other instances.

Due to the lack of action and response we have since also reported this directly to their hosting providers through Cloudflare, which includes an automatic report to NCMEC.

Even if this material does get taken down now, we don't currently believe that the instance operators are willing or able to moderate these instances properly, so we will keep them defederated unless they can convince us that they will moderate their instances more actively and provide usable abuse contacts that don't require going through their hosting provider.

We also defederate from other instances from time to time due to lack of moderation and unreachable admins, among other reasons. If you're interested in the reasons for our defederations, we aim to always document them on Fediseer. Be warned though, as this list contains mentions of or references to various disturbing or illegal material.

Most of those instances are either very small, don't interact with Lemmy much anyway, or explicitly state support for content that is incompatible with our policies.

We also usually try to reach out to affected instances prior to defederation if we believe that they may not intentionally be supporting the problematic content.

We have temporarily re-federated with lemmy.one to allow this post and https://lemmy.world/post/28173100 to federate to them. We're waiting for federation to catch up with the activities generated since we originally defederated a day ago before we defederate again.

Reliability of media uploads

We have recently been receiving some reports of media uploads not working from time to time. We have already addressed one of the underlying issues and are working on addressing another one currently. Please continue to let us know about issues like that to ensure that they're on our radar.

We're currently also working on improving our overall application monitoring to collect more useful information that helps us track down specific issues, improve visibility into errors, and hopefully identify potential performance issues.

Parallel federation

Back in Lemmy 0.19.6, Lemmy introduced the option to send federated activities in parallel. Without this, Lemmy only ever has one activity in the process of being transmitted to another instance at a time. While most instances don't have a large number of outgoing activities, we're at the point where instances far away from us can no longer keep up with our traffic, simply because of the physical latency of waiting for responses from the other side of the world.
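
As a rough back-of-the-envelope illustration (the 300 ms round-trip time below is an assumed example value, not a measurement of any particular instance), sending one activity at a time caps throughput at roughly one activity per round trip:

```python
# Why a single in-flight activity limits outgoing federation throughput.
# The RTT value is an assumed example, not a measurement.
rtt = 0.3                      # seconds per round trip to a far-away instance
one_at_a_time = 1 / rtt        # ~3.3 activities per second
two_parallel = 2 / rtt         # ~6.7 activities per second with 2 parallel sends

print(f"one send at a time: ~{one_at_a_time:.1f} activities/s")
print(f"two parallel sends: ~{two_parallel:.1f} activities/s")
```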

Some instances mitigated this by setting up an external federation queue near our instance to batch activities together, working around these limitations while this was not yet implemented in Lemmy and deployed on our end. Unfortunately, this also meant maintaining an additional server: a time investment, a few extra bucks to pay every month, and another component that could potentially break.

We enabled 2 parallel sends around a week ago, and aussie.zone, who had been almost constantly lagging behind by multiple days, have finally caught up with us again. We will continue to monitor this and, if needed, increase the number of parallel sends further in the future, but so far it looks like we should be fine with 2 for a good while.


edit: added section about parallel federation

1264
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Hello world,

as many of you may already be aware, there is an ongoing spam attack by a person claiming to be Nicole.

It is very likely that these images are part of a larger-scale harassment campaign against the person depicted in them.

Although the spammer claims to be the person in the picture, we strongly believe that this is not the case and that they're only trying to frame them.

Starting immediately, we will remove any images depicting "Nicole" and information that may lead to identifying the real person depicted in those images to prevent any possible harassment.
This includes older posts and comments once identified.

We also expect moderators to take action if such content is reported.

While we do not intend to punish people who post this once without being aware of the context, we may take additional action if they continue to post this content, as we consider that to be supporting the harassment campaign.

Discussion that does not include the images themselves or references that may lead to identifying the real person behind the image will continue to be allowed.

If you receive spam PMs, please continue reporting them, and we'll continue working on our spam detection to try to identify them early, before they reach many users.

 

Hello,

we will be updating pict-rs to the latest version in about 2 hours.

We expect a short downtime of 1-2 minutes during the planned migration window, as there are no major database changes involved.

Most users won't be affected by this, as the majority of our media is cached and served by Cloudflare. This should primarily only affect thumbnail generation and uploads of new media while the service is down.

You can convert this to your local time here: https://inmytime.zone/?iso=2025-03-28T22%3A00%3A00.000Z


The update has been completed successfully.

 

Hello,

as some of you may have noticed we just had about 25 minutes of downtime due to the update to Lemmy 0.19.10.

Lemmy release notes: https://join-lemmy.org/news/2025-03-19_-_Lemmy_Release_v0.19.10_and_Developer_AMA

This won't fix YouTube thumbnails for us, as YouTube banned all IPs belonging to our hosting provider.

We had intended to apply this update without downtime, as we're keen to apply the database migration that allows marking PMs as removed, given the recent spam waves.

Although this update contains database migrations, we expected to still be able to apply the migration in the background before updating the running software, as the database schema between the versions was backwards compatible. Unfortunately, once we started the migrations, we started seeing the site go down.

In the first few minutes we assumed that the migrations contained in this upgrade were somehow unexpectedly blocking more than intended but still processing, but it turned out that nothing was actually happening on the database side. Our database had deadlocked due to what appears to be an orphaned transaction, which didn't die even after we killed all Lemmy containers other than the one running the migrations.

While the orphaned transaction was pending, the schema migration was waiting for it to complete or be rolled back, so nothing was moving anymore. As the orphaned transaction also wasn't making any progress, everything started to pile up and fail. We're not entirely sure why the original transaction broke down; it was started about 30 seconds before the schema migration query, and the two running at the same time shouldn't have broken it.
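
For anyone curious how such a situation can be diagnosed: we haven't preserved the exact queries we ran, but a check along the lines of the sketch below, combining pg_stat_activity with pg_blocking_pids() (available since Postgres 9.6), shows which session is holding everything up. The connection string is hypothetical.

```python
# Sketch: list non-idle Postgres sessions together with the PIDs blocking them,
# to spot an orphaned transaction holding up a schema migration.
import psycopg2

conn = psycopg2.connect("dbname=lemmy")  # hypothetical connection string
with conn.cursor() as cur:
    cur.execute("""
        SELECT pid,
               pg_blocking_pids(pid) AS blocked_by,  -- PIDs holding the locks we wait on
               now() - xact_start    AS xact_age,    -- how long the transaction has been open
               state,
               left(query, 80)       AS current_query
        FROM pg_stat_activity
        WHERE state <> 'idle'
        ORDER BY xact_start
    """)
    for row in cur.fetchall():
        print(row)
```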

Lemmy has a "replaceable" schema, which is applied separately from the regular database schema migrations and runs every time a DB migration occurs. We unfortunately did not consider this replaceable schema in our planning; otherwise we would have realized that the upgrade would likely have a larger impact on the overall migration.

After we identified that the database had deadlocked, we resorted to restarting our postgres container and then running the migrations again. Once we restarted the database, everything was back online in less than 30 seconds, including first running the remaining migrations and then starting up all containers again.

When we tested this process on our test instance prior to deploying it to the Lemmy.World production environment, we did not run into this issue. Everything was working fine with the backend services running on Lemmy 0.19.9 and the database already upgraded to the Lemmy 0.19.10 schema, but the major difference there is the lack of user activity during the migration.

Our takeaway from this is to always plan for downtime for Lemmy updates that include database migrations, as it does not appear to be possible to apply them "safely" even when they seem small enough to theoretically be doable without downtime.

 

Hello,

we will be performing the long awaited update to Lemmy 0.19.9 tomorrow.

We are planning for around 1 hour of downtime between 16:00-17:00 UTC on the 16th of March.

You can convert this to your local time here: https://inmytime.zone/?iso=2025-03-16T16%3A00%3A00.000Z

You can find an overview of the changes in our previous announcement here and in the Lemmy release notes:


Update 16:50 UTC:

The upgrade was successfully completed at around 16:27 UTC, but we're still fighting with some performance issues after the upgrade. Our database and the outbound federation container are currently using significantly higher CPU than expected, which is still being investigated to identify the root cause.

[–] [email protected] 47 points 3 months ago (8 children)

a link has been added for your friend

 

Hello,

while preparing our upcoming Lemmy update, we will also be updating to a newer database version, which will provide additional performance benefits and functionality.

We are planning for around 10 minutes of downtime for the database update between 19:00-19:30 UTC on the 22nd of February. This is not yet the planned Lemmy update; we will announce that separately when we are ready.

edit: You can convert this to your local time here: https://inmytime.zone/?iso=2025-02-22T19%3A00%3A00.000Z

edit 2: The database upgrade has been completed successfully.

 

Hello World,

as many of you know, several newer Lemmy versions have been released since the one we are currently using.

As this is a rather long post, the TLDR is that we're currently planning for late January/early February to update Lemmy.World to a newer Lemmy release.

We're currently running Lemmy 0.19.3 with a couple patches on top to address some security or functionality issues.

As new Lemmy versions have been released, we've been keeping an eye on other instances' experiences with the newer versions, as well as tracking certain issues on GitHub, which might impact stability or moderation experience.

We updated to Lemmy 0.19.3 back in March this year. At that point, 0.19.3 had already been out for a little over a month and all the major issues that troubled the earlier 0.19 releases had been addressed.

Several months later, in June, Lemmy 0.19.4 was released with several new features. This was a rather big release, as a lot of changes had happened since the last release. Only 12 days later 0.19.5 was released, which fixed a few important issues with the 0.19.4 release. Unfortunately, Lemmy 0.19.5 also introduced some changes that were, and in part still are, not fully addressed.

Prior to Lemmy 0.19.4, regular users could see the contents of removed or deleted comments in some situations, primarily when using third party apps. Ideally, this would have been fixed by restricting access to the contents of removed comments to community moderators in the communities they moderate, as well as admins on each instance. Deleted comments are overwritten in the database after some delay, but they might still be visible prior to that. This is especially a problem when moderators want to review previously removed comments, either to potentially restore them or to understand the context of a thread with multiple removed comments. The Lemmy modlog does not always record individual entries for bulk-removed items; banning a user while also removing their content, for example, will only log the ban, not the individual posts or comments that were removed.

We were considering writing a patch to restore this functionality for moderators in their communities, but this is unfortunately a rather complex task, which also explains why it isn't a core Lemmy feature yet.

While admins can currently filter the modlog for actions by a specific moderator, this functionality was lost somewhere in 0.19.4. While this isn't something our admin team uses very frequently, it is still an important feature to have available for the times we need it.

These releases also included a few security changes to ActivityPub handling, which broke the ability to find e.g. Mastodon posts in Lemmy communities by entering the post URL in the search. They also caused issues with changes made to communities by remote moderators.

The 0.19.4 release also broke marking posts as read in Sync for Lemmy. Although this isn't really something we consider a blocker, it's still worth mentioning, as there are a lot of Sync for Lemmy users out there who won't have noticed this issue yet if they're only active on Lemmy.World. Over the last 2 weeks we've had nearly 5k active Sync for Lemmy users. This is unfortunately something that will break during the upgrade, as the API has changed in upstream Lemmy.

There are also additional issues with viewing comments on posts in local communities that appear to be related to the 0.19.4/0.19.5 releases, and these appear to be a lot more serious. There have been various reports of posts showing zero comments in Sync, while viewing them in a browser or another client shows various comments. It's not entirely clear to us right now what the full impact is and to what extent it can be mitigated by user actions, such as subscribing to communities. If anyone wants to research what is needed to restore compatibility, and potentially even propose a patch for compatibility with both the updated and the previous API version, we'll consider applying it as a custom patch on top of the regular Lemmy release.

If there won't be a Sync update in time for our update and we won't have a viable workaround available, you may want to check out [email protected] to find potential alternatives.

Several instances also reported performance issues after their upgrades, although these mostly seemed to last only a relatively short time after the upgrade and were not persistent.

Lemmy 0.19.6 ended up getting released in November and introduced quite a few bug fixes and changes again, including filtering the modlog by moderator. Due to a bug breaking some DB queries, 0.19.7 was released just 7 days later to address that.

Among the issues fixed in this release were being able to resolve Mastodon URLs in the search again and remote moderators being able to update communities again.

0.19.6 also changed the way post thumbnails are generated, which resulted in thumbnails missing on various posts.

A month later, in December, 0.19.8 was released.

One of the issues addressed by 0.19.8 was Lemmy once again returning the content of removed comments for admins. For community moderators this functionality has not yet been restored, due to the complexity of having to check mod status in every community present in the comment listing.

At this point it seems that most of the issues have been addressed, although there seem to still be some remaining issues relating to thumbnails not reliably being created in some cases. We'll keep an eye on any updates on that topic to see if it might be worth waiting a little longer for another fix or possibly deploying an additional patch even if it may not be part of an official Lemmy release yet at the time.

While we were backporting some security/stability related changes, including a fix for a bug that can break federation in some circumstances when a community is removed, we accidentally reverted this patch while applying another backport, which resulted in our federation with lemmy.ml breaking back in November. This issue was already addressed upstream a while back, so other instances running more recent Lemmy versions were not affected by this.

Among the new features released in the Lemmy versions we have missed out on so far, here are a couple of highlights:

  • Users will be able to see and delete their uploads on their profile. This will include all uploads since we updated to 0.19.3, which is the Lemmy version that started tracking which user uploaded media.
  • Several improvements to federation code, which improve compatibility with WordPress, Discourse and NodeBB.
  • Fixed signed fetch for federation, enabling federation with instances that require linked instances to authenticate themselves when fetching remote resources. Not having this is something we've seen cause issues with a small number of Mastodon instances that require it.
  • Site bans will automatically issue community bans, which means they federate more reliably.
  • Deleted and removed posts and comments will no longer show up in search results.
  • Bot replies and mentions will no longer be included in notification counts when a user has blocked all bots.
  • Saved posts and comments will now be returned in the reverse order of saving them rather than the reverse order of them being created.
  • The image proxying feature has evolved to a more mature state. This feature intends to improve user privacy by reducing requests to third party websites when browsing Lemmy. We do not currently plan on enabling it with the update, but we will evaluate it later on.
  • Local only communities. We don't currently see a good use for these, as they will prevent federation of such communities. This cuts off users on all other instances, so we don't recommend using them unless you really want that.
  • Parallel sending of federated activities to other instances. This can be especially useful for instances on the other side of the world, where latency introduces serious bottlenecks when only sending one activity at a time. A few instances have already been using intermediate software to batch activities together, which is not standard ActivityPub behavior, but it allows them to eliminate most of the delays introduced by latency. This mostly affects instances in Australia and New Zealand, but we've also seen federation delays with instances in the US from time to time. This will likely not be enabled immediately after the upgrade, but we're planning to enable it shortly after.

edit: added information about sync not showing comments on posts in local communities

 

Hello World,

today, @[email protected] provided an update to the media upload scanner we're using. This should reduce the number of false positives being blocked from upload. We have now deployed the updated version.

While we do not have stats on false positives from before we implemented the scan at upload time, those changes did not change the overall data availability for us: flagged images were still deleted, they were just still served from our cache in many cases. Moving this to the upload process has made it much more effective, as previously images could persist in Cloudflare's cache for extended periods of time, while now they won't get cached in the first place.

Over the last week, we've seen roughly 6.7% of around 3,000 total uploads rejected, which works out to about 200 uploads. We'll be able to compare numbers in a week to confirm that this has indeed improved the false positive rate.

[–] [email protected] 42 points 5 months ago (4 children)

You can call me Leo. Leo Wadmin.

[–] [email protected] 35 points 5 months ago (3 children)

No, L.W is not running under my desk :-)

[–] [email protected] 5 points 6 months ago

it arrived a few minutes ago, federation is working again (for now)

[–] [email protected] 28 points 8 months ago (8 children)

The site admins are below the org operations team, so you can "go to their boss" / "talk to a manager" if you have an issue you feel is being handled unfairly.

[–] [email protected] 17 points 8 months ago (3 children)

We'll review this with the team.

[–] [email protected] 2 points 8 months ago

We'll look into updating it. Thanks for the feedback ❤️

[–] [email protected] 2 points 8 months ago (1 children)

We're doing the best we can to consider everyone.

[–] [email protected] 1 points 8 months ago (3 children)

Getting advice from a shitpost is probably not a good idea.

[–] [email protected] 5 points 8 months ago

Thank you, the team is trying our best to learn and grow.

[–] [email protected] 25 points 8 months ago (1 children)

Thank you for being understanding about it 🙏

[–] [email protected] 17 points 8 months ago

Thanks, we're trying to do our best.
