Communities are already some kind of federation.
This is an excellent interpretation. On federated platforms there are federations inside federations. It's superfluous, really, but it gets worse where communities get duplicated across instances.
Ok, but nobody has the power to enforce the rules, right? How do you deal with trolling and spamming? Does every user have to block every troll one by one?
Well, it could depend on what you mean by 'enforce'. There are moderators, and they can enforce rules; it's just that each user is put in a kind of admin position to pick and choose their moderators. You can also have groups of moderators maintained by other users for anyone to adopt, which is part of how discussions.app does its 'communities'; the other part is that each community chooses which #tagged content to include. The instance could also enforce certain moderators, and in fact some moderators must be set as defaults to keep the platform clean. The idea is that these could be changed individually or via lists of moderators, or users can simply live inside the curated communities. The other thing is that different instances could do it differently, each with its own set of default moderators.
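The model above can be sketched in a few lines of Python. This is a minimal illustration, not discussions.app's actual design: the `Post`, `Moderator`, and `Client` names, and the idea of filtering a feed through a list of moderators the user controls, are all assumptions made for the example. The instance ships a default moderator that the user is free to keep, remove, or extend.

```python
# Hypothetical model: each user holds a list of moderators (some instance
# defaults, some self-chosen) and the client filters the feed through them.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    author: str
    text: str

@dataclass
class Moderator:
    name: str
    blocked_authors: set = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        # A real moderator could hide tags, behaviours, etc.; author
        # blocking is the simplest possible policy.
        return post.author not in self.blocked_authors

@dataclass
class Client:
    # Instance-supplied defaults plus anything the user adds; the user
    # can drop any entry, including the defaults.
    moderators: list

    def visible(self, posts):
        return [p for p in posts if all(m.allows(p) for m in self.moderators)]

# Instance ships a default anti-spam moderator; the user adds their own list.
default_mod = Moderator("instance-antispam", blocked_authors={"spambot"})
my_mod = Moderator("my-list", blocked_authors={"troll42"})

client = Client(moderators=[default_mod, my_mod])
feed = [
    Post("alice", "hello"),
    Post("spambot", "BUY NOW"),
    Post("troll42", "bait"),
]
print([p.author for p in client.visible(feed)])  # ['alice']
```

Swapping `my_mod` for a community-maintained moderator list is the same operation, which is why supply of moderation can be high: anyone can publish a `Moderator` and any user can adopt it.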
The whole point of this type of platform/moderation is to solve problems seen on others. You won't be able to own a topic or community as such, because users must consent for you to have power. The reality is that people are lazy, stupid, and will consent to crazy things, as seen over the last year. So because moderators provide a service, stopping spam and abusive behaviour, people will use them and they will have power. What this really prevents is moderators being bad actors, and also people lacking good moderation, or situations with no moderation when it's needed. Because anyone can moderate, there should be a much higher supply of moderation and of types of moderation. The types of moderation are where I think things get interesting: there is a heap of behaviours that could be hidden, and platforms would be much nicer places, but really it's up to the users what they want and how they experience things.
Blocklists with torrents were used to try to prevent legal implications, among a few other reasons, all of which are largely useless because peers on blocklists can simply move to non-blocked IPs, especially peers with the resources to gather evidence.
I disagree, but it is a point worth exploring. The likely cause is that the federated instances exert control over the content rather than just storing and relaying it for the users to deal with.
Yes, you could, but it still depends on how the content is distributed: peers on the whitelist could act as relays to those not on the whitelist. This would risk them being removed, but when dealing with so many peers I just don't see lists like that segmenting the network very much. They may as well fork, and the desire to remain connected to the larger network will be too strong for most users. You could also have underlying blockchains or hash tables that make forking the network unpromising.
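The relay point can be shown with a toy reachability check. The peer names and graph here are invented for the example; the only claim being illustrated is that a whitelist restricting an instance's direct connections does not segment the network once a single whitelisted peer relays traffic onward.

```python
# Toy peer graph: the instance connects only to whitelisted peers, but one
# whitelisted peer quietly relays to a non-whitelisted peer.
from collections import deque

def reachable(edges, start):
    """Breadth-first search over an undirected peer graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Direct links respect the whitelist, so the outsider is cut off...
edges = [("instance", "peer_a"), ("instance", "peer_b")]
assert "outsider" not in reachable(edges, "instance")

# ...until peer_b acts as a relay, and content flows to it anyway.
edges.append(("peer_b", "outsider"))
assert "outsider" in reachable(edges, "instance")
```

One cooperative relay is enough, which is why segmentation by list only works if every whitelisted peer also enforces the list, effectively a fork.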
Section 230 and the DMCA should still apply. Enforcing becomes much harder with p2p, but so does defending. I think the legal system makes having free speech an interesting problem, in spite of the constitution saying it should be possible. Having speech that governments, companies and many others don't want is part of the plan for these networks; there are a great many authorities unhappy with the freedoms the internet has allowed.
Yes, but I think keeping the text part of these platforms alive should be much easier. A torrent only needs one peer (a seed) with a full copy; a social network can easily have 50,000 users, which is a lot of resistance and redundancy.
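The redundancy argument is simple probability. As a back-of-the-envelope sketch, assume each copy-holder is independently online with some probability (the 20% figure below is an arbitrary assumption, not data): the chance that at least one copy is reachable is one minus the chance that all holders are offline at once.

```python
# Survival chance of a piece of content replicated across n holders,
# each independently online with probability p_online (assumed value).
def survival_probability(n_holders: int, p_online: float) -> float:
    """Chance at least one copy is reachable: 1 - P(all offline)."""
    return 1 - (1 - p_online) ** n_holders

# A single seed is fragile; even a small fraction of many users is not.
print(round(survival_probability(1, 0.2), 4))    # 0.2
print(round(survival_probability(100, 0.2), 4))  # 1.0
```

With even 100 of those 50,000 users holding a copy at a 20% online rate, the probability of total loss is around (0.8)^100, vanishingly small, which is the resistance being described.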