Federation vs Centralization is political, not technical
Comment on this article: https://hachyderm.io/@mitsuhiko/109342699270840040
If you don't account for the actors and variables involved in a problem, your analysis may reach conclusions that look logical, but it will be biased and spurious. That is the case with the linked article.
The author treats moderation and federation as a technical problem, and that misses the point. This is a political, social, and cultural problem, and those dimensions need to be addressed before the technical one.
A federated social network defines its moderation rules through a community, instead of through revenue-driven decisions like the centralized monopolistic companies. Companies like Meta or Twitter do not care about free speech or freedom; that is just a sales pitch. They survive as long as they make money from how people interact with sponsors, so moderation on those networks is a PR problem. On Mastodon, moderation is a social problem. How many people are involved, and in what way? These questions need to be asked and answered by the community. If we disagree with how a server is run, we can move our account to another server without losing our content.
Hate speech will arise everywhere on the internet because this is a complex, worldwide phenomenon. Taking hate speech down is not enough to fight it; that is like trying to cover the sun with your hand, or to sweep the sand off a beach. If you want to fight hate speech, the answer is not to block it but to explain, as many times as necessary, why it is harmful. And if the people on a server still want to ban an entire other server, they can: Mastodon allows blocking or silencing other servers. The tools are there.
In terms of community moderation, there are many moderation patterns that can be built into software, some of which have existed since the '90s: silent bans based on reports; removing a person's voice in a conversation (flood control); discussion boards for talking through specific cases; community moderators who are accountable, named in the message when they edit or remove it. We learned a lot about moderation in the '90s and the first decade of this millennium.
You cannot promote or prevent people's behavior, and you cannot automate culture. Would it be easier if we didn't get involved in moderation and delegated it to an algorithm or to "someone else"? Yes, of course. But if something bothers you in a social space, you should get involved, and the software must provide the tools to bring you on board. This is a real change of mindset: I am not a mere content consumer, I am an active member of a community, so I have rights and duties.
High availability is a problem if you lose money when your site is down. If you participate in a community, you understand that there are people behind the systems trying to make them work, and if there is some downtime, that's OK; we cannot demand MORE from those people, and if we want to make things better, we can get involved. If you worry about losing your content, you can ask how a server is funded, how many people work to keep it up, and so on. Is there a risk of someone not paying a bill? Of course! But do you have more control over a company like Twitter? You don't. If Elon shuts it down because "it's not profitable", there is nothing you can do. On Mastodon you can move your account to another server. You can even start your own!
In terms of software architecture, federation WORKS. BitTorrent works. Other decentralized and distributed systems work. We can live with the limitations imposed by the CAP theorem simply by accepting that there will be downtime. Centralized services always use some sort of distributed system in the backend anyway, because there is no way to handle that kind of throughput without making compromises.
Criticizing is easier than getting involved. The scalability objection is just a straw man: we have been building distributed software since the '60s, and the problems are always the same. At this point, centralized vs. federated software is a social and political problem, not a technical one.