  • an “us/them” mentality doesn’t help much. this is surely a big fuss for a 3-year-old PR, but you’re misrepresenting the situation:

    • serenityOS (and consequently ladybird) has rather strict rules about the wording of documentation: they enforce language style and even date formats
    • that wasn’t a report, but a PR: rather than pressing ~50 keys on his keyboard and then the “lock” button, he could have just pressed the “merge” button and integrated those ~10 characters of changes
    • github issues are routinely used to fix wording: documentation often lives in git, and it’s useful to have it version controlled. even plain documents with no attached source are kept in git so that their edit history stays accessible and manageable (see the sketch below)
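
    not the project’s actual workflow, just a tiny illustration of why keeping plain documents in git pays off: the whole edit history of a file is one command away (the path here is hypothetical; run inside any git checkout):

    ```python
    # sketch: listing a document's edit history from git
    # "docs/CONTRIBUTING.md" is a hypothetical path
    import subprocess

    log = subprocess.run(
        ["git", "log", "--oneline", "--follow", "--", "docs/CONTRIBUTING.md"],
        capture_output=True, text=True, check=True,
    )
    print(log.stdout)  # one "hash message" line per edit to the file
    ```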

    folks are making a big fuss, but Andreas really set himself up: just say sorry and change 4 words, such a weird hill to die on





  • nice whataboutism, “they should do this instead”. oh they do, but you don’t care when they do.

    the delivery didn’t deface anything. if you want to focus on the delivery and once again ignore the message, at least be honest. willing or not, messages like this do BP’s bidding


  • comparing with mullvad is ridiculous and just shows how deeply you drank the apple juice without questioning it

    • mullvad doesn’t hold your contact info, just like apple, right?
    • mullvad is open source, so you can independently verify which data is being sent, just like apple, right?
    • mullvad claims to not log anything, just like apple and their csam thing on icloud, right?

    “leave the multi-billion-dollar corporation alone” energy



  • i really disagree with most of your points. a “server” is some machine working for the client. your proposal isn’t getting rid of servers, you’re just making every user responsible for being their own server.

    this mostly feels like “i’m annoyed my instance is filtering content and lacks replies”. have you tried fedilab? it allows fetching directly from the source, bypassing your instance and fetching all replies. i think that’s kind of anti-privacy, but you may like it

    if you’re interested, here’s a wall of text with more arguments for my points (sorry, i wanted to be concise but really failed; i may turn this into a reply blog post soon:tm:)


    Federation is not the natural unit of social organization

    you argue that onboarding is hard, as if picking a server were signing a contract. new users can go to mastosoc and then migrate from there: AP has a great migration system. also, federation is somewhat the natural unit: you will never speak to all 8B people, but you will discuss with your local peers, and your ideas may spread from there. somewhat fair points, but kind of overblown

    Servers are expensive to operate

    you really can’t get around this: even if you make every user handle their own stuff, every user will have their own database and message queue. every user will receive each post in their message queue, process it and cache it in their db. that’s such a wasteful design: you’re replicating once for every member of the network
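
    a quick back-of-envelope sketch (every number here is made up, just to show the shape of the cost): with a personal server per user, one popular post gets delivered, processed and stored once per follower; with shared instances, once per instance:

    ```python
    # illustration with assumed numbers: replication cost of one popular post
    followers = 5_000        # people following the author (assumed)
    post_bytes = 2_048       # a small AP Note plus metadata (assumed)
    instances = 50           # servers those followers share (assumed)

    # "every user is their own server": each follower's server
    # receives the post in its queue, processes it, stores a copy
    self_hosted_kb = followers * post_bytes / 1_000

    # shared instances: one delivery and one cached copy per server
    shared_kb = instances * post_bytes / 1_000

    print(f"server-per-user: {followers} deliveries, {self_hosted_kb:.0f} kB stored")
    print(f"shared servers:  {instances} deliveries, {shared_kb:.0f} kB stored")
    ```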

    We should not need to emulate the fragmentation of closed social networks

    absolutely true! this should get handled by software implementers, AP already allows intercompatibility, we don’t need a different system, just better fedi software

    The server is the wrong place for application logic

    this is really wrong imo, and it’s the crux of my critique. most of your complaints boil down to caching: you only see posts cached on a profile and in a conversation. it can’t work any other way, so how could we solve it?

    • you mention a global search: how do we do that? a central silo which holds every post ever made, indexed for search? who would run such a monster? and if it existed, why wouldn’t everyone just connect there to get the best experience? that’s centralization
    • again on global search: should all servers ask all other servers? who keeps the list of all servers? again centralized, and also a huge waste of resources: with every query you’re invoking every fedi server for an answer (rough numbers sketched after this list)
    • even worse, you mention keeping everything on the client, but how do you do that? my fedi instance db is around 30G, and i’m on a single-user instance which only sees posts from my follows, definitely not a global db. is every user supposed to store hundreds of GBs to have their own “local global db” to search? why not keep our “local global dbs” shared in one location, so that we deduplicate posts and can contribute to archiving? something like a common server for me and my friends?
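
    the fan-out math from the second point, with assumed numbers:

    ```python
    # assumed numbers: every search fanned out to every known server
    known_instances = 20_000      # order of magnitude of fedi servers (assumed)
    users = 1_000_000             # searching users (assumed)
    queries_per_user_day = 10     # searches per user per day (assumed)

    fanout_requests = known_instances * users * queries_per_user_day
    print(f"{fanout_requests:.1e} inter-server requests per day")  # 2.0e+11
    ```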

    also, if the client is responsible for keeping all its data, how do you sync across devices? in some other reply you mention couchdb and pouchdb, but that sounds silly for fedi: if we are 10 users, should we all host our pouchdbs on a server, each with the same 10 posts? wouldn’t it be better to keep the posts once and serve them on demand? you save storage on both the server and all clients and get the exact same result
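
    that 10-user example in numbers (post size assumed): a pouchdb replica per user stores everything tenfold, a shared server stores it once:

    ```python
    # the scenario above: 10 users syncing the same 10 posts
    users, posts = 10, 10
    post_kb = 2                                   # size per post, assumed

    replicas_kb = users * posts * post_kb         # every pouchdb holds all posts
    shared_kb = posts * post_kb                   # one copy, served on demand

    print(f"pouchdb per user:  {replicas_kb} kB stored")  # 200 kB
    print(f"one shared server: {shared_kb} kB stored")    # 20 kB
    ```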

    having local dbs for each client also wouldn’t solve broken threads or profiles: each client still needs to see every reply and old post. imagine if every fedi user fetched every old post every time they followed someone: that would be a constant DoS. by having one big server shared across multiple people, you’re increasing the chance of finding replies already cached rather than having to go fetch them

    lastly, security: you are assuming a well-intentioned fedi, but there are bad actors. i don’t want my end device connecting to every instance under the sun. i made a server which only holds fedi stuff and at worst will crash or leak private posts. my phone/pc holds my mail and payment methods; am i supposed to just go fetching far and wide from my personal device as soon as someone delivers me an activity? no fucking way! the server is a layer of defense

    networks are smarter at the edges

    the C2S AP api is really just a way to post, not much different from using the mastodon api. as said before, content discovery on every client is madness, but timeline/filter management is absolutely possible. is it really desirable, though? the megalodon app lets you manage local filters for your timeline, but that’s kind of annoying because you end up with out-of-sync filters across devices. same for timelines: i like my lists synced, honestly, but to each their own; filters/timelines on the client should already be possible.
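
    to make the “just a way to post” point concrete, this is roughly what posting through the mastodon REST api looks like (instance and token are placeholders; a sketch, not a full client):

    ```python
    # minimal sketch: creating a status via the mastodon api
    # YOUR_INSTANCE / YOUR_TOKEN stand in for a real instance and
    # an app token with the write:statuses scope
    import requests

    resp = requests.post(
        "https://YOUR_INSTANCE/api/v1/statuses",
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        data={"status": "hello fedi", "visibility": "unlisted"},
    )
    resp.raise_for_status()
    print(resp.json()["url"])  # link to the freshly created post
    ```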

    you mention cheaper servers, but only because you’re delegating costs to each client, and the “no storage” idea conflicts with the couchdb thing you mentioned somewhere else. servers should cache; caching is more efficient on one server than on every client.

    a social web browser, built into the browser

    i’m not sure what you’re pitching here. how are AP documents served to other instances from your browser? does your browser need to deliver activities to other instances? is your whole post history just stored in localstorage, deleted if you clear site data? are you still supposed to buy a domain (AP wants domains as identities), and where are you going to point it?




  • thanks for saying this! i really don’t want to victim blame itsfoss for getting traffic spikes, but if you can’t handle ~20MB in one minute (~330kB/s) of traffic, you’re doing something really, really wrong and you should look into it, especially if you want to distribute content. crying “don’t share our links on mastodon” also sounds like tilting at windmills: block the mastodon UA and be done with it, or stop putting images in your link previews for mastodon, or drop link previews completely. a “100 mb DDOS” is laughable at best; nice amplification calculation, but that’s still 100 megs
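
    the arithmetic, for anyone checking:

    ```python
    # ~20 MB of traffic spread over one minute, converted
    mb, seconds = 20, 60

    bytes_per_second = mb * 1_000_000 / seconds
    print(f"{bytes_per_second / 1_000:.0f} kB/s")          # ~333 kB/s
    print(f"{bytes_per_second * 8 / 1_000_000:.1f} Mbps")  # ~2.7 Mbps
    ```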


  • consider that caching happens at thousands of levels on the internet. every centralized site has its content replicated many, many times in geo-local caches, proxies and even local browsers. caching is a very core concept of the internet.

    others often bash AP because it replicates a lot, but that’s kind of like explicit caching: if the whole fediverse fetched a post from its source, millions of requests would constantly beat small servers down. big servers cache the content they intend to distribute and absorb the traffic spike instead of the small instance. small instances, on their end, don’t need to replicate as much and can rely more on bigger instances, maybe cleaning their cached content often and refetching when necessary. replication is a feature, not a design flaw!
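
    a toy model of why this replication helps (numbers assumed): without per-instance caching, every reader hits the origin; with it, the origin sees each instance once:

    ```python
    # assumed numbers: fetches hitting a small origin site for one post
    instances = 10_000          # fedi servers whose users see it (assumed)
    readers_per_instance = 200  # people reading it on each server (assumed)

    no_cache_hits = instances * readers_per_instance  # every client hits origin
    cached_hits = instances                           # each server fetches once

    print(f"no caching:       {no_cache_hits:,} origin hits")  # 2,000,000
    print(f"instance caching: {cached_hits:,} origin hits")    # 10,000
    ```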