• sparr@lemmy.world · 6 months ago (edited)

    I am sad that the current generation of federated social media/networks still doesn’t have much, if any, implementation of web of trust functionality. I believe that’s the only solution to bots/AI/etc content in the future. Show me content from people/accounts/profiles I trust, and accounts they trust, etc. When I see spam or scams or other misbehavior, show me the trust chain connecting me to it so I can sever it at the appropriate level instead of having to block individual accounts. (e.g. “sorry mom, you’ve trusted too many political frauds, I’m going to stop trusting people you trust”)
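The web of trust described here can be modeled as a directed graph of "who trusts whom". A minimal sketch of the "show me the trust chain so I can sever it" idea, with all account names hypothetical:

```python
from collections import deque

# Hypothetical trust graph: each account maps to the set of accounts it trusts.
trust = {
    "me": {"mom", "alice"},
    "mom": {"uncle_bob"},
    "uncle_bob": {"scammer"},
    "alice": set(),
}

def trust_chain(graph, start, target):
    """Return the shortest trust path from start to target, or None (BFS)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], ()):
            if nxt == target:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# When spam shows up, surface the chain that let it in...
chain = trust_chain(trust, "me", "scammer")  # ["me", "mom", "uncle_bob", "scammer"]

# ...and sever it at the chosen level: stop following mom's trust edges.
trust["me"].discard("mom")
```

A real implementation would distinguish "I trust mom directly" from "I follow mom's trust recommendations", so severing the chain wouldn't require unfriending her; this sketch collapses the two for brevity.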

      • EldritchFeminity@lemmy.blahaj.zone · 6 months ago

        This concept reminds me of a certain browser extension that marks trans allies and transphobic accounts/websites based on aggregated user reports, with thresholds that mark transphobes red and trans allies green.

    • SorteKanin · 6 months ago

      I guess the question is how specifically you implement such a system, in this case for software like Lemmy. Should instances have a trust level with each other? Should you set a trust when you subscribe to a community? I’m not sure how you can make a solution that will be simple for users to use (and it needs to be simple for users, we can’t only have tech people on Lemmy).

      • sparr@lemmy.world · 6 months ago

        For the simplest users, my initial idea is just a binary “do you trust them?” for each person (aka “friends”) and non-person (aka “follow”), and maybe one global binary of “do you trust who they trust?” that defaults to yes. Anything more complex than that can be optional.
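That minimal model could be sketched as a per-user settings object: a binary trust flag per account plus one global transitive toggle. All names here are assumptions, not an actual Lemmy API:

```python
from dataclasses import dataclass, field

@dataclass
class TrustSettings:
    """Hypothetical per-user trust settings: the simplest possible model."""
    trusted: set = field(default_factory=set)      # accounts I trust ("friends"/"follows")
    distrusted: set = field(default_factory=set)   # accounts I explicitly de-trust
    trust_transitively: bool = True                # global "trust who they trust", default yes

    def trusts(self, account, endorsers=()):
        """endorsers: accounts that themselves trust `account` (assumed to be
        supplied by the server, since the client can't see the whole graph)."""
        if account in self.distrusted:
            return False
        if account in self.trusted:
            return True
        # one hop of transitive trust, gated on the single global toggle
        return self.trust_transitively and any(e in self.trusted for e in endorsers)

me = TrustSettings(trusted={"mom"})
me.distrusted.add("spammer")
```

The point of the single global toggle is that a non-technical user only ever answers yes/no questions; per-edge weights or depth limits stay opt-in.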

        • SorteKanin · 6 months ago

          But how does this work when you follow communities? Do you need to trust every single poster in a community?

          • sparr@lemmy.world · 6 months ago

            You’d see posts in a community/group/etc based on your trust of the community, unless you’ve explicitly de-trusted the poster or you trust someone who de-trusts them (and you haven’t broken that chain).
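A rough sketch of that visibility rule, with hypothetical names throughout: community trust admits the post, explicit or one-hop inherited de-trust vetoes it. (A full implementation would follow de-trust through unbroken chains recursively; only one hop is shown here.)

```python
# Hypothetical viewer state: trusted accounts, explicit de-trusts,
# and the communities the viewer trusts/subscribes to.
viewer = {
    "trusted": {"alice"},
    "distrusted": set(),
    "trusted_communities": {"technology"},
}

# Who each trusted account has de-trusted (one hop only, for brevity).
distrust_of = {"alice": {"spammer"}}

def is_visible(post, viewer):
    poster = post["author"]
    if poster in viewer["distrusted"]:
        return False  # explicit de-trust always wins
    if any(poster in distrust_of.get(friend, set()) for friend in viewer["trusted"]):
        return False  # de-trust inherited from someone the viewer trusts
    # otherwise visibility comes from trust in the community itself
    return post["community"] in viewer["trusted_communities"]
```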

            • SorteKanin · 6 months ago

              Right, so if I have no connection to someone else, it’d be “neutral” and I’d see the post. If I trust them transitively, then it would be a trusted post and if I distrust them transitively, it would be a distrusted post.

              I think implementing such a thing would not only be complicated but also quite computationally demanding - I mean you’d need to calculate all of this for every single user?
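The cost concern can be made concrete: each user's transitive trust set is a graph traversal, so recomputing it naively for every user is O(users × edges) in the worst case. A minimal sketch with assumed names (real systems would cache these sets and update them incrementally as edges change, rather than recompute from scratch):

```python
from collections import deque

def transitive_trust(graph, user):
    """All accounts reachable from `user` along trust edges (BFS)."""
    reached = set()
    queue = deque(graph.get(user, ()))
    while queue:
        account = queue.popleft()
        if account not in reached:
            reached.add(account)
            queue.extend(graph.get(account, ()))
    return reached

# Tiny hypothetical graph; one such traversal per user is the expensive part.
graph = {"me": {"mom"}, "mom": {"uncle_bob"}, "uncle_bob": {"pundit"}}
```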

    • Blaze@reddthat.com · 6 months ago

      Definitely something that will emerge in the future once we inevitably get bots here too.

    • jkrtn@lemmy.ml · 6 months ago

      Yes! Web of trust is the only way; everything else can be scammed. I am kinda wondering if it could be invite-based, and if severing could be automated for social media. “We just banned a third person who came in on your invitations. Goodbye.”