• 2 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • It’s frustrating, because I have pronouns after my name and I dislike hexbear… a lot. Having users set pronouns that get automatically attached to their usernames is a good idea.

    Their behaviour has made me constantly check whether people with pronouns after their names are part of hexbear before engaging in any threads, because of the stress of dealing with them :/. Sometimes I do engage anyway and immediately regret it /shrug

    It is depressing, because normally displaying pronouns like this signals trans supportiveness, so I feel better about conversing with people who have them in their names. Hexbear has ruined this because of their behaviour around most other topics, and sometimes even trans topics.

    Just hope Jerboa gets instance-blocking features soon ;p, then I can block them on both my lemmy accounts.


  • Something that might be useful long term is training an AI model and releasing the weights, so that admins can use it to check images for CSAM. The main problem is finding a way to do this without storing those kinds of images or videos :/

    My understanding is that right now, the main mechanisms involved use several central databases of perceptual hashes of known CSAM material. The problem is that this ends up being a whack-a-mole solution, and at least in theory governments could use these databases to censor copyrighted or more generally “unapproved” content, though I imagine such a db would lose trust quickly, and I’m not aware of this being an issue in practice.
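
    The perceptual-hash matching these databases rely on can be sketched in a few lines. This is a purely illustrative toy (a tiny “average hash” over an 8x8 grayscale thumbnail, matched by Hamming distance); real systems like PhotoDNA are far more robust, and none of these names come from an actual API:

    ```python
    # Toy "average hash" (aHash) sketch, assuming an 8x8 grayscale
    # thumbnail given as a flat list of 64 pixel values (0-255).
    def average_hash(pixels):
        avg = sum(pixels) / len(pixels)
        # One bit per pixel: 1 if brighter than the mean, else 0.
        return sum(1 << i for i, p in enumerate(pixels) if p > avg)

    def hamming_distance(h1, h2):
        # Number of differing bits between the two hashes.
        return bin(h1 ^ h2).count("1")

    def is_known(candidate_hash, db_hashes, threshold=5):
        # A "match" is distance below a small threshold, not exact equality,
        # so slightly altered copies of known material still match.
        return any(hamming_distance(candidate_hash, h) <= threshold
                   for h in db_hashes)
    ```

    Matching on a small distance threshold rather than exact equality is what catches re-encoded or lightly edited copies, but it is also why the approach stays whack-a-mole: genuinely novel material has no hash in the db to match against.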

    One potential solution is “opportunistic training”: when new CSAM material gets identified and submitted to the FBI or these databases by server admins, a small amount of training is done on the AI weights before the image or video is deleted, so that only a perceptual hash remains. Likewise, if a picture is reported as “known CSAM” by these dbs, the same thing is done with that image before it gets deleted.

    To avoid false positives, you also train the AI on general non-CSAM content.
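
    That train-then-delete loop could look roughly like the toy below. Every function here is an illustrative stand-in I made up, not a real pipeline or API:

    ```python
    # Toy sketch of "opportunistic training": run one small update on a
    # confirmed item, keep only its perceptual hash, then drop the media.
    def tiny_hash(pixels):
        # Stand-in for a real perceptual hash (see aHash/pHash).
        return sum(pixels) % 997

    def train_step(weights, pixels, label):
        # Stand-in for one gradient step: nudge each weight toward the label.
        lr = 0.01
        return [w + lr * (label - w) * p for w, p in zip(weights, pixels)]

    def opportunistic_update(weights, pixels, label, hash_db):
        weights = train_step(weights, pixels, label)  # brief training pass
        hash_db.add(tiny_hash(pixels))                # retain only the hash
        del pixels                                    # media is then deleted
        return weights
    ```

    The point of the structure is ordering: the model update and the hash extraction both happen before deletion, so nothing but the weights and the hash ever needs to persist.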

    Ideally this process would be fully automated so no-one has to look at that shit - over time, you’d theoretically get a neural net capable of identifying CSAM reliably, with few or no false positives or false negatives. Admins could also try some kind of distributed training, where each contributes weight deltas from local training, or each builds up LoRA-style improvement modules that others can combine, to reduce the bandwidth needed to share modifications.
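
    The weight-delta idea works roughly like federated averaging: each admin shares only the difference between their locally trained weights and the common base, and the deltas get averaged into the shared model. A minimal sketch, with plain lists standing in for tensors and all names my own, not a real framework:

    ```python
    # Toy federated-averaging sketch for the "weight delta" sharing idea.
    def delta(base, trained):
        # What an admin would actually share: trained minus base weights.
        return [t - b for b, t in zip(base, trained)]

    def apply_average_deltas(base, deltas):
        # Merge everyone's contributions by averaging the deltas.
        n = len(deltas)
        return [b + sum(d[i] for d in deltas) / n
                for i, b in enumerate(base)]
    ```

    Only the deltas travel over the network, which is also the intuition behind sharing small LoRA modules instead of full weights.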


  • The only reason this happens is that capitalism ties survival to labour. Automation should be liberating us, yet the structures of capitalism and the “protestant work ethic” cause it to do the opposite :/. People act this way because otherwise the greater efficiency becomes a threat to their ability to survive.

    None of what you said is an argument against worker democracy, but rather an argument against the fundamental models of capitalism and “free” market ideology (or more generally, any system and ideology which gatekeeps access to basic resources behind people’s perceived ability to provide “value” or perform labour).


  • 196 is a random-content community with the simple rule of “post before you leave”, so it’s filled with memes.

    It’s also a very trans-friendly place. But there was a thread recently full of “just asking questions”, “trans people are just oversensitive”, and “I’m not a bigot, but most trans women have a chip on their shoulder so I am no longer friends with them” kinda stuff, on a post a (trans) mod made complaining about people reporting a pretty questionable comment.

    Even if people disagreed over the original comment, the thread about it ended up being transphobic as fuck.

    So presumably the admins of lemmy.blahaj.zone (a trans-run instance, which hosts this main c/196 community on the promise it would be very trans-supportive) noticed the lack of moderation in that transphobic thread and are doing something about it.