Several months ago Beehaw received a report about CSAM (i.e. Child Sexual Abuse Material). As an admin, I had to investigate this in order to verify and take the next steps. This was the first time in my life that I had ever seen images such as these. Not to go into great detail, but the images were of a very young child performing sexual acts with an adult.
The explicit nature of these images, the gut-wrenching shock and horror, the disgust and helplessness were very overwhelming to me. Those images are burnt into my mind and I would love to get rid of them but I don’t know how or if it is possible. Maybe time will take them out of my mind.
In my strong opinion, Beehaw must seek a platform where NO ONE will ever have to see these types of images. A software platform that makes it nearly impossible for Beehaw to host, in any way, CSAM.
If the other admins want to give their opinions about this, then I am all ears.
I simply cannot move forward with the Beehaw project unless this is one of our top priorities when choosing where we are going to go.
I tried those methods something like 10 years ago. They didn’t work; people would pose as decent users, then suddenly switch to posting shit once allowed in. I’m thinking that nowadays, with ChatGPT and similar tools, those methods would fail even harder.
Modern filtering methods for images may be fine(-ish), but they won’t stop NSFL content or text-based material.
Blocking VPN access, to a site intended as a safe space, seems contradictory.
Like someone else’s free WiFi. Wardriving is still a thing.
That can be easily abused, either manually or through a bot. Reddit has the right idea there, where they have an avatar generator with pre-approved elements. Too bad they’re pretty stifling (and sell the interesting ones as NFTs).
Yup, as it gets ever easier to overwhelm systems, there are no good solutions to the matter, aside from keeping it text only + Beehaw’s own drawings.
Some text-only creepypastas are equally disturbing, and illegal in some places. IIRC a Lemmy instance in Ireland had to close shop because their legislation applies to both “images” and “descriptions of images”.
True, but this is assuming one wishes to have a place to communicate online at all.
And though text can be intensely disturbing, it is inherently different from images/footage of actual children actually being harmed.
Yeah… you’ll have to excuse me, because while I’d love to delve deeper into the philosophy of perception, the art of rhetoric, or how the AIs can upend it all… I’ll have to leave it here, since I’ve been told in no uncertain terms that this is not the place to discuss this kind of stuff.
Maybe we could meet in some other safe space, focused on pure intellectual discussions, if such existed.
That’s fair.
I’m not currently using other spaces, nor am I aware of any suited to the topic (gladly, I suspect).
There are some… just not safe, and/or not intellectual. I’d start one, but seeing the shitstorms going over here, and my current IRL drama, I kind of don’t feel like it ATM.