I’ve been thinking recently that machine learning models could be used as a first line of defense for moderation: automatically catching obvious spam and rule violations, but also flagging borderline cases for human review. That way you could reduce the burden on moderators, and perhaps even shield them from some of the more extreme material, though I suspect the worst of that tends to be images and video, which seem much harder to handle effectively with ML.
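As a rough sketch of what that triage could look like (everything here is hypothetical: the `spam_score` stand-in, the thresholds, the action names), the key idea is that only high-confidence calls get automated, and the uncertain middle band goes to a human instead of being decided by the model:

```python
from enum import Enum


class Action(Enum):
    AUTO_REMOVE = "auto_remove"    # clear violation: handle without a human
    HUMAN_REVIEW = "human_review"  # borderline: queue for a moderator
    APPROVE = "approve"            # clearly fine: let it through


def spam_score(text: str) -> float:
    """Hypothetical stand-in for a real ML classifier.

    Returns a probability-like score in [0, 1] that the text violates
    the rules. In practice this would call a trained model.
    """
    suspicious = ("buy now", "free money", "click here")
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return min(1.0, hits / 2)


def triage(text: str,
           remove_threshold: float = 0.9,
           review_threshold: float = 0.5) -> Action:
    """Route a post based on the classifier's confidence.

    Only high-confidence violations are auto-removed; scores in the
    murky middle band are escalated to a moderator.
    """
    score = spam_score(text)
    if score >= remove_threshold:
        return Action.AUTO_REMOVE
    if score >= review_threshold:
        return Action.HUMAN_REVIEW
    return Action.APPROVE


if __name__ == "__main__":
    posts = [
        "Free money!! Click here and buy now!!",
        "Click here for details on the meetup.",
        "Does anyone know a good Friendica host?",
    ]
    for post in posts:
        print(f"{triage(post).value:12}  {post}")
```

The thresholds are the tuning knobs: widening the gap between them sends more content to humans, narrowing it automates more at the cost of more model mistakes.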
Nice, thanks! They have Friendica now, which I think is what I’ll really need based on my potential audience.