Given everything that's been mentioned, maybe a revamp of the site is needed? At least one post mentioned how the difficulty of finding people, especially volunteers, to fill the gaps in some of these projects leads to a reliance on AI. I think someone has already posted and discussed this topic in a way.
I propose setting up a category that's pointed to more prominently, like a job board; that would probably alleviate some of the issues regarding AI and human effort in projects. Two birds with one stone.
This is a buzzword that's been used in too many unpopular circles. What I would go for is "discouraged language", i.e.:
- Making threats of violence, blackmail, or doxxing
- Participating in ACTUAL harassment, i.e. hounding someone's online or even PHYSICAL presence to insult, badmouth, or threaten them
- Conspiring with others to enact harm upon another individual or their property/livelihood
There are several rules that apply on-site, but those, I think, are the language topics that apply off-site. When it comes to behaviors, though, anything outright illegal should be pretty cut and dried: the person mentioned essentially committed non-consensual voyeurism against the people in those photos, and given this is a weight-gaining site associated with video games, moral guardians are bound to tie that person to the site and everyone affiliated with it, no matter how "hands-off" the admins try to pretend to be; if the guilty party gets in trouble, the connections and coincidences will be brought to the forefront. That also makes this person a legal and PR liability if, say, the women in those photos end up suing, or some other legal repercussion somehow gets attention. They could end up dragging this site into it with them, no matter what claims of innocence the admins try to make.
In regards to other site revamps, like someone else said, AI projects need their own categories; something like "AI asset dependent, pending volunteers/hiring", "AI based", etc. At the very least, maybe come up with some prototype categories/discussions regarding AI categories and usage. To those who say this wouldn't do enough and that outright banning is the better option: they forget something that both pro-AI and anti-AI folks do when they complain in a public place: they tell on themselves.

If someone is complaining "my project isn't getting enough eyes", you can point them at the aforementioned job board and tell them to find someone there to help. If they go for it, we can give them the benefit of the doubt, but if they're resistant without good reason, they're likely not doing it for the art. Even if they try to scam the artist/programmer/etc., some protection is still offered by the fact that the offer was made on a public forum, so there's a chance for accountability. If they do it once, it's a black mark on their record; a second time means probation; a third can be the ban. Plus, making a disclaimer mandatory for AI projects (stating whether the project will rely on AI assets permanently or only temporarily) is another measure that can discourage AI bros. Basically, have a system that makes it extremely laborious for those folks to cheat. They're here for quick and easy money, not to make their actions an achievement.
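Purely to illustrate that "black mark, probation, ban" ladder, here's a minimal sketch in Python; the function and the mapping are hypothetical, not an existing site feature, and the only thing taken from the suggestion itself is the three-step escalation:

```python
# Hypothetical escalation ladder for confirmed bad-faith job-board offers.
# Only the three steps come from the suggestion above; the rest is illustrative.
STRIKE_ACTIONS = {1: "black mark on record", 2: "probation", 3: "ban"}

def action_for(strikes: int) -> str:
    """Return the moderation action for a poster's nth confirmed offense."""
    if strikes <= 0:
        return "no action"
    return STRIKE_ACTIONS[min(strikes, 3)]

print(action_for(2))  # -> probation
```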
In regards to the shovelware problem, I suggest a review system, which might be difficult to do with Discourse. I'm talking less about just leaving a comment and more about a system similar to what DeviantArt had for a while with their short-lived critique feature, where critics could leave a 1-5 star rating in different categories along with commentary. If a project lands in a certain rating range, it gets a title, like "shovelware", "legit", or "masterpiece". Basically, categorized ratings to separate the slop from legitimate projects. This at least takes some of the burden off the admins and gives a stronger voice to the site's users. Granted, it would have to be front and center on the project's topic display.
Granted, a system like that can still be vulnerable to abuse. It would have to be woven into the badge system this site already uses from Discourse to decide who gets to leave critiques, to filter out bots, since AI bros and others would certainly try to circumvent the restrictions and review-bomb their scores upwards. It wouldn't be perfect, but it offers a solution.
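As a rough sketch of what I mean (Python, purely illustrative: the trust-level threshold, the score cutoffs, and the title names are all placeholder assumptions, not something Discourse provides out of the box):

```python
from statistics import mean

# Assumed minimum Discourse trust level required to leave a critique;
# the real gate could just as easily be a badge or group membership.
MIN_TRUST_LEVEL = 2

def may_critique(trust_level: int) -> bool:
    """Filter out bots and throwaway accounts before accepting a critique."""
    return trust_level >= MIN_TRUST_LEVEL

def project_title(critiques: list[dict[str, int]]) -> str:
    """Average every 1-5 category score across critiques and map it to a title."""
    if not critiques:
        return "unrated"
    overall = mean(score for c in critiques for score in c.values())
    if overall < 2.5:
        return "shovelware"
    if overall < 4.0:
        return "legit"
    return "masterpiece"

# Example: two critiques, each rating "writing" and "art" from 1 to 5.
print(project_title([{"writing": 2, "art": 3}, {"writing": 4, "art": 5}]))  # -> legit
```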
If none of that is considered enough, there's a topic someone has already opened to discuss some of this:
Any of these can be taken into consideration by site admins.
Edit: This is something I just remembered, but in relation to helping the admins, I understand that monetization, while not desperate, can always be improved. I think that's a case of needing stronger promotion, as there are plenty of good actors out there who don't know about this site. DeviantArt is one, but I've made a list of several video websites the site could be promoted on without worrying about moral guardians: