My Views on AI Generated Content

Ah I see what you mean. Sorry I misunderstood.

We do actually have a plan in place for it, and it basically follows the Copyright Office's guidelines. Purely AI-generated content isn't eligible for copyright protection, so it will be treated as public domain. It may sound harsh, but I won't give it any special protections, as that lack of protection is one of the inherent factors that limits AIGC.

I know there could be an issue with that becoming a bit of a shit show if it boils over, but we plan to handle any fires, or potential fires, more aggressively than we usually would.

Once again though, I think this will be a moot point given the changes we have planned, which will be discussed in the 5th post.

6 Likes

This is encouraging! I look forward to hearing more details about the plans going forward.

I do want to take a moment to comment on this point though,

  1. It is a slippery slope when we start curating content, as we are forced to take more legal responsibility for what is posted. This means we might have to be stricter about what is or is not allowed, which, depending on your viewpoint, could be a good or a bad thing.

I’m guessing you’re referring to Section 230 protections here, but there has been a lot of misinformation spread about it by people who, more or less, want to claim that content moderation is illegal and that they should be entitled to post whatever they want on social media.

This article explains it in more detail, but I can’t imagine a world in which there’s any additional liability as a result of doing more to moderate content.

The biggest irony is that in the real world, Section 230 empowers moderation, it doesn’t restrict it.

“I have to use it or I lose my job” and “I want to use it for my hobby” are two very different situations. As far as I know most of the people on this site aren’t making their living off of niche fetish games, and those that do are their own bosses and are the ones “forcing” themselves to use genAI. Is there a race to the bottom argument to be made there? Maybe. But a race to the bottom should be something that’s discouraged on this site. Speaking of which…

A fetish game forum probably isn’t going to have a big influence over whether the world accepts shady practices, but we can certainly set the standard of whether we accept those shady practices on this site. Gotta start somewhere.

5 Likes

I have to be honest here: As an individual who is not wealthy and has no connections in the industry, it certainly doesn’t FEEL like I can have any appreciable impact. And that applies in both positive and negative senses. I don’t actually see AI companies suffering in any meaningful way when I don’t use their products, and I don’t hear artists screaming in pain every time I submit a prompt to a bot. So is it any wonder that so many people who say “I don’t like AI because X, Y, and Z” end up using it anyway? The benefits of using it are tangible and immediate, while the benefits of not using it are intangible and uncertain.

Another thing that’s occurred to me, at least regarding AI use for personal, non-monetized purposes: I admit I’ve only done a little bit of reading on the subject, but my understanding is that the AI companies are currently LOSING money. They might make their money back if they can get people to actually sign up for paid plans, but if I use a free image generator and make a bunch of images that are of no interest or value to anyone but myself (as @RadicalSquidward alluded to people doing), how is that anything but a drain on the resources of the company hosting that image generator? Training data? Maybe, but if all the talk about an AI bubble is to be believed, that data isn’t worth what people are investing into acquiring it. And I guess you could argue that just using an LLM would contribute to climate change, but I’m not convinced that that’s anyone’s biggest gripe with AI. (The same basic question of “How much of a difference does this actually make” still applies.)

People don’t like to feel like they’re the “bad guys”. Attacking someone’s moral character is a quick way to get them to turn defensive. And given what I just said about the intangibility of the harm caused by individual use of AI, it’s mentally pretty easy to dismiss attacks on that basis as being unreasonable. So I don’t think that moralizing and lecturing people is a good way to go. If anything, I think it’s likely to backfire, causing people to dig their heels in and start seeing the “mean” anti-AI people as enemies.

What I think would be more effective is providing alternatives. Or, to put it another way: If people are prioritizing convenience over morality, then you should focus on convenience too, don’t you think? Instead of lecturing people and punishing them for using AI, focus on making it easier for them not to (to the extent that you’re able). The idea of an assets store is a good one. And if you know of any “ethical” AIs (if they really exist), then point people to them.

6 Likes

Here’s one - a text-to-image model trained only on CC0/public domain work. Here’s another. HuggingFace is where you will find these, and you will need to run them locally (or pay for your own server/processing to run them off somebody else’s setup). They aren’t popular because they can’t match the output quality of current models: their training pools are smaller, and their data sets are comparatively “outdated,” since the vast majority of public domain work is nearly a century old. No company is hosting these on its own; they are the creations of individuals trying to apply their own moral and ethical approach to the genAI problem.

4 Likes

The issue with saying “individual contribution to the problem is negligible” is that there are millions of individuals. One person throwing a piece of trash on the ground isn’t a problem. 100 people throwing trash on the ground turns the place into a dump. And then when you say “hey, you shouldn’t throw trash on the ground,” people keep responding “but it’s only one piece, and there’s a bunch of trash here already, and it’s way easier to throw it on the ground than it is to find a trash can.”

11 Likes

I had a bit of time this morning, so I wanted to get through a few more quick responses here.

No, actually. I don’t want to go too far into it here, so I would encourage you to bring it back up in the 5th post, or feel free to DM me if you want to chat about it more. But my larger concerns are the EU’s Copyright Directive (mainly how it affects the one mulligan we get) and general civil liability if we let something like a malicious exe slip through the cracks, as an example.

These are more just questions I need to look into, and to be honest, with how I am planning things it may not matter anyway. But there is a difference between moderation and curation, and generally speaking, the more curation that is done, the more responsibility you have to take for what you are curating.

Fair enough. While I may not agree with your premise, I can’t really argue with that. I think it highlights our differing viewpoints very well, though. Your arguments feel very deontological in nature, whereas I am looking at this from a more utilitarian perspective. Neither is right or wrong per se, but the differing frameworks lead to different perspectives on how to approach the problem.

Sorry, I think my example muddied the waters of my intended point. I meant to draw your attention to the Overton window, and how forced adoption in the workplace could further normalize AIGC over time, which would make it harder for you to fight back against it becoming a standard in the broader sense. If your intent was to focus just on this specific community, that’s fair, but I thought you were talking in the broader sense; if I misread that, sorry about it.

More or less, though, my intent was to bring that up as a factor to add to your considerations.

Quick warning, going to be showing my train nerd side here a bit.

Generally speaking, there are three reasons these companies make these tools freely available to consumers at all.

  1. Training - When someone uses an AI tool, there’s always that little thumbs up or down. That feedback tells the company what is considered good or bad output. The raw data is then used both directly in another training pass and indirectly to train a separate model specialized in judging output quality, which can then be used for synthetic training. Supervised learning models are very data hungry, so data is always a massive motivator.
  2. Advertising - It’s kind of like how freight railroads in the US used to run passenger service to advertise how quick and reliable their freight service was, even though passenger rail usually lost them money. These tools are made available the same way: to show prospective clients how much more advanced their models are compared to their competitors’, and to drum up investor hype by making it all a public display.
  3. Supplemental Revenue - Since they are burning cash, anything that helps stem the flow helps, even if it’s not by much. It’s kind of like how I am with ads on Weight Gaming right now. They make so little that I kind of want to remove them, but since we run at a loss anyway, $100 is still better than no dollars, even if I don’t feel it’s worth it.
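To make point 1 a bit more concrete, here is a rough sketch of the first step of that pipeline: turning raw thumbs-up/down feedback into labeled examples that could train a quality-judging model. The field names and label mapping are my own invention for illustration, not any company's actual schema.

```python
# Hypothetical sketch: convert thumbs-up/down feedback events into
# (prompt, output, label) triples for training a quality-judging model.

def feedback_to_examples(events):
    """Keep only rated interactions; thumbs-up -> 1, thumbs-down -> 0."""
    labels = {"up": 1, "down": 0}
    examples = []
    for event in events:
        rating = event.get("rating")
        if rating not in labels:
            continue  # unrated interactions carry no supervision signal
        examples.append((event["prompt"], event["output"], labels[rating]))
    return examples


log = [
    {"prompt": "draw a cat", "output": "img_1", "rating": "up"},
    {"prompt": "draw a cat", "output": "img_2", "rating": "down"},
    {"prompt": "draw a dog", "output": "img_3", "rating": None},
]
print(feedback_to_examples(log))
# -> [('draw a cat', 'img_1', 1), ('draw a cat', 'img_2', 0)]
```

The labeled triples would then feed either a direct fine-tuning pass or the training of that separate judge model mentioned above.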

I don’t think this was what @someoneoutthere was arguing. I could be wrong, but I think they were making a point similar to mine: if we want to effect any actual change, it has to be done collectively, not on an individual scale. Also, it’s not an unfair assertion that framing its use as a moral failing could make collective action more difficult, since it could alienate possible supporters (especially those on the fence) by making them feel like they are “the bad guys,” even if that is not the intent.

7 Likes

So for example, collectively deciding to discourage use of unethical genAI models across an entire community? Or is collective action solely the domain of governmental regulation?

The way I view it, I am talking on a larger scale because I think doing so is the only realistic way to deal with the root of the issue. I suggest governmental regulation because, looking at the incentives in play, I think it’s the only effective way for individuals to take more decisive action against AIGC.

Within the context of a smaller community, yes, it’s easier to collectively enforce removal, but my argument has personally been more about the fact that it doesn’t really solve the root of the problem and does more to hide it. For example, IP theft won’t stop, because we are not part of the incentive structure that encourages it anyway.

But at the end of the day, it’s really about where one thinks it’s most effective to fight it.

Quick edit: I may have misunderstood a bit of what you said, @Chubberdy. If by community you instead mean something like a city, state, or nation, my point is actually the same. The current incentive structure means companies like OpenAI are really only concerned with a small minority of individuals and players. Because of this, while collective action is not solely the domain of governmental regulation, regulation is one of the few options I see that can practically fight back, especially within the US.

Here are the only two outcomes I see ever happening.

  1. You keep genAI content on the forum. This never ends. You spend more of your time trying to sweep criticism under the rug, and you make a post like this every few months for people to argue pointlessly in before closing it, like the dozen other times you have.
  2. You get rid of it. Creators are happy, and some creators who left return (not all; you burnt bridges doing this and you have to live with it). People who use genAI have to “adapt or perish” and learn to do it themselves, or just use the massive fucking piles of actually ethical sources for free assets. A handful of techbros whine about how the thing making companies hemorrhage billions of dollars is “inevitable” and eventually leave, and the remaining 99% of people here who just want kinky games to consume continue to consume kinky games.

You don’t want this problem. Nowhere I’ve been that bans genAI outright has this problem. You even say you don’t support genAI and only use it to keep your job. Forget the ethical arguments. Forget the “neutrality” copium. Just be pragmatic and take the most effective route to solve the problem you want to solve.

I’m setting this topic on ignore now. Hope you finally decide to do the ethical thing that also just stops this endless loop of bullshit and makes your life easier.

10 Likes

There is technically a third option, which I talked about above and which seems like the “inevitable” final conclusion to me: since Grot has specifically stated that genAI assets are fair game, and there is no legal apparatus to assign protection to them, eventually the snake eats its own tail. Those using genAI for low-effort shovelware start directly stealing from the other producers of genAI products to reduce their time and effort outlay even further. These “clone” games can be sold at an undercutting price, since even pennies for basically no effort is still income in economically disadvantaged parts of the globe, which sets off a domino effect that collapses the market entirely.

3 Likes

I think you’re overestimating the influence of the site here. AI creators won’t “adapt or perish”, except to the extent that simply peddling the same wares on friendlier venues counts as either. I imagine very few creators that rely on AI would give it up entirely if their choice was to continue using or stop posting their projects here. Simply banning projects or discussions thereof is the equivalent of sticking your fingers in your ears and pretending you can’t hear anything.

I don’t believe that AI is the cancer killing the site here. Yes, it’s brought about negatives, and the continued hemming and hawing about what to do about it hasn’t helped, but it’s that laissez-faire attitude towards pretty much anything but stuff with minors that’s been a bigger issue. The fact that people on all sides and perspectives of the AI dilemma have come to the table within a couple days of the post going up shows that they haven’t really been driven off, and the anti-AI fanatics still pop out of the woodwork very quickly when they get a whiff of something that seems off to them - they’ve never left.

1 Like

Pretty much every post I make about AI gets removed. Most of them deserve it, but not all of them.
Real artists built this place, and now most of them have either left or are unhappy with its state because of the proliferation of slop this place has experienced in the last few years.
What’s your plan for when the AI bubble finally pops? This stuff has done nothing but lose money since it started. The only reason people use it to make games is that it’s free and they’re too lazy to learn a new skill. When all the tech companies decide to start charging a premium for their generative garbage, there aren’t going to be any games here anymore, because you’ve chased away all the real creators.
You let artists build your site, and then you were all too happy to throw them under the bus for the sake of keeping the peace. It’s already too late to get back any of that goodwill.

20 Likes

grim, that is both a moralistic and wrong argument

first off, beating the dead horse that is already a fine paste of “real artists built this for you, how dare you throw them away” doesn’t do anything other than piss off every other group involved in this site and make you look arrogantly self-centered as hell; a site like this requires so many kinds of people that artists are a SEVERE MINORITY of the total community, both in representation and in raw numbers

second off, the most likely outcome for AI art is as a shortcut tool for corporations, with forced ads in generated images for general users, aka every damned picture will have McDonalds in it because they are paying for advertisement… you ain’t getting rid of a tool that cuts an expensive employee out of a company; they refuse to let that happen and will go to extreme lengths to force it

third off, history has proven REPEATEDLY that large tech conglomerates like alphabet (google), microsoft, and apple can and will let these new-age tech companies build these tools and lose all their money, then buy them outright to bypass the development expenses… these AI companies losing all their money IS DESIGNED into the system, so they can be bought out at a fraction of what it would cost if microsoft or alphabet developed the tools themselves

1 Like

“Game devs are a minority in the Game Dev forum, so we should ignore the interests of Game Devs and only think about players.”

12 Likes

Then why are we using Discourse, if this is very easily done with more traditional forums? F95 has sections for certain games, as does TFgames.site. At least think about using the oft-ignored tag system to tag games. A mod-controlled tag grading system can be applied to commercial games that allows users to differentiate between:

A: Commercial games whose devs have agreed to a set of very basic community standards about how they interact with the community, and with any community members who may become patrons (e.g. at least one substantive post per month on whatever monetisation site/s they may have, even if it is as basic as ‘I’m not dead’).

B: Commercial games that have been verified as adhering to the standards set in A for a set period of time (say 3-6 months); think of it less as a determinant of quality and more as a mark of reputation.

C: Both Commercial and Non-Commercial games that have met the requirements of B and are considered to be ‘good’; be it some kind of staff vote or some community poll, this can be a way to point users towards the better works on the site without relying on other tag combinations.

Both A and B carry no ‘risk’ for staff, since neither verifies the quality or future actions of a dev; they are merely statements of fact about what a dev has agreed to and what they have materially done.
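For what it’s worth, the grading logic above is simple enough to spell out in a few lines. This is only an illustrative sketch of my proposal (the function, parameters, and the 90-day threshold are my own assumptions, not an actual Discourse feature):

```python
# Sketch of the proposed A/B/C tag grades, with time-based promotion
# from A to B and a staff/community vote gating grade C.

from datetime import date, timedelta

GRADE_B_WAIT = timedelta(days=90)  # the "set period of time (say 3-6 months)"

def current_grade(agreed_on, compliant, community_approved, today):
    """Return the highest tag grade a game qualifies for, or None.

    agreed_on: date the dev agreed to the basic standards (None if never)
    compliant: whether they have kept up the monthly-post requirement
    community_approved: outcome of the staff vote / community poll for C
    """
    if agreed_on is None or not compliant:
        return None                       # no grade without the baseline agreement
    if community_approved:
        return "C"                        # vetted 'good' works
    if today - agreed_on >= GRADE_B_WAIT:
        return "B"                        # reputation earned over time
    return "A"                            # agreed to standards, still proving it


print(current_grade(date(2024, 1, 1), True, False, date(2024, 2, 1)))  # A
print(current_grade(date(2024, 1, 1), True, False, date(2024, 6, 1)))  # B
```

The point being: this is a pure statement of recorded facts (agreement date, compliance, vote result), which is exactly why it carries no liability for staff.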

I don’t think this forum can afford to wait for a hypothetical, perfect site redesign where everything is hosted and we all join hands to sing kumbaya, especially since it’s a very common tactic among bad actors in the games sphere to promise that sort of thing without materially delivering to stave off interest in incremental improvements that are both far cheaper and more immediately beneficial.

3 Likes

Please, please, understand the surrounding tech environment before you say things like this. Google is developing Gemini and Microsoft is developing Copilot; both are pouring huge sums into them and trying to recoup those amounts by shoving them into everything they can, to boost usage and get people accustomed to relying on them for when they become subscription-gated (which Microsoft is already doing with Copilot in Office365).

Besides, it’s not like Anthropic or OpenAI are making the tools being used to produce the content posted on this site. The process of “jailbreaking” them to produce unmoderated content is a continual war of attrition, one which gets more convoluted every day as the most well-known tricks are patched out. It’s locally-hosted Stable Diffusion instances or smaller, pay-for-access hosted instances such as NovelAI. Maybe, MAYBE, they’re using DeviantArt’s DreamUp AI in some cases due to the huge amount of inflation, expansion, and weight gain content which was scraped to make it… and that’s the only investment-backed option I can think of which would be producing things here.

Because, again, this is niche fetish content. People don’t want their McDonald’s ad next to this shit.

On the one hand, that means what is being posted here art-wise is probably going to survive the AI bubble bursting - because it’s not depending on those huge corporations subsidizing the generations. Stable Diffusion is out in the wild, people can run instances on sufficiently-powerful home systems or rent time on server racks if they really need to. Genie’s out of the bottle there. But please understand, it has absolutely nothing to do with the AI bubble popping. That’s going to happen, it’s going to annihilate a lot of the chatbot sites especially, but it probably won’t actually impact what happens here merely because of the particular genAI systems being utilized here.

The environment surrounding AI usage as it applies to this site in particular is far too convoluted to apply the same broad arguments about things like ChatGPT or Claude to it, either in a pro-or-anti stance, which is why my own take on genAI usage here basically boils down to “it’s going to stop being something people want to do because it’s going to stop being potentially profitable and nobody is going to care about AI-art-Twine-Game #56456456.” Especially given the “people can freely lift your entire game’s assets and the people in charge aren’t going to stop them” thing.

3 Likes

I am a sfw solo dev. I have tried avoiding AI as much as I can. But it’s becoming increasingly difficult to keep up without it, or dumping large amounts of cash that I simply do not have.

My qualms about AI are mostly tied to the fact that even independently hosted models are reliant on large corporations to even exist. Those corporations train their models with ethically dubious means, and could very easily pull the rug from beneath us. So as of right now, I’m basically waiting until something changes in any direction.

I don’t fundamentally hate gen AI, but I can’t trust those who control it.

2 Likes

Thank you, this is quite interesting. Even if the models are currently low-quality compared to the popular ones today, it’s encouraging to see that some people are trying. The example outputs do remind me of early gen AI images. Maybe they can improve too, given enough time. (Perhaps a long time, but still.)

Grot gave a pretty fair summary of what I was saying. I’m not saying that there’s no point in doing anything; I’m saying that when you’re trying to change the status quo, as opposed to maintaining it, it’s going to take more than just pointing out that “We can make a difference if we all work together” to get people on board. There needs to be either an individual incentive to join the cause or some assurance that other people will join the cause in sufficient numbers. Making it more convenient for people is a way of approaching the “individual incentive” side of things, by lowering the cost of joining until it’s outweighed by those moral qualms that you complained that people are willing to compromise on.

So far, the only change that I’ve seen the anti-AI hardliners suggest here is straight-up banning AI from the site. To me, that doesn’t feel like an actual solution to the overall problem. It feels like hiding. To borrow your littering-in-the-park analogy, banning AI feels to me like the equivalent of fencing off a little section of the park, kicking out all the litterbugs, and keeping just that little section clean. And while I’m sure the people in that little section will rejoice, that does nothing to reduce the total number of litterbugs (and will probably make the litterbugs who got kicked out turn openly hostile).

5 Likes

Funnily enough, about the hand thing: there are actually mannequin hands that can be used for reference:

3 Likes