StyleGAN AI generated images

Well, I let it train all night and the results are… okay-ish. All the faces have a terminal case of mouth-wide-open, and there are still issues with faces, eyes, arms, and the rest, but I figure I’ll release what I have in a bit. I’m still not super satisfied and might attempt to train the AI on an anime face database to try to bring it back to some normalcy, but I might as well let people play around with it if they want horrible flesh abominations.

Edit: OK, I uploaded my network and made a simplified Google Colab people can use to generate images. The only thing you need to run it is a Google Drive. I’ll update the main post when I get better results, but for now I figure I should at least let people give it a try.
Mega link with the trained network (~1 GB)
Google Colab with instructions on how to run it.
Let me know if you run into any issues. Also, feel free to post whatever decent-looking fatties you make. Just know that for every one that looks OK there are probably ~50 that are horrible right now, so be prepared.


When I clicked the Google Colab link, I got a screen saying: Notebook loading error

Details

Invalid Credentials

Is that what happens when you click the link? I might have been fiddling with the Colab and broken it temporarily. Try again, and if it still doesn’t work I’ll do some more investigating. You could also run the Colab file by uploading the unzipped version of this to your Google Drive and clicking “Run In Google Colab” or whatever pops up: ThisFattyDoesNotExist.zip (374.9 KB)

Edit: I think I fixed the link

Now it loads properly.

Can the public access the generator? I want to see what it can do.

The post a few posts up has details on how you can run it. It isn’t too involved to set up, but it does take a bit more effort than just clicking a button. Most of the information on what to do is in the Google Colab linked in that post.

Cool! Love how artist handles become Lovecraftian texts lol

I’m glad to see that not even AI can figure out the art of drawing hands; it makes me feel better about my drawing ability lol

Here you go.


You are correct. Getting some transfer from the anime database might work.

Also, learning limbs in general is difficult because a lot of the convolutions we use are direction-oriented, and limbs appear in many different positions. If the anime database doesn’t include bodies, I’d suggest adding a few more layers to the network (and scaling it accordingly), then pruning it heavily through dropout to reduce overfitting. It will take a while, but since you are already somewhat pleased with the results, it should only help improve them a bit more.
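To illustrate the dropout idea, here is a generic NumPy sketch of inverted dropout; this is not code from the actual training setup, and the layer shape and rate are made up:

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: randomly zero activations and rescale the rest.

    During training, each activation is kept with probability (1 - rate)
    and scaled by 1 / (1 - rate) so its expected value is unchanged.
    At inference time the input passes through untouched.
    """
    if not training or rate == 0.0:
        return x
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.default_rng(0)
activations = np.ones((4, 8))  # stand-in for one layer's output
dropped = dropout(activations, rate=0.5, rng=rng)
# Roughly half the entries become zero; the survivors are scaled to 2.0.
```

Frameworks like PyTorch or TensorFlow provide this as a built-in layer, but the mechanism is the same.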

Finally, I don’t think your network can create new features with the method you are using right now. Any image it generates will, even in the best case, be a linear combination of features from other images in the dataset. In the best scenario it mixes things up enough that the result looks completely new, but in the worst cases you can clearly see the image it is using as a basis.
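The “linear combination” point is visible in how people usually explore a GAN’s latent space: new samples come from blending latent codes. A generic NumPy sketch (the 512-dimension latent size matches StyleGAN’s default, but the generator itself is left out here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two random latent codes, the size StyleGAN uses by default (512 dims).
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)

def lerp(a, b, t):
    """Linear interpolation: every in-between point is a weighted
    combination of the two endpoints, never a genuinely new direction."""
    return (1.0 - t) * a + t * b

# Walking from z_a to z_b in ten steps; feeding each step to the
# generator would morph one fake face into the other.
steps = [lerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 10)]
```

Every generated image along the walk is a blend of what the network already learned, which is exactly why truly novel features don’t appear.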

The reason I mention this is just that I am not sure where copyright stands in this case.
¯\_(ツ)_/¯ Just something to consider.

For my idea we would need a reasonably big dataset of weight gain sequences (and any other sequential WG images, like donation drives or comic panels where weight gain takes place, etc.). The idea would be to eventually split each sequence into before → after pairs and run the network on those. By doing this, we would be creating a network that learns the “fatten operation” and the associated feature recognition.

The pictures we would like most are the ones with no background and those that keep the character in the same or similar poses/outfits, but any sequential fattening image would do; the less consistent ones would just perform worse.

But we can leave dataset organization for later. First we need the actual images.
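As a sketch of the pairing step described above (pure Python; the file names and dataset layout are made up, since the real structure would be whatever the organizers settle on):

```python
# Turn each ordered weight-gain sequence into (before, after) training
# pairs of adjacent stages, the way a paired image-to-image network
# (pix2pix-style) expects its data.
def sequence_to_pairs(frames):
    """['s1.png', 's2.png', 's3.png'] ->
       [('s1.png', 's2.png'), ('s2.png', 's3.png')]"""
    return list(zip(frames, frames[1:]))

# Hypothetical dataset: one entry per collected sequence.
sequences = {
    "artist_a/seq1": ["stage1.png", "stage2.png", "stage3.png"],
    "artist_b/drive": ["before.png", "after.png"],
}

pairs = [p for frames in sequences.values()
         for p in sequence_to_pairs(frames)]
# Three pairs total: two from the first sequence, one from the second.
```

Pairing adjacent stages (rather than only first → last) squeezes more training examples out of each sequence and keeps each jump in size small.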

Honestly, I’ve never used web crawlers before, because most of the work I did with images was for uni and copyright was a big issue for them. So if there is a way to get one to fetch only sequence/donation-drive/comic images from a bunch of galleries, I don’t know how to do it. Maybe @ExtrudedSquared does?

In case we don’t have an automated solution here, I’d say it’d be fine to crowdsource. We would need to set up a drop-off point online where anons could add sequential pics as they like, and then we would get a smaller pool of contributors to clean it and organize the file structure in a way our tools can work with. Could you organize something like that, @Yamhead?

Once we have that, we could see what we can do with all this data. I’ve mentioned my original idea, but maybe working together we can think up more ways to make use of such an interesting dataset.

I personally don’t know how they work either, but I would assume they are pretty basic: you tell it to download all pictures from site X with tags Y, or something like that. If it does work like that, then I think a web crawler would miss a lot, because (from my experience) people aren’t that good at tagging their things.

I would probably be able to do something to crowdsource the sequences.
I would like it to be understood that I might not be able to, just so I don’t let you down if I don’t manage to do it (it will probably work out, I’m just covering my bases here)

How many sequences would you estimate it would take?

So since people were talking about gathering again, I figured I’d make a Mega account so I could post my bank of images. They should all be sorted, and there are direct sequences in there as well: 674.23 MB file on MEGA

They discuss copyright law a bit here. I think the gist of it is that legally we are mostly fine; at worst it is probably a bit of a grey area. I believe all the initial images I am starting from are generated from the TADNE model, and that model and its images are under CC0, so I think we are fine in that regard.

For downloading all the images I have, I used gallery-dl, which is pretty simple and easy to use. I think the hardest part would be finding enough sequences that would work well.

I’d share the dataset I’m using now of ~10k cropped images, but a decent chunk of the images are probably not supposed to be publicly available (I took a lot from seedier places such as ExHentai because it was convenient to grab an artist’s complete works), or the images themselves are less than ideal to share (some artists I took from draw stuff that I don’t 100% approve of, and I can’t be bothered to check all the images). If you’re working on a project that needs a dataset like this I can send it directly, but I’d rather avoid sharing all those images without a purpose.

As far as how to move forward with the project, I’ll start looking into some of the things Ano has suggested and see if I can figure out how to implement them.

Did some more training while mixing some of the original anime dataset back into the training set, to try and fix some of the issues I had with the faces and such. Results are pretty decent; at least some of the faces are looking better. I figured I’d release the new model in case people want to try the better faces. Still haven’t figured out how to do some of the more complicated solutions, such as labeling the images or training on sequences, but hopefully at some point I’ll get somewhere with that.

For now, here is the new model

From my initial tests of this model, it has higher highs and lower lows as far as how well it generates fats. While the faces of some images are greatly improved, they are also worse in some cases. For information on how to use this, scroll up a couple of posts. Still not updating the main post because it’s still too janky.


Had some time to amuse myself with the older model.

Some that I thought looked alright:


[attached images: 012, 013, 014]

As was mentioned before, it has the “same face” problem, but otherwise there are some nice results.

I found some of the cursed monstrosities pretty entertaining, so I have to share these:


[attached images: 016, 017, 018, 019]

I’ll try out the newer model and see where that leads me…


I actually found thisanimedoesnotexist’s cursed things to be an excellent source of enemy battlers for my RPG, so I’m quite interested in what awful things the fats generator spits out.

The good ones that the new model made look like they could be pretty serviceable if an artist felt like fixing their arms by hand. Probably easier than drawing an entire artwork: let the program do most of the work and just fix the bits it’s bad at.

Also, if someone ends up wanting to do a tallsprite version of this like the one shown in the original post, say so and I’ll post a whole bunch of sequences I’ve made for my game. I currently have 8 clothed/10 nude sequences of around 8 stages each.


This is bordering on memetic kill agent levels of horrific. MORE


Glad you were able to make some decent (and horrific) fats with it! It’s nice to see what people come up with. Hopefully some of the same-face issues have been resolved; still not great, but better than before.

If you want, I can train the tall sprite model I made with your sprites, or train one purely on your sprites. Or I could upload the dataset/model I already have if someone else wants to give training a go. The model was trained on a pretty lacking dataset, so the more images I can get, the closer I could get to a real RPGMaker generator.

There is actually a way to estimate this, but it is quite complicated, and to date it has been done for just a few ML methods. It involves something called the VC dimension of a model. Just thought I’d mention it.

But since science doesn’t have an easy answer for us there, the rule of thumb in ML is just: The more data the better.
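For reference, the standard PAC-learning bound (a textbook result, not something computed for this particular network) relates the VC dimension \(d\) of a model class to the number of training samples \(m\) needed to reach error \(\varepsilon\) with confidence \(1-\delta\) in the realizable case:

```latex
m \;=\; O\!\left(\frac{d\,\log(1/\varepsilon) + \log(1/\delta)}{\varepsilon}\right)
```

which is one way the “more data the better” rule of thumb gets made precise: the data requirement grows roughly linearly with model capacity.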