StyleGAN AI generated images

After seeing the website https://thisanimedoesnotexist.ai/ I was inspired to start working on my own StyleGAN projects. The basic idea is that you feed a lot of images into a black box, and out come similar images. There is a lot of fancy science behind it that I’m not qualified to explain properly.
Here are a few of the experiments I have done, with some pretty mixed results. Most of them would greatly benefit from more training time, but that is hard to come by.

This was trained on all the RPG Maker sprites I could find that use the tall sprite template Grimimic made.
[Image: BodiesSpritesLowCreative]
Same model, but I gave the AI more creative freedom, which leads to more unique but also more cursed results.
[Image: BodyHighCreative]
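For anyone curious, the “creativity” knob here is the truncation value used when sampling. Below is a minimal sketch of what sampling looks like with the PyTorch port of StyleGAN2-ADA (the Colab linked later in the thread is based on the TensorFlow version, I believe, but the idea is the same); the snapshot filename is just a placeholder for whatever your training run produced.

    import pickle
    import torch
    import PIL.Image

    # Load the generator from a training snapshot (placeholder filename).
    with open('network-snapshot.pkl', 'rb') as f:
        G = pickle.load(f)['G_ema'].cuda()

    z = torch.randn([1, G.z_dim]).cuda()  # random latent code
    # truncation_psi is the "creativity" knob: ~0.5 stays close to the
    # average image (safe but samey), 1.0+ wanders further out
    # (more unique but more cursed).
    img = G(z, None, truncation_psi=1.2)
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('sample.png')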

I then trained a model on all the RPG Maker faces I could find in the RPG Maker games I had downloaded at the time.
[Image: interpolation_movie]
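In case anyone wonders how interpolation movies like that are made: you pick two random latents and render a frame at each step while walking between them. A rough sketch, again assuming the PyTorch port and a placeholder snapshot name:

    import pickle
    import torch
    import PIL.Image

    with open('faces-snapshot.pkl', 'rb') as f:  # placeholder snapshot name
        G = pickle.load(f)['G_ema'].cuda()

    z0, z1 = torch.randn([2, G.z_dim]).cuda()  # two random endpoints
    steps = 60
    for i in range(steps):
        t = i / (steps - 1)
        z = ((1 - t) * z0 + t * z1).unsqueeze(0)  # step along the line
        img = G(z, None, truncation_psi=0.7)
        img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'frame{i:03d}.png')

Then stitch the frames into a .gif or video with whatever tool you like.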

Currently my goal is to create a fatter version of the “This Anime Does Not Exist” model by transfer learning from the anime model using images of fatter anime girls. So far the results have been middling, but I’m getting closer with some of the modifications I have been making.

Progress showing the model as it trains. I ended up giving up on this particular model because it was training incorrectly, but at least some of the features were starting to take shape, even if it looks incredibly cursed.
[Images: AnimeFats, AnimeFats2]

If you are interested in making your own StyleGAN projects, it doesn’t require any special hardware or software. I highly recommend using this Google Colab to get your feet wet, as it explains everything you have to do, step by step.
https://www.reddit.com/r/MachineLearning/comments/j7nwya/p_stylegan2ada_google_colab_transfer_learning/

This website has a lot of great info explaining how the models are created and what steps one should take to prepare. https://www.gwern.net/Faces

I’ll make a step-by-step guide if there is some demand for it. I’d say the hardest part is creating a proper dataset. Hope you enjoy my incredibly cursed images.

Edit: Made a step-by-step guide on how to prepare a dataset here: StyleGAN AI generated images - #9 by ExtrudedSquared. Preparing the anime dataset is a bit more involved and requires a bit more explanation.

Edit: The current model I’m training is a partial success. I feel like it is doing a decent job with the body, but the faces have slowly become more and more terrible. Still, I think that with either more training or more images to train on, it might end up working alright. Here are some .gifs of the AI training from the base anime model to the current iteration of the model.
[Images: ThickAsABrick, ThickAsABrick2, ThickAsABrick3]

41 Likes

You should make it train not with illustrations like you are doing there, but with weight gain sequences. There are a LOT of them on DeviantArt/Pixiv. You know, those images that are exactly the same, even the same pose, but the character just gets fatter and fatter.
Wouldn’t that be better training material for the neural network?

4 Likes

Really depends on what you mean. A single weight gain sequence wouldn’t be anywhere near enough data to train the AI, as you need closer to 500+ images to start getting reasonable results, even when using a pretrained model. If I end up using multiple weight gain sequences, then it wouldn’t be much different from what I am currently doing, as I already have several sequences in the dataset. I’ll probably add more once I’m sure I have the new model working.

In theory I could do something similar to this: Google Colaboratory
I haven’t done a ton of research into it, but from what I understand, it lets you control various factors of the image using labels. In theory I could use weight gain sequences and label the stages by size, so I could control how fat the generated image is based on how much influence is given by the labels, but that is just me spitballing ideas.
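For what it’s worth, the PyTorch port of StyleGAN2-ADA does support class-conditional training out of the box: you drop a dataset.json next to your images that maps each file to an integer label, dataset_tool.py picks it up, and you train with --cond=1. A sketch of how weight-stage labels could be generated, assuming a hypothetical naming scheme like “charname_stage3.png”:

    import json
    import os
    import re

    # Build the {"labels": [[filename, class], ...]} structure that
    # StyleGAN2-ADA's dataset tooling expects ('dataset' is a placeholder path).
    labels = []
    for fname in sorted(os.listdir('dataset')):
        m = re.search(r'stage(\d+)', fname)  # weight stage -> class label
        if m:
            labels.append([fname, int(m.group(1))])

    with open(os.path.join('dataset', 'dataset.json'), 'w') as f:
        json.dump({'labels': labels}, f)

Whether the labels would actually give clean control over fatness is exactly the kind of thing you only find out by training it.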

Working on a new model now; progress is pretty promising.
[Images: ThickerGrill3, Emboobend, ThickerGrill, ThickerGrill2]

5 Likes

I was literally just wondering about something like this today! It’s rough, to be sure, but what you have certainly looks promising!

I see! Go forward then good sir! hahaha
Results are great indeed.

i hope someone makes a furry version of this

Well, you are almost in luck. There have been some experiments doing transfer learning on the anime model with furries, with some decent results shown here: https://nitter.cc/arfafax/status/1353850224599437313#m

There is also this, but it was only trained on furry portraits instead of whole bodies. https://thisfursonadoesnotexist.com/

So it is certainly possible, but pretty difficult. If arfa ends up posting their trained furry model, it might be fairly reasonable to do.

nice, stylegan is so fun to play with, especially sampling near the edge of the latent distribution like you did in that second pic. you should legit do a writeup somewhere about those rpgmaker sprites. i bet people (besides us lol) would think they’re cool.

The RPG Maker sprites were fairly easy to do; 90% of the effort was just creating the dataset.

  • Collect a silly amount of assets. In order to get good results, you will need a lot of images. The main reason I got decent results with the RPG Maker sprites is that all the sprites are very similar, which made them easier to train on. I just went into the folder where I store all my RPG Maker games, copied all the folders called “Characters” or “Faces” depending on what I was working on, and shoved them all in one place.

  • Download a program called ImageMagick. It makes preparing the data much easier and is super nifty.

  • I recommend doing a quick cleanup pass on your dataset before you start modifying it. You can group the files by dimensions to get rid of everything that doesn’t match the dimensions you are targeting. In the case of the tall sprites, all the 3x4 sheets are 360x480, so I just deleted all the sprites that didn’t fit those dimensions (one way to script this is sketched after this list).

  • Start preparing the dataset. I’d recommend making a copy of everything before you start running commands, because the commands will change the originals (you could use different commands to keep the originals, but I’m lazy and it is easier to just use the mogrify command). The first step is to remove the transparency: shift+right-click the folder all of your assets are in, open PowerShell or whatever command line you use, and type
    “magick mogrify -background white -alpha remove -alpha off *.png”, which should result in all of your images looking like this.
    [Image: adipe fat stuffed]

  • Convert all the images to .jpg; it is much faster and easier for the AI to work with. Use the command “magick mogrify -format jpg *.png”.

  • Start cropping out all the individual sprites. The tall sprites are 120x120 px, so I’ll use the command “magick mogrify -crop 120x120 *.jpg”. Note this will cut out all the sprites in each sheet. If you just want one particular sprite, you can do something like “magick mogrify -crop 120x120+120+0 *.jpg”, which crops out just the section that is 120px from the left and 0px from the top, i.e. the front-facing sprite. Zip up all the results and you have your dataset.
    [Image: adipe fat stuffed-1]

  • It is easiest to train on datasets whose dimensions are a power of 2, so I recommend scaling up to the nearest power of 2. The RPG Maker sprites are 120x120, which is convenient because the nearest power of 2 is 128x128, so I’ll use the command
    “magick mogrify -background white -gravity center -extent 128x128 *.jpg” if I want the sprite itself to stay the same size, or
    “magick mogrify -background white -gravity center -resize 128x128 -extent 128x128 *.jpg”
    if I want to scale up the sprite as well. (The -background and -gravity settings have to come before -extent on the command line, or they won’t be applied to it.)

  • Get rid of any images that are far too different from the rest. Stuff like props and one-off sprites (bad ends and such) will muddy up the AI a bit.

  • It is much easier to train on Google Colab, but in theory you can train locally. I’m going to explain the Google Colab way. First, upload your dataset to your Google Drive. If you actually use your Google Drive, I recommend making a new account and starting there, because the AI stuff will start clogging it up.

  • After that, it is mostly just following the instructions from the Google Colab here: https://www.reddit.com/r/MachineLearning/comments/j7nwya/p_stylegan2ada_google_colab_transfer_learning/ . You can skip the “Download Weights for Transfer Learning” and “Do some surgery when weights don’t exist for your specific resolution” sections. All you have to do is change the file paths for a few things and it should just work. Training can take a VERY long time. You can see the results in your Google Drive as they come in.
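As promised above, here is one way to automate the cleanup-by-dimensions pass, as a rough Python sketch (the 360x480 target and the “dataset” folder name are just examples; adjust to whatever you are working on):

    import os
    from PIL import Image

    TARGET = (360, 480)  # example: the 3x4 tall-sprite sheet size

    for fname in os.listdir('dataset'):  # placeholder folder name
        path = os.path.join('dataset', fname)
        if not os.path.isfile(path):
            continue
        try:
            with Image.open(path) as im:
                size = im.size
        except OSError:  # not an image, toss it too
            size = None
        if size != TARGET:
            os.remove(path)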

Creating the anime dataset is a bit more involved, and transfer learning from the “This Anime Does Not Exist” model requires some other methods. I might write up how to do that if there is demand for it, but it requires a good bit more know-how.

That should be about it. If anyone who tries this runs into any issues, feel free to contact me.

Edit: Forgot a step

1 Like

First time posting, messed up. It has been a while since I used a forum with this system. So, while I’m not sure how to make the AI systems work on my own, I do have some assets to offer. I’m working on my own WG-based RPG Maker game, and I’ve got several tall-sprite female characters, each with clothed and nude sprites, around seven stages each. They’re a variety of different body types as well.

I also have a collected library of anime-style WG/BBW images, sorted by author/source and character, if you’re looking for an easy source of pictures for the anime maker.

1 Like

I could certainly use either! Right now the issue I’m having is a lack of data to train the anime set in particular, so I could use any images you have.

What would be the best way to get them to you? I have about 3GB of various stuff (not much, but at least it’s sourced and sorted to the best of my ability), although probably not all of it will be usable since there are various art styles.

Edit: Is there any content you would find objectionable that I should try and remove first?

I’m fine with anything. I’m going to have to go through the images to make sure they would work well with the AI anyway, so the more images the merrier. Really appreciate the help!

I’ve got them all cleaned up (removed things I thought would be useless, might have missed some stuff) and zipped. How do I get the files to you?

Can’t upload attachments as I’m a new user.

You would probably have to upload it somewhere and share the link. Stuff like Dropbox, Google Drive, Megaupload, or a bunch of others would work, but I’m not sure what the popular choice is these days.

Well, seeing as I’d like to avoid linking activity on a fetish site to anything else I use at all, I’m not sure what I should do. I guess I’ll just wait until I’m not a new user anymore?

If there’s a PM system for this forum, could I send you the attachment that way?

I think that if you send 3 gigs of files, it would probably exceed the limits for Weight Gaming. Hopefully someone can chime in with a better way to send the files.

It’s about 700KB with the useless images removed and the files compressed, but I still wasn’t able to send it to you as a new user. Hopefully if a mod happens to see my predicament, then they’ll have a solution.

Why can’t this be easy? I’ve already spent half an hour plus getting them fixed up, so I’d hate to just leave it. :/

The results are really cool, man.

You know what is weird? I have a Master’s in this and I teach it every day, and I have separated my fetish life from my day life so much that I never considered doing what you’ve done here.

ML can be tough sometimes, especially the math.
(I feel like half the work I do with it is at least 70% witchcraft most of the time.)
If you need some help, feel free to PM me.

Also, I know it has been suggested before, and you already responded to it, but I think building a dataset of sequences could still be a good idea.

This is off the top of my head here, and these things benefit a lot from sitting down with a cup of coffee and pencil and paper in the morning, but here is what I would do:

  • Instead of interpolating bit sequences (images), I’d interpolate operator sequences; this is what is known as meta-learning. You first have to train a small network to tag parts of an image (or do it by hand). In our example, you would train a network to recognize which parts of an image change in a WG sequence (I’d use image pairs as the input, btw; there is a bare-bones sketch of this after the list). Then you would run it over your whole dataset. Right now all we want is tagging.
  • After that is done and we hopefully have a network with good results, we need to look at which neurons get activated by each image. If we extract those from the layers, each sequence of neurons can be seen as a single operator, and this operator is a mapping from x^n to y^m bits.
  • After we have those operators, we then need to find an intermediate representation for them, one we can learn from as data input, and that, I guess, is where most of the creativity in this idea will have to come in.
  • Finally, the idea would be to train another network to recognize the features of good operators and generate new operators that are interpolations of the past ones. You can do this through supervised learning, by handpicking which operators work well for a given image, or unsupervised, by using the strength of activation of operators on the previous dataset as a “success measure” for clusters of images.
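To make the first bullet a bit more concrete, here is a bare-bones sketch of what the pair-input tagging network could look like in PyTorch. Every name and layer size here is invented, and this covers only the tagging step, none of the operator machinery:

    import torch
    import torch.nn as nn

    class ChangeTagger(nn.Module):
        """Toy pair-input tagger: given a (before, after) RGB pair from a
        WG sequence, predict a one-channel mask of the regions that changed."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),  # 6 = two stacked RGB images
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 1),  # per-pixel change logits
            )

        def forward(self, before, after):
            return self.net(torch.cat([before, after], dim=1))

    # Usage: mask_logits = ChangeTagger()(before_batch, after_batch), where
    # both batches are [N, 3, H, W]; train against hand-made (or heuristic)
    # change masks with nn.BCEWithLogitsLoss.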

Of course, this is all assuming you have a dataset of WG sequences, that you want to devote this much time to it, and that what I said didn’t pass way over your head (this is a real problem I have at work, lol).

For me, I know that doing all this requires a bit more than I want to invest. But to me it is just one of those things that is fun to think about with ML, you know? “What if I had time to pour into this and I did x, then y, then z, would that work?”

So again, if you need help, don’t hesitate to give me a howler. Good luck!

4 Likes

I’d be lying if I said I understood anything more than the very surface-level details of the ML tools I’m using, so I would need some serious handholding to implement the ideas you have. That being said, I’d love to implement them, because it sounds like they would really improve a lot of things.

I’ll do some more research and learn a bit more about what exactly I’m doing, but in the meantime, if you end up making your own experiment, I can do whatever grunt work you need done to help out.

I’ll definitely contact you if I have any questions. It is good to know there are people here that I can call on because ML is a complicated subject.

Edit: Also, I ended up killing my current model. While it did an OK job on the bodies, the faces were nightmare fuel. I’d imagine it is mostly because of the small dataset I’m training on, or the stylistic gap between the original anime set and my dataset, which is fairly Western-looking. Here are some examples of training from the start to the final model.
[Images: Failure5, Failure6, Failure7, Failure8]

2 Likes