StyleGAN AI generated images

How do you build a .pkl file? I can use the one in the example, but how do I build one from scratch?

For training your own from scratch, you can use the colab discussed here. For the most part it explains what you have to do, and you can skip the steps under “Download Weights for Transfer Learning” and “Do some surgery when weights don’t exist for your specific resolution” if you aren’t doing transfer learning (starting from another .pkl file as a base).

Under “Start the training of the network.” there is a section that says EDIT THESE!, which covers all the things you can easily change without too much issue. If you’re starting from scratch, set “resume” to ‘noresume’ (or, if you pause your training and want to start back up again, change it to the most recent .pkl file it generated).

If you do want to do transfer learning from another .pkl file, you should be able to just set “resume” to the .pkl file you want to transfer learn from. The main benefit is that you can get results much faster, and often better results, than with your dataset alone. The main exception is that anime set I’ve been using, because it’s special and breaks everything. I can help with training using that if you need, but it’s a good bit less user friendly.
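
As a rough illustration, that EDIT THESE! block usually boils down to a handful of variables like the ones below. This is only a sketch with made-up names, so match them to the actual cell in whichever colab you use:

```python
# Hypothetical variable names -- check the real EDIT THESE! cell in your colab.
dataset_path = "/content/drive/MyDrive/my_dataset"  # your prepared images

# Training from scratch:
resume = "noresume"

# Resuming a paused run (point at the latest snapshot the training saved):
# resume = "/content/results/00003-my_dataset/network-snapshot-000240.pkl"

# Transfer learning (start from an existing trained network):
# resume = "/content/drive/MyDrive/some-pretrained-model.pkl"
```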

I’d say most of the work will be creating and cleaning up the dataset to be as clean as possible. I highly recommend using ImageMagick to process your images. I’d recommend having all the images be .jpg (fastest), and they should all be 2^n x 2^n in size (128x128, 512x512, 1024x1024, etc.). If you really want odd-shaped images there are ways to do it, but it would require a different colab.
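
For example, a minimal batch-conversion sketch driving ImageMagick from Python could look like this (the folder names and the 512x512 target are assumptions; on ImageMagick 6 the command is `convert` rather than `magick`):

```python
# Sketch: square-crop everything to 512x512 .jpg with ImageMagick.
# Assumes ImageMagick 7 ('magick') is installed and on the PATH.
import subprocess
from pathlib import Path

src = Path("raw_images")
dst = Path("dataset_512")
dst.mkdir(exist_ok=True)

for img in src.iterdir():
    if img.suffix.lower() not in (".png", ".jpg", ".jpeg", ".webp"):
        continue
    subprocess.run([
        "magick", str(img),
        "-resize", "512x512^",   # scale so the short side reaches 512
        "-gravity", "center",
        "-extent", "512x512",    # center-crop to exactly 512x512
        str(dst / (img.stem + ".jpg")),
    ], check=True)
```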

If you need any help, just let me know and I can hopefully give a hand.

1 Like

!!!REALLY LONG POST AHEAD, YOU’VE BEEN WARNED!!!

Is there a good way to crop a lot of photos by hand quickly?

The data generators I told you about already crop images. That is important because, to train better, we want multiple crops of each image; but it also complicates things a bit for our case, because the generators crop at import time. If we don’t import the pairs stacked in the z-dimension the way I mentioned, the crops might land differently on each image, and that is going to mess up the whole z-layer.

Usually convolutional neural networks are naturally robust to translation (for example, a CNN can trivially tell that two images are the same despite having slightly different crops), but this robustness develops as the information travels through the hidden layers, because the layers are trained to react to features, not pixels. The problem in our case is that the features our “fattenator” will be trained to detect are the “vectorial forms” that represent the way the pixels change between our two layers, and a misaligned crop will skew that vector (so, compared to the example above, the features of two different crops will no longer be the same). Given the level of complexity you can get a CNN to reach, I still think it might work even then, but the more complex a CNN, the more data-hungry it is, so we’d better not risk it.
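
To make the z-stacking idea concrete, here is a toy numpy sketch; the shapes and names are illustrative only, not from any existing pipeline:

```python
# Toy illustration of z-stacking a before/after pair and why crops must align.
import numpy as np

H, W = 256, 256
before = np.random.rand(H, W, 3)  # stand-in for the pre-gain image
after = np.random.rand(H, W, 3)   # stand-in for the post-gain image

# Stack the pair along the channel (z) axis: one 6-channel training sample.
pair = np.concatenate([before, after], axis=-1)  # shape (256, 256, 6)

# Cropping the stacked sample crops both images identically, so the
# pixelwise "change vector" still describes the fattening:
crop = pair[50:178, 50:178, :]
delta_ok = crop[..., 3:] - crop[..., :3]

# If a generator cropped each image independently instead, the vector
# would compare mismatched regions and no longer represent the change:
delta_bad = after[60:188, 60:188, :] - before[50:178, 50:178, :]
```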

TL;DR: We should start this experiment with a rather conservative dataset. Like an MVP for a system, we start small, then gradually increase complexity. We should separate all the images that maintain the same pose from those in which it changes, and if we need to crop them, we should crop them as aligned as possible. The way these images are created is that the artist will often copy the rough sketch several times and just change the level of fatness, so I think we will be rather successful in aligning the crops. You can do that manually in any image-editing program like Photoshop, GIMP, or ImageMagick: do a rough crop, put the images in different layers, lower the opacity of the image on top, and try to align them. Then save multiple versions, switching which image is on top.
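
If you’d rather script that overlay check than do it in an editor, the same opacity trick is a few lines of Pillow (the file names are made up):

```python
# Alignment preview: blend one crop over the other at 50% opacity,
# like lowering layer opacity in Photoshop/GIMP. File names are placeholders.
from PIL import Image

base = Image.open("step1_crop.png").convert("RGBA")
top = Image.open("step2_crop.png").convert("RGBA").resize(base.size)

preview = Image.blend(base, top, alpha=0.5)
preview.show()  # eyeball how well the two crops line up
```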

How big should the final crop size be? 256? 512? 1024?

Regarding size, we shouldn’t artificially reduce it. The network itself will do that as the data goes through the layers during training. To find the size we should aim for, we look at the smallest image we have and level everything based on that. If it is too small compared to the second smallest, we just throw it away and use the second smallest instead, repeating until we reach a good size.
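
A quick sketch of that survey, assuming the images sit in a folder called `dataset` (Pillow reads the dimensions from the header without decoding the whole file):

```python
# Find the smallest image side and snap it down to a power of two.
from pathlib import Path
from PIL import Image

sides = sorted(
    min(Image.open(p).size)
    for p in Path("dataset").iterdir()
    if p.suffix.lower() in (".png", ".jpg", ".jpeg")
)

smallest = sides[0]  # inspect sides[:5] first and drop extreme outliers
target = 2 ** (smallest.bit_length() - 1)  # largest power of two <= smallest
print(f"smallest side: {smallest}px, suggested size: {target}x{target}")
```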

Is there a limit to the number of images we can have in a tuple? Will mismatched tuple sizes be an issue?

For the size of the tuples, we will just use tuples of 2. If we have images 1, 2, and 3 in a sequence, we can use the transitive property: (1,2) is one tuple, (2,3) another, and (1,3) another, but that is it. This is also good because it increases the size of the dataset, and in theory it is well justified, as a “fattening” of 1→2 is just as valid as one of 1→3. (And I just considered that maybe we can experiment with an “interpolation” free variable, something you can tweak to get different results, because vector 1→3 is just the linear sum of 1→2 and 2→3.)
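
In code, expanding a sequence into those pairs is a one-liner with itertools (the paths are placeholders):

```python
# Expand one sequence into every ordered (thinner, fatter) pair.
from itertools import combinations

sequence = ["seq_a/1.jpg", "seq_a/2.jpg", "seq_a/3.jpg"]

pairs = list(combinations(sequence, 2))
# -> [('seq_a/1.jpg', 'seq_a/2.jpg'),
#     ('seq_a/1.jpg', 'seq_a/3.jpg'),
#     ('seq_a/2.jpg', 'seq_a/3.jpg')]
```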

Should we do transfer learning from one of the existing models or try to train purely off of what images we have?

As for transfer learning, I don’t know yet. There is still a lot of work ahead of us to prune this dataset before we can start thinking about that, and that is already enough on our plates. My gut feeling is that we won’t need transfer learning for this first “well behaved” dataset. I assume it will be very simple, kinda like doing your first handwritten-digit recognition or cats-vs-dogs toy model in Machine Learning 101, but it will be essential for getting our quirky new ideas in place before we try anything harder.

At the end of the day, generative adversarial networks were never quite my focus, and while I do understand the basics of how they work (it is basically just inverting a regression network), I still want to do a bit of reading on them before implementing anything too crazy, because I am sure there are a lot of intricacies worth knowing.

Now, for the real work: @ExtrudedSquared, can you get all the sequences together and remove duplicates with your script? Then we can split the full dataset into batches and divide the work between whoever wants to actually give time to this project (this is where you raise your hands if you want to be part of this, guys. Please don’t be shy!). Let’s give it, say, a week for anyone who wants to contribute to make their voice heard. Next week we can split and upload the parts for each person.

After that we can start the grueling work of going through hundreds of images of fatties and sorting them into two folders: the “well behaved sequences” (in which the character changes pose very little between steps) and the “naughty naughty sequences” (which we will try to use later, once version 1.0 is working with the well behaved ones).
Then we will also work on splitting the single-image sequences into multiple images, trying to do as I’ve outlined in the TL;DR. I am not 100% on naming conventions yet.

I still have to read the TensorFlow documentation to see exactly how we are going to import this, and depending on that the naming convention might need to change, so please don’t do anything with naming yet; we should have our hands full with the above tasks as it is.

So… whew… that was a write-up. I hope I was at least somewhat clear, and that people now have a better idea of the scope of the work ahead of us. If you want to buy in, please do let your voice be heard; this is the part where collaboration will be the most useful. The later parts are mostly just tweaking sliders and witchcraft.

1 Like

that’s a really big post ngl

P.S.: Since I made a big rallying post inviting people to this project, I just noticed that before it I talked about the stuff I want to do in very technical terms that a lot of non-computer-science people might not understand, and that absolutely does not mean your help isn’t useful.

So I thought I’d explain in simple terms what it is that we are doing here.

What we are doing is basically an AI that can do two things:

  • First, you can feed it an image and it will automatically fatten the character in it for you. You can fatten it in several levels, and it will do this in a way that is reasonably close to what a real artist would do (though the “visual style” of the image might end up changing between levels as a side effect).

  • Second, it can create new weight gain sequences based on combinations of stuff it learned in the past, kinda like ExtrudedSquared’s already does, but slightly more limited in ver 1.0, as it will have fewer poses to draw from, and with better results for the same reason.

Those are the two basic things it is supposed to do. That is the goal for ver 1.0.

If you do decide to get involved in this, all you need to do is sort some pictures and maybe do some basic photo editing, and you can read all about it on my post above.

Now, I will be 100% honest with you guys: I was already a busy guy before, but the financial crisis due to covid fucked up one of my sources of income and threw my living situation into disarray for good measure, so I cannot dedicate a lot of time to this project. Because of that, I cannot really get involved here as much more than a consultant, dealing with the linear-algebra witchcraft that goes on behind the curtains and giving what little insight I’ve gained into ML programming over the years. (Though I will still do my part in sorting the dataset.)

That being said, all the cool things you’ve seen in this thread so far weren’t really made by me. @ExtrudedSquared is the guy who made most of the advancements in this project, just by messing with stuff, and even if I can’t give much more help than advice, I am sure he will be able to make something cool out of the effort you put in.

1 Like

The thread title should probably be changed to have “looking for volunteers” in it, and the OP should have info on how to “apply” (preferably at the top so it’s easy to find). I guess it would be easiest to have people DM the person in charge of the volunteers (I could be that person if you’d like, @ExtrudedSquared) saying they would like to help.

When it comes to naming, unless it is too restrictive to do this, I would suggest prefixing the images with something that corresponds to the group of images they were in when we split them for the volunteers. Then nobody could accidentally give a sequence the same name as someone else.

1 Like

I’ll try to update the main post with the details by the end of the weekend; I’ve got some stuff to take care of before I’m ready to fully commit to this project. Just to be sure, the call for volunteers is primarily to sort the image sets by quality and maybe do some cropping to get the sets to match one another? I suppose we should make examples of how to sort, crop, and such.

If you’re willing, I’ll make you Chief Executive Volunteer Manager, Yamhead.

I’m mostly worried about actually figuring out how to train using these image sets, which is why I am dragging my feet on working on this. I don’t want to get a bunch of people committed to preparing this dataset when I haven’t even really started to figure out how to implement it. I have very little understanding of how this ML actually works; everything I’ve done so far is basic value changing using code other people have written. I don’t think I have enough understanding to even ask the right questions about how to set this up. I’ll do some more research, but I’m mostly just praying that there is a simple solution, or that Ano wins the lottery and gets more free time to figure it out for me.

I wish I had any news at all about this project, but unfortunately, even after a bit of research, I’m too simple a lad to figure out how to train using sequences in a meaningful way. If anyone finds any research papers or similar things dealing with something like that for me to look at, I might have a better shot, but as of right now the dream of sequence-based training is ded until something changes. Apologies to all those who contributed to the image bank.

That isn’t to say AI is completely dead, I just haven’t been dedicating any time to it recently. If anyone has any particular ideas or things they would like to see done, I can either attempt it myself or try to help however I can.

1 Like

Sent you a PM with a link to a good course.

1 Like

Hi everyone.

Sorry for the delay in responding. Last week was a bit too much for me.

Anyway, here is my tutorial on what people will have to do.

First:

  • You will get a batch of images from us.
  • The first step will be putting each sequence in their own folder.
  • The folder will be named after the sequence, and the pictures inside it are just numbered (so, for example, a folder might contain 1.jpg, 2.jpg, 3.jpg).
  • This is also where we will break single-picture sequences into multiple ones.

Please follow the gif walkthrough later in this post if you have doubts on what to do.
If you don’t know how to break the images, that is fine as well; just set the ones from your batch aside somewhere and we will redistribute them later to people who can do that.

Then:

After that we will separate pictures into subcategories.
Please create this folder structure on your PC. (Refer to the gifs later in the post, or to the script sketch just below the structure.)

->no_background
	->static
		->fat
		->preg
		->unknown_inflation
	->slight_change
		->fat
		->preg
		->unknown_inflation
	->pose_change
		->fat
		->preg
		->unknown_inflation
->simple_background
	->static
		->fat
		->preg
		->unknown_inflation
	->slight_change
		->fat
		->preg
		->unknown_inflation
	->pose_change
		->fat
		->preg
		->unknown_inflation
->complex_background
->comics
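
If you’d rather not click all of that out by hand, this short Python sketch creates the whole tree (change `root` to whatever base folder you like):

```python
# Create the full sorting structure in one go.
import os

root = "sorting"
for bg in ("no_background", "simple_background"):
    for pose in ("static", "slight_change", "pose_change"):
        for kind in ("fat", "preg", "unknown_inflation"):
            os.makedirs(os.path.join(root, bg, pose, kind), exist_ok=True)

os.makedirs(os.path.join(root, "complex_background"), exist_ok=True)
os.makedirs(os.path.join(root, "comics"), exist_ok=True)
```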

We will then work on dividing the sequences among these folders.

The criteria:

For the first decision, the most important thing is to look at whether there is any kind of complex environment in the picture’s background. That refers specifically to stuff that represents something other than the subject of the wg. The network is smart enough to ignore simple backgrounds like gradients and simple brushes. Simple objects like chairs, booths, tables, etc. are fine, but they get sorted into their own folder; the only thing we won’t touch yet is pics with complex scenes in the background.

Comics get a separate treatment here because they are likely to be complicated to break apart, so I don’t want to deal with them yet. We might get enough pictures not to need them, so let’s leave them for later. Think of a comic as anything that has a very complex panel structure in a single picture.

The second decision is pretty self-explanatory. We want to separate sequences in which the character’s pose doesn’t change, then those where the pose changes a little, then those where it changes a lot. I know “changes a little” and “changes a lot” are a little vague, but that is fine; the separation doesn’t have to be perfect. For those who still need some more guidance, here are the things to think about when deciding:

  • Look at the torso/perspective, does it move between pictures? ->pose_change
  • If only the limbs change but the torso/perspective remains exactly the same then ->slight_change

The last decision is going to require a bit of guesswork on your part, but thank god we are actually a group of bastards that really understand fat vs. preg vs. inflation, etc. If you think about it, we are kinda like specialists! Anyway, it is fine if this is not perfect either, but here are a few tips:

  • If there is flab, then it is clearly fat.
  • If you see baby bumps, it is clearly pregnancy.
  • We didn’t try to get vore pics, but if for some reason you think you found one, put it on preg as well. It is similar enough visually and we may deal with it later.
  • If the subject just grows between pics, but the reason is not clear, put it in unknown_inflation. That is going to be our “catch-all” for any expansion for which the reason is not clear.
  • Inflation also goes in that same category, because there are many methods for it and the result looks largely the same.

Here are the gif tutorials:

  • Creating the folders for sequences
    [gif]

  • Sending sequences through the structure
    (btw, while recording this I forgot to structure the other sequences properly like in the tutorial before, but just pretend they are properly sorted and broken already, ok?)
    [gif]

  • Breaking a simple static sequence
    (I didn’t show myself saving the different pictures because you’d see the file structure/name of my pc on the save screen, but just save each version as a separate file after you do what I did in the tutorial)
    [gif]

  • Breaking a sequence with slight pose changes
    (just try to align as best as you can)
    [gif]

  • Breaking a sequence with pose changes
    (once again, just do your best. As long as the images are kinda aligned, it is fine)
    [gif]

I used Photoshop for my tutorials, but you can use other photo-editing programs as well.
This is some pretty basic stuff, so any photo-editing software should be able to do it.

I hope you appreciated the tutorial.

Now @Yamhead, it is back to you, dude.

Btw, might I suggest you also make a new post? This one is kinda messy. You can just copy my previous posts there and make a cleaner introduction to what we are doing if you want.

P.S.:
I found two more sources of sequences, one on e-hentai and the other on bbw-chan. Here are the links:
https://bbw-chan.nl/bbwdraw/res/8.html

@ExtrudedSquared , could you use your crawler to save everything into the dataset?

4 Likes

I’m gonna wait a little while before making the new topic. I remembered that @ExtrudedSquared (correct me if I’m wrong) didn’t want to involve volunteers until they knew how to implement the dataset.
Just tell me when you’re ready and I’ll make it.

I would also need the sequences that you’ve got, ExtrudedSquared. I’m not sure I have the most up-to-date version (and I haven’t pruned it of dupes).

@Ano, is that Volafile room you made still active? It would be simpler if I could just use that one instead of setting up a new one.

The room itself still exists, but all files are deleted from the system after 2 days, so someone has to keep checking in to download anything new that comes that way.

Honestly it’s been a while since I looked at it. I assume it is empty now.

Good that it’s still around. I think it’s the best way to send files around for this, and having to check every 48h isn’t that big of a deal.