AI generated fats

So I’m sure many of you have witnessed the massive amount of AI-generated shitposting happening lately, and naturally I wondered if I could make something hot out of it. DALL-E mini is very rough around the edges compared to DALL-E 2, but it can still make some passable weight gain stuff. DALL-E 2 is much more realistic, but it isn’t available for public use yet and has filters in place so you can’t make anything explicit.

Here are the results I’ve been getting with DALL-E mini:

Obviously these are all very rough and use somewhat vague prompts compared to the stuff people normally make. The AI is improving over time, though; I’ve seen it slowly learn who certain celebrities are as people make more and more images involving them. If someone really wanted to, they could run the software on their own PC, but it takes about 10x as long to generate a single image unless you have some kind of NASA supercomputer.
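For the curious, here’s a minimal sketch of what running it locally looks like, assuming the community min-dalle PyTorch port; exact argument names may differ between versions, so treat this as a rough outline rather than gospel:

```python
import torch
from min_dalle import MinDalle  # assumed: pip install min-dalle

# Weights download into models_root on the first run.
model = MinDalle(
    models_root="./pretrained",
    dtype=torch.float32,
    device="cuda",   # "cpu" also works, just painfully slow, as noted above
    is_mega=False,   # the small "mini" checkpoint
)

image = model.generate_image(
    text="a very fat man eating a burger",  # placeholder prompt
    seed=-1,      # -1 picks a random seed
    grid_size=1,  # 1x1 grid, i.e. a single image
)
image.save("dalle_mini_out.png")
```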

If anyone else manages to make something more refined, or if you happen to have access to DALL-E 2 ;) I’d love to see it.

14 Likes

The biggest problem with this version of DALL-E is that it cannot do heads at all. Bodies are often fine for some reason, but heads look like something out of a horror movie. DALL-E 2 would be like having superpowers by comparison; you could probably make whatever you want, provided you have a supercomputer handy.

6 Likes

I had seen some things in the Discord before, but I am extremely pleased a post has been made about this!

I have been heavily invested in AI in general. I think things like GPT-3 and even the as-yet-unreleased DALL-E 2 are only the tip of the iceberg for the potential of this wonderful technology (I personally knew about DALL-E 2 before it was a news story). What someone will be able to do with an AI like this in 10-20 years, for free, just lying on their bed eating Cheetos, will make your head spin.

Someone: Write and perform a 3 hour long ballad opera written in limericks about the intricacies of Bionicle Lore, The Pixar Theory, and the corporate history of 7-Eleven. Put special emphasis on Lightning McQueen, and the particular smell of 7-Eleven restrooms.

AI: Ok :ok_hand: (Three seconds elapse…)

AI: Theeeeeere once was a caaaaar-

It may look rough now, but everyone is aware of why DALL-E Mini in particular looks rough. I have no doubt that even after we get our hands on DALL-E 2, it will look like nothing compared to what you’ll be able to make even five years after that. I actually think some of these DALL-E Mini results don’t look that bad on their own, and DALL-E Mini is obviously small potatoes compared to even DALL-E 2.

I believe that the faces are messed up by design. I think the creators of this AI wanted to avoid the headache and negative public image that could result from realistic faces. Imagine what would happen if the AI put normal faces onto the images. Someone asks DALL-E to create a fat version of a popular actress, a politician, or their co-worker. The co-worker would get mad. The fans of the actress would get mad if they saw the image. The fans of the politician would get mad and start a discussion about politics and such. This is true for any depiction that someone could perceive as negative, not just “fat”.

Right now, with no faces, it is just a funny meme. With normal faces it would become a nightmare. It would not even matter whether the AI generated the face as a composite of several images or used a single source image; people would complain regardless as soon as they saw a face that looked similar enough.

1 Like


Dwayne ‘The Rock’ Johnson with a bit of pudge and a beard. Via FaceApp.


DALL-E 2 producing perfectly believable images of humans. Via The Guardian.

It is not intentional. DALL-E Mini is just very underpowered in comparison to its bigger sibling. Human faces are complex, and most of the time AIs have to be specifically built to get all the details right (and sometimes even then they don’t). For something like a landscape, it doesn’t really matter if one element is a bit skewed to the right or a bit warped, but for a human face it does. If this were some kind of intentional filter, OpenAI/the person who put Mini out would more likely have made everyone look like Slender Man, as opposed to the occasionally horrific pseudo-faces Mini produces.

Addressing the supposed backlash that would be received: celebrities and world leaders have been getting this kind of treatment for a while (Kim K is a good example in recent history), and given how the AI works, something like co-workers would be impossible to do, at least with DALL-E 2, since it works from a text prompt rather than an image you put in. You can’t really type in “Carly Watkinson on floor 2 so fat she breaks down to floor 1!!!” and expect it to know what you’re talking about.

If there were going to be a backlash, it would’ve already happened with FaceApp, and with some of the more realistic morphs I’ve seen. Celebrities (particularly musicians) are much more concerned with AI-generated music based on theirs than they’d ever be with something like this.

1 Like

After seeing tons of content made with it, I can agree: it does bodies well, but faces are horrifying.

1 Like

Been playing around with DALL-E mini a little and wanted to share some cool results:



Let me know what you guys think

3 Likes

So AI has already improved a lot since I made this post, and using NMKD’s Stable Diffusion GUI, I was able to make some really good images.

These were all made with the same prompt:





They aren’t perfect, but with a little fine-tuning in the same software, you can make pictures that are almost unrecognizable as AI-generated.
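If you’d rather drive this from Python than from a GUI, the base text-to-image step looks roughly like this with Hugging Face’s diffusers library. This is a minimal sketch, not what NMKD’s GUI runs internally; the checkpoint name and prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; any Stable Diffusion 1.x weights behave the same.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None,  # no censors, matching the local setup described below
).to("cuda")

prompt = "full body photo of a chubby celebrity in a kitchen, soft lighting"
images = pipe(
    prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    num_images_per_prompt=5,  # a batch of 5, like the workflow below
).images

for i, img in enumerate(images):
    img.save(f"base_{i}.png")
```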

Here are some chubby celebrities I made in like 5 minutes:



I used different samplers on each one, but they all work in much the same way. My process usually goes like this:

  1. Generate the base image. Describe what you want the person to be doing in the picture: name, background, shot length, lighting, clothes, etc. This usually takes a few tries; the generated images can either look flawless or look like a Lovecraftian nightmare. I usually do batches of 5, but I have a pretty old GPU, so I’m a bit limited in that regard.

  2. Refine the image. The GUI comes with an inpainting tool that lets you regenerate only certain parts of the image. Usually I’ll inpaint the subject’s stomach using prompts like “chubby, overweight, belly, weight gain”; this usually gives me what I’m looking for. Larger adjectives give larger results. Oftentimes you’ll need to play around with the initial image strength, as well as the scale and number of steps; each sampler has its own sweet spot. After doing the stomach, I’ll usually refine the whole body with a high initial image weight, just to add a slight layer of chubbiness. After that, you can inpaint around the edges of the face to add a double chin/chubby cheeks. (There’s a rough scripted version of this step after the list.)

  3. Once you’re satisfied, you can use the upscaling setting to get 4x resolution. This works okay, but grainy images usually have a hard time upscaling. I also turn up the detail for the final image; the default maximum is 120. This makes the image pop and look almost indistinguishable from reality. (A scripted stand-in for the upscale is sketched below as well.)
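For anyone who’d rather script step 2 than use the GUI’s inpainting tool, here’s a rough equivalent using the diffusers inpainting pipeline. Again just a sketch under assumptions: the checkpoint, file names, and mask image are placeholders, and the GUI’s “initial image strength” only loosely maps onto the `strength` parameter here.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder checkpoint; any SD inpainting checkpoint works the same way.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
    safety_checker=None,  # run uncensored, as with the GUI
).to("cuda")

init = Image.open("base_0.png").convert("RGB").resize((512, 512))
# Hypothetical mask: white where the stomach is (the region to regenerate),
# black everywhere else.
mask = Image.open("stomach_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="chubby, overweight, belly, weight gain",
    image=init,
    mask_image=mask,
    strength=0.75,           # rough analogue of the GUI's initial image strength
    num_inference_steps=40,  # step count and scale have per-sampler sweet spots
    guidance_scale=7.5,
).images[0]
result.save("refined.png")
```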
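And a scripted stand-in for the step-3 upscale. NMKD’s GUI bundles its own upscaler, so this isn’t what it runs internally; it just shows the same 4x idea using the upscaler checkpoint that ships alongside diffusers:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# Stability's 4x upscaler; it is prompt-conditioned, so re-describe the image.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
).to("cuda")

# Downsize first: upscaling a full 512px image eats a lot of VRAM on older GPUs.
low_res = Image.open("refined.png").convert("RGB").resize((256, 256))
upscaled = pipe(prompt="photo of a chubby man", image=low_res).images[0]
upscaled.save("refined_x4.png")
```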

Not trying to shill for whoever made the GUI program, I just think it’s a good tool for this stuff. No censors, it runs locally, and it can run on mid-to-low-end GPUs like mine (GTX 1080).

1 Like

