[CakeMix: AI Companions] - Make your own characters and live out your fantasies

Thanks! They don’t change yet but I’m currently programming it in, so I hope to have that in the next update.

3 Likes

I realized when reading this that there’s no way to go to the Character Creator during chat. I swear I had that in there before, but I must have broken it somewhere along the line. Anyway I’ve added it to my to-do list, and your workaround is a good way to do it for now.

1 Like

putting it in as we speak :slight_smile:

Thanks so much everyone for the comments and suggestions!

2 Likes

Thank you! I’m looking forward to the next update!

How exactly will this be implemented? When creating a character, will it be possible to set growth coefficients?

And are there any other updates planned related to weight gain?

1 Like

I’m setting it up so there are “versions” of a character. When in the Character Creator, you’ll be able to switch between versions (A and B for now), and have different settings for each version. So in the default A version, let’s say, she can be thin. And in version B, you can dial everything up so she’s large. When you save the character, it will save both versions.

So then in chat it will be able to use those two versions. I’m still making this part so this might not be exactly how it works in the build, but…

I’m planning on having options for how you want the change to take place. You’ll be able to switch between versions instantly with buttons, or use a slider to blend between them smoothly (manually). There will also be “timed” options, so you can set the change to take place during the chat, and I’m also putting in a way to set the change on the character herself so it happens over real time. So you might, for instance, want her to grow larger over a real time period of three weeks, and every time you talk to her, the system will advance the growth a bit, so she’ll literally change every day.
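To make the mechanics concrete, here’s a minimal sketch of how the slider and timed options described above could work under the hood. All names here are hypothetical, not taken from the actual build: a blend factor t in [0, 1] maps version A onto version B, and in the timed mode t is derived from elapsed real time.

```python
from datetime import datetime, timedelta

def lerp(a: float, b: float, t: float) -> float:
    """Blend a version-A value toward its version-B value; t=0 is A, t=1 is B."""
    return a + (b - a) * t

def timed_t(start: datetime, now: datetime, duration: timedelta) -> float:
    """Timed mode: fraction of the change completed so far, clamped to [0, 1]."""
    total = duration.total_seconds()
    return max(0.0, min(1.0, (now - start).total_seconds() / total))

# Hypothetical example: version A is thin (weight 0.1), version B is large
# (weight 0.9), growing over three real-world weeks.
start = datetime(2024, 1, 1)
t = timed_t(start, start + timedelta(weeks=1), timedelta(weeks=3))  # one week in
weight = lerp(0.1, 0.9, t)  # about a third of the way from A to B
```

The same `lerp` would serve the manual slider directly (t comes from the slider instead of the clock), which is one reason a single blend factor is a convenient representation.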

So basically I have a bunch of ideas that I’m putting in, to give players a lot of control over things.

This system will work great for weight gain but also for other things, like say you make version A a younger character, and version B an older version. Or version B could be pregnant, or really anything (I also have monster girl stuff in the pipeline). Lots of possibilities!

I’m totally open to ideas, as well, so everyone, please share.

1 Like

Hey bro, definitely loving it so far.

Quick question though, how do we rotate their bodies?

1 Like

Is anyone else struggling to download the LLM? I go to download it and the progress bar never moves. My laptop is a little older but it should be able to run everything just fine. I’m no PC tech expert so I’m a bit lost as to why it’s struggling.

1 Like

Thank you! I should probably put a controls section in it.

Right Mouse + move mouse: rotates the camera around the character.

Middle Mouse pressed: pans the camera and/or moves it up and down.

Middle Mouse Wheel: zooms the camera.

First, make sure you have the latest 1.1.1 demo, because the older one had trouble downloading the LLM.

If you do have that one and it’s freezing on the LLM download, try first reinstalling (unzipping) the game to somewhere else and see if that solves it.

Finally, if none of that works, open CakeMix then go to Settings/Advanced, look for the button that opens your local data folder. Inside that folder you’ll see a Player.log file. Email that file to ripenedpeach[@]gmail.com (remove the brackets), and I’ll take a look and get back to you with a solution.

1 Like

Interesting project! I was a bit concerned about the large download, but I do like that it’s offline. It held up decently well as I pushed its bounds; it only got a bit tripped up on more complex interactions, with more directive instructions and keeping track of who was doing what in the scene.

It’d definitely be nice if the LLM could return some YAML or JSON state/values at the end of its responses that you could grab, so it could drive the model automatically without player interaction (and not just sliders like weight, as you’ve said, but also expressions, poses, and such). A bit of config from the model setup could be passed in the initial instruction set to tell the LLM what the possible values/options are for that model.
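The idea above could be sketched roughly like this, assuming the LLM is instructed to append its state as a fenced JSON block after the dialogue (the keys here are made-up examples, not anything the game defines):

```python
import json
import re

# Match a ```json ... ``` block at the very end of the reply.
STATE_RE = re.compile(r"```json\s*(\{.*?\})\s*```\s*$", re.DOTALL)

def split_reply(raw: str):
    """Split an LLM reply into (dialogue_text, state_dict_or_None)."""
    m = STATE_RE.search(raw)
    if not m:
        return raw.strip(), None
    try:
        state = json.loads(m.group(1))
    except json.JSONDecodeError:
        return raw.strip(), None  # malformed block: fall back to plain text
    return raw[:m.start()].strip(), state

reply = ('She smiles and waves.\n'
         '```json\n{"expression": "smile", "pose": "wave", "weight": 0.4}\n```')
text, state = split_reply(reply)
```

The parse-failure fallback matters in practice, since small models don’t always emit valid JSON every time.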

It’d also be nice to have a slightly larger context to pass in, even just a few more characters (128+). It got a bit tight when trying to give the LLM some direction or instructions beyond plain dialogue.

1 Like

It’s promising, but in my wee shot at the demo I found the characters frequently lost the thread of what was being discussed or forgot things within a few minutes.

1 Like

Yeah, it does seem like that. If you persist a bit with a particular topic, it does become a bit more sticky. But it won’t really keep memory/context of where people are in the scene and what they’re doing if you add another person, or swap around what people are doing or where they are.

1 Like

Thanks for your thoughts!

CakeMix actually has a system I coded in early on that does what you’re suggesting as far as the AI telling the character what to do. I ended up turning it off for the time being because it causes slow-down when the AI has to basically examine what it just did, parse it, and send it to a back channel so the game can work with it. I also experimented with NOT using actual AI to parse it, and wrote a whole NLP framework to do that, which was faster but less consistent.

With all that said, I’m going to revisit it soon, and try some newer LLMs that might handle it better, so yeah I’ll definitely take a look. :slight_smile:

Regarding context size, I assume you mean how much you can type in. I set a soft limit on it because early testers would type a ton of stuff and the AI would get confused. You can always type more. The limit number is there as a reminder to try to keep it concise.

As a test, I just tried a longer, sort of ridiculous input (386 characters) on two separate LLMs and got the following results:

Input: First, I want you to take off your bra, then walk over to the curtains and open them up, then wave to our neighbor so he sees you. Next I want you to get a Dr. Pepper from the fridge, and pour it all over yourself until your panties are dripping wet. Then tell me a joke about a bucket, and make it funny. Then suck my cock. Cool? Oh yeah, put your bra back on before you suck my cock.

Using the default Mistral Instruct LLM: Fucked by Moonlight

Using Ellaria-9B Q4_K_M (Gemma): Surrender's Sweet Depravity

Mistral held its own and did what I asked, while Ellaria did everything but faltered at the end.

So yeah you can type more, but the AI might mess it up a bit.

That is, again, assuming you are talking about the text input. If not, forgive me. If you’re more talking about LLM context in a technical sense, you can mess with those settings inside the llm\llmSettings\ folder in the local data directory.

I’m thinking of writing up some dev logs in a blog and on Patreon to detail some of this stuff out.

2 Likes

Sorry to hear that. You know what I just realized? I’ve made some major changes to the story-telling in the code, but I don’t know if I’ve fully tested the ‘official’/demo scenarios since then, so thanks for the heads-up; I’ll take a much closer look.

On the subject of the AI losing track, etc… It’s funny what it comes up with sometimes. CakeMix uses small AI models which are far from perfect in their consistency, but sometimes it just makes me laugh out loud. The other day one of the characters mentioned the pubic hair under her bra.

Uhhhhhmmmm… :thinking:

But yeah because of the small size of the LLMs, it’s not really possible to guarantee that it stays on target, but I do try my best to code things behind the scenes to make it better. If you wouldn’t mind, could you help me out? You can turn on chat logging in the Settings.

Then after a chat is finished (using the finish option), after a few seconds it will show a button to open the chat logs folder. If you can email me (ripenedpeach[@]gmail.com) one or two chat logs where it goes off the track, it would be super helpful. Either the html or .log versions are fine. I can look at those and get a better idea of how to improve things.

Thanks for your thoughts!

I’m working on a memory system so it can keep track of things that happen over time. Trying to make sure it’s fast enough, though, so it doesn’t hold things up too much.

Yeah sure, I’ll fire it up again at some point this week and get you some logs. If it helps, I noticed it most with the third girl - I forget her name. Didn’t test the second girl yet. The first girl (the one with the tattoos) was a bit more consistent, but did tend to railroad me by having my character respond within her own response, leaving me feeling like I was supposed to reply as her, or to ignore sections of the response in order to answer as my character.

1 Like

Excellent, yeah I would appreciate taking a look at the logs, thanks.

I was playing around with the demo earlier and noticed it would sometimes say something off track almost right away. I made those scenarios early on, and I think maybe they’re too verbose, so I’m going to tighten them up and see if that helps.

And yeah, one of the things the AI does sometimes is answer for the player, leaving me thinking… what am I supposed to say?? I’m still tweaking things to try to avoid that.

What happens with these LLMs is if you mention something to them (my prompts in the background, for instance), they’ll focus on it. So if I give the AI instructions in the back-channel like “Don’t mention anything about birthday parties”, it will then go on to focus on birthday parties, lol.

this looks so so cool, can’t wait to try it out when I get back home to my main PC in a few days! (turns out my old ass laptop is way too ass to handle the demo version lol) Just from the UI alone I’m already super intrigued, will be supporting this on patreon and I can’t wait to see how this develops, super good job so far! Keep it up!

1 Like

Thanks so much! I hope you enjoy it. :slight_smile:

1 Like

I’m really curious how you went about integrating the LLMs into the game. Did you just use llama.cpp?

Also, assuming you used something like llama.cpp, can you not turn up the number of context tokens to the model’s limit (given enough RAM) to help with this?
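For reference on the RAM side of that question: in llama.cpp-style runtimes the main cost of raising the context length (`n_ctx`) is the KV cache, which grows linearly with context. A rough back-of-the-envelope estimate (the model shape below is a typical Mistral-7B-like configuration, stated as an assumption, not a measurement from CakeMix):

```python
def kv_cache_bytes(n_layers: int, n_ctx: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Rough KV-cache size: 2 tensors (K and V) per layer, one vector of
    head_dim elements per context position per KV head."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# Assumed Mistral-7B-style shape: 32 layers, 8 KV heads (GQA),
# head_dim 128, fp16 cache (2 bytes per element), 8192-token context.
gib = kv_cache_bytes(n_layers=32, n_ctx=8192, n_kv_heads=8, head_dim=128) / 2**30
```

So under those assumptions an 8k context costs roughly a gigabyte of cache on top of the model weights, which is why bumping context on an older laptop isn’t free.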

1 Like