I want to try making a visual novel in Ren’Py, but 1) it’s not letting me select the text editor I downloaded on the previous version’s recommendation, and 2) the new recommendation in this version is a Microsoft product, which I don’t trust due to their recent scummy practices. (For those not in the know: Microsoft is “updating” all their customers to Windows 11 and cutting support for Windows 10 in October; funny thing is, only their latest computers can run Windows 11. On top of that, the terms and license agreement require you to let them look into your computer, and I’d rather not a) allow a breach of my privacy, or b) risk that they scrape whatever’s shown on my screen or in my files, possibly to train their AI.)
What do others on here recommend?
For the record, I don’t want to use a spyware Microsoft product - which Visual Studio Code is - for the reasons listed above. If anyone knows a good text editor for Ren’Py, I’d love to hear about it, as well as how to set it up with Ren’Py.
P.S. - before anyone says anything about being paranoid about Visual Studio Code, remember what happened after Adobe moved their photo editing program to a subscription model - and thus made it constantly online: it wasn’t even a decade before they tried to scrape artists’ work to train their AI model, while making it extremely difficult to opt out. DO NOT just go along and pray that history doesn’t repeat - corporations, especially the CEOs, have only one goal in mind, and you are certainly not even in their peripheral vision.
Have you tried Emacs? It has a renpy-mode, and its macro capabilities for editing are unsurpassed.
I think the renpy-mode for Emacs is available to download on the Ren’Py wiki.
Depends a little on your priorities. If you mainly want privacy and local control but otherwise something as “normal” as possible, Notepad++ is great. I’m fairly sure it has a plugin for Ren’Py syntax.
If you’re willing to try something with more of a learning curve for faster text editing for the rest of your life, it’s either emacs (as mentioned above) or Vim.
Do you have any links for those? It seems that my browser doesn’t make finding those easy.
If you already have the software, have you tried right-clicking on, e.g., script.rpy within the game folder, selecting “Open with”, then browsing to the .exe for that application? Failing that, you could go to Settings, search for “default apps”, make sure it’s set to “by file type”, then search for .rpy (and any other Ren’Py file types you want) and set the application for them that way. Either should do the job.
This assumes you’re using Windows. Not sure how it works on a Mac, but I figure from the rant you’re not using a Mac, or it would be largely irrelevant what Microsoft does.
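If you’d rather do the same thing from an elevated Command Prompt instead of the Settings app, the classic assoc/ftype pair sets a file association on Windows. The editor path below is only a placeholder; point it at whatever .exe you actually use:

```shell
REM Register the .rpy extension under a file-type name of our choosing
assoc .rpy=RenPyScript

REM Point that file type at your editor (placeholder path; adjust to your install)
ftype RenPyScript="C:\Program Files\Notepad++\notepad++.exe" "%1"
```

After that, double-clicking any .rpy file (or using “Open with”) should launch your chosen editor.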
That worked! It helped a ton, thank you!
Awesome, glad I could help
Have you looked at NeoVim, Atom, or VSCodium? (VSCodium is a fork of VS Code, which is possible since it’s open source, with all the Microsoft branding and telemetry stripped out. It looks, feels, and works exactly like VS Code.)
VSCodium and NeoVim are my two picks. Recently I’ve been doing most of my coding in VSCodium with the Continue extension, pointed at my own ollama (local, open-source AI) instance for tab AI autocomplete. It’s surprisingly good at filling in function arguments, and in general it’s more helpful than not.
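For anyone curious what “pointing Continue at a local ollama instance” amounts to under the hood, here’s a minimal Python sketch of a completion request against ollama’s default local endpoint. The endpoint and the `suffix` fill-in-the-middle field are ollama’s API as I understand it, and the model name is just the one mentioned in this thread, so treat the details as assumptions rather than gospel:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_completion_request(model: str, prefix: str, suffix: str = "") -> dict:
    """Build a fill-in-the-middle completion request.

    Coder models such as qwen2.5-coder accept a `suffix` field, so the model
    completes the gap between prefix and suffix; that is essentially what a
    tab-autocomplete plugin does every time you pause typing.
    """
    return {
        "model": model,
        "prompt": prefix,
        "suffix": suffix,
        "stream": False,  # one JSON response instead of a token stream
    }

def complete(model: str, prefix: str, suffix: str = "") -> str:
    """Send the request to the local ollama server and return its completion."""
    data = json.dumps(build_completion_request(model, prefix, suffix)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running ollama with the model pulled):
# print(complete("qwen2.5-coder:1.5b", "def add(a, b):\n    return "))
```

Continue handles all of this for you; the sketch is just to show there’s nothing cloud-bound about the loop when the server is your own machine.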
Do super old, un-updated versions of Visual Studio Code have this problem? That was my solution to the concern; I figured whatever Microsoft was going to push on us would require updating the software from its pre-AI-boom state to now.
@zdeerzzz Does anyone have recommendations on what the best 7b to 15b-ish sized models for coding are nowadays, for me to run locally on my VRAM?
They always had a base amount of telemetry, and every Microsoft-authored extension you install adds its own telemetry. Now, with Copilot sending your code to the cloud for the AI to respond to, I can’t imagine how much they’re collecting versus before.
I should start by saying I have an old 980 Ti in my home server doing the AI, so if you have a better GPU you’ll get quicker responses. It also means I usually can’t run anything larger than a 7b model.
I mostly do C++ and Python. qwen2.5-coder:1.5b is what I use for tab autocomplete, since it’s more responsive. qwen2.5-coder:7b is more accurate, but I can type faster than my GPU can respond, so while it’s cool, I don’t find it helpful. Really, just play with the model size until you find the right balance for you. Asking qwen2.5-coder:7b to generate code with Continue is hit or miss, but it can give a good springboard for you to edit or start from. (Use agents and set rules to get the best responses.)
Llama3.1:8b is by far the best at coming up with new ways of doing things. Pass a section of code to it and ask for a type of improvement, and it usually points out something I hadn’t considered yet. I use this model for bouncing ideas off of. Its code generation leaves something to be desired compared to qwen2.5-coder, but it is definitely more creative and thinks more outside the box comparatively (no AI actually thinks outside the box; rather, qwen2.5-coder seems to have a tiny one). I have found Llama to be conceptually correct many times, yet it fails to code the concept in the best way.
My biggest complaint is how much the JavaScript mentality bleeds through the AI’s coding. This is where tab autocomplete shines: because it’s trying to match your previous style in short bits, the mentality doesn’t have a chance to bleed through.
Second biggest complaint: all the AIs seem to favor readability and/or maintainability over performance, and will sacrifice every last bit of performance if it means the code is more readable or maintainable.
Models I found not as good: codellama, mistral, deepseek. DeepSeek, being a reasoning model, at the 7b size takes too long to come up with an answer that’s no better than Llama3.1:8b’s. I think it probably shines at higher parameter counts, where it can do better reasoning.
Having a 10 year old gpu is definitely a limitation (mine is far newer). But it’s good to hear that qwen2.5 comes in various sizes to try out.
So, in regards to speed: if you haven’t done tab autocomplete before, it’s not just about output tokens per second but also input tokens per second.
Tab autocomplete prompts the AI whenever you pause typing.
After it gets the prompt, you have to wait for the GPU to process the input; with tab autocomplete, how quickly it finishes this is usually the difference in how useful it is. My 980 Ti does about 50 tokens a second on output and seems fairly fast with the 7b model; it just takes too many seconds to get to the point of outputting, so I don’t want to wait, in the case of tab autocomplete. On the other hand, the 1.5b and 2.5b models start responding a second after I pause, and within another second they have finished.
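To put rough numbers on that, here’s a tiny Python sketch of the two costs involved. The ~50 output tokens/sec figure is from the post above; the prefill (input-processing) rates are made-up placeholders just to illustrate why the small model feels instant while the 7b one doesn’t:

```python
def seconds_to_finish(prompt_tokens: int, output_tokens: int,
                      prefill_tps: float, decode_tps: float) -> float:
    """Rough end-to-end latency for one autocomplete request.

    prefill_tps: how fast the GPU ingests *input* (prompt) tokens
    decode_tps:  how fast it emits *output* tokens
    Total time = time to read the prompt + time to generate the completion.
    """
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

# Hypothetical numbers for a 1500-token context and a 20-token completion:
big = seconds_to_finish(1500, 20, prefill_tps=150, decode_tps=50)    # 7b-ish
small = seconds_to_finish(1500, 20, prefill_tps=900, decode_tps=200)  # 1.5b-ish
```

With those placeholder rates, the big model spends ten seconds just reading the prompt before the first token appears, while the small one is done in under two seconds total, which matches the experience described above: prompt ingestion, not output speed, dominates autocomplete latency.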