Version Control and the Lone Developer

So I thought I’d post this as working with version control has saved my bacon multiple times. While there are obvious and well documented benefits for teams of people, there are also significant benefits for someone working alone.

The basic principle of version control is that after making something work, you commit the changes to a repository (repo), usually hosted on a server. The system keeps track of the changes between successive versions. This works best with files that contain text (i.e. code), but most systems can also handle binaries.
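The core loop is tiny. Here is a minimal Git sketch of "make it work, then commit it" — the directory and file names are made up for illustration:

```shell
# Create a fresh repo and record a first working version.
mkdir vc-demo && cd vc-demo
git init -q
git config user.email "dev@example.com"   # commit identity, local to this repo
git config user.name "Dev"
echo 'score = 0' > game.txt
git add game.txt                          # stage the file
git commit -q -m "first working version"  # record it in the repo's history
git log --oneline                         # one line per recorded version
```

From then on it's edit, `git add`, `git commit`, repeat — each commit is a version you can diff against or go back to.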

There are different version control systems out there, Git being the most obvious. I still use CVS for most of my stuff, as I’ve been doing version control for longer than Git has been a thing. I don’t want to get into the differences between them; they largely have the same features. Each version control system has its own commands, some have user interfaces, and most IDEs have integration built in or pluggable for common version control systems.

Here are some scenarios where version control has helped me:

Dead Laptop/Computer
This has to be a common reason why game development comes to an end. With version control you have an offsite backup of the latest working version. Fix or replace your machine, pull the latest version from the repo, and you’re back up and running.

What did I just do?
Everything’s just stopped working, but why? Was it something that got changed unintentionally? Ask your version control system what has changed between the last working version and where you are now. Your IDE will likely have a way of comparing the current version with the committed one and letting you revert selected changes. This is also useful if it is your IDE that has had “a moment” and broken the source in some unspecified way.
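In Git terms, that conversation looks something like this (a self-contained sketch — the file and the "mystery edit" are invented):

```shell
# Set up a repo with one committed, working file.
mkdir diff-demo && cd diff-demo && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
printf 'health = 100\n' > game.txt
git add game.txt && git commit -q -m "working"

printf 'health = -1\n' > game.txt   # the accidental change you can't remember making
git diff                            # shows exactly what differs from the last commit
git checkout -- game.txt            # throw the unwanted change away
cat game.txt                        # back to the committed version
```

Newer versions of Git also spell that last revert as `git restore game.txt`, which does the same thing.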

Working on two machines
You usually develop on your desktop, but put a copy on a laptop to work on when travelling. You did some work, but now you can’t remember what you changed. If you’ve been working with version control then it can show you what you’ve done, and you can commit the good stuff.
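A sketch of that, with a local clone standing in for the laptop (directory names are made up; with a real server you would `git pull` and `git push` between the machines instead):

```shell
# "Desktop" repo with some committed work.
mkdir desktop && cd desktop && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo 'level 1' > levels.txt
git add levels.txt && git commit -q -m "desktop work"
cd ..

git clone -q desktop laptop          # the laptop gets a full copy
cd laptop
git config user.email "dev@example.com" && git config user.name "Dev"
echo 'level 2' >> levels.txt         # some work done while travelling

git status --short                   # which files did I touch?
git diff                             # ...and exactly what the changes were
git commit -q -am "laptop work"      # commit the good stuff
```

Back home, pulling the laptop's commits into the desktop repo brings both machines back in sync.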

Working on Windows and Mac/Unix
Windows text files end lines with CR+LF, while Mac/Unix files use LF alone. Version control systems can automatically handle the simple conversion needed to move text files from one system to the other.
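In Git this is controlled per-repo with a `.gitattributes` file and/or the `core.autocrlf` setting. A sketch of one common setup (these are typical choices, not the only ones):

```shell
mkdir eol-demo && cd eol-demo && git init -q
# Let Git detect text files and normalize them to LF inside the repo.
printf '* text=auto\n' > .gitattributes
# On Mac/Unix: strip any stray CRLF on commit, leave checkouts alone.
git config core.autocrlf input
git config core.autocrlf        # prints the setting back
```

On a Windows machine you would typically use `git config core.autocrlf true` instead, so files are checked out with CRLF endings but stored in the repo as LF.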

I’m sure I fixed this before, it used to work, when did it break?
Version control can also show the changes between any two previous versions, so you can go back and work out exactly when something broke.
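A small self-contained example of diffing two historical versions (three fake "releases" of one file):

```shell
mkdir history-demo && cd history-demo && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
for v in 1 2 3; do
  echo "version $v" > game.txt
  git add game.txt
  git commit -q -m "release $v"
done
git log --oneline            # list the history, pick the two versions to compare
git diff HEAD~2 HEAD~1       # the changes between two *old* versions
```

For the "when did it break?" question specifically, Git also has `git bisect`, which binary-searches the history for the first bad commit.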

I need to go back and fix an old version
So you’ve got your current development (dev), an early access version (v4), and a public version (v3). Someone finds a bug in the public version - it’s got more eyes on it, so this is quite likely. This is where some more version control features come into play: tag, branch, and merge.

  • When you make a release you tag it, basically recording that this exact version of the code produced that release. Now you can rebuild that release precisely any time you want.
  • The bug report comes in, relating to v3. Use version control to branch the development at v3, fix the bug, and release the updated version to the public.
  • That bug also needs to be fixed in v4. Use version control to branch the development at v4 as well. Rather than try to reproduce the edits you made to v3, you can merge the v3 changes into v4. This can be largely automatic, unless the same lines of code were changed both between v3 and v4 and by the v3 bug fix, in which case you resolve the conflict by hand.
  • Finally the change can be merged back into the dev version ready for the next release.
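The whole scenario above, sketched in Git (v3/v4/dev come from the scenario; the files and commit messages are invented, and the bug fix is kept in a file v4 didn’t touch so the merges go through cleanly):

```shell
mkdir release-demo && cd release-demo && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

echo 'game code' > game.txt
git add game.txt && git commit -q -m "public release"
git branch -m dev                       # call the main line "dev", as above
git tag v3                              # mark exactly what shipped as v3

echo 'new feature' > feature.txt
git add feature.txt && git commit -q -m "early access release"
git tag v4                              # ...and what shipped as v4

echo 'experiments' > experiments.txt
git add experiments.txt && git commit -q -m "ongoing development"

git checkout -q -b fix-v3 v3            # branch at the v3 release...
echo 'game code, patched' > game.txt
git commit -q -am "fix the reported crash"   # ...and fix the bug there

git checkout -q -b fix-v4 v4            # branch at v4...
git merge -q fix-v3 -m "merge the v3 fix"    # ...and merge the same fix in

git checkout -q dev                     # finally, merge into dev as well
git merge -q fix-v3 -m "merge the v3 fix into dev"
git log --oneline --all --graph         # the whole branch/tag/merge history
```

Each release line now carries the fix, and it was only ever written once.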

And if/when you do build a team …
Just point your new members at the repository!

The downsides

  • It’s something else to do, and takes a little time to learn.
  • Version control works best with files containing text/code rather than binaries - it can’t show changes between two binary files.
  • It works best when the source is in multiple smaller files. So something like a Twine game with everything in one file is more cumbersome to work with than the same game built with Tweego with one file per passage.

I hope that helps someone!


What about using GiT?

@madone I don’t understand your question. What about Git? I mentioned it. I use it for some things. I’m no expert in it.

A nice explanation of the benefits of using Version Control Systems! I know for me, I fell in love with VCSs after I lost a huge project in college due to data corruption. Now, except for very rare instances, I use a VCS for all of my projects and have never regretted it. I do think there are a few caveats that I would like to add that would be handy for everyone to know.

Not all VCSs are the same

For the most part, the VCS market is dominated by three players:

  1. SVN - The oldest of the three and one of the simplest, SVN is quite popular with older codebases and corporations. There are quite a few mature tools around it that simplify its use. The main downside for someone new to VCSs is that SVN requires a central server for the repo, which can make SVN harder to set up and less resilient than the other options when it comes to resisting data corruption.

  2. Git - Currently the most popular VCS, Git has a variety of free tools and services available for it, like GitHub, GitLab, and Bitbucket. Git is what is called a distributed VCS, which means every user with access to the repo keeps a full copy of that repo and its history on their own computer, making it very resistant to data corruption and loss. Also, Git can host repos locally on the dev’s computer, making Git easier to set up than SVN (it is also readily available in every Linux distribution’s package manager, and often already installed). The main disadvantages of Git are its relative lack of mature graphical tools (the ecosystem mainly focuses on CLI-based tools) and the fact that the system expects the user to know what they are doing, which can make it hard to learn and use.

  3. Mercurial - Mercurial appeared at around the same time as Git and aims at similar goals. It has many of the strengths of Git but focuses on improved usability, and to my understanding handles binary files better than SVN or Git. The main disadvantage is Mercurial’s slow adoption: since its community is so small, there are not many services that support it, and more services are dropping support for it than adding it.

In general, Git tends to be the go-to due to the amount of support it has from the developer community at large, but SVN and Mercurial have their own strengths that could fit your project better, so it is best to research the alternatives when choosing what to go for.

VCSs do not magically preserve your data

It’s not the VCS that prevents data loss, but the fact that VCSs usually make use of an offsite repository or service that is not on the dev’s local computer. If you only commit to a local Git repo, or run an SVN server on the same computer you are developing on, you are no safer from data loss than you were when you were not using a VCS. It’s only when the VCS is paired with a service like GitHub, GitLab, or Bitbucket (or, with a distributed VCS like Git, when multiple devs each hold a full copy) that you start getting some of those benefits for free.
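Hooking a local Git repo up to a hosted service is a couple of commands — the URL below is purely a placeholder, so the push is left commented out rather than actually run:

```shell
mkdir backup-demo && cd backup-demo && git init -q
# Point the repo at a hosted service; substitute your own repo's URL.
git remote add origin https://example.com/you/yourgame.git
git remote -v                 # confirm where commits will be backed up to
# Then, after each commit, send your work offsite:
#   git push origin HEAD
```

Once the remote is set, an up-to-date offsite copy is one `git push` away after every commit.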

Why binary files are a pain

VCSs try to save space by storing only the changes that were made to a file, in a process called “change tracking”. Unfortunately, with binary files it can be very difficult for a VCS to figure out what changed, so they usually just save the whole new file. This leads to repo bloat, as there is a full copy of that file for every time it has been changed, which can degrade performance (basically, you could be downloading the same file ten times over).

Many VCSs offer systems to help with what they call “Large File Storage”, but they can be a bit difficult to work with sometimes. In general it is OK to commit binary files into the repo directly if:

  1. The file is small
  2. The file will not change often

Do keep in mind that the best change tracking can do with binary files is revert to an older version, but there are new formats coming out for some things, like 3D models, that aim to be more VCS-friendly.
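Git’s large-file extension is git-lfs; `git lfs track "*.png"` writes a rule like the one below into `.gitattributes`, routing matching files through large-file storage instead of the normal history. The rule is written by hand here so the sketch doesn’t require git-lfs to be installed:

```shell
mkdir lfs-demo && cd lfs-demo && git init -q
# The tracking rule git-lfs would generate for PNG images:
printf '*.png filter=lfs diff=lfs merge=lfs -text\n' > .gitattributes
cat .gitattributes
```

With git-lfs actually installed, you would run `git lfs track "*.png"` instead and commit the resulting `.gitattributes` alongside your assets.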

I’m sure that there are schools of thought that frown upon it, but as a severely scatterbrained person, committing my work almost as habitually as hitting the save function in the IDE has been great for my workflow. Whenever I’m done for the day, taking a break, or just switching focus to another part of the same project, I look through my changes, try to organize files into like groups of edits, and make a number of small commits rather than throwing all my changes together.

On group projects, I’m sure there would be someone who’d complain about clogging the commit history or pushing code (even to my own branch) that isn’t production-ready or even properly buildable, but being able to look through each individual edit or set of changes helps immensely in remembering what I’ve done and why I’ve done it, helps with rooting out causes of minor bugs that I might not find until much later, and goes a long way towards building a detailed change log once I’m ready to push a release build.

My only issue with how Git handles Windows/Linux disparities isn’t even specifically an issue with Git so much as NTFS in general. Since Windows doesn’t really have the concept of an executable bit, file permissions tend to get a bit messed up if I’m trying to pull in files from my Windows drive under Linux. Files that were committed without executable permissions, then copied to a Windows drive and pulled back, tend to have the execute bit set anyway, which Git then flags as a change to be committed even if the file is otherwise identical to the source file.
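One workaround for repos that cross that boundary is to tell Git to ignore permission-bit differences entirely (a per-repo setting, so it doesn’t affect pure-Linux projects):

```shell
mkdir mode-demo && cd mode-demo && git init -q
# Stop Git treating executable-bit changes as modifications in this repo.
git config core.fileMode false
git config core.fileMode      # prints the setting back
```

The trade-off is that genuine permission changes (e.g. marking a script executable) will no longer be tracked in that repo either.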

Yeah, I’m from the unix world and use cvs (old-school).