How does car physics work in the original Death Rally? Part one, background extraction, background position tracking. (C++, OpenCV)

The original Death Rally is likely my favourite game of all time. Okay, maybe I loved other games too; I grew up with Lara Croft and played through all the classic Tomb Raider games at least three times (by the way, I didn’t think that sentence through, and I’m sure I never will grow up), but Death Rally just calms my nerves. I feel the speed, I can keep the whole race under control, and the world seems even-handed and fair again. Action, ultra-violence, good cars.

Maybe you think that Death Rally is just another ordinary 2D top-down car game, like the Micro Machines series, Super Cars II or Slicks (talking about them makes me feel old, especially because I played all of these very, very old games), but I don’t think so. If you don’t want to try the original Death Rally, you could give the modern remake of Death Rally a chance (or the movie Death Race).

I’ve been planning to reverse-engineer the in-game car physics of Death Rally for a long time, but I never took a crack at it. My biggest motivation is that it is not an easy problem. Actually, I tried to code car physics once; I will upload that attempt later. It is a mini car game written in C++ (and the Windows API), my class project from my first C++ course. (Good old days.) Frankly, its physics is just terrible. Now I want to understand how Death Rally works and why its physics is so cool. Car physics is an important part of the “unique game experience”: players can push other cars, players can learn how to drift, and with the best driving style a very good player beats all the quiche eaters even if they have much better cars than him.

*** Part one: Background/foreground segmentation ***

In this article, I introduce my approach to extracting the background of a track in Death Rally. I want to measure the position and the orientation of the cars with minimal error, and to achieve this, I will extract the background first. Once I have the background, a simple subtraction will solve the problem of foreground/background segmentation. In this concrete case, foreground/background segmentation is much easier than in general, because the background behind the cars is a single static picture, and we see it through a small window: everything lies in the same plane and can only move within that plane.

If you think that we don’t need to write a program to extract the background, that all we need is a video editor, GIMP and enough patience, then you are right. But I chose a different way: I implemented a small OpenCV program which extracts the background. Why? Maybe because I’m a programmer, or because we can reuse it in other programs. So I did the following steps:

Step 1: Install OpenCV, go to the ‘samples’ folder, then copy the file ‘starter_video.cpp’ to your own working directory. After that, set up something like an Automake script plus Emacs/Vi (or an integrated IDE if you feel like having one) that makes it possible to compile the modified files in a simple way.

It is not complicated, especially when you are using Linux, so if you like challenges, forget Linux and buy a Windows licence before this step. Here is a good installation guide. Writing an Automake script for this is also not hard; check this before you write it, it could save time (and it contains the necessary commands).

Step 2: Download the shareware version of the original Death Rally and DOSBox, and record two or three videos of your favourite race track (Rock Zone in my case) using DOSBox and its video capture hotkey (Ctrl+Alt+F5).

Step 3: Determine the absolute position of each video frame (relative to the background).

To calculate it, I determined the shift vector between consecutive frames. In Death Rally, the picture of the race track is one big image and the camera always shows a small portion of it, so if you examine a frame and the frame after it, you see almost the same content, just shifted a bit on the second image. I used a very simple algorithm to determine the shift vector. (Much more sophisticated methods exist.) It tries all possible shift vectors in ascending order of length (it starts with (0,0), then takes (0,1), and so on), shifts the first image by the current vector, and counts the pixels that have different colors on the shifted first image and on the unshifted next image. When this count is less than 10,000, the algorithm accepts the candidate and returns the shift vector.

Figure 1: An illustration of the shift vector between the images.
The red vector is the shift vector.

This method works because the coordinates of the shift vectors are always integer numbers in Death Rally. (This allows higher performance; no filtering is needed.) And the method isn’t painfully slow, because the coordinates of the shift vectors are usually small numbers. (There are exceptions, though; for example, at car crashes I once measured the vector (15, 3), and 15 is not a small number in this context.) I calculated the shift vectors image by image. The sum of the shift vectors gave the absolute position of the current image relative to the first image, without error. Step 3 accomplished.

Step 4: Segment out the background of the images and merge the portions of the background according to their absolute positions.

My solution is not perfect, but it works more or less. I always took two images: a frame and the frame after it. I shifted the first image by the calculated shift vector, then kept the pixels that were the same on the shifted first image and on the next image. I figured the other pixels cannot belong to the background, because they changed within a fraction of a second. And when the intersection of the two images had too many points (the difference between the pixel count of the original image and of the intersection image was less than 1,000), I dropped it, because it was suspicious. (For example, at the start of a race, the cars and the camera don’t move, so all the images are very similar, in fact too similar.) I used a weighted average to merge the intersection images.

Figure 2: The application extract_background in action.
It also determines the absolute position of the images.

These are the necessary steps; with these four steps I could estimate the image of the track background. Here is the source code of the app ‘extract_background’; you can download it if you want to. I ran it on several videos, generated several track background images, and then merged only the track parts with GIMP. At the end, I removed the 3D objects of the track and the background manually:


Figure 3: On the left, a generated image; on the right,
the result of merging the generated images after some editing

In the next article, I will use the image on the right side of Figure 3 to segment out the foreground. I will extract the images of the cars and write a program that can measure the position and the orientation of the cars in Death Rally, and after that I will try to reverse-engineer the car physics.

About vrichard86

I'm Richard and I'm an enthusiastic coder. I want to be better, and I think a professional journal is a great self-improvement tool. Welcome to my blog, to my HQ, to my home on the web!
This entry was posted in C++, OpenCV.