Core Dump - Work In Progress
Episode 21
May 12, 1997: Gameplay, screenshots

Last week saw me setting the priorities of the sound effects. Usually you can only play one sound effect at a time, so while one sound is playing, another sound effect may occur. You then have to decide whether to ignore the second one, or to interrupt the first and play the second instead. These decisions are based on a priority number: if a sound effect occurs while another is already playing, it is ignored if and only if its priority is lower than that of the sound currently playing. This ensures that things don't sound awkward, e.g. a loud explosion being cut off by a very quiet bleep. In general, louder sound effects get higher priorities. Exceptions to this rule are "important" sound effects, which get higher priorities even when they're not so loud.
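A minimal sketch of that decision rule, written in C for readability (not the language the game itself is written in); the effect names and the priority numbers are invented for illustration:

#include <stdio.h>

/* Hypothetical sound-effect descriptor: every effect carries a priority.
   Louder or "important" effects simply get larger values. */
typedef struct {
    const char *name;
    int priority;
} SoundEffect;

/* Effect currently playing, if any. In the real game this would be cleared
   again once the effect finishes. */
static const SoundEffect *current = NULL;

/* Request playback of a new effect: it interrupts the current one unless
   its priority is lower, in which case the request is simply ignored. */
static void request_sfx(const SoundEffect *sfx)
{
    if (current == NULL || sfx->priority >= current->priority) {
        current = sfx;                      /* interrupt and start the new effect */
        printf("playing: %s\n", sfx->name);
    } else {
        printf("ignored: %s\n", sfx->name); /* keep the louder/more important sound */
    }
}

int main(void)
{
    static const SoundEffect explosion = { "explosion", 9 };
    static const SoundEffect bleep     = { "bleep",     1 };
    static const SoundEffect alarm     = { "alarm",     9 };  /* "important": high priority despite modest volume */

    request_sfx(&explosion);   /* plays */
    request_sfx(&bleep);       /* ignored: it would cut off the explosion */
    request_sfx(&alarm);       /* plays: equal priority may take over */
    return 0;
}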
But I did a lot more. Does anyone know the game Vectorman? It's a Megadrive platform game that has a special bonus mechanism: besides the usual "extra life" items and so forth, there are also "x10" items. Take an "x10" item, and if you then pick up another item within a given time limit, the effect of that second item is multiplied by ten. This way you can collect 10 extra lives, or 10 times the firepower. I considered this for a while and struck upon a great extension of the concept. I won't reveal it completely, but Core Dump will feature a number of puzzles based on the order in which you pick up certain items. And the items might be far apart. This should make an interesting twist on the platform theme!
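To make the Vectorman-style part of that mechanic concrete, here is a tiny C sketch of how a timed "x10" multiplier could work; the time limit, names and effect values are all made up, and the Core Dump item puzzles themselves are of course not shown:

#include <stdio.h>

/* Hypothetical pickup mechanic in the Vectorman style: an "x10" item arms a
   multiplier, and the next item collected within the time limit has its
   effect multiplied by ten. */

#define X10_TIME_LIMIT 300   /* frames the multiplier stays armed (assumed value) */

static int x10_timer = 0;    /* frames left; 0 means no multiplier armed */

static void update_timers(void)          /* call once per frame */
{
    if (x10_timer > 0) x10_timer--;
}

static void pick_up_x10(void)
{
    x10_timer = X10_TIME_LIMIT;          /* arm the multiplier */
}

/* Apply an ordinary item (extra life, firepower, ...) with a base effect. */
static int apply_item(int base_effect)
{
    int effect = (x10_timer > 0) ? base_effect * 10 : base_effect;
    x10_timer = 0;                       /* the multiplier is consumed by one item */
    return effect;
}

int main(void)
{
    pick_up_x10();
    update_timers();                                     /* one frame passes */
    printf("extra lives gained: %d\n", apply_item(1));   /* 10: still within the limit */
    printf("extra lives gained: %d\n", apply_item(1));   /* 1: the multiplier was consumed */
    return 0;
}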
I also decided that you'll be able to save the game situation to disk, but only at computers, a bit like the data images in Black Cyclon. This way you can't just save your way through a difficult part of the game. The text routines were also completed. This time (unlike Akin) a window appears to display the text. Oh well. You'll see. I also worked on another bit of the plot, incorporating Eric Frey and Earl D. Squirrel into the game.

Some headaches were caused by the fact that my main game code had grown out of proportion, and that is with only about 80% of the main code written! For example, I still have to code that new item stuff, and that will go into the main game part of the code. Hmmms. Back to the drawing board. Some reshuffling of the memory map showed that the problem wasn't too hard to solve, but it's stuff like this that slows down the design process.

Another interesting message informed me that the Eindhoven University of Technology (which is where I study computing science) has developed a new compression algorithm, which is better than the Lempel-Ziv method! Hurrah! It's based on a weighting algorithm that can select the optimal tree for a given block of data... erm, this is quite difficult to explain... I'll just conclude by stating that this algorithm currently holds the world record on text compression. It's the best all-purpose algorithm available.
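For the technically inclined: the method in question is the context-tree weighting (CTW) algorithm I refer to again below. As far as I understand the summary, its central trick is a single recursion that mixes the predictions of all possible context trees rather than committing to one. Roughly, for a context (tree node) s with a maximum context depth D:

    P_w(s) = P_e(a_s, b_s)                                   if s is at depth D
    P_w(s) = 1/2 * P_e(a_s, b_s) + 1/2 * P_w(0s) * P_w(1s)   otherwise

where a_s and b_s count the zeros and ones seen so far after context s, and P_e is the Krichevsky-Trofimov estimator. The weighted probability at the root then drives an arithmetic coder.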
P.S. Deep Blue beat Kasparov last night! Well, it was only a matter of time, but it's nice to be able to tell computer skeptics. *grin*

Hmms. The context-tree weighting method I mentioned before turns out to be quite complicated. (I'm now looking at a summary of the article.) But that has never stopped me before; it'll just take some time. My exams are coming up, and Core Dump still has priority over compression algorithms. Indeed.

I had an amazing flash of new knowledge while working on my IPP course (Implementation of Parallel Programs, Eindhoven University). I was solving one problem by using a tree-like structure to implement a bag (a bag is a collection in which an element may occur more than once; that's what distinguishes it from a set). But it then turned out that dynamic memory allocation (needed for the tree version of the bag) was quite problematic on the transputer network we're using to test the programs. The course problem had to be solved in such a way that the size of the problem could be expressed by a constant N. My new solution had a loop of order log(N), which is extremely quick, but it only works if the dynamic memory allocation is also of order log(N). After two days' work I struck upon a log(N) method for dynamic memory allocation (for the case where all block sizes are equal, which holds for the tree) that is probably faster than the original one on the transputer network... Anyway, I solved the dynamic tree problem by using a static tree for the memory allocation... computing science is a strange thing.
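To make that last trick concrete, here is a small sketch of one way an O(log N) allocator for equally sized blocks can be built on a static complete binary tree. This is my reconstruction of the idea, not the actual course code, written in C rather than the language used on the transputers, with N chosen arbitrarily:

#include <assert.h>
#include <stdio.h>

/* O(log N) allocator for equally sized blocks, using a static complete binary
   tree of free-block counts. tree[1] is the root, node i has children 2*i and
   2*i+1, and the leaves tree[N] .. tree[2N-1] stand for the N blocks themselves
   (the block payload would live in a separate, statically reserved pool).
   N is assumed to be a power of two; the value here is just an example. */

#define N 8

static int tree[2 * N];          /* number of free blocks in each subtree */

static void init_pool(void)
{
    for (int i = N; i < 2 * N; i++) tree[i] = 1;            /* every leaf starts free */
    for (int i = N - 1; i >= 1; i--) tree[i] = tree[2*i] + tree[2*i + 1];
}

/* Allocate one block: walk from the root down to some free leaf, decrementing
   the counts along the way. Takes log(N) steps. Returns -1 if the pool is full. */
static int alloc_block(void)
{
    if (tree[1] == 0) return -1;
    int i = 1;
    while (i < N) {
        tree[i]--;
        i = (tree[2*i] > 0) ? 2*i : 2*i + 1;                /* descend into a subtree with space */
    }
    tree[i] = 0;                                            /* mark the leaf as used */
    return i - N;                                           /* block number 0 .. N-1 */
}

/* Free a block: mark its leaf free again and update the counts up to the root. */
static void free_block(int b)
{
    int i = b + N;
    assert(tree[i] == 0);                                   /* must currently be allocated */
    tree[i] = 1;
    for (i /= 2; i >= 1; i /= 2) tree[i]++;
}

int main(void)
{
    init_pool();
    int a = alloc_block();
    int b = alloc_block();
    printf("allocated blocks %d and %d\n", a, b);
    free_block(a);
    printf("free blocks left: %d\n", tree[1]);
    return 0;
}

Allocation walks from the root down to any free leaf and freeing walks back up, so both take log(N) steps; and because the tree itself is a fixed-size array, no dynamic allocation is needed to manage the dynamic allocation.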