Core Dump - Work In Progress
Episode 21 - May 12, 1997: Gameplay, screenshots

I considered this for a while and struck upon a great extension of the concept. I won't reveal it completely, but Core Dump will feature a number of puzzles based upon the order in which you pick up certain items. And the items might be far apart. This should make an interesting twist on the platform theme!

The text routines were also completed. This time (unlike Akin) a window appears to display the text. Oh well. You'll see. I also worked on another bit of the plot, incorporating Eric Frey and Earl D. Squirrel into the game.

Some headaches were caused by the fact that my main game code has grown out of proportion, and that's with only about 80% of it coded! For example, I still have to code that new item stuff, and that will also go into the main game part of the code. Hmmms. Back to the drawing board. Some reshuffling of the memory map showed that the problem wasn't too hard to solve. But it's stuff like this that slows down the design process.
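To give a rough idea of the kind of thing I mean, here is a little made-up C sketch (not actual Core Dump code, and the item names are invented): the game remembers the pickup sequence, not just which items you are carrying.

```c
/*
 * Hypothetical sketch of an "order matters" pickup puzzle.
 * None of these names come from the real Core Dump source.
 */
#include <stdio.h>

#define ITEM_KEYCARD 0
#define ITEM_FUSE    1
#define ITEM_CHIP    2

static int required[] = { ITEM_FUSE, ITEM_KEYCARD, ITEM_CHIP };
static int picked[8];
static int picked_count = 0;

static void pick_up(int item)
{
    picked[picked_count++] = item;
}

/* The puzzle only opens if the items were collected in the required order. */
static int puzzle_solved(void)
{
    int i;
    if (picked_count != 3) return 0;
    for (i = 0; i < 3; i++)
        if (picked[i] != required[i]) return 0;
    return 1;
}

int main(void)
{
    pick_up(ITEM_FUSE);
    pick_up(ITEM_KEYCARD);
    pick_up(ITEM_CHIP);
    printf("puzzle solved: %s\n", puzzle_solved() ? "yes" : "no");
    return 0;
}
```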
Another interesting message informed me that the Eindhoven University of Technology (which is where I study computing science) has developed a new compression algorithm which is better than the Lempel-Ziv method! Hurrah! It's based on a weighting algorithm that can select the optimal tree for a given block of data... erm, this is quite difficult to explain... well, I'll conclude by stating that this algorithm currently holds the world record on text compression. It's the best all-purpose algorithm available.
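For the curious, here is a toy C sketch of how I currently understand the weighting idea: a small (depth-3) binary context tree where every node keeps a simple Krichevsky-Trofimov estimate of its own, and internal nodes mix that estimate with the product of their children's. This is only my rough reading of it, not the Eindhoven code, and a real compressor would of course do all of this sequentially while coding:

```c
/*
 * Toy sketch of the context-tree weighting idea: compute a weighted
 * probability for a block of bits using a depth-3 binary context tree.
 * Just an illustration of the mixing rule, not the actual algorithm
 * from the article.
 */
#include <stdio.h>
#include <math.h>
#include <string.h>

#define DEPTH 3
#define NODES (1 << (DEPTH + 1))   /* room for a complete binary tree */

static double pe[NODES];           /* Krichevsky-Trofimov estimate per node */
static int    cnt[NODES][2];       /* zero/one counts per node */

/* Sequential KT update: multiply in P(next bit | counts so far). */
static void kt_update(int node, int bit)
{
    pe[node] *= (cnt[node][bit] + 0.5) / (cnt[node][0] + cnt[node][1] + 1.0);
    cnt[node][bit]++;
}

/* Weighted probability: leaves use the KT estimate, internal nodes mix. */
static double weighted(int node, int depth)
{
    if (depth == DEPTH)
        return pe[node];
    return 0.5 * pe[node] +
           0.5 * weighted(2 * node, depth + 1) * weighted(2 * node + 1, depth + 1);
}

int main(void)
{
    const char *bits = "0110100110010110";    /* example block */
    int n = (int)strlen(bits);
    int i, d;

    for (i = 0; i < NODES; i++) pe[i] = 1.0;

    /* For each bit, update the KT estimates along its context path. */
    for (i = DEPTH; i < n; i++) {
        int bit  = bits[i] - '0';
        int node = 1;                          /* root */
        kt_update(node, bit);
        for (d = 1; d <= DEPTH; d++) {
            int ctx = bits[i - d] - '0';       /* previous bits pick the child */
            node = 2 * node + ctx;
            kt_update(node, bit);
        }
    }

    printf("weighted block probability: %g\n", weighted(1, 0));
    printf("ideal code length: %.2f bits for %d input bits\n",
           -log2(weighted(1, 0)), n - DEPTH);
    return 0;
}
```

The nice property (as far as I understand it) is that the mixing effectively tries every possible pruning of the context tree at once, which is what "selecting the optimal tree" comes down to.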
P.S. Deep Blue beat Kasparov last night! Well, it was only a matter of time, but it's nice to be able to tell computer-skeptics. *grin*

Hmms. The context-tree weighting method I mentioned before turns out to be quite complicated. (I'm now looking at a summary of the article.) But that has never stopped me before. It'll take some time, though. My exams are coming up, and Core Dump still has priority over compression algorithms.

Indeed. I had an amazing flash of new knowledge while working on my IPP course (Implementation of Parallel Programs, Eindhoven Uni.). I was solving one problem by using a tree-like structure to implement a bag (a bag is a list in which elements may occur more than once, which is why it differs from a set). But it then turned out that dynamic memory allocation (needed for the tree version of the bag) was quite problematic on the transputer network we're using to test the programs. The course problem had to be solved in such a way that the size of the problem could be expressed by a constant N. My new solution had a loop of order log(N), which is extremely quick, but it only works if the dynamic memory allocation is also of order log(N). And after two days of work I struck upon a log(N) method for dynamic memory allocation (for the case where block sizes are constant, which is true for the tree) which is probably faster than the original one on the transputer network... Anyway, I solved the dynamic tree problem by using a static tree for the memory allocation... computing science is a strange thing.
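Since I didn't write the actual version down here, a quick C sketch of the idea (invented names, just the bare mechanism): keep a static complete binary tree over the N fixed-size blocks, where every internal node counts the free blocks below it. Allocating walks down from the root to any free leaf, freeing walks back up, both in log(N) steps:

```c
/*
 * Sketch of a fixed-block-size allocator with O(log N) allocate/free,
 * using a *static* complete binary tree of free-block counts.
 * An illustration of the idea described above, not the original code
 * used on the transputer network.
 */
#include <stdio.h>

#define NBLOCKS 8                      /* must be a power of two */
static int freecnt[2 * NBLOCKS];       /* node i's subtree free count; */
                                       /* leaves are NBLOCKS..2*NBLOCKS-1 */

static void pool_init(void)
{
    int i;
    for (i = NBLOCKS; i < 2 * NBLOCKS; i++) freecnt[i] = 1;
    for (i = NBLOCKS - 1; i >= 1; i--)
        freecnt[i] = freecnt[2 * i] + freecnt[2 * i + 1];
}

/* Walk down from the root towards any free leaf: O(log N). */
static int pool_alloc(void)
{
    int node = 1;
    if (freecnt[1] == 0) return -1;            /* pool exhausted */
    while (node < NBLOCKS)
        node = freecnt[2 * node] > 0 ? 2 * node : 2 * node + 1;
    /* mark the leaf used and fix the counts on the way back up */
    for (int n = node; n >= 1; n /= 2) freecnt[n]--;
    return node - NBLOCKS;                     /* block index 0..NBLOCKS-1 */
}

/* Walk back up from the freed leaf: O(log N). */
static void pool_free(int block)
{
    for (int n = NBLOCKS + block; n >= 1; n /= 2) freecnt[n]++;
}

int main(void)
{
    pool_init();
    int a = pool_alloc();
    int b = pool_alloc();
    printf("allocated blocks %d and %d\n", a, b);
    pool_free(a);
    printf("block %d released, %d blocks free\n", a, freecnt[1]);
    return 0;
}
```

In ordinary C a free list would of course give O(1) allocation; the static-tree version is just one way to get a pointer-free, constant-block-size allocator at log(N) cost, which is how I read the trick above.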