Date: 1995-12-28
From: Paul
Subject: Alexander Algorithm

I'd like to request that the Alexander algorithmist put his thinking cap on and consider an adjustment or clarification to the world generation algorithm. The part that bothers me is the allocation of the temporary sparklist array. A quick review of the use of the sparklist (this is from 1+ year old memory, and may be a bit faulty):

The list gets "seeded" with a few random sparks.

Loop until the sparklist is empty: A random spark is removed from the list and its eight nearest neighbors on the map are inspected. Any that are unassigned get assigned and added to the sparklist.
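
For concreteness, here's roughly how I picture that loop, as a C sketch. All the names here (Spark, sparklist, add_spark, seed_world, spread, the WIDTH/HEIGHT constants, the four terrain types) are invented for illustration, and I'm assuming a neighbor inherits the terrain of the spark it spreads from - that part is from memory, so take it with a grain of salt.

    /* Rough sketch only; invented names, not the real code. */
    #include <stdlib.h>

    #define WIDTH      100
    #define HEIGHT     100
    #define UNASSIGNED (-1)

    typedef struct { short x, y; } Spark;

    static short map[HEIGHT][WIDTH];          /* terrain type per cell          */
    static Spark sparklist[WIDTH * HEIGHT];   /* worst case: one entry per cell */
    static long  nsparks = 0;

    /* Assign a cell and put it on the sparklist in one step, so no cell
       can ever be added twice. */
    static void add_spark(short x, short y, short terrain)
    {
        map[y][x] = terrain;
        sparklist[nsparks].x = x;
        sparklist[nsparks].y = y;
        nsparks++;
    }

    /* Clear the map and drop in a few random seed sparks. */
    static void seed_world(int nseeds)
    {
        int i, j, k;
        for (i = 0; i < HEIGHT; i++)
            for (j = 0; j < WIDTH; j++)
                map[i][j] = UNASSIGNED;
        for (k = 0; k < nseeds; k++) {
            short x = (short)(rand() % WIDTH);
            short y = (short)(rand() % HEIGHT);
            if (map[y][x] == UNASSIGNED)
                add_spark(x, y, (short)(rand() % 4));   /* say, 4 terrain types */
        }
    }

    /* Pull random sparks off the list until it drains, spreading terrain
       to any unassigned neighbors along the way. */
    static void spread(void)
    {
        static const short dx[8] = { -1,  0,  1, -1, 1, -1, 0, 1 };
        static const short dy[8] = { -1, -1, -1,  0, 0,  1, 1, 1 };

        while (nsparks > 0) {
            long  i = rand() % nsparks;            /* a random spark...        */
            Spark s = sparklist[i];
            sparklist[i] = sparklist[--nsparks];   /* ...hole filled by the last entry */

            short terrain = map[s.y][s.x];
            int   k;

            for (k = 0; k < 8; k++) {
                short nx = (short)(s.x + dx[k]);
                short ny = (short)(s.y + dy[k]);
                if (nx >= 0 && nx < WIDTH && ny >= 0 && ny < HEIGHT &&
                    map[ny][nx] == UNASSIGNED)
                    add_spark(nx, ny, terrain);
            }
        }
    }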

So how big does the sparklist get? No larger than the number of cells in the world, I guess, but that's a pretty big number (we looked at this briefly a few years back, but didn't come up with anything definite). The sparklist is actually an array of (x,y) pairs, so there are two integers per entry. One could also redundantly store the terrain type here, but I chose not to do this - my algorithm takes the (x,y) pair and looks up the terrain type on the world map.

If we store the (x,y) coordinates as 2-byte integer pairs (which seems reasonable to me), I'm confident the sparklist will never exceed rows x columns x 2 x 2 bytes. For a 100x100 world (not huge), this is 40,000 bytes - a pretty big chunk of memory. Or is it? Maybe I'm being stingy and old-fashioned here.

On the Mac, I made the arbitrary assumption that the sparklist never exceeded a third or so of this size. This was necessary at the time because of an annoying 32K limit on data segment sizes on the 68K processor (or so I now recall). But once in a while this assumption was wrong, and all hell would break loose when I wandered past the end of the array. I guess I could put error checking in, and just quietly restart the algorithm from scratch on the occasions that the sparklist filled up, but error checking is a little expensive in a tight loop like this, and I'd rather deal with something slightly more deterministic.
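
Just to spell out the arithmetic (same invented names as the sketch above, and assuming 2-byte shorts with no struct padding):

    /* A cell is assigned the moment it goes onto the list, so no cell can be
       added twice and the entry count never passes rows x columns. */
    #define WIDTH  100
    #define HEIGHT 100

    typedef struct { short x, y; } Spark;              /* 2 + 2 = 4 bytes */

    #define MAX_SPARKS  (WIDTH * HEIGHT)               /* 10,000 entries  */
    #define WORST_BYTES (MAX_SPARKS * sizeof(Spark))   /* 40,000 bytes    */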

So the challenge, I think, is this: either show a reasonable upper limit on the sparklist size, or switch to a new data structure that doesn't require random access.

A few words on the latter option: The need to know an upper limit on the size would go away if I could dynamically allocate memory for the sparklist. In my mind, this might be the preferable solution. For a while I considered implementing the sparklist as a linked list. This solves the size problem, but creates another - random access into linked lists is unbearably slow. I'm sure this would ruin performance. But maybe random access isn't that important. What would happen if, in the original algorithm (above), we substituted "The last spark is removed from the list" for "A random spark is removed from the list"? Maybe we'd still get a nice random-looking world.
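
Here's a sketch of what I mean by that last variant - again with invented names, and leaving out the spreading loop itself, which would look like the earlier sketch except that it pops the newest spark instead of drawing a random one:

    #include <stdlib.h>

    typedef struct { short x, y; } Spark;

    typedef struct {
        Spark *items;       /* grows on demand */
        long   count;
        long   capacity;
    } SparkStack;           /* start empty: SparkStack s = { NULL, 0, 0 }; */

    /* Grow the array as needed instead of guessing an upper bound up front. */
    static int push_spark(SparkStack *s, short x, short y)
    {
        if (s->count == s->capacity) {
            long   newcap = s->capacity ? s->capacity * 2 : 256;
            Spark *p = (Spark *)realloc(s->items, newcap * sizeof(Spark));
            if (p == NULL)
                return 0;               /* out of memory */
            s->items    = p;
            s->capacity = newcap;
        }
        s->items[s->count].x = x;
        s->items[s->count].y = y;
        s->count++;
        return 1;
    }

    /* "The last spark is removed from the list" - just pop the top. */
    static Spark pop_spark(SparkStack *s)
    {
        return s->items[--s->count];
    }

The thing I'd want to check is whether always taking the newest spark makes the growth crawl along depth-first and leaves the world looking stringy. If it does, drawing a random index into this same growable array (and plugging the hole with the last entry, as in the first sketch) would keep the random removal and still get rid of the fixed-size allocation.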

Whoops! I see I've whiled away the morning at work thinking about this. Maybe I should do some real work before taking a lunch break.

Duk
