Generating a Meandering Line Between Two Points

I had an interesting problem to solve recently for a project I’m trying to revive. Given two points, create a meandering line between them such that the y co-ordinate of each intermediate point is no more than 1 pixel higher or lower than that of its neighbouring points.

Here’s an example. We’re trying to draw a line from point A to point B:

| | | | | | |B|
| | | | | | | |
|A| | | | | | |
| | | | | | | |

This is a valid line:

| | | | | |x|B|
| |x| | |x| | |
|A| |x|x| | | |
| | | | | | | |

This is an invalid line:

| |x| | | | |B|
| | |x|x| |x| |
|A| | | | | | |
| | | | |x| | |

The problem becomes obvious if you start at point A and generate arbitrary y values that adhere to the rules. Starting out is easy, but how do you ensure that the line meets up with point B?

Here’s the recursive, divide-and-conquer algorithm I came up with:

  • Find the x co-ordinate of the point exactly in the middle of the start and end points.
  • Find the maximum possible y co-ordinate for the mid point:
    • Assume that y increases by 1 point each time x increases.
    • Find the x distance from the start point to the mid point.
    • Add that to the starting point’s y co-ordinate.
    • Do the same for the end point, but work backwards towards the mid point.
    • Find the smaller of the two values.
  • Find the minimum possible y co-ordinate for the mid point:
    • Assume that y decreases by 1 point each time x increases.
    • Find the x distance from the start point to the mid point.
    • Subtract that from the starting point’s y co-ordinate.
    • Do the same for the end point, but work backwards towards the mid point.
    • Find the larger of the two values.
  • Assign the midpoint a random y value that lies between the calculated maximum and minimum y co-ordinates.
  • Use the start and mid points as the new start and end points and recurse.
  • Use the mid and end points as the new start and end points and recurse.
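Sketched in C++ (with hypothetical names, and assuming the two endpoints are connectable in the first place, i.e. |yB − yA| ≤ xB − xA), the algorithm might look like this:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Recursively fill ys[x0..x1] with a meandering line. Assumes ys[x0]
// and ys[x1] are already set and reachable from each other at a rate
// of +/-1 per step.
void meander(std::vector<int>& ys, int x0, int x1) {
    if (x1 - x0 <= 1) return;

    int xm = (x0 + x1) / 2;

    // Highest y the midpoint can have and still be reachable from
    // both endpoints when y changes by at most 1 per x step.
    int maxY = std::min(ys[x0] + (xm - x0), ys[x1] + (x1 - xm));

    // Lowest reachable y for the midpoint, by the same reasoning.
    int minY = std::max(ys[x0] - (xm - x0), ys[x1] - (x1 - xm));

    // Any value in [minY, maxY] keeps both halves solvable.
    ys[xm] = minY + std::rand() % (maxY - minY + 1);

    meander(ys, x0, xm);
    meander(ys, xm, x1);
}
```

Because every chosen midpoint satisfies the reachability constraint from both ends, the recursion can never paint itself into a corner: the line is always able to meet up with point B.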


Voting For Eggs

The iOS version of Super Foul Egg is still underway. Since the last post I’ve fixed all of the coding problems I identified and added the pause function, but there’s no visible button for that yet. There’s still no exit game function and I haven’t yet replaced the “press any key” bitmaps, either.

Instead, I’ve been working on adding back in a feature I removed: two-player mode. Where’s the fun in SFE if you can’t play against a real person? I took out the feature because it’s far too complicated to try to track and differentiate between two users interacting with the same touchscreen. To add it back in I’ve been playing with GameKit.

GameKit is a very simple networking framework that allows iOS devices to communicate over Bluetooth or Wi-Fi. It announces the availability of a peer via Bonjour, automatically assembles a network of peers with no user intervention, and then allows multicast messaging between them. Having written a network library for games myself, I know how tricky it is, and I’m impressed at how easy GameKit is to use.

Although GameKit takes care of the messaging system admirably, there’s still the problem of co-ordinating the behaviour of multiple instances of the same game across a network. The control system was easy:

  • Every time the player moves an egg, transmit the move over the network;
  • Receive the message on the peer;
  • Send the move as a notification to the notification centre;
  • Receive the notification in the grid representing the remote player;
  • Perform the move.

I disabled the drop timer for the second player’s grid and instead rely on a “drop egg” network message, which keeps the two devices in sync. Fortunately, GameKit has a messaging mode that is guaranteed to deliver messages in the order they were sent.

The difficult part has been choosing the next pair of eggs. Eggs are chosen randomly. How can two devices be made to choose the same random pair of eggs?

I came up with a few solutions for this. Disregarding the option of using a pseudorandom progression for the sequence of eggs (which would have removed the need for any network communication but would also have made the game extremely predictable), the first was to allow network peers to choose their own eggs. Every time they choose an egg, they send a message to the network: “I chose red”. All peers add a red egg to their next-egg lists. They only need to choose an egg when they reach the end of the next-egg list.

This would work in most situations, but it falls apart if two or more peers try to choose an egg at the same time. Peer 1 would choose red; peer 2 would choose blue. Peer 1 and peer 2 now disagree on what the next egg will be. Other peers will have either red or blue as their next egg depending on which message reached them first.

Despite the race condition, this is a temptingly simple approach. Unfortunately, the most likely time for a disagreement to occur is when the game first starts up and all peers choose their first set of eggs. Everyone disagrees; the game is a shambles.

The second approach was to designate one of the peers as a server and the others as its clients. Any time a client requires an egg it sends a message to the server: “I need a new egg”. The server responds to all of its clients: “The next egg is red”. All peers add a red egg to their lists. If the server needs a new egg it just broadcasts the next-egg message without being prompted.

One handy feature of GameKit is the way it automatically chooses a randomised ID for all peers in the network. The server can be chosen by ordering all peer IDs and selecting either the first or last in the list. As long as all peers use the same algorithm, no network traffic is required to select a server.
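A sketch of the server-election idea in C++ (the IDs here are illustrative strings, sorted lexicographically; GameKit’s real peer IDs are opaque):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Deterministically pick a "server" with zero network traffic: every
// peer sorts the same set of peer IDs and takes the last one. Because
// all peers run the same code on the same data, they all agree.
std::string electServer(std::vector<std::string> peerIds) {
    std::sort(peerIds.begin(), peerIds.end());
    return peerIds.back();
}
```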

However, I was dissuaded from using this approach because it requires two sets of code for all network-related functionality. One set has to perform server actions, whilst the other has to perform client actions. It would be better if all network participants were peers and ran the same code.

Thinking about the problem some more led me to the phrase “reaching consensus in a distributed system”, which resonated with memories of my distributed systems class at university: this is a solved problem. I needed to implement a voting system.

The algorithm works like this:

  • Peer 1 needs a new egg.
  • Peer 1 sends out a message to all peers: “I vote for a red egg for vote 1”.
  • Peer 2 receives the message.
  • Peer 2 responds to all peers with the message: “I vote for a blue egg for vote 1”.
  • Peer 3 receives the message.
  • Peer 3 responds to all peers with the message: “I vote for a red egg for vote 1”.

As each peer votes, all participants in the network store each peer’s vote along with the peer ID and the vote number. Once the number of votes collected for that vote number is equal to the number of peers, all peers have voted. All peers use the same algorithm for selecting the “winning” vote. The network arrives at consensus.

It’s important to note that each peer will only vote on a given vote number once. Otherwise, each peer would respond to each incoming message, resulting in a never-ending, exponential cascade of voting.

To choose the winning vote, I initially planned to sum the votes for each egg colour and select the colour with the most votes. In the case of a tie, I’d choose the vote from the peer with the highest ID. I then realised that I could greatly simplify the algorithm by using the American approach to an election: ignore the popular vote and just use the single vote from the peer with the highest ID. At last I have a system that works.
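Collecting the votes into a structure keyed by peer ID makes the winner selection a one-liner. A C++ sketch (names hypothetical):

```cpp
#include <cassert>
#include <map>
#include <string>

// Resolve one round of voting: ignore the popular vote and take the
// colour chosen by the peer with the highest ID. Every peer holds the
// same set of votes, so every peer computes the same winner.
std::string winningVote(const std::map<int, std::string>& votesByPeerId) {
    // std::map keeps keys sorted, so the last entry belongs to the
    // peer with the highest ID.
    return votesByPeerId.rbegin()->second;
}
```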

Next steps are to ensure that the next-egg list is never empty. If the list is empty, gameplay has to halt to allow voting to take place, which can take a few seconds. Other than that, I’m going to simplify the garbage egg placement algorithm. It currently fills up the columns with the fewest eggs first, which is predictable across all peers, but given two columns with the same number of eggs it will randomly choose one of the options. I could implement a messaging system for garbage egg placement, but it is far easier to make the algorithm deterministic when running a network game.


Really Bad Eggs - Objective-C Edition

The Objective-C version of Really Bad Eggs is coming along nicely. I’ll try to get some screenshots and maybe a video up, but in the meantime here’s some stream-of-consciousness pontificating on the port so far.

When I wrote DS code I used to complain about the lack of a debugger. I’ve written most of the Objective-C version of Really Bad Eggs whilst out and about on a Windows laptop, so I haven’t had a debugger or even a compiler. Despite the lack of tools and the game being my first Objective-C program it’s working well so far. Rewriting the code in a new language has allowed me to re-examine a lot of the original ideas. In some cases I’ve refactored both the C++ and Objective-C code to be simpler or more versatile. In others I’ve used features of Objective-C to improve on the original code.

Bug Fixes and General Improvements

The DS’s representation of incoming garbage eggs was inaccurate due to a typo. I’ve fixed this in both versions of the game. I’ve also improved the placement of garbage eggs that are dropped into the grid.

Garbage eggs are always dropped into the columns with the fewest eggs first. The (simplified) algorithm looks like this:

  • Get a list of columns and their heights, sorted in ascending order by height (insertion sort).
  • Add the leftmost column to the list of “active” columns.
  • While there are eggs to place:
    • Place a garbage egg in each active column.
    • If the column to the right of the last active column has the same height as the last active column, add it to the active column list.

It essentially fills the grid from bottom to top as though it were pouring in water.
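A simplified C++ sketch of the pouring behaviour (it tracks heights only, leaving out column indices and tie-randomisation; names are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Pour `count` garbage eggs into the shortest columns first.
// Returns the final column heights.
std::vector<int> pourGarbage(std::vector<int> heights, int count) {
    // Sort column heights ascending (stands in for the insertion sort).
    std::sort(heights.begin(), heights.end());

    int active = 1; // the leftmost sorted column is always active

    while (count > 0) {
        // Activate every column matching the current active height.
        while (active < (int)heights.size() &&
               heights[active] == heights[active - 1]) {
            ++active;
        }

        // Drop one egg into each active column.
        for (int i = 0; i < active && count > 0; ++i, --count) {
            ++heights[i];
        }
    }
    return heights;
}
```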

There is a subtle problem with the algorithm. The sorting system produces a list of columns that are sorted by height and column index. In SQL, it would look like this:

    columnHeight ASC
   ,columnIndex ASC

If columns 0, 1, 2, and 5 are of height 2, whilst columns 3 and 4 are of height 1, and we add a garbage egg, we know that the egg will be added to column 3. Here’s the column state:

        000  0
        000000
Column: 012345

Here’s the sorted data:

Column: { 3, 4, 0, 1, 2, 5 }
Height: { 1, 1, 2, 2, 2, 2 }

The first column to receive an egg will be column 3, as it is the left-most column in the sorted list.

This predictability means that advanced players can start using it to their advantage. They can drop a block here because they know they’ve got an incoming garbage egg and want it to go there. They can change their plans because they know what’s going to happen next. To prevent this, the sorting algorithm now randomises columns of the same height within the sorted array.
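A sketch of the tie-randomisation in C++, assuming the sorted data is held as (height, index) pairs (names hypothetical; in the real game this happens inside the sort itself):

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <utility>
#include <vector>

typedef std::pair<int, int> HeightIndex; // (columnHeight, columnIndex)

// Shuffle each run of equal heights in a height-sorted column list so
// that ties between columns resolve unpredictably.
void shuffleTies(std::vector<HeightIndex>& cols, std::mt19937& rng) {
    size_t start = 0;
    for (size_t i = 1; i <= cols.size(); ++i) {
        // A run ends at the end of the list or where the height changes.
        if (i == cols.size() || cols[i].first != cols[start].first) {
            std::shuffle(cols.begin() + start, cols.begin() + i, rng);
            start = i;
        }
    }
}
```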

Memory Management in Objective-C

Memory management is a pain. I can see exactly why Apple have implemented reference counting the way they have. I’ve worked extensively with low-level memory management in C++ due to the combined attack of the DS’s tiny stack and slow CPU. The combination makes both smart pointers and stack-based objects mostly unusable, leading to the same questions time and time again:

  • If I call a function, who owns the memory it allocates? Do I delete it or not?
  • Does this function expect to receive an existing block of memory that I allocate or will it allocate its own?
  • When writing functions, should I allocate memory or should I expect it to be allocated in advance?
  • Should I return allocated memory via the return statement or via a pointer-to-a-pointer passed as a parameter?
  • Should I return an object on the stack and hope RVO kicks in, or should I return a pointer to an object on the heap?
  • Should functions receive and return pointers or references to objects?
  • If an object is told to store a pointer to another object, should it store the pointer or a copy? What if the pointed-to object changes during the lifetime of my object?

I’ve become used to considering these issues every time I work with memory in C++, but Objective-C tries to solve them itself. Mostly it uses conventions to suggest the correct approach. Methods with “new” or “init” at the start of their names, for example, return a retained object with an increased reference count; the caller therefore becomes the owner of the object. Convenience methods, such as NSArray’s arrayWithObjects method, return autoreleased objects. These are released automatically by the Cocoa framework at the end of the current iteration of the application’s run loop, so if the creating code needs them around longer it must retain them. Collections retain objects added to them, so they can be deemed to be the owners of those objects. It is the responsibility of the object’s previous owner to ensure that it releases the object.

Getting the hang of the conventions is tricky, particularly when dealing with frameworks like cocos2d, which aren’t the best-architected examples I’ve seen. There are a number of edge cases, too. Suppose you retain a reference to object A, which creates and retains a reference to object B, which itself creates and retains a reference to object C. During its creation, object C retains object B. What happens to B and C when A is released? Why, nothing, of course. B and C have a cyclic relationship. Releasing object A reduces the reference count of B by one, but B is still referenced by C (and vice versa) so neither will be deallocated. This isn’t an issue in C++, as the developer manually deletes objects: deleting A will delete B, which will delete C (assuming the destructors tidy things up correctly). Nor is it an issue for C#’s garbage collector, because it can tell that the only references to B and C are each other.

For the most part, I’ve used what Objective-C calls “weak references” - I’ve got pointers to objects without retaining them, the same as I would in C++. My code probably looks hideous to experienced Objective-C coders as I’ve trampled conventions all over the place.

One of the main reasons I upgraded to OSX Lion so quickly was the promise of ARC, or “Automatic Reference Counting”. Skimming through the docs suggests that there are a lot of edge cases to it, though (that seems to be a common theme with Objective-C - watch out for the edge cases), so I’m sticking with the traditional method first in order to understand fully what ARC is replacing.

A bug that caught me out for a while was a classic Objective-C memory leak. The “Leaks” utility insisted I’d leaked memory from one class like crazy, but it also told me that the reference count for all leaked objects was 0. That makes no sense until you remember that Objective-C allows you to make boneheaded mistakes like this:

- (void)dealloc { }

Oops. All dealloc methods should call the superclass’ dealloc method:

- (void)dealloc {
    [super dealloc];
}

I’d have suggested that Apple use the template pattern for this instead. NSObject could have methods like this:

- (void)onDealloc { }

- (void)dealloc {
    [self onDealloc];
    [super dealloc];
}

User code would look like this:

- (void)onDealloc {
    // Cleanup here
}

It’s now impossible to leak memory by forgetting to dealloc the superclass. I imagine there’s a valid case for giving developers the power to shoot themselves in the foot, but there’s a lot of boilerplate code needed in classes that could be excised very easily by using the template pattern. The same is true for object initialisation. Here’s what you do now:

- (id)init {
    if ((self = [super init])) {
        // Initialise the object
    }

    return self;
}

This is what a templated NSObject could look like:

- (void)onInit { }

- (id)init {
    if ((self = [super init])) {
        [self onInit];
    }

    return self;
}

This is what a user class would look like:

- (void)onInit {
    // Initialise the object
}

Tidier, but potentially less powerful.


Running the OSX version of Really Bad Eggs through Xcode’s CPU profiler demonstrates how efficiently the game works. Most of the game code barely registers; cocos2d’s drawing routines use several orders of magnitude more CPU time than anything else in the game. It also demonstrated how abysmally slow NSArray objects are. One function I used them in consumed more CPU time than anything else in the entire game (except the cocos2d drawing routines), so I refactored it without NSArrays and the problem disappeared.


The dynamic nature of Objective-C means I’m happy to use calls to an object’s isKindOfClass method, whereas in C++ I’d have been more inclined to refactor to remove the need for type checking or calls to dynamic_cast<>(). For example, the C++ function to test if two eggs can be connected looks like this:

void NormalBlock::connect(const BlockBase* top,
                          const BlockBase* right,
                          const BlockBase* bottom,
                          const BlockBase* left) {
    setConnections(top != NULL && top->getColour() == _colour && top->getState() == BlockBase::BLOCK_STATE_NORMAL,
                   right != NULL && right->getColour() == _colour && right->getState() == BlockBase::BLOCK_STATE_NORMAL,
                   bottom != NULL && bottom->getColour() == _colour && bottom->getState() == BlockBase::BLOCK_STATE_NORMAL,
                   left != NULL && left->getColour() == _colour && left->getState() == BlockBase::BLOCK_STATE_NORMAL);
}

All egg classes inherit from the BlockBase class (descriptive name, I know) which includes a “_colour” member. Each subclass sets this to a colour representative of the bitmaps used to display the egg. The red egg has an RGB value of (31, 0, 0) whilst the blue egg has an RGB value of (0, 0, 31). Checking if two eggs are of the same type simply entails a comparison between two 16-bit integers.

The Objective-C function just compares the eggs’ classes instead:

- (void)connect:(BlockBase*)top right:(BlockBase*)right bottom:(BlockBase*)bottom left:(BlockBase*)left {

    BOOL topSet = top != NULL && [top class] == [self class] && top.state == BlockNormalState;
    BOOL rightSet = right != NULL && [right class] == [self class] && right.state == BlockNormalState;
    BOOL bottomSet = bottom != NULL && [bottom class] == [self class] && bottom.state == BlockNormalState;
    BOOL leftSet = left != NULL && [left class] == [self class] && left.state == BlockNormalState;

    [self setConnectionTop:topSet right:rightSet bottom:bottomSet left:leftSet];
}

It’s not a major change, but it does remove some cruft from the code.

Blocks as Callback Functions

The Grid class represents the well of eggs and manages all of the eggs within it. It includes functions for rotating the live eggs, identifying and exploding chains of eggs, and so on. In the C++/DS version, the Grid class is also responsible for calling methods of the SoundPlayer class to trigger audio playback every time anything sound-worthy occurs. This works, but it’s not terribly clean - the Grid shouldn’t need to deal with sounds. The design ties the logic of the game to the platform it is running on.

A better solution would be to trigger an event, but if you’ve ever implemented an event system you’ll know that it’s a very verbose pattern. At the very least, you’ll need interfaces and classes that represent event arguments and event listeners. I’d like to tidy up the design but I’m not that bothered.

A considerably terser option would be to use a function pointer as a callback method, but that opens up a whole can of non-OO, global namespace-polluting nastiness. C++’s pointers-to-members are so limited - the exact type of the pointed-to object must be known - that they aren’t even worth considering.

Another possibility would be to create a message queue and have the grid push messages into it. A message pump would dispatch messages to the sound player object as necessary. Again, this is ugly and involves far too much new code for virtually no benefit.

It was only after working through this familiar thought process that I remembered that Apple has added closures (“blocks”) as a non-standard extension to C. With closures, the problems disappear. No bloated event system, no global namespace pollution with random C functions, and a simple way to extract the sound triggers from the Grid class.

The “Grid.h” file includes the following typedefs:

typedef void(^GridEvent)(Grid*);
typedef void(^GridBlockEvent)(Grid* grid, BlockBase* block);

I think of the typedefs as being analogous to C#’s delegate signatures. The class itself includes the following members:

GridEvent _onGarbageRowAdded;
GridEvent _onLand;
GridEvent _onGarbageLand;
GridBlockEvent _onBlockAdd;
GridBlockEvent _onBlockRemove;
GridBlockEvent _onGarbageBlockLand;

The class triggers the closures like so:

if (_onBlockLand != nil) _onBlockLand(self);

Wiring up code to an instance of the Grid class is simple:

Grid* grid = [[Grid alloc] initWithHeight:2 playerNumber:0];

grid.onBlockLand = ^(Grid* sender) {
    printf("A block has landed");
};

It looks like C, and it tastes like C, but I can perform JavaScript-like tricks with it. ‘Tis a beautiful abomination.

I’ve used this pattern all over the code to trigger sounds and visual effects as well as add/move/remove sprites. The game engine code is completely separate from the presentation layer.

Classes as Objects

The BlockFactory class has also benefited a little from the switch to Objective-C. When a grid needs to place a new egg, it fetches a new egg instance from the BlockFactory rather than create the egg itself.

The BlockFactory is necessary because all players receive the same sequence of eggs. Suppose player 1 requests his third pair of eggs and is given a red egg and a blue egg. When player 2 requests his third pair of eggs, he must also be given a red egg and a blue egg. No player can “cheat” due to a particularly advantageous sequence of eggs. Winning is a matter of skill and speed rather than luck.

The BlockFactory must, therefore, maintain a list of all egg types that have been generated. It also needs to remember which pair of eggs all players will receive next. Once a pair of eggs has been used by all players it can be removed from the list. Attempts to retrieve a pair of eggs not in the list will cause a new pair of random egg types to be created and appended to the list. The BlockFactory instance is shared across all players.

In the C++ version, the list stores enum values that represent an egg type. When a new egg is requested, the correct constructor is called for that value using a switch() statement. Here’s the code that adds a new item to the list:

BlockFactory::BlockType BlockFactory::getRandomBlockType() const {
    return (BlockFactory::BlockType)(rand() % _blockColourCount);
}

void BlockFactory::addRandomBlock() {
    // Append a new random egg type to the shared list.
    _blockList.push_back(getRandomBlockType());
}

Here’s the code that creates a new egg from a given type:

BlockBase* BlockFactory::newBlockFromType(BlockFactory::BlockType type) const {
    switch (type) {
        case BLOCK_RED:
            return new RedBlock();
        case BLOCK_PURPLE:
            return new PurpleBlock();
        case BLOCK_YELLOW:
            return new YellowBlock();
        case BLOCK_BLUE:
            return new BlueBlock();
        case BLOCK_GREEN:
            return new GreenBlock();
        case BLOCK_ORANGE:
            return new OrangeBlock();
    }

    // Included to silence compiler warning
    return new RedBlock();
}

The BlockFactory class includes a BlockType enum necessary for this scheme to work.

The Objective-C version dumps the enum and uses a tidier solution. As classes in Objective-C are themselves objects, storing enum values in the list is unnecessary. Instead, the list stores pointers to the class objects. Here’s the code that adds a new item to the list:

- (void)addRandomBlockClass {
    [_blockList addObject:[self randomBlockClass]];
}

- (Class)randomBlockClass {
    int type = rand() % _blockColourCount;

    switch (type) {
        case 0:
            return [RedBlock class];
        case 1:
            return [BlueBlock class];
        case 2:
            return [YellowBlock class];
        case 3:
            return [PurpleBlock class];
        case 4:
            return [GreenBlock class];
        case 5:
            return [OrangeBlock class];
    }

    // Included to silence compiler warning
    return [RedBlock class];
}

Creating a new BlockBase object from the list becomes trivial. A single line of code will create a new BlockBase object from an item in the list:

BlockBase* block = [[[_blockList objectAtIndex:0] alloc] init];

The item in the list is a class, so calling its alloc and init methods gives us a new object automatically.

The difference between the two sets of code is minor, but conceptually it’s a big leap. Objective-C lets me store classes in arrays. I’m pretty certain that the only other language I’ve used that could do anything even remotely like this is JavaScript.


Scrollbars and Greyscale Algorithms

Scrollbars have been one of the major banes of Woopsi’s existence for years. No matter what I’ve done, they’ve always been slightly inaccurate. When scrolling through large amounts of data in particular, it is frequently impossible to scroll to the end of the list. If the list is scrolled by dragging the list itself, clicking on the grip causes the list to jump back to show a different portion of the data. Part of my ongoing testing frenzy involved writing a test for the ScrollingListBox gadget, which finally put me in the position to try and really fix these problems.

My first guess was a rounding issue. Scrollbars are basically a simple ratio problem. Suppose we have a list of items 100 pixels high displayed in a window with a scrollbar 10 pixels high. The total height of the scrollbar (10 pixels) must represent the entire list (100 pixels). Therefore, each pixel in the scrollbar is worth 10 pixels in the list.

The rounding problem can arise because Woopsi deals only with integers. Behind the scenes all of the ratio calculations are done with fixed-point math, but the scrollbar’s grip is displayed on the screen and therefore has to align to a pixel. If we adjust the height of the list to 105 pixels, each pixel in the scrollbar is worth 10.5 pixels. That will clearly result in a rounding error.

However, testing the algorithm demonstrated that although the rounding issue exists, it is not significant enough to account for the problems displayed by the scrollbar. In the first example (10 pixels vs 100), there isn’t a rounding error but the inaccuracies in the scrollbar still manifested themselves. Clearly the problem lay somewhere else.

I had a nagging suspicion that the problem had something to do with the height of the grip but couldn’t identify what the exact cause could be. After combing through the code for a while I hit on it. The scrollbar calculates the height of the grip based on the height of the visible portion of data it is scrolling through. In the first example, assuming the window was the same height as the scrollbar, the grip would be one pixel tall. The window and scrollbar are 10 pixels tall, whilst the list of items is 100 pixels tall. The height of the grip is therefore 10 * (10/100), or window * (scrollbar / list).

This gives us a final height of 1 pixel, which is clearly too small to be usable. Instead, the grip height is limited to a minimum of 5 pixels tall. However, if the grip is 5 pixels tall and not 1 pixel tall, this means the last 4 pixels cannot be scrolled through. This was the problem. The fix was to reduce the logical height of the scrollbar whenever the grip was artificially increased in size.
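The fix might be sketched like this in C++ (the names and the minimum grip size are hypothetical): when the grip is artificially enlarged, the scrollbar’s logical height shrinks by the same amount, so the grip’s bottom-most position still maps to the end of the list.

```cpp
#include <algorithm>
#include <cassert>

const int MIN_GRIP_HEIGHT = 5; // smallest usable grip, in pixels

// Compute the grip height and the adjusted logical scrollbar height.
// If the calculated grip is enlarged to MIN_GRIP_HEIGHT, the logical
// height is reduced by the artificial increase.
void calculateGrip(int windowHeight, int scrollbarHeight, int listHeight,
                   int* gripHeight, int* logicalHeight) {
    int calculated = (windowHeight * scrollbarHeight) / listHeight;
    *gripHeight = std::max(calculated, MIN_GRIP_HEIGHT);
    *logicalHeight = scrollbarHeight - (*gripHeight - calculated);
}
```

With the 10-pixel scrollbar and 100-pixel list from the example, the 1-pixel grip becomes 5 pixels and the logical height drops from 10 to 6, so the last pixels of the list remain reachable.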

Once this was fixed, another problem became obvious. Clicking the up/down buttons in the scrollbar had strange effects. Sometimes the grip would move in the wrong direction; sometimes it would move and then get stuck; and on other occasions it wouldn’t move at all.

The reason for this was fairly simple. The scrollbars included a way to set the amount that each click would scroll through. By default, it was set to “1”. That would represent the number of pixels scrolled through in a textbox, for example, or the number of items in a list. If we return to the first example, however, attempting to scroll by 1 pixel would actually result in no movement at all. To the scrollbar, 1 pixel translates to 0.1 pixels, which is nothing more than a rounding error.

Instead of this, the scrollbars now calculate the minimum movement that the grip can make and move by that instead.

The scrollinglistbox test project highlighted some other bugs. The ScrollingListBox::getPreferredDimensions() method now returns reasonable values, and it greys out when disabled. Its draw() method performs some preclipping functions that have significantly improved its performance.

Other new test projects include tests for the Label, Button, BitmapButton and AnimButton gadgets. All of these return correct values for their getPreferredDimensions() methods. They all grey out when disabled. This last piece of functionality proved tricky for the BitmapButton and AnimButton classes. In order to allow bitmaps to be greyed out, I’ve added two new methods to the Graphics/GraphicsUnclipped/GraphicsPort classes:

  • drawBitmapGreyScale(), which will draw a greyscale version of a bitmap;
  • greyScale(), which will grey out a specified region.

The former works like the version of drawBitmap() with transparency - it plots pixel-by-pixel, so it is slower than a straightforward bitmap blit. The latter works like dim() - it reads a pixel from the destination, applies a filter to the colour, and writes it back.

drawBitmapGreyScale() isn’t a fast method, and using it with the AnimButton probably isn’t the smartest idea. Every new frame drawn to the button needs to be greyed out. On the DS it isn’t noticeably slow, but in a DS emulator the slowdown is pronounced. I’m considering stopping the animation when a button is disabled and restarting it when the button is enabled again. This will significantly reduce the amount of work done each frame.

The greyscale routine was fairly easy to write. The algorithm looks like this:

  • Read pixel (in 555 RGB format)
  • Halve brightness of pixel by right-shifting one place (giving 444 RGB format)
  • Extract red pixel right-shifted one place (giving 344 RGB format)
  • Extract green pixel (still 344 RGB format)
  • Extract blue pixel right-shifted one place (giving 343 RGB format)
  • Add red, green and blue components together (giving a single 5-bit component)
  • Recreate a 555 format pixel by re-using the single component in the place of R, G and B
  • Write pixel back at original co-ordinates.

This simplistic luminosity-based algorithm gives a 15-bit greyscale image weighted slightly more in the green component, reflecting the properties of the human eye. Other algorithms for this are listed on John D Cook’s blog.
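The steps above boil down to the weighted average R/4 + G/2 + B/4, which can be sketched as a single C++ function. This assumes the 555 layout with red in the low bits, which is worth double-checking against the target hardware:

```cpp
#include <cassert>
#include <cstdint>

// Convert a 15-bit 555 RGB pixel to greyscale using the shift-based
// approximation described above: grey = R/4 + G/2 + B/4, then the
// single 5-bit result is reused for all three components.
uint16_t greyScalePixel(uint16_t pixel) {
    // Extract the 5-bit components (assumed layout: red in bits 0-4,
    // green in bits 5-9, blue in bits 10-14).
    uint16_t r = pixel & 0x1F;
    uint16_t g = (pixel >> 5) & 0x1F;
    uint16_t b = (pixel >> 10) & 0x1F;

    // Halve overall brightness, then weight green double the others.
    uint16_t grey = (r >> 2) + (g >> 1) + (b >> 2);

    // Recreate a 555 pixel from the single grey component.
    return grey | (grey << 5) | (grey << 10);
}
```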


Dates and Text Wrapping

Couple of changes today. The Date class introduced yesterday now has a more efficient “calculateWeekDay()” method, thanks to Jeff. It still seems to work correctly with the calendar, which is curious, because I’m sure it was previously returning “0” as the value for Sunday whereas it now returns “7”. I must be doing some modulus stuff in the Calendar class.

Secondly, the “wrap()” function in the Text class should now be rather faster than before. Instead of re-wrapping the entire string every time something is inserted, appended or removed from the string, it now locates the line of text that contains the modification and re-wraps from that point forward.

I’d tried to implement this a few days ago but came across a couple of problems. First of all, locating the line of text in which the modification fell was slightly tricky. I’d decided to use a binary search (obvious enough), but came unstuck because I was looking for a value that fell within the range indicated by two values in the wrapping data array. I wasn’t searching for an exact match, but a match between two values. Turned out to be fairly easy to implement once I’d scratched it down on paper.
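The range search reduces to finding the last line whose start index is less than or equal to the modification position. A C++ sketch, assuming (hypothetically) that the wrapping data stores the index of the first character on each line:

```cpp
#include <cassert>
#include <vector>

// Find the wrapped line containing character index `pos`, where
// starts[i] is the index of the first character on line i. This is
// the "match between two values" search: binary-search for the last
// line that starts at or before pos.
int lineContaining(const std::vector<int>& starts, int pos) {
    int lo = 0;
    int hi = (int)starts.size() - 1;
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (starts[mid] <= pos) {
            lo = mid;       // line mid starts at or before pos
        } else {
            hi = mid - 1;   // line mid starts after pos
        }
    }
    return lo;
}
```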

Secondly, the wrap() function remembers the width (in pixels) of the widest line of text to help with other routines elsewhere in the class. If we try to wrap the text from line n, ignoring the previous lines, and the widest line is actually n-1, we won’t have the correct width. We’ll just have the width of the widest line in the range from n to the last line of text (if we ignore the old value we stored) or the width of a line that may no longer exist (if we use the old value).

The solution to that was pretty straightforward, too. I added a second vector to store the widths of every new widest line that the wrap routine comes across, along with the index of that line in the wrapping data vector. So, if the first line is 10px wide, that’s the widest line we’ve seen up until that point, so we add width 10 and index 0 to the vector. If the second line is 8px wide, we ignore it as it is thinner than the currently-identified widest line. If the third line is 20px wide, we add width 20 and index 2 to the vector. That way, when we re-wrap from line n, we discard all of the longest line data from the vector from line n onwards. The last entry in the width vector is then the width of the longest line in the text we’re not re-wrapping.
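That bookkeeping can be sketched like this (the names are illustrative, not Woopsi’s actual members), using the same 10px/8px/20px example:

```cpp
#include <vector>

// Record of a new "widest line so far" discovered during wrapping.
struct LineWidth {
    int index;  // line number in the wrapping data
    int width;  // width of that line in pixels
};

struct WidestLineTracker {
    std::vector<LineWidth> widths;

    // Called for each line as it is wrapped, in order. Only lines that
    // beat the current record get stored.
    void addLine(int index, int width) {
        if (widths.empty() || width > widths.back().width) {
            widths.push_back({index, width});
        }
    }

    // Called before re-wrapping from line n: discard records for the
    // lines that are about to be re-wrapped.
    void discardFrom(int n) {
        while (!widths.empty() && widths.back().index >= n) {
            widths.pop_back();
        }
    }

    // Width of the widest line among those we are NOT re-wrapping.
    int currentWidest() const {
        return widths.empty() ? 0 : widths.back().width;
    }
};
```

Because the stored widths are strictly increasing, the last entry is always the current record, and discarding from line n onwards is just popping off the back.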

The upshot of all this is that if a char is appended to a string 100 lines long, only the last line is re-wrapped. Previously, all 100 lines would be re-wrapped.


Fixing the MultiLineTextBox Again

I keep trying to fix the MultiLineTextBox but it never actually gets fixed. I’ve been through the calculations over and over again, refining them, tidying them up and tinkering, but nothing ever seems to entirely fix the problems. I realised the other day that the calculations aren’t wrong; they’re the wrong calculations.

The MultiLineTextBox has three vertical alignment options: it can align to the top, centre or bottom of the visible screen. The getRowY() method, which calculates the Y co-ordinate of a row under any of these alignments, works fine.

The class also has a draw(Rect) method that works out which rows of text fall within the supplied rect and only tries to draw those rows. This method also works fine.

The problem is that the draw() method is working with a different set of rows to the getRowY() method. The draw() method only clips based on top alignment, not all three possible alignment options, so it produces the wrong results.

I’ve now got a version of the MultiLineTextBox with four new draw methods: drawTextTop(), drawTextCentre() and drawTextBottom(), which work out which rows of text are visible under each alignment, and drawText(), which is ultimately called by the other three. It’s untested at the moment, but hopefully this will finally fix the problems.


Line Clipping

I was browsing through a book on computer graphics the other day when I came across a section on clipping. The book didn’t really go into any great detail, so I googled it and discovered there are three main 2D line clipping techniques. The first is very simple; draw a line pixel by pixel, and if the pixel to be drawn falls outside the visible region, don’t draw it. Simple, effective, slow.

A better algorithm is the “Cohen-Sutherland” method, which imagines the display as a 3x3 grid of rectangles, uses a bitmask to identify which regions the two points of the line are in, then uses the parametric equation of the line to reduce its length so that it fits within the visible region. Clever. I would explain the routine in detail, but it’s hugely popular in Google already, and it seems that you’re not a Real Programmer unless you implemented it a dozen times before you had breakfast today. Of course, I’d never heard of it before.
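Since it came up, here’s a minimal sketch of the idea, using plain integer arithmetic as a stand-in for both the floating-point original and a fixed-point variant; the ClipRect type and screen-style co-ordinates (y increasing downwards) are my assumptions:

```cpp
// Outcodes identifying which of the 3x3 regions a point falls in.
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

struct ClipRect { int x1, y1, x2, y2; };  // inclusive bounds

int outcode(const ClipRect& r, int x, int y) {
    int code = INSIDE;
    if (x < r.x1)      code |= LEFT;
    else if (x > r.x2) code |= RIGHT;
    if (y < r.y1)      code |= TOP;     // screen co-ords: y grows downwards
    else if (y > r.y2) code |= BOTTOM;
    return code;
}

// Clip the line (x0,y0)-(x1,y1) to the rect, shortening it in place.
// Returns false if the line is entirely outside the visible region.
bool clipLine(const ClipRect& r, int& x0, int& y0, int& x1, int& y1) {
    int code0 = outcode(r, x0, y0);
    int code1 = outcode(r, x1, y1);

    while (true) {
        if (!(code0 | code1)) return true;  // both endpoints inside
        if (code0 & code1) return false;    // both share an outside region

        // Pick an endpoint outside the rect and pull it to the boundary.
        int codeOut = code0 ? code0 : code1;
        int x, y;
        if (codeOut & BOTTOM) {
            x = x0 + (x1 - x0) * (r.y2 - y0) / (y1 - y0);
            y = r.y2;
        } else if (codeOut & TOP) {
            x = x0 + (x1 - x0) * (r.y1 - y0) / (y1 - y0);
            y = r.y1;
        } else if (codeOut & RIGHT) {
            y = y0 + (y1 - y0) * (r.x2 - x0) / (x1 - x0);
            x = r.x2;
        } else {
            y = y0 + (y1 - y0) * (r.x1 - x0) / (x1 - x0);
            x = r.x1;
        }
        if (codeOut == code0) { x0 = x; y0 = y; code0 = outcode(r, x0, y0); }
        else                  { x1 = x; y1 = y; code1 = outcode(r, x1, y1); }
    }
}
```

The truncating integer division loses sub-pixel precision, which is where a fixed-point version earns its keep.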

The third method is the “Liang-Barsky” algorithm, which I confess to not really looking into. It’s allegedly faster than the Cohen-Sutherland method, but I lost interest as soon as I realised it required floating-point maths to work.

There was another method, which claimed to be several times faster than Liang-Barsky and didn’t need floats, but as that one was patented I lost interest even faster. Software patents are a very bad thing.

Anyhoo, Woopsi’s GraphicsPort class now has a drawLine() routine. It uses the same Bresenham line drawing routine that the SuperBitmap uses, but it’s wrapped inside an implementation of the Cohen-Sutherland clipping routine. I’ve had to modify it a little to replace its use of floats with fixed-point maths, but that doesn’t seem to have impacted its accuracy.

The next thing on my to-do list is to extract the basic functionality of the TextBox class into a separate “Label” class, .NET-style. Read-only textboxes really don’t need to carry around the extra baggage of the new cursor system. Also, I’m thinking of moving the string manipulation code out of the TextBox class and into the Text class, thus separating the presentation of the data from its manipulation. This, of course, means I’m effectively writing my own string class, which is one of the classic mistakes people make with C++ (the STL obviously has a perfectly good implementation), but as the STL is off-limits I think I have a good excuse.