2008-01-07

Emulation and Other Thoughts

Things are still going a little slowly here in Zombie Towers. The only thing worse than programming dreams for disturbing your sleep is having programming dreams whilst ill. They’re essentially the same, but instead of waking up with the solution to a particularly thorny problem you wake up exhausted with a head full of nonsensical junk and the slightly worrying feeling that you’ve lost the ability to add simple numbers together. Having had a couple of meaningless physics dreams after reading half of Stephen Hawking’s “Theory of Everything”, I’ve been avoiding any complex thought and have been absorbing anime instead.

I did sneak a small coding project in, though - a CHIP-8 emulator. I was introduced to emulation back in the days of Amiga Format, when one of the issues came with a couple of Game Boy emulators on the coverdisk. Even though I was a more abysmal programmer then than I am now, there weren’t many programs around that were completely beyond my understanding. Emulators, however, were one of them, and I was immediately fascinated. How on earth would someone go about writing an emulator? I hadn’t even the vaguest notion of where to start. Understanding and then writing an emulator has been on my “things to do” list ever since.

I got past the “understanding” part of the equation ages ago, but it’s taken until now for me to get around to writing an emulator. I’d intended to write a CHIP-8 emulator in Flash MX, but couldn’t be bothered in the end (I was switching to Java at the time) and someone else got there first.

The CHIP-8 is a curious machine, mainly because it’s not a machine at all. There never was a CHIP-8 CPU. It’s probably one of the earliest examples of a commercial virtual machine, dating from the late 70s, when a profusion of incompatible home machines made writing the same game over and over unappealing. The simplest way to share games between platforms was to specify a virtual hardware platform and have each physical platform implement an interpreter for it. The CHIP-8 was the result. It is an 8-bit system featuring 4KB of RAM (of which the bottom 512 bytes are reserved for the host platform’s interpreter), 16 multi-purpose registers (the last of which doubles as the carry flag), a program counter, stack pointer, index register (which seems to be a 70s name for a dedicated address register), 35 opcodes, a 16-key keypad (hard-wired into the CPU via another set of boolean registers) and some weird design decisions that reflect the fact that it is a virtual machine, not a CPU. It has opcodes dedicated to drawing sprites, for example - I doubt that anything so high-level and specific exists in a physical general-purpose CPU (though the NES’s not-quite-6502 might have something similar).
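The machine state described above can be sketched as a simple struct. This is a minimal sketch following the conventions used in most CHIP-8 documentation, not the layout of any particular emulator; a real one would add timers and display memory:

```cpp
#include <cstdint>

// Minimal sketch of the CHIP-8 machine state described above.
struct Chip8 {
    uint8_t  memory[4096];   // 4KB of RAM; the bottom 512 bytes are reserved
    uint8_t  v[16];          // registers V0-VF; VF doubles as the carry flag
    uint16_t pc;             // program counter
    uint16_t i;              // index register (a dedicated address register)
    uint16_t stack[16];      // return addresses for subroutine calls
    uint8_t  sp;             // stack pointer
    bool     keys[16];       // state of the 16-key keypad
};
```

Programs are conventionally loaded at address 0x200, just above the reserved area, so the program counter starts there.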

My emulator has most of the opcodes implemented, but some of the features aren’t finished yet - the sprites in particular are tricky, mainly because there’s a dearth of documentation describing how they are supposed to work. There’s no documentation about how fonts are stored, either - I imagine that they’re supposed to be coded into the BIOS memory somewhere, but finding out what the fonts look like or what address they’re supposed to be stored at would seem to involve getting hold of an existing emulator’s source code and digging around in that.

I might get around to finishing it eventually. It’s enough just to have got the bulk of the CPU emulated, and it’s a distinctly bizarre experience to watch code you’ve written execute a binary written by someone else.

Writing an emulator is actually very simple. A CPU can be modelled as a large switch statement. Memory can be emulated by allocating an array, and CPU registers can be simulated in the same way. The emulator program fetches each opcode (“operation code” - an encoded CPU instruction) from the binary it is executing and feeds it into the CPU switch statement. The switch statement decides which instruction the opcode represents, then runs that instruction. A simple CPU could look like this:


// Simulated CPU registers
unsigned char reg[16];

void emulate(unsigned short opcode) {

    // Extract instruction and data from opcode
    unsigned char instruction = (opcode & 0xFF00) >> 8;
    unsigned char x = (opcode & 0x00F0) >> 4;
    unsigned char y = (opcode & 0x000F);

    // Parse opcode
    switch (instruction) {
        case 1:
            // Add register y to register x and store in register x
            reg[x] += reg[y];
            break;
        case 2:
            // Subtract register y from register x and store in register x
            reg[x] -= reg[y];
            break;
        case 3:
            // Multiply register x by register y and store in register x
            reg[x] *= reg[y];
            break;
    }
}

This simple CPU can perform three actions - addition, subtraction and multiplication. The opcode is a 16-bit value comprising the instruction code to execute (the highest byte) and two 4-bit data values (the lowest two nibbles). We use bitmasks and bitshifts to extract the instruction and data from the opcode so that we can parse it in the switch statement.

The CHIP-8 is more complex, naturally - it has 35 instructions, and there is no set format for the instruction and data portions of the opcode. The instruction can consist of just the highest nibble or it can use the whole highest byte. The data portion of the opcode can consist of a single 4-bit value, two 4-bit values, a byte and a nibble, or a single 12-bit value, depending on which instruction it is paired with.
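To make those data portions concrete, here is a sketch of the field extractions most CHIP-8 documentation uses, with the conventional names X, Y, NNN, NN and N (the struct and function names are my own, not part of any standard):

```cpp
#include <cstdint>

// The conventional CHIP-8 opcode fields. Which of these are meaningful
// depends entirely on the instruction the opcode encodes.
struct OpcodeFields {
    uint8_t  x;    // second-highest nibble: usually a register index
    uint8_t  y;    // second-lowest nibble: usually a register index
    uint16_t nnn;  // lowest 12 bits: an address
    uint8_t  nn;   // lowest byte: an 8-bit constant
    uint8_t  n;    // lowest nibble: a 4-bit constant
};

OpcodeFields decode(uint16_t opcode) {
    OpcodeFields f;
    f.x   = (opcode & 0x0F00) >> 8;
    f.y   = (opcode & 0x00F0) >> 4;
    f.nnn =  opcode & 0x0FFF;
    f.nn  =  opcode & 0x00FF;
    f.n   =  opcode & 0x000F;
    return f;
}
```

For the sprite-drawing opcode 0xD125, for example, decode() gives x = 1, y = 2 and n = 5.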

The basic premise holds, though. There are other ways to achieve the same thing (a jump table instead of a switch statement, for example), but writing a CPU emulator is essentially a case of getting hold of a list of opcodes and working out how they are structured, what they do, and how they affect the system’s memory and registers.
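As a sketch, the jump table alternative for the same toy three-instruction CPU might look like this (the handler names are mine):

```cpp
#include <cstdint>

// Jump table version of the toy dispatcher: instead of a switch
// statement, index an array of function pointers by the instruction code.
uint8_t reg[16];

void nop(uint8_t, uint8_t)          { }
void add(uint8_t x, uint8_t y)      { reg[x] += reg[y]; }
void sub(uint8_t x, uint8_t y)      { reg[x] -= reg[y]; }
void multiply(uint8_t x, uint8_t y) { reg[x] *= reg[y]; }

typedef void (*Handler)(uint8_t x, uint8_t y);

// Slot 0 is unused by the toy instruction set, so it gets a no-op.
Handler handlers[4] = { nop, add, sub, multiply };

void emulate(uint16_t opcode) {
    uint8_t instruction = (opcode & 0xFF00) >> 8;
    uint8_t x = (opcode & 0x00F0) >> 4;
    uint8_t y = (opcode & 0x000F);

    if (instruction < 4) {
        handlers[instruction](x, y);
    }
}
```

A jump table trades the readability of the switch statement for a single indexed call, which can matter when the instruction set is large.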

Now that I know how all of this works, it’s easy to see how assemblers and disassemblers work (I might get around to writing my own CHIP-8 assembler if I finish the emulator). It’s easy to see how a Z80 or 6502 emulator would work. The 6502 is particularly easy, assuming you’re only interested in the documented instructions (there are 56 of them, encoded as 151 of the 256 possible opcode values; the rest are undocumented) and aren’t worried about making it cycle-exact.

Other than learning all of this gubbins, I’ve been pondering some more Woopsi developments. First of all, I’ve got a plan for the scroll bars. Each scroll bar (horizontal and vertical) will be a gadget comprised of three sub-gadgets. They need up and down/left and right buttons and a slider gadget, which is itself comprised of a “gutter” and a “grip”.

I’ve also been pondering Jeff’s suggestion that gadgets pass requests around via events. At the moment, the only place this is particularly relevant is in the screen flip and depth sort buttons. Currently, the buttons call their parent’s “flip()” or “swapDepth()” functions. Jeff’s suggestion, to aid subclassers, is that the buttons just trigger flip or depth swap events in their event handlers (the event handler would be set to the parent for these decoration gadgets). It doesn’t seem like a particularly big change to swap from “_parent->flip()” to “_eventHandler->handleEvent(EVENT_FLIP)”, but it is actually a fundamental shift in how the system hangs together. At the moment, the system is very rigid. Each gadget orders the other gadgets to do something. The flip buttons order their screens to flip from one display to the other. Switching to event-driven interaction will mean that gadgets request operations instead. Gadgets will no longer say, “You must flip to the top display.” They will instead ask each other, “I say, old chap, do you mind awfully if you flip displays?” The whole system becomes much more fluid and, indeed, flexible.
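The difference between the two styles can be sketched like this. The names (EventHandler, EVENT_FLIP, Screen, FlipButton) are illustrative, not Woopsi’s actual API:

```cpp
// Hypothetical sketch of the two interaction styles discussed above.
enum EventType { EVENT_FLIP, EVENT_DEPTH_SWAP };

class EventHandler {
public:
    virtual ~EventHandler() { }
    virtual void handleEvent(EventType type) = 0;
};

class Screen : public EventHandler {
public:
    bool onTopDisplay;
    Screen() : onTopDisplay(false) { }

    void flip() { onTopDisplay = !onTopDisplay; }

    // Event-driven style: the screen decides how (and whether) to respond.
    void handleEvent(EventType type) {
        if (type == EVENT_FLIP) flip();
    }
};

class FlipButton {
public:
    EventHandler* _eventHandler;

    // Rigid style would be: _parent->flip();
    // Event-driven style merely raises a request:
    void click() { _eventHandler->handleEvent(EVENT_FLIP); }
};
```

The button no longer needs to know it is attached to a screen at all; a subclasser could point it at any handler.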

Thinking about this has been like looking at a 2D representation of a cube - one moment it looks like the cube protrudes, the next it seems inset. I’ve been alternating between thinking that it’s a good idea and rejecting it, but I’ve finally decided that it is a good idea. Why not make the system more flexible? Other than an extra function call, what are the downsides? There are several good reasons for the change, and no really good reasons not to implement it.

Comments

Jeff on 2008-01-07 at 21:57 said:

I’m not sure that a scrollbar needs to be three distinct gadgets - that may be taking the decomposition a bit far, for not a lot of gain. In general terms, I’d look at whether you expect to ever code different behaviour for a subgadget before building one.

On the other hand, if you implement the up/down parts as separately usable gadgets (ie, icon buttons again?) then re-using those seems like a no-brainer, though you need to keep the messaging between button and slider clean, rather than having the button know it’s part of a scrollbar.

Does having them as separate gadgets make it easier to completely suppress them? ie, I don’t want the arrow buttons at all, just the slider.

Does having them as separate gadgets make it easier/harder to do “proportional sizing” for the slider?

Does having them as separate gadgets make your skinning easier/harder/impossible?

ant on 2008-01-08 at 00:02 said:

Up/down/left/right buttons will just be a single GlyphButton class, which has two states (normal and clicked). That class will probably replace the existing flip, depth and close buttons. The bar itself should double as a slider gadget, so that can be re-used. Having the grip and gutter as two separate gadgets means I can use existing functionality within the base class.

Jeff on 2008-01-08 at 02:53 said:

Doesn’t using ‘GlyphButton’ lock the width/height of your scrollbar down to being the width/height of your font?

Whilst the ‘common graphics are just text characters’ approach is a nice hack, it does create some arbitrary hidden limitations. Once again, I’d say: what is the impact on skinning?

I think the answer is: a skinnable scrollbar would share very little source code with an amiga scrollbar, but more importantly, can they have a common base-class that doesn’t make assumptions about how the subclasses are implemented? ie, whilst the AmigaScrollbar might use GlyphButton, and have three sub-gadgets, SkinnableScrollbar might want to do it all with plain GraphicsPort->drawRect/Icon/Whatever routines instead…

Jeff on 2008-01-08 at 02:55 said:

One other question about sub-gadgets. Are you expecting that the scroll-bar subgadgets would become the “active gadget” or more specifically, would they get the ‘keyboard focus’?

ie, does CROSS-DOWN button get passed to the scrollbar, or to the most recently clicked sub-gadget?

ant on 2008-01-08 at 13:25 said:

The current screen flip, screen depth, window close and window depth gadgets are all examples of gadgets that are essentially the GlyphButton, so there’s masses of repeated code. The button itself can be any size. The glyph in the middle is automatically centred horizontally and vertically; the size of the font has no impact on the possible dimensions of the gadget. The dimensions are set in the constructor.

Only the arrow buttons will be instances of the GlyphButton class. The grip and the gutter (the slider gadget) won’t use glyphs. In the Amiga version of the gadget, the grip will be a standard bevelled rect that changes background colour when clicked and moves when dragged. The gutter will be an inwards-bevelled rectangle with a darker background colour. When clicked, the grip will jump to the stylus location.

A skinnable scrollbar is identical to the Amiga scrollbar, except for the draw(rect) functions - the skinned versions blit a bitmap whilst the Amiga versions draw some primitive shapes. The skinned version can use BitmapButtons instead of GlyphButtons for its arrow buttons.

The arrow buttons could easily be hidden by setting a property, in the same way that window close buttons can be hidden.

Key events are routed through all active gadgets. As all of the gadgets that comprise the scroll bar are children of the scroll bar, it’s easy for the top-level scroll bar gadget to intercept any keyboard events and process them as necessary. Thus, if any of the components of the scroll bar is the current active gadget, keypresses can be interpreted by the container gadget.

Obviously, keypresses can’t be routed to the scroll bar if none of the scroll bar gadgets is active - what if there is more than one scroll bar in a window? That would need to be handled by the developer (intercept any key events at window level and route or process them accordingly).

Jeff on 2008-01-08 at 20:16 said:

Consider my List gadget, which might like to own a scrollbar; when it (the List) has the focus (because it has been clicked in) and the user presses CROSS-DOWN. At the moment, I handle that in the List, changing the selection, then presumably needing to tell the appropriate scroll bars to “change value being displayed”.

Alternately, I might push the CROSS-UP/DOWN keypresses to the “vertical scroll bar” and rely on it calling me back to do the actual scroll, even if it isn’t the active gadget. I’m not sure that’s a brilliant way to do things, but I’m just trying to keep repetition to a minimum.

The logic would probably end up with

Woopsi->tellListAboutKey(DOWN)
List->tellScrollBarAboutKey(DOWN)
VScrollBar->tellOwnerAboutScroll(AMOUNT)
List->tellScrollBarAboutNewPos()

This gets sticky with the RIGHT key currently meaning “scroll down by 1 page” - clearly the above implies that the list would tell the HORIZONTAL scroller, which would not have the desired effect.

So, unless the scrollbar has a different graphic representation when it is the active gadget (as opposed to being ‘currently clicked’ - ie, in a drag cycle), it’s hard to know what the four keys would do from the user’s perspective.

(This is a more general problem with Woopsi - knowing which Gadget has the keyboard focus, so that (say) X could toggle the state of a checkbox, or trigger-left/right could move the focus around a data entry window ala DSOrganize. MacOS and Windows both have “focus rectangles” - no idea about Amiga - but of course, they cost valuable screen space…)

It also seems like the List gadget would need to rely on the scrollbar not processing any keypresses at all, so that it could override them, even if the scrollbar was the active gadget? And to be able to intercept that, it would need to know how the scrollbar was implemented (as one gadget, or as one with three subgadgets) - or the scrollbar not becoming active.

Or have I got the keypress routing wrong? There isn’t one active gadget; there’s one active subgadget for every Gadget, and all keypresses are always delivered to the entire hierarchy. That’s different to OSX/Windows, but would not have the same problems I’m worrying about above - though it’s more expensive.

ant on 2008-01-08 at 23:31 said:

This is why I haven’t implemented keypress interpretation within any of the gadgets so far - with the stylus, it’s obvious what the user is trying to do. Keypresses are much more ambiguous, and building in default behaviours is dubious if developers are going to want their keys to work in unexpected ways.

The scroller should be intelligent enough that it can be told to scroll (and will scroll its associated target) or can be told to resize/reposition itself (because the target has been manipulated).

You’re right - there isn’t a single active gadget. Each gadget contains an active gadget pointer, until we hit the gadget with focus (which contains a NULL pointer). Keypresses are sent to every active gadget. Imagine that the last clicked gadget was the scroll bar’s “Up” button. Keypresses would be sent like this:

  • Key pressed;
  • Woopsi raises a keypress event;
  • Woopsi passes keypress to active screen;
  • Active screen raises a keypress event;
  • Active screen sends keypress to active window;
  • Active window raises a keypress event;
  • Active window sends keypress to scroll bar;
  • Scroll bar raises a keypress event;
  • Scroll bar sends keypress to “Up” button;
  • “Up” button raises a keypress event.

Generally, the gadgets don’t do anything with the keypress event. It’s just a redundant function call. However, if I know that I want CROSS-RIGHT to scroll a list gadget by a full page, I could:

  • Subclass Window;
  • Intercept the keypress passed to the window by the screen;
  • switch() the keypress to determine if CROSS-RIGHT has been pressed;
  • If so, scroll the list gadget down one page;
  • If not, continue sending the keypress through the active gadget pointers.

If you are subclassing, you can pick up keypress events as soon as they become relevant and stop pushing them through the active gadget chain. Alternatively, if you aren’t subclassing, you can leave the keypress events to process normally and just handle the event fired by the gadget you’re interested in. You could set the same event handler for both the main scroll bar gadget and the list, and have the same code process CROSS-RIGHT for both gadgets.
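The chain of active gadget pointers described above can be sketched roughly like this (again with invented names, not Woopsi’s real code):

```cpp
#include <cstddef>

// Rough sketch of keypress routing through the active gadget chain.
// Each gadget raises its own keypress event, then pushes the key to its
// active child; the chain ends at the focused gadget (NULL pointer).
class Gadget {
public:
    Gadget* _activeGadget;
    int keyPressesSeen;

    Gadget() : _activeGadget(NULL), keyPressesSeen(0) { }
    virtual ~Gadget() { }

    // A subclass that wants to intercept a key can override this and
    // simply not call the base implementation, stopping the chain.
    virtual void keyPress(int key) {
        keyPressesSeen++;                  // "raise a keypress event"
        if (_activeGadget != NULL) {
            _activeGadget->keyPress(key);  // forward down the chain
        }
    }
};
```

Following the example above, a chain of screen, window, scroll bar and “Up” button would see a single keypress delivered to all four gadgets in turn.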

Jeff on 2008-01-09 at 03:40 said:

I think, as it stands, I’m more likely to take your “sub-gadgets” approach, and have the List gadget own a ScrollBar - the list should interpret all key events, and leave nothing to the ScrollBar.

The alternative was to have the List and Scrollbar being peers with their owning window tying them together “buddy style” (like the tiny up/down control that Microsoft put next to numeric text entry fields), but that seems like the hard way to do it, since (as you point out) you’d need to subclass Window.

This way, the list can show the selected row in a different color if it has the focus as well, even if the active gadget is one of its children. Hmmm, can that work or does it depend on the implementation of the scrollbar? Can List prevent the Scrollbar from becoming the active gadget?

Aha, the List can look to see if it is the active gadget in its parent, irrespective of whether it, its children, or grand-children “have the focus”.

ant on 2008-01-09 at 14:44 said:

Well, you don’t have to subclass window. If you have a “ListEventHandler” class, you could make that the event handler for the window, list and scrollbar gadgets. Its handleEvent() function would look like this:


void handleEvent(EventArgs event) {
    switch (event.type) {
        case EVENT_KEY_PRESS_RIGHT:
            _list->scrollPageDown();
            break;
    }
}

The scroll bar updating code could be embedded into the _list->scrollPageDown() function as long as the list contains a pointer to a scroll bar. Alternatively, the List gadget’s scrolling function could be called by the scroll bar, in which case you’d have to subclass the scroll bar. You could even have the list inherit from the ScrollingPanel, as I’m thinking about having control code for that gadget embedded into the scrollbar.

Replying to one of the earlier comments, I don’t think the standard Amiga GUI included the concept of an active gadget at all. The Magic User Interface (an alternative GUI library) did have carets and keyboard control, but the standard BOOPSI system didn’t. If you wanted the keyboard to control anything you either had to add keyboard shortcuts (so pressing “O” clicks the “OK” button, for example) or you had to rely on menu shortcuts (pressing Right Amiga-E in Workbench brings up the “Execute Command” window).

The last way to achieve keyboard interaction was (I think - this is what I vaguely remember from writing GUI code with Blitz, which isn’t necessarily indicative of how it would have worked in either ASM or C) to receive keyboard messages passed to the window by waiting for a new event for that window, then processing it accordingly. If the message was a key event, you’d use a switch() statement to work out which key was pressed and then write code to perform whatever action that key was supposed to perform on whatever gadget it was supposed to affect.

This is similar to the way I’ve built Woopsi, but I’ve added in the extra flexibility of allowing all gadgets to receive keypress events, not just top-level container gadgets (screen and window).

Representing the active gadget with some kind of visual feedback is possible. The easiest way is to alter its colour, OSX-style, but that poses problems for skinning and for complex gadgets (your list, for example). A Windows-style caret (dotted rectangular box around the active gadget) is another option; before the GraphicsPort was written this was completely impossible because there was no way to draw it, but now it’s easy for a parent gadget to draw a dotted box around its active child. However, this has a negative impact on the amount of screen space available as you’ve noted.

It’s a tricky one. Most of the problems arise because of the nature of the hardware - the available screen space is too small and the key input options are limited. That’s another reason why I’ve left out default keypress behaviour - I’m focusing on the stylus as the primary input device.

Jeff on 2008-01-09 at 19:35 said:

The thing about the ListEventHandler approach is that it introduces yet another object which must maintain its own pointers to the other objects involved - your _list variable must point to the List gadget, which the Window already does.

And to get the ScrollBar to use the same event handler, I have to either

a) have the Window know about the implementation of the List

b) have the List pass it through at, um, construction time? No, it’s not populated then - at List.setEventHandler() time? eeuww, that means List relies on the event handler it is passed being smart enough to deal with events from gadgets other than the one it was installed on.

b) wouldn’t be so bad if the events being passed around were the higher-level ones, but I think that raw ‘hardware events’ like click and keypresses may well confuse it all beyond belief.

I think this gets back to the compromise between a pure MVC model, and the desire to keep the number of objects (and thus memory requirements) as low as possible. I can have an all-encompassing model object that owns the window and the list - that object would be a subclass of EventHandler. Or I can just subclass from both Window and EventHandler - in effect welding the M part of MVC onto the topmost V object.

Jeff on 2008-01-09 at 19:45 said:

I think the concern about ‘skinning’ vs focus is a minor one - the Gadget hierarchy need not concern itself with the problems it causes. It just needs to provide a mechanism whereby the appropriate draw() routines can know that they are active; it becomes the skinning code that has to worry, and the solution would simply be to provide more bitmaps.

This is essentially what Windows does - custom draw routines get passed flag words that say “this element is active, it has the focus, it is selected”, etc, and are expected to do the right thing. Now, Windows does provide a few nifty helper subroutines equivalent to drawFocusRectangle(), etc, and I think Woopsi pretty much has similar capabilities via its GraphicsPort routines. The more primitives the port provides, the easier it is for the Gadget.

Windows also provides the ability to ask for ‘the active text colour, the active window background colour, etc’, which, as I recall, Woopsi does via #define at the moment, but that’s not a major thing to change. In my day job, we just have a single object called ‘SystemMetrics’ which is initialised at startup from the system config, and can be adjusted on the fly (in effect, by your skin loader). When you want the colour of a checkbox (or the font height of a label), you ask the metrics object; you don’t compute it yourself. That way, every gadget in the system uses compatible computations.
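A minimal version of that SystemMetrics idea might look like this - a sketch of the pattern Jeff describes, not code from any real system, with illustrative 15-bit colour values:

```cpp
#include <cstdint>

// Central store for colours and fonts: gadgets ask this object instead of
// computing (or #defining) values themselves, so a skin loader can change
// the look of every gadget at runtime by adjusting a single place.
struct SystemMetrics {
    uint16_t activeTextColour;
    uint16_t windowBackgroundColour;
    uint8_t  labelFontHeight;
};

// Initialised at startup (here with made-up defaults).
SystemMetrics metrics = { 0x7FFF, 0x39CE, 8 };

// A skin loader would simply overwrite the fields on the fly.
void loadSkin(uint16_t text, uint16_t background, uint8_t fontHeight) {
    metrics.activeTextColour       = text;
    metrics.windowBackgroundColour = background;
    metrics.labelFontHeight        = fontHeight;
}
```

Every gadget that reads its colours from the metrics object picks up the new skin automatically on its next redraw.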