Author Topic: OutpostHD Programming  (Read 4093 times)

Offline leeor_net

  • Administrator
  • Hero Member
  • *****
  • Posts: 2352
  • OPHD Lead Developer
    • LairWorks Entertainment
OutpostHD Programming
« on: October 10, 2015, 11:45:22 AM »
As the game has progressed, the code has naturally grown larger and more complex. It's not unmanageable, but I've been wondering about the UI handling code in particular, so this post is aimed at the developers out there I can bounce some ideas off of (looking at you, Hooman and StylixX).

In the GameState object there is a lot of UI code: event handlers, various checks, and so on. Nothing too out of the ordinary, just standard game logic code. However, the UI handling code has grown so large that I literally moved it out of the GameState source file and into its own GameStateUi source file. This is transparent to the compiler but makes it easier for me to keep these things sorted.

However, I'm at a point now where mouse clicks need to stay within the UI and not leak through to the game logic. The EventHandler in the base NAS2D code doesn't care who's receiving messages; it just throws the events at whatever listeners are subscribed. Simple enough.

So here's the question.

I've been thinking about moving all of the UI/GUI code into its own object, GameStateUi, and providing a function like "isPointerInGui()" so GameState knows to ignore things like mouse clicks and even key presses when a particular UI element has focus. I'll need to update the UI components to start setting and releasing focus because of this, but moving all of the UI code into a component class like GameStateUi would clean up the GameState object a bit, and all of that handler code could then be moved out with it.

It does present some ugliness, though, as I'd need to pass around pointers/references to other underlying components like TileMap, RobotPool and StructureManager, among other things. Alternatively, I could use callbacks that GameState hooks into, so when inserting a structure, for instance, the GameStateUi object would raise an event that GameState responds to. I think that may be overkill, so I'd rather just have GameStateUi hold pointers to those things instead.
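
For what it's worth, the callback route would look roughly like this. This is only a sketch under my own assumptions; the class shape and the StructureInsertCallback name are hypothetical, not existing OutpostHD code:

Code: [Select]
#include <functional>
#include <utility>

// Hypothetical sketch of the callback approach: GameStateUi holds no pointers
// to TileMap/RobotPool/StructureManager. Instead it raises a callback that
// GameState hooks into when the player confirms a structure placement.
class GameStateUi
{
public:
    using StructureInsertCallback = std::function<void(int tileX, int tileY, int structureId)>;

    void structureInsertHandler(StructureInsertCallback handler) { mStructureInsert = std::move(handler); }

protected:
    // Called from the UI's own mouse handling once a placement is confirmed.
    void raiseStructureInsert(int tileX, int tileY, int structureId)
    {
        if (mStructureInsert) { mStructureInsert(tileX, tileY, structureId); }
    }

private:
    StructureInsertCallback mStructureInsert;
};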

Thoughts? Comments?

Thanks!
« Last Edit: October 10, 2015, 05:51:03 PM by leeor_net »

Offline Hooman

  • Administrator
  • Hero Member
  • *****
  • Posts: 4955
Re: OutpostHD Programming
« Reply #1 on: October 10, 2015, 12:42:17 PM »
I'm afraid I don't quite understand what is going on here. Having some example code might help.

Why would GameState need to ignore mouse clicks? As in, why would it even respond to them? I don't think an object called GameState should have anything to do with the user interface. I also don't think GameState should listen for events. Rather, I think it should be told when an action occurs, and not a generic action like a mouse click, but something really specific, like a unit order being issued. What exactly is this GameState object actually responsible for handling?

Offline leeor_net

  • Administrator
  • Hero Member
  • *****
  • Posts: 2352
  • OPHD Lead Developer
    • LairWorks Entertainment
Re: OutpostHD Programming
« Reply #2 on: October 10, 2015, 05:48:00 PM »
I am, of course, assuming that everybody is familiar with the code and knows what I'm talking about. Sorry about that -- I need to remember to be detailed about these things.

I'm using NAS2D, which has what we call a State Machine. Basically, you derive a class from an object called State, which provides the hooks for having its init and update functions called automatically. The state machine pretty much just calls init() when a state comes into focus and calls update() every frame. The idea behind it is to break apart game logic. It's not the greatest system in the world, and I'll be extending its functionality in the future, but the general idea is that I create a class representing a particular state of the game, and that class defines all of the logic for it.
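
For anyone unfamiliar with the idea, here's a bare-bones sketch of that shape. It's illustrative only and heavily simplified; the actual NAS2D State interface differs in its details:

Code: [Select]
// Simplified sketch of the state machine idea described above; not the real
// NAS2D interface. A State gets init() when it takes focus and update() each
// frame, and signals a transition by returning the next state to run.
class State
{
public:
    virtual ~State() = default;

    virtual void init() = 0;      // called once when the state comes into focus
    virtual State* update() = 0;  // called every frame; returns the state to run next
};

class StateManager
{
public:
    explicit StateManager(State* initial) : mCurrent(initial) { mCurrent->init(); }

    void updateFrame()
    {
        State* next = mCurrent->update();  // assumes update() always returns a valid state
        if (next != mCurrent)
        {
            mCurrent = next;
            mCurrent->init();              // new state comes into focus
        }
    }

private:
    State* mCurrent;
};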

In NAS2D, you can subscribe to specific events, including mouse motion, mouse buttons, keyboard input, window events, joystick events and, probably in the future, network events.

GameState, in the case of OutpostHD, is the state in which the tile map is displayed. It draws the mouse and responds to clicks within the tile window. It also handles responding to UI events. E.g., when a UI Control gets clicked, it raises an event that the GameState object is subscribed to. Take ButtonTurns, for instance: when it gets clicked, it raises an event. GameState subscribes to ButtonTurns.Click() and uses void buttonTurnsClicked() as its 'delegate' (listener).

I haven't shown any code so far because I figured this was a high-level problem, but I'll provide some sample code to illustrate my point.

Code: [Select]
class GameState
{
public:
    GameState();
    ~GameState();

protected:
    bool update();

    void onMouseDown(int x, int y, MouseButton button);

private:

    Button btnTurns;
    Button btnSystem;
    Button btnRobots;

    Menu menuRobots;
    Menu menuTubes;

    Dialog diggerDirectionDialog;
    Dialog tubeTypeDialog;

    TileMap map;
};

Fairly straightforward. Again, this is simplified, but I hope it illustrates my point.

GameState connects itself to the EventHandler class. The EventHandler class just takes in system events and forwards them out to subscribers. So if the application gets a MouseDown event from the operating system, EventHandler captures that and throws it at any subscribers. GameState connects its onMouseDown() function to EventHandler for MouseDown events. So any time a user clicks the mouse within the application window, GameState's onMouseDown function gets called. It's handled similarly to Qt's Signals & Slots.
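
To make the pattern concrete, here's a minimal signal type in that spirit. It's just an illustration of the mechanism, not the actual NAS2D EventHandler API:

Code: [Select]
#include <functional>
#include <vector>

// Minimal signal in the spirit of Qt's Signals & Slots: listeners register a
// member function, and emitting the signal forwards the arguments to all of them.
template <typename... Args>
class Signal
{
public:
    template <typename Object>
    void connect(Object* instance, void (Object::*method)(Args...))
    {
        mListeners.push_back([instance, method](Args... args) { (instance->*method)(args...); });
    }

    void emit(Args... args)
    {
        for (auto& listener : mListeners) { listener(args...); }
    }

private:
    std::vector<std::function<void(Args...)>> mListeners;
};

// Usage sketch (names illustrative):
//   Signal<int, int, MouseButton> mouseDown;
//   mouseDown.connect(this, &GameState::onMouseDown);  // subscribe
//   mouseDown.emit(x, y, button);                      // EventHandler forwards the OS event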

GameState::onMouseDown() looks generally like this.

Code: [Select]
void GameState::onMouseDown(int x, int y, MouseButton button)
{
    if(button == MOUSE_BUTTON_LEFT)
    {
        if(isPointInRect(x, y, map.area()))
        {
            // do stuff
        }
    }
}

This is a fairly simple setup. Straightforward, effective, does the job.

UI elements, called Controls in OutpostHD (modeled on the Windows API), ALSO subscribe themselves to these same events. They do so automatically upon instantiation. This makes it easy for me when I'm building UIs, since I don't have to manually connect each Control to the EventHandler or forward events to each Control myself (e.g., when GameState::onMouseDown() is called, I don't have to do something like btnTurns.onMouseDown(x, y, button);).

GameState::onMouseDown() responds to MouseDown events in OutpostHD specifically when it comes to inserting objects into the TileMap, picking tiles, etc. That's its purpose. But consider this: if a Control occupies the same space as the TileMap, both the Control AND GameState will handle the MouseDown event at the same time (well, sequentially, based on their subscription order, but that's not the point). In most cases this isn't a problem, but what about a case where GameState is attempting to insert a digger while the DiggerDirection dialog is within the bounds of the TileMap? Now the DiggerDirection dialog is responding while GameState is ALSO responding, which can cause weird bugs (this was an actual bug in OutpostHD's code). So to prevent this, we do something like this:

Code: [Select]
void GameState::onMouseDown(int x, int y, MouseButton button)
{
    if(button == MOUSE_BUTTON_LEFT)
    {
        if(isPointInRect(x, y, diggerDirectionDialog.area()) && diggerDirectionDialog.visible())
            return;

        if(isPointInRect(x, y, map.area()))
        {
            // do stuff
        }
    }
}

Simple. Does the job. Easy enough to understand.

But what about the case where we start having a lot of potential dialogs? For each individual dialog or UI element that can fall within the area of the TileMap, we need to add a special case, which leads to ever larger code. I could use a list and a loop to iterate over the UI Controls (see the sketch below), but that doesn't address the other problems.
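
For completeness, the list-and-loop version would look something like this. It's a sketch in the same style as the snippets above, and mControls is a hypothetical std::vector<Control*> member holding every Control that can overlap the map:

Code: [Select]
// Sketch: test every overlapping Control in one loop instead of adding a
// special case per dialog. mControls is a hypothetical member, not real code.
void GameState::onMouseDown(int x, int y, MouseButton button)
{
    if (button != MOUSE_BUTTON_LEFT) { return; }

    for (Control* control : mControls)
    {
        if (control->visible() && isPointInRect(x, y, control->area()))
        {
            return;  // a UI element owns this click; leave the map alone
        }
    }

    if (isPointInRect(x, y, map.area()))
    {
        // do stuff
    }
}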

But even beyond all of this, the initialization and event handler code for all of the UI Controls leads to a GameState class that starts to look like this:

Code: [Select]
class GameState
{
public:
    GameState();
    ~GameState();

protected:
    bool update();

    void onMouseDown(int x, int y, MouseButton button);

private:

    void onBtnTurnsClicked();
    void onBtnSystemClicked();
    void onBtnRobotsClicked();

    void onMenuRobotsSelection();
    void onMenuTubesSelection();

    void onDiggerDirectionDialogSelection();
    void onTubeTypeDialogSelection();

    Button btnTurns;
    Button btnSystem;
    Button btnRobots;

    Menu menuRobots;
    Menu menuTubes;

    Dialog diggerDirectionDialog;
    Dialog tubeTypeDialog;

    TileMap map;
};

You can see where the problem lies. As the UI gains functionality and more controls are added, you get more and more UI event handler functions and code. Not to mention that each UI Control not only needs to be instantiated but also needs to be set up. That's kind of what you'd expect when developing a UI (I thought my setup was insane until I saw that both Qt and .NET do it practically the same way), and you end up with large functions full of this sort of code:

Code: [Select]
    btnTurns.image("ui/icons/turns.png");
    btnTurns.size(30, 30);
    btnTurns.position(100, 200);
    btnTurns.click().Connect(this, &GameState::onBtnTurnsClicked);

Normally when I develop in C++, I define a class in a header file (GameState.h) and all of its definition code in a source file (GameState.cpp). Because of all the UI code, the source file was getting huge and somewhat unmanageable, so I broke all of the UI handling and initialization code out into a separate source file (GameStateUi.cpp).

That got my gears turning: there are two ways of effectively using OOP. One is inheritance (Object B IS AN Object A). The other is composition (Object C HAS AN Object A AND an Object B). Why don't I just create a component class for GameState, called GameStateUi, that takes all of the initialization and event handling out of GameState and handles it itself? That way GameState can focus on the LOGIC of the game and GameStateUi can handle the MANAGEMENT of the UI. The idea is that you'd get the following readability improvements:

Code: [Select]
class GameState
{
public:
    GameState();
    ~GameState();

protected:
    bool update();

    void onMouseDown(int x, int y, MouseButton button);

private:

    GameStateUi ui;

    TileMap map;
};

and

Code: [Select]
void GameState::onMouseDown(int x, int y, MouseButton button)
{
    if(button == MOUSE_BUTTON_LEFT)
    {
        if(ui.mouseInUiElement(x, y))
        {
            return;
        }
        else if(isPointInRect(x, y, map.area()))
        {
            // do stuff
        }
    }
}

I apologize for the length of this post, but it's clear that I needed to provide specific examples of what I meant and what, exactly, I was asking about.

Quote from: Hooman
What exactly is this GameState object actually responsible for handling?

In this case, everything when the user is looking at the tile map. Basically it's the game itself. I remember another 2D game framework that uses the term Scene. Same basic idea.

Later on I will also have objects called TitleState (title screen and title menu options) and GameStartState (selecting difficulty, planet type, etc.). These objects represent specific 'states' of the game at a high level. States themselves could probably be broken down into smaller states (e.g., GameState could be broken down into NO_INSERT, INSERT_STRUCTURE, INSERT_ROBOT, etc.), but I didn't see the need for that.
« Last Edit: December 19, 2015, 01:11:53 PM by leeor_net »

Offline Hooman

  • Administrator
  • Hero Member
  • *****
  • Posts: 4955
Re: OutpostHD Programming
« Reply #3 on: January 03, 2016, 11:19:37 AM »
Wow, so nearly 3 months later, and I notice this thread is marked as unread, but the forum section was not highlighted as containing unread topics. Strange. I think I read a bit of this in an email notification and then promptly forgot about it.

It sounds like you have a complicated event handling system. I'm thinking there must be a better way to organize and route mouse and keyboard messages. I don't think your system has quite sunk in yet, so my reply might be a little off.

Usually I see hierarchical systems where larger components contain smaller components. Typically the larger components know where the smaller contained components are, so they can do hit testing and route messages to immediately contained objects. Think of it as a top-down system, where the main window receives the message first and then routes it down to frames, which route it to buttons, or whatever. A game map view might route a mouse message to an active mouse command handler, such as unit selection, attack command, repair command, etc.

Processing/handling order can be considered separately from the routing. It could be that a message is first processed and then handed down, or first handed down and then processed on the way back up. Consider a game pause feature: you'd want to intercept messages before handing them down to a disabled game window. You might also have an APM (Actions Per Minute) recorder that counts messages handled successfully by the lower layer and distinguishes them from useless clicks that did nothing. Such a system might catch messages on the way up, and also record return values that indicate whether the message was handled. Typically, though, I don't see return values used to indicate if a message was handled; it's a feature without a whole lot of use. If the user clicks a space between controls, the message gets routed down far enough to determine that, and is then silently forgotten.
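
A rough sketch of what I mean, with a 'handled' return value bubbling back up. The types here are illustrative, not anything from OutpostHD or NAS2D:

Code: [Select]
#include <vector>

// Sketch of top-down routing: a container hit-tests its children and hands the
// message down; the bool result tells the caller whether the click was consumed,
// which is what a pause screen or APM recorder could key off of.
struct Rect
{
    int x, y, w, h;
    bool contains(int px, int py) const { return px >= x && px < x + w && py >= y && py < y + h; }
};

class Widget
{
public:
    virtual ~Widget() = default;
    virtual bool onMouseDown(int x, int y) = 0;  // returns true if the click was handled
    Rect area{};
};

class Container : public Widget
{
public:
    bool onMouseDown(int x, int y) override
    {
        for (Widget* child : children)
        {
            if (child->area.contains(x, y) && child->onMouseDown(x, y))
            {
                return true;   // consumed by a child; stop routing
            }
        }
        return false;          // click landed in the dead space between children
    }

    std::vector<Widget*> children;
};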

Hit testing might not be so explicit. You don't need to store bounding-box coordinates for everything that's clickable. The parent container may have some kind of implied layout, such as a horizontal list of items, a vertical list of items, or a grid of items. In those cases, the (x, y) coordinates of the mouse can be used to calculate which item is being clicked, using integer division, the remainder, and array indexing.

Example:
Assume there's a margin of 10 pixels around all buttons, each button is 22 pixels high, and buttons are stacked vertically.
The vertical layout receives a mouse click at (x, y). This click can fall on a button, or in the dead space between them.
First check if ((x >= 10) && (x < width - 10)), or rather, return from the message processing function if that's not true, as you're in the margin area to the sides of the buttons.
Next calculate (buttonIndex, yOffset) = y.divmod(10 + 22).
If (yOffset >= 10) then the click hit the button, otherwise it falls in the dead space between buttons.
Propagate the message: verticalLayout.buttonList[buttonIndex].Click();

For a more complex child container object, rather than a button, you may want to adjust for relative mouse offsets:
verticalLayout.objectList[objectIndex].Click(x - 10, yOffset - 10);
Using relative mouse offsets means your controls don't need to be self-aware of their absolute screen position. They might not even have a defined size, relying on the container to know an appropriate size. Does a button really need to know how big it is? From a behavioural standpoint, it only needs to know if it has been pressed.
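
In C++ that might look like the following, assuming the 10 pixel margin and 22 pixel button height from the example. VerticalLayout, width, and buttonList are hypothetical names, not from any existing codebase:

Code: [Select]
// Sketch: hit testing a vertical list of buttons by arithmetic instead of
// storing a rectangle per button. Margin = 10 px, button height = 22 px.
void VerticalLayout::onMouseDown(int x, int y)
{
    const int margin = 10;
    const int buttonHeight = 22;
    const int slotHeight = margin + buttonHeight;

    if (x < margin || x >= width - margin) { return; }  // side margins: dead space

    const int buttonIndex = y / slotHeight;  // which slot the click falls in
    const int yOffset = y % slotHeight;      // offset within that slot

    // The first 'margin' pixels of each slot are the gap above the button.
    if (yOffset < margin || buttonIndex >= static_cast<int>(buttonList.size())) { return; }

    // Forward the click with coordinates relative to the button's top-left corner.
    buttonList[buttonIndex].click(x - margin, yOffset - margin);
}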

I'm of course glossing over mouse up/down, left/middle/right, and how controls are drawn, particularly if they don't know their own size. (Who draws them? Themselves? Or their containers?)
« Last Edit: January 03, 2016, 11:23:08 AM by Hooman »

Offline leeor_net

  • Administrator
  • Hero Member
  • *****
  • Posts: 2352
  • OPHD Lead Developer
    • LairWorks Entertainment
Re: OutpostHD Programming
« Reply #4 on: February 18, 2016, 02:19:37 PM »
This is effectively the way I handled it in a complex GUI structure. You have the top level, or root, which I considered to be the entire screen area. Then you have controls that are put into a stack-like structure, except that instead of the compsci stack where you can only insert and remove at the top, you can move controls up and down within it. I guess it's easier to just call it a top-down list.

Anyway, the top level controls would get events first and would then pass those events on to their children.

It worked pretty well, but with the changes to the event system, the move of the code from strictly The Legend of Mazzeroth development into NAS2D, and the desire to make it all modular, the original GUI code kind of got left behind.

At this point I only have very basic controls, including one called a Container. The Container is just what it sounds like: it contains other controls, and its children are positioned relative to the container itself.

Drawing is handled by the individual Controls themselves. The idea behind this is that if you want a more specialized version of a control, say a differently styled button, you simply override the draw() function and do whatever special drawing is needed.
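
As a sketch of what that looks like in practice (the names here are illustrative; the real Control class has more going on):

Code: [Select]
// Sketch: Controls draw themselves, so restyling a control is just a matter of
// overriding draw(). Positioning, sizing and event handling stay untouched.
class Control
{
public:
    virtual ~Control() = default;
    virtual void draw() = 0;
};

class Button : public Control
{
public:
    void draw() override
    {
        // render the default button skin
    }
};

class FancyButton : public Button
{
public:
    void draw() override
    {
        // render a different style of button; nothing else needs to change
    }
};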