As I've mentioned before, it would be great if other programmers got some use out of my framework code. After all, I've spent a lot of time now getting everything to work on Windows, Linux and Mac, and I've done a reasonably careful job on the code. I know there are several multi-platform solutions out there, for both graphics and UI, but it looks like there's room for improvement.
Recently, Shamus Young started another programming project and ran into trouble using Qt as his GUI. So I suggested that he try using my framework. Unfortunately, my GUI is still very much under development. I've sent him a preliminary version of it (which you can download). For this part, I thought I would try to explain some of the issues.
Programmers will over time build up a library of code that they use again and again. On a large project, you also have layers of infrastructure needed before you even get to the real application. All of this gets put into the "framework" for the program.
Eventually, if everything works out, your framework will get pretty powerful. You may want to make a published, documented library out of it. Unfortunately, it's rarely that simple. Over time, you will have connected one piece to another, or used several different other libraries to make your job easier, until the whole thing is a huge pile of code from different sources, with all kinds of invisible linkages behind the scenes.
Someone comes along expressing interest, but the thing turns out to be unusable. It's like you see a pretty flower by the side of the road and try to pick it. But the stem won't break, and when you pull, up comes a huge mass of roots, a patch of ground, and twenty other nearby flowers. The flower no longer looks like something you want to take home and stick in a vase.
Shamus has been having two problems with the libraries he's using. First, there's this linkage/dependency problem. He just wants a few simple functions, not a lot of attached guts and dependencies. Every library you bring into your project increases its size, adds new sources of bugs, and complicates the build/release process.
Second, he's writing a video game, and timing is critical. At 60 frames per second, you only have 16 milliseconds to compute a frame. For any kind of real-time game, smooth frame rate is essential. You can't have libraries going off and using time on their own initiative. You don't want to be in the middle of firing the gun and have a library waste 50 milliseconds doing internal cleanup!
I have the same requirements that Shamus does, and I've been implementing my own framework. I'm in the middle of implementing my own GUI. Still, since there's been so little interest from anyone in using my code, I haven't worried about some of the small details. I also hadn't really considered people using just pieces of my framework. To get someone like Shamus to use this code, I needed to improve the modularity a bit and clean up some of the loose ends.
For the rest of this part, I'm going to review how the architecture turned out. I'm not sure this is of much general interest, but perhaps the programmers reading this will have an opinion. For the rest of you, there's a set of three demo applications that you can play with. There is of course no documentation yet! I'll worry about that if Shamus actually decides to use this code.
Note that I've prefixed all my class names with "mg". Vanity, I know, but it helps if your framework has unique names, so that it doesn't collide with other packages people are using. I thought of calling the package "magnesium", so that "mg" didn't obviously stand for me, but someone else is already using that name. Oh well.
We want to write programs that run under multiple operating systems. My targets are Windows, Linux and Mac (OSX). All my code is in C++, and all three systems have C++ compilers, so you would think this would be simple. Unfortunately, the C and C++ standards have traditionally not covered "operating system" functions, so these are different on each platform.
This is different from a more modern language like Java. In Java, if you want background "threads", they are part of the language. If you need synchronization between threads, that's part of the language too. In C++, you call into the operating system for that stuff, and different operating systems do it in different ways.
On top of that, we want to create a window and handle input events. The operating systems handle that in different ways too. And we want to do 3D graphics, in OpenGL or Microsoft DirectX. Both have many different versions with different capabilities. Covering over all these variations is a challenge.
My mgPlatform library tries to cover the basics -- create a window, take input events, and set up for OpenGL graphics. Here are the variations we have to cover:
In Windows, your program starts with an "entry point" called WinMain. You do your initialization, set up OpenGL, create your window, and then process "events" until done. You don't read the events directly -- you give Windows a pointer to one of your routines, and Windows calls into your code with each event.
Setting up OpenGL under Windows is a pain in the neck. Microsoft only directly supports OpenGL 1.1, which is ancient history. To use anything more recent, first you create an OpenGL 1.1 context on a window. Then you ask the device driver "what version of OpenGL do you really support?" You might get any level of OpenGL, depending on how up to date the drivers are on that machine, and what the graphics card is capable of.
Next, you ask the device driver to give you pointers to all the OpenGL functions that it implements (there are dozens). This lets you call into the driver without using any of the functions Microsoft provides (since nearly all of them are missing.) Initialization will also include things like picking a "pixel format", which determines how many bits of memory in each display pixel, and what OpenGL features you need.
On Linux, your program starts with the entry point main, and connects to the X Window System (Xlib), a monstrous package with far too many functions. You create a window and do OpenGL initialization, then read events from the window system. You can do whatever you want with events, since you are in control of the main loop of the program. You would process them until the program ends.
Setting up OpenGL is similar to Windows, except you don't have the false start of creating an OpenGL 1.1 context just to find out what is really supported. But as with Windows, you request a pixel format, and if it's not available, fall back to some other format you can support (or give up.)
On the Mac, you use the Xcode program development environment to create a UI for your program. That defines the initial size and properties of the window. You place an OpenGL "view" into the window, telling the interface builder to initialize OpenGL. The Xcode tool builds some boilerplate code for you, which you are expected to customize.
Xcode wants you to write your applications in Objective-C. We are working in C++, which the Xcode compiler handles perfectly well. To bridge between the two, there is a wrapper in Objective-C that calls into my C++ framework, which then calls into your application.
Again we will be setting up OpenGL, and there will be some mess to handle. In concept, the code is similar to what I've done on Windows and Linux. In detail, it's all different.
After our application starts up and the window is created, OSX will call into the Objective-C program with events. Those get translated by the framework and sent to the application. We are not in control of the main loop of the program. We don't even control when the screen gets refreshed, since the OSX UI framework wants to handle that.
I've used the Cocoa framework under OSX, since that's the way it's done in the OpenGL SuperBible I used as a reference. There's another layer, CGL, that might be simpler. I haven't tried it.
I've heard it said that "systems design is really interface design." After all, you could change some algorithm deep in the code without breaking things. This is like replacing the seats on your car with better seats. It doesn't affect the engine. Changing an interface is like moving the steering wheel to the other side of the car. It changes a lot of things.
To handle the variations I've just described, we have to create an interface between the system and the application. This interface is implemented differently on each operating system, but looks the same to the application. This is like two different cars with completely different chassis, but with the same controls, from steering wheel down to the buttons on the radio.
In my source code, you'll find an abstract class mgApplication. The application implements this interface and the framework calls into it. It's very basic. The application initializes itself, it takes input events, it shuts down. When there are no input events, the framework calls appIdle, where the application can do graphics.
One important task is setting up the OpenGL graphics context. This has to be done before the application can do anything on the screen. There are many parameters that the application can set.
The application implements the appRequestDisplay method. This method sets all the graphics parameters the application cares about. After the method returns, the framework tries to initialize the display as requested. Parameters are then changed to whatever values the framework actually found. The application can check these and adapt as necessary. Or just complain to the user and give up!
I originally wanted mgPlatform to just initialize OpenGL and not offer any other services. I wanted it to be as simple as possible. When I split this code out of my larger framework though, it was a nuisance to use. To put anything on the screen, you need to compile a shader, set it all up correctly with a particular vertex format, then build vertex buffers to describe triangles on the screen. It's a significant amount of code just to draw a triangle under the latest versions of OpenGL!
My barebones platform didn't even allow you to draw a cursor without doing all this work. To write a simple demo application, I had to pull in all these utilities. I considered adding all of this to the mgPlatform library, but that would have defeated the purpose of splitting it away from my 3D library. I wanted something that an OpenGL programmer could use without learning a new library.
I finally settled on just putting in a shader compiling method, and one for drawing a texture mapped 1-1 with the screen. With these methods, you can easily draw a cursor and the UI. That's enough to get started with, and not something most OpenGL programmers are going to want to fuss over. If the DirectX support ever makes an appearance again, I can easily implement both of those functions.
One of the new demos in the source implements a simple OpenGL application. I took the code for rendering a planet (see Part 42) out of SeaOfMemes and added a trivial GUI to it. It looks like this:
Next, in order to implement any GUI, we need 2D graphics -- text, lines, fancy button images, etc. I've put all this in a library which defines an abstract mgSurface interface.
This is just a stand-in for graphics at this point, and has no function except for text and filled rectangles (which is why the buttons in the image above look so ugly.)
In the current implementation of mgSurface, I'm using FreeType for text, and will implement my own 2D graphics code for images and line art. This is all very slow, but it can be improved later. Architecturally, there's nothing to stop me from implementing a version of mgSurface with shader support, or implementing native Windows, Linux or Mac versions, to remove the dependence on FreeType.
The important thing about this interface is that the entire GUI is written on top of this abstract graphics layer. That means that the GUI could actually be used on any system that implements a version of mgSurface. It can be cleanly separated from the rest of my libraries.
For example, I took the GUI library and used it in an ordinary Windows application, without the mgPlatform library. The GuiTestWin demo implements a simple map based on Simplex Noise. You can scroll around in all directions and new map data is generated on the fly. The GUI is the exact same code as in the OpenGL demo. This screenshot shows the "console" open on the demo. The demo writes a line whenever it receives an event from the GUI.
To be completely reusable, the GUI has to separate both input and output from the underlying OS. The output side we've seen -- the GUI and all its controls are implemented on top of mgSurface.
On the input side, we have a tree of controls. There is an mgTopControl, which is created on the surface. The top control handles all the input. To use the GUI, the application passes events from the window system (things like key presses and mouse movement) to the top control. mgTopControl passes events to the individual push buttons, check boxes, etc., and they redraw themselves on the surface.
Under the top control, you build more controls like push buttons, check boxes, etc. The current library of controls is minimal, since I'm still working on it. The demo I sent to Shamus has only push buttons, check boxes, input fields, labels and a console. It also has a formatted text control that is used to implement the help.
I have a decent collection of controls from my previous efforts on this project. I'll be adding more as I convert them to the new interface and build a nicer look and feel. In addition to being limited, the current set of controls is extremely ugly!
To arrange your controls, there's an mgTableLayout class. This is very much like using an HTML table. You have rows and columns, and cells can span multiple rows or multiple columns. Each cell can have a control in it, or another table.
Finally, you need to know when things happen in the GUI -- when the button is pressed, for example. To know this, the application attaches "listeners" that are called on input events. This is very similar to what the Java Swing library implements.
I'm currently using a very C++ oriented interface. With a bit of work, I could implement classes that call C routines or take C callbacks. From there, the GUI could bind to any language.
I'll be writing more on the GUI in a part or two. I still have a lot of work to do.
After pulling out the platform code, the 2D code and my GUI, what's left is a set of 3D graphics classes. These are intended to cover all the variations between OpenGL versions, and cover DirectX as well. Currently, only two versions are supported -- OpenGL 2.1 and OpenGL 3.3. I don't even support all of OpenGL, just the pieces I've used.
I expect this library to keep growing as I need more out of OpenGL, and for the interface to change again. Other people could use what I've done here since any OpenGL program will need functions like these. Coding on the raw OpenGL interface with no utilities is a bit of a pain.
I've added a third demo called GuiTestAll that shows everything together. The application is built on a MovementApp class that I use as the base for all my demos. It is based on my TestCube demo, and here it is showing help using the formatted text control:
My framework was originally called mgFramework and included everything. Various parts have been split away as I worked on the project, and in this recent restructuring. I now have the following libraries:
To support all these libraries, I need two other open source libraries:
Download the source code at LetsCode_part55_Framework.zip. There's a "readme" file that will tell you about using the GUI and building the demos.
In addition to the source, the three demos GuiTestWin, GuiTestGL and GuiTestAll are prebuilt for Windows in each directory. None of this runs under Linux or Mac yet. I still have to get versions of mgPlatform working on those systems.
I'm not sure Shamus is willing to dive into some unfinished, undocumented code, but I hope he'll give it a try. If not, I'll keep working on it (though probably not as urgently!)