I don't have much to show you this time, but since it's been two weeks since the last post, I thought I should write something.
It's been another frustrating stretch of work on my framework. The problem isn't settling on an interface I'm happy with, or getting it all cleaned up. As usual, the problem is the variations between platforms.
I gave up on supporting OpenGL 2.1 back before I got fancy with the shaders. In Part 16, I discovered I could reduce my cube vertexes, which took 9 floats (36 bytes) to specify, down to two integers (8 bytes), and use bit-shifting operations to extract integer coordinates for the vertex elements. That let me keep four times as much landscape in display memory and made things more responsive.
Unfortunately, OpenGL 2.1 has two problems with this approach. First, it doesn't let you use integers as part of a vertex description; all you can use are floats. And a single 32-bit float isn't large enough to hold a 32-bit integer (the mantissa doesn't have enough precision). So I recoded the shaders to take a pair of floats for each integer (4 floats total), and made the framework code map each integer vertex element to two floats.
In the shader, I could then convert the two floats back to 16-bit integers and combine them into a single 32-bit integer, so my old code could extract all the bit fields it needed. Or I could just extract the fields from each 16-bit integer directly, which is what I ended up doing.
I coded this up using my GL 3.3 shaders so I wouldn't have to mess with differences in syntax at the same time I was changing the interface. To my surprise, this ran just fine. In fact, it was a tiny bit faster than sending integer vertexes, even under GL 3.3, which supports them.
I think there must be a bit of overhead on each vertex element. Sending two separate integers pays that overhead twice, whereas sending a "vec2" (a two-element floating-point vector) pays it only once. Other than that, I can't think why this roundabout method of passing data would be better than what the compiler does internally. It's bizarre.
With that problem out of the way, I expected to have the new code working under GL 2.1 in no time. But then I ran into the second problem... When I switched between 3.3 and 2.1 shaders before, it was just a change of keywords, using "attribute" instead of "in" and "varying" instead of "out." This time, after I changed the syntax, the compiler told me that the bit shift operations were not supported. Sigh.
I could have rewritten the bit extraction with arithmetic, using division by powers of two instead of right-shifts, modulo instead of "and", etc., but I knew that would be too slow (I had tried this before, back when Florian warned me that bit operations are very slow in shaders.) So it's just not possible to write a compressed-vertex version of the shader for my cubes under GL 2.1. That stinks.
It's not just a performance and memory issue. It means I can't write a platform-independent version of the cube rendering code. The code has to know which vertex format (9 floats or 2 ints) it is using, and build those vertexes appropriately. So all the code splits into float and int versions, and the framework needs an interface to tell the code which one is available. I struggled with this all last week, but couldn't find a solution.
The Mac is Back
I did finally get the OpenGL 2.1 version working for all my code, which means it all runs on the Mac again. I'm now running on a real Mac, by the way. My old Hackintosh bit the dust, and rather than set up another one and hope it really ran graphics like a real Mac, I just bought a refurbished 13" MacBook Pro from Apple.
I was also eagerly awaiting the release of Lion, the new version of OSX. I had read that it would have OpenGL 3.2 support, which would mean I didn't have to mess with OpenGL 2.1 at all.
And it turns out you can compile GL 3.2 code. The problem is that when I request a 3.2 rendering context, the system tells me there's no such thing. All it will give me is a software emulation, which is 100 times slower than the real thing. I know the 13" MacBook Pro doesn't have the best graphics, but this is a recent machine. If it doesn't support real OpenGL 3.2, I can't rely on it. In fact, looking at the supported hardware list from Apple, it doesn't look like the Air or the Mini are supported either, since they all use the same Intel HD 3000 graphics chip.
By this time, I was tempted to just drop the 2.1 support completely, and the Mac along with it. But there's one more thing I mean to try. When I bought the MacBook Pro, I also bought an iPad 2.
From the documentation, it looks like OpenGL ES, the "embedded systems" version, is similar to GL 2.1. In particular, it has the same limitations -- no integers in the vertexes, no bit operations, and no texture arrays. I'm hoping that with some syntax changes, I can get all this code to run on the iPad one of these days. Then we'll see if it has the performance to render these large scenes of cubes. It's a neat little device, and I would get a kick out of seeing my game code running on a handheld device.
So that's the status for now. I hope to have something more interesting for you soon.
All I'm releasing with this part is the new framework. There are two demos in there as well -- a simple rotating cube, and the tree-branching demo from the last part. Someone asked in the comments for me to release that code, so here you go. This is all it does:
Unless you are interested in playing with the tree code, there's not much here for you.
For Windows, download Part 27 Demos - Windows.
For Linux, download Part 27 Demos - Linux.
For Mac, download Part 27 Demos - OSX.
The Source Code