Another scattered week, since I'm still distracted by family issues (Hi, Mom!). Also, I took a whack at Skyrim for a couple of days, until I realized I was circling the Black Pit of Game Addiction (Hi, WoW!) and stopped playing. I'll probably get back to it sometime, to look at the graphics if nothing else.
Since there's no completed topic or demo to show you, here are some bits and pieces. If you get bored, look at the video below.
I started by cleaning up some framework and chat client issues. One commenter had said the client crashed for him when another user left the world. I had of course tried this on my own machine without any problems. I looked at the code, which was trivial, and didn't see any problems. Finally, after a dozen tests, it did crash inside OpenGL, and I immediately knew where the bug was.
Back in Part 15, I had tried to use OpenGL from multiple threads, and failed. It's not just that you have to put a lock over your use of OpenGL. The application can't even reference buffers or make GL calls reliably from outside the main thread.
I had "solved" this problem in my framework by loading index and vertex data into ordinary allocated memory, and then only creating the OpenGL buffer objects when you went to draw. Since all drawing is done in the main thread, this avoids any multithreading issues. I just have to be careful to avoid any other framework functions that make OpenGL calls when I'm in other threads. Easy to say, but also easy to forget.
The problem this time was that there's a background thread which reads the connection to the server. When a user leaves the world, the server informs the remaining users. The read thread parses this message and then calls the method to remove the user. That also removes the avatar instance (the green guy), which deletes the index and vertex buffers for the avatar. Which makes an OpenGL call, which only sometimes crashes. Sigh.
The fix here was to mark users deleted and then clean them up in the main thread. The more basic problem is how to avoid these situations in the first place.
I should probably have some queue of objects to delete inside the framework, rather than doing this in the application code. I should also do something about textures, to make sure those objects are only created on demand, not when you load the texture file. As the app gets more complicated, with multiple threads and objects that come and go, I'm either going to have to put in more infrastructure or be a lot more careful.
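A deferred-delete queue like the one described above could look something like this. This is a minimal sketch with hypothetical names (`DeleteQueue`, `enqueue`, `drain`), not the actual framework code: background threads enqueue a deleter, and the main thread drains the queue once per frame, so the GL calls always happen on the GL thread.

```cpp
// Sketch of a deferred-delete queue (hypothetical names, not the author's
// framework code). Worker threads enqueue deleters; the main thread drains
// the queue once per frame, so GL buffer deletion runs on the GL thread.
#include <cassert>
#include <functional>
#include <mutex>
#include <vector>

class DeleteQueue
{
public:
    // callable from any thread
    void enqueue(std::function<void()> deleter)
    {
        std::lock_guard<std::mutex> lock(m_lock);
        m_pending.push_back(std::move(deleter));
    }

    // called once per frame from the main (GL) thread
    void drain()
    {
        std::vector<std::function<void()>> work;
        {
            std::lock_guard<std::mutex> lock(m_lock);
            work.swap(m_pending);  // grab the list, release the lock fast
        }
        for (auto& fn : work)
            fn();  // e.g. glDeleteBuffers happens here, on the GL thread
    }

private:
    std::mutex m_lock;
    std::vector<std::function<void()>> m_pending;
};
```

Swapping the pending list out under the lock keeps the critical section short, so the read thread never blocks on the actual deletions.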
In the meantime, I've put a "check thread" call into the debug version of the framework. This compares the current thread to the OpenGL creation thread, inside each method which uses OpenGL. That will catch the bugs until I improve the framework to avoid causing them.
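The guard amounts to recording the thread id when the GL context is created and comparing against it later. A minimal sketch, with hypothetical names (`recordGLThread`, `onGLThread`, `CHECK_GL_THREAD`):

```cpp
// Sketch of a debug-only "check thread" guard (hypothetical names). The
// thread that creates the OpenGL context records its id; every framework
// method that issues GL calls asserts it is still on that thread.
#include <cassert>
#include <thread>

static std::thread::id g_glThreadId;

// call once from the thread that created the OpenGL context
void recordGLThread()
{
    g_glThreadId = std::this_thread::get_id();
}

// true if the caller is on the GL creation thread
bool onGLThread()
{
    return std::this_thread::get_id() == g_glThreadId;
}

// compiled out in release builds along with assert
#define CHECK_GL_THREAD() assert(onGLThread())
```

Called from the read thread, the assert fires immediately instead of leaving an intermittent crash inside the driver.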
Next I spent some time restructuring the landscape code described in Part 28. Until I come up with a data structure that supports caves and user modification of the landscape, I'm using this to draw the surface of the planet, moon, asteroids and ring.
Unlike Part 28, those worlds are all curved. I need to split the quad-tree style traversal of the landscape away from the actual generation of landscape points. I also need to decide how I'm going to lay out the terrain quads over a spherical world. More on that below.
Once I have that working, I have to combine this "near" version of the landscape with the medium and far versions. "Far" is what you'd see from space. There would be no terrain geometry at all, just a flat texture. "Medium" is what you'd see from high altitude. You'd have a very curved horizon and distant (many kilometers) scenery. I expect to draw all that in two or three passes, and I'm not sure how to handle the transitions between the different scales.
Since I needed a "medium" view to really debug the landscape, I switched to working on that. But before I could really stand on the landscape, I needed to be able to fly through the solar system and land on objects.
Moving through Space
Moving through the system is both an implementation problem and a UI problem. The system has high resolution (fractions of a meter for movement on the ground) and large scale (millions of meters across). I'm still struggling with rendering it correctly, but I had not previously worried much about moving around.
The player needs high speed to cross the system quickly. If the moon is more than a few minutes away, people will not visit. At the other extreme, while on a world, the player needs to move at walking speed. If we had a small number of destinations, I could build a teleport system like Second Life or a bus system like the World of Warcraft ships, zeppelins and griffins. In this game, there are thousands of asteroids and you might want to visit any one of them.
We also have the problem of orientation. Standing on a world (or the ring), vertical is away from the surface. Those surfaces move and rotate, so in world coordinates, your vertical is constantly changing. When you launch into space and visit another object, your vertical will be wrong.
I could implement a physics model, and then add UI to let you control your vertical angle in all three rotations. This would all be complicated to implement, and complicated to use. You'd have to accelerate out to the destination, decelerate again starting at the halfway point, and then change your vertical angles to match the target object. Matching up with a rotating asteroid would be a challenge.
Instead, I want players to just fly up to an object and then hit "L" to land. The system would match your speed and angle to the target, and you'd find yourself hovering over the surface of the moon or ring.
I also needed to implement player positions in local coordinates for each object. I discovered this the hard way. I had a bug I just couldn't figure out which caused the avatar to drift slightly while standing on the moon. I couldn't find anything wrong with the math. It was one of those frustrating debugging sessions where you back up again and again, trying to find the assumption that is wrong.
I finally got to the point where I was yelling "that's PI/2, you stupid machine! How could you get that wrong!" And it turned out that in my include file, I had casually defined "PI = 3.14159". Over the scale of the solar system, that wasn't enough digits in PI. The round-off error was causing the bug. I changed the code to define PI as "2.0*asin(1.0)" so that PI would match whatever the library was using. I also switched the code to use local coordinates for each object.
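To see the scale of the round-off: the truncated constant is off by about 2.7e-6 radians, which at a radius of 100,000 km works out to over a hundred meters of drift in a quarter turn. A sketch of the arithmetic (the radius is just an illustrative value, not taken from the game):

```cpp
// Illustration of the PI precision bug (illustrative values, not game code).
#include <cassert>
#include <cmath>

// the casually truncated constant from the include file
const double PI_SHORT = 3.14159;
// full-precision pi derived from the library, as the post now does
const double PI_FULL = 2.0 * asin(1.0);

// approximate position error, in meters, after rotating a point at
// radius r (meters) by a quarter turn using the given value of pi
double quarterTurnError(double pi, double r)
{
    double exact = 2.0 * asin(1.0);
    // angle error in the quarter turn is (pi - exact)/2;
    // arc-length error is roughly angle error times radius
    return fabs(pi - exact) * 0.5 * r;
}
```

At r = 1.0e8 meters, the truncated constant gives an error over 100 meters, easily enough to make an avatar visibly drift; the library-derived value gives essentially zero.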
In this video, you'll see me approach the moon, land on the moon, and move around a bit in the moon coordinate system. The ring is an interesting feature in the sky. Then I launch back into space, fly to the ring and land there. I move in the ring coordinate system for a bit. The moon is a terrifyingly large feature of the ring sky! Finally, I launch from the ring and let the planet drift away.
There is still some odd bug in this, so the landing transitions aren't right. It should start with the eye at the current orientation, and gradually rotate you into local vertical. Sometimes it works, sometimes it doesn't.
If you look at the top-right corner, you'll see your speed, over one million kph, and the landing status. Out in space, it's blank. On approach to an object, it reads "Landing range." Hit the "L" key and you assume the coordinate space of the nearest object, and the indicator goes to "Landed." After playing with this a bit, I think I'll revise it. You don't really need to see a "Landed" indicator the whole time you are on an object.
The speed of all objects has been turned up so I could debug. The moon orbits the planet in 10 minutes, so it's really moving. The ring spins and the moon turns ridiculously fast as well. In the real game, I'll pick more realistic times.
Drawing the World
With movement sort of working, I decided to try rendering the "medium" distance version of the planet and moon, so that the skies would be more realistic. I am going to implement some kind of atmospheric scattering so that I have blue skies and sunsets, and it's all correct as you gain altitude or see it from space.
I have three sources of information on this. There's a 2008 paper by Eric Bruneton and Fabrice Neyret here (PDF), with source code (zip), a GPU Gems item that Florian Bösch recommended here, and Florian's own source code for scattering (part of his ambient occlusion demo) here. With all of these to work with, I'm sure I'll get it running eventually. But before I start on that, I wanted to change the way I draw the world.
These scattering algorithms are all implemented in the shaders. The shader code will be casting rays through the atmosphere and calculating sky colors from that. The code will intersect rays with the surface of the planet or measure the distance through the atmosphere. I am not sure how well that mixes with drawing the planet as a set of triangles. Fortunately, I can draw the world directly in the shader.
When I started doing this project, I just implemented all my spheres as grids in latitude/longitude. That seemed the obvious way to do it, and you can use Mercator projections as your texture (See Figure 2.)
Vertexes bunch up at the poles, though, and produce ugly texturing. At some point in this project, a commenter told me there were better ways of drawing a sphere. I Googled around and came up with doing it as a sky box. In Figure 3, you can see how it's done. Take a box, with a grid on each face, and divide the (x,y,z) of each grid point by its distance from the center. That collapses the box into a sphere. The cells are a bit strangely shaped, but they are all more or less square and don't bunch up anywhere on the sphere. You can traverse the sphere by stepping from cell to cell without much trouble.
If you just have textures or texture arrays, you can use the grid i,j as the texture indexes. If the display supports texture cubes (skyboxes), you can use the point on the sphere as the texture index.
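The cube-to-sphere mapping above is simple enough to sketch in a few lines. The function name and the single-face signature are hypothetical; a real version would handle all six faces:

```cpp
// Sketch of the "collapsed sky box" sphere mapping (hypothetical names).
// A grid point on a cube face is divided by its distance from the center,
// which projects it onto the unit sphere.
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// map grid cell (i, j) of an n x n grid on the z = 1 cube face
// onto the unit sphere
Vec3 cubeFacePointToSphere(int i, int j, int n)
{
    // cube point in [-1, 1] x [-1, 1] on the z = 1 face
    double x = 2.0 * i / n - 1.0;
    double y = 2.0 * j / n - 1.0;
    double z = 1.0;
    // divide by distance from the center to land on the sphere
    double len = sqrt(x*x + y*y + z*z);
    return { x / len, y / len, z / len };
}
```

The resulting point is unit length, so with a texture cube it can be used directly as the skybox texture coordinate.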
As mentioned above, I need to divide a sphere into some kind of grid so that my landscape algorithm has cells to subdivide as you move. I think I'll probably use something like this, although I'm not completely sure yet.
To draw the world in the shader, we're doing something completely different. Figure 4 shows the geometry of the situation. We first need to build a rectangle that brackets the visible portion of the sphere. We construct a coordinate system where the sphere is at (0,0,0), the eye is at (x,0,0), and the y and z axes are orthogonal to the eye vector. Looking at this in two dimensions, the red line between the two tangent points to the sphere is the edge of the rectangle we need to build. "h" is the height of that edge, and "m" is the distance towards the eye.
Fortunately, Google knows all, even high school geometry, so I can code this up. It's been many years since I learned this the first time! We know the distance from the eye to the center -- that's the hypotenuse ("e"). We know one side of the triangle -- that's the radius ("r"). It's a right triangle because the tangent meets at a right angle. So the remaining side "d = sqrt(e*e - r*r)".
From that, "h = (r*d)/e" and "m = (r*r)/e". Now we can build the rectangle. For a more efficient shader (fewer pixels considered), we could draw a disk with radius "h". The area of the rectangle we are drawing is 4*h*h, and a circle would have area 3.14*h*h.
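The triangle relations above reduce to a few lines of code. A minimal sketch (hypothetical function name; e and r are the eye distance and sphere radius from the text):

```cpp
// Sketch of the bounding-rectangle math from the text (hypothetical name).
// e = distance from eye to sphere center, r = sphere radius, e > r.
// Outputs h = half-height of the bracketing rectangle (it spans 2h by 2h,
// area 4*h*h) and m = offset of its plane from the center toward the eye.
#include <cassert>
#include <cmath>

void boundingRect(double e, double r, double& h, double& m)
{
    double d = sqrt(e*e - r*r);  // eye to tangent point (right triangle)
    h = (r * d) / e;             // similar triangles: h / r = d / e
    m = (r * r) / e;             // similar triangles: m / r = r / e
}
```

As a sanity check: when the eye is very far away, h approaches r and m approaches 0, which matches the intuition that from far off the rectangle just brackets the whole sphere through its center.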
Now that we have the plane, the shader can do its thing. For each pixel, it finds the intersection between a line from the pixel to the eye and the surface of the sphere. If there is no intersection point, the pixel is discarded. Otherwise, we take the intersection point and use that as the skybox texture coordinate. I've scaled the system so that the radius = "1.0", and there's no need to normalize the intersection point. Figure 5 shows the result.
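The per-pixel test is a standard ray-sphere intersection. Here is the same computation sketched in C++ rather than shader code, for a unit sphere at the origin (names are hypothetical; the real version lives in the fragment shader):

```cpp
// Sketch of the per-pixel ray-sphere test, in C++ for illustration
// (the real version runs in the fragment shader). Unit sphere at origin.
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// origin = eye position, dir = normalized ray direction through the pixel.
// Returns true and the near intersection point if the ray hits the sphere.
bool hitUnitSphere(Vec3 origin, Vec3 dir, Vec3& hit)
{
    // |origin + t*dir|^2 = 1  =>  t^2 + 2*b*t + c = 0
    double b = dot(origin, dir);
    double c = dot(origin, origin) - 1.0;
    double disc = b*b - c;
    if (disc < 0.0)
        return false;  // miss: the shader discards this pixel
    double t = -b - sqrt(disc);  // nearer of the two roots
    hit = { origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z };
    // hit is already unit length, so it can be used directly
    // as the skybox (cube map) texture coordinate
    return true;
}
```

Since the radius is scaled to 1.0, the intersection point needs no normalization before use as the texture coordinate, just as the text says.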
Just Getting Started
There are some issues with this right from the start. Surprisingly to me, performance isn't one of them. The display doesn't even notice doing all these calculations per pixel. Another good point is that the sphere is perfectly smooth. When I draw this with triangles, even using six 32 by 32 grids (6K points for a sphere!), I can still see some edges at the right viewing angle. With each pixel calculated correctly in the shader, there are no edges.
One problem is that as far as the Z buffer is concerned, this is a plane, not a sphere. So if we had intersecting objects and wanted to use the Z buffer to order them, it wouldn't work. Fortunately, we're drawing a planet here and it won't have other shapes intersecting it.
A second problem is the edges of the sphere. When drawing the sphere as a set of triangles, the display merges the edge with the background, as shown in Figure 6. When I do the sphere all in the shader, pixels are either drawn or not, giving a harder edge (Figure 7). In these samples, it's hard to see a difference, but in the demo, the world has edges that sparkle as you move. I'm not sure what to do about that.
Next up is adding the atmospheric scattering computations to this shader. Hopefully, I'll cover that next week.