McrView (and also the current Crafty) are based around a flat landscape with bricks 128 deep. When reading the old Minecraft data, I translate four of their 16 by 16 by 128 chunks into a stack of four of my 32 by 32 by 32 chunks. I haven't gotten around to reading the newest Minecraft save files with 256-deep landscapes, but that would just increase the stack of chunks from four to eight in my code.
In the new Crafty, I don't want any height restrictions at all. I'm using the same algorithm I used in Part 28, except in three dimensions. At the top level, there is a 3 by 3 by 3 array of terrain cells. As the eye moves through the world, new cells are added in x, y, or z to keep the eye centered in the array. Each cell in this top level array is a terrain OctTree, where each node is a 2 by 2 by 2 array of child nodes.
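To make the two layers concrete, here is a minimal sketch of how they might fit together. All the names here are mine, not from the actual code: octree nodes with a 2 by 2 by 2 array of children, a 3 by 3 by 3 top-level grid, and a shift that recenters the grid when the eye crosses into the next cell in x (y and z get the same treatment).

```cpp
#include <array>
#include <cassert>
#include <memory>
#include <utility>

// Hypothetical sketch of the terrain structure described above.
struct TerrainOctNode {
  std::array<std::unique_ptr<TerrainOctNode>, 8> children;  // 2x2x2 kids
  bool isLeaf = true;  // leaves hold the 32x32x32 brick data (elided)
};

struct TerrainGrid {
  std::unique_ptr<TerrainOctNode> cells[3][3][3];  // indexed [z][y][x]
  int originX = 0;  // world cell coordinate of the x=0 plane
};

// Eye crossed into the next cell in +x: drop the x=0 plane, slide the
// other two planes over, and start a fresh cell at x=2.
void shiftPlusX(TerrainGrid& g) {
  for (int z = 0; z < 3; z++)
    for (int y = 0; y < 3; y++) {
      g.cells[z][y][0] = std::move(g.cells[z][y][1]);
      g.cells[z][y][1] = std::move(g.cells[z][y][2]);
      g.cells[z][y][2] = std::make_unique<TerrainOctNode>();
    }
  g.originX++;
}
```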
Taking 1 unit as a meter, the top cells are 64km across, and the leaf cells are the 32 meter cells we've been using. As the eye moves around in the grid, two methods, tooCoarse and tooFine, are used to subdivide nearby cells and combine distant ones. Coordinates are integers, so the total supported volume is 4 million km on an edge. That should be enough!
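Note that a "64km" top cell is really 65,536 meters, which is 32m times 2^11, so there are eleven octree levels between a top cell and its leaves. The two tests might look something like this -- the threshold ratio is a guess of mine, not the value from the actual code:

```cpp
#include <algorithm>
#include <cassert>

const double LEAF_SIZE = 32.0;    // meters, the 32 m leaf cells
const double TOP_SIZE = 65536.0;  // 32 m * 2^11, the "64 km" top cells
const double DETAIL = 0.5;        // hypothetical size/distance threshold

// Subdivide a cell that looms too large at its distance from the eye.
bool tooCoarse(double cellSize, double distToEye) {
  if (cellSize <= LEAF_SIZE) return false;  // can't split a leaf further
  return cellSize / std::max(distToEye, 1.0) > DETAIL;
}

// Combine children whose parent would still look small enough on screen.
bool tooFine(double cellSize, double distToEye) {
  if (cellSize >= TOP_SIZE) return false;   // already a top-level cell
  return (2.0 * cellSize) / std::max(distToEye, 1.0) < DETAIL;
}
```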
I've coded this to take either polygon data or cube data as the contents of a leaf cell. With this I can try variations -- cubes up close and polygons in the distance, or small cubes up close and larger cubes in the distance.
The advantage of cubes in the distance is that the code could summarize the detailed landscape by just sampling it at lower resolution. I'd really like the player to be able to see distant cities, or altered landscape. On the other hand, a reduced resolution view looks pretty nasty, so I'm not sure it's going to be acceptable.
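A sketch of that summarizing step, under my own assumptions about how it would work: collapse each 2 by 2 by 2 group of brick types into one coarser brick by majority vote among the solid bricks, with an all-air group staying air. Applied recursively, this samples the detailed landscape at ever lower resolution for the distant cells.

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Hypothetical low-resolution sampler; type 0 is air.
int summarizeBricks(const std::array<int, 8>& group) {
  int bestType = 0, bestCount = 0;
  for (int t : group) {
    if (t == 0) continue;  // air doesn't vote
    int count = (int) std::count(group.begin(), group.end(), t);
    if (count > bestCount) { bestCount = count; bestType = t; }
  }
  return bestType;  // an all-air group stays air
}
```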
The simplest polygon view of the distance is just to create a heightmap of scenery. I have all that code from Part 28, so putting it in is no problem. The big issue there is that you won't see any signs of distant construction.
One additional possibility is creating a texture for the polygon scenery based on the detailed cube data. The heightmap could also be altered by using the cube data as a source. Then if an area has been heavily modified, the distant view would show the modified terrain and the colors of constructed buildings. The heightmap can't represent caves though, so some alterations of scenery wouldn't be shown. To do that, I'd have to return some kind of polygonalization of the cube data, as with marching cubes or other contouring.
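Deriving the heightmap from the cube data is simple enough to sketch (again, my own hypothetical version, not the actual code): scan each column of bricks down from the top for the first solid one. The sketch also shows the limitation above -- an air pocket below the surface doesn't affect the result at all.

```cpp
#include <cassert>

// Reduce a column of brick types (index 0 at the bottom) to one
// heightmap entry. Caves below the surface are invisible here.
int columnHeight(const int* column, int depth) {
  for (int z = depth - 1; z >= 0; z--)
    if (column[z] != 0)
      return z + 1;  // top of the highest solid brick
  return 0;          // completely empty column
}
```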
Back in Part 31, I considered three ways of drawing a selection highlight on bricks. Doing it all in the shaders would have worked, except that the non-cubical shapes don't currently draw the faces where I wanted the highlight, and they would all have to be modified. Doing the highlight with decals didn't handle transparency correctly -- the highlight has to be sorted into the transparent data. So I ended up breaking the output buffer into two pieces: I would draw everything up to the selected cube, then draw the selection highlight, then draw the rest.
This would work fine for a single selected brick, but not for multiple bricks (I would have to break the buffer for each selected brick.) I knew that when I implemented export from McrView, I would want to select volumes of bricks. When I was thinking about 3D printing, I thought I might want to get even more specific than that, and say select the roof of a building to print separately from the base. So the Part 31 technique didn't seem like a good idea.
Even if I stuck with a simple boxed area to export, drawing a boundary is a nuisance. If I wanted a translucent sparkly selection box, I would have to sort it into the existing scene, or else transparent bricks before or after the selection edge would be rendered incorrectly.
So I thought about going back to shader-implemented selection. That would let me handle arbitrary collections of bricks. But that leads to another problem -- how do I tell the shader which bricks are selected? I could easily put a flag in the vertexes, but the whole point here is to select bricks in an existing chunk of scenery, not regenerate it whenever the selected brick changes (as when the eye tracks over the bricks.)
To get the selected brick list to the shaders, it would have to go in a shader uniform. But each vertex drawn would have to traverse the list and find out if the current face was selected. That would be slow. And the small size of uniform data would limit how many selected blocks I could have.
I can put data in a texture and give it to the shaders that way. So the 32 by 32 by 32 chunks of bricks could also reference a 32 by 32 by 32 texture array, with a flag at each pixel of the texture. This would work, but it means regenerating that selection texture whenever the selection changes (every time the eye moves...) So I didn't like that either.
Finally, it occurred to me that if the selection volume were opaque instead of translucent, everything would work. I could draw a giant box with just beams and columns to define the selected area. Opaque data can be added after the opaque bricks and the Z-buffer will sort things out. Then draw the transparent bricks in sorted order as usual.
If I draw my selection highlighting that way, I can highlight as many faces as I like, not break the cube buffers or do anything fancy, and everything works. The only restriction is that I can't use translucent selection textures. In fact, I have to use the crudest "nearest-neighbor" filtering on the selection texture, since otherwise the hardware will average adjacent alpha values and give me a translucent texture.
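A one-dimensional illustration of the filtering problem (plain CPU code, not the actual GL setup): with linear filtering, a sample taken between a selected texel (alpha 1) and an unselected one (alpha 0) blends the two alphas and produces a translucent value; nearest-neighbor filtering always snaps to one texel or the other, keeping the mask binary.

```cpp
#include <cassert>

// Two adjacent texels of a binary selection mask, with alphas a0 and a1,
// sampled at fractional position t between them (0 <= t <= 1). This
// mimics what GL_LINEAR and GL_NEAREST filtering do in one dimension.
double sampleLinear(double a0, double a1, double t) {
  return a0 + (a1 - a0) * t;  // blends the two alphas
}

double sampleNearest(double a0, double a1, double t) {
  return t < 0.5 ? a0 : a1;   // snaps to the closer texel
}
```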
So the rendering quality on the selection highlight is a bit noisy, but otherwise it all works fine. Figure 1 shows some selected blocks (green X pattern), with the current block under the cursor having its own selection highlight (the black rectangles.)