Since I wrote the last part, talking about future directions, I've been reading the comments and thinking hard about the subject. On the one hand, I want an interesting world that invites people to join in and do things. On the other hand, I'm afraid of how much graphics programming is required to get that world implemented. I find graphics to be much harder to write and debug than other kinds of programming. There are also huge areas that I know nothing about.
However, I also don't really like just hand-waving and saying "the world is extensible by users -- it can look however you want it to look. You can write programs there. It will be great." That doesn't convince anyone, and won't get me testers or suggestions. It's just a big blank slate. So I need to build something that people can react to.
I've decided that the way to do both -- build something to show people, but not spend a lot of time on it -- is to do a video. For a video, I don't have to worry about performance. If it takes hours to generate a two-minute video, that's not a problem. I also don't have to worry so much about bugs. If there are boundary conditions where the graphics fall apart, I can just avoid showing them in the video. And I can just throw together code for the world to get the right look, without worrying about code quality or generality. The video code will not be in the actual system.
If the video is good, it can attract people in a way that even a demo won't. Not many people want to download a demo, or it may not run on their OS or graphics card. It takes time to download a demo and really explore it. But anyone can play a video and make suggestions. As I hear about good ideas, I can update the software and regenerate it, to produce better versions. And when I get a look people like, I can use it as a guide to actually implementing the system.
But before I can make a video, I have to make some decisions about the shape of the world.
Infinite ... and boring?
I haven't actually bought many games over the last few years. From what I see in game trailers, though, it still seems like you get spaceships and aliens, combat in ruined cities, or elves in the woods. I have never agreed with Roger Ebert that video games can't be art, but I also haven't been too inspired by what I've played. Graphics are getting good enough that we can render almost anything we can imagine. Can't we imagine something more interesting than 1950s Fantasy and SF?
I think the problem is that game development is so expensive, and the art so difficult to create, that a lone artist just doesn't have a chance. Even a small game company will be tempted to go the safe route and use familiar situations. Why spend all the money that game development requires and lose it all because a mass audience doesn't like to be challenged? If we want more interesting games, we have to lower the costs of building them.
I don't know if I can do anything about that. I like the openness of Minecraft, and the way people can play with creating things there. It has very low barriers to creativity. If I can keep that aspect in my system, perhaps artists will come along and do really new things with it. But I still have to plant a seed and start the thing somewhere. I can't just offer an empty page and say "now, create!" So the overall style of the world needs to be inviting.
Minecraft has a very large (practically infinite) world, but it's all the same. Another cliff, another forest, another beach. Notch has added biomes, but I really don't see much variation. I don't want a world like that. Why shouldn't I be able to add completely different plants and landscape? Why can't I have three suns in the sky? Why can't I change the strength of gravity?
From an architectural point of view, it's tempting to just have an empty space in 3 dimensions. As you move in the game, the database will be returning all the objects near you, and the client will render them. These objects could be landscape or stars or even gravity generators. If you want a blue sky and sun, build a big sphere around your terrain and paint sun and stars on the inside. Make it turn and you have a day/night cycle. All of that could be a little world hanging in space, right next to the other little worlds.
I have a number of reservations with this approach. First, it's not very user friendly. People need to know how to get from one place to another. They need to give each other directions. And they need to be able to form a mental map of the place. Saying "start in the space station, go out the airlock and you'll be in this shopping mall, then exit the mall and you'll be in a jungle with dinosaurs, then jump into the volcano and you'll be at my place", probably won't work well.
Second, there's an implementation problem that's pretty hard to avoid. If sun and gravity are objects (which they'd have to be, in order for the designer to vary them), then what happens when you are slowly moving into range? As you approach the world, is it dark until the sun object gets delivered, then suddenly light? Are you floating until the gravity object appears, then suddenly falling? In general, your world needs structure, and if there isn't any by default, you will have these loading issues.
Most importantly, I want to avoid sprawl. The times I've visited Second Life, it seems like there are acre after acre of empty buildings. I understand the urge to go off somewhere and build your dream castle. The problem is that if everyone does that, you never meet anyone in the world, and no one sees your work. To get a really vital community in world, I think it needs to be more crowded and urban. But how to keep people from sprawling out in a virtual world with infinite land? In the unconstrained empty space I just described, there would be no way to do this.
In a Minecraft world of infinite flat landscape, I would solve this problem by creating islands and not allowing people to build in the ocean. Then there would be an incentive to share the island with other people. Going off to a different island means putting a lot of distance between you and the rest of the community. This might have the right balance -- you can create large new things if you want to, but you'd rather work in town, where you can be seen.
If I'm going with some three dimensional infinite space with little worlds, the closest analogy to islands are asteroids. You can build on a rock or go somewhere else and start a new project on an empty (or less crowded) rock. It has the same effect as the island -- it gives people an incentive to clump up. The asteroids also give a semi-familiar structure to the universe. They can be named and people can find them. Once on an asteroid, people can make maps, and navigate by landmarks. It's not an unstructured jumble of art in space, but it can still vary a lot. The owner of the asteroid might set gravity and weather, but structures will still be under control of individual users.
So that's my plan. I'll have a world with multiple asteroids hanging in space around a sun (or maybe several suns, for variety), and on each one there will be water, plants and buildings. The distance from the sun varies, so you can have a huge sun like I've been doing in the demo, or a tiny one. Each rock can have its own weather, gravity, and day/night cycle (caused by spinning the rock). Rocks will be stationary so you can find them -- there won't be any real orbits -- and gravity will be local to an asteroid. Anything else would be impossible to implement. It's not realistic, but I also want the asteroids to be close enough that you can see development on your neighbors. That will invite people to come and look.
Now I just have to implement something like that and make a video of it!
What I'll need
First, I'll need to generate a field of asteroids and stars in good positions. I can do this with some noise source and a few rules. Not a problem.
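The post doesn't spell out the placement rules, but one simple approach is rejection sampling: scatter random positions and throw away any that land too close to an existing asteroid. A minimal sketch (all names and parameters here are hypothetical, not the project's actual code):

```python
import random

def generate_asteroid_field(count, region=1000.0, min_gap=60.0, seed=1):
    """Scatter asteroid centers inside a cube of half-width `region`,
    rejecting any candidate that crowds an existing neighbor."""
    rng = random.Random(seed)
    centers = []
    while len(centers) < count:
        p = tuple(rng.uniform(-region, region) for _ in range(3))
        # Keep the candidate only if it is at least min_gap from everyone.
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= min_gap ** 2
               for q in centers):
            centers.append(p)
    return centers
```

A real noise source would give more natural clustering than pure random positions, but the rejection rule is the same either way: it guarantees the rocks never overlap while still letting them clump.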
Next, an asteroid itself with some terrain. I've used noise to generate height maps before (in Part 1 of this project). I can do the same thing but working in polar coordinates. That will deform a sphere using the noise and probably get me a decent asteroid. Using the usual rules for coloring things by height and filling in low areas with water, I can generate a terrain over it. Also not a problem.
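The key trick in the paragraph above is sampling the noise at the point on the unit sphere rather than in (theta, phi) directly, so the displacement wraps seamlessly with no seam at the poles or the 0/360 line. A sketch of the idea, with a cheap sum-of-sines stand-in for a real Perlin-style noise function (function names are illustrative):

```python
import math

def noise3(x, y, z):
    """Cheap stand-in for real 3D noise (e.g. Perlin): a few
    incommensurate sine waves summed; output is roughly in [-1, 1]."""
    return (math.sin(x * 1.7 + y * 2.3) +
            math.sin(y * 1.3 + z * 2.9) +
            math.sin(z * 1.9 + x * 3.1)) / 3.0

def asteroid_radius(theta, phi, base=100.0, roughness=0.2):
    """Radius of the asteroid surface in the direction (theta, phi),
    where theta is the polar angle and phi the azimuth.  The noise is
    sampled at the corresponding point on the unit sphere, so the
    deformation has no seams."""
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    return base * (1.0 + roughness * noise3(x * 4, y * 4, z * 4))
```

Coloring by height then works exactly as with a flat height map: radii below some threshold become water, higher radii become rock or snow.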
Then I'll add trees and plants. Since I'm not worried about rendering speed, I'll just fill in a few thousand of them. I can use clip art or photos of trees off the net. Google Image Search is my friend.
But then I have structures to add, and that is a problem. I could just use my current demo to build Minecraft-like buildings made of blocks. Doing enough of these to cover a world would take days. And I have that problem that I'm artistically impaired... Fortunately, I don't have to do this. I can just import some structures from Minecraft.
Shamus Young and friends set up a Minecraft server called http://twentymine.com and they have made the save files public. Shamus has also written about the server here. I asked him if anyone would mind if I cropped some buildings out of it. His reply was "Go for it." So I wrote an importer and grabbed a bit of the world. Figure 1 is the Minecraft view of this area, and Figure 2 shows my demo view of it.
These look very different. Part of the problem is that I didn't match the Minecraft textures very well. Another problem is my sun and stars instead of the Minecraft blue sky. But the biggest problem is the lack of light and shadows.
I really don't want to stop and implement a lighting model for this video. I don't understand how that stuff is done, and every tutorial I've read about it sounds complicated to implement. Light is bad, shadows are worse. It seems like most games have all kinds of short cuts to give you somewhat decent lighting without spending a lot of time on it. Even A-list games like Half-Life will make mistakes with shadows, drawing them on the wrong side of a wall, for example.
I don't want to do lighting for this video, but I think it will look terrible without it. Fortunately, there is a brute force approach.
The standard graphics approach is to take a triangle with a texture and draw it onto the screen using perspective transformations. By drawing from back to front (or using the Z buffer as described in Part 4), you can get a 3D scene. This is reasonably efficient, but to handle light and shadows, you have to play all kinds of games. For example, a shadow is implemented by, in effect, calculating the projection of the triangle onto the ground. Projecting shadows onto other objects is even more complicated.
Ray tracing doesn't do any of this. Instead, it's based on what happens in the real world. Rays of light come from the sun or from lamps and bounce around your room until some of them hit your eye. If you simulated this, you could get realistic lighting effects without ever calculating the shapes of shadows. Everything is done pixel by pixel, and shapes emerge naturally.
Since most of the light never reaches your eye, there's no point in simulating a whole light source. Instead, ray tracing works in reverse. It starts with your eye, and casts a ray from there, through each pixel on the screen, into the scene you are rendering. When it hits a part of the scene, it then traces a ray from that point to the light source. If the light is visible from the point, it's lit. If not, it's in shadow. If you think of moving your eye to a shadowed part of your room, it's obvious that this works. That's what makes something in shadow -- you can't see the light from there.
I'm not going to go into any detail on ray tracing. For starters, you can go here. None of this code will be used in the actual system, so I'll just give the overview.
I have all the coordinates of all the cubes in my demo. I write them all out to a file. The ray tracer reads this in and has a huge list of triangles. I start at the eye position. I know the corners of the virtual "screen" I'm projecting the world onto. So a ray from the eye through a pixel is no problem.
I then compare this to all the triangles in the scene. The closest one that intersects the ray is the target. I find the pixel in the texture for that triangle at the hit point. In the first version of the code, that was it. I set the screen pixel to the target pixel. I got a flat, unlit view of my scene, not much different from what I get with the standard graphics rendering.
But then I do the second step, tracing a ray from the hit point back to the light. If I can reach the light, the pixel is bright. Otherwise, it's dark. Just doing that trivial bit of code gets me Figure 3, a rendering with a nice shadow.
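The whole loop described above fits in a page of code. Here's a sketch, not the post's actual implementation: it uses the standard Möller–Trumbore ray/triangle intersection, brute-forces the closest hit, then fires the shadow ray. Vectors are plain 3-tuples and all names are hypothetical.

```python
EPS = 1e-9

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]

def intersect(orig, direc, tri):
    """Distance t along the ray to triangle (v0, v1, v2), or None if missed
    (Moller-Trumbore).  The ray need not be normalized."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < EPS:
        return None                      # ray parallel to the triangle
    inv = 1.0 / det
    t0 = sub(orig, v0)
    u = dot(t0, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(t0, e1)
    v = dot(direc, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > EPS else None

def shade(eye, direc, triangles, light):
    """Trace one primary ray; return 'lit', 'shadow', or 'miss'."""
    best_t, hit = None, None
    for tri in triangles:                # brute force, as in the post
        t = intersect(eye, direc, tri)
        if t is not None and (best_t is None or t < best_t):
            best_t, hit = t, tri
    if hit is None:
        return 'miss'
    point = tuple(eye[i] + best_t * direc[i] for i in range(3))
    to_light = sub(light, point)
    # Shadow ray: anything between the hit point and the light blocks it.
    # Skipping the hit triangle itself avoids self-shadowing artifacts.
    for tri in triangles:
        if tri is hit:
            continue
        t = intersect(point, to_light, tri)
        if t is not None and t < 1.0:    # t in (0,1) lies before the light
            return 'shadow'
    return 'lit'
```

A real version would look up the texture color at the hit point instead of returning a tag, but the control flow is the same.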
Next, I do the same thing with multiple light sources. The only difference is that instead of tracing a single ray from the hit point back to the light, I trace multiple rays, one to each light source, and add them up. The color of the final pixel is the color at the hit point times the cosine of the angle of incidence of the light, added across all the different light sources. I get Figure 4, which looks really nice. I have no idea how to do that with conventional graphics.
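That cosine term is just Lambertian diffuse shading: the dot product of the surface normal with the normalized direction to each light. A sketch of the accumulation step (assuming the shadow test has already passed for each light; names are illustrative):

```python
import math

def lambert_color(base_color, normal, point, lights):
    """Sum diffuse contributions from every light source.  `normal` must be
    unit length; `lights` is a list of ((x, y, z), intensity) pairs.  Each
    contribution is surface color * intensity * cos(angle of incidence)."""
    r = g = b = 0.0
    for (lx, ly, lz), intensity in lights:
        dx, dy, dz = lx - point[0], ly - point[1], lz - point[2]
        length = math.sqrt(dx * dx + dy * dy + dz * dz)
        # N dot L with L normalized gives the cosine of the incidence angle.
        ndotl = (normal[0] * dx + normal[1] * dy + normal[2] * dz) / length
        if ndotl > 0:                    # light is in front of the surface
            r += base_color[0] * intensity * ndotl
            g += base_color[1] * intensity * ndotl
            b += base_color[2] * intensity * ndotl
    return (min(r, 1.0), min(g, 1.0), min(b, 1.0))
```

The clamp at the end is the crude part: with many lights the sum can exceed 1.0, and just clipping it loses detail, but for a throwaway video renderer it's fine.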
Finally, I implement transparency. When a ray hits a transparent triangle, it adds in the color of the hit point, but doesn't stop. The ray continues until it hits something opaque. Doing this on the ray from the eye to the scene picks up color. Doing it again on the ray from the hit point to the light picks up more color. The result is Figure 5.
Real ray tracing would generate additional rays for reflections off the surfaces, and so on. I'm not doing any of that, so I'm even more impressed by how nice this looks for the time I spent on it. Note the green shadows on the floor and central pillar in Figure 5.
The only real problem with this is that the ray tracer is appallingly slow. I haven't optimized anything, not even cutting down the number of triangles I test. It's taking 20 seconds per image even for these little scenes with 1000 triangles in them. The larger Minecraft image from above is 240,000 triangles. Multiply that by 800 by 600 by 30 frames per second by, say, 90 seconds of video, and we have around 300 trillion triangle-ray intersection tests. And actually, it would be at least twice that, since we have a ray to the scene and a ray at least to the sun. If multiple light sources are present, each one adds another ray. And of course, I want a bigger world than just that one bridge and bit of landscape.
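The arithmetic checks out. Multiplying the figures from the paragraph above:

```python
triangles = 240_000
width, height = 800, 600
fps, seconds = 30, 90

# Primary rays only -- one per pixel per frame, each tested
# against every triangle in the scene.
primary_tests = triangles * width * height * fps * seconds
# That comes to roughly 311 trillion tests, before shadow rays
# double it (or worse, with multiple lights).
```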
I know I can get a huge speedup by not comparing with all the triangles in the scene. I'm not sure I can get it fast enough to render my entire world in a reasonable amount of time (say 10 hours for 90 seconds of video.) If I can, that solves all my lighting problems in one stroke. I don't have to figure out the clever ways of doing lighting. If not, it will be back to the drawing board.
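The usual first step toward that speedup is to group triangles into bounding boxes and skip whole groups when the ray misses the box; the post doesn't commit to a technique, so this is just one illustration. The per-box "slab" test looks like this (names are hypothetical):

```python
def aabb(tris):
    """Axis-aligned bounding box of a group of triangles: (mins, maxs)."""
    pts = [v for tri in tris for v in tri]
    xs, ys, zs = zip(*pts)
    return ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))

def ray_hits_box(orig, direc, box):
    """Slab test: True if the ray enters the box at some t >= 0.
    If this returns False, every triangle inside can be skipped."""
    lo, hi = box
    t_near, t_far = 0.0, float('inf')
    for axis in range(3):
        if abs(direc[axis]) < 1e-12:
            # Ray is parallel to these slabs; must start between them.
            if orig[axis] < lo[axis] or orig[axis] > hi[axis]:
                return False
        else:
            inv = 1.0 / direc[axis]
            t1 = (lo[axis] - orig[axis]) * inv
            t2 = (hi[axis] - orig[axis]) * inv
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return False
    return True
```

Nesting these boxes into a hierarchy (or bucketing the world into a uniform grid) turns the per-ray cost from linear in the triangle count to roughly logarithmic, which is where the orders-of-magnitude savings come from.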
Finally, the last thing I need that I don't have is the code to write a frame of image to some video format. I spent a couple of hours sorting through Microsoft's Media Foundation SDK before I gave up and did what I should have done -- asked Google. It pointed me to this page which has everything I need to write an AVI file, including source code.
If all goes well, I'll have some video for you next week. Please comment on the world design issues. And all of you graphics guys can tell me that shadows are really easy (I hope!) and that I'm an idiot for using ray tracing to get out of doing a bit of programming.