Back in the 1980s sometime, while I was working for IBM Research, I saw an early Virtual Reality setup. I forget who was running the project, or where the demo was (Stanford University, I think) but an announcement went out over Usenet (there was no Web yet) that there would be a demo.
My friend and I arrived early and were among the first people in the room. I vaguely remember someone in a motion capture suit who was positioning a cartoon character in-world. She might have been a lobster? And you could put on the headset and look around. This was all done with something like an early Silicon Graphics workstation, so the graphics were really crude, with low framerates and low resolution.
Even so, we were all excited about it. My most vivid memory was an hour after we got there, when the room had about 30 people in it, all lined up waiting for their try. One of the organizers said he was impressed by the turnout, since it had only been a department announcement. I told him it had gone out on sci.nanotech, one of the Usenet feeds. His response was a blank look and then "Oh, God!" By the time we left at noon, the place was wall to wall people.
And then nothing. The equipment was out there in research labs, and you would hear things about it, but I never got to play with a VR headset again for thirty years.
My Oculus Rift Development Kit 2 was ordered on April 12, with an estimated delivery date of July. It actually arrived August 22, so I've had it for two weeks now. Here are my first impressions.
My kit was exactly what I expected from the unboxing videos people had posted online.
Hardware and software installed exactly as advertised (I'm running under Windows 7). The only surprise was that despite warnings I would have to reload the firmware, it was already up to date. Since I got it, there have been two more releases of the SDK, 0.4.1 and 0.4.2.
The head tracking comes partly from some kind of inertial sensor inside the headset, and partly from an IR camera you are supposed to mount five feet from your head. But the manual shows it clipped to the top of your monitor, which is more like two feet away. I don't really know how I'd manage five feet without a tripod, or without placing the camera in some really inconvenient spot off to the side or behind me.
The only consequence of placing the camera near you is that the tracking volume is a bit small, and it's easy to move your head out of the tracked area, which causes the position to jump inside demos. If you are seated and not moving around much, it won't be a problem.
I should mention at this point that I am very nearsighted. With my glasses off, my eyes focus to about three inches in front of my nose. Farther than that, everything is a blur. Aging has also taken its toll and I now use reading glasses while programming.
The Oculus comes with two sets of lenses -- "A" for normal sight, and the "B" set for nearsighted users. I had hopes I could take my glasses off and use the "B" lenses, but they weren't enough. Everything was still blurry. So I'm using my distance glasses with the "A" lenses.
This has a couple of consequences. First, there's an adjustment for the distance between the screen and the eye, and I have it cranked all the way out so that there's room for my glasses. I think this is giving me a bad experience with the headset. I have a lot of chromatic aberration (red and blue rings around objects) and dizziness when I move.
Second, the positioning of my glasses is really very fussy. If I push them too far up or down my nose, or if they are at all crooked, I can't see clearly. The Rift tends to grab my glasses when I put it on or take it off, so it's hard to get everything arranged exactly.
This is a nuisance when programming. I write some code and it's time to test. So I switch pairs of glasses, stick my face into the Rift (glasses first -- I can't just pull it down over my face), then fuss with the straps. The code must already be running, since otherwise you are in the dark and can't see the keyboard.
Then I play with my code some (more on that below), and it's time to take the Rift off, change glasses again, and look at the screen. Needless to say, single-stepping through code in the debugger while wearing the Rift is impossible. Even hitting a breakpoint or something is a huge nuisance, since the whole world freezes. This is very jarring while wearing the headset. After a couple of hours of debugging, I had a headache every time.
None of this would apply if you were just playing games, although you do have to remember to set everything up first, then put on the headset and start to play. For game designers, this means there should be some "getting started" state the player can sit in before they really enter the world and start playing.
So What's It Like?
The little control panel applet has a test scene in it, with a desk and some plants. To my surprise, this scene is instantly very strongly three dimensional, even when I had the eye-related settings wrong. Although I don't feel I'm "in the world", the illusion is strong enough that I was startled the first time I ran it and reached for the keyboard. I couldn't see my hands and was briefly shocked. Where are my hands?! Oh.
Next I tried the "Tuscany" scene. I could look around, but as soon as I moved in world, I instantly became dizzy. This has slightly improved since I've fiddled with the inter-eye distance and some other settings, but I'm still very dizzy in all the demos when I move. I'm really surprised at this, since I normally don't get motion sickness. This has taken some of the fun out of the Rift, but at least I can sit and look around.
After Tuscany, I immediately wanted to try Half Life 2. To run this, you have to allow Beta code in Steam, and mess with lots of other settings. It took me quite a while to get this right, even with instructions. You'll find a write-up here.
My first mistake was not realizing I needed to run the Rift in Extended mode, where it acts as a second monitor. By default, the SDK wants to run in Direct mode, where it routes graphics to the headset via its own device driver.
What the User's Guide doesn't mention is that in Extended mode, you have to use the Windows Monitor control panel to put the Rift in Portrait mode, and position the Rift monitor to the left of your main desktop. Then an application can create a 1920 by 1080 window on the left side, and it will fall on the Rift's portion of the desktop.
To my disappointment, once I got all this working, HL2 makes me completely sick in VR mode. You are too tall -- much taller than other characters -- and movement is somehow even worse than in the other demos. Scenery looks really nice, and I stood in parts of the game and just looked around. But people and furniture, etc. just look plastic and fake. It needs to be much higher resolution to be convincing. And I had no luck at all running and shooting, so playing the game was hopeless.
Unfortunately, "The Chair" was when things started to get flaky. I ran this demo on my NVidia GT 640 display in Direct Mode, and it ran for a few minutes, then stalled. This demo works by changing the scenery when you aren't looking at it. The game was still running, but the story line wasn't advancing (nothing was changing). I can only guess that my framerate was so low that it wasn't registering that I was looking away. It was disappointing to get into it and then have it fail like that.
In all of the demos except the simplest, my frame rate was terrible. Even in the Tuscany scene, I was only getting 30 FPS. So I decided to get a new display card, a Radeon 270x. On this display, the Chair demo would not run at all, crashing in some DLL at startup. In desperation, I ran it in Extended Mode, and it ran fine. It also did not get stuck in the middle the way it had on the NVidia. Strange.
The Rift SDK comes with source code for the SDK itself, and two demos. One is "OculusRoomTiny" which is the absolute basics of presenting a scene and moving around. The other is a stripped down version of the "Tuscany" demo, called "OculusWorldDemo". There's also support for Unity, which I haven't looked at.
It's taken me a few days to modify my demo code to support the Rift. In OpenGL on the Radeon, I can't get it to run in Direct Mode at all. There's a note on one of the forums that this isn't supported yet, but some of the code comments seem to suggest that it is.
I've also had trouble running the OpenGL version of the SDK, including some invalid OpenGL calls. At first I thought that meant the code had never been tested properly, but it turns out it's because I'm running a core profile, not allowing any older calls. They are querying the existing environment, so that they can restore it on exit, and using some older GL 2.1 calls to do that. Some of this appears to have been fixed in the 0.4.2 version of the SDK, which I haven't tried yet.
The SDK wants you to render everything to a texture, not the main window. My first task was to get my demos to do that. The SDK requests a very large texture, 1182 by 1461 for each eye. Then they distort these textures with their own shader to match the lenses on the device. The result is displayed at 960 by 1080 for each eye. Along with a very wide field of view, this gives you a full image, including peripheral vision.
Next, I had to implement their projection matrix, which is off-center and different for the two eyes. I got this subtly wrong and had a lot of eyestrain from it. Finally, I had to track the head position and calculate the correct view matrices for the two eyes. I got this wrong too since I misread some of the sample code. What I thought was a positive displacement from the head to the eye was actually a negative displacement of the view matrix. So I actually implemented the view with my eyes reversed!
At a distance, there's not much difference between the two eyes, so it wasn't immediately obvious what was wrong. Only as I got closer to an object did my visual system start to complain. Oddly, it just felt as if everything was the wrong size. There wasn't any double vision or other clue. My mind kept telling me those one-meter blocks of the Minecraft world were huge! That was another day of short debugging sessions, since it was giving me a headache.
I have also noticed some after-effects from these sessions. I had the feeling I get when I'm breaking in a new pair of glasses. A mild headache and the feeling that I wasn't seeing "correctly" somehow. This was with my normal glasses, and lasted for hours after these debugging sessions with messed up views.
I still don't have the SeaOfMemes demo running correctly with the Rift. I'm not sure what's wrong, but perspective seems exaggerated, and even a planet seems like a small ball in front of me. I have to dig through the code some more, but I assume I'm scaling distant scenery down, and this changes the distance between my eyes relative to the world.
To really release my demos as Rift applications, I'd also have to do something about the cursor (currently not implemented) and the GUI. It's recommended that you create the GUI on an object in world, not just pasted to the screen. I have code to do that, but would need to put it all together.
I'm also not happy with the performance of any of my demos. The McrView program runs at 250 fps fullscreen on the Radeon 270x, but only 60 fps when rendering both eyes on the Rift. I'm not sure where the extra time is going.
I would say the most annoying part of the Rift is being blind while you are wearing it. The whole thing would be so much more comfortable if you could hit a switch and see the outside world. In fact, I'm curious what it would look like if you piped a webcam image into the screens while wearing it! Cameras are so cheap now that they really could afford to put a couple on the front of the Rift.
Perhaps this is just my difficulties with putting it on and off over my glasses. I'm going to see how much it would cost to get a new pair of prescription lenses without frames and mount them over the Rift lenses. Then I could put it on and off quickly.
I would also say that the headset could use more resolution. The problem is that you are using so little of the screen. In a screenshot of the distorted output, only a small patch in the center of each eye is where you are actually looking -- perhaps 300 by 300 per eye, out of the total 1920 by 1080. Double the resolution would be appreciated, although it would be more work for the graphics card.
As for my own projects, I hadn't intended to do a Rift game, but I suppose I could. The reason I bought it was to put some excitement back into my project, and get me to look at that tar pit of graphics programming again. Although it is cool to stand inside my game worlds and look around, the dizziness that comes with moving is a problem for me.