It is my last year as an undergraduate at the University of Missouri. As my time as an undergraduate comes to an end, I have been working on a project to recreate a section of campus for a virtual reality tour. That section of campus is Francis Quadrangle, better known simply as “The Quad.” To accomplish this, I have been taking pictures with my team and using them as reference material.

We have also been using Google Earth Pro to get measurements for the buildings and surrounding landmarks of the quad. These reference materials give us the tools we need to properly model the quad for our virtual reality tour. We have been modeling our buildings and other assets in Maya 2013, and we are creating our virtual tour in Unreal Engine 4.

I have been working on a project recently to recreate a section of campus for a virtual reality tour.

We have been tirelessly modeling for a little over a month now, and have made some real progress on the buildings surrounding the quad and a few of its landmarks. We decided it was about time to export our buildings out of Maya and bring them into Unreal for testing. We have been modeling at 1/100th of the actual size of the buildings and then importing them into Unreal at 100 times the scale.
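The scale bookkeeping here is simple but easy to get backwards, so here is a minimal sketch of it in Python (assuming both Maya and Unreal Engine 4 work in centimeters, their defaults; the real pipeline just sets the import scale on the FBX import dialog):

```python
# Sketch of the 1/100th modeling scale described above.
MODEL_SCALE = 1.0 / 100.0   # we model buildings at 1/100th real size in Maya
IMPORT_SCALE = 100.0        # Unreal's import scale applied on FBX import

def modeled_size(real_size_cm):
    """Size of the asset as built in the Maya scene."""
    return real_size_cm * MODEL_SCALE

def in_engine_size(real_size_cm):
    """Size after Unreal applies the import scale -- should match reality."""
    return modeled_size(real_size_cm) * IMPORT_SCALE

# A building facade measured at 30 m (3000 cm) in Google Earth Pro:
real = 3000.0
print(modeled_size(real))    # 30.0 cm in the Maya scene
print(in_engine_size(real))  # 3000.0 cm in Unreal -- back to true scale
```

As long as the two factors are exact inverses, nothing is lost in the round trip.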

This seemed to work perfectly at first. We had a basic ground model with an exact-scale image of a Google Maps overhead view of the quad. Placing buildings was as easy as finding the outline on the ground model. Unreal recently updated to 4.7.2, which allows you to run your test game from within the engine itself, so we fired up a test game and threw on our Oculus Rift DK2 to behold the beauty of our work. This is where we hit a snag. When viewed through VR, it didn't feel to scale. The buildings were exactly the correct size, but it wouldn't look that way to a user taking the tour. There was no way to verify that everything was properly scaled besides going with what felt right.

We fired up a test game and threw on our Oculus Rift DK2 to behold the beauty of our work. This is where we hit a snag.

Now, jump back about four months to my first time taking home and trying out the DK2. It was truly an amazing first experience. Before this I had only briefly encountered the Oculus, and had purchased a Google Cardboard when they first came out, but this was on a whole new level. I downloaded as many demos as I could and lived in my DK2 for the better part of a month. Spending this much time with the Oculus, however, I became aware of a problem: it is hard to keep the Oculus on for more than 10-15 minutes, for multiple reasons. I will focus on just a few of them.

Content for the Oculus is short. Most games and demos don't last more than 10-15 minutes anyway, so you are constantly switching between content on the Oculus Rift. Another reason is that you are contained within the virtual reality. There is no way to see what is happening outside of the Rift. If you get a text, need a drink, or someone wants to talk to you, you have to take off the Rift.

That is where I got the idea for VR to IRL.

It just made sense. When wearing the DK2, you don't want to have to take it off. It is annoying to set it down on your desk, it is annoying to slide it up and wear it on your forehead (which greases up the lenses), and it is just plain annoying to not be able to keep it on when you want to do something as simple as press the key a game asks you to. I wanted a way to see the world around me for these easy tasks. This would improve my life as an Oculus Rift gamer.

As an Oculus Rift developer, I wanted to know that my scaling was right. If you have been in an Oculus Rift, you know that presence (the feeling of true immersion in a virtual world) is absolutely key. I wanted a way to stand on the quad, look at a building, and view what we had created at the same time. As it stood, I would have to go out to the quad, run back to the VR lab, launch our game, and walk to that same spot. There had to be a better way.

So, I had this idea churning in my head when a friend invited me to attend HackIllinois. I immediately accepted and threw my idea at him. He was in, and the ball was rolling. I had seen some demos of using a Leap Motion as a pass-through camera on the Oculus Rift. This seemed like a perfect solution to me: use a gesture to switch between the Leap Motion feed and a game. Most of the tools were already there. Piece of cake. We could probably build a rig for the laptop running everything, mount the DK2 positional tracker on it, and still have a day to spare to converse with fellow hackers. We had a crude drawing and our basic idea to take with us as we traveled five hours to Champaign, Illinois to conquer our challenge.

Building the Rig

As we got settled in and the event started, it soon became apparent that this would not be as easy as we had originally thought. My fellow hacker and friend Mark worked on implementing a gesture on an Android Wear device to switch between VR and the Leap Motion camera, while I worked on the Leap Motion gestures and the pass-through camera. The pass-through camera was the easiest part: creating a game in Unity which only utilized the Leap's camera was all it took. Then it was time to test the gesture for switching to another game. But you can't run two full-screen Oculus apps and still use Leap gestures as a background process. I didn't realize this was the problem until an Oculus mentor came by and asked about our progress. His suggestion was to run the apps windowed, because the Oculus won't allow what we were trying to do otherwise. It isn't perfect, but it is a start. Sure enough, our switch gesture now switched between the two windows.

You can’t run two full screen Oculus apps and still be able to use leap gestures as a background process
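The switching logic itself is just a small toggle between two windows. Here is a rough Python sketch of that state machine (all names are hypothetical; the real build wired Leap Motion SDK gestures to Windows focus changes between the two windowed apps):

```python
# Hypothetical sketch of the gesture-driven window switching described above.
class PassthroughSwitcher:
    GAME = "game"
    CAMERA = "passthrough"

    def __init__(self, focus_window):
        # focus_window: callback that brings the named window to the front
        # (in the real build, an OS-level focus change between the two apps)
        self._focus = focus_window
        self.active = self.GAME

    def on_gesture(self, gesture):
        # A swipe gesture toggles between the game and the camera feed.
        if gesture == "swipe":
            self.active = self.CAMERA if self.active == self.GAME else self.GAME
            self._focus(self.active)
        return self.active

focused = []
switcher = PassthroughSwitcher(focused.append)
switcher.on_gesture("swipe")   # game -> pass-through camera
switcher.on_gesture("swipe")   # pass-through camera -> back to game
print(focused)                 # ['passthrough', 'game']
```

The key constraint from the mentor's advice is baked into the design: both apps stay running, windowed, and only focus moves between them.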

This was about the point that Mark started working on the rig to house the laptop and positional tracker. The rig consisted of some wood blocks, screws, and an array of PVC pipe. The wood blocks connected the two PVC pipe frames that added stability to the laptop and the cabling housed in the backpack. A long PVC pipe then came out of the top, and another came directly out and down to hold the positional tracker. Building the rig didn't take too much time, but it lacked some stability, so the need for more screws became apparent. We threaded the necessary cables through and began the long process of connecting all the cables and devices and placing them into the rigged-up bag. It is definitely a two-person job.

Getting the contraption on is kind of a hassle too. The PVC pipe becomes rather heavy pretty fast, and without the extra screws it was very rickety. For the most part, however, getting it on is pretty straightforward, especially with a second person. The real problem with this first build was the Leap Motion. Using the Leap Motion as the pass-through camera turned out to be a major design flaw. The first issue is that the Leap Motion can only see in black and white. This is something we knew going in, and honestly it wasn't a huge problem for the developer half of the equation we were trying to solve: you don't need to see the color of the objects you are designing, you would just use this for scaling and presence. A quick trip down the hallway showed us that the Leap Motion was not going to manage that either. I could see Mark standing just a few feet away; I could see his plaid shirt, his beard, and his glasses. Beyond Mark, however, there was just pitch black, as if I had entered a cave and was using some terrible form of night vision. It was disorienting to say the least, and I could not walk around on my own.

So we had come to the conclusion that this was not a good solution for the developer. What about someone who just doesn't want to take off the Rift, the original dilemma I had faced when I first started using the Oculus? It turns out it wasn't much good for that either. Sure, I could grab a drink or even pick up some snacks; the Leap even gave depth to the world, so that was relatively easy. However, more common tasks like working on the laptop to switch demos or checking the time on my phone were not possible. The Leap Motion was unable to view these screens, rendering them useless slabs of glass to any gamer. So, hours of development later, we had made a pretty cool rig, but we had solved none of the problems we had set out to. Not only that, it was getting late in the competition and we had just one night left.


The Solution

It is about 2:30 am and we have to turn our projects in at 10 am. We are both running on fumes at this point, so we decided to take a walk. Now that we knew the Leap Motion would not suffice as a pass-through camera for the type of rig we wanted to create, we remembered seeing some Logitech webcams available for checkout. We went to the first floor and soon found out that equipment checkout had closed. We still had one backup plan, however. Mark had thrown into his bag what would end up saving the project for HackIllinois. We went back up to our study room and started working the PlayStation Eye he had happened to bring into the project.

We downloaded a stereoscopic video player, and the PlayStation Eye solved our problems for the most part. We could now switch between a game and the pass-through camera using our Leap Motion gestures and check our phones, eat some food, or just see what was going on in the room. We could also use our other gesture to get a 50/50 view of both the VR world and the real world. This build was far from perfect, but it was a great proof of concept and we were excited to develop it further. We ended up not placing in the top five, but we were really happy with how much we accomplished in 36 hours, and the 1,000 people who were there had some really great projects.
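The 50/50 view is essentially an alpha blend of two frames. A minimal sketch in Python with NumPy, assuming both feeds arrive as same-sized RGB frames (the real build composited two app windows rather than raw arrays):

```python
import numpy as np

# Sketch of the 50/50 mixed view of the VR world and the camera feed.
def blend(vr_frame, camera_frame, alpha=0.5):
    """Blend the VR render with the pass-through camera feed.

    alpha=1.0 shows only VR, alpha=0.0 only the real world,
    and the default 0.5 gives the 50/50 view described above.
    """
    mixed = alpha * vr_frame.astype(np.float32) \
          + (1.0 - alpha) * camera_frame.astype(np.float32)
    return mixed.astype(np.uint8)

vr = np.full((480, 640, 3), 200, dtype=np.uint8)    # stand-in VR frame
cam = np.full((480, 640, 3), 100, dtype=np.uint8)   # stand-in camera frame
print(blend(vr, cam)[0, 0])                         # [150 150 150]
```

Casting to float before mixing avoids the wrap-around you would get from adding `uint8` values directly.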

The Future of VR to IRL

Once we got back home from Illinois, we knew the PlayStation Eye webcam was not going to cut it. We tested VR to IRL with a Logitech C920, and it was night and day. The increased resolution and decreased size made it a much more attractive option. The only problem now was the widescreen aspect ratio. After a little more research, we found a solution to that problem as well.


Mounting two of these webcams in portrait and then supporting that in the software will give the user an increased FOV as well as a much more true-to-life view. We plan on 3D printing a mount for the two Logitech C920s, with a place in the middle for the Leap Motion. This will make it much easier to attach the devices to the Oculus without resorting to items like Velcro, and will also allow us to place the cameras closer to the user's actual eye placement, which will make gestures easier.
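Composing the two portrait feeds into one stereo frame for the Rift amounts to stacking them side by side, one per eye. A sketch with NumPy, using hypothetical frame sizes (actual capture from the C920s would go through something like OpenCV):

```python
import numpy as np

# Sketch of building a side-by-side stereo frame from two portrait webcams.
def stereo_pair(left, right):
    """Stack left/right eye frames side by side for the Rift's display."""
    assert left.shape == right.shape, "both cameras must deliver matching frames"
    return np.hstack([left, right])

# Two portrait frames (height > width), e.g. a rotated 16:9 capture:
left = np.zeros((640, 360, 3), dtype=np.uint8)
right = np.zeros((640, 360, 3), dtype=np.uint8)
print(stereo_pair(left, right).shape)   # (640, 720, 3)
```

Rotating each camera into portrait is what recovers vertical FOV from the widescreen sensor, which is the whole point of the two-camera mount.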

The current software for VR to IRL is, to be honest, a jumbled mess and needs to be rewritten as a single Windows/Mac application. As it stands, the webcam pass-through software, the Leap Motion gestures, and the transparency software are all separate. This is not ideal, and a single application for VR to IRL is already in development. Mark, throughout this time, has been working on a way to control the switch from virtual reality to the pass-through camera using a gesture on my Moto 360. Using APIs from a company called Strap should allow us to write a single application that can be used on the Apple Watch, Android Wear, and the Pebble smartwatch.

A single application for the VR to IRL is already in development.

I hope to use the current implementation of VR to IRL in my development of the virtual tour of Francis Quadrangle. I believe that with the true stereoscopic setup using the new webcams, it will be an incredible tool for my team and me to get a real sense of scale and presence. The finished version will most likely drop the PVC rig and use only a Surface Pro 2 as the main device driving the software. This will need testing, but we believe it can handle what we have and will work for our needs. If latency becomes too high and frame rates drop, another option we have come up with is using Steam streaming with a more powerful rig close by to reach the same outcome. Either option will work, but the first is the most versatile and desirable. I will end with a small demo of where VR to IRL currently stands, and I look forward to writing further updates on the project.


