Human Computer Interaction
The Envision Center can capture and track body movements, recognize gestures, and integrate touch (haptic) feedback into your research.
At right is a video of some of our human-computer interaction work, and below is some of the technology we use to accomplish it.
Cave Automatic Virtual Environment (CAVE)
The Envision Center hosts a reconfigurable Virtual Reality Theater with two adjustable outer walls to accommodate audiences of varying sizes. The floor and three upright screens all support active stereoscopic 3D projection. Closing in the outer walls deepens the immersion, letting environments and data visualizations be experienced in new and engaging ways and opening up unique, innovative perspectives on the data.
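The immersion a CAVE provides depends on rendering each screen with an off-axis (asymmetric-frustum) projection matched to the viewer's tracked head position, so the walls behave like windows rather than flat pictures. As a minimal sketch, assuming a rectangular wall lying in the z = 0 plane and an eye at positive z (coordinates are hypothetical), the near-plane frustum extents follow from similar triangles:

```python
def offaxis_frustum(screen_min, screen_max, eye, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the near
    plane for a wall spanning screen_min..screen_max in the z = 0 plane,
    viewed by an eye at (ex, ey, ez) with ez > 0."""
    x0, y0 = screen_min
    x1, y1 = screen_max
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: project wall edges onto near plane
    return ((x0 - ex) * scale, (x1 - ex) * scale,
            (y0 - ey) * scale, (y1 - ey) * scale)


# An eye centered on a 2 m x 2 m wall sees a symmetric frustum; moving the
# eye sideways skews it, which is exactly what keeps the imagery aligned
# across adjoining walls. Stereo comes from evaluating this once per eye,
# with the eye positions offset by half the interpupillary distance.
centered = offaxis_frustum((-1.0, -1.0), (1.0, 1.0), (0.0, 0.0, 2.0), 0.1)
offset = offaxis_frustum((-1.0, -1.0), (1.0, 1.0), (0.5, 0.0, 2.0), 0.1)
```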
Portable Visualization System
Our fully portable visualization system supports applications we develop using the Microsoft Kinect®, Razer Hydra®, and Novint Falcon®. The 3D television display offers a more immersive experience than a standard desktop, and the system's mobility allows operation at locations outside the Envision Center facility.
Using optical and infrared sensors, the Kinect® can map and track bodies in 3D space. This enables interaction without handheld controllers, through body motion, gestures, and spoken commands via the Kinect's built-in voice recognition.
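Controller-free gesture input of the kind the Kinect® enables ultimately reduces to classifying short windows of tracked joint positions. A minimal sketch, with hypothetical thresholds, that labels a horizontal hand swipe from a window of (x, y) hand samples as a skeleton tracker might report them (in meters):

```python
def detect_swipe(samples, min_dx=0.4, max_dy=0.15):
    """Classify a horizontal swipe from a window of (x, y) hand positions.
    min_dx and max_dy are hypothetical tuning thresholds; a real system
    would also check how much time the window spans."""
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    if max(ys) - min(ys) > max_dy:      # too much vertical wobble: not a swipe
        return None
    dx = xs[-1] - xs[0]                  # net horizontal travel
    if dx >= min_dx:
        return "swipe_right"
    if dx <= -min_dx:
        return "swipe_left"
    return None
```

A steady rightward motion classifies as "swipe_right", while a wobbly path is rejected; in practice each recognized gesture would be mapped to an application command, just as a button press would be.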
The Razer Hydra® emits a magnetic field that serves as a tracking reference, so a pair of six-degrees-of-freedom controllers can act as virtual hands in 3D space, allowing complex and precise interactions.
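Turning a six-degrees-of-freedom pose into a "virtual hand" amounts to applying the controller's tracked position and orientation to points defined in the controller's local frame (fingertips, a grabbed object, and so on). A minimal sketch, assuming the orientation arrives as a unit quaternion in (w, x, y, z) order:

```python
def transform_point(pos, quat, point):
    """Apply a 6-DOF pose (position tuple + unit quaternion (w, x, y, z))
    to a point in the controller's local frame, yielding world coordinates."""
    w, x, y, z = quat
    px, py, pz = point
    # Rotate the point using the rotation matrix derived from the quaternion.
    rx = (1 - 2*(y*y + z*z))*px + 2*(x*y - w*z)*py + 2*(x*z + w*y)*pz
    ry = 2*(x*y + w*z)*px + (1 - 2*(x*x + z*z))*py + 2*(y*z - w*x)*pz
    rz = 2*(x*z - w*y)*px + 2*(y*z + w*x)*py + (1 - 2*(x*x + y*y))*pz
    # Then translate by the tracked controller position.
    return (pos[0] + rx, pos[1] + ry, pos[2] + rz)
```

Each frame, the tracker supplies a fresh (pos, quat) pair per controller, and the virtual hand model is re-posed by running its local points through this transform.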
The Oculus Rift® enhances immersion with a head-mounted display that presents imagery through lenses placed directly in front of the eyes, and it also features rotational head tracking.
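Rotational head tracking feeds the renderer an updated view direction every frame, so the virtual camera turns with the wearer's head. A minimal sketch that converts tracked yaw and pitch angles into a forward vector; the axis conventions here (yaw about the vertical y axis, pitch about the side x axis, +z forward at rest) are assumptions, since engines differ:

```python
import math

def view_forward(yaw, pitch):
    """Forward view direction (unit vector) from tracked head orientation,
    with angles in radians. At yaw = pitch = 0 the wearer looks along +z."""
    cp = math.cos(pitch)
    return (math.sin(yaw) * cp,   # turn left/right
            math.sin(pitch),      # look up/down
            math.cos(yaw) * cp)   # forward component
```

The renderer would combine this direction with the eye position to build the view matrix each frame, keeping displayed imagery locked to the world as the head rotates.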
The Novint Falcon® is a haptic device that lets you "feel" objects in a virtual world, using servomotors to render forces such as the weight of virtual objects.
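The classic haptic-rendering example behind such force feedback is the virtual wall: when the device's cursor penetrates a virtual surface, the servomotors push back with a spring force proportional to the penetration depth (Hooke's law), and a held object's weight can be approximated as a constant downward force. A minimal sketch, with a hypothetical stiffness value:

```python
def wall_force(pos_z, wall_z=0.0, stiffness=1500.0):
    """Restoring force (N) along z from a virtual wall at wall_z: zero while
    the cursor is outside, proportional to penetration once inside.
    The stiffness (N/m) is a hypothetical tuning value."""
    penetration = wall_z - pos_z
    return stiffness * penetration if penetration > 0 else 0.0


def weight_force(mass_kg, g=9.81):
    """Constant downward force (N) approximating a held object's weight."""
    return -mass_kg * g  # negative = downward along the vertical axis
```

In a real haptic loop these forces are recomputed and sent to the motors at a high rate (commonly on the order of 1 kHz), since any lag makes rigid surfaces feel soft or unstable.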