Abstract:
In a system for intelligent placement and sizing of virtual objects in a three dimensional virtual model of an ambient environment, the system may collect image information and feature information of the ambient environment, and may process the collected information to render the three dimensional virtual model. From the collected information, the system may define a plurality of drop target areas in the virtual model, each of the drop target areas having associated dimensional, textural, and orientation parameters. When placing a virtual object in the virtual model, or placing a virtual window for launching an application in the virtual model, the system may select a placement for the virtual object or virtual window, and set a sizing for the virtual object or virtual window, based on the parameters associated with the plurality of drop target areas.
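The placement-and-sizing logic described above can be illustrated with a minimal sketch. This is not the patented implementation; the `DropTarget` and `VirtualObject` types, the fit criterion, and the `margin` parameter are all hypothetical, assuming only that each drop target area carries dimensional parameters and that the object is scaled to fit the selected target.

```python
from dataclasses import dataclass

@dataclass
class DropTarget:
    width: float   # meters (dimensional parameters of the drop target area)
    height: float

@dataclass
class VirtualObject:
    width: float
    height: float

def place_and_size(obj, targets, margin=0.9):
    """Pick the smallest drop target area that can hold the object
    (with a border defined by `margin`), and return that target plus
    the uniform scale factor to apply to the object."""
    fitting = [t for t in targets
               if t.width * margin >= obj.width and t.height * margin >= obj.height]
    if fitting:
        # Prefer the tightest fit among targets the object fits into.
        best = min(fitting, key=lambda t: t.width * t.height)
    else:
        # No target fits at native size: take the largest target and
        # shrink the object to fit it.
        best = max(targets, key=lambda t: t.width * t.height)
    scale = min(best.width * margin / obj.width,
                best.height * margin / obj.height,
                1.0)  # never enlarge past native size
    return best, scale
```

A real system would also weigh the textural and orientation parameters mentioned in the abstract; this sketch keeps only the dimensional fit to show the selection step.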
Abstract:
In one general aspect, a system and method are described to generate a virtual environment for a user. The virtual environment may be generated with a first electronic device that is communicably coupled to a second electronic device. The method may include tracking movement of the second electronic device in an ambient environment, determining, using one or more sensors, a range of motion associated with the movement in the ambient environment, correlating the range of motion associated with the ambient environment to a range of motion associated with the virtual environment, determining, for a plurality of virtual objects, a virtual configuration adapted to the range of motion associated with the virtual environment, and triggering rendering of the plurality of virtual objects according to the virtual configuration.
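The correlation and adaptation steps above can be sketched in a few lines. This is a hedged illustration, not the claimed method: it assumes a simple linear mapping between the ambient and virtual ranges of motion, and models "adapting the configuration" as clamping object distances to the correlated virtual reach. All function names and the representation of object positions as scalar distances are assumptions.

```python
def correlate(ambient_range, virtual_range, ambient_movement):
    """Linearly map a movement measured in the ambient environment
    (e.g. arm reach in meters) into the virtual environment."""
    return ambient_movement * (virtual_range / ambient_range)

def adapt_configuration(object_distances, virtual_reach):
    """Adapt a virtual configuration to the correlated range of
    motion: pull any object placed beyond the user's virtual reach
    back onto the reachable boundary."""
    return [min(d, virtual_reach) for d in object_distances]
```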
Abstract:
In one general aspect, a system can generate, for a virtual environment, a plurality of non-contact targets, the plurality of non-contact targets each including interactive functionality associated with a virtual object. The system can additionally detect a first non-contact input at a location in the virtual environment and a second non-contact input, and determine whether the first non-contact input satisfies a predefined threshold associated with at least one non-contact target, and upon determining that the first non-contact input satisfies the predefined threshold, provide, for display in a head-mounted display, the at least one non-contact target at the location. In response to detecting a second non-contact input at the location, the system can execute, in the virtual environment, the interactive functionality associated with the at least one non-contact target.
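The two-stage interaction above (a first input that reveals the target, a second input that executes it) can be sketched as a small state machine. The dwell-time threshold, the class shape, and the method names are all hypothetical, assuming only that the first input must satisfy a predefined threshold before the target is displayed and becomes actionable.

```python
class NonContactTarget:
    """Sketch of a two-stage non-contact target: a first input that
    satisfies the dwell threshold reveals the target; a second input
    at the same location executes its interactive functionality."""

    def __init__(self, action, dwell_threshold=0.5):
        self.action = action               # interactive functionality
        self.dwell_threshold = dwell_threshold  # seconds (assumed)
        self.visible = False

    def first_input(self, dwell_seconds):
        # Display the target only if the first non-contact input
        # satisfies the predefined threshold.
        if dwell_seconds >= self.dwell_threshold:
            self.visible = True
        return self.visible

    def second_input(self):
        # Execute the associated functionality only once the target
        # has been revealed by a qualifying first input.
        if self.visible:
            return self.action()
        return None
```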
Abstract:
Methods and apparatus using gestures to share private windows in shared virtual environments are disclosed herein. An example method includes detecting a gesture of a user in a virtual environment associated with a private window in the virtual environment, the private window associated with the user, determining whether the gesture represents a signal to share the private window with another user, and, when the gesture represents a signal to share the private window, changing the status of the private window to a shared window.
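The status change described above reduces to a small guarded state transition. This sketch is illustrative only: the gesture name, the dictionary representation of a window, and the set of share gestures are all assumptions, not the disclosed apparatus.

```python
def apply_gesture(window, gesture, share_gestures=("open_palm_push",)):
    """Change a private window's status to shared when the detected
    gesture represents a signal to share it (gesture names are
    hypothetical). Returns a new window record; non-share gestures
    leave the status unchanged."""
    if window["status"] == "private" and gesture in share_gestures:
        window = dict(window, status="shared")
    return window
```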
Abstract:
In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode.
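The mode selection described above can be sketched as a function of which portion of the virtual user is near the collision zone and which virtual object it belongs to. The threshold distance, the portion names, and the mode names ("precise", "coarse") are hypothetical, assuming only that a mode is selected once a portion comes within a threshold distance of a collision zone.

```python
def select_collision_mode(portion, obj_size_m, distance_m, threshold_m=0.1):
    """Select a collision mode once a portion of the virtual user is
    within `threshold_m` of an object's collision zone. A fingertip
    near a small object gets a precise mode; anything else gets a
    coarse mode. Returns None while outside the threshold."""
    if distance_m > threshold_m:
        return None
    if portion == "fingertip" and obj_size_m < 0.2:
        return "precise"
    return "coarse"
```

The selected mode could then drive the dynamic modification of the virtual user, e.g. shrinking the rendered hand to a fingertip cursor in the precise mode.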
Abstract:
A system and method of operating an audio visual system generating a virtual immersive experience may include an electronic user device in communication with a tracking device that may track a user's physical movement in a real world space and translate the tracked physical movement into corresponding movement in the virtual world generated by the user device. The system may detect when a user and the user device are approaching a boundary of a tracking area and automatically initiate a transition out of the virtual world and into the real world. A smooth, or graceful, transition between the virtual world and the real world as the user encounters this boundary may avoid the disorientation that can occur when the user continues to move in the real world while motion appears to have stopped in the virtual world upon reaching the tracking boundary.
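One common way to realize the graceful transition above is a blend factor driven by distance to the boundary. This is a sketch under assumptions: the fade-start distance and the linear ramp are hypothetical choices, not the disclosed method.

```python
def transition_alpha(distance_to_boundary_m, fade_start_m=1.0):
    """Blend factor for fading from the virtual world to the real
    world as the user nears the tracking boundary: 1.0 is fully
    virtual, 0.0 is fully real. Ramps linearly once the user is
    within `fade_start_m` of the boundary."""
    return max(0.0, min(1.0, distance_to_boundary_m / fade_start_m))
```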
Abstract:
A system and method of operating an audio visual system generating an immersive virtual experience may include generating, by a head-mounted audio visual device, a virtual world immersive experience within a virtual space while the user physically moves within a physical space, displaying, by the head-mounted audio visual device within the virtual space, a visual target marker indicating a target location in the physical space, receiving, by the head-mounted audio visual device, a teleport control signal, and moving a virtual location of the head-mounted audio visual device within the virtual space from a first virtual location to a second virtual location in response to receiving the teleport control signal.
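The teleport step above is, at its core, a conditional position update. This minimal sketch assumes positions are simple coordinate tuples and that the control signal is a boolean; both are illustrative simplifications.

```python
def teleport(current_pos, target_marker_pos, signal_received):
    """Move the virtual location to the marked target location when
    the teleport control signal is received; otherwise keep the
    current virtual location."""
    return target_marker_pos if signal_received else current_pos
```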
Abstract:
Systems and methods are described for generating a virtual reality experience including generating a user interface with a plurality of regions on a display in a head-mounted display device. The head-mounted display device housing may include at least one pass-through camera device. The systems and methods can include obtaining image content from the at least one pass-through camera device and displaying a plurality of virtual objects in a first region of the plurality of regions in the user interface, the first region substantially filling a field of view of the display in the head-mounted display device. In response to detecting a change in a head position of a user operating the head-mounted display device, the methods and systems can initiate display of updated image content in a second region of the user interface.
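The trigger condition in the last sentence above (a detected change in head position initiating updated content in the second region) can be sketched as a threshold test. The angular threshold and the use of yaw alone are assumptions made for illustration.

```python
def should_update_second_region(prev_yaw_deg, head_yaw_deg, threshold_deg=5.0):
    """Detect a change in head position and decide whether to
    initiate display of updated pass-through image content in the
    second region of the user interface."""
    return abs(head_yaw_deg - prev_yaw_deg) >= threshold_deg
```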
Abstract:
An example technique may include performing, by a virtual reality application provided on a computing device, video rendering at a first video rendering rate based on updating an entire image on a screen of the computing device at a first update rate, determining that a performance of the video rendering is less than a threshold, performing, based on the determining, video rendering at a second video rendering rate by updating a first portion of the image at the first update rate, and by updating a second portion of the image at a second update rate that is less than the first update rate. Another example technique may include shifting, during an eye blinking period, one or both of a left eye image and a right eye image to reduce a disparity between a left viewed object and a right viewed object.
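The first example technique above, switching from a single full-frame update rate to a split-rate scheme when performance drops, can be sketched as follows. The rate values and the dictionary shape are hypothetical; the sketch assumes performance is measured as achieved frames per second against a threshold.

```python
def choose_rendering_plan(measured_fps, threshold_fps, full_rate, reduced_rate):
    """While performance meets the threshold, update the entire image
    at the full rate. When it falls below, keep updating a first
    portion of the image at the full rate and drop a second portion
    to the reduced rate. Rates are in updates per second."""
    if measured_fps >= threshold_fps:
        return {"first_portion": full_rate, "second_portion": full_rate}
    return {"first_portion": full_rate, "second_portion": reduced_rate}
```

In practice the "first portion" would typically be the region the user is looking at, so the rate reduction lands in the periphery where it is least noticeable.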
Abstract:
In a system for dynamic switching and merging of head, gesture and touch input in virtual reality, a virtual object may be selected by a user in response to a first input implementing one of a number of different input modes. Once selected, with focus established on the first object by the first input, the first object may be manipulated in the virtual world in response to a second input implementing another of the different input modes. In response to a third input, a second object may be selected, and focus may be shifted from the first object to the second object if, for example, a priority value of the third input is higher than a priority value of the first input that established focus on the first object. If the priority value of the third input is less than the priority value of the first input that established focus on the first object, focus may remain on the first object. In response to certain trigger inputs, a display of virtual objects may be shifted between a far field display and a near field display to accommodate a particular mode of interaction with and manipulation of the virtual objects.
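The priority-based focus rule above can be sketched as a single comparison. The numeric priority values and the tuple return shape are illustrative assumptions; the abstract specifies only that focus shifts when the new input's priority exceeds the priority that established the current focus.

```python
def resolve_focus(current_focus, current_priority, new_target, new_priority):
    """Shift focus to the newly selected object only when the new
    input's priority value is higher than the priority value of the
    input that established the current focus; otherwise focus
    remains where it is."""
    if new_priority > current_priority:
        return new_target, new_priority
    return current_focus, current_priority
```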