Abstract:
Disclosed are an apparatus and a method of detecting a user interaction with a virtual object. In some embodiments, a depth sensing device of an NED device receives a plurality of depth values. The depth values correspond to depths of points in a real-world environment relative to the depth sensing device. The NED device overlays an image of a 3D virtual object on a view of the real-world environment, and identifies an interaction limit in proximity to the 3D virtual object. Based on depth values of points that are within the interaction limit, the NED device detects a body part or a user device of a user interacting with the 3D virtual object.
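The core detection step described above can be sketched as a simple geometric test: count the depth points that fall inside an interaction limit around the virtual object. This is a minimal illustration only; the function names, the spherical limit, and the thresholds are assumptions, not the patent's method.

```python
# Hypothetical sketch: flag a user interaction when enough depth points
# fall inside an interaction limit (here, a sphere around the virtual
# object's world-space position). All names and values are illustrative.

def points_in_limit(depth_points, center, radius):
    """Return the depth points within `radius` of the object center."""
    inside = []
    for (x, y, z) in depth_points:
        dist = ((x - center[0]) ** 2
                + (y - center[1]) ** 2
                + (z - center[2]) ** 2) ** 0.5
        if dist <= radius:
            inside.append((x, y, z))
    return inside

def interaction_detected(depth_points, center, radius, min_points=3):
    """Declare an interaction when enough points enter the limit."""
    return len(points_in_limit(depth_points, center, radius)) >= min_points

# Example: three points cluster near a virtual object at (0, 0, 1) meters;
# a fourth point belongs to the background and is ignored.
sample = [(0.01, 0.0, 1.0), (0.02, 0.01, 0.99), (0.0, 0.02, 1.01), (1.5, 0.0, 2.0)]
hit = interaction_detected(sample, center=(0.0, 0.0, 1.0), radius=0.1)
```

A real system would run this per frame over the sensor's full point set and likely use the object's actual bounding geometry rather than a sphere.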
Abstract:
A computer-implemented method for utilizing a camera device to track an object is presented. As part of the method, a region of interest is determined within an overall image sensing area. A point light source is then tracked within the region of interest. In a particular arrangement, the camera device incorporates CMOS image sensor technology and the point light source is an IR LED. Other embodiments pertain to manipulations of the region of interest to accommodate changes to the status of the point light source.
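The region-of-interest tracking loop described above can be sketched as: locate the brightest pixel (the IR LED) inside the current ROI, then re-center the ROI on it for the next frame. This is an illustrative sketch with assumed names and a toy frame; a CMOS sensor would read out only the ROI window in hardware.

```python
# Illustrative ROI tracking sketch: find the point light source as the
# peak intensity inside the region of interest, then shift the region so
# the source sits at its center. Frame layout and names are assumptions.

def brightest_in_roi(frame, roi):
    """Return the (row, col) of the peak intensity inside the ROI."""
    top, left, height, width = roi
    best, best_pos = -1, None
    for r in range(top, top + height):
        for c in range(left, left + width):
            if frame[r][c] > best:
                best, best_pos = frame[r][c], (r, c)
    return best_pos

def recenter_roi(pos, roi, frame_shape):
    """Shift the ROI so the tracked point sits at its center, clamped
    to the overall image sensing area."""
    _, _, height, width = roi
    rows, cols = frame_shape
    top = min(max(pos[0] - height // 2, 0), rows - height)
    left = min(max(pos[1] - width // 2, 0), cols - width)
    return (top, left, height, width)

# A 16x16 frame with a bright spot at (5, 6); the ROI starts nearby.
frame = [[0] * 16 for _ in range(16)]
frame[5][6] = 255
roi = (3, 3, 5, 5)                    # top, left, height, width
peak = brightest_in_roi(frame, roi)
roi = recenter_roi(peak, roi, (16, 16))
```

Manipulating the ROI this way keeps the readout window small, which is what makes high-rate tracking on a CMOS sensor practical.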
Abstract:
Systems and methods of a personal daemon, executing as a background process on a mobile computing device, for providing personal assistance to an associated user are presented. The personal daemon maintains personal information corresponding to the associated user, but is configured not to share that personal information with any entity other than the associated user, except under rules established by the associated user. The personal daemon monitors and analyzes the actions of the associated user to determine additional personal information about the associated user. Additionally, upon receiving one or more notices of events from a plurality of sensors associated with the mobile computing device, the personal daemon executes a personal assistance action on behalf of the associated user.
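The sharing policy described above can be sketched as a guard on every request for personal information: release a fact only to the owner, or when a rule the owner established explicitly permits that entity. The class shape and rule representation below are illustrative assumptions, not the patent's design.

```python
# Minimal sketch of the daemon's sharing rule, assuming facts are stored
# as key/value pairs and rules are (entity, fact_key) grants made by the
# owner. Names are illustrative.

class PersonalDaemon:
    def __init__(self, owner):
        self.owner = owner
        self.facts = {}
        self.rules = []   # (entity, fact_key) pairs the owner has allowed

    def learn(self, key, value):
        """Record personal information determined about the owner."""
        self.facts[key] = value

    def request(self, entity, key):
        """Share a fact only with the owner, or per an explicit rule."""
        if entity == self.owner or (entity, key) in self.rules:
            return self.facts.get(key)
        return None

daemon = PersonalDaemon("alice")
daemon.learn("home_city", "Seattle")
daemon.rules.append(("weather_app", "home_city"))
```

Under this sketch, a request from an entity without a matching rule simply returns nothing, which is the "do not share except under rules" behavior the abstract describes.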
Abstract:
A tele-immersive environment is described that provides interaction among participants of a tele-immersive session. The environment includes two or more set-ups, each associated with a participant. Each set-up, in turn, includes mirror functionality for presenting a three-dimensional virtual space for viewing by a local participant. The virtual space shows at least some of the participants as if the participants were physically present at a same location and looking into a mirror. The mirror functionality can be implemented as a combination of a semi-transparent mirror and a display device, or just a display device acting alone. According to another feature, the environment may present a virtual object in a manner that allows any of the participants of the tele-immersive session to interact with the virtual object.
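The mirror presentation described above amounts to compositing local and remote participants into one scene and flipping it horizontally, so each person sees the group as they would in a mirror. The 2D character grid below is a stand-in for a rendered 3D virtual space; all names are assumptions for illustration.

```python
# Illustrative sketch of mirror functionality: horizontally flip the
# composited scene so participants appear as if looking into a mirror.

def mirror_view(scene_rows):
    """Horizontally flip each row of the composited scene."""
    return [row[::-1] for row in scene_rows]

# 'L' marks the local participant, 'R' a remote one, '.' empty space.
scene = ["L..R",
         "L..R"]
mirrored = mirror_view(scene)
```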
Abstract:
Technologies pertaining to calibration of filters of an audio system are described herein. A mobile computing device is configured to compute values for respective filters, such as equalizer filters, and transmit the values to a receiver device in the audio system. The receiver device causes audio to be emitted from a speaker based upon the values for the filters.
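One simple way to compute equalizer filter values on the mobile device is to take a per-band measurement of the room response and derive a gain per band that flattens it toward a target level. The band layout, target, and clamping below are illustrative assumptions, not the patent's calibration procedure.

```python
# A minimal sketch, assuming the mobile device has measured a per-band
# response (in dB relative to a flat target) and computes equalizer
# gains to compensate, clamped to a safe range.

def compute_eq_gains(measured_db, target_db=0.0, limit_db=12.0):
    """Gain per band = target minus measured, clamped to +/- limit_db."""
    gains = []
    for level in measured_db:
        g = target_db - level
        g = max(-limit_db, min(limit_db, g))
        gains.append(round(g, 1))
    return gains

# Measured response across five bands (dB); the last band has a deep
# null that gets clamped rather than fully boosted.
measured = [3.0, -2.5, 0.0, 6.0, -15.0]
gains = compute_eq_gains(measured)
# These values would then be transmitted to the receiver device, which
# applies them to its filters before driving the speaker.
```

Clamping the boost matters in practice: fully correcting a deep room null can drive the speaker into distortion.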
Abstract:
Various technologies pertaining to shared spatial augmented reality (SSAR) are described. Sensor units in a room output sensor signals that are indicative of positions of two or more users in the room and gaze directions of the two or more users. Views of at least one virtual object are computed separately for each of the two or more users, and projectors project such views in the room. The projected views cause the two or more users to simultaneously perceive the virtual object in space.
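Computing views "separately for each user" can be sketched in miniature: the virtual object has one fixed position in the shared space, and each user's view is derived from that user's sensed position, so the projected imagery differs per user while the object appears anchored in the room. The 2D geometry and names below are illustrative assumptions.

```python
# Sketch of per-user view computation for a shared virtual object: the
# object's world-space position is fixed; each user's view of it is the
# object's position relative to that user. Names are illustrative.

def view_offset(object_pos, user_pos):
    """Object position relative to a user (a minimal per-user 'view')."""
    return (object_pos[0] - user_pos[0], object_pos[1] - user_pos[1])

obj = (2.0, 3.0)                      # shared world-space position
users = {"a": (0.0, 0.0), "b": (4.0, 0.0)}
views = {name: view_offset(obj, pos) for name, pos in users.items()}
```

A full SSAR system would extend this with each user's gaze direction and the projector's mapping onto room surfaces, but the principle is the same: one shared object, one view computation per user.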
Abstract:
Technologies for transitioning between two-dimensional (2D) and three-dimensional (3D) display views for video conferencing are described herein. Video conferencing applications can have multiple display views for a user participating in a video conference. In certain situations, a user may want to transition from a 2D display view of the video conference to a more immersive 3D display view. These transitions can be visually jarring and create an uncomfortable user experience. The transition from a 2D display view to a 3D display view can be improved by manipulating visual properties of the virtual camera that is employed to generate the display views.
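The camera-manipulation idea above can be sketched as interpolating virtual-camera properties over the course of the transition, rather than cutting between views. Here, field of view and camera distance are blended from a flat, near-orthographic 2D framing to an immersive 3D framing; the parameter names and values are illustrative assumptions.

```python
# Sketch of a smooth 2D-to-3D transition by interpolating virtual-camera
# properties. A narrow field of view at a large distance approximates a
# flat 2D look; widening the FOV while moving in yields a 3D framing.

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def camera_at(t, flat, immersive):
    """Blend each camera property between the 2D and 3D configurations."""
    return {k: lerp(flat[k], immersive[k], t) for k in flat}

flat_cam = {"fov_deg": 1.0, "distance": 100.0}    # near-orthographic look
immersive_cam = {"fov_deg": 60.0, "distance": 2.0}

midpoint = camera_at(0.5, flat_cam, immersive_cam)
```

Animating `t` over a fraction of a second produces a continuous dolly-zoom-like motion instead of an abrupt cut, which is what avoids the jarring switch the abstract describes.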
Abstract:
Systems and methods for representing two-dimensional representations as three-dimensional avatars are provided herein. In some examples, one or more input video streams are received, and a first subject within the one or more input video streams is identified. Based on the one or more input video streams, a first view and a second view of the first subject are identified. The first subject is segmented into a plurality of planar objects, which are based on the first and second views of the first subject and are transformed with respect to each other. The plurality of planar objects are output in an output video stream, providing perspective of the first subject to one or more viewers.
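The planar-object idea above is essentially billboarding: the subject is represented as a few flat pieces, each transformed (here, rotated about the vertical axis) toward the viewer so that flat imagery still conveys perspective. The geometry and names below are illustrative assumptions, not the patent's method.

```python
# Hypothetical billboard sketch: each planar object (e.g. head, torso)
# is yawed about the vertical (y) axis to face the viewer's position.
import math

def face_viewer_yaw(plane_pos, viewer_pos):
    """Yaw angle (radians) turning a plane's normal toward the viewer,
    computed in the horizontal (x, z) plane."""
    dx = viewer_pos[0] - plane_pos[0]
    dz = viewer_pos[2] - plane_pos[2]
    return math.atan2(dx, dz)

# Two planes (head and torso) at the origin, and a viewer standing
# one meter to the right and one meter in front.
planes = [(0.0, 1.6, 0.0), (0.0, 1.0, 0.0)]
viewer = (1.0, 1.5, 1.0)
yaws = [round(face_viewer_yaw(p, viewer), 3) for p in planes]
```

In a fuller system, the two identified views of the subject would also be blended per plane depending on the viewing angle, so the planes show appropriate imagery as the viewer moves.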