Abstract:
A user can interface with a computer system to interact with a computer program using an input device. The device includes one or more tracking devices and an input mode control. The one or more tracking devices are configured to communicate information relating to a position, orientation, or motion of one or more controllers to the computer system. The input mode control is configured to communicate an input mode signal to the computer system during interaction with the computer program. The input mode signal is configured to cause the computer program to interpret the information relating to the position, orientation, or motion of the one or more controllers according to a particular input mode of a plurality of different input modes.
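A minimal sketch of the mode-dependent interpretation this abstract describes. The abstract does not name the input modes or the controller data format, so the `Pose` fields, the "pointer"/"gesture" mode names, and the dispatch table below are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative tracking report from one controller (fields are assumptions).
@dataclass
class Pose:
    position: tuple      # (x, y, z) in meters
    orientation: tuple   # quaternion (w, x, y, z)
    velocity: tuple      # (vx, vy, vz) in meters/second

def pointer_mode(pose: Pose) -> dict:
    # Interpret the orientation as a pointing ray for cursor-style control.
    return {"action": "point", "ray": pose.orientation}

def gesture_mode(pose: Pose) -> dict:
    # Interpret the motion as a gesture, e.g. a swipe when velocity is high.
    return {"action": "swipe" if max(map(abs, pose.velocity)) > 1.0 else "idle"}

# Plurality of input modes; the input mode signal selects which one applies.
INPUT_MODES: Dict[str, Callable[[Pose], dict]] = {
    "pointer": pointer_mode,
    "gesture": gesture_mode,
}

def interpret(pose: Pose, input_mode_signal: str) -> dict:
    """Interpret the same tracking data according to the signaled input mode."""
    return INPUT_MODES[input_mode_signal](pose)
```

For example, the same controller pose would be interpreted as a pointing ray when the input mode signal is "pointer" and as a swipe gesture when it is "gesture".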
Abstract:
A method includes identifying a real-world object in a scene viewed by a camera of a user device, matching the real-world object with a tagged object based at least in part on image recognition and a sharing setting of the tagged object, the tagged object having been tagged with a content item, providing a notification to a user of the user device that the content item is associated with the real-world object, receiving a request from the user for the content item, and providing the content item to the user. A computer readable storage medium stores one or more computer programs, and an apparatus includes a processor-based device.
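A hedged sketch of the matching and sharing flow described above. The recognition step is injected as a callable, and the sharing-setting values ("public", "friends", "private"), helper names, and notification text are hypothetical; only the overall sequence (recognize, match under a sharing setting, notify, deliver on request) follows the abstract.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class TaggedObject:
    label: str          # recognized label, e.g. a product or landmark name
    content_item: str   # the content the object was tagged with
    sharing: str        # sharing setting: "public", "friends", or "private"

def match_tagged_object(image, recognize: Callable, tagged: Iterable[TaggedObject],
                        viewer: str, owner_friends: set) -> Optional[TaggedObject]:
    """Match the viewed object against tagged objects, honoring sharing settings."""
    label = recognize(image)                  # image-recognition step (injected)
    for obj in tagged:
        if obj.label != label:
            continue
        if obj.sharing == "public" or (obj.sharing == "friends" and viewer in owner_friends):
            return obj
    return None

def handle_camera_view(image, recognize, tagged, viewer, owner_friends,
                       notify, request_received):
    """Notify the viewer of matched content and provide it on request."""
    obj = match_tagged_object(image, recognize, tagged, viewer, owner_friends)
    if obj is None:
        return None
    notify(viewer, f"Content is available for the {obj.label} in view")
    if request_received(viewer):              # the user requests the content item
        return obj.content_item               # provide the content to the user
    return None
```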
Abstract:
A system and method of simulating weight of a virtual object in a virtual environment includes receiving a weight adjusting profile in a handheld peripheral device. The weight adjusting profile corresponds to at least one weight characteristic and/or movement characteristic of the virtual object presented in the virtual environment, where the handheld peripheral device represents the virtual object. The handheld peripheral device includes a movable weight. The weight adjusting profile is stored in the handheld peripheral device, and a position of the movable weight in the handheld peripheral device is adjusted to correspond to a movement of the virtual object in the virtual environment.
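A minimal sketch of how a stored profile could map virtual-object movement to a weight position. The abstract does not define the profile format or the mapping, so the fields and the linear interpolation below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class WeightAdjustingProfile:
    """Illustrative profile stored on the peripheral (format is an assumption)."""
    min_pos_mm: float    # internal weight position when the virtual object is at rest
    max_pos_mm: float    # internal weight position at maximum movement/extension
    mass_scale: float    # scales the travel with the virtual object's weight

def weight_position(profile: WeightAdjustingProfile, extension: float) -> float:
    """Map the virtual object's movement (0..1 extension) to a movable-weight position.

    Shifting the internal weight changes the peripheral's center of mass, which is
    what produces the perceived change in weight and inertia.
    """
    extension = max(0.0, min(1.0, extension))
    travel = (profile.max_pos_mm - profile.min_pos_mm) * profile.mass_scale
    return profile.min_pos_mm + travel * extension
```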
Abstract:
A method includes identifying one or more objects in one or more images of real-world scenes associated with a user, adding the identified one or more objects to a list of real-world objects associated with the user, assigning each object in the list of real-world objects to an object class based on object recognition, and providing a notification to the user that a content item has been associated with an object class assigned to one of the objects on the list of real-world objects associated with the user. A computer readable storage medium stores one or more computer programs, and an apparatus includes a processor-based device.
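A small sketch of the list-building and class-based notification steps in this abstract. The class lookup stands in for a real object-recognition model, and the labels, class names, and notification wording are assumptions.

```python
from collections import defaultdict

def classify(obj_label: str) -> str:
    # Placeholder for the object-recognition step that assigns an object class.
    # (A real system would run a recognition model; this lookup is illustrative.)
    classes = {"espresso_cup": "mug", "road_bike": "bicycle"}
    return classes.get(obj_label, "unknown")

def build_object_list(detected_objects):
    """Add identified objects to the user's list and assign each an object class."""
    user_objects = defaultdict(list)          # object class -> objects seen by the user
    for obj in detected_objects:
        user_objects[classify(obj)].append(obj)
    return user_objects

def notify_for_content(user_objects, content_class, notify):
    """Notify the user when a content item targets a class on their object list."""
    if content_class in user_objects:
        notify(f"New content is available for your {content_class}")
```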
Abstract:
Systems and methods for processing video frames generated for display on a head mounted display (HMD) to a second screen are provided. One example method includes receiving the video frames formatted for display on the HMD and, while passing the video frames to the HMD, selecting a portion of content from the video frames and processing the portion of the content for output to a second screen. The video frames viewed in the HMD are a result of interactive play executed for viewing on the HMD. The second screen is configured to render an undistorted view of the interactive play on the HMD. In one example, the method and system enable additional content to be rendered on the second screen (e.g., second screen content, such as social interactive play with others, other non-game content, player-to-player communication, etc.).
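A hedged sketch of the pass-through loop described above, assuming the HMD frames are side-by-side per-eye images stored as numpy-style arrays and that an `undistort` callable inverts the HMD lens warp; those assumptions and all function names are illustrative, not the claimed system.

```python
def process_frames(frames, send_to_hmd, undistort, send_to_second_screen,
                   overlay_second_screen_content=None):
    """Pass HMD-formatted frames through while producing a second-screen view.

    `undistort` is assumed to remove the HMD lens distortion from one eye's
    sub-image; `overlay_second_screen_content` optionally composites extra
    content (e.g., social or player-to-player communication) that only the
    second screen shows.
    """
    for frame in frames:
        send_to_hmd(frame)                          # the HMD keeps receiving its frames
        left_eye = frame[:, : frame.shape[1] // 2]  # select one eye's portion of the frame
        flat_view = undistort(left_eye)             # produce an undistorted view
        if overlay_second_screen_content is not None:
            flat_view = overlay_second_screen_content(flat_view)
        send_to_second_screen(flat_view)
```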
Abstract:
Methods, systems, and computer programs are presented for rendering images on a head mounted display (HMD). One method includes operations for tracking, with one or more first cameras inside the HMD, the gaze of a user and for tracking motion of the HMD. The motion of the HMD is tracked by analyzing images of the HMD taken with a second camera that is not in the HMD. Further, the method includes an operation for predicting the motion of the gaze of the user based on the gaze and the motion of the HMD. Rendering policies for a plurality of regions, defined on a view rendered by the HMD, are determined based on the predicted motion of the gaze. The images are rendered on the view based on the rendering policies.
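A minimal sketch of gaze prediction and per-region rendering policies of the kind this abstract describes. The simple linear extrapolation, the normalized view coordinates, the distance thresholds, and the policy fields are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class RenderingPolicy:
    resolution_scale: float   # fraction of full resolution for the region
    update_hz: int            # how often the region is re-rendered

def predict_gaze(gaze_xy, gaze_velocity, hmd_angular_velocity, dt):
    """Predict where the gaze will land by extrapolating eye and HMD motion."""
    vx = gaze_velocity[0] + hmd_angular_velocity[0]
    vy = gaze_velocity[1] + hmd_angular_velocity[1]
    return (gaze_xy[0] + vx * dt, gaze_xy[1] + vy * dt)

def policies_for_regions(regions, predicted_gaze):
    """Assign richer rendering to view regions near the predicted gaze point."""
    policies = {}
    for name, (cx, cy) in regions.items():   # regions: name -> center in view coords
        dist = ((cx - predicted_gaze[0]) ** 2 + (cy - predicted_gaze[1]) ** 2) ** 0.5
        if dist < 0.2:
            policies[name] = RenderingPolicy(resolution_scale=1.0, update_hz=90)
        elif dist < 0.5:
            policies[name] = RenderingPolicy(resolution_scale=0.5, update_hz=60)
        else:
            policies[name] = RenderingPolicy(resolution_scale=0.25, update_hz=30)
    return policies
```

The design intent is that regions far from where the gaze is predicted to move can be rendered at lower resolution or refreshed less often without the user noticing.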