Abstract:
A method may include presenting a scene from linear content on one or more display devices in an immersive environment, and receiving, from a user within the immersive environment, input to change an aspect of the scene. The method may also include accessing 3-D virtual scene information previously used to render the scene, and changing the 3-D virtual scene information according to the changed aspect of the scene. The method may additionally include rendering the 3-D virtual scene to incorporate the changed aspect, and presenting the rendered scene in real time in the immersive environment.
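The edit-and-re-render loop described above can be sketched minimally as follows. The scene representation, aspect names, and stand-in renderer are illustrative assumptions, not the patent's implementation:

```python
def apply_change(scene, aspect, value):
    """Change one aspect of the 3-D virtual scene information
    (e.g. a prop's color or position) per the user's input.
    Returns an updated copy so the original scene data is kept."""
    updated = dict(scene)
    updated[aspect] = value
    return updated

def render(scene):
    """Stand-in renderer: a real system would re-render the 3-D
    scene in real time; here we just produce a summary string."""
    return ", ".join(f"{k}={v}" for k, v in sorted(scene.items()))
```

A usage example: `render(apply_change({"sky": "day"}, "sky", "night"))` re-renders the scene with the user's change applied.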
Abstract:
An image processing apparatus comprises: an input unit configured to input an image signal obtained by photoelectrically converting, by an image sensor, an optical image incident via an optical system including an optical image stabilization unit; an acquisition unit configured to acquire optical image stabilization control information from the optical image stabilization unit; and a camera motion prediction unit configured to predict camera motion based on information obtained by eliminating, based on the optical image stabilization control information, the influence of the optical image stabilization from the image signal input from the input unit.
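The core idea can be sketched in a few lines: the frame-to-frame shift measured from the image signal includes the lens shift applied by the stabilization unit, so removing the OIS control shift leaves the true camera motion. The pure-translation model and sign convention below are illustrative assumptions:

```python
def predict_camera_motion(image_shift, ois_shift):
    """Estimate true camera motion by removing the OIS lens-shift
    contribution from the shift measured between image frames.

    image_shift: (dx, dy) motion estimated from the image signal
    ois_shift:   (dx, dy) compensation applied by the OIS unit
    Both in pixels, with observed shift = camera motion + OIS shift
    (a pure 2-D translation model, assumed for illustration).
    """
    return (image_shift[0] - ois_shift[0],
            image_shift[1] - ois_shift[1])
```

When the OIS unit fully cancels the motion, the measured image shift is near zero, yet the subtraction still recovers the underlying camera motion from the control information.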
Abstract:
A system for filming a video movie in a real space includes a filming camera, a sensor, a computerized pinpointing module for determining the location of the filming camera, a monitoring screen, and a computerized compositing module for generating on the monitoring screen a composite image of the real image and of a projection of a virtual image, generated according to the filming camera location data.
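Compositing of this kind reduces to two steps: projecting virtual 3-D content into the filming camera's image plane using the pinpointed camera location, then overlaying the projection on the real frame. The pinhole model below (camera at a known position looking down +Z, no rotation) is a deliberate simplification for illustration:

```python
import numpy as np

def project_point(point_3d, camera_pos, focal_px, center_px):
    """Project a virtual 3-D point into the filming camera's image
    plane with a simple pinhole model (camera looking down +Z with
    no rotation -- an illustrative simplification)."""
    rel = np.asarray(point_3d, float) - np.asarray(camera_pos, float)
    x = focal_px * rel[0] / rel[2] + center_px[0]
    y = focal_px * rel[1] / rel[2] + center_px[1]
    return (x, y)

def composite(real_frame, virtual_layer):
    """Overlay non-zero virtual-layer pixels onto the real frame,
    producing the composite shown on the monitoring screen."""
    out = real_frame.copy()
    mask = virtual_layer > 0
    out[mask] = virtual_layer[mask]
    return out
```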
Abstract:
A method implemented in a video playback system is described for incorporating augmented reality (AR) into a video stream. The method comprises determining a target pattern, determining an inner pattern in the target pattern, determining a relationship between the target pattern and the inner pattern, and receiving, by the video playback system, frames of the video stream. Each frame within the frame sequence is binarized according to a predetermined threshold. When a location of the target pattern cannot be determined directly, a location of the inner pattern is determined instead, and the location of the target pattern is derived from the location of the inner pattern on the received frames and the determined relationship between the target pattern and the inner pattern. The method further comprises displaying a virtual object with the target pattern on an output device based on the location of the target pattern.
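The binarization step and the inner-pattern fallback can be sketched as follows. The threshold value and the offset encoding of the target/inner relationship are assumptions for illustration:

```python
import numpy as np

THRESHOLD = 128  # the "predetermined threshold" (value assumed here)

def binarize(frame, threshold=THRESHOLD):
    """Binarize a grayscale frame against a fixed threshold."""
    return (np.asarray(frame) >= threshold).astype(np.uint8)

def locate_target(target_loc, inner_loc, offset):
    """Return the target-pattern location, falling back to the inner
    pattern plus the known target/inner offset when the target itself
    cannot be located (e.g. partially occluded).

    offset encodes the determined relationship between the patterns,
    assumed here as target_loc = inner_loc + offset."""
    if target_loc is not None:
        return target_loc
    if inner_loc is not None:
        return (inner_loc[0] + offset[0], inner_loc[1] + offset[1])
    return None  # neither pattern found in this frame
```

The fallback is what lets the virtual object stay anchored when the outer marker is partially covered but its inner pattern remains visible.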
Abstract:
Disclosed herein are a method and system for producing a video advertisement. The method includes the steps of receiving raw video content, applying a set of filters to said raw video content to produce a base video commercial in a first format, and displaying said base video commercial to a user. The method processes user video content received from said user to accord with said first format, such as by applying a set of filters, and merges the processed user video content with the base video commercial to produce the video advertisement.
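The filter-and-merge pipeline can be sketched minimally; treating filters as frame-to-frame functions and merging as concatenation are illustrative assumptions, not the patent's definitions:

```python
def apply_filters(frames, filters):
    """Apply a set of filters, in order, to every frame, bringing the
    content into the commercial's first format (each filter is a
    plain frame -> frame function in this sketch)."""
    for flt in filters:
        frames = [flt(frame) for frame in frames]
    return frames

def merge(base_commercial, user_frames):
    """Merge processed user content with the base video commercial to
    produce the advertisement (simple concatenation here; a real
    system could interleave or composite the two)."""
    return list(base_commercial) + list(user_frames)
```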
Abstract:
An example information processing apparatus includes: an image acquiring unit that acquires a captured image of a real space; a feature detecting unit that detects a feature from the captured image; a determining unit that determines a virtual object, or a virtual object and an aspect of the virtual object, changing them in accordance with a condition of the imaging device that captured the image; an image generating unit that generates an image of a virtual space in which the determined virtual object, or the virtual object in the determined aspect, is placed on the basis of the feature; and a display controlling unit that displays an image on a display device such that the image of the virtual space is visually recognized by a user while being superimposed on the real space.
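The condition-dependent determination step can be sketched as a rule lookup. The condition keys, the rule table, and the (object, aspect) pairing are illustrative assumptions:

```python
def determine_object(imaging_condition, rules, default):
    """Choose a virtual object and an aspect of it according to a
    condition of the imaging device (e.g. low light -> a glowing
    variant). Rule table and condition names are assumptions for
    illustration, not from the patent."""
    return rules.get(imaging_condition, default)
```

For example, with `rules = {"low_light": ("lantern", "glowing")}`, a frame captured in low light yields the glowing lantern, while any other condition falls back to the default object.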
Abstract:
A telepresence system includes a projector for generating an image, a projection screen for receiving the image generated by the projector and generating a reflected image, and a foil for receiving the reflected image generated by the projection screen. The foil generates and directs a partially reflected image toward an audience, the partially reflected image being perceived by the audience as a virtual image or hologram located on a viewing stage. Additionally, the system incorporates a camera for filming an individual through the foil, the camera being located on a camera side of the foil and positioned adjacent to the viewing stage, the individual being located on an individual side of the foil and positioned on a filming stage.
Abstract:
An information processing device includes: an outline extraction unit extracting an outline of a subject from a picked-up image of the subject; a characteristic amount extraction unit extracting sample points from points making up the outline and extracting a characteristic amount for each of the sample points; an estimation unit estimating, as the posture of the subject, a posture with a high degree of matching, by calculating the degree to which the characteristic amount extracted by the characteristic amount extraction unit matches each of a plurality of characteristic amounts that are prepared in advance and represent predetermined postures different from each other; and a determination unit determining the accuracy of the estimation using a matching cost obtained when the estimation unit carries out the estimation.
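The estimation and accuracy-determination steps can be sketched as a nearest-template search with a cost check. The sum-of-absolute-differences cost and the threshold value are illustrative choices; the patent does not fix the metric:

```python
def estimate_posture(features, templates, cost_threshold=1.0):
    """Pick the stored posture whose prepared characteristic amounts
    best match the extracted sample-point features, and report
    whether the matching cost is low enough to trust the estimate.

    features:  list of per-sample-point feature values
    templates: dict mapping posture name -> feature list
    Matching cost is a sum of absolute differences (an assumption)."""
    best, best_cost = None, float("inf")
    for name, tmpl in templates.items():
        cost = sum(abs(f - t) for f, t in zip(features, tmpl))
        if cost < best_cost:
            best, best_cost = name, cost
    reliable = best_cost <= cost_threshold  # the accuracy check
    return best, best_cost, reliable
```

The same matching cost that selects the winning posture doubles as the accuracy signal: a low cost means the outline closely resembles a known posture, a high cost flags the estimate as unreliable.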
Abstract:
A remote monitoring system includes: a display unit on which a CG video generated from a three-dimensional CG model is displayed; an input unit which accepts a user's input to the CG video; a three-dimensional CG image generation unit which displays, on the display unit, the CG video after being moved on the basis of the input; an optimal camera calculation unit which specifies a surveillance camera that can pick up a real video similar to the CG video after being moved; and a control unit which controls the surveillance camera that is specified, wherein a real video from the surveillance camera that is controlled is displayed on the display unit.
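The "optimal camera calculation" step can be sketched as picking the surveillance camera whose fixed placement is most similar to the viewpoint of the moved CG video. Euclidean distance between positions is an illustrative similarity measure; a full system would also compare orientation and field of view:

```python
import math

def pick_camera(cg_viewpoint, cameras):
    """Select the surveillance camera that can pick up a real video
    most similar to the moved CG view.

    cg_viewpoint: (x, y, z) viewpoint of the CG video after the move
    cameras:      dict mapping camera id -> (x, y, z) position
    Returns the id of the closest camera (distance-only criterion,
    assumed for illustration)."""
    return min(cameras, key=lambda cam_id: math.dist(cameras[cam_id],
                                                     cg_viewpoint))
```

The selected camera is then controlled (panned, tilted, zoomed) and its real video replaces the CG video on the display unit.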
Abstract:
Systems and methods are provided for generating calibration information for a media projector. The method includes tracking at least a position of a tracking apparatus that can be positioned on a surface. The media projector shines a test spot on the surface, and the test spot corresponds to a known pixel coordinate of the media projector. The system includes a computing device in communication with at least two cameras, each of which can capture images of one or more light sources attached to an object. The computing device determines the object's position by comparing images of the light sources and generates an output comprising the real-world position of the object. This real-world position is mapped to the known pixel coordinate of the media projector.
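Once several (real-world position, projector pixel) pairs have been collected by shining test spots at known pixel coordinates, the calibration reduces to fitting a map between the two coordinate systems. A 2-D affine fit via least squares is one simple, illustrative choice (a real projector may need a full homography or lens-distortion model):

```python
import numpy as np

def fit_affine(world_pts, pixel_pts):
    """Fit a 2-D affine map from tracked real-world surface positions
    to projector pixel coordinates, given corresponding pairs
    gathered from test spots at known pixel coordinates."""
    w = np.asarray(world_pts, float)
    p = np.asarray(pixel_pts, float)
    A = np.hstack([w, np.ones((len(w), 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, p, rcond=None)  # 3x2 affine matrix
    return M

def world_to_pixel(M, world_xy):
    """Apply the fitted calibration to a newly tracked position."""
    x, y = world_xy
    return tuple(np.array([x, y, 1.0]) @ M)
```

Three non-collinear correspondences determine the affine map exactly; additional test spots over-determine it and the least-squares fit averages out tracking noise.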