Abstract:
An image-based system and process for rendering novel views of a real or synthesized 3D scene based on a series of concentric mosaics depicting the scene. In one embodiment, each concentric mosaic represents a collection of consecutive slit images of the surrounding 3D scene taken from viewpoints on a different one of a series of concentric circles on a plane within the scene, looking in a direction tangent to that circle. Novel views from viewpoints within the circular regions of the circle plane defined by the concentric mosaics are rendered using these mosaics. Specifically, a slit image can be identified by a ray originating at its viewpoint on the circle plane and extending toward the longitudinal midline of the slit image. Each ray associated with a slit image needed to construct a novel view will either coincide with a ray associated with a previously captured slit image, or pass between two of the concentric circles on the circle plane. If it coincides, the previously captured slit image associated with the coinciding ray is used directly to construct part of the novel view. If the ray passes between two of the concentric circles, the needed slit image is interpolated from the two previously captured slit images whose rays originate on the adjacent concentric circles and are parallel to the non-coinciding ray. If the objects in the 3D scene are close to the camera, depth correction is applied to reduce image distortion for pixels located above and below the circle plane. In another embodiment, a single camera is used to capture a sequence of images, each of which includes image data with an associated ray direction. To render an image at a novel viewpoint, multiple ray directions from the novel viewpoint are chosen, and image data is combined from the sequence of images by selecting image data whose ray directions substantially align with the chosen ray directions.
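As a rough illustration of the ray-based lookup this abstract describes, the following Python/NumPy sketch maps a viewing ray from a novel viewpoint to the concentric circle it is tangent to, then either reuses the captured slit or blends the two parallel slits from the neighboring circles. The radii, mosaic array layout, angular resolution, and function names are illustrative assumptions rather than details from the abstract, and the depth-correction step is not shown.

    import numpy as np

    CAPTURE_RADII = np.linspace(0.1, 1.0, 10)   # assumed radii of the concentric circles
    SLITS_PER_CIRCLE = 720                      # assumed angular resolution of each mosaic
    # mosaics[c, s] is the slit image captured on circle c at tangency-angle index s
    mosaics = np.zeros((len(CAPTURE_RADII), SLITS_PER_CIRCLE, 120, 3), dtype=np.float32)

    def slit_for_ray(origin, direction):
        """Slit image seen along a ray on the circle plane from a novel viewpoint."""
        ox, oy = origin
        d = np.asarray(direction, dtype=float)
        dx, dy = d / np.hypot(d[0], d[1])
        signed_r = ox * dy - oy * dx            # signed distance from the center to the ray
        r = abs(signed_r)                       # the ray is tangent to the circle of this radius
        # Tangency-point angle; parallel rays tangent to different circles share it.
        phi = np.arctan2(dy, dx) - np.copysign(np.pi / 2, signed_r)
        s = int(round((phi % (2 * np.pi)) / (2 * np.pi) * SLITS_PER_CIRCLE)) % SLITS_PER_CIRCLE

        i = int(np.searchsorted(CAPTURE_RADII, r))
        if i < len(CAPTURE_RADII) and np.isclose(CAPTURE_RADII[i], r):
            return mosaics[i, s]                # coinciding ray: reuse the captured slit
        lo, hi = max(i - 1, 0), min(i, len(CAPTURE_RADII) - 1)
        if lo == hi:
            return mosaics[lo, s]               # outside the sampled radii: nearest circle
        # Ray passes between circles lo and hi: blend their parallel slits.
        w = (r - CAPTURE_RADII[lo]) / (CAPTURE_RADII[hi] - CAPTURE_RADII[lo])
        return (1.0 - w) * mosaics[lo, s] + w * mosaics[hi, s]

    # Example: the slit seen from viewpoint (0.2, 0.1) looking along the +x axis.
    column = slit_for_ray((0.2, 0.1), (1.0, 0.0))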
Abstract:
Systems and methods are described that support variable play speed control for media streams. The variable play speed control discussed herein provides an end-to-end solution for media stream delivery, playback, and user interface that enables end users and software developers to dynamically control playback speed without losing the ability to comprehend the media content.
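One small piece of such a system, sketched here purely for illustration under assumed names, is the bookkeeping that keeps the media position continuous when the user changes playback speed mid-stream; the pitch-preserving audio processing that keeps sped-up content comprehensible is not shown.

    class PlaybackClock:
        """Maps media presentation time to wall-clock play-out time at a variable speed."""

        def __init__(self, speed=1.0):
            self.speed = speed            # 1.0 = normal, 2.0 = double speed, 0.5 = half speed
            self._media_anchor = 0.0      # media position at the last speed change (seconds)
            self._wall_anchor = 0.0       # wall-clock time at the last speed change (seconds)

        def set_speed(self, new_speed, wall_now):
            # Re-anchor so the media position stays continuous across the speed change.
            self._media_anchor = self.media_time(wall_now)
            self._wall_anchor = wall_now
            self.speed = new_speed

        def media_time(self, wall_now):
            # Media position advances `speed` times faster than wall-clock time.
            return self._media_anchor + (wall_now - self._wall_anchor) * self.speed

    # Example: play at normal speed for 10 s, then switch to 1.5x for another 10 s.
    clock = PlaybackClock()
    clock.set_speed(1.5, wall_now=10.0)
    assert abs(clock.media_time(20.0) - 25.0) < 1e-9   # 10 s + 10 s * 1.5 = 25 s of media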
Abstract:
A spectator experience corresponding to an occurrence of one or more games or events is generated based on each associated occurrence. The occurrence of a game or event varies in response to contributions and/or interactions of one or more participants of the game or event. The spectator experience enables users thereof to observe an augmented version of the game or event, such as by implementing enhanced viewpoint controls and/or other spectator-related effects. In a particular aspect, the spectator experience can provide an indication of the spectators' presence, which is made available to the spectators and/or to the participants of the game.
Abstract:
An “adaptive audio playback controller” operates by decoding and reading received packets of an audio signal into a signal buffer. Samples of the decoded audio signal are then played out of the signal buffer according to the needs of a player device. Jitter control and packet loss concealment are accomplished by continuously analyzing the buffer content in real time and determining, as a function of that content, whether to provide unmodified playback of the buffer contents, to compress or stretch the buffer content, or to provide packet loss concealment for overly delayed or lost packets. Further, the adaptive audio playback controller determines where to stretch or compress particular frames or signal segments in the signal buffer, and how much to stretch or compress such segments, in order to optimize perceived playback quality.
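The buffer-driven decision described in this abstract can be sketched as follows, using assumed thresholds and names; the real controller's analysis and its stretching, compression, and concealment machinery are considerably more involved.

    LOW_MS = 40       # assumed threshold: below this, stretch play-out so the buffer refills
    HIGH_MS = 100     # assumed threshold: above this, compress play-out to drain excess delay

    def playback_decision(buffered_ms, next_packet_arrived):
        """Choose how to play out the next segment from the signal buffer."""
        if not next_packet_arrived and buffered_ms == 0:
            return "conceal"      # packet lost or overly delayed: synthesize a replacement
        if buffered_ms < LOW_MS:
            return "stretch"      # buffer running dry: time-stretch the buffered samples
        if buffered_ms > HIGH_MS:
            return "compress"     # too much latency: time-compress the buffered samples
        return "normal"           # healthy buffer: unmodified playback

    # Example decisions under different buffer conditions.
    print(playback_decision(15, True))    # stretch
    print(playback_decision(70, True))    # normal
    print(playback_decision(130, True))   # compress
    print(playback_decision(0, False))    # conceal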
Abstract:
An adaptive “temporal audio scaler” is provided for automatically stretching and compressing frames of audio signals received across a packet-based network. Prior to stretching or compressing segments of a current frame, the temporal audio scaler first computes a pitch period for each frame, which is used to size the signal templates used in the matching operations for stretching and compressing segments. Further, the temporal audio scaler determines the type or types of segments that make up each frame. These segment types include “voiced” segments, “unvoiced” segments, and “mixed” segments, which include both voiced and unvoiced portions. The stretching or compression methods applied to the segments of each frame then depend upon the types of segments that make up the frame. Further, the amount of stretching and compression applied to particular segments is varied automatically to minimize signal artifacts while still ensuring that an overall target stretching or compression ratio is maintained for each frame.
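The two preparatory steps named in this abstract, pitch-period estimation and segment classification, can be sketched as below; the sample rate, lag range, and thresholds are illustrative assumptions, and the stretching and compression built on top of them is not shown.

    import numpy as np

    SAMPLE_RATE = 8000
    MIN_LAG, MAX_LAG = 20, 160    # assumed pitch-period search range (~50-400 Hz), in samples

    def pitch_period(frame):
        """Return (lag, periodicity) for the strongest normalized autocorrelation peak."""
        frame = frame - np.mean(frame)
        best_lag, best_score = MIN_LAG, -1.0
        for lag in range(MIN_LAG, min(MAX_LAG, len(frame) // 2)):
            a, b = frame[:-lag], frame[lag:]
            score = np.dot(a, b) / (np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12)
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag, best_score

    def classify_frame(frame, voiced_thresh=0.7, unvoiced_thresh=0.3):
        """Label a frame so the scaler can choose a stretch/compress strategy."""
        _, periodicity = pitch_period(frame)
        if periodicity >= voiced_thresh:
            return "voiced"       # strongly periodic: splice on pitch-period boundaries
        if periodicity <= unvoiced_thresh:
            return "unvoiced"     # noise-like: scale without pitch-sized templates
        return "mixed"            # both kinds of content: handle the portions separately

    # Example on a synthetic 100 Hz tone (30 ms frame); should report "voiced".
    t = np.arange(0, 0.03, 1.0 / SAMPLE_RATE)
    print(classify_frame(np.sin(2 * np.pi * 100 * t)))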