Abstract:
Techniques and tools for encoding enhancement layer video with quantization that varies spatially and/or between color channels are presented, along with corresponding decoding techniques and tools. For example, an encoding tool determines whether quantization varies spatially over a picture, and the tool also determines whether quantization varies between color channels in the picture. The tool signals quantization parameters for macroblocks in the picture in an encoded bit stream. In some implementations, the tool predicts the quantization parameters and signals them with reference to the predicted quantization parameters. A decoding tool receives the encoded bit stream, predicts the quantization parameters, and uses the signaled information to determine the quantization parameters for the macroblocks of the enhancement layer video. The decoding tool performs inverse quantization that can vary spatially and/or between color channels.
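A minimal sketch of the predict-then-signal idea described above, not the patented method itself: per-macroblock quantization parameters (QPs) are predicted from already-coded neighbors and only the differences are signaled. The predictor (average of the left and top neighbors, falling back to a picture-level QP) and the function names are assumptions for illustration.

```python
def predict_qp(qp_grid, row, col, picture_qp):
    """Predict a macroblock QP from causal neighbors; fall back to the picture QP at edges."""
    left = qp_grid[row][col - 1] if col > 0 else None
    top = qp_grid[row - 1][col] if row > 0 else None
    if left is not None and top is not None:
        return (left + top) // 2  # illustrative predictor: average of left and top
    return left if left is not None else (top if top is not None else picture_qp)

def encode_qp_plane(actual_qps, picture_qp):
    """Turn a 2-D grid of per-macroblock QPs into deltas to be signaled."""
    rows, cols = len(actual_qps), len(actual_qps[0])
    deltas = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            pred = predict_qp(actual_qps, r, c, picture_qp)
            deltas[r][c] = actual_qps[r][c] - pred  # delta written to the bit stream
    return deltas

def decode_qp_plane(deltas, picture_qp):
    """Reconstruct per-macroblock QPs from the signaled deltas in raster order."""
    rows, cols = len(deltas), len(deltas[0])
    qps = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            pred = predict_qp(qps, r, c, picture_qp)
            qps[r][c] = pred + deltas[r][c]
    return qps
```

The same delta-coding pattern can be applied once per color channel when quantization varies between channels, which is one way the spatial and per-channel variation described in the abstract could coexist.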
Abstract:
Embodiments for implementing a speech recognition system that includes a speech classifier ensemble are disclosed. In accordance with one embodiment, the speech recognition system includes a classifier ensemble to convert feature vectors that represent a speech vector into log probability sets. The classifier ensemble includes a plurality of classifiers. The speech recognition system includes a decoder ensemble to transform the log probability sets into output symbol sequences. The speech recognition system further includes a query component to retrieve one or more speech utterances from a speech database using the output symbol sequences.
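A structural sketch of the ensemble pipeline described above, under assumed interfaces: each classifier is assumed to expose a `log_probs(feature_vector)` method, each decoder a `best_path(log_probs)` method, and each stored utterance a `symbols` attribute. None of these names come from the abstract.

```python
import numpy as np

class ClassifierEnsemble:
    def __init__(self, classifiers):
        self.classifiers = classifiers  # each maps a feature vector to class log-probabilities

    def convert(self, feature_vectors):
        """Return one log-probability set per classifier for the input feature vectors."""
        return [np.stack([clf.log_probs(fv) for fv in feature_vectors])
                for clf in self.classifiers]

class DecoderEnsemble:
    def __init__(self, decoders):
        self.decoders = decoders

    def transform(self, log_prob_sets):
        """Map each log-probability set to an output symbol sequence."""
        return [dec.best_path(lp) for dec, lp in zip(self.decoders, log_prob_sets)]

def query_speech_database(database, symbol_sequences):
    """Retrieve stored utterances whose indexed symbols match any decoded sequence."""
    return [utt for utt in database if utt.symbols in symbol_sequences]
```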
Abstract:
An array of microphones placed on a mobile robot provides multiple channels of audio signals. A received set of audio signals is called an audio segment, which is divided into multiple frames. A phase analysis is performed on a frame of the signals from each pair of microphones. If both microphones are in an active state during the frame, a candidate angle is generated for each such pair of microphones. The result is a list of candidate angles for the frame. This list is processed to select a final candidate angle for the frame. The list of candidate angles is tracked over time to assist in the process of selecting the final candidate angle for an audio segment.
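A hedged sketch of how one candidate angle per microphone pair might be computed for a frame: a phase-only (PHAT-weighted) cross-correlation yields a time delay, which is converted to an arrival angle from the pair's spacing. The PHAT weighting and the median-based selection of the frame's final angle are assumptions, not necessarily the method in the abstract.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second

def candidate_angle(frame_a, frame_b, mic_distance, sample_rate):
    """Estimate an arrival angle (degrees) for one microphone pair from one frame."""
    spec_a, spec_b = np.fft.rfft(frame_a), np.fft.rfft(frame_b)
    cross = spec_a * np.conj(spec_b)
    cross /= np.abs(cross) + 1e-12                    # keep phase only (PHAT weighting)
    corr = np.fft.irfft(cross, n=len(frame_a))
    lag = int(np.argmax(np.abs(corr)))
    if lag > len(frame_a) // 2:                       # wrap negative lags
        lag -= len(frame_a)
    delay = lag / sample_rate                         # time difference of arrival
    cos_theta = np.clip(delay * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def frame_angle(candidates):
    """Pick a single angle for the frame; here simply the median of the candidate list."""
    return float(np.median(candidates)) if candidates else None
```

In this sketch, pairs in which either microphone is inactive for the frame would simply contribute no candidate, and the per-frame angles would feed whatever tracking is applied over the audio segment.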
Abstract:
A decoder processes a first bitstream element (e.g., a pull-down flag) in a first syntax layer (e.g., sequence layer or entry point layer) above the frame layer in a bitstream for a video sequence, the bitstream comprising encoded source video having a source type (e.g., progressive or interlace). The decoder processes frame data in a second syntax layer (e.g., frame layer) of the bitstream for a frame (such as an interlaced frame or progressive frame, depending on source type, or a skipped frame) in the video sequence. The first bitstream element indicates whether a repeat-picture element (e.g., a repeat-frame element or a repeat-field element) is present or absent in the frame data in the second syntax layer.
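An illustrative parsing sketch of the gating described above. The field names, bit widths, and the `read_bit`/`read_bits` reader interface are assumptions standing in for the actual bitstream syntax: the sequence-layer pull-down flag determines whether a repeat-picture element is read from each frame's data.

```python
def parse_sequence_header(reader):
    """Read the syntax-layer elements that gate per-frame repeat-picture elements."""
    return {
        "pulldown": reader.read_bit(),   # 1 => repeat-picture elements present in frame data
        "interlace": reader.read_bit(),  # source type: interlace vs. progressive
    }

def parse_frame_header(reader, seq):
    """Read frame-layer data; repeat-picture elements appear only when pulldown is on."""
    frame = {}
    if seq["pulldown"]:
        if seq["interlace"]:
            frame["repeat_first_field"] = reader.read_bit()    # repeat-field element
        else:
            frame["repeat_frame_count"] = reader.read_bits(2)  # repeat-frame element
    return frame
```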
Abstract:
A method is described for efficiently determining the total end-to-end distortion of a pre-compressed data stream, such as a video stream or other media stream, at the time of delivery over a lossy network, and for providing adaptive error-resilient delivery schemes based on the distortion estimates. The method can be utilized with single-layer or multilayer packet streams and is particularly well suited to video streams. By way of example, distortion estimates are obtained by generating side-information at the time the data stream is compressed; the side-information is used in conjunction with information about the network status to determine an estimated distortion for a group of packets when the data stream is transported over the network to a destination end. This estimation may be utilized within the described resiliency techniques, in which the error correction mechanism is selected in response to the estimated distortion and the selection may be additionally refined with reference to cost factors.
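A simplified sketch of the estimation and selection steps, under stated assumptions: each packet carries precomputed side-information (the distortion contributed if that packet is lost), losses are treated as independent with per-packet probabilities supplied by the network status, and each candidate scheme exposes an `effective_loss(p)` residual-loss model and a `cost`. All of these interfaces are illustrative, not the method's actual definitions.

```python
def estimate_distortion(side_info, loss_probabilities):
    """Expected end-to-end distortion for a group of packets under independent losses."""
    return sum(d * p for d, p in zip(side_info, loss_probabilities))

def choose_protection(side_info, loss_probabilities, schemes):
    """Pick the error-correction scheme with the best distortion-plus-cost score."""
    best, best_score = None, float("inf")
    for scheme in schemes:
        residual = [scheme.effective_loss(p) for p in loss_probabilities]
        score = estimate_distortion(side_info, residual) + scheme.cost
        if score < best_score:
            best, best_score = scheme, score
    return best
```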
Abstract:
A decoder receives an entry point header comprising plural control parameters for an entry point segment corresponding to the entry point header. The entry point header is in an entry point layer of a bitstream comprising plural layers. The decoder decodes the entry point header. The plural control parameters can include various combinations of control parameters such as a pan scan on/off parameter, a reference frame distance on/off parameter, a loop filtering on/off parameter, a fast chroma motion compensation on/off parameter, an extended range motion vector on/off parameter, a variable sized transform on/off parameter, an overlapped transform on/off parameter, a quantization decision parameter, an extended differential motion vector coding on/off parameter, a broken link parameter, a closed entry parameter, one or more coded picture size parameters, one or more range mapping parameters, a hypothetical reference decoder buffer parameter, and/or other parameters.
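A decoder-side sketch of reading such a header. The flag names, their order, the bit widths, and the `read_bit`/`read_bits` reader interface are assumptions standing in for the control parameters listed in the abstract, not the actual entry-point syntax.

```python
ENTRY_POINT_FLAGS = [
    "broken_link", "closed_entry", "pan_scan", "reference_frame_distance",
    "loop_filtering", "fast_chroma_mc", "extended_mv", "variable_sized_transform",
    "overlapped_transform", "extended_dmv",
]

def decode_entry_point_header(reader):
    """Read the on/off control parameters, then optional size and range-mapping fields."""
    header = {name: reader.read_bit() for name in ENTRY_POINT_FLAGS}
    header["quantizer_mode"] = reader.read_bits(2)     # quantization decision parameter
    if reader.read_bit():                              # coded picture size present
        header["coded_width"] = reader.read_bits(12)
        header["coded_height"] = reader.read_bits(12)
    if reader.read_bit():                              # range mapping present
        header["range_map_y"] = reader.read_bits(3)
    return header
```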