Abstract:
A method for detecting and quantifying losses in at least one of video and audio equipment, comprising the steps of generating a test pattern; processing the test pattern through the video equipment; and displaying the processed test pattern on a display to a viewer. In one embodiment, the test pattern is indicative of video compression losses due to quantization. In another embodiment, the test pattern is indicative of a color transformation mismatch after encoding/decoding with incompatible video transmission standards. In a third embodiment, the test pattern is iso-luminant after a color transformation between first and second video transmission standards. In a fourth embodiment, the test pattern is indicative of lip-sync error.
Abstract:
A digital video decoding system receives packetized video data representing programs conveyed on a plurality of video channels. The system includes a plurality of buffers for storing encoded video data representing images of video programs conveyed on a corresponding plurality of video channels. An individual buffer, corresponding to an individual video channel, stores sufficient encoded video data to prevent an underflow condition following switching to decode a program conveyed on the individual video channel. A processor initiates switching to decode a program conveyed on a selected one of the plurality of video channels in response to a user channel selection input. A decoder decodes encoded video data received from the one of the plurality of buffers corresponding to the program conveyed on the selected video channel, as determined by switching initiated by the processor. The decoder also predicts a next channel to be selected by a user based on (a) predetermined user channel and program preference criteria, (b) predetermined user channel navigation patterns, or (c) user data entry device sensory data.
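The per-channel buffering idea can be illustrated with a small sketch. The class and the sizing formula below are illustrative assumptions (the abstract only requires that each buffer hold enough data to prevent underflow after a switch): a buffer is considered switch-ready once it holds enough encoded bytes to sustain decoding for an assumed startup delay.

```python
# Hypothetical sketch of per-channel prefill buffering. Names and the
# sizing rule are assumptions for illustration, not the patent's method.

def min_prefill_bytes(bitrate_bps: float, startup_delay_s: float) -> int:
    """Encoded bytes that must already be buffered so the decoder can
    consume data for `startup_delay_s` seconds before new data arrives."""
    return int(bitrate_bps * startup_delay_s / 8)

class ChannelBuffer:
    def __init__(self, bitrate_bps: float, startup_delay_s: float):
        self.data = bytearray()
        self.threshold = min_prefill_bytes(bitrate_bps, startup_delay_s)

    def feed(self, chunk: bytes) -> None:
        self.data.extend(chunk)

    def ready_to_decode(self) -> bool:
        # Switching to this channel is safe only once the prefill
        # threshold has been reached (no underflow at startup).
        return len(self.data) >= self.threshold
```

A system holding one such buffer per channel (including predicted next channels) can switch without waiting for the new channel's data to accumulate.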
Abstract:
A method for generating a transition stream and processing video, audio or other data within the transition stream using, respectively, pixel domain processing, audio domain processing or other data domain processing.
Abstract:
Compression-related information is used to constrain the selection and control of video imagery and content used to produce one or more uncompressed video streams for subsequent compression processing. Rather than taking an uncompressed video stream “as is” for compression processing, characteristics of compression processing are taken into consideration during the video production stage when the uncompressed video stream is generated. Different types of constraints include “intra-frame” constraints that constrain video content within a frame of a video stream, “inter-frame” constraints that constrain video content from frame to frame within a video stream, and “inter-stream” constraints that constrain video content across different video streams. Two or more different constraints and two or more different types of constraints may be applied simultaneously. The compression-related information may be “static” (e.g., in the form of processing “rules” that are applied) or fed back in real time from the video compression stage as “dynamic” information. By taking the subsequent compression processing into account during the video production stage, the resulting uncompressed video stream(s) can be encoded (e.g., to achieve more programs per channel and/or higher quality per program) using computationally inexpensive “objective” video compression algorithms that operate without taking video content into consideration, while still achieving the bit rate and video quality levels achieved using computationally expensive “subjective” video compression algorithms that do take video content into consideration.
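One "inter-frame" constraint of the general kind described can be sketched as a per-pixel clamp on frame-to-frame change, which keeps prediction residues small for the downstream encoder. The specific clamp rule and names are assumptions for illustration; the abstract does not prescribe particular constraints.

```python
# Toy illustration of an "inter-frame" production constraint: cap how
# much any pixel may change between consecutive frames. The rule is an
# assumption, not one of the patent's enumerated constraints.

def apply_interframe_constraint(prev, curr, max_delta):
    """Return `curr` with each pixel clamped to within `max_delta`
    of the corresponding pixel in `prev`."""
    out = []
    for p, c in zip(prev, curr):
        lo, hi = p - max_delta, p + max_delta
        out.append(min(max(c, lo), hi))
    return out
```

An "intra-frame" or "inter-stream" constraint would have the same shape: a rule applied to pixel data at production time so the compressor sees easier input.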
Abstract:
When logos or other imagery are to be added to compressed digital video bitstreams, certain constraints are applied to the encoder that generates the original compressed bitstream to enable a video logo processor (e.g., at a local broadcaster) to insert logo-inserted encoded data into the bitstream without placing substantial processing demands on the video logo processor. In one embodiment, areas where logos can be inserted are identified and the encoder is not allowed to use image data within those logo areas as reference data when performing motion-compensated inter-frame differencing for pixels outside of the logo areas. Preferably, the compressed data corresponding to the desired location for logo insertion are extracted from the compressed bitstream and replaced by logo-inserted encoded data. As a result, logos can be inserted into compressed digital video bitstreams without having to completely decode and re-encode the bitstreams, while maintaining the overall quality of the video display.
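The encoder-side constraint can be sketched as a motion-vector admissibility test: any candidate vector whose reference block overlaps a reserved logo region is rejected, so the logo area can later be replaced without corrupting predictions that point into it. The rectangle representation and function names are illustrative assumptions.

```python
# Sketch of the logo-area motion-vector constraint. Coordinates and the
# rectangle convention (exclusive right/bottom edges) are assumptions.

def mv_allowed(block_x, block_y, mv_x, mv_y, block_size, logo_rect):
    """True if the (block_size x block_size) reference block at the
    motion-compensated position does not overlap the logo rectangle."""
    rx, ry = block_x + mv_x, block_y + mv_y
    lx0, ly0, lx1, ly1 = logo_rect  # exclusive right/bottom edges
    return (rx + block_size <= lx0 or rx >= lx1 or
            ry + block_size <= ly0 or ry >= ly1)
```

During motion estimation the encoder would simply skip any candidate vector for which this test fails, leaving the logo region self-contained in the compressed bitstream.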
Abstract:
Video signal compression apparatus generates residues representing pixel value differences between predicted and real pixel values of a current frame of a video signal. Noise reduction circuitry, in the form of nonlinear processing functions, attenuates lower-amplitude residues more strongly than higher-amplitude residues and is responsive to a noise estimate. The processed residues are transformed to provide a compressed video data output. The nonlinear processing functions attenuate noise and reduce image distortion.
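A nonlinear function of the kind described might look like the soft-coring curve below: small residues (likely noise) are attenuated strongly, large residues pass nearly unchanged, and the knee scales with the noise estimate. The exact curve is an assumption; the abstract requires only this general amplitude-dependent shape.

```python
# One plausible noise-adaptive nonlinear residue function. The specific
# gain curve is an illustrative assumption, not the patent's circuit.

def attenuate_residue(r: float, noise_estimate: float) -> float:
    """Soft-core attenuation: gain rises from 0 toward 1 as |r|
    grows relative to the noise estimate, preserving sign."""
    if noise_estimate <= 0:
        return r
    gain = abs(r) / (abs(r) + noise_estimate)
    return r * gain
```

Because the gain depends on the noise estimate, the same circuit adapts automatically: noisy sources get heavier low-amplitude attenuation, clean sources pass through nearly untouched.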
Abstract:
During transform-based video compression processing, motion vectors, which are identified during motion estimation and then used during motion-compensated inter-frame differencing, are constrained to coincide with block boundaries in the reference data. Block-based motion vectors have components that correspond to integer multiples of block dimensions. For example, for (8×8) blocks, allowable motion vector components are ( . . . , −16, −8, 0, +8, +16, . . . ). Constraining motion vectors in this way enables the resulting encoded video bitstream to be further processed in the transform domain without having to apply inverse and forward transforms. In particular, an existing input bitstream is partially decoded to recover the motion vectors and prediction error (i.e., dequantized transform coefficients). Because the motion vectors coincide with block boundaries in the corresponding reference data, motion-compensated inter-frame addition can then be performed in the transform domain to generate transform data for subsequent processing (which may ultimately involve re-encoding the transform data into another encoded video bitstream). Because motion compensation can be performed in the transform domain, the bitstream data can be further processed in the transform domain without having to apply expensive and lossy inverse and forward transforms.
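The constraint itself is simple enough to sketch: each motion-vector component is snapped to the nearest integer multiple of the block dimension, so the reference region always coincides with block boundaries. The function name is illustrative; the abstract specifies only the multiples-of-block-size rule.

```python
# Sketch of the block-aligned motion-vector constraint. Snapping to the
# nearest multiple is one way to enforce it during motion estimation.

def snap_to_block_grid(mv: tuple, block_size: int = 8) -> tuple:
    """Round each motion-vector component to the nearest multiple of
    block_size, e.g. (..., -16, -8, 0, +8, +16, ...) for 8x8 blocks."""
    return tuple(block_size * round(c / block_size) for c in mv)
```

In practice an encoder would restrict its motion search to this grid directly rather than snapping afterward, but the admissible set of vectors is the same.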
Abstract:
Noise reduction circuitry in a video signal compression apparatus of the predictive DPCM type includes a simple nonlinear processing element within the DPCM loop to eliminate residues between predicted and real image signals that are smaller than a predetermined value. Eliminating such residues dramatically reduces the amount of compressed data generated for signals containing even modest amounts of noise.
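The "simple nonlinear processing element" can be read as a hard coring stage: residues with magnitude below the threshold are set to zero inside the DPCM loop, so near-noise differences generate no compressed data at all. The threshold and names are assumptions for illustration.

```python
# Hard-coring sketch of the in-loop nonlinear element. The threshold
# value would be set from the expected noise floor (an assumption here).

def core_residue(residue: int, threshold: int) -> int:
    """Zero out residues smaller in magnitude than `threshold`;
    pass larger residues through unchanged."""
    return 0 if abs(residue) < threshold else residue
```

This differs from the soft attenuation of the previous abstract in that it is a step function: residues are either dropped entirely or passed intact.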
Abstract:
A discontinuous motion special effects generator for television is described, including memory means responsive to a television video signal for storing signals representing a television picture field, and memory control means for controlling the writing and reading of the picture fields. The generator includes means for automatically providing control pulses at a rate corresponding to the desired discontinuous motion, and means coupled to the memory control means and responsive to the control pulses for writing a single picture field of video only on the occurrence of each control pulse, whereby the video output changes only on the occurrence of each control pulse.
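The generator's behaviour can be modelled in a few lines: the stored field is refreshed only when a control pulse occurs, so the output "steps" at the pulse rate instead of following every incoming field. All names, and modelling fields as plain values, are illustrative assumptions.

```python
# Toy model of the discontinuous-motion effect: write a new field into
# memory only on each control pulse; read out the stored field always.

def discontinuous_motion(fields, pulse_every):
    """Output one value per input field, but refresh the stored field
    only on every `pulse_every`-th field (the control pulses)."""
    stored = None
    out = []
    for i, field in enumerate(fields):
        if i % pulse_every == 0:   # control pulse: write a single field
            stored = field
        out.append(stored)         # read side always shows stored field
    return out
```

With `pulse_every = 1` the output tracks the input normally; larger values produce the strobe-like stepped motion the abstract describes.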
Abstract:
A system is provided for compensating for signal defects such as dropouts in recorded television signals. A recovered video signal is delayed for a period of substantially one scanning line, and the delayed signal is coupled to two channels, one for the luminance and scan synchronizing information and the other for the color information. The color information is phase-reversed each time it is delayed so as to provide a proper phase relationship when reinserting the delayed information into the signal path. The two channels are recombined to provide a compensating signal which is substituted for the original video signal during a dropout. In addition, when a head-switching transient such as occurs in a quadruplex VTR is sensed, a further mode of operation is activated in which the system substitutes a portion of the prior line horizontal blanking interval for the duration of the switching transient so as to prevent degradation of the synchronizing signal and resultant disturbance to the operation of the recorder servo systems.
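The core substitution logic can be sketched with scan lines modelled as (luma, chroma) pairs: during a dropout the previous line is reinserted, with the chroma component phase-reversed (sign-inverted in this model) to preserve the subcarrier phase relationship. This data model is a simplifying assumption standing in for the analog one-line delay of the abstract.

```python
# Illustrative dropout compensator over (luma, chroma) scan-line pairs.
# The sign inversion models the per-line chroma phase reversal; the
# head-switching mode of the abstract is not modelled here.

def compensate(lines, dropout_flags):
    """Replace each flagged line with the previous good line,
    with its chroma phase-reversed."""
    out = []
    prev = None
    for (luma, chroma), dropped in zip(lines, dropout_flags):
        if dropped and prev is not None:
            p_luma, p_chroma = prev
            line = (p_luma, -p_chroma)  # phase-reverse color on reinsertion
        else:
            line = (luma, chroma)
        out.append(line)
        prev = line
    return out
```

Consecutive dropouts keep substituting (and re-reversing) the last good line, mirroring the recirculating one-line delay described in the abstract.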