Abstract:
A robust fine granularity scalability (RFGS) video encoder includes a base layer encoder and an enhancement layer encoder, in which motion compensated difference images are generated by comparing an original image to motion compensated predicted images at the base layer and the enhancement layer. Based on leaky and partial predictions, a high quality reference image is constructed at the enhancement layer to improve temporal prediction. In the construction of the high quality reference image, one parameter, β, controls the number of bitplanes of the enhancement layer difference coefficients that are used, and another parameter, α, controls the amount of predictive leak. A spatial scalability module allows the processed pictures at the base layer and the enhancement layer to have identical or different spatial resolutions.
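The leaky and partial prediction described above can be sketched as follows. This is a minimal illustration, not the patented method itself: the function names, the 8-bitplane assumption, and the simple shift-based truncation are all assumptions made for the example; β keeps only the most significant bitplanes of the enhancement difference, and α scales the leak.

```python
import numpy as np

def truncate_bitplanes(coeffs, beta, total_planes=8):
    """Partial prediction: keep only the beta most significant bitplanes
    of integer difference coefficients (8 planes assumed for illustration)."""
    if beta >= total_planes:
        return coeffs
    shift = total_planes - beta
    sign = np.sign(coeffs)
    mag = np.abs(coeffs).astype(np.int64)
    return sign * ((mag >> shift) << shift)

def high_quality_reference(base_ref, enh_diff, alpha, beta):
    """Leaky prediction: the high quality reference combines the base layer
    reference with a leak-scaled (alpha), bitplane-truncated (beta)
    enhancement layer difference."""
    return base_ref + alpha * truncate_bitplanes(enh_diff, beta)
```

With α = 0 the enhancement reference degenerates to the base layer reference (maximum drift robustness); with α = 1 and a large β it uses the full enhancement information (maximum coding efficiency).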
Abstract:
The present invention relates to an architecture for stack robust fine granularity scalability (SRFGS); more particularly, to SRFGS providing temporal scalability and SNR scalability simultaneously. SRFGS first simplifies the RFGS temporal prediction architecture and then generalizes the prediction concept as follows: the quantization error of the previous layer can be inter-predicted from the reconstructed image in the previous time instance of the same layer. With this concept, the RFGS architecture can be extended to multiple layers that form a stack, improving temporal prediction efficiency. SRFGS can be optimized at several operating points to fit the requirements of various applications, while the fine granularity and error robustness of RFGS are retained. Experimental results show that SRFGS can improve the performance of RFGS by 0.4 to 3.0 dB in PSNR.
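The stack concept can be sketched for one time instance as below. This is an illustrative skeleton under stated assumptions, not the patented encoder: layer 0 codes the original frame, and each higher layer codes the quantization error left by the layer below it, predicting that error from the same layer's reconstruction at the previous time instance; the layer-dependent `quantize` callable is an assumption of the example.

```python
import numpy as np

def srfgs_encode_frame(original, prev_recon_per_layer, quantize, num_layers):
    """Sketch of one time instance of the SRFGS stack.  prev_recon_per_layer
    holds each layer's reconstruction from the previous time instance and is
    used as that layer's temporal prediction."""
    residual = original
    coded, recon_per_layer = [], []
    for k in range(num_layers):
        pred = prev_recon_per_layer[k]     # temporal prediction within layer k
        q = quantize(residual - pred, k)   # code the prediction difference
        coded.append(q)
        recon = pred + q                   # layer-k reconstruction
        recon_per_layer.append(recon)
        residual = residual - recon        # quantization error passed up the stack
    return coded, recon_per_layer
```

Each pass around the loop is one layer of the stack; what layer k fails to represent becomes the signal that layer k + 1 predicts and codes.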
Abstract:
A method for automatically focusing a camera including the steps of (A) recording a first topology and a second topology, where the second topology occurs temporally after the first topology, and (B) comparing the first topology with the second topology. A focus of the camera is automatically adjusted based upon one or more similarities between the first topology and the second topology.
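The comparison step can be illustrated as follows. This sketch assumes, as the abstract does not specify, that a "topology" is a sequence of focus-metric samples taken over lens positions, and that similarity is measured by normalized correlation; both choices, and the 0.95 threshold, are assumptions of the example.

```python
import math

def similarity(first, second):
    """Normalized correlation between two recorded topologies (assumed here
    to be sequences of focus-metric samples); 1.0 means identical shape."""
    num = sum(a * b for a, b in zip(first, second))
    den = math.sqrt(sum(a * a for a in first)) * math.sqrt(sum(b * b for b in second))
    return num / den if den else 0.0

def adjust_focus(first, second, position, step, threshold=0.95):
    """If consecutive topologies are similar, the scene is stable enough to
    act on: step the lens toward the sharpest position in the newer topology."""
    if similarity(first, second) < threshold:
        return position                # scene changed; hold the current focus
    best = max(range(len(second)), key=second.__getitem__)
    if best > position:
        return position + step
    if best < position:
        return position - step
    return position
```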
Abstract:
A method of rate-distortion computations for video compression is disclosed. The method may include steps (A) to (C). Step (A) may generate a plurality of transform coefficients from a residual block of the video using a circuit. Step (B) may generate a block distortion value (i) based on the transform coefficients and (ii) independent of a plurality of inverse transform samples produced from the residual block. Step (C) may generate a rate-distortion value from the block distortion value.
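A plausible reading of steps (A) to (C) is sketched below: for an orthonormal transform, Parseval's relation makes the sum of squared coefficient errors equal the pixel-domain distortion, so the inverse transform can be skipped. The scalar quantizer and the crude rate proxy are assumptions of the example, not the claimed method.

```python
import numpy as np

def rd_cost(coeffs, qstep, lam):
    """Transform-domain rate-distortion sketch: distortion is summed over
    quantization errors of the coefficients themselves, with no inverse
    transform samples produced from the residual block."""
    levels = np.round(coeffs / qstep)                # scalar quantization
    recon = levels * qstep                           # dequantized coefficients
    distortion = float(np.sum((coeffs - recon) ** 2))  # transform-domain SSD
    # crude rate proxy (assumption): count of nonzero levels plus log-magnitude
    rate = float(np.sum(np.abs(levels) > 0) + np.sum(np.log2(1 + np.abs(levels))))
    return distortion + lam * rate                   # Lagrangian RD value
```

Avoiding the inverse transform per candidate block is the practical payoff: mode decision can evaluate many candidates with only forward-transformed data.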
Abstract:
A camera including a first queue, a second queue, and a processor. The processor is generally coupled to the first queue and the second queue. The processor embodies routines that, when executed by the processor, cause the processor to (i) record a first topology in the first queue and a second topology in the second queue and (ii) compare the first topology with the second topology. Recording of the second topology is generally started after the first topology is completely recorded. A focus of the camera is automatically adjusted based upon one or more similarities between the first topology and the second topology.
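The two-queue arrangement can be sketched as below. The class name, the sum-of-absolute-differences similarity test, its threshold, and the placeholder focus policy are all assumptions of the example; the point illustrated is the ordering constraint: the second topology only starts recording after the first is completely recorded.

```python
from collections import deque

class AutoFocusCamera:
    """Sketch of the two-queue camera: samples fill the first queue to
    capacity before any go to the second, and a comparison runs once
    both topologies are complete."""

    def __init__(self, depth):
        self.first = deque(maxlen=depth)
        self.second = deque(maxlen=depth)
        self.depth = depth
        self.focus = 0

    def record(self, sample):
        if len(self.first) < self.depth:
            self.first.append(sample)    # finish first topology before starting second
        else:
            self.second.append(sample)
        if len(self.second) == self.depth:
            self._compare_and_adjust()

    def _compare_and_adjust(self):
        # similarity via sum of absolute differences (assumption of the sketch)
        sad = sum(abs(a - b) for a, b in zip(self.first, self.second))
        if sad < 1e-3 * self.depth:      # topologies similar: scene is stable
            self.focus += 1              # nudge focus (placeholder policy)
        self.first = deque(self.second, maxlen=self.depth)  # slide the window
        self.second.clear()
```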