Abstract:
Two software-only prefix encoding techniques employ encoding look-up tables to produce contributions to the encoded bit stream that are incremented in integral numbers of bytes, accelerating encoding at the cost of an acceptable increase in memory requirements. The first technique, referred to as offset-based encoding, employs encoding tables which eliminate most of the bit-based operations that need to be performed by a prefix encoder without inordinately expanding memory requirements. In offset-based encoding, a Huffman table is employed which contains information for each number of bits by which the length of a Huffman word is offset from an integral number of bytes. The encoding method generates bytes of encoded data, even though the Huffman code has variable length code words for each symbol to be encoded. The second technique, referred to as byte-based encoding, employs a byte-based Huffman encoding table which operates even faster than the offset-based encoding scheme because it does not employ any bit-based operations at all; however, this is achieved at the expense of a considerable expansion in memory requirements.
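The offset-based idea above can be sketched in a few lines: the encoder tracks how many bits the pending output is offset from a byte boundary, and emits only whole bytes. The Huffman table below is a hypothetical illustration, not the patent's table.

```python
# Hypothetical prefix-free Huffman table: symbol -> (code value, code length in bits).
HUFF_TABLE = {"a": (0b0, 1), "b": (0b10, 2), "c": (0b110, 3), "d": (0b111, 3)}

def encode(symbols):
    """Offset-based encoding sketch: 'offset' counts the pending bits that do
    not yet fill a whole byte; output is produced strictly in whole bytes."""
    out = bytearray()
    acc = 0        # pending bits, MSB-first
    offset = 0     # number of pending bits (0..7) past the last byte boundary
    for s in symbols:
        code, length = HUFF_TABLE[s]
        acc = (acc << length) | code
        offset += length
        while offset >= 8:                  # emit every completed byte
            offset -= 8
            out.append((acc >> offset) & 0xFF)
            acc &= (1 << offset) - 1        # keep only the leftover offset bits
    if offset:                              # flush, zero-padding the last byte
        out.append((acc << (8 - offset)) & 0xFF)
    return bytes(out)
```

A byte-based table would go one step further and pre-compute, for every (offset, symbol) pair, the exact byte contributions, removing the shifts and masks above entirely at the cost of a much larger table.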
Abstract:
To let decoder side motion vector derivation (DMVD) coded blocks be decoded in parallel, decoder side motion estimation (ME) dependency on spatially neighboring reconstructed pixels can be removed. Mirror ME and projective ME are only performed on two reference pictures, and the spatially neighboring reconstructed pixels will not be considered in the measurement metric of the decoder side ME. Also, at a video decoder, motion estimation for a target block in a current picture can be performed by calculating a motion vector for a spatially neighboring DMVD block, using the calculated motion vector to predict motion vectors of neighboring blocks of the DMVD block, and decoding the DMVD block and the target block in parallel. In addition, determining a best motion vector for a target block in a current picture can be performed by searching only candidate motion vectors in a search window, wherein candidate motion vectors are derived from a small range motion search around motion vectors of neighboring blocks.
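The mirror-ME measurement described above can be sketched as follows: the decoder searches a small window and picks the motion vector that best aligns a block fetched at +mv from one reference picture with the block fetched at -mv from the other, using no spatially neighboring reconstructed pixels in the metric. Frame layout, block size, and search range here are illustrative assumptions.

```python
def sad(ref0, ref1, x0, y0, x1, y1, size):
    """Sum of absolute differences between two size x size blocks."""
    return sum(abs(ref0[y0 + j][x0 + i] - ref1[y1 + j][x1 + i])
               for j in range(size) for i in range(size))

def mirror_me(ref0, ref1, bx, by, size=4, search=2):
    """Return the (mvx, mvy) minimizing SAD between the block at +mv in ref0
    and the mirrored block at -mv in ref1 (equal temporal distances assumed).
    The metric never touches the current picture's reconstructed neighbors,
    so DMVD blocks using it can be decoded in parallel."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = sad(ref0, ref1, bx + dx, by + dy, bx - dx, by - dy, size)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]
```

Restricting `search` to a small range around the neighbors' motion vectors corresponds to the candidate-based search window mentioned in the last sentence of the abstract.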
Abstract:
Systems, apparatus, articles, and methods are described including operations to generate a weighted look-up-table based at least in part on individual pixel input values within an active block region and on a plurality of contrast compensation functions. A second level compensation may be performed for a center pixel block of the active region based at least in part on the weighted look-up-table.
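As a minimal sketch of the idea: a weight derived from the input pixels of the active region blends a plurality of contrast compensation curves into one look-up table, which is then applied to the center pixel block. The two gamma-style curves and the mean-intensity weighting below are assumptions chosen for illustration, not the patent's functions.

```python
def build_weighted_lut(active_pixels, levels=256):
    """Blend two assumed contrast compensation curves into one LUT, weighted
    by the mean intensity of the active block region's input pixels."""
    mean = sum(active_pixels) / len(active_pixels)
    w = mean / (levels - 1)                # weight derived from input pixels
    lut = []
    for v in range(levels):
        x = v / (levels - 1)
        dark = x * x                       # compensation curve 1 (darkening)
        bright = x ** 0.5                  # compensation curve 2 (brightening)
        y = w * dark + (1.0 - w) * bright  # weighted combination
        lut.append(round(y * (levels - 1)))
    return lut

def apply_second_level(center_block, lut):
    """Second-level compensation: map each center-block pixel through the LUT."""
    return [[lut[p] for p in row] for row in center_block]
```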
Abstract:
Techniques are described to identify one or more candidate reference blocks used to generate a prediction block for encoding a current coding block. The candidate reference blocks can be in the same layer as the current coding block or a different layer. In addition, the candidate reference blocks do not have to be co-located with the current coding block. Motion vectors and shift vectors can be used to identify the one or more candidate reference blocks. In addition, uniform and non-uniform weighting can be applied to the one or more candidate reference blocks to generate the prediction block. Accordingly, an encoder can determine, and identify to a decoder, reference blocks that provide a desirable rate-distortion cost.
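The weighting step can be sketched as below: candidate reference blocks (fetched via motion or shift vectors) are combined with uniform or non-uniform per-block weights to form the prediction block. Function names, block size, and weights are illustrative assumptions.

```python
def fetch_block(frame, x, y, size):
    """Fetch a size x size candidate reference block at the position a motion
    vector or shift vector points to (frame may be any layer)."""
    return [[frame[y + j][x + i] for i in range(size)] for j in range(size)]

def weighted_prediction(blocks, weights):
    """Combine candidate reference blocks into a prediction block using
    per-block weights (uniform if all equal, non-uniform otherwise)."""
    size = len(blocks[0])
    pred = [[0.0] * size for _ in range(size)]
    for blk, w in zip(blocks, weights):
        for j in range(size):
            for i in range(size):
                pred[j][i] += w * blk[j][i]
    return [[round(v) for v in row] for row in pred]
```

An encoder would evaluate several candidate sets and weightings, pick the one with the best rate-distortion cost, and signal that choice to the decoder.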
Abstract:
Systems, apparatus and methods are described related to real-time automatic conversion of 2-dimensional images or video to 3-dimensional stereo images or video.
Abstract:
Systems and methods of detecting an object using motion estimation may include a processor and motion estimation and object detection logic coupled to the processor. The motion estimation and object detection logic may be configured to include logic to detect an object in a frame of a video based on motion estimation. The video may include a first frame and a second frame. The motion estimation may be performed on a region of the second frame using a sum of absolute differences (SAD) between the region of the second frame and a corresponding region of the first frame.
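A minimal sketch of the SAD step described above: the region of the second frame is compared pixel-by-pixel against the co-located region of the first frame, and a large SAD flags motion for the detection logic. The threshold value is an illustrative assumption.

```python
def region_sad(frame_a, frame_b, x, y, w, h):
    """Sum of absolute differences between co-located w x h regions."""
    return sum(abs(frame_a[y + j][x + i] - frame_b[y + j][x + i])
               for j in range(h) for i in range(w))

def detect_motion(frame1, frame2, x, y, w, h, threshold=100):
    """Flag the region as containing motion (a moving object candidate)
    when its SAD against the first frame exceeds an assumed threshold."""
    return region_sad(frame1, frame2, x, y, w, h) > threshold
```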