Abstract:
Aspects of the disclosure provide a method for merging compressed access units according to the compression rates and/or positions of the respective compressed access units. The method can include receiving a sequence of compressed access units corresponding to a sequence of raw access units partitioned from an image or a video frame and corresponding to a sequence of memory spaces in a frame buffer, and determining a merged access unit that includes at least two consecutive compressed access units based on the compression rates and/or positions of the sequence of compressed access units. The merged access unit is to be stored in the frame buffer with a reduced gap between the at least two consecutive compressed access units, compared with storing those compressed access units in their corresponding memory spaces in the sequence of memory spaces.
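The merging idea above can be sketched as a greedy packer: whenever two (or more) consecutive compressed units fit together in one fixed-size memory space, store them back-to-back and eliminate the gap. This is a minimal illustration; `SLOT_SIZE`, `merge_units`, `max_merge`, and the greedy policy are assumptions, not details from the patent.

```python
SLOT_SIZE = 256  # hypothetical fixed memory space per raw access unit (bytes)

def merge_units(compressed_sizes, max_merge=2):
    """Greedily merge up to `max_merge` consecutive compressed access units
    whenever their combined size fits in one slot, so they are stored
    back-to-back with no gap. Returns (start_offset, stored_size) per
    stored (possibly merged) unit."""
    layout = []
    i = 0
    slot = 0
    while i < len(compressed_sizes):
        total = compressed_sizes[i]
        count = 1
        # absorb following units while they still fit in the same slot
        while (count < max_merge and i + count < len(compressed_sizes)
               and total + compressed_sizes[i + count] <= SLOT_SIZE):
            total += compressed_sizes[i + count]
            count += 1
        layout.append((slot * SLOT_SIZE, total))
        slot += 1
        i += count
    return layout
```

With four units of 100, 100, 200, and 200 bytes, the first two merge into one 200-byte region, so the sequence occupies three slots instead of four.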
Abstract:
An image compression method includes at least the following steps: receiving a plurality of pixels of a frame, wherein the pixel data of each pixel has a plurality of color channel data corresponding to a plurality of different color channels, respectively; encoding the pixel data of each pixel and generating bit-streams corresponding to the plurality of color channel data of the pixel, wherein the bit-streams corresponding to the plurality of color channel data of the pixel are separated; packing bit-streams of the same color channel from different pixels into color channel bit-stream segments, wherein each of the bit-stream segments has a same predetermined size; and concatenating the color channel bit-stream segments of the different color channels into a final bit-stream. Alternatively, the color channel bit-stream segments of the same pixel are concatenated into a concatenated bit-stream portion, and the concatenated bit-stream portions of different pixels are concatenated into a final bit-stream.
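Both packing orders described above can be sketched on bit strings. Here each per-channel bit-stream is a `'0'`/`'1'` string, and `SEG_SIZE`, the zero padding, and the function names are illustrative assumptions rather than details from the patent.

```python
SEG_SIZE = 8  # assumed predetermined segment size, in bits

def pad(seg):
    """Zero-pad (or trim) a bit string to the fixed segment size."""
    return seg.ljust(SEG_SIZE, '0')[:SEG_SIZE]

def pack_by_channel(pixel_streams):
    """pixel_streams: list of {channel: bitstring} dicts, one per pixel.
    Packs same-channel bits of different pixels into fixed-size segments,
    then concatenates the segments channel by channel."""
    channels = pixel_streams[0].keys()
    out = ''
    for ch in channels:
        bits = ''.join(p[ch] for p in pixel_streams)
        for i in range(0, len(bits), SEG_SIZE):
            out += pad(bits[i:i + SEG_SIZE])
    return out

def pack_by_pixel(pixel_streams):
    """Alternative order: concatenate a pixel's channel segments first,
    then concatenate the per-pixel portions."""
    out = ''
    for p in pixel_streams:
        for ch in p:
            out += pad(p[ch])
    return out
```

The two functions emit the same segments in different orders: grouped by color channel versus grouped by pixel.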
Abstract:
An encoding method is used for encoding an image. The image includes a plurality of blocks each having a plurality of pixels. The encoding method includes: encoding a plurality of data partitions of block data of a block in the image to generate a plurality of compressed bitstream segments, respectively; and combining the compressed bitstream segments to generate an output bitstream of the block. A bit group based interleaving process is involved in generating the output bitstream. According to the bit group based interleaving process, each of the compressed bitstream segments is divided into a plurality of bit groups each having at least one bit, and the output bitstream includes consecutive bit groups belonging to different compressed bitstream segments, respectively.
Abstract:
A hybrid video encoding method and system use a software engine and a hardware engine. The software engine receives coding unit data associated with a current picture, and performs a first part of the video encoding operation by executing instructions. The first part of the video encoding operation generates an inter predictor and control information corresponding to the coding unit data of the current picture, and stores the inter predictor into an off-chip memory. The hardware engine performs a second part of the video encoding operation according to the control information. The second part of the video encoding operation receives the inter predictor, and subtracts the inter predictor from the coding unit data to generate a residual signal. The second part of the video encoding operation then transforms and quantizes the residual signal to generate a transformed and quantized residual signal, and encodes the transformed and quantized residual signal to generate an encoded video bitstream.
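The hardware engine's dataflow (subtract the predictor, transform, quantize, then entropy-code) can be sketched as follows. The toy Haar-style transform, the quantization step, and all function names are stand-ins chosen for illustration; the patent does not specify these particulars, and the entropy-coding stage is stubbed out.

```python
def residual(cu, predictor):
    """Subtract the software engine's inter predictor from the coding
    unit samples to form the residual signal."""
    return [c - p for c, p in zip(cu, predictor)]

def toy_transform(x):
    """Stand-in for the real transform stage: pairwise sums then
    pairwise differences (a one-level Haar-like decomposition)."""
    sums = [x[i] + x[i + 1] for i in range(0, len(x), 2)]
    diffs = [x[i] - x[i + 1] for i in range(0, len(x), 2)]
    return sums + diffs

def quantize(coeffs, qstep=2):
    """Uniform quantization by integer division with step qstep."""
    return [c // qstep for c in coeffs]

def encode_cu(cu, predictor):
    """Second-part pipeline: residual -> transform -> quantize.
    A real engine would then entropy-code the result into the bitstream."""
    return quantize(toy_transform(residual(cu, predictor)))
```

For a 4-sample coding unit `[10, 12, 8, 9]` and predictor `[9, 10, 8, 8]`, the residual is `[1, 2, 0, 1]` before the transform and quantization stages.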
Abstract:
An encoding method includes applying a search range constraint on a search range of a block in a current frame, and encoding the block in the current frame with pixel information in a reference frame according to inter prediction performed based on the search range of the block in the current frame, wherein a resolution of the current frame is different from a resolution of the reference frame.
Abstract:
A method for generating a decoded value from a codeword which is binarized utilizing a concatenated unary/k-th order Exp-Golomb code includes: identifying a first portion of the codeword, a second portion of the codeword and a third portion of the codeword; generating an offset according to the second portion; decoding the third portion to generate an index value; and generating the decoded value by adding the offset and the index value.
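One plausible reading of the three portions is: a truncated unary run (first portion), the Exp-Golomb zero-prefix whose length fixes the offset (second portion), and the suffix bits that yield the index value (third portion). The sketch below decodes such a concatenated unary / k-th order Exp-Golomb codeword; `UNARY_MAX` and the exact portion boundaries are assumptions, not details from the patent.

```python
UNARY_MAX = 3  # assumed truncated-unary threshold

def decode_uegk(bits, k):
    """Decode one concatenated unary / k-th order Exp-Golomb codeword from
    a '0'/'1' string. Returns (decoded_value, bits_consumed)."""
    i = 0
    # first portion: unary run of '1's, at most UNARY_MAX long
    u = 0
    while u < UNARY_MAX and bits[i] == '1':
        u += 1
        i += 1
    if u < UNARY_MAX:
        i += 1                 # consume the terminating '0'
        return u, i            # small values are purely unary-coded
    # second portion: EGk zero-prefix of length l determines the offset
    l = 0
    while bits[i] == '0':
        l += 1
        i += 1
    i += 1                     # separator '1'
    offset = UNARY_MAX + (1 << k) * ((1 << l) - 1)
    # third portion: (l + k)-bit suffix decoded as the index value
    suffix = bits[i:i + l + k]
    index = int(suffix, 2) if suffix else 0
    i += l + k
    return offset + index, i   # decoded value = offset + index
```

For k = 0, the codeword `'0'` decodes to 0, `'1111'` (unary threshold plus an empty EGk escape) decodes to 3, and `'111010'` decodes to 4.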
Abstract:
A video encoding apparatus includes a data buffer and a video encoding circuit. Encoding of a first frame includes: deriving reference pixels of a reference frame from reconstructed pixels of the first frame, and storing reference pixel data into the data buffer for inter prediction, wherein the reference pixel data include information of pixel values of the reference pixels. Encoding of a second frame includes performing prediction upon a coding unit in the second frame to determine a target predictor for the coding unit. The prediction performed upon the coding unit includes: determining the target predictor for the coding unit according to whether a search range on the reference frame for finding a predictor of the coding unit under an inter prediction mode includes at least one reference pixel of the reference frame that is not accessible to the video encoding circuit.
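The accessibility test can be illustrated as a simple bounds check: if any part of the inter-prediction search range falls outside the region of reference pixels the circuit can access, the predictor decision changes (for example, by falling back to another mode). The rectangle representation, the coordinate convention, and the `'intra'` fallback are illustrative assumptions.

```python
def choose_predictor_mode(sr_x0, sr_y0, sr_x1, sr_y1,
                          acc_x0, acc_y0, acc_x1, acc_y1):
    """Return 'inter' only if the whole search range rectangle lies
    inside the accessible reference-pixel region; otherwise fall back
    (here, to 'intra' as an illustrative choice)."""
    inaccessible = (sr_x0 < acc_x0 or sr_y0 < acc_y0 or
                    sr_x1 > acc_x1 or sr_y1 > acc_y1)
    return 'intra' if inaccessible else 'inter'
```

A search range fully inside the accessible region keeps inter prediction; one that crosses the accessible boundary triggers the fallback.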
Abstract:
A method and apparatus for processing transform coefficients for a video encoder or decoder are disclosed in the present invention. Embodiments according to the present invention reduce the storage requirement for sign bit hiding (SBH), improve the parallelism of SBH processing, or simplify parity checking. Partial quantized transform coefficients (QTCs) of a transform block may be processed before all QTCs of the transform block are received. Zero and non-zero QTCs of a scan block may be processed concurrently, and the QTCs of multiple scan blocks in a transform block may also be processed concurrently when computing the cost function for SBH compensation. The range for searching for a value-modification QTC may be smaller than the scan block to be processed. Parity checking on QTCs may be based on the least significant bits (LSBs) of all QTCs or of all non-zero QTCs of a scan block.
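The parity check at the heart of sign bit hiding can be sketched directly: the hidden sign is inferred from the XOR of the coefficients' least significant bits. The sketch shows the two LSB variants named in the abstract (all QTCs, or non-zero QTCs only); the 0-means-positive convention is an assumption for illustration.

```python
def sbh_parity(qtcs, nonzero_only=True):
    """Parity check for sign bit hiding: XOR the LSBs of the scan block's
    quantized transform coefficients. With nonzero_only=True, zero QTCs
    are skipped, matching the 'all non-zero QTCs' variant."""
    parity = 0
    for c in qtcs:
        if nonzero_only and c == 0:
            continue
        parity ^= abs(c) & 1
    return parity  # illustrative convention: 0 -> positive hidden sign
```

Note that with LSB-based parity, skipping zeros and including them give the same result (a zero contributes a 0 bit either way), which is one reason an LSB formulation can simplify the check.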
Abstract:
A method and apparatus of data reduction of a search range buffer for motion estimation or motion compensation are disclosed. The method and apparatus use local memory to store reference data associated with the search region to reduce the system bandwidth requirement, and use data reduction to reduce the required local memory. The data reduction technique is also applied to intermediate data in a video coding system to reduce the storage requirement associated with the intermediate data. The data reduction technique is further applied to reference frames to reduce the storage requirement for a coding system that incorporates picture enhancement processing of the reconstructed video.
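One simple form such data reduction can take is lossy bit-depth truncation of the reference samples held in the local search-range buffer: drop the least significant bits before storing, and reconstruct an approximate sample on read-back. This particular scheme is only an illustrative assumption; the patent does not specify the reduction technique here.

```python
def reduce_samples(pixels, drop_bits=2):
    """Store 8-bit reference samples at reduced precision by dropping
    the drop_bits least significant bits (lossy data reduction)."""
    return [p >> drop_bits for p in pixels]

def restore_samples(reduced, drop_bits=2):
    """Approximate reconstruction: shift back and add half a step so the
    error is centered around zero."""
    half = 1 << (drop_bits - 1)
    return [(r << drop_bits) | half for r in reduced]
```

Dropping 2 bits per 8-bit sample shrinks the buffer by 25% at the cost of a reconstruction error of at most half the quantization step.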