Abstract:
A technique for determining an adaptive quantization parameter offset for a block of encoded video includes obtaining a rate control factor for the quantization parameter, determining a content-based quantization parameter factor for the quantization parameter, determining an adaptive variance-based quantization offset based on content-based quantization parameter factors for a frame prior to the current frame, and combining the rate control factor, the content-based quantization parameter factor, and the adaptive offset to generate the quantization parameter.
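The abstract does not specify how the three terms are combined; the sketch below assumes a simple additive model in which the adaptive offset is the mean of the previous frame's content-based factors, with hypothetical names (rate_control_qp, content_factor, prev_frame_content_factors) and the usual 0-51 QP clamp.

```python
def block_qp(rate_control_qp, content_factor, prev_frame_content_factors,
             qp_min=0, qp_max=51):
    """Illustrative sketch: combine a rate-control QP, the current block's
    content-based factor, and an adaptive offset derived from the previous
    frame's content-based factors (here, simply their mean)."""
    adaptive_offset = sum(prev_frame_content_factors) / len(prev_frame_content_factors)
    qp = rate_control_qp + content_factor - adaptive_offset
    # Clamp to the valid QP range before use.
    return max(qp_min, min(qp_max, round(qp)))
```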
Abstract:
Still frame detection for single-pass video data, including: determining that an average quantization parameter of a frame of video data falls below a quantization parameter threshold; determining whether an amount of skipped macroblocks in the frame meets a skipped macroblock threshold; and responsive to the amount of skipped macroblocks exceeding the skipped macroblock threshold, identifying the frame as a still frame.
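A minimal sketch of the two-stage check, assuming the skipped-macroblock threshold is expressed as a fraction of the frame's macroblocks; the threshold values are illustrative, not taken from the source.

```python
def is_still_frame(avg_qp, num_skipped_mbs, total_mbs,
                   qp_threshold=30, skip_ratio_threshold=0.9):
    """Identify a still frame only when the average QP falls below the QP
    threshold and the share of skipped macroblocks exceeds its threshold
    (threshold values are assumptions for illustration)."""
    if avg_qp >= qp_threshold:
        return False
    return (num_skipped_mbs / total_mbs) > skip_ratio_threshold
```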
Abstract:
A motion estimation apparatus and method (carried out electronically) provide for encoding of multiview video, such as stereoscopic video, by estimating motion for pixels in a dependent eye view using motion vector information from a co-located group of pixels in a base eye view and from pixels neighboring that co-located group in the base eye view. The method and apparatus encode a group of pixels in the dependent eye view based on the estimated motion vector information. The method and apparatus may also include obtaining a frame of pixels that includes both base eye view pixels and dependent eye view pixels so that, for example, frame compatible format packing can be employed. In one example, estimating the motion vector information for a block of pixels in the dependent eye view is based on a median value calculation of motion vectors for a block of pixels in the base eye view and motion vectors for blocks of pixels neighboring the co-located group of pixels in the base eye view. An apparatus and method may include transmitting the encoded dependent eye view and base eye view information to another device and decoding the encoded dependent eye view and base eye view information for display.
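A sketch of the median-based prediction, assuming the base-view motion vectors are stored per block in raster order and that the neighbors considered are the left, right, and top blocks; both assumptions are illustrative, not taken from the source.

```python
def predict_dependent_view_mv(base_view_mvs, block_index, width_in_blocks):
    """Predict a dependent-view MV as the component-wise median of the
    co-located base-view block's MV and the MVs of its left, right, and
    top neighbors (neighbor set is an assumption)."""
    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]

    candidates = [block_index]                              # co-located block
    column = block_index % width_in_blocks
    if column > 0:
        candidates.append(block_index - 1)                  # left neighbor
    if column + 1 < width_in_blocks:
        candidates.append(block_index + 1)                  # right neighbor
    if block_index >= width_in_blocks:
        candidates.append(block_index - width_in_blocks)    # top neighbor

    mvs = [base_view_mvs[i] for i in candidates]
    return (median([mv[0] for mv in mvs]), median([mv[1] for mv in mvs]))
```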
Abstract:
A method and apparatus are described for performing video encoding mode decisions. A down-scaled frame is received that includes a macroblock corresponding to a first subset of macroblocks of a first area in a full-scale frame. A first average motion vector is calculated for the first subset of macroblocks, and a second average motion vector is calculated for a second subset of macroblocks of a second area surrounding the first subset of macroblocks. A distance measure between the absolute values of the first and second average motion vectors is compared to a threshold. A prediction mode for the macroblock in the down-scaled frame is determined based on the comparison to generate predicted blocks.
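A sketch of the comparison step, assuming the distance measure is the L1 distance between the component-wise averages of absolute motion vectors and that a small distance allows a cheaper prediction mode; the mode names and threshold are hypothetical.

```python
def choose_prediction_mode(inner_mvs, surrounding_mvs, threshold=2.0):
    """Compare the average absolute motion of the co-located area against
    that of the surrounding area and pick a mode-search policy
    (illustrative only)."""
    def avg_abs(mvs):
        n = len(mvs)
        return (sum(abs(mv[0]) for mv in mvs) / n,
                sum(abs(mv[1]) for mv in mvs) / n)

    ax, ay = avg_abs(inner_mvs)
    bx, by = avg_abs(surrounding_mvs)
    distance = abs(ax - bx) + abs(ay - by)

    # Similar motion in both areas: a cheap large-partition mode may suffice;
    # otherwise fall back to a fuller mode search (hypothetical policy).
    return "INTER_16x16" if distance < threshold else "FULL_MODE_SEARCH"
```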
Abstract:
An integrated circuit includes a through-silicon via (TSV) extending from a first surface of a semiconductor substrate to a second surface of the semiconductor substrate and having a first end and a second end, and a non-volatile repair circuit. The non-volatile repair circuit includes a one-time programmable (OTP) element having a programming terminal, wherein in response to an application of a fuse voltage to the programming terminal, the OTP element electrically couples the first end of the TSV to the second end of the TSV.
Abstract:
Systems, methods, and devices for scene change detection and image encoding. A sequence of image frames is input. For a first image frame of the sequence, a first total sum of absolute transformed differences (SATD) is calculated. For a second frame of the sequence, a second total SATD is calculated. An absolute difference between the first total SATD and the second total SATD is calculated. If the absolute difference meets or exceeds a threshold, the second frame and a third frame of the sequence subsequent to the second frame are encoded based on a scene change, and the second frame and the third frame are transmitted. If the absolute difference does not meet or exceed the threshold, the second frame is encoded based on a same scene and the second frame is transmitted.
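The frame-level decision reduces to a single comparison; a minimal sketch, assuming the per-frame total SATD values have already been computed elsewhere. An encoder would then branch between scene-change and same-scene encoding paths based on the returned flag.

```python
def is_scene_change(total_satd_prev, total_satd_curr, threshold):
    """Signal a scene change when the absolute difference between the two
    frames' total SATD values meets or exceeds the threshold."""
    return abs(total_satd_curr - total_satd_prev) >= threshold
```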
Abstract:
A method is described for determining a macroblock (MB) coding mode for a current MB in a dependent view. A window around a co-located MB in a base view is determined, wherein the co-located MB is an MB in the base view having the same location as the current MB in the dependent view. A coding mode complexity value (CMCV) is determined for each MB in the window, wherein the CMCV is based on a coding mode used to encode the MB. Rate distortion optimization (RDO) is performed for the current MB using a reduced number of coding modes if the total CMCV for all MBs in the window is less than a threshold, or using all supported coding modes if the total CMCV for all MBs in the window is greater than the threshold. A coding mode for the current MB is determined based on the RDO results.
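A sketch of the mode-pruning decision, assuming each coding mode maps to a fixed complexity value; the table values, mode names, and threshold below are assumptions for illustration only.

```python
def modes_for_rdo(window_coding_modes, cmcv_table, cmcv_threshold,
                  reduced_modes, all_modes):
    """Sum a complexity value for every coding mode used in the base-view
    window; prune the dependent-view RDO search when the total is below
    the threshold, otherwise search all supported modes."""
    total_cmcv = sum(cmcv_table[mode] for mode in window_coding_modes)
    return reduced_modes if total_cmcv < cmcv_threshold else all_modes

# Illustrative complexity table: cheaper modes contribute less complexity.
EXAMPLE_CMCV = {"SKIP": 0, "DIRECT": 1, "INTER_16x16": 2, "INTER_8x8": 4, "INTRA": 8}
```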
Abstract:
An efficient motion estimation method and apparatus for 3D stereo video encoding is described herein. In an embodiment of the method, an enhancement layer motion vector for a frame is determined by obtaining a motion vector of a co-located macroblock (MB) from the same frame of a base layer. The motion vectors of a predetermined number of surrounding MBs from the same frame of the base layer are also obtained. A predicted motion vector for the MB of the frame in the enhancement layer is determined using, for example, a median value from the motion vectors associated with the co-located MB and the predetermined number of surrounding MBs. A small, less-than-full-range motion refinement is then performed to obtain a final motion vector, where full range refers to the maximum search range supported by an encoder performing the method.
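A sketch of the small-range refinement step, assuming the predicted MV comes from the median calculation described above and that cost_fn returns a block-matching cost such as SAD; the ±2 pixel refinement range is illustrative and far smaller than a typical full search range.

```python
def refine_motion_vector(predicted_mv, cost_fn, refine_range=2):
    """Search only a small window around the median-predicted MV instead of
    the encoder's full search range, returning the lowest-cost candidate."""
    best_mv = predicted_mv
    best_cost = cost_fn(predicted_mv)
    for dy in range(-refine_range, refine_range + 1):
        for dx in range(-refine_range, refine_range + 1):
            candidate = (predicted_mv[0] + dx, predicted_mv[1] + dy)
            cost = cost_fn(candidate)
            if cost < best_cost:
                best_mv, best_cost = candidate, cost
    return best_mv
```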