Abstract:
Block processing pipeline methods and apparatus in which reference data are stored to a memory according to tile formats to reduce memory accesses when fetching the data from the memory. When the pipeline stores reference data from a current frame being processed to memory as a reference frame, the reference samples are stored in macroblock sequential order. Each macroblock sample set is stored as a tile. Reference data may be stored in tile formats for luma and chroma. Chroma reference data may be stored in tile formats for chroma 4:2:0, 4:2:2, and/or 4:4:4 formats. A stage of the pipeline may write luma and chroma reference data for macroblocks to memory according to one or more of the macroblock tile formats in a modified knight's order. The stage may delay writing the reference data from the macroblocks until the macroblocks have been fully processed by the pipeline.
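For illustration, a minimal Python sketch of tile-format addressing for luma reference data, assuming 16x16 macroblock tiles stored in macroblock-sequential order and one byte per sample; the names and parameters are illustrative, not the actual layout:

    MB_SIZE = 16                    # luma macroblock dimension in samples
    TILE_BYTES = MB_SIZE * MB_SIZE  # one macroblock tile = 256 bytes

    def tiled_address(x, y, frame_width_mbs):
        """Map a luma sample coordinate (x, y) to a byte offset in tiled memory."""
        mb_x, mb_y = x // MB_SIZE, y // MB_SIZE       # which macroblock tile
        in_x, in_y = x % MB_SIZE, y % MB_SIZE         # offset inside the tile
        tile_index = mb_y * frame_width_mbs + mb_x    # tiles in sequential order
        return tile_index * TILE_BYTES + in_y * MB_SIZE + in_x

Because all 256 samples of a macroblock land in one contiguous tile, fetching a macroblock touches far fewer memory bursts than a row-linear layout would.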
Abstract:
A context switching method for video encoders that enables higher-priority video streams to interrupt lower-priority video streams. A higher-priority frame may be received for processing while another frame is being processed. The pipeline may be signaled to perform a context stop for the current frame. The pipeline stops processing the current frame at an appropriate place, and propagates the stop through the stages of the pipeline and to a transcoder through DMA. The stopping location is recorded. The video encoder may then process the higher-priority frame. When that frame is done, a context restart is performed and the pipeline resumes processing the lower-priority frame beginning at the recorded location. The transcoder may process data for the interrupted frame while the higher-priority frame is being processed in the pipeline, and similarly the pipeline may begin processing the lower-priority frame after the context restart while the transcoder completes processing the higher-priority frame.
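For illustration, a minimal Python sketch of the stop/restart flow, modeling a frame as a list of blocks and the recorded location as the index of the next unprocessed block; the encode step and interrupt trigger are stand-ins:

    def encode_blocks(frame, start=0, stop_at=None):
        """Process blocks from `start`; optionally stop early and report where."""
        for i in range(start, len(frame)):
            if stop_at is not None and i == stop_at:
                return i                    # context stop: record the location
            pass                            # encode frame[i] here
        return len(frame)                   # frame completed

    low_priority = list(range(100))                 # a 100-block frame
    loc = encode_blocks(low_priority, stop_at=40)   # high-priority frame arrives
    encode_blocks(list(range(25)))                  # encode the high-priority frame
    encode_blocks(low_priority, start=loc)          # context restart at block 40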
Abstract:
In the video encoders described herein, blocks of pixels from a video frame may be encoded (e.g., using CAVLC encoding) in a block processing pipeline using wavefront ordering (e.g., in knight's order). Each of the encoded blocks may be written to a particular one of multiple DMA buffers such that the encoded blocks written to each of the buffers represent consecutive blocks of the video frame in scan order. A transcode pipeline may operate in parallel with (or at least overlap) the operation of the block processing pipeline. The transcode pipeline may read encoded blocks from the buffers in scan order and merge them into a single bit stream (in scan order). A transcoder core of the transcode pipeline may decode the encoded blocks and encode them using a different encoding process (e.g., CABAC). In some cases, the transcoder may be bypassed.
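For illustration, a minimal Python sketch of routing wavefront-ordered output to per-row DMA buffers and merging them back into scan order; the buffer count and row-based routing are assumptions:

    NUM_BUFFERS = 4   # e.g., one buffer per row of a quadrow

    def route_to_buffers(encoded_blocks):
        """encoded_blocks: (row, col, bits) tuples in pipeline (knight's) order."""
        buffers = [[] for _ in range(NUM_BUFFERS)]
        for row, _col, bits in encoded_blocks:
            buffers[row % NUM_BUFFERS].append((row, bits))
        return buffers   # each buffer now holds consecutive scan-order blocks

    def merge_scan_order(buffers, num_rows):
        """Drain the buffers row by row to rebuild a single scan-order stream."""
        stream, cursors = [], [0] * NUM_BUFFERS
        for row in range(num_rows):
            b = row % NUM_BUFFERS
            while cursors[b] < len(buffers[b]) and buffers[b][cursors[b]][0] == row:
                stream.append(buffers[b][cursors[b]][1])
                cursors[b] += 1
        return stream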
Abstract:
Memory latency tolerance methods and apparatus for maintaining an overall level of performance in block processing pipelines that prefetch reference data into a search window. In a general memory latency tolerance method, search window processing in the pipeline may be monitored. If the status of search window processing changes in a way that affects pipeline throughput, then pipeline processing may be modified. The modification may be performed according to no stall methods, stall recovery methods, and/or stall prevention methods. In no stall methods, a block may be processed using the data present in the search window without waiting for the missing reference data. In stall recovery methods, the pipeline is allowed to stall, and processing is modified for subsequent blocks to speed up the pipeline and catch up in throughput. In stall prevention methods, processing is adjusted in advance of the pipeline encountering a stall condition.
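For illustration, a minimal Python sketch of the monitoring decision, with the prefetch status reduced to a count of reference columns still in flight for the next block; the policies and responses paraphrase the three method families above:

    def latency_response(columns_pending, policy):
        """Pick a pipeline modification given search window prefetch status."""
        if columns_pending == 0:
            return "process normally"        # reference data is resident
        if policy == "no_stall":
            # Process with the data already in the search window, e.g.
            # restrict the motion search range instead of waiting.
            return "restrict search to resident reference data"
        if policy == "stall_recovery":
            # Let the pipeline stall, then reduce work on subsequent blocks
            # until throughput catches back up.
            return "stall now; speed up following blocks"
        if policy == "stall_prevention":
            # Adjust processing before the stall condition is reached.
            return "reduce work in advance of the stall"
        raise ValueError(policy)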
Abstract:
Methods and apparatus for caching neighbor data in a block processing pipeline that processes blocks in knight's order with quadrow constraints. Stages of the pipeline may maintain two local buffers that contain data from neighbor blocks of a current block. A first buffer contains data from the last C blocks processed at the stage. A second buffer contains data from neighbor blocks on the last row of a previous quadrow. Data for blocks on the bottom row of a quadrow are stored to an external memory at the end of the pipeline. When a block on the top row of a quadrow is input to the pipeline, neighbor data from the bottom row of the previous quadrow is read from the external memory and passed down the pipeline, each stage storing the data in its second buffer and using the neighbor data in the second buffer when processing the block.
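For illustration, a minimal Python sketch of a stage's two neighbor buffers, assuming four-row quadrows; the capacity C and the shape of the passed-down memory data are assumptions:

    from collections import deque

    C = 5   # capacity of the recent-block buffer (an assumption)

    class StageNeighborCache:
        def __init__(self):
            self.recent = deque(maxlen=C)   # first buffer: last C blocks at this stage
            self.prev_bottom = {}           # second buffer: col -> prev quadrow's bottom row

        def on_block(self, row, col, data, passed_down=None):
            if row % 4 == 0 and passed_down is not None:
                # Top row of a quadrow: neighbor data read from external
                # memory is passed down the pipeline with the block.
                self.prev_bottom.update(passed_down)
            if row % 4 == 0:
                top = self.prev_bottom.get(col)       # from the second buffer
            else:
                top = next((d for r, c, d in self.recent
                            if r == row - 1 and c == col), None)
            # ... process the block using `top` and other neighbors ...
            self.recent.append((row, col, data))      # update the first buffer
            return top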
Abstract:
Block processing pipeline methods and apparatus in which pixel data from a reference frame is prefetched into a search window memory. The search window may include two or more overlapping regions of pixels from the reference frame corresponding to blocks from the rows in the input frame that are currently being processed in the pipeline. Thus, the pipeline may process blocks from multiple rows of an input frame using one set of pixel data from a reference frame that is stored in a shared search window memory. The search window may be advanced by one column of blocks by initiating a prefetch for a next column of reference data from a memory. The pipeline may also include a reference data cache that may be used to cache a portion of a reference frame and from which at least a portion of a prefetch for the search window may be satisfied.
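For illustration, a minimal Python sketch of advancing a shared search window by one block column; the window geometry and the fetch callback are assumptions:

    class SearchWindow:
        def __init__(self, width_in_columns):
            self.columns = []                # resident columns of reference blocks
            self.width = width_in_columns    # window width in block columns
            self.next_col = 0                # next reference column to fetch

        def advance(self, fetch_column):
            """Drop the oldest column and prefetch the next from memory."""
            if len(self.columns) == self.width:
                self.columns.pop(0)          # oldest column falls out of the window
            self.columns.append(fetch_column(self.next_col))   # e.g., DMA prefetch
            self.next_col += 1

A reference data cache would sit behind fetch_column in this sketch, satisfying part of each prefetch without going all the way to memory.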
Abstract:
Methods and apparatus for caching reference data in a block processing pipeline. A cache may be implemented to which reference data corresponding to motion vectors for blocks being processed in the pipeline may be prefetched from memory. Prefetches for the motion vectors may be initiated one or more stages prior to a processing stage. Cache tags for the cache may be defined by the motion vectors. When a motion vector is received, the tags can be checked to determine whether the cache contains block(s) corresponding to the vector (cache hits). Upon a cache miss, a cache block is selected according to a replacement policy, the respective tag is updated, and a prefetch (e.g., via DMA) for the respective reference data is issued.
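For illustration, a minimal Python sketch of a reference cache whose tags are derived from motion vectors, with LRU standing in for the replacement policy; the tag granularity and DMA callback are assumptions:

    BLOCK = 16   # cache block granularity in reference samples (an assumption)

    class ReferenceCache:
        def __init__(self, num_slots):
            self.num_slots = num_slots
            self.tags = {}    # tag -> cache slot
            self.lru = []     # replacement order, least recently used first

        def prefetch(self, ref_x, ref_y, issue_dma):
            """Check the tags for the region a motion vector points at."""
            tag = (ref_x // BLOCK, ref_y // BLOCK)   # tag defined by the vector
            if tag in self.tags:                     # cache hit: data is resident
                self.lru.remove(tag)
                self.lru.append(tag)
                return self.tags[tag]
            if len(self.tags) == self.num_slots:     # miss: evict per LRU policy
                slot = self.tags.pop(self.lru.pop(0))
            else:
                slot = len(self.tags)
            self.tags[tag] = slot                    # update the tag, then fetch
            self.lru.append(tag)
            issue_dma(tag, slot)                     # prefetch reference data via DMA
            return slot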
Abstract:
A block processing pipeline that includes a software pipeline and a hardware pipeline that run in parallel. The software pipeline runs at least one block ahead of the hardware pipeline. The stages of the pipeline may each include a hardware pipeline component that performs one or more operations on a current block at the stage. At least one stage of the pipeline may also include a software pipeline component that determines a configuration for the hardware component at the stage of the pipeline for processing a next block while the hardware component is processing the current block. The software pipeline component may determine the configuration according to information related to the next block obtained from an upstream stage of the pipeline. The software pipeline component may also obtain and use information related to a block that was previously processed at the stage.
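For illustration, a minimal Python sketch of a stage whose software component prepares the configuration for block N+1 while the hardware component processes block N; the two callbacks stand in for the software and hardware components:

    def run_stage(blocks, derive_config, hw_process):
        """derive_config(block_info) -> config; hw_process(block, config) -> result."""
        if not blocks:
            return []
        results = []
        next_config = derive_config(blocks[0])    # software runs one block ahead
        for i, block in enumerate(blocks):
            config = next_config
            if i + 1 < len(blocks):
                # Software component: configure for the next block (in
                # hardware this overlaps the hw_process call below).
                next_config = derive_config(blocks[i + 1])
            results.append(hw_process(block, config))   # hardware component
        return results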