SPECULATIVE CACHE TAG EVALUATION
    1.
    Patent application
    SPECULATIVE CACHE TAG EVALUATION (in force)

    Publication No.: US20080163008A1

    Publication Date: 2008-07-03

    Application No.: US11616558

    Filing Date: 2006-12-27

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: G06K5/04

    CPC Class: G06F12/0895 G06F11/1064

    Abstract: A cache tag comparison unit in a cache controller evaluates tag data and error correction codes to determine if there is a cache hit or miss. The cache tag comparison unit speculatively compares the tag data with the request tag without regard to error correction. The error correction code verifies whether this initial comparison is correct and provides a confirmed cache hit or miss signal. The tag data is compared with the request tag to determine a provisional cache hit or miss, and in parallel, the error correction code is evaluated. If the error code evaluation indicates errors in the tag data, a provisional cache hit is converted into a cache miss if errors are responsible for a false match. If the error code evaluation identifies the locations of errors, a provisional cache miss is converted into a cache hit if the errors are responsible for the mismatch.
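
    A minimal C sketch of the evaluation flow described above (illustrative only; the names tag_entry_t and speculative_tag_hit and the way the ECC result is modeled are assumptions, not taken from the patent): a provisional comparison of the raw tag runs first, and the ECC result then either confirms it, demotes a false match caused by bit errors, or promotes a miss that the bit errors themselves caused.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical per-way tag entry: the raw tag bits as read from the tag RAM,
         * plus the outcome of a (simulated) single-error-correcting code.  In hardware
         * the corrected value would come from the stored check bits; here it is carried
         * directly so the control flow of the speculative evaluation stays visible. */
        typedef struct {
            uint32_t raw_tag;        /* tag bits, possibly corrupted by a soft error */
            bool     error_detected; /* ECC logic flagged at least one bit error     */
            uint32_t corrected_tag;  /* tag after ECC correction (valid if flagged)  */
        } tag_entry_t;

        /* Speculative evaluation: compare the raw tag without waiting for error
         * correction, then let the ECC result confirm or overturn the decision. */
        bool speculative_tag_hit(const tag_entry_t *e, uint32_t request_tag)
        {
            bool provisional_hit = (e->raw_tag == request_tag);   /* fast path */

            if (!e->error_detected)
                return provisional_hit;        /* ECC confirms the provisional result */

            bool confirmed_hit = (e->corrected_tag == request_tag);

            if (provisional_hit && !confirmed_hit)
                return false;                  /* demote: the match was due to errors    */
            if (!provisional_hit && confirmed_hit)
                return true;                   /* promote: the mismatch was due to errors */
            return confirmed_hit;
        }

        int main(void)
        {
            /* Stored tag 0x1234 suffered a single-bit flip and now reads 0x1236. */
            tag_entry_t e = { .raw_tag = 0x1236, .error_detected = true,
                              .corrected_tag = 0x1234 };

            printf("request 0x1234 -> %s\n",
                   speculative_tag_hit(&e, 0x1234) ? "hit" : "miss");  /* promoted hit */
            printf("request 0x1236 -> %s\n",
                   speculative_tag_hit(&e, 0x1236) ? "hit" : "miss");  /* demoted miss */
            return 0;
        }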


    Context model cache-management in a dual-pipeline CABAC architecture
    2.
    Granted patent
    Context model cache-management in a dual-pipeline CABAC architecture (in force)

    Publication No.: US09258565B1

    Publication Date: 2016-02-09

    Application No.: US13172775

    Filing Date: 2011-06-29

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: H04N19/13 H04N19/102

    Abstract: A method and system are disclosed for managing cache memory in a dual-pipelined CABAC encoder. A request for a context model is received from both encoder pipelines. If the requested context model is not stored in cache, the requested context model is retrieved from a context table. At least one context model stored in cache is written to the context table. The retrieved context model is updated and written to the cache. If the requested context model is stored in cache, and if the requested context model was updated in the previous clock cycle, the requested context model is retrieved from the pipeline, updated, and written to cache. If the requested context model is stored in cache, and if the requested context model was not updated in the previous clock cycle, the requested context model is retrieved from cache, updated, and written back to cache.
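
    The decision flow above can be summarized in a short C sketch (illustrative only: the cache size, the direct-mapped layout, and the names encode_bin_ctx and update_model are assumptions, and it services one context request at a time rather than both pipelines). It shows the three cases from the abstract: forwarding a model updated in the previous cycle, a plain cache hit, and a miss that writes a resident model back to the context table before refilling.

        #include <stdint.h>
        #include <stdbool.h>
        #include <string.h>
        #include <stdio.h>

        #define NUM_CONTEXTS 64   /* backing context table size (illustrative) */
        #define CACHE_SLOTS   8   /* small direct-mapped context model cache   */

        typedef struct { uint8_t state; uint8_t mps; } ctx_model_t;

        static ctx_model_t context_table[NUM_CONTEXTS];                /* backing table */

        typedef struct { bool valid; int ctx_idx; ctx_model_t model; } cache_slot_t;
        static cache_slot_t cache[CACHE_SLOTS];

        /* Context updated in the previous clock cycle (or -1): models the
         * pipeline forwarding path mentioned in the abstract. */
        static int         last_updated_idx = -1;
        static ctx_model_t last_updated_model;

        /* Toy stand-in for the CABAC probability-state transition. */
        static ctx_model_t update_model(ctx_model_t m, int bin)
        {
            if (bin == m.mps) { if (m.state < 62) m.state++; }
            else              { if (m.state == 0) m.mps ^= 1; else m.state--; }
            return m;
        }

        /* Fetch, update, and write back one context model for one encoded bin. */
        static ctx_model_t encode_bin_ctx(int ctx_idx, int bin)
        {
            int slot = ctx_idx % CACHE_SLOTS;
            ctx_model_t m;

            if (last_updated_idx == ctx_idx) {
                m = last_updated_model;      /* updated last cycle: take it from the pipeline */
            } else if (cache[slot].valid && cache[slot].ctx_idx == ctx_idx) {
                m = cache[slot].model;       /* cache hit */
            } else {
                if (cache[slot].valid)       /* miss: evict resident model to the table */
                    context_table[cache[slot].ctx_idx] = cache[slot].model;
                m = context_table[ctx_idx];  /* then refill from the context table */
            }

            m = update_model(m, bin);
            cache[slot] = (cache_slot_t){ .valid = true, .ctx_idx = ctx_idx, .model = m };
            last_updated_idx   = ctx_idx;
            last_updated_model = m;
            return m;
        }

        int main(void)
        {
            memset(cache, 0, sizeof cache);
            memset(context_table, 0, sizeof context_table);

            encode_bin_ctx(5, 1);    /* miss: filled from the context table       */
            encode_bin_ctx(5, 1);    /* forwarded: updated in the previous cycle  */
            encode_bin_ctx(13, 0);   /* miss on the same slot: ctx 5 written back */
            printf("ctx 5 state in table after write-back: %u\n",
                   (unsigned)context_table[5].state);
            return 0;
        }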


    Efficient resource arbitration
    3.
    Granted patent
    Efficient resource arbitration (in force)

    Publication No.: US07865647B2

    Publication Date: 2011-01-04

    Application No.: US11616539

    Filing Date: 2006-12-27

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    CPC Class: G06F13/362 G06F13/1642

    Abstract: Resource requests are allocated by storing resource requests in queue slots in a queue. A token is associated with one of the queue slots. During an arbitration cycle, the queue slot with the token is given priority access to the resource. If the queue slot with the token does not include a request, a different queue slot having the highest static priority and including a request is given access to the resource. The token is advanced to a different queue slot after one or more arbitration cycles. Requests are assigned to the highest-priority queue slot, to random or arbitrarily selected queue slots, or based on the source and/or type of the request. One or more queue slots may be reserved for specific sources or types of requests. Resources include processor access, bus access, cache or system memory interface access, and internal or external interface access.
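
    A compact C sketch of one arbitration cycle as described above (illustrative: the queue depth, the assumption that slot 0 has the highest static priority, and the token rotating every cycle are choices made for the example, not details from the patent).

        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_SLOTS 4   /* illustrative queue depth */

        typedef struct {
            bool pending[NUM_SLOTS];  /* slot i currently holds a resource request */
            int  token;               /* queue slot currently holding the token    */
        } arbiter_t;

        /* One arbitration cycle: the token slot wins if it holds a request;
         * otherwise the occupied slot with the highest static priority wins
         * (slot 0 highest in this sketch).  Returns the granted slot or -1,
         * then advances the token. */
        int arbitrate(arbiter_t *a)
        {
            int granted = -1;

            if (a->pending[a->token]) {
                granted = a->token;                    /* token slot has priority */
            } else {
                for (int i = 0; i < NUM_SLOTS; i++) {  /* fall back to static priority */
                    if (a->pending[i]) { granted = i; break; }
                }
            }

            if (granted >= 0)
                a->pending[granted] = false;           /* request leaves its queue slot */

            a->token = (a->token + 1) % NUM_SLOTS;     /* advance the token */
            return granted;
        }

        int main(void)
        {
            arbiter_t a = { .pending = { true, false, true, true }, .token = 2 };

            /* Token starts on slot 2, so slots 2, 3, and 0 are granted in turn. */
            for (int cycle = 1; cycle <= 3; cycle++)
                printf("cycle %d: granted slot %d\n", cycle, arbitrate(&a));
            return 0;
        }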


    Dual-pipeline CABAC encoder architecture
    4.
    Granted patent
    Dual-pipeline CABAC encoder architecture (in force)

    Publication No.: US08798139B1

    Publication Date: 2014-08-05

    Application No.: US13172773

    Filing Date: 2011-06-29

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: H04N7/12 H04N11/02 H04N11/04

    Abstract: A method and system are disclosed for the lossless compression of video data in a synchronous pipelined environment. One or more syntax elements of video data are binarized into one or more ordered bins. A first context model associated with a first bin and a second context model associated with a second bin are received. The first bin is encoded based on the first context model and the second bin is encoded based on the second context model, both bins being encoded within the same clock cycle. One or more encoded bits are output based on the encoding of the first and second bins. In one embodiment, the first bin is encoded in a first pipeline and the second bin is encoded in a second pipeline. In this embodiment, two bins may be encoded every clock cycle, one per pipeline. Further, in one embodiment, multiple context models are received and one context model is selected by each pipeline for encoding. After encoding, one or more context models may be updated and stored.
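
    The per-cycle structure can be sketched in C as below (illustrative: the arithmetic-coding engine is replaced by a toy bit-cost stub, and the function names encode_bin and encode_cycle are assumptions). The point is the shape of one clock cycle: two bins, each with its own context model, encoded by the two pipelines and committed together.

        #include <stdint.h>
        #include <stdio.h>

        /* Simplified context model: a CABAC-style probability state and MPS bit. */
        typedef struct { uint8_t state; uint8_t mps; } ctx_model_t;

        /* Stand-in for one pipeline's bin encoder: updates the context model and
         * returns a toy count of output bits.  The real arithmetic coder is
         * intentionally omitted from this sketch. */
        static int encode_bin(ctx_model_t *ctx, int bin)
        {
            int bits = (bin == ctx->mps) ? 0 : 1;
            if (bin == ctx->mps) { if (ctx->state < 62) ctx->state++; }
            else                 { if (ctx->state == 0) ctx->mps ^= 1; else ctx->state--; }
            return bits;
        }

        /* One modeled clock cycle of the dual-pipeline encoder: pipeline 0 takes
         * the first bin and its context model, pipeline 1 takes the second, and
         * the results of both are committed in the same cycle. */
        static int encode_cycle(ctx_model_t *ctx0, int bin0, ctx_model_t *ctx1, int bin1)
        {
            int bits0 = encode_bin(ctx0, bin0);          /* pipeline 0 */
            int bits1 = encode_bin(ctx1, bin1);          /* pipeline 1 */
            return bits0 + bits1;                        /* bits emitted this cycle */
        }

        int main(void)
        {
            /* Ordered bins of a binarized syntax element, each pipeline using its
             * own (hypothetical) context model. */
            ctx_model_t ctx_a = { 10, 1 }, ctx_b = { 3, 0 };
            int bins[4] = { 1, 0, 1, 1 };

            for (int i = 0; i < 4; i += 2)
                printf("cycle %d: %d output bit(s)\n", i / 2,
                       encode_cycle(&ctx_a, bins[i], &ctx_b, bins[i + 1]));
            return 0;
        }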


    System and method using embedded microprocessor as a node in an adaptable computing machine
    5.
    Granted patent
    System and method using embedded microprocessor as a node in an adaptable computing machine (in force)

    Publication No.: US07502915B2

    Publication Date: 2009-03-10

    Application No.: US10673678

    Filing Date: 2003-09-29

    IPC Class: G06F9/00

    CPC Class: G06F15/7867 G06F15/17381

    Abstract: The present invention provides an adaptive computing engine (ACE) that includes processing nodes having different capabilities, such as arithmetic nodes, bit-manipulation nodes, finite state machine nodes, input/output nodes, and a programmable scalar node (PSN). In accordance with one embodiment of the present invention, a common architecture is adaptable to function either as a kernel node (k-node) or as a general-purpose RISC node. The k-node acts as a system controller responsible for adapting other nodes to perform selected functions. As a RISC node, the PSN is configured to perform computationally intensive applications such as signal processing.
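
    As a rough C illustration of the node taxonomy and the k-node's controller role (the type names and the knode_adapt function are purely hypothetical; the actual ACE configures nodes in hardware, not through a function call):

        #include <stdio.h>

        /* Illustrative node taxonomy for an adaptive computing engine (ACE). */
        typedef enum {
            NODE_ARITHMETIC,
            NODE_BIT_MANIPULATION,
            NODE_FSM,
            NODE_IO,
            NODE_PSN                 /* programmable scalar node (RISC-style) */
        } node_kind_t;

        typedef enum { ROLE_KERNEL, ROLE_RISC } psn_role_t;

        typedef struct {
            node_kind_t kind;
            psn_role_t  role;        /* meaningful only for NODE_PSN            */
            const char *function;    /* function currently loaded onto the node */
        } ace_node_t;

        /* The k-node acts as the system controller, adapting other nodes to
         * perform selected functions; here that is reduced to an assignment. */
        static void knode_adapt(ace_node_t *knode, ace_node_t *target, const char *fn)
        {
            if (knode->kind == NODE_PSN && knode->role == ROLE_KERNEL)
                target->function = fn;
        }

        int main(void)
        {
            ace_node_t knode = { NODE_PSN, ROLE_KERNEL, "system control" };
            ace_node_t psn   = { NODE_PSN, ROLE_RISC,   "idle" };   /* PSN as RISC node */
            ace_node_t alu   = { NODE_ARITHMETIC, ROLE_RISC, "idle" };

            knode_adapt(&knode, &psn, "signal processing kernel");  /* compute-intensive */
            knode_adapt(&knode, &alu, "multiply-accumulate");

            printf("PSN runs: %s; arithmetic node runs: %s\n", psn.function, alu.function);
            return 0;
        }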


    System and method using embedded microprocessor as a node in an adaptable computing machine
    6.
    Patent application
    System and method using embedded microprocessor as a node in an adaptable computing machine (in force)

    Publication No.: US20050166033A1

    Publication Date: 2005-07-28

    Application No.: US10765556

    Filing Date: 2004-01-26

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: G06F15/00 G06F15/80

    CPC Class: G06F15/8007

    Abstract: The present invention provides an adaptive computing engine (ACE) that includes processing nodes having different capabilities, such as arithmetic nodes, bit-manipulation nodes, finite state machine nodes, input/output nodes, and a programmable scalar node (PSN). In accordance with one embodiment of the present invention, a common architecture is adaptable to function either as a kernel node (k-node) or as a general-purpose RISC node. The k-node acts as a system controller responsible for adapting other nodes to perform selected functions. As a RISC node, the PSN is configured to perform computationally intensive applications such as signal processing. The present invention further provides an interconnection scheme so that a plurality of ACE devices operate under the control of a single k-node.


    Optimized motion compensation and motion estimation for video coding
    7.
    Granted patent

    Publication No.: US08411749B1

    Publication Date: 2013-04-02

    Application No.: US12572151

    Filing Date: 2009-10-01

    IPC Class: H04N7/12 G06K9/36 G11B21/08

    Abstract: A system (and a method) are disclosed for intelligently fetching one or more reference blocks from memory for each block to be motion compensated or motion estimated within a video processing system. The system includes a reference block configuration evaluation unit and a motion compensation memory fetching unit. The reference block configuration evaluation unit compares the reference block configuration of the block being motion compensated with the reference block configurations of its neighboring blocks. In response to the evaluation result, the reference block configuration evaluation unit decides the configuration of the reference blocks to be fetched from memory. The motion compensation memory fetching unit then fetches the reference blocks for motion compensation accordingly.
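
    A small C sketch of the decision flow (illustrative: the region representation, the overlap heuristic, and the names regions_mergeable and plan_fetches are assumptions; the patent's evaluation unit works on reference block configurations in hardware): the current block's reference region is compared with its neighbors' regions, and disjoint regions each cost an additional memory fetch.

        #include <stdbool.h>
        #include <stdio.h>

        /* A reference block region in a reference frame (illustrative). */
        typedef struct { int ref_frame; int x, y, w, h; } ref_region_t;

        /* True if two regions come from the same reference frame and overlap,
         * so a single combined memory fetch could cover both. */
        static bool regions_mergeable(ref_region_t a, ref_region_t b)
        {
            if (a.ref_frame != b.ref_frame) return false;
            return a.x < b.x + b.w && b.x < a.x + a.w &&
                   a.y < b.y + b.h && b.y < a.y + a.h;
        }

        /* Sketch of the configuration evaluation: compare the current block's
         * region with its neighbors' regions and report how many separate
         * fetches the memory fetching unit would have to issue. */
        static int plan_fetches(ref_region_t cur, const ref_region_t *nbrs, int n)
        {
            int fetches = 1;                     /* at least the current block    */
            for (int i = 0; i < n; i++) {
                if (!regions_mergeable(cur, nbrs[i]))
                    fetches++;                   /* disjoint: needs its own fetch */
            }
            return fetches;
        }

        int main(void)
        {
            ref_region_t cur = { 0, 16, 16, 24, 24 };   /* current block + filter margin */
            ref_region_t nbrs[2] = {
                { 0, 20, 16, 24, 24 },                  /* overlaps: mergeable           */
                { 1, 16, 16, 24, 24 },                  /* different reference frame     */
            };

            printf("fetches to issue: %d\n", plan_fetches(cur, nbrs, 2));   /* prints 2 */
            return 0;
        }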

    Speculative cache tag evaluation
    8.
    Granted patent
    Speculative cache tag evaluation (in force)

    Publication No.: US07840874B2

    Publication Date: 2010-11-23

    Application No.: US11616558

    Filing Date: 2006-12-27

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: H03M13/00

    CPC Class: G06F12/0895 G06F11/1064

    Abstract: A cache tag comparison unit in a cache controller evaluates tag data and error correction codes to determine if there is a cache hit or miss. The cache tag comparison unit speculatively compares the tag data with the request tag without regard to error correction. The error correction code verifies whether this initial comparison is correct and provides a confirmed cache hit or miss signal. The tag data is compared with the request tag to determine a provisional cache hit or miss, and in parallel, the error correction code is evaluated. If the error code evaluation indicates errors in the tag data, a provisional cache hit is converted into a cache miss if errors are responsible for a false match. If the error code evaluation identifies the locations of errors, a provisional cache miss is converted into a cache hit if the errors are responsible for the mismatch.


    EFFICIENT RESOURCE ARBITRATION
    9.
    Patent application
    EFFICIENT RESOURCE ARBITRATION (in force)

    Publication No.: US20080162760A1

    Publication Date: 2008-07-03

    Application No.: US11616539

    Filing Date: 2006-12-27

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: G06F13/14

    CPC Class: G06F13/362 G06F13/1642

    Abstract: Resource requests are allocated by storing resource requests in queue slots in a queue. A token is associated with one of the queue slots. During an arbitration cycle, the queue slot with the token is given priority access to the resource. If the queue slot with the token does not include a request, a different queue slot having the highest static priority and including a request is given access to the resource. The token is advanced to a different queue slot after one or more arbitration cycles. Requests are assigned to the highest-priority queue slot, to random or arbitrarily selected queue slots, or based on the source and/or type of the request. One or more queue slots may be reserved for specific sources or types of requests. Resources include processor access, bus access, cache or system memory interface access, and internal or external interface access.


    System and method using embedded microprocessor as a node in an adaptable computing machine
    10.
    Granted patent
    System and method using embedded microprocessor as a node in an adaptable computing machine (in force)

    Publication No.: US07194598B2

    Publication Date: 2007-03-20

    Application No.: US10765556

    Filing Date: 2004-01-26

    Applicant: Rojit Jacob

    Inventor: Rojit Jacob

    IPC Class: G06F15/16

    CPC Class: G06F15/8007

    Abstract: The present invention provides an adaptive computing engine (ACE) that includes processing nodes having different capabilities, such as arithmetic nodes, bit-manipulation nodes, finite state machine nodes, input/output nodes, and a programmable scalar node (PSN). In accordance with one embodiment of the present invention, a common architecture is adaptable to function either as a kernel node (k-node) or as a general-purpose RISC node. The k-node acts as a system controller responsible for adapting other nodes to perform selected functions. As a RISC node, the PSN is configured to perform computationally intensive applications such as signal processing. The present invention further provides an interconnection scheme so that a plurality of ACE devices operate under the control of a single k-node.
