Method and system for efficient cache locking mechanism
    1.
    Granted patent
    Method and system for efficient cache locking mechanism (In force)

    Publication No.: US07689776B2

    Publication date: 2010-03-30

    Application No.: US11145844

    Filing date: 2005-06-06

    IPC classes: G06F12/10 G06F12/12

    CPC classes: G06F12/126 G06F12/1027

    Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.

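    The abstract describes lock information travelling from the translation table to the cache and overriding the cache's replacement choice. As a rough behavioral sketch only (Python; the class names, the four-way structure, and the simple "first unlocked way" replacement rule are assumptions made for this illustration, not the patented design), the flow can be pictured as a translation lookup that returns a lock flag which the cache fill then honors:

        # Behavioral sketch: a translation entry carries a lock flag with the
        # physical page, the cache fill keeps that flag, and the replacement
        # logic never picks a locked line as the victim.

        class TranslationEntry:
            def __init__(self, phys_page, locked=False):
                self.phys_page = phys_page   # translated physical page number
                self.locked = locked         # lock hint forwarded to the cache

        class LockAwareCache:
            def __init__(self, num_ways=4):
                # One set of num_ways lines; each line is (tag, data, locked) or None.
                self.ways = [None] * num_ways

            def fill(self, tag, data, locked):
                # Prefer an empty way; otherwise evict the first unlocked way.
                for i, line in enumerate(self.ways):
                    if line is None:
                        self.ways[i] = (tag, data, locked)
                        return i
                for i, line in enumerate(self.ways):
                    if not line[2]:          # locked lines are never victims
                        self.ways[i] = (tag, data, locked)
                        return i
                raise RuntimeError("every way is locked; cannot place the new line")

        # The translation lookup yields the physical address and the lock flag
        # together, and both are handed to the cache on a fill.
        tlb = {0x1000: TranslationEntry(phys_page=0x8000, locked=True)}
        entry = tlb[0x1000]
        cache = LockAwareCache()
        cache.fill(tag=entry.phys_page, data=b"...", locked=entry.locked)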

    Method of load/store dependencies detection with dynamically changing address length
    2.
    Granted patent
    Method of load/store dependencies detection with dynamically changing address length (Expired)

    Publication No.: US07464242B2

    Publication date: 2008-12-09

    Application No.: US11050039

    Filing date: 2005-02-03

    IPC classes: G06F12/00

    Abstract: A method, an apparatus, and a computer program product are provided for detecting load/store dependency in a memory system by dynamically changing the address width for comparison. An incoming load/store operation must be compared to the operations in the pipeline and the queues to avoid address conflicts. Overall, the present invention introduces a cache hit or cache miss input into the load/store dependency logic. If the incoming load operation is a cache hit, then the quadword boundary address value is used for detection. If the incoming load operation is a cache miss, then the cacheline boundary address value is used for detection. This invention enhances the performance of LHS and LHR operations in a memory system.

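    To make the hit/miss-dependent comparison concrete, a small illustrative sketch follows (Python; the 16-byte quadword and 128-byte cache-line granularities and the function name are assumptions chosen for the example, not values from the patent):

        QUADWORD_BYTES = 16    # assumed quadword granularity for this example
        CACHELINE_BYTES = 128  # assumed cache-line granularity for this example

        def addresses_conflict(incoming_addr, older_addr, incoming_is_cache_hit):
            """Compare an incoming load against an older in-flight operation.

            On a cache hit only the quadword actually accessed matters, so the
            finer boundary is compared; on a miss the whole line will be brought
            in, so the coarser cache-line boundary is compared instead.
            """
            granularity = QUADWORD_BYTES if incoming_is_cache_hit else CACHELINE_BYTES
            return (incoming_addr // granularity) == (older_addr // granularity)

        # Same cache line, different quadwords: a conflict only when the load misses.
        assert not addresses_conflict(0x1000, 0x1040, incoming_is_cache_hit=True)
        assert addresses_conflict(0x1000, 0x1040, incoming_is_cache_hit=False)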

    Method and systems for executing load instructions that achieve sequential load consistency
    3.
    Granted patent
    Method and systems for executing load instructions that achieve sequential load consistency (Expired)

    Publication No.: US07376816B2

    Publication date: 2008-05-20

    Application No.: US10988310

    Filing date: 2004-11-12

    IPC classes: G06F9/30 G06F9/40 G06F15/00

    CPC classes: G06F9/383 G06F12/0855

    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.

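    A minimal sketch of the rule described above, assuming a dict-backed cache and a list-backed load queue (both stand-ins invented for the example, not the patent's hardware structures):

        def issue_load(addr, cache, load_queue):
            """Return data on a usable cache hit, otherwise queue the load.

            Even when the cache hits, the hit signal is ignored if an older load
            to the same address is still queued, so the two loads return data in
            program order.
            """
            hit = addr in cache
            older_pending = any(pending == addr for pending in load_queue)
            if older_pending or not hit:
                load_queue.append(addr)   # wait behind the older load (or the miss)
                return None
            return cache[addr]

        cache = {0x200: "line data"}
        load_queue = [0x200]              # an older load to 0x200 is still pending
        assert issue_load(0x200, cache, load_queue) is None   # hit ignored, load queued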

    System and method for detecting instruction dependencies in multiple phases
    4.
    Patent application
    System and method for detecting instruction dependencies in multiple phases (In force)

    Publication No.: US20060271767A1

    Publication date: 2006-11-30

    Application No.: US11140847

    Filing date: 2005-05-31

    IPC classes: G06F9/40

    Abstract: Systems and methods for determining dependencies between processor instructions in multiple phases. In one embodiment, a partial comparison is made between the addresses of a sequence of instructions. Younger instructions having potential dependencies on older instructions are suspended if the partial comparison yields a match. One or more subsequent comparisons are made for suspended instructions based on portions of the addresses referenced by the instructions that were not previously compared. If subsequent comparisons determine that the addresses of the instructions do not match, the suspended instructions are reinstated and execution of the suspended instructions is resumed. In one embodiment, data needed by suspended instructions is speculatively requested in case the instructions are reinstated.

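    An illustrative sketch of the two-phase comparison (Python; the 12-bit width of the first, partial compare is an assumption chosen for the example):

        PARTIAL_BITS = 12   # assumed width of the first, partial address comparison

        def phase1_possible_dependency(young_addr, old_addr):
            """Cheap partial compare on the low bits; a match only suggests a dependency."""
            mask = (1 << PARTIAL_BITS) - 1
            return (young_addr & mask) == (old_addr & mask)

        def phase2_real_dependency(young_addr, old_addr):
            """Later, fuller compare on the address bits the first phase skipped."""
            return (young_addr >> PARTIAL_BITS) == (old_addr >> PARTIAL_BITS)

        suspended = False
        young, old = 0x50001234, 0x70001234    # low 12 bits match, high bits do not
        if phase1_possible_dependency(young, old):
            suspended = True                   # suspend the younger instruction
            if not phase2_real_dependency(young, old):
                suspended = False              # false match: reinstate it and resume
        assert suspended is False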

    Method of updating cache state information where stores only read the cache state information upon entering the queue
    5.
    Patent application
    Method of updating cache state information where stores only read the cache state information upon entering the queue (Expired)

    Publication No.: US20060020759A1

    Publication date: 2006-01-26

    Application No.: US10897348

    Filing date: 2004-07-22

    IPC classes: G06F12/00

    Abstract: The present invention provides a method of updating the cache state information for store transactions in a system in which store transactions only read the cache state information upon entering the unit pipe or store portion of the store/load queue. In this invention, store transactions in the unit pipe and queue are checked whenever a cache line is modified, and their cache state information is updated as necessary. When the modification is an invalidate, the check tests that the two share the same physical addressable location. When the modification is a validate, the check tests that the two involve the same data cache line.

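    A behavioral sketch of the re-check rule, assuming each queued store records both the physical line it targets and the data-cache slot it hit in when it entered the queue (the dataclass fields and event shape are invented for this illustration):

        from dataclasses import dataclass

        @dataclass
        class QueuedStore:
            phys_line: int     # physical cache-line address recorded at queue entry
            cache_index: int   # data-cache set the store hit in at queue entry
            hit: bool          # cache state read once, when the store entered the queue

        def recheck_queued_stores(store_queue, kind, phys_line, cache_index):
            """Update queued stores' recorded hit state when a cache line changes later."""
            for st in store_queue:
                if kind == "invalidate":
                    # Invalidation: compare against the physical addressable location.
                    if st.phys_line == phys_line:
                        st.hit = False
                elif kind == "validate":
                    # Validation: compare against the data-cache line being filled.
                    if st.cache_index == cache_index:
                        st.hit = True

        queue = [QueuedStore(phys_line=0x4000, cache_index=7, hit=True)]
        recheck_queued_stores(queue, "invalidate", phys_line=0x4000, cache_index=7)
        assert queue[0].hit is False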

    METHOD AND SYSTEM FOR EFFICIENT CACHE LOCKING MECHANISM
    6.
    Patent application
    METHOD AND SYSTEM FOR EFFICIENT CACHE LOCKING MECHANISM (Pending, published)

    Publication No.: US20100146214A1

    Publication date: 2010-06-10

    Application No.: US12707875

    Filing date: 2010-02-18

    IPC classes: G06F12/08 G06F12/00 G06F12/10

    CPC classes: G06F12/126 G06F12/1027

    Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.


    Systems for executing load instructions that achieve sequential load consistency
    7.
    Granted patent
    Systems for executing load instructions that achieve sequential load consistency (Expired)

    Publication No.: US07730290B2

    Publication date: 2010-06-01

    Application No.: US12036992

    Filing date: 2008-02-25

    CPC classes: G06F9/383 G06F12/0855

    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.


    Systems and Methods for Processing Buffer Data Retirement Conditions
    8.
    Patent application
    Systems and Methods for Processing Buffer Data Retirement Conditions (In force)

    Publication No.: US20080022075A1

    Publication date: 2008-01-24

    Application No.: US11459501

    Filing date: 2006-07-24

    IPC classes: G06F9/30

    Abstract: Systems and methods for determining whether to retire a data entry from a buffer using multiple retirement logic units. In one embodiment, each retirement unit concurrently evaluates retirement conditions for one of the buffer entries in an associated subset (e.g., even or odd) of the buffer. Selection logic coupled to the retirement units alternately selects the first or second retirement unit for retirement of one of the entries in the associated subset. Because the aggregate number of entries retired by the combined retirement logic units is divided by the number of retirement logic units, each retirement logic unit has more time to process the retirement conditions for corresponding queue entries. The buffer may be any of a variety of different types of buffers and may comprise a single buffer, or multiple buffers.

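    An illustrative sketch of the even/odd split described above (Python; the readiness callback and the strict per-cycle alternation are simplifications assumed for the example):

        def retire_one(buffer, cycle, is_ready):
            """Alternate between an 'even' and an 'odd' retirement unit each cycle.

            Each unit only scans its own half of the buffer, so with two units
            each one effectively has twice as long to evaluate an entry's
            retirement conditions.
            """
            parity = cycle % 2                 # selection logic: pick even or odd unit
            for idx in range(parity, len(buffer), 2):
                entry = buffer[idx]
                if entry is not None and is_ready(entry):
                    buffer[idx] = None         # retire the entry
                    return idx
            return None

        buf = ["a", "b", "c", "d"]
        assert retire_one(buf, cycle=0, is_ready=lambda e: True) == 0   # even unit retires entry 0
        assert retire_one(buf, cycle=1, is_ready=lambda e: True) == 1   # odd unit retires entry 1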

    Method of load/store dependencies detection with dynamically changing address length
    9.
    Patent application
    Method of load/store dependencies detection with dynamically changing address length (Expired)

    Publication No.: US20060174083A1

    Publication date: 2006-08-03

    Application No.: US11050039

    Filing date: 2005-02-03

    IPC classes: G06F13/28

    Abstract: A method, an apparatus, and a computer program product are provided for detecting load/store dependency in a memory system by dynamically changing the address width for comparison. An incoming load/store operation must be compared to the operations in the pipeline and the queues to avoid address conflicts. Overall, the present invention introduces a cache hit or cache miss input into the load/store dependency logic. If the incoming load operation is a cache hit, then the quadword boundary address value is used for detection. If the incoming load operation is a cache miss, then the cacheline boundary address value is used for detection. This invention enhances the performance of LHS and LHR operations in a memory system.


    Microprocessor allowing simultaneous instruction execution and DMA transfer
    10.
    Granted patent
    Microprocessor allowing simultaneous instruction execution and DMA transfer (In force)

    Publication No.: US06389527B1

    Publication date: 2002-05-14

    Application No.: US09246406

    Filing date: 1999-02-08

    IPC classes: G06F15/00

    Abstract: The present invention comprises an LSU which executes load/store instructions. The LSU includes a DCACHE which temporarily stores data read from and written to external memory, an SPRAM used for specific purposes other than caching, and an address generator that produces virtual addresses for accessing the DCACHE and the SPRAM. Because the SPRAM can load and store data through the LSU pipeline and exchanges data with external memory through DMA transfers, the invention is especially suited to high-speed processing of large amounts of data such as image data. Because the LSU can access the SPRAM with the same latency as the DCACHE, once data stored in external memory has been transferred to the SPRAM, the processor can operate on it there, processing large amounts of data in less time than would be needed to access external memory directly.

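    A rough behavioral sketch of the access path described above, assuming a fixed address window for the SPRAM and treating SPRAM and DCACHE hits as equally fast (the window base, sizes, and the byte-granular interface are invented for the example):

        SPRAM_BASE, SPRAM_SIZE = 0x10000000, 64 * 1024   # assumed scratchpad window

        spram = bytearray(SPRAM_SIZE)
        dcache = {}   # stand-in for the DCACHE: address -> byte value

        def dma_fill_spram(external_memory, src, dst_offset, length):
            """DMA transfer: copy a block from external memory into the SPRAM
            while the pipeline keeps executing other instructions."""
            spram[dst_offset:dst_offset + length] = external_memory[src:src + length]

        def lsu_load_byte(addr, external_memory):
            """Loads inside the SPRAM window hit the scratchpad with DCACHE-like
            latency; other loads go through the DCACHE and, on a miss, out to
            external memory."""
            if SPRAM_BASE <= addr < SPRAM_BASE + SPRAM_SIZE:
                return spram[addr - SPRAM_BASE]
            if addr in dcache:
                return dcache[addr]
            value = external_memory[addr]
            dcache[addr] = value
            return value

        external = bytearray(1 << 20)
        external[0x500] = 42
        dma_fill_spram(external, src=0x500, dst_offset=0, length=1)
        assert lsu_load_byte(SPRAM_BASE, external) == 42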