Method of implementing off-chip cache memory in dual-use SRAM memory for network processors
    2.
    Invention Application (In Force)

    Publication No.: US20050216667A1

    Publication Date: 2005-09-29

    Application No.: US10811608

    Filing Date: 2004-03-29

    IPC Classification: G06F12/00 G06F12/08

    CPC Classification: G06F12/0802 G06F2212/601

    Abstract: A method, apparatus, and system for implementing off-chip cache memory in dual-use static random access memory (SRAM) for network processors. An off-chip SRAM memory store is partitioned into a resizable cache region and a general-purpose region (i.e., conventional SRAM use). The cache region is used to store cached data corresponding to portions of data contained in a second off-chip memory store, such as a dynamic RAM (DRAM) memory store or an alternative type of memory store such as a Rambus DRAM (RDRAM) memory store. An on-chip cache management controller is integrated on the network processor. Various cache management schemes are disclosed, including hardware-based cache tag arrays, memory-based cache tag arrays, content-addressable memory (CAM)-based cache management, and memory address-to-cache line lookup schemes. Under one scheme, multiple network processors are enabled to access shared SRAM and shared DRAM, wherein a portion of the shared SRAM is used as a cache for the shared DRAM.

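    Illustrative sketch only: the abstract names memory-based cache tag arrays and memory address-to-cache line lookup schemes, and the C fragment below models one such lookup for a direct-mapped cache carved out of the SRAM region. The line size, region size, direct-mapped organization, and every identifier (tag_entry_t, cache_lookup, and so on) are assumptions made for the sketch, not the patent's hardware interface.

        #include <stdint.h>
        #include <stdbool.h>

        /* Assumed geometry of the resizable cache region within the off-chip SRAM. */
        #define CACHE_LINE_SIZE   64u               /* bytes per cache line            */
        #define CACHE_REGION_SIZE (256u * 1024u)    /* portion of SRAM used as a cache */
        #define NUM_LINES         (CACHE_REGION_SIZE / CACHE_LINE_SIZE)

        /* Memory-based tag array: one entry per cache line held in SRAM. */
        typedef struct {
            uint32_t tag;      /* upper bits of the cached DRAM address */
            bool     valid;
        } tag_entry_t;

        static tag_entry_t tag_array[NUM_LINES];

        /* Direct-mapped address-to-cache line mapping. */
        static inline uint32_t line_index(uint32_t dram_addr) {
            return (dram_addr / CACHE_LINE_SIZE) % NUM_LINES;
        }
        static inline uint32_t line_tag(uint32_t dram_addr) {
            return (dram_addr / CACHE_LINE_SIZE) / NUM_LINES;
        }

        /* Returns the byte offset of the cached line within the SRAM cache region
         * on a hit, or -1 on a miss (the controller would then fill from DRAM). */
        long cache_lookup(uint32_t dram_addr) {
            uint32_t idx = line_index(dram_addr);
            if (tag_array[idx].valid && tag_array[idx].tag == line_tag(dram_addr))
                return (long)idx * (long)CACHE_LINE_SIZE;
            return -1;
        }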

    Matching memory transactions to cache line boundaries
    3.
    Invention Application (In Force)

    Publication No.: US20060112235A1

    Publication Date: 2006-05-25

    Application No.: US10993901

    Filing Date: 2004-11-19

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0879

    Abstract: In general, in one aspect, the disclosure describes a method that includes generating multiple cache line accesses to multiple respective cache lines of a cache, as required to satisfy an access to data specified by a single instruction of a processing element.

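    To illustrate the idea, here is a small C sketch that splits a single instruction's memory access into per-cache-line accesses; the 64-byte line size and the names split_to_cache_lines and CACHE_LINE_SIZE are assumptions for the example, not the patent's implementation.

        #include <stdint.h>
        #include <stdio.h>

        #define CACHE_LINE_SIZE 64u   /* assumed line size */

        /* Break one access (addr, len) into pieces that each stay within a
         * single cache line, since data named by a single processing-element
         * instruction may straddle line boundaries. */
        void split_to_cache_lines(uint32_t addr, uint32_t len) {
            while (len > 0) {
                uint32_t line_off = addr % CACHE_LINE_SIZE;       /* offset within the line  */
                uint32_t chunk    = CACHE_LINE_SIZE - line_off;   /* bytes left in this line */
                if (chunk > len)
                    chunk = len;
                printf("access line 0x%08x: offset %u, %u bytes\n",
                       (unsigned)(addr - line_off), (unsigned)line_off, (unsigned)chunk);
                addr += chunk;
                len  -= chunk;
            }
        }

        int main(void) {
            /* A 100-byte access starting 40 bytes into a line spans three lines. */
            split_to_cache_lines(0x1028u, 100u);
            return 0;
        }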

    Caching bypass
    4.
    Invention Application (In Force)

    Publication No.: US20060112234A1

    Publication Date: 2006-05-25

    Application No.: US10993579

    Filing Date: 2004-11-19

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0888 G06F9/30047

    Abstract: In general, in one aspect, the disclosure describes a method that includes providing a memory access instruction, in a processing element's instruction set, that takes multiple parameters. The parameters include at least one address and a token specifying whether the instruction should cause data retrieved from memory in response to the memory access instruction to be unavailable, via a cache, to a subsequent memory access instruction.

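    As a sketch of the bypass behavior, the toy C model below uses a single cache line and a token parameter on the read: a miss with CACHE_BYPASS returns the data without allocating it, so a later access will not find it in the cache. The enum values, the function name mem_read, and the one-line cache are illustrative assumptions, not the instruction set described by the patent.

        #include <stdint.h>
        #include <stdbool.h>

        /* Assumed token carried by the memory access instruction. */
        typedef enum { CACHE_ALLOCATE, CACHE_BYPASS } cache_token_t;

        /* Toy one-line cache and backing store standing in for the real hardware. */
        static struct { uint32_t addr; uint64_t data; bool valid; } line;
        static uint64_t backing_store[1024];

        uint64_t mem_read(uint32_t addr, cache_token_t token) {
            if (line.valid && line.addr == addr)           /* hit: serve from the cache    */
                return line.data;
            uint64_t data = backing_store[addr % 1024u];   /* miss: fetch from memory      */
            if (token == CACHE_ALLOCATE) {                 /* allocate only when permitted */
                line.addr  = addr;
                line.data  = data;
                line.valid = true;
            }
            /* With CACHE_BYPASS the cache is left untouched, so a subsequent
             * access to addr cannot obtain this data via the cache. */
            return data;
        }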

    Method and apparatus to enable I/O agents to perform atomic operations in shared, coherent memory spaces
    6.
    Invention Application (In Force)

    Publication No.: US20070005908A1

    Publication Date: 2007-01-04

    Application No.: US11171155

    Filing Date: 2005-06-29

    IPC Classification: G06F13/28

    CPC Classification: G06F13/1663 G06F12/0835

    Abstract: Method and apparatus to enable I/O agents to perform atomic operations in shared, coherent memory spaces. The apparatus includes an arbitration unit, a host interface unit, and a memory interface unit. The arbitration unit provides an interface to one or more I/O agents that issue atomic transactions to access and/or modify data stored in a shared memory space accessed via the memory interface unit. The host interface unit interfaces to a front-side bus (FSB) to which one or more processors may be coupled. In response to an atomic transaction issued by an I/O agent, the transaction is forked into two interdependent processes. Under one process, an inbound write transaction is injected into the host interface unit, which then drives the FSB to cause the processor(s) to perform a cache snoop. At the same time, an inbound read transaction is injected into the memory interface unit, which retrieves a copy of the data from the shared memory space. If the cache snoop identifies a modified cache line, a copy of that cache line is returned to the I/O agent; otherwise, the copy of the data retrieved from the shared memory space is returned.

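    The C sketch below models the forked transaction as two stubbed legs, an FSB snoop and a shared-memory read, with the snooped (modified) copy taking precedence. In the apparatus the legs run concurrently in hardware; modeling them as sequential function calls, and every name used here (fsb_snoop, dram_read, atomic_read_for_io_agent), is an assumption made for illustration.

        #include <stdint.h>
        #include <stdbool.h>

        typedef struct { bool hit_modified; uint64_t data; } snoop_result_t;

        /* Stub for the host interface unit leg: drive the FSB and snoop caches. */
        static snoop_result_t fsb_snoop(uint32_t addr) {
            (void)addr;
            return (snoop_result_t){ .hit_modified = false, .data = 0 };
        }

        /* Stub for the memory interface unit leg: read the shared memory space. */
        static uint64_t dram_read(uint32_t addr) {
            static uint64_t shared_dram[256];
            return shared_dram[addr % 256u];
        }

        /* Fork the I/O agent's atomic read into both legs and pick the result:
         * a modified cache line found by the snoop supersedes the memory copy. */
        uint64_t atomic_read_for_io_agent(uint32_t addr) {
            snoop_result_t snoop = fsb_snoop(addr);
            uint64_t       mem   = dram_read(addr);
            return snoop.hit_modified ? snoop.data : mem;
        }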

    Instruction-assisted cache management for efficient use of cache and memory
    10.
    Granted Patent (In Force)

    Publication No.: US07437510B2

    Publication Date: 2008-10-14

    Application No.: US11241538

    Filing Date: 2005-09-30

    IPC Classification: G06F12/00

    Abstract: Instruction-assisted cache management for efficient use of cache and memory. Hints (e.g., modifiers) are added to read and write memory access instructions to indicate that the memory access is for temporal data. In view of such hints, alternative cache and allocation policies are implemented that minimize cache and memory accesses. Under one policy, a write cache miss may result in a write of data to a partial cache line without a memory read/write cycle to fill the remainder of the line. Under another policy, a read cache miss may result in a read from memory without allocating or writing the read data to a cache line. A cache line soft-lock mechanism is also disclosed, wherein cache lines may be selectably soft-locked to indicate a preference for keeping those lines over non-locked lines.

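    As one small illustration, the C sketch below models the soft-lock preference during victim selection in a single 4-way set: invalid ways are chosen first, then any unlocked line, and only if every line is soft-locked does a locked line get evicted. The structure layout, way count, and function name choose_victim are assumptions for the sketch, not the patented hardware.

        #include <stdint.h>
        #include <stdbool.h>

        typedef struct {
            uint32_t tag;
            bool     valid;
            bool     soft_locked;   /* set via a hinted instruction to favor keeping the line */
        } cache_line_t;

        #define WAYS 4
        static cache_line_t set_lines[WAYS];   /* one set of an assumed 4-way cache */

        /* Victim selection that honors soft locks: a soft lock expresses a
         * preference, not a hard pin, so eviction still succeeds when every
         * line in the set is locked. */
        int choose_victim(void) {
            for (int i = 0; i < WAYS; i++)
                if (!set_lines[i].valid)
                    return i;               /* free way first          */
            for (int i = 0; i < WAYS; i++)
                if (!set_lines[i].soft_locked)
                    return i;               /* then any unlocked line  */
            return 0;                       /* all locked: fall back   */
        }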