Method to provide cache management commands for a DMA controller
    1.
    Invention grant (Expired)

    Publication No.: US07657667B2

    Publication Date: 2010-02-02

    Application No.: US10809553

    Filing Date: 2004-03-25

    IPC Class: G06F13/28

    Abstract: The present invention provides a method and a system for providing cache management commands in a system supporting a DMA mechanism and caches. A DMA mechanism is set up by a processor. Software running on the processor generates cache management commands. The DMA mechanism carries out the commands, thereby enabling software-program management of the caches. The commands include commands for writing data to the cache, for loading data from the cache, and for marking data in the cache as no longer needed. The cache can be a system cache or a DMA cache.
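
    The abstract above describes software-issued cache commands (write, load, mark-as-unneeded) carried out by a DMA engine. Below is a minimal sketch of that idea in C; the command names, structures, and printf-based "execution" are illustrative assumptions, not taken from the patent.

```c
/* Hypothetical sketch: software queues cache management commands and a
 * DMA mechanism carries them out. Names and layout are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    DMA_CACHE_WRITE,      /* write (flush) cached data to memory          */
    DMA_CACHE_LOAD,       /* load (prefetch) data from memory into cache  */
    DMA_CACHE_INVALIDATE  /* mark cached data as no longer needed         */
} dma_cache_cmd_t;

typedef struct {
    dma_cache_cmd_t cmd;
    uint64_t        addr;   /* effective address of the region */
    uint32_t        size;   /* size of the region in bytes     */
} dma_cache_request;

/* The DMA mechanism carries out each command on behalf of software. */
static void dma_execute(const dma_cache_request *req)
{
    switch (req->cmd) {
    case DMA_CACHE_WRITE:
        printf("flush      0x%llx (%u bytes) to memory\n",
               (unsigned long long)req->addr, (unsigned)req->size);
        break;
    case DMA_CACHE_LOAD:
        printf("prefetch   0x%llx (%u bytes) into cache\n",
               (unsigned long long)req->addr, (unsigned)req->size);
        break;
    case DMA_CACHE_INVALIDATE:
        printf("invalidate 0x%llx (%u bytes)\n",
               (unsigned long long)req->addr, (unsigned)req->size);
        break;
    }
}

int main(void)
{
    dma_cache_request reqs[] = {
        { DMA_CACHE_LOAD,       0x1000, 128 },
        { DMA_CACHE_WRITE,      0x1000, 128 },
        { DMA_CACHE_INVALIDATE, 0x1000, 128 },
    };
    for (size_t i = 0; i < sizeof reqs / sizeof reqs[0]; i++)
        dma_execute(&reqs[i]);
    return 0;
}
```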


    System and method for identifying and accessing streaming data in a locked portion of a cache
    3.
    Invention grant (Expired)

    Publication No.: US06961820B2

    Publication Date: 2005-11-01

    Application No.: US10366440

    Filing Date: 2003-02-12

    IPC Class: G06F12/08 G06F12/00 G06F12/12

    CPC Class: G06F12/126

    Abstract: A system and method are provided for efficiently processing data with a cache in a computer system. The computer system has a processor, a cache and a system memory. The processor issues a data request for streaming data. The streaming data has one or more small data portions. The system memory is in communication with the processor. The system memory has a specific area for storing the streaming data. The cache is coupled to the processor. The cache has a predefined area locked for the streaming data. A cache controller is coupled to the cache and is in communication with both the processor and the system memory to transmit at least one small data portion of the streaming data from the specific area of the system memory to the predefined area of the cache when that small data portion is not found in the predefined area of the cache.
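
    As a rough illustration of the controller behaviour described above, the sketch below models a small locked cache region that is filled from a dedicated streaming-data area of memory on a miss. All identifiers (locked_region, stream_area, controller_access) and sizes are hypothetical.

```c
/* Minimal sketch: check the locked cache portion for a small streaming
 * data portion; on a miss, fill it from the dedicated memory area. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LOCKED_LINES 4
#define LINE_BYTES   32

typedef struct {
    int      valid;
    uint64_t tag;
    uint8_t  data[LINE_BYTES];
} locked_line;

static locked_line locked_region[LOCKED_LINES];             /* locked cache portion   */
static uint8_t stream_area[LOCKED_LINES * LINE_BYTES * 4];  /* area in system memory  */

/* Return a pointer to the cached copy of the line holding 'addr',
 * filling it from the streaming area on a miss. */
static const uint8_t *controller_access(uint64_t addr)
{
    uint64_t tag  = addr / LINE_BYTES;
    unsigned slot = (unsigned)(tag % LOCKED_LINES);
    locked_line *line = &locked_region[slot];

    if (!line->valid || line->tag != tag) {   /* miss in the locked portion */
        memcpy(line->data, &stream_area[tag * LINE_BYTES], LINE_BYTES);
        line->tag   = tag;
        line->valid = 1;
        printf("filled slot %u from the streaming area\n", slot);
    }
    return line->data;
}

int main(void)
{
    for (unsigned i = 0; i < sizeof stream_area; i++)
        stream_area[i] = (uint8_t)i;
    controller_access(0x00);   /* miss: fill from memory   */
    controller_access(0x04);   /* hit: same cached line    */
    controller_access(0x80);   /* miss: a different line   */
    return 0;
}
```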


    On-chip data transfer in multi-processor system
    4.
    Invention grant (Expired)

    Publication No.: US06820143B2

    Publication Date: 2004-11-16

    Application No.: US10322127

    Filing Date: 2002-12-17

    IPC Class: G06F13/28

    CPC Class: G06F12/0817 G06F12/0897

    Abstract: A system and method are provided for improving performance of a computer system by providing a direct data transfer between different processors. The system includes a first and second processor. The first processor is in need of data. The system also includes a directory in communication with the first processor. The directory receives a data request for the data and contains information as to where the data is stored. A cache is coupled to the second processor. An internal bus is coupled between the first processor and the cache to transfer the data from the cache to the first processor when the data is found to be stored in the cache.
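
    The sketch below illustrates the directory-based flow described above: a requesting processor consults a directory, and when the data is held in the other processor's cache, it is transferred directly over an internal bus instead of being fetched from system memory. The directory layout and function names are assumptions for illustration only.

```c
/* Illustrative sketch of a directory-based direct transfer between two
 * on-chip processors; identifiers are hypothetical. */
#include <stdio.h>

#define DIR_ENTRIES 8
#define LOC_MEMORY  (-1)

/* directory[i] records which processor's cache (0 or 1) holds block i,
 * or LOC_MEMORY if only system memory has a copy. */
static int directory[DIR_ENTRIES] = {
    1, LOC_MEMORY, 1, LOC_MEMORY, 0, LOC_MEMORY, LOC_MEMORY, 1
};

/* Processor 'requester' asks for block 'blk'. */
static void request_block(int requester, int blk)
{
    int holder = directory[blk];
    if (holder != LOC_MEMORY && holder != requester) {
        /* Found in the other processor's cache: transfer directly over
         * the internal bus, without going out to system memory. */
        printf("block %d: cache-to-cache transfer P%d -> P%d over internal bus\n",
               blk, holder, requester);
    } else {
        printf("block %d: fetched from system memory by P%d\n", blk, requester);
    }
}

int main(void)
{
    request_block(0, 0);  /* held by P1's cache -> direct transfer */
    request_block(0, 1);  /* not cached anywhere -> memory fetch   */
    return 0;
}
```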


    Implementation of an LRU and MRU algorithm in a partitioned cache
    5.
    Invention grant (In force)

    Publication No.: US06931493B2

    Publication Date: 2005-08-16

    Application No.: US10346294

    Filing Date: 2003-01-16

    IPC Class: G06F12/12 G06F12/08

    CPC Class: G06F12/123 G06F12/128

    Abstract: The present invention provides for determining an MRU or LRU way of a partitioned cache. The partitioned cache has a plurality of ways. There are a plurality of partitions, each partition comprising at least one way. An updater is employable to update a logic table as a function of an access of a way. Partition comparison logic is employable to determine whether two ways are members of the same partition, and to allow the comparison of the ways correlating to a first matrix index and a second matrix index. An intersection generator is employable to create an intersection box of the memory table as a function of the first and second matrix indices. Access order logic is employable to combine the output of the intersection generator, thereby determining which way is the most or least recently used way.
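
    The abstract describes matrix-style access-order tracking restricted to the ways of a single partition. The sketch below uses the standard LRU-matrix technique limited to one partition to show the idea; it is not the patent's exact logic (the updater, intersection generator, and access order logic are modeled here as plain C functions and arrays).

```c
/* LRU-matrix sketch restricted to one cache partition.
 * order[i][j] == 1 means way i was used more recently than way j. */
#include <stdio.h>

#define WAYS 4

static int order[WAYS][WAYS];                    /* access-order logic table     */
static int partition_of[WAYS] = { 0, 0, 1, 1 };  /* two 2-way partitions         */

/* Updater: when 'way' is accessed it becomes the MRU way. */
static void touch(int way)
{
    for (int j = 0; j < WAYS; j++) {
        order[way][j] = 1;
        order[j][way] = 0;
    }
}

/* Return the LRU (or MRU) way considering only the ways of 'part'. */
static int find_way(int part, int want_mru)
{
    for (int i = 0; i < WAYS; i++) {
        if (partition_of[i] != part) continue;
        int more_recent_than_all = 1, less_recent_than_all = 1;
        for (int j = 0; j < WAYS; j++) {
            if (j == i || partition_of[j] != part)  /* same-partition check */
                continue;
            if (!order[i][j]) more_recent_than_all = 0;
            if (order[i][j])  less_recent_than_all = 0;
        }
        if (want_mru && more_recent_than_all)  return i;
        if (!want_mru && less_recent_than_all) return i;
    }
    return -1;
}

int main(void)
{
    touch(0); touch(1); touch(2); touch(3); touch(2);
    printf("partition 0: LRU=way %d, MRU=way %d\n", find_way(0, 0), find_way(0, 1));
    printf("partition 1: LRU=way %d, MRU=way %d\n", find_way(1, 0), find_way(1, 1));
    return 0;
}
```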


    CONTROLLING BANDWIDTH RESERVATIONS METHOD AND APPARATUS
    6.
    Invention application (Expired)

    Publication No.: US20110246695A1

    Publication Date: 2011-10-06

    Application No.: US13162917

    Filing Date: 2011-06-17

    IPC Class: G06F12/00

    CPC Class: H04L41/0896

    Abstract: Disclosed is an apparatus which operates to substantially evenly distribute commands and/or data packets issued from a managed program or other entity over a given time period. The even distribution of these commands or data packets minimizes congestion in critical resources such as memory, I/O devices and/or the bus for transferring the data between source and destination. Any unmanaged commands or data packets are treated as in conventional technology.
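
    As a simple illustration of even distribution over a period (not the patented apparatus itself), the sketch below paces managed commands at a fixed interval so that a reservation of N commands is spread across the whole period, while unmanaged commands pass through unchanged. The constants and function names are assumptions.

```c
/* Pacing sketch: spread a managed entity's reserved commands evenly
 * across its time period; unmanaged commands are issued as usual. */
#include <stdio.h>

#define PERIOD_CYCLES   100                          /* reservation period             */
#define RESERVED_CMDS    10                          /* managed commands per period    */
#define ISSUE_INTERVAL  (PERIOD_CYCLES / RESERVED_CMDS)

static int next_issue_cycle = 0;

/* Returns 1 if the command may issue in this cycle, 0 if it must wait. */
static int may_issue(int cycle, int managed)
{
    if (!managed)
        return 1;                                    /* conventional handling          */
    if (cycle >= next_issue_cycle) {
        next_issue_cycle = cycle + ISSUE_INTERVAL;   /* space out the next command     */
        return 1;
    }
    return 0;
}

int main(void)
{
    int issued = 0;
    for (int cycle = 0; cycle < PERIOD_CYCLES; cycle++) {
        if (may_issue(cycle, 1)) {                   /* a managed command offered every cycle */
            printf("managed command issued at cycle %d\n", cycle);
            issued++;
        }
    }
    printf("total issued in one period: %d\n", issued);
    return 0;
}
```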


    Method and apparatus for generating a mask value and command for extreme data rate memories utilizing error correction codes
    7.
    Invention grant (In force)

    Publication No.: US07287103B2

    Publication Date: 2007-10-23

    Application No.: US11130911

    Filing Date: 2005-05-17

    IPC Class: G06F3/00 G06F12/00

    CPC Class: G11C7/1006

    Abstract: A method, an apparatus, and a computer program product are provided for the handling of write mask operations in an XDR™ DRAM memory system. This invention eliminates the need for a two-port array because the mask generation is done as the data is received. Less logic is needed for the mask calculation because only 144 of the 256 possible byte values are decoded. The mask value is generated and stored in a mask array. Independently, the write data is stored in a write buffer. The mask value is utilized to generate a write mask command. Once the write mask command is issued, the write data and the mask value are transmitted to a multiplexer. The multiplexer masks the write data using the mask value, so that the masked data can be stored in the XDR DRAMs.
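
    A much-simplified illustration of the separate mask array and write buffer described above (it does not model the XDR-specific 144-of-256 byte-value decoding): a per-byte mask bit is set as each byte arrives, and the masked bytes are later merged into the memory row. All names are hypothetical.

```c
/* Simplified write-mask sketch: mask bits are built as data arrives,
 * kept apart from the write buffer, then used to merge masked bytes. */
#include <stdint.h>
#include <stdio.h>

#define BYTES_PER_WRITE 8

typedef struct {
    uint8_t data[BYTES_PER_WRITE];  /* write buffer                       */
    uint8_t mask;                   /* mask array: bit i = byte i valid   */
} masked_write;

/* Record one incoming byte and set its mask bit as it is received. */
static void receive_byte(masked_write *w, int lane, uint8_t value)
{
    w->data[lane] = value;
    w->mask |= (uint8_t)(1u << lane);
}

/* "Multiplexer": store only the masked bytes into the memory row. */
static void apply_write(const masked_write *w, uint8_t *row)
{
    for (int i = 0; i < BYTES_PER_WRITE; i++)
        if (w->mask & (1u << i))
            row[i] = w->data[i];
}

int main(void)
{
    uint8_t row[BYTES_PER_WRITE] = {0};
    masked_write w = { {0}, 0 };
    receive_byte(&w, 1, 0xAA);
    receive_byte(&w, 6, 0xBB);
    apply_write(&w, row);
    for (int i = 0; i < BYTES_PER_WRITE; i++)
        printf("%02X ", row[i]);      /* only bytes 1 and 6 are written   */
    printf("\n");
    return 0;
}
```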


    Controlling bandwidth reservations method and apparatus
    8.
    Invention grant (In force)

    Publication No.: US08483227B2

    Publication Date: 2013-07-09

    Application No.: US10718302

    Filing Date: 2003-11-20

    IPC Class: H04L12/56

    CPC Class: H04L41/0896

    Abstract: Disclosed is an apparatus which operates to substantially evenly distribute commands and/or data packets issued from a managed program or other entity over a given time period. The even distribution of these commands or data packets minimizes congestion in critical resources such as memory, I/O devices and/or the bus for transferring the data between source and destination. Any unmanaged commands or data packets are treated as in conventional technology.


    Memory barriers primitives in an asymmetric heterogeneous multiprocessor environment
    9.
    Invention grant (In force)

    Publication No.: US07725618B2

    Publication Date: 2010-05-25

    Application No.: US10902474

    Filing Date: 2004-07-29

    IPC Class: G06F13/28

    CPC Class: G06F13/28

    Abstract: The present invention provides a method and apparatus for creating memory barriers in a Direct Memory Access (DMA) device. A memory barrier command is received and a memory command is received. The memory command is executed based on the memory barrier command. A bus operation is initiated based on the memory barrier command. A bus operation acknowledgment is received based on the bus operation. The memory barrier command is executed based on the bus operation acknowledgment. In a particular aspect, memory barrier commands are direct memory access sync (dmasync) and direct memory access enforce in-order execution of input/output (dmaeieio) commands.
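
    The sketch below models the ordering semantics described in the abstract with a simple in-order DMA command queue: a barrier command initiates a bus operation and completes only after the acknowledgment, keeping later commands behind it. The queue model and function names are illustrative assumptions; only the command names dmasync/dmaeieio come from the abstract.

```c
/* Illustrative model of barrier ordering in a DMA command queue. */
#include <stdio.h>

typedef enum { CMD_PUT, CMD_GET, CMD_BARRIER } cmd_kind;

typedef struct {
    cmd_kind    kind;
    const char *name;
} dma_cmd;

/* Stand-in for the bus acknowledging the barrier's bus operation. */
static int bus_operation_acknowledged(void) { return 1; }

static void process_queue(const dma_cmd *q, int n)
{
    for (int i = 0; i < n; i++) {
        if (q[i].kind == CMD_BARRIER) {
            /* Earlier commands have already been issued; the barrier
             * initiates a bus operation and completes only after the
             * acknowledgment, so later commands stay ordered behind it. */
            while (!bus_operation_acknowledged())
                ;
            printf("%s: bus operation acknowledged, barrier complete\n", q[i].name);
        } else {
            printf("execute %s\n", q[i].name);
        }
    }
}

int main(void)
{
    dma_cmd queue[] = {
        { CMD_PUT,     "put A"   },
        { CMD_BARRIER, "dmasync" },
        { CMD_GET,     "get B"   },   /* guaranteed to follow the barrier */
    };
    process_queue(queue, 3);
    return 0;
}
```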


    System and Method for Getllar Hit Cache Line Data Forward Via Data-Only Transfer Protocol Through BEB Bus
    10.
    Invention application (Pending, published)

    Publication No.: US20090077322A1

    Publication Date: 2009-03-19

    Application No.: US11857674

    Filing Date: 2007-09-19

    IPC Class: G06F12/08

    CPC Class: G06F12/0831 G06F2212/1016

    Abstract: A system and method for using a data-only transfer protocol to store atomic cache line data in a local storage area is presented. A processing engine includes an atomic cache and a local storage. When the processing engine encounters a request to transfer cache line data from the atomic cache to the local storage (e.g., a GETLLAR command), the processing engine utilizes a data-only transfer protocol to pass the cache line data through the external bus node and back to the processing engine. The data-only transfer protocol comprises a data phase and does not include a prior command phase or snoop phase, because the processing engine communicates with the bus node rather than the entire computer system when it sends a data request to transfer data to itself.
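
    As a conceptual sketch of the phase difference described above (an illustration under assumed names, not the bus implementation): a transfer whose source and target are the same processing engine runs a data phase only, while a transfer involving another agent would use the full command/snoop/data sequence.

```c
/* Sketch of a self-targeted transfer skipping the command and snoop
 * phases and running a data phase only. Names are hypothetical. */
#include <stdio.h>

typedef struct { int source_engine, target_engine; } transfer;

static void run_transfer(const transfer *t)
{
    if (t->source_engine == t->target_engine) {
        /* Data-only protocol: the engine moves its own atomic-cache line
         * to local storage through the bus node; no other agents need to
         * see a command or snoop phase. */
        printf("data phase only\n");
    } else {
        printf("command phase -> snoop phase -> data phase\n");
    }
}

int main(void)
{
    transfer self  = { 0, 0 };   /* e.g., a cache-line hit forwarded to local storage */
    transfer other = { 0, 1 };
    run_transfer(&self);
    run_transfer(&other);
    return 0;
}
```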
