Multilevel cache system and method having a merged tag array to store tags for multiple data arrays
    31.
    Invention Application
    Multilevel cache system and method having a merged tag array to store tags for multiple data arrays (Expired)

    Publication Number: US20040030834A1

    Publication Date: 2004-02-12

    Application Number: US10600715

    Application Date: 2003-06-23

    Inventor: Vinod Sharma

    CPC classification number: G06F12/1027 G06F12/0897

    Abstract: A multilevel cache system comprises a first data array, a second data array coupled to the first data array, and a merged tag array coupled to the second data array.

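    A minimal software sketch of the arrangement described above, assuming direct-mapped arrays that share one set index; the sizes, field names and layout are illustrative assumptions, not details taken from the patent. A single merged tag array is consulted once, and its valid bits select which of the two data arrays supplies the line.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define SETS       64          /* sets shared by both levels (assumed) */
        #define LINE_BYTES 64

        /* One merged tag entry covers the same set index in both data arrays. */
        struct merged_tag {
            uint32_t tag;
            bool     valid_l1;         /* line present in first data array  */
            bool     valid_l2;         /* line present in second data array */
        };

        static struct merged_tag tags[SETS];          /* single, shared tag array */
        static uint8_t l1_data[SETS][LINE_BYTES];     /* first data array         */
        static uint8_t l2_data[SETS][LINE_BYTES];     /* second data array        */

        /* Look up an address once in the merged tag array and report which
         * data array (if any) holds the line. */
        static const uint8_t *lookup(uint32_t addr)
        {
            uint32_t set = (addr / LINE_BYTES) % SETS;
            uint32_t tag = (addr / LINE_BYTES) / SETS;
            struct merged_tag *e = &tags[set];

            if (e->tag != tag)
                return NULL;                           /* miss in both levels      */
            if (e->valid_l1)
                return l1_data[set];                   /* hit in first data array  */
            if (e->valid_l2)
                return l2_data[set];                   /* hit in second data array */
            return NULL;
        }

        int main(void)
        {
            uint32_t addr = 0x1040;
            uint32_t set = (addr / LINE_BYTES) % SETS;
            tags[set].tag = (addr / LINE_BYTES) / SETS;
            tags[set].valid_l2 = true;                 /* pretend L2 holds the line */
            printf("lookup: %s\n", lookup(addr) ? "hit" : "miss");
            return 0;
        }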

    Cache memory and method for addressing
    32.
    Invention Application
    Cache memory and method for addressing (Pending, Published)

    Publication Number: US20040015644A1

    Publication Date: 2004-01-22

    Application Number: US10619979

    Application Date: 2003-07-15

    CPC classification number: G06F12/0864 G06F12/1408

    Abstract: In a cache memory whose addresses are split into tag, index and offset parts, a transformation device is provided in hardware form for performing a transformation between a respective tag part of the address and a coded tag address that is unambiguous in both directions. In addition, the index field of the addresses of the cache memory can be encoded by another mapping procedure that maps the index field onto a coded index field and is unambiguous in both directions. A hardware unit of suitable configuration is also used for this purpose.

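    To illustrate a mapping that is "unambiguous in both directions" (i.e. invertible), the sketch below XORs the tag and index fields with fixed keys. The field widths and keys are assumptions, and XOR is only one possible invertible transformation, not the one claimed by the patent.

        #include <stdint.h>
        #include <stdio.h>

        /* Illustrative address split: | tag (16) | index (10) | offset (6) | (assumed widths). */
        #define OFFSET_BITS 6
        #define INDEX_BITS  10
        #define INDEX_MASK  ((1u << INDEX_BITS) - 1)

        /* Any bijective map works; XOR with a fixed key is the simplest example
         * because it is its own inverse. The keys below are arbitrary. */
        static const uint32_t TAG_KEY   = 0xA5C3u;
        static const uint32_t INDEX_KEY = 0x15Au;

        static uint32_t encode_tag(uint32_t tag)     { return tag ^ TAG_KEY; }
        static uint32_t decode_tag(uint32_t coded)   { return coded ^ TAG_KEY; }
        static uint32_t encode_index(uint32_t idx)   { return (idx ^ INDEX_KEY) & INDEX_MASK; }
        static uint32_t decode_index(uint32_t coded) { return (coded ^ INDEX_KEY) & INDEX_MASK; }

        int main(void)
        {
            uint32_t addr   = 0x12345678u;
            uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
            uint32_t index  = (addr >> OFFSET_BITS) & INDEX_MASK;
            uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

            uint32_t ct = encode_tag(tag);
            uint32_t ci = encode_index(index);

            /* The map is invertible in both directions, so the original
             * fields are recovered exactly. */
            printf("tag   %#x -> %#x -> %#x\n", tag,   ct, decode_tag(ct));
            printf("index %#x -> %#x -> %#x\n", index, ci, decode_index(ci));
            (void)offset;
            return 0;
        }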

    System and method for optimistic caching
    33.
    Invention Application
    System and method for optimistic caching (In Force)

    Publication Number: US20030233522A1

    Publication Date: 2003-12-18

    Application Number: US10340023

    Application Date: 2003-01-10

    CPC classification number: G06F12/0815 Y10S707/99938 Y10S707/99952

    Abstract: Transactions are granted concurrent access to a data item through the use of an optimistic concurrency algorithm. Each transaction gets its own instance of the data item, such as in a cache or in an entity bean, such that it is not necessary to lock the data. The instances can come from the data or from other instances. When a transaction updates the data item, the optimistic concurrency algorithm ensures that the other instances are notified that the data item has been changed and that it is necessary to read a new instance, from the database or from an updated instance. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.

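    A minimal sketch of the optimistic scheme described above: each transaction works on its own copy of the data item, and a version counter stands in for the notification that the item has changed. The struct and function names are invented for illustration and are not the patent's API.

        #include <stdio.h>
        #include <stdbool.h>

        /* Shared data item with a version counter (names invented for the sketch). */
        struct data_item { int value; int version; };

        /* Each transaction works on its own private instance instead of locking. */
        struct txn_copy { int value; int snapshot_version; };

        static struct txn_copy begin_txn(const struct data_item *d)
        {
            struct txn_copy c = { d->value, d->version };
            return c;
        }

        /* Optimistic commit: succeeds only if no other transaction updated the
         * item since this copy was taken; otherwise the caller must re-read. */
        static bool commit_txn(struct data_item *d, struct txn_copy *c)
        {
            if (d->version != c->snapshot_version)
                return false;                  /* stale copy: caller refreshes */
            d->value = c->value;
            d->version++;
            return true;
        }

        int main(void)
        {
            struct data_item item = { 100, 0 };
            struct txn_copy a = begin_txn(&item);   /* two concurrent transactions */
            struct txn_copy b = begin_txn(&item);

            a.value += 10;
            printf("txn A commit: %s\n", commit_txn(&item, &a) ? "ok" : "retry");

            b.value += 5;                           /* B's copy is now stale */
            printf("txn B commit: %s\n", commit_txn(&item, &b) ? "ok" : "retry");
            return 0;
        }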

    Data prefetching apparatus in a data processing system and method therefor
    34.
    Invention Application
    Data prefetching apparatus in a data processing system and method therefor (In Force)

    Publication Number: US20030204673A1

    Publication Date: 2003-10-30

    Application Number: US10132918

    Application Date: 2002-04-26

    Abstract: A data processing system (20) is able to perform parameter-selectable prefetch instructions to prefetch data for a cache (38). When attempting to be backward compatible with previously written code, sometimes performing this instruction can result in attempting to prefetch redundant data by prefetching the same data twice. In order to prevent this, the parameters of the instruction are analyzed to determine if such redundant data will be prefetched. If so, then the parameters are altered to avoid prefetching redundant data. In some of the possibilities for the parameters of the instruction, the altering of the parameters requires significant circuitry so that an alternative approach is used. This alternative but slower approach, which can be used in the same system with the first approach, detects if the line of the cache that is currently being requested is the same as the previous request. If so, the current request is not executed.

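    The slower fallback described above (compare the currently requested cache line with the previous request and skip the prefetch when they match) can be sketched as follows; the line size and the interface are assumptions for illustration only.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define LINE_BYTES 64                 /* assumed cache line size */

        /* Suppress a prefetch request whose cache line matches the previous
         * request, mirroring the "compare against the last request" fallback
         * described in the abstract (interface invented for the sketch). */
        static bool should_issue_prefetch(uint64_t addr)
        {
            static uint64_t last_line = UINT64_MAX;    /* no previous request yet */
            uint64_t line = addr / LINE_BYTES;

            if (line == last_line)
                return false;                          /* redundant: same line as before */
            last_line = line;
            return true;
        }

        int main(void)
        {
            uint64_t stream[] = { 0x1000, 0x1010, 0x1040, 0x1048, 0x1080 };
            for (unsigned i = 0; i < sizeof stream / sizeof stream[0]; i++)
                printf("addr 0x%llx -> %s\n",
                       (unsigned long long)stream[i],
                       should_issue_prefetch(stream[i]) ? "prefetch" : "skip (redundant)");
            return 0;
        }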

    Process for controlling reading data from a DRAM array
    35.
    Invention Application
    Process for controlling reading data from a DRAM array (Expired)

    Publication Number: US20030200417A1

    Publication Date: 2003-10-23

    Application Number: US10452191

    Application Date: 2003-06-02

    Abstract: A memory circuit (14) having features specifically adapted to permit the memory circuit (14) to serve as a video frame memory is disclosed. The memory circuit (14) contains a dynamic random access memory array (24) with buffers (18, 20) on input and output data ports (22) thereof to permit asynchronous read, write and refresh accesses to the memory array (24). The memory circuit (14) is accessed both serially and randomly. An address generator (28) contains an address buffer register (36) which stores a random access address and an address sequencer (40) which provides a stream of addresses to the memory array (24). An initial address for the stream of addresses is the random access address stored in the address buffer register (36).

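    A software model of the address generator described above, assuming an address buffer register that latches a random-access start address and a sequencer that then streams consecutive addresses from it. The structure, names and step size are illustrative assumptions, not the patent's circuit.

        #include <stdint.h>
        #include <stdio.h>

        /* Model of the address generator: a buffer register latches a
         * random-access address, and a sequencer then streams consecutive
         * addresses starting from it. */
        struct addr_gen {
            uint32_t buffer_reg;     /* latched random-access address */
            uint32_t sequencer;      /* current address in the stream  */
        };

        static void load_random_address(struct addr_gen *g, uint32_t addr)
        {
            g->buffer_reg = addr;
            g->sequencer  = addr;    /* stream starts at the latched address */
        }

        static uint32_t next_address(struct addr_gen *g)
        {
            return g->sequencer++;   /* serial access: one address per cycle */
        }

        int main(void)
        {
            struct addr_gen gen;
            load_random_address(&gen, 0x0400);   /* random access sets the start */
            for (int i = 0; i < 4; i++)          /* then the array is read serially */
                printf("row address 0x%04x\n", next_address(&gen));
            return 0;
        }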

    Apparatus and method for a skip-list based cache
    36.
    Invention Application
    Apparatus and method for a skip-list based cache (Pending, Published)

    Publication Number: US20030196024A1

    Publication Date: 2003-10-16

    Application Number: US10122183

    Application Date: 2002-04-16

    Applicant: EXANET, INC.

    Inventor: Shahar Frank

    CPC classification number: G06F12/0864 G06F12/0886

    Abstract: An apparatus and a method for the implementation of a skip-list based cache are shown. While the traditional cache is basically a fixed-length line based or fixed-size block based structure, which causes several performance problems for certain applications, the skip-list based cache provides a variable-size line or block that enables a higher level of flexibility in cache usage.

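    A minimal sketch of a skip-list keyed cache, assuming entries are keyed by their start offset and carry a variable length; the node layout, level count and probabilistic level promotion are standard skip-list choices, not details taken from the patent.

        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_LEVEL 4

        /* One cache entry: a variable-sized block keyed by its start offset.
         * The skip list keeps entries ordered so lookups stay fast even
         * though block sizes differ (field names invented for the sketch). */
        struct node {
            unsigned long offset;            /* key: start of the cached block */
            unsigned long length;            /* variable block size            */
            struct node *next[MAX_LEVEL];
        };

        static struct node head;             /* sentinel node */

        static int random_level(void)
        {
            int lvl = 1;
            while (lvl < MAX_LEVEL && (rand() & 1))
                lvl++;
            return lvl;
        }

        static void insert(unsigned long offset, unsigned long length)
        {
            struct node *update[MAX_LEVEL], *x = &head;
            for (int i = MAX_LEVEL - 1; i >= 0; i--) {
                while (x->next[i] && x->next[i]->offset < offset)
                    x = x->next[i];
                update[i] = x;
            }
            struct node *n = calloc(1, sizeof *n);
            n->offset = offset;
            n->length = length;
            int lvl = random_level();
            for (int i = 0; i < lvl; i++) {
                n->next[i] = update[i]->next[i];
                update[i]->next[i] = n;
            }
        }

        /* Find the cached variable-sized block covering addr, if any. */
        static struct node *lookup(unsigned long addr)
        {
            struct node *x = &head;
            for (int i = MAX_LEVEL - 1; i >= 0; i--)
                while (x->next[i] && x->next[i]->offset <= addr)
                    x = x->next[i];
            if (x != &head && addr < x->offset + x->length)
                return x;
            return NULL;
        }

        int main(void)
        {
            insert(0,    512);                /* variable-sized cached blocks */
            insert(4096, 128);
            insert(8192, 2048);
            printf("4160: %s\n", lookup(4160) ? "hit" : "miss");
            printf("6000: %s\n", lookup(6000) ? "hit" : "miss");
            return 0;
        }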

    Multiprocessor environment supporting variable-sized coherency transactions
    37.
    Invention Application
    Multiprocessor environment supporting variable-sized coherency transactions (Expired)

    Publication Number: US20030159005A1

    Publication Date: 2003-08-21

    Application Number: US10077560

    Application Date: 2002-02-15

    CPC classification number: G06F12/0831 Y02D10/13

    Abstract: A method and system for performing variable-sized memory coherency transactions. A bus interface unit coupled between a slave and a master may be configured to receive a request (master request) comprising a plurality of coherency granules from the master. Each snooping unit in the system may be configured to snoop a different number of coherency granules in the master request at a time. Once the bus interface unit has received a collection of sets of indications from each snooping logic unit indicating that the associated collection of coherency granules in the master request have been snooped by each snooping unit and that the data at the addresses for the collection of coherency granules snooped has not been updated, the bus interface unit may allow the data at the addresses of those coherency granules not updated to be transferred between the requesting master and the slave.

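    A simplified simulation of the snooping scheme above, assuming one master request of eight coherency granules and two snoopers that each check a different number of granules per pass; the bus interface unit releases only the granules every snooper reports as not updated. All sizes and names are invented for the sketch.

        #include <stdio.h>
        #include <stdbool.h>

        #define GRANULES  8           /* coherency granules in one master request (assumed) */
        #define SNOOPERS  2

        /* Simulated snoop result: true means the snooper found the granule's
         * data unmodified, so it may be transferred as-is. */
        static bool snoop_granule(int snooper, int granule)
        {
            /* Pretend snooper 1 holds granule 5 in a modified state. */
            return !(snooper == 1 && granule == 5);
        }

        int main(void)
        {
            /* Snoopers work at different widths: snooper 0 checks 4 granules
             * per pass, snooper 1 checks 2 (values invented for the sketch). */
            int width[SNOOPERS] = { 4, 2 };
            bool clean[GRANULES];

            for (int g = 0; g < GRANULES; g++)
                clean[g] = true;

            for (int s = 0; s < SNOOPERS; s++)
                for (int start = 0; start < GRANULES; start += width[s])
                    for (int g = start; g < start + width[s] && g < GRANULES; g++)
                        if (!snoop_granule(s, g))
                            clean[g] = false;

            /* The bus interface unit only lets unmodified granules move
             * between the requesting master and the slave. */
            for (int g = 0; g < GRANULES; g++)
                printf("granule %d: %s\n", g, clean[g] ? "transfer" : "hold (updated)");
            return 0;
        }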

    Dirty data protection for cache memories
    38.
    Invention Application
    Dirty data protection for cache memories (In Force)

    Publication Number: US20030149845A1

    Publication Date: 2003-08-07

    Application Number: US10071014

    Application Date: 2002-02-07

    Inventor: Peter L. Fu

    CPC classification number: G06F12/0891

    Abstract: A method is described for protecting dirty data in cache memories in a cost-effective manner. When an instruction to write data to a memory location is received, and that memory location is being cached, the data is written to a plurality of cache lines, which are referred to as duplicate cache lines. When the data is written back to memory, one of the duplicate cache lines is read. If the cache line is not corrupt, it is written back to the appropriate memory location and marked available. In one embodiment, if more duplicate cache lines exist, they are invalidated. In another embodiment, the other corresponding cache lines may be read for the highest confidence of reliability, and then marked clean or invalid.

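    A minimal sketch of the duplicate-line idea above, using a parity byte as a stand-in for whatever error detection the cache actually uses: the dirty data is written to two duplicate lines, and writeback falls back to the second copy if the first reads back corrupt. The structures are illustrative, not the patent's implementation.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <stdbool.h>

        #define LINE_BYTES 16         /* shortened line for the sketch */

        /* A cache line with a simple parity byte standing in for the error
         * detection the hardware would use (invented for the sketch). */
        struct line {
            uint8_t data[LINE_BYTES];
            uint8_t parity;
            bool    valid;
        };

        static uint8_t parity_of(const uint8_t *d)
        {
            uint8_t p = 0;
            for (int i = 0; i < LINE_BYTES; i++)
                p ^= d[i];
            return p;
        }

        /* Dirty write: the same data goes into two duplicate lines. */
        static void write_dirty(struct line dup[2], const uint8_t *d)
        {
            for (int c = 0; c < 2; c++) {
                memcpy(dup[c].data, d, LINE_BYTES);
                dup[c].parity = parity_of(d);
                dup[c].valid  = true;
            }
        }

        /* Writeback: read one copy; if it is corrupt, fall back to the
         * duplicate. Both copies are marked available afterwards. */
        static bool writeback(struct line dup[2], uint8_t *mem)
        {
            for (int c = 0; c < 2; c++) {
                if (dup[c].valid && parity_of(dup[c].data) == dup[c].parity) {
                    memcpy(mem, dup[c].data, LINE_BYTES);
                    dup[0].valid = dup[1].valid = false;
                    return true;
                }
            }
            return false;              /* both copies corrupt: unrecoverable */
        }

        int main(void)
        {
            static struct line dup[2];
            uint8_t payload[LINE_BYTES] = "dirty line data";
            uint8_t memory[LINE_BYTES] = { 0 };

            write_dirty(dup, payload);
            dup[0].data[3] ^= 0xFF;    /* corrupt the first copy */
            printf("writeback %s, memory holds \"%.16s\"\n",
                   writeback(dup, memory) ? "ok" : "failed", memory);
            return 0;
        }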

    Reducing delay of command completion due to overlap condition
    40.
    Invention Application
    Reducing delay of command completion due to overlap condition (In Force)

    Publication Number: US20030105919A1

    Publication Date: 2003-06-05

    Application Number: US10143235

    Application Date: 2002-05-10

    CPC classification number: G06F3/0601 G06F2003/0697

    Abstract: Method and apparatus for transferring data between a host device and a data storage device having a first memory space and a second memory space. The host issues access commands to store and retrieve data. The device stores commands in the first memory space pending transfer to the second memory space. An interface circuit evaluates relative proximity of first and second sets of LBAs associated with pending first and second commands, and delays promotion of later pending commands in front of earlier pending commands during an overlap condition. If the overlap is caused by performance enhancing features (PEF) the PEFs are disabled so the commands can be scheduled for disc access. Indicators are set in the commands to signal that a PEF has caused the overlap and that PEF can be disabled. Values are added to indicators in the commands such that the PEFs can be modified and avoid overlaps.

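    The overlap check described above can be sketched as a comparison of LBA ranges, where a performance-enhancing feature (modelled here as extra read-ahead blocks) widens the earlier command's range; if the widened ranges overlap, the feature is disabled so both commands can be scheduled. The fields and values are assumptions for illustration only.

        #include <stdio.h>
        #include <stdbool.h>

        /* A queued access command covering a range of logical block addresses.
         * extra_blocks models a performance-enhancing feature (e.g. read-ahead)
         * that widens the range beyond what the host asked for (names invented). */
        struct command {
            unsigned long lba;
            unsigned long count;
            unsigned long extra_blocks;     /* added by the PEF */
            bool          pef_enabled;
        };

        static unsigned long range_end(const struct command *c)
        {
            return c->lba + c->count + (c->pef_enabled ? c->extra_blocks : 0);
        }

        static bool overlaps(const struct command *a, const struct command *b)
        {
            return a->lba < range_end(b) && b->lba < range_end(a);
        }

        int main(void)
        {
            struct command first  = { 1000, 64, 128, true };   /* earlier pending command */
            struct command second = { 1100, 32,   0, false };  /* later pending command   */

            if (overlaps(&first, &second) && first.pef_enabled) {
                /* The overlap comes from the PEF, not the host request itself,
                 * so disable the feature and let both commands be scheduled. */
                first.pef_enabled = false;
                printf("PEF disabled: overlap %s\n",
                       overlaps(&first, &second) ? "remains" : "cleared");
            }
            return 0;
        }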
