METHOD AND APPARATUS FOR CACHE VALIDATION FOR PROXY CACHES
    41.
    Invention Application
    Status: Expired

    Publication No.: US20030061272A1

    Publication Date: 2003-03-27

    Application No.: US09000713

    Filing Date: 1997-12-30

    CPC classification number: G06F17/30902

    Abstract: A proxy cache maintains a copy of multiple resources from various servers in a network. When the proxy cache must generate a validation request for at least one resource at one of the servers, the proxy cache piggybacks one or more additional cache validation requests related to documents presently stored in the cache but originating from or associated with the server in question. Upon receipt of an indication of the freshness or validity of the cached copy of the document, the proxy cache can then make a determination as to whether to request an update of the document.

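    A minimal Python sketch of the piggybacking idea described above (all class, method, and field names here are illustrative, not from the patent): when the cache must validate one resource, it bundles validators for every other cached resource associated with the same server into the same request.

```python
# Hypothetical sketch: piggyback cache-validation requests for other
# resources from the same server onto a required validation request.

class ProxyCache:
    def __init__(self):
        # url -> (server, etag) for every cached resource
        self.entries = {}

    def store(self, url, server, etag):
        self.entries[url] = (server, etag)

    def build_validation_request(self, url):
        """Build a validation request for `url`, piggybacking validators
        for every other cached resource from the same server."""
        server, etag = self.entries[url]
        primary = {"url": url, "etag": etag}
        piggyback = [
            {"url": u, "etag": e}
            for u, (s, e) in self.entries.items()
            if s == server and u != url
        ]
        return {"server": server, "validate": primary, "piggyback": piggyback}


cache = ProxyCache()
cache.store("/a.html", "example.org", "v1")
cache.store("/b.css", "example.org", "v2")
cache.store("/c.js", "other.net", "v3")

# Validating /a.html piggybacks /b.css (same server) but not /c.js.
req = cache.build_validation_request("/a.html")
```

    On the server's reply, the proxy would mark each piggybacked entry fresh or stale and decide per resource whether to request an update, as the abstract describes.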

    System and method for industrial controller with an I/O processor using cache memory to optimize exchange of shared data
    42.
    Invention Application
    Status: In Force

    Publication No.: US20030023780A1

    Publication Date: 2003-01-30

    Application No.: US09915024

    Filing Date: 2001-07-25

    CPC classification number: G05B19/054

    Abstract: A system and method for industrial control I/O forcing is provided. The invention includes a processor, shared memory and an I/O processor with cache memory. The invention provides for the cache memory to be loaded with I/O force data from the shared memory. The I/O processor performs I/O forcing utilizing the I/O force data stored in the cache memory. The invention further provides for the processor to notify the I/O processor in the event that I/O force data is altered during control program execution. The invention further provides for the I/O processor to refresh the cache memory (e.g., via a blocked write) after receipt of alteration of the I/O force data from the processor.

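    A toy Python model of the refresh protocol above, assuming a simple dict-backed shared memory; the names `IOProcessor`, `notify_force_data_changed`, and `resolve` are invented for illustration, and the whole-cache copy stands in for the patent's blocked-write refresh.

```python
class IOProcessor:
    """Toy model: an I/O processor caches I/O force data loaded from
    shared memory and refreshes that cache when the main processor
    signals that the force data changed during control-program execution."""

    def __init__(self, shared_memory):
        self.shared = shared_memory        # dict: point -> forced value or None
        self.cache = dict(shared_memory)   # initial load into cache memory

    def notify_force_data_changed(self):
        # Refresh the cache in one pass (stand-in for a blocked write).
        self.cache = dict(self.shared)

    def resolve(self, point, live_value):
        """Apply forcing: a cached force value overrides the live I/O value."""
        forced = self.cache.get(point)
        return live_value if forced is None else forced


shared = {"input0": None, "input1": 1}
iop = IOProcessor(shared)
assert iop.resolve("input1", 0) == 1   # force is active
shared["input1"] = None                # main processor clears the force...
iop.notify_force_data_changed()        # ...and notifies the I/O processor
assert iop.resolve("input1", 0) == 0   # live value used again
```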

    Compute engine employing a coprocessor
    43.
    Invention Application
    Status: In Force

    Publication No.: US20030014589A1

    Publication Date: 2003-01-16

    Application No.: US10105587

    Filing Date: 2002-03-25

    Abstract: A multi-processor includes multiple processing clusters for performing assigned applications. Each cluster includes a set of compute engines, with each compute engine coupled to a set of cache memory. A compute engine includes a central processing unit and a coprocessor with a set of application engines. The central processing unit and coprocessor are coupled to the compute engine's associated cache memory. The sets of cache memory within a cluster are also coupled to one another.


    Method and apparatus for controlling memory storage locks based on cache line ownership
    44.
    Invention Application
    Status: In Force

    Publication No.: US20020174305A1

    Publication Date: 2002-11-21

    Application No.: US09750637

    Filing Date: 2000-12-28

    Inventor: Kelvin S. Vartti

    CPC classification number: G06F12/0815

    Abstract: A system and method for controlling storage locks based on cache line ownership. Ownership of target data segments is acquired at a memory targeted by a first requesting device. A storage lock is enabled that prohibits requesting devices, other than the first requesting device, from acting on the target data segments during the time the targeted memory possesses ownership of the target data segments. A storage lock release signal is issued from the first requesting device to the targeted memory when exclusivity of the target data segments is no longer required at the first requesting device. In response, the storage lock at the targeted memory is released, thereby allowing other requesting devices to act on the target data segments.

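    A toy Python model of the lock protocol above, under simplifying assumptions (a dict of per-line owners in place of real cache-line ownership state); `LockedMemory` and its method names are illustrative, not from the patent.

```python
class LockedMemory:
    """Toy model: the targeted memory acquires ownership of target cache
    lines on behalf of a first requester and rejects other requesters
    until that requester issues a storage lock release."""

    def __init__(self):
        self.locks = {}   # cache line address -> owning requester id

    def acquire(self, requester, lines):
        """Enable the storage lock for `lines` if none is already locked."""
        if any(line in self.locks for line in lines):
            return False
        for line in lines:
            self.locks[line] = requester
        return True

    def access(self, requester, line):
        """Other requesters may not act on a locked line."""
        owner = self.locks.get(line)
        return owner is None or owner == requester

    def release(self, requester, lines):
        """Storage lock release signal from the owning requester."""
        for line in lines:
            if self.locks.get(line) == requester:
                del self.locks[line]


mem = LockedMemory()
assert mem.acquire("cpu0", [0x40, 0x80])
assert not mem.access("cpu1", 0x40)   # locked out while cpu0 holds the lines
mem.release("cpu0", [0x40, 0x80])     # exclusivity no longer required
assert mem.access("cpu1", 0x40)       # other requesters may now proceed
```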

    Multiprocessor system snoop scheduling mechanism for limited bandwidth snoopers
    45.
    Invention Application
    Status: Expired

    Publication No.: US20020129209A1

    Publication Date: 2002-09-12

    Application No.: US09749328

    Filing Date: 2001-03-12

    CPC classification number: G06F12/0831

    Abstract: A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for only read-type operations). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized. The invention is not limited to any particular type of instruction, and the synchronization functionality may be hardware or software programmable.

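    The cycle-targeting idea can be sketched in a few lines of Python. This is a deliberately simplified model: the fixed `SYNC_PERIOD` and the function name are assumptions for illustration, standing in for whatever synchronization scheme the controllers actually negotiate.

```python
SYNC_PERIOD = 4   # snoopers assumed guaranteed available every 4th cycle

def next_issue_cycle(current_cycle, retry_sensitive):
    """Hold retry-sensitive requests (e.g. kill-type operations) until the
    next synchronized snoop cycle; issue other requests immediately."""
    if not retry_sensitive:
        return current_cycle
    remainder = current_cycle % SYNC_PERIOD
    if remainder == 0:
        return current_cycle
    return current_cycle + (SYNC_PERIOD - remainder)


assert next_issue_cycle(5, retry_sensitive=False) == 5   # issue now
assert next_issue_cycle(5, retry_sensitive=True) == 8    # wait for sync cycle
assert next_issue_cycle(8, retry_sensitive=True) == 8    # already synchronized
```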

    Hierarchical RAID system including multiple RAIDs and method for controlling RAID system
    46.
    Invention Application
    Status: In Force

    Publication No.: US20020124139A1

    Publication Date: 2002-09-05

    Application No.: US09818762

    Filing Date: 2001-03-28

    Abstract: A hierarchical RAID system and a method for controlling it provide large capacity, high performance, and high availability, with better reliability and more prominent performance than traditional RAID, by constructing redundant arrays of expensive disks (RAID) hierarchically so that at least one RAID composed of a large number of disks is used as a virtual disk; a record medium readable by a computer and storing a program that realizes the inventive method is also provided. The hierarchical RAID system includes a host computing unit; at least one upper-level RAID controlling unit having a first RAID Level X, for controlling a plurality of first lower-level RAID controlling units having a second RAID Level Y in order to use each lower-level RAID as a virtual disk; and the plurality of first lower-level RAID controlling units having the second RAID Level Y, for controlling numerous member disks under the control of the upper-level RAID controlling unit so as to be used as the virtual disks of the upper-level RAID.

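    A minimal Python sketch of the nesting described above, assuming the common case of striping (Level X = 0) over mirrored pairs (Level Y = 1); the class names are illustrative, and real controllers would of course operate on block devices, not dicts.

```python
class Disk:
    """Plain member disk (illustrative)."""
    def __init__(self):
        self.blocks = {}
    def write(self, addr, data):
        self.blocks[addr] = data
    def read(self, addr):
        return self.blocks[addr]

class Raid1:
    """Lower-level RAID Level Y (mirroring), presented as one virtual disk."""
    def __init__(self, members):
        self.members = members
    def write(self, addr, data):
        for d in self.members:          # mirror to every member disk
            d.write(addr, data)
    def read(self, addr):
        return self.members[0].read(addr)

class Raid0:
    """Upper-level RAID Level X (striping) whose member 'disks' are
    themselves lower-level RAIDs used as virtual disks."""
    def __init__(self, virtual_disks):
        self.vdisks = virtual_disks
    def _pick(self, addr):
        return self.vdisks[addr % len(self.vdisks)]
    def write(self, addr, data):
        self._pick(addr).write(addr, data)
    def read(self, addr):
        return self._pick(addr).read(addr)


# Four physical disks behind two mirrored virtual disks, striped on top.
array = Raid0([Raid1([Disk(), Disk()]) for _ in range(2)])
array.write(5, b"hello")
assert array.read(5) == b"hello"
```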

    Efficient I-cache structure to support instructions crossing line boundaries
    47.
    Invention Application

    Publication No.: US20020116567A1

    Publication Date: 2002-08-22

    Application No.: US09738690

    Filing Date: 2000-12-15

    Abstract: A cache structure, organized in terms of cache lines, for use with variable-length bundles of instructions (syllables), comprising: a first cache bank that is organized in columns and rows; a second cache bank that is organized in columns and rows; logic for dividing said cache line into a sequence of equal-sized segments, and mapping alternate segments in said sequence of segments to the columns in said cache banks such that said first bank holds even segments and said second bank holds odd segments; logic for storing bundles across at most a first column in said first cache bank and a sequentially adjacent column in said second cache bank; and logic for accessing bundles stored in the first and second cache banks.
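    A short Python sketch of the even/odd segment mapping, with an illustrative segment size. Because two adjacent segments always sit in different banks, a bundle crossing a segment boundary can have both halves read from the two banks in the same access; `place_segments` and `fetch_bundle` are invented names.

```python
SEGMENT_SIZE = 4   # syllables per segment (illustrative)

def place_segments(line):
    """Split a cache line into equal-sized segments and map even segments
    to bank 0 and odd segments to bank 1, as in the two-bank layout."""
    segments = [line[i:i + SEGMENT_SIZE]
                for i in range(0, len(line), SEGMENT_SIZE)]
    return segments[0::2], segments[1::2]   # (bank0, bank1)

def fetch_bundle(bank0, bank1, start, length):
    """Read a bundle; a bundle spanning two adjacent segments touches at
    most one column in each bank, so both halves are available together."""
    flat = []
    for even, odd in zip(bank0, bank1):
        flat.extend(even)
        flat.extend(odd)
    return flat[start:start + length]


line = list(range(16))                 # 16 syllables -> 4 segments
b0, b1 = place_segments(line)
assert b0 == [[0, 1, 2, 3], [8, 9, 10, 11]]     # even segments
assert b1 == [[4, 5, 6, 7], [12, 13, 14, 15]]   # odd segments
# A bundle crossing the segment boundary at syllable 4:
assert fetch_bundle(b0, b1, 2, 4) == [2, 3, 4, 5]
```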

    Mechanism for collapsing store misses in an SMP computer system
    48.
    Invention Application
    Status: In Force

    Publication No.: US20020112128A1

    Publication Date: 2002-08-15

    Application No.: US09782581

    Filing Date: 2001-02-12

    CPC classification number: G06F12/0855 G06F12/0831

    Abstract: A method of handling a write operation in a multiprocessor computer system wherein each processing unit has a respective cache, by determining that a new value for a store instruction is the same as a current value already contained in the memory hierarchy, and discarding the store instruction without issuing any associated cache operation in response to this determination. When a store hit occurs, the current value is retrieved from the local cache. When a store miss occurs, the current value is retrieved from a remote cache by issuing a read request. The comparison may be performed using a portion of the cache line which is less than a granule size of the cache line. A store gathering queue can be used to collect pending store instructions that are directed to different portions of the same cache line.

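    The core comparison can be sketched in a few lines of Python. This is a simplified model: a dict stands in for the memory hierarchy (so the hit/miss and remote-read distinction is collapsed away), and `handle_store`/`issue_bus_op` are illustrative names.

```python
def handle_store(cache, addr, new_value, issue_bus_op):
    """If the new value equals the value already held for addr, discard
    the store and issue no cache operation; otherwise perform the store
    and issue the associated bus operation."""
    current = cache.get(addr)   # stand-in for local or remote lookup
    if current == new_value:
        return False            # store collapsed: no cache/bus traffic
    cache[addr] = new_value
    issue_bus_op(addr)
    return True


bus_ops = []
cache = {0x100: 7}
assert handle_store(cache, 0x100, 7, bus_ops.append) is False  # collapsed
assert handle_store(cache, 0x100, 8, bus_ops.append) is True   # real store
assert bus_ops == [0x100]   # only the value-changing store hit the bus
```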

    Address predicting apparatus and methods
    49.
    Invention Application
    Status: In Force

    Publication No.: US20020112127A1

    Publication Date: 2002-08-15

    Application No.: US09741371

    Filing Date: 2000-12-19

    CPC classification number: G06F12/0862 G06F2212/6024

    Abstract: Apparatus and methods for address prediction useful in high-performance computing systems. The present invention provides novel correlation prediction tables. In one embodiment, correlation prediction tables of the present invention contain an entered key for each successor value entered into the correlation table. In a second embodiment, correlation prediction tables of the present invention utilize address offsets for both the entered keys and the entered successor values.

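    A minimal Python sketch of the two embodiments; class names, the dict-based table, and the fixed base address are all illustrative assumptions, not details from the patent.

```python
class CorrelationTable:
    """First embodiment (sketch): each entry pairs an entered key with the
    successor address observed to follow it."""
    def __init__(self):
        self.table = {}
    def train(self, key, successor):
        self.table[key] = successor
    def predict(self, key):
        return self.table.get(key)

class OffsetCorrelationTable(CorrelationTable):
    """Second embodiment (sketch): keys and successor values are stored as
    offsets from a base address, shrinking each table entry."""
    def __init__(self, base):
        super().__init__()
        self.base = base
    def train(self, key, successor):
        super().train(key - self.base, successor - self.base)
    def predict(self, key):
        off = super().predict(key - self.base)
        return None if off is None else self.base + off


t = OffsetCorrelationTable(base=0x1000)
t.train(0x1010, 0x1040)          # after a miss at 0x1010, 0x1040 followed
assert t.predict(0x1010) == 0x1040
assert t.table == {0x10: 0x40}   # stored compactly as offsets
```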

    Method and apparatus for reducing latency in a memory system
    50.
    Invention Application
    Status: In Force

    Publication No.: US20020095559A1

    Publication Date: 2002-07-18

    Application No.: US09725461

    Filing Date: 2000-11-30

    CPC classification number: G06F12/0893 G06F12/0884

    Abstract: A memory controller controls a buffer which stores the most recently used addresses and associated data, but the data stored in the buffer is only a portion of a row of data (termed row head data) stored in main memory. In a memory access initiated by the CPU, both the buffer and main memory are accessed simultaneously. If the buffer contains the address requested, the buffer immediately begins to provide the associated row head data in a burst to the cache memory. Meanwhile, the same row address is activated in the main memory bank corresponding to the requested address found in the buffer. After the buffer provides the row head data, the remainder of the burst of requested data is provided by the main memory to the CPU.

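    A simplified Python sketch of the row-head buffering above: on a buffer hit, the buffered row head is delivered at once while main memory supplies the rest of the burst (the overlap is modeled, not timed). `read_burst` and the sizes chosen are illustrative assumptions.

```python
def read_burst(addr, row_head_buffer, main_memory, head_len=4, burst_len=8):
    """The buffer holds only the first `head_len` words of a row (the
    'row head'). On a hit, those words are returned immediately and main
    memory, accessed in parallel, provides the remainder of the burst."""
    row = main_memory[addr]
    if addr in row_head_buffer:
        head = row_head_buffer[addr]        # served at once from the buffer
        tail = row[head_len:burst_len]      # overlapped main-memory access
        return head + tail, True
    row_head_buffer[addr] = row[:head_len]  # cache the row head for next time
    return row[:burst_len], False


mem = {0x200: list(range(8))}
buf = {}
data, hit = read_burst(0x200, buf, mem)
assert not hit and data == [0, 1, 2, 3, 4, 5, 6, 7]   # miss fills the buffer
data, hit = read_burst(0x200, buf, mem)
assert hit and data == [0, 1, 2, 3, 4, 5, 6, 7]       # hit: head from buffer
```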
