Method for providing high availability within a data processing system via a reconfigurable hashed storage subsystem
    21.
    Invention grant
    Method for providing high availability within a data processing system via a reconfigurable hashed storage subsystem (Expired)

    Publication number: US06823471B1

    Publication date: 2004-11-23

    Application number: US09364281

    Filing date: 1999-07-30

    CPC classification number: G06F11/20

    Abstract: A processor includes execution resources, data storage, and an instruction sequencing unit, coupled to the execution resources and the data storage, that supplies instructions within the data storage to the execution resources. At least one of the execution resources, the data storage, and the instruction sequencing unit is implemented with a plurality of hardware partitions of like function for processing a respective one of a plurality of data streams. If an error is detected in a particular hardware partition, the data stream assigned to that hardware partition is reassigned to another of the plurality of hardware partitions, thus preventing an error in one of the hardware partitions from resulting in a catastrophic failure.
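
    A minimal Python sketch of the reassignment behavior described in the abstract: data streams are mapped to like-function hardware partitions, and when an error is detected in one partition its stream is re-homed on a surviving partition. The class, the hashing scheme, and all names are illustrative assumptions, not details taken from the patent.

```python
class PartitionedResource:
    """Models a set of like-function hardware partitions, each handling one data stream."""

    def __init__(self, num_partitions):
        self.healthy = set(range(num_partitions))   # partitions with no detected error
        self.assignment = {}                        # stream id -> partition id

    def assign(self, stream_id):
        # Illustrative hash-based placement onto one of the healthy partitions.
        candidates = sorted(self.healthy)
        self.assignment[stream_id] = candidates[hash(stream_id) % len(candidates)]
        return self.assignment[stream_id]

    def report_error(self, bad_partition):
        # On an error, retire the partition and move its streams elsewhere, so a
        # fault in one partition does not become a catastrophic failure.
        self.healthy.discard(bad_partition)
        for stream_id, partition in list(self.assignment.items()):
            if partition == bad_partition:
                self.assign(stream_id)


if __name__ == "__main__":
    res = PartitionedResource(num_partitions=4)
    for stream in ("stream-a", "stream-b", "stream-c"):
        res.assign(stream)
    res.report_error(res.assignment["stream-b"])   # stream-b is quietly re-homed
    print(res.assignment)
```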


    High performance symmetric multiprocessing systems via super-coherent data mechanisms
    22.
    Invention grant
    High performance symmetric multiprocessing systems via super-coherent data mechanisms (Expired)

    Publication number: US06785774B2

    Publication date: 2004-08-31

    Application number: US09978362

    Filing date: 2001-10-16

    CPC classification number: G06F12/0831

    Abstract: A multiprocessor data processing system comprising a plurality of processing units, a plurality of caches, each affiliated with one of the processing units, and processing logic that, responsive to receipt of a first system bus response to a coherency operation, causes the requesting processor to execute operations utilizing super-coherent data. The data processing system further includes logic that eventually returns to coherent operations with the other processing units responsive to an occurrence of a pre-determined condition. The coherency protocol of the data processing system includes a first coherency state that indicates that modification of data within a shared cache line of a second cache of a second processor has been snooped on a system bus of the data processing system. When the cache line is in the first coherency state, subsequent requests for the cache line are issued as a Z1 read on a system bus, and one of two responses is received. If the response to the Z1 read indicates that the first processor should utilize local data currently available within the cache line, the first coherency state is changed to a second coherency state, which indicates to the first processor that subsequent requests for the cache line should utilize the data within the local cache and not be issued to the system interconnect. Coherency state transitions to the second coherency state are completed via the coherency protocol of the data processing system. Super-coherent data is provided to the processor from the cache line of the local cache whenever the second coherency state is set for the cache line and a request is received.
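
    A small Python sketch of the state transitions the abstract describes, using hypothetical state names (SHARED, a first "remote modification snooped" state, and a second super-coherent state) and a stub bus; it illustrates the described behavior, not the patent's actual protocol encoding.

```python
from enum import Enum, auto

class State(Enum):
    SHARED = auto()           # coherent shared copy
    REMOTE_MODIFIED = auto()  # first state: a remote store to this line was snooped
    SUPER_COHERENT = auto()   # second state: keep serving the local (possibly stale) data

class CacheLine:
    def __init__(self, data):
        self.data = data
        self.state = State.SHARED

    def snoop_remote_store(self):
        # Another processor modified its shared copy of this line.
        if self.state == State.SHARED:
            self.state = State.REMOTE_MODIFIED

    def read(self, bus):
        if self.state == State.REMOTE_MODIFIED:
            # Subsequent requests go out as a "Z1 read"; one of two responses returns.
            if bus.z1_read() == "use_local":
                self.state = State.SUPER_COHERENT   # stop issuing requests to the interconnect
            else:
                self.data = bus.fetch_coherent()    # refresh with coherent data instead
                self.state = State.SHARED
        # In SUPER_COHERENT (or SHARED) the local data satisfies the request directly.
        return self.data

class StubBus:
    def z1_read(self):
        return "use_local"
    def fetch_coherent(self):
        return "fresh data"

line = CacheLine("old data")
line.snoop_remote_store()
print(line.read(StubBus()), line.state)   # 'old data' served super-coherently
```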


    Method and system for prefetching utilizing memory initiated prefetch write operations
    23.
    Invention grant
    Method and system for prefetching utilizing memory initiated prefetch write operations (Active)

    Publication number: US06760817B2

    Publication date: 2004-07-06

    Application number: US09886004

    Filing date: 2001-06-21

    Abstract: A computer system includes a processing unit, a system memory, and a memory controller coupled to the processing unit and the system memory. According to the present invention, the memory controller accesses the system memory to obtain prefetch data and transmits the prefetch data to the processing unit in a prefetch write operation specifying the processing unit in a destination field. In one embodiment, the memory controller transmits the prefetch write operation in response to receipt of a prefetch hint from the processing unit, which may accompany a read-type request by the processing unit. This prefetch methodology may advantageously be implemented imprecisely, with the memory controller responding to the prefetch hint only if a prefetch queue is available and ignoring the prefetch hint otherwise. The processing unit may similarly ignore the prefetch write operation if no snoop queue is available. Consequently, communication bandwidth is not wasted by the memory controller or processing unit retrying prefetch operations. In addition, because the memory controller directs prefetching, the processing unit need not allocate a queue to the prefetch operation, thus reducing the number of queues required in the processing unit.
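
    A Python sketch of the imprecise prefetch handshake the abstract outlines: the memory controller honors a prefetch hint only if a prefetch queue is free, and the processing unit accepts the resulting prefetch write only if a snoop queue is free; neither side retries. Queue sizes, method names, and the address map are assumptions for illustration.

```python
class ProcessingUnit:
    def __init__(self, snoop_queue_slots):
        self.snoop_queue_slots = snoop_queue_slots
        self.prefetched = {}

    def receive_prefetch_write(self, address, data):
        # Drop (never retry) the prefetch write if no snoop queue is available.
        if self.snoop_queue_slots == 0:
            return
        self.prefetched[address] = data


class MemoryController:
    def __init__(self, memory, prefetch_queue_slots):
        self.memory = memory
        self.prefetch_queue_slots = prefetch_queue_slots

    def read_with_hint(self, address, hint_address, destination):
        data = self.memory[address]                 # the demand read itself
        if hint_address is not None and self.prefetch_queue_slots > 0:
            # Honor the hint: push the hinted line to the requester unasked, naming
            # the requester in the destination field of a prefetch write operation.
            destination.receive_prefetch_write(hint_address, self.memory[hint_address])
        return data                                 # otherwise the hint is silently ignored


memory = {0x100: "line A", 0x140: "line B"}
cpu = ProcessingUnit(snoop_queue_slots=2)
mc = MemoryController(memory, prefetch_queue_slots=1)
print(mc.read_with_hint(0x100, hint_address=0x140, destination=cpu))
print(cpu.prefetched)   # line B arrives without the CPU allocating a prefetch queue
```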


    Non-uniform memory access (NUMA) data processing system that provides notification of remote deallocation of shared data
    24.
    Invention grant
    Non-uniform memory access (NUMA) data processing system that provides notification of remote deallocation of shared data (Expired)

    Publication number: US06633959B2

    Publication date: 2003-10-14

    Application number: US09885990

    Filing date: 2001-06-21

    CPC classification number: G06F12/0817 G06F2212/2542

    Abstract: A non-uniform memory access (NUMA) computer system includes a node interconnect to which a remote node and a home node are coupled. The home node contains a home system memory, and the remote node includes at least one processing unit and a cache. In response to the cache deallocating an unmodified cache line that corresponds to data resident in the home system memory, a cache controller of the cache issues a deallocate operation on a local interconnect of the remote node. In one embodiment, the deallocate operation is further transmitted to the home node via the node interconnect only in response to an indication, such as a combined response, that no other cache in the remote node caches the cache line. In response to receipt of the deallocate operation, a memory controller in the home node updates a local memory directory associated with the home system memory to indicate that the remote node does not hold a copy of the cache line.
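
    A Python sketch of the notification flow in the abstract: when a remote node's cache evicts an unmodified line homed elsewhere, the deallocate operation is forwarded over the node interconnect only if the local combined response shows no other cache in that node still holds the line, and the home node's memory controller then drops the remote node from its directory. The directory layout and class names are illustrative assumptions.

```python
class HomeMemoryController:
    def __init__(self):
        self.directory = {}        # cache line address -> set of remote node ids with a copy

    def note_copy(self, address, node_id):
        self.directory.setdefault(address, set()).add(node_id)

    def handle_deallocate(self, address, node_id):
        # The remote node no longer caches this line; stop tracking its copy.
        self.directory.get(address, set()).discard(node_id)


class RemoteNode:
    def __init__(self, node_id, home, peer_caches):
        self.node_id = node_id
        self.home = home
        self.peer_caches = peer_caches   # other caches on this node's local interconnect

    def evict_unmodified_line(self, address):
        # Local "combined response": does any other cache in this node still hold the line?
        still_cached_locally = any(address in cache for cache in self.peer_caches)
        if not still_cached_locally:
            # Only then is the deallocate sent over the node interconnect to the home node.
            self.home.handle_deallocate(address, self.node_id)


home = HomeMemoryController()
home.note_copy(0x2000, node_id=1)
node1 = RemoteNode(node_id=1, home=home, peer_caches=[set()])   # no other local sharer
node1.evict_unmodified_line(0x2000)
print(home.directory)   # node 1 has been removed from the sharer set for the line
```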


    Cache coherency protocol permitting sharing of a locked data granule
    25.
    Invention grant
    Cache coherency protocol permitting sharing of a locked data granule (Expired)

    Publication number: US06629209B1

    Publication date: 2003-09-30

    Application number: US09437185

    Filing date: 1999-11-09

    CPC classification number: G06F12/0831

    Abstract: A multiprocessor data processing system requires careful management to maintain cache coherency. Conventional systems using a MESI approach sacrifice some performance with inefficient lock-acquisition and lock-retention techniques. The disclosed system provides additional cache states, indicator bits, and lock-acquisition routines to improve cache performance. The additional cache states allow cache state transition sequences to be optimized by replacing frequently-occurring and inefficient MESI code sequences with improved sequences using modified cache states.


    Extended cache coherency protocol with a modified store instruction lock release indicator
    26.
    Invention grant
    Extended cache coherency protocol with a modified store instruction lock release indicator (Expired)

    Publication number: US06625701B1

    Publication date: 2003-09-23

    Application number: US09437183

    Filing date: 1999-11-09

    CPC classification number: G06F12/0811 G06F12/0815

    Abstract: A multiprocessor data processing system requires careful management to maintain cache coherency. Conventional systems using a MESI approach sacrifice some performance with inefficient lock-acquisition and lock-retention techniques. The disclosed system provides additional cache states, indicator bits, and lock-acquisition routines to improve cache performance. In particular, as multiple processors compete for the same cache line, a significant amount of processor time is lost determining if another processor's cache line lock has been released and attempting to reserve that cache line while it is still owned by the other processor. The preferred embodiment provides an indicator bit with the cache store command which specifically indicates whether the store also acts as a lock-release.
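
    A Python sketch of the indicator bit the abstract describes: the store command carries a flag marking it as the lock-releasing store, so a competing processor snooping the contended line learns of the release from that single store rather than repeatedly re-testing the line. The command encoding and class names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class StoreCommand:
    address: int
    value: int
    releases_lock: bool = False     # the extra indicator bit described in the abstract

class LockLineSnooper:
    """A competing processor's view of a contended lock line."""

    def __init__(self, lock_address):
        self.lock_address = lock_address
        self.lock_released = False

    def snoop_store(self, cmd):
        # Without the bit, the snooper would keep re-reading the line to learn whether
        # this store freed the lock; with it, a single snoop is enough.
        if cmd.address == self.lock_address and cmd.releases_lock:
            self.lock_released = True

snooper = LockLineSnooper(lock_address=0x80)
snooper.snoop_store(StoreCommand(address=0x80, value=1))                      # ordinary store
snooper.snoop_store(StoreCommand(address=0x80, value=0, releases_lock=True))  # unlocking store
print(snooper.lock_released)   # True
```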


    System and method for asynchronously overlapping storage barrier operations with old and new storage operations
    27.
    Invention grant
    System and method for asynchronously overlapping storage barrier operations with old and new storage operations (Active)

    Publication number: US06609192B1

    Publication date: 2003-08-19

    Application number: US09588607

    Filing date: 2000-06-06

    CPC classification number: G06F9/30087 G06F9/3834 G06F9/3842

    Abstract: Disclosed is a multiprocessor data processing system that executes load transactions out of order with respect to a barrier operation. The data processing system includes a memory and a plurality of processors coupled to an interconnect. At least one of the processors includes an instruction sequencing unit for fetching an instruction sequence in program order for execution. The instruction sequence includes a first and a second load instruction and a barrier instruction, which is between the first and second load instructions in the instruction sequence. Also included in the processor is a load/store unit (LSU), which has a load request queue (LRQ) that temporarily buffers load requests associated with the first and second load instructions. The LRQ is coupled to a load request arbitration unit, which selects an order of issuing the load requests from the LRQ. A controller then issues a load request associated with the second load instruction to memory before completion of a barrier operation associated with the barrier instruction. Alternatively, load requests are issued out of order with respect to the program order before or after the barrier instruction. The load request arbitration unit selects the request associated with the second load instruction before a request associated with the first load instruction, and the controller issues the request associated with the second load instruction before the request associated with the first load instruction and before issuing the barrier operation.
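
    A Python sketch of the reordering the abstract describes: load requests buffered in a load request queue (LRQ) can be selected by arbitration and issued ahead of an older load and before the barrier operation completes. The arbitration policy and data layout are illustrative assumptions.

```python
class LoadRequestQueue:
    """Buffers load requests; arbitration may select them out of program order."""

    def __init__(self):
        self.pending = []                       # (program_index, address, after_barrier)

    def add(self, program_index, address, after_barrier):
        self.pending.append((program_index, address, after_barrier))

    def arbitrate(self):
        # Illustrative policy: favor the younger post-barrier load, so it is issued
        # before the older load and before the barrier operation completes.
        self.pending.sort(key=lambda req: (not req[2], req[0]))
        return self.pending.pop(0)


lrq = LoadRequestQueue()
lrq.add(0, 0x10, after_barrier=False)           # first load, precedes the barrier
barrier_complete = False                        # the barrier operation is still outstanding
lrq.add(2, 0x20, after_barrier=True)            # second load, follows the barrier

issue_order = []
while lrq.pending:
    issue_order.append(lrq.arbitrate())

# The post-barrier load (program index 2) issues first, even though barrier_complete
# is still False and the older load has not yet been sent to memory.
print(issue_order, "barrier complete?", barrier_complete)
```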


    Method and apparatus for accessing banked embedded dynamic random access memory devices
    28.
    Invention grant
    Method and apparatus for accessing banked embedded dynamic random access memory devices (Active)

    Publication number: US06606680B2

    Publication date: 2003-08-12

    Application number: US09895224

    Filing date: 2001-06-29

    CPC classification number: G06F12/0897 G11C7/22 G11C11/408 G11C2207/104

    Abstract: An apparatus for accessing a banked embedded dynamic random access memory device is disclosed. The apparatus for accessing a banked embedded dynamic random access memory (DRAM) device comprises a general functional control logic and a bank RAS controller. The general functional control logic is coupled to each bank of the banked embedded DRAM device. Coupled to the general functional control logic, the bank RAS controller includes a rotating shift register having multiple bits. Each bit within the rotating shift register corresponds to each bank of the banked embedded DRAM device. As such, a first value within a bit of the rotating shift register allows accesses to an associated bank of the banked embedded DRAM device, and a second value within a bit of the rotating shift register denies accesses to an associated bank of the banked embedded DRAM device.
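
    A Python sketch of the gating mechanism the abstract describes: the bank RAS controller keeps one bit per bank in a rotating shift register, one bit value allowing access to the corresponding bank and the other denying it, with rotation moving the grant across banks. The single-hot pattern and bit polarity are assumptions for illustration.

```python
class BankRASController:
    """Rotating shift register with one bit per bank of the banked embedded DRAM."""

    def __init__(self, num_banks, pattern=None):
        # Assumption for illustration: a single 1 bit marks the currently accessible bank.
        self.bits = pattern or [1] + [0] * (num_banks - 1)

    def can_access(self, bank):
        # A first value (1) allows access to the associated bank; a second value (0) denies it.
        return self.bits[bank] == 1

    def rotate(self):
        # Shift the register by one position, moving the access grant to the next bank.
        self.bits = self.bits[-1:] + self.bits[:-1]


ctrl = BankRASController(num_banks=4)
for cycle in range(4):
    print(f"cycle {cycle}: grants = {[ctrl.can_access(b) for b in range(4)]}")
    ctrl.rotate()
```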


    Method and apparatus for concurrently communicating with multiple embedded dynamic random access memory devices
    30.
    Invention grant
    Method and apparatus for concurrently communicating with multiple embedded dynamic random access memory devices (Active)

    Publication number: US06574719B2

    Publication date: 2003-06-03

    Application number: US09903720

    Filing date: 2001-07-12

    CPC classification number: G06F13/28 G06F13/4243

    Abstract: An apparatus for providing concurrent communications between multiple memory devices and a processor is disclosed. Each of the memory devices includes a driver, a phase/cycle adjust sensing circuit, and bus alignment communication logic. Each phase/cycle adjust sensing circuit detects an occurrence of a cycle adjustment from the corresponding driver within a memory device. If a cycle adjustment has been detected, the bus alignment communication logic communicates the occurrence of the cycle adjustment to the processor. The bus alignment communication logic also communicates the occurrence of the cycle adjustment to the bus alignment communication logic in the other memory devices. There are multiple receivers within the processor, and each of the receivers is designed to receive data from a respective driver in a memory device. Each of the receivers includes a cycle delay block. The receiver that received the occurrence of a cycle adjustment informs the other receivers, which did not, to use their cycle delay blocks to delay the incoming data for at least one cycle.
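
    A Python sketch of the alignment scheme the abstract describes: when one memory device's driver performs a cycle adjustment, the event is communicated to the processor, and the receivers paired with the devices that did not adjust insert a delay through their cycle delay blocks so the incoming data streams stay aligned. The classes and the exact one-cycle delay are illustrative assumptions (the abstract says at least one cycle).

```python
class Receiver:
    """Processor-side receiver paired with one memory device's driver."""

    def __init__(self, name):
        self.name = name
        self.saw_cycle_adjustment = False
        self.extra_delay_cycles = 0

    def notify_cycle_adjustment(self):
        # This receiver's device adjusted its cycle; no local delay is needed.
        self.saw_cycle_adjustment = True

    def apply_delay(self, cycles=1):
        # Cycle delay block: hold incoming data for at least one extra cycle.
        self.extra_delay_cycles += cycles


def align_receivers(receivers, adjusted_index):
    """The receiver that saw the adjustment tells the others to delay their data."""
    receivers[adjusted_index].notify_cycle_adjustment()
    for index, receiver in enumerate(receivers):
        if index != adjusted_index:
            receiver.apply_delay(cycles=1)


receivers = [Receiver("dev0"), Receiver("dev1"), Receiver("dev2")]
align_receivers(receivers, adjusted_index=1)    # device 1's driver slipped by one cycle
print([(r.name, r.extra_delay_cycles) for r in receivers])
# dev0 and dev2 delay their data by one cycle so all three streams line up again
```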

