Demand-based larx-reserve protocol for SMP system buses
    2.
    Invention grant
    Demand-based larx-reserve protocol for SMP system buses (Expired)

    Publication No.: US5895495A

    Publication date: 1999-04-20

    Application No.: US815647

    Filing date: 1997-03-13

    CPC classification: G06F12/0811

    Abstract: A method of handling load-and-reserve instructions in a multi-processor computer system wherein the processing units have multi-level caches. Symmetric multi-processor (SMP) computers use cache coherency to ensure that the same values for a given memory address are provided to all processors in the system. Handling of load-and-reserve instructions, which are used, for example, in quick read-and-write operations, can become unnecessarily complicated in such systems. The present invention provides a method of accessing values in the computer's memory by loading the value from the memory device into all of said caches, and sending a reserve bus operation from a higher-level cache to the next lower-level cache only when the value is to be cast out of the higher cache, and thereafter casting out the value from the higher cache after sending the reserve bus operation. This procedure is preferably used for all caches in a multi-level cache architecture, i.e., when the value is to be cast out of any given cache, a reserve bus operation is sent from the given cache to the next lower-level cache (i.e., the adjacent cache which lies closer to the bus), but the reserve bus operation is not sent to all lower caches. Any attempt by any other processing unit in the computer system to write to an address of the memory device which is associated with the value will then be forwarded to all higher-level caches. The marking of the block as reserved is removed in response to any such attempt to write to the address.
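
    The sketch below is a minimal C++ model of the cast-out-driven reservation handling described in the abstract: the reserve bus operation travels only one level toward the bus, and only on eviction, while snooped writes propagate upward and clear the reservation. The class and member names (CacheLevel, load_and_reserve, cast_out, reserve_bus_op, snoop_write) and the set-based bookkeeping are illustrative assumptions, not taken from the patent.

```cpp
#include <cstdint>
#include <unordered_set>

// One level in a multi-level cache hierarchy (hypothetical model).
struct CacheLevel {
    CacheLevel* lower = nullptr;   // adjacent cache closer to the system bus
    CacheLevel* higher = nullptr;  // adjacent cache closer to the processor
    std::unordered_set<std::uint64_t> lines;     // addresses currently cached
    std::unordered_set<std::uint64_t> reserved;  // addresses marked reserved here

    // Load-and-reserve: the value is loaded into every level down to the bus,
    // but the reservation is tracked only at this level; no reserve bus
    // operation is sent yet (the demand-based part).
    void load_and_reserve(std::uint64_t addr) {
        for (CacheLevel* c = this; c != nullptr; c = c->lower) c->lines.insert(addr);
        reserved.insert(addr);
    }

    // Cast-out: only when a reserved line is about to be evicted is a reserve
    // bus operation sent, and only to the next lower level, not to all levels.
    void cast_out(std::uint64_t addr) {
        if (reserved.count(addr) != 0 && lower != nullptr) {
            lower->reserve_bus_op(addr);   // migrate the reservation downward
            reserved.erase(addr);
        }
        lines.erase(addr);
    }

    // The next lower level records the reservation on behalf of the evictor.
    void reserve_bus_op(std::uint64_t addr) { reserved.insert(addr); }

    // A write to the address seen from below is forwarded to all higher
    // levels, and the reservation mark is cleared wherever it is found.
    void snoop_write(std::uint64_t addr) {
        reserved.erase(addr);
        if (higher != nullptr) higher->snoop_write(addr);
    }
};
```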

    Dynamic folding of cache operations for multiple coherency-size systems
    3.
    Invention grant
    Dynamic folding of cache operations for multiple coherency-size systems (Expired)

    Publication No.: US6105112A

    Publication date: 2000-08-15

    Application No.: US834120

    Filing date: 1997-04-14

    IPC classification: G06F9/30 G06F12/08 G06F12/00

    CPC classification: G06F9/30047 G06F12/0831

    Abstract: A method is disclosed of managing architectural operations in a computer system whose architecture includes components having varying coherency granule sizes. A queue is provided for receiving a plurality of the architectural operations as entries, and the entries of the queue are compared with a new architectural operation to determine whether the new architectural operation is redundant with any of the entries. If the new architectural operation is not redundant with any of the entries, it is loaded into the queue. The computer system may include a cache having a processor granularity size and a system bus granularity size which is larger than the processor granularity size, and the architectural operations are cache instructions. The comparison may be performed in an associative manner based on the varying coherency granule sizes.
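
    A minimal sketch of the folding idea, assuming a simple model in which each queue entry covers one power-of-two coherency granule; the names ArchOp and FoldingQueue and the linear (rather than truly associative) comparison are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// One queued architectural (cache) operation and the coherency granule it covers.
struct ArchOp {
    std::uint64_t addr;      // target address of the cache instruction
    std::uint64_t granule;   // coherency granule size covered, in bytes (power of two)
};

struct FoldingQueue {
    std::vector<ArchOp> entries;

    // True if an existing entry already covers the new operation, i.e. the new
    // operation falls entirely inside the existing entry's coherency granule.
    static bool covers(const ArchOp& e, const ArchOp& n) {
        std::uint64_t base = e.addr & ~(e.granule - 1);
        return n.addr >= base && n.addr + n.granule <= base + e.granule;
    }

    // Compare the new operation against all entries (associatively in hardware);
    // enqueue it only if it is not redundant with any of them.
    bool push(const ArchOp& op) {
        for (const ArchOp& e : entries)
            if (covers(e, op)) return false;   // folded away: already covered
        entries.push_back(op);
        return true;
    }
};
```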

    Demand based sync bus operation
    4.
    Invention grant
    Demand based sync bus operation (Expired)

    Publication No.: US6065086A

    Publication date: 2000-05-16

    Application No.: US24615

    Filing date: 1998-02-17

    CPC classification: G06F13/4243

    Abstract: A register associated with the architected logic queue of a memory-coherent device within a multiprocessor system contains a flag that is set whenever an architected operation enters the initiating device's architected logic queue to be issued on the system bus. The flag remains set even after the architected logic queue is drained, and is reset only when a synchronization instruction is received from a local processor, providing historical information regarding architected operations which may be pending in other devices. This historical information is utilized to determine whether a synchronization operation should be presented on the system bus, allowing unnecessary synchronization operations to be filtered. When a local processor issues a synchronization instruction to the device managing the architected logic queue, the instruction is generally accepted when the architected logic queue is empty. Otherwise the synchronization instruction is retried back to the local processor until the architected logic queue becomes empty. If the flag is set when the synchronization instruction is accepted from the local processor, the synchronization operation is presented on the system bus. If the flag is not set when the synchronization instruction is received from the local processor, the synchronization operation is unnecessary and is not presented on the system bus.
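
    A minimal sketch of the history flag and sync filtering, under the assumption that architected operations can be treated as opaque queue entries; the names ArchQueue, post, and try_sync and the three-way result code are illustrative, not from the patent.

```cpp
#include <queue>

struct ArchQueue {
    std::queue<int> ops;        // pending architected operations (opaque tokens here)
    bool history_flag = false;  // set when an operation enters; survives draining

    // An architected operation entering the queue sets the flag.
    void post(int op) {
        ops.push(op);
        history_flag = true;
    }

    // Draining the queue deliberately leaves the flag set: it records history.
    void drain_one() { if (!ops.empty()) ops.pop(); }

    enum class SyncResult { Retry, FilteredOut, IssueOnBus };

    // A sync from the local processor is retried while the queue is non-empty;
    // once accepted, it reaches the system bus only if the flag was set, and
    // the flag is reset only here, by the synchronization instruction.
    SyncResult try_sync() {
        if (!ops.empty()) return SyncResult::Retry;
        bool needed = history_flag;
        history_flag = false;
        return needed ? SyncResult::IssueOnBus : SyncResult::FilteredOut;
    }
};
```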

    Method and system for controlling access to a shared resource in a data
processing system utilizing dynamically-determined weighted
pseudo-random priorities
    5.
    Invention grant
    Method and system for controlling access to a shared resource in a data processing system utilizing dynamically-determined weighted pseudo-random priorities (Expired)

    Publication No.: US5896539A

    Publication date: 1999-04-20

    Application No.: US839438

    Filing date: 1997-04-14

    CPC classification: G06F13/364

    Abstract: A method and system for controlling access to a shared resource in a data processing system are described. According to the method, a number of requests for access to the resource are generated by a number of requestors that share the resource. Each of the requestors is dynamically associated with a priority weight in response to events in the data processing system. The priority weight indicates a probability that the associated requestor will be assigned a highest current priority. Each requestor is then assigned a current priority that is determined substantially randomly with respect to previous priorities of the requestors. In response to the current priorities of the requestors, a request for access to the resource is granted.
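
    A minimal sketch of weighted pseudo-random arbitration as described: each requestor's weight sets the probability that it wins the current cycle, independently of earlier cycles. The weight-update policy and the use of std::mt19937 with std::discrete_distribution are assumptions made for illustration.

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Arbiter {
    std::vector<double> weights;   // one dynamically adjusted weight per requestor
    std::mt19937 rng{12345};       // pseudo-random source (seed is arbitrary)

    explicit Arbiter(std::size_t n) : weights(n, 1.0) {}

    // Called in response to system events to bias future grants toward or
    // away from a requestor.
    void adjust(std::size_t requestor, double weight) { weights[requestor] = weight; }

    // One arbitration cycle: each active requestor wins with probability
    // proportional to its weight, independently of previous cycles.
    // Assumes at least one requestor is currently requesting.
    std::size_t grant(const std::vector<bool>& requesting) {
        std::vector<double> w = weights;
        for (std::size_t i = 0; i < w.size(); ++i)
            if (!requesting[i]) w[i] = 0.0;    // only active requestors compete
        std::discrete_distribution<std::size_t> pick(w.begin(), w.end());
        return pick(rng);
    }
};
```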

    Demand based sync bus operation
    6.
    Invention grant
    Demand based sync bus operation (Expired)

    Publication No.: US06175930B1

    Publication date: 2001-01-16

    Application No.: US09024586

    Filing date: 1998-02-17

    IPC classification: H02H305

    CPC classification: G06F12/0831

    Abstract: A register associated with the architected logic queue of a memory-coherent device within a multiprocessor system contains a flag that is set whenever an architected operation—one which might affect the storage hierarchy as perceived by other devices within the system—is posted in the snoop queue of a remote snooping device. The flag remains set and is reset only when a synchronization instruction (such as the “sync” instruction supported by the PowerPC™ family of devices) is received from a local processor. The state of the flag thus provides historical information regarding architected operations which may be pending in other devices within the system after being snooped from the system bus. This historical information is utilized to determine whether a synchronization operation should be presented on the system bus, allowing unnecessary synchronization operations to be filtered and additional system bus cycles made available for other purposes. When a local processor issues a synchronization instruction to the device managing the architected logic queue, the instruction is generally accepted when the architected logic queue is empty. Otherwise the synchronization instruction is retried back to the local processor until the architected logic queue becomes empty. If the flag is set when the synchronization instruction is accepted from the local processor, the synchronization operation is presented on the system bus. If the flag is not set when the synchronization instruction is received from the local processor, the synchronization operation is unnecessary and is not presented on the system bus.
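
    This entry differs from the companion patent above mainly in when the flag is set: on operations snooped from the bus into a remote device's snoop queue. A minimal sketch of that variant follows; the names SnoopSideFilter, post_snooped, and sync_needs_bus are assumed for illustration.

```cpp
#include <queue>

// Variant of the filtering flag: here it records architected operations that
// were snooped from the system bus and posted in this (remote) device's snoop
// queue, rather than operations entering the local issue queue.
struct SnoopSideFilter {
    std::queue<int> snoop_queue;   // architected operations snooped from the bus
    bool remote_flag = false;      // set when a snooped operation is posted

    void post_snooped(int op) { snoop_queue.push(op); remote_flag = true; }

    // The flag survives draining of the snoop queue and is cleared only when a
    // synchronization (e.g. a PowerPC "sync") arrives from the local processor;
    // a clear flag means the sync can be filtered off the system bus.
    bool sync_needs_bus() {
        bool needed = remote_flag;
        remote_flag = false;
        return needed;
    }
};
```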

    Method and system for controlling access to a shared resource that each
requestor is concurrently assigned at least two pseudo-random priority
weights
    8.
    Invention grant
    Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights (Expired)

    Publication No.: US5931924A

    Publication date: 1999-08-03

    Application No.: US839437

    Filing date: 1997-04-14

    CPC classification: G06F13/364

    Abstract: A method and system for controlling access to a shared resource in a data processing system are described. According to the method, a number of requests for access to the resource are generated by a number of requestors that share the resource. Each of the requestors is associated with a priority weight that indicates a probability that the associated requestor will be assigned a highest current priority. Each requestor is then assigned a current priority that is determined substantially randomly with respect to previous priorities of the requestors. In response to the current priorities of the requestors, a request for access to the resource is granted. In one embodiment, a requestor corresponding to a granted request is signaled that its request has been granted, and a requestor corresponding to a rejected request is signaled that its request was not granted.
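
    This variant adds explicit grant/reject signaling to the weighted pseudo-random arbiter sketched earlier; the helper below illustrates only that signaling step, with the function name signal_outcome and the callback shape assumed for illustration.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Notify every active requestor of the arbitration outcome: the winner is told
// its request was granted, every other active requestor that it was not.
inline void signal_outcome(const std::vector<bool>& requesting,
                           std::size_t winner,
                           const std::function<void(std::size_t, bool)>& notify) {
    for (std::size_t i = 0; i < requesting.size(); ++i)
        if (requesting[i]) notify(i, i == winner);
}
```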

    Eviction override for larx-reserved addresses
    9.
    Invention grant
    Eviction override for larx-reserved addresses (Expired)

    Publication No.: US06212605B1

    Publication date: 2001-04-03

    Application No.: US08829577

    Filing date: 1997-03-31

    IPC classification: G06F1200

    CPC classification: G06F12/126

    Abstract: A method of controlling eviction of cache blocks to override eviction of a value which is reserved for a later operation. When a value is loaded into a cache of the processor and is reserved using a lwarx instruction, it sometimes is evicted from the cache due to the need to store other values in the cache set to which the value is mapped. The present invention provides a method of overriding eviction of reserved values by evicting a selected block of the cache which is a block other than the block containing the reserved value. The reserved value is indicated as being reserved by loading a memory address associated with the value into a reservation unit of the cache, and making a reservation flag in the reservation unit active. In two alternative implementations, the eviction mechanism either selects a tentative block for eviction and then determines whether the tentative block is the same as the reserved block (and, if so, chooses a different block as the selected block), or preemptively prohibits the reserved block from being chosen as the selected block. The method of the present invention can be implemented with different types of cache replacement controls, e.g., a random mechanism or a least-recently-used mechanism.
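
    A minimal sketch of the first alternative implementation (tentative victim selection followed by an override), assuming a set-associative cache with a random replacement policy; the CacheSet layout and choose_victim are illustrative assumptions.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstdlib>
#include <optional>
#include <vector>

// One cache set plus the reservation unit holding the larx-reserved tag, if any.
struct CacheSet {
    std::vector<std::uint64_t> tags;             // one tag per way in this set
    std::optional<std::uint64_t> reserved_tag;   // active reservation, if present

    // Tentatively pick a victim with a random replacement policy, then override
    // the choice if it would evict the reserved block. Assumes the set has at
    // least two ways, so another victim always exists.
    std::size_t choose_victim() const {
        std::size_t victim =
            static_cast<std::size_t>(std::rand()) % tags.size();
        if (reserved_tag && tags[victim] == *reserved_tag)
            victim = (victim + 1) % tags.size();  // choose a different block
        return victim;
    }
};
```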

    Demand-based issuance of cache operations to a system bus
    10.
    Invention grant
    Demand-based issuance of cache operations to a system bus (Expired)

    Publication No.: US06182201B2

    Publication date: 2001-01-30

    Application No.: US08834116

    Filing date: 1997-04-14

    IPC classification: G06F1200

    CPC classification: G06F12/0831 G06F12/0815

    Abstract: A method of managing and speculatively issuing architectural operations in a computer system is disclosed. A first architectural operation at a first coherency granule size is issued and translated into a large-scale architectural operation. The first architectural operation can be a first cache instruction directed to a memory block, and the translation results in a page-level cache instruction being issued which is directed to a page that includes the memory block. The large-scale architectural operation is transmitted to a system bus of the computer system. A system bus history table may be used to store a record of the large-scale architectural operations. The history table can then be used to filter out any later architectural operation that is subsumed by the large-scale architectural operation. The history table monitors the computer system to ensure that the large-scale architectural operations recorded in the table are still valid.
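
    A minimal sketch of promoting a block-level cache operation to a page-level operation and filtering later subsumed operations through a history table; the 4 KiB page size, the BusHistoryTable structure, and the invalidate hook are assumptions made for illustration.

```cpp
#include <cstdint>
#include <unordered_set>

struct BusHistoryTable {
    static constexpr std::uint64_t kPageSize = 4096;   // assumed page size
    std::unordered_set<std::uint64_t> issued_pages;    // pages already covered

    // Returns true if an operation must actually be sent on the system bus.
    // The first block-level operation touching a page is speculatively promoted
    // to a page-level operation and recorded; later operations that fall inside
    // a recorded page are subsumed and filtered out.
    bool issue(std::uint64_t block_addr) {
        std::uint64_t page = block_addr & ~(kPageSize - 1);
        if (issued_pages.count(page) != 0) return false;  // subsumed: filtered
        issued_pages.insert(page);                        // record page-level op
        return true;                                      // transmit on the bus
    }

    // Monitoring logic drops a record once the page-level operation is no
    // longer guaranteed valid (e.g. another device modified the page).
    void invalidate(std::uint64_t addr) {
        issued_pages.erase(addr & ~(kPageSize - 1));
    }
};
```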
