Speculative execution of instructions and processes before completion of preceding barrier operations
    21.
    Invention Grant
    Speculative execution of instructions and processes before completion of preceding barrier operations (Expired)

    Publication No.: US06880073B2

    Publication Date: 2005-04-12

    Application No.: US09753053

    Filing Date: 2000-12-28

    IPC Classification: G06F9/30 G06F9/38 G06F9/00

    Abstract: Described is a data processing system and processor that provides full multiprocessor speculation, by which all instructions subsequent to barrier operations in an instruction sequence are speculatively executed before the barrier operation completes on the system bus. The processor comprises a load/store unit (LSU) with a barrier operation (BOP) controller that permits load instructions subsequent to syncs in an instruction sequence to be speculatively issued prior to the return of the sync acknowledgment. Returned data is immediately forwarded to the processor's execution units. The returned data and the results of subsequent operations are held temporarily in rename registers. A multiprocessor speculation flag is set in the corresponding rename registers to indicate that the value is "barrier" speculative. When a barrier acknowledgment is received by the BOP controller, the flag(s) of the corresponding rename register(s) are reset.

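    The flag mechanism described in the abstract can be illustrated with a minimal Python sketch. All names here (RenameRegister, BopController, speculative_load) are invented for illustration; the patent describes processor hardware, not software.

```python
# Minimal software sketch of barrier-speculative rename registers.
# All names here are illustrative; the patent describes processor hardware.

class RenameRegister:
    def __init__(self):
        self.value = None
        self.barrier_speculative = False  # the multiprocessor speculation flag

class BopController:
    """Barrier-operation controller: tracks loads issued past a pending sync."""
    def __init__(self):
        self.pending = []  # rename registers awaiting the sync acknowledgment

    def speculative_load(self, reg, value):
        # A load issued before the sync ack returns: its data is forwarded
        # immediately, but the rename register is flagged "barrier" speculative.
        reg.value = value
        reg.barrier_speculative = True
        self.pending.append(reg)

    def barrier_ack(self):
        # Sync acknowledgment received on the system bus: reset the flags.
        for reg in self.pending:
            reg.barrier_speculative = False
        self.pending.clear()
```

    Once `barrier_ack` fires, the previously speculative values become safe to commit without the load ever having stalled behind the sync.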

    System and method for providing multiprocessor speculation within a speculative branch path
    22.
    Invention Grant
    System and method for providing multiprocessor speculation within a speculative branch path (Expired)

    Publication No.: US06728873B1

    Publication Date: 2004-04-27

    Application No.: US09588507

    Filing Date: 2000-06-06

    IPC Classification: G06F9/312

    Abstract: Disclosed is a method of operation within a processor that enhances speculative branch processing. A speculative execution path contains an instruction sequence that includes a barrier instruction followed by a load instruction. While a barrier operation associated with the barrier instruction is pending, a load request associated with the load instruction is speculatively issued to memory. A flag is set for the load request when it is speculatively issued and reset when an acknowledgment is received for the barrier operation. Data returned by the speculatively issued load request is temporarily held and forwarded to a register or execution unit of the data processing system after the acknowledgment is received. All process results, including data returned by the speculatively issued load instructions, are discarded when the speculative execution path is determined to be incorrect.

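    The flag-and-discard behavior can be sketched in Python as follows. Names and structure are invented for illustration, branch resolution is reduced to a boolean, and memory is a plain dictionary.

```python
# Sketch of loads issued speculatively past a barrier inside a speculative
# branch path; all names are invented for illustration.

class SpeculativeLoad:
    def __init__(self, addr):
        self.addr = addr
        self.barrier_flag = True  # set: issued while the barrier was pending
        self.data = None

class SpeculativePath:
    """One speculative branch path containing a barrier followed by loads."""
    def __init__(self, memory):
        self.memory = memory
        self.loads = []

    def issue_load(self, addr):
        # Load request issued to memory while the barrier is unacknowledged;
        # the returned data is held, not yet forwarded to a register.
        ld = SpeculativeLoad(addr)
        ld.data = self.memory.get(addr)
        self.loads.append(ld)
        return ld

    def barrier_ack(self):
        # Barrier acknowledgment received: held data may now be forwarded.
        for ld in self.loads:
            ld.barrier_flag = False

    def resolve_branch(self, path_correct):
        # Mispredicted path: discard every result, including returned data.
        if not path_correct:
            discarded = len(self.loads)
            self.loads.clear()
            return discarded
        return 0
```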

    Mechanism for folding storage barrier operations in a multiprocessor system
    23.
    Invention Grant
    Mechanism for folding storage barrier operations in a multiprocessor system (Expired)

    Publication No.: US06725340B1

    Publication Date: 2004-04-20

    Application No.: US09588509

    Filing Date: 2000-06-06

    IPC Classification: G06F9/312

    Abstract: Disclosed is a processor that reduces barrier operations during instruction processing. An instruction sequence includes a first barrier instruction and a second barrier instruction with a store instruction between them. A store request associated with the store instruction is issued prior to a barrier operation associated with the first barrier instruction. A determination is made whether the store request completes before the first barrier instruction has issued. If so, only a single barrier operation is issued for both the first and second barrier instructions. The single barrier operation is issued after the store request has been issued, at the time the second barrier operation was scheduled to issue.

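    The folding decision reduces to a small rule, sketched here in Python; the function name and the return encoding are invented for illustration.

```python
# Sketch of barrier folding for the sequence: sync1 ; store ; sync2.
# Names and the string encoding of bus operations are invented.

def issued_barriers(store_completed_before_first_sync):
    """Return the barrier operations actually sent to the system bus.

    The store request is issued ahead of sync1's barrier operation. If it
    has already completed by the time sync1 would issue, sync1 is folded
    into sync2: a single barrier operation goes out, at the time sync2 was
    scheduled to issue.
    """
    if store_completed_before_first_sync:
        return ["folded_sync"]       # one barrier covers both syncs
    return ["sync1", "sync2"]        # otherwise both barriers issue normally
```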

    Dynamic hardware and software performance optimizations for super-coherent SMP systems
    24.
    Invention Grant
    Dynamic hardware and software performance optimizations for super-coherent SMP systems (Expired)

    Publication No.: US06704844B2

    Publication Date: 2004-03-09

    Application No.: US09978361

    Filing Date: 2001-10-16

    IPC Classification: G06F12/10

    CPC Classification: G06F12/0831

    Abstract: A method for increasing performance optimization in a multiprocessor data processing system. A number of predetermined thresholds are provided within the system controller logic and utilized to trigger specific bandwidth utilization responses. Both address bus and data bus bandwidth utilization are monitored. When the percentage of data bus bandwidth utilization falls below a first predetermined threshold value, the system controller provides a particular response to a request for a cache line at a snooping processor holding the line, where the response indicates to the requesting processor that the cache line will be provided. Conversely, if the percentage of data bus bandwidth utilization rises above a second predetermined threshold value, the system controller provides a response indicating that the requesting processor should utilize the super-coherent data currently within its local cache. Similar operation on the address bus permits the system controller to trigger the issuing of Z1 Read requests for modified data in a shared cache line by processors which still have super-coherent data. The method also comprises enabling a load instruction with a plurality of bits that (1) indicate whether a resulting load request may receive super-coherent data and (2) override a coherency state indicating utilization of super-coherent data when said plurality of bits indicates that said load request may not utilize said super-coherent data. Specialized store instructions with appended bits and related functionality are also provided.

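    The data-bus side of the threshold mechanism can be sketched as a simple decision function. The two threshold values and the response names below are invented for illustration; the patent does not specify them.

```python
# Decision sketch for the data-bus thresholds. The threshold values and
# response names are invented for illustration.

LOW_THRESHOLD = 30   # percent utilization below which the bus is idle enough
HIGH_THRESHOLD = 80  # percent utilization above which the bus is saturated

def snoop_response(data_bus_utilization):
    """Pick the system controller's response to a cache-line request."""
    if data_bus_utilization < LOW_THRESHOLD:
        # Plenty of bandwidth: the snooper holding the line will provide it.
        return "will_provide_line"
    if data_bus_utilization > HIGH_THRESHOLD:
        # Bus saturated: tell the requestor to keep using the super-coherent
        # copy already in its local cache instead of fetching fresh data.
        return "use_super_coherent_data"
    return "normal_coherence"
```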

    Method and apparatus for executing multiply-initiated, multiply-sourced variable delay system bus operations
    25.
    Invention Grant
    Method and apparatus for executing multiply-initiated, multiply-sourced variable delay system bus operations (Expired)

    Publication No.: US6128705A

    Publication Date: 2000-10-03

    Application No.: US4148

    Filing Date: 1998-01-07

    IPC Classification: G06F12/08 G06F13/00

    CPC Classification: G06F12/0831

    Abstract: A method and apparatus for preventing the occurrence of deadlocks arising from the execution of multiply-initiated, multiply-sourced variable delay system bus operations. In general, each snooper accepts a given operation at the same time according to an agreed-upon condition. In other words, the snooper in a given cache can accept an operation and begin working on it even while retrying the operation. Furthermore, none of the active snoopers releases an operation until all the active snoopers are done with it. In other words, execution of a given operation is started by the snoopers at the same time and finished by each of the snoopers at the same time. This prevents the ping-pong deadlock by keeping any one cache from finishing the operation before the others.

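    The all-accept/all-release discipline maps naturally onto barrier synchronization. A Python sketch follows, with threads standing in for snoopers; modeling the agreed-upon acceptance condition with `threading.Barrier` is an illustrative choice, not the patent's mechanism.

```python
# Sketch of "all snoopers start together, all finish together". Threads stand
# in for cache snoopers; threading.Barrier models the agreed-upon condition.

import threading

def run_snoopers(n, work):
    start = threading.Barrier(n)  # all accept the operation at the same time
    done = threading.Barrier(n)   # none releases it until all are finished
    results = []
    lock = threading.Lock()

    def snooper(i):
        start.wait()              # agreed-upon acceptance point
        r = work(i)               # process the bus operation
        done.wait()               # hold the operation until everyone is done
        with lock:
            results.append(r)

    threads = [threading.Thread(target=snooper, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    Because no snooper can pass `done` alone, no single cache finishes the operation before the others, which is exactly the property that blocks the ping-pong deadlock.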

    Apparatus and method of layering cache and architectural specific functions to permit generic interface definition
    26.
    Invention Grant
    Apparatus and method of layering cache and architectural specific functions to permit generic interface definition (Expired)

    Publication No.: US6122691A

    Publication Date: 2000-09-19

    Application No.: US224105

    Filing Date: 1999-01-04

    IPC Classification: G06F12/08 G06F13/16 G06F13/00

    Abstract: Cache and architectural functions within a cache controller are layered and provided with generic interfaces. Layering cache and architectural operations allows the definition of generic interfaces between controller logic and bus interface units within the controller. The generic interfaces are defined by extracting the essence of supported operations into a generic protocol. The interfaces themselves may be pulsed or held interfaces, depending on the character of the operation. Because the controller logic is isolated from the specific protocols required by a processor or bus architecture, the design may be transferred directly to new controllers for different protocols or processors by modifying the bus interface units appropriately.

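    The layering idea resembles programming against an abstract interface; a minimal Python sketch follows. All class names, including the hypothetical `SixxBusUnit` adapter and its string protocol, are invented for illustration.

```python
# Sketch of protocol-neutral controller logic behind a generic interface;
# all names and the string "protocol" are invented for illustration.

from abc import ABC, abstractmethod

class BusInterfaceUnit(ABC):
    """Generic interface: the controller logic never sees the bus protocol."""
    @abstractmethod
    def issue(self, op):
        ...

class SixxBusUnit(BusInterfaceUnit):
    """Hypothetical adapter that speaks one particular bus protocol."""
    def issue(self, op):
        return "6xx:" + op

class CacheController:
    """Protocol-neutral logic; retargeting the design means swapping the BIU."""
    def __init__(self, biu):
        self.biu = biu

    def flush_line(self, addr):
        # The controller expresses the operation in the generic protocol only;
        # the bus interface unit translates it to the target bus.
        return self.biu.issue("flush@" + hex(addr))
```

    Moving the design to a different bus then means writing a new `BusInterfaceUnit` subclass while the controller logic is reused unchanged, mirroring the transferability claim in the abstract.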

    Method of layering cache and architectural specific functions to promote operation symmetry
    27.
    Invention Grant
    Method of layering cache and architectural specific functions to promote operation symmetry (Expired)

    Publication No.: US6061755A

    Publication Date: 2000-05-09

    Application No.: US839441

    Filing Date: 1997-04-14

    IPC Classification: G06F12/08 G06F13/38 G06F12/00

    CPC Classification: G06F12/0831

    Abstract: Cache and architectural functions within a cache controller are layered so that architectural operations may be treated symmetrically regardless of whether they are initiated by a local processor or by a horizontal processor. The same cache controller logic which handles architectural operations initiated by a horizontal device also handles architectural operations initiated by a local processor. Architectural operations initiated by a local processor are passed to the system bus and self-snooped by the controller. If necessary, the architectural controller changes the operation protocol to conform to the system bus architecture.

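    A Python sketch of the self-snooping path described above; all names are invented, and the system bus is reduced to a broadcast list.

```python
# Sketch of self-snooping: a local architectural op is driven onto the bus
# and handled by the same snoop logic as ops from horizontal processors.
# All names are invented for illustration.

class SystemBus:
    def __init__(self):
        self.snoopers = []

    def broadcast(self, op):
        # Every attached controller snoops the operation, including the
        # controller that placed it on the bus (self-snooping).
        return [s.snoop(op) for s in self.snoopers]

class ArchController:
    """Layered controller: one snoop handler serves both kinds of initiator."""
    def __init__(self, bus):
        self.bus = bus
        bus.snoopers.append(self)
        self.handled = []

    def local_op(self, op):
        # The local processor's architectural op is not handled in place: it
        # is passed to the system bus and comes back through snoop(), exactly
        # like an operation initiated by a horizontal processor.
        self.bus.broadcast(op)

    def snoop(self, op):
        self.handled.append(op)
        return "ack"
```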

    Method and system for transferring data between buses having differing ordering policies
    28.
    Invention Grant

    Publication No.: US5951668A

    Publication Date: 1999-09-14

    Application No.: US934407

    Filing Date: 1997-09-19

    IPC Classification: G06F13/36 G06F13/40 G06F13/00

    CPC Classification: G06F13/4013 G06F13/36

    Abstract: A method and apparatus for ordering operations and data received by a first bus having a first ordering policy according to a second, different ordering policy, and for transferring the ordered data on a second bus having the second ordering policy. The system includes a plurality of execution units for storing operations and executing the transfer of data between the first and second buses. Each of the execution units is assigned to a group representing a class of operations. The apparatus further includes intra-prioritizing means, for each group, for prioritizing the stored operations according to the second ordering policy, exclusive of the operations stored in the other groups. The system also includes inter-prioritizing means for determining which one of the prioritized operations can proceed to execute according to the second ordering policy.
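    The two levels of prioritization can be sketched as per-group queues plus a cross-group pick. The FIFO intra-group order and the longest-queue inter-group policy below are invented stand-ins for the patent's second ordering policy.

```python
# Sketch of intra-group and inter-group prioritization between two buses.
# The concrete policies (FIFO within a group, longest backlog across groups)
# are invented for illustration.

from collections import deque

class OrderingBridge:
    def __init__(self, groups):
        # One queue per group, each group representing a class of operations.
        self.queues = {g: deque() for g in groups}

    def enqueue(self, group, op):
        # Intra-group prioritization: here, simple arrival order,
        # independent of what the other groups hold.
        self.queues[group].append(op)

    def next_op(self):
        # Inter-group prioritization: pick which group's head operation may
        # proceed; serving the largest backlog first is an assumed policy.
        ready = [(g, q) for g, q in self.queues.items() if q]
        if not ready:
            return None
        _, q = max(ready, key=lambda item: len(item[1]))
        return q.popleft()
```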

    Method and system for controlling access to a shared resource in a data processing system utilizing pseudo-random priorities
    29.
    Invention Grant
    Method and system for controlling access to a shared resource in a data processing system utilizing pseudo-random priorities (Expired)

    Publication No.: US5935234A

    Publication Date: 1999-08-10

    Application No.: US839436

    Filing Date: 1997-04-14

    CPC Classification: G06F13/364

    Abstract: A method and system for controlling access to a shared resource in a data processing system are described. According to the method, a number of requests for access to the resource are generated by a number of requesters that share the resource. Each of the requesters is assigned a current priority, at least the highest current priority being determined substantially randomly with respect to the requesters' previous priorities. In response to the current priorities of the requesters, a request for access to the resource is granted. In one embodiment, a requester corresponding to a granted request is signaled that its request has been granted, and a requester corresponding to a rejected request is signaled that its request was not granted.
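    The substantially random priority assignment can be sketched in a few lines of Python. The function name, the use of a uniform draw per cycle, and the tie handling are invented for illustration.

```python
# Sketch of pseudo-random arbitration: each cycle every requester gets a
# freshly drawn priority, and the highest pending priority wins, so no fixed
# ordering can starve a requester. Names are invented for illustration.

import random

def arbitrate(requesters, rng=random):
    """Grant one pending request; returns (granted, rejected_list)."""
    if not requesters:
        return None, []
    # Current priorities: drawn substantially at random, independent of any
    # priorities assigned on earlier cycles.
    priorities = {r: rng.random() for r in requesters}
    granted = max(requesters, key=priorities.get)
    rejected = [r for r in requesters if r != granted]
    return granted, rejected
```

    As in the described embodiment, the granted requester would then be signaled a grant and each rejected requester a rejection.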
