Bus master and bus snooper for execution of global operations utilizing a single token for multiple operations with explicit release
    81.
    Invention grant
    Bus master and bus snooper for execution of global operations utilizing a single token for multiple operations with explicit release (Expired)

    Publication No.: US06516368B1

    Publication Date: 2003-02-04

    Application No.: US09435928

    Filing Date: 1999-11-09

    IPC Class: G06F 13/14

    CPC Class: G06F 12/0831

    Abstract: In response to a need to initiate one or more global operations, a bus master within a multiprocessor system issues a combined token and operation request in a single bus transaction on a bus coupled to the bus master. The combined token and operation request solicits the single existing token required to complete global operations within the multiprocessor system and identifies the first of the global operations to be processed with the token, if granted. Once a bus master is granted the token, no other bus master will be granted the token until the current token owner explicitly requests release. The current token owner repeats the combined token and operation request for each global operation which needs to be initiated and, on the last global operation, issues a combined request with an explicit release. Acknowledgement of the combined request with release implies release of the token for use by other bus masters.

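
    The single-token arbitration scheme described above can be sketched as a small state machine. This is an illustrative model only; `TokenArbiter`, its method names, and the ack/retry responses are assumptions, not taken from the patent:

    ```python
    class TokenArbiter:
        """Models the single global-operation token with explicit release."""

        def __init__(self):
            self.owner = None  # bus master currently holding the token

        def combined_request(self, master, operation, release=False):
            """Handle a combined token-and-operation request in one transaction."""
            if self.owner is None or self.owner == master:
                self.owner = master          # grant (or retain) the single token
                ack = ("ack", operation)
                if release:                  # explicit release piggybacked on the
                    self.owner = None        # last operation's acknowledgement
                return ack
            return ("retry", operation)      # token held by another master
    ```

    A master that is denied the token simply retries; once the owner issues its last combined request with `release=True`, the acknowledgement frees the token for the next requester.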

    Error recovery mechanism for a high-performance interconnect
    82.
    Invention grant
    Error recovery mechanism for a high-performance interconnect (In force)

    Publication No.: US06487679B1

    Publication Date: 2002-11-26

    Application No.: US09437041

    Filing Date: 1999-11-09

    IPC Class: G06F 11/08

    CPC Class: G06F 11/1443

    Abstract: An error recovery mechanism for an interconnect is disclosed. A data processing system includes a bus connected between a bus master and a bus slave. In response to a parity error occurring on the bus, the bus slave issues a bus parity error response to the bus master via the bus. After waiting for a predetermined number of bus cycles to allow the bus to idle, the bus master then issues a RESTART bus command packet to the bus slave via the bus to clear the parity error. If the RESTART bus command packet is received correctly, the bus slave will remove the parity error response such that normal bus communication may resume.

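
    The RESTART handshake above can be modeled as follows. This is a minimal sketch, assuming a simple packet/response interface; `BusSlave`, `master_recover`, and the string responses are illustrative names, not the patent's signals:

    ```python
    class BusSlave:
        """Slave side: sticks in a parity-error state until RESTART clears it."""

        def __init__(self):
            self.parity_error = False

        def receive(self, packet, parity_ok=True):
            if not parity_ok:
                self.parity_error = True     # raise the bus parity error response
                return "PARITY_ERROR"
            if packet == "RESTART" and self.parity_error:
                self.parity_error = False    # correctly received RESTART clears it
                return "ACK"
            return "ACK" if not self.parity_error else "PARITY_ERROR"

    def master_recover(slave, idle_cycles=8):
        """After a parity error, wait a predetermined number of idle bus
        cycles, then issue the RESTART bus command packet."""
        for _ in range(idle_cycles):
            pass                             # models the idle wait
        return slave.receive("RESTART")
    ```

    Until the RESTART packet arrives intact, every transfer keeps drawing the parity-error response, which is what forces the master into the recovery path.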

    Programmable agent and method for managing prefetch queues
    83.
    Invention grant
    Programmable agent and method for managing prefetch queues (In force)

    Publication No.: US06470427B1

    Publication Date: 2002-10-22

    Application No.: US09436373

    Filing Date: 1999-11-09

    IPC Class: G06F 12/00

    CPC Class: G06F 12/0862 G06F 2212/6028

    Abstract: A programmable agent and method for managing prefetch queues provide dynamically configurable handling of priorities in a prefetching subsystem for providing look-ahead memory loads in a computer system. When its queues are at capacity, an agent handling prefetches from memory either ignores new requests, forces the new requests to retry, or cancels a pending request in order to perform the new request. The behavior can be adjusted under program control by programming a register, or the control may be coupled to a load pattern analyzer. In addition, the behavior with respect to new requests can be set to different types depending on a phase of a pending request.

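
    The three full-queue behaviors (ignore, retry, cancel) can be sketched with a policy register selecting among them. `PrefetchAgent`, the policy names, and the result strings are illustrative assumptions, not the patent's terms:

    ```python
    from collections import deque

    class PrefetchAgent:
        """Prefetch-queue agent whose full-queue behavior is programmable."""

        def __init__(self, capacity, policy="ignore"):
            self.queue = deque()
            self.capacity = capacity
            self.policy = policy  # "ignore" | "retry" | "cancel" (policy register)

        def request(self, addr):
            if len(self.queue) < self.capacity:
                self.queue.append(addr)
                return "queued"
            if self.policy == "ignore":
                return "dropped"          # silently ignore the new request
            if self.policy == "retry":
                return "retry"            # force the requester to retry later
            self.queue.popleft()          # "cancel": drop a pending prefetch
            self.queue.append(addr)       # to make room for the new request
            return "queued"
    ```

    In the patent the policy could also be driven by a load pattern analyzer rather than a programmed register; here the register is just a constructor argument.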

    Method and apparatus for forwarding data in a hierarchical cache memory architecture
    84.
    Invention grant
    Method and apparatus for forwarding data in a hierarchical cache memory architecture (Expired)

    Publication No.: US06467030B1

    Publication Date: 2002-10-15

    Application No.: US09435962

    Filing Date: 1999-11-09

    IPC Class: G06F 12/00

    Abstract: A method and apparatus for forwarding data in a hierarchical cache memory architecture is disclosed. A cache memory hierarchy includes multiple levels of cache memories, each level having a different size and speed. A command is initially issued from a processor to the cache memory hierarchy. If the command is a Demand Load command, data corresponding to the Demand Load command is immediately forwarded from a cache having the data to the processor. Otherwise, if the command is a Prefetch Load command, data corresponding to the Prefetch Load command is held in a cache reload buffer within a cache memory preceding the processor.

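
    The Demand-versus-Prefetch distinction reduces to a simple dispatch. This sketch is illustrative; `handle_load` and its result tuples are assumed names, not the patent's interface:

    ```python
    def handle_load(command, addr, cache_data, reload_buffer):
        """Demand loads are forwarded straight to the processor; prefetch
        loads are parked in the reload buffer of the cache that precedes
        the processor, until the data is actually demanded."""
        data = cache_data[addr]
        if command == "DEMAND":
            return ("forward_to_processor", data)
        reload_buffer[addr] = data           # PREFETCH: hold, do not forward
        return ("held_in_reload_buffer", addr)
    ```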

    Method and apparatus supporting non-geographic telephone numbers
    85.
    Invention grant
    Method and apparatus supporting non-geographic telephone numbers (Expired)

    Publication No.: US06463270B1

    Publication Date: 2002-10-08

    Application No.: US08592212

    Filing Date: 1996-01-26

    IPC Class: H04Q 7/20

    CPC Class: H04W 8/26

    Abstract: A communications network may include a translation server containing a NGPN-to-HLR mapping table. The translation server may be a single, centralized translation server, or several TSs may be on the network. When a number of translation servers are used, a VLR or other network entity receiving an NGPN determines which translation server contains the mapping for that NGPN. One way to do this: when a subscriber roams out of his &ldquo;home&rdquo; region, his NGPN is presented to the &ldquo;foreign&rdquo; service provider's TS, and the foreign TS broadcasts a query to all other TSs in the network, either simultaneously or in stages. Another way is that a VLR receiving a NGPN performs a hash function on the NGPN. The hash function identifies a translation server. The VLR may then query the translation server and obtain the NGPN-to-HLR mapping. Where a hash function is used, an extendable hash function can accommodate the addition of new TSs without changing the VLR operating systems. Alternatively, where translation servers are identified with hash functions, further additional TSs are accommodated by a two-stage TS. A TS split into a number of TSs performs a second hash function to determine the location of the TS having the NGPN-to-HLR mapping requested.

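
    The hash-based lookup, including the two-stage case for a split TS, can be sketched as below. The patent does not fix a hash function; modulo hashing, the function names, and integer NGPNs are all assumptions for illustration:

    ```python
    def locate_ts(ngpn, ts_count):
        """First-stage hash: map an NGPN to a translation-server index."""
        return hash(ngpn) % ts_count

    def lookup_hlr(ngpn, ts_tables, second_stage=None):
        """Resolve an NGPN to its HLR via one or two hash stages."""
        ts = locate_ts(ngpn, len(ts_tables))
        table = ts_tables[ts]
        if second_stage and ts in second_stage:
            # A TS that has been split into several TSs applies a second
            # hash to pick the sub-server holding the NGPN-to-HLR mapping.
            subs = second_stage[ts]
            table = subs[hash(ngpn) % len(subs)]
        return table.get(ngpn)
    ```

    The broadcast-query alternative from the abstract needs no hash at all; hashing trades that query traffic for a deterministic single lookup.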

    Bus snooper for SMP execution of global operations utilizing a single token with implied release
    86.
    Invention grant
    Bus snooper for SMP execution of global operations utilizing a single token with implied release (Expired)

    Publication No.: US06460100B1

    Publication Date: 2002-10-01

    Application No.: US09435929

    Filing Date: 1999-11-09

    IPC Class: G06F 13/14

    CPC Class: G06F 13/37

    摘要: Only a single snooper queue for global operations within a multiprocessor system is implemented within each bus snooper, controlled by a single token allowing completion of one operation. A bus snooper, upon detecting a combined token and operation request, begins speculatively processing the operation if the snooper is not already busy. The snooper then watches for a combined response acknowledging the combined request or a subsequent token request from the same processor, which indicates that the originating processor has been granted the sole token for completing global operations, before completing the operation. When processing an operation from a combined request and detecting an operation request (only) from a different processor, which indicates that another processor has been granted the token, the snooper suspends processing of the current operation and begins processing the new operation. If the snooper is busy when a combined request is received, the snooper retries the operation portion of the combined request and, upon detecting a subsequent operation request (only) for the operation, begins processing the operation at that time if not busy. Snoop logic for large multiprocessor systems is thus simplified, with conflict reduced to situations in which multiple processors are competing for the token.

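
    A heavily simplified sketch of the snooper's decisions follows: speculate on a combined request when idle, retry when busy, and preempt speculation when an operation-only request shows a different processor owns the token. `BusSnooper` and the response strings are illustrative, and the confirmation/completion path is omitted:

    ```python
    class BusSnooper:
        """Single snoop queue for global operations, per the abstract."""

        def __init__(self):
            self.current = None          # (processor, operation) in flight

        def snoop(self, kind, processor, operation=None):
            if kind == "combined":       # token + operation in one transaction
                if self.current is None:
                    self.current = (processor, operation)  # speculate
                    return "processing"
                return "retry"           # busy: retry the operation portion
            if kind == "operation":      # operation-only: sender holds the token
                if self.current and self.current[0] != processor:
                    self.current = (processor, operation)  # drop speculation
                    return "switched"
                if self.current is None:
                    self.current = (processor, operation)
                return "processing"
            return "ignored"
    ```

    The key simplification the patent claims survives here: one queue entry suffices, because at most one processor can legitimately hold the token at a time.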

    Address dependent caching behavior within a data processing system having HSA (hashed storage architecture)
    87.
    Invention grant
    Address dependent caching behavior within a data processing system having HSA (hashed storage architecture) (Expired)

    Publication No.: US06446165B1

    Publication Date: 2002-09-03

    Application No.: US09364287

    Filing Date: 1999-07-30

    IPC Class: G06F 13/00

    CPC Class: G06F 12/0811

    摘要: A processor includes at least one execution unit, an instruction sequencing unit coupled to the execution unit, and a plurality of caches at a same level. The caches, which store data utilized by the execution unit, each store only data having associated addresses within a respective one of a plurality of subsets of an address space and implement diverse caching behaviors. The diverse caching behaviors can include differing memory update policies, differing coherence protocols, differing prefetch behaviors, and differing cache line replacement policies.

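
    Address-dependent behavior follows from hashing each address to one same-level cache slice that carries its own policies. This sketch assumes a simple modulo hash and dictionary-valued policies; both are illustrative choices:

    ```python
    class HashedCaches:
        """Several same-level caches, each owning one subset of the address
        space and applying its own caching behaviors (update policy,
        coherence protocol, prefetch, replacement)."""

        def __init__(self, behaviors):
            self.behaviors = behaviors           # one policy dict per slice

        def slice_for(self, addr):
            return addr % len(self.behaviors)    # hash address -> cache slice

        def policy(self, addr, key):
            return self.behaviors[self.slice_for(addr)][key]
    ```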

    Method of cache management to dynamically update information-type dependent cache policies
    88.
    Invention grant
    Method of cache management to dynamically update information-type dependent cache policies (Expired)

    Publication No.: US06434669B1

    Publication Date: 2002-08-13

    Application No.: US09390189

    Filing Date: 1999-09-07

    IPC Class: G06F 12/06

    摘要: A set associative cache includes a cache controller, a directory, and an array including at least one congruence class containing a plurality of sets. The plurality of sets are partitioned into multiple groups according to which of a plurality of information types each set can store. The sets are partitioned so that at least two of the groups include the same set and at least one of the sets can store fewer than all of the information types. To optimize cache operation, the cache controller dynamically modifies a cache policy of a first group while retaining a cache policy of a second group, thus permitting the operation of the cache to be individually optimized for different information types. The dynamic modification of cache policy can be performed in response to either a hardware-generated or software-generated input.

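
    The partitioning into possibly overlapping groups, with one group's policy updated while another's is retained, can be sketched as follows. The information types, set ranges, and policy names here are invented for illustration:

    ```python
    class GroupedCache:
        """Sets of a congruence class partitioned into overlapping groups by
        information type; each group's cache policy is independently and
        dynamically modifiable."""

        def __init__(self):
            # Sets 0-3 may hold instructions, sets 2-7 may hold data:
            # groups share sets 2-3, and no single set holds every type.
            self.groups = {
                "instruction": {"sets": range(0, 4), "policy": "LRU"},
                "data":        {"sets": range(2, 8), "policy": "LRU"},
            }

        def set_policy(self, info_type, policy):
            self.groups[info_type]["policy"] = policy   # dynamic update

        def policy(self, info_type):
            return self.groups[info_type]["policy"]
    ```

    In the patent the `set_policy` trigger may be either a hardware-generated or a software-generated input; here it is just a method call.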

    Method for alternate preferred time delivery of load data
    89.
    Invention grant
    Method for alternate preferred time delivery of load data (Expired)

    Publication No.: US06389529B1

    Publication Date: 2002-05-14

    Application No.: US09344059

    Filing Date: 1999-06-25

    IPC Class: G06F 9/312

    Abstract: A system for time-ordered execution of load instructions. More specifically, the system enables just-in-time delivery of data requested by a load instruction. The system consists of a processor, an L1 data cache with corresponding L1 cache controller, and an instruction processor. The instruction processor manipulates a plurality of architected time dependency fields of a load instruction to create a plurality of dependency fields. The dependency fields hold relative dependency values which are utilized to order the load instruction in a Relative Time-Ordered Queue (RTOQ) of the L1 cache controller. The load instruction is sent from the RTOQ to the L1 data cache at a particular time so that the data requested is loaded from the L1 data cache at the time specified by one of the dependency fields. The dependency fields are prioritized so that the cycle corresponding to the highest priority field which is available is utilized.

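
    The RTOQ's ordering rule, including the fallback from the highest-priority dependency field to the next available one, can be sketched with a priority heap. `RelativeTimeOrderedQueue` and the cycle-availability model are assumptions for illustration:

    ```python
    import heapq

    class RelativeTimeOrderedQueue:
        """Orders loads by the cycle chosen from their dependency fields,
        so each load is sent to the L1 data cache just in time."""

        def __init__(self):
            self.q = []

        def enqueue(self, load, dependency_fields, busy_cycles=()):
            # dependency_fields are listed in priority order; take the first
            # (highest-priority) field whose cycle slot is still available.
            cycle = next(c for c in dependency_fields if c not in busy_cycles)
            heapq.heappush(self.q, (cycle, load))

        def next_load(self):
            return heapq.heappop(self.q)   # (cycle, load) in time order
    ```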

    Method and system for allocating lower level cache entries for data castout from an upper level cache
    90.
    Invention grant
    Method and system for allocating lower level cache entries for data castout from an upper level cache (Expired)

    Publication No.: US06370618B1

    Publication Date: 2002-04-09

    Application No.: US09436376

    Filing Date: 1999-11-09

    IPC Class: G06F 12/08

    CPC Class: G06F 12/0897 G06F 12/123

    摘要: A method and system for allocating lower level cache entries for data castout from an upper level cache provides improved computer system performance by adjusting the ordering of least-recently-used (LRU) information within a cache. Data that is castout from a higher level cache can be written after a read is satisfied and the castout entry will not be labeled as most-recently-used. This improves performance under certain operating conditions of a computing system, as castout data is often less important to keep in lower level cache than data that is also present in the higher level cache.

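
    The LRU-ordering adjustment reduces to where a new line enters the recency stack: demand fills become most-recently-used, while castouts from the upper level enter at the least-recently-used end. A minimal sketch, with `LRUCache` and its list-based recency stack as illustrative assumptions:

    ```python
    class LRUCache:
        """Lower-level cache that accepts upper-level castouts without
        promoting them to most-recently-used."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = []              # index 0 = LRU, last index = MRU

        def fill(self, tag, castout=False):
            if tag in self.lines:
                self.lines.remove(tag)
            elif len(self.lines) >= self.capacity:
                self.lines.pop(0)        # evict the true LRU line
            if castout:
                self.lines.insert(0, tag)   # castout enters at LRU position
            else:
                self.lines.append(tag)      # demand fill becomes MRU
    ```

    The effect matches the abstract: castout data, often less valuable than lines the upper-level cache still holds, is first in line for eviction.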