L1 cache flush when processor is entering low power mode
    31.
    Invention Grant (Active)

    Publication Number: US07752474B2

    Publication Date: 2010-07-06

    Application Number: US11525584

    Filing Date: 2006-09-22

    IPC Classification: G06F1/32

    Abstract: In one embodiment, a processor comprises a data cache configured to store a plurality of cache blocks and a control unit coupled to the data cache. The control unit is configured to flush the plurality of cache blocks from the data cache responsive to an indication that the processor is to transition to a low power state in which one or more clocks for the processor are inhibited.
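
    A rough behavioral reading of that abstract, as a sketch rather than the patented hardware: dirty blocks are written back and the cache invalidated before the clocks are gated. The class and method names (DataCache, ControlUnit, enter_low_power) are illustrative, not taken from the patent.

# Behavioral sketch (not the claimed hardware): a control unit writes back
# dirty blocks and invalidates the data cache before clocks are inhibited.

class CacheBlock:
    def __init__(self, tag, data, dirty=False):
        self.tag, self.data, self.dirty = tag, data, dirty

class DataCache:
    def __init__(self):
        self.blocks = {}   # tag -> CacheBlock
        self.memory = {}   # stand-in for the next cache level / main memory

    def write(self, tag, data):
        self.blocks[tag] = CacheBlock(tag, data, dirty=True)

    def flush_all(self):
        """Write back every dirty block, then invalidate the whole cache."""
        for blk in self.blocks.values():
            if blk.dirty:
                self.memory[blk.tag] = blk.data   # write-back
        self.blocks.clear()                        # invalidate

class ControlUnit:
    def __init__(self, dcache):
        self.dcache = dcache
        self.clocks_enabled = True

    def enter_low_power(self):
        # Flush first, so no dirty state is lost while clocks are stopped.
        self.dcache.flush_all()
        self.clocks_enabled = False

if __name__ == "__main__":
    dc = DataCache()
    dc.write(0x40, "payload")
    cu = ControlUnit(dc)
    cu.enter_low_power()
    assert dc.memory[0x40] == "payload" and not dc.blocks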

    Non-blocking address switch with shallow per agent queues
    32.
    Invention Grant (Active)

    Publication Number: US07752366B2

    Publication Date: 2010-07-06

    Application Number: US12263255

    Filing Date: 2008-10-31

    IPC Classification: G06F13/00

    CPC Classification: G06F13/362 G06F13/4022

    Abstract: In one embodiment, a switch is configured to be coupled to an interconnect. The switch comprises a plurality of storage locations and an arbiter control circuit coupled to the plurality of storage locations. The plurality of storage locations are configured to store a plurality of requests transmitted by a plurality of agents. The arbiter control circuit is configured to arbitrate among the plurality of requests stored in the plurality of storage locations. A selected request is the winner of the arbitration, and the switch is configured to transmit the selected request from one of the plurality of storage locations onto the interconnect. In another embodiment, a system comprises a plurality of agents, an interconnect, and the switch coupled to the plurality of agents and the interconnect. In another embodiment, a method is contemplated.
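
    A loose functional model of the switch described above: each agent owns a shallow queue of stored requests, and an arbiter picks one per cycle to drive onto the interconnect. The queue depth, class name, and round-robin policy are assumptions for illustration, not details from the claims.

# Functional sketch: shallow per-agent queues plus a round-robin arbiter.
from collections import deque

class AddressSwitch:
    def __init__(self, num_agents, queue_depth=2):
        self.queues = [deque(maxlen=queue_depth) for _ in range(num_agents)]
        self.last_winner = -1

    def enqueue(self, agent, request):
        q = self.queues[agent]
        if len(q) == q.maxlen:
            return False          # agent must retry; the switch itself never blocks
        q.append(request)
        return True

    def arbitrate(self):
        """Round-robin pick among agents with a stored request; return the winner."""
        n = len(self.queues)
        for offset in range(1, n + 1):
            agent = (self.last_winner + offset) % n
            if self.queues[agent]:
                self.last_winner = agent
                return agent, self.queues[agent].popleft()
        return None               # nothing to transmit this cycle

if __name__ == "__main__":
    sw = AddressSwitch(num_agents=3)
    sw.enqueue(0, "read 0x1000")
    sw.enqueue(2, "write 0x2000")
    print(sw.arbitrate())         # (0, 'read 0x1000')
    print(sw.arbitrate())         # (2, 'write 0x2000')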

    Fast L1 flush mechanism
    33.
    Invention Application (Active)

    Publication Number: US20080077813A1

    Publication Date: 2008-03-27

    Application Number: US11525584

    Filing Date: 2006-09-22

    IPC Classification: G06F1/32

    Abstract: In one embodiment, a processor comprises a data cache configured to store a plurality of cache blocks and a control unit coupled to the data cache. The control unit is configured to flush the plurality of cache blocks from the data cache responsive to an indication that the processor is to transition to a low power state in which one or more clocks for the processor are inhibited.

    Establishing an operating mode in a processor
    34.
    Invention Grant (Active)

    Publication Number: US07124286B2

    Publication Date: 2006-10-17

    Application Number: US09824890

    Filing Date: 2001-04-02

    IPC Classification: G06F9/30

    Abstract: A processor supports a processing mode in which the address size is greater than 32 bits and the operand size may be 32 or 64 bits. The address size may be nominally indicated as 64 bits, although various embodiments of the processor may implement any address size which exceeds 32 bits, up to and including 64 bits, in the processing mode. The processing mode may be established by placing an enable indication in a control register into an enabled state and by setting a first operating mode indication and a second operating mode indication in a segment descriptor to predefined states. Other combinations of the first operating mode indication and the second operating mode indication may be used to provide compatibility modes for 32 bit and 16 bit processing compatible with the x86 processor architecture (with the enable indication remaining in the enabled state).
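
    One way to picture the mode selection the abstract describes, as an illustrative sketch: an enable bit in a control register plus two mode bits in the code segment descriptor select among a 64-bit mode and 32/16-bit compatibility modes. The bit names (enable, L, D) follow the x86-64 convention this patent family relates to; the exact encoding below is assumed, not quoted from the claims.

# Illustrative decode only: one plausible encoding of the mode bits.
def operating_mode(enable, l_bit, d_bit):
    """Return (address_size, default_operand_size) in bits for this sketch."""
    if enable and l_bit:
        return (64, 32)   # 64-bit mode: >32-bit addresses, default 32-bit operands
    if d_bit:
        return (32, 32)   # 32-bit compatibility mode
    return (16, 16)       # 16-bit compatibility mode

if __name__ == "__main__":
    print(operating_mode(enable=True,  l_bit=True,  d_bit=False))  # (64, 32)
    print(operating_mode(enable=True,  l_bit=False, d_bit=True))   # (32, 32)
    print(operating_mode(enable=False, l_bit=False, d_bit=False))  # (16, 16)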

    High speed bus system that incorporates uni-directional point-to-point buses
    35.
    Invention Grant (Expired)

    Publication Number: US06928500B1

    Publication Date: 2005-08-09

    Application Number: US08883118

    Filing Date: 1997-06-26

    Abstract: A high speed bus system for use in a shared memory system, allowing high speed transmission of commands and data between a number of processors and the memory array of a multi-processor, shared memory system. The bus system includes a central unit and a series of uni-directional buses that connect the plurality of processors to the shared memory. The central unit includes arbitration logic and a series of multiplexers that determine which CPUs are granted access to the shared buses, scheduling logic that works with the arbitration logic and multiplexers, and port logic for combining the CPU transmissions and determining whether such transmissions are valid.
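
    A loose model of the central unit described above, with all names illustrative: per-CPU uni-directional links feed port logic that checks each transmission for validity, and an arbitration decision selects which valid transmission is multiplexed onto the shared bus toward memory.

# Sketch only: validity check plus mux selection under an arbitration order.
def port_logic_valid(transmission):
    # Stand-in validity check; a real design would verify parity, command
    # encoding, etc. Here we only require a command and an address.
    return transmission is not None and "cmd" in transmission and "addr" in transmission

def central_unit(cpu_links, priority_order):
    """cpu_links: per-CPU transmissions (or None). Returns the granted one."""
    for cpu in priority_order:                 # arbitration/scheduling decision
        if port_logic_valid(cpu_links[cpu]):
            return cpu, cpu_links[cpu]         # driven onto the shared memory bus
    return None

if __name__ == "__main__":
    links = [None, {"cmd": "READ", "addr": 0x100}, {"cmd": "WRITE", "addr": 0x200}]
    print(central_unit(links, priority_order=[2, 1, 0]))  # CPU 2 is granted this cycle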

    Response virtual channel for handling all responses
    36.
    Invention Grant (Expired)

    Publication Number: US06888843B2

    Publication Date: 2005-05-03

    Application Number: US09398624

    Filing Date: 1999-09-17

    IPC Classification: G06F13/40 H04L12/54

    CPC Classification: G06F13/405

    Abstract: A computer system employs virtual channels and allocates different resources to the virtual channels. Packets which do not have logical/protocol-related conflicts are grouped into a virtual channel. Accordingly, logical conflicts occur between packets in separate virtual channels. The packets within a virtual channel may share resources (and hence experience resource conflicts), but the packets within different virtual channels may not share resources. Since packets which may experience resource conflicts do not experience logical conflicts, and since packets which may experience logical conflicts do not experience resource conflicts, deadlock-free operation may be achieved. Additionally, nodes within the computer system may be configured to preallocate resources to process response packets. Some response packets may have logical conflicts with other response packets, and hence would normally not be allocable to the same virtual channel. However, by preallocating response-processing resources, response packets are accepted by the destination node. Thus, any resource conflicts which may occur are temporary (as the response packets which make forward progress are processable). Viewed in another way, response packets may be logically independent if the destination node is capable of processing the response packets upon receipt. Accordingly, a response virtual channel is formed to which each response packet belongs.
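
    The buffering idea can be sketched as per-channel buffer pools, with the response channel preallocated so responses can always be accepted. The channel names and buffer counts below are illustrative assumptions, not figures from the patent.

# Sketch: dedicated buffers per virtual channel; back-pressure never crosses channels.
class NodeBuffers:
    def __init__(self, commands=2, probes=2, max_outstanding_requests=4):
        self.free = {
            "command": commands,
            "probe": probes,
            # one response buffer preallocated per request this node can issue
            "response": max_outstanding_requests,
        }

    def accept(self, packet_vc):
        """Accept a packet only if its own channel has a free buffer."""
        if self.free[packet_vc] == 0:
            return False          # stall stays inside this virtual channel
        self.free[packet_vc] -= 1
        return True

    def release(self, packet_vc):
        self.free[packet_vc] += 1

if __name__ == "__main__":
    node = NodeBuffers()
    assert node.accept("command")
    assert node.accept("response")   # always possible while requests are outstanding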

    Memory controller with programmable configuration
    37.
    Invention Grant (Active)

    Publication Number: US06877076B1

    Publication Date: 2005-04-05

    Application Number: US10626790

    Filing Date: 2003-07-24

    IPC Classification: G06F12/02 G06F12/06

    Abstract: A memory controller provides programmable flexibility, via one or more configuration registers, for the configuration of the memory. The memory may be optimized for a given application by programming the configuration registers. For example, in one embodiment, the portion of the address of a memory transaction used to select a storage location for access in response to the memory transaction may be programmable. In an implementation designed for DRAM, a first portion may be programmably selected to form the row address and a second portion may be programmable selected to form the column address. Additional embodiments may further include programmable selection of the portion of the address used to select a bank. Still further, interleave modes among memory sections assigned to different chip selects and among two or more channels to memory may be programmable, in some implementations. Furthermore, the portion of the address used to select between interleaved memory sections or interleaved channels may be programmable. One particular implementation may include all of the above programmable features, which may provide a high degree of flexibility in optimizing the memory system.
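
    The programmable address decomposition can be sketched as configuration registers that name which address bits form the column, bank, and row selects. The field positions below are invented example values for illustration, not a real register layout.

# Sketch: programmable bit-field extraction driven by "configuration register" contents.
def bits(addr, hi, lo):
    """Extract addr[hi:lo], inclusive."""
    return (addr >> lo) & ((1 << (hi - lo + 1)) - 1)

# Example (made-up) programmable mapping: field -> (hi, lo) bit range.
CONFIG = {
    "column": (9, 3),
    "bank":   (12, 10),
    "row":    (26, 13),
}

def decompose(addr, config=CONFIG):
    return {field: bits(addr, hi, lo) for field, (hi, lo) in config.items()}

if __name__ == "__main__":
    print(decompose(0x01234568))
    # Reprogramming the registers changes the mapping without new hardware:
    print(decompose(0x01234568, {"column": (10, 3), "bank": (12, 11), "row": (26, 13)}))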

    Computer system implementing a system and method for tracking the progress of posted write transactions
    38.
    Invention Grant (Active)

    Publication Number: US06721813B2

    Publication Date: 2004-04-13

    Application Number: US09774148

    Filing Date: 2001-01-30

    IPC Classification: G06F13/00

    CPC Classification: G06F13/4243

    Abstract: A computer system is presented which implements a system and method for tracking the progress of posted write transactions. In one embodiment, the computer system includes a processing subsystem and an input/output (I/O) subsystem. The processing subsystem includes multiple processing nodes interconnected via coherent communication links. Each processing node may include a processor preferably executing software instructions. The I/O subsystem includes one or more I/O nodes. Each I/O node may embody one or more I/O functions (e.g., modem, sound card, etc.). The multiple processing nodes may include a first processing node and a second processing node, wherein the first processing node includes a host bridge, and wherein a memory is coupled to the second processing node. An I/O node may generate a non-coherent write transaction to store data within the second processing node's memory, wherein the non-coherent write transaction is a posted write transaction. The I/O node may dispatch the non-coherent write transaction directed to the host bridge. The host bridge may respond to the non-coherent write transaction by translating the non-coherent write transaction to a coherent write transaction, and dispatching the coherent write transaction to the second processing node. The second processing node may respond to the coherent write transaction by dispatching a target done response directed to the host bridge.
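
    As a minimal flow model of that path (class and message names are illustrative): the I/O node's posted write reaches the host bridge, the bridge translates it into a coherent write to the node owning the memory, and the target done response lets the bridge track completion of the posted transaction.

# Sketch only: non-coherent posted write -> coherent write -> target done.
class TargetNode:
    def __init__(self):
        self.memory = {}

    def coherent_write(self, addr, data):
        self.memory[addr] = data
        return "TargetDone"               # response directed back to the host bridge

class HostBridge:
    def __init__(self, target):
        self.target = target
        self.pending_posted_writes = 0    # progress tracking for posted writes

    def noncoherent_posted_write(self, addr, data):
        self.pending_posted_writes += 1
        response = self.target.coherent_write(addr, data)   # translate and dispatch
        if response == "TargetDone":
            self.pending_posted_writes -= 1

if __name__ == "__main__":
    bridge = HostBridge(TargetNode())
    bridge.noncoherent_posted_write(0x8000, b"\x01")
    assert bridge.pending_posted_writes == 0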

    Store load forward predictor training
    39.
    Invention Grant (Active)

    Publication Number: US06694424B1

    Publication Date: 2004-02-17

    Application Number: US09476579

    Filing Date: 2000-01-03

    IPC Classification: G06F9/00

    Abstract: A processor employs a store to load forward (STLF) predictor which may indicate, for dispatching loads, a dependency on a store. The dependency is indicated for a store which, during a previous execution, interfered with the execution of the load. Since a dependency is indicated on the store, the load is prevented from scheduling and/or executing prior to the store. The STLF predictor is trained with information for a particular load and store in response to executing the load and store and detecting the interference. Additionally, the STLF predictor may be untrained (e.g. information for a particular load and store may be deleted) if a load is indicated by the STLF predictor as dependent upon a particular store and the dependency does not actually occur. In one implementation, the STLF predictor records at least a portion of the PC of a store which interferes with the load in a first table indexed by the load PC. A second table maintains a corresponding portion of the store PCs of recently dispatched stores, along with tags identifying the recently dispatched stores. In another implementation, the STLF predictor records a difference between the tags assigned to a load and a store which interferes with the load in a first table indexed by the load PC. The PC of the dispatching load is used to select a difference from the table, and the difference is added to the tag assigned to the load.
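
    A toy model of the training and untraining behavior of the first implementation, assuming an illustrative table size, index hash, and partial-PC width: a table indexed by the load PC remembers a portion of the store PC that interfered, so the next dispatch of that load is held behind the matching recently dispatched store.

# Sketch only: train on interference, predict a store dependency, untrain on a false match.
TABLE_SIZE = 64
stlf_table = {}          # load_pc_index -> store PC fragment

def index(load_pc):
    return load_pc % TABLE_SIZE

def train(load_pc, store_pc):
    """Called when executing the load detected interference from the store."""
    stlf_table[index(load_pc)] = store_pc & 0xFFF   # keep only a PC portion

def untrain(load_pc):
    """Called when a predicted dependency did not actually occur."""
    stlf_table.pop(index(load_pc), None)

def predicted_store(load_pc, recently_dispatched_store_pcs):
    """On dispatch: return the store PC this load should wait for, if any."""
    fragment = stlf_table.get(index(load_pc))
    if fragment is None:
        return None
    for store_pc in recently_dispatched_store_pcs:
        if store_pc & 0xFFF == fragment:
            return store_pc
    return None

if __name__ == "__main__":
    train(load_pc=0x4010, store_pc=0x3FF8)                      # interference observed
    print(hex(predicted_store(0x4010, [0x3FF8, 0x5000])))       # 0x3ff8
    untrain(0x4010)                                             # dependency never recurred
    print(predicted_store(0x4010, [0x3FF8]))                    # None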

    Cache which provides partial tags from non-predicted ways to direct search if way prediction misses
    40.
    Invention Grant (Active)

    Publication Number: US06687789B1

    Publication Date: 2004-02-03

    Application Number: US09476577

    Filing Date: 2000-01-03

    IPC Classification: G06F12/00

    Abstract: A cache is coupled to receive an input address and a corresponding way prediction. The cache provides output bytes in response to the predicted way (instead of performing tag comparisons to select the output bytes). Furthermore, a tag may be read from the predicted way and only partial tags are read from the non-predicted ways. The tag is compared to the tag portion of the input address, and the partial tags are compared to a corresponding partial tag portion of the input address. If the tag matches the tag portion of the input address, a hit in the predicted way is detected and the bytes provided in response to the predicted way are correct. If the tag does not match the tag portion of the input address, a miss in the predicted way is detected. If none of the partial tags match the corresponding partial tag portion of the input address, a miss in the cache is determined. On the other hand, if one or more of the partial tags match the corresponding partial tags portion of the input address, the cache searches the corresponding ways to determine whether or not the input address hits or misses in the cache.
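
    The lookup sequence can be sketched as follows; the number of ways and the partial-tag width are assumed for illustration, not taken from the patent.

# Sketch: full tag from the predicted way, partial tags elsewhere, directed search on mispredict.
NUM_WAYS = 4
PARTIAL_BITS = 0xFF               # low 8 bits of the tag serve as the partial tag

def lookup(set_tags, full_tag, predicted_way):
    """set_tags: list of NUM_WAYS stored tags (None = invalid way)."""
    if set_tags[predicted_way] == full_tag:
        return ("hit", predicted_way)               # fast case: prediction correct

    # Prediction missed: compare the partial tags read from the other ways.
    candidates = [
        way for way, tag in enumerate(set_tags)
        if way != predicted_way and tag is not None
        and (tag & PARTIAL_BITS) == (full_tag & PARTIAL_BITS)
    ]
    if not candidates:
        return ("miss", None)                       # no partial match: definite miss

    # Otherwise search only the candidate ways with full-tag comparisons.
    for way in candidates:
        if set_tags[way] == full_tag:
            return ("hit", way)
    return ("miss", None)

if __name__ == "__main__":
    tags = [0x1A00, 0x2B11, 0x3C22, None]
    print(lookup(tags, full_tag=0x2B11, predicted_way=0))   # ('hit', 1)
    print(lookup(tags, full_tag=0x9911, predicted_way=0))   # partial match on way 1, then miss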
