Apparatus and method for shared cache control including cache lines selectively operable in inclusive or non-inclusive mode
    1.
    Granted invention patent (in force)

    Publication No.: US09477600B2

    Publication Date: 2016-10-25

    Application No.: US13137357

    Filing Date: 2011-08-08

    IPC Class: G06F12/08

    Abstract: A data processing system 2 includes a cache hierarchy having a plurality of local cache memories and a shared cache memory 18. State data 30, 32 stored within the shared cache memory 18 on a per cache line basis is used to control whether that cache line of data is stored and managed in accordance with non-inclusive or inclusive operation of the cache memory system. Snoop transactions are filtered on the basis of data indicating whether a cache line of data is unique or non-unique. A switch from non-inclusive operation to inclusive operation may be performed in dependence upon the transaction type of a received transaction requesting a cache line of data.
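
    The abstract describes per-line state controlling inclusive versus non-inclusive handling. Below is a minimal behavioural sketch in Python of how such per-line state might be tracked; the names (SharedCacheLine, on_request, READ_SHARED) are illustrative assumptions and this is not the patented implementation.

        from dataclasses import dataclass, field

        @dataclass
        class SharedCacheLine:
            tag: int
            data: bytes
            inclusive: bool = False   # per-line state: inclusive vs non-inclusive handling
            unique: bool = True       # unique/non-unique indication used to filter snoops
            sharers: set = field(default_factory=set)

        class SharedCache:
            def __init__(self):
                self.lines = {}       # tag -> SharedCacheLine

            def on_request(self, tag, requester, txn_type):
                line = self.lines.get(tag)
                if line is None:
                    return None
                # A shared-read style transaction switches the line from
                # non-inclusive to inclusive handling, since several local
                # caches may now hold copies of it.
                if txn_type == "READ_SHARED":
                    line.inclusive = True
                line.sharers.add(requester)
                line.unique = len(line.sharers) <= 1
                return line.data

            def needs_snoop(self, tag):
                # Snoop filtering: a line known to be unique (held by at most
                # one local cache) needs no snoops to the other local caches.
                line = self.lines.get(tag)
                return line is not None and not line.unique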

Processing resource allocation within an integrated circuit supporting transaction requests of different priority levels
    2.
    Granted invention patent (in force)

    Publication No.: US08490107B2

    Publication Date: 2013-07-16

    Application No.: US13137362

    Filing Date: 2011-08-08

    IPC Class: G06F9/46 G06F15/16 G06F15/173

    Abstract: An integrated circuit 2 includes a plurality of transaction sources 6, 8, 10, 12, 14, 16, 18, 20 communicating via a ring-based interconnect 30 with shared caches 22, 24, each having an associated POC/POS 30, 34 and serving as a request servicing circuit. The request servicing circuits have a set of processing resources 36 that may be allocated to different transactions. These processing resources may be allocated either dynamically or statically. Static allocation can be made in dependence upon a selection algorithm. This selection algorithm may use a quality of service value/priority level as one of its input variables. A starvation ratio may also be defined such that lower priority levels are forced to be selected if they are starved of allocation for too long. A programmable mapping may be made between quality of service values and priority levels. The maximum number of processing resources allocated to each priority level may also be programmed.
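
    As a rough illustration of the allocation scheme in the abstract (programmable QoS-to-priority mapping, per-priority caps and a starvation ratio), here is a Python sketch; names such as ResourceAllocator and starvation_ratio are assumptions for illustration, not terms from the patent.

        class ResourceAllocator:
            def __init__(self, qos_to_priority, max_per_priority, starvation_ratio, total_resources):
                self.qos_to_priority = qos_to_priority    # programmable QoS -> priority mapping
                self.max_per_priority = max_per_priority  # programmable per-priority resource cap
                self.starvation_ratio = starvation_ratio
                self.total = total_resources
                self.in_use = {p: 0 for p in max_per_priority}
                self.starved = {p: 0 for p in max_per_priority}   # rounds since last grant

            def allocate(self, pending):
                """pending: list of (txn_id, qos); returns the transaction granted a resource, or None."""
                if not pending or sum(self.in_use.values()) >= self.total:
                    return None
                # Priority levels starved beyond the starvation ratio are forced to the front.
                forced = {p for p, s in self.starved.items() if s >= self.starvation_ratio}
                def rank(item):
                    prio = self.qos_to_priority[item[1]]
                    return (0 if prio in forced else 1, -prio)
                for txn_id, qos in sorted(pending, key=rank):
                    prio = self.qos_to_priority[qos]
                    if self.in_use[prio] < self.max_per_priority[prio]:
                        self.in_use[prio] += 1
                        self.starved[prio] = 0
                        for other in self.starved:
                            if other != prio:
                                self.starved[other] += 1
                        return txn_id
                return None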

Snoop filter and non-inclusive shared cache memory
    3.
    Invention patent application (published)

    Publication No.: US20130042078A1

    Publication Date: 2013-02-14

    Application No.: US13137359

    Filing Date: 2011-08-08

    IPC Class: G06F12/08

    Abstract: A data processing apparatus 2 includes a plurality of transaction sources 8, 10, each including a local cache memory. A shared cache memory 16 stores cache lines of data together with shared cache tag values. Snoop filter circuitry 14 stores snoop filter tag values tracking which cache lines of data are stored within the local cache memories. When a transaction is received for a target cache line of data, the snoop filter circuitry 14 compares the target tag value with the snoop filter tag values and the shared cache circuitry 16 compares the target tag value with the shared cache tag values. The shared cache circuitry 16 operates in a default non-inclusive mode. The shared cache memory 16 and the snoop filter 14 accordingly behave non-inclusively in respect of data storage within the shared cache memory 16, but inclusively in respect of tag storage, given the combined action of the snoop filter tag values and the shared cache tag values. Tag maintenance operations moving tag values between the snoop filter circuitry 14 and the shared cache memory 16 are performed atomically. The compare operations of the snoop filter circuitry 14 and the shared cache memory 16 are performed using interlocked parallel pipelines.
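
    The lookup path described in the abstract (snoop filter tags consulted alongside shared cache tags for each incoming transaction) can be sketched behaviourally as follows. In hardware the two compares run in interlocked parallel pipelines; this Python sketch simply evaluates them back to back, and all names are illustrative assumptions.

        class SnoopFilter:
            def __init__(self):
                self.tags = {}                 # tag -> set of local caches holding the line

            def lookup(self, tag):
                return self.tags.get(tag, set())

        class SharedCache:
            def __init__(self):
                self.lines = {}                # tag -> data; non-inclusive of the local caches

            def lookup(self, tag):
                return self.lines.get(tag)

        def handle_transaction(tag, snoop_filter, shared_cache):
            holders = snoop_filter.lookup(tag)     # which local caches would need snooping
            data = shared_cache.lookup(tag)        # is the data held in the shared cache itself
            if data is not None:
                return ("hit_in_shared_cache", data)
            if holders:
                return ("snoop_local_caches", sorted(holders))
            return ("fetch_from_memory", None)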

Snoop filter and non-inclusive shared cache memory
    4.
    Granted invention patent (in force)

    Publication No.: US08935485B2

    Publication Date: 2015-01-13

    Application No.: US13137359

    Filing Date: 2011-08-08

    IPC Class: G06F12/00 G06F12/08

    Abstract: A data processing apparatus 2 includes a plurality of transaction sources 8, 10, each including a local cache memory. A shared cache memory 16 stores cache lines of data together with shared cache tag values. Snoop filter circuitry 14 stores snoop filter tag values tracking which cache lines of data are stored within the local cache memories. When a transaction is received for a target cache line of data, the snoop filter circuitry 14 compares the target tag value with the snoop filter tag values and the shared cache circuitry 16 compares the target tag value with the shared cache tag values. The shared cache circuitry 16 operates in a default non-inclusive mode. The shared cache memory 16 and the snoop filter 14 accordingly behave non-inclusively in respect of data storage within the shared cache memory 16, but inclusively in respect of tag storage, given the combined action of the snoop filter tag values and the shared cache tag values. Tag maintenance operations moving tag values between the snoop filter circuitry 14 and the shared cache memory 16 are performed atomically. The compare operations of the snoop filter circuitry 14 and the shared cache memory 16 are performed using interlocked parallel pipelines.
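
    This grant shares its abstract with the application listed above, so the sketch below picks out a different element: tag maintenance operations that move a tag between the snoop filter and the shared cache as one atomic step. The lock merely models that atomicity; class and method names are illustrative assumptions.

        import threading

        class TagDirectory:
            def __init__(self):
                self.snoop_filter_tags = {}   # tag -> local cache currently holding the line
                self.shared_cache_tags = {}   # tag -> data held in the shared cache
                self._lock = threading.Lock()

            def writeback(self, tag, data):
                # A local cache evicts the line: its tag moves from the snoop
                # filter into the shared cache with no intermediate state
                # visible to other transactions.
                with self._lock:
                    self.snoop_filter_tags.pop(tag, None)
                    self.shared_cache_tags[tag] = data

            def refill_local(self, tag, requester):
                # Opposite direction: the line leaves the shared cache and is
                # now tracked by the snoop filter as held in a local cache.
                with self._lock:
                    data = self.shared_cache_tags.pop(tag, None)
                    self.snoop_filter_tags[tag] = requester
                    return data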

Processing resource allocation within an integrated circuit
    5.
    Invention patent application (under examination, published)

    Publication No.: US20130042252A1

    Publication Date: 2013-02-14

    Application No.: US13137360

    Filing Date: 2011-08-08

    IPC Class: G06F9/50

    CPC Class: G06F13/374

    Abstract: An integrated circuit 2 includes a plurality of transaction sources 6, 8, 10, 12, 14, 16, 18, 20 communicating via a ring-based interconnect 30 with shared caches 22, 24, each having an associated POC/POS 30, 34 and serving as a request servicing circuit. The request servicing circuits have a set of processing resources 36 that may be allocated to different transactions. These processing resources may be allocated either dynamically or statically. Static allocation can be made in dependence upon a selection algorithm. This selection algorithm may use a quality of service value/priority level as one of its input variables. A starvation ratio may also be defined such that lower priority levels are forced to be selected if they are starved of allocation for too long. A programmable mapping may be made between quality of service values and priority levels. The maximum number of processing resources allocated to each priority level may also be programmed.
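
    This application shares its abstract with entry 2; to avoid repeating the allocator sketch there, the snippet below only illustrates the two programmable tables the abstract mentions, the QoS-to-priority mapping and the per-priority resource caps. Table contents and names are illustrative assumptions.

        # 16 QoS values carried by transactions are mapped onto 4 priority levels
        # by a programmable table.
        QOS_TO_PRIORITY = {qos: qos // 4 for qos in range(16)}

        # Programmable cap on how many processing resources each priority level
        # may occupy at once.
        MAX_RESOURCES_PER_PRIORITY = {0: 2, 1: 4, 2: 6, 3: 8}

        def may_admit(txn_qos, in_use_per_priority):
            """True if a transaction with this QoS value can still be granted a
            resource under the per-priority caps."""
            level = QOS_TO_PRIORITY[txn_qos]
            return in_use_per_priority.get(level, 0) < MAX_RESOURCES_PER_PRIORITY[level]

        print(may_admit(13, {3: 7}))   # True: level 3 is below its cap of 8
        print(may_admit(2, {0: 2}))    # False: level 0 is already at its cap of 2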

Shared cache memory control
    7.
    Invention patent application (in force)

    Publication No.: US20130042070A1

    Publication Date: 2013-02-14

    Application No.: US13137357

    Filing Date: 2011-08-08

    IPC Class: G06F12/08

    Abstract: A data processing system 2 includes a cache hierarchy having a plurality of local cache memories and a shared cache memory 18. State data 30, 32 stored within the shared cache memory 18 on a per cache line basis is used to control whether that cache line of data is stored and managed in accordance with non-inclusive or inclusive operation of the cache memory system. Snoop transactions are filtered on the basis of data indicating whether a cache line of data is unique or non-unique. A switch from non-inclusive operation to inclusive operation may be performed in dependence upon the transaction type of a received transaction requesting a cache line of data.
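
    This application shares its abstract with entry 1; as a complement to the sketch there, the snippet below isolates the decision to switch a line from non-inclusive to inclusive handling based on the type of the requesting transaction. The transaction-type names are assumptions, not terminology from the patent.

        # Transaction types assumed here to imply that several local caches
        # will share the data, making inclusive handling of the line worthwhile.
        SHARING_TXN_TYPES = {"READ_SHARED", "READ_CLEAN"}

        def next_line_mode(current_mode, txn_type):
            """current_mode is 'inclusive' or 'non_inclusive'; returns the mode the
            per-line state data should record after servicing the transaction."""
            if current_mode == "non_inclusive" and txn_type in SHARING_TXN_TYPES:
                return "inclusive"
            return current_mode

        assert next_line_mode("non_inclusive", "READ_SHARED") == "inclusive"
        assert next_line_mode("non_inclusive", "WRITE_UNIQUE") == "non_inclusive"
        assert next_line_mode("inclusive", "WRITE_UNIQUE") == "inclusive"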

Memory controller and method of selecting a transaction using a plurality of ordered lists
    8.
    Granted invention patent (in force)

    Publication No.: US08775754B2

    Publication Date: 2014-07-08

    Application No.: US13067775

    Filing Date: 2011-06-24

    IPC Class: G06F12/00

    CPC Class: G06F13/1631

    Abstract: A memory controller controls access to a memory device of the type having a non-uniform access timing characteristic. An interface receives transactions issued from at least one transaction source, and a buffer temporarily stores, as pending transactions, those transactions received by the interface that have not yet been issued to the memory device. The buffer maintains a plurality of ordered lists, each having a number of entries, for the stored pending transactions, including at least one priority based ordered list and at least one access timing ordered list. Each entry is associated with one of the pending transactions, and entries of a priority based ordered list are ordered according to the priority indication of the associated pending transaction. Arbitration circuitry performs an arbitration operation during which the plurality of ordered lists are referenced so as to select a winning transaction to be issued to the memory device.
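
    A rough behavioural sketch of the buffer and arbitration described in the abstract follows: pending transactions appear on both a priority-ordered list and an access-timing-ordered list (approximated here by whether a transaction hits the currently open memory row), and the arbiter consults both lists to pick a winner. The combining rule and all names are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Txn:
            txn_id: int
            priority: int
            row: int                 # memory row targeted; hitting the open row is cheapest

        class PendingBuffer:
            def __init__(self):
                self.txns = []

            def add(self, txn):
                self.txns.append(txn)

            def priority_order(self):
                return sorted(self.txns, key=lambda t: -t.priority)

            def timing_order(self, open_row):
                # Transactions that hit the currently open row come first,
                # since they avoid a precharge/activate penalty.
                return sorted(self.txns, key=lambda t: t.row != open_row)

        def arbitrate(buffer, open_row):
            """Reference both ordered lists and pick the winning transaction."""
            if not buffer.txns:
                return None
            prio_rank = {t.txn_id: i for i, t in enumerate(buffer.priority_order())}
            time_rank = {t.txn_id: i for i, t in enumerate(buffer.timing_order(open_row))}
            winner = min(buffer.txns, key=lambda t: prio_rank[t.txn_id] + time_rank[t.txn_id])
            buffer.txns.remove(winner)
            return winner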

Communication infrastructure for a data processing apparatus and a method of operation of such a communication infrastructure
    9.
    Granted invention patent (in force)

    Publication No.: US08285912B2

    Publication Date: 2012-10-09

    Application No.: US12461345

    Filing Date: 2009-08-07

    IPC Class: G06F13/00

    CPC Class: G06F13/4022 G06F2213/0038

    Abstract: A communication infrastructure for a data processing apparatus, and a method of operation of such a communication infrastructure, are provided. The communication infrastructure provides first and second switching circuits interconnected via a bidirectional link. Both of the switching circuits employ a multi-channel communication protocol, such that for each transaction a communication path is established from an initiating master interface to a target slave interface, with that communication path comprising m channels. The m channels comprise one or more forward channels from the initiating master interface to the target slave interface and one or more reverse channels from the target slave interface to the initiating master interface, and handshaking signals are associated with each of the m channels. The bidirectional link comprises n connection lines, where n is less than m; it supports a first communication path from the first switching circuit to the second switching circuit and a second communication path in the opposite direction, from the second switching circuit to the first switching circuit. Control circuitry is used to multiplex at least one forward channel of the first communication path and at least one reverse channel of the second communication path, with the multiplexing being performed in dependence on the handshaking signals associated with the channels to be multiplexed. This allows the 2m channels formed by the first and second communication paths to be provided by the n connection lines of the bidirectional link.
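
    The multiplexing step in the abstract, where a forward channel of one communication path and a reverse channel of the opposite path share a connection line under control of their handshaking signals, might be sketched per cycle as follows. The valid/ready signal names and the preference rule are illustrative assumptions.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Channel:
            name: str
            valid: bool = False           # sender presents a transfer on this channel
            ready: bool = False           # receiver can accept it this cycle
            payload: Optional[int] = None

        def drive_shared_line(forward: Channel, reverse: Channel, prefer_forward: bool):
            """Choose, for one cycle, which channel's transfer occupies the shared
            connection line; a channel only wins when its handshake completes."""
            fwd_fires = forward.valid and forward.ready
            rev_fires = reverse.valid and reverse.ready
            if fwd_fires and (prefer_forward or not rev_fires):
                return (forward.name, forward.payload)
            if rev_fires:
                return (reverse.name, reverse.payload)
            return None                   # the line idles this cycle

        fwd = Channel("path1_forward", valid=True, ready=True, payload=0xA5)
        rev = Channel("path2_reverse", valid=True, ready=False, payload=0x5A)
        print(drive_shared_line(fwd, rev, prefer_forward=True))   # ('path1_forward', 165)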

Synchronising between clock domains
    10.
    Granted invention patent (in force)

    Publication No.: US08301932B2

    Publication Date: 2012-10-30

    Application No.: US12591315

    Filing Date: 2009-11-16

    IPC Class: G06F1/12

    Abstract: An integrated circuit 2 is provided with multiple clock domains separated by a clock boundary 8. Data values are passed across the clock boundary 8 using a first-in-first-out memory (FIFO); a read pointer and a write pointer for the FIFO are passed across the clock boundary 8 and must be synchronized to the receiving clock frequency. The clocks being used on either side of the clock boundary 8 may be switched and have a variable relationship therebetween. Multiple synchronization paths are provided within pointer synchronizing circuitry 32 and are used depending upon the particular relationship between the clocks on either side of the clock boundary 8. When a switch in clock mode is made which requires an increase in synchronization delay, a pre-switch pointer value is held in a transition register 44 until a post-switch pointer value is available from the new synchronizing path 36.
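
    As a rough model of the mechanism in the abstract, the Python sketch below passes a pointer value through two synchronising paths of different depth and, on a switch to the deeper path, presents the pre-switch value from a transition register until the deeper path has filled. The structure and names are illustrative assumptions, not the patented circuit.

        from collections import deque

        class PointerSynchroniser:
            def __init__(self):
                self.fast_path = deque([0], maxlen=1)      # shallow path: closely related clocks
                self.slow_path = deque([0, 0], maxlen=2)   # deeper path: asynchronous clocks
                self.use_slow_path = False
                self.transition_reg = 0                    # holds the pre-switch pointer value
                self.settle_cycles = 0                     # cycles until the deeper path is valid

            def switch_to_slow_path(self, current_pointer):
                # The new clock mode needs more synchronisation delay, so keep
                # presenting the last known-good pointer until the path fills.
                self.transition_reg = current_pointer
                self.settle_cycles = self.slow_path.maxlen
                self.use_slow_path = True

            def clock(self, write_pointer):
                """Advance one receiving-domain clock cycle; returns the pointer value
                the receiving domain may safely use."""
                fast_out = self.fast_path[-1]              # delayed by one stage
                slow_out = self.slow_path[-1]              # delayed by two stages
                self.fast_path.appendleft(write_pointer)
                self.slow_path.appendleft(write_pointer)
                if self.use_slow_path:
                    if self.settle_cycles > 0:
                        self.settle_cycles -= 1
                        return self.transition_reg         # pre-switch value until path is valid
                    return slow_out
                return fast_out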
