-
Publication No.: US20070005938A1
Publication Date: 2007-01-04
Application No.: US11170083
Filing Date: 2005-06-30
IPC Classification: G06F9/30
CPC Classification: G06F9/3844, G06F9/3804
Abstract: A data processing apparatus, comprising: a processor for executing instructions; a prefetch unit for prefetching instructions from a memory prior to sending those instructions to said processor for execution; branch prediction logic; and a branch target cache for storing predetermined information about branch operations executed by said processor, said predetermined information including identification of an instruction specifying a branch operation, a target address for said branch operation, and a prediction as to whether said branch is taken or not; wherein said prefetch unit is operable, prior to fetching an instruction from said memory, to access said branch target cache, to determine if there is predetermined information corresponding to said instruction stored within said branch target cache and, if there is, to retrieve said predetermined information; said branch prediction logic being operable, in response to said retrieved predetermined information, to predict whether said instruction specifies a branch operation that will be taken and will cause a change in instruction flow, and if so to indicate to said prefetch unit a target address within said memory from which a following instruction should be fetched; wherein said access to said branch target cache is initiated at least one clock cycle before initiating fetching of said instruction from said memory.
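As a purely illustrative aid (not the patented design), the C sketch below models a branch target cache that is consulted ahead of the instruction fetch and, on a predicted-taken hit, redirects the next fetch address. The table size, direct-mapped indexing, 4-byte instruction step and all names are assumptions made for this sketch.

```c
/* Illustrative C model of a branch target cache (BTC) consulted ahead of
 * the instruction fetch. Table size, direct-mapped indexing and the 4-byte
 * instruction step are assumptions made for this sketch only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BTC_ENTRIES 64                    /* assumed size, power of two */

typedef struct {
    bool     valid;
    uint32_t branch_pc;                   /* identifies the branch instruction   */
    uint32_t target;                      /* target address of the branch        */
    bool     predict_taken;               /* stored taken / not-taken prediction */
} btc_entry_t;

static btc_entry_t btc[BTC_ENTRIES];

static btc_entry_t *btc_lookup(uint32_t pc)
{
    btc_entry_t *e = &btc[(pc >> 2) & (BTC_ENTRIES - 1)];
    return (e->valid && e->branch_pc == pc) ? e : NULL;
}

/* Called by the prefetch unit at least one cycle before the fetch of `pc`:
 * returns the address from which the following instruction should be fetched. */
static uint32_t next_fetch_address(uint32_t pc)
{
    btc_entry_t *e = btc_lookup(pc);      /* BTC access precedes the fetch  */
    if (e && e->predict_taken)
        return e->target;                 /* predicted taken: redirect flow */
    return pc + 4;                        /* otherwise fetch sequentially   */
}

int main(void)
{
    /* Pretend the branch at 0x1000 was executed before and predicted taken. */
    btc[(0x1000u >> 2) & (BTC_ENTRIES - 1)] =
        (btc_entry_t){ .valid = true, .branch_pc = 0x1000,
                       .target = 0x2000, .predict_taken = true };

    printf("fetch after 0x0ffc -> 0x%x\n", (unsigned)next_fetch_address(0x0ffc));
    printf("fetch after 0x1000 -> 0x%x\n", (unsigned)next_fetch_address(0x1000));
    return 0;
}
```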
-
Publication No.: US20050005073A1
Publication Date: 2005-01-06
Application No.: US10812050
Filing Date: 2004-03-30
IPC Classification: G06F1/32, G06F12/08, G06F15/177, G06F12/00
CPC Classification: G06F1/3237, G06F1/3203, G06F1/3287, G06F12/0831, G06F2212/1028, Y02D10/128, Y02D10/13, Y02D10/171
Abstract: Within a multi-processing system including a plurality of processor cores 4, 6 operating in accordance with coherent multi-processing, each of the cores includes a cache memory 10, 12 storing local copies of data values from a coherent memory region. The respective processor cores may be placed into a power-saving mode in which they are non-operative whilst the cache memory remains responsive to coherency management requests, such that the system as a whole can continue to operate and manage coherency.
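As an illustrative software model only (the abstract describes hardware, and the single-line cache and simplified line states below are assumptions), the C sketch shows a cache whose snoop handler keeps servicing coherency requests while the attached core is flagged as sleeping.

```c
/* Illustrative C sketch: the coherency (snoop) logic of a core's cache keeps
 * servicing requests while the core itself is in a power-saving state. The
 * single-line "cache" and the simplified line states are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { LINE_INVALID, LINE_SHARED, LINE_MODIFIED } line_state_t;

typedef struct {
    uint32_t     tag;
    line_state_t state;
    uint32_t     data;
} cache_line_t;

typedef struct {
    bool         core_sleeping;   /* core is non-operative (clock gated)   */
    cache_line_t line;            /* a single line stands in for the cache */
} cpu_t;

/* Snoop handler: belongs to the cache's coherency logic, which remains
 * powered when the attached core sleeps. Supplies data if this cache holds
 * a copy, and downgrades the line so coherency is maintained. */
static bool snoop_read(cpu_t *cpu, uint32_t addr, uint32_t *data_out)
{
    /* Deliberately no check of cpu->core_sleeping: coherency management
     * continues while the core saves power. */
    if (cpu->line.state != LINE_INVALID && cpu->line.tag == addr) {
        *data_out = cpu->line.data;
        cpu->line.state = LINE_SHARED;    /* downgrade after servicing snoop */
        return true;
    }
    return false;
}

int main(void)
{
    cpu_t cpu = { .core_sleeping = true,
                  .line = { .tag = 0x80, .state = LINE_MODIFIED, .data = 42 } };
    uint32_t d;

    if (snoop_read(&cpu, 0x80, &d))
        printf("snoop serviced while core asleep, data=%u\n", (unsigned)d);
    return 0;
}
```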
-
Publication No.: US20070101064A1
Publication Date: 2007-05-03
Application No.: US11264374
Filing Date: 2005-11-02
Applicants: Frederic Piry, Philippe Raphalen, Gilles Grandou
Inventors: Frederic Piry, Philippe Raphalen, Gilles Grandou
IPC Classification: G06F12/00
CPC Classification: G06F12/127, Y02D10/13
Abstract: There is disclosed a method, a cache controller and a data processing apparatus for allocating a data value to a cache way. The method comprises the steps of: (i) receiving a request to allocate the data value to an ‘n’-way set associative cache in which the data value may be allocated to a corresponding cache line of any one of the ‘n’ ways, where ‘n’ is an integer greater than 1; (ii) reviewing attribute information indicating whether the corresponding cache line of any of the ‘n’ ways is clean; and (iii) utilising the attribute information when executing a way allocation algorithm to provide an increased probability that the data value is allocated to a clean corresponding cache line. By allocating the data value to the corresponding clean cache line, there is no need to evict any data values prior to the allocation occurring; this obviates the need to power the eviction infrastructure and reduces eviction traffic over any interconnect. It will be appreciated that this can significantly reduce power consumption and improve the performance of the system.
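A minimal sketch of the idea follows, assuming a simple valid/dirty flag per way and a round-robin fallback (neither of which is taken from the patent): the victim choice is biased towards invalid or clean ways so that no write-back is needed.

```c
/* Minimal sketch of a way-allocation policy biased towards clean lines.
 * The valid/dirty flags and the round-robin fallback are assumptions, not
 * the patent's attribute information or allocation algorithm. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_WAYS 4

typedef struct {
    bool valid;
    bool dirty;    /* a clean line can be reallocated without any eviction */
} way_info_t;

/* Choose the way to allocate within one set: prefer an invalid way, then a
 * clean way (no write-back needed); fall back to round-robin over dirty
 * victims only when every way in the set is dirty. */
static int choose_way(const way_info_t set[NUM_WAYS], int *round_robin)
{
    for (int w = 0; w < NUM_WAYS; w++)
        if (!set[w].valid)
            return w;                       /* unused way: best case        */
    for (int w = 0; w < NUM_WAYS; w++)
        if (!set[w].dirty)
            return w;                       /* clean line: no eviction cost */
    *round_robin = (*round_robin + 1) % NUM_WAYS;
    return *round_robin;                    /* all dirty: accept an eviction */
}

int main(void)
{
    way_info_t set[NUM_WAYS] = {
        { true, true }, { true, false }, { true, true }, { true, true }
    };
    int rr = 0;
    printf("allocated way %d\n", choose_way(set, &rr));   /* picks way 1 (clean) */
    return 0;
}
```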
-
Publication No.: US20070233962A1
Publication Date: 2007-10-04
Application No.: US11391689
Filing Date: 2006-03-29
IPC Classification: G06F12/00
CPC Classification: G06F12/0855, G06F9/3824, G06F9/3834, Y02D10/13
Abstract: A store buffer, a method and a data processing apparatus are disclosed. The store buffer comprises: reception logic operable to receive a request to write a data value to an address in memory; buffer logic having a plurality of entries, each entry being selectively operable to store request information indicative of a previous request and to maintain associated cache information indicating whether a cache line in a cache is currently allocated for writing data values to an address associated with that request; and entry selection logic operable to determine which one of the plurality of entries to allocate to store the request, using the request information and the associated cache information of the plurality of entries to determine whether a cache line in the cache is currently allocated for writing the data value to the address in memory. By reviewing the entries in the buffer logic and identifying which entry should store the request based on information currently stored by the buffer logic, the need to obtain cache information indicating whether any cache line in a cache is currently allocated for writing the data value may be obviated. In turn, the need to perform a cache lookup to obtain the cache information may also be obviated. It will be appreciated that by obviating the need to perform a cache lookup, the power consumption of the store buffer may be reduced. Also, the amount of cache bandwidth consumed by performing unnecessary cache lookups may be reduced, thereby significantly improving the performance of the cache.
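The following C sketch is a hedged illustration of the entry-selection idea, not the patented design: before performing a cache lookup for a new store, the buffer is scanned for an existing entry covering the same cache line, and that entry's cache information is reused. Buffer size, line size and the allocation policy are assumptions.

```c
/* Hedged C sketch of the entry-selection idea: before a cache lookup for a
 * new store, scan the buffer for an entry that already covers the same cache
 * line and reuse its cache information. Buffer size, line size and the
 * allocation policy are assumptions made for this sketch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SB_ENTRIES 8
#define LINE_SHIFT 5                      /* assume 32-byte cache lines */

typedef struct {
    bool     in_use;
    uint32_t line_addr;                   /* byte address >> LINE_SHIFT        */
    bool     line_in_cache;               /* buffered knowledge about the line */
    uint32_t data;
} sb_entry_t;

static sb_entry_t sb[SB_ENTRIES];

/* Stand-in for the expensive path the scheme tries to avoid. */
static bool cache_lookup(uint32_t line_addr)
{
    printf("cache lookup for line 0x%x (power and bandwidth spent)\n",
           (unsigned)line_addr);
    return false;
}

static void store(uint32_t addr, uint32_t data)
{
    uint32_t line = addr >> LINE_SHIFT;
    bool known = false, found = false;

    /* Entry selection: prefer information already held by the buffer logic. */
    for (int i = 0; i < SB_ENTRIES; i++)
        if (sb[i].in_use && sb[i].line_addr == line) {
            known = sb[i].line_in_cache;  /* reuse buffered cache information */
            found = true;
            break;
        }

    if (!found)
        known = cache_lookup(line);       /* no entry knows this line: look up */

    for (int i = 0; i < SB_ENTRIES; i++)
        if (!sb[i].in_use) {
            sb[i] = (sb_entry_t){ true, line, known, data };
            return;
        }
}

int main(void)
{
    store(0x1000, 1);   /* first store to the line: one cache lookup         */
    store(0x1004, 2);   /* same line: reuses buffered info, no lookup needed */
    return 0;
}
```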
-
Publication No.: US20060265551A1
Publication Date: 2006-11-23
Application No.: US11134513
Filing Date: 2005-05-23
Applicants: Gilles Grandou, Philippe Raphalen
Inventors: Gilles Grandou, Philippe Raphalen
IPC Classification: G06F12/00
CPC Classification: G06F12/0895, G06F12/128, Y02D10/13
Abstract: The present invention provides a data processing apparatus and method for handling cache accesses. The data processing apparatus comprises a processing unit operable to issue a series of access requests, each access request having associated therewith an address of a data value to be accessed. Further, the data processing apparatus has an n-way set associative cache memory operable to store data values for access by the processing unit, each way of the cache memory comprising a plurality of cache lines, and each cache line being operable to store a plurality of data values. The cache memory further comprises, for each way, a TAG storage for storing, for each cache line of that way, a corresponding TAG value. The cache memory is operable, when the processing unit is issuing access requests specifying data values held sequentially in a cache line of a current way of the cache memory, to perform a speculative lookup in at least one TAG storage to determine whether the TAG value associated with the next cache line in one way associated with the at least one TAG storage equals an expected TAG value. If that TAG value does equal the expected TAG value, and, following an access request identifying a last data value in the cache line of the current way, a further access request is issued identifying the next cache line, then the cache memory is operable, without further reference to any TAG storage of the cache memory, to access from that next cache line of the one way the data value that is the subject of the further access request. This provides significant power savings when handling accesses to a cache memory.
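As a simplified, purely illustrative model (the geometry, word-granular addresses and single-way speculation are assumptions), the C sketch below checks the next cache line's TAG in the current way while a sequential stream is still inside the current line, so that the access crossing the line boundary can skip a full TAG lookup.

```c
/* Simplified C model: while sequential accesses stream through a cache line
 * held in a known way, the TAG of the next line in that same way is checked
 * speculatively; if it matches the expected TAG, the access that crosses the
 * line boundary needs no further TAG lookup. Geometry and the single-way
 * speculation are assumptions of this sketch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_LINE 8
#define LINES_PER_WAY  64
#define NUM_WAYS       4

static uint32_t tag_ram[NUM_WAYS][LINES_PER_WAY];    /* per-way TAG storage */

typedef struct {
    bool streaming;            /* sequential accesses in progress      */
    int  way;                  /* way holding the current cache line   */
    bool next_line_hits;       /* result of the speculative TAG lookup */
} seq_state_t;

static uint32_t tag_of(uint32_t word_addr)   { return word_addr / (LINES_PER_WAY * WORDS_PER_LINE); }
static uint32_t index_of(uint32_t word_addr) { return (word_addr / WORDS_PER_LINE) % LINES_PER_WAY; }

/* Speculative lookup performed while the current line is still being read:
 * compare the stored TAG of the next line, in the current way only, with the
 * TAG the sequential stream is expected to have. */
static void speculate_next_line(seq_state_t *s, uint32_t word_addr)
{
    uint32_t next = word_addr + WORDS_PER_LINE;
    s->next_line_hits = (tag_ram[s->way][index_of(next)] == tag_of(next));
}

/* Access that crosses into the next line: if speculation succeeded, no TAG
 * storage is consulted again, saving the power of a full n-way TAG compare. */
static void access_next_line(const seq_state_t *s)
{
    if (s->streaming && s->next_line_hits)
        printf("hit in way %d without a further TAG lookup\n", s->way);
    else
        printf("full TAG lookup required\n");
}

int main(void)
{
    seq_state_t s = { .streaming = true, .way = 2, .next_line_hits = false };
    uint32_t addr = 0x2000;               /* word address in the current line */

    /* Simulate the next line being resident in way 2 with the expected TAG. */
    tag_ram[2][index_of(addr + WORDS_PER_LINE)] = tag_of(addr + WORDS_PER_LINE);

    speculate_next_line(&s, addr);
    access_next_line(&s);
    return 0;
}
```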