Data Reorganization through Hardware-Supported Intermediate Addresses
    82.
    Invention Application
    Data Reorganization through Hardware-Supported Intermediate Addresses (Pending, Published)

    Publication No.: US20110238946A1

    Publication Date: 2011-09-29

    Application No.: US12730285

    Filing Date: 2010-03-24

    IPC Class: G06F12/10

    Abstract: A virtual address scheme for improving performance and efficiency of memory accesses of sparsely-stored data items in a cached memory system is disclosed. In a preferred embodiment of the present invention, a special address translation unit is used to translate sets of non-contiguous addresses in real memory into contiguous blocks of addresses in an “intermediate address space.” This intermediate address space is a fictitious or “virtual” address space, but is distinguishable from the virtual address space visible to application programs; in user-level memory operations, effective addresses seen and manipulated by application programs are translated into intermediate addresses by an additional address translation unit for memory caching purposes. This scheme allows non-contiguous data items in memory to be assembled into contiguous cache lines for more efficient caching/access (due to the perceived spatial proximity of the data from the perspective of the processor).

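    The abstract describes a two-stage translation in which scattered real-memory locations are presented to the cache as a single contiguous intermediate line. The Python sketch below models that idea in software only, as a minimal illustration; the class name, the 128-byte line size, and the per-line gather table are assumptions, not the patented hardware design.

```python
# Minimal software model of the intermediate-address idea: effective addresses
# map into a contiguous intermediate range, and a per-line gather table maps
# each slot of an intermediate cache line back to a scattered real address.
CACHE_LINE = 128  # bytes per intermediate cache line (assumed)

class IntermediateAddressSpace:
    def __init__(self):
        # intermediate line index -> list of real addresses, one per element slot
        self.line_map = {}

    def map_gather(self, inter_line, real_addresses):
        """Bind one contiguous intermediate line to scattered real addresses."""
        self.line_map[inter_line] = list(real_addresses)

    def to_real(self, inter_addr, element_size):
        """Translate an intermediate address back to its real-memory address."""
        line = inter_addr // CACHE_LINE
        slot = (inter_addr % CACHE_LINE) // element_size
        return self.line_map[line][slot]

# Eight 16-byte records scattered through real memory appear contiguous
# (and therefore cacheable as a single line) at intermediate line 0.
ias = IntermediateAddressSpace()
ias.map_gather(0, [0x1000, 0x5400, 0x9A30, 0x20010,
                   0x33000, 0x41230, 0x50000, 0x61000])
for i in range(8):
    inter = i * 16                      # contiguous from the processor's view
    print(hex(inter), "->", hex(ias.to_real(inter, 16)))
```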

    System and method for reducing unnecessary cache operations
    84.
    Granted Patent
    System and method for reducing unnecessary cache operations (Expired)

    Publication No.: US07698508B2

    Publication Date: 2010-04-13

    Application No.: US11674960

    Filing Date: 2007-02-14

    IPC Class: G06F12/00

    Abstract: A system and method for cache management in a data processing system. The data processing system includes a processor and a memory hierarchy. The memory hierarchy includes at least an upper memory cache, at least a lower memory cache, and a write-back data structure. In response to replacing data in the upper memory cache, the upper memory cache examines the write-back data structure to determine whether or not the data is present in the lower memory cache. If the data is present in the lower memory cache, the data is replaced in the upper memory cache without casting out the data to the lower memory cache.

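    As a rough illustration of the decision described above, the sketch below models an upper-level cache that consults a write-back data structure before casting a victim line out to the lower-level cache. The class and field names are hypothetical, and dirty-data handling is omitted.

```python
# Sketch of the victim-handling decision: consult a write-back data structure
# recording which lines the lower-level cache already holds; skip the castout
# when the line is already there.
class UpperCache:
    def __init__(self, lower_cache, writeback_directory):
        self.lower = lower_cache                 # lower-level cache: address -> data
        self.wb_directory = writeback_directory  # set of addresses present below

    def evict(self, address, data):
        if address in self.wb_directory:
            # Lower cache already holds the line: drop it locally, no castout traffic.
            return "replaced silently, no castout"
        # Otherwise perform the normal castout to the lower-level cache.
        self.lower[address] = data
        self.wb_directory.add(address)
        return "cast out to the lower-level cache"

lower = {0x80: b"line already below"}
upper = UpperCache(lower, writeback_directory={0x80})
print(upper.evict(0x80, b"line already below"))  # replaced silently, no castout
print(upper.evict(0xC0, b"new victim line"))     # cast out to the lower-level cache
```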

    Estimating bandwidth of client-ISP link
    85.
    Granted Patent
    Estimating bandwidth of client-ISP link (In Force)

    Publication No.: US07475129B2

    Publication Date: 2009-01-06

    Application No.: US10734771

    Filing Date: 2003-12-12

    IPC Class: G06F15/16 G06F15/173

    Abstract: A method, program, and server for estimating the bandwidth of a network connection between a client and a server include requesting the server to serve first and second objects back-to-back to the client. The first and second objects are sent to the client. The client determines the time interval between delivery of the first and second objects. The time interval is used, in conjunction with information about the size of the second object, to estimate the bandwidth. The requests for the first and second objects preferably identify the first and second objects with URLs that are unique on the network to prevent the requests from being serviced by a file cache. The first and second objects may be transmitted to the client from a content distribution network server that is architecturally close to the client's ISP to improve the reliability of the bandwidth estimation.

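    The underlying calculation is simple: the bandwidth estimate is roughly the size of the second object divided by the interval between the two deliveries. The sketch below is a simplified client-side approximation that issues two sequential HTTP requests rather than a single back-to-back transfer; the probe URLs and object sizes are hypothetical, and the unique query string stands in for the unique URLs the abstract uses to defeat file caches.

```python
# Sketch of the estimation step: request two uniquely named objects in a row,
# time the gap between their completed transfers, and divide the second
# object's size by that gap. Probe URLs and sizes are hypothetical.
import time
import uuid
import urllib.request

def estimate_bandwidth(base_url, second_object_bytes):
    token = uuid.uuid4().hex  # unique query string so no cache serves the probes
    url_first = f"{base_url}/probe-small?nocache={token}-1"
    url_second = f"{base_url}/probe-large?nocache={token}-2"

    urllib.request.urlopen(url_first).read()     # first object delivered
    t_first = time.monotonic()
    urllib.request.urlopen(url_second).read()    # second object delivered
    t_second = time.monotonic()

    interval = t_second - t_first                # seconds between deliveries
    return (second_object_bytes * 8) / interval  # estimated bits per second

# Example call against a hypothetical CDN node close to the client's ISP:
# print(estimate_bandwidth("http://cdn.example.com", second_object_bytes=1_000_000))
```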

    Method and System for Enhanced Scheduling of Memory Access Requests
    86.
    Invention Application
    Method and System for Enhanced Scheduling of Memory Access Requests (Pending, Published)

    Publication No.: US20080141258A1

    Publication Date: 2008-06-12

    Application No.: US12033341

    Filing Date: 2008-02-19

    IPC Class: G06F9/46

    Abstract: The foregoing objects are achieved as is now described. In information storage systems in which data retrieval requires movement of at least one physical element, a measurable amount of energy and time is required to reposition that physical element in response to each data write or read request. After one or more data requests have been selected for dispatch based solely on an approaching or past-due time deadline, additional requests are identified for data to be read from or written to locations in close proximity to the previously scheduled requests, obviating the need to expend the full amount of energy and time required to accelerate the physical element and then decelerate it to position it over the desired area within the information storage system. In this manner, data may be transferred to or retrieved from an information storage system more efficiently, with less expenditure of energy and time.

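    A toy version of the selection policy outlined above: requests are first chosen purely by deadline, and then any other queued requests whose target locations sit close to an already-chosen request are added to the same dispatch batch. The urgency window, proximity threshold, and field names are illustrative assumptions.

```python
# Toy scheduler: deadline-driven picks first, then proximity-based piggybacking.
from dataclasses import dataclass

@dataclass
class Request:
    req_id: int
    location: int    # e.g., logical block address on the device
    deadline: float  # seconds until the request becomes past due

def schedule(queue, urgency_window=0.05, proximity=64):
    # 1. Select requests whose deadlines are approaching or already past due.
    urgent = [r for r in queue if r.deadline <= urgency_window]
    batch = list(urgent)

    # 2. Add requests targeting locations close to an already-scheduled request,
    #    so the physical element need not be repositioned again for them.
    anchors = [r.location for r in urgent]
    for r in queue:
        if r not in batch and any(abs(r.location - a) <= proximity for a in anchors):
            batch.append(r)
    return batch

queue = [Request(1, 1000, 0.01), Request(2, 1040, 0.90),
         Request(3, 9000, 0.80), Request(4, 980, 0.70)]
for r in schedule(queue):
    print(r)   # request 1 is urgent; 2 and 4 ride along because they are nearby
```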

    Chained cache coherency states for sequential homogeneous access to a cache line with outstanding data response
    87.
    Granted Patent
    Chained cache coherency states for sequential homogeneous access to a cache line with outstanding data response (Expired)

    Publication No.: US07370155B2

    Publication Date: 2008-05-06

    Application No.: US11245313

    Filing Date: 2005-10-06

    IPC Class: G06F12/00

    CPC Class: G06F12/0831 G06F12/0822

    Abstract: A method and data processing system for sequentially coupling successive, homogeneous processor requests for a cache line in a chain before the data is received in the cache of the first processor within the chain. Chained intermediate coherency states are assigned to track the chain of processor requests and the access permission subsequently provided, prior to receipt of the data at the first processor starting the chain. The chained intermediate coherency state assigned identifies the processor operation, and a directional identifier identifies the processor to which the cache line is to be forwarded. When the data is received at the cache of the first processor within the chain, the first processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chain is immediately stopped when a non-homogeneous operation is snooped by the last-in-chain processor.

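    The sketch below is a deliberately simplified model of the chaining behavior described above: homogeneous requests snooped before the data arrives are linked in order, a non-homogeneous snooped operation closes the chain, and the line is handed from processor to processor once it reaches the first requester. It is not the patented coherency protocol; the class, state fields, and operation names are hypothetical.

```python
# Simplified model: homogeneous requests snooped before the data arrives are
# chained in order; a non-homogeneous snoop closes the chain; on data arrival
# the line is handed processor to processor down the recorded chain.
class ChainedLine:
    def __init__(self, address, operation):
        self.address = address
        self.operation = operation   # e.g., "read"
        self.chain = []              # requesters in snoop order
        self.open = True             # chain still accepts homogeneous requests

    def snoop(self, processor, operation):
        """A request from another processor is observed for this line."""
        if self.open and operation == self.operation:
            self.chain.append(processor)   # last-in-chain will forward to it
        else:
            self.open = False              # non-homogeneous operation: stop chaining

    def data_arrived(self, first_processor):
        """Data reaches the first processor; pass it along the recorded chain."""
        holder = first_processor
        for nxt in self.chain:
            print(f"{holder} completes its {self.operation}, forwards the line to {nxt}")
            holder = nxt
        print(f"{holder} ends up holding line {hex(self.address)}")

line = ChainedLine(0x2000, "read")
line.snoop("P1", "read")
line.snoop("P2", "read")
line.snoop("P3", "rwitm")   # non-homogeneous: chain stops growing here
line.data_arrived("P0")
```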

    Just-In-Time Prefetching
    88.
    Invention Application
    Just-In-Time Prefetching (Expired)

    Publication No.: US20070283101A1

    Publication Date: 2007-12-06

    Application No.: US11422459

    Filing Date: 2006-06-06

    IPC Class: G06F12/00

    CPC Class: G06F12/0862

    Abstract: A method and an apparatus for performing just-in-time data prefetching within a data processing system comprising a processor, a cache or prefetch buffer, and at least one memory storage device. The apparatus comprises a prefetch engine having means for issuing a data prefetch request for prefetching a data cache line from the memory storage device for utilization by the processor. The apparatus further comprises logic/utility for dynamically adjusting the prefetch distance between issuance by the prefetch engine of the data prefetch request and issuance by the processor of a demand (load request) targeting the data/cache line being returned by the data prefetch request, so that the next data prefetch request for a subsequent cache line completes the return of its data at effectively the same time that the processor issues the demand for that subsequent data/cache line.

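    A minimal sketch of the feedback loop implied by the abstract: the prefetch distance grows when prefetched data arrives after the demand load and shrinks when it arrives well before it, so that returns line up with demands. The step size, bounds, and class name are illustrative assumptions.

```python
# Sketch: adjust the prefetch distance so data returns just as it is demanded.
class JustInTimePrefetcher:
    def __init__(self, distance=4, min_distance=1, max_distance=32):
        self.distance = distance          # cache lines ahead of the demand stream
        self.min_distance = min_distance
        self.max_distance = max_distance

    def on_demand(self, prefetch_outstanding):
        """Called when the demand load for a previously prefetched line issues.

        prefetch_outstanding: True if the prefetched data has not returned yet
        (prefetch was late), False if it has been waiting in the buffer (early).
        """
        if prefetch_outstanding:
            self.distance = min(self.distance + 1, self.max_distance)
        else:
            self.distance = max(self.distance - 1, self.min_distance)

    def next_prefetch(self, demand_line):
        """Cache-line index to prefetch for the current demand line."""
        return demand_line + self.distance

pf = JustInTimePrefetcher()
for demand_line, late in enumerate([True, True, False, True, False, False]):
    pf.on_demand(prefetch_outstanding=late)
    print(f"demand line {demand_line}: prefetch line {pf.next_prefetch(demand_line)}"
          f" (distance {pf.distance})")
```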

    Policy-Based Management in a Computer Environment
    89.
    Invention Application
    Policy-Based Management in a Computer Environment (Pending, Published)

    Publication No.: US20070282982A1

    Publication Date: 2007-12-06

    Application No.: US11422127

    Filing Date: 2006-06-05

    IPC Class: G06F15/173

    CPC Class: H04L41/0893

    Abstract: A system for policy-based management in a computer environment, the system including at least one rule configured to be applied to an element of a computer environment, at least one policy including at least one of the rules, at least one profile including at least one element of the computer environment, at least one association defining a relationship between one of the policies and one of the profiles, and a computer configured to instantiate any of the associations, thereby invoking any of the rules included in the related policy for application to any of the elements in the related profile.

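    The abstract amounts to a small data model: rules grouped into policies, managed elements grouped into profiles, and associations that, when instantiated, apply every rule of the policy to every element of the profile. The sketch below is one plausible rendering of that model; all class and method names are hypothetical.

```python
# Sketch of the rule/policy/profile/association data model described above.
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    apply: callable          # takes a managed element and enforces the rule on it

@dataclass
class Policy:
    name: str
    rules: list = field(default_factory=list)

@dataclass
class Profile:
    name: str
    elements: list = field(default_factory=list)

@dataclass
class Association:
    policy: Policy
    profile: Profile

    def instantiate(self):
        """Invoke every rule in the policy on every element in the profile."""
        for element in self.profile.elements:
            for rule in self.policy.rules:
                rule.apply(element)

# Example: cap CPU shares on every server in a hypothetical "web-tier" profile.
cap_cpu = Rule("cap-cpu", lambda elem: print(f"capping CPU on {elem}"))
policy = Policy("resource-limits", [cap_cpu])
profile = Profile("web-tier", ["server-01", "server-02"])
Association(policy, profile).instantiate()
```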

    System and method of managing cache hierarchies with adaptive mechanisms
    90.
    Granted Patent
    System and method of managing cache hierarchies with adaptive mechanisms (Expired)

    Publication No.: US07281092B2

    Publication Date: 2007-10-09

    Application No.: US11143328

    Filing Date: 2005-06-02

    IPC Class: G06F12/00

    Abstract: A system and method of managing cache hierarchies with adaptive mechanisms. A preferred embodiment of the present invention includes, in response to selecting a data block for eviction from a memory cache (the source cache) out of a collection of memory caches, examining a data structure to determine whether an entry exists indicating that the data block has been evicted from the source memory cache, or another peer cache, to a slower cache or memory and subsequently retrieved from the slower cache or memory into the source memory cache or another peer cache. Also, a preferred embodiment of the present invention includes, in response to determining that the entry exists in the data structure, selecting a peer memory cache out of the collection of memory caches at the same level in the hierarchy to receive the data block from the source memory cache upon eviction.

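    As a rough illustration of the adaptive mechanism described above, the sketch below sends a victim block to a same-level peer cache when a history structure shows the block was previously evicted and then re-fetched, and otherwise lets it fall through to slower storage. The structures, the random peer choice, and all names are illustrative assumptions.

```python
# Sketch: victims that have bounced back before are cast out to a peer cache.
import random

class AdaptiveEvictor:
    def __init__(self, peer_caches):
        self.peers = peer_caches     # same-level caches, name -> dict of lines
        self.bounce_history = set()  # blocks evicted earlier and later re-fetched

    def record_refetch(self, address):
        """Called when a block returns from slower memory after a prior eviction."""
        self.bounce_history.add(address)

    def evict(self, source_name, address, data):
        if address in self.bounce_history:
            # History says the block tends to come back: lateral castout to a peer.
            peer_name = random.choice(list(self.peers))
            self.peers[peer_name][address] = data
            return f"{source_name}: moved {hex(address)} to peer {peer_name}"
        # Otherwise let the victim fall through to the slower cache or memory.
        return f"{source_name}: wrote {hex(address)} back to the lower level"

peers = {"L2-core1": {}, "L2-core2": {}}
evictor = AdaptiveEvictor(peers)
evictor.record_refetch(0x400)
print(evictor.evict("L2-core0", 0x400, b"hot block"))   # goes to a peer cache
print(evictor.evict("L2-core0", 0x800, b"cold block"))  # normal write-back
```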