Intelligent pre-fetching using compound operations
    3. Invention grant (in force)

    Publication number: US08370456B2

    Publication date: 2013-02-05

    Application number: US11534446

    Filing date: 2006-09-22

    IPC classification: G06F15/16

    Abstract: A system and method for pre-fetching data uses a combination of heuristics to determine likely next data retrieval operations and an evaluation of available resources for executing speculative data operations. When local resources, such as cache memory for storing speculative command results, are not available, the compound operation request may not be sent. When resources on a server-side system are insufficient, only the primary command of a compound operation request may be processed and speculative command requests may be rejected. Both local computing resources and network resources may be evaluated when determining whether to build or process a compound operations request.

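    The abstract describes the client-side and server-side decisions only in prose. The sketch below, in Python, is one possible reading of that logic; the heuristic table, function names, and resource flags are assumptions made for illustration and are not taken from the patent.

        # Illustrative sketch only: the abstract names no concrete APIs, so every
        # identifier here (LIKELY_NEXT, build_request, handle_request, execute) is a
        # hypothetical stand-in for the behaviour it describes.

        # Heuristic table: commands that are likely to follow a given primary command
        # and are therefore worth fetching speculatively.
        LIKELY_NEXT = {
            "open": ["query_info", "read"],
            "read": ["read"],            # sequential reads tend to continue
        }

        def build_request(primary_cmd, local_cache_has_room):
            """Client side: append speculative commands to the primary command only
            when local cache memory is available to hold their results."""
            if not local_cache_has_room:
                return [primary_cmd]                      # no room for speculative results
            return [primary_cmd] + LIKELY_NEXT.get(primary_cmd, [])

        def handle_request(commands, server_has_spare_resources):
            """Server side: always execute the primary command; execute the speculative
            commands only when spare resources exist, otherwise reject them."""
            results = [execute(commands[0])]
            for cmd in commands[1:]:
                results.append(execute(cmd) if server_has_spare_resources else "rejected")
            return results

        def execute(cmd):
            return "result-of-" + cmd                     # placeholder for real execution

        print(handle_request(build_request("open", local_cache_has_room=True),
                             server_has_spare_resources=False))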

File server pipelining with denial of service mitigation
    4. Invention grant (in force)

    Publication number: US07872975B2

    Publication date: 2011-01-18

    Application number: US11690962

    Filing date: 2007-03-26

    IPC classification: H04L12/56

    Abstract: A method of metering bandwidth allocation on a server using credits is disclosed. The method may receive a request for data from a client, respond to the request for data, and determine whether the request exceeds the client's current data allocation credit limit. Using the round trip time, the method may calculate a connection throughput for the client and may increase the client's current data allocation credit limit if the server has resources to spare, the client is actively using the current pipeline depth allowed, and network connection latency and bandwidth indicate that a deeper pipeline is necessary for saturation. The method may decrease the client's current data allocation credit limit if the server does not have resources to spare.

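    As a rough illustration of the credit rule described above, the Python sketch below grows a client's credit limit only when the server has spare resources, the client is using its full pipeline depth, and the bandwidth-delay product shows that a deeper pipeline is needed to saturate the link; it shrinks the limit when the server is short on resources. The field names, the increment of one credit per adjustment, and the example bandwidth figure are all assumptions.

        # Hedged sketch of the credit-adjustment rule described in the abstract. The
        # patent does not specify data structures or constants; everything here is
        # an assumption made for illustration.
        from dataclasses import dataclass

        @dataclass
        class ClientState:
            credit_limit: int        # max outstanding requests (pipeline depth) allowed
            in_flight: int           # requests currently outstanding
            rtt_seconds: float       # measured round-trip time for this connection
            bytes_per_request: int   # typical response size

        def adjust_credits(client: ClientState, server_has_spare_resources: bool,
                           link_bandwidth_bytes_per_s: float) -> int:
            """Return the new credit limit for one client."""
            if not server_has_spare_resources:
                return max(1, client.credit_limit - 1)    # shrink when the server is busy

            actively_using = client.in_flight >= client.credit_limit
            # Requests needed in flight to keep the link full (bandwidth-delay product).
            needed_depth = (link_bandwidth_bytes_per_s * client.rtt_seconds) / client.bytes_per_request
            if actively_using and needed_depth > client.credit_limit:
                return client.credit_limit + 1            # deepen the pipeline
            return client.credit_limit

        c = ClientState(credit_limit=4, in_flight=4, rtt_seconds=0.05, bytes_per_request=65536)
        print(adjust_credits(c, server_has_spare_resources=True,
                             link_bandwidth_bytes_per_s=12_500_000))  # ~100 Mbit/s link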

File server pipelining with denial of service mitigation
    5. Invention application (in force)

    Publication number: US20080240144A1

    Publication date: 2008-10-02

    Application number: US11690962

    Filing date: 2007-03-26

    IPC classification: H04J3/16 H04J3/22

    Abstract: A method of metering bandwidth allocation on a server using credits is disclosed. The method may receive a request for data from a client, respond to the request for data, and determine whether the request exceeds the client's current data allocation credit limit. Using the round trip time, the method may calculate a connection throughput for the client and may increase the client's current data allocation credit limit if the server has resources to spare, the client is actively using the current pipeline depth allowed, and network connection latency and bandwidth indicate that a deeper pipeline is necessary for saturation. The method may decrease the client's current data allocation credit limit if the server does not have resources to spare.

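    This application entry repeats the abstract of the granted patent above. To complement the credit-adjustment sketch given there, the snippet below illustrates the round-trip-time based throughput estimate the abstract mentions; the formula (bytes delivered divided by the round-trip time) is an assumption about what calculating a connection throughput means here.

        # Hypothetical helper: estimate per-connection throughput from the round-trip
        # time. The formula is an assumption for illustration, not patent text.

        def connection_throughput(bytes_acked: int, rtt_seconds: float) -> float:
            """Bytes per second delivered to this client over one round trip."""
            if rtt_seconds <= 0:
                raise ValueError("round-trip time must be positive")
            return bytes_acked / rtt_seconds

        # A client that received 256 KiB with a 50 ms round trip is seeing roughly
        # 5 MiB/s of effective throughput.
        print(connection_throughput(256 * 1024, 0.05))   # 5242880.0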

Intelligent Pre-fetching using Compound Operations
    6. Invention application (in force)

    Publication number: US20080077655A1

    Publication date: 2008-03-27

    Application number: US11534446

    Filing date: 2006-09-22

    IPC classification: G06F15/16

    Abstract: A system and method for pre-fetching data uses a combination of heuristics to determine likely next data retrieval operations and an evaluation of available resources for executing speculative data operations. When local resources, such as cache memory for storing speculative command results, are not available, the compound operation request may not be sent. When resources on a server-side system are insufficient, only the primary command of a compound operation request may be processed and speculative command requests may be rejected. Both local computing resources and network resources may be evaluated when determining whether to build or process a compound operations request.

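    This application entry repeats the abstract of the granted patent above. To complement the request-building sketch given there, the snippet below illustrates the local-cache side: speculative results from a compound response are stored only while room exists, and a later request can then be served without another round trip. The cache structure and all names are illustrative assumptions.

        # Hypothetical client-side cache for speculative command results; names and
        # structure are assumptions made for illustration.

        class SpeculativeCache:
            def __init__(self, capacity: int):
                self.capacity = capacity
                self.results = {}                      # command -> cached result

            def has_room(self) -> bool:
                return len(self.results) < self.capacity

            def store(self, command: str, result: str) -> None:
                if self.has_room():                    # drop results when no room exists
                    self.results[command] = result

            def lookup(self, command: str):
                # A hit means the pre-fetched result is served locally, avoiding a
                # second round trip to the file server.
                return self.results.pop(command, None)

        cache = SpeculativeCache(capacity=4)
        cache.store("read", "speculative-read-result")
        print(cache.lookup("read"))                    # served from the local cache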

NON-BLOCKING DATA TRANSFER VIA MEMORY CACHE MANIPULATION
    7. Invention application (in force)

    Publication number: US20110119451A1

    Publication date: 2011-05-19

    Application number: US12619571

    Filing date: 2009-11-16

    IPC classification: G06F12/08 G06F12/00

    CPC classification: G06F12/0802 G06F12/0893

    Abstract: A cache controller in a computer system is configured to manage a cache such that the use of bus bandwidth is reduced. The cache controller receives commands from a processor. In response, a cache mapping that maintains information for each block in the cache is modified. The cache mapping may include an address, a dirty bit, a zero bit, and a priority for each cache block. The address indicates the address in main memory for which the cache block caches data. The dirty bit indicates whether the data in the cache block is consistent with the data in main memory at that address. The zero bit indicates whether data at the address should be read as a default value, and the priority specifies a priority for evicting the cache block. By manipulating this mapping information, commands such as move, copy, swap, zero, deprioritize, and deactivate may be implemented.

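    The abstract lists the per-block mapping fields (address, dirty bit, zero bit, priority) and the commands built on top of them. The Python model below is a software illustration of that idea rather than the controller itself; every class and method name is invented for this sketch, and only the move, zero, and deprioritize commands are shown.

        # Illustrative software model of the cache-mapping manipulation described in
        # the abstract; a real controller would implement this in hardware.
        from dataclasses import dataclass

        @dataclass
        class CacheBlock:
            address: int      # main-memory address this block caches
            dirty: bool       # block contents differ from main memory
            zero: bool        # reads at this address should return the default (zero) value
            priority: int     # eviction priority (lower = evicted sooner)

        class CacheMapping:
            def __init__(self):
                self.blocks = {}                      # address -> CacheBlock

            def move(self, src: int, dst: int):
                """'Move' data by retargeting the mapping instead of copying bytes
                over the memory bus: the block now caches dst, and src reads as zero."""
                block = self.blocks.pop(src)          # assumes src is currently cached
                block.address, block.dirty = dst, True
                self.blocks[dst] = block
                self.blocks[src] = CacheBlock(src, dirty=True, zero=True, priority=0)

            def zero(self, address: int):
                """Mark an address as zero-filled without writing zeros to memory."""
                self.blocks[address] = CacheBlock(address, dirty=True, zero=True, priority=0)

            def deprioritize(self, address: int):
                """Make a block the preferred eviction candidate."""
                if address in self.blocks:
                    self.blocks[address].priority = 0

        cache = CacheMapping()
        cache.blocks[0x1000] = CacheBlock(0x1000, dirty=False, zero=False, priority=3)
        cache.move(0x1000, 0x2000)
        print(cache.blocks[0x2000].address, cache.blocks[0x1000].zero)   # 8192 True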

Non-blocking data transfer via memory cache manipulation
    8. Invention grant (in force)

    Publication number: US08495299B2

    Publication date: 2013-07-23

    Application number: US12619571

    Filing date: 2009-11-16

    IPC classification: G06F12/00

    CPC classification: G06F12/0802 G06F12/0893

    Abstract: A cache controller in a computer system is configured to manage a cache. The cache controller receives commands from a processor. In response, a cache mapping that maintains information for each block in the cache is modified. The cache mapping may include an address, a dirty bit, a zero bit, and a priority for each cache block. The address indicates the address in main memory for which the cache block caches data. The dirty bit indicates whether the data in the cache block is consistent with the data in main memory at that address. The zero bit indicates whether data at the address should be read as a default value, and the priority specifies a priority for evicting the cache block. By manipulating this mapping information, commands such as move, copy, swap, zero, deprioritize, and deactivate may be implemented.

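    This granted entry repeats the abstract of the application above. To complement the mapping model shown there, the self-contained snippet below illustrates how a swap of two cached addresses can likewise be performed by exchanging mapping entries and marking both blocks dirty, rather than copying data across the memory bus. The dictionary layout and the extra data field used to make the effect visible are assumptions.

        # Illustrative swap implemented purely as a mapping exchange; the dict layout
        # and the "data" field are assumptions made so the effect can be observed.

        def swap(mapping: dict, addr_a: int, addr_b: int) -> None:
            """Exchange the blocks mapped at addr_a and addr_b without moving data."""
            block_a, block_b = mapping[addr_a], mapping[addr_b]
            block_a["address"], block_b["address"] = addr_b, addr_a
            block_a["dirty"] = block_b["dirty"] = True   # contents now differ from memory
            mapping[addr_a], mapping[addr_b] = block_b, block_a

        cache = {
            0x1000: {"address": 0x1000, "data": b"AAAA", "dirty": False, "zero": False, "priority": 3},
            0x2000: {"address": 0x2000, "data": b"BBBB", "dirty": False, "zero": False, "priority": 1},
        }
        swap(cache, 0x1000, 0x2000)
        print(cache[0x1000]["data"], cache[0x2000]["data"])   # b'BBBB' b'AAAA'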

Method and system for moderating thread priority boost for I/O completion
    9. Invention grant (in force)

    Publication number: US07496928B2

    Publication date: 2009-02-24

    Application number: US10650176

    Filing date: 2003-08-28

    Applicant: Jeffrey C. Fuller

    Inventor: Jeffrey C. Fuller

    IPC classification: G06F9/54 G06F9/46

    CPC classification: G06F9/4881

    Abstract: A system and method uses a heuristic approach to manage the boosting of thread priorities after I/O completion to improve system performance. Upon detection of the completion of an I/O operation in response to a request, the system thread does not automatically boost the priority of the thread that made the I/O request by a fixed amount. Instead, the system thread determines whether to boost the requesting thread's priority by applying heuristic criteria based on the I/O operation status, such as whether the system thread has additional I/O requests to process, how many I/O request packets have been completed in the current thread context without a priority boost to the requesting thread, and the time that has passed since the last boosted I/O completion.

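    The sketch below is one plausible combination of the three criteria the abstract lists (pending I/O, completions since the last boost, and time since the last boosted completion); the thresholds and names are assumptions made for illustration, and the real policy may differ.

        # Hedged sketch of a boost-moderation heuristic; class name, thresholds, and
        # the exact decision rule are assumptions, not the patented policy.
        import time

        class BoostModerator:
            def __init__(self, max_unboosted=8, min_interval_s=0.01):
                self.completions_without_boost = 0   # completions in this context with no boost
                self.last_boost_time = 0.0           # timestamp of the last boosted completion
                self.max_unboosted = max_unboosted
                self.min_interval_s = min_interval_s

            def should_boost(self, more_io_pending: bool) -> bool:
                """Decide whether to boost the requesting thread on this completion."""
                now = time.monotonic()
                boost = (
                    not more_io_pending                                   # queue is draining anyway
                    or self.completions_without_boost >= self.max_unboosted
                    or (now - self.last_boost_time) >= self.min_interval_s
                )
                if boost:
                    self.completions_without_boost = 0
                    self.last_boost_time = now
                else:
                    self.completions_without_boost += 1
                return boost

        mod = BoostModerator()
        print(mod.should_boost(more_io_pending=True))   # True on the first completion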