Variable cache line size management
    2.
    Granted patent
    Variable cache line size management (Active)

    Publication number: US08943272B2

    Publication date: 2015-01-27

    Application number: US13451742

    Application date: 2012-04-20

    IPC classification: G06F12/00 G06F12/08

    Abstract: According to one aspect of the present disclosure, a method and technique for variable cache line size management is disclosed. The method includes: determining whether an eviction of a cache line from an upper level sectored cache to an unsectored lower level cache is to be performed, wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache; responsive to determining that an eviction is to be performed, identifying referenced sub-sectors of the cache line to be evicted; invalidating unreferenced sub-sectors of the cache line to be evicted; and storing the referenced sub-sectors in the lower level cache.

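    The eviction flow in this abstract can be sketched as a toy model. All names here (SectoredLine, evict_to_lower) are invented for illustration; the patent describes hardware, not any particular software implementation.

    ```python
    class SectoredLine:
        """One upper-level cache line split into fixed-size sub-sectors,
        each the size of one lower-level cache line."""
        def __init__(self, tag, num_subsectors):
            self.tag = tag
            self.data = [None] * num_subsectors          # sub-sector payloads
            self.referenced = [False] * num_subsectors   # per-sub-sector reference bits

        def touch(self, idx, payload):
            """Record an access to sub-sector idx."""
            self.data[idx] = payload
            self.referenced[idx] = True

    def evict_to_lower(line, lower_cache):
        """Evict `line`: store referenced sub-sectors in the unsectored
        lower-level cache and invalidate the unreferenced ones."""
        for idx, used in enumerate(line.referenced):
            if used:
                # each referenced sub-sector maps to one lower-level cache line
                lower_cache[(line.tag, idx)] = line.data[idx]
            else:
                line.data[idx] = None   # invalidate unreferenced sub-sector
            line.referenced[idx] = False

    line = SectoredLine(tag=0x40, num_subsectors=4)
    line.touch(0, "A")
    line.touch(2, "C")
    l2 = {}
    evict_to_lower(line, l2)
    print(sorted(l2))   # only the referenced sub-sectors reach the lower cache
    ```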

    Dynamic prioritization of cache access
    3.
    Granted patent
    Dynamic prioritization of cache access (Active)

    Publication number: US08769210B2

    Publication date: 2014-07-01

    Application number: US13323076

    Application date: 2011-12-12

    IPC classification: G06F12/08

    CPC classification: G06F12/0815

    Abstract: Some embodiments of the inventive subject matter are directed to a cache comprising a tracking unit and cache state machines. In some embodiments, the tracking unit is configured to track an amount of cache resources used to service cache misses within a past period. In some embodiments, each of the cache state machines is configured to determine whether a memory access request results in a cache miss or cache hit, and in response to a cache miss for a memory access request, query the tracking unit for the amount of cache resources used to service cache misses within the past period. In some embodiments, each of the cache state machines is configured to service the memory access request based, at least in part, on the amount of cache resources used to service the cache misses within the past period according to the tracking unit.

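    A minimal sketch of the tracking-unit/state-machine split described above, assuming a simplified cost model with a sliding time window. Class and method names are hypothetical, not from the patent.

    ```python
    from collections import deque

    class TrackingUnit:
        """Tracks resource units spent servicing misses within a past period."""
        def __init__(self, period):
            self.period = period
            self.events = deque()   # (timestamp, cost) for each serviced miss

        def record_miss(self, now, cost):
            self.events.append((now, cost))

        def miss_cost_within(self, now):
            # age out events that fell outside the tracking window
            while self.events and self.events[0][0] <= now - self.period:
                self.events.popleft()
            return sum(cost for _, cost in self.events)

    class CacheStateMachine:
        """On a miss, queries the tracking unit before servicing the request."""
        def __init__(self, tracker, contents):
            self.tracker = tracker
            self.contents = contents   # set of cached addresses

        def access(self, addr, now, cost=1):
            if addr in self.contents:
                return "hit"
            # cache miss: query recent miss cost, then service the request
            recent = self.tracker.miss_cost_within(now)
            self.tracker.record_miss(now, cost)
            self.contents.add(addr)
            return ("miss", recent)

    tracker = TrackingUnit(period=10)
    fsm = CacheStateMachine(tracker, contents={0x100})
    ```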

    DYNAMIC PRIORITIZATION OF CACHE ACCESS
    4.
    Patent application
    DYNAMIC PRIORITIZATION OF CACHE ACCESS (Expired)

    Publication number: US20130151784A1

    Publication date: 2013-06-13

    Application number: US13586518

    Application date: 2012-08-15

    IPC classification: G06F12/12

    CPC classification: G06F12/0815

    Abstract: Some embodiments of the inventive subject matter are directed to determining that a memory access request results in a cache miss and determining an amount of cache resources used to service cache misses within a past period in response to determining that the memory access request results in the cache miss. Some embodiments are further directed to determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed a threshold. In some embodiments, the threshold corresponds to reservation of a given amount of cache resources for potential cache hits. Some embodiments are further directed to rejecting the memory access request in response to determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed the threshold.

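    The rejection test in this application reduces to one comparison. A hedged sketch, assuming a fixed pool of resource units and a flat per-miss cost (both assumptions; the patent does not fix either):

    ```python
    def should_reject(miss_cost_in_period, request_cost, total_resources, reserved_for_hits):
        """Reject a miss if servicing it would push the miss-resource usage in
        the past period over the threshold; the reserved portion is held back
        for potential cache hits."""
        threshold = total_resources - reserved_for_hits
        return miss_cost_in_period + request_cost > threshold

    # 16 resource units total, 4 reserved for potential hits => miss budget of 12
    at_budget = should_reject(11, 1, total_resources=16, reserved_for_hits=4)
    over_budget = should_reject(12, 1, total_resources=16, reserved_for_hits=4)
    ```

    A miss that lands exactly on the budget is still serviced; only a miss that would exceed it is rejected.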

    METHOD AND APPARATUS FOR MINIMIZING CACHE CONFLICT MISSES
    5.
    Patent application
    METHOD AND APPARATUS FOR MINIMIZING CACHE CONFLICT MISSES (Active)

    Publication number: US20120198121A1

    Publication date: 2012-08-02

    Application number: US13015771

    Application date: 2011-01-28

    IPC classification: G06F12/08

    Abstract: A method for minimizing cache conflict misses is disclosed. A translation table capable of facilitating the translation of a virtual address to a real address during a cache access is provided. The translation table includes multiple entries, and each entry of the translation table includes a page number field and a hash value field. A hash value is generated from a first group of bits within a virtual address, and the hash value is stored in the hash value field of an entry within the translation table. In response to a match on the entry within the translation table during a cache access, the hash value of the matched entry is retrieved from the translation table, and the hash value is concatenated with a second group of bits within the virtual address to form a set of indexing bits to index into a cache set.

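    The indexing scheme above can be illustrated with a small bit-manipulation sketch. The bit positions, field widths, and the XOR-fold hash are all assumptions for illustration; the patent does not specify a particular hash function.

    ```python
    def hash_bits(va, lo, width):
        """XOR-fold `width` bits of the virtual address starting at bit `lo`
        with the next `width` bits, yielding a `width`-bit hash value
        (the value stored in the translation entry's hash value field)."""
        a = (va >> lo) & ((1 << width) - 1)
        b = (va >> (lo + width)) & ((1 << width) - 1)
        return a ^ b

    def cache_set_index(va, hash_value, index_lo=6, index_width=5):
        """Concatenate the stored hash with a second group of virtual-address
        bits to form the set of indexing bits for the cache."""
        second_group = (va >> index_lo) & ((1 << index_width) - 1)
        return (hash_value << index_width) | second_group

    va = 0x0000_1234_5678
    h = hash_bits(va, lo=16, width=3)   # stored on translation-table fill
    idx = cache_set_index(va, h)        # used on a later cache access
    ```

    Because the hash mixes in higher address bits, pages whose low-order index bits collide are spread across different cache sets, which is the conflict-miss reduction the abstract describes.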

    Performance of Emerging Applications in a Virtualized Environment Using Transient Instruction Streams
    6.
    Patent application
    Performance of Emerging Applications in a Virtualized Environment Using Transient Instruction Streams (Pending, published)

    Publication number: US20120179873A1

    Publication date: 2012-07-12

    Application number: US13427083

    Application date: 2012-03-22

    IPC classification: G06F12/08

    Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.

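    A rough sketch of the fill policy described above: when the thread's SPR transient bit is set, fetched lines go only to L1 and never pollute the lower-level cache. The function names and the set-based cache model are illustrative assumptions.

    ```python
    def handle_fetch_miss(addr, spr_transient, l1, l2, l2_inclusive):
        """Fill a fetch/prefetch miss obtained from memory."""
        l1.add(addr)
        if spr_transient:
            return            # transient stream: not written to lower-level caches
        if l2_inclusive:
            l2.add(addr)      # inclusive lower level also receives the line now
        # a non-inclusive lower level receives the line later, on cast-out

    def cast_out_from_l1(addr, l1, l2, l2_inclusive):
        """Evict a line from the L1 instruction cache."""
        l1.discard(addr)
        if not l2_inclusive:
            l2.add(addr)      # non-inclusive lower level is written on cast-out
    ```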

    Changing Ethernet MTU size on demand with no data loss
    7.
    Granted patent
    Changing Ethernet MTU size on demand with no data loss (Expired)

    Publication number: US08214535B2

    Publication date: 2012-07-03

    Application number: US11390787

    Application date: 2006-03-28

    IPC classification: G06F13/00

    CPC classification: H04L69/324 H04L69/32

    Abstract: A method and system for substantially avoiding loss of data and enabling continuing connection to the application during an MTU size changing operation in an active network computing device. Logic is added to the device driver, which logic provides several enhancements to the MTU size changing operation/process. Among these enhancements are: (1) logic for temporarily pausing the data coming in from the linked partner while changing the MTU size; (2) logic for returning a “device busy” status to higher-protocol transmit requests during the MTU size changing process, which prevents the application from issuing new requests until the busy signal is removed; and (3) logic for enabling resumption of both flows when the MTU size change is completed. With this new logic, the device driver/adapter does not have any transmit and receive packets to process for a short period of time while the MTU size change is ongoing.

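    The three driver enhancements can be modeled as a toy state machine. This is a sketch under invented names; the patent adds this logic to a real Ethernet device driver, not a Python object.

    ```python
    class EthernetDriverModel:
        """Toy model of the driver logic: pause RX, return busy on TX,
        then resume both flows once the MTU change completes."""
        def __init__(self, mtu):
            self.mtu = mtu
            self.changing = False
            self.rx_paused = False

        def transmit(self, frame):
            if self.changing:
                return "EBUSY"            # higher protocols retry; no data lost
            return "sent:%d" % len(frame)

        def begin_mtu_change(self):
            self.changing = True          # (2) report busy to transmit requests
            self.rx_paused = True         # (1) pause data from the link partner

        def finish_mtu_change(self, new_mtu):
            self.mtu = new_mtu            # reconfigure with no packets in flight
            self.rx_paused = False        # (3) resume both flows
            self.changing = False

    drv = EthernetDriverModel(mtu=1500)
    drv.begin_mtu_change()
    status = drv.transmit(b"payload")     # rejected while the change is ongoing
    drv.finish_mtu_change(9000)
    ```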

    NETWORK DATA PACKET FRAGMENTATION AND REASSEMBLY METHOD
    8.
    Patent application
    NETWORK DATA PACKET FRAGMENTATION AND REASSEMBLY METHOD (Pending, published)

    Publication number: US20110274120A1

    Publication date: 2011-11-10

    Application number: US12774834

    Application date: 2010-05-06

    IPC classification: H04J3/24

    CPC classification: H04L49/9094

    Abstract: The method determines whether a particular jumbo data packet benefits from fragmentation and reassembly management during communication through a network or networks. The method determines the best communication path, including path partners, between a sending information handling system (IHS) and a receiving IHS for the jumbo packet. A packet manager determines the maximum transmission unit (MTU) size for each path partner or switch in the communication path, including the sending and receiving IHSs. The method provides transfer of the jumbo packets intact between those path partner switches of the communication path exhibiting an MTU sized for jumbo or larger packet transfer. The method provides fragmentation of jumbo packets into multiple normal packets for transfer between switches exhibiting normal packet MTU sizes. The packet manager reassembles multiple normal packets back into jumbo packets for those network devices, including the receiving IHS, that are capable of managing jumbo packets.

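    The per-hop decision above can be sketched in a few lines: send intact where the hop MTU accommodates the jumbo packet, fragment for normal-MTU hops, and reassemble at jumbo-capable devices. The 9000/1500-byte sizes are common conventions, assumed here for illustration.

    ```python
    JUMBO_MTU = 9000    # assumed jumbo frame size
    NORMAL_MTU = 1500   # assumed normal Ethernet MTU

    def forward(payload, hop_mtu):
        """Return the list of packets placed on a hop with the given MTU:
        the jumbo packet intact if it fits, otherwise normal-size fragments."""
        if len(payload) <= hop_mtu:
            return [payload]
        return [payload[i:i + NORMAL_MTU] for i in range(0, len(payload), NORMAL_MTU)]

    def reassemble(fragments):
        """Rebuild the jumbo packet at a jumbo-capable device (e.g. the
        receiving IHS)."""
        return b"".join(fragments)

    jumbo_packet = bytes(6000)
    intact = forward(jumbo_packet, hop_mtu=JUMBO_MTU)   # jumbo-capable hop
    frags = forward(jumbo_packet, hop_mtu=NORMAL_MTU)   # normal-MTU hop
    ```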

    HYBRID STORAGE SUBSYSTEM WITH MIXED PLACEMENT OF FILE CONTENTS
    9.
    Patent application
    HYBRID STORAGE SUBSYSTEM WITH MIXED PLACEMENT OF FILE CONTENTS (Active)

    Publication number: US20110153931A1

    Publication date: 2011-06-23

    Application number: US12644721

    Application date: 2009-12-22

    IPC classification: G06F12/08

    Abstract: A storage subsystem combining solid state drive (SSD) and hard disk drive (HDD) technologies provides low access latency and low complexity. Separate free lists are maintained for the SSD and the HDD, and blocks of file system data are stored uniquely on either the SSD or the HDD. When a read access is made to the subsystem, if the data is present on the SSD, the data is returned, but if the block is present on the HDD, it is migrated to the SSD and the block on the HDD is returned to the HDD free list. On a write access, if the block is present in either the SSD or the HDD, the block is overwritten, but if the block is not present in the subsystem, the block is written to the HDD.

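    A minimal sketch of this read/write policy, with dicts standing in for the two devices and allocation simplified away. The class and its methods are invented for illustration.

    ```python
    class HybridStore:
        """Toy SSD+HDD subsystem: reads promote HDD blocks to the SSD;
        new blocks are written to the HDD."""
        def __init__(self):
            self.ssd, self.hdd = {}, {}
            self.ssd_free, self.hdd_free = [], []   # separate free lists

        def read(self, block):
            if block in self.ssd:
                return self.ssd[block]              # present on SSD: return data
            if block in self.hdd:
                data = self.hdd.pop(block)
                self.ssd[block] = data              # migrate the block to the SSD
                self.hdd_free.append(block)         # HDD block back on its free list
                return data
            return None

        def write(self, block, data):
            if block in self.ssd:
                self.ssd[block] = data              # overwrite in place on the SSD
            elif block in self.hdd:
                self.hdd[block] = data              # overwrite in place on the HDD
            else:
                self.hdd[block] = data              # new block: written to the HDD

    store = HybridStore()
    store.write(7, b"new")        # new block lands on the HDD
    first_read = store.read(7)    # read migrates it to the SSD
    ```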

    REDUCING IDLE TIME DUE TO ACKNOWLEDGEMENT PACKET DELAY
    10.
    Patent application
    REDUCING IDLE TIME DUE TO ACKNOWLEDGEMENT PACKET DELAY (Active)

    Publication number: US20090300211A1

    Publication date: 2009-12-03

    Application number: US12131167

    Application date: 2008-06-02

    IPC classification: G06F15/16

    Abstract: Mechanisms for reducing the idle time of a computing device due to delays in transmitting/receiving acknowledgement packets are provided. A first data amount corresponding to a window size for a communication connection is determined. A second data amount, in excess of the first data amount, which may be transmitted with the first data amount, is calculated. The first and second data amounts are then transmitted from the sender to the receiver. The first data amount is provided to the receiver in a receive buffer of the receiver. The second data amount is maintained in a switch port buffer of a switch port without being provided to the receive buffer. The second data amount is transmitted from the switch port buffer to the receive buffer in response to the switch port detecting an acknowledgement packet from the receiver.

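    The staging described above can be sketched as a toy model: the window-sized first amount reaches the receive buffer immediately, the excess waits in the switch port buffer, and the port releases it when it detects the receiver's ACK. All names and the list-based buffers are illustrative assumptions.

    ```python
    class SwitchPortModel:
        """Toy switch port that parks the excess data amount until it
        detects an acknowledgement packet from the receiver."""
        def __init__(self):
            self.port_buffer = []     # second data amount, held at the port
            self.receive_buffer = []  # receiver's buffer

        def send(self, first_amount, second_amount):
            """Sender transmits the window amount plus the calculated excess."""
            self.receive_buffer.append(first_amount)   # delivered to the receiver
            self.port_buffer.append(second_amount)     # maintained at the port

        def on_ack_from_receiver(self):
            """On detecting the ACK, forward the buffered excess to the receiver."""
            self.receive_buffer.extend(self.port_buffer)
            self.port_buffer.clear()

    port = SwitchPortModel()
    port.send(first_amount="window", second_amount="excess")
    port.on_ack_from_receiver()
    ```

    Because the excess is already staged one hop from the receiver, the sender's idle time waiting on the ACK round trip is hidden, which is the gain the abstract claims.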