Mechanisms for Priority Control in Resource Allocation
    11.
    Invention application
    Status: In force

    Publication number: US20100146512A1

    Publication date: 2010-06-10

    Application number: US12631407

    Application date: 2009-12-04

    CPC classification number: G06F13/362

    Abstract: Mechanisms for priority control in resource allocation are provided. With these mechanisms, when a unit makes a request to a token manager, the unit identifies the priority of its request, the resource it desires to access, and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.
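The grant step this abstract describes can be sketched as follows. This is an illustrative model only, not the patented implementation; all names (`RagTokenArbiter`, `grant`, the `"high"`/`"low"` classes) are assumptions:

```python
# Hypothetical sketch: pending request flags are kept per (unit, priority),
# and a priority pointer alternates grants between the high- and
# low-priority classes so neither class starves.

class RagTokenArbiter:
    """Grants one token at a time among a RAG's pending requests."""

    def __init__(self, units):
        self.pending = {u: {"high": False, "low": False} for u in units}
        self.next_class = "high"   # priority pointer: which class goes next

    def request(self, unit, priority):
        # A unit records a pending request at the given priority.
        self.pending[unit][priority] = True

    def grant(self):
        # Try the class the pointer selects first, then the other class.
        other = "low" if self.next_class == "high" else "high"
        for cls in (self.next_class, other):
            for unit, flags in self.pending.items():
                if flags[cls]:
                    flags[cls] = False
                    # Flip the pointer so the other class is favored next.
                    self.next_class = "low" if cls == "high" else "high"
                    return unit, cls
        return None   # no pending requests in this RAG
```

The pointer flip after each grant is what yields the round-robin fairness between the two priority classes.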


    Adaptive shared data interventions in coupled broadcast engines
    12.
    Granted patent
    Status: Expired

    Publication number: US06986002B2

    Publication date: 2006-01-10

    Application number: US10322075

    Application date: 2002-12-17

    Applicant: Ram Raghavan

    Inventor: Ram Raghavan

    CPC classification number: G06F12/0813 G06F12/122

    Abstract: The present invention provides for a bus system having a local bus ring coupled to a remote bus ring. A processing unit is coupled to the local bus node and is employable to request data. A cache is coupled to the processing unit through a command bus. A cache investigator, coupled to the cache, is employable to determine whether the cache contains the requested data. The cache investigator is further employable to generate and broadcast cache utilization parameters, which contain information on the degree to which the cache is accessed by other caches, by its own associated processing unit, and so on. In one aspect, the cache is a local cache. In another aspect, the cache is a remote cache.
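The cache-investigator role can be sketched as below. This is a minimal illustration under assumed names (`CacheInvestigator`, `probe`, `utilization_parameters`), not the patent's hardware design:

```python
# Illustrative sketch: the investigator answers whether its cache holds a
# requested address and publishes utilization counters distinguishing
# accesses by its own processing unit from accesses by remote caches.

class CacheInvestigator:
    def __init__(self, cached_addresses):
        self.lines = set(cached_addresses)   # addresses currently cached
        self.local_accesses = 0
        self.remote_accesses = 0

    def probe(self, address, remote=False):
        # Record who is accessing, then report whether we hold the line.
        if remote:
            self.remote_accesses += 1
        else:
            self.local_accesses += 1
        return address in self.lines

    def utilization_parameters(self):
        # Broadcast payload: degree of access by own unit vs. other caches.
        return {"local": self.local_accesses, "remote": self.remote_accesses}
```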


    Mixed operating performance modes including a shared cache mode

    Publication number: US08695011B2

    Publication date: 2014-04-08

    Application number: US13458769

    Application date: 2012-04-27

    CPC classification number: G06F9/5077

    Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
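The matching logic above can be sketched as follows, under the assumption that each set of multi-core processing units advertises its operating performance mode and each logical partition states the mode its workload requires; the function name and data shapes are illustrative:

```python
# Minimal sketch: map each logical partition to the set of processing
# units whose configured operating performance mode satisfies the
# partition's performance criterion.

def assign_partitions(units_by_mode, partition_requirements):
    """Return {partition: [units]} matching required modes to unit sets."""
    assignment = {}
    for partition, required_mode in partition_requirements.items():
        # Units configured in the required mode; empty list if none exist.
        assignment[partition] = units_by_mode.get(required_mode, [])
    return assignment
```

For example, a partition whose workload benefits from a shared cache would be mapped onto the units configured in a shared-cache mode, while another partition lands on units in a different mode.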

    Two Partition Accelerator and Application of Tiered Flash to Cache Hierarchy in Partition Acceleration
    15.
    Invention application
    Status: Expired

    Publication number: US20110022803A1

    Publication date: 2011-01-27

    Application number: US12508621

    Application date: 2009-07-24

    CPC classification number: G06F12/0811 G06F2212/1024 G06F2212/1032

    Abstract: An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the inactive processing core using the extended memory map.
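The memory-map extension described above can be sketched as follows. The class and region names are assumptions for illustration, not the patent's mechanism:

```python
# Hypothetical sketch: the active core's map normally covers only its own
# cache; once another core is found disabled, the map is extended so
# later stores can spill into the idle core's otherwise-unused cache.

class ExtendedCacheMap:
    def __init__(self, own_cache_kb, donor_cache_kb=0):
        # Offsets 0..own_cache_kb map to the active core's cache; the
        # extension appended after it maps to the disabled core's cache.
        self.regions = [("own", own_cache_kb)]
        if donor_cache_kb:
            self.regions.append(("donor", donor_cache_kb))

    def total_kb(self):
        return sum(size for _, size in self.regions)

    def locate(self, offset_kb):
        # Return which physical cache backs a given map offset.
        for name, size in self.regions:
            if offset_kb < size:
                return name
            offset_kb -= size
        raise IndexError("offset beyond extended map")
```

A first block of data then lands in the `own` region while a second block, addressed past the original map's end, lands in the `donor` region.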


    Reducing memory access latency for hypervisor- or supervisor-initiated memory access requests
    16.
    Granted patent
    Status: Expired

    Publication number: US07774563B2

    Publication date: 2010-08-10

    Application number: US11621189

    Application date: 2007-01-09

    Applicant: Ram Raghavan

    Inventor: Ram Raghavan

    CPC classification number: G06F13/4239

    Abstract: A computer-implemented method, data processing system, and computer usable program code are provided for reducing memory access latency. A memory controller receives a memory access request and determines if an address associated with the memory access request falls within an address range of a plurality of paired memory address range registers. The memory controller determines if an enable bit associated with the address range is set to 1 in response to the address falling within one of the address ranges. The memory controller flags the memory access request as a high-priority request in response to the enable bit being set to 1 and places the high-priority request on a request queue. A dispatcher receives an indication that a memory bank is idle. The dispatcher determines if high-priority requests are present in the request queue and, if so, sends the earliest high-priority request to the idle memory bank.
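The request path above can be sketched as follows, with assumed names: each paired base/limit register carries an enable bit, a request whose address falls inside an enabled range is flagged high priority, and the dispatcher serves the oldest high-priority request when a bank goes idle:

```python
from collections import deque

# Illustrative sketch, not the patented hardware: address-range checking
# plus a two-level queue served oldest-first within each priority.

class MemoryController:
    def __init__(self, ranges):
        # ranges: list of (base, limit, enable_bit) register pairs
        self.ranges = ranges
        self.high = deque()   # high-priority requests, oldest first
        self.low = deque()

    def submit(self, address):
        # Flag as high priority if the address hits an enabled range.
        if any(base <= address <= limit and enable
               for base, limit, enable in self.ranges):
            self.high.append(address)
        else:
            self.low.append(address)

    def dispatch(self):
        # Called when a memory bank signals it is idle.
        if self.high:
            return self.high.popleft()
        return self.low.popleft() if self.low else None
```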


    Token swapping for hot spot management
    17.
    Granted patent
    Status: In force

    Publication number: US06996647B2

    Publication date: 2006-02-07

    Application number: US10738722

    Application date: 2003-12-17

    CPC classification number: G06F13/37

    Abstract: A method and apparatus are provided for efficiently managing hot spots in a resource managed computer system. The system utilizes a controller, a series of requestor groups, and a series of loan registers. The controller is configured to allocate, and to reallocate, resources among the requestor groups to efficiently manage the computer system. The loan registers account for reallocated resources so that the intended preallocation of shared resources is closely maintained. Hence, the computer system is able to operate efficiently while preventing any single requestor or group of requestors from monopolizing shared resources.
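The loan-register bookkeeping can be sketched as below; this is an assumed illustration (the class and method names are not from the patent) of how recorded debts let reallocation stay close to the intended preallocation:

```python
# Hypothetical sketch: when resource tokens are temporarily reallocated
# from one requestor group to another, a loan register records the net
# debt so the long-run split still matches the intended preallocation.

class LoanRegisters:
    def __init__(self, groups):
        # Net tokens borrowed (+) or lent (-) per requestor group.
        self.loans = {g: 0 for g in groups}

    def lend(self, lender, borrower, tokens=1):
        # Move tokens between groups and record the debt both ways.
        self.loans[lender] -= tokens
        self.loans[borrower] += tokens

    def repay_due(self, group):
        # A group with a positive balance owes tokens back to the pool.
        return max(self.loans[group], 0)
```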


    POLYCRYSTALLINE DIAMOND COMPACT COATED WITH HIGH ABRASION RESISTANCE DIAMOND LAYERS
    18.
    Invention application
    Status: Pending, published

    Publication number: US20140060937A1

    Publication date: 2014-03-06

    Application number: US13602083

    Application date: 2012-08-31

    CPC classification number: C23C16/272 C23C16/0272 C23C16/50 E21B10/567

    Abstract: A cutting element may comprise a substrate, a first polycrystalline diamond volume, and a second diamond or diamond-like volume. The first polycrystalline diamond volume may contain a catalyst material. The first polycrystalline diamond volume may be bonded to the substrate. The second diamond or diamond-like volume may be formed predominantly from carbon atoms and free of catalyst materials. The second diamond or diamond-like volume may be adjacent to a working surface of the cutting element. The second diamond or diamond-like volume may be bonded to the first polycrystalline diamond volume.


    Flexible use of extended cache using a partition cache footprint
    19.
    Invention application
    Status: Expired

    Publication number: US20120042131A1

    Publication date: 2012-02-16

    Application number: US12856682

    Application date: 2010-08-15

    CPC classification number: G06F12/0811 G06F12/0284 G06F2212/502 G06F2212/604

    Abstract: An approach is provided to identify cache extension sizes that correspond to different partitions running on a computer system. The approach extends a first hardware cache, associated with a first processing core included in the processor's silicon substrate, with a first memory allocation from a system memory area, the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the plurality of cache extension sizes that corresponds to one of the partitions running on the computer system. The approach further extends a second hardware cache, associated with a second processing core also included in the processor's silicon substrate, with a second memory allocation from the system memory area, the second memory allocation corresponding to another of the cache extension sizes that corresponds to a different partition being executed by the second processing core.
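The per-partition carve-out of the system memory area can be sketched as follows; the class name, sizes, and units are assumptions for illustration:

```python
# Hypothetical sketch: each partition's configured extension size carves
# an allocation out of an off-chip system memory pool, and that
# allocation extends the hardware cache of the core running the partition.

class CacheExtensionPool:
    def __init__(self, system_memory_mb):
        self.free_mb = system_memory_mb   # off-chip memory still available
        self.extensions = {}              # core id -> (partition, MB granted)

    def extend(self, core, partition, size_mb):
        # Grant this core's cache an extension sized for its partition.
        if size_mb > self.free_mb:
            raise MemoryError("system memory area exhausted")
        self.free_mb -= size_mb
        self.extensions[core] = (partition, size_mb)
        return size_mb
```

Calling `extend` once per core, with each core's partition-specific size, mirrors the abstract's first and second memory allocations.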


    Priority control in resource allocation for low request rate, latency-sensitive units
    20.
    Granted patent
    Status: Expired

    Publication number: US07631131B2

    Publication date: 2009-12-08

    Application number: US11260579

    Application date: 2005-10-27

    CPC classification number: G06F13/362

    Abstract: A mechanism for priority control in resource allocation for low request rate, latency-sensitive units is provided. With this mechanism, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.

