CONFIGURING PROCESSOR POLICIES BASED ON PREDICTED DURATIONS OF ACTIVE PERFORMANCE STATES
    43.
    Invention application
    Status: Pending (published)

    Publication number: US20150186160A1

    Publication date: 2015-07-02

    Application number: US14146588

    Application date: 2014-01-02

    Abstract: Durations of active performance states of components of a processing system can be predicted based on one or more previous durations of an active state of the components. One or more entities in the processing system, such as processor cores or caches, can be configured based on the predicted durations of the active state of the components. Some embodiments configure a first component in a processing system based on a predicted duration of an active state of a second component of the processing system. The predicted duration is predicted based on one or more previous durations of an active state of the second component.

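The idea in the abstract above can be sketched as follows. This is an illustrative model only, not the patent's implementation: the exponential-moving-average predictor, the `configure_cache` policy, and the threshold value are all invented for the example.

```python
# Hypothetical sketch: predict the next active-state duration of one
# component from its previous durations (here, a simple exponential
# moving average) and configure a second component (a cache) based on
# that prediction. Names and thresholds are illustrative.

class DurationPredictor:
    """Predicts the next active-state duration via an exponential moving average."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha          # weight given to the newest observation
        self.prediction = None      # predicted duration in microseconds

    def observe(self, duration_us):
        if self.prediction is None:
            self.prediction = duration_us
        else:
            self.prediction = (self.alpha * duration_us
                               + (1 - self.alpha) * self.prediction)
        return self.prediction


def configure_cache(predicted_us, flush_threshold_us=100.0):
    """Configure the cache from the predicted active-state duration:
    short predicted periods keep it powered to avoid flush/refill cost,
    long ones allow deeper power gating."""
    return "power_gate" if predicted_us >= flush_threshold_us else "retain"


predictor = DurationPredictor(alpha=0.5)
for d in [40.0, 60.0, 200.0]:       # previous active-state durations (us)
    predicted = predictor.observe(d)

print(round(predicted, 1), configure_cache(predicted))  # -> 125.0 power_gate
```

The predictor's state is just one number per component, so this style of prediction is cheap enough to run in hardware or low-level firmware.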

Computation Memory Operations in a Logic Layer of a Stacked Memory
    44.
    Invention application
    Status: Granted

    Publication number: US20140181483A1

    Publication date: 2014-06-26

    Application number: US13724506

    Application date: 2012-12-21

    CPC classification number: G06F15/7821 Y02D10/12 Y02D10/13

    Abstract: Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement various computation operations. This functionality is desirable where performing the operations locally, near the memory devices, increases performance and/or power efficiency by avoiding transmission of data across the interface to the host processor.

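A toy model of why the logic-layer computation described above saves interface traffic: a reduction performed next to the DRAM dies sends one result across the host interface instead of the whole operand array. The word size and operation are invented for illustration.

```python
# Illustrative model (not from the patent): compare bytes crossing the
# host-memory interface for a host-side sum versus a sum computed in the
# stacked memory's logic layer.

WORD_BYTES = 8

def host_side_sum(data):
    """Host computes the sum: every word crosses the memory interface."""
    bytes_moved = len(data) * WORD_BYTES
    return sum(data), bytes_moved

def logic_layer_sum(data):
    """Logic-layer compute op: only the final result crosses the interface."""
    return sum(data), WORD_BYTES

data = list(range(1024))
host_result, host_traffic = host_side_sum(data)
pim_result, pim_traffic = logic_layer_sum(data)

assert host_result == pim_result     # same answer either way
print(host_traffic // pim_traffic)   # interface-traffic reduction: 1024x
```

The model ignores the internal bandwidth between the logic die and the DRAM layers, which through-silicon vias make far cheaper than the external interface; that asymmetry is the point of the design.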

PROCESSING ENGINE FOR COMPLEX ATOMIC OPERATIONS
    45.
    Invention application
    Status: Granted

    Publication number: US20140181421A1

    Publication date: 2014-06-26

    Application number: US13725724

    Application date: 2012-12-21

    CPC classification number: G06F9/50 G06F9/526 G06F2209/521 G06F2209/522

    Abstract: A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores.

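The scheme above can be sketched with a command queue standing in for the interconnect. This is a hedged illustration: the command format and the "push" operation are invented, but the key property matches the abstract: the engine applies each multi-location operation as one indivisible unit, so shared locations never see a partial update.

```python
# Sketch: cores submit commands to a single atomic processing engine
# (APE); the APE drains them one at a time, which makes each
# multi-location operation atomic with respect to all the others.
import threading
import queue

memory = {"head": 0, "count": 0}     # two shared memory locations
commands = queue.Queue()             # stands in for the interconnect

def ape_worker():
    """The APE: serial execution of commands from the queue means a
    command touching several locations can never be interleaved."""
    while True:
        cmd = commands.get()
        if cmd is None:              # sentinel: shut the engine down
            break
        op, args = cmd
        if op == "push":             # one command updates both locations
            memory["head"] = args
            memory["count"] += 1

engine = threading.Thread(target=ape_worker)
engine.start()

# Several "cores" issue commands through the shared queue.
for core in range(4):
    commands.put(("push", core))
commands.put(None)
engine.join()

print(memory["count"])               # -> 4, with no torn updates
```

Serializing at one engine trades peak throughput for simplicity: no per-location locks are needed, which is attractive when the operations themselves are complex.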

    Prefetch kernels on data-parallel processors

    Publication number: US11500778B2

    Publication date: 2022-11-15

    Application number: US16813075

    Application date: 2020-03-09

    Abstract: Embodiments include methods, systems, and non-transitory computer-readable media including instructions for executing a prefetch kernel with reduced intermediate state storage resource requirements. These include executing a prefetch kernel on a graphics processing unit (GPU), such that the prefetch kernel begins executing before a processing kernel. The prefetch kernel performs memory operations that are based upon at least a subset of memory operations in the processing kernel.
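The prefetch-kernel idea can be illustrated with a toy cache model. Everything here (the cache as a set, the latency numbers, the kernel names) is invented for the sketch; the property it demonstrates is the one in the abstract: the prefetch kernel replays a subset of the processing kernel's memory operations without keeping any intermediate results.

```python
# Sketch: a prefetch kernel runs ahead of the processing kernel and
# issues its loads, warming the cache; the processing kernel then hits
# in cache instead of paying DRAM latency.

cache = set()                        # stands in for cache-resident lines
DRAM_LATENCY, CACHE_LATENCY = 100, 1

def prefetch_kernel(addresses):
    """Issue the processing kernel's loads; discard the data, so no
    intermediate state (registers, partial sums) needs storing."""
    for addr in addresses:
        cache.add(addr)

def processing_kernel(addresses, values):
    total, cycles = 0, 0
    for addr in addresses:
        cycles += CACHE_LATENCY if addr in cache else DRAM_LATENCY
        cache.add(addr)
        total += values[addr]
    return total, cycles

values = {addr: addr * 2 for addr in range(8)}
addresses = list(values)

prefetch_kernel(addresses)           # launched before the processing kernel
total, cycles = processing_kernel(addresses, values)
print(total, cycles)                 # all 8 accesses hit: -> 56 8
```

Without the prefetch pass, the same run would cost 800 cycles in this model; the saving comes entirely from overlapping the misses with other work before the processing kernel starts.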

    Memory system with region-specific memory access scheduling

    Publication number: US11474703B2

    Publication date: 2022-10-18

    Application number: US17199949

    Application date: 2021-03-12

    Abstract: An integrated circuit device includes a memory controller coupleable to a memory. The memory controller schedules memory accesses to regions of the memory based on memory timing parameters specific to those regions. A method includes receiving a memory access request at a memory device. The method further includes accessing, from a timing data store of the memory device, data representing a memory timing parameter specific to the region of the memory cell circuitry targeted by the memory access request. The method also includes scheduling, at the memory controller, the memory access request based on the data.
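A minimal sketch of region-specific scheduling, with invented parameters: the timing store maps each region to a minimum interval between accesses (standing in for real DRAM timing parameters), and the scheduler assigns each queued request an issue time that honors its own region's timing rather than a single worst-case value.

```python
# Sketch: per-region timing lookup drives request scheduling. Region
# boundaries and interval values are illustrative, not from the patent.

timing_store = {                     # per-region minimum access interval, ns
    "region0": 50,                   # e.g. slower, denser cells
    "region1": 30,                   # e.g. faster cells
}

def region_of(address):
    """Map a physical address to its memory region (invented boundary)."""
    return "region0" if address < 0x1000 else "region1"

def schedule(requests):
    """Assign each request an issue time honoring its region's timing."""
    next_ready = {r: 0 for r in timing_store}
    plan = []
    for addr in requests:
        region = region_of(addr)
        issue_at = next_ready[region]
        next_ready[region] = issue_at + timing_store[region]
        plan.append((addr, issue_at))
    return plan

plan = schedule([0x0010, 0x2000, 0x0020, 0x2040])
print(plan)   # requests to the fast region issue sooner
```

A controller with one global timing value would have to use the slowest region's 50 ns everywhere; keeping the parameters per region lets accesses to faster regions issue on the 30 ns cadence.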

    Instruction set architecture and software support for register state migration

    Publication number: US10037267B2

    Publication date: 2018-07-31

    Application number: US15299990

    Application date: 2016-10-21

    Abstract: Systems, apparatuses, and methods for migrating execution contexts are disclosed. A system includes a plurality of processing units and memory devices. The system is configured to execute any number of software applications. The system is configured to detect, within a first software application, a primitive for migrating at least a portion of the execution context of a source processing unit to a target processing unit, wherein the primitive includes one or more instructions. The execution context includes a plurality of registers. A first processing unit is configured to execute the one or more instructions of the primitive to cause a portion of an execution context of the first processing unit to be migrated to a second processing unit. In one embodiment, executing the primitive instruction(s) causes an instruction pointer value, with an optional offset value, to be sent to the second processing unit.
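The migration primitive can be sketched as follows. The data layout, method names, and register naming are invented for illustration; what matches the abstract is the behavior: the primitive moves a selected portion of the source unit's register state plus an instruction pointer, with an optional offset, to the target unit.

```python
# Sketch: a 'migrate' primitive copies part of one processing unit's
# execution context (selected registers and an optionally offset
# instruction pointer) to another unit, which can then resume there.

class ProcessingUnit:
    def __init__(self, name):
        self.name = name
        self.registers = {}
        self.ip = 0                  # instruction pointer

    def migrate_to(self, target, reg_names, ip_offset=0):
        """The primitive: send the named registers and the (optionally
        offset) instruction pointer to the target unit."""
        context = {r: self.registers[r] for r in reg_names}
        target.registers.update(context)
        target.ip = self.ip + ip_offset

src = ProcessingUnit("cpu0")
dst = ProcessingUnit("cpu1")
src.registers = {"r0": 7, "r1": 42, "r2": 99}
src.ip = 0x400

# Migrate only the live portion of the context, resuming 4 bytes ahead.
src.migrate_to(dst, ["r0", "r1"], ip_offset=4)
print(hex(dst.ip), sorted(dst.registers))
```

Migrating only the registers the continuation actually needs (here `r0` and `r1`, not `r2`) is what makes the "portion of an execution context" wording in the abstract cheaper than a full context switch.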
