Processor having multiple cores, shared core extension logic, and shared core extension utilization instructions
    Patent type: Invention grant (in force)

    Publication number: US09582287B2

    Publication date: 2017-02-28

    Application number: US13629460

    Filing date: 2012-09-27

    CPC classification number: G06F9/3887 G06F9/30076 G06F9/3879 G06F15/8007

    Abstract: An apparatus of an aspect includes a plurality of cores and shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. Instruction execution logic, for each of the cores, in response to a shared core extension call instruction, is to call the shared core extension logic. The call is to have data processing performed by the shared data processing logic on behalf of a corresponding core. Other apparatus, methods, and systems are also disclosed.
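The call mechanism described in the abstract can be modeled in software. The following is a minimal illustrative sketch, not the patented implementation; all class and method names (`SharedCoreExtension`, `shared_extension_call`, the doubling operation) are hypothetical stand-ins for the shared data processing logic and the call instruction.

```python
# Hypothetical model: several cores coupled to one shared extension unit.
# A "shared core extension call" hands work to the shared unit, which
# performs the data processing on behalf of the calling core.

class SharedCoreExtension:
    """Shared data processing logic used by every core."""
    def process(self, core_id, data):
        # Stand-in for a wide SIMD/accelerator operation done for core_id.
        return [x * 2 for x in data]

class Core:
    def __init__(self, core_id, extension):
        self.core_id = core_id
        self.extension = extension  # coupled shared core extension logic

    def shared_extension_call(self, data):
        # Models the shared core extension call instruction:
        # delegate processing to the shared unit for this core.
        return self.extension.process(self.core_id, data)

ext = SharedCoreExtension()                 # one unit shared by all cores
cores = [Core(i, ext) for i in range(4)]    # multiple cores
results = [c.shared_extension_call([1, 2, 3]) for c in cores]
```

The key property the sketch captures is that every `Core` holds a reference to the same extension object, so the data processing logic exists once and is shared rather than replicated per core.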


    Dynamic fill policy for a shared cache

    Publication number: US10229059B2

    Publication date: 2019-03-12

    Application number: US15476816

    Filing date: 2017-03-31

    Abstract: Technologies are provided in embodiments to dynamically fill a shared cache. At least some embodiments include determining that data requested in a first request for the data by a first processing device is not stored in a cache shared by the first processing device and a second processing device, where a dynamic fill policy is applicable to the first request. Embodiments further include determining to deallocate, based at least in part on a threshold, an entry in a buffer, the entry containing information corresponding to the first request for the data. Embodiments also include sending a second request for the data to a system memory, and sending the data from the system memory to the first processing device. In more specific embodiments, the data from the system memory is not written to the cache based, at least in part, on the determination to deallocate the entry.
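The miss-handling flow in the abstract (deallocate a buffer entry based on a threshold, fetch from system memory, and skip the cache fill when the entry was deallocated) can be sketched as follows. This is an illustrative simplification under assumed names (`handle_miss`, `fill_buffer`), not the embodiment itself.

```python
# Hypothetical sketch of the dynamic fill policy: on a shared-cache miss,
# an entry tracking the request is placed in a buffer; if buffer occupancy
# exceeds a threshold, the entry is deallocated and the fetched data
# bypasses the cache, going only to the requesting device.

def handle_miss(request, cache, fill_buffer, threshold, memory):
    entry = {"req": request}          # information about the first request
    fill_buffer.append(entry)
    # Decision to deallocate, based at least in part on the threshold.
    bypass = len(fill_buffer) > threshold
    data = memory[request["addr"]]    # second request, sent to system memory
    fill_buffer.remove(entry)         # entry deallocated either way here
    if not bypass:
        cache[request["addr"]] = data # normal fill of the shared cache
    return data                       # data always reaches the requester

memory = {0x100: "payload"}
cache, buf = {}, []
# With threshold 0 the buffer is immediately "too full", so the fill
# is bypassed: the requester gets the data, the cache is not written.
d = handle_miss({"addr": 0x100}, cache, buf, threshold=0, memory=memory)
```

The design point the sketch mirrors is that bypassing the fill trades future hit opportunity for reduced pressure on the buffer and the shared cache when many fills are in flight.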

    Management of coherent links and multi-level memory

    Publication number: US10599568B2

    Publication date: 2020-03-24

    Application number: US15948569

    Filing date: 2018-04-09

    Inventor: Eran Shifer

    Abstract: Techniques for managing multi-level memory and coherency using a unified page granular controller can simplify software programming of both file system handling for persistent memory and parallel programming of host and accelerator and enable better software utilization of host processors and accelerators. As part of the management techniques, a line granular controller cooperates with a page granular controller to support both fine grain and coarse grain coherency and maintain overall system inclusion property. In one example, a controller to manage coherency in a system includes a memory data structure and on-die tag cache to store state information to indicate locations of pages in a memory hierarchy and an ownership state for the pages, the ownership state indicating whether the pages are owned by a host processor, owned by an accelerator device, or shared by the host processor and the accelerator device. The controller can also include logic to, in response to a memory access request from the host processor or the accelerator to access a cacheline in a page in a state indicating ownership by a device other than the requesting device, cause the page to transition to a state in which the requesting device owns or shares the page.
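The ownership-state transition described at the end of the abstract (an access to a page owned by the other device causes the page to become owned or shared by the requester) can be sketched with a small state machine. All names here (`PageGranularController`, `Ownership`, `access`) are hypothetical; real controllers would track this state in an on-die tag cache and a memory data structure, as the abstract notes.

```python
# Hypothetical sketch of page-granular ownership tracking: each page is
# owned by the host, owned by the accelerator, or shared by both.
from enum import Enum

class Ownership(Enum):
    HOST = "host"
    ACCEL = "accelerator"
    SHARED = "shared"

class PageGranularController:
    def __init__(self):
        self.state = {}  # page number -> Ownership (models the tag store)

    def access(self, requester, page, want_exclusive=False):
        owner = self.state.get(page, Ownership.SHARED)
        if owner in (requester, Ownership.SHARED) and not want_exclusive:
            return owner  # requester may already proceed
        # Access to a page in a state indicating ownership by the other
        # device: transition so the requester owns or shares the page.
        new_state = requester if want_exclusive else Ownership.SHARED
        self.state[page] = new_state
        return new_state

ctrl = PageGranularController()
ctrl.state[7] = Ownership.ACCEL          # page 7 owned by the accelerator
s1 = ctrl.access(Ownership.HOST, 7)      # host read -> page becomes shared
s2 = ctrl.access(Ownership.HOST, 7, want_exclusive=True)  # -> host-owned
```

The sketch keeps coherency decisions at page granularity; in the described system a separate line-granular controller would handle fine-grain coherency within a page while this level maintains the inclusion property.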
