Updating persistent data in persistent memory-based storage
    Invention Grant (in force)

    Publication number: US09430396B2

    Publication date: 2016-08-30

    Application number: US14579934

    Application date: 2014-12-22

    Abstract: A processor includes a processing core to execute an application including instructions encoding a transaction with a persistent memory via a volatile cache. The volatile cache includes a cache line associated with the transaction, and the cache line is associated with a cache line status. A cache controller is operatively coupled to the volatile cache. In response to detecting a failure event, the cache controller, upon determining that the cache line status indicates that the cache line is committed, evicts the contents of the cache line to the persistent memory, and, upon determining that the cache line status indicates that the cache line is uncommitted, discards the contents of the cache line.

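
The commit-or-discard policy the abstract describes can be sketched as a minimal simulation. This is an illustrative sketch only; the names `CacheLine` and `flush_on_failure` are invented for the example, not taken from the patent:

```python
from dataclasses import dataclass

COMMITTED = "committed"
UNCOMMITTED = "uncommitted"

@dataclass
class CacheLine:
    address: int
    data: bytes
    status: str  # COMMITTED or UNCOMMITTED

def flush_on_failure(cache_lines, persistent_memory):
    """On a failure event, evict committed cache lines to persistent
    memory and discard uncommitted ones."""
    for line in cache_lines:
        if line.status == COMMITTED:
            persistent_memory[line.address] = line.data
        # uncommitted contents are simply discarded
    cache_lines.clear()
    return persistent_memory

pm = {}
lines = [CacheLine(0x10, b"committed write", COMMITTED),
         CacheLine(0x20, b"in-flight write", UNCOMMITTED)]
flush_on_failure(lines, pm)
# only the committed line reaches persistent memory
```

The point of the scheme is that a crash never exposes a half-finished transaction: only cache lines whose status marks them committed are made durable.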

    High-performance input-output devices supporting scalable virtualization

    Publication number: US12164971B2

    Publication date: 2024-12-10

    Application number: US18301733

    Application date: 2023-04-17

    Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances, drawn from a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from a guest pertaining to the virtual device and determines whether the request is a fast-path operation to be passed directly to one of the AI instances of the I/O device or a slow-path operation to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via that software.
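
The fast-path/slow-path split can be sketched as a simple dispatcher. The operation names and the `handle_request` helper below are hypothetical, chosen only to illustrate the routing decision the abstract describes:

```python
# Fast-path operations go straight to an assignable interface (AI)
# instance; everything else is serviced, at least in part, in software.
FAST_PATH_OPS = {"submit_descriptor", "ring_doorbell"}

def handle_request(op, ai_instance, emulate):
    """Route a guest request either to the hardware AI instance or
    to software emulation, per the fast-path/slow-path distinction."""
    if op in FAST_PATH_OPS:
        return ai_instance(op)   # passed directly to the AI instance
    return emulate(op)           # slow path: serviced in software

hw_log, sw_log = [], []
handle_request("ring_doorbell", hw_log.append, sw_log.append)
handle_request("configure_queue", hw_log.append, sw_log.append)
```

The design keeps performance-critical operations on the hardware path while rare control operations tolerate the overhead of software emulation.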

    Highly scalable accelerator
    Invention Publication

    Publication number: US20230251986A1

    Publication date: 2023-08-10

    Application number: US18296875

    Application date: 2023-04-06

    CPC classification number: G06F13/364 G06F9/5027 G06F13/24

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients, and a plurality of engines to perform the work requests. The work requests are dispatched to the engines from a plurality of work queues, each of which stores one work descriptor per work request. Each work descriptor includes all of the information needed to perform the corresponding work request.
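
Because each descriptor is self-contained, any engine can execute any dequeued request without consulting extra per-client state. A hedged sketch of that dispatch model, with invented field names and a simple round-robin policy standing in for whatever arbitration the hardware actually uses:

```python
from collections import deque
from itertools import cycle

def make_descriptor(client_id, opcode, src, dst, length):
    """A self-contained work descriptor: everything an engine needs."""
    return {"client": client_id, "op": opcode,
            "src": src, "dst": dst, "len": length}

def dispatch(work_queue, engines):
    """Round-robin descriptors from a shared work queue to engines."""
    results = []
    for idx in cycle(range(len(engines))):
        if not work_queue:
            break
        desc = work_queue.popleft()
        results.append((idx, engines[idx](desc)))
    return results

engines = [lambda d: f"copied {d['len']} bytes" for _ in range(2)]
q = deque(make_descriptor(c, "memcopy", 0, 0, 64) for c in range(3))
out = dispatch(q, engines)
```

Three requests across two engines land on engines 0, 1, 0 in turn; no engine needs to know which client submitted the work.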

    Address space identifier management in complex input/output virtualization environments

    Publication number: US11269782B2

    Publication date: 2022-03-08

    Application number: US16772765

    Application date: 2018-03-28

    Abstract: Embodiments of this disclosure provide a mechanism to extend a workload instruction to include both untranslated and translated address space identifiers (ASIDs). In one embodiment, a processing device comprising a translation manager is provided. The translation manager receives a workload instruction from a guest application. The workload instruction comprises an untranslated ASID and a workload for an input/output (I/O) device. The untranslated ASID is translated to a translated ASID, and the translated ASID is inserted into a payload of the workload instruction. Thereupon, the payload is provided to a work queue of the I/O device to execute the workload based in part on at least one of the translated ASID or the untranslated ASID.
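
A minimal sketch of carrying both identifiers in the payload, assuming a simple lookup-table translation; `asid_table` and `submit_workload` are illustrative names, not from the patent:

```python
# Hypothetical guest (untranslated) ASID -> host (translated) ASID mapping.
asid_table = {0x5: 0x42}

def submit_workload(untranslated_asid, workload, work_queue):
    """Translate the guest ASID, insert both identifiers into the
    payload, and provide the payload to the I/O device's work queue."""
    translated = asid_table[untranslated_asid]
    payload = {"untranslated_asid": untranslated_asid,
               "translated_asid": translated,
               "workload": workload}
    work_queue.append(payload)
    return payload

wq = []
p = submit_workload(0x5, "dma_copy", wq)
```

Keeping both identifiers lets the device use whichever one is appropriate for the address space it must resolve at execution time.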

    Shared accelerator memory systems and methods

    Publication number: US10817441B2

    Publication date: 2020-10-27

    Application number: US16370587

    Application date: 2019-03-29

    Abstract: The present disclosure is directed to systems and methods for sharing memory circuitry between processor memory circuitry and accelerator memory circuitry in each of a plurality of peer-to-peer connected accelerator units. Each accelerator unit includes virtual-to-physical address translation circuitry and migration circuitry. The address translation circuitry in each accelerator unit includes pages for each of at least some of the plurality of accelerator units. The migration circuitry transfers data between the processor memory circuitry and the accelerator memory circuitry in each accelerator unit, migrating and evicting data to and from accelerator memory circuitry based on statistical information associated with accesses to processor memory circuitry or to accelerator memory circuitry in one or more peer accelerator units. In this way, processor memory circuitry and accelerator memory circuitry may be dynamically allocated to minimize system latency attributable to data access operations.
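
The statistics-driven migration policy can be sketched as a toy rebalancer that keeps hot pages resident in accelerator memory and evicts cold ones. The `rebalance` function, its threshold, and the greedy fill are assumptions made for illustration only:

```python
def rebalance(access_counts, accel_pages, capacity, hot_threshold=4):
    """Return the new set of pages resident in accelerator memory,
    chosen by per-page access counts (the 'statistical information')."""
    hot = {p for p, n in access_counts.items() if n >= hot_threshold}
    # keep currently resident pages that are still hot (cold ones evict),
    # then fill remaining capacity with the hottest non-resident pages
    resident = {p for p in accel_pages if p in hot}
    candidates = sorted(hot - resident,
                        key=lambda p: access_counts[p], reverse=True)
    for p in candidates:
        if len(resident) >= capacity:
            break
        resident.add(p)
    return resident

counts = {"a": 9, "b": 1, "c": 5, "d": 7}
resident = rebalance(counts, accel_pages={"b"}, capacity=2)
```

Here the cold page "b" is evicted and the two hottest pages "a" and "d" are migrated in, which is the dynamic-allocation behavior the abstract attributes to the migration circuitry.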
