APPROACH FOR ENFORCING ORDERING BETWEEN MEMORY-CENTRIC AND CORE-CENTRIC MEMORY OPERATIONS

    Publication (Announcement) No.: US20220317926A1

    Publication (Announcement) Date: 2022-10-06

    Application No.: US17219446

    Filing Date: 2021-03-31

    Abstract: Ordering between memory-centric memory operations, referred to hereinafter as “MC-Mem-Ops,” and core-centric memory operations, referred to hereinafter as “CC-Mem-Ops,” is enforced using inter-centric fences, referred to hereinafter as “IC-fences.” IC-fences are implemented by an ordering primitive or ordering instruction that causes a memory controller, a cache controller, etc., to enforce ordering of MC-Mem-Ops and CC-Mem-Ops throughout the memory pipeline and at the memory controller by not reordering MC-Mem-Ops (or sometimes CC-Mem-Ops) that arrive before the IC-fence to after the IC-fence. Processing of an IC-fence also causes the memory controller to issue an ordering acknowledgment to the thread that issued the IC-fence instruction. IC-fences are tracked at the core and designated as complete when the ordering acknowledgment is received. Embodiments include a completion level-specific cache flush operation which, when used with an IC-fence, provides proper ordering between cached CC-Mem-Ops and MC-Mem-Ops with reduced data transfer and completion times.
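    The C++ sketch below is a minimal software model of the fence semantics described in this abstract, not the patented hardware; names such as MemOp, Controller, and drain() are assumptions. A FIFO drain stands in for the controller's rule that no MC-Mem-Op arriving before an IC-fence may be reordered to after it, and completing the fence produces the ordering acknowledgment the issuing core is waiting on.

        #include <cstdio>
        #include <deque>
        #include <string>

        // Illustrative names only: MemOp, Controller, and drain() are
        // stand-ins for the controller-side mechanism in the abstract.
        enum class Kind { MC, CC, ICFence };

        struct MemOp {
            Kind kind;
            int thread;         // id of the issuing thread
            std::string label;
        };

        class Controller {
            std::deque<MemOp> queue;  // a real controller may reorder entries,
                                      // but never moves an op that arrived
                                      // before an IC-fence to after it
        public:
            void issue(const MemOp& op) { queue.push_back(op); }

            void drain() {
                while (!queue.empty()) {
                    MemOp op = queue.front();
                    queue.pop_front();
                    if (op.kind == Kind::ICFence) {
                        // Completing the fence sends an ordering acknowledgment
                        // to the issuing thread; the core tracks the fence and
                        // marks it complete when the acknowledgment arrives.
                        std::printf("ordering ack -> thread %d\n", op.thread);
                    } else {
                        std::printf("perform %s\n", op.label.c_str());
                    }
                }
            }
        };

        int main() {
            Controller mc;
            mc.issue({Kind::MC, 0, "MC-Mem-Op A"});
            mc.issue({Kind::ICFence, 0, "fence"});
            mc.issue({Kind::MC, 0, "MC-Mem-Op B"});  // must stay after the fence
            mc.drain();
        }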

    Device and method for accelerating matrix multiply operations

    Publication (Announcement) No.: US10956536B2

    Publication (Announcement) Date: 2021-03-23

    Application No.: US16176662

    Filing Date: 2018-10-31

    Abstract: A processing device is provided which comprises memory configured to store data and a plurality of processor cores in communication with each other via first and second hierarchical communication links. Processor cores of a first hierarchical processor core group are in communication with each other via the first hierarchical communication links and are configured to store, in the memory, a sub-portion of data of a first matrix and a sub-portion of data of a second matrix. The processor cores are also configured to determine a product of the sub-portion of data of the first matrix and the sub-portion of data of the second matrix, receive, from another processor core, another sub-portion of data of the second matrix and determine a product of the sub-portion of data of the first matrix and the other sub-portion of data of the second matrix.
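    The rotation pattern in the abstract can be sketched as follows; this is a sequential C++ simulation under assumed dimensions (4x4 matrices, a group of two cores), with an ordinary loop over cores standing in for the hierarchical communication links. Each simulated core owns one block-row of the first matrix, multiplies it by the block-column of the second matrix it currently holds, and then the block-columns rotate one position around the group so every core eventually sees every sub-portion.

        #include <cstdio>
        #include <vector>

        constexpr int N = 4;         // matrix dimension (assumed)
        constexpr int P = 2;         // cores per group (assumed)
        constexpr int Bdim = N / P;  // block edge

        using Mat = std::vector<double>;  // row-major N*N

        int main() {
            Mat A(N * N), B(N * N), C(N * N, 0.0);
            for (int i = 0; i < N * N; ++i) { A[i] = i + 1; B[i] = (i % N) + 1; }

            // ownedB[c] records which block-column of B core c currently holds.
            int ownedB[P] = {0, 1};

            for (int step = 0; step < P; ++step) {
                for (int c = 0; c < P; ++c) {
                    int br = c;            // core c owns block-row br of A and C
                    int bc = ownedB[c];    // block-column of B currently held
                    for (int i = 0; i < Bdim; ++i)
                        for (int j = 0; j < Bdim; ++j) {
                            double s = 0.0;
                            for (int k = 0; k < N; ++k)
                                s += A[(br * Bdim + i) * N + k] *
                                     B[k * N + bc * Bdim + j];
                            C[(br * Bdim + i) * N + bc * Bdim + j] = s;
                        }
                }
                // Ring rotation: each core hands its B block-column to the
                // next core in the group over the first-level links.
                for (int c = 0; c < P; ++c) ownedB[c] = (ownedB[c] + 1) % P;
            }

            for (int i = 0; i < N; ++i) {
                for (int j = 0; j < N; ++j) std::printf("%6.0f", C[i * N + j]);
                std::printf("\n");
            }
        }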

    Device and method for accelerating matrix multiply operations as a sum of outer products

    Publication (Announcement) No.: US10902087B2

    Publication (Announcement) Date: 2021-01-26

    Application No.: US16176678

    Filing Date: 2018-10-31

    Abstract: A processing device is provided which includes memory and a processor comprising a plurality of processor cores in communication with each other via first and second hierarchical communication links. Each processor core in a group of the processor cores is in communication with each other via the first hierarchical communication links. Each processor core is configured to store, in the memory, one of a plurality of sub-portions of data of a first matrix, store, in the memory, one of a plurality of sub-portions of data of a second matrix, determine an outer product of the sub-portion of data of the first matrix and the sub-portion of data of the second matrix, receive, from another processor core of the group of processor cores, another sub-portion of data of the second matrix and determine another outer product of the sub-portion of data of the first matrix and the other sub-portion of data of the second matrix.
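    For reference, the identity this title relies on is that a matrix product can be written as a sum of outer products, C = Σ_k A[:,k]·B[k,:]. The short C++ sketch below demonstrates only that identity; the distribution of k-slices across cores and the rotation of second-matrix sub-portions between cores are noted in comments rather than modeled.

        #include <cstdio>

        constexpr int N = 3;

        int main() {
            double A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
            double B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
            double C[N][N] = {};

            // A*B as a sum of outer products: C = sum_k A[:,k] * B[k,:].
            // In the patented scheme, each k-slice (a column of A and the
            // matching row of B) would live on a different core, with B rows
            // passed between cores; here the loop over k plays that role
            // sequentially.
            for (int k = 0; k < N; ++k)
                for (int i = 0; i < N; ++i)
                    for (int j = 0; j < N; ++j)
                        C[i][j] += A[i][k] * B[k][j];

            for (int i = 0; i < N; ++i) {
                for (int j = 0; j < N; ++j) std::printf("%5.0f", C[i][j]);
                std::printf("\n");
            }
        }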

    NEAR-MEMORY DATA-DEPENDENT GATHER AND PACKING

    Publication (Announcement) No.: US20200081651A1

    Publication (Announcement) Date: 2020-03-12

    Application No.: US16123837

    Filing Date: 2018-09-06

    Abstract: Methods, systems, and devices for near-memory data-dependent gathering and packing of data stored in a memory. A processing device extracts a function, a memory source address, and a memory destination address from a near-memory data-dependent gathering and packing primitive. A signal to perform gathering and packing operations based on the primitive is sent to near-memory processing circuitry of a memory device. The near-memory processing circuitry receives the signal, gathers data from the memory device based on the function and the memory source address, and packs the gathered data into the memory device based on the memory destination address.
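    A host-side software model of the primitive might look like the following; GatherPackCmd, gather_pack(), and the flat mem vector are illustrative stand-ins for the extracted function, memory source address, and memory destination address, with gather_pack() playing the role of the near-memory processing circuitry that receives the signal.

        #include <cstdint>
        #include <cstdio>
        #include <functional>
        #include <vector>

        // Hypothetical host-side view of the primitive: the selection
        // function and the source/destination addresses travel together.
        struct GatherPackCmd {
            std::function<bool(int32_t)> keep;  // data-dependent selection
            size_t src, dst, count;
        };

        // Stands in for the near-memory circuitry: gather elements the
        // function selects and pack them contiguously at the destination.
        void gather_pack(std::vector<int32_t>& mem, const GatherPackCmd& cmd) {
            size_t out = cmd.dst;
            for (size_t i = 0; i < cmd.count; ++i) {
                int32_t v = mem[cmd.src + i];
                if (cmd.keep(v)) mem[out++] = v;
            }
        }

        int main() {
            std::vector<int32_t> mem = {3, -1, 7, 0, -5, 9, 0, 0, 0, 0};
            GatherPackCmd cmd{[](int32_t v) { return v > 0; },
                              /*src=*/0, /*dst=*/6, /*count=*/6};
            gather_pack(mem, cmd);
            for (size_t i = 6; i < mem.size(); ++i) std::printf("%d ", mem[i]);
            std::printf("\n");  // prints: 3 7 9 0  (positives packed at dst)
        }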

    Approach for performing efficient memory operations using near-memory compute elements

    Publication (Announcement) No.: US12235756B2

    Publication (Announcement) Date: 2025-02-25

    Application No.: US17557568

    Filing Date: 2021-12-21

    Abstract: Near-memory compute elements perform memory operations and temporarily store at least a portion of address information for the memory operations in local storage. A broadcast memory command is then issued to the near-memory compute elements that causes the near-memory compute elements to perform a subsequent memory operation using their respective address information stored in the local storage. This allows a single broadcast memory command to be used to perform memory operations across multiple memory elements, such as DRAM banks, using bank-specific address information. In one implementation, the approach is used to process workloads with irregular updates to memory while consuming less command bus bandwidth than conventional approaches. Implementations include using conditional flags to selectively designate address information in local storage that is to be processed with the broadcast memory command.
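    The sketch below models the broadcast mechanism in plain C++ under assumed names (Bank, savedAddr, flagged): each bank records address information and a conditional flag in local storage during earlier operations, and a single broadcast command then performs a bank-specific update at each flagged bank's own stored address, which is how one command on the bus drives per-bank work.

        #include <cstdio>
        #include <vector>

        // Assumed per-bank local storage: the last relevant address plus a
        // conditional flag that opts the bank in to the next broadcast.
        struct Bank {
            std::vector<int> rows = std::vector<int>(8, 0);
            int savedAddr = 0;
            bool flagged = false;
        };

        int main() {
            std::vector<Bank> banks(4);

            // Earlier per-bank memory operations also deposit their address
            // (and set the flag) in each bank's local storage.
            int addrs[4] = {2, 5, 1, 7};
            for (int b = 0; b < 4; ++b) {
                banks[b].savedAddr = addrs[b];
                banks[b].flagged = (b != 3);  // bank 3 opts out this round
            }

            // One broadcast command: every flagged bank increments the row
            // at its own stored address, so bank-specific updates happen
            // without bank-specific commands on the bus.
            for (auto& bank : banks)
                if (bank.flagged) bank.rows[bank.savedAddr] += 1;

            for (int b = 0; b < 4; ++b)
                std::printf("bank %d row %d -> %d\n", b, banks[b].savedAddr,
                            banks[b].rows[banks[b].savedAddr]);
        }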

    APPROACH FOR MANAGING NEAR-MEMORY PROCESSING COMMANDS FROM MULTIPLE PROCESSOR THREADS TO PREVENT INTERFERENCE AT NEAR-MEMORY PROCESSING ELEMENTS

    Publication (Announcement) No.: US20240004653A1

    Publication (Announcement) Date: 2024-01-04

    Application No.: US17853613

    Filing Date: 2022-06-29

    CPC classification number: G06F9/3009 G06F9/3004 G06F9/30101

    Abstract: An approach is provided for managing near-memory processing commands (“PIM commands”) from multiple processor threads in a manner that prevents interference and maintains correctness at near-memory processing elements. A memory controller uses thread identification information and last-command information to issue a PIM command sequence from a first processor thread, directed to a PIM-enabled memory element, while deferring the issuance of PIM command sequences from other processor threads directed to the same PIM-enabled memory element. After the last PIM command in the PIM command sequence for the first processor thread has been issued, a PIM command sequence for another processor thread is issued, and so on. The approach allows multiple processor threads to concurrently issue fine-grained PIM commands to the same PIM-enabled memory element without having to be aware of address-to-memory-element mapping, and without having to coordinate with other threads.
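    A minimal software model of the arbitration policy, with assumed names (PimCmd, the last marker, perThread): the controller drains one thread's PIM command sequence to completion, identified by thread id and a last-command marker, before issuing a deferred sequence from another thread targeting the same PIM-enabled element.

        #include <cstdio>
        #include <deque>
        #include <map>

        // Each command carries its thread id and whether it ends that
        // thread's PIM command sequence (the last-command information).
        struct PimCmd {
            int thread;
            const char* op;
            bool last;
        };

        int main() {
            // Pending PIM commands, queued per thread, all targeting the
            // same PIM-enabled memory element in this toy example.
            std::map<int, std::deque<PimCmd>> perThread;
            perThread[0] = {{0, "pim_load", false}, {0, "pim_add", false},
                            {0, "pim_store", true}};
            perThread[1] = {{1, "pim_load", false}, {1, "pim_mul", true}};

            int active = -1;  // thread currently owning the PIM element
            while (!perThread.empty()) {
                if (active < 0) active = perThread.begin()->first;
                PimCmd cmd = perThread[active].front();
                perThread[active].pop_front();
                std::printf("issue thread %d: %s\n", cmd.thread, cmd.op);
                if (cmd.last) {                // sequence complete: release
                    perThread.erase(active);   // the element so a deferred
                    active = -1;               // thread's sequence can issue
                }
            }
        }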
