PROCESSING-IN-MEMORY CONCURRENT PROCESSING SYSTEM AND METHOD

    Publication number: US20220318012A1

    Publication date: 2022-10-06

    Application number: US17217792

    Application date: 2021-03-30

    Abstract: A processing system includes a processing unit and a memory device. The memory device includes a processing-in-memory (PIM) module that performs processing operations on behalf of the processing unit. An instruction set architecture (ISA) of the PIM module has fewer instructions than an ISA of the processing unit. Instructions received from the processing unit are translated such that processing resources of the PIM module are virtualized. As a result, the PIM module concurrently performs processing operations for multiple threads or applications of the processing unit.
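The mechanism in the abstract (a reduced PIM ISA plus translation that virtualizes PIM resources across threads) can be illustrated with a minimal sketch. All names, the instruction sets, and the bank-assignment policy below are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: a PIM module exposes a reduced ISA (fewer
# instructions than the host processing unit's ISA). Host instructions
# are translated into PIM instruction sequences, and a simple
# virtual-to-physical mapping lets multiple threads share the PIM
# resources concurrently.

# Reduced PIM ISA: far fewer instructions than the host ISA.
PIM_ISA = {"LOAD", "STORE", "ADD", "MUL"}

# Assumed translation table: richer host instructions lowered to
# sequences of PIM instructions.
HOST_TO_PIM = {
    "FMA": ["MUL", "ADD"],   # fused multiply-add lowered to two PIM ops
    "ADD": ["ADD"],
    "LOAD": ["LOAD"],
    "STORE": ["STORE"],
}

NUM_PIM_BANKS = 4  # assumed number of physical PIM processing resources

def translate(thread_id, host_instr):
    """Translate one host instruction into PIM instructions and assign
    a physical PIM bank, virtualizing PIM resources per thread."""
    pim_ops = HOST_TO_PIM[host_instr]
    assert all(op in PIM_ISA for op in pim_ops)
    bank = thread_id % NUM_PIM_BANKS  # toy virtual-to-physical mapping
    return [(bank, op) for op in pim_ops]

# Two threads issue instructions concurrently; each maps to its own bank.
print(translate(0, "FMA"))   # [(0, 'MUL'), (0, 'ADD')]
print(translate(1, "ADD"))   # [(1, 'ADD')]
```

A real PIM controller would track bank occupancy and in-flight operations; the modulo mapping here only shows how translation decouples the threads' view of the PIM module from its physical resources.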

    MEMORY LATENCY-AWARE GPU ARCHITECTURE

    Publication number: US20220092724A1

    Publication date: 2022-03-24

    Application number: US17030024

    Application date: 2020-09-23

    Abstract: One or more processing units, such as a graphics processing unit (GPU), execute an application. A resource manager selectively allocates a first memory portion or a second memory portion to the processing units based on memory access characteristics. The first memory portion has a first latency that is lower than a second latency of the second memory portion. In some cases, the memory access characteristics indicate a latency sensitivity. In some cases, hints included in corresponding program code are used to determine the memory access characteristics. The memory access characteristics can also be determined by monitoring memory access requests, measuring a cache miss rate or a row buffer miss rate for the monitored memory access requests, and determining the memory access characteristics based on the cache miss rate or the row buffer miss rate.
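The allocation policy in the abstract (choose the lower-latency memory portion for latency-sensitive work, using program-code hints or a measured cache/row-buffer miss rate) can be sketched as follows. The function names, the hint values, and the miss-rate threshold are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of a latency-aware resource manager: it allocates
# the low-latency memory portion when a program hint or a measured miss
# rate indicates the workload is latency sensitive.

MISS_RATE_THRESHOLD = 0.3  # assumed cutoff for "latency sensitive"

def measure_miss_rate(accesses):
    """Fraction of monitored memory access requests that missed
    (each entry is True for a hit, False for a miss)."""
    misses = sum(1 for hit in accesses if not hit)
    return misses / len(accesses)

def allocate_memory_portion(accesses, hint=None):
    """Return which memory portion to allocate: 'low_latency' or
    'high_latency'. A hint from program code, if present, determines
    the characteristics; otherwise they come from monitoring."""
    if hint is not None:
        return "low_latency" if hint == "latency_sensitive" else "high_latency"
    miss_rate = measure_miss_rate(accesses)
    # A high miss rate means requests frequently reach memory, so the
    # workload benefits from the lower-latency portion.
    return "low_latency" if miss_rate > MISS_RATE_THRESHOLD else "high_latency"

trace = [True, False, False, True, False]  # monitored hits/misses
print(allocate_memory_portion(trace))                    # miss rate 0.6 -> 'low_latency'
print(allocate_memory_portion(trace, hint="throughput")) # hint overrides -> 'high_latency'
```

The same structure applies if the monitored statistic is a row buffer miss rate instead of a cache miss rate; only `measure_miss_rate`'s input source changes.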
