METHODS FOR PERFORMING PROCESSING-IN-MEMORY OPERATIONS, AND RELATED SYSTEMS

    Publication No.: US20240192953A1

    Publication Date: 2024-06-13

    Application No.: US18582520

    Application Date: 2024-02-20

    CPC classification number: G06F9/3001 G06F7/5443 G06F9/30032 G06F9/30043

    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
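
    The carry-propagating XOR/AND scheme mentioned in the abstract can be pictured with a small Python sketch. This is only a software model of the idea, not the patented circuitry; the function name and the LSB-first bit ordering are illustrative assumptions.

# Illustrative sketch only: a software model of single-pass addition over two
# fetched bit strings, using the XOR/AND relations that the abstract places in
# logic near the memory array. Names and bit ordering are hypothetical.

def pim_add(bits_a, bits_b):
    """Add two equal-length bit strings (LSB first) in one pass.

    Each step uses XOR for the sum bit and AND/OR to update the carry flag,
    standing in for the near-array logic described in the abstract.
    """
    result = []
    carry = 0
    for a, b in zip(bits_a, bits_b):          # one "fetch" per bit position
        total = a ^ b ^ carry                 # XOR logic produces the sum bit
        carry = (a & b) | (carry & (a ^ b))   # AND/OR logic updates the carry flag
        result.append(total)                  # emitted without storing the operands again
    result.append(carry)                      # final carry bit, as a register might hold it
    return result

# Example: 6 (binary 110) + 3 (binary 011), LSB first -> 9 (binary 1001)
print(pim_add([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1]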

    Distributed Computing based on Memory as a Service

    Publication No.: US20230004502A1

    Publication Date: 2023-01-05

    Application No.: US17943739

    Application Date: 2022-09-13

    Abstract: Systems, methods and apparatuses of distributed computing based on memory as a service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
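
    A rough Python sketch of this mapping flow follows. It models only the bookkeeping implied by the abstract (which device backs a virtual address region during each period); the class, method, and device names are hypothetical and not taken from the disclosure.

# Hypothetical sketch: a device runs an application against a fixed virtual
# address region, backs that region locally while the application executes,
# then re-maps the region to a peer's memory and asks the peer to process
# the data in place. All names are illustrative.

class Device:
    def __init__(self, name):
        self.name = name
        self.local_memory = {}   # virtual address -> value, for regions backed here
        self.region_map = {}     # virtual region -> device currently backing it

    def map_region(self, region, backing_device):
        """Point a virtual memory address region at a device's local memory."""
        self.region_map[region] = backing_device

    def request_processing(self, region, func):
        """Ask whichever device backs the region to process its data in place."""
        backer = self.region_map[region]
        for addr in region:
            if addr in backer.local_memory:
                backer.local_memory[addr] = func(backer.local_memory[addr])

# First period: the application runs on device_a with the region mapped locally.
device_a, device_b = Device("a"), Device("b")
region = range(0x1000, 0x1004)
device_a.map_region(region, device_a)
for addr in region:
    device_a.local_memory[addr] = addr & 0xFF

# Second period: the same region is re-mapped to device_b's memory, and
# device_a requests device_b to process the data held there.
device_b.local_memory = dict(device_a.local_memory)   # stand-in for migrating the data
device_a.map_region(region, device_b)
device_a.request_processing(region, lambda v: v + 1)
print(device_b.local_memory[0x1000])  # 1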

    USER INTERFACE BASED PAGE MIGRATION FOR PERFORMANCE ENHANCEMENT

    Publication No.: US20220413919A1

    Publication Date: 2022-12-29

    Application No.: US17898164

    Application Date: 2022-08-29

    Abstract: Enhancement or reduction of page migration can include operations that include scoring, in a computing device, each executable of at least a first group and a second group of executables in the computing device. The executables can be related to user interface elements of applications and associated with pages of memory in the computing device. For each executable, the scoring can be based at least partly on an amount of user interface elements using the executable. The first group can be located at first pages of the memory, and the second group can be located at second pages. When the scoring of the executables in the first group is higher than the scoring of the executables in the second group, the operations can include allocating or migrating the first pages to a first type of memory, and allocating or migrating the second pages to a second type of memory.
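
    The scoring and migration decision lends itself to a short sketch. The Python below is a hypothetical rendering of that comparison; the memory type labels ("dram", "nvm") and the data layout are assumptions, not terms from the claims.

# Illustrative sketch (not the claimed implementation): score each executable
# by how many user interface elements use it, compare the group scores, and
# assign the higher-scoring group's pages to the faster memory type.

def score(executable, ui_elements):
    """Score is based at least partly on how many UI elements use the executable."""
    return sum(1 for elem in ui_elements if elem["uses"] == executable)

def plan_migration(first_group, second_group, ui_elements):
    first_score = sum(score(e, ui_elements) for e in first_group)
    second_score = sum(score(e, ui_elements) for e in second_group)
    if first_score > second_score:
        return {"first_pages": "dram", "second_pages": "nvm"}   # hypothetical memory types
    return {"first_pages": "nvm", "second_pages": "dram"}

ui_elements = [
    {"id": "button1", "uses": "render.so"},
    {"id": "list1", "uses": "render.so"},
    {"id": "bg_task", "uses": "sync.so"},
]
print(plan_migration(["render.so"], ["sync.so"], ui_elements))
# {'first_pages': 'dram', 'second_pages': 'nvm'}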

    RECONFIGURABLE PROCESSING-IN-MEMORY LOGIC

    Publication No.: US20220284936A1

    Publication Date: 2022-09-08

    Application No.: US17752430

    Application Date: 2022-05-24

    Inventor: Dmitri Yudanov

    Abstract: An example system implementing a processing-in-memory pipeline includes: a memory array to store data in a plurality of memory cells electrically coupled to a plurality of wordlines and a plurality of bitlines; a logic array coupled to the memory array, the logic array to implement configurable logic controlling the plurality of memory cells; and a control block coupled to the memory array and the logic array, the control block to control a computational pipeline to perform computations on the data by activating at least one of: one or more bitlines of the plurality of bitlines or one or more wordlines of the plurality of wordlines.
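
    A toy software model of the described pipeline is sketched below, purely to make the roles of the three blocks concrete: a memory array indexed by wordlines and bitlines, a logic array configured with a boolean function, and a control block that activates wordlines in sequence. The class and parameter names are invented for illustration and do not come from the disclosure.

# Toy model (assumptions, not the patent's circuit): memory[wordline][bitline]
# holds the stored bits, logic_fn is the configurable logic, and run() plays
# the role of the control block stepping the computational pipeline.

class PimPipeline:
    def __init__(self, memory, logic_fn):
        self.memory = memory        # memory[wordline][bitline] -> bit
        self.logic_fn = logic_fn    # configurable logic, e.g. AND or XOR

    def run(self, wordlines, bitlines):
        """Control block: activate wordlines in sequence, reduce along the active bitlines."""
        accum = None
        for wl in wordlines:        # one pipeline stage per activated wordline
            row = [self.memory[wl][bl] for bl in bitlines]
            accum = row if accum is None else [self.logic_fn(a, b) for a, b in zip(accum, row)]
        return accum

memory = [
    [1, 0, 1, 1],   # wordline 0
    [1, 1, 0, 1],   # wordline 1
]
pipeline = PimPipeline(memory, logic_fn=lambda a, b: a & b)   # configured as AND
print(pipeline.run(wordlines=[0, 1], bitlines=[0, 1, 2, 3]))  # [1, 0, 0, 1]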

    ADDRESS MAPPING BETWEEN SHARED MEMORY MODULES AND CACHE SETS

    Publication No.: US20220283936A1

    Publication Date: 2022-09-08

    Application No.: US17752142

    Application Date: 2022-05-24

    Inventor: Dmitri Yudanov

    Abstract: A memory module system with a global shared context. A memory module system can include a plurality of memory modules and at least one processor, which can implement the global shared context. The memory modules of the system can provide the global shared context at least in part by providing an address space shared between the modules and applications running on the modules. The address space sharing can be achieved by having logical addresses global to the modules, and each logical address can be associated with a certain physical address of a specific module.
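
    One simple way to picture "logical addresses global to the modules, each associated with a physical address of a specific module" is to pack a module identifier and a physical offset into a single logical address. The Python sketch below assumes that packing; the field widths are illustrative and not specified by the abstract.

# Minimal sketch of the described address-space sharing, under the assumption
# that a global logical address decomposes into a module id plus an offset
# into that module's physical memory. Field widths are hypothetical.

MODULE_BITS = 4     # assumed: upper bits select the memory module
OFFSET_BITS = 28    # assumed: lower bits are the physical offset within the module

def split_logical(logical_addr):
    """Map a global logical address to (module id, physical address)."""
    module_id = logical_addr >> OFFSET_BITS
    physical_addr = logical_addr & ((1 << OFFSET_BITS) - 1)
    return module_id, physical_addr

def join_logical(module_id, physical_addr):
    """Build the global logical address shared by the modules and applications."""
    return (module_id << OFFSET_BITS) | physical_addr

logical = join_logical(module_id=3, physical_addr=0x1F00)
print(hex(logical), split_logical(logical))   # 0x30001f00 -> module 3, offset 0x1F00 (7936)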

    SEARCH AND MATCH OPERATIONS IN SPIKING NEURAL NETWORKS

    Publication No.: US20220156549A1

    Publication Date: 2022-05-19

    Application No.: US16951888

    Application Date: 2020-11-18

    Inventor: Dmitri Yudanov

    Abstract: The present disclosure is directed to search and match operations of a spiking neural network (SNN) that performs in-memory operations. To model a computer-implemented SNN after a biological neural network, the architecture in the present disclosure involves different memory sections for storing inbound spike messages, synaptic connection data, and synaptic connection parameters. The section of memory containing synaptic connection data is searched to identify matching inbound spike messages. Various embodiments are directed to an efficient search and match operation performed in memory to determine targeted synaptic connections.
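
    The search-and-match step can be pictured with a small Python model of the three memory sections named above. The dictionaries, field names, and weight values are illustrative assumptions; in the disclosure this search is performed in memory rather than in host software.

# Illustrative model (not the in-memory circuitry): three "memory sections"
# hold inbound spike messages, synaptic connection data, and synaptic
# parameters; a search over the connection section finds the synapses
# targeted by each inbound spike. Data layout and field names are assumed.

inbound_spikes = [  # section 1: inbound spike messages (source neuron ids)
    {"source": 7},
    {"source": 12},
]
connections = [     # section 2: synaptic connection data (source -> target)
    {"source": 7, "target": 3, "param_index": 0},
    {"source": 9, "target": 3, "param_index": 1},
    {"source": 12, "target": 5, "param_index": 2},
]
parameters = [0.8, 0.1, 0.4]   # section 3: synaptic parameters (e.g., weights)

def search_and_match(spikes, connections, parameters):
    """Return (target neuron, weight) for every connection matching an inbound spike."""
    sources = {spike["source"] for spike in spikes}
    return [
        (conn["target"], parameters[conn["param_index"]])
        for conn in connections
        if conn["source"] in sources      # the match test the disclosure performs in memory
    ]

print(search_and_match(inbound_spikes, connections, parameters))
# [(3, 0.8), (5, 0.4)]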
