MEMORY MODULE WITH COMPUTATION CAPABILITY

    Publication No.: US20220107907A1

    Publication Date: 2022-04-07

    Application No.: US17554400

    Filing Date: 2021-12-17

    Inventor: Dmitri Yudanov

    Abstract: A memory module having a plurality of memory chips, at least one controller (e.g., a central processing unit or special-purpose controller), and at least one interface device configured to communicate input and output data for the memory module. The input and output data bypasses at least one processor (e.g., a central processing unit) of a computing device in which the memory module is installed. The at least one interface device can also be configured to communicate the input and output data to at least one other memory module in the computing device, and the memory module can be one module in a plurality of memory modules of a memory module system.

    Matching patterns in memory arrays
    Granted Patent

    Publication No.: US11276463B2

    Publication Date: 2022-03-15

    Application No.: US16902685

    Filing Date: 2020-06-16

    Inventor: Dmitri Yudanov

    Abstract: Systems and methods for performing a pattern matching operation in a memory device are disclosed. The memory device may include a controller and memory arrays, where the arrays store different patterns along bit lines. An input pattern is applied to the memory array(s) to determine whether the pattern is stored in the memory device. Word lines may be activated in series or in parallel to search for patterns within the memory array. The memory array may include memory cells that store binary digits, discrete values, or analog values.
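
    The column-wise search described in this abstract can be sketched in software. This is a minimal illustrative model, not the patent's circuit: the array layout, function names, and serial word-line activation loop are assumptions for demonstration only.

```python
# Hypothetical software model of bit-line pattern matching: each column
# of the memory array stores one pattern, and activating word lines
# (rows) one at a time compares an input pattern against all columns
# at once; columns that mismatch drop out of the candidate set.

def match_columns(memory_array, input_pattern):
    """Return indices of columns whose stored bits equal input_pattern.

    memory_array: list of rows, each row a list of bits (one per column).
    input_pattern: list of bits, one per word line (row).
    """
    num_cols = len(memory_array[0])
    candidates = set(range(num_cols))
    # Activate word lines in series; prune mismatching columns.
    for row_bits, wanted in zip(memory_array, input_pattern):
        candidates = {c for c in candidates if row_bits[c] == wanted}
    return sorted(candidates)

# Three columns store the patterns 101, 110, 101 (read top to bottom).
array = [
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
]
print(match_columns(array, [1, 0, 1]))  # columns 0 and 2 match
```

    A parallel word-line search would evaluate all rows at once in hardware; the serial loop above only models the matching result, not the timing.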

    Memory module with computation capability

    Publication No.: US11232049B2

    Publication Date: 2022-01-25

    Application No.: US16713989

    Filing Date: 2019-12-13

    Inventor: Dmitri Yudanov

    Abstract: A memory module having a plurality of memory chips, at least one controller (e.g., a central processing unit or special-purpose controller), and at least one interface device configured to communicate input and output data for the memory module. The input and output data bypasses at least one processor (e.g., a central processing unit) of a computing device in which the memory module is installed. The at least one interface device can also be configured to communicate the input and output data to at least one other memory module in the computing device, and the memory module can be one module in a plurality of memory modules of a memory module system.

    RECONFIGURABLE PROCESSING-IN-MEMORY LOGIC USING LOOK-UP TABLES

    Publication No.: US20220019442A1

    Publication Date: 2022-01-20

    Application No.: US16932524

    Filing Date: 2020-07-17

    Inventor: Dmitri Yudanov

    Abstract: An example system implementing a processing-in-memory pipeline includes: a memory array to store a plurality of look-up tables (LUTs) and data; a control block coupled to the memory array, the control block to control a computational pipeline by activating one or more LUTs of the plurality of LUTs; and a logic array coupled to the memory array and the control block, the logic array to perform, based on control inputs received from the control block, logic operations on the activated LUTs and the data.
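
    The reconfigurability described here, where the selected look-up table determines the operation, can be illustrated with a small model. The tables, function names, and two-stage pipeline below are illustrative assumptions, not the patent's logic-array design.

```python
# Illustrative model of reconfigurable logic via look-up tables (LUTs):
# a 2-input LUT is a 4-entry truth table indexed by the input bits, so
# activating a different table reconfigures the operation without any
# change to the surrounding logic.

AND_LUT = [0, 0, 0, 1]   # indexed by (a << 1) | b
XOR_LUT = [0, 1, 1, 0]

def lut_apply(lut, a, b):
    """Evaluate a 2-input LUT on bits a and b."""
    return lut[(a << 1) | b]

def pipeline(a, b, c):
    """Tiny two-stage pipeline computing (a XOR b) AND c, controlled
    purely by which LUTs are activated at each stage."""
    stage1 = lut_apply(XOR_LUT, a, b)
    return lut_apply(AND_LUT, stage1, c)

print(pipeline(1, 0, 1))  # (1 XOR 0) AND 1 = 1
```

    Swapping `XOR_LUT` for another table changes the computed function while the pipeline structure stays fixed, which is the essence of LUT-based reconfiguration.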

    Memory Management Unit (MMU) for Accessing Borrowed Memory

    Publication No.: US20210342274A1

    Publication Date: 2021-11-04

    Application No.: US17375455

    Filing Date: 2021-07-14

    Abstract: Systems, methods, and apparatuses to accelerate access to borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device, and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
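
    The translation step in this abstract can be sketched as follows. This is a hedged software model, assuming a simple page table and using a plain dictionary to stand in for the communication device and the remote lender's memory; the class and field names are hypothetical.

```python
# Sketch of an MMU whose page table maps each virtual page either to a
# local physical frame or to a frame in memory borrowed from a remote
# device. A dict stands in for the network-accessed borrowed memory.

PAGE_SIZE = 4096

class BorrowedMemoryMMU:
    def __init__(self, local_mem, remote_mem):
        self.local_mem = local_mem      # frame -> local page contents
        self.remote_mem = remote_mem    # stand-in for borrowed remote memory
        self.page_table = {}            # virtual page -> ("local"|"remote", frame)

    def map_page(self, vpage, location, frame):
        self.page_table[vpage] = (location, frame)

    def read(self, vaddr):
        # Translate the virtual address, then route the access either to
        # local RAM or through the "communication device" to the lender.
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        location, frame = self.page_table[vpage]
        backing = self.local_mem if location == "local" else self.remote_mem
        return backing[frame][offset]

mmu = BorrowedMemoryMMU(
    local_mem={0: bytes([7]) + bytes(PAGE_SIZE - 1)},
    remote_mem={5: bytes([9]) + bytes(PAGE_SIZE - 1)},
)
mmu.map_page(0, "local", 0)    # virtual page 0 backed by local RAM
mmu.map_page(1, "remote", 5)   # virtual page 1 backed by borrowed memory
print(mmu.read(0))             # 7, served locally
print(mmu.read(PAGE_SIZE))     # 9, routed to the remote lender
```

    The application sees one flat virtual address space either way; only the MMU's routing decision differs.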

    DISTRIBUTED GRAPHICS PROCESSOR UNIT ARCHITECTURE

    Publication No.: US20210334234A1

    Publication Date: 2021-10-28

    Application No.: US16855879

    Filing Date: 2020-04-22

    Inventor: Dmitri Yudanov

    Abstract: The present disclosure is directed to a distributed graphics processing unit (GPU) architecture that includes an array of processing nodes. Each processing node may include a GPU node that is coupled to its own fast memory unit and its own storage unit. The fast memory unit and storage unit may be integrated into a single unit or may be separately coupled to the GPU node, and the fast memory unit may be coupled to both the GPU node and the storage unit. The various architectures provide a GPU-based system that may be treated as a storage unit, such as a solid-state drive (SSD), that performs onboard processing for memory-oriented operations. In this respect, the system may be viewed as a “smart drive” for big-data near-storage processing.

    Distributed Computing based on Memory as a Service

    Publication No.: US20210263856A1

    Publication Date: 2021-08-26

    Application No.: US17319002

    Filing Date: 2021-05-12

    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
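
    The two-phase mapping in this abstract can be sketched briefly. This is an illustrative model only: the class, method names, and string-based device identifiers are assumptions, and real migration would involve page tables and network transfer rather than a field assignment.

```python
# Sketch of Memory as a Service phases: a virtual address region is
# first backed by the local device, then remapped to a peer's memory,
# after which processing of data in the region can be delegated to
# whichever device currently backs it.

class VirtualRegion:
    def __init__(self, name):
        self.name = name
        self.backing = "local"   # first period: mapped to local memory

    def migrate_to(self, peer):
        self.backing = peer      # second period: mapped to a remote device

    def process_here(self, requester):
        # Delegate processing to the device that backs the region.
        return f"{self.backing} processes {self.name} for {requester}"

region = VirtualRegion("app-heap")
region.migrate_to("remote-device-B")
print(region.process_here("device-A"))
```

    The point of the second phase is locality: once the region is backed remotely, asking the remote device to process it avoids shipping the data back over the network.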

    SHADOW COMPUTATIONS IN BASE STATIONS

    Publication No.: US20210182119A1

    Publication Date: 2021-06-17

    Application No.: US16713996

    Filing Date: 2019-12-13

    Inventor: Dmitri Yudanov

    Abstract: Systems and methods for implementing shadow computations in base stations. The methods can include initiating, at a base station (such as a cellular base station), a shadow computation of a main computation executing for a mobile device. The main computation can include a computational task, and the shadow computation can be at least a part of, or a derivative of, the main computation. The method can also include executing, by the base station, the shadow computation.

    RENDERING ENHANCEMENT BASED IN PART ON EYE TRACKING

    Publication No.: US20210132690A1

    Publication Date: 2021-05-06

    Application No.: US16675171

    Filing Date: 2019-11-05

    Abstract: An apparatus having a computing device and a user interface, such as a user interface with a display that can provide a graphical user interface (GUI). The apparatus also includes a camera and a processor in the computing device. The camera can be connected to the computing device and/or the user interface, and the camera can be configured to capture pupil location and/or eye movement of a user. The processor can be configured to identify a visual focal point of the user relative to the user interface based on the captured pupil location, and/or identify a type of eye movement of the user (such as a saccade) based on the captured eye movement. The processor can also be configured to control parameters of the user interface based at least partially on the identified visual focal point and/or the identified type of eye movement.
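
    One common way to identify an eye-movement type is by thresholding gaze velocity; the sketch below shows that idea feeding a rendering parameter. The threshold value, function names, and quality levels are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch: classify a gaze sample as fixation or saccade
# from its angular velocity, then choose a render quality accordingly
# (a foveated-rendering-style control decision).

SACCADE_THRESHOLD_DEG_PER_S = 30.0  # assumed threshold, for illustration

def classify_eye_movement(velocity_deg_per_s):
    """Label a gaze sample by comparing its velocity to a threshold."""
    if velocity_deg_per_s > SACCADE_THRESHOLD_DEG_PER_S:
        return "saccade"
    return "fixation"

def render_quality(movement_type):
    # During a saccade the user cannot resolve fine detail, so the
    # interface can render cheaply; at fixation, render at full quality.
    return "low" if movement_type == "saccade" else "high"

print(render_quality(classify_eye_movement(120.0)))  # "low"
print(render_quality(classify_eye_movement(5.0)))    # "high"
```

    The identified focal point would drive a second decision (where to spend quality), while the movement type above drives when to spend it.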
