MATCHING PATTERNS IN MEMORY ARRAYS
    Invention Application

    Publication No.: US20210391004A1

    Publication Date: 2021-12-16

    Application No.: US16902685

    Filing Date: 2020-06-16

    Inventor: Dmitri Yudanov

    Abstract: Systems and methods for performing a pattern matching operation in a memory device are disclosed. The memory device may include a controller and memory arrays that store different patterns along bit lines. An input pattern is applied to the memory array(s) to determine whether that pattern is stored in the memory device. Word lines may be activated in series or in parallel to search for patterns within the memory array. The memory array may include memory cells that store binary digits, discrete values, or analog values.
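    The parallel word-line search described in the abstract can be illustrated with a small software model. This is a conceptual sketch only, not code from the patent; the array layout and function name are illustrative, with each bit-line column holding one stored pattern.

```python
# Conceptual sketch (not from the patent): simulating a pattern match
# across the bit lines of a memory array. Each column stores one
# pattern; comparing the input against every column models activating
# the word lines in parallel.

def match_patterns(array, pattern):
    """Return the bit-line (column) indices whose stored bits equal `pattern`.

    `array` is a list of rows (word lines); each row is a list of bits,
    so column j holds the j-th stored pattern.
    """
    n_cols = len(array[0])
    matches = []
    for col in range(n_cols):
        stored = [row[col] for row in array]  # read one bit line
        if stored == pattern:
            matches.append(col)
    return matches

# Three word lines, four bit lines -> four stored 3-bit patterns.
array = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
]
print(match_patterns(array, [1, 0, 0]))  # -> [3]
```

    A serial word-line search would instead compare one row at a time and prune non-matching columns as it goes; the parallel model above checks whole columns at once.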

    Accelerated in-memory cache with memory array sections having different configurations

    Publication No.: US11126548B1

    Publication Date: 2021-09-21

    Application No.: US16824618

    Filing Date: 2020-03-19

    Inventor: Dmitri Yudanov

    Abstract: An apparatus having a memory array with a first section and a second section. The first section includes a first sub-array of memory cells made up of a first type of memory. The second section includes a second sub-array of memory cells made up of the same first type of memory, but with a per-cell configuration that differs from the configuration of the cells in the first sub-array; alternatively, the second section can include memory cells made up of a second type of memory that is different from the first type. Either way, the memory cells of the second sub-array have lower memory latency than the memory cells of the first sub-array.
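    As a rough software analogy (not from the patent), the low-latency second sub-array can be modeled as a small cache in front of the slower first sub-array; the class and field names here are illustrative assumptions.

```python
class TwoSectionArray:
    """Illustrative model: a slow first sub-array plus a small
    low-latency second sub-array used as an in-array cache."""

    def __init__(self, slow_data, cache_size):
        self.slow = list(slow_data)   # first section: higher-latency cells
        self.cache = {}               # second section: lower-latency cells
        self.cache_size = cache_size

    def read(self, addr):
        if addr in self.cache:        # served by the fast section
            return self.cache[addr]
        value = self.slow[addr]       # slow-section access
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))  # evict oldest entry
        self.cache[addr] = value      # promote into the fast section
        return value
```

    The eviction policy here (drop the oldest entry) is just one plausible choice; the patent abstract does not specify how the fast section is managed.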

    MEMORY MODULE WITH COMPUTATION CAPABILITY

    Publication No.: US20210182220A1

    Publication Date: 2021-06-17

    Application No.: US16713989

    Filing Date: 2019-12-13

    Inventor: Dmitri Yudanov

    Abstract: A memory module having a plurality of memory chips, at least one controller (e.g., a central processing unit or special-purpose controller), and at least one interface device configured to communicate input and output data for the memory module. The input and output data bypass at least one processor (e.g., a central processing unit) of the computing device in which the memory module is installed. The at least one interface device can also be configured to communicate the input and output data to at least one other memory module in the computing device, and the memory module can be one of a plurality of memory modules in a memory module system.

    USER INTERFACE BASED PAGE MIGRATION FOR PERFORMANCE ENHANCEMENT

    Publication No.: US20210157646A1

    Publication Date: 2021-05-27

    Application No.: US16694371

    Filing Date: 2019-11-25

    Abstract: Enhancement or reduction of page migration can include operations that include scoring, in a computing device, each executable of at least a first group and a second group of executables in the computing device. The executables can be related to user interface elements of applications and associated with pages of memory in the computing device. For each executable, the scoring can be based at least partly on the number of user interface elements using the executable. The first group can be located at first pages of the memory, and the second group can be located at second pages. When the scoring of the executables in the first group is higher than the scoring of the executables in the second group, the operations can include allocating or migrating the first pages to a first type of memory, and allocating or migrating the second pages to a second type of memory.
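    A minimal sketch of the scoring-and-placement idea, assuming scores are simple sums of UI-element counts; the function name, data shapes, and memory-type labels are hypothetical, not from the patent.

```python
def partition_pages(groups, ui_counts):
    """Score two groups of executables by how many UI elements use
    them, and send the higher-scoring group's pages to the first
    (presumably faster) type of memory."""
    scores = {name: sum(ui_counts.get(exe, 0) for exe in exes)
              for name, exes in groups.items()}
    # Exactly two groups, matching the abstract's first and second group.
    high, low = sorted(scores, key=scores.get, reverse=True)
    return {high: "first_type_memory", low: "second_type_memory"}
```

    For example, a group containing executables used by seven UI elements outscores a group used by one, so its pages would be allocated or migrated to the first type of memory.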

    Customized Root Processes for Individual Applications

    Publication No.: US20210103462A1

    Publication Date: 2021-04-08

    Application No.: US16592529

    Filing Date: 2019-10-03

    Abstract: A computing device (e.g., a mobile device) can execute a root process of an application to an initial point according to patterns of prior executions of the application. The root process can be one of many customized root processes of individual applications in the computing device. The device can receive a request from a user to start the application, and can start the application upon receiving the request by using the root process of the application. At least one of the executing, receiving, or starting can be performed by an operating system in the device. The device can also fork the root process of the application into multiple processes, and can start the application upon receiving the request by using at least one of those processes.
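    The root-process idea can be sketched as pre-executing application state once and copying it per launch. A real implementation would use an OS-level fork with copy-on-write; this illustrative model (names are assumptions) stands in a deepcopy for that.

```python
import copy

class RootProcess:
    """Illustrative model of a customized root process: application
    state executed to an initial point before any launch request."""

    def __init__(self, app_name, init_steps):
        self.app_name = app_name
        self.state = {"step": 0, "warm_caches": []}
        for _ in range(init_steps):           # pre-execute to an initial point
            self.state["step"] += 1
            self.state["warm_caches"].append("resource")

    def fork(self):
        """Return a child that starts from the pre-initialized state
        (a deepcopy stands in for OS-level copy-on-write fork)."""
        return copy.deepcopy(self)

root = RootProcess("browser", init_steps=3)   # built before any request
child = root.fork()                           # launched on a user request
```

    The payoff is that the child skips the `init_steps` work at launch time; mutating the child leaves the root untouched, so the root can be forked again for the next request.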

    Distributed Computing based on Memory as a Service

    Publication No.: US20200379913A1

    Publication Date: 2020-12-03

    Application No.: US16424424

    Filing Date: 2019-05-28

    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, each device in a set of networked computing devices can be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to its local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to the local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating it, and request the remote device to process data in the virtual memory address region during at least the second period of time.
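    A toy model of remapping a virtual region from local to remote backing memory; the class, method names, and list-backed "memory" are illustrative assumptions, not the patent's mechanism.

```python
class VirtualRegion:
    """Toy model of a virtual address region that is first backed by
    local memory and later remapped to a remote device's memory."""

    def __init__(self, size):
        self.local = [0] * size
        self.remote = None                  # set once remapped

    def map_remote(self, remote_store):
        """Remap the region onto remote memory, migrating its contents
        so the remote device can process the data in place."""
        remote_store[:] = self.local
        self.remote = remote_store

    def read(self, index):
        backing = self.local if self.remote is None else self.remote
        return backing[index]
```

    After `map_remote`, reads through the same virtual region transparently hit the remote store, which is the property that lets the application keep one virtual address region across both periods.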

    Memory as a Service for Artificial Neural Network (ANN) Applications

    Publication No.: US20200379809A1

    Publication Date: 2020-12-03

    Application No.: US16424429

    Filing Date: 2019-05-28

    Abstract: Systems, methods and apparatuses of Artificial Neural Network (ANN) applications implemented via Memory as a Service (MaaS) are described. For example, a computing system can include a computing device and a remote device. The computing device can borrow memory from the remote device over a wired or wireless network. Through the borrowed memory, the computing device and the remote device can collaborate with each other in storing an artificial neural network and in processing based on the artificial neural network. Some layers of the artificial neural network can be stored in the memory loaned by the remote device to the computing device. The remote device can perform the computation of the layers stored in the borrowed memory on behalf of the computing device. When the network connection degrades, the computing device can use an alternative module to function as a substitute for the layers stored in the borrowed memory.
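    The split forward pass with a degraded-link fallback can be sketched as below; the layer functions and the fallback module are hypothetical placeholders, not the patent's ANN.

```python
def forward(x, local_layers, remote_layers, fallback, link_ok):
    """Run the locally stored layers, then either the layers hosted in
    borrowed remote memory or a local substitute when the link degrades."""
    for layer in local_layers:       # always computed on the device
        x = layer(x)
    tail = remote_layers if link_ok else [fallback]
    for layer in tail:               # remote computation, or its stand-in
        x = layer(x)
    return x
```

    The substitute would typically be a smaller or lower-precision module, trading accuracy for availability while the connection is degraded.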

    Throttle Memory as a Service based on Connectivity Bandwidth

    Publication No.: US20200379808A1

    Publication Date: 2020-12-03

    Application No.: US16424413

    Filing Date: 2019-05-28

    Abstract: Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory from a lender device over a communication connection between the two devices. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device. The computing device can then throttle the data communications that memory regions use to access the borrowed memory over the connection, according to the criticality levels of the contents stored in those regions.
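    The criticality-based throttling can be sketched as proportional bandwidth sharing. This assumes numeric criticality levels and a proportional policy, neither of which the abstract specifies, so it is only one plausible reading.

```python
def throttle(regions, total_bandwidth):
    """Split the connection's bandwidth among borrowed-memory regions
    in proportion to the criticality of their contents."""
    total = sum(region["criticality"] for region in regions)
    return {region["name"]: total_bandwidth * region["criticality"] / total
            for region in regions}
```

    A region holding critical contents (e.g., criticality 3) would receive three times the share of a bulk region at criticality 1 over the same connection.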
