Distributed Computing based on Memory as a Service

    Publication No.: US20200379913A1

    Publication Date: 2020-12-03

    Application No.: US16424424

    Filing Date: 2019-05-28

    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
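    The time-phased mapping described in the abstract can be illustrated with a small software model. This is a hedged sketch, not the patented implementation: the `Device` and `VirtualRegion` classes, the dict-based backing store, and the `process` function are all assumptions made for illustration.

    ```python
    # Illustrative sketch: a virtual memory region backed by local memory
    # during a first period, then remapped to a remote device's memory,
    # which can also process the data it physically hosts.

    class Device:
        def __init__(self, name):
            self.name = name
            self.memory = {}          # hypothetical backing store: offset -> value

        def process(self, region):
            # Remote-side processing: operate on the data this device hosts.
            return sum(self.memory.get(off, 0) for off in range(region.size))

    class VirtualRegion:
        def __init__(self, size, backing):
            self.size = size
            self.backing = backing    # device currently hosting the region

        def write(self, offset, value):
            self.backing.memory[offset] = value

        def remap(self, new_backing):
            # Migrate contents when the region moves to another device.
            for off in range(self.size):
                if off in self.backing.memory:
                    new_backing.memory[off] = self.backing.memory.pop(off)
            self.backing = new_backing

    local, remote = Device("local"), Device("remote")
    region = VirtualRegion(4, local)          # first period: mapped to local memory
    for off, v in enumerate([1, 2, 3, 4]):
        region.write(off, v)
    region.remap(remote)                      # second period: mapped to remote memory
    print(remote.process(region))             # remote processes data in the region
    ```

    The point of the model is that the application keeps using the same virtual address region while the physical host of that region changes, and that processing can follow the data to the remote device.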

    Memory as a Service for Artificial Neural Network (ANN) Applications

    Publication No.: US20200379809A1

    Publication Date: 2020-12-03

    Application No.: US16424429

    Filing Date: 2019-05-28

    Abstract: Systems, methods and apparatuses of Artificial Neural Network (ANN) applications implemented via Memory as a Service (MaaS) are described. For example, a computing system can include a computing device and a remote device. The computing device can borrow memory from the remote device over a wired or wireless network. Through the borrowed memory, the computing device and the remote device can collaborate with each other in storing an artificial neural network and in processing based on the artificial neural network. Some layers of the artificial neural network can be stored in the memory loaned by the remote device to the computing device. The remote device can perform the computation of the layers stored in the borrowed memory on behalf of the computing device. When the network connection degrades, the computing device can use an alternative module to function as a substitute of the layers stored in the borrowed memory.
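    The layer split and fallback behavior can be sketched as follows. This is a toy model under stated assumptions: the layer functions, the substitute policy, and the `connected` flag are invented for illustration and stand in for real ANN layers and real link monitoring.

    ```python
    # Hedged sketch: early layers run locally; later layers live in memory
    # borrowed from a remote device, which computes them on our behalf.
    # When connectivity degrades, a coarser local substitute stands in.

    def local_layers(x):
        return [v * 2 for v in x]        # hypothetical layers held in local memory

    def remote_layers(h):
        return sum(h)                    # hypothetical layers in borrowed memory

    def substitute(h):
        return max(h) * len(h)           # coarse local approximation of remote_layers

    def infer(x, connected):
        h = local_layers(x)
        return remote_layers(h) if connected else substitute(h)

    print(infer([1, 2, 3], connected=True))   # remote path
    print(infer([1, 2, 3], connected=False))  # degraded path via the substitute
    ```

    The substitute deliberately gives a rougher answer than the remote layers; the design choice in the abstract is graceful degradation rather than a stall when the link drops.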

    Throttle Memory as a Service based on Connectivity Bandwidth

    Publication No.: US20200379808A1

    Publication Date: 2020-12-03

    Application No.: US16424413

    Filing Date: 2019-05-28

    Abstract: Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory of a lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle the data communications used by memory regions in accessing the borrowed memory over the communication connection according to the criticality levels of the contents stored in those regions.
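    One simple way to picture criticality-based throttling is a proportional bandwidth split. This is a minimal sketch, assuming a proportional-share policy and invented region names and levels; the patent's actual throttling criteria are not specified beyond criticality levels.

    ```python
    # Illustrative sketch: divide the available link bandwidth among memory
    # regions in proportion to the criticality level of their contents.

    def throttle(regions, bandwidth):
        # regions: list of (name, criticality_level); higher level = more critical
        total = sum(level for _, level in regions)
        return {name: bandwidth * level / total for name, level in regions}

    shares = throttle(
        [("page_tables", 3), ("app_heap", 2), ("cold_cache", 1)],  # assumed regions
        bandwidth=600,
    )
    print(shares)   # more critical contents get a larger share of the link
    ```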

    CONTENT ADDRESSABLE MEMORY SYSTEMS WITH CONTENT ADDRESSABLE MEMORY BUFFERS

    Publication No.: US20200327942A1

    Publication Date: 2020-10-15

    Application No.: US16382449

    Filing Date: 2019-04-12

    Abstract: An apparatus (e.g., a content addressable memory system) can have a controller, a first content addressable memory coupled to the controller, and a second content addressable memory coupled to the controller. The controller can be configured to cause the first content addressable memory to compare input data to first data stored in the first content addressable memory and cause the second content addressable memory to compare the input data to second data stored in the second content addressable memory, such that the input data is compared to the first and second data concurrently, and to replace a result of the comparison of the input data to the first data with a result of the comparison of the input data to the second data in response to determining that the first data is invalid and that the second data corresponds to the first data.
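    The compare-and-replace behavior can be modeled in software. This is a hedged sketch: real CAMs compare the key against all entries in parallel hardware, and the entry layout, valid bits, and function names here are assumptions for illustration.

    ```python
    # Minimal software model: compare input data against a primary CAM and a
    # buffer CAM "concurrently"; if the primary hit is on invalid data and the
    # buffer holds a corresponding entry, the buffer's result replaces it.

    def cam_match(cam, key):
        return [i for i, (k, _) in enumerate(cam) if k == key]

    def lookup(primary, buffer_cam, key):
        r1 = cam_match(primary, key)       # both compares happen in the same cycle
        r2 = cam_match(buffer_cam, key)
        valid1 = [i for i in r1 if primary[i][1]]
        if r1 and not valid1 and r2:
            return ("buffer", r2)          # replace result from invalid primary data
        return ("primary", valid1)

    primary = [("A", False), ("B", True)]  # entries are ("key", valid_bit)
    buffer_cam = [("A", True)]             # updated copy of the invalidated entry
    print(lookup(primary, buffer_cam, "A"))
    print(lookup(primary, buffer_cam, "B"))
    ```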

    MEMORY DEVICES WITH SELECTIVE PAGE-BASED REFRESH

    Publication No.: US20190318779A1

    Publication Date: 2019-10-17

    Application No.: US16456493

    Filing Date: 2019-06-28

    Inventor: Ameen D. Akel

    Abstract: Several embodiments of memory devices and systems with selective page-based refresh are disclosed herein. In one embodiment, a memory device includes a controller operably coupled to a main memory having at least one memory region comprising a plurality of memory pages. The controller is configured to track, in one or more refresh schedule tables stored on the memory device and/or on a host device, a subset of the memory pages having a refresh schedule. In some embodiments, the controller is further configured to refresh that subset of memory pages in accordance with the refresh schedule.
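    A refresh schedule table of the kind the abstract describes might look like this. The sketch assumes a per-page interval and a default interval for untracked pages; the interval values, the modulo-based due check, and the class names are illustrative assumptions, not the patent's design.

    ```python
    # Sketch: only a tracked subset of pages gets its own (e.g., shorter)
    # refresh interval; all other pages keep the default.

    DEFAULT_INTERVAL_MS = 64               # assumed default refresh interval

    class RefreshTable:
        def __init__(self):
            self.schedule = {}             # page -> refresh interval in ms

        def track(self, page, interval_ms):
            self.schedule[page] = interval_ms

        def pages_due(self, elapsed_ms, all_pages):
            return [p for p in all_pages
                    if elapsed_ms % self.schedule.get(p, DEFAULT_INTERVAL_MS) == 0]

    table = RefreshTable()
    table.track(page=7, interval_ms=32)    # a weak page, refreshed twice as often
    print(table.pages_due(32, range(10)))  # only the tracked page is due
    print(table.pages_due(64, range(10)))  # every page is due at the default tick
    ```

    The design payoff is that refresh effort (and power) concentrates on the pages that need it instead of being spent uniformly across the region.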

    ROW HAMMER MITIGATION USING A VICTIM CACHE

    Publication No.: US20250165406A1

    Publication Date: 2025-05-22

    Application No.: US19030174

    Filing Date: 2025-01-17

    Abstract: Row hammer attacks take advantage of unintended and undesirable side effects of memory devices, in which memory cells interact electrically with one another by leaking their charges, possibly changing the contents of nearby memory rows that were not addressed in the original memory access. Row hammer attacks are mitigated by using a victim cache. Data is written to cache lines of a cache. A least recently used cache line of the cache is written to the victim cache.
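    The eviction path in the abstract can be sketched with two ordered maps. This is a software model under assumptions: capacities, class names, and the LRU policy details are invented for illustration of the spill-to-victim-cache idea.

    ```python
    # Sketch: the least recently used line evicted from the main cache is
    # written into a victim cache instead of forcing an immediate writeback,
    # so repeated evictions do not translate directly into row activations.

    from collections import OrderedDict

    class CacheWithVictim:
        def __init__(self, capacity, victim_capacity):
            self.lines = OrderedDict()     # main cache, ordered LRU -> MRU
            self.victim = OrderedDict()    # victim cache for spilled lines
            self.capacity = capacity
            self.victim_capacity = victim_capacity

        def write(self, row, data):
            if row in self.lines:
                self.lines.move_to_end(row)    # touching a line makes it MRU
            self.lines[row] = data
            if len(self.lines) > self.capacity:
                lru_row, lru_data = self.lines.popitem(last=False)
                self.victim[lru_row] = lru_data    # spill LRU line to victim cache
                if len(self.victim) > self.victim_capacity:
                    self.victim.popitem(last=False)

    cache = CacheWithVictim(capacity=2, victim_capacity=2)
    for row in ("r1", "r2", "r3"):
        cache.write(row, f"data-{row}")
    print(list(cache.lines))    # main cache holds the two most recent rows
    print(list(cache.victim))   # the LRU line landed in the victim cache
    ```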

    SPARING TECHNIQUES IN STACKED MEMORY ARCHITECTURES

    Publication No.: US20250149108A1

    Publication Date: 2025-05-08

    Application No.: US18775981

    Filing Date: 2024-07-17

    Abstract: Methods, systems, and devices for sparing techniques in stacked memory architectures are described. A memory system may implement a stacked memory architecture that includes a set of array dies stacked along a direction and a logic die coupled with the set of array dies. Each array die may include one or more memory arrays accessible using one or more first interface blocks of the array die. To support sparing, the memory system may remap access from one or more first memory arrays of the set of array dies to one or more second memory arrays of the set of array dies. Logic circuitry of the logic die may be operable to perform the remapping in accordance with one or more levels of granularity, such as at a die level, channel level, pseudo-channel level, bank level, or a combination thereof.
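    The remapping performed by the logic die can be pictured as a lookup table. This sketch assumes bank-level granularity and a `(die, channel, bank)` address tuple; both are illustrative choices among the granularities the abstract lists, not the claimed structure.

    ```python
    # Sketch: a remap table on the logic die redirects accesses addressed to
    # a failed memory array to a spare array, here at bank granularity.

    class SparingMap:
        def __init__(self):
            self.remap = {}    # (die, channel, bank) -> (die, channel, bank)

        def spare_out(self, bad, spare):
            self.remap[bad] = spare

        def resolve(self, die, channel, bank):
            # Unmapped addresses pass through unchanged.
            return self.remap.get((die, channel, bank), (die, channel, bank))

    m = SparingMap()
    m.spare_out(bad=(0, 1, 3), spare=(2, 0, 7))   # bank 3 of die 0, ch 1 failed
    print(m.resolve(0, 1, 3))   # redirected to the spare bank
    print(m.resolve(0, 1, 2))   # healthy bank, untouched
    ```

    Coarser granularities (channel or die level) would simply key the table on a shorter tuple, which is one way to read the "one or more levels of granularity" in the abstract.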

    DATA PROTECTION TECHNIQUES IN STACKED MEMORY ARCHITECTURES

    Publication No.: US20250077353A1

    Publication Date: 2025-03-06

    Application No.: US18762284

    Filing Date: 2024-07-02

    Abstract: Methods, systems, and devices for data protection techniques in stacked memory architectures are described. A memory system having a stacked memory architecture may include error correction information associated with a data set that includes multiple data segments stored across multiple memory arrays and, in some examples, multiple dies of the memory system. As part of a write operation for a first data segment of a data set, the memory system may retrieve the remaining data segments of the data set and calculate error correction information using the first data segment and the remaining data segments. As part of a read operation for a second data segment of the data set, the memory system may retrieve each data segment of the data set and perform an error correction operation on the data set using the error correction information.
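    The read-modify-write flow the abstract describes resembles parity striping. As a hedged illustration only, the sketch below uses XOR parity across segments; the patent says "error correction information" without committing to XOR, so treat the scheme, segment sizes, and names as assumptions.

    ```python
    # Sketch: error-correction info computed across data segments striped
    # over multiple dies. A write retrieves the remaining segments and
    # recomputes the parity; a read can rebuild a bad segment from the rest.

    from functools import reduce

    def xor_parity(segments):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), segments)

    stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]   # segments on different dies
    parity = xor_parity(stripe)

    # Write to segment 0: retrieve the others, recompute parity over the set.
    stripe[0] = b"\xff\x00"
    parity = xor_parity(stripe)

    # Read/correction: a lost segment is rebuilt from the rest plus parity.
    rebuilt = xor_parity([stripe[1], stripe[2], parity])
    print(rebuilt == stripe[0])
    ```

    The cost visible even in this toy model matches the abstract: every write to one segment touches the whole data set, which is the price of protecting data that spans multiple arrays and dies.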
