Abstract:
Methods, apparatuses, and techniques related to a host-assisted memory-side prefetcher are described herein. In general, prefetchers monitor the pattern of memory-address requests by a host device and use the pattern information to determine or predict future memory-address requests and fetch data associated with those predicted requests into a faster memory. In many cases, prefetchers that make high-performance predictions consume appreciable processing resources, power, and cooling. Generally, however, producing the prefetching configuration that the prefetcher uses involves more resources than making predictions with it. The described host-assisted memory-side prefetcher uses the greater computing resources of the host device (102) to produce at least an updated prefetching configuration (404). The memory-side prefetcher uses the prefetching configuration to predict the data to prefetch into the faster memory, which allows a higher-performance prefetcher to be implemented in the memory device (202) with a reduced resource burden on that device.
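A minimal sketch of this division of labor, assuming a simple stride predictor: the host performs the heavier trace analysis to produce the prefetching configuration, and the memory-side prefetcher only applies it. The names prefetch_config, host_train, and memside_predict are illustrative, not from the disclosure.

```c
#include <stdio.h>
#include <stdint.h>

typedef struct { int64_t stride; } prefetch_config; /* the "configuration" */

/* Host side: heavier analysis -- find the most common stride in a trace. */
static prefetch_config host_train(const uint64_t *trace, size_t n) {
    int64_t best = 0; size_t best_count = 0;
    for (size_t i = 1; i < n; i++) {
        int64_t s = (int64_t)(trace[i] - trace[i - 1]);
        size_t count = 0;
        for (size_t j = 1; j < n; j++)
            if ((int64_t)(trace[j] - trace[j - 1]) == s) count++;
        if (count > best_count) { best_count = count; best = s; }
    }
    return (prefetch_config){ .stride = best };
}

/* Memory side: cheap prediction -- just add the configured stride. */
static uint64_t memside_predict(prefetch_config cfg, uint64_t last_addr) {
    return last_addr + (uint64_t)cfg.stride;
}

int main(void) {
    uint64_t trace[] = { 0x1000, 0x1040, 0x1080, 0x10c0, 0x2000, 0x2040 };
    prefetch_config cfg = host_train(trace, 6);  /* host does the heavy work */
    printf("prefetch next: 0x%llx\n",
           (unsigned long long)memside_predict(cfg, 0x2040)); /* -> 0x2080 */
    return 0;
}
```

The asymmetry motivating the split is visible here: host_train scans the whole trace, while memside_predict is a single addition.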
Abstract:
A cache system, having: a first cache; a second cache; a configurable data bit; and a logic circuit coupled to a processor to control the caches based on the configurable data bit. When the configurable data bit is in a first state, the logic circuit is configured to: implement commands for accessing a memory system via the first cache, when an execution type is a first type; and implement commands for accessing the memory system via the second cache, when the execution type is a second type. When the configurable data bit is in a second state, the logic circuit is configured to: implement commands for accessing the memory system via the second cache, when the execution type is the first type; and implement commands for accessing the memory system via the first cache, when the execution type is the second type.
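Because the second state simply swaps the cache assignment of the two execution types, the selection logic reduces to an exclusive-OR when the types and caches are encoded as 0 and 1. The sketch below assumes that encoding; it is an illustration, not the disclosed circuit.

```c
#include <stdio.h>

enum { FIRST_CACHE = 0, SECOND_CACHE = 1 };

/* bit == 0: type 0 -> first cache, type 1 -> second cache.
 * bit == 1: the mapping is inverted, which reduces to XOR. */
static int select_cache(int configurable_bit, int execution_type) {
    return configurable_bit ^ execution_type;
}

int main(void) {
    for (int bit = 0; bit <= 1; bit++)
        for (int type = 0; type <= 1; type++)
            printf("bit=%d type=%d -> cache %d\n",
                   bit, type, select_cache(bit, type));
    return 0;
}
```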
Abstract:
The present disclosure includes apparatuses and methods for an operating system cache in a solid state device (SSD). An example apparatus includes the SSD, which includes an In-SSD volatile memory, a non-volatile memory, and an interconnect that couples the non-volatile memory to the In-SSD volatile memory. The SSD also includes a controller configured to receive a request for performance of an operation and to direct that a result of performing the operation is accessible in the In-SSD volatile memory as an In-SSD main memory operating system cache.
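A toy model of this flow, assuming the operation is a read: the controller services the request from non-volatile memory over the interconnect and leaves the result resident in the In-SSD volatile memory, where it is accessible as the operating system cache. All structure and function names are illustrative.

```c
#include <stdio.h>
#include <string.h>

#define NV_PAGES   4
#define PAGE_SIZE  16

typedef struct {
    char nonvolatile[NV_PAGES][PAGE_SIZE]; /* NAND-backed pages */
    char volatile_mem[PAGE_SIZE];          /* In-SSD DRAM: OS-visible cache */
    int  cached_page;                      /* which page is cached, -1 if none */
} ssd;

/* Controller: perform the operation, make the result accessible in-SSD. */
static const char *ssd_read(ssd *dev, int page) {
    if (dev->cached_page != page) {        /* miss: move over interconnect */
        memcpy(dev->volatile_mem, dev->nonvolatile[page], PAGE_SIZE);
        dev->cached_page = page;
    }
    return dev->volatile_mem;              /* result lives in In-SSD memory */
}

int main(void) {
    ssd dev = { .cached_page = -1 };
    strcpy(dev.nonvolatile[2], "hello from NAND");
    printf("%s\n", ssd_read(&dev, 2));     /* first access fills the cache */
    return 0;
}
```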
Abstract:
A multi-plane, multi-protocol memory switch system is disclosed. In some embodiments, a memory switch includes a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or more endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint; and a bulk data transfer engine configured to facilitate data exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address.
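The following sketch models the two engines under simplifying assumptions: the cacheline exchange engine is reduced to a base-plus-offset remapping between endpoint address windows, and the bulk data transfer engine to a source-to-destination copy loop. Names such as xchg_map and bulk_transfer are illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct { uint64_t base, limit; } addr_window;

/* Cacheline exchange: map an address in endpoint A's window into
 * endpoint B's window, preserving the offset. Returns 0 on failure. */
static uint64_t xchg_map(addr_window a, addr_window b, uint64_t addr_in_a) {
    if (addr_in_a < a.base || addr_in_a >= a.limit) return 0;
    return b.base + (addr_in_a - a.base);
}

/* Bulk data transfer: stream len bytes from source endpoint to destination. */
static void bulk_transfer(uint8_t *dst, const uint8_t *src, size_t len) {
    memcpy(dst, src, len);  /* hardware would move this as a DMA stream */
}

int main(void) {
    addr_window ep0 = { 0x10000, 0x20000 }, ep1 = { 0x80000, 0x90000 };
    printf("0x%llx in ep0 -> 0x%llx in ep1\n", 0x10040ULL,
           (unsigned long long)xchg_map(ep0, ep1, 0x10040)); /* -> 0x80040 */
    uint8_t src[8] = {1,2,3,4,5,6,7,8}, dst[8] = {0};
    bulk_transfer(dst, src, sizeof src);
    printf("bulk copied, dst[7]=%d\n", dst[7]);
    return 0;
}
```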
Abstract:
Disclosed herein are system, method, and computer program product aspects for managing a storage system. In an aspect, a host device may generate a configuration corresponding to a file and transmit the configuration to a memory device, such as 3D NAND memory. The configuration instructs the memory device to refrain from transmitting a logical-to-physical (L2P) dirty entry notification to the host device. The L2P dirty entry notification corresponds to the file. The host device may also generate a second configuration corresponding to the file and transmit the second configuration to the memory device. The second configuration instructs the memory device to resume transmitting the L2P dirty entry notification corresponding to the file to the host device.
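A minimal sketch of the per-file suppression, assuming files are identified by small integer IDs and the configuration is a suppress/resume flag the host sends down. Names are illustrative.

```c
#include <stdio.h>
#include <stdbool.h>

#define MAX_FILES 8

static bool suppress_l2p_notify[MAX_FILES]; /* device-side state per file */

/* Host -> device configuration: refrain from (or resume) notifying. */
static void config_l2p_notifications(int file_id, bool suppress) {
    suppress_l2p_notify[file_id] = suppress;
}

/* Device side: an L2P entry for this file just became dirty. */
static void on_l2p_dirty(int file_id, unsigned lba) {
    if (suppress_l2p_notify[file_id]) return;     /* host asked us not to */
    printf("notify host: L2P entry for LBA %u (file %d) is dirty\n",
           lba, file_id);
}

int main(void) {
    config_l2p_notifications(3, true);   /* first configuration: refrain */
    on_l2p_dirty(3, 100);                /* silently absorbed */
    config_l2p_notifications(3, false);  /* second configuration: resume */
    on_l2p_dirty(3, 101);                /* notification goes out */
    return 0;
}
```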
Abstract:
A computer-implemented method for synchronizing local caches is disclosed. The method may include receiving a content update, which is an update to a data entry stored in the local cache of each of a plurality of remote servers. The method may include transmitting the content update to a first remote server to update a corresponding data entry in the local cache of the first remote server. Further, the method may include generating an invalidation command indicating the change in the corresponding data entry. The method may include transmitting the invalidation command from the first remote server to a message server. The method may include generating, by the message server, a plurality of partitions based on the received invalidation command. The method may include transmitting, from the message server to each of the remote servers, the plurality of partitions, so that the remote servers update their respective local caches.
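A sketch of the invalidation fan-out under illustrative assumptions: the message server assigns each invalidation command to a partition by hashing its key, and every remote server drops the matching entry from its local cache. The partition count, hash function, and cache layout are stand-ins.

```c
#include <stdio.h>
#include <string.h>

#define NUM_PARTITIONS 4
#define NUM_SERVERS    3
#define KEY_LEN        16

typedef struct { char key[KEY_LEN]; int valid; } cache_entry;
static cache_entry local_cache[NUM_SERVERS][8]; /* one tiny cache per server */

static unsigned partition_of(const char *key) {
    unsigned h = 0;
    while (*key) h = h * 31 + (unsigned char)*key++;
    return h % NUM_PARTITIONS;
}

/* Message server: route the invalidation into a partition, then deliver
 * it to every remote server so each updates its local cache. */
static void broadcast_invalidation(const char *key) {
    unsigned p = partition_of(key);
    for (int s = 0; s < NUM_SERVERS; s++)
        for (int i = 0; i < 8; i++)
            if (local_cache[s][i].valid &&
                strcmp(local_cache[s][i].key, key) == 0) {
                local_cache[s][i].valid = 0;  /* entry invalidated */
                printf("server %d: invalidated '%s' (partition %u)\n",
                       s, key, p);
            }
}

int main(void) {
    for (int s = 0; s < NUM_SERVERS; s++) {       /* seed the caches */
        strcpy(local_cache[s][0].key, "user:42");
        local_cache[s][0].valid = 1;
    }
    broadcast_invalidation("user:42"); /* content update on first server */
    return 0;
}
```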
Abstract:
Aspects of a storage device including a master chip controller, a slave chip processor, and memory including a plurality of memory locations are provided, which allow for simplified processing of descriptors associated with host commands in the slave chip based on an adaptive context metadata message from the master chip. When the controller receives a host command, the controller in the master chip provides to the processor in the slave chip a descriptor associated with the host command, an instruction to store the descriptor in one of the memory locations, and the adaptive context metadata message mapping a type of the descriptor to the one of the memory locations. The processor may then process the descriptor stored in the one of the memory locations based on the message, for example, by refraining from identifying certain information indicated in the descriptor. Reduced latency in command execution may thereby result.
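A sketch of the handoff, assuming two descriptor types and a metadata map keyed by memory location: because the metadata message already tells the slave which type sits in which slot, the slave can refrain from decoding the type out of the descriptor itself. Names and the two-type encoding are illustrative.

```c
#include <stdio.h>

enum desc_type { DESC_READ = 0, DESC_WRITE = 1 };

typedef struct { enum desc_type type; unsigned lba; } descriptor;

static descriptor slots[2];          /* slave-chip memory locations */
static enum desc_type slot_type[2];  /* adaptive context metadata map */

/* Master chip: store the descriptor and send the mapping message. */
static void master_submit(descriptor d, int slot) {
    slots[slot] = d;                 /* instruction: store in this slot */
    slot_type[slot] = d.type;        /* metadata: type -> memory location */
}

/* Slave chip: process by slot; the type comes from the metadata message,
 * so the slave skips re-identifying it from the descriptor. */
static void slave_process(int slot) {
    const char *op = (slot_type[slot] == DESC_READ) ? "read" : "write";
    printf("slave: %s at LBA %u from slot %d\n", op, slots[slot].lba, slot);
}

int main(void) {
    master_submit((descriptor){ DESC_WRITE, 512 }, 0);
    slave_process(0);
    return 0;
}
```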
Abstract:
A system includes integrated circuit (IC) dies having memory cells and a processing device coupled to the IC dies. The processing device performs operations including storing, within a zone map data structure, zones of a logical block address (LBA) space sequentially mapped to physical address space of the IC dies. A zone map entry in the zone map data structure corresponds to a data group written to one or more of the IC dies. The operations further include storing, within a block set data structure indexed by a block set identifier of the zone map entry, a die identifier and a block identifier for each data block of multiple data blocks of the data group, and writing multiple data groups, which are sequentially mapped across the zones, sequentially across the IC dies. Each data block can correspond to a media (or erase) block of the IC dies.
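The two data structures can be sketched as follows, with illustrative fixed sizes: a zone map whose entries carry a block set identifier, and a block set table indexed by that identifier that yields a (die identifier, block identifier) pair for each data block of the data group.

```c
#include <stdio.h>
#include <stdint.h>

#define BLOCKS_PER_SET 4

typedef struct { uint32_t die_id, block_id; } block_loc;

typedef struct { block_loc blocks[BLOCKS_PER_SET]; } block_set;

typedef struct { uint64_t start_lba; uint32_t block_set_id; } zone_map_entry;

static zone_map_entry zone_map[2] = {   /* zones sequentially mapped */
    { .start_lba = 0,    .block_set_id = 0 },
    { .start_lba = 1024, .block_set_id = 1 },
};

static block_set block_sets[2] = {      /* indexed by block_set_id */
    { { {0, 7}, {1, 7}, {2, 7}, {3, 7} } },   /* striped across 4 dies */
    { { {0, 8}, {1, 8}, {2, 8}, {3, 8} } },
};

/* Resolve a zone's data group to its physical (die, block) locations. */
static void print_zone(int zone) {
    const block_set *bs = &block_sets[zone_map[zone].block_set_id];
    printf("zone %d (LBA %llu):", zone,
           (unsigned long long)zone_map[zone].start_lba);
    for (int i = 0; i < BLOCKS_PER_SET; i++)
        printf(" die %u/block %u", bs->blocks[i].die_id, bs->blocks[i].block_id);
    printf("\n");
}

int main(void) {
    print_zone(0);
    print_zone(1);
    return 0;
}
```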
Abstract:
A method (700) for obliviously moving N data blocks (102) stored in memory hardware (114) includes organizing memory locations (118) of the memory hardware into substantially √N data buckets (350) each containing √N data blocks, and allocating substantially √N buffer buckets (360) associated with new memory locations in the memory hardware. Each buffer bucket is associated with a corresponding cache slot (370) allocated at the client (104) for storing cached permutated data blocks. The method further includes iteratively providing the substantially √N data buckets to the client. The client is configured to apply a random permutation on the substantially √N data blocks within each corresponding received data bucket to generate permutated data blocks and determine a corresponding buffer bucket and a corresponding cache slot for each permutated data block.
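A sketch of the client-side step, assuming √N = 4: the client receives one data bucket of √N blocks, applies a random permutation (Fisher-Yates here), and assigns each permuted block to a buffer bucket and cache slot. The assignment rule shown, block ID modulo the bucket count, is an illustrative stand-in for the scheme's actual mapping.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SQRT_N 4   /* bucket size and bucket count, per the √N layout */

static void permute(int *blocks, int n) {       /* Fisher-Yates shuffle */
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = blocks[i]; blocks[i] = blocks[j]; blocks[j] = t;
    }
}

int main(void) {
    srand((unsigned)time(NULL));
    int bucket[SQRT_N] = { 10, 11, 12, 13 };    /* one received data bucket */
    permute(bucket, SQRT_N);                    /* client-side permutation */
    for (int i = 0; i < SQRT_N; i++)            /* illustrative assignment */
        printf("block %d -> buffer bucket %d (cache slot %d)\n",
               bucket[i], bucket[i] % SQRT_N, bucket[i] % SQRT_N);
    return 0;
}
```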