Abstract:
A method that includes evaluating, with a controller, local error detection (LED) information in response to a first memory access operation is disclosed. The LED information is evaluated per cache line segment of data associated with a rank of a memory. The method further includes determining an error in at least one of the cache line segments based on an error detection code and determining whether global error correction (GEC) data for a first cache line associated with the at least one cache line segment is stored in a GEC cache in the controller. The method also includes correcting the first cache line associated with the at least one cache line segment based on the GEC data retrieved from the GEC cache in the controller without accessing GEC data from a memory.
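For illustration only, the read path described above might look like the following Python sketch, in which byte-wise parity stands in for the LED code and an XOR of the segments stands in for the GEC data; the helper names and toy codes are assumptions, not details from the abstract:

def parity(segment: bytes) -> int:
    # Stand-in LED code: XOR of all bytes in the segment.
    p = 0
    for b in segment:
        p ^= b
    return p

def read_line(segments, led, gec_cache, line_addr):
    # segments: mutable list of equal-length byte strings
    # led: stored per-segment parity values
    # gec_cache: controller-resident dict mapping line address -> XOR of all segments
    bad = [i for i, s in enumerate(segments) if parity(s) != led[i]]
    if not bad:
        return b"".join(segments)            # no error detected by the LED check
    if line_addr not in gec_cache or len(bad) > 1:
        return None                          # would need GEC data from memory
    # Rebuild the failing segment from the cached GEC word and the good segments,
    # without touching GEC data stored in memory.
    rebuilt = gec_cache[line_addr]
    for i, s in enumerate(segments):
        if i != bad[0]:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, s))
    segments[bad[0]] = rebuilt
    return b"".join(segments)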
Abstract:
A system can include an optical multiplexer to combine a plurality of optical input signals having respective wavelengths into a wide-channel optical input signal that is provided to an input channel. The system also includes a photonic packet switch comprising a switch core and a plurality of ports defining a switch radix of the photonic packet switch. The input channel and an output channel can be associated with one of the plurality of ports. The photonic packet switch can process the wide-channel optical input signal and can generate a wide-channel optical output signal that is provided to the output channel. The system further includes an optical demultiplexer to separate the wide-channel optical output signal into a plurality of optical output signals having respective wavelengths. The optical multiplexer and the optical demultiplexer can collectively provide the system with a radix greater than the switch radix.
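As a rough illustration of the last sentence, suppose each switch port carries a wide channel built from several wavelengths; the numbers below are placeholders, and the proportional scaling is an assumption rather than something the abstract specifies:

switch_radix = 8            # hypothetical number of ports on the photonic switch
wavelengths_per_port = 4    # hypothetical wavelengths combined into each wide channel
system_radix = switch_radix * wavelengths_per_port
print(system_radix)         # 32 single-wavelength endpoints, greater than the switch radix of 8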
Abstract:
A method of operating a memory unit during a memory access operation is disclosed. The memory unit includes a configuration of N data chips. A line of data stored in the memory unit is divided, with a controller, into a first portion and a second portion. The first portion of the line of data is encoded, with an outer code encoder, to generate an outer code output. The second portion of the line of data and the outer code output from the outer code encoder are encoded, with an inner code encoder, to generate an inner code output. A first layer of protection for the line of data is generated based on the inner code output and is stored to the memory unit, where the first layer of protection includes local error detection (LED) information combined with the line of data. A second layer of protection for the line of data is generated based on the first layer of protection and is stored to the memory unit. A decoding operation to retrieve the line of data is performed at the controller.
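A toy Python sketch of a two-stage (outer/inner) encoding of this general shape is shown below, using a simple modular checksum for the outer code and byte-wise parity for the inner code; the actual codes, portion sizes, and layouts are not given by the abstract, so everything here is an illustrative stand-in:

def outer_encode(data: bytes) -> bytes:
    # Stand-in outer code: a 1-byte modular checksum.
    return bytes([sum(data) % 256])

def inner_encode(second: bytes, outer_out: bytes) -> bytes:
    # Stand-in inner code: byte-wise XOR parity over the second portion
    # plus the outer-code output.
    p = 0
    for b in second + outer_out:
        p ^= b
    return bytes([p])

def encode_line(line: bytes):
    half = len(line) // 2
    first, second = line[:half], line[half:]
    outer_out = outer_encode(first)
    inner_out = inner_encode(second, outer_out)
    tier1 = line + inner_out            # LED-style information combined with the line
    tier2 = outer_encode(tier1)         # second layer derived from the first layer
    return tier1, tier2

tier1, tier2 = encode_line(b"example cache line data")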
Abstract:
A memory region stores a data structure that contains a mapping between a virtual address space and a physical address space of a memory. A portion of the mapping is cached in a cache memory. In response to a miss in the cache memory responsive to a lookup of a virtual address of a request, an indication is sent to a buffer device. In response to the indication, a hardware controller on the buffer device performs a lookup of the data structure in the memory region to find a physical address corresponding to the virtual address.
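A minimal Python sketch of that translation path follows: a small cache of virtual-to-physical mappings is checked first, and on a miss the lookup is handed to a controller that walks the full mapping structure held in a memory region. The flat dictionary standing in for that structure, and all names below, are assumptions for illustration:

class BufferDeviceController:
    def __init__(self, mapping_region):
        self.mapping_region = mapping_region   # full virtual -> physical mapping

    def lookup(self, virtual_page):
        # Hardware walk of the in-memory mapping on behalf of the host.
        return self.mapping_region.get(virtual_page)

def translate(virtual_addr, tlb_cache, buffer_ctrl, page_size=4096):
    vpage, offset = divmod(virtual_addr, page_size)
    if vpage in tlb_cache:                     # hit in the cached portion of the mapping
        return tlb_cache[vpage] * page_size + offset
    ppage = buffer_ctrl.lookup(vpage)          # miss: indicate to the buffer device
    if ppage is not None:
        tlb_cache[vpage] = ppage               # install for future hits
        return ppage * page_size + offset
    return None                                # unmapped in this sketch

ctrl = BufferDeviceController({0x12: 0x9A})
print(hex(translate(0x12345, {}, ctrl)))       # -> 0x9a345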
Abstract:
A method for performing memory operations is provided. One or more processors can determine that at least a portion of data stored in a cache memory of the one or more processors is to be stored in a main memory. One or more ranges of addresses of the main memory are determined that correspond to a plurality of cache lines in the cache memory. A set of cache lines corresponding to addresses in the one or more ranges of addresses is identified, so that data stored in the identified set can be stored in the main memory. For each cache line of the identified set having data that has been modified since that cache line was first loaded into the cache memory or since a previous store operation, data stored in that cache line is caused to be stored in the main memory.
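A simple Python sketch of range-based write-back in this spirit is shown below: only cache lines whose addresses fall in the given ranges, and whose contents are dirty, are written to main memory. The CacheLine structure and function names are hypothetical:

from dataclasses import dataclass

@dataclass
class CacheLine:
    addr: int
    data: bytes
    dirty: bool   # modified since load or since the previous store operation

def write_back_ranges(cache_lines, ranges, main_memory):
    # ranges: list of (start, end) address intervals, end exclusive.
    for line in cache_lines:
        in_range = any(start <= line.addr < end for start, end in ranges)
        if in_range and line.dirty:
            main_memory[line.addr] = line.data   # store to main memory
            line.dirty = False

memory = {}
lines = [CacheLine(0x100, b"A", True), CacheLine(0x200, b"B", False)]
write_back_ranges(lines, [(0x100, 0x300)], memory)
print(memory)   # only the dirty line at 0x100 is written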
Abstract:
A non-volatile multi-level cell (“MLC”) memory device is disclosed. The memory device has an array of non-volatile memory cells, with each non-volatile memory cell storing multiple groups of bits. A row buffer in the memory device has multiple buffer portions, each buffer portion storing one or more bits from the memory cells, with the buffer portions having different read and write latencies and energies.
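An illustrative Python model of such a row buffer follows; each portion holds one group of bits and carries its own read/write latency and energy. All numeric values are placeholders, not figures from the abstract:

ROW_BUFFER_PORTIONS = {
    # portion name: (read_latency_ns, write_latency_ns, energy_pj)
    "fast_bits": (10, 20, 1.0),   # e.g. a bit group that can be sensed quickly
    "slow_bits": (25, 60, 3.5),   # e.g. a bit group that needs finer sensing
}

def access_cost(portion, is_write):
    read_ns, write_ns, energy = ROW_BUFFER_PORTIONS[portion]
    return (write_ns if is_write else read_ns), energy

print(access_cost("fast_bits", is_write=False))   # -> (10, 1.0)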
Abstract:
According to an example, a method for adaptive-granularity row buffer (AG-RB) caching may include determining whether to cache data to a RB cache, and adjusting, by a processor or a memory-side logic, an amount of the data to cache to the RB cache for different memory accesses, such as dynamic random-access memory (DRAM) accesses. According to another example, an AG-RB cache apparatus may include a 3D stacked DRAM including a plurality of DRAM dies, each including one or more DRAM banks, and a logic die including a RB cache. The AG-RB cache apparatus may further include a processor die including a memory controller having a predictor module to determine whether to cache data to the RB cache, and to adjust an amount of the data to cache to the RB cache for different DRAM accesses.
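A rough Python sketch of an adaptive-granularity decision follows: a predictor picks how much of a DRAM row to place in the RB cache for each access. The reuse-counting heuristic and the granularity values are assumptions for illustration only:

GRANULARITIES = (64, 256, 1024)   # bytes to cache: a line, a chunk, or more

class SimplePredictor:
    def __init__(self):
        self.hits_per_row = {}

    def record_access(self, row):
        self.hits_per_row[row] = self.hits_per_row.get(row, 0) + 1

    def choose_granularity(self, row):
        # More reuse observed for this row -> cache a larger portion of it.
        hits = self.hits_per_row.get(row, 0)
        if hits >= 8:
            return GRANULARITIES[2]
        if hits >= 2:
            return GRANULARITIES[1]
        return GRANULARITIES[0]

pred = SimplePredictor()
for _ in range(3):
    pred.record_access(row=42)
print(pred.choose_granularity(row=42))   # -> 256 after a few accesses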
Abstract:
A method for operating a memory unit is disclosed. The method includes encoding data from a cache line divided into a plurality of groups and generating a plurality of codewords. The method further includes storing local error detection (LED) data for the cache line, combined with the data of the cache line retrieved from a first portion of the codewords, across a plurality of chips in the memory unit to create a first tier of protection. The method also includes storing global error correction (GEC) data for the cache line, retrieved from a second portion of the codewords, across the plurality of chips to create a second tier of protection for the cache line. The method also includes receiving information corresponding to the first tier of protection, determining whether an error exists in the data of the cache line, decoding the data of the cache line, and outputting the data of the cache line at a controller.
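An illustrative Python layout for the two tiers is sketched below: data combined with LED information is striped across the chips as the first tier, and GEC data for the line is spread across the same chips as the second tier. The chip count and the stand-in codes are arbitrary placeholders:

NUM_CHIPS = 8

def stripe_across_chips(payload: bytes, num_chips: int = NUM_CHIPS):
    # Round-robin the payload bytes over the chips.
    chips = [bytearray() for _ in range(num_chips)]
    for i, b in enumerate(payload):
        chips[i % num_chips].append(b)
    return chips

cache_line = bytes(range(64))
led = bytes([sum(cache_line) % 256])                                   # stand-in LED data
gec = bytes(a ^ b for a, b in zip(cache_line[:32], cache_line[32:]))   # stand-in GEC data

tier1_chips = stripe_across_chips(cache_line + led)   # first tier of protection
tier2_chips = stripe_across_chips(gec)                # second tier of protection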
Abstract:
A multiple subarray-access memory system is disclosed. The system includes a plurality of memory chips, each including a plurality of subarrays, and a memory controller in communication with the memory chips, the memory controller to receive a memory fetch width (“MFW”) instruction during an operating system start-up and, responsive to the MFW instruction, to fix a quantity of the subarrays that will be activated in response to memory access requests.
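A toy Python model of that behavior is shown below: an MFW value received at start-up fixes how many subarrays are activated per access. The controller interface and subarray-selection rule are hypothetical:

class MemoryController:
    def __init__(self, subarrays_per_chip=16):
        self.subarrays_per_chip = subarrays_per_chip
        self.active_subarrays = subarrays_per_chip   # default: activate all

    def apply_mfw(self, mfw):
        # Received once during operating system start-up; fixes the activation quantity.
        self.active_subarrays = min(mfw, self.subarrays_per_chip)

    def subarrays_for_access(self, addr):
        # Only the fixed quantity of subarrays is activated per request.
        first = addr % self.subarrays_per_chip
        return [(first + i) % self.subarrays_per_chip
                for i in range(self.active_subarrays)]

mc = MemoryController()
mc.apply_mfw(4)
print(mc.subarrays_for_access(0x1000))   # activates 4 subarrays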
Abstract:
A detector detects, using an error code, an error in data stored in a memory. The detector determines whether the error is uncorrectable using the error code. In response to determining that the error is uncorrectable, an error handler associated with an application is invoked to handle the error in the data by recovering the data to an application-wide consistent state.
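A minimal Python sketch of that handling path follows: an error detected with the error code is corrected if possible, and otherwise a handler registered by the application is invoked to recover a consistent state. The checksum, correction stub, and checkpoint-style recovery below are assumptions for illustration:

def handle_memory_error(data, stored_code, compute_code, try_correct,
                        app_error_handler):
    if compute_code(data) == stored_code:
        return data                          # no error detected
    corrected = try_correct(data, stored_code)
    if corrected is not None:
        return corrected                     # correctable with the error code
    return app_error_handler(data)           # uncorrectable: let the application recover

# Example wiring with trivial stand-ins:
original = b"payload"
code = sum(original) % 256
corrupted = b"pAyload"
result = handle_memory_error(
    corrupted, code,
    compute_code=lambda d: sum(d) % 256,
    try_correct=lambda d, c: None,           # stand-in: error is uncorrectable
    app_error_handler=lambda d: b"recovered-from-checkpoint",
)
print(result)   # -> b'recovered-from-checkpoint'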