Abstract:
A storage device includes: a first memory subsystem including a first nonvolatile memory device (NVM), a first storage controller configured to control operation of the first NVM, and a first resource; and a second memory subsystem including a second NVM, a second storage controller configured to control operation of the second NVM, and a second resource, wherein the first resource is a shared resource useable by the second memory subsystem, and the second resource is a shared resource useable by the first memory subsystem.
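A minimal sketch of the cross-subsystem sharing idea, assuming the shared resource is a buffer pool; the abstract does not name the resource type, and all class and function names below are hypothetical:

```cpp
#include <cstddef>

// Hypothetical shared resource: a fixed-size buffer pool owned by one
// memory subsystem but usable by its peer subsystem as well.
class BufferPool {
public:
    explicit BufferPool(std::size_t buffers) : free_(buffers) {}
    bool acquire() { if (free_ == 0) return false; --free_; return true; }
    void release() { ++free_; }
private:
    std::size_t free_;
};

class MemorySubsystem {
public:
    explicit MemorySubsystem(std::size_t buffers) : local_(buffers) {}
    void set_peer(MemorySubsystem* peer) { peer_ = peer; }

    // Try the subsystem's own resource first; fall back to the peer's
    // shared resource when the local pool is exhausted.
    bool allocate_buffer() {
        if (local_.acquire()) return true;
        return peer_ != nullptr && peer_->local_.acquire();
    }
private:
    BufferPool local_;
    MemorySubsystem* peer_ = nullptr;
};

int main() {
    MemorySubsystem first(2), second(2);
    first.set_peer(&second);
    second.set_peer(&first);
    first.allocate_buffer();
    first.allocate_buffer();
    bool borrowed = first.allocate_buffer();  // served by the peer's pool
    return borrowed ? 0 : 1;
}
```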
Abstract:
A system and method are disclosed for managing data in a non-volatile memory. The system may include a non-volatile memory having multiple non-volatile memory sub-drives, including a staging sub-drive that receives all data from a host and a plurality of other sub-drives each associated with a respective data temperature range. A controller of the memory system is configured to route all incoming host data only to the staging sub-drive, and during garbage collection each individual piece of valid data from a selected source block in a selected source sub-drive is routed to a respective one of the sub-drives. The method may include routing host data only to the staging sub-drive, and relocating valid data only to sub-drives other than the staging sub-drive based on a determined temperature of the valid data and the unique temperature range associated with each of those sub-drives.
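To illustrate the routing split, here is a sketch under the assumption that data temperature is an integer score; the band boundaries, the `kSubDrives` table, and the `route_*` function names are invented for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sub-drive table: index 0 is the staging sub-drive, which
// receives all host writes; the others each cover a temperature band.
struct SubDrive {
    std::string name;
    uint32_t min_temp;  // inclusive
    uint32_t max_temp;  // inclusive
};

const std::vector<SubDrive> kSubDrives = {
    {"staging", 0, 0},   // all incoming host data lands here first
    {"hot",     8, 15},
    {"warm",    4, 7},
    {"cold",    0, 3},
};

// Host writes are routed only to the staging sub-drive.
std::size_t route_host_write() { return 0; }

// During garbage collection, each valid piece of data from the selected
// source block is relocated to the sub-drive whose temperature range
// contains the data's determined temperature.
std::size_t route_gc_relocation(uint32_t data_temperature) {
    for (std::size_t i = 1; i < kSubDrives.size(); ++i) {
        if (data_temperature >= kSubDrives[i].min_temp &&
            data_temperature <= kSubDrives[i].max_temp)
            return i;
    }
    return kSubDrives.size() - 1;  // default to the coldest sub-drive
}
```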
Abstract:
Resolving coherency issues inherent in sharing a distributed cache is described. A chip multiprocessor may include at least first and second processing clusters, each having multiple cores of a processor, multiple cache slices co-located with the multiple cores, and a memory controller (MC). The processor stores directory information in a memory coupled to the processor to indicate cluster cache ownership of a first address space to the first cluster. In response to a request to change the cluster cache ownership of the first address space, the processor may remap first lines of first cache slices, corresponding to the first address space, to second lines in second cache slices of the second cluster, and update the directory information (e.g., a state of the first cache lines) to change the cluster cache ownership of the first address space to the second cluster. One of the MCs may manage such updating of the directory.
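A software sketch of the ownership change, assuming a directory keyed by address-range identifiers; the `ClusterDirectory` and `RangeState` names are hypothetical, and the actual line migration between cache slices is only indicated by a comment:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical directory state: for each address range, the owning cluster
// and the lines currently cached for it in that cluster's slices.
struct RangeState {
    int owner_cluster;
    std::vector<uint64_t> cached_lines;
};

class ClusterDirectory {
public:
    void install(uint64_t range_id, int owner) { ranges_[range_id] = {owner, {}}; }

    int owner(uint64_t range_id) const { return ranges_.at(range_id).owner_cluster; }

    // Change cluster cache ownership of an address range: the lines cached in
    // the old owner's slices are remapped to slices of the new owner, and the
    // directory entry is updated so later requests are steered there.
    void change_owner(uint64_t range_id, int new_owner) {
        RangeState& s = ranges_.at(range_id);
        // In hardware, a memory controller would migrate each cached line
        // from the first cluster's slices to the second cluster's slices here.
        s.owner_cluster = new_owner;  // directory update (line-state change)
    }
private:
    std::unordered_map<uint64_t, RangeState> ranges_;
};
```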
Abstract:
A tiled storage array provides a reduction in access latency for frequently-accessed values by re-organizing to always move a requested value to the front-most storage element of the array. The previous occupant of the front-most location is moved backward according to a systolic pulse, and the new occupant is moved forward according to the systolic pulse, preserving the uniqueness of the stored values within the array and providing for multiple in-flight access requests within the array. The placement heuristic that moves the values according to the systolic pulse can be implemented by control logic within identical tiles, so that the heuristic operates according to the position of each tile within the array. The movement of the values can be performed via only next-neighbor connections of adjacent tiles within the array.
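A software analogue of the move-to-front placement heuristic, assuming the tile array can be modeled as a linear sequence whose element 0 is the front-most tile; the `MoveToFrontArray` class is an illustration, not the hardware design:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// The array is modeled as a vector of values; index 0 is the front-most
// tile. On each access the requested value moves to the front, and the
// displaced values shift backward one tile at a time, so only
// next-neighbor moves between adjacent tiles are required.
class MoveToFrontArray {
public:
    explicit MoveToFrontArray(std::vector<int> values)
        : tiles_(std::move(values)) {}

    // Returns true if the value was found; afterwards it occupies tile 0.
    bool access(int value) {
        auto it = std::find(tiles_.begin(), tiles_.end(), value);
        if (it == tiles_.end()) return false;
        // Every value in front of the hit slides back one position and the
        // hit moves to the front, preserving uniqueness of stored values.
        std::rotate(tiles_.begin(), it, it + 1);
        return true;
    }
private:
    std::vector<int> tiles_;
};
```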
Abstract:
A cache memory system includes cache memories of at least one layer, at least one of the cache memories having a data cache to store data and a tag to store an address of data stored in the data cache, and a first address conversion information storage to store entry information that includes address conversion information for converting virtual addresses issued by a processor to physical addresses, and cache presence information that indicates whether data corresponding to the converted physical address is stored in a specific cache memory of at least one layer among the cache memories.
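A rough sketch of an entry that combines translation with cache-presence information, assuming page-granularity translation and a single tracked cache layer; the `ConversionEntry` fields and class name are assumptions:

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical entry pairing a virtual-to-physical translation with a
// presence bit for one specific cache layer.
struct ConversionEntry {
    uint64_t physical_page;
    bool present_in_l2;  // cache presence information for the tracked layer
};

class AddressConversionStore {
public:
    void insert(uint64_t virtual_page, uint64_t physical_page, bool in_l2) {
        entries_[virtual_page] = {physical_page, in_l2};
    }

    // Translate a virtual page and also report whether the corresponding
    // data is already held in the tracked cache layer, so that layer can be
    // bypassed when the presence information says the data is absent.
    std::optional<ConversionEntry> lookup(uint64_t virtual_page) const {
        auto it = entries_.find(virtual_page);
        if (it == entries_.end()) return std::nullopt;
        return it->second;
    }
private:
    std::unordered_map<uint64_t, ConversionEntry> entries_;
};
```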
Abstract:
A method, computer program product, and system for maintaining a proper ordering of a data stream that includes two or more sequentially ordered stores, the data stream being moved to a destination memory device, the two or more sequentially ordered stores including at least a first store and a second store, wherein the first store is rejected by the destination memory device. A computer-implemented method includes sending the first store to the destination memory device. A conditional request is sent to the destination memory device for approval to send the second store to the destination memory device, the conditional request being dependent upon successful completion of the first store. The second store is cancelled responsive to receiving a reject response corresponding to the first store.
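A simplified sketch of the ordering rule, assuming stores are identified by integers and that a rejected send leaves the dependent store queued for a later retry; the `Destination` interface is hypothetical:

```cpp
#include <queue>

// Hypothetical destination interface: send() returns false on a reject.
struct Destination {
    virtual bool send(int store_id) = 0;
    virtual ~Destination() = default;
};

// Move a stream of sequentially ordered stores to the destination. Sending
// each later store is conditional on the store before it completing; when a
// store is rejected, the dependent store is cancelled (left queued) so the
// ordering of the stream is never violated.
void drain_ordered_stores(std::queue<int>& stream, Destination& dest) {
    while (!stream.empty()) {
        int first = stream.front();
        if (!dest.send(first)) {
            // Reject response for 'first': the conditional request for the
            // next store is cancelled; both are resent on a later attempt.
            return;
        }
        stream.pop();  // first store completed; the next store may be sent
    }
}
```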
Abstract:
A mechanism is provided for a read- and write-aware cache. The mechanism partitions a large cache into a read-often region and a write-often region and considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks; a frequently read cache line is placed in one of the closer banks. The size ratio between the read-often and write-often regions may be static or dynamic, and the boundary between the two regions may be distinct or fuzzy.
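A minimal sketch of the placement decision, assuming per-line read and write counters and a linear bank ordering by distance; the names and the tie-breaking rule are assumptions:

```cpp
#include <cstdint>

// Hypothetical per-line access counters used to classify a cache line.
struct LineStats {
    uint32_t reads = 0;
    uint32_t writes = 0;
};

// Bank indices: lower indices are closer (lower latency), higher are farther.
// A frequently read line goes to a close bank in the read-often region; a
// frequently written line goes to a far bank in the write-often region.
int choose_bank(const LineStats& s, int close_bank, int far_bank) {
    return (s.reads >= s.writes) ? close_bank : far_bank;
}
```

A fuzzy boundary could be modeled by comparing the read/write ratio against a pair of thresholds instead of the single comparison above.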
Abstract:
A computer-implemented method for a computerized system having at least a first processor and a second processor, where each of the processors is operatively interconnected to a memory storing a set of data to be processed by a processor. The method includes monitoring data accessed by the first processor while executing and, if the second processor is at a shorter distance than the first processor from the monitored data, instructing the system to interrupt execution at the first processor and resume the execution at the second processor.
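A toy sketch of the migration decision, assuming a static hop-count table stands in for the processor-to-data distance; the table values and function names are placeholders:

```cpp
#include <array>

// Hypothetical hop-count table: kDistance[p][node] is the distance from
// processor p to the memory node holding the monitored data.
constexpr std::array<std::array<int, 2>, 2> kDistance = {{{0, 2}, {2, 0}}};

// Monitor the data the task touches while running on the first processor;
// if the second processor is closer to that data, interrupt execution on the
// first processor and resume it on the second.
int choose_processor(int first, int second, int monitored_node) {
    return (kDistance[second][monitored_node] < kDistance[first][monitored_node])
               ? second
               : first;
}

int main() {
    // The task runs on processor 0, but its monitored data lives on node 1,
    // which processor 1 is closer to, so execution migrates to processor 1.
    return choose_processor(0, 1, /*monitored_node=*/1) == 1 ? 0 : 1;
}
```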
Abstract:
A method for determining whether to store binary information in a fast way or a slow way of a cache is disclosed. The method includes receiving a block of binary information to be stored in a cache memory having a plurality of ways. The plurality of ways includes a first subset of ways and a second subset of ways, wherein a cache access by a first execution core from one of the first subset of ways has a lower latency than a cache access from one of the second subset of ways. The method further includes determining, based on a predetermined access latency and one or more parameters associated with the block of binary information, whether to store the block of binary information into one of the first subset of ways or one of the second subset of ways.
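A sketch of the way-selection step, assuming the block's parameters reduce to an expected reuse count and a latency-critical flag, and that a reuse threshold stands in for the predetermined access latency; all names and constants are assumptions:

```cpp
#include <cstdint>

// Hypothetical parameters associated with a block of binary information.
struct BlockInfo {
    uint32_t expected_accesses;  // how often the requesting core reuses it
    bool latency_critical;       // e.g., instruction fetch vs. bulk data
};

// Ways 0..kFastWays-1 form the first subset (lower latency for the
// requesting core); the remaining ways form the second, slower subset.
constexpr int kFastWays = 4;
constexpr int kTotalWays = 8;

// Decide whether the block belongs in the fast subset of ways.
bool place_in_fast_way(const BlockInfo& b, uint32_t reuse_threshold) {
    return b.latency_critical || b.expected_accesses >= reuse_threshold;
}

// Pick a concrete way index within the chosen subset.
int choose_way(const BlockInfo& b, uint32_t reuse_threshold,
               int next_fast, int next_slow) {
    return place_in_fast_way(b, reuse_threshold)
               ? next_fast % kFastWays
               : kFastWays + next_slow % (kTotalWays - kFastWays);
}
```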
Abstract:
A tiled memory and a method of power management within the tiled memory provide efficient use of energy within a computer storage device, which may be a spiral cache memory. The tiled memory is power-managed by placing tiles in a power-saving state, which may be a state in which storage circuits are powered-down and network circuits are powered-up, so that for serially-connected tiles, information can still be forwarded by a tile in the power-saving state. The tiles may be power-managed under direction of a central controller, which sends commands to the tiles to enter and leave the power-saving state, or the tiles may self-manage their power-saving state according to activity measured at the individual tiles. Activity may be measured at the tiles of a spiral cache by comparing a hit rate and a push-back rate to corresponding thresholds. The measurements may be used with either tile-managed or centrally-managed techniques.
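A sketch of the self-managed variant, assuming each tile compares its own hit rate and push-back rate against fixed thresholds; the field names and the idle rule are assumptions:

```cpp
// Hypothetical self-managed power policy for one tile of a spiral cache:
// the tile compares its measured activity against thresholds and enters or
// leaves the power-saving state accordingly. In the power-saving state the
// storage circuits are powered-down while the network circuits stay
// powered-up, so the tile can still forward values to its neighbors.
struct Tile {
    double hit_rate = 0.0;        // fraction of accesses that hit this tile
    double push_back_rate = 0.0;  // rate of values pushed back into this tile
    bool power_saving = false;

    void update_power_state(double hit_threshold, double push_threshold) {
        bool idle = hit_rate < hit_threshold && push_back_rate < push_threshold;
        power_saving = idle;  // storage off, network on, when the tile is idle
    }
};
```

In the centrally-managed variant, the same comparison would instead be performed by a central controller that issues enter/leave commands to the tiles.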