Abstract:
A computing system includes a memory storage unit, having memory blocks, configured as a memory cache to store values of key-value pairs; and a device control unit, coupled to the memory storage unit, configured to: identify eviction targets from key-value eviction candidates in a key-value registry based on an eviction policy; calculate an associated eviction count of associated eviction candidates within the same instance of the memory blocks as the eviction targets; select, as an erase block, the memory block associated with the highest value of the associated eviction count; and interface with the memory storage unit to perform an erase operation on the erase block.
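A minimal sketch of the block-selection step described above, assuming a registry that maps each key to the memory block holding its value; the names (select_erase_block, registry, targets) and the data layout are illustrative assumptions, not details from the abstract.

```python
from collections import Counter

def select_erase_block(registry, eviction_targets):
    """Pick the memory block holding the most eviction candidates.

    registry maps each key to the memory block storing its value;
    eviction_targets is the set of keys chosen by the eviction policy.
    """
    counts = Counter(registry[key] for key in eviction_targets if key in registry)
    if not counts:
        return None
    block, _ = counts.most_common(1)[0]
    return block

# Hypothetical usage: keys A..E mapped to blocks 0 and 1.
registry = {"A": 0, "B": 1, "C": 1, "D": 0, "E": 1}
targets = {"B", "C", "E"}                      # chosen by the eviction policy
print(select_erase_block(registry, targets))   # -> block 1, then erase it
```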
Abstract:
An electronic system includes: a master controller configured to: monitor an execution of a user program, and generate a pre-fetching hint; a cluster node, coupled to the master controller, configured to be a pre-processing client; and a local storage, coupled to the cluster node, configured to store input data for the user program; wherein the master controller is further configured to transfer the pre-fetching hint to the pre-processing client for pre-fetching a split of the input data from the local storage based on the pre-fetching hint.
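A minimal sketch of how a master controller's hint might drive a pre-processing client; the hint layout (file path plus byte range of the split) and the class names are assumptions made for illustration only.

```python
from dataclasses import dataclass
import os
import tempfile

@dataclass
class PrefetchHint:
    path: str     # file in local storage holding the input data
    offset: int   # start of the split, in bytes
    length: int   # size of the split, in bytes

class PreprocessingClient:
    """Cluster node that pre-fetches a split of the input data ahead of use."""
    def __init__(self):
        self.cache = {}

    def prefetch(self, hint: PrefetchHint) -> None:
        with open(hint.path, "rb") as f:
            f.seek(hint.offset)
            self.cache[(hint.path, hint.offset)] = f.read(hint.length)

# Hypothetical usage with a temporary file standing in for local storage.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"0123456789")
client = PreprocessingClient()
client.prefetch(PrefetchHint(path=tmp.name, offset=2, length=4))
print(client.cache[(tmp.name, 2)])   # b"2345"
os.remove(tmp.name)
```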
Abstract:
A storage device includes an application container containing applications, each of which runs in one or more namespaces; flash memory to store data; a host interface to manage communications between the storage device and a host machine; a flash translation layer to translate a first address received from the host machine into a second address in the flash memory; a flash interface to access the data from the second address in the flash memory; and a polymorphic device kernel including an in-storage monitoring engine. The polymorphic device kernel receives a plurality of packets directed to an application running on the storage device and provides the flash interface based on a namespace associated with the plurality of packets. The in-storage monitoring engine determines a dynamic characteristic of the storage device at run-time by matching a profiling command received from the host machine against a performance table.
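One way to picture the profiling-command matching is a table lookup, sketched below; the command names and fixed return values are assumptions, and a real device would measure the dynamic characteristics at run-time rather than return constants.

```python
# Assumed performance table: profiling command -> measurement hook.
PERFORMANCE_TABLE = {
    "latency": lambda: 85,       # microseconds; a real engine would measure this
    "bandwidth": lambda: 3200,   # MB/s; a real engine would measure this
    "retention": lambda: 365,    # days; a real engine would measure this
}

def handle_profiling_command(command: str):
    """Match a profiling command from the host against the performance table."""
    probe = PERFORMANCE_TABLE.get(command)
    if probe is None:
        raise ValueError(f"unknown profiling command: {command}")
    return probe()

print(handle_profiling_command("bandwidth"))   # -> 3200
```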
Abstract:
A storage device (220) is described. The storage device (220) may store data in a storage memory (445), and may have a host interface (420) to manage communications between the storage device (220) and a host machine (110, 115, 120, 125, 130). The storage device (220) may also include a translation layer (430) to translate addresses between the host machine (110, 115, 120, 125, 130) and the storage memory (445), and a storage interface (440) to access data from the storage memory (445). An in-storage monitoring engine (425) may determine characteristics (605, 610, 615) of the storage device (220), such as latency (605), bandwidth (610), and retention (615).
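A minimal sketch of how such an in-storage monitoring engine might derive latency and bandwidth from recorded I/O samples; the sampling scheme, class name, and sample values are assumptions, and retention tracking is omitted for brevity.

```python
class MonitoringEngine:
    """Derives latency and bandwidth from I/O samples recorded by the device."""
    def __init__(self):
        self.samples = []   # (bytes_transferred, seconds_elapsed) per I/O

    def record(self, nbytes: int, seconds: float) -> None:
        self.samples.append((nbytes, seconds))

    def latency(self) -> float:
        """Average seconds per I/O over the recorded samples."""
        return sum(s for _, s in self.samples) / len(self.samples)

    def bandwidth(self) -> float:
        """Average bytes per second over the recorded samples."""
        return sum(b for b, _ in self.samples) / sum(s for _, s in self.samples)

engine = MonitoringEngine()
engine.record(4096, 0.0001)     # hypothetical 4 KiB read in 100 us
engine.record(8192, 0.00015)    # hypothetical 8 KiB read in 150 us
print(engine.latency(), engine.bandwidth())
```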
Abstract:
An in-memory cluster computing framework node is described. The node includes storage devices having various priorities. The node also includes a resource monitor to monitor the operation of the storage devices. The node also includes a resource scheduler. When the resource monitor indicates that a storage device is at or approaching saturation, the resource scheduler can migrate data from that storage device to another storage device of lower priority.
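A minimal sketch of the migration decision, assuming a 90% saturation threshold and a simple device model with (size, label) data items; the threshold, field names, and sample values are illustrative choices, not details from the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class StorageDevice:
    name: str
    priority: int            # higher value = higher-priority tier
    capacity: int            # arbitrary capacity units
    used: int = 0
    data: list = field(default_factory=list)   # (size, label) items

SATURATION = 0.9             # assumed "at or approaching saturation" threshold

def maybe_migrate(devices):
    """Move one item from a saturated device to the next lower-priority device."""
    ordered = sorted(devices, key=lambda d: d.priority, reverse=True)
    for source, target in zip(ordered, ordered[1:]):
        if source.used / source.capacity >= SATURATION and source.data:
            size, label = source.data.pop()
            source.used -= size
            target.data.append((size, label))
            target.used += size
            return f"migrated {label} from {source.name} to {target.name}"
    return "no migration needed"

ram = StorageDevice("ram", priority=2, capacity=10, used=9, data=[(3, "partition_7")])
ssd = StorageDevice("ssd", priority=1, capacity=100)
print(maybe_migrate([ram, ssd]))   # -> migrated partition_7 from ram to ssd
```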
Abstract:
A solid state drive has the capability to select physical flash memory blocks, and erasure and programming methods, according to requirements of an application using storage in the solid state drive. A wear-out tracker in the solid state drive counts programming and erase cycles, and a raw bit error rate tracker in the solid state drive monitors raw bit errors in data read from the solid state drive. The application provides, to the solid state drive, requirements on an allowable retention time, corresponding to the anticipated storage time of data stored by the application, and on an average response time, corresponding to programming and read times for the flash memory. The solid state drive identifies physical flash memory blocks suitable for meeting the requirements, and allocates storage space to the application from among the identified physical flash memory blocks.
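A minimal sketch of matching application requirements to physical blocks; the formulas relating P/E cycles and raw bit error rate to retention and response time are placeholder models, not the drive's actual characterization, and all names and numbers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FlashBlock:
    block_id: int
    pe_cycles: int              # program/erase cycles from the wear-out tracker
    raw_bit_error_rate: float   # from the raw bit error rate tracker

def estimated_retention_days(block: FlashBlock) -> float:
    # Placeholder model: retention degrades with wear and with observed errors.
    return max(0.0, 365.0 - 0.1 * block.pe_cycles - 1e5 * block.raw_bit_error_rate)

def estimated_response_us(block: FlashBlock) -> float:
    # Placeholder model: program/read time grows slowly with wear.
    return 50.0 + 0.01 * block.pe_cycles

def allocate(blocks, needed, min_retention_days, max_response_us):
    """Identify blocks meeting the application's requirements and allocate from them."""
    suitable = [b for b in blocks
                if estimated_retention_days(b) >= min_retention_days
                and estimated_response_us(b) <= max_response_us]
    return suitable[:needed] if len(suitable) >= needed else []

pool = [FlashBlock(i, pe_cycles=1000 * i, raw_bit_error_rate=1e-6 * i) for i in range(8)]
chosen = allocate(pool, needed=2, min_retention_days=180, max_response_us=70)
print([b.block_id for b in chosen])   # -> [0, 1]
```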
Abstract:
An electronic system includes: a storage interface configured to receive system information; and a storage control unit, coupled to the storage interface, configured to implement a preprocessing block for partitioning data based on the system information, and a learning block for processing a portion of the data for distributing machine learning processes.
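A minimal sketch of the preprocessing/learning split, assuming the "system information" is simply a worker count and the learning step is a per-partition mean; both are stand-ins for illustration, not the disclosed blocks.

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(data, num_partitions):
    """Partition the data based on system information (here: a worker count)."""
    size = max(1, len(data) // num_partitions)
    return [data[i:i + size] for i in range(0, len(data), size)]

def learn(partial_data):
    """Stand-in for one distributed machine learning process on a partition."""
    return sum(partial_data) / len(partial_data)

data = list(range(100))
partitions = preprocess(data, num_partitions=4)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(learn, partitions))
print(results)   # one partial result per partition
```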
Abstract:
Embodiments are disclosed for adaptive power reduction for a solid-state storage device to dynamically control power consumption. Aspects of the embodiments include: receiving a power limit command from a host; receiving power consumption feedback; using the power limit command and the power consumption feedback to calculate a new degree of parallelism; and using the new degree of parallelism to control one or more of: i) processor parallelism, including activation of different numbers of processors; ii) memory parallelism, including memory pool length; and iii) nonvolatile memory parallelism, including activation of different numbers of nonvolatile memory devices.
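A minimal sketch of the feedback loop, assuming a simple step-up/step-down rule with a 10% headroom band; the rule, bounds, and wattages are illustrative assumptions, not the disclosed controller.

```python
def update_parallelism(current_dop, power_limit_w, measured_power_w,
                       min_dop=1, max_dop=16):
    """Raise or lower the degree of parallelism based on power headroom."""
    if measured_power_w > power_limit_w:
        new_dop = current_dop - 1     # shed a processor / memory pool slot / NVM device
    elif measured_power_w < 0.9 * power_limit_w:
        new_dop = current_dop + 1     # headroom available, add parallelism
    else:
        new_dop = current_dop         # within the band, hold steady
    return max(min_dop, min(max_dop, new_dop))

# Hypothetical feedback loop: 8 W limit, measured consumption drifting downward.
dop = 8
for measured_w in (9.5, 9.1, 8.3, 7.0, 6.9):
    dop = update_parallelism(dop, power_limit_w=8.0, measured_power_w=measured_w)
    print(dop)   # 7, 6, 5, 6, 7
```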
Abstract:
An embodiment includes a system, comprising: a processor; a plurality of memories; and a control circuit coupled to the processor and the memories, and configured to: receive a power limit; measure a power consumption of the processor and the memories; and iteratively change a plurality of operating parameters of the processor and the memories to optimize an objective function associated with the system, driving the system to operating states where the power consumption is less than or equal to the power limit.
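A minimal sketch of the constrained search, assuming two operating parameters (CPU frequency and active memory channels), a linear power model, and a throughput proxy as the objective; all of these are assumptions, and the abstract does not specify the actual search strategy, so a simple sweep stands in for it here.

```python
import itertools

def power_w(freq_ghz, channels):
    return 2.0 * freq_ghz + 1.5 * channels    # assumed linear power model (watts)

def objective(freq_ghz, channels):
    return freq_ghz * channels                # assumed throughput proxy

def optimize(power_limit_w, freqs=(1.0, 1.5, 2.0, 2.5), channels=(1, 2, 3, 4)):
    """Sweep operating states and keep the best objective under the power limit."""
    best_state, best_score = None, float("-inf")
    for f, c in itertools.product(freqs, channels):
        if power_w(f, c) <= power_limit_w and objective(f, c) > best_score:
            best_state, best_score = (f, c), objective(f, c)
    return best_state, best_score

print(optimize(power_limit_w=8.0))   # -> ((2.5, 2), 5.0)
```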