Abstract:
Commands associated with one or more logical block addresses are received via a host interface of a storage device. Based on a timing and sequence of the commands, an extent of a file that contains the logical block addresses is determined, the file being stored on the storage device. The logical block addresses are managed internally as a unitary data structure based on determining an association between the logical block addresses and the file.
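As an illustration only (not the claimed method), the sketch below infers extents from command timing and sequence in Python; the ExtentTracker class and the TIME_GAP_S and MAX_LBA_GAP thresholds are assumptions introduced for the example.

import time

TIME_GAP_S = 0.05   # assumed: commands farther apart in time start a new extent
MAX_LBA_GAP = 8     # assumed: largest LBA gap still treated as the same file

class ExtentTracker:
    def __init__(self):
        self.extents = []          # each extent: [start_lba, end_lba]
        self.last_lba = None
        self.last_time = None

    def on_command(self, lba, timestamp=None):
        now = timestamp if timestamp is not None else time.time()
        if (self.last_lba is not None
                and now - self.last_time <= TIME_GAP_S
                and 0 < lba - self.last_lba <= MAX_LBA_GAP):
            self.extents[-1][1] = lba          # nearby, in-order LBA: grow the current extent
        else:
            self.extents.append([lba, lba])    # otherwise start a new extent
        self.last_lba, self.last_time = lba, now

    def unitary_ranges(self):
        # The inferred extents let the associated LBAs be managed as one unit.
        return [tuple(e) for e in self.extents]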
Abstract:
A single device provides computing system-level functionality together with non-volatile storage controller functionality. These functionalities can share the same electronics.
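Purely as illustration of the shared-electronics idea, the hypothetical StorageComputeDevice class below exposes a block-storage interface and a compute entry point on one object; every name and method here is an assumption made for the sketch.

class StorageComputeDevice:
    def __init__(self, capacity_blocks, block_size=4096):
        self._media = {}                       # stands in for the non-volatile media
        self.block_size = block_size
        self.capacity_blocks = capacity_blocks

    # Storage-controller functionality
    def write_block(self, lba, data):
        self._media[lba] = bytes(data[:self.block_size])

    def read_block(self, lba):
        return self._media.get(lba, bytes(self.block_size))

    # Computing-system-level functionality sharing the same controller state
    def compute(self, func, lbas):
        return func([self.read_block(lba) for lba in lbas])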
Abstract:
A first input is processed via a first configuration of a neural network to produce a first output. The first configuration defines attributes of the neural network, such as connections between neural elements of the neural network. If the neural network requires a context switch to process a second input, a second configuration is applied to the neural network to change the attributes, and the second input is processed via the second configuration of the neural network to produce a second output.
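A minimal sketch of the context switch described above, assuming a single-layer network whose attributes are a weight matrix and a connection mask; the shapes, the config dictionaries, and the tanh activation are illustrative assumptions.

import numpy as np

class ConfigurableNetwork:
    def __init__(self, n_in, n_out):
        self.weights = np.zeros((n_out, n_in))
        self.mask = np.ones((n_out, n_in))     # which connections between neural elements exist

    def apply_configuration(self, config):
        # A context switch replaces the network's attributes.
        self.weights = config["weights"]
        self.mask = config["mask"]

    def process(self, x):
        return np.tanh((self.weights * self.mask) @ x)

net = ConfigurableNetwork(4, 2)
config_a = {"weights": np.random.randn(2, 4), "mask": np.ones((2, 4))}
config_b = {"weights": np.random.randn(2, 4), "mask": np.eye(2, 4)}

net.apply_configuration(config_a)
first_output = net.process(np.ones(4))

net.apply_configuration(config_b)              # context switch for the second input
second_output = net.process(np.full(4, 0.5))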
Abstract:
A compressed format is selected for storage of a matrix based on a computation to be performed using the matrix and on an architecture of a storage compute device on which the matrix is stored. Data of the matrix is stored on the storage compute device according to the compressed format. The computation is performed using the data via a computation unit that resides within the storage compute device.
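A minimal sketch of format selection driven by the intended computation and the device; the format names and the selection rules below are assumptions for the example, not the selection policy itself.

def select_format(computation, device_ram_bytes, matrix_shape, nnz):
    rows, cols = matrix_shape
    density = nnz / (rows * cols)
    dense_bytes = rows * cols * 8
    if density > 0.5 and dense_bytes <= device_ram_bytes:
        return "dense"                 # mostly full and it fits in device memory
    if computation == "matvec":
        return "csr"                   # row-oriented access pattern
    if computation == "matvec_transpose":
        return "csc"                   # column-oriented access pattern
    return "coo"                       # fallback for irregular access

fmt = select_format("matvec", device_ram_bytes=256 * 2**20,
                    matrix_shape=(100_000, 100_000), nnz=1_000_000)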
Abstract:
A logical block address space of a storage compute device is reserved for use in executing commands from a host. First data is received at a first portion of the logical block address space, the first data causing a computation to be performed by the storage compute device. Second data is sent to the host via a second portion of the logical block address space, the second data describing a result of the computation.
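A minimal host-side sketch of the two reserved windows; the device node /dev/sdX, the window offsets, and the block size are hypothetical, and error handling is omitted.

BLOCK = 4096
CMD_LBA = 0x100000        # assumed start of the reserved command window
RESULT_LBA = 0x200000     # assumed start of the reserved result window
DEV_PATH = "/dev/sdX"     # hypothetical block device node

def issue_computation(request_bytes):
    # First data written to the first portion triggers the computation.
    with open(DEV_PATH, "rb+", buffering=0) as dev:
        dev.seek(CMD_LBA * BLOCK)
        dev.write(request_bytes.ljust(BLOCK, b"\x00"))

def read_result():
    # Second data describing the result is read from the second portion.
    with open(DEV_PATH, "rb", buffering=0) as dev:
        dev.seek(RESULT_LBA * BLOCK)
        return dev.read(BLOCK)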
Abstract:
A data object is received at a storage compute device in response to a request from a host. A requirement of the data object is determined based on a computation to be performed on the data object, the requirement relating to at least a speed and capacity of media used to store the data object. A tier is selected from the storage compute device based on speed and capacity characteristics of the selected tier corresponding to the requirement of the data object. The data object is stored in the selected tier.
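A minimal sketch of matching a data object's requirement to a tier; the tier table, the speed scores, and the requirement rule are assumptions made for the example.

TIERS = [
    {"name": "dram",  "speed": 10, "capacity_gb": 4},
    {"name": "flash", "speed": 5,  "capacity_gb": 512},
    {"name": "disk",  "speed": 1,  "capacity_gb": 4096},
]

def requirement_for(computation, object_size_gb):
    # Assumed rule: operands reused heavily by the computation need faster media.
    needed_speed = 10 if computation in ("matrix_multiply", "fft") else 1
    return {"speed": needed_speed, "capacity_gb": object_size_gb}

def select_tier(requirement):
    candidates = [t for t in TIERS
                  if t["speed"] >= requirement["speed"]
                  and t["capacity_gb"] >= requirement["capacity_gb"]]
    # Prefer the slowest (largest) tier that still meets the requirement.
    return min(candidates, key=lambda t: t["speed"]) if candidates else None

tier = select_tier(requirement_for("matrix_multiply", object_size_gb=2))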
Abstract:
Systems and methods are disclosed for distributed power delivery. In certain embodiments, an apparatus may comprise a device configured to control power to one or more power-consuming components by managing power usage among the one or more power-consuming components based on a priority of a task associated with the one or more power-consuming components. In certain embodiments, a device may comprise a processor configured to: receive a request to allow a component to expend an amount of power, determine whether the request can be satisfied with an unallocated power budget managed by the processor, the unallocated power budget being an unallocated portion of a total power budget managed by the processor, and allow the component to expend the amount of power when the request can be satisfied with the unallocated power budget.
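A minimal sketch of the unallocated-budget check from the second embodiment; the PowerController class and the wattage values are illustrative assumptions.

class PowerController:
    def __init__(self, total_budget_w):
        self.total_budget_w = total_budget_w
        self.allocated_w = 0.0

    @property
    def unallocated_w(self):
        return self.total_budget_w - self.allocated_w

    def request(self, component, amount_w, priority=0):
        # Grant only when the unallocated portion of the total budget covers it;
        # priority is carried so higher-priority tasks could be favored.
        if amount_w <= self.unallocated_w:
            self.allocated_w += amount_w
            return True
        return False

    def release(self, amount_w):
        self.allocated_w = max(0.0, self.allocated_w - amount_w)

ctrl = PowerController(total_budget_w=25.0)
granted = ctrl.request("nand_channel_0", amount_w=4.5, priority=2)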