Abstract:
A compressed format is selected for storage of a matrix based on a computation to be performed using the matrix and on the architecture of a storage compute device on which the matrix is stored. Data of the matrix is stored on the storage compute device according to the compressed format. The computation is performed using the data via a computation unit that resides within the storage compute device.
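The format-selection idea above can be sketched in code. This is a minimal, hypothetical illustration, not the patented method: the format names, the density threshold, and the `column_major_device` flag are all assumptions standing in for the device-architecture and computation-type criteria the abstract describes.

```python
# Hypothetical sketch: pick a compressed matrix format from the intended
# computation and a device-architecture hint. All thresholds illustrative.

def select_format(matrix, computation, column_major_device=False):
    """Return a storage format name for `matrix` given the computation."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    total = rows * cols
    nonzero = sum(1 for row in matrix for v in row if v != 0)
    density = nonzero / total if total else 0.0

    if density > 0.5:
        return "dense"  # mostly nonzero: compression gains little
    if computation == "matrix_vector":
        # row-oriented access favors CSR; a column-major device
        # architecture might instead favor CSC
        return "csc" if column_major_device else "csr"
    return "coo"  # general-purpose sparse fallback

m = [[0, 0, 3],
     [0, 0, 0],
     [1, 0, 0]]
print(select_format(m, "matrix_vector"))  # csr
```

The point of the sketch is that the choice depends jointly on the computation (access pattern) and the device (memory layout), matching the two selection criteria named in the abstract.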
Abstract:
A data object is received at a storage compute device in response to a request from a host. A requirement of the data object is determined based on a computation to be performed on the data object. The requirement relates to at least the speed and capacity of media used to store the data object. A tier is selected from the storage compute device based on speed and capacity characteristics of the selected tier corresponding to the requirement of the data object. The data object is stored in the selected tier.
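The tier-selection logic can be illustrated with a short sketch. The tier names, numeric speed grades, and first-fit policy below are assumptions for illustration only; the abstract specifies just that selection matches tier speed and capacity characteristics to the object's requirement.

```python
# Hypothetical sketch: each tier advertises a speed grade and free
# capacity; the object's computation implies a minimum speed requirement.

def select_tier(tiers, required_speed, object_size):
    """Return the name of a tier meeting the speed and capacity needs."""
    # try faster tiers first so demanding objects land on fast media
    for tier in sorted(tiers, key=lambda t: -t["speed"]):
        if tier["speed"] >= required_speed and tier["free"] >= object_size:
            tier["free"] -= object_size  # reserve capacity for the object
            return tier["name"]
    return None  # no tier satisfies the requirement

tiers = [
    {"name": "nand_slc", "speed": 3, "free": 64},
    {"name": "nand_mlc", "speed": 2, "free": 512},
    {"name": "hdd",      "speed": 1, "free": 4096},
]
print(select_tier(tiers, required_speed=2, object_size=128))  # nand_mlc
```

In the example, the fastest tier has the required speed but not the capacity, so the next tier that satisfies both characteristics is chosen.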
Abstract:
An apparatus includes a heat pipe with a fluid path. A first part of the fluid path is thermally coupled to a first region of a higher temperature and a second part of the fluid path is thermally coupled to a second region of a lower temperature. A difference between the higher temperature and the lower temperature induces a flow of a magnetic fluid in the fluid path. A switchable magnetic device is magnetically coupled to the fluid path. Activation of the switchable magnetic device reduces the flow of the magnetic fluid in the fluid path, which reduces heat transfer from the first region to the second region.
Abstract:
A logical block address space of a storage compute device is reserved for use in executing commands from a host. The logical block address space is not mapped to a physical address space. First data is received at a first portion of the logical block address space, the first data causing a computation to be performed by the storage compute device. Second data is sent to the host via a second portion of the logical block address space, the second data describing a result of the computation.
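The reserved-address-space protocol can be sketched as follows. The address constants, the command encoding, and the `sum` operation are invented for illustration; the abstract only establishes that one portion of an unmapped logical block address space receives command data and another portion returns result data.

```python
# Hypothetical sketch: writes to a reserved "command" LBA window trigger a
# computation; reads from a reserved "result" window return the outcome.
# Neither window is mapped to physical storage. Addresses illustrative.

CMD_LBA_START = 0x10000     # first portion: host writes commands here
RESULT_LBA_START = 0x20000  # second portion: host reads results here

class StorageComputeDevice:
    def __init__(self):
        self.results = {}  # slot index -> computation result

    def write(self, lba, data):
        if CMD_LBA_START <= lba < RESULT_LBA_START:
            # command region: interpret the data as a computation request
            if data["op"] == "sum":
                self.results[lba - CMD_LBA_START] = sum(data["operands"])

    def read(self, lba):
        if lba >= RESULT_LBA_START:
            # result region: return the outcome for the matching slot
            return self.results.get(lba - RESULT_LBA_START)

dev = StorageComputeDevice()
dev.write(CMD_LBA_START + 7, {"op": "sum", "operands": [1, 2, 3]})
print(dev.read(RESULT_LBA_START + 7))  # 6
```

Pairing command slot N with result slot N lets the host use ordinary block reads and writes as a request/response channel, which is the appeal of reusing the logical block address space.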
Abstract:
Commands associated with one or more logical block addresses are received via a host interface of a storage device. Based on a timing and sequence of the commands, an extent of a file that contains the logical block addresses is determined, the file being stored on the storage device. The logical block addresses are managed internally as a unitary data structure based on determining an association between the logical block addresses and the file.
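The extent-determination step can be sketched with a simple heuristic. The gap thresholds and grouping rule below are illustrative assumptions; the abstract states only that the extent of a file is inferred from the timing and sequence of commands touching its logical block addresses.

```python
# Hypothetical sketch: LBAs are grouped into a presumed file extent when
# commands arrive nearly sequentially in address and close together in
# time. Both thresholds are illustrative.

def infer_extents(commands, max_gap_lbas=8, max_gap_time=0.01):
    """commands: list of (timestamp, lba) in arrival order.
    Returns (start_lba, end_lba) ranges presumed to belong to one file."""
    extents = []
    start = prev_lba = commands[0][1]
    prev_time = commands[0][0]
    for t, lba in commands[1:]:
        sequential = 0 < lba - prev_lba <= max_gap_lbas
        close_in_time = t - prev_time <= max_gap_time
        if sequential and close_in_time:
            prev_lba, prev_time = lba, t  # extend the current extent
        else:
            extents.append((start, prev_lba))  # close it out
            start, prev_lba, prev_time = lba, lba, t
    extents.append((start, prev_lba))
    return extents

cmds = [(0.000, 100), (0.001, 104), (0.002, 108),  # one streaming file
        (5.000, 9000)]                             # unrelated access
print(infer_extents(cmds))  # [(100, 108), (9000, 9000)]
```

Once an extent is attributed to a file, the device can manage those logical block addresses as one unit, for example placing them contiguously or migrating them together.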
Abstract:
A logical block address space of a storage compute device is reserved for use in executing commands from a host. First data is received at a first portion of the logical block address space, the first data causing a computation to be performed by the storage compute device. Second data is sent to the host via a second portion of the logical block address space, the second data describing a result of the computation.
Abstract:
A first input is processed via a first configuration of a neural network to produce a first output. The first configuration defines attributes of the neural network, such as connections between neural elements of the neural network. If the neural network requires a context switch to process a second input, a second configuration is applied to the neural network to change the attributes, and the second input is processed via the second configuration of the neural network to produce a second output.
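The context-switch idea can be shown with a toy network. Representing a "configuration" as a bundle of weight matrices is an assumption made for illustration; the abstract speaks more generally of attributes such as connections between neural elements.

```python
# Hypothetical sketch: a "configuration" bundles the weight matrices
# (connection attributes) of a tiny network; a context switch swaps the
# entire bundle before processing the next input. Values illustrative.

def forward(config, x):
    """Apply each layer's weights as a plain matrix-vector product."""
    for weights in config["layers"]:
        x = [sum(w * v for w, v in zip(row, x)) for row in weights]
    return x

config_a = {"layers": [[[1.0, 0.0], [0.0, 1.0]]]}  # identity mapping
config_b = {"layers": [[[0.0, 1.0], [1.0, 0.0]]]}  # swaps the two inputs

x = [2.0, 5.0]
out_a = forward(config_a, x)  # first input under first configuration
out_b = forward(config_b, x)  # context switch: same elements, new attributes
print(out_a, out_b)  # [2.0, 5.0] [5.0, 2.0]
```

The same neural elements produce different outputs purely because the applied configuration changed, which is the essence of the context switch the abstract describes.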