SYSTEMS, METHODS, AND DEVICES FOR STORAGE SHUFFLE ACCELERATION

    Publication No.: US20220156287A1

    Publication Date: 2022-05-19

    Application No.: US17112975

    Filing Date: 2020-12-04

    IPC Classes: G06F16/27 G06F3/06 H04L29/08

    Abstract: A method of processing data in a system having a host and a storage node may include performing a shuffle operation on data stored at the storage node, wherein the shuffle operation may include performing a shuffle write operation, and performing a shuffle read operation, wherein at least a portion of the shuffle operation is performed by an accelerator at the storage node. A method for partitioning data may include sampling, at a device, data from one or more partitions based on a number of samples, transferring the sampled data from the device to a host, determining, at the host, one or more splitters based on the sampled data, communicating the one or more splitters from the host to the device, and partitioning, at the device, data for the one or more partitions based on the one or more splitters.
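The sample/splitter exchange described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the function names, the even-spacing sampling, and the even-stride splitter choice are all assumptions made for clarity.

```python
import bisect

def sample_keys(partition_data, num_samples):
    """Device side (illustrative): take evenly spaced key samples from a partition."""
    step = max(1, len(partition_data) // num_samples)
    return sorted(partition_data[::step])

def choose_splitters(samples, num_partitions):
    """Host side (illustrative): derive num_partitions - 1 splitters from the samples."""
    pooled = sorted(samples)
    stride = len(pooled) // num_partitions
    return [pooled[(i + 1) * stride] for i in range(num_partitions - 1)]

def partition(partition_data, splitters):
    """Device side (illustrative): route each key to an output partition via the splitters."""
    buckets = [[] for _ in range(len(splitters) + 1)]
    for key in partition_data:
        buckets[bisect.bisect_right(splitters, key)].append(key)
    return buckets
```

The round trip mirrors the claim: the device samples, the host computes splitters from the pooled samples, and the device then partitions locally using those splitters.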

    KEY SORTING BETWEEN KEY-VALUE SOLID STATE DRIVES AND HOSTS

    Publication No.: US20220011948A1

    Publication Date: 2022-01-13

    Application No.: US17029026

    Filing Date: 2020-09-22

    IPC Classes: G06F3/06

    Abstract: A Key-Value storage device is disclosed. The Key-Value storage device may include a first storage for data that is persistent. The Key-Value storage device may also include a second storage for a main index structure to map a key to a location in the first storage. A controller may process a read request, a write request, or a delete request from a host using the first storage. A third storage may store a secondary index structure that stores the key, the secondary index structure being sorted.
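A toy model of the three storages described above might look like the sketch below, with an unsorted main index for point operations and a sorted secondary key index enabling ordered range scans. The class and method names, and the `range_scan` operation, are illustrative assumptions; the abstract does not specify the device's command set.

```python
import bisect

class KVDevice:
    """Toy model: unsorted main index for point ops, sorted key list for ordered scans."""
    def __init__(self):
        self.storage = {}        # first storage: persistent values (modeled as a dict)
        self.main_index = {}     # second storage: key -> location in first storage
        self.sorted_keys = []    # third storage: sorted secondary index of keys

    def put(self, key, value):
        if key not in self.main_index:
            bisect.insort(self.sorted_keys, key)   # keep the secondary index sorted
        self.main_index[key] = key                 # location == key in this toy model
        self.storage[key] = value

    def get(self, key):
        loc = self.main_index.get(key)
        return None if loc is None else self.storage[loc]

    def delete(self, key):
        if key in self.main_index:
            del self.storage[self.main_index.pop(key)]
            self.sorted_keys.remove(key)

    def range_scan(self, lo, hi):
        """Ordered iteration that the unsorted main index alone cannot provide cheaply."""
        i = bisect.bisect_left(self.sorted_keys, lo)
        j = bisect.bisect_right(self.sorted_keys, hi)
        return [(k, self.storage[self.main_index[k]]) for k in self.sorted_keys[i:j]]
```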

    PIPELINED DATA PROCESSING IN FABRIC-ENABLED COMPUTATIONAL STORAGE

    Publication No.: US20210397567A1

    Publication Date: 2021-12-23

    Application No.: US17006767

    Filing Date: 2020-08-28

    IPC Classes: G06F13/16 G06F3/06 G06F9/38

    Abstract: A storage device is disclosed. The storage device may include compute engines. The compute engines may include storage for data, a storage processing unit to manage writing data to the storage and reading data from the storage, a data processing unit to perform some functions on the data, and an accelerator to perform other functions on the data. An Ethernet component may receive a request at the storage device from a host over a network. A data processing coordinator may process the request using a compute engine.
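The relationship between the coordinator and the compute engines can be sketched as a pipeline of processing stages applied in order, with the coordinator routing a host request to an engine. This is a minimal sketch under assumed names; the abstract does not define the stage interface or the routing policy, and the network receive path is elided.

```python
class ComputeEngine:
    """One compute engine: local storage plus processing stages applied in order."""
    def __init__(self, stages):
        self.storage = {}
        self.stages = stages          # e.g. a data processing step, then an accelerator step

    def write(self, key, data):
        self.storage[key] = data

    def process(self, key):
        data = self.storage[key]
        for stage in self.stages:     # pipeline: each stage feeds the next
            data = stage(data)
        return data

class DataProcessingCoordinator:
    """Routes a host request (arriving, e.g., over Ethernet) to a compute engine."""
    def __init__(self, engines):
        self.engines = engines

    def handle_request(self, engine_id, key):
        return self.engines[engine_id].process(key)
```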

    INTERACTIVE CONTINUOUS IN-DEVICE TRANSACTION PROCESSING USING KEY-VALUE (KV) SOLID STATE DRIVES (SSDS)

    Publication No.: US20210390091A1

    Publication Date: 2021-12-16

    Application No.: US16992096

    Filing Date: 2020-08-12

    Abstract: Various aspects include an interactive continuous in-device KV transaction processing system and method. The system includes a host device and a KV-SSD. The KV-SSD includes a command handler module to receive and process command packets from the host device, to identify KV input/output (I/O) requests associated with a KV transaction, and to prepare a per-transaction index structure. The method includes receiving a command packet from a host device, and determining, by the command handler module, whether a transaction tag associated with the KV transaction is embedded in the command packet. Based on determining that the transaction tag is not embedded in the command packet, the method includes processing one or more KV I/O requests using a main KV index structure. Based on determining that the transaction tag is embedded in the command packet, the method includes individually processing the one or more KV I/O requests using a per-transaction index structure.
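The tag-based branching in the method above can be sketched as follows: a packet without a transaction tag is processed against the main index, while a tagged packet is processed against a private per-transaction index. The packet format, the dict-based indexes, and the `commit` merge step are illustrative assumptions, not details from the abstract.

```python
class KVCommandHandler:
    """Routes KV I/O by transaction tag: tagged requests use a per-transaction index."""
    def __init__(self):
        self.main_index = {}
        self.txn_indexes = {}         # transaction tag -> per-transaction index structure

    def handle(self, packet):
        tag = packet.get("txn_tag")   # tag embedded in the command packet, or absent
        if tag is None:
            index = self.main_index                       # untagged: main KV index
        else:
            index = self.txn_indexes.setdefault(tag, {})  # tagged: per-transaction index
        if packet["op"] == "put":
            index[packet["key"]] = packet["value"]
            return None
        return index.get(packet["key"])

    def commit(self, tag):
        """Illustrative: merge a transaction's private index into the main index."""
        self.main_index.update(self.txn_indexes.pop(tag, {}))
```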

    PLATFORM FOR CONCURRENT EXECUTION OF GPU OPERATIONS

    Publication No.: US20200234146A1

    Publication Date: 2020-07-23

    Application No.: US16442447

    Filing Date: 2019-06-14

    IPC Classes: G06N3/10 G06N3/08 G06F17/15

    Abstract: Computing resources are optimally allocated for a multipath neural network using a multipath neural network analyzer that includes an interface and a processing device. The interface receives a multipath neural network that includes two or more paths. A first path includes one or more layers. A first layer of the first path corresponds to a first kernel that runs on a compute unit that includes two or more cores. The processing device allocates to the first kernel a minimum number of cores of the compute unit and a maximum number of cores of the compute unit. The minimum number of cores of the compute unit is allocated based on the first kernel being run concurrently with at least one other kernel on the compute unit and the maximum number of cores of the compute unit is allocated based on the first kernel being run alone on the compute unit.
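The min/max allocation scheme can be sketched as below: each kernel gets a minimum core count for the case where it shares the compute unit with other kernels, and a maximum for the case where it runs alone. The even split used for the minimum is an illustrative assumption; the abstract does not specify the analyzer's allocation policy.

```python
def allocate_cores(kernels, total_cores):
    """For each kernel, compute a minimum core count (concurrent execution, here an
    even split as a placeholder policy) and a maximum (kernel runs alone)."""
    n = len(kernels)
    allocation = {}
    for kernel in kernels:
        allocation[kernel] = {
            "min_cores": max(1, total_cores // n),  # shared with other kernels
            "max_cores": total_cores,               # alone on the compute unit
        }
    return allocation
```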