-
Publication No.: US20220156287A1
Publication Date: 2022-05-19
Application No.: US17112975
Filing Date: 2020-12-04
Inventors: HUI ZHANG, JOO HWAN LEE, YIQUN ZHANG, ARMIN HAJ ABOUTALEBI, XIAODONG ZHAO, PRAVEEN KRISHNAMOORTHY, ANDREW CHANG, YANG SEOK KI
Abstract: A method of processing data in a system having a host and a storage node may include performing a shuffle operation on data stored at the storage node, wherein the shuffle operation may include performing a shuffle write operation, and performing a shuffle read operation, wherein at least a portion of the shuffle operation is performed by an accelerator at the storage node. A method for partitioning data may include sampling, at a device, data from one or more partitions based on a number of samples, transferring the sampled data from the device to a host, determining, at the host, one or more splitters based on the sampled data, communicating the one or more splitters from the host to the device, and partitioning, at the device, data for the one or more partitions based on the one or more splitters.
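A minimal Python sketch of the splitter-based partitioning flow described in the abstract. The function names (sample_partitions, compute_splitters, partition_by_splitters) and the use of evenly spaced quantiles of the pooled samples as splitters are illustrative assumptions, not the patented implementation.

```python
import bisect
import random

# Device side (illustrative): draw a fixed number of samples from each partition.
def sample_partitions(partitions, num_samples):
    return [random.sample(p, min(num_samples, len(p))) for p in partitions]

# Host side (illustrative): derive splitters from the pooled samples, e.g.
# evenly spaced quantiles that divide the key space into num_out ranges.
def compute_splitters(samples, num_out):
    pooled = sorted(k for s in samples for k in s)
    step = max(1, len(pooled) // num_out)
    return [pooled[i] for i in range(step, len(pooled), step)][: num_out - 1]

# Device side (illustrative): route every record to an output partition by
# binary-searching the splitter list.
def partition_by_splitters(partitions, splitters):
    out = [[] for _ in range(len(splitters) + 1)]
    for p in partitions:
        for k in p:
            out[bisect.bisect_right(splitters, k)].append(k)
    return out

if __name__ == "__main__":
    data = [[random.randint(0, 999) for _ in range(100)] for _ in range(4)]
    splitters = compute_splitters(sample_partitions(data, 16), num_out=4)
    shuffled = partition_by_splitters(data, splitters)
    print(splitters, [len(p) for p in shuffled])
```

The round trip (device samples, host computes splitters, device partitions) mirrors the sequence of steps in the abstract; how the splitters are actually chosen at the host is not specified there.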
-
Publication No.: US20220011948A1
Publication Date: 2022-01-13
Application No.: US17029026
Filing Date: 2020-09-22
Inventors: YANGWOOK KANG, PRATIK MISHRA, YANG SEOK KI
IPC Classification: G06F3/06
Abstract: A Key-Value storage device is disclosed. The Key-Value storage device may include a first storage for data that is persistent. The Key-Value storage device may also include a second storage for a main index structure to map a key to a location in the first storage. A controller may process a read request, a write request, or a delete request from a host using the first storage. A third storage may store a secondary index structure that stores the key, the secondary index structure being sorted.
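A rough Python sketch of the two-index layout described above, assuming a dict-like main index that maps keys to storage locations and a sorted list standing in for the sorted secondary index; the class and method names are illustrative, not the device's actual interface.

```python
import bisect

class ToyKVStore:
    """Illustrative model: 'storage' stands in for the persistent first storage,
    'main_index' maps a key to its location, and 'sorted_keys' plays the role of
    the sorted secondary index used for ordered scans."""

    def __init__(self):
        self.storage = []        # persistent data (first storage)
        self.main_index = {}     # key -> location in storage (main index)
        self.sorted_keys = []    # sorted secondary index over keys

    def put(self, key, value):
        if key not in self.main_index:
            bisect.insort(self.sorted_keys, key)
        # Append-only writes; overwriting a key orphans its old location.
        self.main_index[key] = len(self.storage)
        self.storage.append(value)

    def get(self, key):
        loc = self.main_index.get(key)
        return None if loc is None else self.storage[loc]

    def delete(self, key):
        if self.main_index.pop(key, None) is not None:
            self.sorted_keys.remove(key)

    def range_scan(self, lo, hi):
        # The sorted secondary index makes ordered range queries cheap.
        i = bisect.bisect_left(self.sorted_keys, lo)
        j = bisect.bisect_right(self.sorted_keys, hi)
        return [(k, self.get(k)) for k in self.sorted_keys[i:j]]

if __name__ == "__main__":
    kv = ToyKVStore()
    for k in ["b", "a", "c"]:
        kv.put(k, k.upper())
    print(kv.get("a"), kv.range_scan("a", "b"))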
-
Publication No.: US20220231698A1
Publication Date: 2022-07-21
Application No.: US17357953
Filing Date: 2021-06-24
Inventors: Sahand SALAMAT, JOO HWAN LEE, ARMIN HAJ ABOUTALEBI, PRAVEEN KRISHNAMOORTHY, XIAODONG ZHAO, HUI ZHANG, YANG SEOK KI
Abstract: An accelerator is disclosed. The accelerator may include a memory that may store a dictionary table. An address generator may be configured to generate an address in the dictionary table based on an encoded value, which may have an encoded width. An output filter may be configured to filter a decoded value from the dictionary table based on the encoded value, the encoded width, and a decoded width of the decoded value. The accelerator may be configured to support at least two different encoded widths.
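A small Python sketch of dictionary decoding with a configurable encoded width, loosely following the address-generator / output-filter split in the abstract. Unpacking bit-packed codes from a single word is an assumed encoding, and the function names are illustrative.

```python
def unpack_codes(packed: int, num_codes: int, encoded_width: int):
    """Address generation (illustrative): extract fixed-width codes from a
    bit-packed word; each code is an index (address) into the dictionary."""
    mask = (1 << encoded_width) - 1
    return [(packed >> (i * encoded_width)) & mask for i in range(num_codes)]

def decode(packed: int, num_codes: int, encoded_width: int,
           dictionary: list, decoded_width: int):
    """Output filtering (illustrative): look up each code in the dictionary
    table and mask the result to the requested decoded width."""
    out_mask = (1 << decoded_width) - 1
    return [dictionary[c] & out_mask
            for c in unpack_codes(packed, num_codes, encoded_width)]

if __name__ == "__main__":
    dictionary = [0xAAAA, 0xBBBB, 0xCCCC, 0xDDDD]
    # The same dictionary used with two different encoded widths (2-bit and 4-bit codes).
    packed_2bit = 0b11_10_01_00   # codes 0, 1, 2, 3
    packed_4bit = 0x3210          # codes 0, 1, 2, 3
    print(decode(packed_2bit, 4, 2, dictionary, 16))
    print(decode(packed_4bit, 4, 4, dictionary, 16))
```

The final print statements show the "at least two different encoded widths" point: the decode path is identical, only the width parameter changes.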
-
Publication No.: US20210397567A1
Publication Date: 2021-12-23
Application No.: US17006767
Filing Date: 2020-08-28
Inventors: YANGWOOK KANG, WOONGJIN CHUN, YANG SEOK KI
Abstract: A storage device is disclosed. The storage device may include compute engines. The compute engines may include storage for data, a storage processing unit to manage writing data to the storage and reading data from the storage, a data processing unit to perform some functions on the data, and an accelerator to perform other functions on the data. An Ethernet component may receive a request at the storage device from a host over a network. A data processing coordinator may process the request using a compute engine.
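A schematic Python sketch of a coordinator routing host requests to compute engines, loosely mirroring the roles named in the abstract (storage processing unit, data processing unit, accelerator). The request format, the hash-based engine selection, and the example operations are assumptions made only for illustration.

```python
class ComputeEngine:
    """Illustrative compute engine: raw storage plus two processing roles."""
    def __init__(self):
        self.storage = {}

    def storage_unit(self, op, key, value=None):   # manages reads and writes
        if op == "write":
            self.storage[key] = value
            return "ok"
        return self.storage.get(key)

    def data_processing_unit(self, key):           # lightweight function on the data
        return len(self.storage.get(key, b""))

    def accelerator(self, key):                    # heavier function on the data
        return sum(self.storage.get(key, b""))

class DataProcessingCoordinator:
    """Illustrative coordinator: picks an engine and the unit that handles a request."""
    def __init__(self, engines):
        self.engines = engines

    def handle(self, request):
        engine = self.engines[hash(request["key"]) % len(self.engines)]
        op = request["op"]
        if op in ("read", "write"):
            return engine.storage_unit(op, request["key"], request.get("value"))
        if op == "length":
            return engine.data_processing_unit(request["key"])
        return engine.accelerator(request["key"])  # e.g. a checksum-style operation

if __name__ == "__main__":
    coord = DataProcessingCoordinator([ComputeEngine(), ComputeEngine()])
    coord.handle({"op": "write", "key": "blob", "value": b"\x01\x02\x03"})
    print(coord.handle({"op": "length", "key": "blob"}),
          coord.handle({"op": "checksum", "key": "blob"}))
```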
-
Publication No.: US20230205449A1
Publication Date: 2023-06-29
Application No.: US17680773
Filing Date: 2022-02-25
Inventors: WONSEB JEONG, YANG SEOK KI, JUNGMIN SEO, BEOMKYU SHIN, SANGOAK WOO, YOUNGGEON YOO, CHANHO YOON, MYUNGJUNE JUNG
IPC Classification: G06F3/06, G06F12/0802
CPC Classification: G06F3/0655, G06F3/0604, G06F3/0679, G06F12/0802, G06F2212/60
Abstract: A storage device includes a nonvolatile memory device and a storage controller. The storage controller includes a multi-protocol host interface circuit that receives a first-type request including a first logical address from an external host and transmits/receives data corresponding to the first-type request with the external host by a block unit. Additionally, the multi-protocol host interface circuit receives a second-type request including a first physical address from the external host and transmits/receives data corresponding to the second-type request with the external host by a unit smaller than the block unit. A mapping cache manager manages an address translation table cache, sends an address translation request including the first physical address to the external host, and receives a response including mapping information corresponding to the first physical address from the external host.
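An illustrative Python sketch of the mapping-cache-manager behavior described above: serve a sub-block request from a cached address translation when possible, otherwise send an address translation request to the host and cache the mapping it returns. The cache policy (a plain dict) and the message shapes are assumptions.

```python
class Host:
    """Illustrative external host holding the full address translation table."""
    def __init__(self, table):
        self.table = table                     # physical address -> mapping info

    def translate(self, request):
        return {"paddr": request["paddr"], "mapping": self.table[request["paddr"]]}

class MappingCacheManager:
    """Illustrative device-side manager for the address translation table cache."""
    def __init__(self, host):
        self.host = host
        self.cache = {}                        # cached translations

    def lookup(self, paddr):
        if paddr not in self.cache:            # cache miss: ask the external host
            response = self.host.translate({"paddr": paddr})
            self.cache[paddr] = response["mapping"]
        return self.cache[paddr]               # cache hit: no host round trip

if __name__ == "__main__":
    host = Host({0x1000: ("nand_block", 7, 128)})
    mgr = MappingCacheManager(host)
    print(mgr.lookup(0x1000))   # miss -> translation request to the host
    print(mgr.lookup(0x1000))   # hit  -> served from the cache
```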
-
Publication No.: US20210390091A1
Publication Date: 2021-12-16
Application No.: US16992096
Filing Date: 2020-08-12
Inventors: YANGWOOK KANG, PRATIK MISHRA, YANG SEOK KI
Abstract: Various aspects include an interactive continuous in-device KV transaction processing system and method. The system includes a host device and a KV-SSD. The KV-SSD includes a command handler module to receive and process command packets from the host device, to identify KV input/output (I/O) requests associated with a KV transaction, and to prepare a per-transaction index structure. The method includes receiving a command packet from a host device, and determining, by the command handler module, whether a transaction tag associated with the KV transaction is embedded in the command packet. Based on determining that the transaction tag is not embedded in the command packet, the method includes processing one or more KV I/O requests using a main KV index structure. Based on determining that the transaction tag is embedded in the command packet, the method includes individually processing the one or more KV I/O requests using a per-transaction index structure.
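A minimal Python sketch of the branching described in the abstract: a command handler routes a KV I/O request to the main index when no transaction tag is embedded in the packet, and to a per-transaction index when one is. The packet fields, the dict-based indexes, and the commit step are assumptions for illustration only.

```python
class CommandHandler:
    """Illustrative command handler for KV I/O with optional transaction tags."""
    def __init__(self):
        self.main_index = {}          # main KV index structure
        self.txn_indexes = {}         # transaction tag -> per-transaction index

    def handle(self, packet):
        key, value = packet["key"], packet.get("value")
        tag = packet.get("txn_tag")   # transaction tag may or may not be embedded
        if tag is None:
            index = self.main_index   # no tag: process against the main KV index
        else:
            index = self.txn_indexes.setdefault(tag, {})  # tag: per-transaction index
        if packet["op"] == "put":
            index[key] = value
        elif packet["op"] == "get":
            return index.get(key, self.main_index.get(key))
        elif packet["op"] == "delete":
            index.pop(key, None)

    def commit(self, tag):
        """Assumed step: fold a per-transaction index into the main index."""
        self.main_index.update(self.txn_indexes.pop(tag, {}))

if __name__ == "__main__":
    h = CommandHandler()
    h.handle({"op": "put", "key": "k1", "value": "v1"})                 # untagged
    h.handle({"op": "put", "key": "k2", "value": "v2", "txn_tag": 42})  # tagged
    print(h.handle({"op": "get", "key": "k2"}))   # None: not yet in the main index
    h.commit(42)
    print(h.handle({"op": "get", "key": "k2"}))   # "v2": visible after commit
```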
-
Publication No.: US20200234146A1
Publication Date: 2020-07-23
Application No.: US16442447
Filing Date: 2019-06-14
Inventors: JOO HWAN LEE, YANG SEOK KI, BEHNAM POURGHASSEMI
Abstract: Computing resources are optimally allocated for a multipath neural network using a multipath neural network analyzer that includes an interface and a processing device. The interface receives a multipath neural network that includes two or more paths. A first path includes one or more layers. A first layer of the first path corresponds to a first kernel that runs on a compute unit that includes two or more cores. The processing device allocates to the first kernel a minimum number of cores of the compute unit and a maximum number of cores of the compute unit. The minimum number of cores of the compute unit is allocated based on the first kernel being run concurrently with at least one other kernel on the compute unit and the maximum number of cores of the compute unit is allocated based on the first kernel being run alone on the compute unit.
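A toy Python sketch of the min/max core allocation rule described above: each kernel gets a maximum core count sized for running alone on the compute unit and a minimum core count sized for running concurrently with the other kernels. The proportional split used for the concurrent case, the kernel names, and the "work" weights are assumed for illustration and are not the analyzer's actual algorithm.

```python
def allocate_cores(kernels, total_cores):
    """Illustrative allocation: 'max_cores' assumes the kernel runs alone on
    the compute unit, 'min_cores' assumes all listed kernels run concurrently
    and splits the cores in proportion to each kernel's relative work."""
    total_work = sum(k["work"] for k in kernels)
    alloc = {}
    for k in kernels:
        concurrent_share = max(1, (k["work"] * total_cores) // total_work)
        alloc[k["name"]] = {"min_cores": concurrent_share, "max_cores": total_cores}
    return alloc

if __name__ == "__main__":
    # Hypothetical kernels from two paths of a multipath network.
    kernels = [{"name": "conv_path1", "work": 6}, {"name": "fc_path2", "work": 2}]
    print(allocate_cores(kernels, total_cores=8))
```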