-
Publication No.: US20230367675A1
Publication Date: 2023-11-16
Application No.: US18223019
Filing Date: 2023-07-17
Applicant: Samsung Electronics Co., Ltd.
Inventor: Mian QIN , Joo Hwan LEE , Rekha PITCHUMANI , Yang Seok KI
CPC classification number: G06F11/1076, G06F13/28
Abstract: According to one general aspect, an apparatus may include a host interface circuit configured to receive offloading instructions from a host processing device, wherein the offloading instructions instruct the apparatus to compute an error correction code associated with a plurality of data elements. The apparatus may include a memory interface circuit configured to receive the plurality of data elements. The apparatus may include a plurality of memory buffer circuits configured to temporarily store the plurality of data elements. The apparatus may include a plurality of error code computation circuits configured to, at least in part, compute the error correction code without additional processing by the host processing device.
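As an illustration of the flow this abstract describes, the minimal Python sketch below uses XOR parity as a stand-in for the error correction code and plain objects as stand-ins for the buffer and computation circuits; the class and method names are hypothetical, not from the patent.

```python
# Illustrative sketch only: XOR parity stands in for the error correction code,
# and Python objects stand in for the hardware buffer/computation circuits.
from functools import reduce


class EccOffloadDevice:
    def __init__(self, num_buffers: int):
        # Memory buffer circuits: temporarily hold the incoming data elements.
        self.buffers = [bytearray() for _ in range(num_buffers)]

    def receive(self, buffer_id: int, data: bytes) -> None:
        # Memory interface: stage one data element into a buffer.
        self.buffers[buffer_id] = bytearray(data)

    def offload_compute(self) -> bytes:
        # Error code computation: XOR all buffered elements byte-wise,
        # entirely on the device, with no further host processing.
        width = max(len(b) for b in self.buffers)
        padded = [bytes(b).ljust(width, b"\x00") for b in self.buffers]
        return reduce(lambda acc, cur: bytes(x ^ y for x, y in zip(acc, cur)), padded)


device = EccOffloadDevice(num_buffers=3)
for i, chunk in enumerate([b"alpha", b"bravo", b"charlie"]):
    device.receive(i, chunk)
parity = device.offload_compute()  # the host only issued the offload instruction
```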
-
Publication No.: US20220207040A1
Publication Date: 2022-06-30
Application No.: US17174350
Filing Date: 2021-02-11
Applicant: Samsung Electronics Co., Ltd.
Inventor: Shiyu LI , Yiqun ZHANG , Joo Hwan LEE , Yang Seok KI , Andrew CHANG
IPC: G06F16/2453
Abstract: A method of processing data may include receiving a stream of first keys associated with first data, receiving a stream of second keys associated with second data, comparing, in parallel, a batch of the first keys and a batch of the second keys, collecting one or more results from the comparing, and gathering one or more results from the collecting. The collecting may include reducing an index matrix and a mask matrix. Gathering one or more results may include storing, in a leftover vector, at least a portion of the one or more results from the collecting. Gathering one or more results may further include combining at least a portion of the leftover vector from a first cycle with at least a portion of the one or more results from the collecting from a second cycle.
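A minimal sketch of the per-cycle flow described above, assuming NumPy broadcasting as a stand-in for the parallel comparator, an equality join on small illustrative batches, and a narrow output width so the leftover carry-over is visible; the function names, shapes, and batch contents are assumptions.

```python
# Illustrative sketch only: NumPy broadcasting stands in for the parallel
# comparator; batch contents, output width, and join semantics are assumptions.
import numpy as np


def compare_batches(keys_a: np.ndarray, keys_b: np.ndarray) -> np.ndarray:
    # Parallel comparison: mask matrix whose entry (i, j) is True when
    # keys_a[i] matches keys_b[j].
    return keys_a[:, None] == keys_b[None, :]


def collect(mask: np.ndarray) -> np.ndarray:
    # Reduce the mask/index matrices to (row, column) match positions.
    return np.argwhere(mask)


def gather(matches: np.ndarray, leftover: list, width: int = 1):
    # Combine results carried over from the previous cycle with this cycle's
    # results; whatever exceeds the output width stays in the leftover vector.
    combined = leftover + [tuple(int(v) for v in m) for m in matches]
    return combined[:width], combined[width:]


leftover: list = []
stream_a = [np.array([1, 3, 5, 7]), np.array([9, 11, 13, 15])]
stream_b = [np.array([3, 4, 5, 6]), np.array([11, 12, 15, 16])]
for batch_a, batch_b in zip(stream_a, stream_b):
    emitted, leftover = gather(collect(compare_batches(batch_a, batch_b)), leftover)
    print(emitted)  # results from this cycle, possibly mixed with leftovers
```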
-
Publication No.: US20210334162A1
Publication Date: 2021-10-28
Application No.: US17367315
Filing Date: 2021-07-02
Applicant: Samsung Electronics Co., Ltd.
Inventor: Mian QIN , Joo Hwan LEE , Rekha PITCHUMANI , Yang Seok KI
Abstract: According to one general aspect, an apparatus may include a host interface circuit configured to receive offloading instructions from a host processing device, wherein the offloading instructions instruct the apparatus to compute an error correction code associated with a plurality of data elements. The apparatus may include a memory interface circuit configured to receive the plurality of data elements. The apparatus may include a plurality of memory buffer circuits configured to temporarily store the plurality of data elements. The apparatus may include a plurality of error code computation circuits configured to, at least in part, compute the error correction code without additional processing by the host processing device.
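This publication shares its abstract with US20230367675A1 listed above. As a complementary illustration of the host-facing side of that flow, the sketch below assumes a hypothetical instruction format and an in-process stand-in for the apparatus; none of the names come from the patent.

```python
# Illustrative sketch only: the instruction format and the in-process "target"
# are hypothetical stand-ins for the host interface circuit described above.
from dataclasses import dataclass, field
from functools import reduce


@dataclass
class OffloadInstruction:
    opcode: str                                  # e.g. "compute_ecc"
    payload: list = field(default_factory=list)  # data elements to protect


class OffloadTarget:
    """Stands in for the apparatus: computes the code without host processing."""

    def execute(self, instr: OffloadInstruction) -> bytes:
        if instr.opcode != "compute_ecc":
            raise ValueError("unsupported opcode")
        width = max(len(b) for b in instr.payload)
        chunks = [bytes(b).ljust(width, b"\x00") for b in instr.payload]
        return reduce(lambda acc, cur: bytes(x ^ y for x, y in zip(acc, cur)), chunks)


# Host side: build one offloading instruction, hand it off, and do no ECC work.
target = OffloadTarget()
code = target.execute(OffloadInstruction("compute_ecc", [b"alpha", b"bravo", b"delta"]))
```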
-
Publication No.: US20220164122A1
Publication Date: 2022-05-26
Application No.: US17225085
Filing Date: 2021-04-07
Applicant: Samsung Electronics Co., Ltd.
Inventor: Chen ZOU , Hui ZHANG , Joo Hwan LEE , Yang Seok KI
IPC: G06F3/06
Abstract: A method of shuffling data may include shuffling a first batch of data using a first memory on a first level of a memory hierarchy to generate a first batch of shuffled data, shuffling a second batch of data using the first memory to generate a second batch of shuffled data, and storing the first batch of shuffled data and the second batch of shuffled data in a second memory on a second level of the memory hierarchy. The method may further include merging the first batch of shuffled data and the second batch of shuffled data. A data shuffling device may include a buffer memory configured to stream one or more records to a partitioning circuit and transfer, by random access, one or more records to a grouping circuit.
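A minimal sketch of the two-level flow described above, assuming key-based partitioning as the shuffle, a Python dictionary as the first-level (fast) memory, and a list of per-batch outputs as the second-level memory; the record format and partition function are assumptions.

```python
# Illustrative sketch only: dictionaries stand in for the first-level (fast)
# memory, a list of per-batch outputs stands in for the second-level memory,
# and the key-modulo partition function is an assumption.
from collections import defaultdict


def shuffle_batch(batch, num_partitions):
    # First level: shuffle (partition) one batch that fits in the fast memory.
    partitions = defaultdict(list)
    for key, value in batch:
        partitions[key % num_partitions].append((key, value))
    return partitions


def hierarchical_shuffle(batches, num_partitions):
    # Second level: store each batch's shuffled output, then merge the
    # per-batch partitions into a single partition table.
    second_level = [shuffle_batch(batch, num_partitions) for batch in batches]
    merged = defaultdict(list)
    for partition_id_records in second_level:
        for partition_id, records in partition_id_records.items():
            merged[partition_id].extend(records)
    return merged


batch_1 = [(1, "a"), (2, "b"), (5, "c")]
batch_2 = [(3, "d"), (4, "e"), (6, "f")]
result = hierarchical_shuffle([batch_1, batch_2], num_partitions=2)
```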
-
Publication No.: US20200234115A1
Publication Date: 2020-07-23
Application No.: US16442440
Filing Date: 2019-06-14
Applicant: Samsung Electronics Co., Ltd.
Inventor: Behnam POURGHASSEMI , Joo Hwan LEE , Yang Seok KI
Abstract: Computing resources may be optimally allocated for a multipath neural network using a multipath neural network analyzer that includes an interface and a processing device. The interface receives a multipath neural network. The processing device generates the multipath neural network to include one or more layers of a critical path through the multipath neural network that are allocated a first allocation of computing resources that are available to execute the multipath neural network. The critical path limits throughput of the multipath neural network. The first allocation of computing resources reduces an execution time of the multipath neural network to be less than a baseline execution time of a second allocation of computing resources for the multipath neural network. The first allocation of computing resources for a first layer of the critical path is different from the second allocation of computing resources for the first layer of the critical path.
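A minimal sketch of the allocation idea described above, assuming a toy inverse-scaling cost model (a layer's time shrinks in proportion to the compute units it receives) and a greedy policy that spends spare units on the critical path; the timings, path structure, and policy are assumptions, not the analyzer's actual method.

```python
# Illustrative sketch only: the cost model, timings, and greedy policy are
# assumptions; they illustrate why favoring the critical path lowers latency.
def path_time(path, allocation):
    # Execution time of one path: each layer's base time divided by the
    # number of compute units allocated to it (toy inverse-scaling model).
    return sum(base / allocation[layer] for layer, base in path.items())


def allocate_for_critical_path(paths, total_units):
    # Baseline allocation: every layer gets one compute unit.
    allocation = {layer: 1 for path in paths for layer in path}
    baseline = max(path_time(path, allocation) for path in paths)

    # The critical path limits throughput, so spend the spare units there.
    critical = max(paths, key=lambda path: path_time(path, allocation))
    spare = total_units - len(allocation)
    for layer in sorted(critical, key=critical.get, reverse=True):
        if spare <= 0:
            break
        allocation[layer] += 1
        spare -= 1

    optimized = max(path_time(path, allocation) for path in paths)
    return allocation, baseline, optimized


# Two parallel paths; layer name -> base execution time (ms) with one unit.
paths = [{"conv1": 4.0, "conv2": 6.0}, {"fc1": 2.0, "fc2": 1.0}]
alloc, baseline_ms, optimized_ms = allocate_for_critical_path(paths, total_units=6)
print(baseline_ms, optimized_ms)  # 10.0 -> 5.0 in this toy example
```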