APPARATUS AND METHOD WITH MULTI-TASK PROCESSING

    Publication Number: US20230131543A1

    Publication Date: 2023-04-27

    Application Number: US17903969

    Application Date: 2022-09-06

    Abstract: A processor-implemented method with multi-task processing includes: obtaining weights of a first neural network; obtaining first delta weights of a second neural network that is fine-tuned from the first neural network, based on a target task; performing an operation of the second neural network on first input data, based on sums of the weights of the first neural network and the first delta weights; obtaining second delta weights of a third neural network that is fine-tuned from the first neural network, based on a change of the target task; replacing the first delta weights with the second delta weights; and performing an operation of the third neural network on second input data, based on sums of the weights of the first neural network and the second delta weights, wherein the first delta weights comprise difference values between the weights of the first neural network and the weights of the second neural network, and the second delta weights comprise difference values between the weights of the first neural network and the weights of the third neural network.
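
    A minimal sketch of the delta-weight scheme described above, assuming NumPy and simple fully connected layers; the class and method names (DeltaWeightModel, set_task, forward) are illustrative assumptions, not identifiers from the patent.

```python
# Minimal sketch of task switching via delta weights (illustrative names only).
import numpy as np


class DeltaWeightModel:
    """Holds shared base weights and swaps per-task delta weights."""

    def __init__(self, base_weights):
        self.base_weights = [w.copy() for w in base_weights]
        self.delta_weights = [np.zeros_like(w) for w in base_weights]

    def set_task(self, fine_tuned_weights):
        # Delta weights are the differences between the fine-tuned weights and
        # the shared base weights; only these are replaced when the task changes.
        self.delta_weights = [ft - w for ft, w in
                              zip(fine_tuned_weights, self.base_weights)]

    def forward(self, x):
        # Each layer operates on the sum of base weights and current delta weights.
        for w, dw in zip(self.base_weights, self.delta_weights):
            x = np.maximum(x @ (w + dw), 0.0)   # linear layer + ReLU stand-in
        return x


base = [np.random.randn(8, 8), np.random.randn(8, 4)]
task_a = [w + 0.01 * np.random.randn(*w.shape) for w in base]  # "fine-tuned" weights
model = DeltaWeightModel(base)
model.set_task(task_a)                                # switch to the target task
print(model.forward(np.random.randn(2, 8)).shape)     # (2, 4)
```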

    DEVICE AND METHOD WITH NEURAL NETWORK OPERATION

    Publication Number: US20220164674A1

    Publication Date: 2022-05-26

    Application Number: US17523129

    Application Date: 2021-11-10

    Abstract: A neural network device includes: a memory configured to store a first feature map and a second feature map; and a neural network processor configured to operate a neural network, and comprising: a fetcher configured to fetch input data from the first feature map of the memory; a buffer configured to store the input data; an operator configured to generate output data by performing a convolution operation between the input data and a kernel; a writer configured to write the output data in the second feature map of the memory; and a controller configured to control the fetcher to fetch the input data and control the writer to write the output data, according to one or more intervals and one or more offsets determined based on a dilation rate of the kernel in multiple steps.
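
    A rough sketch, under assumptions of my own, of how the fetch interval can follow from the kernel's dilation rate in a dilated convolution; the function name dilated_conv2d and the single-channel layout are illustrative, not the device's actual dataflow.

```python
# Sketch: the gap between fetched elements equals the dilation rate, and the
# output position supplies the per-output offset into the input feature map.
import numpy as np


def dilated_conv2d(feature_map, kernel, dilation_rate):
    h, w = feature_map.shape
    kh, kw = kernel.shape
    # Effective kernel extent after inserting (dilation_rate - 1) gaps between taps.
    eff_kh = (kh - 1) * dilation_rate + 1
    eff_kw = (kw - 1) * dilation_rate + 1
    out = np.zeros((h - eff_kh + 1, w - eff_kw + 1))
    for oy in range(out.shape[0]):
        for ox in range(out.shape[1]):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    # Interval between fetched inputs = dilation rate;
                    # (oy, ox) act as the per-output offsets.
                    acc += kernel[ky, kx] * feature_map[oy + ky * dilation_rate,
                                                        ox + kx * dilation_rate]
            out[oy, ox] = acc
    return out


fm = np.arange(36.0).reshape(6, 6)
print(dilated_conv2d(fm, np.ones((3, 3)), dilation_rate=2).shape)   # (2, 2)
```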

    DATA TRANSMISSION METHOD FOR CONVOLUTION OPERATION, FETCHER, AND CONVOLUTION OPERATION APPARATUS

    Publication Number: US20220342833A1

    Publication Date: 2022-10-27

    Application Number: US17858506

    Application Date: 2022-07-06

    Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer of an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
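
    A simplified sketch of the reuse-buffer flow described above, reduced by my own assumption to a 1-D convolution so the 2D and 1D zero-value information collapse into a single mask; fetch_and_send and its parameters are illustrative names.

```python
# Simplified 1-D sketch: the loader fetches only data not already buffered,
# the buffer controller assigns addresses cyclically in loading order, and the
# sender skips weights that are zero. All names are illustrative.
def fetch_and_send(inputs, weights, buffer_size=8):
    zero_mask = [w == 0 for w in weights]        # zero-value information
    reuse_buffer = [None] * buffer_size          # address -> (input index, value)
    loads = 0

    def lookup(in_idx):
        for entry in reuse_buffer:
            if entry is not None and entry[0] == in_idx:
                return entry[1]
        return None

    outputs = []
    for out_idx in range(len(inputs) - len(weights) + 1):
        acc = 0
        for k, w in enumerate(weights):
            if zero_mask[k]:
                continue                         # sender skips zero weights
            in_idx = out_idx + k
            value = lookup(in_idx)
            if value is None:
                value = inputs[in_idx]
                reuse_buffer[loads % buffer_size] = (in_idx, value)  # cyclic address
                loads += 1
            acc += w * value
        outputs.append(acc)
    return outputs, loads


outs, loads = fetch_and_send(list(range(10)), [1, 0, 2])
print(outs, loads)   # each input is loaded once and reused across outputs
```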

    COMPUTING METHOD AND DEVICE WITH DATA SHARING

    Publication Number: US20220164289A1

    Publication Date: 2022-05-26

    Application Number: US17317339

    Application Date: 2021-05-11

    Abstract: A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer of an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other.
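
    A minimal sketch, under my own simplification to a 1-D convolution, of several senders drawing overlapping windows from one shared reuse buffer; shared_windows and its parameter names are illustrative assumptions.

```python
# Sketch: inputs for several adjacent outputs are loaded once, and each sender
# forwards its own (overlapping) slice of the shared reuse buffer.
import numpy as np


def shared_windows(feature_map, kernel_size, num_senders):
    # loader: load the inputs needed for num_senders adjacent outputs once
    reuse_buffer = feature_map[:kernel_size + num_senders - 1]
    windows = []
    for s in range(num_senders):
        # each sender selects its own slice; consecutive slices overlap
        windows.append(reuse_buffer[s:s + kernel_size])
    return windows


inputs = np.arange(10.0)
for w in shared_windows(inputs, kernel_size=3, num_senders=4):
    print(w)   # consecutive windows overlap by kernel_size - 1 elements
```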

    METHOD AND APPARATUS OF OPERATING A NEURAL NETWORK

    Publication Number: US20220253692A1

    Publication Date: 2022-08-11

    Application Number: US17400353

    Application Date: 2021-08-12

    Abstract: Disclosed is a method and apparatus of operating a neural network. The neural network operation method includes receiving data for the neural network operation, verifying whether competition occurs between a first data traversal path corresponding to a first operation device and a second data traversal path corresponding to a second operation device, determining first operand data and second operand data from among the data using a result of the verifying and a priority between the first data traversal path and the second data traversal path, and performing the neural network operation based on the first operand data and the second operand data.
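
    A hedged sketch of the competition check and priority resolution described above, assuming the two traversal paths can be modeled as index lists over shared data; resolve_operands and its return values are illustrative assumptions, not the patent's interface.

```python
# Sketch: when both paths request the same data (competition), the path with
# the higher priority is served first and the other defers its contested accesses.
def resolve_operands(path_a, path_b, data, a_has_priority=True):
    competition = set(path_a) & set(path_b)      # indices requested by both paths
    first, second = (path_a, path_b) if a_has_priority else (path_b, path_a)
    first_operands = [data[i] for i in first]
    # the lower-priority path defers its competing accesses to a later step
    second_operands = [data[i] for i in second if i not in competition]
    deferred = [i for i in second if i in competition]
    return first_operands, second_operands, deferred


a, b, deferred = resolve_operands([0, 1, 2], [2, 3], list(range(10, 20)))
print(a, b, deferred)   # [10, 11, 12] [13] [2]: index 2 is contested, path A wins
```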

    HARDWARE ACCELERATOR METHOD AND DEVICE

    Publication Number: US20220383103A1

    Publication Date: 2022-12-01

    Application Number: US17499149

    Application Date: 2021-10-12

    Abstract: A processor-implemented hardware accelerator method includes: receiving input data; loading a lookup table (LUT); determining an address of the LUT by inputting the input data to a comparator; obtaining a value of the LUT corresponding to the input data based on the address; and determining a value of a nonlinear function corresponding to the input data based on the value of the LUT, wherein the LUT is determined based on a weight of a neural network that outputs the value of the nonlinear function.
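
    A rough sketch, under assumptions of my own, of evaluating a nonlinear function through a lookup table: a comparator-like search against breakpoints yields the LUT address, and the stored entry gives a piecewise-linear value. Storing (slope, intercept) pairs and using np.tanh are illustrative choices, not the patent's.

```python
# Sketch: build a piecewise-linear LUT for a nonlinear function, then evaluate
# it by locating the segment that brackets the input (the "comparator" step).
import numpy as np


def build_lut(fn, lo, hi, entries):
    breakpoints = np.linspace(lo, hi, entries + 1)
    slopes, intercepts = [], []
    for x0, x1 in zip(breakpoints[:-1], breakpoints[1:]):
        slope = (fn(x1) - fn(x0)) / (x1 - x0)
        slopes.append(slope)
        intercepts.append(fn(x0) - slope * x0)
    return breakpoints, np.array(slopes), np.array(intercepts)


def lut_eval(x, breakpoints, slopes, intercepts):
    # comparator: find the segment whose breakpoints bracket the input
    addr = int(np.clip(np.searchsorted(breakpoints, x) - 1, 0, len(slopes) - 1))
    return slopes[addr] * x + intercepts[addr]


bp, sl, ic = build_lut(np.tanh, -4.0, 4.0, entries=64)
print(lut_eval(0.5, bp, sl, ic), np.tanh(0.5))   # LUT value vs. reference
```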
