COMPUTING METHOD AND DEVICE WITH DATA SHARING

    Publication Number: US20220164289A1

    Publication Date: 2022-05-26

    Application Number: US17317339

    Application Date: 2021-05-11

    Abstract: A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer at an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other.
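
    The following is a minimal, illustrative sketch of the data-sharing idea described in the abstract: input units are loaded once into a buffer whose write address rotates with the loading order, and overlapping windows are read back from that shared buffer for successive convolution outputs. The names (ReuseBuffer, rows as loading units, etc.) and the row-wise sliding-window setup are assumptions made for illustration, not details taken from the patent.

    import numpy as np

    class ReuseBuffer:
        """Circular buffer whose write address rotates with the loading order."""
        def __init__(self, num_slots, unit_len):
            self.slots = np.zeros((num_slots, unit_len))
            self.next_addr = 0  # rotationally allocated write address

        def store(self, loading_unit):
            self.slots[self.next_addr] = loading_unit
            self.next_addr = (self.next_addr + 1) % len(self.slots)

        def window(self, start, size):
            # Overlapping reads: consecutive outputs share size-1 of these slots.
            idx = [(start + i) % len(self.slots) for i in range(size)]
            return self.slots[idx]

    def convolve_with_sharing(feature_map, kernel):
        """Each output reuses the kernel-height-minus-one rows already buffered."""
        k = kernel.shape[0]
        buf = ReuseBuffer(num_slots=k, unit_len=feature_map.shape[1])
        outputs = []
        for row in range(feature_map.shape[0]):
            buf.store(feature_map[row])                  # loader: one unit per step
            if row >= k - 1:                             # enough rows buffered
                window = buf.window(buf.next_addr, k)    # sender: overlapping window
                outputs.append(float((window * kernel).sum()))  # executer: convolution
        return outputs

    if __name__ == "__main__":
        fm = np.arange(12, dtype=float).reshape(6, 2)    # toy 6x2 input feature map
        print(convolve_with_sharing(fm, np.ones((3, 2))))  # [15.0, 27.0, 39.0, 51.0]

    In this toy model, each new output only needs one freshly loaded row; the remaining rows are served from the reuse buffer, which is the sharing effect the abstract describes.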

    METHOD AND APPARATUS OF OPERATING A NEURAL NETWORK

    Publication Number: US20220253692A1

    Publication Date: 2022-08-11

    Application Number: US17400353

    Application Date: 2021-08-12

    Abstract: Disclosed is a method and apparatus of operating a neural network. The neural network operation method includes receiving data for the neural network operation, verifying whether competition occurs between a first data traversal path corresponding to a first operation device and a second data traversal path corresponding to a second operation device, determining first operand data and second operand data from among the data using a result of the verifying and a priority between the first data traversal path and the second data traversal path, and performing the neural network operation based on the first operand data and the second operand data.
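
    Below is a minimal sketch, under stated assumptions, of the verify-then-prioritize flow in the abstract: two operation devices walk their own traversal paths over shared data, competition is verified whenever both request the same element in a cycle, and a priority between the paths decides which device is served while the other retries in the next cycle. The cursors, stall behavior, and function names are hypothetical illustration details, not taken from the patent.

    def resolve_traversal(data, path1, path2, path1_has_priority=True):
        """Yield (first_operand, second_operand) per cycle, stalling the loser on competition."""
        i, j = 0, 0
        while i < len(path1) and j < len(path2):
            competition = path1[i] == path2[j]           # verification step
            if competition and path1_has_priority:
                first, second = data[path1[i]], None     # path2 stalls this cycle
                i += 1
            elif competition:
                first, second = None, data[path2[j]]     # path1 stalls this cycle
                j += 1
            else:
                first, second = data[path1[i]], data[path2[j]]
                i, j = i + 1, j + 1
            yield first, second

    if __name__ == "__main__":
        data = [10, 20, 30, 40]
        # Both paths request element 2 on their second step, so competition occurs there.
        for first, second in resolve_traversal(data, [0, 2, 3], [1, 2, 0]):
            print(first, second)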

    DEVICE AND METHOD WITH NEURAL NETWORK OPERATION

    Publication Number: US20220164674A1

    Publication Date: 2022-05-26

    Application Number: US17523129

    Application Date: 2021-11-10

    Abstract: A neural network device includes: a memory configured to store a first feature map and a second feature map; and a neural network processor configured to operate a neural network, and comprising: a fetcher configured to fetch input data from the first feature map of the memory; a buffer configured to store the input data; an operator configured to generate output data by performing a convolution operation between the input data and a kernel; a writer configured to write the output data in the second feature map of the memory; and a controller configured to control the fetcher to fetch the input data and control the writer to write the output data, according to one or more intervals and one or more offsets determined based on a dilation rate of the kernel in multiple steps.
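
    The sketch below illustrates, under assumptions, the role the dilation rate plays in the fetch pattern: for a dilation rate d, kernel taps are read from the input at an interval of d, starting from an offset given by the output position, so the interval/offset control described in the abstract reduces here to a strided slice. The function name and the direct-convolution formulation are illustrative, not the patented fetcher/writer control logic.

    import numpy as np

    def dilated_conv2d(feature_map, kernel, dilation):
        """Direct dilated convolution: kernel taps are fetched `dilation` apart."""
        kh, kw = kernel.shape
        h, w = feature_map.shape
        out_h = h - dilation * (kh - 1)
        out_w = w - dilation * (kw - 1)
        out = np.zeros((out_h, out_w))
        for oy in range(out_h):
            for ox in range(out_w):
                # Fetch interval = dilation rate; offset = the current output position.
                patch = feature_map[oy : oy + dilation * kh : dilation,
                                    ox : ox + dilation * kw : dilation]
                out[oy, ox] = float((patch * kernel).sum())
        return out

    if __name__ == "__main__":
        fm = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 input feature map
        print(dilated_conv2d(fm, np.ones((3, 3)), dilation=2))  # 2x2 output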
