-
Publication Number: US20180365561A1
Publication Date: 2018-12-20
Application Number: US15627022
Filing Date: 2017-06-19
Applicant: Google Inc.
CPC Classification: G06N3/08, G06F8/4441, G06F8/452, G06F9/50, G06N99/005, G06T1/20
Abstract: Methods, systems, and apparatus for accessing an N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. The number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After the penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.
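The loop-splitting scheme in this abstract can be made concrete with a minimal sketch. The Python below is illustrative only, not the patented hardware: the function name, the `hw_limit` parameter, and the `visit` callback are assumptions; it shows only how a first loop bound and an alternative (remainder) bound divide the work when the requested trip count exceeds what the hardware can count in one pass.

```python
def traverse_with_split_bounds(total_iterations, hw_limit, visit):
    """Sketch: split a long inner loop across outer iterations.

    If the requested trip count fits within the hardware limit, a single
    loop suffices. Otherwise the traversal runs an outer loop whose
    iterations each perform `hw_limit` inner steps, and the final outer
    iteration runs only the leftover steps (the "alternative loop bound").
    """
    if total_iterations <= hw_limit:
        for i in range(total_iterations):
            visit(i)
        return

    first_loop_bound = hw_limit
    outer_trip_count = -(-total_iterations // hw_limit)  # ceiling division
    alternative_bound = total_iterations - (outer_trip_count - 1) * first_loop_bound

    for outer in range(outer_trip_count):
        is_final = outer == outer_trip_count - 1
        inner_bound = alternative_bound if is_final else first_loop_bound
        for inner in range(inner_bound):
            visit(outer * first_loop_bound + inner)


# Example: 10 iterations with a hardware limit of 4 ->
# two full passes of 4 and a final pass of 2.
traverse_with_split_bounds(10, 4, print)
```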
-
Publication Number: US09928460B1
Publication Date: 2018-03-27
Application Number: US15625810
Filing Date: 2017-06-16
Applicant: Google Inc.
Abstract: A three-dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.
-
Publication Number: US09710265B1
Publication Date: 2017-07-18
Application Number: US15462180
Filing Date: 2017-03-17
Applicant: Google Inc.
CPC Classification: G06F9/3001, G06F9/30065, G06F13/28, G06N3/04, G06N3/0454, G06N3/063
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
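A toy software model can make the dataflow described in this abstract easier to follow. The class and method names below are invented for illustration; the sketch only mirrors the roles named above: two memory banks, a traversal unit that places an activation on a shared data bus, and a MAC operator that multiplies and accumulates.

```python
class ComputeCellSketch:
    """Illustrative model of the described computing unit (names are assumptions)."""

    def __init__(self, activation_bank, parameter_bank):
        self.activation_bank = activation_bank  # first memory bank: input activations
        self.parameter_bank = parameter_bank    # second memory bank: parameters
        self.accumulator = 0

    def mac_step(self, activation_index, parameter_index):
        # Traversal unit: a control signal selects which activation drives the data bus.
        data_bus = self.activation_bank[activation_index]
        # MAC operator: multiply the bus value by a parameter and accumulate.
        self.accumulator += data_bus * self.parameter_bank[parameter_index]
        return self.accumulator


# Example: accumulate a dot product for one element of a data array.
cell = ComputeCellSketch(activation_bank=[1, 2, 3], parameter_bank=[10, 20, 30])
for i in range(3):
    cell.mac_step(i, i)
print(cell.accumulator)  # 1*10 + 2*20 + 3*30 = 140
```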
-
Publication Number: US20180121196A1
Publication Date: 2018-05-03
Application Number: US15335769
Filing Date: 2016-10-27
Applicant: Google Inc.
CPC Classification: G06F9/3001, G06F9/30065, G06F13/28, G06N3/04, G06N3/0454, G06N3/063
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
-
Publication Number: US09846837B1
Publication Date: 2017-12-19
Application Number: US15336216
Filing Date: 2016-10-27
Applicant: Google Inc.
IPC Classification: G06N3/04
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
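As a rough illustration of a loop nest whose structure follows fields of the instruction, consider the sketch below. The instruction layout (a dict carrying a `layer_type` and dimension sizes) and the function name are assumptions made for the example; only the idea that the layer type selects the loop-nest shape comes from the abstract.

```python
def run_loop_nest(instruction, activations, weights):
    """Sketch: the instruction's data values decide the loop-nest structure."""
    if instruction["layer_type"] == "fully_connected":
        # Two-deep nest: one loop over output features, one over input features.
        out = [0.0] * instruction["out_features"]
        for o in range(instruction["out_features"]):
            for i in range(instruction["in_features"]):
                out[o] += weights[o][i] * activations[i]
        return out

    if instruction["layer_type"] == "conv1d":
        # Different nest for a convolutional layer: output position x kernel tap.
        out_len = instruction["in_length"] - instruction["kernel_size"] + 1
        out = [0.0] * out_len
        for p in range(out_len):
            for k in range(instruction["kernel_size"]):
                out[p] += weights[k] * activations[p + k]
        return out

    raise ValueError("layer type not covered by this sketch")


# Example: a fully connected layer with 3 inputs and 2 outputs.
fc = {"layer_type": "fully_connected", "out_features": 2, "in_features": 3}
print(run_loop_nest(fc, [1, 2, 3], [[1, 0, 1], [0, 1, 0]]))  # [4.0, 2.0]
```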
-
Publication Number: US20190065937A1
Publication Date: 2019-02-28
Application Number: US15685672
Filing Date: 2017-08-24
Applicant: Google Inc.
IPC Classification: G06N3/04, H04L12/721, H04L12/703
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for three-dimensionally stacked neural network accelerators. In one aspect, a method includes obtaining data specifying that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile. The three-dimensionally stacked neural network accelerator includes a plurality of neural network dies, each neural network die including a respective plurality of tiles, and each tile having input and output connections. The three-dimensionally stacked neural network accelerator is configured to process inputs by routing the input through each of the plurality of tiles according to a dataflow configuration. The method further includes modifying the dataflow configuration to route an output of the tile that precedes the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile.
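The rerouting idea can be sketched in a few lines. The data model below, tiles as (die_index, position) pairs and the dataflow as an ordered list of tiles, is an assumption made for illustration, as is the function name; the sketch only captures the step of substituting a vertically adjacent tile on a different die for the faulty one.

```python
def reroute_around_faulty_tile(dataflow, faulty_tile, num_dies):
    """Sketch: hand the faulty tile's place in the dataflow to a stacked neighbour.

    The tile directly above or below the faulty tile (same position, adjacent
    die) is substituted into the route, so the output of the preceding tile is
    routed to that neighbour's input connection instead.
    """
    die, position = faulty_tile
    if die + 1 < num_dies:            # prefer the tile on the die above
        substitute = (die + 1, position)
    elif die > 0:                     # otherwise fall back to the die below
        substitute = (die - 1, position)
    else:
        raise ValueError("no vertically adjacent die available")

    return [substitute if tile == faulty_tile else tile for tile in dataflow]


# Example: a two-die stack where tile (0, 1) is reported faulty.
route = [(0, 0), (0, 1), (0, 2)]
print(reroute_around_faulty_tile(route, (0, 1), num_dies=2))
# [(0, 0), (1, 1), (0, 2)]
```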
-
Publication Number: US09836691B1
Publication Date: 2017-12-05
Application Number: US15455685
Filing Date: 2017-03-10
Applicant: Google Inc.
CPC Classification: G06N3/04, G06F13/28, G06N3/0454, G06N3/063
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.