Abstract:
The present disclosure is directed to systems and methods for training neural networks using a tensor that includes a plurality of FP16 values and a plurality of bits that define an exponent shared by some or all of the FP16 values included in the tensor. The FP16 values may include IEEE 754 format 16-bit floating point values and the tensor may include a plurality of bits defining the shared exponent. The tensor may include a shared exponent and FP16 values that include a variable bit-length mantissa and a variable bit-length exponent that may be dynamically set by processor circuitry. The tensor may include a shared exponent and FP16 values that include a variable bit-length mantissa; a variable bit-length exponent that may be dynamically set by processor circuitry; and a shared exponent switch set by the processor circuitry to selectively combine the FP16 value exponent with the shared exponent.
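A minimal sketch of the shared-exponent idea described above, not the patented implementation: per-element FP16 values are paired with a single exponent for the whole tensor. The function names and the NumPy-based encoding are illustrative assumptions.

import numpy as np

def encode_shared_exponent(values):
    """Split a float32 array into FP16 values plus one shared exponent."""
    shared_exp = int(np.floor(np.log2(np.max(np.abs(values)) + 1e-30)))
    # Scale so the largest magnitude lands near 1.0 and fits easily in FP16.
    scaled = (values / (2.0 ** shared_exp)).astype(np.float16)
    return scaled, shared_exp

def decode_shared_exponent(fp16_values, shared_exp):
    """Recombine the per-element FP16 values with the shared exponent."""
    return fp16_values.astype(np.float32) * (2.0 ** shared_exp)

x = np.array([1200.5, -3.75, 0.015, 88.0], dtype=np.float32)
packed, exp = encode_shared_exponent(x)
print(decode_shared_exponent(packed, exp))  # approximately recovers x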
Abstract:
An apparatus and method for converting tensor data. For example, one embodiment of a method comprises: fetching source tensor blocks of a source tensor data structure, each source tensor block comprising a plurality of source tensor data elements having a first numeric representation, wherein the source tensor data structure comprises a predefined structural arrangement of source tensor blocks; converting the one or more source tensor blocks into one or more destination tensor blocks comprising a plurality of destination tensor data elements having a second numeric representation different from the first numeric representation, wherein the one or more source tensor blocks are converted to one or more corresponding destination tensor blocks in a specified order based on the first and second numeric representations; and storing each individual destination tensor block in a designated memory region to maintain coherency with the predefined structural arrangement of the source tensor blocks.
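A minimal software sketch of the block-wise conversion flow, assuming a simple 2-D tensor tiled into fixed-size blocks; the block size and the float32-to-float16 conversion are illustrative choices rather than details from the claim.

import numpy as np

BLOCK = 4  # illustrative block edge length

def convert_tensor_blocks(src, dst_dtype=np.float16):
    """Convert each source block to the destination representation, writing
    destination blocks so they preserve the source structural arrangement."""
    rows, cols = src.shape
    dst = np.empty((rows, cols), dtype=dst_dtype)
    for r in range(0, rows, BLOCK):          # iterate blocks in a fixed order
        for c in range(0, cols, BLOCK):
            block = src[r:r + BLOCK, c:c + BLOCK]
            dst[r:r + BLOCK, c:c + BLOCK] = block.astype(dst_dtype)
    return dst

src = np.random.rand(8, 8).astype(np.float32)
dst = convert_tensor_blocks(src)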
Abstract:
Described herein are methods, systems, and apparatuses to perform a matrix operation by accessing each of the operation's matrix operands via a respective single memory handle. Using a single memory handle for each matrix operand eliminates significant overhead in memory allocation, data tracking, and subroutine complexity present in prior art solutions. The result of the matrix operation can also be accessible via a single memory handle identifying the matrix elements of the result.
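A minimal sketch of the single-memory-handle idea in software; the registry, handle values, and function names below are hypothetical illustrations, not the disclosed apparatus.

import numpy as np

_handles = {}          # handle -> matrix storage
_next_id = 0

def register_matrix(matrix):
    """Store a matrix and return one handle identifying all of its elements."""
    global _next_id
    _next_id += 1
    _handles[_next_id] = np.asarray(matrix)
    return _next_id

def matmul_by_handle(handle_a, handle_b):
    """Run a matrix operation addressed entirely by operand handles."""
    result = _handles[handle_a] @ _handles[handle_b]
    return register_matrix(result)   # result is also reachable via one handle

h_a = register_matrix(np.eye(2))
h_b = register_matrix([[1.0, 2.0], [3.0, 4.0]])
h_c = matmul_by_handle(h_a, h_b)
print(_handles[h_c])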
Abstract:
A network of matrix processing units (MPUs) is provided on a device, where each MPU is connected to at least one other MPU in the network, and each MPU is to perform matrix multiplication operations. Computer memory stores tensor data and a master control central processing unit (MCC) is provided on the device to receive an instruction from a host device, where the instruction includes one or more tensor operands based on the tensor data. The MCC invokes a set of operations on one or more of the MPUs based on the instruction, where the set of operations includes operations on the tensor operands. A result is generated from the set of operations, the result embodied as a tensor value.
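A minimal software sketch of the control flow described above: a master control unit receives an instruction with tensor operands and fans the work out to matrix processing units, returning the result as a tensor value. The class names, instruction string, and row-splitting strategy are illustrative assumptions.

import numpy as np

class MPU:
    """A matrix processing unit that performs matrix multiplication."""
    def matmul(self, a, b):
        return a @ b

class MasterControl:
    def __init__(self, mpus):
        self.mpus = mpus

    def execute(self, instruction, operands):
        """Invoke operations on the MPUs based on the instruction."""
        if instruction == "MATMUL":
            # Split rows of the first operand across the available MPUs.
            chunks = np.array_split(operands[0], len(self.mpus))
            partials = [mpu.matmul(chunk, operands[1])
                        for mpu, chunk in zip(self.mpus, chunks)]
            return np.vstack(partials)   # result embodied as a tensor value
        raise ValueError(f"unsupported instruction: {instruction}")

mcc = MasterControl([MPU(), MPU()])
a, b = np.random.rand(4, 3), np.random.rand(3, 2)
result = mcc.execute("MATMUL", (a, b))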
Abstract:
Embodiments include a method comprising identifying, by an instruction scheduler of a processor core, a first high power instruction in an instruction stream to be executed by an execution unit of the processor core. A pre-charge signal is asserted indicating that the first high power instruction is scheduled for execution. Subsequent to the pre-charge signal being asserted, a voltage boost signal is asserted to cause a supply voltage for the execution unit to be increased. A busy signal indicating that the first high power instruction is executing is received from the execution unit. Based at least in part on the busy signal being asserted, the voltage boost signal is de-asserted. More specific embodiments include decreasing the supply voltage for the execution unit subsequent to de-asserting the voltage boost signal. Further embodiments include delaying asserting the voltage boost signal based on a start delay time.
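A minimal software sketch of the signal sequence described above; the hardware signals are simulated with simple booleans and a delay parameter, and all names are illustrative rather than taken from the claims.

import time

class BoostController:
    def __init__(self, start_delay_s=0.0):
        self.start_delay_s = start_delay_s
        self.pre_charge = False
        self.voltage_boost = False

    def schedule_high_power_instruction(self):
        # Assert pre-charge when the scheduler sees a high power instruction.
        self.pre_charge = True
        # Optionally delay before asserting the voltage boost signal.
        time.sleep(self.start_delay_s)
        self.voltage_boost = True   # raises the execution unit supply voltage

    def on_busy_asserted(self):
        # Once the execution unit reports the instruction is executing,
        # de-assert the boost; the supply voltage can then be decreased.
        self.voltage_boost = False

ctrl = BoostController(start_delay_s=0.001)
ctrl.schedule_high_power_instruction()
ctrl.on_busy_asserted()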
Abstract:
Updating an artificial neural network is disclosed. A node characteristic is represented using a fixed point node characteristic parameter. A network characteristic is represented using a fixed point network characteristic parameter. The fixed point node characteristic parameter and the fixed point network characteristic parameter are processed to determine a fixed point intermediate parameter having a larger size than either the fixed point node characteristic parameter or the fixed point network characteristic parameter. A value associated with the fixed point intermediate parameter is truncated according to a system truncation schema. The artificial neural network is updated according to the truncated value.
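A minimal sketch of the fixed point arithmetic described above, assuming an illustrative Q4.4 format: an 8-bit node parameter and an 8-bit network (weight) parameter multiply into a 16-bit intermediate, which is then truncated per a simple truncation scheme. The format choice and function names are assumptions for illustration.

FRACTION_BITS = 4          # illustrative Q4.4 format for the 8-bit values

def to_fixed(x):
    """Represent a real value as an 8-bit fixed point integer."""
    return int(round(x * (1 << FRACTION_BITS)))

def update_parameter(node_fixed, weight_fixed):
    # The product of two 8-bit Q4.4 values is a 16-bit Q8.8 intermediate,
    # larger than either input parameter.
    intermediate = node_fixed * weight_fixed
    # Truncate: drop the low FRACTION_BITS so the value is back in
    # 4-fraction-bit form before it is used to update the network.
    return intermediate >> FRACTION_BITS

node = to_fixed(1.5)       # 24 in Q4.4
weight = to_fixed(0.75)    # 12 in Q4.4
print(update_parameter(node, weight) / (1 << FRACTION_BITS))  # 1.125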