-
Publication Number: US12141225B2
Publication Date: 2024-11-12
Application Number: US16750823
Filing Date: 2020-01-23
Applicant: NVIDIA Corporation
Inventor: William James Dally, Rangharajan Venkatesan, Brucek Kurdo Khailany
Abstract: Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components using an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
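To make the quotient/remainder decomposition concrete, below is a minimal Python sketch of adding log-format values in this style. It is an illustrative software model, not the patented accumulator hardware: the number of fractional exponent bits (FRAC_BITS), the 16-bit shift offset, and the helper names are assumptions, and the final combine uses floating point where the patent describes integer partial sums.

```python
import math
from collections import defaultdict

FRAC_BITS = 2           # assumed number of fractional exponent bits
SCALE = 1 << FRAC_BITS  # exponent = quotient * SCALE + remainder

def to_log(x):
    """Encode a positive value as a fixed-point base-2 log exponent (rounded)."""
    return round(math.log2(x) * SCALE)

def from_log(e):
    """Decode a fixed-point exponent back to a linear value."""
    return 2.0 ** (e / SCALE)

def log_add(exponents):
    """Add log-format values by bucketing shift-based partial sums per remainder."""
    partial = defaultdict(int)
    for e in exponents:
        q, r = divmod(e, SCALE)          # quotient and remainder components
        partial[r] += 1 << (q + 16)      # integer shift-and-accumulate
        # (the assumed 16-bit offset keeps small values representable as left shifts)
    # Multiply each partial sum by 2**(r / SCALE), combine, then re-encode.
    total = sum(s * 2.0 ** (r / SCALE) for r, s in partial.items()) / (1 << 16)
    return to_log(total)

# Example: 3.0 + 5.0 in the logarithmic format.
result = log_add([to_log(3.0), to_log(5.0)])
print(from_log(result))  # ~8.0 after coarse 2-bit re-quantization
```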
-
Publication Number: US12045307B2
Publication Date: 2024-07-23
Application Number: US17086118
Filing Date: 2020-10-30
Applicant: NVIDIA Corporation
Inventor: Brucek Kurdo Khailany, Steve Haihang Dai, Rangharajan Venkatesan, Haoxing Ren
CPC classification number: G06F17/16, G06F5/01, G06F7/5443
Abstract: Today neural networks are used to enable autonomous vehicles and improve the quality of speech recognition, real-time language translation, and online search optimizations. However, operation of the neural networks for these applications consumes energy. Quantization of parameters used by the neural networks reduces the amount of memory needed to store the parameters while also reducing the power consumed during operation of the neural network. Matrix operations performed by the neural networks require many multiplication calculations, so reducing the number of bits that are multiplied reduces the energy that is consumed. Quantizing smaller sets of the parameters using a shared scale factor improves accuracy compared with quantizing larger sets of the parameters. Accuracy of the calculations may be maintained by quantizing and scaling the parameters using fine-grained per-vector scale factors. A vector includes one or more elements within a single dimension of a multi-dimensional matrix.
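As an illustration of fine-grained per-vector scaling, here is a minimal NumPy sketch that quantizes each group of V consecutive elements in a row with its own scale factor. The vector length, bit width, and function names are assumed for illustration and are not taken from the patent.

```python
import numpy as np

V = 16        # assumed number of elements sharing one scale factor
BITS = 4      # assumed quantized bit width
QMAX = 2 ** (BITS - 1) - 1

def quantize_per_vector(weights):
    """Quantize a 2-D weight matrix with a separate scale per length-V vector."""
    rows, cols = weights.shape
    assert cols % V == 0, "illustrative sketch assumes cols is a multiple of V"
    w = weights.reshape(rows, cols // V, V)
    scales = np.abs(w).max(axis=-1, keepdims=True) / QMAX   # fine-grained scales
    scales = np.where(scales == 0, 1.0, scales)              # avoid divide-by-zero
    q = np.clip(np.round(w / scales), -QMAX - 1, QMAX).astype(np.int8)
    return q.reshape(rows, cols), scales.squeeze(-1)

def dequantize_per_vector(q, scales):
    """Reconstruct an approximation of the original weights."""
    rows, cols = q.shape
    return (q.reshape(rows, cols // V, V) * scales[..., None]).reshape(rows, cols)

# Example: per-vector scales track local magnitude variation within each row.
w = np.random.randn(8, 64).astype(np.float32)
q, s = quantize_per_vector(w)
print(np.abs(w - dequantize_per_vector(q, s)).max())
```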
-
Publication Number: US12033060B2
Publication Date: 2024-07-09
Application Number: US16750917
Filing Date: 2020-01-23
Applicant: NVIDIA Corporation
Abstract: Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components using an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
-
Publication Number: US20240112007A1
Publication Date: 2024-04-04
Application Number: US18537570
Filing Date: 2023-12-12
Applicant: NVIDIA Corporation
Inventor: William James Dally, Rangharajan Venkatesan, Brucek Kurdo Khailany
CPC classification number: G06N3/063, G06F7/4833, G06F17/16
Abstract: Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
-
Publication Number: US20230229916A1
Publication Date: 2023-07-20
Application Number: US18157608
Filing Date: 2023-01-20
Applicant: NVIDIA Corporation
Inventor: Gal Chechik, Eli Alexander Meirom, Haggai Maron, Brucek Kurdo Khailany, Paul Martin Springer, Shie Mannor
Abstract: A method for contracting a tensor network is provided. The method comprises generating a graph representation of the tensor network, processing the graph representation to determine a contraction for the tensor network by an agent that implements a reinforcement learning algorithm, and processing the tensor network in accordance with the contraction to generate a contracted tensor network.
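The sketch below illustrates the overall flow on a toy network: build a graph whose nodes are tensors and whose edges are shared indices, let a policy choose which edge to contract, and repeat until one tensor remains. A greedy size heuristic stands in for the reinforcement-learning agent, and the three-tensor example is purely illustrative.

```python
import numpy as np

def build_graph(tensors, index_map):
    """Nodes are tensor ids; an edge exists where two tensors share an index."""
    return [(a, b) for a in index_map for b in index_map
            if a < b and set(index_map[a]) & set(index_map[b])]

def contract_pair(tensors, index_map, a, b):
    """Contract tensors a and b over their shared indices with einsum."""
    shared = set(index_map[a]) & set(index_map[b])
    out = [i for i in index_map[a] + index_map[b] if i not in shared]
    spec = f"{index_map[a]},{index_map[b]}->{''.join(out)}"
    tensors[a] = np.einsum(spec, tensors[a], tensors[b])
    index_map[a] = ''.join(out)
    del tensors[b], index_map[b]

def contract_network(tensors, index_map, choose):
    """Repeatedly let the policy choose an edge of the graph to contract."""
    while len(tensors) > 1:
        a, b = choose(tensors, index_map, build_graph(tensors, index_map))
        contract_pair(tensors, index_map, a, b)
    return next(iter(tensors.values()))

def greedy_policy(tensors, index_map, edges):
    """Stand-in for the learned agent: pick the pair with the smallest result."""
    def result_size(a, b):
        shared = set(index_map[a]) & set(index_map[b])
        keep = [i for i in index_map[a] + index_map[b] if i not in shared]
        dims = {i: d for t in (a, b) for i, d in zip(index_map[t], tensors[t].shape)}
        return int(np.prod([dims[i] for i in keep])) if keep else 1
    return min(edges, key=lambda e: result_size(*e))

# Example: a small matrix-chain network A(ij) B(jk) C(kl) -> result(il).
tensors = {0: np.random.rand(4, 8), 1: np.random.rand(8, 6), 2: np.random.rand(6, 3)}
index_map = {0: "ij", 1: "jk", 2: "kl"}
print(contract_network(tensors, index_map, greedy_policy).shape)   # (4, 3)
```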
-
Publication Number: US20210056397A1
Publication Date: 2021-02-25
Application Number: US16549683
Filing Date: 2019-08-23
Applicant: NVIDIA Corporation
Inventor: William James Dally, Rangharajan Venkatesan, Brucek Kurdo Khailany
Abstract: Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
-
Publication Number: US09886409B2
Publication Date: 2018-02-06
Application Number: US14715394
Filing Date: 2015-05-18
Applicant: NVIDIA Corporation
Inventor: Stephen William Keckler, William J. Dally, Steven Lee Scott, Brucek Kurdo Khailany, Michael Allen Parker
CPC classification number: G06F13/409, G06F13/1668, G06F13/4068, G06F17/5054
Abstract: An integrated circuit device comprises pin resources, a memory controller circuit, a network interface controller circuit, and transmitter circuitry. The pin resources comprise pads coupled to off-chip pins of the integrated circuit device. The memory controller circuit comprises a first interface and the network interface controller circuit comprises a second interface. The transmitter circuitry is configurable to selectively couple either a first signal of the first interface or a second signal of the second interface to a first pad of the pin resources based on a pin distribution between the first interface and the second interface.
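A behavioral software model can make the pad-selection idea concrete. The Python sketch below is an assumption-laden stand-in for the transmitter circuitry: names such as PinDistribution and drive_pads are invented for illustration, and real hardware would multiplex electrical signals rather than lists of integers.

```python
from dataclasses import dataclass
from typing import List

MEM, NIC = "mem", "nic"

@dataclass
class PinDistribution:
    """Assigns each pad index to one of the two on-chip interfaces."""
    assignment: List[str]          # e.g. ["mem", "mem", "nic", ...]

def drive_pads(distribution: PinDistribution, mem_signals: List[int],
               nic_signals: List[int]) -> List[int]:
    """Selectively couple interface signals to pads per the pin distribution."""
    pads, mem_idx, nic_idx = [], 0, 0
    for owner in distribution.assignment:
        if owner == MEM:
            pads.append(mem_signals[mem_idx]); mem_idx += 1
        else:
            pads.append(nic_signals[nic_idx]); nic_idx += 1
    return pads

# Example: 6 pads split 4/2 between the memory and network interfaces.
dist = PinDistribution(assignment=[MEM, MEM, NIC, MEM, NIC, MEM])
print(drive_pads(dist, mem_signals=[1, 0, 1, 1], nic_signals=[0, 1]))
```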
-
Publication Number: US20170212857A1
Publication Date: 2017-07-27
Application Number: US14715394
Filing Date: 2015-05-18
Applicant: NVIDIA Corporation
Inventor: Stephen William Keckler, William J. Dally, Steven Lee Scott, Brucek Kurdo Khailany, Michael Allen Parker
CPC classification number: G06F13/409, G06F13/1668, G06F13/4068, G06F17/5054
Abstract: An integrated circuit device comprises pin resources, a memory controller circuit, a network interface controller circuit, and transmitter circuitry. The pin resources comprise pads coupled to off-chip pins of the integrated circuit device. The memory controller circuit comprises a first interface and the network interface controller circuit comprises a second interface. The transmitter circuitry is configurable to selectively couple either a first signal of the first interface or a second signal of the second interface to a first pad of the pin resources based on a pin distribution between the first interface and the second interface.
-
Publication Number: US20240311626A1
Publication Date: 2024-09-19
Application Number: US18674632
Filing Date: 2024-05-24
Applicant: NVIDIA Corporation
Abstract: Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components using an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
-
Publication Number: US20230237308A1
Publication Date: 2023-07-27
Application Number: US17814957
Filing Date: 2022-07-26
Applicant: NVIDIA Corporation
Inventor: Charbel Sakr, Steve Haihang Dai, Brucek Kurdo Khailany, William James Dally, Rangharajan Venkatesan, Brian Matthew Zimmer
Abstract: Quantizing tensors and vectors processed within a neural network reduces power consumption and may accelerate processing. Quantization reduces the number of bits used to represent a value, where decreasing the number of bits used can decrease the accuracy of computations that use the value. Ideally, quantization is performed without reducing accuracy. Quantization-aware training (QAT) is performed by dynamically quantizing tensors (weights and activations) using optimal clipping scalars. “Optimal” in that the mean squared error (MSE) of the quantized operation is minimized and the clipping scalars define the degree or amount of quantization for various tensors of the operation. Conventional techniques that quantize tensors during training suffer from high amounts of noise (error). Other techniques compute the clipping scalars offline through a brute force search to provide high accuracy. In contrast, the optimal clipping scalars can be computed online and provide the same accuracy as the clipping scalars computed offline.
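To illustrate MSE-minimizing clipping, the sketch below picks, for a given tensor and bit width, the clipping scalar with the lowest quantization mean squared error. The simple grid search is a stand-in for the online computation described in the abstract; the bit width and candidate count are assumed values.

```python
import numpy as np

BITS = 4
LEVELS = 2 ** (BITS - 1) - 1   # symmetric signed quantization

def quantize(x, clip):
    """Clip to [-clip, clip], then quantize uniformly to the integer grid."""
    scale = clip / LEVELS
    return np.clip(np.round(x / scale), -LEVELS, LEVELS) * scale

def optimal_clip(x, candidates=64):
    """Pick the clipping scalar that minimizes MSE of the quantized tensor."""
    clips = np.linspace(np.abs(x).max() / candidates, np.abs(x).max(), candidates)
    mses = [np.mean((x - quantize(x, c)) ** 2) for c in clips]
    return clips[int(np.argmin(mses))]

# Example: for heavy-tailed weights, the MSE-optimal clip is well below the max,
# trading a little clipping error for much finer resolution of typical values.
w = np.random.laplace(size=100_000).astype(np.float32)
c = optimal_clip(w)
print(c, np.abs(w).max())
```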
-