-
1.
Publication No.: US20250078815A1
Publication Date: 2025-03-06
Application No.: US18826135
Filing Date: 2024-09-05
Applicant: Google LLC
Inventor: Shaojin Ding , David Qiu , David Rim , Amir Yazdanbakhsh , Yanzhang He , Zhonglin Han , Rohit Prakash Prabhavalkar , Weiran Wang , Bo Li , Jian Li , Tara N. Sainath , Shivani Agrawal , Oleg Rybakov
IPC: G10L15/06
Abstract: A method includes obtaining a plurality of training samples that each include a respective speech utterance and a respective textual utterance representing a transcription of the respective speech utterance. The method also includes fine-tuning, using quantization- and sparsity-aware training with native integer operations, a pre-trained automatic speech recognition (ASR) model on the plurality of training samples. Here, the pre-trained ASR model includes a plurality of weights, and the fine-tuning includes pruning one or more weights of the plurality of weights using a sparsity mask and quantizing each weight of the plurality of weights based on an integer with a fixed-bit width. The method also includes providing the fine-tuned ASR model to a user device.
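The two fine-tuning operations named in the abstract, pruning with a sparsity mask and fixed-bit integer quantization, can be sketched as follows. This is a minimal NumPy illustration, not the patented training procedure: the function name, the magnitude-based mask, and the symmetric per-tensor scale are all assumptions for demonstration.

```python
import numpy as np

def prune_and_quantize(weights, sparsity=0.5, bits=8):
    """Sketch: prune weights via a sparsity mask, then quantize the
    survivors to a fixed-bit signed-integer grid (assumed symmetric)."""
    # Sparsity mask: zero out the smallest-magnitude fraction of weights.
    k = int(weights.size * sparsity)
    flat = np.sort(np.abs(weights), axis=None)
    threshold = flat[k] if k < weights.size else np.inf
    mask = np.abs(weights) >= threshold
    pruned = weights * mask

    # Fixed-bit quantization: map floats to integers in [-2^(b-1), 2^(b-1)-1].
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8 bits
    max_abs = float(np.max(np.abs(pruned)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(pruned / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale, mask
```

Dequantizing with `q * scale` recovers an approximation of the pruned weights, which is what lets inference run on native integer operations.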
-
2.
Publication No.: US20240428006A1
Publication Date: 2024-12-26
Application No.: US18211967
Filing Date: 2023-06-20
Applicant: Google LLC
Inventor: Jian Li , Zhifeng Chen , Yanping Huang , Yuanzhong Xu , Tao Wang , YaGuang Li
IPC: G06F40/40
Abstract: Implementations relate to asymmetric quantization of large language models (LLMs). Processor(s) of a system can: obtain a trained LLM, wherein the trained LLM includes a plurality of layers, each layer comprising a respective plurality of weights; for each layer of the plurality of layers, calculate an optimal clipping range for the respective plurality of weights and clip one or more weights that lie outside of that range to produce a clipped layer; quantize the LLM to generate a quantized LLM by mapping weights of the plurality of clipped layers from continuous values to discrete values; and provide the quantized LLM for downstream processing.
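The per-layer clip-then-quantize flow in the abstract can be sketched as below. A simple percentile heuristic stands in for the patent's optimal clipping-range calculation, and the asymmetric scale/zero-point mapping is an assumed concrete form of "continuous values to discrete values"; function and parameter names are illustrative.

```python
import numpy as np

def clip_and_quantize_layer(weights, bits=8, clip_pct=99.9):
    """Sketch: clip outlier weights to a range, then map the clipped
    floats to an unsigned integer grid with an asymmetric zero point."""
    # Clipping range: percentile heuristic standing in for the
    # optimal-range search described in the abstract (assumption).
    lo = float(np.percentile(weights, 100.0 - clip_pct))
    hi = float(np.percentile(weights, clip_pct))
    clipped = np.clip(weights, lo, hi)

    # Asymmetric mapping: scale plus zero point, so lo -> 0 and
    # hi -> 2^bits - 1 (the grid need not be centered on zero).
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(clipped / scale) + zero_point, 0, levels).astype(np.int64)
    return q, scale, zero_point
```

The asymmetric grid matters because LLM weight distributions per layer are often skewed; a zero point lets the full integer range cover `[lo, hi]` instead of wasting levels on an unused tail.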
-