-
1.
Publication No.: US20240185078A1
Publication Date: 2024-06-06
Application No.: US18456112
Filing Date: 2023-08-25
Applicant: QUALCOMM Incorporated
Inventor: Simyung CHANG , Byeonggeun KIM , Seunghan YANG , Kyuhong SHIM
Abstract: A processor-implemented method includes generating, for each input of a group of inputs, a clean sample and an augmented sample. The method also includes associating, for each input of the group of inputs, the clean sample with the augmented sample to form a positive pair. The method further includes associating, for each input of the group of inputs, the clean sample with another clean sample associated with another input of the group of inputs to form a negative pair. The method still further includes learning one or more representations of the group of inputs based on the positive pair and the negative pair of each input of the group of inputs.
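The pair construction the abstract describes can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the noise-based `augment` function and the next-neighbor negative selection are assumptions.

```python
import random

def augment(x, noise=0.1, rng=None):
    # Hypothetical augmentation: perturb each feature with small noise.
    rng = rng or random.Random(0)
    return [v + rng.uniform(-noise, noise) for v in x]

def build_pairs(inputs):
    # For each input: (clean, augmented) forms a positive pair, and
    # (clean, another input's clean sample) forms a negative pair.
    clean = [list(x) for x in inputs]
    augmented = [augment(x) for x in clean]
    positives = list(zip(clean, augmented))
    negatives = [(clean[i], clean[(i + 1) % len(clean)])
                 for i in range(len(clean))]
    return positives, negatives
```

A contrastive objective would then pull each positive pair together and push each negative pair apart in representation space.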
-
2.
Publication No.: US20250131262A1
Publication Date: 2025-04-24
Application No.: US18493286
Filing Date: 2023-10-24
Applicant: QUALCOMM Incorporated
Inventor: Jihwan BANG , Kyu Woong HWANG , Simyung CHANG , Juntae LEE , Kyuhong SHIM , Seunghan YANG , Eunji KIM
IPC: G06N3/08
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for improved machine learning. A machine learning model and an enrollment dataset for a device are accessed. A personalized model adapter, generated based on the enrollment dataset and a plurality of model adapters, is accessed. An input to the machine learning model is processed using the machine learning model in conjunction with the personalized model adapter. An output is provided, by the device, based on the processing.
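One way to read the adapter-generation step is as a weighted blend of a bank of model adapters, with weights derived from the enrollment data. The sketch below is an assumption: the score-based weighting and the elementwise adapter form are not specified by the abstract.

```python
def personalize(adapter_bank, enrollment_scores):
    # Blend the adapter bank into one personalized adapter, weighting
    # each adapter by how well it fits the enrollment data (assumed).
    total = sum(enrollment_scores)
    weights = [s / total for s in enrollment_scores]
    dim = len(adapter_bank[0])
    return [sum(w * a[i] for w, a in zip(weights, adapter_bank))
            for i in range(dim)]

def adapted_forward(x, adapter):
    # Base model pass-through plus the adapter's correction (sketch).
    return [xi + ai * xi for xi, ai in zip(x, adapter)]
```

The base model itself stays frozen; only the small personalized adapter varies per device.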
-
3.
Publication No.: US20250131023A1
Publication Date: 2025-04-24
Application No.: US18492360
Filing Date: 2023-10-23
Applicant: QUALCOMM Incorporated
Inventor: Kyuhong SHIM , Jaeseong YOU , Sunghyun PARK , Geunho LEE , Michael Franco TAVEIRA
IPC: G06F16/33 , G06F3/01 , G06F3/041 , G06F16/335 , G06F16/34
Abstract: Various embodiments include systems and methods for generating a prompt for a large generative AI model (LXM), such as a large language model (LLM), large speech model (LSM), large vision model (LVM), vision language model (VLM), hybrid model, multi-modal model, etc. A computing device may be equipped with components configured to receive a user's prompt for the LXM, determine the user's attention to the subject matter at the time of or prior to receipt of the user's prompt, generate an enhanced prompt based on the user's prompt and the subject matter to which the user is paying attention, and submit the enhanced prompt to the LXM.
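The enhancement step can be sketched as prepending the attended subject matter to the raw prompt. The function name and the simple concatenation format are assumptions for illustration only; the patent does not specify how the prompt is composed.

```python
def enhance_prompt(user_prompt, attention_context):
    # Prepend the subject matter the user was attending to (e.g. text
    # on screen, a gaze target) to the raw prompt before submission.
    if not attention_context:
        return user_prompt
    return f"Context: {attention_context}\n\nUser request: {user_prompt}"
```

With no attention context available, the user's prompt passes through unchanged.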
-
4.
Publication No.: US20250077313A1
Publication Date: 2025-03-06
Application No.: US18459239
Filing Date: 2023-08-31
Applicant: QUALCOMM Incorporated
Inventor: Simyung CHANG , Kyu Woong HWANG , Juntae LEE , Kyuhong SHIM , Jihwan BANG , Seunghan YANG , Jaeseong YOU , Minseop PARK , Christopher LOTT
Abstract: A processor-implemented method for generating a default adapter for context switching includes analyzing a first neural network model and one or more adapters. The first neural network model is pre-trained, and each of the adapters is configured with an architecture and parameters for performing a different downstream task of a set of downstream tasks. A default adapter is defined based on a capacity of the one or more adapters. The default adapter is applied to one or more layers of the first neural network model during a context switch to replace one of the adapters for a different task. A graph corresponding to the first neural network model is unchanged.
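A minimal sketch of the idea, under the assumption that a zero-valued adapter of matching capacity acts as an identity: sizing the default adapter to the task adapters' capacity keeps the model graph fixed across context switches.

```python
def make_default_adapter(task_adapters):
    # Match the largest task-adapter capacity so the model graph stays
    # fixed; a zero adapter behaves as an identity (assumed scheme).
    capacity = max(len(a) for a in task_adapters)
    return [0.0] * capacity

def apply_adapter(x, adapter):
    # Same graph for every adapter: output = x + adapter * x.
    return [xi + ai * xi for xi, ai in zip(x, adapter)]
```

Swapping adapter parameters in and out then never requires recompiling or re-tracing the network graph.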
-
5.
Publication No.: US20250165854A1
Publication Date: 2025-05-22
Application No.: US18514602
Filing Date: 2023-11-20
Applicant: QUALCOMM Incorporated
Inventor: Simyung CHANG , Jaeseong YOU , Minseop PARK , Kyuhong SHIM , Kyu Woong HWANG
IPC: G06N20/00
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for improved machine learning. A first machine learning model comprising a first plurality of blocks is accessed, the first plurality of blocks being associated with a first precision. A second machine learning model comprising a second plurality of blocks associated with a second precision is accessed, where the second plurality of blocks comprises a first block that corresponds to a first block of the first plurality of blocks. An input to the first machine learning model is processed using the first plurality of blocks and the second plurality of blocks, comprising modifying an output of the first block of the first plurality of blocks based on the corresponding first block of the second plurality of blocks. An output of the first machine learning model is provided based on the processing.
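The block-pairing scheme can be sketched with scalar blocks, using grid rounding to stand in for the second, coarser precision. The additive correction and the quantization step are assumptions; the abstract only says the high-precision block's output is "modified based on" its low-precision counterpart.

```python
def quantize(v, step=0.25):
    # Simulate a lower-precision value by rounding to a coarse grid.
    return round(v / step) * step

def process(x, high_blocks, low_blocks, step=0.25):
    # Run matched block pairs; each first-model block's output is
    # adjusted using the corresponding second-model block (sketch).
    h = x
    for hb, lb in zip(high_blocks, low_blocks):
        out = hb(h)
        correction = quantize(lb(h), step)
        h = out + correction
    return h
```

Here each pair of corresponding blocks sees the same input, and the low-precision result is folded into the high-precision output before the next pair runs.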
-
6.
Publication No.: US20250095643A1
Publication Date: 2025-03-20
Application No.: US18468964
Filing Date: 2023-09-18
Applicant: QUALCOMM Incorporated
Inventor: Kyu Woong HWANG , Simyung CHANG , Sungha CHOI , Joo Seong JEONG , Hyoungwoo PARK , Kyuhong SHIM
IPC: G10L15/197 , G10L15/06 , G10L15/22
Abstract: Various embodiments include systems and methods for continuous speech monitoring artificial intelligence solutions. A low-power always-on listening module (LPALM) may maintain continuous auditory awareness or alertness without consuming an excessive amount of the processing, memory, or battery resources of the user computing system or device. As such, the LPALM may operate on the computing device for an extended period of time without depleting the device's battery resources, rendering the user device non-responsive, or otherwise having a negative or user-perceivable impact on the performance, functionality, or power consumption characteristics of the user device.