Invention Grant
- Patent Title: Generating synthetic code-switched data for training language models
- Application No.: US17651555
- Application Date: 2022-02-17
- Publication No.: US12242820B2
- Publication Date: 2025-03-04
- Inventors: Cesa Salaam, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt
- Applicant: Adobe Inc.
- Applicant Address: San Jose, CA, US
- Assignee: Adobe Inc.
- Current Assignee: Adobe Inc.
- Current Assignee Address: San Jose, CA, US
- Agency: Weaver Austin Villeneuve & Sampson LLP
- Main IPC: G10L15/22
- IPC: G10L15/22 ; G06F40/47 ; G06F40/58 ; G06N3/045 ; G06N3/08

Abstract:
Techniques for training a language model on code-switched content are disclosed. Such techniques include, in some embodiments, generating a dataset by identifying one or more portions within textual content in a first language, each identified portion including offensive content or non-offensive content; translating the identified one or more portions to a second language; and reintegrating the translated one or more portions into the textual content to generate code-switched textual content. In some cases, the textual content in the first language includes offensive content and non-offensive content, the identified one or more portions include the offensive content, and the translated one or more portions include a translated version of the offensive content. In some embodiments, the code-switched textual content is at least part of a synthetic dataset usable to train a language model, such as a multilingual classification model.
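The abstract's pipeline (identify target portions, translate them to a second language, reintegrate them) can be sketched as follows. This is a minimal illustration only, not the patented implementation: the `lexicon` of target words, the dictionary-lookup `translate` stub, and the word-level span identification are hypothetical stand-ins for the span-identification and machine-translation components the patent describes.

```python
# Illustrative sketch of synthetic code-switched data generation:
# identify target spans in first-language text, translate them to a
# second language, and reintegrate them in place.

def identify_spans(text, lexicon):
    """Return (start, end) character spans of words found in the lexicon."""
    spans, pos = [], 0
    for word in text.split():
        start = text.index(word, pos)
        end = start + len(word)
        pos = end
        if word.lower().strip(".,!?") in lexicon:
            spans.append((start, end))
    return spans

def translate(word, dictionary):
    """Toy word-level translator; a real system would call an MT model."""
    return dictionary.get(word.lower().strip(".,!?"), word)

def code_switch(text, lexicon, dictionary):
    """Replace identified spans with their second-language translations."""
    out = text
    # Walk spans right-to-left so earlier offsets stay valid after edits.
    for start, end in reversed(identify_spans(text, lexicon)):
        out = out[:start] + translate(text[start:end], dictionary) + out[end:]
    return out

# Hypothetical English -> Spanish example with a toy offensive-word lexicon.
lexicon = {"idiot", "stupid"}
dictionary = {"idiot": "idiota", "stupid": "estúpido"}
print(code_switch("You are an idiot", lexicon, dictionary))
# -> "You are an idiota"
```

The resulting code-switched sentences would then be collected into a synthetic dataset for training a multilingual classifier, per the abstract.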
Public/Granted literature
- US20230259718A1, "Generating Synthetic Code-Switched Data for Training Language Models", Publication Date: 2023-08-17