SAMPLE CLASSIFICATION
    Invention Publication

    Publication No.: US20240104894A1

    Publication Date: 2024-03-28

    Application No.: US17949078

    Filing Date: 2022-09-20

    Applicant: Lemon Inc.

    Inventors: Song Bai; Yujun Shi

    CPC classification number: G06V10/764 G06V10/72 G06V10/771

    Abstract: A method for sample processing is proposed. A first group of data is received, where each item of data in the first group comprises a sample and a classification of the sample, the classification belonging to a first group of classifications among a plurality of classifications associated with the data. A plurality of data with the classification are selected from the first group of data. Based on the plurality of samples comprised in the selected data and on the classification, a first and a second loss function are determined for training a classification model that represents an association between samples and their classifications; the first and second loss functions represent classification accuracy and a feature distribution for the classification model, respectively. The classification model is trained based on the first and second loss functions. Therefore, the accuracy of the classification model may be increased.
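The two training objectives described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's actual formulas: cross-entropy stands in for the classification-accuracy term, and a class-compactness penalty stands in for the feature-distribution term; all function names and the weighting are assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classification_loss(logits, labels):
    # First loss function: represents classification accuracy
    # (standard cross-entropy over the model's predictions).
    probs = softmax(logits)
    n = logits.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

def feature_distribution_loss(features, labels):
    # Second loss function: represents the feature distribution by
    # pulling same-class features toward their class mean (an
    # illustrative compactness term, not the patent's exact formula).
    loss, count = 0.0, 0
    for c in np.unique(labels):
        group = features[labels == c]
        loss += ((group - group.mean(axis=0)) ** 2).sum()
        count += len(group)
    return loss / count

def total_loss(logits, features, labels, weight=0.1):
    # Train the classification model on a weighted sum of both terms.
    return classification_loss(logits, labels) + weight * feature_distribution_loss(features, labels)
```

The relative weight between the two terms is a free hyperparameter here; the patent only states that both losses are used for training.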

    MULTIMODAL DATA PROCESSING
    Invention Publication

    Publication No.: US20240144664A1

    Publication Date: 2024-05-02

    Application No.: US18393238

    Filing Date: 2023-12-21

    CPC classification number: G06V10/82 G06V10/467

    Abstract: Embodiments of the present disclosure provide a solution for multimodal data processing. A method comprises: obtaining image data and text data; and extracting a target visual feature of the image data and a target textual feature of the text data using a feature extraction model. The feature extraction model comprises alternately deployed cross-modal encoding parts and visual encoding parts. The extracting comprises: performing, using a first cross-modal encoding part of the feature extraction model, cross-modal feature encoding on a first intermediate visual feature of the image data and a first intermediate textual feature of the text data, to obtain a second intermediate visual feature and a second intermediate textual feature; and performing, using a first visual encoding part of the feature extraction model, visual modal feature encoding on the second intermediate visual feature, to obtain a third intermediate visual feature.
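The alternating cross-modal / visual encoding structure can be sketched as below. This is a toy stand-in under stated assumptions: a mean-pooled feature exchange replaces real cross-attention, and all function names are hypothetical.

```python
import numpy as np

def cross_modal_encoding_part(visual, textual):
    # Cross-modal feature encoding: each modality is updated with a
    # pooled summary of the other (a toy stand-in for cross-attention).
    new_visual = visual + textual.mean(axis=0)
    new_textual = textual + visual.mean(axis=0)
    return new_visual, new_textual

def visual_encoding_part(visual):
    # Visual-modal feature encoding applied to the visual feature only.
    return np.tanh(visual)

def extract_features(image_tokens, text_tokens, num_blocks=2):
    # Alternately deployed parts: a cross-modal encoding part followed
    # by a visual encoding part, repeated num_blocks times.
    visual, textual = image_tokens, text_tokens
    for _ in range(num_blocks):
        visual, textual = cross_modal_encoding_part(visual, textual)
        visual = visual_encoding_part(visual)
    return visual, textual  # target visual and textual features
```

Note the asymmetry from the abstract: the visual feature passes through an extra visual-only encoding step after each cross-modal exchange, while the textual feature does not.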

    Pre-training for scene text detection

    Publication No.: US12254707B2

    Publication Date: 2025-03-18

    Application No.: US17955285

    Filing Date: 2022-09-28

    Abstract: Embodiments of the present disclosure relate to a method, device, and computer-readable storage medium for scene text detection. In the method, a first visual representation of a first image is generated with an image encoding process. A first textual representation of a first text unit in the first image is generated with a text encoding process based on a first plurality of symbols, obtained by masking a first symbol of a plurality of symbols in the first text unit. A first prediction of the masked first symbol is determined with a decoding process based on the first visual and textual representations. At least the image encoding process is updated according to at least a first training objective to increase at least a similarity between the first prediction and the masked first symbol.
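The masked-symbol pretraining loop can be sketched as below. This is an illustrative stand-in, not the patent's implementation: the "decoder" simply scores vocabulary symbols at the masked position, and all names (MASK, vocab, score table) are hypothetical.

```python
MASK = "<mask>"

def mask_first_symbol(symbols, index):
    # Mask one symbol of the text unit; the masked symbol becomes the
    # prediction target for the decoding process.
    masked = list(symbols)
    target = masked[index]
    masked[index] = MASK
    return masked, target

def decode_masked_symbol(visual_scores, masked_symbols, vocab):
    # Toy decoder: at the masked position, choose the vocabulary symbol
    # with the highest visual score (a stand-in for the real decoder
    # that fuses the visual and textual representations).
    pos = masked_symbols.index(MASK)
    scores = visual_scores[pos]
    return vocab[max(range(len(vocab)), key=lambda i: scores[i])]

def training_objective(prediction, target):
    # Objective: increase similarity between the prediction and the
    # masked symbol; here simply 1.0 when they match, else 0.0.
    return 1.0 if prediction == target else 0.0
```

In the patent, the gradient of this objective flows back at least into the image encoding process, which is what makes it a pretraining signal for scene text detection.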

    MULTI-DIMENSIONAL GENERATIVE FRAMEWORK FOR VIDEO GENERATION

    Publication No.: US20240193412A1

    Publication Date: 2024-06-13

    Application No.: US18063843

    Filing Date: 2022-12-09

    Applicant: Lemon Inc.

    CPC classification number: G06N3/08 G06T2207/20081

    Abstract: A multi-dimensional video is generated using a multi-dimensional video generative model for applications including, but not limited to, at least one of static portrait animation, video reconstruction, or motion editing. The method includes providing data to the multi-dimensionally aware generator of the multi-dimensional video generative model, and generating the multi-dimensional video from the data by the multi-dimensionally aware generator. The generating of the multi-dimensional video includes: inverting the data into a latent space of the multi-dimensionally aware generator; synthesizing content of the multi-dimensional video using an appearance component of the multi-dimensionally aware generator and a corresponding camera pose, and formulating an intermediate appearance code; developing a synthesis layer for encoding a motion component of the multi-dimensionally aware generator at a plurality of timesteps, and formulating an intermediate motion code; introducing temporal dynamics into the intermediate appearance code and the intermediate motion code; and generating multi-dimensionally aware spatio-temporal representations of the data.
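The pipeline of latent inversion, appearance code, and motion code can be sketched as below. Every piece here is an assumption made for illustration: a fixed linear map stands in for the real (optimization- or encoder-based) inversion, and the appearance/motion codes are simple elementwise functions rather than the generator's actual networks.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
LATENT_DIM = 16
W_INVERT = rng.normal(size=(8, LATENT_DIM))  # hypothetical inversion weights

def invert_to_latent(data):
    # Invert the input data into the generator's latent space (a linear
    # map stands in for the real inversion procedure).
    return data @ W_INVERT

def appearance_code(latent, camera_pose):
    # Intermediate appearance code: content synthesis conditioned on a
    # (scalar) camera pose.
    return np.tanh(latent + camera_pose)

def motion_code(latent, timesteps):
    # Intermediate motion code: a synthesis layer evaluated at a
    # plurality of timesteps, introducing temporal dynamics.
    return np.stack([np.sin(latent + t) for t in timesteps])

def spatio_temporal_representation(data, camera_pose, timesteps):
    latent = invert_to_latent(data)
    app = appearance_code(latent, camera_pose)   # shape (LATENT_DIM,)
    mot = motion_code(latent, timesteps)         # shape (T, LATENT_DIM)
    return app + mot  # broadcasts to a (T, LATENT_DIM) representation
```

The key structural point the sketch preserves is that the appearance code is time-independent while the motion code varies per timestep, and the two are combined into one spatio-temporal representation.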

    DATA AUGMENTATION BASED ON ATTENTION

    Publication No.: US20220270353A1

    Publication Date: 2022-08-25

    Application No.: US17740211

    Filing Date: 2022-05-09

    Applicant: Lemon Inc.

    Abstract: Implementations of the present disclosure relate to methods, devices, and computer program products for data augmentation. In the method, mixed data is generated from first data and second data, with the mixed data comprising a first portion from the first data and a second portion from the second data. An attention map is obtained for the mixed data based on the distributions of the first and second portions in the mixed data, where the attention map describes the contributions of the first and second data to the mixed data. A label is determined for the mixed data based on the attention map, a first label for the first data, and a second label for the second data. With these implementations, the label is determined based on the contributions of the first and second data in an accurate and effective way, and thus has a value much closer to the ground truth.
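The attention-weighted labeling can be sketched as below. This is an illustrative sketch, assuming a simple column-wise split of 2-D data and an externally supplied attention map; the patent does not specify how the map is computed, and all function names are hypothetical.

```python
import numpy as np

def mix_data(first, second, cut):
    # Mixed data: columns left of `cut` come from the first sample,
    # columns from `cut` onward come from the second sample.
    mixed = first.copy()
    mixed[:, cut:] = second[:, cut:]
    return mixed

def attention_label(attention_map, cut, first_label, second_label):
    # Weight each source label by the attention mass falling on its
    # portion of the mixed data, rather than by the naive area ratio.
    w_first = attention_map[:, :cut].sum()
    w_second = attention_map[:, cut:].sum()
    total = w_first + w_second
    return (w_first / total) * first_label + (w_second / total) * second_label
```

With a uniform attention map this reduces to the usual area-ratio label; when attention concentrates on one portion, that source's label weight grows accordingly, which is what moves the label closer to the ground truth.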
