Using multiple trained models to reduce data labeling efforts

    Publication No.: US11983171B2

    Publication Date: 2024-05-14

    Application No.: US18219333

    Filing Date: 2023-07-07

    CPC classification number: G06F16/2379 G06N20/00

    Abstract: A method of labeling a dataset includes inputting a testing set comprising a plurality of input data samples into a plurality of pre-trained machine learning models to generate a set of embeddings output by the plurality of pre-trained machine learning models. The method further includes performing an iterative cluster labeling algorithm that includes generating a plurality of clusterings from the set of embeddings, analyzing the plurality of clusterings to identify a target embedding with a highest cluster quality, analyzing the target embedding to determine a compactness for each of the plurality of clusterings of the target embedding, and identifying a target cluster among the plurality of clusterings of the target embedding based on the compactness. The method further includes assigning pseudo-labels to the subset of the plurality of input data samples that are members of the target cluster.
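One round of the claimed iterative cluster-labeling loop could be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions: the `kmeans` helper, the mean within-cluster spread used as the cluster-quality proxy, and all function and parameter names are hypothetical stand-ins, not the patent's actual implementation.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Minimal k-means, standing in for whatever clustering routine is used.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def compactness(X, labels, centers):
    # Mean squared distance of members to their center (lower = tighter).
    return np.array([((X[labels == j] - c) ** 2).sum(-1).mean()
                     if np.any(labels == j) else np.inf
                     for j, c in enumerate(centers)])

def pseudo_label_round(embeddings, k=2):
    # One iteration: cluster each model's embedding of the samples, keep the
    # embedding whose clustering scores best on the quality proxy, then
    # return the member indices of its most compact cluster for pseudo-labeling.
    best = None
    for name, X in embeddings.items():
        X = np.asarray(X, dtype=float)
        labels, centers = kmeans(X, k)
        comp = compactness(X, labels, centers)
        quality = -comp[np.isfinite(comp)].mean()  # cluster-quality proxy
        if best is None or quality > best[0]:
            best = (quality, name, labels, comp)
    _, name, labels, comp = best
    target = int(np.argmin(comp))          # most compact cluster
    members = np.flatnonzero(labels == target)
    return name, target, members
```

In a full pipeline, the pseudo-labeled members would be removed from the unlabeled pool and the loop repeated on the remainder, which is what makes the procedure iterative.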

    USING MULTIPLE TRAINED MODELS TO REDUCE DATA LABELING EFFORTS

    Publication No.: US20230350880A1

    Publication Date: 2023-11-02

    Application No.: US18219333

    Filing Date: 2023-07-07

    CPC classification number: G06F16/2379 G06N20/00

    Abstract: A method of labeling a dataset includes inputting a testing set comprising a plurality of input data samples into a plurality of pre-trained machine learning models to generate a set of embeddings output by the plurality of pre-trained machine learning models. The method further includes performing an iterative cluster labeling algorithm that includes generating a plurality of clusterings from the set of embeddings, analyzing the plurality of clusterings to identify a target embedding with a highest cluster quality, analyzing the target embedding to determine a compactness for each of the plurality of clusterings of the target embedding, and identifying a target cluster among the plurality of clusterings of the target embedding based on the compactness. The method further includes assigning pseudo-labels to the subset of the plurality of input data samples that are members of the target cluster.

    Automatic mobile photo capture using video analysis

    Publication No.: US10015397B2

    Publication Date: 2018-07-03

    Application No.: US14885186

    Filing Date: 2015-10-16

    Abstract: A system creates an electronic file corresponding to a printed artifact by launching a video capture module that causes a mobile electronic device to capture a video of a scene that includes the printed artifact. The system analyzes image frames in the video in real time as the video is captured to identify a suitable instance. In one example, the suitable instance is a frame or sequence of frames that contain an image of a page or side of the printed artifact and that do not exhibit a page-turn event. In response to identification of the suitable instance, the system will automatically cause a photo capture module of the device to capture a still image of the printed artifact. The still image has a resolution that is higher than that of the image frames in the video. The system will save the captured still images to a computer-readable file.
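The real-time video analysis described above can be illustrated with a simple stability check: trigger the high-resolution still capture once consecutive frames show little inter-frame change (i.e., no page-turn-like motion). This is a hedged sketch; the `should_capture` function, the mean-absolute-difference motion measure, and the thresholds are assumptions for illustration, not the patented analysis.

```python
import numpy as np

def should_capture(frames, motion_thresh=5.0, stable_frames=3):
    # Return the index of the first frame at which the scene has been stable
    # (mean absolute pixel change below motion_thresh) for stable_frames
    # consecutive frames -- the point at which a high-res still would be taken.
    stable = 0
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        stable = stable + 1 if diff < motion_thresh else 0
        if stable >= stable_frames:
            return i
    return None  # no suitable instance found in this video segment
```

In practice such a check would run on downsampled preview frames, since the abstract notes the video frames have lower resolution than the captured still.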

    Learning emotional states using personalized calibration tasks

    Publication No.: US09767349B1

    Publication Date: 2017-09-19

    Application No.: US15149284

    Filing Date: 2016-05-09

    CPC classification number: G06K9/00335 G06K9/00302 G06K9/00315

    Abstract: A method for determining an emotional state of a subject taking an assessment. The method includes eliciting predicted facial expressions from a subject administered questions, each intended to elicit a certain facial expression that conveys a baseline characteristic of the subject; receiving a video sequence capturing the subject answering the questions; determining an observable physical behavior exhibited by the subject across a series of frames corresponding to each question; associating the observed behavior with the emotional state that corresponds with the facial expression; and training a classifier using the associations. The method further includes receiving a second video sequence capturing the subject during an assessment and applying features extracted from the second video sequence to the classifier to determine the emotional state of the subject in response to an assessment item administered during the assessment.
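The personalized calibrate-then-classify flow could be sketched as below. This is only an illustrative stand-in: a nearest-centroid classifier over per-subject calibration features, where the class name, feature vectors, and emotion labels are all hypothetical; the patent does not specify this classifier.

```python
import numpy as np

class PersonalizedEmotionClassifier:
    # Calibration features elicited from one subject become per-emotion
    # centroids; assessment-time features are matched to the nearest centroid.
    def fit(self, features, emotions):
        self.classes_ = sorted(set(emotions))
        self.centroids_ = np.array([
            np.mean([f for f, e in zip(features, emotions) if e == c], axis=0)
            for c in self.classes_])
        return self

    def predict(self, feature):
        # Squared Euclidean distance to each emotion centroid.
        d = ((self.centroids_ - np.asarray(feature)) ** 2).sum(axis=1)
        return self.classes_[int(np.argmin(d))]
```

Because the centroids are fit from one subject's own calibration responses, the classifier is personalized in the sense the abstract describes: the same assessment-time feature could map to different emotions for different subjects.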
