MULTIMODAL SEMANTIC ANALYSIS AND IMAGE RETRIEVAL

    Publication No.: US20240354336A1

    Publication Date: 2024-10-24

    Application No.: US18639500

    Filing Date: 2024-04-18

    Abstract: Systems and methods are provided for identifying and retrieving semantically similar images from a database. Semantic analysis is performed on an input query using a vision-language model to identify semantic concepts associated with the query. A preliminary set of images is retrieved from the database for the identified concepts. Relevant concepts are then extracted from those images with a tokenizer by comparing them against a predefined label space, and a ranked list of the relevant concepts is generated based on their occurrence frequency within the set. The preliminary set is refined when the user selects specific relevant concepts from the ranked list, which are combined with the input query. Semantic analysis is performed iteratively on the combined query and concept selection to retrieve additional sets of semantically similar images until a threshold condition is met.
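The retrieve-rank-refine loop described in the abstract can be sketched in a few lines. This is a toy illustration only: the image "database" and its per-image concept tags are invented stand-ins for what a vision-language model and tokenizer would produce, and the user's selection is simulated by always picking the top-ranked concept.

```python
from collections import Counter

# Toy stand-in for tags a vision-language model would assign per image.
DATABASE = {
    "img1": {"dog", "park", "grass"},
    "img2": {"dog", "beach"},
    "img3": {"cat", "park"},
    "img4": {"dog", "park", "frisbee"},
}

def retrieve(query_concepts):
    """Return images whose tags contain every query concept."""
    return {name for name, tags in DATABASE.items() if query_concepts <= tags}

def rank_concepts(image_set, query_concepts):
    """Rank concepts found in the retrieved set by occurrence frequency,
    excluding concepts already present in the query."""
    counts = Counter()
    for name in image_set:
        counts.update(DATABASE[name] - query_concepts)
    return [concept for concept, _ in counts.most_common()]

def refine(query_concepts, max_rounds=3, target_size=1):
    """Iteratively combine the query with a selected top-ranked concept
    until the result set is small enough (the 'threshold condition')."""
    images = retrieve(query_concepts)
    for _ in range(max_rounds):
        if len(images) <= target_size:
            break
        ranked = rank_concepts(images, query_concepts)
        if not ranked:
            break
        query_concepts = query_concepts | {ranked[0]}  # simulated user pick
        images = retrieve(query_concepts)
    return query_concepts, images
```

For example, starting from the single concept `dog`, the first round retrieves three images, ranks `park` as the most frequent co-occurring concept, and the loop narrows the set down from there.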

    LEARNING PRIVACY-PRESERVING OPTICS VIA ADVERSARIAL TRAINING

    Publication No.: US20220067457A1

    Publication Date: 2022-03-03

    Application No.: US17412704

    Filing Date: 2021-08-26

    Abstract: A method for acquiring privacy-enhancing encodings in the optical domain, before image capture, is presented. The method includes feeding a differentiable sensing model, whose parameters include those of the sensor optics, with a plurality of images to obtain encoded images. The sensing model is integrated into an adversarial learning framework in which the parameters of attack networks, the parameters of utility networks, and the parameters of the sensor optics are concurrently updated. Once adversarial training is complete, the efficacy of the learned sensor design is validated by fixing the sensor optics parameters and training the attack networks and utility networks to estimate private and public attributes, respectively, from a set of the encoded images.
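The two-phase protocol in the abstract can be made concrete with a toy numeric sketch. Here the "sensor optics" is a 2-vector w mixing a public attribute u and a private attribute p into one encoded measurement, and the attack and utility "networks" are closed-form linear readouts. The 1-D attributes, the least-squares readouts, and the finite-difference update are all illustrative assumptions, not the patented design; the point is that the optics learn to keep u recoverable while destroying p.

```python
import random

random.seed(0)
DATA = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]  # (public u, private p)

def encode(w, sample):
    u, p = sample
    return w[0] * u + w[1] * p  # optical encoding before capture

def readout_mse(w, target_index):
    """Best linear readout of one attribute from the encodings; this
    plays the role of fully training an attack or utility network."""
    codes = [encode(w, s) for s in DATA]
    targets = [s[target_index] for s in DATA]
    denom = sum(c * c for c in codes) or 1e-12
    gain = sum(c * t for c, t in zip(codes, targets)) / denom
    return sum((gain * c - t) ** 2 for c, t in zip(codes, targets)) / len(DATA)

def objective(w, lam=1.0):
    # Utility loss (public attr, index 0) minus attack loss (private, index 1):
    # minimizing this preserves u and degrades recovery of p.
    return readout_mse(w, 0) - lam * readout_mse(w, 1)

# Phase 1: update the optics by finite-difference gradient descent,
# with the adversary/utility readouts refit to optimality at every step.
w = [1.0, 1.0]
for _ in range(300):
    eps, grad = 1e-4, []
    for i in range(2):
        w_hi, w_lo = list(w), list(w)
        w_hi[i] += eps
        w_lo[i] -= eps
        grad.append((objective(w_hi) - objective(w_lo)) / (2 * eps))
    w = [wi - 0.1 * g for wi, g in zip(w, grad)]

# Phase 2: freeze the optics and fit fresh attack/utility readouts
# to validate the learned sensor design.
utility_error = readout_mse(w, 0)   # should be small: u survives encoding
attack_error = readout_mse(w, 1)    # should be large: p is suppressed
```

After training, the learned mixing vector concentrates on the public attribute, so the validation-phase utility error is low while the attack error approaches the variance of the private attribute.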

    Viewpoint invariant object recognition by synthesization and domain adaptation

    Publication No.: US11055989B2

    Publication Date: 2021-07-06

    Application No.: US16051924

    Filing Date: 2018-08-01

    Abstract: Systems and methods for performing domain adaptation include collecting a labeled source image having a view of an object. Viewpoints of the object in the source image are synthesized to generate view augmented source images. Photometrics of each of the viewpoints of the object are adjusted to generate lighting and view augmented source images. Features are extracted from each of the lighting and view augmented source images with a first feature extractor, and from images captured by an image capture device with a second feature extractor. The extracted features are classified using domain adaptation, with domain adversarial learning between the extracted features of the captured images and those of the lighting and view augmented source images. Labeled target images corresponding to each of the captured images are displayed, with labels corresponding to the classifications of the extracted features of the captured images.
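The augmentation stage described above composes two expansions: each labeled source image yields several synthesized viewpoints, and each viewpoint yields several lighting variants, all inheriting the source label. A minimal sketch, in which the view and photometric transforms are placeholders (tagging a yaw angle, scaling brightness) rather than real renderers:

```python
def synthesize_viewpoints(image, yaw_angles):
    """Stand-in for 3D view synthesis: one augmented copy per viewpoint."""
    return [dict(image, yaw=angle) for angle in yaw_angles]

def adjust_photometrics(image, gains):
    """Stand-in for lighting adjustment: scale pixel intensities."""
    out = []
    for g in gains:
        aug = dict(image, gain=g)
        aug["pixels"] = [min(255, int(p * g)) for p in image["pixels"]]
        out.append(aug)
    return out

def augment_source(image, yaw_angles=(-30, 0, 30), gains=(0.8, 1.0, 1.2)):
    """View augmentation followed by lighting augmentation: every source
    image yields len(yaw_angles) * len(gains) labeled copies."""
    result = []
    for view in synthesize_viewpoints(image, yaw_angles):
        result.extend(adjust_photometrics(view, gains))
    return result

src = {"pixels": [10, 200, 90], "label": "mug"}
augmented = augment_source(src)  # 3 views x 3 lightings = 9 copies
```

The augmented pool then feeds the first feature extractor, while real captured images feed the second, with domain adversarial learning aligning the two feature distributions.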

    Pose-variant 3D facial attribute generation

    Publication No.: US10991145B2

    Publication Date: 2021-04-27

    Application No.: US16673256

    Filing Date: 2019-11-04

    Abstract: A system is provided for pose-variant 3D facial attribute generation. A first stage has a hardware processor based 3D regression network for directly generating a space position map for a 3D shape and a camera perspective matrix from a single input image of a face and further having a rendering layer for rendering a partial texture map of the single input image based on the space position map and the camera perspective matrix. A second stage has a hardware processor based two-part stacked Generative Adversarial Network (GAN) including a Texture Completion GAN (TC-GAN) stacked with a 3D Attribute generation GAN (3DA-GAN). The TC-GAN completes the partial texture map to form a complete texture map based on the partial texture map and the space position map. The 3DA-GAN generates a target facial attribute for the single input image based on the complete texture map and the space position map.
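The data flow between the two stages can be sketched as a pipeline of placeholder functions. The learned networks (3D regression, TC-GAN, 3DA-GAN) are replaced here by tiny stubs over a 4x4 "UV texture", so only the hand-offs between stages are real; every function name and value is an illustrative assumption.

```python
UV = range(4)

def regress_3d(image):
    """Stage 1a stub: predict a UV-space position map and camera matrix."""
    position_map = [[(r / 3.0, c / 3.0, 0.0) for c in UV] for r in UV]
    camera = [[1, 0, 0], [0, 1, 0]]  # placeholder perspective matrix
    return position_map, camera

def render_partial_texture(image, position_map, camera):
    """Stage 1b stub: sample visible pixels into UV space; texels occluded
    in this pose come back as None (hence a 'partial' texture map)."""
    return [[image["pixels"].get((r, c)) for c in UV] for r in UV]

def tc_gan(partial_texture, position_map):
    """Stage 2a stand-in for the Texture Completion GAN: inpaint the
    missing texels (here simply with a flat mid-gray value)."""
    return [[128 if t is None else t for t in row] for row in partial_texture]

def attribute_gan(texture, position_map, attribute):
    """Stage 2b stand-in for the 3DA-GAN: emit the target attribute
    together with the completed texture and geometry it applies to."""
    return {"attribute": attribute, "texture": texture,
            "position_map": position_map}

def generate(image, attribute="smile"):
    pos, cam = regress_3d(image)
    partial = render_partial_texture(image, pos, cam)
    complete = tc_gan(partial, pos)
    return attribute_gan(complete, pos, attribute)

# Half the texels visible, half occluded by the non-frontal pose.
face = {"pixels": {(r, c): 10 * r + c for r in UV for c in UV if c < 2}}
out = generate(face, "smile")
```

The key structural point the sketch preserves is that both GAN stages consume the space position map alongside the texture, as the abstract specifies.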

    POSE-VARIANT 3D FACIAL ATTRIBUTE GENERATION
    Invention Application

    Publication No.: US20200151940A1

    Publication Date: 2020-05-14

    Application No.: US16673256

    Filing Date: 2019-11-04

    Abstract: A system is provided for pose-variant 3D facial attribute generation. A first stage has a hardware processor based 3D regression network for directly generating a space position map for a 3D shape and a camera perspective matrix from a single input image of a face and further having a rendering layer for rendering a partial texture map of the single input image based on the space position map and the camera perspective matrix. A second stage has a hardware processor based two-part stacked Generative Adversarial Network (GAN) including a Texture Completion GAN (TC-GAN) stacked with a 3D Attribute generation GAN (3DA-GAN). The TC-GAN completes the partial texture map to form a complete texture map based on the partial texture map and the space position map. The 3DA-GAN generates a target facial attribute for the single input image based on the complete texture map and the space position map.

    Video retrieval system based on larger pose face frontalization

    Publication No.: US10474881B2

    Publication Date: 2019-11-12

    Application No.: US15888693

    Filing Date: 2018-02-05

    Abstract: A video retrieval system is provided that includes a server for retrieving video sequences from a remote database responsive to text specifying a face recognition result as the identity of a subject of an input image. The face recognition result is determined by a processor of the server, which uses a 3DMM conditioned Generative Adversarial Network to estimate 3DMM coefficients for the subject of the input image, whose pose varies from an ideal frontal pose. The processor produces a synthetic frontal face image of the subject based on the input image and the 3DMM coefficients, with the area spanning the frontal face made larger in the synthetic image than in the input image. The processor provides a decision as to whether the subject of the synthetic image is an actual person, and provides the identity of the subject in the input image based on the synthetic and input images.
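A hypothetical end-to-end flow for this system, from input image to text-keyed video retrieval: the 3DMM coefficient estimation, frontalization, real/synthetic decision, and face matching are all stubbed, and the database, gallery names, and "face_code" matching are invented for illustration only.

```python
VIDEO_DB = {"alice": ["vid_007", "vid_103"], "bob": ["vid_042"]}
GALLERY = {"alice": 17, "bob": 23}  # enrolled face codes (toy values)

def estimate_3dmm(image):
    """Stand-in for 3DMM coefficient estimation via the conditioned GAN."""
    return {"yaw": image["yaw"], "shape": image["face_code"]}

def frontalize(image, coeffs):
    """Synthesize a frontal face; the stub zeroes the pose, keeps the
    identity-bearing code, and enlarges the frontal face area."""
    return {"yaw": 0, "face_code": coeffs["shape"],
            "face_area": image["face_area"] * 1.5}

def looks_real(face):
    return True  # discriminator stub: accept the synthetic face

def identify(frontal, original):
    """Match both the synthetic and input images against the gallery."""
    for name, code in GALLERY.items():
        if code == frontal["face_code"] == original["face_code"]:
            return name
    return None

def retrieve_videos(image):
    coeffs = estimate_3dmm(image)
    frontal = frontalize(image, coeffs)
    if not looks_real(frontal):
        return []
    identity = identify(frontal, image)
    return VIDEO_DB.get(identity, []) if identity else []

probe = {"yaw": 45, "face_code": 17, "face_area": 900}
```

A probe at 45 degrees of yaw is frontalized, verified, identified against the gallery, and its identity string keys the video lookup.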

    Liveness detection for antispoof face recognition

    Publication No.: US10289822B2

    Publication Date: 2019-05-14

    Application No.: US15637264

    Filing Date: 2017-06-29

    Abstract: A face recognition system and corresponding method are provided. The face recognition system includes a camera configured to capture an input image of a subject purported to be a person. The face recognition system further includes a memory storing a deep learning model configured to perform multi-task learning for a pair of tasks including a liveness detection task and a face recognition task. The face recognition system also includes a processor configured to apply the deep learning model to the input image to recognize an identity of the subject in the input image and a liveness of the subject. The liveness detection task is configured to evaluate a plurality of different distractor modalities corresponding to different physical spoofing materials to prevent face spoofing for the face recognition task.
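The multi-task objective implied by the abstract pairs an identity classifier with a liveness classifier whose classes are "live" plus several distractor modalities (physical spoofing materials). A minimal sketch of that loss structure, with made-up logits and an assumed weighting parameter `lam`:

```python
import math

# Liveness classes: genuine face plus distractor (spoof-material) modalities.
SPOOF_MODALITIES = ["live", "printed_photo", "screen_replay", "3d_mask"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target_idx):
    return -math.log(probs[target_idx])

def multitask_loss(id_logits, live_logits, id_target, live_target, lam=0.5):
    """Joint objective: identity loss plus weighted liveness loss,
    so one shared trunk is trained for both tasks at once."""
    return (cross_entropy(softmax(id_logits), id_target)
            + lam * cross_entropy(softmax(live_logits), live_target))

# Example: a 2-identity gallery, liveness head over the 4 classes above.
loss = multitask_loss([2.0, 0.1], [3.0, -1.0, -1.0, -2.0], 0, 0, lam=0.5)
```

Treating each spoof material as its own class (rather than a single binary "fake" label) is what lets the liveness head discriminate between the different distractor modalities the abstract mentions.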
