Image data illumination detection

    Publication Number: US10699152B1

    Publication Date: 2020-06-30

    Application Number: US15841241

    Filing Date: 2017-12-13

    Abstract: Described is a method for processing image data to determine if a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce for each imaging device a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.
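
    As a rough illustration of the mask-combining step described above, the sketch below treats the unified illumination map as a per-cell vote among cameras. It assumes each camera's binary illumination mask has already been reprojected onto a common grid; the function name, the coverage masks, and the voting scheme are illustrative assumptions rather than the patented method.

    import numpy as np

    def unified_illumination_map(masks, coverage):
        """Combine per-camera illumination masks into a probability map.

        masks    : list of HxW arrays (1 = cell flagged as highly
                   illuminated), already reprojected onto a common grid.
        coverage : list of HxW arrays (1 = cell visible to that camera).
        Returns an HxW float array giving, per cell, the fraction of the
        observing cameras that flagged it as highly illuminated.
        """
        votes = np.sum(masks, axis=0).astype(float)
        seen = np.sum(coverage, axis=0).astype(float)
        # Cells not seen by any camera get probability 0.
        return np.divide(votes, seen, out=np.zeros_like(votes), where=seen > 0)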

    Determining direction of illumination

    Publication Number: US10664962B1

    Publication Date: 2020-05-26

    Application Number: US15841243

    Filing Date: 2017-12-13

    Abstract: Described is a method for processing image data to determine if a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce for each imaging device a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.

    System for determining embedding from multiple inputs

    Publication Number: US11670104B1

    Publication Date: 2023-06-06

    Application Number: US17097707

    Filing Date: 2020-11-13

    CPC classification number: G06V40/11 G06V10/469 G06V40/13 G06V40/117

    Abstract: A scanner acquires a set of images of a hand of a user to facilitate identification. These images may vary due to changes in relative position, pose, lighting, obscuring objects such as a sleeve, and so forth. A first neural network determines output data comprising a spatial mask and a feature map for individual images in the set. The output data for two or more images is combined to provide aggregate data that is representative of the two or more images. The aggregate data may then be processed using a second neural network, such as a convolutional neural network, to determine an embedding vector. The embedding vector may be stored and associated with a user account. At a later time, images acquired from the scanner may be processed to produce an embedding vector that is compared to the stored embedding vector to identify a user at the scanner.
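
    The two-stage structure in this abstract (a per-image network producing a spatial mask and a feature map, aggregation across images, then a second network producing an embedding vector) can be sketched as below. The layer sizes, the mask-weighted-average aggregation, and all class and function names are assumptions made for illustration only.

    import torch
    import torch.nn as nn

    class PerImageNet(nn.Module):
        """First-stage network: emits a feature map and a spatial mask per image."""
        def __init__(self, channels=32):
            super().__init__()
            self.features = nn.Conv2d(3, channels, kernel_size=3, padding=1)
            self.mask = nn.Conv2d(3, 1, kernel_size=3, padding=1)

        def forward(self, x):                    # x: (N, 3, H, W)
            feat = torch.relu(self.features(x))  # (N, C, H, W)
            mask = torch.sigmoid(self.mask(x))   # (N, 1, H, W)
            return feat, mask

    class EmbeddingNet(nn.Module):
        """Second-stage network: maps aggregated features to an embedding vector."""
        def __init__(self, channels=32, dim=128):
            super().__init__()
            self.conv = nn.Conv2d(channels, 64, kernel_size=3, padding=1)
            self.head = nn.Linear(64, dim)

        def forward(self, agg):                  # agg: (1, C, H, W)
            x = torch.relu(self.conv(agg))
            x = x.mean(dim=(2, 3))               # global average pool -> (1, 64)
            return self.head(x)                  # (1, dim)

    def embed_image_set(images, first_net, second_net):
        """Mask-weighted aggregation of per-image features, then one embedding."""
        feat, mask = first_net(images)
        agg = (feat * mask).sum(dim=0) / mask.sum(dim=0).clamp(min=1e-6)
        return second_net(agg.unsqueeze(0))

    The same embed_image_set call could be used both at enrollment (store the result with the user account) and at identification time (compare the result to stored embeddings).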

    System for synthesizing data
    Invention Grant

    Publication Number: US11537813B1

    Publication Date: 2022-12-27

    Application Number: US17038648

    Filing Date: 2020-09-30

    Abstract: During a training phase, a first machine learning system is trained using actual data, such as multimodal images of a hand, to generate synthetic image data. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine its suitability for inclusion in a set of synthetic image data that may subsequently be used to train a second machine learning system to recognize an identity of a hand presented by a user. For example, the candidate synthetic image data is compared to previously generated synthetic image data to avoid duplicative synthetic identities. The second machine learning system is then trained using the approved candidate synthetic image data.
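
    A simplified sketch of the generation-phase loop described above: latent vectors are sampled, candidate synthetic images are generated, and candidates whose identity embedding is too close to an already accepted synthetic identity are rejected. The generator and embed callables, the Gaussian latent sampling, and the distance threshold are assumptions, not details taken from the patent.

    import numpy as np

    def build_synthetic_set(generator, embed, num_identities,
                            latent_dim=128, min_dist=0.35, rng=None):
        """Collect synthetic images whose identities are mutually distinct.

        generator(identity_z, appearance_z) -> synthetic image (assumed trained)
        embed(image) -> 1-D identity embedding used for the duplicate check
        """
        if rng is None:
            rng = np.random.default_rng()
        accepted_images, accepted_embeddings = [], []
        while len(accepted_images) < num_identities:
            identity_z = rng.standard_normal(latent_dim)
            appearance_z = rng.standard_normal(latent_dim)
            candidate = generator(identity_z, appearance_z)
            e = embed(candidate)
            # Reject candidates too close to an existing synthetic identity.
            if all(np.linalg.norm(e - prev) >= min_dist for prev in accepted_embeddings):
                accepted_images.append(candidate)
                accepted_embeddings.append(e)
        return accepted_images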

    System for mapping images to a canonical space

    Publication Number: US12230052B1

    Publication Date: 2025-02-18

    Application Number: US16712655

    Filing Date: 2019-12-12

    Abstract: Images of a hand are obtained by a camera. A pose of the hand relative to the camera may vary due to rotation, translation, articulation of joints in the hand, and so forth. Avatars comprising texture maps from images of actual hands and three-dimensional models that describe the shape of those hands are manipulated into different poses and articulations to produce synthetic images. Given that the mapping of points on an avatar to the synthetic image is known, highly accurate annotation data is produced that relates particular points on the avatar to the synthetic image. An artificial neural network (ANN) is trained using the synthetic images and corresponding annotation data. The trained ANN processes a first image of a hand to produce a second image of the hand that appears to be in a standardized or canonical pose. The second image may then be processed to identify the user.
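
    One way to picture the training loop described above, assuming the synthetic pipeline already yields pairs of a posed render and the matching canonical-pose render from the same avatar; the tiny network, the L1 loss, and the names are illustrative placeholders rather than the patent's architecture.

    import torch
    import torch.nn as nn

    class CanonicalizerNet(nn.Module):
        """Image-to-image network: hand in an arbitrary pose -> canonical pose."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    def train_step(model, optimizer, posed_batch, canonical_batch):
        """One supervised step: synthetic posed renders are the inputs and the
        matching canonical-pose renders from the same avatar are the targets."""
        optimizer.zero_grad()
        pred = model(posed_batch)
        loss = nn.functional.l1_loss(pred, canonical_batch)
        loss.backward()
        optimizer.step()
        return loss.item()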

    System to reduce data retention
    Invention Grant

    Publication Number: US12086225B1

    Publication Date: 2024-09-10

    Application Number: US17448437

    Filing Date: 2021-09-22

    CPC classification number: G06F21/32 G06F18/213 G06F18/214 G06F21/6245

    Abstract: An image of at least a portion of a user during enrollment to a biometric identification system is acquired and processed with a first model to determine a first embedding that is representative of features in that image in a first embedding space. The first embedding may be stored for later comparison to identify the user, while the image is not stored. A second model that uses a second embedding space may be later developed. A transformer is trained to accept as input an embedding from the first model and produce as output an embedding consistent with the second embedding space. The previously stored first embedding may be converted to a second embedding in a second embedding space using the transformer. As a result, new embedding models may be implemented without requiring storage of user images for later reprocessing with the new models or requiring re-enrollment by users.
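
    A minimal sketch of the embedding "transformer" described above, modeled here as a small regression network from the first embedding space to the second, trained on pairs of embeddings computed from the same image while that image is still available. The MLP architecture, the dimensions, and the MSE objective are assumptions for illustration.

    import torch
    import torch.nn as nn

    class EmbeddingTransformer(nn.Module):
        """Maps embeddings from the first model's space into the second
        model's space, so stored enrollments can be migrated without
        retaining the original images."""
        def __init__(self, old_dim=128, new_dim=256, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(old_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, new_dim),
            )

        def forward(self, old_embedding):
            return self.net(old_embedding)

    def train_transformer(transformer, pairs, epochs=10, lr=1e-3):
        """pairs: list of (first_model_embedding, second_model_embedding)
        tensors computed from the same image before it is discarded."""
        opt = torch.optim.Adam(transformer.parameters(), lr=lr)
        for _ in range(epochs):
            for old_e, new_e in pairs:
                opt.zero_grad()
                loss = nn.functional.mse_loss(transformer(old_e), new_e)
                loss.backward()
                opt.step()
        return transformer

    Once trained, the transformer is applied once to each stored first-space embedding, and the converted embeddings replace them for matching under the second model.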

    Image data and simulated illumination maps

    Publication Number: US11354885B1

    Publication Date: 2022-06-07

    Application Number: US16914959

    Filing Date: 2020-06-29

    Abstract: Described is a method for processing image data to determine if a portion of the imaged environment is exposed to high illumination, such as sunlight. In some implementations, image data from multiple different imaging devices may be processed to produce for each imaging device a respective illumination mask that identifies pixels that represent a portion of the environment that is exposed to high illumination. Overlapping portions of those illumination masks may then be combined to produce a unified illumination map of an area of the environment. The unified illumination map identifies, for different portions of the environment, a probability that the portion is actually exposed to high illumination.

    System for biometric identification

    Publication Number: US11734949B1

    Publication Date: 2023-08-22

    Application Number: US17210170

    Filing Date: 2021-03-23

    Abstract: Images of a hand are obtained by a camera. These images may depict the fingers and palm of the user. A pose of the hand relative to the camera may vary due to rotation, translation, articulation of joints in the hand, and so forth. One or more canonical images are generated by mapping the images to a canonical model. A first embedding model is used to determine a first embedding vector representative of the palm as depicted in the canonical images. A second embedding model is used to determine a set of second embedding vectors, each representative of an individual finger as depicted in the canonical images. The distances in the embedding space from these embedding vectors to their closest matches among previously stored embedding vectors are multiplied together to determine an overall distance. If the overall distance is less than a threshold value, an identity of the user is asserted.
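
    The matching rule in this abstract (multiply the palm distance by the per-finger distances and compare the product to a threshold) can be sketched as below. The gallery layout, the per-user comparison in place of a global nearest-neighbour search, and the Euclidean distance are simplifying assumptions.

    import numpy as np

    def assert_identity(palm_emb, finger_embs, gallery, threshold):
        """Return the user whose stored embeddings give the smallest product
        of palm and finger distances, if that product is below the threshold.

        gallery: dict user_id -> {"palm": vec, "fingers": [vec, ...]}
        """
        best_user, best_overall = None, float("inf")
        for user_id, entry in gallery.items():
            overall = np.linalg.norm(palm_emb - entry["palm"])
            for probe, stored in zip(finger_embs, entry["fingers"]):
                overall *= np.linalg.norm(probe - stored)
            if overall < best_overall:
                best_user, best_overall = user_id, overall
        return best_user if best_overall < threshold else None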
