Merging split-pixel data for deeper depth of field

    Publication Number: US12118697B2

    Publication Date: 2024-10-15

    Application Number: US17753279

    Application Date: 2021-02-24

    Applicant: Google LLC

    CPC classification number: G06T5/73 G06T5/50 H04N25/704

    Abstract: A method includes obtaining split-pixel image data including a first sub-image and a second sub-image. The method also includes determining, for each respective pixel of the split-pixel image data, a corresponding position of a scene feature represented by the respective pixel relative to a depth of field, and identifying, based on the corresponding positions, out-of-focus pixels. The method additionally includes determining, for each respective out-of-focus pixel, a corresponding pixel value based on the corresponding position, a location of the respective out-of-focus pixel within the split-pixel image data, and at least one of: a first value of a corresponding first pixel in the first sub-image or a second value of a corresponding second pixel in the second sub-image. The method further includes generating, based on the corresponding pixel values, an enhanced image having an extended depth of field.
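
    Read as an algorithm, the claim amounts to a per-pixel selection between the two sub-images for out-of-focus regions. The NumPy sketch below is a rough illustration under assumed conventions (a signed per-pixel defocus map, scalar depth-of-field limits, and a hypothetical rule that picks whichever sub-image is expected to be sharper from the sign of the defocus and the pixel's horizontal position); it is not the patented implementation.

    ```python
    import numpy as np

    def merge_split_pixels(left, right, signed_defocus, near_limit, far_limit):
        """Merge two split-pixel sub-images into one extended-depth-of-field
        image. `signed_defocus` holds an assumed signed distance of each imaged
        feature from the plane of focus; `near_limit`/`far_limit` bound the
        depth of field in the same units."""
        h, w = left.shape
        merged = 0.5 * (left + right)          # in-focus pixels: plain average

        # Classify out-of-focus pixels by their position relative to the DOF.
        in_front = signed_defocus < near_limit
        behind = signed_defocus > far_limit

        # Horizontal frame position normalised to [-1, 1]. For a split-pixel
        # sensor, which sub-image is sharper for an out-of-focus feature depends
        # on the pixel's location in the frame and on which side of the depth of
        # field the feature lies; the rule below is only a stand-in for that.
        x = np.broadcast_to(np.linspace(-1.0, 1.0, w), (h, w))
        use_left = (in_front & (x >= 0)) | (behind & (x < 0))
        use_right = (in_front & (x < 0)) | (behind & (x >= 0))

        merged = np.where(use_left, left, merged)
        merged = np.where(use_right, right, merged)
        return merged
    ```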

    Estimating depth using a single camera

    Publication Number: US11210799B2

    Publication Date: 2021-12-28

    Application Number: US16652568

    Application Date: 2017-12-05

    Applicant: Google LLC

    Abstract: A camera may capture an image of a scene and use the image to generate a first and a second subpixel image of the scene. The pair of subpixel images may be represented by a first set of subpixels and a second set of subpixels from the image respectively. Each pixel of the image may include two green subpixels that are respectively represented in the first and second subpixel images. The camera may determine a disparity between a portion of the scene as represented by the pair of subpixel images and may estimate a depth map of the scene that indicates a depth of the portion relative to other portions of the scene based on the disparity and a baseline distance between the two green subpixels. A new version of the image may be generated with a focus upon the portion and with the other portions of the scene blurred.
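
    The geometry here is ordinary stereo with a very short baseline: the two green subpixels of each pixel act as two viewpoints, and depth follows from disparity. A small illustrative sketch; the whole-frame integer-shift search is a crude stand-in for the per-portion disparity the abstract describes, and all names are assumptions.

    ```python
    import numpy as np

    def estimate_disparity(first, second, max_shift=4):
        """Crude whole-frame disparity between the two subpixel images: try
        integer horizontal shifts and keep the one with the smallest mean
        absolute difference."""
        errors = [np.mean(np.abs(first - np.roll(second, s, axis=1)))
                  for s in range(-max_shift, max_shift + 1)]
        return int(np.argmin(errors)) - max_shift

    def depth_from_disparity(disparity_px, baseline_m, focal_length_px, eps=1e-6):
        """Standard stereo relation: depth = focal_length * baseline / disparity,
        with the two green subpixels providing the (tiny) baseline."""
        return focal_length_px * baseline_m / max(abs(disparity_px), eps)
    ```

    With the estimated depth in hand, the refocused rendering the abstract mentions amounts to blurring each region in proportion to how far its depth lies from the chosen in-focus portion.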

    Aperture supervision for single-view depth prediction

    Publication Number: US11113832B2

    Publication Date: 2021-09-07

    Application Number: US16759808

    Application Date: 2017-11-03

    Applicant: Google LLC

    Abstract: Example embodiments allow for training of artificial neural networks (ANNs) to generate depth maps based on images. The ANNs are trained based on a plurality of sets of images, where each set of images represents a single scene and the images in such a set of images differ with respect to image aperture and/or focal distance. An untrained ANN generates a depth map based on one or more images in a set of images. This depth map is used to generate, using the image(s) in the set, a predicted image that corresponds, with respect to image aperture and/or focal distance, to one of the images in the set. Differences between the predicted image and the corresponding image are used to update the ANN. ANNs trained in this manner are especially suited for generating depth maps used to perform simulated image blur on small-aperture images.
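
    The supervision comes from photographs rather than ground-truth depth: the predicted depth map drives a differentiable aperture-rendering step, and the rendered wide-aperture image is compared against a real one of the same scene. The PyTorch sketch below illustrates that loop with a toy network and a deliberately simplified global-blur renderer (a faithful renderer would blur each pixel according to its own depth); every module and function name is a stand-in, not the patented method.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyDepthNet(nn.Module):
        """Stand-in depth-prediction ANN: RGB image in, depth map in [0, 1] out."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    def render_wide_aperture(sharp, depth, focal_plane=0.5, max_sigma=2.0, k=7):
        """Simplified, differentiable aperture renderer: blur the whole image with
        one Gaussian whose width grows with the mean distance of the scene from an
        assumed focal plane."""
        sigma = max_sigma * (depth - focal_plane).abs().mean()
        coords = torch.arange(k, dtype=sharp.dtype, device=sharp.device) - k // 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2 + 1e-6))
        g = g / g.sum()
        kernel = (g[:, None] * g[None, :]).repeat(sharp.shape[1], 1, 1, 1)
        return F.conv2d(sharp, kernel, padding=k // 2, groups=sharp.shape[1])

    # One schematic training step: no ground-truth depth is used; the loss compares
    # the rendered wide-aperture image with the captured one.
    model = TinyDepthNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    small_aperture = torch.rand(1, 3, 64, 64)   # placeholder image pair of one scene
    wide_aperture = torch.rand(1, 3, 64, 64)

    opt.zero_grad()
    depth = model(small_aperture)
    loss = F.l1_loss(render_wide_aperture(small_aperture, depth), wide_aperture)
    loss.backward()
    opt.step()
    ```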

    Depth Prediction from Dual Pixel Images

    Publication Number: US20210056349A1

    Publication Date: 2021-02-25

    Application Number: US17090948

    Application Date: 2020-11-06

    Applicant: Google LLC

    Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
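
    A schematic training step consistent with this description: a small convolutional network maps a stacked (left, right) dual-pixel pair to a single-channel depth map. The architecture and the placeholder ground-truth depth target are assumptions here; the abstract does not specify the supervision signal.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualPixelDepthNet(nn.Module):
        """Tiny stand-in for the machine learning system: maps the left-side and
        right-side pixel planes of a dual pixel image to a depth map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, left, right):
            return self.net(torch.cat([left, right], dim=1))

    model = DualPixelDepthNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    left = torch.rand(1, 1, 64, 64)          # left-side pixels of each dual pixel
    right = torch.rand(1, 1, 64, 64)         # right-side pixels of each dual pixel
    target_depth = torch.rand(1, 1, 64, 64)  # placeholder training target

    opt.zero_grad()
    loss = F.l1_loss(model(left, right), target_depth)
    loss.backward()
    opt.step()
    ```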

    Depth Prediction from Dual Pixel Images
    Invention Application

    Publication Number: US20200226419A1

    Publication Date: 2020-07-16

    Application Number: US16246280

    Application Date: 2019-01-11

    Applicant: Google LLC

    Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.

    High Resolution Inpainting with a Machine-learned Augmentation Model and Texture Transfer

    Publication Number: US20250069206A1

    Publication Date: 2025-02-27

    Application Number: US18949447

    Application Date: 2024-11-15

    Applicant: Google LLC

    Abstract: Systems and methods for augmenting images can utilize one or more image augmentation models and one or more texture transfer blocks. The image augmentation model can process input images and one or more segmentation masks to generate first output data. The first output data and the one or more segmentation masks can be processed with the texture transfer block to generate an augmented image. The input image can depict a scene with one or more occlusions, and the augmented image can depict the scene with the one or more occlusions replaced with predicted pixel data.
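
    The dataflow is: the augmentation model consumes the input image and mask and produces first output data; the texture transfer block consumes that output and the mask and produces the augmented image, with original pixels kept outside the mask. The NumPy sketch below wires that pipeline together using deliberately crude stand-ins for the two learned blocks (a mean-colour fill and statistics-matched noise), purely to show the composition.

    ```python
    import numpy as np

    def augmentation_model(image, mask):
        """Stand-in for the machine-learned augmentation model: produce a coarse
        prediction for the occluded pixels (here, the mean colour of the visible
        pixels)."""
        first_output = image.copy()
        first_output[mask] = image[~mask].mean(axis=0)
        return first_output

    def texture_transfer(first_output, mask, rng=None):
        """Stand-in for the texture-transfer block: push high-frequency detail,
        matched to the visible region's statistics, into the filled area."""
        rng = np.random.default_rng(0) if rng is None else rng
        detail = rng.normal(scale=first_output[~mask].std(), size=first_output.shape)
        refined = first_output.copy()
        refined[mask] = np.clip(first_output[mask] + 0.1 * detail[mask], 0.0, 1.0)
        return refined

    def inpaint(image, mask):
        """Pipeline from the abstract: augmentation model, then texture transfer,
        keeping the original pixels everywhere outside the occlusion mask."""
        refined = texture_transfer(augmentation_model(image, mask), mask)
        return np.where(mask[..., None], refined, image)
    ```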

    TECHNIQUES TO CAPTURE AND EDIT DYNAMIC DEPTH IMAGES

    Publication Number: US20240214542A1

    Publication Date: 2024-06-27

    Application Number: US18596293

    Application Date: 2024-03-05

    Applicant: Google LLC

    CPC classification number: H04N13/271 G06T7/55 G06T2207/10028

    Abstract: Implementations described herein relate to a computer-implemented method that includes capturing image data using one or more cameras, wherein the image data includes a primary image and associated depth values. The method further includes encoding the image data in an image format. The encoded image data includes the primary image encoded in the image format and image metadata that includes a device element that includes a profile element indicative of an image type and a first camera element, wherein the first camera element includes an image element and a depth map based on the depth values. The method further includes, after the encoding, storing the image data in a file container based on the image format. The method further includes causing the primary image to be displayed.
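
    The metadata described here is hierarchical: a device element holds a profile element (the image type) and a camera element, and the camera element holds an image element plus a depth map derived from the captured depth values. The Python dataclasses below sketch that hierarchy; every field name and example value (MIME type, near/far bounds, the "DepthPhoto" string) is an illustrative assumption, not the file-format specification.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ImageElement:
        mime_type: str      # e.g. "image/jpeg"
        uri: str            # reference to the encoded image inside the container

    @dataclass
    class DepthMap:
        near: float         # assumed metric bounds for the stored depth values
        far: float
        uri: str            # reference to the encoded depth image

    @dataclass
    class CameraElement:
        image: ImageElement
        depth_map: DepthMap

    @dataclass
    class ProfileElement:
        image_type: str     # indicates the image type, e.g. "DepthPhoto"

    @dataclass
    class DeviceElement:
        profile: ProfileElement
        cameras: List[CameraElement] = field(default_factory=list)

    # Example metadata for a primary JPEG captured together with a depth map.
    metadata = DeviceElement(
        profile=ProfileElement(image_type="DepthPhoto"),
        cameras=[CameraElement(
            image=ImageElement(mime_type="image/jpeg", uri="primary_image"),
            depth_map=DepthMap(near=0.2, far=10.0, uri="depth_map"),
        )],
    )
    ```

    Keeping only references in the metadata and storing the encoded primary image and depth image alongside it matches the abstract's final step of writing everything into a file container based on the image format.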

    Merging Split-Pixel Data For Deeper Depth of Field

    Publication Number: US20230153960A1

    Publication Date: 2023-05-18

    Application Number: US17753279

    Application Date: 2021-02-24

    Applicant: Google LLC

    CPC classification number: G06T5/003 G06T5/50

    Abstract: A method includes obtaining split-pixel image data including a first sub-image and a second sub-image. The method also includes determining, for each respective pixel of the split-pixel image data, a corresponding position of a scene feature represented by the respective pixel relative to a depth of field, and identifying, based on the corresponding positions, out-of-focus pixels. The method additionally includes determining, for each respective out-of-focus pixel, a corresponding pixel value based on the corresponding position, a location of the respective out-of-focus pixel within the split-pixel image data, and at least one of: a first value of a corresponding first pixel in the first sub-image or a second value of a corresponding second pixel in the second sub-image. The method further includes generating, based on the corresponding pixel values, an enhanced image having an extended depth of field.
