Multi-view scene segmentation and propagation

    Publication Number: US10275892B2

    Publication Date: 2019-04-30

    Application Number: US15462752

    Filing Date: 2017-03-17

    Applicant: Google LLC

    Abstract: A depth-based effect may be applied to a multi-view video stream to generate a modified multi-view video stream. User input may designate a boundary between a foreground region and a background region, at a different depth from the foreground region, of a reference image of the video stream. Based on the user input, a reference mask may be generated to indicate the foreground region and the background region. The reference mask may be used to generate one or more other masks that indicate the foreground and background regions for one or more different images, from different frames and/or different views from the reference image. The reference mask and other mask(s) may be used to apply the effect to the multi-view video stream to generate the modified multi-view video stream.
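The core idea above, a depth-derived foreground/background mask driving a per-region effect, can be illustrated with a minimal sketch. This is not the patented method; the function names, the fixed depth threshold, and the background-dimming effect are all illustrative assumptions.

```python
import numpy as np

def make_reference_mask(depth_map, boundary_depth):
    # Illustrative assumption: foreground = pixels nearer than the
    # user-designated boundary depth (True = foreground).
    return depth_map < boundary_depth

def apply_background_effect(image, mask, dim_factor=0.5):
    # Example depth-based effect: darken background pixels only;
    # foreground pixels (mask == True) pass through unchanged.
    out = image.astype(np.float32).copy()
    out[~mask] *= dim_factor
    return out.astype(image.dtype)

# Tiny 2x2 example: left column near (depth 1), right column far (depth 5).
depth = np.array([[1.0, 5.0],
                  [1.0, 5.0]])
image = np.full((2, 2), 100, dtype=np.uint8)
mask = make_reference_mask(depth, boundary_depth=3.0)
result = apply_background_effect(image, mask)
```

In the patent's setting the reference mask would additionally be propagated to other frames and views before the effect is applied; the sketch covers only the single-image case.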

    Combining light-field data with active depth data for depth map generation

    Publication Number: US11328446B2

    Publication Date: 2022-05-10

    Application Number: US15635894

    Filing Date: 2017-06-28

    Applicant: Google LLC

    Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
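One simple way to combine two depth estimates per pixel, as a stand-in for the fusion step described above, is a confidence-weighted average. This is a hedged sketch, not the claimed method; the per-pixel confidence maps and the weighting scheme are assumptions introduced for illustration.

```python
import numpy as np

def fuse_depth(lf_depth, lf_conf, sensor_depth, sensor_conf):
    # Illustrative fusion: per-pixel confidence-weighted average of the
    # light-field depth estimate and the active depth sensor estimate.
    total = lf_conf + sensor_conf
    return (lf_depth * lf_conf + sensor_depth * sensor_conf) / total

# Example: light-field says 2 m (low confidence), sensor says 4 m
# (high confidence); the fused estimate leans toward the sensor.
fused = fuse_depth(np.array([2.0]), np.array([0.25]),
                   np.array([4.0]), np.array([0.75]))
```

A weighted average is only one possible combination rule; the patent covers the general idea of using both data sources to build the depth map, not this specific formula.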

    4D camera tracking and optical stabilization

    Publication Number: US10545215B2

    Publication Date: 2020-01-28

    Application Number: US15703553

    Filing Date: 2017-09-13

    Applicant: Google LLC

    Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
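The "adjust the camera pathway" step for stabilization can be pictured as smoothing a noisy position track. The sketch below uses a moving-average filter, which is an assumption for illustration only; the patent does not specify this filter, and real stabilizers typically smooth full 6-DoF poses rather than positions.

```python
import numpy as np

def smooth_camera_pathway(positions, window=3):
    # Illustrative stabilization: moving-average smoothing of an
    # (n_frames, n_dims) camera position track. Endpoints are padded
    # by repetition so the output has the same length as the input.
    pad = window // 2
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    return np.stack(
        [np.convolve(padded[:, d], kernel, mode="valid")
         for d in range(positions.shape[1])],
        axis=1,
    )

# A jittery 1-D track; smoothing damps the frame-to-frame oscillation.
shaky = np.array([[0.0], [3.0], [0.0], [3.0], [0.0]])
smoothed = smooth_camera_pathway(shaky, window=3)
```

In the patented pipeline, the light-field video stream would then be reprojected to viewpoints along the smoothed pathway, which is what a 4D (light-field) capture uniquely enables.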

    4D CAMERA TRACKING AND OPTICAL STABILIZATION

    Publication Number: US20190079158A1

    Publication Date: 2019-03-14

    Application Number: US15703553

    Filing Date: 2017-09-13

    Applicant: Google LLC

    Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.

    Multi-view back-projection to a light-field

    Publication Number: US10354399B2

    Publication Date: 2019-07-16

    Application Number: US15605037

    Filing Date: 2017-05-25

    Applicant: Google LLC

    Abstract: Dense light-field data can be generated from image data that does not include light-field data, or from image data that includes sparse light-field data. In at least one embodiment, the source light-field data may include one or more sub-aperture images that may be used to reconstruct the light-field in denser form. In other embodiments, the source data can take other forms. Examples include data derived from or ancillary to a set of sub-aperture images, synthetic data, or captured image data that does not include full light-field data. Interpolation, back-projection, and/or other techniques are used in connection with source sub-aperture images or their equivalents, to generate dense light-field data.
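The interpolation technique mentioned above can be sketched at its simplest: synthesizing an intermediate sub-aperture view by blending two neighboring views. This linear blend is an assumption chosen for clarity; the patent's back-projection machinery is far more general and is not reproduced here.

```python
import numpy as np

def interpolate_view(view_a, view_b, t):
    # Illustrative densification step: linearly blend two adjacent
    # sub-aperture images at fractional aperture position t in [0, 1].
    # Real back-projection would also account for per-pixel disparity.
    return (1.0 - t) * view_a + t * view_b

# Midpoint between a dark view and a bright view.
view_a = np.zeros((2, 2))
view_b = np.full((2, 2), 10.0)
mid_view = interpolate_view(view_a, view_b, 0.5)
```

Repeating such a step over a grid of source views yields a denser set of sub-aperture images, i.e., denser light-field data, which is the stated goal of the patent.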
