Depth-based image blurring
    Granted Patent

    Publication Number: US10552947B2

    Publication Date: 2020-02-04

    Application Number: US15824574

    Application Date: 2017-11-28

    Applicant: Google LLC

    Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
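
    The blurring step described above amounts to thresholding a per-pixel depth map against the two focus depths and blurring only the pixels that fall outside the in-focus range. The sketch below illustrates that idea with NumPy and SciPy; the function name, the choice of a Gaussian blur, and the fixed sigma are illustrative assumptions rather than details from the patent.

```python
# Minimal sketch of depth-based blurring: pixels nearer than the first focus
# depth (foreground) and farther than the second focus depth (background) are
# blurred; pixels between the two depths stay sharp.
# Names and the Gaussian kernel are illustrative, not taken from the patent.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_based_blur(image, depth_map, first_focus_depth, second_focus_depth, sigma=5.0):
    """image: (H, W, 3) float array; depth_map: (H, W) per-pixel depth."""
    # Blur each color channel spatially, but not across channels.
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))

    # Foreground: closer than the first focus depth.
    foreground = depth_map < first_focus_depth
    # Background: farther than the second focus depth.
    background = depth_map > second_focus_depth
    blur_mask = (foreground | background)[..., np.newaxis]

    # Keep the original pixels inside the in-focus depth range.
    return np.where(blur_mask, blurred, image)
```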

    Layered content delivery for virtual and augmented reality experiences

    Publication Number: US10546424B2

    Publication Date: 2020-01-28

    Application Number: US15729918

    Application Date: 2017-10-11

    Applicant: GOOGLE LLC

    Abstract: A virtual reality or augmented reality experience of a scene may be presented to a viewer using layered data retrieval and/or processing. A first layer of a video stream may be retrieved, and a first viewer position and/or orientation may be received. The first layer may be processed to generate first viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The first viewpoint video may be displayed for the viewer. Then, a second layer of the video stream may be retrieved, and a second viewer position and/or orientation may be received. The second layer may be processed to generate second viewpoint video of the scene from a second virtual viewpoint corresponding to the second viewer position and/or orientation, with higher quality than the first viewpoint video. The second viewpoint video may be displayed for the viewer.
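
    The delivery flow is essentially two render passes over the same scene: a fast pass from the first layer alone, then a refining pass once the second layer arrives. The sketch below outlines that sequence; fetch_layer, read_viewer_pose, display, and the averaging renderer are hypothetical placeholders, not APIs from the patent.

```python
# Hedged sketch of layered delivery: a coarse first layer is rendered
# immediately, then a second layer refines the view at higher quality.
import numpy as np

def render_viewpoint(layers, pose, base_resolution=(180, 320)):
    """Placeholder renderer: averages the available layers into one frame.
    A real implementation would reproject the scene for the given pose."""
    frame = np.zeros((*base_resolution, 3), dtype=np.float32)
    for layer in layers:
        frame += layer
    return frame / max(len(layers), 1)

def present_layered_experience(fetch_layer, read_viewer_pose, display):
    # First pass: render from the first layer only, for low latency.
    layer1 = fetch_layer(0)
    pose1 = read_viewer_pose()          # first viewer position/orientation
    display(render_viewpoint([layer1], pose1))

    # Second pass: the second layer adds detail; re-render at the current pose.
    layer2 = fetch_layer(1)
    pose2 = read_viewer_pose()          # pose may have changed in the meantime
    display(render_viewpoint([layer1, layer2], pose2))
```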

    Capturing light-field images with uneven and/or incomplete angular sampling

    Publication Number: US10897608B2

    Publication Date: 2021-01-19

    Application Number: US16032261

    Application Date: 2018-07-11

    Applicant: Google LLC

    Abstract: A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
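
    The claimed data layout can be pictured as a four-dimensional array whose two angular axes have different lengths. The sketch below shows such a buffer together with a toy refocus step that integrates over both angular axes; the specific resolutions are assumptions made for illustration.

```python
# Sketch of a 4D light-field buffer with uneven angular sampling: the u
# angular axis is sampled more finely than the v axis. Shapes are illustrative.
import numpy as np

SPATIAL_Y, SPATIAL_X = 480, 640   # spatial dimensions (pixels per view)
ANGULAR_U, ANGULAR_V = 9, 3       # first angular resolution > second

# L[y, x, u, v] holds the radiance arriving at spatial position (y, x)
# from angular direction (u, v).
light_field = np.zeros((SPATIAL_Y, SPATIAL_X, ANGULAR_U, ANGULAR_V),
                       dtype=np.float32)

def integrate_angular(light_field):
    """Toy example: averaging over both angular axes collapses the uneven
    angular samples into a single 2D image."""
    return light_field.mean(axis=(2, 3))
```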

    Vantage generation and interactive playback

    Publication Number: US10444931B2

    Publication Date: 2019-10-15

    Application Number: US15590841

    Application Date: 2017-05-09

    Applicant: Google LLC

    Inventor: Kurt Akeley

    Abstract: Video data of an environment may be prepared for presentation to a user in a virtual reality or augmented reality experience. According to one method, a plurality of locations distributed throughout a viewing volume may be designated, at which a plurality of vantages are to be positioned to facilitate viewing of the environment from proximate the locations. For each location, a plurality of images of the environment, captured from viewpoints proximate the location, may be retrieved. For each location, the images may be reprojected to a three-dimensional shape and combined to generate a combined image. The combined image may be applied to one or more surfaces of the three-dimensional shape to generate a vantage. The vantages may be stored such that the vantages can be used to generate viewpoint video of the scene, as viewed from a virtual viewpoint corresponding to an actual viewer's viewpoint within the viewing volume.
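
    Conceptually, each vantage is built by averaging nearby captures after reprojecting them onto a common three-dimensional shape. The sketch below follows that outline, assuming a spherical shape and leaving the reprojection itself as an injected placeholder; none of the names come from the patent.

```python
# Hedged sketch of vantage generation: for every designated location, nearby
# captured images are reprojected onto a shared 3D shape (a sphere is assumed
# here) and averaged into a single combined texture for that vantage.
import numpy as np

def generate_vantages(locations, retrieve_images, reproject_to_sphere,
                      texture_shape=(1024, 2048, 3)):
    vantages = {}
    for location in locations:
        # Images captured from viewpoints proximate this location.
        images = retrieve_images(location)
        combined = np.zeros(texture_shape, dtype=np.float64)
        for image, viewpoint in images:
            # Map each source image onto the sphere surrounding the location.
            combined += reproject_to_sphere(image, viewpoint, location,
                                            texture_shape)
        combined /= max(len(images), 1)
        # The combined image applied to the sphere's surface is the vantage.
        vantages[tuple(location)] = combined
    return vantages
```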

    Generation of virtual reality with 6 degrees of freedom from limited viewer data

    Publication Number: US10474227B2

    Publication Date: 2019-11-12

    Application Number: US15897994

    Application Date: 2018-02-15

    Applicant: Google LLC

    Abstract: A virtual reality or augmented reality experience may be presented for a viewer through the use of input including only three degrees of freedom. The input may include orientation data indicative of a viewer orientation at which a head of the viewer is oriented. The viewer orientation may be mapped to an estimated viewer location. Viewpoint video may be generated of a scene as viewed from a virtual viewpoint with a virtual location corresponding to the estimated viewer location, from along the viewer orientation. The viewpoint video may be displayed for the viewer. In some embodiments, mapping may be carried out by defining a ray at the viewer orientation, locating an intersection of the ray with a three-dimensional shape, and, based on a location of the intersection, generating the estimated viewer location. The shape may be generated via calibration with a device that receives input including six degrees of freedom.
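
    The orientation-to-location mapping reduces to a ray/shape intersection: cast a ray along the gaze direction, find where it meets the calibrated shape, and derive the estimated location from that intersection. The sketch below uses a sphere for the shape and a simple scale factor for the final step; both are assumptions made for illustration, since the patent's shape comes from calibration with a six-degree-of-freedom device.

```python
# Illustrative sketch: map a 3-DOF head orientation to an estimated viewer
# location via a ray-sphere intersection. Sphere parameters and the scale
# factor are assumptions, not values from the patent.
import numpy as np

def estimate_viewer_location(gaze_direction, sphere_center, sphere_radius,
                             scale=0.1):
    d = gaze_direction / np.linalg.norm(gaze_direction)  # ray from the origin
    # Ray-sphere intersection: solve |t*d - c|^2 = r^2 for t (with |d| = 1).
    b = -2.0 * np.dot(d, sphere_center)
    c = np.dot(sphere_center, sphere_center) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return np.zeros(3)            # no intersection: fall back to neutral
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0:
        t = (-b + np.sqrt(disc)) / 2.0
    intersection = t * d
    # The intersection location determines the estimated viewer location.
    return scale * intersection
```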

    Stereo image generation and interactive playback

    Publication Number: US10540818B2

    Publication Date: 2020-01-21

    Application Number: US15730481

    Application Date: 2017-10-11

    Applicant: Google LLC

    Inventor: Kurt Akeley

    Abstract: Video data of an environment may be prepared for stereoscopic presentation to a user in a virtual reality or augmented reality experience. According to one method, a plurality of locations distributed throughout a viewing volume may be designated, at which a plurality of vantages are to be positioned to facilitate viewing of the environment from proximate the locations. For each location, a plurality of images of the environment, captured from viewpoints proximate the location, may be retrieved. For each location, the images may be reprojected to a three-dimensional shape and combined to generate a combined image. The combined image may be applied to one or more surfaces of the three-dimensional shape to generate a vantage. The vantages may be stored such that the vantages can be used to generate stereoscopic viewpoint video of the scene, as viewed from at least two virtual viewpoints corresponding to viewpoints of an actual viewer's eyes within the viewing volume.
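
    The stereoscopic case differs from the monoscopic one mainly in that two virtual viewpoints are rendered, one per eye. The sketch below derives those two eye positions from a single head pose; the interpupillary distance and the vector conventions are illustrative assumptions.

```python
# Small sketch of deriving the two virtual eye viewpoints for stereoscopic
# playback from one head position/orientation. IPD and conventions assumed.
import numpy as np

def eye_viewpoints(head_position, forward, up, ipd=0.064):
    forward = forward / np.linalg.norm(forward)
    # Right vector is perpendicular to the gaze and the up direction.
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    half = 0.5 * ipd
    left_eye = head_position - half * right
    right_eye = head_position + half * right
    # Each eye position becomes a virtual viewpoint rendered from the vantages.
    return left_eye, right_eye
```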
