-
1.
Publication No.: US11978225B2
Publication Date: 2024-05-07
Application No.: US18135678
Filing Date: 2023-04-17
Applicant: Google LLC
Inventor: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
CPC classification number: G06T7/579, G06T7/246, G06T7/73, G06T2207/10016, G06T2207/10028, G06T2207/20081, G06T2207/30244
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
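The pipeline described in the abstract can be sketched in a few lines: mask out moving features, recover static depth from parallax, then combine both inputs into a full "dynamic" depth map. This is a minimal illustrative sketch, not the patented method: the function names, thresholds, and the simple blending stand-in for the trained machine learning model are all assumptions for illustration.

```python
# Toy sketch of: object mask -> static depth from parallax -> dynamic depth.
# The "model" here is a naive blend standing in for the learned network.
import numpy as np

def make_object_mask(motion_magnitude, threshold=1.0):
    """1.0 where features are static, 0.0 where they are moving (masked out)."""
    return (motion_magnitude < threshold).astype(np.float32)

def static_depth_from_parallax(disparity, baseline=0.1, focal=500.0):
    """Classic depth-from-parallax: depth = focal * baseline / disparity."""
    return focal * baseline / np.maximum(disparity, 1e-6)

def dynamic_depth(depth_prior, object_mask, static_depth):
    """Stand-in for the trained model: keep parallax depth where the mask
    preserves static features, fall back to a prior for moving regions."""
    return object_mask * static_depth + (1.0 - object_mask) * depth_prior

# 2x2 toy frame: left column static (small motion), right column moving.
motion = np.array([[0.2, 2.5], [0.1, 3.0]])
disparity = np.array([[5.0, 5.0], [2.0, 2.0]])

mask = make_object_mask(motion)
static = static_depth_from_parallax(disparity)
depth = dynamic_depth(np.full((2, 2), 20.0), mask, static)
```

In the real system the final step is a learned network trained so its output agrees with the parallax-derived static depth wherever the mask preserves static features, which is what lets it generalize depth to the moving regions.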
-
2.
Publication No.: US20210090279A1
Publication Date: 2021-03-25
Application No.: US16578215
Filing Date: 2019-09-20
Applicant: Google LLC
Inventor: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
-
3.
Publication No.: US11288857B2
Publication Date: 2022-03-29
Application No.: US16837612
Filing Date: 2020-04-01
Applicant: Google LLC
Inventor: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
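The rerendering interface in the abstract reduces to: render an image buffer from the 3D model at a chosen viewpoint, then feed that buffer plus an appearance code to a rerendering network. A hedged toy sketch of that data flow follows; the projection and the linear "network" are purely illustrative stand-ins, and all names are assumptions, not the patented implementation.

```python
# Toy data flow: 3D model -> rendered image buffer -> rerendered image.
import numpy as np

def render_buffer(model_points, viewpoint):
    """Toy 'rasterization': project (N, 3) model points to image space
    by dropping depth and applying a 2D viewpoint offset."""
    return model_points[:, :2] + viewpoint  # (N, 2) image data buffer

def rerender(buffer, appearance_code):
    """Stand-in for the neural rerendering network: modulate the buffer
    by an appearance code (here just a gain and a bias)."""
    gain, bias = appearance_code
    return gain * buffer + bias

# Two points of a reconstructed scene, rendered from one viewpoint,
# then rerendered under a different appearance condition.
points = np.array([[1.0, 2.0, 5.0], [3.0, 4.0, 6.0]])
buf = render_buffer(points, viewpoint=np.array([0.5, -0.5]))
out = rerender(buf, appearance_code=(2.0, 1.0))
```

The key property the sketch preserves is that viewpoint is fixed by the rendered buffer while appearance (lighting, weather, exposure) is controlled independently by the code, so one reconstruction can be rerendered under many appearance conditions.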
-
4.
Publication No.: US20200320777A1
Publication Date: 2020-10-08
Application No.: US16837612
Filing Date: 2020-04-01
Applicant: Google LLC
Inventor: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
-
5.
Publication No.: US20230260145A1
Publication Date: 2023-08-17
Application No.: US18135678
Filing Date: 2023-04-17
Applicant: Google LLC
Inventor: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
CPC classification number: G06T7/579, G06T7/246, G06T7/73, G06T2207/10016, G06T2207/30244, G06T2207/20081, G06T2207/10028
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
-
6.
Publication No.: US11663733B2
Publication Date: 2023-05-30
Application No.: US17656165
Filing Date: 2022-03-23
Applicant: Google LLC
Inventor: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
CPC classification number: G06T7/579, G06T7/246, G06T7/73, G06T2207/10016, G06T2207/10028, G06T2207/20081, G06T2207/30244
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
-
7.
Publication No.: US20220215568A1
Publication Date: 2022-07-07
Application No.: US17656165
Filing Date: 2022-03-23
Applicant: Google LLC
Inventor: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
-
8.
Publication No.: US11315274B2
Publication Date: 2022-04-26
Application No.: US16578215
Filing Date: 2019-09-20
Applicant: Google LLC
Inventor: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
-