-
Publication Number: US11100646B2
Publication Date: 2021-08-24
Application Number: US16562819
Filing Date: 2019-09-06
Applicant: Google LLC
Inventors: Suhani Vora, Reza Mahjourian, Soeren Pirk, Anelia Angelova
Abstract: A method for generating a predicted segmentation map for potential objects in a future scene depicted in a future image is described. The method includes receiving input images that depict the same scene; processing a current input image to generate a segmentation map for potential objects in the current input image and a respective depth map; generating a point cloud for the current input image; processing the input images to generate, for each pair of consecutive input images in the sequence, a respective ego-motion output that characterizes motion of the camera between the two input images; processing the ego-motion outputs to generate a future ego-motion output; processing the point cloud of the current input image and the future ego-motion output to generate a future point cloud; and processing the future point cloud to generate the predicted segmentation map for potential objects in the future scene depicted in the future image.
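The abstract does not spell out how the point cloud is built from the segmentation and depth maps. Below is a minimal geometric sketch, assuming a standard pinhole camera model; the intrinsics matrix `K` and the helper name `depth_to_point_cloud` are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, seg, K):
    """Back-project a per-pixel depth map into a labelled 3D point cloud.

    depth: (H, W) array of per-pixel depths in the camera frame.
    seg:   (H, W) array of per-pixel segmentation labels.
    K:     (3, 3) pinhole camera intrinsics matrix (assumed known).
    Returns (H*W, 3) points in the camera frame and (H*W,) labels.
    """
    h, w = depth.shape
    # Homogeneous pixel coordinates for every pixel in the image.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    # Rays through each pixel, scaled by depth, give 3D points in the camera frame.
    rays = pix @ np.linalg.inv(K).T
    points = rays * depth.reshape(-1, 1)
    return points, seg.reshape(-1)

# Toy usage: a flat scene 5 m away with one hypothetical foreground object mask.
K = np.array([[500.0, 0.0, 64.0],
              [0.0, 500.0, 48.0],
              [0.0, 0.0, 1.0]])
depth = np.full((96, 128), 5.0)
seg = np.zeros((96, 128), dtype=np.int32)
seg[30:60, 40:80] = 1
points, labels = depth_to_point_cloud(depth, seg, K)
print(points.shape, labels.shape)
```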
-
Publication Number: US20210073997A1
Publication Date: 2021-03-11
Application Number: US16562819
Filing Date: 2019-09-06
Applicant: Google LLC
Inventors: Suhani Vora, Reza Mahjourian, Soeren Pirk, Anelia Angelova
Abstract: This disclosure describes a system including one or more computers and one or more non-transitory storage devices storing instructions that, when executed by one or more computers, cause the one or more computers to perform operations for generating a predicted segmentation map for potential objects in a future scene depicted in a future image. The operations include: receiving a sequence of input images that depict the same scene, the input images being captured by a camera at different time steps, the sequence of input images comprising a current input image and one or more input images preceding the current input image in the sequence; processing the current input image to generate a segmentation map for potential objects in the current input image and a respective depth map for the current input image; generating a point cloud for the current input image using the segmentation map and the depth map of the current input image, wherein the point cloud is a 3-dimensional (3D) structure representation of the scene as depicted in the current input image; processing the sequence of input images using an ego-motion estimation neural network to generate, for each pair of two consecutive input images in the sequence, a respective ego-motion output that characterizes motion of the camera between the two consecutive input images; processing the ego-motion outputs using a future ego-motion prediction neural network to generate a future ego-motion output that is a prediction of future motion of the camera from the current input image in the sequence to a future image, wherein the future image is an image that would be captured by the camera at a future time step; processing the point cloud of the current input image and the future ego-motion output to generate a future point cloud that is a predicted 3D representation of a future scene as depicted in the future image; and processing the future point cloud to generate a predicted segmentation map for potential objects in the future scene depicted in the future image.
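One way to picture the last two operations is sketched below, under the same pinhole assumptions as the earlier snippet: the labelled point cloud is warped by a predicted future ego-motion, represented here as a 4x4 SE(3) matrix `T_future`, and re-rendered into the future camera to form the predicted segmentation map. The transform convention, the z-buffering, and the function name `warp_to_future_segmentation` are assumptions for illustration, not details from the patent.

```python
import numpy as np

def warp_to_future_segmentation(points, labels, T_future, K, image_size, background=0):
    """Transform a labelled point cloud by a predicted ego-motion and
    project it into the future camera to form a predicted segmentation map.

    points:     (N, 3) points in the current camera frame.
    labels:     (N,) segmentation label for each point.
    T_future:   (4, 4) predicted motion mapping current-frame points into
                the future camera frame (assumed convention).
    K:          (3, 3) camera intrinsics.
    image_size: (H, W) of the output segmentation map.
    """
    h, w = image_size
    # Move the points into the future camera frame.
    homog = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    future_pts = (homog @ T_future.T)[:, :3]
    # Keep only points in front of the future camera.
    in_front = future_pts[:, 2] > 1e-6
    future_pts, labels = future_pts[in_front], labels[in_front]
    # Perspective projection back to pixel coordinates.
    proj = future_pts @ K.T
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z, labels = u[valid], v[valid], future_pts[valid, 2], labels[valid]
    # Simple z-buffer: where several points land on one pixel, the nearest wins.
    seg_future = np.full((h, w), background, dtype=labels.dtype)
    order = np.argsort(-z)  # draw far points first so near points overwrite them
    seg_future[v[order], u[order]] = labels[order]
    return seg_future
```

A purely geometric warp like this leaves regions that become disoccluded in the future view empty; filling such gaps is presumably the role of the learned components described in the abstract, and is not modeled in this sketch.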
-