UNSUPERVISED DEPTH PREDICTION NEURAL NETWORKS

    Publication No.: US20230419521A1

    Publication Date: 2023-12-28

    Application No.: US18367888

    Filing Date: 2023-09-13

    Applicant: Google LLC

    Abstract: A system for generating a depth output for an image is described. The system receives input images that depict the same scene, each input image including one or more potential objects. The system generates, for each input image, a respective background image and processes the background images to generate a camera motion output that characterizes the motion of the camera between the input images. For each potential object, the system generates a respective object motion output for the potential object based on the input images and the camera motion output. The system processes a particular input image of the input images using a depth prediction neural network (NN) to generate a depth output for the particular input image, and updates the current values of parameters of the depth prediction NN based on the particular depth output, the camera motion output, and the object motion outputs for the potential objects.
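The abstract above decomposes scene motion into one camera-motion output plus a per-object motion output. A minimal sketch of that decomposition is below; the pinhole unprojection, the SE(3) parameterization as a rotation matrix plus translation, and all function names are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def unproject(depth, K):
    """Back-project a depth map into camera-frame 3D points (H*W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                                      # 3 x N
    return (rays * depth.reshape(1, -1)).T                             # N x 3

def apply_motion(points, R, t):
    """Rigidly transform points by rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

def scene_flow(depth, K, cam_R, cam_t, obj_masks, obj_motions):
    """Camera motion moves every point; each potential object's motion is
    applied on top of it for the pixels inside that object's mask."""
    moved = apply_motion(unproject(depth, K), cam_R, cam_t)
    for mask, (R, t) in zip(obj_masks, obj_motions):
        idx = mask.reshape(-1)
        moved[idx] = apply_motion(moved[idx], R, t)
    return moved
```

A training loss could then compare images warped by these transformed points against the observed frames, but that photometric step is beyond this sketch.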

    TRAINING NEURAL NETWORKS USING CONSISTENCY MEASURES

    Publication No.: US20210279511A1

    Publication Date: 2021-09-09

    Application No.: US17194090

    Filing Date: 2021-03-05

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using consistency measures. One of the methods includes processing a particular training example from a mediator training data set using a first neural network to generate a first output for a first machine learning task; processing the particular training example in the mediator training data set using each of one or more second neural networks, wherein each second neural network is configured to generate a second output for a respective second machine learning task; determining, for each second machine learning task, a consistency target output for the first machine learning task; determining, for each second machine learning task, an error between the first output and the consistency target output corresponding to the second machine learning task; and generating a parameter update for the first neural network from the determined errors.

    FUTURE SEMANTIC SEGMENTATION PREDICTION USING 3D STRUCTURE

    Publication No.: US20210073997A1

    Publication Date: 2021-03-11

    Application No.: US16562819

    Filing Date: 2019-09-06

    Applicant: Google LLC

    Abstract: This disclosure describes a system including one or more computers and one or more non-transitory storage devices storing instructions that, when executed by one or more computers, cause the one or more computers to perform operations for generating a predicted segmentation map for potential objects in a future scene depicted in a future image. The operations include: receiving a sequence of input images that depict the same scene, the input images being captured by a camera at different time steps, the sequence of input images comprising a current input image and one or more input images preceding the current input image in the sequence; processing the current input image to generate a segmentation map for potential objects in the current input image and a respective depth map for the current input image; generating a point cloud for the current input image using the segmentation map and the depth map of the current input image, wherein the point cloud is a 3-dimensional (3D) structure representation of the scene as depicted in the current input image; processing the sequence of input images using an ego-motion estimation neural network to generate, for each pair of two consecutive input images in the sequence, a respective ego-motion output that characterizes motion of the camera between the two consecutive input images; processing the ego-motion outputs using a future ego-motion prediction neural network to generate a future ego-motion output that is a prediction of future motion of the camera from the current input image in the sequence to a future image, wherein the future image is an image that would be captured by the camera at a future time step; processing the point cloud of the current input image and the future ego-motion output to generate a future point cloud that is a predicted 3D representation of a future scene as depicted in the future image; and processing the future point cloud to generate a predicted segmentation map for potential objects in the future scene depicted in the future image.
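The core geometric steps above, fusing the depth map and segmentation map into a labeled point cloud and moving it by the predicted future ego-motion, can be sketched as follows. The pinhole model, the rotation-plus-translation ego-motion format, and the function names are assumptions for illustration; the patent's networks that estimate these quantities are not reproduced here.

```python
import numpy as np

def labeled_point_cloud(depth, seg, K):
    """Fuse a depth map and a segmentation map into a labeled 3D point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = (np.linalg.inv(K) @ pix * depth.reshape(1, -1)).T  # N x 3 points
    return pts, seg.reshape(-1)                              # points + labels

def predict_future_cloud(pts, labels, ego_R, ego_t):
    """Move the current cloud by the predicted future ego-motion; labels
    travel with their points, so re-projecting the result would yield a
    predicted segmentation map for the future image."""
    return pts @ ego_R.T + ego_t, labels
```

The final re-projection of the future cloud into a 2D predicted segmentation map is omitted; it would rasterize each labeled point through the camera intrinsics.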
