-
Publication Number: US20220335624A1
Publication Date: 2022-10-20
Application Number: US17721288
Filing Date: 2022-04-14
Applicant: Waymo LLC
Inventor: Daniel Rudolf Maurer, Austin Charles Stone, Alper Ayvaci, Anelia Angelova, Rico Jonschkowski
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to predict optical flow. One of the methods includes obtaining a batch of one or more training image pairs; for each of the pairs: processing the first training image and the second training image using the neural network to generate a final optical flow estimate; generating a cropped final optical flow estimate from the final optical flow estimate; and training the neural network using the cropped optical flow estimate.
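The training step described in this abstract can be illustrated with a short sketch. This is a minimal illustration only, not the patented implementation: the `flow_net` model, the fixed crop `margin`, and the L1 loss against a ground-truth flow are all assumptions made for the example.

```python
# Minimal sketch of the training step described in the abstract.
# Assumptions (not from the patent): a PyTorch model `flow_net` mapping an
# image pair to a dense flow field, a fixed border crop, and an L1 loss
# against a (possibly proxy) ground-truth flow.
import torch
import torch.nn.functional as F

def train_step(flow_net, optimizer, image1, image2, target_flow, margin=32):
    """One update on a batch of training image pairs.

    image1, image2:  [B, 3, H, W] first and second training images
    target_flow:     [B, 2, H, W] ground-truth (or proxy) optical flow
    margin:          number of border pixels removed before the loss
    """
    # Process the first and second training images to get a final flow estimate.
    final_flow = flow_net(image1, image2)          # [B, 2, H, W]

    # Generate a cropped final optical flow estimate by discarding the borders,
    # where flow predictions are typically least reliable.
    cropped_flow = final_flow[:, :, margin:-margin, margin:-margin]
    cropped_target = target_flow[:, :, margin:-margin, margin:-margin]

    # Train the network using the cropped flow estimate.
    loss = F.l1_loss(cropped_flow, cropped_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```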
-
Publication Number: US12229972B2
Publication Date: 2025-02-18
Application Number: US17721288
Filing Date: 2022-04-14
Applicant: Waymo LLC
Inventor: Daniel Rudolf Maurer, Austin Charles Stone, Alper Ayvaci, Anelia Angelova, Rico Jonschkowski
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to predict optical flow. One of the methods includes obtaining a batch of one or more training image pairs; for each of the pairs: processing the first training image and the second training image using the neural network to generate a final optical flow estimate; generating a cropped final optical flow estimate from the final optical flow estimate; and training the neural network using the cropped optical flow estimate.
-
Publication Number: US20230035454A1
Publication Date: 2023-02-02
Application Number: US17384637
Filing Date: 2021-07-23
Applicant: Waymo LLC
Inventor: Daniel Rudolf Maurer, Alper Ayvaci, Robert William Anderson, Rico Jonschkowski, Austin Charles Stone, Anelia Angelova, Nichola Abdo, Christopher John Sweeney
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an optical flow label from a lidar point cloud. One of the methods includes obtaining data specifying a training example, including a first image of a scene in an environment captured at a first time point and a second image of the scene in the environment captured at a second time point. For each of a plurality of lidar points, a respective second corresponding pixel in the second image is obtained and a respective velocity estimate for the lidar point at the second time point is obtained. A respective first corresponding pixel in the first image is determined using the velocity estimate for the lidar point. A proxy optical flow ground truth for the training example is generated based on an estimate of optical flow of the pixel between the first and second images.
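The label-generation idea in this abstract can also be sketched in a few lines. The sketch below is an assumption-laden illustration, not the patented method: it assumes lidar points and their velocity estimates are already expressed in the camera frame of the second image, uses a simple pinhole projection with intrinsics `K`, and ignores ego-motion and occlusion handling.

```python
# Minimal sketch of deriving a proxy optical-flow label from lidar points with
# per-point velocity estimates, in the spirit of the abstract above.
# Assumptions (not from the patent): points and velocities are in the camera
# frame at the second time point; a pinhole camera with intrinsics K; no
# ego-motion compensation or occlusion reasoning.
import numpy as np

def project(points_xyz, K):
    """Pinhole projection of [N, 3] camera-frame points to [N, 2] pixels."""
    uvw = points_xyz @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def proxy_flow_labels(points_t2, velocities_t2, K, dt):
    """Per-point proxy optical flow between the first and second images.

    points_t2:     [N, 3] lidar points at the second time point
    velocities_t2: [N, 3] velocity estimate for each point at the second time
    dt:            time elapsed between the first and second images
    """
    # Pixel in the second image that corresponds to each lidar point.
    pixels_t2 = project(points_t2, K)

    # Estimate where each point was at the first time point using its velocity,
    # then find the corresponding pixel in the first image.
    points_t1 = points_t2 - velocities_t2 * dt
    pixels_t1 = project(points_t1, K)

    # The proxy ground-truth flow at each first-image pixel is the displacement
    # of that pixel between the first and second images.
    return pixels_t1, pixels_t2 - pixels_t1
```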