-
Publication No.: US10692244B2
Publication Date: 2020-06-23
Application No.: US16137064
Filing Date: 2018-09-20
Applicant: NVIDIA Corporation
Inventor: Jinwei Gu , Samarth Manoj Brahmbhatt , Kihwan Kim , Jan Kautz
Abstract: A deep neural network (DNN) system learns a map representation for estimating a camera position and orientation (pose). The DNN is trained to learn a map representation corresponding to the environment, defining positions and attributes of structures, trees, walls, vehicles, etc. The DNN system learns a map representation that is versatile and performs well for many different environments (indoor, outdoor, natural, synthetic, etc.). The DNN system receives images of an environment captured by a camera (observations) and outputs an estimated camera pose within the environment. The estimated camera pose is used to perform camera localization, i.e., recover the three-dimensional (3D) position and orientation of a moving camera, which is a fundamental task in computer vision with a wide variety of applications in robot navigation, car localization for autonomous driving, device localization for mobile navigation, and augmented/virtual reality.
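Camera-localization systems like this are typically evaluated by comparing an estimated pose against ground truth. As a small illustrative sketch (a standard evaluation metric, not code from the patent), the error between two poses, each given as a translation vector plus a unit quaternion, can be computed as:

```python
import numpy as np

def pose_error(t_est, q_est, t_gt, q_gt):
    """Translation error (Euclidean distance) and rotation error (degrees)
    between an estimated and a ground-truth camera pose.
    Orientations are quaternions in (w, x, y, z) order."""
    t_est, t_gt = np.asarray(t_est, float), np.asarray(t_gt, float)
    q_est, q_gt = np.asarray(q_est, float), np.asarray(q_gt, float)
    t_err = np.linalg.norm(t_est - t_gt)
    # Angle between two unit quaternions: theta = 2 * arccos(|<q1, q2>|)
    dot = abs(np.dot(q_est / np.linalg.norm(q_est),
                     q_gt / np.linalg.norm(q_gt)))
    r_err = 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))
    return t_err, r_err
```

A pose estimate displaced by (3, 4, 0) metres and rotated 90 degrees about the z-axis, for example, yields a 5-metre translation error and a 90-degree rotation error.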
-
Publication No.: US20200167943A1
Publication Date: 2020-05-28
Application No.: US16565885
Filing Date: 2019-09-10
Applicant: NVIDIA Corporation
Inventor: Kihwan Kim , Jinwei Gu , Chen Liu , Jan Kautz
Abstract: Planar regions in three-dimensional scenes offer important geometric cues in a variety of three-dimensional perception tasks such as scene understanding, scene reconstruction, and robot navigation. Image analysis to detect planar regions can be performed by a deep learning architecture that includes a number of neural networks configured to estimate parameters for the planar regions. The neural networks process an image to detect an arbitrary number of plane objects in the image. Each plane object is associated with a number of estimated parameters including bounding box parameters, plane normal parameters, and a segmentation mask. Global parameters for the image, including a depth map, can also be estimated by one of the neural networks. Then, a segmentation refinement network jointly optimizes (i.e., refines) the segmentation masks for each instance of the plane objects and combines the refined segmentation masks to generate an aggregate segmentation mask for the image.
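To make the plane parameters concrete: a plane with unit normal n and offset d (n · X = d in camera coordinates) determines a depth at every pixel, since the back-projected ray through pixel (u, v) is X = z · K⁻¹ [u, v, 1]ᵀ. A minimal sketch of this standard geometric relation (not code from the patent):

```python
import numpy as np

def plane_depth(n, d, K, u, v):
    """Depth z at pixel (u, v) of the plane n . X = d, given the
    3x3 camera intrinsic matrix K. Substituting the back-projected
    ray X = z * K^-1 [u, v, 1]^T into the plane equation gives
    z = d / (n . K^-1 [u, v, 1]^T)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.dot(n, ray)
```

For a fronto-parallel plane (normal along the optical axis) the predicted depth is constant across the image, which is a quick sanity check on estimated plane parameters.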
-
Publication No.: US10467763B1
Publication Date: 2019-11-05
Application No.: US16537986
Filing Date: 2019-08-12
Applicant: NVIDIA Corporation
Inventor: Deqing Sun , Xiaodong Yang , Ming-Yu Liu , Jan Kautz
Abstract: A method, computer readable medium, and system are disclosed for estimating optical flow between two images. A first pyramidal set of features is generated for a first image, and a partial cost volume for a level of the first pyramidal set of features is computed, by a neural network, using features at that level and warped features extracted from a second image, where the partial cost volume is computed across a limited range of pixel displacements that is smaller than the full resolution, in pixels, of the first image at that level. The neural network processes the features and the partial cost volume to produce a refined optical flow estimate for the first image and the second image.
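A partial cost volume of this kind is essentially a correlation between two feature maps evaluated only over a small window of candidate displacements. A simplified NumPy sketch of the idea (illustrative only; the shapes and the channel-mean normalization are assumptions, not details from the patent):

```python
import numpy as np

def partial_cost_volume(f1, f2, max_disp):
    """Correlation cost volume over a limited displacement range.
    f1, f2: feature maps of shape (H, W, C), with f2 already warped
    toward f1. Returns an (H, W, (2*max_disp+1)**2) volume holding one
    correlation score per candidate displacement (dy, dx) with
    |dy|, |dx| <= max_disp."""
    H, W, C = f1.shape
    d = max_disp
    pad = np.pad(f2, ((d, d), (d, d), (0, 0)))  # zero-pad the borders
    vols = []
    for dy in range(2 * d + 1):
        for dx in range(2 * d + 1):
            shifted = pad[dy:dy + H, dx:dx + W, :]
            vols.append((f1 * shifted).sum(-1) / C)  # mean over channels
    return np.stack(vols, axis=-1)
```

Restricting the search to `max_disp` pixels at each pyramid level is what keeps the cost volume "partial": coarse levels capture large motions cheaply, and finer levels refine the estimate.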
-
Publication No.: US20190278983A1
Publication Date: 2019-09-12
Application No.: US16290643
Filing Date: 2019-03-01
Applicant: NVIDIA Corporation
Inventor: Umar Iqbal , Pavlo Molchanov , Thomas Michael Breuel , Jan Kautz
Abstract: Estimating a three-dimensional (3D) pose of an object, such as a hand or body (human, animal, robot, etc.), from a 2D image is necessary for human-computer interaction. A hand pose can be represented by a set of points in 3D space, called keypoints. Two coordinates (x,y) represent spatial displacement and a third coordinate represents a depth of every point with respect to the camera. A monocular camera is used to capture an image of the 3D pose, but does not capture depth information. A neural network architecture is configured to generate a depth value for each keypoint in the captured image, even when portions of the pose are occluded, or the orientation of the object is ambiguous. Generation of the depth values enables estimation of the 3D pose of the object.
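Once a depth value is available for each 2D keypoint, lifting the keypoints to 3D is standard back-projection through the camera intrinsics. A small sketch of that final step (the geometry is textbook; it is not code from the patent):

```python
import numpy as np

def lift_keypoints(kp2d, depths, K):
    """Back-project 2D keypoints with per-keypoint depths to 3D.
    kp2d: (N, 2) pixel coordinates; depths: (N,); K: 3x3 intrinsics.
    Each 3D point is X = z * K^-1 [u, v, 1]^T in camera coordinates.
    Returns an (N, 3) array of 3D points."""
    N = kp2d.shape[0]
    homo = np.hstack([kp2d, np.ones((N, 1))])  # homogeneous pixel coords
    rays = homo @ np.linalg.inv(K).T           # back-projected rays
    return rays * depths[:, None]
```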
-
Publication No.: US10402697B2
Publication Date: 2019-09-03
Application No.: US15660719
Filing Date: 2017-07-26
Applicant: NVIDIA Corporation
Inventor: Xiaodong Yang , Pavlo Molchanov , Jan Kautz
Abstract: A method, computer readable medium, and system are disclosed for classifying video image data. The method includes the steps of processing training video image data by at least a first layer of a convolutional neural network (CNN) to extract a first set of feature maps and generate classification output data for the training video image data. Spatial classification accuracy data is computed based on the classification output data and target classification output data, and spatial discrimination factors for the first layer are computed based on the spatial classification accuracy data and the first set of feature maps.
-
Publication No.: US10311589B2
Publication Date: 2019-06-04
Application No.: US15823370
Filing Date: 2017-11-27
Applicant: NVIDIA Corporation
Inventor: Gregory P. Meyer , Shalini Gupta , Iuri Frosio , Nagilla Dikpal Reddy , Jan Kautz
Abstract: One embodiment of the present invention sets forth a technique for estimating a head pose of a user. The technique includes acquiring depth data associated with a head of the user and initializing each particle included in a set of particles with a different candidate head pose. The technique further includes performing one or more optimization passes that include performing at least one iterative closest point (ICP) iteration for each particle and performing at least one particle swarm optimization (PSO) iteration. Each ICP iteration includes rendering the three-dimensional reference model based on the candidate head pose associated with the particle and comparing the three-dimensional reference model to the depth data. Each PSO iteration comprises updating a global best head pose associated with the set of particles and modifying at least one candidate head pose. The technique further includes modifying a shape of the three-dimensional reference model based on depth data.
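The PSO half of the technique follows the standard particle swarm update rule: each particle's velocity is pulled toward its personal best and the swarm's global best. A generic sketch of one such iteration (the standard PSO formula with conventional coefficient values, not the patent's specific parameterization):

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    """One particle swarm optimization update.
    pos, vel, pbest: (n_particles, dim) arrays; gbest: (dim,).
    Standard rule: v' = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    with r1, r2 drawn uniformly from [0, 1)."""
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

In the head-pose setting each particle's position would encode a candidate pose (e.g., translation plus rotation parameters), with the ICP-refined fitting error serving as the objective that determines `pbest` and `gbest`.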
-
Publication No.: US20190164268A1
Publication Date: 2019-05-30
Application No.: US16200192
Filing Date: 2018-11-26
Applicant: NVIDIA Corporation
Inventor: Orazio Gallo , Jinwei Gu , Jan Kautz , Patrick Wieschollek
Abstract: When a computer image is generated from a real-world scene having a semi-reflective surface (e.g., a window), the image will contain, at the semi-reflective surface from the viewpoint of the camera, both a reflection of the scene in front of the semi-reflective surface and a transmission of the scene located behind it. Just as for a person viewing the real-world scene from different locations and angles, the reflection and transmission may change, and move relative to each other, as the viewpoint of the camera changes. The dynamic nature of the reflection and transmission negatively impacts the performance of many computer applications, but performance can generally be improved if the reflection and transmission are separated. The present disclosure uses deep learning to separate reflection and transmission at a semi-reflective surface of a computer image generated from a real-world scene.
-
Publication No.: US20190158884A1
Publication Date: 2019-05-23
Application No.: US16191174
Filing Date: 2018-11-14
Applicant: NVIDIA Corporation
Inventor: Yi-Hsuan Tsai , Ming-Yu Liu , Deqing Sun , Ming-Hsuan Yang , Jan Kautz
IPC: H04N19/85
Abstract: A method, computer readable medium, and system are disclosed for identifying residual video data, i.e., the data that is lost during lossy compression of original video data. For example, the original video data may be compressed and then decompressed, and the result may be compared to the original video data to determine the residual video data. This residual video data is transformed into a smaller format by encoding, binarizing, and compressing it, and is sent to a destination. At the destination, the residual video data is transformed back into its original format and is used during decompression of the compressed original video data to improve the quality of the decompressed video.
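The core arithmetic of a residual pipeline is simple: subtract the lossy round trip from the original, transmit the difference, and add it back at the decoder. A toy sketch of just that subtraction and reconstruction (the patent's actual encoding/binarization stages are omitted):

```python
import numpy as np

def compute_residual(original, codec_roundtrip):
    """Residual between original frames and their lossy round trip.
    Widened to int16 so negative differences are representable."""
    return original.astype(np.int16) - codec_roundtrip.astype(np.int16)

def reconstruct(decompressed, residual):
    """Add the transmitted residual back onto the decompressed frames,
    clipping to the valid 8-bit pixel range."""
    out = decompressed.astype(np.int16) + residual
    return np.clip(out, 0, 255).astype(np.uint8)
```

If the residual survives transmission losslessly, `reconstruct` recovers the original pixels exactly; in practice the residual itself is compressed, trading some of that fidelity for bandwidth.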
-
Publication No.: US20190156154A1
Publication Date: 2019-05-23
Application No.: US16188641
Filing Date: 2018-11-13
Applicant: NVIDIA Corporation
Inventor: Wei-Chih Tu , Ming-Yu Liu , Varun Jampani , Deqing Sun , Ming-Hsuan Yang , Jan Kautz
Abstract: Segmentation is the identification of separate objects within an image. An example is identification of a pedestrian passing in front of a car, where the pedestrian is a first object and the car is a second object. Superpixel segmentation is the identification of regions of pixels within an object that have similar properties. An example is identification of pixel regions having a similar color, such as different articles of clothing worn by the pedestrian and different components of the car. A pixel affinity neural network (PAN) model is trained to generate pixel affinity maps for superpixel segmentation. The pixel affinity map defines the similarity of two points in space. In an embodiment, the pixel affinity map indicates a horizontal affinity and a vertical affinity for each pixel in the image. The pixel affinity map is processed to identify the superpixels.
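Where the patent learns affinities with a neural network, a classical hand-crafted baseline computes them directly from color differences between neighboring pixels. A sketch of that simpler baseline, to make "horizontal and vertical affinity" concrete (the exponential-of-color-distance form is an assumption, not the PAN model):

```python
import numpy as np

def affinity_maps(img, sigma=10.0):
    """Horizontal and vertical pixel affinities for an (H, W, C) image.
    Affinity between neighbors p and q: exp(-||I(p) - I(q)|| / sigma),
    so identical colors give affinity 1 and large differences approach 0.
    Returns an (H, W-1) horizontal map and an (H-1, W) vertical map."""
    img = img.astype(np.float64)
    dh = np.linalg.norm(img[:, 1:] - img[:, :-1], axis=-1)  # right neighbor
    dv = np.linalg.norm(img[1:, :] - img[:-1, :], axis=-1)  # lower neighbor
    return np.exp(-dh / sigma), np.exp(-dv / sigma)
```

A superpixel algorithm can then grow regions by merging pixels connected through high-affinity edges, which is the role the learned affinity map plays in the abstract above.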
-
Publication No.: US10192525B2
Publication Date: 2019-01-29
Application No.: US15421364
Filing Date: 2017-01-31
Applicant: NVIDIA Corporation
Inventor: Iuri Frosio , Jan Kautz
Abstract: A system, method and computer program product are provided for generating one or more values for a signal patch using neighboring patches collected based on a distance dynamically computed from a noise distribution of the signal patch. In use, a reference patch is identified from a signal, and a reference distance is computed based on a noise distribution in the reference patch. Neighbor patches are then collected from the signal based on the computed reference distance from the reference patch. Further, the collected neighbor patches are processed with the reference patch to generate one or more values for the reference patch.
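The gist of the technique is that the neighbor-collection radius adapts to the noise in the reference patch rather than being fixed. A toy sketch of that idea (the specific threshold formula here is an illustrative assumption; the patent derives its distance from the patch's noise distribution):

```python
import numpy as np

def denoise_patch(patches, ref_idx, k=2.0):
    """Average a reference patch with neighbor patches whose distance
    falls under a threshold derived from the reference patch's noise
    level (estimated here simply as its standard deviation).
    patches: (N, P) array of flattened patches."""
    ref = patches[ref_idx]
    sigma = ref.std()                        # crude noise estimate
    radius = k * sigma * np.sqrt(ref.size)   # noise-adaptive distance
    dists = np.linalg.norm(patches - ref, axis=1)
    neighbors = patches[dists <= radius]     # always includes ref itself
    return neighbors.mean(axis=0)
```

With a noisier reference patch the radius grows, admitting more neighbors into the average; with a clean patch it shrinks, so dissimilar patches cannot blur the result.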