1.
Publication No.: US20250073897A1
Publication Date: 2025-03-06
Application No.: US18242193
Filing Date: 2023-09-05
Applicant: NVIDIA Corporation
Inventor: Motoya Ohnishi , Iretiayo Akinola , Ajay Uday Mandlekar , Jie Xu , Fabio Tozeto Ramos
Abstract: Apparatuses, systems, and techniques to determine a trajectory of an object along a path. In at least one embodiment, one or more path signatures are used to identify one or more actions to be performed by an object to track a reference path.
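As a rough illustration of the idea in this abstract (not the claimed method), the sketch below computes a discrete level-1/level-2 path signature for a sequence of waypoints and selects, among candidate actions, the one whose short rollout best matches the reference path's signature. The simulate_step rollout helper and the action-selection rule are assumptions introduced only for illustration.

    import numpy as np

    def path_signature_level2(path):
        """Discrete approximation of the level-1 and level-2 path signature terms.

        path: (T, d) array of waypoints. Returns a flat feature vector with the
        d level-1 increments followed by the d*d level-2 iterated integrals.
        """
        increments = np.diff(path, axis=0)              # (T-1, d) step increments
        level1 = increments.sum(axis=0)                 # total displacement per dimension
        # Level-2 terms: left-point approximation of the iterated integral
        # S^{ij} = integral of (X^i_t - X^i_0) dX^j_t.
        cum = np.vstack([np.zeros(path.shape[1]),
                         np.cumsum(increments, axis=0)[:-1]])
        level2 = cum.T @ increments                     # (d, d) matrix of S^{ij}
        return np.concatenate([level1, level2.ravel()])

    def choose_action(candidate_actions, simulate_step, state, reference_path):
        """Pick the action whose short rollout best matches the reference path's
        signature (illustrative selection rule, not the patented one)."""
        ref_sig = path_signature_level2(reference_path)
        costs = []
        for action in candidate_actions:
            rollout = simulate_step(state, action)      # hypothetical rollout function
            costs.append(np.linalg.norm(path_signature_level2(rollout) - ref_sig))
        return candidate_actions[int(np.argmin(costs))]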
2.
Publication No.: US20240371082A1
Publication Date: 2024-11-07
Application No.: US18772058
Filing Date: 2024-07-12
Applicant: NVIDIA Corporation
Inventor: Ankit Goyal , Valts Blukis , Jie Xu , Yijie Guo , Yu-Wei Chao , Dieter Fox
Abstract: In various examples, an autonomous system may use a multi-stage process to solve three-dimensional (3D) manipulation tasks from a minimal number of demonstrations and predict key-frame poses with higher precision. In a first stage of the process, for example, the disclosed systems and methods may predict an area of interest in an environment using a virtual environment. The area of interest may correspond to a predicted location of an object in the environment, such as an object that an autonomous machine is instructed to manipulate. In a second stage, the systems may magnify the area of interest and render images of the virtual environment using a 3D representation of the environment that magnifies the area of interest. The systems may then use the rendered images to make predictions related to key-frame poses associated with a future (e.g., next) state of the autonomous machine.
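A minimal sketch of the coarse-to-fine, two-stage idea described above, assuming generic coarse_model, fine_model, and render_views callables (all hypothetical interfaces, not the patented implementation): the first stage predicts a region of interest from full-scene renderings of the virtual environment, and the second stage re-renders magnified views around that region to predict the next key-frame pose.

    import numpy as np

    def two_stage_keyframe_prediction(point_cloud, coarse_model, fine_model,
                                      render_views, zoom=4.0):
        """Coarse-to-fine key-frame pose prediction sketch.

        point_cloud : (N, 6) array of xyz+rgb points reconstructed from sensor data.
        coarse_model, fine_model, render_views : assumed callables.
        """
        # Stage 1: render full-scene views of the virtual environment and predict
        # a coarse 3D point of interest (e.g., where the target object sits).
        scene_center = point_cloud[:, :3].mean(axis=0)
        coarse_views = render_views(point_cloud, center=scene_center, scale=1.0)
        roi_center = coarse_model(coarse_views)

        # Stage 2: re-render magnified views centered on the region of interest,
        # then predict the key-frame pose (e.g., a 6-DoF gripper pose) at finer detail.
        fine_views = render_views(point_cloud, center=roi_center, scale=zoom)
        keyframe_pose = fine_model(fine_views)
        return keyframe_pose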
3.
Publication No.: US20240273810A1
Publication Date: 2024-08-15
Application No.: US18430113
Filing Date: 2024-02-01
Applicant: NVIDIA Corporation
Inventor: Ankit Goyal , Jie Xu , Yijie Guo , Valts Blukis , Yu-Wei Chao , Dieter Fox
IPC: G06T15/10 , G05D1/243 , G05D101/15 , G06T7/55
CPC classification number: G06T15/10 , G05D1/2435 , G06T7/55 , G05D2101/15 , G06T2207/20084
Abstract: In various examples, a machine may generate, using sensor data capturing one or more views of an environment, a virtual environment including a 3D representation of the environment. The machine may render, using one or more virtual sensors in the virtual environment, one or more images of the 3D representation of the environment. The machine may apply the one or more images to one or more machine learning models (MLMs) trained to generate one or more predictions corresponding to the environment. The machine may perform one or more control operations based at least on the one or more predictions generated using the one or more MLMs.
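A hedged sketch of the perceive-render-predict-act flow described in this abstract; build_pointcloud, virtual_render, mlm, and controller are placeholder interfaces introduced only to show the data flow, not the claimed system.

    import numpy as np

    def perceive_and_act(rgbd_frames, camera_poses, intrinsics, build_pointcloud,
                         virtual_render, mlm, controller):
        """Illustrative pipeline: sensor data -> virtual 3D scene -> rendered views
        -> MLM predictions -> control operation."""
        # 1) Fuse posed RGB-D frames into a 3D representation (e.g., a point cloud).
        scene = build_pointcloud(rgbd_frames, camera_poses, intrinsics)

        # 2) Place virtual sensors in the reconstructed scene and render novel views.
        virtual_cams = [np.eye(4)]                      # hypothetical virtual camera pose(s)
        images = [virtual_render(scene, cam) for cam in virtual_cams]

        # 3) Run the trained machine learning model(s) on the rendered views.
        predictions = mlm(images)

        # 4) Convert the predictions into a control operation for the machine.
        return controller(predictions)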