-
Publication No.: US20240273810A1
Publication Date: 2024-08-15
Application No.: US18430113
Filing Date: 2024-02-01
Applicant: NVIDIA Corporation
Inventor: Ankit Goyal , Jie Xu , Yijie Guo , Valts Blukis , Yu-Wei Chao , Dieter Fox
IPC: G06T15/10 , G05D1/243 , G05D101/15 , G06T7/55
CPC classification number: G06T15/10 , G05D1/2435 , G06T7/55 , G05D2101/15 , G06T2207/20084
Abstract: In various examples, a machine may generate, using sensor data capturing one or more views of an environment, a virtual environment including a 3D representation of the environment. The machine may render, using one or more virtual sensors in the virtual environment, one or more images of the 3D representation of the environment. The machine may apply the one or more images to one or more machine learning models (MLMs) trained to generate one or more predictions corresponding to the environment. The machine may perform one or more control operations based at least on the one or more predictions generated using the one or more MLMs.
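The pipeline the abstract describes — reconstruct a 3D representation of the environment, render it from virtual sensors, and feed the renders to a trained model — can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the point-cloud scene, the orthographic `render_virtual_view` renderer, and the linear `predict` readout are all simplifying assumptions made for the example.

```python
import numpy as np

def render_virtual_view(points, cam_axis, resolution=32):
    """Orthographically project a 3D point cloud onto the plane
    perpendicular to cam_axis, producing a depth-style image."""
    axes = [i for i in range(3) if i != cam_axis]
    img = np.zeros((resolution, resolution))
    # Normalize point coordinates into [0, 1) for pixel binning.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    norm = (points - mins) / np.maximum(maxs - mins, 1e-8)
    for p in norm:
        u = min(int(p[axes[0]] * resolution), resolution - 1)
        v = min(int(p[axes[1]] * resolution), resolution - 1)
        img[v, u] = max(img[v, u], p[cam_axis])  # keep topmost point per pixel
    return img

def predict(images, weights):
    """Stand-in for the trained MLM: a linear readout over the rendered views."""
    feats = np.concatenate([img.reshape(-1) for img in images])
    return feats @ weights

rng = np.random.default_rng(0)
cloud = rng.uniform(size=(500, 3))                            # stand-in 3D reconstruction
views = [render_virtual_view(cloud, ax) for ax in range(3)]   # three virtual sensors
w = rng.normal(size=3 * 32 * 32)
score = predict(views, w)                                     # prediction driving control
```

In the real system the rendered views would go to a trained network and the prediction would feed the machine's control operations; here the score is only a placeholder output.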
-
Publication No.: US20240371082A1
Publication Date: 2024-11-07
Application No.: US18772058
Filing Date: 2024-07-12
Applicant: NVIDIA Corporation
Inventor: Ankit Goyal , Valts Blukis , Jie Xu , Yijie Guo , Yu-Wei Chao , Dieter Fox
Abstract: In various examples, an autonomous system may use a multi-stage process to solve three-dimensional (3D) manipulation tasks from a minimal number of demonstrations and predict key-frame poses with higher precision. In a first stage of the process, for example, the disclosed systems and methods may predict an area of interest in an environment using a virtual environment. The area of interest may correspond to a predicted location of an object in the environment, such as an object that an autonomous machine is instructed to manipulate. In a second stage, the systems may magnify the area of interest and render images of the virtual environment using a 3D representation of the environment that magnifies the area of interest. The systems may then use the rendered images to make predictions related to key-frame poses associated with a future (e.g., next) state of the autonomous machine.
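The coarse-to-fine scheme described above — first predict an area of interest, then "magnify" that region and predict the key-frame pose from the zoomed-in view only — can be sketched as below. The coarse heatmap, the grid resolution, and the mean-based fine predictor are hypothetical stand-ins for the trained stage networks.

```python
import numpy as np

def coarse_stage(heat, grid):
    """Stage 1: pick the area of interest as the argmax of a coarse
    heatmap over the workspace grid."""
    idx = np.unravel_index(np.argmax(heat), heat.shape)
    return grid[idx]  # 3D center of the predicted region

def fine_stage(points, center, zoom=0.2):
    """Stage 2: crop ('magnify') the region around the center and
    predict a key-frame position from the zoomed-in points only."""
    mask = np.all(np.abs(points - center) < zoom, axis=1)
    local = points[mask]
    return local.mean(axis=0) if len(local) else center

rng = np.random.default_rng(1)
scene = rng.uniform(-1, 1, size=(2000, 3))   # stand-in scene point cloud
# Coarse heatmap over an 8x8x8 grid (stand-in for the stage-1 network).
lin = np.linspace(-1, 1, 8)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
heat = rng.uniform(size=(8, 8, 8))
center = coarse_stage(heat, grid)
keyframe_pos = fine_stage(scene, center)
```

The point of the two stages is precision: the fine predictor only ever sees points inside the magnified region, so its output resolution is decoupled from the size of the whole workspace.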
-
Publication No.: US20240261971A1
Publication Date: 2024-08-08
Application No.: US18232217
Filing Date: 2023-08-09
Applicant: NVIDIA Corporation
Inventor: Yuzhe Qin , Wei Yang , Yu-Wei Chao , Dieter Fox
CPC classification number: B25J9/1689 , B25J9/1697 , B25J19/023 , G06T7/50 , G06T7/70 , G06V10/82 , G06V40/10 , G06T2207/10028 , G06T2207/20084
Abstract: Apparatuses, systems, and techniques to generate control commands. In at least one embodiment, control commands are generated based on, for example, one or more images depicting a hand.
-
Publication No.: US20230294277A1
Publication Date: 2023-09-21
Application No.: US17854730
Filing Date: 2022-06-30
Applicant: Nvidia Corporation
Inventor: Wei Yang , Balakumar Sundaralingam , Christopher Jason Paxton , Maya Cakmak , Yu-Wei Chao , Dieter Fox , Iretiayo Akinola
IPC: B25J9/16 , G05B19/4155
CPC classification number: B25J9/1612 , G05B19/4155 , B25J9/1666 , B25J9/1605 , G05B2219/50391 , G05B2219/40269
Abstract: Approaches presented herein provide for predictive control of a robot or automated assembly in performing a specific task. A task to be performed may depend on the location and orientation of the robot performing that task. A predictive control system can determine a state of a physical environment at each of a series of time steps, and can select an appropriate location and orientation at each of those time steps. At each time step, an optimization process can determine a sequence of future motions or accelerations that comply with one or more constraints on that motion. A respective action in the sequence may then be performed and another motion sequence predicted for the next time step, which helps drive robot motion based on predicted future motion and allows for quick reactions.
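The loop the abstract describes — optimize a constrained action sequence at each time step, execute an action, then re-plan — is the classic receding-horizon (model-predictive control) pattern. A minimal random-shooting sketch, assuming toy 2-D integrator dynamics and a bounded-step constraint in place of the patent's optimizer:

```python
import numpy as np

def plan(state, goal, horizon=10, candidates=64, max_step=0.1, rng=None):
    """Shooting-method planner: sample candidate action sequences that
    satisfy the per-step bound, score each by final distance to the
    goal, and return the best sequence."""
    rng = rng or np.random.default_rng()
    seqs = rng.uniform(-max_step, max_step, size=(candidates, horizon, 2))
    finals = state + seqs.sum(axis=1)        # simple integrator dynamics
    best = np.argmin(np.linalg.norm(finals - goal, axis=1))
    return seqs[best]

rng = np.random.default_rng(2)
state, goal = np.zeros(2), np.array([1.0, 1.0])
for _ in range(50):                   # receding-horizon control loop
    seq = plan(state, goal, rng=rng)  # re-plan a full future sequence...
    state = state + seq[0]            # ...but execute only its first action
```

Re-planning a whole sequence but executing only its first action is what gives the quick reactions the abstract mentions: each new observation immediately influences the next command while the rest of the plan stays advisory.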
-
Publication No.: US20230294276A1
Publication Date: 2023-09-21
Application No.: US18148548
Filing Date: 2022-12-30
Applicant: Nvidia Corporation
Inventor: Yu-Wei Chao , Yu Xiang , Wei Yang , Dieter Fox , Chris Paxton , Balakumar Sundaralingam , Maya Cakmak
IPC: B25J9/16
CPC classification number: B25J9/1605 , B25J9/163 , G05B2219/39001
Abstract: Approaches presented herein provide for simulation of human motion for human-robot interactions, such as may involve a handover of an object. Motion capture can be performed for a hand grasping and moving an object to a location and orientation appropriate for a handover, without a need for a robot to be present or an actual handover to occur. This motion data can be used to separately model the hand and the object for use in a handover simulation, where a component such as a physics engine may be used to ensure realistic modeling of the motion or behavior. During a simulation, a robot control model or algorithm can predict an optimal location and orientation to grasp an object, and an optimal path to move to that location and orientation, using a control model or algorithm trained, based at least in part, using the motion models for the hand and object.
-
Publication No.: US20230202031A1
Publication Date: 2023-06-29
Application No.: US18116118
Filing Date: 2023-03-01
Applicant: NVIDIA Corporation
Inventor: Wei Yang , Christopher Jason Paxton , Yu-Wei Chao , Dieter Fox
CPC classification number: B25J9/1612 , G06T7/50 , G06V20/30 , G06V20/64 , G06V40/107 , G06T2207/10028 , B25J9/1697 , B25J9/16
Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
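A geometric stand-in for what the abstract's deep network learns — approach the object from the side facing away from the hand so the gripper never closes on the fingers — can be sketched as below. The centroid-based heuristic and the `standoff` distance are illustrative assumptions, not the patented network.

```python
import numpy as np

def grasp_pose(object_points, hand_points, standoff=0.08):
    """Pick a gripper position on the object side facing away from the
    hand, so the approach direction avoids the human's fingers."""
    obj_c = object_points.mean(axis=0)
    hand_c = hand_points.mean(axis=0)
    away = obj_c - hand_c
    away = away / np.linalg.norm(away)
    position = obj_c + standoff * away   # hover point on the far side
    return position, away                # gripper approaches along -away

rng = np.random.default_rng(3)
# Stand-ins for the depth camera's segmented point clouds.
obj = rng.normal([0.5, 0.0, 0.3], 0.02, size=(200, 3))
hand = rng.normal([0.5, -0.1, 0.3], 0.03, size=(300, 3))
pos, approach = grasp_pose(obj, hand)
```

The real system replaces this geometry with a network trained on point clouds of hands holding objects, which can handle occlusion and finger placement far better than a centroid offset.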
-
Publication No.: US20240157557A1
Publication Date: 2024-05-16
Application No.: US18125503
Filing Date: 2023-03-23
Applicant: NVIDIA Corporation
Inventor: Sammy Joe Christen , Wei Yang , Claudia Perez D'Arpino , Dieter Fox , Yu-Wei Chao
IPC: B25J9/16 , G05B19/4155 , G06N3/08
CPC classification number: B25J9/1666 , B25J9/161 , B25J9/1612 , B25J9/163 , B25J9/1697 , G05B19/4155 , G06N3/08 , G05B2219/40202
Abstract: Apparatuses, systems, and techniques to control a real-world and/or virtual device (e.g., a robot). In at least one embodiment, the device is controlled based, at least in part, on one or more neural networks, for example. Parameter values for the neural network(s) may be obtained by training the neural network(s) to control movement of a first agent with respect to at least one first target while avoiding collision with at least one stationary first holder of the at least one first target, and updating the parameter values by training the neural network(s) to control movement of a second agent with respect to at least one second target while avoiding collision with at least one non-stationary second holder of the at least one second target.
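The two-phase training the abstract describes — first learn to reach a target on a stationary holder, then continue training with a non-stationary holder — is a curriculum: the second phase warm-starts from the first phase's parameters. A toy sketch, assuming a 2-parameter constant-velocity policy, a random-search "trainer", and made-up dynamics in place of the patent's neural networks:

```python
import numpy as np

def episode_cost(params, holder_speed, steps=20):
    """Roll out an episode: the agent should close on a target carried
    by a holder while staying clear of the holder body itself."""
    agent = np.zeros(2)
    holder = np.array([1.0, 0.0])
    offset = np.array([0.0, 0.2])               # target rides above the holder
    cost = 0.0
    for _ in range(steps):
        holder = holder + holder_speed * np.array([0.01, 0.0])
        agent = agent + 0.1 * np.tanh(params)   # constant-velocity policy
        cost += np.linalg.norm(agent - (holder + offset))
        if np.linalg.norm(agent - holder) < 0.15:
            cost += 5.0                         # collision with the holder
    return cost

def train(params, holder_speed, iters, rng):
    """Random-search stand-in for gradient training: keep a perturbation
    whenever it lowers the episode cost."""
    best = episode_cost(params, holder_speed)
    for _ in range(iters):
        cand = params + rng.normal(scale=0.1, size=2)
        c = episode_cost(cand, holder_speed)
        if c < best:
            params, best = cand, c
    return params

rng = np.random.default_rng(4)
params = train(np.zeros(2), holder_speed=0.0, iters=150, rng=rng)  # phase 1: stationary holder
params = train(params, holder_speed=1.0, iters=150, rng=rng)       # phase 2: moving holder
```

The key design choice the curriculum captures is that phase 2 updates the phase-1 parameter values rather than starting from scratch, so the harder moving-holder task begins from a policy that already reaches the target.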
-
Publication No.: US11597078B2
Publication Date: 2023-03-07
Application No.: US16941339
Filing Date: 2020-07-28
Applicant: NVIDIA Corporation
Inventor: Wei Yang , Christopher Jason Paxton , Yu-Wei Chao , Dieter Fox
Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
-
Publication No.: US20220032454A1
Publication Date: 2022-02-03
Application No.: US16941339
Filing Date: 2020-07-28
Applicant: NVIDIA Corporation
Inventor: Wei Yang , Christopher Jason Paxton , Yu-Wei Chao , Dieter Fox
Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
-
Publication No.: US20210081752A1
Publication Date: 2021-03-18
Application No.: US16931211
Filing Date: 2020-07-16
Applicant: NVIDIA Corporation
Inventor: Yu-Wei Chao , De-An Huang , Christopher Jason Paxton , Animesh Garg , Dieter Fox
Abstract: Apparatuses, systems, and techniques to identify a goal of a demonstration. In at least one embodiment, video data of a demonstration is analyzed to identify a goal. Object trajectories identified in the video data are analyzed with respect to a task predicate satisfied by a respective object trajectory, and with respect to a motion predicate. Analysis of a trajectory with respect to the motion predicate is used to assess the intentionality of that trajectory with respect to the goal.