THREE-DIMENSIONAL REASONING USING MULTI-STAGE INFERENCE FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20240371082A1

    Publication Date: 2024-11-07

    Application Number: US18772058

    Application Date: 2024-07-12

    Abstract: In various examples, an autonomous system may use a multi-stage process to solve three-dimensional (3D) manipulation tasks from a minimal number of demonstrations and predict key-frame poses with higher precision. In a first stage of the process, for example, the disclosed systems and methods may predict an area of interest in an environment using a virtual environment. The area of interest may correspond to a predicted location of an object in the environment, such as an object that an autonomous machine is instructed to manipulate. In a second stage, the systems may magnify the area of interest and render images of the virtual environment using a 3D representation of the environment that magnifies the area of interest. The systems may then use the rendered images to make predictions related to key-frame poses associated with a future (e.g., next) state of the autonomous machine.
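
    As an illustration only, the sketch below outlines a coarse-to-fine, two-stage pipeline of the general kind the abstract describes: a first stage localizes an area of interest in a scene point cloud, and a second stage magnifies that area before estimating a key-frame pose. The function names (predict_region_of_interest, zoom, predict_keyframe_pose) and the simple heuristics inside them are assumptions standing in for the learned models and rendering steps described above, not the claimed implementation.

# Hypothetical coarse-to-fine sketch; heuristics stand in for learned models.
import numpy as np

def predict_region_of_interest(points: np.ndarray) -> np.ndarray:
    """Stage 1: coarse prediction of where the target object is.
    A density centroid stands in for a learned coarse predictor."""
    return points.mean(axis=0)

def zoom(points: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """'Magnify' the area of interest by cropping to nearby points."""
    mask = np.linalg.norm(points - center, axis=1) < radius
    return points[mask]

def predict_keyframe_pose(cropped: np.ndarray) -> dict:
    """Stage 2: fine prediction from the magnified crop. A real system would
    render the cropped 3D representation into images and run a learned pose
    model; a centroid plus identity rotation stands in here."""
    return {"position": cropped.mean(axis=0), "rotation": np.eye(3)}

if __name__ == "__main__":
    scene = np.random.rand(10_000, 3) * 2.0           # placeholder scene points
    roi_center = predict_region_of_interest(scene)    # stage 1: coarse localization
    crop = zoom(scene, roi_center, radius=0.25)       # magnify the area of interest
    pose = predict_keyframe_pose(crop)                # stage 2: fine key-frame pose
    print("key-frame position:", pose["position"])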

    SIMULATING PHYSICAL INTERACTIONS FOR AUTOMATED SYSTEMS

    Publication Number: US20230294276A1

    Publication Date: 2023-09-21

    Application Number: US18148548

    Application Date: 2022-12-30

    CPC classification number: B25J9/1605 B25J9/163 G05B2219/39001

    Abstract: Approaches presented herein provide for simulation of human motion for human-robot interactions, such as may involve a handover of an object. Motion capture can be performed for a hand grasping and moving an object to a location and orientation appropriate for a handover, without a need for a robot to be present or an actual handover to occur. This motion data can be used to separately model the hand and the object for use in a handover simulation, where a component such as a physics engine may be used to ensure realistic modeling of the motion or behavior. During a simulation, a robot control model or algorithm can predict an optimal location and orientation to grasp an object, and an optimal path to move to that location and orientation, using a control model or algorithm trained, based at least in part, using the motion models for the hand and object.
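
    The following sketch is a loose, hypothetical illustration of the workflow the abstract outlines: motion-captured hand and object trajectories are modeled separately and replayed frame by frame, and a stand-in controller scores candidate gripper poses by proximity to the object while keeping clear of the hand. The class and function names (HandoverSim, score_grasp) and the scoring rule are assumptions; a real system would use a physics engine and a trained control model rather than kinematic replay and a hand-written score.

# Hypothetical handover-simulation sketch; replayed motion capture stands in
# for physics-based simulation, and a distance score stands in for a trained
# grasp-selection model.
import numpy as np

class HandoverSim:
    def __init__(self, hand_frames: np.ndarray, object_frames: np.ndarray):
        # hand_frames: (T, N, 3) hand keypoints; object_frames: (T, 3) object centers
        self.hand_frames = hand_frames
        self.object_frames = object_frames
        self.t = 0

    def step(self):
        """Advance the replayed motion by one frame (a physics engine would
        integrate dynamics here instead)."""
        self.t = min(self.t + 1, len(self.object_frames) - 1)
        return self.hand_frames[self.t], self.object_frames[self.t]

def score_grasp(candidate: np.ndarray, obj: np.ndarray, hand: np.ndarray,
                clearance: float = 0.05) -> float:
    """Higher is better: near the object, but outside a clearance radius of
    every hand keypoint."""
    if np.min(np.linalg.norm(hand - candidate, axis=1)) < clearance:
        return -np.inf                         # would collide with the hand
    return -np.linalg.norm(candidate - obj)    # otherwise prefer proximity

if __name__ == "__main__":
    T, N = 50, 21
    hand_capture = np.random.rand(T, N, 3)     # placeholder motion-capture data
    object_capture = np.random.rand(T, 3)
    sim = HandoverSim(hand_capture, object_capture)
    for _ in range(T - 1):
        hand_t, obj_t = sim.step()
    candidates = obj_t + 0.1 * np.random.randn(64, 3)   # sampled gripper positions
    best = max(candidates, key=lambda c: score_grasp(c, obj_t, hand_t))
    print("selected grasp position:", best)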

    MACHINE LEARNING CONTROL OF OBJECT HANDOVERS

    Publication Number: US20220032454A1

    Publication Date: 2022-02-03

    Application Number: US16941339

    Application Date: 2020-07-28

    Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
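
    Purely as an illustrative sketch of the kind of model the abstract describes, the PyTorch code below maps a depth-camera point cloud of a hand holding an object to a 6-DoF grasp pose (position plus unit quaternion). The PointNet-style shared MLP with max pooling and the layer sizes are assumptions, not the network actually disclosed; avoiding the fingers would come from training data and loss design rather than anything shown here.

# Hypothetical point-cloud-to-grasp-pose network sketch (requires PyTorch).
import torch
import torch.nn as nn

class GraspPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # per-point feature extractor, shared across all points
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # global head: 3 values for position + 4 for an unnormalized quaternion
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 7),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) from the depth camera
        feats = self.point_mlp(points)           # (batch, num_points, 128)
        global_feat = feats.max(dim=1).values    # order-invariant pooling
        out = self.head(global_feat)             # (batch, 7)
        pos, quat = out[:, :3], out[:, 3:]
        quat = quat / quat.norm(dim=1, keepdim=True)   # normalize to a unit quaternion
        return torch.cat([pos, quat], dim=1)

if __name__ == "__main__":
    cloud = torch.rand(1, 2048, 3)   # placeholder point cloud of hand + object
    pose = GraspPoseNet()(cloud)
    print("predicted grasp pose (xyz + quaternion):", pose)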

    IMITATION LEARNING SYSTEM
    Invention Application

    Publication Number: US20210081752A1

    Publication Date: 2021-03-18

    Application Number: US16931211

    Application Date: 2020-07-16

    Abstract: Apparatuses, systems, and techniques to identify a goal of a demonstration. In at least one embodiment, video data of a demonstration is analyzed to identify a goal. Object trajectories identified in the video data are analyzed with respect to a task predicate satisfied by a respective object trajectory, and with respect to a motion predicate. Analysis of a trajectory with respect to the motion predicate is used to assess the intentionality of that trajectory with respect to the goal.
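
    As a hypothetical illustration of the predicate-based analysis the abstract mentions, the sketch below checks each object trajectory from a demonstration against a task predicate (did the object end up in a goal region?) and a motion predicate (did it move there directly enough to look intentional?). The predicate definitions, tolerances, and the straight-line efficiency proxy for intentionality are assumptions, not the claimed method.

# Hypothetical goal-inference sketch over demonstrated object trajectories.
import numpy as np

def task_predicate(traj: np.ndarray, goal: np.ndarray, tol: float = 0.05) -> bool:
    """True if the trajectory's final position lies within `tol` of the goal."""
    return np.linalg.norm(traj[-1] - goal) < tol

def motion_predicate(traj: np.ndarray, min_efficiency: float = 0.8) -> bool:
    """True if the path is close to a straight line toward its endpoint,
    used here as a crude proxy for intentional motion."""
    path_len = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
    direct = np.linalg.norm(traj[-1] - traj[0])
    return path_len > 0 and (direct / path_len) >= min_efficiency

def infer_goals(trajectories: dict, goals: dict) -> list:
    """Return (object, goal) pairs whose trajectories satisfy both predicates."""
    inferred = []
    for obj, traj in trajectories.items():
        for name, goal in goals.items():
            if task_predicate(traj, goal) and motion_predicate(traj):
                inferred.append((obj, name))
    return inferred

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 30)[:, None]
    # a nearly straight demonstration trajectory toward the tray position
    demo = {"cup": (1 - t) * np.array([0.0, 0.0, 0.0]) + t * np.array([0.5, 0.2, 0.0])}
    goals = {"place_on_tray": np.array([0.5, 0.2, 0.0])}
    print("inferred goals:", infer_goals(demo, goals))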
