Techniques for generating controllers for robots

    Publication number: US11607806B2

    Publication date: 2023-03-21

    Application number: US16892216

    Application date: 2020-06-03

    Applicant: AUTODESK, INC.

    Abstract: A model generator implements a data-driven approach to generating a robot model that describes one or more physical properties of a robot. The model generator generates a set of basis functions that generically describes a range of physical properties of a wide range of systems. The model generator then generates a set of coefficients corresponding to the set of basis functions based on one or more commands issued to the robot, one or more corresponding end effector positions implemented by the robot, and a sparsity constraint. The model generator generates the robot model by combining the set of basis functions with the set of coefficients. In doing so, the model generator disables specific basis functions that do not describe physical properties associated with the robot. The robot model can subsequently be used within a robot controller to generate commands for controlling the robot.
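    The abstract describes sparse regression over a library of candidate basis functions. The sketch below is a hypothetical illustration of that idea, not the patented method: it builds a small basis library, fits coefficients to logged (command, end-effector position) pairs by least squares, and applies iterative hard thresholding as the sparsity constraint so that basis functions that do not describe the robot are disabled (zeroed out). The specific basis functions, threshold value, and synthetic data are assumptions.

```python
import numpy as np

def basis_library(commands):
    """Evaluate a generic library of candidate basis functions for each command."""
    u = commands
    return np.column_stack([
        np.ones_like(u),   # constant term
        u,                 # linear term
        u ** 2,            # quadratic term
        np.sin(u),         # periodic terms
        np.cos(u),
    ])

def fit_sparse_model(commands, positions, threshold=0.05, iters=10):
    """Least-squares fit over the basis library with iterative hard thresholding,
    which zeroes out (disables) basis functions that do not describe the robot."""
    theta = basis_library(commands)
    coeffs, *_ = np.linalg.lstsq(theta, positions, rcond=None)
    for _ in range(iters):
        small = np.abs(coeffs) < threshold            # sparsity constraint
        coeffs[small] = 0.0
        active = ~small
        if active.any():
            # Refit only the surviving basis functions.
            coeffs[active], *_ = np.linalg.lstsq(theta[:, active], positions, rcond=None)
    return coeffs

# Synthetic stand-in data: commands issued to the robot and the end-effector
# positions they produced (assumed true model: 0.8*u + 0.3*sin(u) plus noise).
rng = np.random.default_rng(0)
commands = np.linspace(-3.0, 3.0, 200)
positions = 0.8 * commands + 0.3 * np.sin(commands) + 0.01 * rng.standard_normal(200)
print(fit_sparse_model(commands, positions))
```

    Thresholded least squares is used here only as one simple way to impose sparsity; any sparsity-promoting regression (for example, an L1 penalty) would serve the same illustrative role of combining a generic basis library with a small set of surviving coefficients.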

    Shaped-based techniques for exploring design spaces

    Publication number: US11380045B2

    Publication date: 2022-07-05

    Application number: US16174110

    Application date: 2018-10-29

    Applicant: Autodesk, Inc.

    Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
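    As a rough, hypothetical sketch of the encoder pipeline the abstract describes (not the patented implementation), the PyTorch module below runs a shared first CNN block over each rendered view, tiles the resulting view activations into one grid, and passes that tiled activation through a second CNN block to produce a fixed-size shape embedding. The layer sizes, the four-view 2x2 tiling, and the 64x64 view resolution are assumptions.

```python
import torch
import torch.nn as nn

class MultiViewShapeEncoder(nn.Module):
    def __init__(self, embedding_size=128):
        super().__init__()
        # First CNN block: shared across views, produces one view activation per view.
        self.view_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Second CNN block: consumes the tiled activation.
        self.tile_cnn = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_embedding = nn.Linear(64, embedding_size)

    def forward(self, views):
        # views: (batch, 4, 1, H, W) -- four rendered views of each 3D shape.
        b, v, c, h, w = views.shape
        acts = self.view_cnn(views.reshape(b * v, c, h, w))      # view activations
        acts = acts.reshape(b, v, *acts.shape[1:])
        # Aggregate by tiling the four view activations into a 2x2 spatial grid.
        tiled = torch.cat([torch.cat([acts[:, 0], acts[:, 1]], dim=-1),
                           torch.cat([acts[:, 2], acts[:, 3]], dim=-1)], dim=-2)
        pooled = self.tile_cnn(tiled).flatten(1)
        return self.to_embedding(pooled)                          # shape embedding

# Example: a batch of two shapes, each rendered from four 64x64 views.
views = torch.randn(2, 4, 1, 64, 64)
print(MultiViewShapeEncoder()(views).shape)  # expected: torch.Size([2, 128])
```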

    Shaped-based techniques for exploring design spaces

    Publication number: US11126330B2

    Publication date: 2021-09-21

    Application number: US16174119

    Application date: 2018-10-29

    Applicant: Autodesk, Inc.

    Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.

    Shaped-based techniques for exploring design spaces

    Publication number: US11928773B2

    Publication date: 2024-03-12

    Application number: US17678609

    Application date: 2022-02-23

    Applicant: AUTODESK, INC.

    CPC classification number: G06T15/20 G06F30/00 G06N3/04 G06N3/088

    Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
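    To illustrate the training loop this family of abstracts outlines, the self-contained sketch below pairs a miniature version of the multi-view encoder with a simple decoder that re-constructs the views from the shape embedding, then updates both CNN blocks on the difference between the original and re-constructed views. The tiny layer sizes, 32x32 views, and mean-squared-error loss are assumptions made for brevity, not details taken from the patent.

```python
import torch
import torch.nn as nn

class TinyViewEncoder(nn.Module):
    def __init__(self, num_views=4, embedding_size=64):
        super().__init__()
        self.num_views = num_views
        self.view_cnn = nn.Sequential(            # first CNN block (per view)
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU())
        self.tile_cnn = nn.Sequential(            # second CNN block (tiled activation)
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embedding_size))

    def forward(self, views):                     # views: (B, 4, 1, 32, 32)
        b, v, c, h, w = views.shape
        acts = self.view_cnn(views.reshape(b * v, c, h, w))
        acts = acts.reshape(b, v, *acts.shape[1:])
        # Aggregate the per-view activations into one tiled activation.
        tiled = torch.cat([torch.cat([acts[:, 0], acts[:, 1]], dim=-1),
                           torch.cat([acts[:, 2], acts[:, 3]], dim=-1)], dim=-2)
        return self.tile_cnn(tiled)               # shape embedding

class TinyViewDecoder(nn.Module):
    def __init__(self, num_views=4, embedding_size=64):
        super().__init__()
        self.num_views = num_views
        self.net = nn.Sequential(
            nn.Linear(embedding_size, num_views * 32 * 32), nn.Sigmoid())

    def forward(self, embedding):                 # re-constructed views
        return self.net(embedding).reshape(-1, self.num_views, 1, 32, 32)

encoder, decoder = TinyViewEncoder(), TinyViewDecoder()
optimizer = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
views = torch.rand(8, 4, 1, 32, 32)               # stand-in rendered views

for step in range(5):                             # training operations
    recon = decoder(encoder(views))
    loss = nn.functional.mse_loss(recon, views)   # views vs. re-constructed views
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, float(loss))
```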
