-
Publication No.: US20190076949A1
Publication Date: 2019-03-14
Application No.: US15702637
Filing Date: 2017-09-12
Applicant: AUTODESK, INC.
Inventor: Evan ATHERTON, David THOMASSON, Heather KERRICK, Hui LI
Abstract: A control application implements computer vision techniques to cause a positioning robot and a welding robot to perform fabrication operations. The control application causes the positioning robot to place elements of a structure at certain positions based on real-time visual feedback captured by the positioning robot. The control application also causes the welding robot to weld those elements into place based on real-time visual feedback captured by the welding robot. By analyzing the real-time visual feedback captured by both robots, the control application adjusts the positioning and welding operations in real time.
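The closed-loop idea in this abstract can be illustrated with a minimal sketch: compare the pose reported by visual feedback against the target placement pose and apply a proportional correction before handing off to welding. This is not the patented implementation; the function names (observe_element_pose, place_element), the noise levels, and the tolerance are hypothetical placeholders.

```python
# Minimal sketch of vision-guided placement correction (illustrative only).
import numpy as np

def observe_element_pose(true_pose: np.ndarray) -> np.ndarray:
    """Stand-in for the positioning robot's camera: returns a noisy pose estimate (mm)."""
    return true_pose + np.random.normal(scale=0.05, size=3)

def correction_step(target_pose, observed_pose, gain=0.8):
    """Proportional correction toward the target placement pose."""
    error = target_pose - observed_pose
    return gain * error, np.linalg.norm(error)

def place_element(target_pose, tolerance_mm=0.2, max_iters=20):
    """Iteratively nudge the element until the observed pose is within tolerance."""
    pose = target_pose + np.random.normal(scale=2.0, size=3)  # initial placement error
    for _ in range(max_iters):
        observed = observe_element_pose(pose)
        delta, err = correction_step(target_pose, observed)
        if err < tolerance_mm:
            return pose, True          # within tolerance: hand off to the welding robot
        pose = pose + delta            # apply the corrective move
    return pose, False

if __name__ == "__main__":
    final_pose, ok = place_element(np.array([100.0, 250.0, 30.0]))
    print("placed at", np.round(final_pose, 3), "ready to weld:", ok)
```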
-
Publication No.: US20190337161A1
Publication Date: 2019-11-07
Application No.: US16513548
Filing Date: 2019-07-16
Applicant: AUTODESK, INC.
Inventor: Evan ATHERTON, David THOMASSON, Heather KERRICK, Maurice CONTI
Abstract: One embodiment of the present invention sets forth a technique for determining a location of an object that is being manipulated or processed by a robot. The technique includes capturing a digital image of the object while the object is disposed by the robot within an imaging space, wherein the digital image includes a direct view of the object and a reflected view of the object, detecting a visible feature of the object in the direct view and the visible feature of the object in the reflected view, and computing a first location of the visible feature in a first direction based on a position of the visible feature in the direct view. The technique further includes computing a second location of the visible feature in a second direction based on a position of the visible feature in the reflected view and causing the robot to move the object to a processing station based at least in part on the first location and the second location.
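A rough sense of the two-view measurement described above can be given with a short sketch: a single image contains both a direct view and a mirror-reflected view of the object, so one feature seen in both views yields coordinates along two different directions. The orthographic scaling, the 45-degree mirror geometry, and all names here are simplifying assumptions, not the patented method.

```python
# Minimal sketch: recover two coordinate directions from a direct view and a reflected view.
import numpy as np

MM_PER_PIXEL = 0.1          # assumed calibration: image scale
MIRROR_OFFSET_PX = 640      # assumed column where the reflected view begins

def feature_location(direct_px, reflected_px):
    """Combine a feature's pixel position in both views into (x, y) in millimetres.

    direct_px    -- (row, col) of the feature in the direct view
    reflected_px -- (row, col) of the same feature in the reflected view
    """
    # First direction: horizontal position read straight from the direct view.
    x_mm = direct_px[1] * MM_PER_PIXEL
    # Second direction: with a 45-degree mirror, the reflected view exposes the axis
    # the camera cannot see directly; measure it from the reflected image region.
    y_mm = (reflected_px[1] - MIRROR_OFFSET_PX) * MM_PER_PIXEL
    return np.array([x_mm, y_mm])

if __name__ == "__main__":
    loc = feature_location(direct_px=(220, 315), reflected_px=(220, 790))
    print("feature location (mm):", loc)   # input to planning the move to the processing station
```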
-
Publication No.: US20180345496A1
Publication Date: 2018-12-06
Application No.: US15995005
Filing Date: 2018-05-31
Applicant: AUTODESK, INC.
Inventor: Hui LI, Evan Patrick ATHERTON, Erin BRADNER, Nicholas COTE, Heather KERRICK
IPC: B25J9/16
Abstract: One embodiment of the present invention sets forth a technique for controlling the execution of a physical process. The technique includes receiving, as input to a machine learning model that is configured to adapt a simulation of the physical process executing in a virtual environment to a physical world, simulated output for controlling how the physical process performs a task in the virtual environment and real-world data collected from the physical process performing the task in the physical world. The technique also includes performing, by the machine learning model, one or more operations on the simulated output and the real-world data to generate augmented output. The technique further includes transmitting the augmented output to the physical process to control how the physical process performs the task in the physical world.
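The sim-to-real adaptation step can be sketched as a small learned model that takes the simulator's control output together with real-world data and produces an augmented output for the physical process. The linear least-squares adapter below, and every name in it, is an illustrative assumption rather than the machine learning model described in the patent.

```python
# Minimal sketch: learn a mapping (simulated output, real-world data) -> augmented output.
import numpy as np

class SimToRealAdapter:
    """Linear model mapping simulated commands plus real-world observations to augmented commands."""

    def fit(self, sim_out, real_data, target_out):
        X = np.hstack([sim_out, real_data, np.ones((len(sim_out), 1))])
        self.coef, *_ = np.linalg.lstsq(X, target_out, rcond=None)
        return self

    def augment(self, sim_out, real_data):
        X = np.hstack([sim_out, real_data, np.ones((len(sim_out), 1))])
        return X @ self.coef   # augmented output transmitted to the physical process

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sim = rng.normal(size=(200, 3))               # simulated control outputs for the task
    real = rng.normal(size=(200, 2))              # data collected from the physical process
    true = sim * 1.1 + 0.3 * real[:, :1] + 0.05   # commands that actually work in the real world
    adapter = SimToRealAdapter().fit(sim, real, true)
    print(adapter.augment(sim[:1], real[:1]))     # augmented command for one control step
```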
-
Publication No.: US20180341730A1
Publication Date: 2018-11-29
Application No.: US15607289
Filing Date: 2017-05-26
Applicant: AUTODESK, INC.
Inventor: Evan Patrick ATHERTON, David THOMASSON, Maurice Ugo CONTI, Heather KERRICK, Nicholas COTE
Abstract: A robotic assembly cell is configured to generate a physical mesh of physical polygons based on a simulated mesh of simulated triangles. A control application configured to operate the assembly cell selects a simulated polygon in the simulated mesh and then causes a positioning robot in the cell to obtain a physical polygon that is similar to the simulated polygon. The positioning robot positions the polygon on the physical mesh, and a welding robot in the cell then welds the polygon to the mesh. The control application captures data that reflects how the physical polygon is actually positioned on the physical mesh, and then updates the simulated mesh to be geometrically consistent with the physical mesh. In doing so, the control application may execute a multi-objective solver to generate an updated simulated mesh that meets specific design criteria.
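The match-place-update loop in this abstract can be illustrated briefly: pick a simulated triangle, find the most similar physical triangle in stock, and after welding, overwrite the simulated geometry with the as-built measurement so the two meshes stay consistent. Edge-length similarity and the array layout are illustrative assumptions; the multi-objective solver mentioned in the abstract is not modeled here.

```python
# Minimal sketch: match a physical triangle to a simulated one, then update the simulated mesh.
import numpy as np

def edge_lengths(tri):
    """Sorted edge lengths of a triangle given as a (3, 3) array of vertex coordinates."""
    a, b, c = tri
    return np.sort([np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a)])

def most_similar(sim_tri, stock_triangles):
    """Index of the physical triangle whose edge lengths best match the simulated one."""
    target = edge_lengths(sim_tri)
    scores = [np.linalg.norm(edge_lengths(t) - target) for t in stock_triangles]
    return int(np.argmin(scores))

def update_simulated_mesh(sim_mesh, tri_index, measured_tri):
    """Replace a simulated triangle with the as-built geometry captured after welding."""
    sim_mesh[tri_index] = measured_tri
    return sim_mesh

if __name__ == "__main__":
    sim_mesh = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]]], dtype=float)
    stock = [np.array([[0, 0, 0], [1.02, 0, 0], [0, 0.97, 0]]),
             np.array([[0, 0, 0], [2.0, 0, 0], [0, 2.0, 0]])]
    i = most_similar(sim_mesh[0], stock)
    as_built = stock[i] + np.array([0.01, -0.02, 0.0])   # measured placement after welding
    sim_mesh = update_simulated_mesh(sim_mesh, 0, as_built)
    print("selected stock triangle:", i)
    print("updated simulated triangle:\n", sim_mesh[0])
```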
-
Publication No.: US20180304550A1
Publication Date: 2018-10-25
Application No.: US15495945
Filing Date: 2017-04-24
Applicant: AUTODESK, INC.
Inventor: Evan ATHERTON, David THOMASSON, Maurice Ugo CONTI, Heather KERRICK, Nicholas COTE
CPC classification number: B29C67/0088 , B23K9/04 , B25J11/005 , B33Y10/00 , B33Y50/02 , G05B19/29 , G05B2219/40557 , G05B2219/45135
Abstract: A robot system is configured to fabricate three-dimensional (3D) objects using closed-loop, computer vision-based control. The robot system initiates fabrication based on a set of fabrication paths along which material is to be deposited. During deposition of material, the robot system captures video data and processes that data to determine the specific locations where the material is deposited. Based on these locations, the robot system adjusts future deposition locations to compensate for deviations from the fabrication paths. Additionally, because the robot system includes a 6-axis robotic arm, the robot system can deposit material at any location, along any pathway, or across any surface. Accordingly, the robot system is capable of fabricating a 3D object with multiple non-parallel, non-horizontal, and/or non-planar layers.
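The deviation-compensation idea can be sketched in a few lines: compare where material was actually deposited (as measured from video) against the planned fabrication path, then shift the upcoming waypoints to cancel the running error. The moving-average error, the proportional gain, and all names are illustrative assumptions, not the patented controller.

```python
# Minimal sketch: adjust future deposition waypoints based on observed deviation.
import numpy as np

def deposition_error(planned_pts, observed_pts):
    """Mean 3D offset between planned and vision-observed deposition points."""
    return np.mean(observed_pts - planned_pts, axis=0)

def corrected_waypoints(upcoming_pts, error, gain=0.7):
    """Shift future waypoints opposite to the measured error (proportional correction)."""
    return upcoming_pts - gain * error

if __name__ == "__main__":
    planned = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    observed = planned + np.array([0.05, -0.02, 0.01])      # vision-measured bead centroids
    err = deposition_error(planned, observed)
    future = np.array([[3.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
    print("corrected future waypoints:\n", corrected_waypoints(future, err))
```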
-
Publication No.: US20170151676A1
Publication Date: 2017-06-01
Application No.: US15363956
Filing Date: 2016-11-29
Applicant: Autodesk, Inc.
Inventor: Evan ATHERTON, David THOMASSON, Heather KERRICK, Maurice CONTI
CPC classification number: B25J13/088 , B25J11/00 , H04N5/2256
Abstract: One embodiment of the present invention sets forth a technique for determining a location of an object that is being manipulated or processed by a robot. The technique includes capturing a digital image of the object while the object is disposed by the robot within an imaging space, wherein the digital image includes a direct view of the object and a reflected view of the object, detecting a visible feature of the object in the direct view and the visible feature of the object in the reflected view, and computing a first location of the visible feature in a first direction based on a position of the visible feature in the direct view. The technique further includes computing a second location of the visible feature in a second direction based on a position of the visible feature in the reflected view and causing the robot to move the object to a processing station based at least in part on the first location and the second location.
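This abstract shares its measurement technique with US20190337161A1 above; to avoid repetition, the sketch here covers only the final step, using the two recovered coordinates to plan the move to the processing station. The fixed station target and the simple offset computation are illustrative assumptions, not the patented method.

```python
# Minimal sketch: turn the measured feature location into a move toward the processing station.
import numpy as np

STATION_TARGET_MM = np.array([150.0, 75.0])   # assumed nominal drop-off point at the station

def move_command(feature_location_mm, grip_offset_mm=np.zeros(2)):
    """Translation (mm) the robot should apply so the feature lands on the station target."""
    return STATION_TARGET_MM - (feature_location_mm + grip_offset_mm)

if __name__ == "__main__":
    loc = np.array([31.5, 15.0])               # from the direct/reflected-view measurement
    print("move the object by (mm):", move_command(loc))
```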