Shared Dense Network with Robot Task-Specific Heads

    Publication No.: US20210181716A1

    Publication Date: 2021-06-17

    Application No.: US16717498

    Filing Date: 2019-12-17

    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
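The abstract describes running one shared backbone ("dense network") forward pass and reusing its feature values across multiple task-specific heads. A minimal sketch of that structure, with hypothetical shapes and randomly initialized weights standing in for the trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a shared backbone trained on a first vision task,
# plus lightweight heads trained later on its feature values.
W_backbone = rng.standard_normal((64, 16))    # shared "dense network"
W_depth_head = rng.standard_normal((16, 1))   # head for a second task
W_seg_head = rng.standard_normal((16, 3))     # head for a third task

def backbone(image_pixels):
    """Shared dense network: one forward pass, reused by every head."""
    return np.tanh(image_pixels @ W_backbone)

def run_heads(image_pixels):
    features = backbone(image_pixels)   # feature values computed once
    depth = features @ W_depth_head     # task-specific output A
    seg_logits = features @ W_seg_head  # task-specific output B
    return depth, seg_logits

image = rng.standard_normal((100, 64))  # 100 "pixels", 64 raw values each
depth, seg = run_heads(image)
print(depth.shape, seg.shape)  # (100, 1) (100, 3)
```

The point of the arrangement is that the expensive backbone runs once per image while each additional robot vision task only pays for its small head.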

    Pixelwise filterable depth maps for robots

    Publication No.: US11618167B2

    Publication Date: 2023-04-04

    Application No.: US16726769

    Filing Date: 2019-12-24

    Abstract: A method includes receiving sensor data from a plurality of robot sensors on a robot. The method includes generating a depth map that includes a plurality of pixel depths. The method includes determining, for each respective pixel depth, based on the at least one robot sensor associated with the respective pixel depth, a pixelwise confidence level indicative of a likelihood that the respective pixel depth accurately represents a distance between the robot and a feature of the environment. The method includes generating a pixelwise filterable depth map for a control system of the robot. The pixelwise filterable depth map is filterable to produce a robot operation specific depth map. The robot operation specific depth map is determined based on a comparison of each respective pixelwise confidence level with a confidence threshold corresponding to at least one operation of the robot controlled by the control system of the robot.
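The core idea above is a depth map that carries a per-pixel confidence level, so the control system can derive different operation-specific maps by thresholding. A minimal sketch, assuming NaN as the mask value and made-up thresholds for two hypothetical robot operations:

```python
import numpy as np

# Toy 2x2 depth map with a per-pixel confidence level
depth = np.array([[1.2, 3.4], [0.8, 5.0]])
confidence = np.array([[0.9, 0.4], [0.7, 0.95]])

def filter_depth(depth, confidence, threshold):
    """Keep depths whose confidence meets the operation's threshold;
    mask the rest (NaN here stands in for 'unreliable')."""
    out = depth.copy()
    out[confidence < threshold] = np.nan
    return out

# The same filterable map yields different operation-specific maps:
nav_map = filter_depth(depth, confidence, threshold=0.5)    # coarse navigation
grasp_map = filter_depth(depth, confidence, threshold=0.9)  # precise grasping
```

A single confidence-annotated map thus replaces separately generated depth maps per operation, with each operation supplying its own threshold.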

    Engagement Detection and Attention Estimation for Human-Robot Interaction

    Publication No.: US20220366725A1

    Publication Date: 2022-11-17

    Application No.: US17815361

    Filing Date: 2022-07-27

    Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
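The input encoding described above pairs 2D coordinates for visible keypoints with an explicit not-visible indicator for the rest. A minimal sketch of building that model input, assuming a hypothetical sentinel encoding of (-1, -1) plus a visibility flag:

```python
def keypoint_features(keypoints):
    """Build a flat feature vector for the engagement model:
    (x, y, 1.0) for each visible keypoint, and a not-visible
    indicator (-1.0, -1.0, 0.0) for each occluded one."""
    feats = []
    for kp in keypoints:
        if kp is None:  # keypoint not visible in the 2D image
            feats.extend([-1.0, -1.0, 0.0])
        else:
            x, y = kp
            feats.extend([x, y, 1.0])
    return feats

# Two keypoints: the first visible at (10, 20), the second occluded.
vec = keypoint_features([(10.0, 20.0), None])
```

Feeding the indicator instead of silently dropping occluded keypoints keeps the input dimension fixed and lets the model learn from visibility itself.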

    Shared dense network with robot task-specific heads

    Publication No.: US11587302B2

    Publication Date: 2023-02-21

    Application No.: US16717498

    Filing Date: 2019-12-17

    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.

    Fusing multiple depth sensing modalities

    Publication No.: US11450018B1

    Publication Date: 2022-09-20

    Application No.: US16726771

    Filing Date: 2019-12-24

    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
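The transformation step above maps a relative-scale depth region onto the reference scale so that its edge pixel depths become coextensive with the surrounding reference edge. One simple way to sketch this, assuming a linear scale-and-shift fit on the edge pixels (the patent does not specify the fitting method):

```python
import numpy as np

def fuse_region(ref_edge, rel_region, rel_edge):
    """Fit depth_ref ~ scale * depth_rel + shift on the edge pixels,
    then apply the fit to the whole aligned relative-scale region."""
    A = np.stack([rel_edge, np.ones_like(rel_edge)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, ref_edge, rcond=None)
    return scale * rel_region + shift

# Toy data: the relative sensor reports depths at half scale minus an offset.
ref_edge = np.array([2.0, 4.0, 6.0])        # reference depths around the region
rel_edge = np.array([0.5, 1.5, 2.5])        # same pixels, relative scale
fused = fuse_region(ref_edge, np.array([1.0, 2.0]), rel_edge)
```

After the fit, the transformed region can be pasted into the reference map to produce the third, fused depth map described in the abstract.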

    Engagement detection and attention estimation for human-robot interaction

    Publication No.: US11436869B1

    Publication Date: 2022-09-06

    Application No.: US16707835

    Filing Date: 2019-12-09

    Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.

    Object Association Using Machine Learning Models

    Publication No.: US20220388175A1

    Publication Date: 2022-12-08

    Application No.: US17817076

    Filing Date: 2022-08-03

    Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
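The abstract pairs two learned components: one scoring whether a new observation matches a tracked object, and one updating the tracked state vector over time. A minimal sketch with simple stand-ins (cosine similarity for the first model, an exponential blend for the second; the actual patent uses trained machine learning models):

```python
import numpy as np

def association_score(v1, v2):
    """Stand-in for the first model: cosine similarity as the
    likelihood that two state vectors describe the same object."""
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def update_state(tracked, observed, alpha=0.3):
    """Stand-in for the second model: blend the new observation's
    physical properties into the maintained state vector."""
    return (1 - alpha) * tracked + alpha * observed

obs = np.array([1.0, 0.2, 0.5])      # state vector from new sensor data
track = np.array([0.9, 0.25, 0.45])  # previously observed object's state
if association_score(obs, track) > 0.95:  # hypothetical threshold
    track = update_state(track, obs)
```

Splitting association from state maintenance lets each component be trained and thresholded independently, as the abstract's two-model structure suggests.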

    Fusing Multiple Depth Sensing Modalities

    Publication No.: US20220366590A1

    Publication Date: 2022-11-17

    Application No.: US17878535

    Filing Date: 2022-08-01

    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.

    Pixelwise Filterable Depth Maps for Robots

    Publication No.: US20210187748A1

    Publication Date: 2021-06-24

    Application No.: US16726769

    Filing Date: 2019-12-24

    Abstract: A method includes receiving sensor data from a plurality of robot sensors on a robot. The method includes generating a depth map that includes a plurality of pixel depths. The method includes determining, for each respective pixel depth, based on the at least one robot sensor associated with the respective pixel depth, a pixelwise confidence level indicative of a likelihood that the respective pixel depth accurately represents a distance between the robot and a feature of the environment. The method includes generating a pixelwise filterable depth map for a control system of the robot. The pixelwise filterable depth map is filterable to produce a robot operation specific depth map. The robot operation specific depth map is determined based on a comparison of each respective pixelwise confidence level with a confidence threshold corresponding to at least one operation of the robot controlled by the control system of the robot.
