Pixelwise filterable depth maps for robots

    Publication No.: US11618167B2

    Publication Date: 2023-04-04

    Application No.: US16726769

    Filing Date: 2019-12-24

    Abstract: A method includes receiving sensor data from a plurality of robot sensors on a robot. The method includes generating a depth map that includes a plurality of pixel depths. The method includes determining, for each respective pixel depth, based on at least one robot sensor associated with the respective pixel depth, a pixelwise confidence level indicative of a likelihood that the respective pixel depth accurately represents a distance between the robot and a feature of an environment of the robot. The method includes generating a pixelwise filterable depth map for a control system of the robot. The pixelwise filterable depth map is filterable to produce a robot operation specific depth map. The robot operation specific depth map is determined based on a comparison of each respective pixelwise confidence level with a confidence threshold corresponding to at least one operation of the robot controlled by the control system of the robot.
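
The filtering step described in this abstract can be sketched as a per-pixel confidence mask. This is a minimal illustration, not the patented implementation; the function names and threshold values are hypothetical.

```python
import numpy as np

def filter_depth_map(depth_map, confidence, threshold):
    """Produce an operation-specific depth map: keep a pixel depth only
    when its pixelwise confidence meets the operation's threshold."""
    filtered = depth_map.copy()
    filtered[confidence < threshold] = np.nan  # mask unreliable pixels
    return filtered

depth = np.array([[1.0, 2.0], [3.0, 4.0]])   # meters
conf = np.array([[0.9, 0.2], [0.8, 0.5]])    # per-pixel confidence

# Different robot operations can apply different thresholds to the same
# pixelwise filterable depth map (thresholds here are illustrative):
nav = filter_depth_map(depth, conf, threshold=0.4)    # coarse navigation
grasp = filter_depth_map(depth, conf, threshold=0.7)  # precise grasping
```

A stricter operation (grasping) discards more pixels than a permissive one (navigation), while both draw from the same underlying map.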

    Recovering Material Properties with Active Illumination and Camera on a Robot Manipulator

    Publication No.: US20220168898A1

    Publication Date: 2022-06-02

    Application No.: US17106889

    Filing Date: 2020-11-30

    Inventor: Guy Satat

    Abstract: A method includes identifying a target surface in an environment of a robotic device. The method further includes controlling a moveable component of the robotic device to move along a motion path relative to the target surface, wherein the moveable component comprises a light source and a camera. The method additionally includes receiving a plurality of images from the camera when the moveable component is at a plurality of poses along the motion path and when the light source is illuminating the target surface. The method also includes determining bidirectional reflectance distribution function (BRDF) image data, wherein the BRDF image data comprises the plurality of images converted to angular space with respect to the target surface. The method further includes determining, based on the BRDF image data and by applying at least one pre-trained machine learning model, a material property of the target surface.
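
The conversion to angular space that this abstract describes amounts to expressing each pose in terms of view and illumination angles relative to the surface normal, the coordinates at which a BRDF is sampled. A minimal geometric sketch (names and the single-point setup are illustrative assumptions, not the patented method):

```python
import numpy as np

def to_angular_space(surface_point, normal, camera_pos, light_pos):
    """Compute the view angle and illumination angle (radians) relative
    to the surface normal for one pose of the moveable component."""
    def angle_to_normal(v):
        v = v / np.linalg.norm(v)
        return float(np.arccos(np.clip(np.dot(v, normal), -1.0, 1.0)))
    view = angle_to_normal(camera_pos - surface_point)
    illum = angle_to_normal(light_pos - surface_point)
    return view, illum

normal = np.array([0.0, 0.0, 1.0])  # surface facing +z
point = np.zeros(3)
view_angle, light_angle = to_angular_space(
    point, normal,
    camera_pos=np.array([0.0, 0.0, 2.0]),   # camera directly above
    light_pos=np.array([1.0, 0.0, 1.0]))    # light at 45 degrees
```

Collecting image intensities at many such (view, illumination) pairs along the motion path yields the BRDF samples fed to the machine learning model.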

    Monitoring of surface touch points for precision cleaning

    Publication No.: US11642780B2

    Publication Date: 2023-05-09

    Application No.: US17382561

    Filing Date: 2021-07-22

    CPC classification number: B25J9/04 B25J9/163 B25J9/1615

    Abstract: A system includes a robotic device, a sensor disposed on the robotic device, and circuitry configured to perform operations. The operations include determining a map that represents stationary features of an environment and receiving, from the sensor, sensor data representing the environment. The operations also include determining, based on the sensor data, a representation of an actor within the environment, where the representation includes keypoints representing corresponding body locations of the actor. The operations also include determining that a portion of a particular stationary feature is positioned within a threshold distance of a particular keypoint and, based thereon, updating the map to indicate that the portion is to be cleaned. The operations further include, based on the map as updated, causing the robotic device to clean the portion of the particular stationary feature.
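
The core touch-detection check is a distance test between actor keypoints and points on stationary features. A hedged sketch, assuming point-sampled features and a Euclidean threshold (both illustrative simplifications):

```python
import numpy as np

def touched_portions(feature_points, keypoints, threshold):
    """Return indices of stationary-feature points lying within the
    threshold distance of any actor keypoint; these portions would be
    marked in the map as needing cleaning."""
    touched = []
    for i, fp in enumerate(feature_points):
        dists = np.linalg.norm(keypoints - fp, axis=1)
        if np.any(dists < threshold):
            touched.append(i)
    return touched

# Two sampled points on a tabletop (x, y, z in meters):
table_surface = np.array([[0.0, 0.0, 0.9], [1.0, 0.0, 0.9]])
# One tracked hand keypoint near the first point:
hand_keypoints = np.array([[0.05, 0.0, 0.95]])
to_clean = touched_portions(table_surface, hand_keypoints, threshold=0.1)
```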

    Combined UV Imaging and Sanitization

    Publication No.: US20220118133A1

    Publication Date: 2022-04-21

    Application No.: US17450173

    Filing Date: 2021-10-07

    Abstract: A system includes a robotic device, an ultraviolet (UV) illuminator disposed on the robotic device, an image sensor disposed on the robotic device and configured to sense UV light, and circuitry configured to perform operations. The operations include causing the UV illuminator to emit the UV light towards a feature of an environment, and receiving, from the image sensor, UV image data representing the feature illuminated by the UV light. The operations also include identifying, based on the UV image data, a portion of the feature to be sanitized by the robotic device, and based on the identifying the portion, adjusting a parameter of the UV illuminator from a first value associated with UV imaging to a second value associated with UV sanitization. The operations further include causing the robotic device to sanitize the portion of the feature by emitting, by the UV illuminator, the UV light towards the portion.
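
The parameter adjustment between the imaging and sanitization values can be sketched as a simple mode switch. The parameter chosen (emitted power) and its values are assumptions for illustration; the abstract does not specify which parameter is adjusted or to what values.

```python
# Illustrative settings only; real germicidal dosing requires
# validated hardware and safety interlocks.
IMAGING_POWER_W = 0.5     # first value: low power, enough to form a UV image
SANITIZE_POWER_W = 30.0   # second value: high power for sanitization

class UVIlluminator:
    def __init__(self):
        self.power_w = IMAGING_POWER_W  # start in imaging mode

    def set_mode(self, mode):
        """Adjust the illuminator parameter based on the current task."""
        self.power_w = SANITIZE_POWER_W if mode == "sanitize" else IMAGING_POWER_W

uv = UVIlluminator()
uv.set_mode("imaging")    # image the feature, identify the portion
uv.set_mode("sanitize")   # then raise power to sanitize that portion
```

The point of the shared illuminator is that one emitter serves both roles, with only its drive parameter changing between identification and treatment.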

    Fusing multiple depth sensing modalities

    Publication No.: US11450018B1

    Publication Date: 2022-09-20

    Application No.: US16726771

    Filing Date: 2019-12-24

    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
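
One way to realize the transform step, making the relative-scale region's edge depths coextensive with the surrounding reference edge depths, is a least-squares fit of a scale and offset on the edge pixels. This is a sketch under that assumption, not the patented algorithm:

```python
import numpy as np

def fuse_region(ref_edge_depths, rel_edge_depths, rel_region):
    """Fit depth = scale * rel + offset so the relative-scale edges map
    onto the reference edges, then apply the fit to the whole aligned
    relative-scale region."""
    A = np.stack([rel_edge_depths, np.ones_like(rel_edge_depths)], axis=1)
    (scale, offset), *_ = np.linalg.lstsq(A, ref_edge_depths, rcond=None)
    return scale * rel_region + offset

ref_edges = np.array([2.0, 4.0, 6.0])   # reference-scale edge depths (m)
rel_edges = np.array([1.0, 2.0, 3.0])   # relative-scale edge depths
rel_region = np.array([1.5, 2.5])       # interior of the aligned region
fused_region = fuse_region(ref_edges, rel_edges, rel_region)
```

The third depth map would then combine the reference pixel depths with this transformed region, with no depth discontinuity at the seam.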

    Combined UV and Color Imaging System

    Publication No.: US20220124260A1

    Publication Date: 2022-04-21

    Application No.: US17354908

    Filing Date: 2021-06-22

    Abstract: A system includes a color camera configured to detect visible light and ultraviolet (UV) light, a UV illuminator, and a processor configured to perform operations. The operations include causing the UV illuminator to emit UV light towards a portion of an environment and receiving, from the color camera, a color image that represents the portion illuminated by the emitted UV light and by visible light incident thereon. The operations also include determining a first extent to which UV light is attenuated in connection with pixels of the color camera that have a first color and a second extent to which UV light is attenuated in connection with pixels of the color camera that have a second color. The operations further include generating a UV image based on the color image, the first extent, and the second extent, and identifying a feature of the environment by processing the UV image.
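
A simplified sketch of using the two per-color attenuation extents: if each pixel's recorded value is modeled as the UV signal scaled by its color channel's UV transmission (visible light ignored here for brevity), dividing by the channel's transmission recovers a consistent UV image from the color mosaic. All model assumptions and numbers are illustrative:

```python
import numpy as np

def uv_from_mosaic(mosaic, channel_map, transmission):
    """Undo per-channel UV attenuation: channel_map gives each pixel's
    color index, transmission the UV transmission of that color."""
    return mosaic / transmission[channel_map]

mosaic = np.array([[0.8, 0.5],       # recorded UV response per pixel
                   [0.5, 0.8]])
channels = np.array([[0, 1],         # 0 = first color, 1 = second color
                     [1, 0]])
transmission = np.array([0.8, 0.5])  # first/second extent (illustrative)
uv_image = uv_from_mosaic(mosaic, channels, transmission)
```

Here a uniform UV field that looked checkered through the color filter array comes out flat once each pixel is corrected by its own channel's attenuation.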

    Object Association Using Machine Learning Models

    Publication No.: US20220388175A1

    Publication Date: 2022-12-08

    Application No.: US17817076

    Filing Date: 2022-08-03

    Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
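
The two-model pipeline, a metric that gates whether two state vectors describe the same object, followed by an update that folds the new observation into the maintained state, can be sketched with simple stand-ins for the learned models (a distance kernel and a linear blend; both are illustrative substitutes, not the trained networks the abstract describes):

```python
import numpy as np

def association_metric(v_new, v_tracked):
    """Stand-in for the first machine learning model: a similarity
    score in (0, 1] between two object state vectors."""
    return float(np.exp(-np.linalg.norm(v_new - v_tracked) ** 2))

def update_state(v_tracked, v_new, alpha=0.5):
    """Stand-in for the second model, which maintains the state vector
    over time: blend the new observation into the tracked state."""
    return (1 - alpha) * v_tracked + alpha * v_new

tracked = np.array([1.0, 0.0])    # second object, previously observed
observed = np.array([1.1, 0.0])   # first object, newly detected
if association_metric(observed, tracked) > 0.5:  # likely the same object
    tracked = update_state(tracked, observed)
```

The gate prevents unrelated detections from corrupting an object's maintained state, while matched detections refine it.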

    Fusing Multiple Depth Sensing Modalities

    Publication No.: US20220366590A1

    Publication Date: 2022-11-17

    Application No.: US17878535

    Filing Date: 2022-08-01

    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.

    Monitoring of surface touch points for precision cleaning

    Publication No.: US11097414B1

    Publication Date: 2021-08-24

    Application No.: US17131252

    Filing Date: 2020-12-22

    Abstract: A system includes a robotic device, a sensor disposed on the robotic device, and circuitry configured to perform operations. The operations include determining a map that represents stationary features of an environment and receiving, from the sensor, sensor data representing the environment. The operations also include determining, based on the sensor data, a representation of an actor within the environment, where the representation includes keypoints representing corresponding body locations of the actor. The operations also include determining that a portion of a particular stationary feature is positioned within a threshold distance of a particular keypoint and, based thereon, updating the map to indicate that the portion is to be cleaned. The operations further include, based on the map as updated, causing the robotic device to clean the portion of the particular stationary feature.

    Pixelwise Filterable Depth Maps for Robots

    Publication No.: US20210187748A1

    Publication Date: 2021-06-24

    Application No.: US16726769

    Filing Date: 2019-12-24

    Abstract: A method includes receiving sensor data from a plurality of robot sensors on a robot. The method includes generating a depth map that includes a plurality of pixel depths. The method includes determining, for each respective pixel depth, based on at least one robot sensor associated with the respective pixel depth, a pixelwise confidence level indicative of a likelihood that the respective pixel depth accurately represents a distance between the robot and a feature of an environment of the robot. The method includes generating a pixelwise filterable depth map for a control system of the robot. The pixelwise filterable depth map is filterable to produce a robot operation specific depth map. The robot operation specific depth map is determined based on a comparison of each respective pixelwise confidence level with a confidence threshold corresponding to at least one operation of the robot controlled by the control system of the robot.
