Abstract:
Methods and systems for recognizing machine-readable information on three-dimensional (3D) objects are described. A robotic manipulator may move at least one physical object through a designated area in space. As the at least one physical object is being moved through the designated area, one or more optical sensors may determine a location of a machine-readable code on the at least one physical object and, based on the determined location, scan the machine-readable code to determine information, encoded in the machine-readable code, that is associated with the at least one physical object. Based on the information associated with the at least one physical object, a computing device may then determine a respective location in a physical environment of the robotic manipulator at which to place the at least one physical object. The robotic manipulator may then be directed to place the at least one physical object at the respective location.
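As a rough illustration of the flow this abstract describes, the sketch below scans frames captured while the object moves through the designated area, decodes any machine-readable code found, and keys the decoded payload into a destination table. All names here (`ScanResult`, `decode_code`, `route_object`, `destination_map`) are illustrative assumptions rather than terminology from the patent; a real detector might use a barcode library such as pyzbar.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScanResult:
    code_location: tuple   # (x, y) of the code within the sensor frame
    payload: str           # information encoded in the machine-readable code

def decode_code(frame) -> Optional[ScanResult]:
    """Stub: locate and decode a machine-readable code in one frame.
    A real system might use a barcode library such as pyzbar here."""
    return None  # placeholder detector

def route_object(frames, destination_map, fallback_location):
    """Scan frames captured while the object moves through the designated
    area, then pick a placement location from the decoded information
    (e.g. a SKU keyed into a destination table)."""
    for frame in frames:
        result = decode_code(frame)
        if result is not None:
            return destination_map.get(result.payload, fallback_location)
    return fallback_location  # no code found: route to a manual-sort area
```

Falling back to a default location when no code is decoded mirrors the practical need to handle objects whose codes are occluded or damaged.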
Abstract:
One or more images of a physical environment may be received, where the one or more images may depict one or more objects. A type of surface feature predicted to be contained on a portion of one or more surfaces of a single object may be determined. Surface features of the type within regions of the one or more images may then be identified. The regions may then be associated with corresponding objects in the physical environment based on the identified surface features. Based at least in part on the regions associated with the corresponding objects, a virtual representation of the physical environment may be determined, the representation including at least one distinct object segmented from a remaining portion of the physical environment so as to virtually distinguish a boundary of the at least one distinct object from boundaries of objects present in the remaining portion of the physical environment.
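A minimal sketch of one way such feature-based segmentation could work, assuming the predicted surface feature type is high-frequency texture and using local intensity variance as a stand-in detector; connected components of the feature mask then serve as per-object regions. The function names and thresholds are illustrative, not from the patent.

```python
import numpy as np
from scipy import ndimage

def find_feature_regions(gray_image, window=7, var_thresh=50.0):
    """Mask pixels whose local variance suggests the expected texture type."""
    img = gray_image.astype(float)
    mean = ndimage.uniform_filter(img, window)
    mean_sq = ndimage.uniform_filter(img ** 2, window)
    variance = mean_sq - mean ** 2          # E[x^2] - E[x]^2 per window
    return variance > var_thresh

def segment_objects(gray_image):
    """Group feature regions into connected components, one component per
    object hypothesis, yielding a label map that distinguishes boundaries."""
    mask = find_feature_regions(gray_image)
    labels, count = ndimage.label(mask)     # each label ~ one object region
    return labels, count
```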
Abstract:
Methods and systems for determining depth information using a combination of stereo and structured-light processing are provided. An example method involves receiving a plurality of images captured with at least two optical sensors, and determining a first depth estimate for at least one surface based on corresponding features between a first image and a second image. Further, the method involves causing a texture projector to project a known texture pattern, and determining, based on the first depth estimate, at least one region of at least one image of the plurality of images within which to search for a particular portion of the known texture pattern. Finally, the method involves determining points corresponding to the particular portion of the known texture pattern within the at least one region, and determining a second depth estimate for the at least one surface based on the determined points corresponding to the known texture pattern.
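The key step is using the coarse stereo depth to bound where a known pattern portion can appear in another image. Under the standard rectified-stereo relation disparity = f·B/Z, a depth interval maps to a disparity interval and hence to a pixel search window. The sketch below assumes rectified images and a tolerance smaller than the depth itself; names and parameters are illustrative.

```python
def disparity_from_depth(depth_m, focal_px, baseline_m):
    """Standard rectified-stereo relation: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

def pattern_search_window(coarse_depth_m, depth_tol_m,
                          focal_px, baseline_m, pattern_x_px):
    """Pixel range in the second image where a pattern feature located at
    pattern_x_px in the first image may appear, given that the true depth
    lies within depth_tol_m of the coarse estimate."""
    d_near = disparity_from_depth(coarse_depth_m - depth_tol_m,
                                  focal_px, baseline_m)
    d_far = disparity_from_depth(coarse_depth_m + depth_tol_m,
                                 focal_px, baseline_m)
    return pattern_x_px - d_near, pattern_x_px - d_far
```

For example, with f = 600 px, B = 0.1 m, and a coarse depth of 1.0 ± 0.1 m, disparity lies between roughly 54.5 and 66.7 px, so the search window spans about 12 pixels instead of the full scanline.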
Abstract:
Example systems and methods may be used to determine a trajectory for moving an object using a robotic device. One example method includes determining a plurality of possible trajectories for moving an object with an end effector of a robotic manipulator based on a plurality of possible object measurements. The method may further include causing the robotic manipulator to pick up the object with the end effector. After causing the robotic manipulator to pick up the object with the end effector, the method may also include receiving, from one or more sensors, sensor data indicative of one or more measurements of the object. Based on the received sensor data, the method may additionally include selecting a trajectory for moving the object from the plurality of possible trajectories. The method may further include causing the robotic manipulator to move the object through the selected trajectory.
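A compact sketch of the selection step, assuming for illustration that the distinguishing measurement is the object's mass: trajectories are precomputed for each hypothesized mass, and the one closest to the sensed mass is chosen after pickup. The planner interface and all names are assumptions, not from the patent.

```python
def precompute_trajectories(possible_masses_kg, plan_for_mass):
    """Plan one trajectory per hypothesized object mass before pickup.
    plan_for_mass is a stand-in for a real motion planner."""
    return {m: plan_for_mass(m) for m in possible_masses_kg}

def select_trajectory(trajectories, measured_mass_kg):
    """After pickup, pick the precomputed trajectory whose assumed mass
    is closest to the mass reported by the end effector's sensors."""
    closest = min(trajectories, key=lambda m: abs(m - measured_mass_kg))
    return trajectories[closest]
```

Precomputing before pickup moves the planning cost off the critical path, so the manipulator can start moving as soon as the measurement arrives.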
Abstract:
Methods and systems for depth sensing are provided. A system includes a first optical sensor and a second optical sensor, each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features.
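A simplified sketch of fusing the two sets of correspondences, assuming rectified image pairs so that Z = f·B/d for disparity d; the infrared matches (e.g., of a projected pattern) complement visible-light matches where surface texture is weak. Names are illustrative, and the fusion here is a plain average rather than anything specified in the abstract.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def fused_depth(visible_disparities, ir_disparities, focal_px, baseline_m):
    """Average depth over matches from both channels; IR correspondences
    fill in where visible texture is too weak for reliable matching."""
    disparities = list(visible_disparities) + list(ir_disparities)
    depths = [depth_from_disparity(d, focal_px, baseline_m)
              for d in disparities]
    return sum(depths) / len(depths)
```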
Abstract:
An example suction gripper is disclosed that includes a contacting pillow including a plurality of particles inside a non-rigid membrane that allow the contacting pillow to conform to a shape of an object when the contacting pillow is pressed against the object, a plurality of suction cups arranged on the non-rigid membrane of the contacting pillow, and a vacuum system coupled to the contacting pillow and to the plurality of suction cups. The vacuum system may be configured to apply suction to the object through at least one of the plurality of suction cups that is in contact with the object when the contacting pillow is pressed against the object, and to increase stiffness of the contacting pillow by removing air from between the plurality of particles inside the non-rigid membrane of the contacting pillow.
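The grasp sequence implied by this abstract might look like the following, where the `SuctionGripper` interface is a hypothetical stand-in for real valve and actuator drivers rather than an API from the patent.

```python
class SuctionGripper:
    """Illustrative stand-in for real valve/actuator drivers."""
    def press_against(self, pose): ...      # conform the pillow to the object
    def cups_in_contact(self): return []    # cups that sealed against the object
    def open_suction_valve(self, cup): ...  # apply suction through one cup
    def evacuate_pillow(self): ...          # remove inter-particle air (jamming)

def grasp(gripper: SuctionGripper, target_pose):
    gripper.press_against(target_pose)       # pillow conforms to object shape
    for cup in gripper.cups_in_contact():
        gripper.open_suction_valve(cup)      # suction only where in contact
    gripper.evacuate_pillow()                # particles jam; pillow stiffens
```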
Abstract:
An example method includes receiving a plurality of detected depth points indicative of depths of at least one surface and determining a projection of the detected depth points onto a plane. The method may also include identifying a plurality of first detected points, where a first detected point comprises a first point at a particular location of the plane in the projection. The method may then include storing digital entries corresponding to points located within a threshold buffer from one of the first detected points relative to the plane. The method may additionally include determining values for the digital entries, where a value for a digital entry corresponding to a particular point comprises an accumulation of distances from detected depth points that cross the particular point. The method may further include determining a digital height map representative of heights of the at least one surface relative to the plane.
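A simplified sketch of the accumulation idea, assuming the reference plane is z = 0 and depth points arrive as (x, y, z) tuples: per grid cell, it keeps the first (here, highest) detected point and counts subsequent hits within the threshold buffer as accumulated support. The abstract's method accumulates distances rather than hit counts; this is a coarser stand-in, and all names are illustrative.

```python
import numpy as np

def height_map(points, cell_m=0.01, grid=(100, 100), buffer_m=0.05):
    """Per grid cell, keep the first (highest) detected point and count
    further hits within buffer_m of it as accumulated support."""
    heights = np.full(grid, np.nan)       # height of first detected point
    support = np.zeros(grid)              # accumulated evidence per cell
    for x, y, z in points:
        i, j = int(x / cell_m), int(y / cell_m)
        if not (0 <= i < grid[0] and 0 <= j < grid[1]):
            continue                      # point projects outside the grid
        if np.isnan(heights[i, j]) or z > heights[i, j]:
            heights[i, j] = z             # new first/highest point for cell
        if abs(z - heights[i, j]) <= buffer_m:
            support[i, j] += 1            # hit lies within threshold buffer
    return heights, support
```

Tracking support alongside heights lets downstream logic discount cells whose height rests on a single, possibly spurious, depth reading.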
Abstract:
Example methods and systems may provide for a system that includes a control system communicatively coupled to a first robotic device and a second robotic device. The control system may identify a collaborative operation, based on a relative positioning between the first robotic device and the second robotic device, to be performed by the two robotic devices. The control system may also determine respective locations of the first robotic device and the second robotic device. The control system may further initiate a movement of the first robotic device along a path from the determined location of the first robotic device towards the determined location of the second robotic device. The first robotic device and the second robotic device may then establish a visual handshake that indicates the relative positioning between the first robotic device and the second robotic device for the collaborative operation.
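One way the approach-and-handshake loop could be structured, with the robot methods (`move_to`, `detect_marker_of`) as hypothetical stand-ins for real motion and fiducial-detection APIs; the abstract does not specify the detection mechanism, so mutual marker sighting is an assumption here.

```python
def approach_and_handshake(robot_a, robot_b, path):
    """Drive the first robot along the planned path until both robots can
    see each other's fiducial marker; the mutual detection fixes their
    relative positioning for the collaborative operation."""
    for waypoint in path:
        robot_a.move_to(waypoint)
        pose_ab = robot_a.detect_marker_of(robot_b)   # B as seen by A
        pose_ba = robot_b.detect_marker_of(robot_a)   # A as seen by B
        if pose_ab is not None and pose_ba is not None:
            return pose_ab, pose_ba     # visual handshake established
    return None                          # markers never mutually visible
```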