Abstract:
Visual task feedback for workstations in a materials handling facility may be implemented. Image data of a workstation surface may be obtained from image sensors. The image data may be evaluated with regard to the performance of an item-handling task at the workstation. The evaluation of the image data may identify items located on the workstation surface, determine a current state of the item-handling task, or recognize an agent gesture at the workstation. Based at least in part on the evaluation, one or more visual task cues may be selected for projection onto the workstation surface. The projection of the selected visual task cues onto the workstation surface may then be directed.
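As a rough illustration of the evaluate-then-project loop this abstract describes, the sketch below stubs out the detection steps and selects cues from their results. All names (VisualTaskCue, evaluate_frame, the cue shapes) are hypothetical, not from the source.

```python
# Hypothetical evaluate-then-project loop; detection steps are stubbed.
from dataclasses import dataclass

@dataclass
class VisualTaskCue:
    shape: str        # e.g. "outline" around an item, "arrow" toward a bin
    position: tuple   # (x, y) location on the workstation surface
    color: str

def evaluate_frame(frame):
    """Stand-in for item detection, task-state tracking, and gesture
    recognition over one image of the workstation surface."""
    return {
        "items": [{"id": "item-1", "position": (120, 80)}],  # stubbed detector
        "task_state": "awaiting_pack",                        # stubbed tracker
        "gesture": None,                                      # stubbed recognizer
    }

def select_cues(evaluation):
    # Outline each detected item, and point to the pack area when needed.
    cues = [VisualTaskCue("outline", item["position"], "green")
            for item in evaluation["items"]]
    if evaluation["task_state"] == "awaiting_pack":
        cues.append(VisualTaskCue("arrow", (200, 150), "blue"))
    return cues

frame = None  # placeholder for one image captured by the workstation sensors
for cue in select_cues(evaluate_frame(frame)):
    print(f"project {cue.shape} at {cue.position} in {cue.color}")
```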
Abstract:
Various examples are directed to systems and methods for utilizing depth videos to analyze material handling tasks. A material handling facility may comprise a depth video system and a control system programmed to receive a plurality of depth videos including performances of the material handling task. For each of the plurality of depth videos, training data may identify sub-tasks of the material handling task and corresponding portions of the video including the sub-tasks. The plurality of depth videos and the training data may be used to train a model to identify the sub-tasks from depth videos. The control system may apply the model to a captured depth video of a human agent performing the material handling task at a workstation to identify a first sub-task of the material handling task being performed by the human agent.
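A minimal sketch of the train-then-apply flow might look like the following, assuming each labeled depth-video segment can be reduced to a fixed-length feature vector. The summary-statistic features, sub-task labels, and synthetic data are illustrative stand-ins, and the random-forest classifier is an arbitrary model choice, not the one the source describes.

```python
# Illustrative train-then-apply flow over synthetic depth-video segments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SUB_TASKS = ["pick", "scan", "pack"]  # example sub-task labels

def segment_features(depth_segment: np.ndarray) -> np.ndarray:
    """Collapse a (frames, height, width) depth segment into simple
    summary statistics; a real system would use a learned representation."""
    return np.array([depth_segment.mean(), depth_segment.std(),
                     depth_segment.min(), depth_segment.max()])

rng = np.random.default_rng(0)
# Synthetic training set: segments paired with sub-task labels, standing
# in for the depth videos and training data the abstract describes.
X = np.stack([segment_features(rng.random((30, 48, 64))) for _ in range(90)])
y = rng.integers(0, len(SUB_TASKS), size=90)

model = RandomForestClassifier(n_estimators=50).fit(X, y)

# Apply the trained model to a newly captured segment of an agent
# performing the task to identify the sub-task being performed.
new_segment = rng.random((30, 48, 64))
predicted = model.predict([segment_features(new_segment)])[0]
print("predicted sub-task:", SUB_TASKS[predicted])
```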
Abstract:
Methods and systems for collecting camera calibration data using at least one fixed calibration target are described. Calibration parameters of a camera that is attached to a vehicle may be accessed. A set of calibration instructions for the camera may be determined based at least in part on the calibration parameters. The set of calibration instructions may include navigational instructions for the vehicle to follow to present the camera to the at least one fixed calibration target. Calibration data collected by the camera viewing the at least one fixed calibration target may be received. The camera may be calibrated based at least in part on the calibration data.
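The collect-then-calibrate step could be exercised with OpenCV's standard target-based routine, as in the sketch below. Real views of the fixed target from the vehicle-mounted camera are replaced here by synthetic projections through an assumed ground-truth camera, and the chessboard dimensions and poses are made up.

```python
# Synthetic demonstration of calibrating from views of a fixed target.
import numpy as np
import cv2

# Fixed calibration target: a 6x9 chessboard with 25 mm squares.
grid = np.zeros((6 * 9, 3), np.float32)
grid[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.025

K_true = np.array([[800.0, 0.0, 320.0],
                   [0.0, 800.0, 240.0],
                   [0.0, 0.0, 1.0]])
obj_points, img_points = [], []
for i in range(5):  # five poses, as the navigational instructions might yield
    rvec = np.array([0.1 * i, -0.05 * i, 0.0])
    tvec = np.array([0.05 * i, 0.0, 0.5 + 0.1 * i])
    projected, _ = cv2.projectPoints(grid, rvec, tvec, K_true, np.zeros(5))
    obj_points.append(grid)
    img_points.append(projected.astype(np.float32))

# Calibrate from the collected target views (image size 640x480).
rms, K_est, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, (640, 480), None, None)
print("reprojection RMS:", rms)
print("estimated intrinsics:\n", K_est)
```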
Abstract:
Described are techniques for storing and retrieving items using a robotic manipulator. Images depicting a human interacting with an item, sensor data from sensors instrumenting the human or item, data regarding physical characteristics of the item, and constraint data relating to the robotic manipulator or the item may be used to generate one or more configurations for the robotic manipulator. The configurations may include points of contact and force vectors for contacting the item using the robotic manipulator.
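One plausible shape for such a configuration, as the abstract frames it, is a set of contact points with per-contact force vectors that can be screened against constraint data. The sketch below is illustrative only; the field names, force limit, and minimum-contact rule are assumptions.

```python
# Illustrative grasp-configuration structure screened against constraints.
from dataclasses import dataclass
from typing import List, Tuple
import math

Vec3 = Tuple[float, float, float]

@dataclass
class GraspConfiguration:
    contact_points: List[Vec3]   # where the manipulator touches the item
    force_vectors: List[Vec3]    # force applied at each contact, in newtons

def magnitude(v: Vec3) -> float:
    return math.sqrt(sum(c * c for c in v))

def satisfies_constraints(cfg: GraspConfiguration,
                          max_force_n: float, min_contacts: int) -> bool:
    """Reject configurations that would exceed the item's force limit
    (e.g. a fragile item) or use too few contacts to be stable."""
    return (len(cfg.contact_points) >= min_contacts and
            all(magnitude(f) <= max_force_n for f in cfg.force_vectors))

candidates = [
    GraspConfiguration([(0.0, 0.1, 0.0), (0.0, -0.1, 0.0)],
                       [(0.0, -4.0, 0.0), (0.0, 4.0, 0.0)]),
    GraspConfiguration([(0.1, 0.0, 0.0)], [(25.0, 0.0, 0.0)]),
]
viable = [c for c in candidates
          if satisfies_constraints(c, max_force_n=10.0, min_contacts=2)]
print(f"{len(viable)} of {len(candidates)} candidate grasps satisfy constraints")
```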
Abstract:
Information regarding actions or activities to be performed at a workstation may be projected upon a portion of the workstation using one or more projectors. The information may include one or more arrows or other indicators referencing specific tools, materials or objects that may be used to perform one or more of the actions or activities. Such arrows or indicators may be rendered in a manner that simulates a three-dimensional or floating appearance from the perspective of a user; the rendering may be adjusted or modified based on changes in the perspective of the user, and with respect to one or more physical or virtual sources of light.
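The perspective adjustment behind a "floating" indicator can be sketched as a simple parallax computation: the projector draws a virtual 3D point where the line from the user's eye through that point meets the workstation surface, and redraws it as the eye moves. The coordinates below are illustrative.

```python
# Parallax sketch: draw a virtual point where the eye->point ray hits z = 0.
import numpy as np

def surface_point(eye: np.ndarray, virtual: np.ndarray) -> np.ndarray:
    """Intersect the eye-to-virtual-point ray with the surface plane z = 0."""
    direction = virtual - eye
    t = -eye[2] / direction[2]          # ray parameter where z reaches 0
    return eye + t * direction

arrow_tip = np.array([0.30, 0.20, 0.10])   # floats 10 cm above the surface

# As the user's head moves, the drawn position shifts, preserving the
# illusion that the arrow occupies a fixed point in space.
for eye in [np.array([0.0, -0.5, 0.6]), np.array([0.2, -0.4, 0.55])]:
    p = surface_point(eye, arrow_tip)
    print(f"eye at {eye} -> draw arrow tip at ({p[0]:.3f}, {p[1]:.3f})")
```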
Abstract:
Disclosed are various embodiments for coordination of autonomous vehicles in a roadway. A roadway management system can generate lane configurations for a roadway or a portion of the roadway. The roadway management system can determine the direction of travel for lanes in a roadway and direct autonomous automobiles to enter the roadway in a particular lane.
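A toy version of such a lane configuration might allocate lane directions from demand and direct an arriving vehicle to a matching lane; the demand-proportional split rule and all names below are assumptions, not from the source.

```python
# Toy lane configuration and lane assignment for a bidirectional roadway.
from dataclasses import dataclass
from typing import List

@dataclass
class LaneConfiguration:
    directions: List[str]  # direction of travel per lane, index 0 = leftmost

def configure_lanes(total_lanes: int, north_demand: int,
                    south_demand: int) -> LaneConfiguration:
    """Allocate lanes proportionally to demand, keeping at least one each."""
    north = max(1, min(total_lanes - 1,
                       round(total_lanes * north_demand /
                             (north_demand + south_demand))))
    return LaneConfiguration(["north"] * north +
                             ["south"] * (total_lanes - north))

def assign_lane(config: LaneConfiguration, heading: str) -> int:
    """Direct a vehicle to the first lane matching its direction of travel."""
    return config.directions.index(heading)

cfg = configure_lanes(total_lanes=4, north_demand=300, south_demand=100)
print("lane directions:", cfg.directions)
print("northbound vehicle -> enter lane", assign_lane(cfg, "north"))
```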
Abstract:
An unmanned aerial vehicle (UAV) expandable landing marker system may include an expandable volume. The landing marker may be expanded prior to arrival of a UAV delivering an item to be received by the landing marker. The landing marker may be expanded by regulating an amount of fluid in the volume. An anchor may be coupled to the landing marker to restrain movement of the expanded landing marker. An optional retraction mechanism may retract the landing marker. The landing marker can be retracted with the deposited item, moving the item to a location for later retrieval.
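The fluid-regulation step could be approximated by a simple bang-bang control loop that pumps toward a target volume and holds once it is reached; the volumes, rates, and control rule below are purely illustrative.

```python
# Toy bang-bang regulation of marker fill volume ahead of UAV arrival.
def regulate_marker(current_l: float, target_l: float,
                    pump_rate_l_s: float = 2.0, dt_s: float = 0.5) -> float:
    """One control step: pump fluid in or vent it out toward the target."""
    error = target_l - current_l
    step = pump_rate_l_s * dt_s
    if abs(error) <= step:
        return target_l                 # close enough; hold at target
    return current_l + step if error > 0 else current_l - step

volume = 0.0
while volume < 20.0:                    # expand to 20 L before the UAV arrives
    volume = regulate_marker(volume, 20.0)
print("marker expanded to", volume, "L; ready to receive the item")
```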
Abstract:
A delivery robot may provide an approach notification to enable people to understand and interpret actions by an unmanned aerial vehicle (UAV), such as an intention to land or deposit a package at a particular location. The delivery robot may include a display, lights, a speaker, and one or more sensors to enable the robot to provide information, barcodes, and text to the UAV and/or bystanders. The robot can provide final landing authority and can "wave off" the UAV if an obstacle or person is present in the landing zone. The delivery robot can receive packages and (1) store them for retrieval, (2) deliver them to the delivery location, and/or (3) deliver them to an automated locker system. The delivery robot can temporarily close lanes or streets to enable a package to be delivered by UAV. The system may include a shelter to secure, maintain, and charge the delivery robot.
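The final-landing-authority decision reduces to a small check of the landing zone, sketched below with made-up sensor labels and signal strings.

```python
# Toy final-landing-authority check against landing-zone detections.
from typing import List

def landing_decision(detections: List[str]) -> str:
    """Return the signal the delivery robot sends to the approaching UAV."""
    hazards = {"person", "obstacle", "animal"}
    if any(d in hazards for d in detections):
        return "WAVE_OFF"        # abort: the landing zone is not clear
    return "CLEAR_TO_LAND"       # grant final landing authority

print(landing_decision([]))                    # CLEAR_TO_LAND
print(landing_decision(["person", "leaf"]))    # WAVE_OFF
```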
Abstract:
This disclosure describes a device and system for verifying the content of items in a bin within a materials handling facility. In some implementations, a bin content verification apparatus may pass by one or more bins and capture images of those bins. The images may be processed to determine whether the content included in the bins has changed since the last time images of the bins were captured. A determination may also be made as to whether a change to the bin content was expected and, if so, whether the determined change corresponds with the expected change.
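A simplified version of the comparison might difference a newly captured bin image against the previous one, flag a change when the mean pixel difference crosses a threshold, and check the result against the expectation. The threshold and synthetic images below are toy values.

```python
# Toy bin-image comparison and verification against an expected change.
import numpy as np

def bin_changed(prev: np.ndarray, curr: np.ndarray,
                threshold: float = 10.0) -> bool:
    """True when the images differ enough to indicate moved or added items."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > threshold

def verify_bin(prev: np.ndarray, curr: np.ndarray,
               change_expected: bool) -> str:
    if bin_changed(prev, curr) == change_expected:
        return "bin contents verified"
    return "discrepancy: flag bin for review"

rng = np.random.default_rng(1)
before = rng.integers(0, 256, (64, 64), dtype=np.uint8)
after = before.copy()
after[10:30, 10:30] = 255       # simulate an item added to the bin
print(verify_bin(before, after, change_expected=True))
```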