Abstract:
Preferred points or regions in space for performing a task at a location, e.g., the delivery of an item to the location, may be defined based on sensed positions obtained during the prior performance of tasks at the location. The sensed positions may be identified using a GPS sensor or like system. Vectors including coordinates of the sensed positions, and uncertainties of such coordinates, may be clustered into groups, or location hypotheses, at the location. Subsequently identified vectors including coordinates and uncertainties may further refine a cluster, or be used to generate a new cluster. A preferred point or region in space may be identified based on such location hypotheses and utilized in the performance of tasks. Some preferred points or regions may be used for routing vehicles to the location, while others may correspond to delivery points for items at the location.
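A minimal greedy sketch of the clustering described above, assuming planar coordinates in meters and a fixed assignment radius; the radius, the inverse-variance weighting, and all function names are illustrative assumptions, not the claimed method:

```python
import math

def weighted_centroid(points):
    """Weight each (x, y, sigma) point by inverse variance so that
    low-uncertainty fixes dominate the cluster's preferred point."""
    ws = [1.0 / (s * s) for _, _, s in points]
    total = sum(ws)
    x = sum(w * p[0] for w, p in zip(ws, points)) / total
    y = sum(w * p[1] for w, p in zip(ws, points)) / total
    return (x, y)

def cluster_positions(vectors, radius_m=15.0):
    """Greedily cluster (x_m, y_m, uncertainty_m) vectors.

    Each vector joins the nearest existing cluster whose centroid lies
    within radius_m, refining it; otherwise it seeds a new cluster,
    i.e., a new location hypothesis.
    """
    clusters = []  # each: {"points": [...], "centroid": (x, y)}
    for x, y, sigma in vectors:
        best, best_d = None, None
        for c in clusters:
            cx, cy = c["centroid"]
            d = math.hypot(x - cx, y - cy)
            if best is None or d < best_d:
                best, best_d = c, d
        if best is not None and best_d <= radius_m:
            best["points"].append((x, y, sigma))
            best["centroid"] = weighted_centroid(best["points"])
        else:
            clusters.append({"points": [(x, y, sigma)], "centroid": (x, y)})
    return clusters
```

Each cluster's centroid then serves as a candidate preferred point for the location.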
Abstract:
Identifiers or references to supplemental information or content regarding images may be steganographically encoded into the images. The identifiers or references may be encoded into least significant bits or less significant bits of pixels within the image that may be selected on any basis. The identifiers or references may include alphanumeric characters, bar codes, symbols or other features. When an image is captured of another image having one or more identifiers or references steganographically encoded therein, the identifiers or references may be interpreted, and the supplemental information or content may be accessed and displayed on a computer display. In some embodiments, the supplemental information or content may identify and relate to a commercial product expressed in an image, and may include a link to one or more pages or functions for purchasing the commercial product.
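A minimal sketch of least-significant-bit encoding, assuming a flat list of 8-bit pixel values and a 16-bit length header so the decoder knows how many bits to read; the header layout, pixel selection order, and function names are illustrative assumptions:

```python
def encode_lsb(pixels, identifier):
    """Embed an ASCII identifier into the least significant bits of
    a flat list of 8-bit pixel values, prefixed by a 16-bit length."""
    bits = []
    n = len(identifier)
    bits.extend((n >> i) & 1 for i in range(15, -1, -1))
    for ch in identifier.encode("ascii"):
        bits.extend((ch >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # overwrite only the lowest bit
    return out

def decode_lsb(pixels):
    """Recover the identifier by reading the pixel LSBs back."""
    n = 0
    for i in range(16):
        n = (n << 1) | (pixels[i] & 1)
    data = bytearray()
    for k in range(n):
        ch = 0
        for i in range(8):
            ch = (ch << 1) | (pixels[16 + 8 * k + i] & 1)
        data.append(ch)
    return data.decode("ascii")
```

Because only the lowest bit of each selected pixel changes, the encoded image is visually indistinguishable from the original.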
Abstract:
Route images that have been acquired along delivery routes may be automatically analyzed for visual saliency (e.g., based on image motion, image content, sensor information, carrier actions, etc.) and stored along with related route information as part of a visual route book data set. The route images may be acquired by carriers utilizing mobile recording devices (e.g., mobile phones, wearable cameras, vehicle mounted cameras, etc.) while travelling along delivery routes for delivering items. For assisting a carrier in navigating along a delivery route, route images and related route information, such as visual cues associated with salient objects or features, may be selected from the visual route book data set and presented to the carrier (e.g., as part of a visual route summary and/or based on a current location of the carrier).
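One way the motion-based saliency screening might look, assuming frames arrive as flat lists of pixel intensities and using a simple frame-difference score; the score, threshold, and function names are illustrative stand-ins for the saliency analysis described:

```python
def frame_saliency(prev, curr):
    """Mean absolute pixel difference between consecutive frames,
    used here as a crude proxy for image-motion saliency."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def select_salient(frames, threshold=20.0):
    """Keep (index, score) pairs for frames whose motion score meets
    the threshold, as candidates for a visual route book data set."""
    kept = []
    for i in range(1, len(frames)):
        score = frame_saliency(frames[i - 1], frames[i])
        if score >= threshold:
            kept.append((i, score))
    return kept
```

In practice the score would be combined with the other signals mentioned above (image content, sensor information, carrier actions) before a frame is stored.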
Abstract:
A grasp management system is described. The grasp management system may be configured to determine a grasp strategy for a robotic manipulator. Information about an initial state of an object may be accessed. Information about a final state of the object may also be accessed. The final state may enable a subsequent interaction with the object. An anticipated pose space may be determined that enables the subsequent interaction with the object. An initial pose for the robotic manipulator may be determined based at least in part on the anticipated pose space. The initial pose may be used by the robotic manipulator to grasp the object.
Abstract:
Techniques for automated quality control of containers and items are disclosed. Images of a container can be successively captured over time. A consolidated image can be generated from the captured images. A non-image representation of the consolidated image can be determined. The non-image representation can be used to determine whether the container satisfies a condition. An imaging system can include a visual reference object or an object sensor used to detect entry of the container into a view volume of an electronic camera. A motion system can transport the container into the view volume. Some examples operate in an automated-warehouse environment.
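A toy sketch of consolidating successive captures and deriving a non-image representation from the result, assuming flat grayscale frames, a mean-threshold bit signature as the representation, and a Hamming-distance condition check; all names and thresholds here are assumptions for illustration:

```python
def consolidate(frames):
    """Pixelwise average of successively captured frames (flat lists)."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def signature(image):
    """Non-image representation: a bit per pixel marking whether it
    exceeds the image's mean intensity (a crude perceptual hash)."""
    mean = sum(image) / len(image)
    return tuple(1 if p > mean else 0 for p in image)

def satisfies(image, reference_sig, max_mismatch=2):
    """Condition check: the container passes if its signature is
    within max_mismatch bits of a reference signature."""
    sig = signature(image)
    return sum(a != b for a, b in zip(sig, reference_sig)) <= max_mismatch
```

Averaging suppresses per-frame noise, and comparing compact signatures rather than raw images keeps the quality-control check cheap.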
Abstract:
Directed fragmentation of an unmanned aerial vehicle (UAV) is described. In one embodiment, the UAV includes various components, such as one or more motors, batteries, sensors, a housing, casing or shell, and a payload for delivery. Additionally, the UAV includes a controller. The controller determines a flight path and controls a flight operation of the UAV. During the flight operation, the controller develops a release timing and a release location for one or more of the components based on the flight path, the flight conditions, and terrain topology information, among other factors. The controller can also detect a disruption in the flight operation of the UAV and, in response, direct fragmentation of one or more of the components apart from the UAV. In that way, a controlled, directed fragmentation of the UAV can be accomplished upon any disruption to the flight operation of the UAV.
Abstract:
Optical networks may store information or data therein by maintaining the information or data in motion. The optical networks may include optical fiber rings configured to receive optical signals comprising the information or data and to circulate the optical signals within the optical fiber rings. The optical signals and the information or data may be transferred out of the optical fiber rings in order to amplify the optical signals (e.g., to overcome losses due to attenuation within the optical fiber rings), to analyze the optical signals according to one or more processing techniques, or to transfer the information or data to another computer device upon request. If continued storage of the information or data is required, an optical signal including the information or data may be transferred back into the optical fiber rings and may continue to circulate therein.
Abstract:
Disclosed are various embodiments for coordination of autonomous vehicles in a roadway. A roadway management system can generate lane configurations for a roadway or a portion of the roadway. The roadway management system can determine the direction of travel for lanes in a roadway and direct autonomous automobiles to enter the roadway in a particular lane.
Abstract:
Techniques for managing notifications are described. In an example, the notifications may relate to an item and may be provided to a user device. An active device may be associated with the item. The active device may store a token for communication with a local area network associated with a location. Based on the communication, a determination may be made that the item is in proximity to the location. Corresponding notifications may be sent to the user device.
Abstract:
Images of an environment captured by two or more imaging devices may be evaluated in order to identify a state of the environment, or an interaction that placed the environment in the state. The content of the images may be analyzed in order to recognize observed information or data expressed therein. The information or data may be associated with a given state according to one or more observation functions, and the state may be used to identify an action according to one or more transition functions. The observation functions may use conditional probabilities to transfer the probability of an observation made by one imaging device to an observation made by another imaging device. The observation functions and the transition functions may be derived based on historical training data including clips that are labeled to identify states or interactions expressed therein.
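The state estimation described above might be sketched as a single Bayes-filter step over discrete states, where device B's observation likelihood is conditioned on device A's observation via a transfer table; the table shapes, names, and numbers are illustrative assumptions, not the claimed functions:

```python
def update_belief(belief, transition, obs_a_lik, transfer, o_a, o_b):
    """One Bayes-filter step over discrete environment states.

    belief[s]           prior probability of state s
    transition[s2][s1]  P(s2 | s1), learned from labeled clips
    obs_a_lik[o][s]     P(o_a = o | s) for imaging device A
    transfer[o2][o][s]  P(o_b = o2 | o_a = o, s): conditional
                        probability transferring device A's
                        observation to device B's observation
    """
    n = len(belief)
    # Predict: push the prior through the transition function.
    predicted = [sum(transition[s2][s1] * belief[s1] for s1 in range(n))
                 for s2 in range(n)]
    # Correct: weight by device A's likelihood and the conditional
    # likelihood of device B's observation given device A's.
    posterior = [predicted[s] * obs_a_lik[o_a][s] * transfer[o_b][o_a][s]
                 for s in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Running the step for each clip's observations yields a belief over states, from which the most likely state or interaction can be read off.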