Abstract:
Approaches are described for managing a display of content on a computing device. Content (e.g., images, application data, etc.) is displayed on an interface of the device. An activation movement performed by a user (e.g., a double-tap) can cause the device to enable a content view control mode (such as a zoom control mode) that can be used to adjust a portion of the content being displayed on the interface. The activation movement can also be used to set an area of interest and display a graphical element indicating that the content view control mode is activated. In response to a motion being detected (e.g., a forward or backward tilt of the device), the device can adjust a portion of the content being displayed on the interface, such as displaying a “zoomed-in” portion or a “zoomed-out” portion of the image.
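A minimal sketch of how such a tilt-driven zoom control mode might be structured. The class and method names (ZoomController, on_double_tap, on_tilt) and the tilt-to-zoom scaling are illustrative assumptions, not taken from the abstract:

```python
class ZoomController:
    """Hypothetical sketch: a double-tap activates a zoom control mode,
    then device tilt adjusts the zoom level around an area of interest."""

    def __init__(self, min_zoom=1.0, max_zoom=8.0):
        self.active = False
        self.zoom = 1.0
        self.area_of_interest = None
        self.min_zoom = min_zoom
        self.max_zoom = max_zoom

    def on_double_tap(self, x, y):
        # Activation movement: set the area of interest and enable the mode.
        self.active = True
        self.area_of_interest = (x, y)
        return "show zoom-mode indicator"   # graphical element signaling the mode

    def on_tilt(self, pitch_delta_degrees):
        # Forward tilt zooms in, backward tilt zooms out (sign chosen arbitrarily here).
        if not self.active:
            return self.zoom
        self.zoom += 0.05 * pitch_delta_degrees
        self.zoom = max(self.min_zoom, min(self.max_zoom, self.zoom))
        return self.zoom


controller = ZoomController()
controller.on_double_tap(120, 240)     # user double-taps near a point of interest
print(controller.on_tilt(+10))         # forward tilt -> zoomed-in view
print(controller.on_tilt(-25))         # backward tilt -> zoomed-out view
```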
Abstract:
A fleet management system may create and manage tasks within an environment associated with fulfilling, sorting, inducting, and/or distributing packages, such as a warehouse, packaging facility, sortation center, or distribution center. The fleet management system may assign the tasks to agents within the environment. Each agent may have capabilities (i.e., tasks that the agent is configured to perform), and the fleet management system may use these capabilities to assign the tasks to the agents. Additionally, tasks may be assigned based on a location of the agents within the environment. As tasks are generated, the fleet management system may determine a suitable agent to perform the task and may transmit instructions to the agent for carrying out the task. The fleet management system may provide a centralized platform to manage agents, optimize the assignment of tasks, and increase productivity within the environments.
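A minimal sketch of capability- and location-aware task assignment as the abstract describes it at a high level. The data structures and the nearest-capable-agent heuristic below are assumptions, not the patented method:

```python
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    capabilities: set      # task types this agent is configured to perform
    location: tuple        # (x, y) position within the facility


@dataclass
class Task:
    kind: str
    location: tuple


def assign_task(task, agents):
    """Pick the closest capable agent; return None if no agent qualifies."""
    capable = [a for a in agents if task.kind in a.capabilities]
    if not capable:
        return None
    def squared_distance(agent):
        dx = agent.location[0] - task.location[0]
        dy = agent.location[1] - task.location[1]
        return dx * dx + dy * dy
    return min(capable, key=squared_distance)


agents = [
    Agent("robot-1", {"pick", "move"}, (0, 0)),
    Agent("robot-2", {"sort"}, (5, 5)),
]
task = Task("sort", (4, 6))
chosen = assign_task(task, agents)
print(f"send instructions to {chosen.name}")   # fleet manager transmits instructions
```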
Abstract:
Systems and approaches are provided for presenting incoming notifications to a computing device based on a determined context of the computing device. Various sensors of the device can be used to determine the context of the device with respect to a user, the state of the device itself, or the context of the device with respect to the environment in which the device is situated. The user can then be informed of received notifications in a manner likely to get the user's attention while not being overly obtrusive to the user or others within the vicinity of the user.
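One way the described context-to-presentation mapping could be sketched. The sensor inputs, thresholds, and mode names below are illustrative assumptions rather than anything specified in the abstract:

```python
def choose_notification_mode(ambient_light, ambient_noise_db, in_pocket, face_down):
    """Map a coarse device context to a notification style that is noticeable
    without being obtrusive. All inputs and thresholds are hypothetical."""
    if in_pocket:
        return "vibrate"                 # user likely cannot see the screen
    if face_down:
        return "silent"                  # device deliberately set aside
    if ambient_noise_db > 70:
        return "vibrate_and_flash"       # loud environment, a chime may be missed
    if ambient_light < 5:
        return "dim_screen_banner"       # dark room, avoid a bright, loud alert
    return "normal_chime"


print(choose_notification_mode(ambient_light=300, ambient_noise_db=80,
                               in_pocket=False, face_down=False))
```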
Abstract:
Approaches are described which enable a computing device (e.g., mobile phone, tablet computer) to display alternate views or layers of information within a window on the display screen when a user's finger (or other object) is detected to be within a particular range of the display screen of the device. For example, a device displaying a road map view on the display screen may detect a user's finger near the screen and, in response to detecting the finger, render a small window that shows a portion of a satellite view of the map proximate to the location of the user's finger. As the user's finger moves laterally above the screen, the window can follow the location of the user's finger and display the satellite views of the various portions of the map over which the user's finger passes.
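A rough sketch of the hover-driven preview window for an alternate map layer. The hover-event fields, the layer-lookup call, and the stand-in map view are assumptions for illustration only:

```python
class HoverPreview:
    """Hypothetical sketch: when a finger hovers within range of the screen,
    render a small window showing an alternate layer (e.g., satellite imagery)
    at the hovered map location, and move the window as the finger moves."""

    def __init__(self, hover_range_mm=30, window_size=(160, 160)):
        self.hover_range_mm = hover_range_mm
        self.window_size = window_size
        self.window_visible = False

    def on_hover(self, x, y, distance_mm, map_view):
        if distance_mm > self.hover_range_mm:
            self.window_visible = False
            return None
        self.window_visible = True
        # Look up the alternate-layer tile under the finger (assumed API).
        tile = map_view.get_layer_tile("satellite", x, y, self.window_size)
        return {"position": (x, y), "content": tile}


class FakeMapView:
    def get_layer_tile(self, layer, x, y, size):
        return f"{layer} tile at ({x}, {y}), {size[0]}x{size[1]}"


preview = HoverPreview()
print(preview.on_hover(200, 350, distance_mm=12, map_view=FakeMapView()))
```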
Abstract:
An electronic device can be configured to enable a user to provide input via a tap of the device without the use of touch sensors (e.g., resistive, capacitive, ultrasonic or other acoustic, infrared or other optical, or piezoelectric touch technologies) and/or mechanical switches. Such a device can include other sensors, including inertial sensors (e.g., accelerometers, gyroscopes, or a combination thereof), microphones, proximity sensors, ambient light sensors, and/or cameras, among others, that can be used to capture respective sensor data. Feature values with respect to the respective sensor data can be extracted, and the feature values can be analyzed using machine learning to determine when the user has tapped on the electronic device. Detection of a single tap or multiple taps performed on the electronic device can be utilized to control the device.
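A minimal sketch of the feature-extraction-plus-classifier pipeline the abstract outlines, using scikit-learn as a stand-in learner. The features, the synthetic training data, and the window sizes are placeholders, not the described system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def extract_features(accel_window, audio_window):
    """Simple per-window features from inertial and microphone data (assumed)."""
    return [
        np.max(np.abs(accel_window)),    # peak acceleration magnitude
        np.std(accel_window),            # spread of the acceleration signal
        np.max(np.abs(audio_window)),    # peak audio amplitude
        np.mean(np.abs(audio_window)),   # average audio energy
    ]


rng = np.random.default_rng(0)
# Placeholder training data: label 1 = windows containing a tap, label 0 = no tap.
X = [extract_features(rng.normal(scale=s, size=64), rng.normal(scale=s, size=256))
     for s in ([3.0] * 50 + [0.5] * 50)]
y = [1] * 50 + [0] * 50

clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

new_window = extract_features(rng.normal(scale=2.8, size=64),
                              rng.normal(scale=2.8, size=256))
print("tap detected" if clf.predict([new_window])[0] == 1 else "no tap")
```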
Abstract:
In one example, an intersection of a first path and a second path may be determined. The first path may be associated with a first mobile drive unit and the second path may be associated with a second mobile drive unit. A plurality of velocity sets may be determined based on the intersection. A velocity set may be selected from the plurality of velocity sets. The velocity set may include velocity values that correspond to the first mobile drive unit and the second mobile drive unit. The selected velocity set may be provided to at least one of the first mobile drive unit or the second mobile drive unit.
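A toy sketch of selecting a velocity set that keeps the two mobile drive units from reaching the shared intersection at the same time. The candidate sets, the timing model, and the separation threshold are assumptions made for illustration:

```python
def time_to_intersection(distance, velocity):
    return float("inf") if velocity == 0 else distance / velocity


def select_velocity_set(dist_a, dist_b, candidate_sets, min_separation_s=2.0):
    """Pick the first (velocity_a, velocity_b) pair whose arrival times at the
    intersection differ by at least min_separation_s seconds."""
    for v_a, v_b in candidate_sets:
        t_a = time_to_intersection(dist_a, v_a)
        t_b = time_to_intersection(dist_b, v_b)
        if abs(t_a - t_b) >= min_separation_s:
            return (v_a, v_b)
    return None   # no safe set found; a controller might then hold one unit


# Unit A is 10 m from the intersection, unit B is 8 m away.
candidates = [(1.0, 1.0), (1.0, 0.5), (0.5, 1.0)]
print(select_velocity_set(10.0, 8.0, candidates))
```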
Abstract:
Approaches are described for providing input to a portable computing device, such as a mobile phone. A user's hand can be detected based on data (e.g., one or more images) obtained by at least one sensor of the device, such as a camera, and the images can be analyzed to locate the hand of the user. As part of the location computation, the device can determine a motion being performed by the hand of the user, and the device can determine a gesture corresponding to the motion. In the situation where the device is controlling a media player capable of playing media content, the gesture can be interpreted by the device to cause the device to, e.g., pause a media track or perform another function with respect to the media content being presented via the device.
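A simplified sketch of mapping a detected hand motion to a media-player action. The motion labels, the gesture table, and the player interface are hypothetical; the abstract does not specify how motions are classified:

```python
class MediaPlayer:
    """Stand-in media player with a minimal control surface."""
    def __init__(self):
        self.playing = True
        self.track = 0

    def pause(self):
        self.playing = False

    def next_track(self):
        self.track += 1


GESTURE_ACTIONS = {
    "palm_toward_camera": lambda p: p.pause(),       # open palm -> pause playback
    "swipe_left_to_right": lambda p: p.next_track()  # lateral swipe -> skip track
}


def handle_motion(motion_label, player):
    """Interpret a motion recognized from analyzed camera images as a gesture command."""
    action = GESTURE_ACTIONS.get(motion_label)
    if action:
        action(player)


player = MediaPlayer()
handle_motion("palm_toward_camera", player)   # e.g., derived from captured images
print(player.playing)                          # False: the track was paused
```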
Abstract:
A computing device can capture audio data representative of audio content present in a current environment. The captured audio data can be compared with audio models to locate a matching audio model. The matching audio model can be associated with an environment. The current environment can be identified based on the environment associated with the matching audio model. In some embodiments, information about the identified current environment can be provided to at least one application executing on the computing device. The at least one application can be configured to adjust at least one functional aspect based at least in part upon the determined current environment. In some embodiments, one or more computing tasks performed by the computing device can be improved based on information relating to the identified current environment. These computing tasks can include location refinement, location classification, and speech recognition.
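A minimal sketch of matching captured audio against stored environment models using a simple feature distance. The feature vectors and the stored models below are placeholders; the abstract does not say how the audio models are represented:

```python
import math

# Hypothetical pre-computed audio models: environment -> feature vector.
AUDIO_MODELS = {
    "office": [0.2, 0.1, 0.05],
    "street": [0.8, 0.6, 0.4],
    "cafe":   [0.5, 0.4, 0.3],
}


def identify_environment(captured_features):
    """Return the environment whose audio model is closest to the captured features."""
    return min(AUDIO_MODELS,
               key=lambda env: math.dist(captured_features, AUDIO_MODELS[env]))


current = identify_environment([0.75, 0.55, 0.45])
print(current)   # "street"; apps could then adjust, e.g., speech-recognition models
```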
Abstract:
An extendable augmented reality (AR) system for recognizing user-selected objects in different contexts. A user may select certain entities (text, objects, etc.) that are viewed on an electronic device and create notes or additional content associated with the selected entities. The AR system may remember those entities and indicate to the user when those entities are encountered by the user in a different context, such as in a different application, on a different device, etc. The AR system offers the user the ability to access the user-created note or content when the entities are encountered in the new context.
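A small sketch of the remember-and-resurface behavior described above. The entity store, the string-based matching, and the note format are simplified assumptions for illustration:

```python
class EntityNoteStore:
    """Hypothetical store linking user-selected entities to user-created notes,
    and surfacing those notes when an entity is seen again in a new context."""

    def __init__(self):
        self.notes = {}    # normalized entity text -> list of notes

    def add_note(self, entity, note):
        self.notes.setdefault(entity.lower(), []).append(note)

    def check_context(self, visible_entities):
        """Return notes for any previously saved entity present in this context."""
        hits = {}
        for entity in visible_entities:
            key = entity.lower()
            if key in self.notes:
                hits[entity] = self.notes[key]
        return hits


store = EntityNoteStore()
store.add_note("Golden Gate Bridge", "Compare toll prices for the trip")
# Later, in a different application or on a different device:
print(store.check_context(["Golden Gate Bridge", "Ferry Building"]))
```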