Abstract:
Approaches are described for providing input to a portable computing device, such as a mobile phone. A user's hand can be detected based on data (e.g., one or more images) obtained by at least one sensor of the device, such as a camera, and the images can be analyzed to locate the hand of the user. As part of the location computation, the device can determine a motion being performed by the hand of the user, and the device can determine a gesture corresponding to that motion. In the situation where the device is controlling a media player capable of playing media content, the device can interpret the gesture to, for example, pause a media track or perform another function with respect to the media content being presented via the device.
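The following minimal sketch shows how a detected hand motion might be mapped to a media-player action such as pausing a track. It assumes hand positions have already been extracted from the camera frames; the thresholds, gesture names, and action names are illustrative, not taken from the described approach.

    # Map a sequence of detected hand positions to a media-player action.
    # Positions are assumed to be normalized (x, y) screen coordinates.

    def classify_motion(positions, swipe_threshold=0.3):
        """Classify a sequence of normalized (x, y) hand positions."""
        if len(positions) < 2:
            return "none"
        dx = positions[-1][0] - positions[0][0]
        dy = positions[-1][1] - positions[0][1]
        if abs(dx) >= swipe_threshold and abs(dx) > abs(dy):
            return "swipe_right" if dx > 0 else "swipe_left"
        if abs(dx) < 0.05 and abs(dy) < 0.05:
            return "hold"
        return "none"

    # Hypothetical gesture-to-action mapping for a media player.
    GESTURE_ACTIONS = {
        "hold": "pause",
        "swipe_right": "next_track",
        "swipe_left": "previous_track",
    }

    def gesture_to_action(positions):
        return GESTURE_ACTIONS.get(classify_motion(positions), "no_op")

    print(gesture_to_action([(0.5, 0.5), (0.5, 0.5), (0.51, 0.5)]))  # pause
    print(gesture_to_action([(0.2, 0.5), (0.4, 0.5), (0.7, 0.5)]))   # next_track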
Abstract:
Various approaches enable automatic communication generation based on patterned behavior in a particular context. For example, a computing device can monitor behavior of a user to determine patterns of communication behavior in certain situations. In response to detecting multiple occurrences of such a situation, the computing device can prompt the user to perform an action corresponding to the pattern of behavior. In some embodiments, a set of speech models corresponding to a type of contact is generated. The speech models include language consistent with patterns of speech between a user and that type of contact. Based on the context and the contact, a message using language consistent with past communications between the user and the contact is generated from the speech model associated with that type of contact.
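One way to picture the speech-model idea is as a lookup from contact type and situation to a message template learned from past communications. The contact categories, situations, and template text below are invented for illustration only.

    # Generate a message in the style the user has used with this type of
    # contact; the models and templates here are hypothetical placeholders.

    SPEECH_MODELS = {
        "family":   {"running_late": "Hey, stuck in traffic, home in {minutes} min!"},
        "coworker": {"running_late": "I'm running about {minutes} minutes behind; apologies."},
    }

    def generate_message(contact_type, situation, **context):
        model = SPEECH_MODELS.get(contact_type, {})
        template = model.get(situation)
        if template is None:
            return None  # no learned pattern for this situation
        return template.format(**context)

    print(generate_message("family", "running_late", minutes=15))
    print(generate_message("coworker", "running_late", minutes=15))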
Abstract:
Computing devices can collaborate in order to take advantage of various components distributed across those devices. In various embodiments, image information captured by multiple devices can be used to identify and determine the relative locations of various persons and objects near those devices, even when not every device can view those persons or objects. In some embodiments, one or more audio or video capture elements can be selected based on their proximity and orientation to an object to be captured. In some embodiments, the information captured from the various audio and/or video elements can be combined to provide three-dimensional imaging, surround sound, and other such capture data.
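A rough sketch of selecting a capture element by proximity and orientation follows; the two-dimensional geometry and the simple distance-plus-misalignment score are simplifying assumptions, not the method described.

    import math

    def score_camera(cam_pos, cam_heading, target_pos):
        """Lower is better: close to the target and pointed toward it."""
        dx, dy = target_pos[0] - cam_pos[0], target_pos[1] - cam_pos[1]
        distance = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx)
        misalignment = abs(math.atan2(math.sin(bearing - cam_heading),
                                      math.cos(bearing - cam_heading)))
        return distance + misalignment

    # Hypothetical positions (meters) and headings (radians) of two devices.
    cameras = {
        "phone_a":  ((0.0, 0.0), 0.0),
        "tablet_b": ((3.0, 1.0), math.pi),
    }
    target = (1.0, 0.2)
    best = min(cameras, key=lambda name: score_camera(*cameras[name], target))
    print(best)  # phone_a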
Abstract:
Systems and approaches are provided for presenting incoming notifications to a computing device based on a determined context of the computing device. Various sensors of the device can be used to determine the context of the device with respect to a user, the state of the device itself, or the context of the device with respect to the environment in which the device is situated. The user can then be informed of received notifications in a manner likely to get the user's attention while not being overly obtrusive to the user or others within the vicinity of the user.
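As a sketch of how sensor-derived context could drive notification delivery, the rules below map a few hypothetical context fields to an alert mode; both the fields and the policy are assumptions for illustration.

    def choose_alert_mode(context):
        """Pick how to surface a notification from the device's context."""
        if context.get("in_meeting") or context.get("ambient_noise_db", 60) < 30:
            return "vibrate"           # quiet surroundings: avoid audible alerts
        if context.get("in_pocket"):
            return "vibrate_and_tone"  # device not visible: add an audible cue
        if context.get("facing_user"):
            return "banner"            # user is already looking at the screen
        return "tone"

    print(choose_alert_mode({"in_meeting": True}))
    print(choose_alert_mode({"in_pocket": True, "ambient_noise_db": 60}))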
Abstract:
Image information displayed on an electronic device can be adjusted based at least in part upon a relative position of a viewer with respect to the device. In some embodiments, image stabilization can be provided such that an image remains substantially consistent from the point of view of the viewer, rather than the display element of the device. The image can be stretched, rotated, compressed, or otherwise manipulated based at least in part upon the relative viewing position. Similarly, the viewer can move relative to the device to obtain different views, views that are consistent with the viewer looking at an object through, for example, a piece of glass. The device can overlay information on the image that will adjust with the adjusted image. Three-dimensional modeling and display can be used to offset parallax and focus-point effects.
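The point-of-view stabilization can be pictured as shifting rendered content opposite to the viewer's apparent motion. The pinhole-style proportional shift below is an assumed simplification rather than the modeling described.

    def viewer_compensated_offset(head_offset_mm, head_distance_mm,
                                  screen_depth_mm=5.0):
        """Shift on-screen content so it appears steady to the viewer.

        head_offset_mm: (x, y) of the viewer's head relative to screen center.
        Returns the (x, y) shift, in millimeters, to apply to the rendered image.
        """
        scale = screen_depth_mm / max(head_distance_mm, 1.0)
        return (-head_offset_mm[0] * scale, -head_offset_mm[1] * scale)

    # Viewer 20 mm right of and 10 mm above center, 400 mm from the screen.
    print(viewer_compensated_offset((20.0, -10.0), 400.0))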
Abstract:
Approaches are described which enable a computing device (e.g., mobile phone, tablet computer) to display alternate views or layers of information within a window on the display screen when a user's finger (or other object) is detected to be within a particular range of the display screen of the device. For example, a device displaying a road map view on the display screen may detect a user's finger near the screen and, in response to detecting the finger, render a small window that shows a portion of a satellite view of the map proximate to the location of the user's finger. As the user's finger moves laterally above the screen, the window can follow the location of the user's finger and display the satellite views of the various portions of the map over which the user's finger passes.
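The window-following behavior can be sketched as cropping the alternate layer around the detected finger position whenever the finger hovers within range; the pixel sizes and hover threshold here are illustrative assumptions.

    def alternate_view_window(finger_xy, hover_distance_mm, layer_size,
                              window=100, max_hover_mm=30):
        """Return the (left, top, right, bottom) crop of the alternate map
        layer to render under the finger, or None if the finger is too far."""
        if hover_distance_mm > max_hover_mm:
            return None
        x, y = finger_xy
        half = window // 2
        left = min(max(x - half, 0), layer_size[0] - window)
        top = min(max(y - half, 0), layer_size[1] - window)
        return (left, top, left + window, top + window)

    print(alternate_view_window((250, 400), 12, (1080, 1920)))  # window shown
    print(alternate_view_window((250, 400), 45, (1080, 1920)))  # None: out of range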
Abstract:
Various embodiments provide a dynamic antenna system that adapts, by adjusting various antenna circuit parameters, to accommodate a particular circumstance or set of conditions being imposed on the computing device at a given time. For example, the signal strength of the antenna system can be monitored and, upon detecting a change in the signal strength, a condition associated with the change, such as the user holding the device with two hands, can be identified based on offline testing, measurement, and pattern recognition. Accordingly, one or more parameters of the antenna system, which can include multiple antennas and other reconfigurable components, can be adjusted to optimize antenna efficiency for the particular condition associated with the change in signal strength.
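The adaptation step can be thought of as matching a signal-strength signature to a tuning profile derived offline; the conditions, threshold, and parameter values below are hypothetical stand-ins.

    # Map an observed RSSI change to a stored antenna tuning profile.
    TUNING_PROFILES = {
        "two_hand_grip": {"matching_capacitance_pf": 2.7, "active_antenna": "top"},
        "on_table":      {"matching_capacitance_pf": 1.8, "active_antenna": "bottom"},
    }

    def identify_condition(rssi_drop_db):
        # Pattern recognition reduced to a single threshold for illustration.
        return "two_hand_grip" if rssi_drop_db >= 6 else "on_table"

    def retune(previous_rssi_dbm, current_rssi_dbm):
        condition = identify_condition(previous_rssi_dbm - current_rssi_dbm)
        return TUNING_PROFILES[condition]

    print(retune(-65, -74))  # 9 dB drop -> two-hand-grip profile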
Abstract:
Approaches are described for managing the processing of image and/or video data captured by an electronic device. A user can capture an image using a camera of a computing device, where metadata obtained by one or more sensors of the device can be stored along with the image. The image can be transmitted to a network service, where the network service can divide the image into a plurality of image portions and, for each image portion, search a library of image patches in an attempt to find at least one library patch that substantially matches the respective image portion. If a library patch matches the image portion within an allowable threshold, the network service can modify the image portion, such as by applying image modifications made to the library patch to the image portion or by merging the library patch with the image portion.
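A small sketch of the patch-matching step follows: the image is split into fixed-size portions and each is compared against a library by mean squared difference. The library contents, threshold, and grayscale values are assumptions for illustration.

    def split_into_patches(image, size):
        """Yield (top, left, patch) tiles of a 2-D grayscale image."""
        h, w = len(image), len(image[0])
        for top in range(0, h - size + 1, size):
            for left in range(0, w - size + 1, size):
                yield top, left, [row[left:left + size]
                                  for row in image[top:top + size]]

    def patch_distance(a, b):
        """Mean squared difference between two equally sized patches."""
        return sum((pa - pb) ** 2 for ra, rb in zip(a, b)
                   for pa, pb in zip(ra, rb)) / (len(a) * len(a[0]))

    def best_library_match(patch, library, threshold=100.0):
        best = min(library, key=lambda entry: patch_distance(patch, entry["patch"]))
        return best if patch_distance(patch, best["patch"]) <= threshold else None

    # Tiny 2x2-patch example with a one-entry hypothetical library.
    library = [{"patch": [[10, 10], [10, 10]], "enhanced": [[12, 12], [12, 12]]}]
    image = [[11, 9, 200, 210], [10, 12, 205, 198]]
    for top, left, patch in split_into_patches(image, 2):
        match = best_library_match(patch, library)
        print((top, left), "matched" if match else "no match")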