Abstract:
A device includes a memory and one or more processors configured to process image data corresponding to a user's face to generate face data. The one or more processors are configured to process sensor data to generate feature data and to generate a representation of an avatar based on the face data and the feature data. The one or more processors are also configured to generate an audio output for the avatar based on the sensor data.
Abstract:
Methods, systems, computer-readable media, and apparatuses for providing haptic feedback to assist in capturing images are presented. In some embodiments, a method for providing haptic feedback to assist in capturing images includes obtaining, via an image capture device, an ambient light measurement of an environment in which the image capture device is present. The method further includes detecting, via the image capture device, one or more objects within one or more image frames captured by the image capture device. The method also includes changing, via the image capture device, a manner in which haptic feedback is provided to a user of the image capture device, based at least in part on the obtained ambient light measurement and the detected one or more objects.
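The abstract above can be illustrated with a toy decision function. This is a hypothetical sketch, not the patent's implementation: the function names, lux threshold, and pattern labels are all assumptions, chosen only to show feedback changing with ambient light and detections.

```python
# Hypothetical sketch of the described haptic-feedback logic: the pattern and
# intensity of feedback change with the measured ambient light (lux) and the
# objects detected in the frame. All names and thresholds are illustrative.

def choose_haptic_pattern(ambient_lux: float, detected_objects: list) -> dict:
    """Pick a haptic feedback pattern from light level and detections."""
    low_light = ambient_lux < 50.0  # assumed threshold for a "dim" scene
    if not detected_objects:
        # Nothing framed yet: a slow pulse prompts the user to keep scanning.
        return {"pattern": "slow_pulse", "intensity": 0.3}
    if low_light:
        # In dim scenes, stronger feedback compensates for a noisy preview.
        return {"pattern": "double_tap", "intensity": 0.9}
    # Bright scene with objects in frame: a light tick confirms framing.
    return {"pattern": "tick", "intensity": 0.5}
```

The key point the sketch captures is that both inputs (light measurement and detections) jointly select the feedback, matching the "based at least in part on" language of the abstract.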
Abstract:
Methods executed by a processor of a computing device for launching a selected application on the computing device are disclosed. Various embodiments may include authorizing a user based on a fingerprint of a finger detected on a fingerprint sensor portion of a touchscreen display matching a fingerprint of an authorized user of the computing device, determining a selected application installed on the computing device from a selective engagement of the finger on the touchscreen display continuous from the fingerprint sensor portion, and unlocking the selected application in response to the selective engagement of the finger on the touchscreen display. In some embodiments, selection of an application may be based on a continuous swipe movement by the finger on the touchscreen display from the fingerprint sensor portion toward an icon on the touchscreen display representing the selected application.
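The swipe-to-launch behavior described above can be sketched as mapping the endpoint of a continuous swipe to the nearest icon, gated on the fingerprint match. This is an assumed, simplified model: the coordinate scheme, the hit radius, and all function names are illustrative, not taken from the patent.

```python
# Illustrative sketch: a continuous swipe starting on the fingerprint sensor
# portion selects the app whose icon lies nearest the swipe endpoint, and the
# app unlocks only if the fingerprint matched an authorized user.

def resolve_swipe_target(swipe_end, icons, radius=40.0):
    """Return the app whose icon is nearest the swipe endpoint, within radius."""
    best_app, best_dist = None, radius
    ex, ey = swipe_end
    for app, (ix, iy) in icons.items():
        dist = ((ex - ix) ** 2 + (ey - iy) ** 2) ** 0.5
        if dist <= best_dist:
            best_app, best_dist = app, dist
    return best_app

def launch_from_swipe(fingerprint_ok, swipe_end, icons):
    """Authorize first; without a fingerprint match, nothing launches."""
    if not fingerprint_ok:
        return None
    return resolve_swipe_target(swipe_end, icons)
```

A usage example: with icons `{"camera": (100, 200), "mail": (300, 200)}`, a swipe ending at `(105, 205)` with a matching fingerprint resolves to `"camera"`, while the same swipe without authorization returns nothing.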
Abstract:
Disclosed are techniques for calculating a predicted location of a location tracking device. In an aspect, a wireless communications device detects a breach of a geofence made by the location tracking device, receives data representing a state of the location tracking device, the state of the location tracking device comprising at least a current location of the location tracking device and a velocity of the location tracking device, and determines, based on the data representing the state of the location tracking device, the predicted location of the location tracking device.
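The prediction step described above amounts to dead reckoning from the received state. The sketch below is a minimal linear extrapolation under assumed simplifications (a flat local coordinate frame in meters, constant velocity over the horizon); the function name and units are illustrative.

```python
# Minimal dead-reckoning sketch: given the tracker's state (current location
# and velocity) at the time of a geofence breach, extrapolate its position
# after a horizon of t seconds, assuming constant velocity.

def predict_location(location, velocity, horizon_s):
    """Linear extrapolation: predicted = current + velocity * time."""
    x, y = location
    vx, vy = velocity
    return (x + vx * horizon_s, y + vy * horizon_s)
```

For example, a tracker at the origin moving at (2.0, -1.0) m/s is predicted 20 m east and 10 m south after a 10-second horizon.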
Abstract:
A device includes a memory and one or more processors configured to process sensor data to determine a semantic context associated with the sensor data. The one or more processors are also configured to generate adjusted face data based on the determined semantic context and face data. The adjusted face data includes an avatar facial expression that corresponds to the semantic context.
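A toy version of the adjustment described above is a lookup from context to expression, overlaid onto the face data. The context labels, expression names, and the `expression` field are assumptions for illustration only, not the patent's data model.

```python
# Illustrative sketch: map a determined semantic context to an avatar facial
# expression and fold it into the face data. All labels are assumed.

EXPRESSION_FOR_CONTEXT = {
    "celebration": "smile",
    "bad_news": "frown",
    "question": "raised_brow",
}

def adjust_face_data(face_data: dict, semantic_context: str) -> dict:
    """Overlay the context-appropriate expression onto the raw face data."""
    expression = EXPRESSION_FOR_CONTEXT.get(semantic_context, "neutral")
    adjusted = dict(face_data)  # leave the original face data untouched
    adjusted["expression"] = expression
    return adjusted
```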
Abstract:
Certain aspects of the present disclosure provide techniques for selectively activating a fingerprint sensor in an electronic device. A method that may be performed by the electronic device includes detecting a finger hover above a display module of the electronic device, activating the fingerprint sensor based, at least in part, on the detected finger hover, and providing, in response to detecting the finger hover, feedback information to assist in scanning the finger using the fingerprint sensor.
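The hover-triggered activation flow can be sketched as a single event handler: the sensor stays in a low-power idle state until a hover is detected, then powers up and emits guidance feedback. The hover threshold, feedback label, and function name are assumptions made for this sketch.

```python
# Hypothetical sketch: the fingerprint sensor activates only when a finger
# hover is detected close enough to the display; feedback then guides the
# user to the sensor region. Threshold and names are illustrative.

def on_hover_event(hover_distance_mm: float, threshold_mm: float = 10.0):
    """Return (sensor_active, feedback) after processing one hover reading."""
    if hover_distance_mm <= threshold_mm:
        # Finger close enough: power up the sensor and guide the scan.
        return True, "highlight_sensor_region"
    # No qualifying hover: keep the sensor in its low-power idle state.
    return False, None
```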
Abstract:
Systems and methods according to one or more embodiments of the present disclosure provide improved multitasking on user devices. In an embodiment, a method for multitasking comprises detecting a non-touch gesture input received by a user device. The method also comprises associating the non-touch gesture input with an application running in a background, wherein a different focused application is running in a foreground. The method further comprises controlling the background application with the associated non-touch gesture input without affecting the foreground application.
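The routing behavior described above can be sketched as a small dispatcher: a recognized non-touch gesture is delivered to the background application it is bound to, leaving the foreground application's input stream untouched. The class, method, and gesture names are illustrative assumptions.

```python
# Hypothetical dispatch sketch for the multitasking scheme: gestures bound to
# a background app are routed there; unbound gestures fall through, so the
# focused foreground app is never affected by the bound gestures.

class GestureRouter:
    def __init__(self):
        self.bindings = {}   # gesture name -> background app name
        self.delivered = []  # (app, gesture) delivery log for this sketch

    def associate(self, gesture: str, background_app: str) -> None:
        """Bind a non-touch gesture to a specific background application."""
        self.bindings[gesture] = background_app

    def on_gesture(self, gesture: str, foreground_app: str):
        """Deliver the gesture to its bound background app, if any."""
        target = self.bindings.get(gesture)
        if target is None or target == foreground_app:
            return None  # not bound to a background app: no routing occurs
        self.delivered.append((target, gesture))
        return target
```

For instance, after binding a "swipe_left" gesture to a music player, that gesture controls the player even while a browser holds the foreground, and an unbound "pinch" gesture is ignored by the router.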
Abstract:
Certain aspects of the present disclosure provide techniques for beamforming pressure waves. A method for operating an apparatus configured to beamform ultrasonic pressure waves may generally comprise emitting, via a pressure wave module of the apparatus, beamformed ultrasonic pressure waves through a display module of the apparatus, wherein: the display module comprises a first plurality of layers; the pressure wave module comprises a second plurality of layers; the second plurality of layers comprises at least a copolymer layer, a conductive layer, a dielectric protection layer, and a thin film transistor (TFT) glass layer; and an order of the second plurality of layers in the pressure wave module depends on an acoustic resonance value associated with the display module.
Abstract:
In an embodiment, a user equipment (UE) groups a plurality of images. The UE displays a first image among the plurality of images, determines an object of interest within the first image and a desired level of zoom, and determines to lock onto the object of interest in association with one or more transitions between the plurality of images. The UE determines to transition to a second image among the plurality of images, and detects, based on the lock determination, the object of interest within the second image. The UE displays the second image by zooming-in upon the object of interest at a level of zoom that corresponds to the desired level of zoom.
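The lock-on behavior above can be sketched as computing view parameters for each image under the lock. Detection is stubbed here as a lookup of precomputed bounding boxes; the data shapes and names are assumptions for illustration, not the patent's method.

```python
# Illustrative sketch of lock-on zoom across image transitions: if the locked
# object is found in the newly displayed image, the view re-centers on it at
# the locked zoom level; otherwise the image is shown normally.

def display_with_lock(image_id, locked_object, zoom, detections):
    """Return view parameters {center, zoom} for an image under the lock."""
    box = detections.get(image_id, {}).get(locked_object)
    if box is None:
        # Object absent from this image: show it without the locked zoom.
        return {"center": None, "zoom": 1.0}
    x, y, w, h = box  # bounding box as (x, y, width, height)
    return {"center": (x + w / 2.0, y + h / 2.0), "zoom": zoom}
```

For example, with a box (10, 20, 40, 60) for the locked object in the second image, the view centers at (30.0, 50.0) at the locked zoom, and an image lacking the object falls back to an unzoomed view.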