Abstract:
An apparatus for gesture recognition, according to aspects of the disclosure contained herein, includes a processing system configured to obtain at least one physical dimension of a user and determine a gesture of the user based on the at least one physical dimension, independent of a location of the user relative to the apparatus. A method for gesture recognition is also disclosed.
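The abstract names no concrete algorithm, so the following is a minimal sketch of the underlying idea: normalizing observed gesture coordinates by a physical dimension of the user (arm length is assumed here) so that the same motion produces the same trajectory wherever the user stands. All names (normalize_gesture, classify_gesture, the template set) are hypothetical.

    import numpy as np

    def normalize_gesture(points: np.ndarray, arm_length_m: float) -> np.ndarray:
        # Remove absolute location, then scale by the user's arm length so the
        # trajectory is independent of distance from the apparatus.
        centered = points - points.mean(axis=0)
        return centered / arm_length_m

    def classify_gesture(points: np.ndarray, arm_length_m: float,
                         templates: dict) -> str:
        # Match the normalized trajectory to the closest stored template;
        # templates hold normalized trajectories of the same shape as points.
        traj = normalize_gesture(points, arm_length_m)
        return min(templates, key=lambda name: np.linalg.norm(traj - templates[name]))

For example, classify_gesture(samples, 0.7, {"swipe": t1, "circle": t2}) would return whichever template is nearest after normalization.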
Abstract:
Wireless power transfer for integrated cycle drive systems is described. A cycle power system includes a rim that is connected to, and positioned concentrically with, a sealed housing that can rotate about an axis. The cycle power system also includes an integrated drive system disposed within the housing. The integrated drive system includes a battery and a motor for driving a cycle by causing rotational movement of the rim about the axis. Additionally, the cycle power system includes an inductive structure that is disposed within the housing, and that wirelessly charges the battery through induction between the inductive structure and a remote charging station.
Abstract:
Methods, apparatuses, and systems are provided to facilitate the deployment of media content within an augmented reality environment. In at least one implementation, a method is provided that includes extracting a three-dimensional feature of a real-world object captured in a camera view of a mobile device, and attaching a presentation region for a media content item to at least a portion of the three-dimensional feature, responsive to a user input received at the mobile device.
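As a rough illustration only (the disclosure names no APIs), the sketch below models the two steps of the method: a 3-D feature extracted from the camera view, simplified here to a detected plane, and a presentation region attached to it in response to a user tap. All type and function names are invented.

    from dataclasses import dataclass

    @dataclass
    class PlaneFeature:
        # Simplified stand-in for an extracted 3-D feature (a planar surface).
        origin: tuple   # world-space anchor point
        normal: tuple   # surface normal

    @dataclass
    class PresentationRegion:
        feature: PlaneFeature
        media_uri: str

    def attach_media(feature: PlaneFeature, media_uri: str) -> PresentationRegion:
        # Attach the media item's presentation region to the extracted feature;
        # rendering it in the camera view is left to the AR pipeline.
        return PresentationRegion(feature=feature, media_uri=media_uri)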
Abstract:
In an embodiment, a user equipment (UE) groups a plurality of images. The UE displays a first image among the plurality of images, determines an object of interest within the first image and a desired level of zoom, and determines to lock onto the object of interest in association with one or more transitions between the plurality of images. The UE determines to transition to a second image among the plurality of images, and detects, based on the lock determination, the object of interest within the second image. The UE displays the second image by zooming in on the object of interest at a level of zoom that corresponds to the desired level of zoom.
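A minimal sketch of the lock-and-zoom behavior, assuming a caller-supplied detector that can re-find the object of interest in each image; the helper name zoom_on and the use of Pillow are illustrative, not from the disclosure.

    from PIL import Image

    def zoom_on(image: Image.Image, box: tuple, zoom: float) -> Image.Image:
        # Crop around the detected object's box (left, top, right, bottom) and
        # resize back to the frame size, emulating a zoom at the desired level.
        cx, cy = (box[0] + box[2]) // 2, (box[1] + box[3]) // 2
        w, h = image.size
        half_w, half_h = int(w / (2 * zoom)), int(h / (2 * zoom))
        crop = image.crop((cx - half_w, cy - half_h, cx + half_w, cy + half_h))
        return crop.resize((w, h))

    def show_next(image: Image.Image, locked_object_id, zoom: float, detector):
        # If a lock is set, re-detect the object in the new image and zoom to
        # the stored level; otherwise display the image unchanged.
        box = detector(image, locked_object_id) if locked_object_id else None
        return zoom_on(image, box, zoom) if box else image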
Abstract:
Aspects of the disclosure are related to a method for determining a touch pressure level on a touchscreen, comprising: detecting a touch event by the touchscreen; obtaining data relating to features associated with the touch event comprising a capacitance value, a touch area, and/or a touch duration; and determining a touch pressure level based on one or more of the features.
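Since the abstract leaves the mapping open, here is one plausible sketch: a weighted score over the three features, bucketed into discrete pressure levels. The weights, thresholds, and the assumption that capacitance is normalized to [0, 1] are invented for illustration.

    def pressure_level(capacitance: float, area_mm2: float, duration_ms: float) -> int:
        # Combine the features into one score; capacitance is assumed to be
        # pre-normalized to [0, 1], and the weights are illustrative only.
        score = (0.5 * capacitance
                 + 0.3 * min(area_mm2, 100.0) / 100.0
                 + 0.2 * min(duration_ms, 500.0) / 500.0)
        # Bucket the score into three levels: 0 = light, 1 = medium, 2 = firm.
        if score < 0.4:
            return 0
        return 1 if score < 0.8 else 2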
Abstract:
Methods, devices, and computer program products for using touch orientation to distinguish between users are disclosed herein. In one aspect, a method of identifying a user of a touch device from a plurality of users of the touch device is described. The method includes receiving touch data from a touch panel of the touch device, the touch data indicating a user's touch on the touch panel. The method further includes determining an orientation of the user's touch based on the received touch data. Finally, the method includes identifying, based at least in part on the orientation of the touch, which user of the plurality of users touched the device.
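One way to picture the final step, sketched under the assumption that each registered user has a characteristic touch orientation (for instance, learned from where each user sits around a tabletop device); the angle comparison below is illustrative, not from the disclosure.

    def angle_diff(a: float, b: float) -> float:
        # Smallest absolute difference between two angles, in degrees.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def identify_user(touch_angle_deg: float, user_angles: dict) -> str:
        # Pick the registered user whose expected touch orientation is closest
        # to the orientation measured from the touch data.
        return min(user_angles,
                   key=lambda u: angle_diff(touch_angle_deg, user_angles[u]))

For example, identify_user(95.0, {"alice": 90.0, "bob": 270.0}) returns "alice".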
Abstract:
Systems and methods for performing an action based on a detected gesture are provided. The systems and methods detect a direction of an initial touchless gesture and process subsequent touchless gestures based on that direction. The systems and methods translate a coordinate system related to a user device and a gesture library based on the detected direction, such that subsequent touchless gestures can be processed in the translated coordinate system. The systems and methods thus allow a user to make a touchless gesture over a device to interact with it independent of the device's orientation, since the direction of the initial gesture sets the coordinate system, or context, for subsequent gesture detection.
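A minimal sketch of the coordinate translation, assuming the initial gesture's direction is available as a 2-D vector; the function name rotation_to_gesture_frame is hypothetical.

    import math

    def rotation_to_gesture_frame(dx: float, dy: float):
        # Build a rotation that maps the initial gesture direction onto the +x
        # axis; subsequent gestures are interpreted in this rotated frame, so
        # the device's own orientation no longer matters.
        theta = math.atan2(dy, dx)
        cos_t, sin_t = math.cos(-theta), math.sin(-theta)
        def to_gesture_frame(x: float, y: float) -> tuple:
            return (x * cos_t - y * sin_t, x * sin_t + y * cos_t)
        return to_gesture_frame

After an initial left-to-right swipe, for instance, later gestures are rotated so that "rightward" always means "along the initial swipe," whichever way the device is turned.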
Abstract:
Method and apparatus for controlling an augmented reality interface are disclosed. In one embodiment, a method for use with an augmented reality enabled device (ARD) comprises receiving image data for tracking a plurality of objects, identifying an object to be selected from the plurality of objects, determining whether the object has been selected based at least in part on a set of selection criteria, and causing an augmentation to be rendered with the object if it is determined that the object has been selected.
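The abstract does not fix the selection criteria, so the sketch below uses two common stand-ins: proximity of the tracked object to the camera-view center and a minimum dwell time. The thresholds are invented for illustration.

    def is_selected(obj_center: tuple, frame_center: tuple, dwell_s: float,
                    max_dist_px: float = 50.0, min_dwell_s: float = 1.0) -> bool:
        # Treat the object as selected if it has stayed near the center of the
        # camera view for at least a minimum dwell time (illustrative criteria);
        # a selected object would then have its augmentation rendered.
        dx = obj_center[0] - frame_center[0]
        dy = obj_center[1] - frame_center[1]
        return (dx * dx + dy * dy) ** 0.5 <= max_dist_px and dwell_s >= min_dwell_s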
Abstract:
Method, computer program product, and apparatus for providing interactions of tangible and augmented reality objects are disclosed. In one embodiment, a method of controlling a real object using a device having a camera comprises receiving a selection of at least one object, tracking the at least one object in a plurality of images captured by the camera, and causing control signals to be transmitted from the device to the real object via a machine interface based at least in part on the tracking.
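A sketch of the final step only, assuming the "machine interface" is a plain TCP socket carrying JSON commands (the disclosure does not specify a transport) and that the tracker reports a simple pose dictionary; both assumptions are for illustration.

    import json
    import socket

    def send_control(host: str, port: int, tracked_pose: dict) -> None:
        # Translate the tracked pose into a control command for the real object
        # and send it over the (assumed) TCP machine interface.
        command = {"turn_deg": tracked_pose["yaw"], "move_mm": tracked_pose["dz"]}
        with socket.create_connection((host, port)) as conn:
            conn.sendall(json.dumps(command).encode("utf-8"))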
Abstract:
An electronic device is operated by determining its location on a body of a human or an animal, as an ending point of a path from another electronic device. The path is predetermined by measuring, at multiple frequencies, a property indicative of loss of an AC signal that propagates through the body along the path between the pair of electronic devices. The resulting measurements are thereafter used to select a particular path through the body, from among a group of paths through the body that are characterized in one or more training phases, e.g. by use of a classifier. After a particular path through the body is identified, an electronic device at the ending point of that path is configured, e.g. by turning on or turning off a specific sensor, or by setting a rate of transmission of data.
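A minimal sketch of the classification and configuration steps, assuming per-frequency loss measurements in dB and path profiles recorded during the training phase; the nearest-profile rule below stands in for whatever classifier the disclosure has in mind, and all names are hypothetical.

    import numpy as np

    def select_path(loss_db: np.ndarray, path_profiles: dict) -> str:
        # Nearest-profile classifier: compare the measured per-frequency losses
        # against profiles learned in training and return the closest path.
        return min(path_profiles,
                   key=lambda p: np.linalg.norm(loss_db - path_profiles[p]))

    def configure_endpoint(path: str, actions: dict) -> None:
        # Apply the configuration tied to the path's ending point, e.g. a
        # callable that enables a sensor or sets a data-transmission rate.
        actions[path]()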