Abstract:
Various embodiments include processing devices and methods for managing multisensor inputs on a mobile computing device. Various embodiments may include receiving multiple inputs from multiple touch sensors, identifying types of user interactions with the touch sensors from the multiple inputs, identifying sensor input data in a multisensor input data structure corresponding with the types of user interactions, and determining whether the multiple inputs combine as a multisensor input in an entry in the multisensor input data structure having the sensor input data related to a multisensor input response. Various embodiments may include detecting a trigger for a multisensor input mode, entering the multisensor input mode in response to detecting the trigger, and enabling processing of an input from a touch sensor.
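As a rough illustration of the lookup this abstract describes, the sketch below shows a multisensor input data structure whose entries relate sets of (sensor, interaction-type) pairs to a multisensor input response, plus a matcher that decides whether received inputs combine as one of those entries. All sensor names, gesture labels, and responses are hypothetical; this is a minimal sketch, not the claimed implementation.

```python
from typing import Optional

# Hypothetical multisensor input data structure: each entry relates a set of
# (sensor, interaction-type) pairs to a multisensor input response.
MULTISENSOR_INPUT_TABLE = [
    {
        "inputs": {("rear_sensor", "swipe_down"), ("side_sensor", "grip")},
        "response": "scroll_page",
    },
    {
        "inputs": {("rear_sensor", "double_tap"), ("front_screen", "touch_hold")},
        "response": "take_screenshot",
    },
]


def classify_interaction(sensor_id: str, raw_input: dict) -> tuple:
    """Identify the type of user interaction from a raw sensor input (stubbed)."""
    return (sensor_id, raw_input.get("gesture", "unknown"))


def match_multisensor_input(raw_inputs: dict) -> Optional[str]:
    """Return the multisensor input response if the inputs combine as an entry."""
    interactions = {classify_interaction(s, i) for s, i in raw_inputs.items()}
    for entry in MULTISENSOR_INPUT_TABLE:
        if entry["inputs"] <= interactions:  # all required interactions present
            return entry["response"]
    return None  # inputs do not combine as a known multisensor input


# Example: inputs from two touch sensors combine into a single response.
print(match_multisensor_input({
    "rear_sensor": {"gesture": "swipe_down"},
    "side_sensor": {"gesture": "grip"},
}))  # -> "scroll_page"
```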
Abstract:
In some embodiments, a processor of the mobile computing device may receive an input for performing a function with respect to content at the mobile computing device, in which the content is segmented into at least a first command layer having one or more objects and a second command layer having one or more objects. The processor may determine whether the received input is associated with a first object of the first command layer or a second object of the second command layer. The processor may determine a function to be performed on one of the first or second objects based on which command layer is determined to be associated with the received input, and may perform the determined function on the first object or the second object.
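The sketch below illustrates the layer-dependent dispatch just described: the received input's target is resolved to an object in one of two command layers, and the function performed depends on which layer matched. The layer names, object names, and functions are hypothetical placeholders.

```python
# Hypothetical two-layer segmentation of the content, each layer mapping its
# objects to the function that applies when that layer is matched.
CONTENT_LAYERS = {
    "first_layer": {"objects": ["photo_1", "photo_2"], "function": "move_object"},
    "second_layer": {"objects": ["album_a", "album_b"], "function": "open_container"},
}


def resolve_layer(target_object: str):
    """Determine which command layer the received input's target belongs to."""
    for layer_name, layer in CONTENT_LAYERS.items():
        if target_object in layer["objects"]:
            return layer_name, layer["function"]
    return None, None


def handle_input(received_input: dict) -> str:
    """Perform the layer-dependent function on the targeted object."""
    layer, function = resolve_layer(received_input["target"])
    if layer is None:
        return "no-op"
    return f"{function}({received_input['target']})"


print(handle_input({"target": "photo_1"}))  # -> "move_object(photo_1)"
print(handle_input({"target": "album_b"}))  # -> "open_container(album_b)"
```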
Abstract:
A method performed by an electronic device is described. The method includes obtaining one or more trip objectives. The method also includes obtaining one or more evaluation bases. The method further includes identifying an association between at least one site and the one or more trip objectives. The method additionally includes obtaining sensor data from the at least one site. The sensor data includes at least image data. The method also includes performing analysis on the image data to determine dynamic destination information corresponding to the at least one site. The method further includes performing trip planning based on the dynamic destination information, the one or more trip objectives, and the one or more evaluation bases. The method additionally includes providing one or more suggested routes based on the trip planning.
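One way to picture the trip-planning flow in this abstract is the sketch below: image-derived dynamic destination information (here, a stubbed crowd-level estimate and open/closed state) is combined with the trip objectives and evaluation bases to rank candidate sites into a suggested route. All site names, weights, and the scoring formula are assumptions made for illustration only.

```python
def analyze_image(image_data) -> dict:
    """Stand-in for image analysis; returns dynamic destination information."""
    return {"crowd_level": 0.3, "open": True}  # placeholder values


def plan_trip(sites: list, objectives: set, evaluation_bases: dict) -> list:
    """Return suggested sites ordered by a score derived from the evaluation bases."""
    suggested = []
    for site in sites:
        if not objectives & set(site["tags"]):
            continue  # site not associated with any trip objective
        dynamic_info = analyze_image(site["image_data"])
        if not dynamic_info["open"]:
            continue
        # Lower crowding and shorter distance score higher under these bases.
        score = (evaluation_bases["crowd_weight"] * (1 - dynamic_info["crowd_level"])
                 + evaluation_bases["distance_weight"] * (1 / (1 + site["distance_km"])))
        suggested.append((score, site["name"]))
    return [name for _, name in sorted(suggested, reverse=True)]


sites = [
    {"name": "museum", "tags": ["culture"], "distance_km": 2.0, "image_data": b""},
    {"name": "beach", "tags": ["relax"], "distance_km": 8.0, "image_data": b""},
]
print(plan_trip(sites, {"culture", "relax"},
                {"crowd_weight": 0.7, "distance_weight": 0.3}))
# -> ['museum', 'beach']
```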
Abstract:
Disclosed is a mobile device that selects an authentication process based upon sensor inputs and mobile device capabilities. The mobile device may include: a plurality of sensors; and a processor. The processor may be configured to: determine multiple authentication processes based upon sensor inputs and mobile device capabilities for authentication with at least one of an application or a service provider; select an authentication process from the multiple authentication processes that satisfies a security requirement; and execute the authentication process.
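The selection step described above can be sketched as follows: candidate authentication processes are derived from the sensors the device actually has, and one that satisfies the security requirement is chosen. The process names, sensor names, and numeric security levels are hypothetical.

```python
# Hypothetical catalog of authentication processes, the sensors each requires,
# and an assumed numeric security level.
AUTH_PROCESSES = {
    "fingerprint": {"requires": {"fingerprint_sensor"}, "security_level": 3},
    "face_unlock": {"requires": {"front_camera"}, "security_level": 2},
    "pin": {"requires": set(), "security_level": 1},
}


def candidate_processes(available_sensors: set) -> list:
    """Determine authentication processes supported by the device's sensors."""
    return [name for name, p in AUTH_PROCESSES.items()
            if p["requires"] <= available_sensors]


def select_process(available_sensors: set, required_level: int) -> str:
    """Select a candidate process that satisfies the security requirement."""
    for name in candidate_processes(available_sensors):
        if AUTH_PROCESSES[name]["security_level"] >= required_level:
            return name
    raise RuntimeError("no authentication process meets the security requirement")


# A service provider requiring level 2 on a device with only a front camera:
print(select_process({"front_camera"}, required_level=2))  # -> "face_unlock"
```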
Abstract:
Systems, methods, and non-transitory media are provided for tracking operations using data received from a wearable device. An example method can include determining a first position of a wearable device in a physical space; receiving, from the wearable device, position information associated with the wearable device; determining a second position of the wearable device based on the received position information; and tracking, based on the first position and the second position, a movement of the wearable device relative to an electronic device.
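A minimal sketch of the tracking loop follows, assuming (purely for illustration) that the position information reported by the wearable is a displacement from the first position in a shared coordinate frame: the second position is derived from the update, and the movement relative to the electronic device is tracked from the two positions.

```python
import math

def track_wearable(first_position, position_update, device_position):
    """Track wearable movement relative to the electronic device.

    first_position / device_position: (x, y, z) coordinates in the physical space.
    position_update: position information received from the wearable, assumed
    here to be a displacement (dx, dy, dz) since the first position.
    """
    second_position = tuple(a + d for a, d in zip(first_position, position_update))
    movement = tuple(b - a for a, b in zip(first_position, second_position))
    distance_to_device = math.dist(second_position, device_position)
    return {"second_position": second_position,
            "movement": movement,
            "distance_to_device": distance_to_device}


print(track_wearable(first_position=(0.0, 1.0, 0.5),
                     position_update=(0.1, -0.05, 0.0),
                     device_position=(0.0, 0.0, 0.0)))
```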
Abstract:
Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
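The placement step described above might look roughly like the sketch below: once a hand is detected in the scene image, the virtual keyboard is anchored at an offset relative to the detected hand region and that display position is returned for rendering. The detector is stubbed and all coordinates, sizes, and offsets are assumptions.

```python
def detect_hand(image):
    """Stand-in for a hand detector; returns a bounding box or None."""
    return {"x": 320, "y": 400, "width": 120, "height": 120}  # placeholder box


def place_virtual_keyboard(image, keyboard_size=(300, 100), offset_px=30):
    """Return a display position for the virtual keyboard relative to the hand."""
    hand = detect_hand(image)
    if hand is None:
        return None  # no hand in view: do not generate the keyboard
    # Anchor the keyboard centered above the detected hand region.
    x = hand["x"] + hand["width"] // 2 - keyboard_size[0] // 2
    y = hand["y"] - offset_px - keyboard_size[1]
    return {"position": (x, y), "size": keyboard_size}


print(place_virtual_keyboard(image=None))
# -> {'position': (230, 270), 'size': (300, 100)}
```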
Abstract:
Techniques and systems are provided for dynamically adjusting virtual content provided by an extended reality system. In some examples, a system determines a level of distraction of a user of the extended reality system due to virtual content provided by the extended reality system. The system determines whether the level of distraction of the user due to the virtual content exceeds or is less than a threshold level of distraction, where the threshold level of distraction is determined based at least in part on one or more environmental factors associated with a real world environment in which the user is located. The system also adjusts one or more characteristics of the virtual content based on the determination of whether the level of distraction of the user due to the virtual content exceeds or is less than the threshold level of distraction.
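The adjustment logic described above can be sketched as follows: a distraction threshold is derived from environmental factors, and the virtual content's characteristics are toned down or restored depending on whether the user's distraction level exceeds that threshold. The factor names, weights, and the specific characteristics adjusted are hypothetical.

```python
def distraction_threshold(environment: dict) -> float:
    """Derive the allowed distraction level from environmental factors."""
    threshold = 0.8
    if environment.get("user_is_moving"):
        threshold -= 0.3
    if environment.get("nearby_traffic"):
        threshold -= 0.3
    return max(threshold, 0.1)


def adjust_virtual_content(distraction_level: float, environment: dict,
                           content: dict) -> dict:
    """Reduce or restore content prominence based on the threshold comparison."""
    threshold = distraction_threshold(environment)
    adjusted = dict(content)
    if distraction_level > threshold:
        adjusted["opacity"] = content["opacity"] * 0.5   # tone the content down
        adjusted["animations"] = False
    else:
        adjusted["opacity"] = min(content["opacity"] * 1.1, 1.0)
        adjusted["animations"] = True
    return adjusted


print(adjust_virtual_content(
    distraction_level=0.6,
    environment={"user_is_moving": True, "nearby_traffic": False},
    content={"opacity": 1.0, "animations": True},
))  # threshold drops to 0.5, so the content is dimmed
```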