Abstract:
A user interface implemented on a mobile device touchscreen may detect a user input to the touchscreen triggering activation of an expanded reach mode. In an expanded reach mode, implemented functions may include identifying a touch location based on a detected touch event on the touchscreen, identifying a selectable graphical user interface (GUI) object having an edge closest to a touch-extension position that is based on the identified touch location, selecting the identified GUI object as a closest GUI object and displaying a first selection indicator in association with the identified GUI object, and determining whether the identified GUI object has remained the closest GUI object for longer than a predetermined time threshold. If the identified GUI object has remained the closest GUI object longer than the predetermined time threshold, activation of the identified GUI object may be enabled. An indication of the touch-extension position may be projected on the touchscreen.
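The selection flow described in this abstract can be sketched as follows. This is a minimal illustration only: the rectangle representation, the edge-distance metric, and the 0.5-second dwell threshold are assumptions, not details taken from the patent.

```python
import math

DWELL_THRESHOLD_S = 0.5  # assumed "predetermined time threshold"

def edge_distance(obj, point):
    """Distance from a point to the nearest edge of a rectangular GUI object.

    obj is (left, top, right, bottom); point is (x, y).
    """
    x, y = point
    left, top, right, bottom = obj
    dx = max(left - x, 0, x - right)
    dy = max(top - y, 0, y - bottom)
    return math.hypot(dx, dy)

class ExpandedReachTracker:
    """Tracks which selectable GUI object is closest to the touch-extension
    position, and enables activation once the same object has remained
    closest for longer than the dwell threshold."""

    def __init__(self, objects):
        self.objects = objects        # {name: (left, top, right, bottom)}
        self.closest = None
        self.closest_since = None

    def update(self, touch_extension, now):
        # Identify the object whose edge is closest to the touch-extension point.
        name = min(self.objects,
                   key=lambda n: edge_distance(self.objects[n], touch_extension))
        if name != self.closest:
            # New closest object: display a selection indicator, restart the timer.
            self.closest = name
            self.closest_since = now
        activatable = (now - self.closest_since) >= DWELL_THRESHOLD_S
        return name, activatable
```

In this sketch, a caller would feed `update()` each refreshed touch-extension position along with a timestamp; activation is only enabled once the second element of the returned tuple becomes true.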
Abstract:
Disclosed is a mobile device that selects an authentication process based upon sensor inputs and mobile device capabilities. The mobile device may include: a plurality of sensors; and a processor. The processor may be configured to: determine multiple authentication processes based upon sensor inputs and mobile device capabilities for authentication with at least one of an application or a service provider; select an authentication process from the multiple authentication processes that satisfies a security requirement; and execute the authentication process.
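The selection logic this abstract describes can be sketched roughly as below. The candidate process names, the numeric security scores, and the "weakest qualifying process" policy are all illustrative assumptions, not the patented method.

```python
def available_processes(sensors, capabilities):
    """Map sensor inputs and device capabilities to candidate authentication
    processes, each with an assumed security score (higher = stronger)."""
    candidates = []
    if "fingerprint" in sensors:
        candidates.append(("fingerprint", 3))
    if "camera" in sensors and "face_unlock" in capabilities:
        candidates.append(("face", 2))
    candidates.append(("pin", 1))  # always-available fallback (assumption)
    return candidates

def select_process(sensors, capabilities, required_level):
    """Select a process that satisfies the security requirement.

    Here we pick the weakest process that still qualifies (an assumed
    policy); return None if no process meets the requirement.
    """
    qualifying = [p for p in available_processes(sensors, capabilities)
                  if p[1] >= required_level]
    return min(qualifying, key=lambda p: p[1])[0] if qualifying else None
```

A real implementation would presumably derive the requirement from the application or service provider being authenticated against, then execute the returned process.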
Abstract:
Techniques and systems are provided for dynamically adjusting virtual content provided by an extended reality system. In some examples, a system determines a level of distraction of a user of the extended reality system due to virtual content provided by the extended reality system. The system determines whether the level of distraction of the user due to the virtual content exceeds or is less than a threshold level of distraction, where the threshold level of distraction is determined based at least in part on one or more environmental factors associated with a real world environment in which the user is located. The system also adjusts one or more characteristics of the virtual content based on the determination of whether the level of distraction of the user due to the virtual content exceeds or is less than the threshold level of distraction.
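The threshold-and-adjust loop described here can be sketched as follows. The environmental factors, their weights, and the opacity adjustment are assumptions chosen for illustration; the patent does not specify them.

```python
def distraction_threshold(env_factors):
    """Derive the threshold level of distraction from environmental factors:
    riskier real-world contexts tolerate less distraction.
    Base value and weights are assumed."""
    base = 0.8
    if env_factors.get("moving_vehicle_nearby"):
        base -= 0.4
    if env_factors.get("walking"):
        base -= 0.2
    return max(base, 0.1)

def adjust_content(opacity, distraction, env_factors):
    """Adjust one characteristic of the virtual content (here, opacity)
    depending on whether measured distraction exceeds the threshold."""
    threshold = distraction_threshold(env_factors)
    if distraction > threshold:
        return max(opacity - 0.25, 0.0)   # dim distracting content
    return min(opacity + 0.1, 1.0)        # gradually restore it
```

The same structure would apply to other content characteristics the abstract leaves open, such as size, placement, or audio volume.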
Abstract:
Various embodiments include processing devices and methods for managing multisensor inputs on a mobile computing device. Various embodiments may include receiving multiple inputs from multiple touch sensors, identifying types of user interactions with the touch sensors from the multiple inputs, identifying sensor input data in a multisensor input data structure corresponding with the types of user interactions, and determining whether the multiple inputs combine as a multisensor input in an entry in the multisensor input data structure having the sensor input data related to a multisensor input response. Various embodiments may include detecting a trigger for a multisensor input mode, entering the multisensor input mode in response to detecting the trigger, and enabling processing of an input from a touch sensor.
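The table lookup this abstract describes can be sketched as a dictionary keyed by identified interaction types. The table contents, the gesture classifier, and the response names are hypothetical.

```python
# Assumed multisensor input data structure: an entry maps a combination of
# (sensor, interaction-type) pairs to a multisensor input response.
MULTISENSOR_TABLE = {
    (("front", "tap"), ("rear", "tap")): "screenshot",
    (("front", "swipe"), ("rear", "hold")): "one_handed_mode",
}

def classify(sensor, raw):
    """Identify the type of user interaction from a raw sensor input.
    Stubbed: assumes the raw input already carries a gesture label."""
    return (sensor, raw["gesture"])

def combine_inputs(raw_inputs):
    """Determine whether the multiple inputs combine as a multisensor input:
    return the associated response if a table entry matches, else None."""
    key = tuple(sorted(classify(s, r) for s, r in raw_inputs))
    return MULTISENSOR_TABLE.get(key)
```

Sorting the identified pairs makes the lookup order-independent, so it does not matter which sensor's input arrives first.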
Abstract:
Techniques and systems are provided for providing recommendations for extended reality systems. In some examples, a system determines one or more environmental features associated with a real-world environment of an extended reality system. The system determines one or more user features associated with a user of the extended reality system. The system also outputs, based on the one or more environmental features and the one or more user features, a notification associated with at least one application supported by the extended reality system.
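The gating described here, where environmental and user features jointly determine the output notification, can be sketched with assumed rules. The feature names, applications, and notification text are illustrative only.

```python
def recommend(env_features, user_features):
    """Output a notification for a supported application based on both
    environmental features and user features, or None if nothing applies."""
    if env_features.get("in_gym") and user_features.get("likes_fitness"):
        return "Open the workout tracker?"
    if env_features.get("at_home") and user_features.get("evening_routine"):
        return "Start the meditation app?"
    return None
```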
Abstract:
A head-mounted device may include a processor configured to receive information from a sensor that is indicative of a position of the head-mounted device relative to a reference point on a face of a user; and adjust a rendering of an item of virtual content based on the position or a change in the position of the device relative to the face. The sensor may be a distance sensor, and the processor may be configured to adjust the rendering of the item of virtual content based on a measured distance or change of distance between the head-mounted device and the reference point on the user's face. The reference point on the user's face may be one or both of the user's eyes.
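One way to picture the distance-based adjustment is a rendering scale tied to the measured device-to-eye distance, so the virtual item's apparent size stays stable as the headset shifts on the face. The nominal distance and clamp range below are assumptions, not values from the patent.

```python
NOMINAL_DISTANCE_MM = 20.0  # assumed calibrated device-to-eye distance

def render_scale(measured_distance_mm):
    """Scale factor for the virtual item given the distance sensor's reading.

    Scaling proportionally to distance compensates for the item appearing
    larger as the device slides closer to the eye, and vice versa.
    """
    scale = measured_distance_mm / NOMINAL_DISTANCE_MM
    return min(max(scale, 0.5), 2.0)  # clamp to a sane range (assumption)
```

A change-of-distance variant would apply the same ratio between successive sensor readings rather than against a fixed nominal value.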
Abstract:
Methods, devices, and non-transitory processor-readable media of various embodiments may enable contextual operation of a mobile computing device including a capacitive input sensor, which may be a rear area capacitive input sensor. In various embodiments, a processor of a mobile computing device including a rear area capacitive input sensor may monitor sensor measurements and generate an interaction profile based on the sensor measurements. The processor of the mobile computing device may determine whether the interaction profile is inconsistent with in-hand operation and may increase sensitivity of the capacitive input sensor in response to determining that the interaction profile is inconsistent with in-hand operation.
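The monitor-profile-adjust loop described here can be sketched as below. The contact threshold, the in-hand heuristic, and the gain multiplier are assumed values for illustration.

```python
def interaction_profile(samples):
    """Generate a simple interaction profile from capacitance samples:
    the fraction of samples indicating contact (threshold assumed)."""
    touched = [s for s in samples if s > 0.2]
    return {"contact_ratio": len(touched) / len(samples)}

def adjust_sensitivity(gain, samples):
    """Increase capacitive sensor gain when the interaction profile is
    inconsistent with in-hand operation (e.g. device on a table, or a
    gloved grip producing weak readings)."""
    profile = interaction_profile(samples)
    in_hand = profile["contact_ratio"] > 0.3  # assumed heuristic
    return gain if in_hand else gain * 1.5    # assumed gain boost
```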
Abstract:
Aspects may relate to a device to authenticate a user that comprises a processor and a sensor. The processor, coupled to the sensor, may be configured to: receive at least one fingerprint scan from the sensor inputted by the user during an enrollment process to define a fingerprint password, the at least one fingerprint scan including one or more partial fingerprint scans from a same finger or different fingers of the user; and authenticate the user based upon the defined fingerprint password inputted through the sensor by the user.
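The enrollment-then-match flow can be sketched as follows. Representing each partial scan as a (finger, region) label and hashing the ordered sequence is an assumption for illustration; real fingerprint matching uses fuzzy biometric comparison, not exact hashes.

```python
import hashlib

def _digest(scans):
    """Hash an ordered sequence of (finger, region) partial-scan labels."""
    h = hashlib.sha256()
    for finger, region in scans:
        h.update(f"{finger}:{region};".encode())
    return h.hexdigest()

def enroll(scans):
    """Define the fingerprint password from the enrollment scan sequence."""
    return _digest(scans)

def authenticate(stored_password, scans):
    """Authenticate only if the same partial scans are inputted in the
    same order as during enrollment."""
    return _digest(scans) == stored_password
```

The ordering sensitivity is the point: the same partial scans presented in a different order form a different password.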