Abstract:
A system and method for determining positional and activity information of a mobile device in synchronization with the wake-up period of the mobile device, performing antenna beam management, and adjusting the wake-up period based on the positional and activity information of the mobile device. A mobile device comprises: a memory; at least one sensor for detecting data; and a processor communicatively coupled to the memory, the processor configured to: synchronize the at least one sensor with a wake-up period of the mobile device; receive the data detected by the at least one sensor; determine positional information based on the received data; determine activity information based on the received data; estimate a forward position of the mobile device based on the positional information and the activity information; and perform a management of antenna beams of the mobile device based on the positional information, the activity information, and the forward position.
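As an illustration of the loop this abstract describes, the sketch below dead-reckons a forward position from sensor-derived position, velocity, and activity at each wake-up instant and picks the codebook beam closest to the predicted bearing. All names (SensorSample, select_beam, the codebook angles, the wake-up period values) are hypothetical and not taken from the disclosure.

    import math
    from dataclasses import dataclass

    @dataclass
    class SensorSample:          # hypothetical container for fused sensor data
        position: tuple          # (x, y) in meters
        velocity: tuple          # (vx, vy) in meters/second
        activity: str            # e.g. "stationary", "walking", "driving"

    def estimate_forward_position(sample: SensorSample, wake_up_period_s: float):
        """Dead-reckon where the device will be at the next wake-up instant."""
        x, y = sample.position
        vx, vy = sample.velocity
        return (x + vx * wake_up_period_s, y + vy * wake_up_period_s)

    def select_beam(forward_position, beam_directions_deg):
        """Pick the codebook beam whose boresight best matches the predicted bearing."""
        bearing = math.degrees(math.atan2(forward_position[1], forward_position[0])) % 360
        return min(beam_directions_deg, key=lambda b: abs((b - bearing + 180) % 360 - 180))

    # Sensors sampled once per wake-up period; the period adapts to the detected activity.
    sample = SensorSample(position=(10.0, 5.0), velocity=(1.5, 0.0), activity="walking")
    wake_up_period_s = 0.32 if sample.activity == "stationary" else 0.08
    predicted = estimate_forward_position(sample, wake_up_period_s)
    beam = select_beam(predicted, beam_directions_deg=[0, 45, 90, 135, 180, 225, 270, 315])
    print(predicted, beam)
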
Abstract:
Systems and techniques are described herein. For example, a process can include obtaining first sensor measurement data and second sensor measurement data from one or more sensors. In some cases, the first sensor measurement data can be associated with a first time and the second sensor measurement data can be associated with a second time occurring after the first time. In some aspects, the process includes determining that the first sensor measurement data and the second sensor measurement data satisfy at least one batching condition. In some examples, the process includes, based on determining that the first sensor measurement data and the second sensor measurement data satisfy the at least one batching condition, generating a sensor measurement data batch including the first sensor measurement data, the second sensor measurement data, and at least one target sensor measurement data. In some examples, the process includes outputting the sensor measurement data batch.
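A minimal sketch of the batching behavior described above, assuming a hypothetical batching condition (two measurements captured within a fixed time span) and a caller-supplied target measurement; none of the names or thresholds below come from the disclosure.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Measurement:               # hypothetical sensor measurement record
        timestamp: float             # seconds
        value: float

    @dataclass
    class Batcher:
        max_span_s: float = 1.0      # example batching condition: measurements within 1 s
        pending: List[Measurement] = field(default_factory=list)

        def satisfies_batching_condition(self, first: Measurement, second: Measurement) -> bool:
            return 0.0 <= second.timestamp - first.timestamp <= self.max_span_s

        def add(self, m: Measurement, target: Measurement) -> Optional[List[Measurement]]:
            """Return a batch (including a target measurement) once the condition is met."""
            self.pending.append(m)
            if len(self.pending) >= 2 and self.satisfies_batching_condition(
                    self.pending[0], self.pending[-1]):
                batch = self.pending + [target]   # first, second, and target measurement data
                self.pending = []
                return batch
            return None

    batcher = Batcher()
    target = Measurement(timestamp=2.0, value=0.0)      # e.g. a sample aligned to another event
    print(batcher.add(Measurement(1.0, 0.1), target))   # None: only the first measurement so far
    print(batcher.add(Measurement(1.5, 0.2), target))   # batch of first, second, and target
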
Abstract:
In some aspects, the present disclosure provides a method for managing a command queue in a universal flash storage (UFS) host device. The method includes determining to power on a first subsystem of a system-on-a-chip (SoC), wherein the determination to power on the first subsystem is made by a second subsystem of the SoC based on detection of user identity data contained in a first image frame during an initial biometric detection process. In certain aspects, the second subsystem is configured to operate independently of the first subsystem and to control power to the first subsystem. In certain aspects, the second subsystem includes a second optical sensor, a set of ambient sensors, and a second processor configured to detect, via the set of ambient sensors, an event comprising one or more of an environmental event outside of the device or a motion event of the device.
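The sketch below illustrates, under assumed names and stand-in sensor reads, how an always-on second subsystem could gate power to the first subsystem: an ambient or motion event triggers a low-power image capture, and the first subsystem is powered on only if the initial biometric detection finds user identity data in the frame. It is not the disclosed implementation.

    import random

    # Hypothetical stand-ins for the always-on (second) subsystem's inputs; the real
    # disclosure reads an optical sensor and a set of ambient sensors in hardware.
    def read_low_power_image_frame():
        return {"contains_user_identity": random.random() > 0.5}

    def detect_user_identity(frame) -> bool:
        """Initial biometric detection: is user identity data present in the frame?"""
        return frame["contains_user_identity"]

    def read_ambient_sensors():
        return {"motion_event": random.random() > 0.8,
                "environmental_event": random.random() > 0.9}

    def second_subsystem_loop(power_on_first_subsystem):
        """Second subsystem runs independently and controls power to the first subsystem."""
        ambient = read_ambient_sensors()
        if ambient["motion_event"] or ambient["environmental_event"]:
            frame = read_low_power_image_frame()
            if detect_user_identity(frame):
                power_on_first_subsystem()        # wake the main SoC subsystem

    second_subsystem_loop(lambda: print("first subsystem powered on"))
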
Abstract:
Disclosed are an apparatus and method for power-efficient processor scheduling of features. In one embodiment, features may be scheduled for sequential computing, and each scheduled feature may receive a sensor data sample as input. In one embodiment, scheduling may be based at least in part on each respective feature's estimated power usage. In one embodiment, a first feature in the sequential schedule of features may be computed, and a termination condition may be evaluated before computing a second feature in the sequential schedule of features.
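A small sketch of the scheduling idea, assuming hypothetical feature descriptors with an estimated_power_mw field and a caller-supplied termination condition; the ordering heuristic (cheapest feature first) is an assumption for illustration, not a detail taken from the disclosure.

    def schedule_features(features):
        """Order feature computations by estimated power usage (cheapest first)."""
        return sorted(features, key=lambda f: f["estimated_power_mw"])

    def run_schedule(features, sensor_sample, terminate):
        results = {}
        for feature in schedule_features(features):
            results[feature["name"]] = feature["compute"](sensor_sample)
            if terminate(results):      # evaluate termination condition between features
                break                   # skip remaining, more expensive features
        return results

    features = [
        {"name": "energy", "estimated_power_mw": 5.0,
         "compute": lambda s: sum(x * x for x in s)},
        {"name": "mean",   "estimated_power_mw": 0.5,
         "compute": lambda s: sum(s) / len(s)},
    ]
    # Hypothetical termination condition: stop once the cheap feature shows the device is idle.
    print(run_schedule(features, sensor_sample=[0.0, 0.01, 0.0],
                       terminate=lambda r: r.get("mean", 1.0) < 0.05))
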
Abstract:
Techniques for inertial navigation aided with multi-interval pose measurements are disclosed. The techniques can include obtaining inertial measurement unit (IMU) data from an IMU, generating, based on the IMU data, a respective pose measurement vector according to each of a plurality of machine-learning models, resulting in a plurality of pose measurement vectors, wherein each of the plurality of pose measurement vectors is associated with a respective one of multiple motion classes, and determining a device pose estimate based on the IMU data and the plurality of pose measurement vectors.
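The sketch below shows one way the per-motion-class pose measurement vectors could be combined with IMU propagation; the stand-in "models," the confidence weights, and the complementary-filter style update are placeholders for the machine-learning models and fusion described in the disclosure, not its actual method.

    import numpy as np

    # Hypothetical per-motion-class models: each maps an IMU window to a displacement
    # measurement vector plus a confidence weight (a trained network in the disclosure).
    def walking_model(imu_window):    return np.array([1.0, 0.0, 0.0]), 0.7
    def running_model(imu_window):    return np.array([2.1, 0.1, 0.0]), 0.2
    def stationary_model(imu_window): return np.array([0.0, 0.0, 0.0]), 0.1

    MOTION_CLASS_MODELS = {"walking": walking_model, "running": running_model,
                           "stationary": stationary_model}

    def propagate_with_imu(pose, imu_window, dt):
        """Strapdown-style propagation placeholder: integrate mean acceleration twice."""
        accel = np.mean(imu_window, axis=0)
        return pose + 0.5 * accel * dt * dt

    def estimate_pose(pose, imu_window, dt):
        pose_pred = propagate_with_imu(pose, imu_window, dt)
        # One measurement vector per motion class; blend them by model confidence.
        vectors, weights = zip(*(m(imu_window) for m in MOTION_CLASS_MODELS.values()))
        measurement = np.average(vectors, axis=0, weights=weights)
        return 0.5 * pose_pred + 0.5 * (pose + measurement)   # simple complementary update

    imu_window = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, -0.1]])
    print(estimate_pose(np.zeros(3), imu_window, dt=1.0))
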
Abstract:
Disclosed are systems, apparatuses, processes, and computer-readable media for estimating shock severity during handling or delivery of assets. For example, a process can include capturing, by a device, a measured acceleration for an asset associated with the device; declipping, by the device, the measured acceleration for the asset to determine a reconstructed acceleration for the asset; determining, by the device, a velocity estimate for the asset based on the reconstructed acceleration; and determining, by the device, whether the asset has experienced a severe shock based on the velocity estimate.
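A rough sketch of this pipeline, with a placeholder declipping step (interpolation across saturated samples rather than true peak reconstruction) and an assumed velocity-change threshold for declaring a severe shock; the numbers and method choices are illustrative only.

    import numpy as np

    def declip(accel_g, clip_level_g, dt):
        """Placeholder declipper: fill saturated samples by interpolating from the
        unclipped neighbors. A real declipper would reconstruct the peak above the
        clip level; the disclosure does not fix a specific method here."""
        a = np.asarray(accel_g, dtype=float)
        clipped = np.abs(a) >= clip_level_g
        if clipped.any() and (~clipped).any():
            t = np.arange(len(a)) * dt
            a[clipped] = np.interp(t[clipped], t[~clipped], a[~clipped])
        return a

    def velocity_change(accel_g, dt, g=9.81):
        """Integrate reconstructed acceleration (in g) to a velocity-change estimate (m/s)."""
        return float(np.sum(np.asarray(accel_g)) * g * dt)

    def severe_shock(accel_g, dt, clip_level_g=16.0, velocity_threshold_mps=2.5):
        reconstructed = declip(accel_g, clip_level_g, dt)
        return abs(velocity_change(reconstructed, dt)) > velocity_threshold_mps

    # A 1 kHz capture that saturates a +/-16 g accelerometer during a drop impact.
    samples = [0.0, 4.0, 16.0, 16.0, 16.0, 6.0, 1.0, 0.0]
    print(severe_shock(samples, dt=0.001))
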
Abstract:
In an aspect, a user equipment (UE) determines a target orientation associated with the UE for a two-way satellite communication session that provides a line of sight (LOS) alignment between an antenna of the UE and a satellite antenna, determines a current orientation associated with the UE, determines difference information associated with a difference between the current orientation of the UE and the target orientation of the UE, and transmits, to a user feedback component, an instruction to provide user feedback that is based on the difference information and that comprises non-visual user feedback or visual feedback that is separate from a display screen of the UE.
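The sketch below maps the difference between the current and target orientation (azimuth, elevation) to a non-visual cue such as a vibration pattern; the tolerance, intensity scaling, and feedback vocabulary are assumptions for illustration rather than details from the disclosure.

    def angular_difference_deg(current_deg, target_deg):
        """Signed smallest rotation (degrees) from the current to the target orientation."""
        return (target_deg - current_deg + 180.0) % 360.0 - 180.0

    def feedback_instruction(current, target, aligned_tolerance_deg=5.0):
        """Map (azimuth, elevation) differences to a non-visual cue, e.g. vibration patterns."""
        d_az = angular_difference_deg(current[0], target[0])
        d_el = angular_difference_deg(current[1], target[1])
        if abs(d_az) <= aligned_tolerance_deg and abs(d_el) <= aligned_tolerance_deg:
            return {"pattern": "steady", "meaning": "LOS aligned, hold position"}
        # Stronger vibration the further the UE is from the target orientation.
        intensity = min(1.0, (abs(d_az) + abs(d_el)) / 180.0)
        direction = "rotate right" if d_az > 0 else "rotate left"
        return {"pattern": "pulsed", "intensity": round(intensity, 2), "hint": direction}

    # Current UE orientation vs. target orientation toward the satellite (azimuth, elevation).
    print(feedback_instruction(current=(100.0, 10.0), target=(140.0, 35.0)))
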
Abstract:
Techniques and systems are provided for attention evaluation by an extended reality system. In some examples, the system determines one or more regions of interest (ROI) for an image displayed to a user. The system may also receive eye tracking information indicating an area of the image that the user is looking at. The system may further generate focus statistics based on the area of the image at which the user is looking and the one or more ROI, and output the generated focus statistics.
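As an illustration, the sketch below computes one simple focus statistic, the fraction of eye-tracking samples falling inside each ROI; the rectangular ROI format and the statistic itself are assumptions, since the abstract does not pin down a specific metric.

    def point_in_roi(gaze_xy, roi):
        x, y = gaze_xy
        return roi["x"] <= x <= roi["x"] + roi["w"] and roi["y"] <= y <= roi["y"] + roi["h"]

    def focus_statistics(gaze_samples, rois):
        """Fraction of eye-tracking samples that fall inside each region of interest."""
        stats = {roi["name"]: 0 for roi in rois}
        for gaze in gaze_samples:
            for roi in rois:
                if point_in_roi(gaze, roi):
                    stats[roi["name"]] += 1
        total = max(len(gaze_samples), 1)
        return {name: count / total for name, count in stats.items()}

    # One frame: two ROIs (e.g. a virtual instructor and a whiteboard) and gaze points in pixels.
    rois = [{"name": "instructor", "x": 0, "y": 0, "w": 200, "h": 400},
            {"name": "whiteboard", "x": 300, "y": 0, "w": 500, "h": 400}]
    gaze_samples = [(120, 200), (150, 220), (620, 100), (900, 900)]
    print(focus_statistics(gaze_samples, rois))   # e.g. {'instructor': 0.5, 'whiteboard': 0.25}
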
Abstract:
A user equipment (UE) controls power consumption of the UE based on mobility information and channel conditions experienced by the UE. In one instance, the UE determines its level of mobility based on a Doppler frequency spread of received communications. The UE disables a motion sensor when the level of mobility is above a first threshold. The UE then controls the communications based on the motion sensor and channel conditions experienced by the UE when the level of mobility is below the first threshold.
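A sketch of this gating logic, with a Doppler-spread-to-speed conversion and threshold values chosen purely for illustration: above the mobility threshold the motion sensor is disabled to save power, and below it the motion sensor output and channel conditions together set the measurement behavior.

    def mobility_level(doppler_spread_hz, carrier_hz=3.5e9, speed_of_light=3.0e8):
        """Rough speed estimate (m/s) from the Doppler frequency spread of received signals."""
        return doppler_spread_hz * speed_of_light / carrier_hz

    def configure_power_saving(doppler_spread_hz, channel_quality_db,
                               high_mobility_mps=5.0, motion_sensor_reading=None):
        speed = mobility_level(doppler_spread_hz)
        if speed > high_mobility_mps:
            # High mobility: the motion sensor adds little, so disable it to save power.
            return {"motion_sensor": "disabled", "measurement_rate": "high"}
        # Low mobility: combine motion sensor output with channel conditions.
        stationary = motion_sensor_reading is not None and motion_sensor_reading < 0.1
        relaxed = stationary and channel_quality_db > 10.0
        return {"motion_sensor": "enabled",
                "measurement_rate": "low" if relaxed else "normal"}

    print(configure_power_saving(doppler_spread_hz=10.0, channel_quality_db=15.0,
                                 motion_sensor_reading=0.02))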