Abstract:
The present disclosure is directed at pairing a host electronic device with a peripheral electronic device using visual recognition and deep learning techniques. In particular, the host device may receive an indication of a peripheral device via a camera or as a result of searching for the peripheral device (e.g., due to startup of a related application or periodic scanning). The host device may also receive an image of the peripheral device (e.g., captured via the camera), and determine a visual distance to the peripheral device based on the image. The host device may also determine a signal strength of the peripheral device, and determine a signal distance to the peripheral device based on the signal strength. The host device may pair with the peripheral device if the visual distance and the signal distance are approximately equal.
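
A minimal sketch of the distance-matching check described above. The estimator functions, the path-loss constants, and the tolerance value are illustrative assumptions; the abstract does not specify how either distance is computed.

def signal_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    # Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def visual_distance(bbox_height_px, known_height_m, focal_length_px):
    # Pinhole-camera estimate from the peripheral's apparent size in the image
    return known_height_m * focal_length_px / bbox_height_px

def should_pair(rssi_dbm, bbox_height_px, known_height_m, focal_length_px, tolerance_m=0.5):
    d_signal = signal_distance(rssi_dbm)
    d_visual = visual_distance(bbox_height_px, known_height_m, focal_length_px)
    # Pair only when the two independent estimates agree within the tolerance
    return abs(d_signal - d_visual) <= tolerance_m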
Abstract:
An embodiment of a semiconductor package apparatus may include technology to analyze an electronic image to determine indirect information including one or more of shadow information and reflection information, and provide the indirect information to a vehicle guidance system. Other embodiments are disclosed and claimed.
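
A minimal sketch of the image-analysis flow described above, assuming a simple luminance-threshold heuristic for shadow and reflection regions; the actual analysis method and the guidance-system interface are not specified in the abstract, and the update() call below is hypothetical.

import numpy as np

def extract_indirect_information(image_gray):
    # image_gray: 2-D numpy array of luminance values in [0, 255]
    shadow_mask = image_gray < 50        # very dark regions as candidate shadows
    reflection_mask = image_gray > 230   # very bright regions as candidate reflections
    return {"shadow": shadow_mask, "reflection": reflection_mask}

def provide_to_guidance(indirect_info, guidance_system):
    # guidance_system is a hypothetical object exposing an update() method
    guidance_system.update(shadows=indirect_info["shadow"],
                           reflections=indirect_info["reflection"])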
Abstract:
Various systems and methods for optimizing use of environmental and operational sensors are provided. A system for improving sensor efficiency includes object recognition circuitry (108) implementable in a vehicle (102) to detect an object ahead of the vehicle (102), the object recognition circuitry (108) to use an object detection operation to detect the object from sensor data of a sensor array, and the object recognition circuitry (108) configured to use at least one object tracking operation to track the object between successive object detection operations; and a processor (106) subsystem to: calculate a relative velocity of the object with respect to the vehicle (102); and configure the object recognition circuitry (108) to adjust intervals between successive object detection operations based on the relative velocity of the object.
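
A minimal sketch of the interval-adjustment policy described above. The sign convention, the mapping from relative velocity to detection interval, and the clamp values are illustrative assumptions; the abstract only states that intervals are adjusted based on the relative velocity of the object.

def detection_interval_s(relative_velocity_mps, min_interval_s=0.05, max_interval_s=1.0):
    # Assumed convention: relative_velocity_mps is the rate of change of the
    # distance to the object, so a negative value means the object is closing.
    closing_speed = max(0.0, -relative_velocity_mps)
    if closing_speed == 0.0:
        return max_interval_s          # slow or receding objects: rely on tracking longer
    interval = 10.0 / closing_speed    # illustrative inverse relationship
    return min(max_interval_s, max(min_interval_s, interval))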
Abstract:
A mechanism is described for facilitating deep learning-based real-time detection and correction of compromised sensors in autonomous machines according to one embodiment. An apparatus of embodiments, as described herein, includes detection and capturing logic to facilitate one or more sensors to capture one or more images of a scene, where an image of the one or more images is determined to be unclear, where the one or more sensors include one or more cameras. The apparatus further comprises classification and prediction logic to facilitate a deep learning model to identify, in real-time, a sensor associated with the image.
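
A minimal sketch of the detection-and-classification flow described above, assuming a generic binary classifier (clear vs. unclear) standing in for the deep learning model; the actual model architecture and any correction logic are not detailed in the abstract.

def detect_compromised_sensors(frames, is_unclear):
    # frames: list of (sensor_id, image) pairs captured from the cameras
    # is_unclear: hypothetical deep-learning classifier returning True for
    #             blurred, occluded, or otherwise unclear images
    compromised = []
    for sensor_id, image in frames:
        if is_unclear(image):
            compromised.append(sensor_id)   # identify the sensor behind the bad image
    return compromised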
Abstract:
Various systems and methods for optimizing use of environmental and operational sensors are described herein. A system for improving sensor efficiency includes object recognition circuitry implementable in a vehicle to detect an object ahead of the vehicle, the object recognition circuitry configured to use an object detection operation to detect the object from sensor data of a sensor array, and the object recognition circuitry configured to use at least one object tracking operation to track the object between successive object detection operations; and a processor subsystem to: calculate a relative velocity of the object with respect to the vehicle; and configure the object recognition circuitry to adjust intervals between successive object detection operations based on the relative velocity of the object.
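
Complementing the interval sketch above, a minimal sketch of alternating detection and tracking operations, assuming hypothetical detect() and track() routines; the abstract does not prescribe a particular detector or tracker.

def process_frames(frames, detect, track, detection_interval=5):
    # Run the expensive detector every `detection_interval` frames and the
    # cheaper tracker in between, as described in the abstract.
    tracked_objects = []
    results = []
    for i, frame in enumerate(frames):
        if i % detection_interval == 0:
            tracked_objects = detect(frame)                  # object detection operation
        else:
            tracked_objects = track(frame, tracked_objects)  # object tracking operation
        results.append(tracked_objects)
    return results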
Abstract:
Example haptic gloves for virtual reality systems and related methods are disclosed herein. An example apparatus disclosed herein includes a glove to be worn on a hand of a user, an ultrasonic array disposed on an inner surface of the glove, and a control unit to activate the ultrasonic array to generate haptic feedback on the hand of the user.
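
A minimal sketch of the control-unit behavior described above, assuming a hypothetical transducer-driver interface; the drive frequency, the intensity mapping, and the driver API are illustrative only.

def render_haptic_feedback(contact_points, driver, carrier_hz=40000):
    # contact_points: list of (transducer_index, intensity) pairs derived from
    #                 the virtual object the user's hand is touching, intensity in [0, 1]
    # driver: hypothetical interface to the glove's ultrasonic array
    for index, intensity in contact_points:
        driver.set_amplitude(index, intensity)   # stronger contact -> stronger output
        driver.set_frequency(index, carrier_hz)  # ultrasonic carrier for the transducer
    driver.commit()                              # apply the update to the array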
Abstract:
In an embodiment, a method includes receiving user interface information having event registrations for a user interface to be displayed on a display of a system, partitioning the display into an unused display area and an active display area based on the event registrations, and power managing the unused display area while maintaining the active display area fully powered. Other embodiments are described and claimed.
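
A minimal sketch of the partitioning step described above, assuming event registrations carry rectangular regions of interest and that the display is managed in horizontal bands; the partitioning granularity and the power-management mechanism (here a hypothetical panel.set_band_power() call) are not specified in the abstract.

def partition_display(display_height, event_regions, band_height=64):
    # event_regions: list of (top, bottom) pixel rows covered by UI event registrations
    active_bands, unused_bands = [], []
    for band_top in range(0, display_height, band_height):
        band_bottom = min(band_top + band_height, display_height)
        covered = any(top < band_bottom and bottom > band_top
                      for top, bottom in event_regions)
        (active_bands if covered else unused_bands).append((band_top, band_bottom))
    return active_bands, unused_bands   # unused bands can be power managed

def apply_power_policy(panel, active_bands, unused_bands):
    # panel is a hypothetical display-controller interface
    for band in active_bands:
        panel.set_band_power(band, full=True)    # keep the active area fully powered
    for band in unused_bands:
        panel.set_band_power(band, full=False)   # e.g., dim or gate the unused segment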