Abstract:
System and method for correcting for impulse noise in speech recognition systems. One example system includes a microphone, a speaker, and an electronic processor. The electronic processor is configured to receive an audio signal representing an utterance. The electronic processor is configured to detect, within the utterance, the impulse noise, and, in response, generate an annotated utterance including a timing of the impulse noise. The electronic processor is configured to segment the annotated utterance into silence, voice content, and other content, and, when a length of the other content is greater than or equal to an average word length for the annotated utterance, determine, based on the voice content, an intent portion and an entity portion. The electronic processor is configured to generate a voice prompt based on the timing of the impulse noise and the intent portion and/or the entity portion, and to play the voice prompt.
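A minimal Python sketch of the described correction flow, assuming a simple segment representation, an illustrative average word length, and invented prompt wording; none of these names or thresholds come from the claims:

```python
# Minimal sketch of the described correction flow; all names are illustrative,
# not the patented implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Segment:
    kind: str          # "silence", "voice", or "other"
    start_s: float
    end_s: float

@dataclass
class AnnotatedUtterance:
    segments: List[Segment]
    impulse_noise_time_s: Optional[float]  # timing of the detected impulse noise

def handle_utterance(utt: AnnotatedUtterance, avg_word_len_s: float) -> Optional[str]:
    """Generate a clarifying voice prompt when impulse noise may have masked a word."""
    other_len = sum(s.end_s - s.start_s for s in utt.segments if s.kind == "other")
    if other_len < avg_word_len_s or utt.impulse_noise_time_s is None:
        return None  # nothing was masked for long enough to matter
    voice = [s for s in utt.segments if s.kind == "voice"]
    # Illustrative heuristic: assume the intent precedes the entity in the voice content.
    intent_known = any(s.start_s < utt.impulse_noise_time_s for s in voice)
    entity_known = any(s.start_s > utt.impulse_noise_time_s for s in voice)
    if intent_known and not entity_known:
        return "I heard your request, but a noise covered the details. What was the item?"
    if entity_known and not intent_known:
        return "I caught the item, but a noise covered what you wanted to do with it."
    return "A loud noise interrupted you. Could you please repeat that?"

if __name__ == "__main__":
    utt = AnnotatedUtterance(
        segments=[Segment("voice", 0.0, 0.8), Segment("other", 0.8, 1.4), Segment("silence", 1.4, 1.6)],
        impulse_noise_time_s=1.0,
    )
    print(handle_utterance(utt, avg_word_len_s=0.4))
```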
Abstract:
A system and method for content presentation selection. One method includes displaying, on a display of a portable device, a plurality of tiles. The method includes receiving a first gesture-based input corresponding to a selected tile of the plurality of tiles. The method includes selecting a first application based on the content of the selected tile. The method includes superimposing, on or near a first portion of the selected tile, a first icon corresponding to the first application. The method includes receiving a second gesture-based input selecting the first icon. The method includes retrieving, from the first application, a first application view based on the content. The method includes replacing the selected tile with the first application view.
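A minimal Python sketch of the described tile-and-gesture flow; the tile structure, the content-to-application mapping, and the gesture handlers are illustrative assumptions, not the claimed user-interface framework:

```python
# Illustrative sketch of the tile/gesture flow; class and method names are
# assumptions, not the claimed UI framework.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Tile:
    content: str                 # e.g. "weather:Chicago"
    icon: Optional[str] = None   # icon superimposed after the first gesture
    view: Optional[str] = None   # application view after the second gesture

APP_FOR_CONTENT: Dict[str, str] = {"weather": "WeatherApp", "news": "NewsApp"}

def first_gesture(tile: Tile) -> None:
    """Select an application from the tile's content and superimpose its icon."""
    app = APP_FOR_CONTENT.get(tile.content.split(":")[0], "DefaultApp")
    tile.icon = f"{app}-icon"

def second_gesture(tile: Tile) -> None:
    """Retrieve a content-specific view from the selected application and replace the tile."""
    if tile.icon is None:
        return
    app = tile.icon.removesuffix("-icon")
    tile.view = f"{app} view for {tile.content}"   # stand-in for the retrieved application view

if __name__ == "__main__":
    t = Tile("weather:Chicago")
    first_gesture(t)
    second_gesture(t)
    print(t.view)
```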
Abstract:
A head-mounted display for displaying an out-of-focus notification. The head-mounted display includes a display projector, a lens system, and an eye tracking assembly capable of tracking a direction of an eye. The head-mounted display further includes an electronic processor that controls the display projector based on received data from the eye tracking assembly. The electronic processor determines, based on received data from the eye tracking assembly, at least one of a first focal distance, a second focal distance, and a third focal distance. The electronic processor controls the display projector to display an icon associated with a notification at the second focal distance. The second focal distance is out of focus with respect to the first focal distance. The electronic processor further controls the display projector to display information associated with the notification in response to changes in focal distance determined by the electronic processor.
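A minimal Python sketch of the focal-distance logic described above, assuming an illustrative icon focal distance and tolerance; the claimed display-projector control is not specified here:

```python
# Rough sketch of the focal-distance-driven notification logic; thresholds and
# names are illustrative assumptions.
def update_display(gaze_focal_m: float,
                   icon_focal_m: float = 0.5,
                   tolerance_m: float = 0.1) -> str:
    """Decide what the projector should show, given the tracked gaze focal distance.

    The notification icon is rendered at icon_focal_m, deliberately out of focus
    relative to where the wearer is currently looking. When the wearer refocuses
    to (near) the icon's focal distance, the full notification is shown.
    """
    if abs(gaze_focal_m - icon_focal_m) <= tolerance_m:
        return "render notification details at %.2f m" % icon_focal_m
    return "render unobtrusive icon at %.2f m (out of focus)" % icon_focal_m

if __name__ == "__main__":
    for gaze in (3.0, 0.55):   # looking far away, then refocusing on the icon
        print(update_display(gaze))
```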
Abstract:
Before hibernating a computing device (102), system software components (116) are notified of an upcoming hibernation process. The notifications are conveyed through an application program interface (API) (114). At least a portion of the system software components (116) can perform one or more pre-hibernation activities to place that system software component (116) in a ready-to-resume state. Each system software component indicates when it is ready for hibernation. Responsive to receiving the indication from each of the system software components (116), the hibernation process can complete. The completed hibernation process creates a snapshot (122) in nonvolatile memory. The snapshot (122) saves state information (124) for each of the system software components (116). The state information (124) is for the ready-to-resume state of the system software components (116). The computing device (102) can be restored after hibernation using a resume process (130), which reads the state information (124) from the snapshot (122).
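A minimal Python sketch of the notify/ready/snapshot handshake described above; the component class, the readiness flag, and the JSON file are stand-ins for the actual API (114) and the nonvolatile-memory snapshot (122):

```python
# Minimal sketch of the notify/ready/snapshot handshake; component and API names
# are placeholders, not the actual system interfaces.
import json
from typing import Dict, List

class Component:
    def __init__(self, name: str):
        self.name = name
        self.ready = False
        self.state: Dict[str, str] = {}

    def prepare_for_hibernation(self) -> None:
        """Pre-hibernation activity: flush work and capture a ready-to-resume state."""
        self.state = {"status": "ready-to-resume"}
        self.ready = True   # indicate readiness back through the API

def hibernate(components: List[Component], snapshot_path: str) -> None:
    # Notify every component via the (hypothetical) API and wait for readiness.
    for c in components:
        c.prepare_for_hibernation()
    assert all(c.ready for c in components), "hibernation may only complete when all are ready"
    # Persist each component's ready-to-resume state as the snapshot.
    snapshot = {c.name: c.state for c in components}
    with open(snapshot_path, "w") as f:
        json.dump(snapshot, f)

def resume(snapshot_path: str) -> Dict[str, Dict[str, str]]:
    with open(snapshot_path) as f:
        return json.load(f)   # restore state information from the snapshot

if __name__ == "__main__":
    hibernate([Component("net"), Component("gfx")], "snapshot.json")
    print(resume("snapshot.json"))
```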
Abstract:
A mobile device collaboration method includes provisioning a first mobile device with unique user identification related to a role and skill set of an associated user of the first mobile device, detecting a second mobile device responsive to a condition at the first mobile device, communicating the unique user identification to the second mobile device, authenticating the first mobile device through the second mobile device communicating the unique user identification to an external database, and providing access for the first mobile device through the second mobile device if the authenticating is successful. A mobile device collaboration system and a mobile device are also described.
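A minimal Python sketch of the relay-authentication flow described above; the external database is mocked as an in-memory dictionary and every identifier is invented for illustration:

```python
# Sketch of the relay-authentication flow; the "external database" is mocked as a
# dictionary and all identifiers are invented for illustration.
from dataclasses import dataclass

@dataclass
class UserId:
    uid: str
    role: str
    skills: tuple

EXTERNAL_DB = {"u-100": {"role": "paramedic", "skills": ("triage",)}}  # stand-in registry

def authenticate_via_peer(first_device_id: UserId) -> bool:
    """Second device forwards the first device's unique user identification
    to the external database and reports whether it checks out."""
    record = EXTERNAL_DB.get(first_device_id.uid)
    return bool(record) and record["role"] == first_device_id.role

def request_access(first_device_id: UserId) -> str:
    # The first device detected a condition and contacted the second device.
    if authenticate_via_peer(first_device_id):
        return "access granted through second device"
    return "access denied"

if __name__ == "__main__":
    print(request_access(UserId("u-100", "paramedic", ("triage",))))
```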
Abstract:
A process and system for enabling a 360-degree threat detection sensor system that is physically coupled to a vehicle to monitor an area of interest surrounding the vehicle. An electronic computing device selects an area of interest surrounding a vehicle stop location to be monitored by the sensor system. When the sensor system has an obstructed field-of-view of the area of interest, the electronic computing device determines a new vehicle stop location at which the sensor system has an unobstructed field-of-view of the area of interest when the vehicle is to be stopped at the new vehicle stop location. The electronic computing device then transmits an instruction to a target electronic device to provide an electronic indication identifying the new vehicle stop location to a registered occupant of the vehicle, or to autonomously control the vehicle to stop at the new vehicle stop location.
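A minimal Python sketch of the stop-location selection described above, assuming a simplified 2-D line-of-sight test against circular obstacles; the claimed sensing and obstruction detection are not specified here:

```python
# Geometric sketch of the stop-location selection; the obstruction test is a
# simplified 2-D line-of-sight check, not the claimed sensing method.
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def _segment_clear(a: Point, b: Point, obstacle: Point, radius: float) -> bool:
    """True if the circle (obstacle, radius) does not block the segment a-b."""
    ax, ay = a; bx, by = b; ox, oy = obstacle
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy or 1e-9
    t = max(0.0, min(1.0, ((ox - ax) * dx + (oy - ay) * dy) / seg_len2))
    closest = (ax + t * dx, ay + t * dy)
    return math.dist(closest, obstacle) > radius

def pick_stop_location(area_of_interest: Point, candidates: List[Point],
                       obstacles: List[Tuple[Point, float]]) -> Optional[Point]:
    """Return the first candidate stop location with an unobstructed field-of-view."""
    for stop in candidates:
        if all(_segment_clear(stop, area_of_interest, o, r) for o, r in obstacles):
            return stop
    return None

if __name__ == "__main__":
    new_stop = pick_stop_location(
        area_of_interest=(50.0, 0.0),
        candidates=[(0.0, 0.0), (0.0, 10.0)],   # current stop first, alternatives after
        obstacles=[((25.0, 0.0), 3.0)],         # parked truck blocking the first view
    )
    print("instruct occupant / autonomously stop at:", new_stop)
```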
Abstract:
A process at an electronic computing device that tailors an electronic digital assistant-generated inquiry response as a function of previously detected user ingestion of related information includes receiving, from a video capture device configured to track a gaze direction of a first user, a video stream including a first field-of-view of the first user. An object that remains in the first field-of-view for a determined threshold period of time is then identified in the video stream, and the object is processed via a video processing algorithm to produce object information, which is then stored. Subsequently, an inquiry for information is received from the first user, and it is determined that the inquiry is related to the object information. The electronic digital assistant then provides a response to the inquiry as a function of the object information.
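A minimal Python sketch of the gaze-dwell and tailored-response flow described above; object labeling, the dwell threshold, and the inquiry matching are reduced to simple string and dictionary operations for illustration:

```python
# Sketch of the gaze-dwell / tailored-response flow; object detection and NLP are
# reduced to string matching purely for illustration.
from typing import Dict, List, Tuple

DWELL_THRESHOLD_S = 2.0   # illustrative threshold period of time

def extract_seen_objects(frames: List[Tuple[float, str]]) -> Dict[str, float]:
    """Accumulate how long each labeled object stays in the user's field-of-view.

    frames: (duration_seconds, object_label) pairs from the gaze-tracking camera.
    Returns labels that exceeded the dwell threshold, i.e. were likely ingested.
    """
    dwell: Dict[str, float] = {}
    for duration, label in frames:
        dwell[label] = dwell.get(label, 0.0) + duration
    return {k: v for k, v in dwell.items() if v >= DWELL_THRESHOLD_S}

def answer(inquiry: str, seen: Dict[str, float], facts: Dict[str, str]) -> str:
    """Tailor the response: abbreviate details the user already looked at long enough."""
    for label, fact in facts.items():
        if label in inquiry:
            if label in seen:
                return f"As you already saw the {label}, briefly: {fact}"
            return f"The {label}: {fact}"
    return "I have no stored information related to that."

if __name__ == "__main__":
    seen = extract_seen_objects([(1.5, "evacuation map"), (1.0, "evacuation map"), (0.5, "exit sign")])
    print(answer("where is the evacuation map route?", seen,
                 {"evacuation map": "the route exits through the north stairwell"}))
```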
Abstract:
A process for real-time language detection and language heat-map data structure modification includes a computing device receiving, from a first electronic audio source, first audio content and identifying a first geographic location of the first audio content. The computing device then determines that the first audio content includes first speech audio and identifies a first language in which the first speech audio is spoken. A first association is created between the first geographic location and the first language, and a real-time language heat-map data structure is modified to include the created first association. A further action is then taken by the computing device as a function of the modified real-time language heat-map data structure.
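A minimal Python sketch of the heat-map bookkeeping described above; the language-identification step is stubbed out, and the grid-cell location, dictionary layout, and downstream action are illustrative assumptions:

```python
# Sketch of the heat-map bookkeeping; language identification is stubbed out and
# every name here is an assumption, not the claimed system.
from collections import defaultdict
from typing import Dict, Tuple

GeoCell = Tuple[int, int]   # coarse grid cell standing in for a geographic location

heat_map: Dict[GeoCell, Dict[str, int]] = defaultdict(lambda: defaultdict(int))

def identify_language(audio_content: bytes) -> str:
    """Stand-in for a real spoken-language identification model."""
    return "es"   # pretend the speech was identified as Spanish

def ingest(audio_content: bytes, location: GeoCell) -> None:
    """Create the location/language association and fold it into the heat map."""
    language = identify_language(audio_content)
    heat_map[location][language] += 1

def further_action(location: GeoCell) -> str:
    """Example downstream action driven by the modified heat map."""
    counts = heat_map[location]
    dominant = max(counts, key=counts.get) if counts else "unknown"
    return f"route responders fluent in '{dominant}' to cell {location}"

if __name__ == "__main__":
    ingest(b"...pcm audio...", (12, 7))
    print(further_action((12, 7)))
```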
Abstract:
Method and system for authenticating a session on a communication device. One method includes determining a use context of the communication device and an authentication status of the communication device. The method further includes determining a predetermined period of time based on at least one of the use context and the authentication status. The method further includes generating biometric templates based on at least one of the use context and the authentication status. The method further includes selecting a matching threshold for the biometric templates based on at least one of the use context and the authentication status. The method further includes comparing a match score of each of the biometric templates to the matching threshold to determine a passing number of biometric templates with match scores that meet or exceed the matching threshold. The method further includes authenticating the session on the communication device.
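A minimal Python sketch of the context-driven matching described above; the thresholds, scores, and required passing count are invented for illustration and do not come from the claims:

```python
# Sketch of the context-driven matching logic; scores, thresholds, and the
# required passing count are invented for illustration.
from typing import List

def matching_threshold(use_context: str, authenticated_recently: bool) -> float:
    """Pick a stricter threshold when the device context is riskier."""
    base = 0.70 if authenticated_recently else 0.85
    return base + (0.05 if use_context == "public" else 0.0)

def authenticate_session(match_scores: List[float], use_context: str,
                         authenticated_recently: bool, required_passing: int = 2) -> bool:
    """Count templates whose match score meets or exceeds the selected threshold."""
    threshold = matching_threshold(use_context, authenticated_recently)
    passing = sum(1 for s in match_scores if s >= threshold)
    return passing >= required_passing

if __name__ == "__main__":
    # Scores for, e.g., face, voice, and gait templates generated for this session.
    print(authenticate_session([0.93, 0.91, 0.60], use_context="public",
                               authenticated_recently=False))
```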