Abstract:
A system for exploiting visual information to enhance audio signals via source separation and beamforming is disclosed. The system may obtain visual content associated with an environment of a user and may extract, from the visual content, metadata associated with the environment. The system may determine a location of the user based on the extracted metadata. Additionally, the system may load, based on the location, an audio profile corresponding to the location of the user. The system may also load a user profile of the user that includes audio data associated with the user. Furthermore, the system may cancel, based on the audio profile and user profile, noise from the environment of the user. Moreover, the system may adjust, based on the audio profile and user profile, an audio signal generated by the user so as to enhance the audio signal during a communications session of the user.
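As an illustration only, the flow above might look like the following minimal Python sketch. The helper names (extract_metadata, determine_location), the dictionary-based profiles, and the threshold-style noise cancellation are all assumptions for illustration; none of them come from the disclosure itself.

```python
# Minimal sketch of the disclosed flow; all names, profiles, and the toy
# threshold-based noise model are illustrative assumptions, not patent details.

AUDIO_PROFILES = {
    "cafe": {"noise_floor_db": -45},
    "office": {"noise_floor_db": -60},
}

USER_PROFILES = {
    "alice": {"preferred_gain_db": 3.0},
}

def extract_metadata(visual_content):
    """Hypothetical: derive environment tags from visual content."""
    return {"scene": visual_content.get("scene", "office")}

def determine_location(metadata):
    """Map extracted metadata to a location label."""
    return metadata["scene"]

def enhance_audio(user_id, visual_content, audio_signal):
    metadata = extract_metadata(visual_content)
    location = determine_location(metadata)
    audio_profile = AUDIO_PROFILES[location]   # profile for the location
    user_profile = USER_PROFILES[user_id]      # audio data for the user

    # Cancel environmental noise below the location's noise floor (toy model).
    floor = 10 ** (audio_profile["noise_floor_db"] / 20)
    denoised = [s for s in audio_signal if abs(s) > floor]

    # Adjust the user's speech by their preferred gain during the session.
    gain = 10 ** (user_profile["preferred_gain_db"] / 20)
    return [s * gain for s in denoised]

print(enhance_audio("alice", {"scene": "cafe"}, [0.001, 0.2, -0.3, 0.0005]))
```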
Abstract:
A system for sensor-enhanced speech recognition is disclosed. The system may obtain visual content or other content associated with a user and an environment of the user. Additionally, the system may obtain, from the visual content, metadata associated with the user and the environment of the user. The system may also determine, based on the visual content and metadata, whether the user is speaking. If the user is determined to be speaking, the system may obtain audio content associated with the user and the environment. The system may then adapt, based on the visual content, audio content, and metadata, one or more acoustic models that match the user and the environment. Once the one or more acoustic models are adapted and loaded, the system may enhance a speech recognition process or other process associated with the user.
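The gating logic described above could be sketched as follows. The detectors, the lip-motion cue, and the model registry are hypothetical stand-ins chosen for illustration; the disclosure does not specify these names or mechanisms.

```python
# Minimal sketch of visually gated recognition; the detectors and model
# registry below are illustrative assumptions, not details from the patent.

ACOUSTIC_MODELS = {
    ("female_adult", "quiet_room"): "am_female_quiet.bin",
    ("male_adult", "street"): "am_male_street.bin",
}

def extract_metadata(visual_content):
    """Hypothetical: speaker and environment tags from visual content."""
    return {"speaker_type": "female_adult", "environment": "quiet_room",
            "lips_moving": visual_content.get("lips_moving", False)}

def is_speaking(metadata):
    """Use a visual cue (e.g., lip motion) to decide if the user is speaking."""
    return metadata["lips_moving"]

def recognize(user_visual, capture_audio):
    metadata = extract_metadata(user_visual)
    if not is_speaking(metadata):
        return None  # no speech detected, so skip audio capture entirely
    audio = capture_audio()  # audio is obtained only once speech is detected
    # Load the acoustic model matching the user and the environment.
    model = ACOUSTIC_MODELS[(metadata["speaker_type"], metadata["environment"])]
    return f"decoding {len(audio)} samples with {model}"

print(recognize({"lips_moving": True}, lambda: [0.1] * 16000))
```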
Abstract:
Concepts and technologies are disclosed herein for providing navigation routes and/or providing navigation route updates. According to various embodiments of the concepts and technologies disclosed herein, a navigation application can be configured to obtain route data from a routing service. The routing service can be configured to use navigation data locally stored and/or obtained from a number of sources to generate navigation routes and/or to update navigation routes. The generated and/or updated navigation routes can be provided to the user device as route data that can be used to provide navigation directions to a user.
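One way such a routing service could work is sketched below, assuming a Dijkstra search over a locally stored road graph, with a live traffic feed standing in for one of the external data sources. The class and field names are invented for illustration and do not appear in the disclosure.

```python
# Minimal routing-service sketch; the graph, traffic feed, and names are
# illustrative assumptions, not details from the disclosure.

import heapq

class RoutingService:
    def __init__(self, road_graph, traffic_source):
        self.road_graph = road_graph          # locally stored navigation data
        self.traffic_source = traffic_source  # one of several external sources

    def route(self, origin, destination):
        """Dijkstra over edge costs adjusted by live traffic delays."""
        queue, seen = [(0, origin, [origin])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == destination:
                return {"path": path, "cost": cost}  # route data for the device
            if node in seen:
                continue
            seen.add(node)
            for nxt, base in self.road_graph.get(node, {}).items():
                delay = self.traffic_source.get((node, nxt), 0)
                heapq.heappush(queue, (cost + base + delay, nxt, path + [nxt]))
        return None

graph = {"A": {"B": 5, "C": 2}, "C": {"B": 1}, "B": {}}
svc = RoutingService(graph, traffic_source={("A", "B"): 4})
print(svc.route("A", "B"))  # the updated route avoids the congested A->B edge
```

Re-running route() as the traffic feed changes yields updated navigation routes that can be pushed to the user device as fresh route data.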
Abstract:
Disclosed herein are systems, methods, and computer-readable storage devices for processing audio signals. An example system configured to practice the method receives audio at a device to be transmitted to a remote speech processing system. The system analyzes at least one of noise conditions, a need for enhanced speech quality, and network load to yield an analysis. Based on the analysis, the system determines to bypass user-defined options for enhancing audio for speech processing. Then, based on the analysis, the system can modify an audio transmission parameter used to transmit the audio from the device to the remote speech processing system. The audio transmission parameter can be, for example, an amount of coding, a chosen codec, or a number of audio channels.
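A toy version of this decision logic might look like the sketch below. The thresholds, codec names, and parameter keys are invented for illustration; the patent does not specify them.

```python
# Minimal sketch of analysis-driven parameter selection; thresholds, codecs,
# and parameter names are illustrative assumptions, not patent details.

def analyze(noise_db, needs_enhanced_quality, network_load):
    """Collapse the three conditions into a single analysis result."""
    return {"noisy": noise_db > -40,
            "enhanced": needs_enhanced_quality,
            "congested": network_load > 0.8}

def choose_transmission_params(analysis, user_options):
    params = dict(user_options)  # start from the user-defined options
    if analysis["noisy"] or analysis["enhanced"] or analysis["congested"]:
        # Bypass user-defined options: the analysis takes precedence.
        if analysis["congested"]:
            params.update(codec="opus", coding="high_compression", channels=1)
        else:
            params.update(codec="pcm", coding="light", channels=2)
    return params

user_options = {"codec": "amr", "coding": "default", "channels": 1}
print(choose_transmission_params(analyze(-35, False, 0.9), user_options))
```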
Abstract:
A system and method for processing speech includes receiving a first information stream associated with speech, the first information stream comprising micro-modulation features, and receiving a second information stream associated with the speech, the second information stream comprising a second set of features. The method includes combining, via a non-linear multilayer perceptron, the first information stream and the second information stream to yield a third information stream. The system performs automatic speech recognition on the third information stream. The third information stream can also be used for training hidden Markov models (HMMs).
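The stream combination step could be illustrated with the NumPy sketch below. The layer sizes, random weights, tanh non-linearity, and example feature dimensions are all assumptions made for illustration, not values from the patent.

```python
# Minimal sketch of fusing two feature streams with a non-linear MLP; layer
# sizes, weights, and feature dimensions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def mlp_fuse(micro_mod, other, hidden=16, out=8):
    """Concatenate the two streams and pass them through one hidden layer."""
    x = np.concatenate([micro_mod, other])    # joint input vector
    w1 = rng.standard_normal((hidden, x.size)) * 0.1
    w2 = rng.standard_normal((out, hidden)) * 0.1
    h = np.tanh(w1 @ x)                       # non-linear hidden layer
    return w2 @ h                             # the third information stream

micro_modulation_features = rng.standard_normal(20)  # first stream
second_set_of_features = rng.standard_normal(13)     # second stream
fused = mlp_fuse(micro_modulation_features, second_set_of_features)
print(fused.shape)  # (8,) -> fed to the recognizer or used to train HMMs
```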
Abstract:
A network of intelligent electronic public signs interacts with one or many devices. A central server manages the electronic public signs and determines which one of the electronic public signs should display content related to a device. The central server may thus pair devices to electronic public signs for public display of individual content requests. Should any interaction involve personal or private information, the central server may exclude the corresponding response from public display. Any personal or private interactions may, instead, be privately conducted to prevent public display.
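The pairing and privacy-filtering behavior might be sketched as follows. The keyword-based sensitivity check, the one-dimensional distances, and the class names are toy assumptions for illustration only.

```python
# Minimal sketch of device-to-sign pairing with privacy filtering; the
# sensitivity check and distances are toy assumptions, not patent details.

SENSITIVE_TERMS = {"password", "account", "medical"}

class CentralServer:
    def __init__(self, signs):
        self.signs = signs  # sign_id -> location (1-D toy coordinates)

    def nearest_sign(self, device_location):
        """Pair the device with the closest electronic public sign."""
        return min(self.signs, key=lambda s: abs(self.signs[s] - device_location))

    def handle_request(self, device_location, request, response):
        sign = self.nearest_sign(device_location)
        if any(term in request.lower() for term in SENSITIVE_TERMS):
            # Exclude personal/private responses from public display and
            # conduct the interaction privately with the device instead.
            return {"sign": None, "private_to_device": response}
        return {"sign": sign, "display": response}

server = CentralServer({"lobby": 0, "gate_7": 120})
print(server.handle_request(115, "flight status AA100", "On time, gate 7"))
print(server.handle_request(115, "my account balance", "$1,234"))
```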