Abstract:
A system for exerting forces on a user. The system includes a user-mounted device including one or more masses, one or more sensors configured to acquire sensor data, and a processor coupled to the one or more sensors. The processor is configured to determine at least one of an orientation and a position associated with the user-mounted device based on the sensor data. The processor is further configured to compute a force to be exerted on the user via the one or more masses based on a force direction associated with a force event and at least one of the orientation and the position. The processor is further configured to generate, based on the force, a control signal to change a position of the one or more masses relative to the user-mounted device.
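The abstract describes computing a force from a force event's direction together with the device's orientation or position, and then repositioning a mass to produce it. The following is a minimal sketch of that computation only, not the patented implementation; the proportional gain, function names, and the assumption that orientation arrives as a rotation matrix from an IMU fusion step are all illustrative.

```python
# Minimal sketch (not the patented implementation): map a world-frame force
# request onto a mass-displacement command in the device frame, assuming the
# device orientation is available as a rotation matrix.
import numpy as np

def mass_command(force_dir_world, force_magnitude, R_device_to_world, gain=0.01):
    """Return a displacement vector (meters) for the movable mass.

    force_dir_world   -- unit vector of the desired force in world coordinates
    force_magnitude   -- desired force in newtons (from the force event)
    R_device_to_world -- 3x3 rotation matrix describing device orientation
    gain              -- hypothetical meters-per-newton scaling
    """
    # Express the desired force in the device's own frame.
    force_dir_device = R_device_to_world.T @ np.asarray(force_dir_world, dtype=float)
    # Shifting the mass toward one side biases the center of gravity, which the
    # user perceives as a pull in that direction; scale by the requested force.
    return gain * force_magnitude * force_dir_device

# Example: request a 2 N pull toward world +x while the device is rotated 90
# degrees about the vertical axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
print(mass_command([1.0, 0.0, 0.0], 2.0, R))
```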
Abstract:
The various embodiments set forth a headphone apparatus that includes a first earcup, a sensor included in the first earcup, and a controller. The first earcup is coupled to a headband and includes a loudspeaker system and a thermal control subsystem. The controller is configured to detect an output generated by the sensor and, based on the output, transmit a signal to the thermal control subsystem included in the first earcup to modify a temperature associated with at least the first earcup.
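As an illustrative sketch only, the controller behavior described above can be pictured as a simple proportional loop that reads the earcup sensor and commands the thermal subsystem; the sensor and actuator callables below are hypothetical placeholders, and the target temperature and gain are assumptions.

```python
# Illustrative sketch only: a simple proportional controller that reads an
# earcup temperature sensor and commands a cooling element. The sensor and
# actuator interfaces below are hypothetical placeholders.
def thermal_step(read_temp_c, set_cooling_duty, target_c=33.0, kp=0.2):
    """One control iteration for a single earcup.

    read_temp_c      -- callable returning the sensed earcup temperature (degrees C)
    set_cooling_duty -- callable accepting a duty cycle in [0.0, 1.0]
    target_c         -- desired skin-contact temperature
    kp               -- proportional gain (duty per degree C of error)
    """
    error = read_temp_c() - target_c          # positive when the earcup is too warm
    duty = min(max(kp * error, 0.0), 1.0)     # clamp to the valid duty-cycle range
    set_cooling_duty(duty)
    return duty

# Example with stand-in callables.
duty = thermal_step(lambda: 36.5, lambda d: None)
print(f"cooling duty: {duty:.2f}")
```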
Abstract:
A method for generating an auditory environment for a user may include receiving a signal representing an ambient auditory environment of the user, processing the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment, receiving user preferences corresponding to each of the plurality of types of sounds, modifying the signal for each type of sound in the ambient auditory environment based on the corresponding user preference, and outputting the modified signal to at least one speaker to generate the auditory environment for the user. A system may include a wearable device having speakers, microphones, and various other sensors to detect a noise context. A microprocessor processes ambient sounds and generates modified audio signals using attenuation, amplification, cancellation, and/or equalization based on user preferences associated with particular types of sounds.
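The per-type modification step lends itself to a short sketch: assuming an upstream classifier has already separated the ambient signal into labeled component signals (that classifier is out of scope and stubbed out here), each component is scaled by a user-preference gain and the results are remixed. The preference values and labels below are examples, not part of the abstract.

```python
# A minimal sketch of the per-sound-type processing described above, assuming a
# classifier that separates the ambient signal into labeled component signals.
import numpy as np

USER_PREFS_DB = {"speech": +3.0, "traffic": -12.0, "siren": 0.0}  # example preferences

def modify_ambient(components, prefs_db=USER_PREFS_DB):
    """Apply a per-type gain (in dB) to each classified component and mix.

    components -- dict mapping a sound-type label to a numpy sample array
    prefs_db   -- dict mapping the same labels to user-preferred gains in dB
    """
    out = None
    for label, samples in components.items():
        gain = 10.0 ** (prefs_db.get(label, 0.0) / 20.0)   # dB -> linear
        scaled = gain * samples
        out = scaled if out is None else out + scaled
    return out

# Example with synthetic one-second components at 16 kHz.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
components = {"speech": 0.1 * np.sin(2 * np.pi * 300 * t),
              "traffic": 0.2 * np.random.randn(t.size)}
mixed = modify_ambient(components)
print(mixed.shape)
```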
Abstract:
One embodiment of the present invention sets forth a technique for calibrating an audio system. The technique includes transmitting information to a robotic vehicle for positioning a microphone at a plurality of different listening locations within a listening environment and, for each different listening location, acquiring a sound measurement via the microphone. The technique further includes calibrating at least one audio characteristic of the audio system based on the sound measurements acquired at the different listening locations.
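A sketch of the measurement loop follows: move the microphone to each listening location via the robotic vehicle, take a measurement, and then derive a correction. The robot, microphone, and audio-system interfaces are hypothetical placeholders, and averaging the measured levels into a single gain trim is only one possible calibration, chosen here for brevity.

```python
# Illustrative sketch: iterate over listening locations, asking a robotic
# vehicle to move a microphone to each one, recording a measurement, and then
# deriving a simple gain correction. All interfaces are hypothetical.
import numpy as np

def calibrate(robot, microphone, audio_system, locations, target_db=75.0):
    measurements = []
    for loc in locations:
        robot.move_to(loc)                     # transmit positioning information
        audio_system.play_test_tone()
        level_db = microphone.measure_spl()    # acquire a sound measurement
        measurements.append(level_db)
    # One possible calibration: trim the output gain so the average measured
    # level across locations matches a target listening level.
    correction_db = target_db - float(np.mean(measurements))
    audio_system.adjust_output_gain(correction_db)
    return correction_db

# Example with trivial stand-ins.
class _Stub:
    def move_to(self, loc): pass
    def play_test_tone(self): pass
    def measure_spl(self): return 72.0
    def adjust_output_gain(self, db): print(f"gain trim: {db:+.1f} dB")

s = _Stub()
print(calibrate(s, s, s, locations=[(0, 0), (1, 2), (3, 1)]))
```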
Abstract:
One embodiment of the present invention sets forth a technique for modifying an audio parameter based on a gesture. The technique includes acquiring sensor data associated with a hand of a user and analyzing the sensor data to determine at least one hand position. The technique further includes determining, based on the at least one hand position, an interaction between a first virtual object that corresponds to an audio event and a second virtual object that corresponds to the hand of the user. The technique further includes, based on the interaction, modifying a spatial audio parameter associated with the audio event to generate a modified audio stream and causing the modified audio stream to be reproduced for output to the user.
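A minimal sketch of the interaction step is given below, not the claimed technique itself: when the tracked hand position intersects the virtual object bound to an audio event, the event is re-anchored to the hand and a spatial parameter (here, a simple azimuth suitable for panning) is recomputed. The grab radius, field names, and azimuth convention are assumptions.

```python
# Minimal sketch: detect a hand/virtual-object interaction and update a
# spatial audio parameter for the associated audio event.
import math

def hand_grabs_object(hand_pos, object_pos, radius=0.1):
    """True when the hand is within `radius` meters of the virtual object."""
    return math.dist(hand_pos, object_pos) <= radius

def update_spatial_param(hand_pos, audio_event):
    """Move the audio event's virtual object to the hand and recompute azimuth."""
    if hand_grabs_object(hand_pos, audio_event["position"]):
        audio_event["position"] = tuple(hand_pos)
        x, y, _ = hand_pos
        audio_event["azimuth_deg"] = math.degrees(math.atan2(x, y))  # 0 deg = straight ahead
    return audio_event

event = {"position": (0.0, 0.5, 0.0), "azimuth_deg": 0.0}
print(update_spatial_param((0.05, 0.48, 0.0), event))
```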
Abstract:
In one embodiment of the present invention, a haptic engine dynamically configures a haptic controller to provide information regarding the state of an interactive system via sensations of texture. In operation, as the state of the interactive system changes, the haptic engine dynamically updates a haptic state that includes texture characteristics, such as texture patterns, that are designed to reflect the state of the interactive system in an intuitive fashion. The haptic engine then generates signals that configure touch surfaces included in the haptic controller to convey these texture characteristics. Advantageously, providing information regarding the current state and/or operation of the interactive system based on sensations of texture reduces distractions attributable to many conventional audio and/or visual interactive systems. Notably, in-vehicle infotainment systems that provide dynamically updated texture data increase driving safety compared to conventional in-vehicle infotainment systems that often induce drivers to take their eyes off the road.
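The state-to-texture mapping can be sketched as follows, assuming a hypothetical haptic-controller interface that accepts texture descriptors; the state names, texture parameters, and class shape are illustrative only.

```python
# Illustrative sketch: map each interactive-system state to a texture pattern
# and push an update to the touch surface whenever the state changes.
TEXTURE_BY_STATE = {
    "radio":      {"pattern": "fine_ridges",  "spacing_mm": 1.0, "amplitude": 0.3},
    "navigation": {"pattern": "coarse_bumps", "spacing_mm": 4.0, "amplitude": 0.7},
    "phone_call": {"pattern": "smooth",       "spacing_mm": 0.0, "amplitude": 0.0},
}

class HapticEngine:
    def __init__(self, send_to_surface):
        self._send = send_to_surface   # callable that configures the touch surface
        self._state = None

    def on_state_change(self, new_state):
        if new_state == self._state:
            return                      # no update needed
        self._state = new_state
        texture = TEXTURE_BY_STATE.get(new_state, TEXTURE_BY_STATE["phone_call"])
        self._send(texture)             # convey the texture characteristics

engine = HapticEngine(send_to_surface=print)
engine.on_state_change("navigation")
engine.on_state_change("radio")
```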
Abstract:
A system and method for communicating navigation information to a vehicle operator through actuators in a steering mechanism for a vehicle. The steering mechanism can include multiple actuators along or in a surface that a vehicle operator touches to steer the vehicle. Different numbers of actuators can be actuated under an operator's hand to communicate the severity of an upcoming turn. Alternatively, an actuator can be actuated by different amounts to communicate the severity of an upcoming turn. The proximity of an upcoming turn can be communicated by cycling actuators at different rates for different turns. By actuating actuators under an operator's left hand or the left portion of a hand, a left-hand turn can be communicated. Conversely, by actuating actuators under an operator's right hand or the right portion of a hand, a right-hand turn can be communicated.
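The three cues described above (side selects direction, count conveys severity, cycling rate conveys proximity) can be sketched under an assumed hardware layout of a ring of actuators, half under each hand; the actuator count, gains, and rate formula below are assumptions for illustration.

```python
# Minimal sketch under assumed hardware: actuators indexed 0..N-1, the left
# half under the left hand and the right half under the right hand. Severity
# selects how many actuators fire; proximity sets the cycling rate.
def plan_haptic_cue(direction, severity, distance_m, n_actuators=12):
    """Return (actuator indices to fire, cycle rate in Hz).

    direction  -- "left" or "right" turn
    severity   -- 0.0 (gentle) .. 1.0 (sharp); scales the number of actuators
    distance_m -- distance to the turn; nearer turns cycle faster
    """
    half = n_actuators // 2
    count = max(1, round(severity * half))
    if direction == "left":
        indices = list(range(0, count))             # actuators under the left hand
    else:
        indices = list(range(half, half + count))   # actuators under the right hand
    rate_hz = min(10.0, 50.0 / max(distance_m, 5.0))  # e.g. 1.25 Hz at 40 m, capped at 10 Hz
    return indices, rate_hz

print(plan_haptic_cue("left", severity=0.75, distance_m=40.0))
```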
Abstract:
One embodiment of the present invention sets forth a technique for providing audio enhancement to a user of a listening device. The technique includes reproducing a first audio stream, such as an audio stream associated with a media player. The technique further includes detecting a voice trigger. The voice trigger may be associated with a name of a user of the listening device. The technique further includes pausing or attenuating the first audio stream and reproducing a second audio stream associated with ambient sound in response to detecting the voice trigger.
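A short sketch of the post-detection behavior follows; the voice-trigger detector itself is stubbed out, and the focus is on ducking the media stream while reproducing the ambient stream. The gain values and function names are hypothetical.

```python
# Illustrative sketch: once the voice trigger has been detected, attenuate the
# media stream and mix in the ambient microphone stream.
import numpy as np

def mix_on_trigger(media_block, ambient_block, trigger_detected,
                   duck_gain=0.1, ambient_gain=1.0):
    """Blend one block of media audio with ambient pass-through audio.

    When the voice trigger has been detected, the media stream is attenuated
    (ducked) and the ambient stream is reproduced; otherwise the media stream
    passes through unchanged.
    """
    if trigger_detected:
        return duck_gain * media_block + ambient_gain * ambient_block
    return media_block

# Example with 10 ms blocks at 48 kHz.
media = 0.5 * np.ones(480)
ambient = 0.05 * np.random.randn(480)
out = mix_on_trigger(media, ambient, trigger_detected=True)
print(out[:3])
```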
Abstract:
Approaches are disclosed for generating auditory scenes. A computing device includes a wireless network interface and a processor. The processor is configured to receive, via a microphone, a first auditory signal that includes a first plurality of voice components. The processor is further configured to receive a request to at least partially suppress a first voice component included in the first plurality of voice components. The processor is further configured to generate a second auditory signal that includes the first plurality of voice components with the first voice component at least partially suppressed. The processor is further configured to transmit the second auditory signal to a speaker for output.
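The suppression step can be sketched as follows, assuming an upstream source-separation stage has already split the captured signal into per-speaker voice components (that stage is out of scope here); suppression is modeled as a gain reduction on the selected component before remixing, and all names and the suppression depth are assumptions.

```python
# Minimal sketch: remix separated voice components with one of them attenuated.
import numpy as np

def suppress_voice(voice_components, suppress_id, suppression_db=-30.0):
    """Remix voice components with one component partially suppressed.

    voice_components -- dict mapping a speaker id to a numpy sample array
    suppress_id      -- id of the voice component to partially suppress
    suppression_db   -- gain applied to that component (negative dB)
    """
    gain = 10.0 ** (suppression_db / 20.0)
    out = np.zeros_like(next(iter(voice_components.values())))
    for speaker_id, samples in voice_components.items():
        out += (gain if speaker_id == suppress_id else 1.0) * samples
    return out

components = {"speaker_a": np.random.randn(16000) * 0.1,
              "speaker_b": np.random.randn(16000) * 0.1}
quieter = suppress_voice(components, "speaker_b")
print(quieter.shape)
```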
Abstract:
A personal safety system detects an imminent danger or other event of interest and then modifies a sound panorama to focus a user's attention towards the direction of the danger or other event of interest. The personal safety system may isolate sounds originating from the direction of the danger or other event of interest, collapse the sound panorama towards the direction of the danger or other event of interest, or compress the sound panorama to align with the direction of the danger or other event of interest. The personal safety system may be integrated into an automobile or into a wearable system physically attached to the user. Advantageously, the attention of the user may be drawn towards an imminent danger or other event of interest without introducing additional sensory information to the user, thereby reducing the likelihood of startling or distracting the user.
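One of the described manipulations, compressing the sound panorama towards the event direction, can be sketched as a simple remapping of each source's azimuth; the angle convention and compression factor below are assumptions for illustration.

```python
# Illustrative sketch: pull each source's azimuth toward the direction of the
# detected event so the entire scene leans toward the danger. Angles are in
# degrees, with 0 degrees straight ahead.
def compress_panorama(source_azimuths_deg, event_azimuth_deg, factor=0.3):
    """Return new azimuths pulled toward the event direction.

    factor -- 1.0 leaves the panorama unchanged, 0.0 collapses every source
              onto the event direction.
    """
    return [event_azimuth_deg + factor * (az - event_azimuth_deg)
            for az in source_azimuths_deg]

# Example: three ambient sources, with a hazard detected at -60 degrees (front-left).
print(compress_panorama([-90.0, 0.0, 45.0], event_azimuth_deg=-60.0))
```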