Abstract:
A camera includes a first microphone, a second microphone, one or more drains, and a processor. The one or more drains drain water away from the first microphone, the second microphone, or both. The processor determines a correlation metric between portions of audio signals obtained from the first and second microphones. The camera also includes a memory to store the portions of the audio signals as portions of an output audio signal.
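As a rough illustration of the correlation step, the sketch below computes a normalized cross-correlation between aligned short frames of the two microphone signals; the frame length, the Pearson-style normalization, and the function names are assumptions for illustration, since the abstract does not specify how the correlation metric is defined.

```python
# Sketch only: normalized cross-correlation between aligned frames of two
# microphone signals. Frame length and normalization are assumed, not from
# the source abstract.
import numpy as np

def correlation_metric(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Pearson-style normalized cross-correlation of two audio frames."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12
    return float(np.sum(a * b) / denom)

def framed_metrics(sig_a: np.ndarray, sig_b: np.ndarray, frame_len: int = 1024):
    """Correlation metric for each aligned frame pair of the two signals."""
    n_frames = min(len(sig_a), len(sig_b)) // frame_len
    return [correlation_metric(sig_a[i * frame_len:(i + 1) * frame_len],
                               sig_b[i * frame_len:(i + 1) * frame_len])
            for i in range(n_frames)]
```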
Abstract:
A spherical content capture system captures spherical video and audio content. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. In particular, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. For each sub-frame, a corresponding portion of an audio track is generated that includes a directional audio signal having a directionality based on the selected sub-frame.
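A minimal sketch of the sub-frame and directional-audio idea follows, assuming the spherical frames are stored as equirectangular images and that directionality is approximated by a two-microphone delay-and-sum steered toward the sub-frame's center azimuth; the projection, the beamformer, and all names are illustrative rather than the system's actual processing.

```python
# Sketch only: horizontal (yaw) sub-frame crop from an equirectangular frame,
# plus a toy delay-and-sum steer toward the same direction. Pitch handling
# and real beamforming are omitted; parameters are assumptions.
import numpy as np

def extract_subframe(equirect: np.ndarray, yaw_deg: float, fov_deg: float) -> np.ndarray:
    """Crop a reduced-field-of-view sub-frame centered on a yaw angle."""
    height, width = equirect.shape[:2]
    center = int((yaw_deg % 360.0) / 360.0 * width)
    half = int(fov_deg / 360.0 * width / 2)
    cols = [(center + off) % width for off in range(-half, half)]
    return equirect[:, cols]

def steer_audio(mic_l: np.ndarray, mic_r: np.ndarray, yaw_deg: float,
                mic_spacing_m: float = 0.05, fs: int = 48_000, c: float = 343.0) -> np.ndarray:
    """Delay-and-sum the two channels toward the sub-frame's azimuth."""
    delay_s = (mic_spacing_m / c) * np.sin(np.deg2rad(yaw_deg))
    shift = int(round(delay_s * fs))
    aligned = np.roll(mic_r, -shift)  # wrap-around is acceptable for a sketch
    return 0.5 * (mic_l + aligned)
```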
Abstract:
A camera system capable of capturing images of an event in a dynamic environment includes two microphones, mounted on orthogonal surfaces of the camera system, configured to capture stereo audio of the event. Because the microphones are on orthogonal surfaces, the camera body can impact the spatial response of the two recorded audio channels differently, leading to degraded stereo recreation if standard beam forming techniques are used. The camera system includes tuned beam forming techniques to generate multi-channel audio that more accurately recreates the stereo audio by compensating for the shape of the camera system and the orientation of the microphones on the camera system. The tuned beam forming techniques include optimizing a set of beam forming parameters, as a function of frequency, based on the true spatial response of the recorded audio signals.
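One way to picture the frequency-dependent tuning is a regularized least-squares fit of complex microphone weights to a desired spatial pattern, using responses measured with the microphones mounted on the camera body; the formulation below is an assumption chosen for illustration, not the patented optimization.

```python
# Sketch only: per-frequency filter-and-sum weights fit to a target spatial
# pattern using measured on-body responses. The least-squares formulation and
# regularization are assumptions, not the claimed method.
import numpy as np

def tune_weights(measured_response: np.ndarray, target_pattern: np.ndarray,
                 reg: float = 1e-3) -> np.ndarray:
    """
    measured_response: (n_freq, n_angles, n_mics) complex responses measured
                       with the microphones mounted on the camera body.
    target_pattern:    (n_freq, n_angles) desired gain toward each angle.
    Returns per-frequency complex weights of shape (n_freq, n_mics).
    """
    n_freq, _, n_mics = measured_response.shape
    weights = np.zeros((n_freq, n_mics), dtype=complex)
    for f in range(n_freq):
        A = measured_response[f]                      # (n_angles, n_mics)
        b = target_pattern[f]                         # (n_angles,)
        # Regularized least squares: w = (A^H A + reg I)^-1 A^H b
        AhA = A.conj().T @ A + reg * np.eye(n_mics)
        weights[f] = np.linalg.solve(AhA, A.conj().T @ b)
    return weights

def apply_weights(stft_mics: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """stft_mics: (n_mics, n_freq, n_frames) -> beamformed STFT (n_freq, n_frames)."""
    return np.einsum('fm,mft->ft', weights, stft_mics)
```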
Abstract:
An audio capture system for a sports camera includes at least one “enhanced” microphone and at least one “reference” microphone. The enhanced microphone includes a drainage enhancement feature to enable water to drain from the microphone more quickly than the reference microphone. A microphone selection controller selects between the microphones based on a microphone selection algorithm to enable high quality audio capture in conditions where the sports camera transitions into and out of water during activities such as surfing, water skiing, or swimming, or in other wet environments.
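The abstract does not describe the selection algorithm itself; as a stand-in, the sketch below picks, per audio block, the microphone whose signal retains more high-frequency energy, on the assumption that a water-occluded port attenuates high frequencies. The threshold, band split, and names are illustrative only.

```python
# Sketch only: per-block microphone selection using a high-frequency energy
# ratio as a rough indicator of water occlusion. All parameters are assumed.
import numpy as np

def high_band_ratio(block: np.ndarray, fs: int = 48_000, split_hz: int = 4_000) -> float:
    """Fraction of block energy above split_hz."""
    spectrum = np.abs(np.fft.rfft(block)) ** 2
    freqs = np.fft.rfftfreq(len(block), 1.0 / fs)
    return float(spectrum[freqs >= split_hz].sum() / (spectrum.sum() + 1e-12))

def select_block(enhanced_block: np.ndarray, reference_block: np.ndarray,
                 margin: float = 0.1):
    """Return ('enhanced' or 'reference', chosen samples) for one audio block."""
    if high_band_ratio(enhanced_block) > high_band_ratio(reference_block) * (1.0 + margin):
        return 'enhanced', enhanced_block
    return 'reference', reference_block
```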
Abstract:
An audio system encodes and decodes audio captured by a microphone array system in the presence of wind noise. The encoder encodes the audio signal in a way that includes a beamformed audio signal and a “hidden” representation of a non-beamformed audio signal. The hidden signal is produced by modulating the low-frequency signal to a frequency above the audible range. A decoder can then either output the beamformed audio signal or use the hidden signal to generate a reduced-wind-noise audio signal that includes the non-beamformed audio in the low-frequency range.
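A minimal sketch of the hide-and-recover idea follows, assuming amplitude modulation onto an ultrasonic carrier at a 96 kHz sample rate with a 500 Hz low band; the sample rate, carrier frequency, band edge, and the AM scheme itself are assumptions chosen for illustration.

```python
# Sketch only: hide the non-beamformed low band above the audible range at
# encode time, then demodulate it and swap it back in at decode time.
# All rates, cutoffs, and the modulation scheme are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 96_000          # assumed capture rate, high enough to carry the ultrasonic band
CARRIER_HZ = 24_000  # assumed carrier above the audible range
LOW_CUT_HZ = 500     # assumed edge of the wind-affected low band

def _lowpass(x: np.ndarray, cutoff_hz: float, fs: int = FS, order: int = 4) -> np.ndarray:
    b, a = butter(order, cutoff_hz / (fs / 2), btype='low')
    return filtfilt(b, a, x)

def encode(beamformed: np.ndarray, non_beamformed: np.ndarray) -> np.ndarray:
    """Add the non-beamformed low band, shifted above the audible range."""
    t = np.arange(len(non_beamformed)) / FS
    hidden = _lowpass(non_beamformed, LOW_CUT_HZ) * np.cos(2 * np.pi * CARRIER_HZ * t)
    return beamformed + hidden

def decode_reduced_wind(encoded: np.ndarray) -> np.ndarray:
    """Recover the hidden low band and substitute it for the beamformed low band."""
    t = np.arange(len(encoded)) / FS
    demod = 2.0 * encoded * np.cos(2 * np.pi * CARRIER_HZ * t)
    recovered_low = _lowpass(demod, LOW_CUT_HZ)      # non-beamformed low band
    audible = _lowpass(encoded, 20_000)              # drop the hidden ultrasonic band
    return (audible - _lowpass(audible, LOW_CUT_HZ)) + recovered_low
```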