Inter frame watermark in a digital video

    Publication Number: US09877036B2

    Publication Date: 2018-01-23

    Application Number: US14598135

    Application Date: 2015-01-15

    Applicant: GoPro, Inc.

    Abstract: Watermark data is converted to watermark coefficients, which may be embedded in an image by converting the image to a frequency domain, embedding the watermark coefficients in image coefficients corresponding to medium-frequency components, and converting the modified coefficients back to the spatial domain. The watermark data is extracted from the modified image by converting the modified image to the frequency domain, extracting the watermark coefficients from the image coefficients, and determining the watermark data from the watermark coefficients. The watermark data may be truncated image data bits, such as truncated least significant data bits. After extraction from the watermarked image, the truncated image data bits may be combined with data bits representing the original image to increase the bit depth of the image. Watermark data may include audio data portions corresponding to a video frame, reference frames temporally proximate to a video frame, high-frequency content, sensor calibration information, or other image data.
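The embed/extract pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the FFT stands in for the patent's unspecified frequency transform, and the diagonal coefficient band and quantization-index scheme are hypothetical choices of "medium-frequency components" and embedding rule.

```python
import numpy as np

def mid_freq_coords(shape, n):
    # Hypothetical mid-frequency band: coefficients along a diagonal
    # away from both the DC term and the highest frequencies.
    h, w = shape
    return [(h // 4 + i, w // 4 + i) for i in range(n)]

def embed_watermark(image, bits, strength=8.0):
    F = np.fft.fft2(image.astype(float))
    for (u, v), b in zip(mid_freq_coords(image.shape, len(bits)), bits):
        # Quantization-index modulation: force the coefficient magnitude
        # to an even or odd multiple of `strength` to encode one bit.
        q = int(round(abs(F[u, v]) / strength))
        if q % 2 != b:
            q += 1
        F[u, v] = q * strength * np.exp(1j * np.angle(F[u, v]))
        F[-u, -v] = np.conj(F[u, v])  # keep the inverse transform real
    return np.real(np.fft.ifft2(F))

def extract_watermark(image, n, strength=8.0):
    F = np.fft.fft2(image.astype(float))
    return [int(round(abs(F[u, v]) / strength)) % 2
            for u, v in mid_freq_coords(image.shape, n)]
```

Maintaining conjugate symmetry when modifying a coefficient keeps the watermarked image real-valued, so the bits survive a round trip through the spatial domain.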

    Synthesizing audio corresponding to a virtual microphone location

    Publication Number: US09749738B1

    Publication Date: 2017-08-29

    Application Number: US15187700

    Application Date: 2016-06-20

    Applicant: GoPro, Inc.

    Abstract: Disclosed is a system and method for generating a model of the geometric relationships between various audio sources recorded by a multi-camera system. The spatial audio scene module associates the source signals of audio sources, extracted from the recorded audio, with visual objects identified in videos recorded by one or more cameras. This association may be based on the positions of the audio sources, estimated from the relative signal gains and delays of each source signal received at each microphone. The estimated positions of the audio sources are tracked indirectly by tracking the associated visual objects with computer vision. A virtual microphone module may receive a position for a virtual microphone and synthesize a signal corresponding to that position based on the estimated positions of the audio sources.
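The synthesis step can be sketched with a simple propagation model: each source signal is delayed by its distance to the virtual microphone and attenuated by 1/r, then the contributions are mixed. This is an illustrative assumption about the rendering model; the patent does not specify the exact gain/delay laws, and the source positions are taken as given (in practice they would come from the vision-based tracking described above).

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def synthesize_virtual_mic(sources, mic_pos, sr=48000):
    """Mix (signal, position) pairs into one virtual-microphone signal.

    Each source is delayed by its propagation time to `mic_pos` and
    attenuated by 1/distance (clamped in the near field).
    """
    delays, gains = [], []
    for signal, pos in sources:
        d = np.linalg.norm(np.asarray(pos, float) - np.asarray(mic_pos, float))
        delays.append(int(round(d / SPEED_OF_SOUND * sr)))  # in samples
        gains.append(1.0 / max(d, 1.0))
    length = max(len(s) + dl for (s, _), dl in zip(sources, delays))
    out = np.zeros(length)
    for (signal, _), dl, g in zip(sources, delays, gains):
        out[dl:dl + len(signal)] += g * np.asarray(signal, float)
    return out
```

For example, an impulse at 3.43 m from the virtual microphone arrives 10 ms later, scaled by 1/3.43.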

    SYSTEMS AND METHODS FOR SPATIALLY ADAPTIVE VIDEO ENCODING

    Publication Number: US20170237983A1

    Publication Date: 2017-08-17

    Application Number: US15334213

    Application Date: 2016-10-25

    Applicant: GoPro, Inc.

    Abstract: Systems and methods for providing video content using spatially adaptive video encoding. Panoramic and/or virtual reality content may be viewed by a client device using a viewport with viewing dimension(s) smaller than the available dimension(s) of the content. The client device may include a portable media device characterized by given energy and/or computational resources. Video content may be encoded using spatially varying encoding. For image playback, portions of the panoramic image may be pre-encoded using multiple quality bands. Pre-encoded image portions matching the viewport may be provided, reducing the computational and/or energy load on the client device during consumption of panoramic content. The quality distribution may include a gradual quality transition area, allowing for small movements of the viewport without triggering image re-encoding. Larger movements of the viewport may automatically trigger a transition to another spatial encoding distribution.
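The quality-band idea can be sketched on a 1-D ring of panorama tiles: tiles under the viewport get the highest quality, a transition margin around it gets medium quality, and everything else gets low quality; only moves that leave the margin force a new distribution. The tile grid, the three quality levels, and the margin test are illustrative assumptions, not the patent's actual encoding layout.

```python
def quality_map(num_tiles, vp_start, vp_width, margin):
    """Per-tile quality for a ring of panorama tiles: 'high' inside the
    viewport, 'med' in the gradual transition band, 'low' elsewhere."""
    q = ['low'] * num_tiles
    for i in range(vp_start - margin, vp_start + vp_width + margin):
        q[i % num_tiles] = 'med'   # transition band (wraps around the ring)
    for i in range(vp_start, vp_start + vp_width):
        q[i % num_tiles] = 'high'  # viewport proper
    return q

def needs_new_distribution(vp_start, new_start, margin):
    """Small viewport moves that stay within the transition margin keep
    the current pre-encoded distribution; larger moves trigger a switch."""
    return abs(new_start - vp_start) > margin
```

With 12 tiles, a 4-tile viewport at tile 2, and a 1-tile margin, tiles 2-5 are high quality, tiles 1 and 6 medium, and the rest low; shifting the viewport by one tile stays within the margin.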

    UNIFIED IMAGE PROCESSING FOR COMBINED IMAGES BASED ON SPATIALLY CO-LOCATED ZONES

    Publication Number: US20170091970A1

    Publication Date: 2017-03-30

    Application Number: US14872063

    Application Date: 2015-09-30

    Applicant: GoPro, Inc.

    Abstract: A unified image processing algorithm results in better post-processing quality for combined images that are made up of multiple single-capture images. To ensure that each single-capture image is processed in the context of the entire combined image, the combined image is analyzed to determine portions of the image (referred to as “zones”) that should be processed with the same parameters for various image processing algorithms. These zones may be determined based on the content of the combined image. Alternatively, the zones may be determined based on the position of each single-capture image with respect to the entire combined image or to the other single-capture images. Once the zones and their corresponding image processing parameters are determined for the combined image, they are translated to corresponding zones in each of the single-capture images. Finally, the image processing algorithms are applied to each of the single-capture images using the zone-specific parameters.
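The translation step can be sketched as a rectangle intersection: each zone defined on the combined image is clipped to a single capture's rectangle and re-expressed in that capture's local coordinates, carrying its processing parameters along. Rectangular zones and the (x, y, w, h) layout are illustrative assumptions; the patent does not constrain zone shape.

```python
def zones_for_capture(zones, offset, size):
    """Clip combined-image zones to one single-capture image and shift
    them into that image's local coordinates.

    zones:  list of (x, y, w, h, params) in combined-image coordinates
    offset: (ox, oy) top-left of the capture within the combined image
    size:   (cw, ch) dimensions of the capture
    """
    ox, oy = offset
    cw, ch = size
    local = []
    for x, y, w, h, params in zones:
        # Intersection of the zone with the capture's rectangle.
        x0, y0 = max(x, ox), max(y, oy)
        x1, y1 = min(x + w, ox + cw), min(y + h, oy + ch)
        if x0 < x1 and y0 < y1:  # non-empty overlap
            local.append((x0 - ox, y0 - oy, x1 - x0, y1 - y0, params))
    return local
```

For a 200x100 combined image stitched from two 100x100 captures, a zone spanning the seam is split into one local zone per capture, each keeping the same parameters, so both captures are processed consistently across the seam.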
