Interactive cinemagrams
    Invention Grant

    Publication No.: US10586367B2

    Publication Date: 2020-03-10

    Application No.: US15650531

    Filing Date: 2017-07-14

    Abstract: A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes, after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to that portion.
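
    The trigger-driven playback described in the abstract can be sketched as a simple state machine: the cinemagram stays on its still frame until a sensor reading crosses a trigger threshold, then the animated portion starts. This is an illustrative sketch only; the function names, threshold model, and state labels are assumptions, not the patented implementation.

```python
def should_trigger(sensor_reading, threshold=0.5):
    """Hypothetical triggering test: fire when a sensor reading
    (e.g. tilt, tap, or gaze signal) crosses a threshold."""
    return sensor_reading >= threshold

def play_cinemagram(sensor_readings, threshold=0.5):
    """Return the display state for each sensor sample: 'still' until a
    triggering event occurs, then 'animating' from that point onward."""
    state = "still"
    states = []
    for reading in sensor_readings:
        if state == "still" and should_trigger(reading, threshold):
            state = "animating"  # start the animated portion of the cinemagram
        states.append(state)
    return states
```

    Note that once triggered, the sketch keeps animating; a real device might stop after one loop or re-arm the trigger.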

    PROGRESSIVE COMPRESSED DOMAIN COMPUTER VISION AND DEEP LEARNING SYSTEMS

    Publication No.: US20190246130A1

    Publication Date: 2019-08-08

    Application No.: US15892141

    Filing Date: 2018-02-08

    CPC classification number: H04N19/48 H04N19/11 H04N19/167 H04N19/44

    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
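
    The progressive decoding idea in the abstract amounts to an early-exit loop: run one decoding stage at a time and stop as soon as the partially decoded data supports a computer-vision decision, saving the cost of the remaining stages. The stage names and the decision callback below are illustrative assumptions, not the claimed method.

```python
def progressive_decision(stages, can_decide):
    """Run decoding stages in order; after each stage, ask the
    computer-vision side whether it can already decide from the
    partially decoded stream. Returns (decision, stages_run)."""
    partial = []  # stages completed so far (stands in for partial decode state)
    for stage in stages:
        partial.append(stage)           # perform one more decoding stage
        decision = can_decide(partial)  # None means "not enough information yet"
        if decision is not None:
            return decision, len(partial)  # early exit: skip remaining stages
    return None, len(partial)

# Hypothetical stages of a multi-stage video decoder.
DECODE_STAGES = ["entropy", "inverse_quant", "inverse_transform", "reconstruct"]
```

    A classifier that can decide from quantized coefficients, say, would then skip the inverse transform and full pixel reconstruction entirely.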

    Array-based depth estimation
    Invention Grant

    Publication No.: US11816855B2

    Publication Date: 2023-11-14

    Application No.: US17027106

    Filing Date: 2020-09-21

    Abstract: A method includes obtaining at least three input image frames of a scene captured using at least three imaging sensors. The input image frames include a reference image frame and multiple non-reference image frames. The method also includes generating multiple disparity maps using the input image frames. Each disparity map is associated with the reference image frame and a different non-reference image frame. The method further includes generating multiple confidence maps using the input image frames. Each confidence map identifies weights associated with one of the disparity maps. In addition, the method includes generating a depth map of the scene using the disparity maps and the confidence maps. The imaging sensors are arranged to define multiple baseline directions, where each baseline direction extends between the imaging sensor used to capture the reference image frame and the imaging sensor used to capture a different non-reference image frame.
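
    The fusion step in the abstract can be sketched as a confidence-weighted average: each non-reference sensor pair yields a disparity map and a confidence map, and per-pixel depth is the weighted mean of the per-pair depth estimates (depth ∝ focal length × baseline / disparity). The linear-list pixel layout and the single shared baseline are simplifying assumptions, not the patented arrangement.

```python
def fuse_depth(disparity_maps, confidence_maps, focal_length=1.0, baseline=1.0):
    """Fuse disparity maps (one per reference/non-reference pair) into a
    single depth map, weighting each pair's estimate by its confidence map.
    Maps are flat lists of per-pixel values; zero disparity is skipped."""
    n_pixels = len(disparity_maps[0])
    depth = []
    for i in range(n_pixels):
        pairs = [(d[i], c[i]) for d, c in zip(disparity_maps, confidence_maps)
                 if d[i] > 0]  # ignore pixels with no valid disparity
        weight_sum = sum(c for _, c in pairs)
        if weight_sum > 0:
            # depth estimate per pair, then confidence-weighted average
            weighted = sum(c * (focal_length * baseline / d) for d, c in pairs)
            depth.append(weighted / weight_sum)
        else:
            depth.append(0.0)  # no pair was confident at this pixel
    return depth
```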

    System and method for synthetic depth-of-field effect rendering for videos

    Publication No.: US11449968B2

    Publication Date: 2022-09-20

    Application No.: US17139894

    Filing Date: 2020-12-31

    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple video frames of a video stream and multiple depth frames corresponding to the multiple video frames. The method also includes generating, using the at least one processor, multiple blur kernel maps based on the multiple depth frames. The method further includes reducing, using the at least one processor, depth errors in each of the multiple blur kernel maps. The method also includes performing, using the at least one processor, temporal smoothing on the multiple blur kernel maps to suppress temporal artifacts between different ones of the multiple blur kernel maps. In addition, the method includes generating, using the at least one processor, blur effects in the video stream using the multiple blur kernel maps.
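
    Two of the steps in this abstract lend themselves to a small sketch: mapping each depth frame to a blur kernel map (pixels farther from the focal plane get larger blur kernels) and temporally smoothing those maps across frames to suppress flicker. The linear depth-to-radius model and the exponential moving average are illustrative assumptions; the patent does not specify these particular formulas.

```python
def blur_kernel_map(depth_frame, focus_depth, scale=2.0):
    """Map per-pixel depth to a blur kernel radius: distance from the
    focal plane scaled linearly (a simple, assumed model)."""
    return [scale * abs(d - focus_depth) for d in depth_frame]

def temporal_smooth(kernel_maps, alpha=0.5):
    """Exponential moving average over consecutive blur kernel maps to
    suppress temporal artifacts (flicker) between frames."""
    smoothed = [kernel_maps[0][:]]  # first frame passes through unchanged
    for frame in kernel_maps[1:]:
        prev = smoothed[-1]
        smoothed.append([alpha * k + (1.0 - alpha) * p
                         for k, p in zip(frame, prev)])
    return smoothed
```

    The smoothed kernel maps would then drive a spatially varying blur over the video frames to produce the synthetic depth-of-field effect.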

    SYSTEM AND METHOD FOR SYNTHETIC DEPTH-OF-FIELD EFFECT RENDERING FOR VIDEOS

    Publication No.: US20220207655A1

    Publication Date: 2022-06-30

    Application No.: US17139894

    Filing Date: 2020-12-31

    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple video frames of a video stream and multiple depth frames corresponding to the multiple video frames. The method also includes generating, using the at least one processor, multiple blur kernel maps based on the multiple depth frames. The method further includes reducing, using the at least one processor, depth errors in each of the multiple blur kernel maps. The method also includes performing, using the at least one processor, temporal smoothing on the multiple blur kernel maps to suppress temporal artifacts between different ones of the multiple blur kernel maps. In addition, the method includes generating, using the at least one processor, blur effects in the video stream using the multiple blur kernel maps.
