Super-resolution depth map generation for multi-camera or other environments

    Publication Number: US11503266B2

    Publication Date: 2022-11-15

    Application Number: US16811585

    Application Date: 2020-03-06

    Abstract: A method includes obtaining, using at least one processor, first and second input image frames, where the first and second input image frames are associated with first and second image planes, respectively. The method also includes obtaining, using the at least one processor, a depth map associated with the first input image frame. The method further includes producing another version of the depth map by performing one or more times: (a) projecting, using the at least one processor, the first input image frame to the second image plane in order to produce a projected image frame using (i) the depth map and (ii) information identifying a conversion from the first image plane to the second image plane and (b) adjusting, using the at least one processor, at least one of the depth map and the information identifying the conversion from the first image plane to the second image plane.
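    As a rough illustration of the projection-and-adjustment loop described in this abstract, the Python/NumPy sketch below warps the first frame into the second image plane using the depth map and a pinhole conversion (intrinsics K, relative rotation R, translation t), then performs a deliberately simple adjustment step that searches over a global depth scale for the projection that best matches the second frame. All function and parameter names are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: forward-project frame_1 (grayscale HxW) into the
# second camera's image plane using a per-pixel depth map and a known
# conversion (intrinsics K, rotation R, translation t), then adjust the depth.
import numpy as np

def project_to_second_plane(frame_1, depth, K, R, t):
    """Forward-warp frame_1 into the second image plane (nearest-neighbour splat)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # homogeneous pixel grid
    rays = np.linalg.inv(K) @ pix                              # back-project to camera-1 rays
    pts_1 = rays * depth.ravel()                               # 3D points in camera 1
    pts_2 = R @ pts_1 + t[:, None]                             # move into camera-2 coordinates
    proj = K @ pts_2                                           # project onto second image plane
    u = (proj[0] / proj[2]).round().astype(int).clip(0, w - 1)
    v = (proj[1] / proj[2]).round().astype(int).clip(0, h - 1)
    warped = np.zeros_like(frame_1)                            # holes/occlusions left as zeros
    warped[v, u] = frame_1[ys.ravel(), xs.ravel()]
    return warped

def adjust_depth(frame_1, frame_2, depth, K, R, t, scales=np.linspace(0.8, 1.2, 9)):
    """Toy adjustment step: pick the global depth scale whose projection best
    matches frame_2 (the claimed method may adjust the depth map and/or the
    conversion in other ways)."""
    errors = [np.abs(frame_2 - project_to_second_plane(frame_1, s * depth, K, R, t)).mean()
              for s in scales]
    return scales[int(np.argmin(errors))] * depth
```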

    ARRAY-BASED DEPTH ESTIMATION
    Invention Application

    Publication Number: US20210248769A1

    Publication Date: 2021-08-12

    Application Number: US17027106

    Application Date: 2020-09-21

    Abstract: A method includes obtaining at least three input image frames of a scene captured using at least three imaging sensors. The input image frames include a reference image frame and multiple non-reference image frames. The method also includes generating multiple disparity maps using the input image frames. Each disparity map is associated with the reference image frame and a different non-reference image frame. The method further includes generating multiple confidence maps using the input image frames. Each confidence map identifies weights associated with one of the disparity maps. In addition, the method includes generating a depth map of the scene using the disparity maps and the confidence maps. The imaging sensors are arranged to define multiple baseline directions, where each baseline direction extends between the imaging sensor used to capture the reference image frame and the imaging sensor used to capture a different non-reference image frame.
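    The fusion step described in this abstract can be pictured with the short sketch below: each disparity map is converted into a per-pair depth estimate via Z = f·B/d (rectified pairs assumed), and the estimates are blended using the confidence maps as per-pixel weights. The names and the simple weighting scheme are illustrative assumptions rather than the claimed method.

```python
# Illustrative fusion step: combine per-pair disparity maps into one depth map
# using confidence maps as per-pixel weights. Assumes rectified image pairs,
# focal length in pixels, and baselines in metres.
import numpy as np

def fuse_depth(disparities, confidences, baselines, focal_px, eps=1e-6):
    """disparities, confidences: lists of HxW arrays, one per non-reference frame.
    baselines: baseline length between the reference sensor and each
    non-reference sensor. Returns a confidence-weighted depth map."""
    weighted_sum = np.zeros_like(disparities[0], dtype=float)
    weight_total = np.zeros_like(disparities[0], dtype=float)
    for disp, conf, base in zip(disparities, confidences, baselines):
        depth = focal_px * base / np.maximum(disp, eps)   # per-pair depth: Z = f * B / d
        weighted_sum += conf * depth
        weight_total += conf
    return weighted_sum / np.maximum(weight_total, eps)
```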

    SYSTEM AND METHOD FOR CONVOLUTIONAL LAYER STRUCTURE FOR NEURAL NETWORKS

    Publication Number: US20200349439A1

    Publication Date: 2020-11-05

    Application Number: US16400007

    Application Date: 2019-04-30

    Abstract: An electronic device, method, and computer readable medium for a convolutional layer structure for neural networks are provided. The electronic device includes a memory and at least one processor coupled to the memory. The at least one processor is configured to convolve an input to a neural network with a basis kernel to generate a convolution result, scale the convolution result by a scalar to create a scaled convolution result, and combine the scaled convolution result with one or more of a plurality of scaled convolution results to generate an output feature map.
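    A minimal sketch of the scheme this abstract describes, assuming the layer's effective kernels are linear combinations of a small set of shared basis kernels: the input is convolved with each basis kernel once, each result is scaled by a scalar coefficient, and the scaled results are summed into an output feature map. Function and argument names are assumptions.

```python
# Illustrative basis-kernel convolution layer: convolve once per shared basis
# kernel, then build each output channel as a scalar-weighted sum of those
# basis responses.
import numpy as np
from scipy.signal import convolve2d

def basis_conv_layer(feature_map, basis_kernels, coeffs):
    """feature_map: HxW array. basis_kernels: list of kxk arrays shared across
    the layer. coeffs: array of shape (out_channels, n_basis) holding the
    per-channel scalars. Returns an (out_channels, H, W) stack of feature maps."""
    # Convolve with each basis kernel once; every output channel reuses these.
    basis_responses = [convolve2d(feature_map, k, mode="same") for k in basis_kernels]
    outputs = []
    for channel_coeffs in coeffs:
        # Scale each basis response by its scalar and combine into one output map.
        out = sum(a * r for a, r in zip(channel_coeffs, basis_responses))
        outputs.append(out)
    return np.stack(outputs)
```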

    SYSTEM AND METHOD FOR INVERTIBLE WAVELET LAYER FOR NEURAL NETWORKS

    Publication Number: US20200349411A1

    Publication Date: 2020-11-05

    Application Number: US16399998

    Application Date: 2019-04-30

    Abstract: An electronic device, method, and computer readable medium for an invertible wavelet layer for neural networks are provided. The electronic device includes a memory and at least one processor coupled to the memory. The at least one processor is configured to receive an input to a neural network, apply a wavelet transform to the input at a wavelet layer of the neural network, and generate a plurality of subbands of the input as a result of the wavelet transform.
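    To make the invertibility concrete, the sketch below implements a single-level 2D Haar transform as the wavelet layer: the input is split into four subbands (LL, LH, HL, HH), and the inverse routine reconstructs the input exactly, so no information is lost between layers. The Haar choice and the function names are assumptions; the abstract covers wavelet transforms generally.

```python
# Illustrative invertible wavelet layer based on a single-level 2D Haar transform.
import numpy as np

def haar_wavelet_layer(x):
    """x: HxW array with even H and W. Returns the four Haar subbands."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]; c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-low: coarse approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def inverse_haar_wavelet_layer(ll, lh, hl, hh):
    """Exact inverse of haar_wavelet_layer, demonstrating invertibility."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    x = np.zeros((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = a; x[0::2, 1::2] = b; x[1::2, 0::2] = c; x[1::2, 1::2] = d
    return x
```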

    PROGRESSIVE COMPRESSED DOMAIN COMPUTER VISION AND DEEP LEARNING SYSTEMS

    Publication Number: US20190246130A1

    Publication Date: 2019-08-08

    Application Number: US15892141

    Application Date: 2018-02-08

    CPC classification number: H04N19/48 H04N19/11 H04N19/167 H04N19/44

    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
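    The multi-stage decoding with early exit can be sketched as the control loop below: decoding stages run in order, and after each stage the computer vision system is asked whether the partially decoded state (including any stream metadata) already supports a decision. The stage callables and the try_decision hook are hypothetical placeholders, not the patented decoder.

```python
# Illustrative control flow for progressive, multi-stage decoding with early exit.
from typing import Callable, List, Optional

def progressive_decode(video_stream: bytes,
                       stages: List[Callable[[bytes, dict], dict]],
                       try_decision: Callable[[dict], Optional[str]]) -> Optional[str]:
    """Run decoding stages in order; after each stage, ask the vision system
    whether the partially decoded state already yields a decision."""
    state: dict = {"metadata": {}, "partial": None}
    for stage in stages:
        state = stage(video_stream, state)   # e.g. headers -> entropy -> pixel data
        decision = try_decision(state)       # None means "keep decoding"
        if decision is not None:
            return decision                  # early exit: skip remaining stages
    return try_decision(state)               # fall back to the fully decoded stream
```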
