MOBILE DATA AUGMENTATION ENGINE FOR PERSONALIZED ON-DEVICE DEEP LEARNING SYSTEM

    Publication Number: US20210248722A1

    Publication Date: 2021-08-12

    Application Number: US16946989

    Application Date: 2020-07-14

    Abstract: A method includes processing, using at least one processor of an electronic device, each of multiple images using a photometric augmentation engine, where the photometric augmentation engine performs one or more photometric augmentation operations. The method also includes applying, using the at least one processor, multiple layers of a convolutional neural network to each of the images, where each layer generates a corresponding feature map. The method further includes processing, using the at least one processor, at least one of the feature maps using at least one feature augmentation engine between consecutive layers of the multiple layers, where the at least one feature augmentation engine performs one or more feature augmentation operations.
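
    A minimal sketch (PyTorch) of the pipeline this abstract describes, assuming brightness/contrast jitter as the photometric augmentation and additive noise as the feature augmentation inserted between consecutive convolutional layers; the module names and layer sizes are illustrative placeholders, not taken from the patent.

```python
import torch
import torch.nn as nn

class PhotometricAugment(nn.Module):
    """Randomly perturbs image brightness and contrast (one photometric operation)."""
    def forward(self, x):
        brightness = torch.empty(x.size(0), 1, 1, 1).uniform_(-0.1, 0.1)
        contrast = torch.empty(x.size(0), 1, 1, 1).uniform_(0.9, 1.1)
        return (x * contrast + brightness).clamp(0.0, 1.0)

class FeatureAugment(nn.Module):
    """Adds noise to an intermediate feature map (one feature augmentation operation)."""
    def __init__(self, std=0.05):
        super().__init__()
        self.std = std

    def forward(self, feat):
        return feat + self.std * torch.randn_like(feat)

class AugmentedCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.photo_aug = PhotometricAugment()
        self.layer1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.feat_aug = FeatureAugment()   # sits between consecutive layers
        self.layer2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

    def forward(self, images):
        x = self.photo_aug(images)   # photometric augmentation on each image
        f1 = self.layer1(x)          # first layer's feature map
        f1 = self.feat_aug(f1)       # feature augmentation between layers
        return self.layer2(f1)       # second layer's feature map

features = AugmentedCNN()(torch.rand(4, 3, 64, 64))
```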

    APPARATUS AND METHOD FOR DYNAMIC MULTI-CAMERA RECTIFICATION USING DEPTH CAMERA

    Publication Number: US20210174479A1

    Publication Date: 2021-06-10

    Application Number: US16703712

    Application Date: 2019-12-04

    Abstract: A method includes obtaining, using first and second image sensors of an electronic device, first and second images, respectively, of a scene. The method also includes obtaining, using an image depth sensor of the electronic device, a third image and a first depth map of the scene, the first depth map having a resolution lower than a resolution of the first and second images. The method further includes undistorting the first and second images using the third image and the first depth map. The method also includes rectifying the first and second images using the third image and the first depth map. The method further includes generating a disparity map using the first and second images that have been undistorted and rectified. In addition, the method includes generating a second depth map using the disparity map and the first depth map, where the second depth map has a resolution that is higher than the resolution of the first depth map.
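
    The last two steps of this abstract (converting the stereo disparity map to depth and fusing it with the low-resolution depth-sensor map) can be sketched in NumPy as below; the focal length, baseline, nearest-neighbor upsampling, and blend weight are illustrative assumptions, not values or methods from the patent.

```python
import numpy as np

def fuse_depth(disparity, low_res_depth, focal_px=800.0, baseline_m=0.05, alpha=0.7):
    """Build a full-resolution depth map from a disparity map and a low-res depth map."""
    h, w = disparity.shape
    # Stereo geometry: Z = f * B / d (guard against zero disparity).
    depth_from_disp = focal_px * baseline_m / np.maximum(disparity, 1e-6)
    # Nearest-neighbor upsampling of the depth-sensor map to full resolution
    # (assumes the resolutions divide evenly).
    sy, sx = h // low_res_depth.shape[0], w // low_res_depth.shape[1]
    depth_upsampled = np.kron(low_res_depth, np.ones((sy, sx)))
    # Simple weighted blend standing in for the patent's second depth map.
    return alpha * depth_from_disp + (1.0 - alpha) * depth_upsampled

second_depth_map = fuse_depth(np.random.uniform(1, 64, (480, 640)),
                              np.random.uniform(0.5, 5.0, (120, 160)))
```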

    MULTI-TASK FUSION NEURAL NETWORK ARCHITECTURE

    Publication Number: US20210158142A1

    Publication Date: 2021-05-27

    Application Number: US16693112

    Application Date: 2019-11-22

    Abstract: A method includes identifying, by at least one processor, multiple features of input data using a common feature extractor. The method also includes processing, by the at least one processor, at least some identified features using each of multiple pre-processing branches. Each pre-processing branch includes a first set of neural network layers and generates initial outputs associated with a different one of multiple data processing tasks. The method further includes combining, by the at least one processor, at least two initial outputs from at least two pre-processing branches to produce combined initial outputs. In addition, the method includes processing, by the at least one processor, at least some initial outputs or at least some combined initial outputs using each of multiple post-processing branches. Each post-processing branch includes a second set of neural network layers and generates final outputs associated with a different one of the multiple data processing tasks.
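
    A minimal PyTorch sketch of the described topology, assuming two tasks and channel concatenation as the way initial outputs are combined before the post-processing branches; the layer shapes and names are placeholders rather than the patented architecture.

```python
import torch
import torch.nn as nn

class MultiTaskFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Common feature extractor shared by all tasks.
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        # One pre-processing branch per task (first set of layers).
        self.pre_a = nn.Conv2d(16, 8, 3, padding=1)
        self.pre_b = nn.Conv2d(16, 8, 3, padding=1)
        # One post-processing branch per task (second set of layers); each
        # consumes the combined initial outputs of both pre-processing branches.
        self.post_a = nn.Conv2d(16, 3, 3, padding=1)
        self.post_b = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        feats = self.backbone(x)                       # shared features
        init_a = self.pre_a(feats)                     # initial output, task A
        init_b = self.pre_b(feats)                     # initial output, task B
        combined = torch.cat([init_a, init_b], dim=1)  # combined initial outputs
        return self.post_a(combined), self.post_b(combined)  # final outputs

out_a, out_b = MultiTaskFusionNet()(torch.rand(2, 3, 64, 64))
```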

    Multi-frame optical flow network with lossless pyramid micro-architecture

    Publication Number: US12148175B2

    Publication Date: 2024-11-19

    Application Number: US17590998

    Application Date: 2022-02-02

    Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
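
    A minimal PyTorch sketch of the predict-then-refine loop, with tiny stand-in networks for the trained prediction and update models; the layer choices and residual refinement are assumptions for illustration, not the networks claimed in the patent.

```python
import torch
import torch.nn as nn

class PredictionModel(nn.Module):
    """Maps the previous 2-channel flow to a predicted flow for the current step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 2, 3, padding=1)

    def forward(self, prev_flow):
        return self.net(prev_flow)

class UpdateModel(nn.Module):
    """Refines the predicted flow using the previous and current video frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2 + 3 + 3, 2, 3, padding=1)

    def forward(self, predicted_flow, frame_prev, frame_cur):
        x = torch.cat([predicted_flow, frame_prev, frame_cur], dim=1)
        return predicted_flow + self.net(x)   # residual refinement

predict, update = PredictionModel(), UpdateModel()
flow_prev = torch.zeros(1, 2, 64, 64)                 # flow from previous time step
frame_prev, frame_cur = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
flow_pred = predict(flow_prev)                        # first predicted optical flow
flow_cur = update(flow_pred, frame_prev, frame_cur)   # second (refined) optical flow
```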

    Array-based depth estimation
    Invention Grant

    Publication Number: US11816855B2

    Publication Date: 2023-11-14

    Application Number: US17027106

    Application Date: 2020-09-21

    Abstract: A method includes obtaining at least three input image frames of a scene captured using at least three imaging sensors. The input image frames include a reference image frame and multiple non-reference image frames. The method also includes generating multiple disparity maps using the input image frames. Each disparity map is associated with the reference image frame and a different non-reference image frame. The method further includes generating multiple confidence maps using the input image frames. Each confidence map identifies weights associated with one of the disparity maps. In addition, the method includes generating a depth map of the scene using the disparity maps and the confidence maps. The imaging sensors are arranged to define multiple baseline directions, where each baseline direction extends between the imaging sensor used to capture the reference image frame and the imaging sensor used to capture a different non-reference image frame.
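
    A minimal NumPy sketch of the final fusion step, combining the per-baseline disparity maps and their confidence maps into one depth map; the focal length and baseline values are illustrative placeholders, and the confidence-weighted average stands in for whatever combination the patent claims.

```python
import numpy as np

def fuse_disparities(disparity_maps, confidence_maps,
                     focal_px=800.0, baselines_m=(0.04, 0.06)):
    """disparity_maps, confidence_maps: lists of HxW arrays, one per non-reference sensor."""
    weighted_depth = np.zeros_like(disparity_maps[0])
    weight_sum = np.zeros_like(disparity_maps[0])
    for disp, conf, base in zip(disparity_maps, confidence_maps, baselines_m):
        depth = focal_px * base / np.maximum(disp, 1e-6)   # per-baseline depth
        weighted_depth += conf * depth                     # confidence-weighted sum
        weight_sum += conf
    return weighted_depth / np.maximum(weight_sum, 1e-6)

h, w = 120, 160
disps = [np.random.uniform(1, 32, (h, w)) for _ in range(2)]
confs = [np.random.uniform(0, 1, (h, w)) for _ in range(2)]
depth_map = fuse_disparities(disps, confs)
```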
