HAND MOTION PATTERN MODELING AND MOTION BLUR SYNTHESIZING TECHNIQUES

    Publication Number: US20230252608A1

    Publication Date: 2023-08-10

    Application Number: US17666166

    Filing Date: 2022-02-07

    Abstract: A method includes obtaining, using a stationary sensor of an electronic device, multiple image frames including first and second image frames. The method also includes generating, using multiple previously generated motion vectors, a first motion-distorted image frame using the first image frame and a second motion-distorted image frame using the second image frame. The method further includes adding noise to the motion-distorted image frames to generate first and second noisy motion-distorted image frames. The method also includes performing (i) a first multi-frame processing (MFP) operation to generate a ground truth image using the motion-distorted image frames and (ii) a second MFP operation to generate an input image using the noisy motion-distorted image frames. In addition, the method includes storing the ground truth and input images as an image pair for training an artificial intelligence/machine learning (AI/ML)-based image processing operation for removing image distortions caused by handheld image capture.
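
    Below is a minimal NumPy sketch of the training-pair generation this abstract describes, assuming simple translational motion vectors, additive Gaussian noise, and frame averaging as a stand-in for the multi-frame processing (MFP) operation; the function names and parameters are illustrative, not taken from the patent.

    ```python
    import numpy as np

    def motion_distort(frame, motion_vectors):
        """Accumulate shifted copies of a frame along a hand-motion trajectory
        (simple translational blur as an illustrative stand-in)."""
        acc = np.zeros_like(frame, dtype=np.float64)
        for dx, dy in motion_vectors:
            acc += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        return acc / len(motion_vectors)

    def add_noise(frame, rng, sigma=5.0):
        """Add Gaussian read noise to emulate a real handheld capture."""
        return np.clip(frame + rng.normal(0.0, sigma, frame.shape), 0, 255)

    def simple_mfp(frames):
        """Toy multi-frame processing (MFP) stand-in: average the frames."""
        return np.mean(np.stack(frames), axis=0)

    def make_training_pair(frame1, frame2, motion_vectors):
        rng = np.random.default_rng(123)
        # 1) Apply previously generated motion vectors to each clean frame.
        d1 = motion_distort(frame1, motion_vectors)
        d2 = motion_distort(frame2, motion_vectors)
        # 2) Add noise to form the "captured" versions of the distorted frames.
        n1, n2 = add_noise(d1, rng), add_noise(d2, rng)
        # 3) First MFP pass on the clean distorted frames -> ground truth image.
        ground_truth = simple_mfp([d1, d2])
        # 4) Second MFP pass on the noisy distorted frames -> input image.
        input_image = simple_mfp([n1, n2])
        return input_image, ground_truth   # stored as one AI/ML training pair

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        f1 = rng.integers(0, 256, (64, 64)).astype(np.float64)
        f2 = np.roll(f1, 1, axis=1)                    # second captured frame
        vectors = [(0, 0), (1, 0), (2, 1), (3, 1)]     # hand-motion trajectory
        x, y = make_training_pair(f1, f2, vectors)
        print(x.shape, y.shape)
    ```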

    Apparatus and method for dynamic multi-camera rectification using depth camera

    Publication Number: US11195259B2

    Publication Date: 2021-12-07

    Application Number: US16703712

    Filing Date: 2019-12-04

    Abstract: A method includes obtaining, using first and second image sensors of an electronic device, first and second images, respectively, of a scene. The method also includes obtaining, using an image depth sensor of the electronic device, a third image and a first depth map of the scene, the first depth map having a resolution lower than a resolution of the first and second images. The method further includes undistorting the first and second images using the third image and the first depth map. The method also includes rectifying the first and second images using the third image and the first depth map. The method further includes generating a disparity map using the first and second images that have been undistorted and rectified. In addition, the method includes generating a second depth map using the disparity map and the first depth map, where the second depth map has a resolution that is higher than the resolution of the first depth map.
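
    The sketch below illustrates only the final steps named in the abstract, converting a disparity map to depth and fusing it with the upsampled low-resolution depth-camera map; the undistortion, rectification, and stereo-matching stages are omitted, and the focal length, baseline, and blending weight are assumed values for illustration only.

    ```python
    import numpy as np

    def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
        """Standard pinhole relation: depth = f * B / disparity."""
        return (focal_px * baseline_m) / np.maximum(disparity, eps)

    def upsample_nearest(depth_lo, out_shape):
        """Nearest-neighbour upsampling of the low-resolution depth map."""
        h, w = depth_lo.shape
        rows = np.arange(out_shape[0]) * h // out_shape[0]
        cols = np.arange(out_shape[1]) * w // out_shape[1]
        return depth_lo[np.ix_(rows, cols)]

    def fuse_depth(disparity, depth_lo, focal_px, baseline_m, alpha=0.7):
        """Blend high-resolution depth-from-disparity with the upsampled
        depth-camera map; alpha weights the stereo estimate (illustrative)."""
        depth_stereo = disparity_to_depth(disparity, focal_px, baseline_m)
        depth_guide = upsample_nearest(depth_lo, disparity.shape)
        return alpha * depth_stereo + (1.0 - alpha) * depth_guide

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        disparity = rng.uniform(1.0, 64.0, (480, 640))   # from rectified pair
        depth_lo = rng.uniform(0.5, 5.0, (120, 160))     # depth-camera map
        depth_hi = fuse_depth(disparity, depth_lo, focal_px=800.0, baseline_m=0.06)
        print(depth_hi.shape)   # (480, 640): higher resolution than depth_lo
    ```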

    SUPER-RESOLUTION DEPTH MAP GENERATION FOR MULTI-CAMERA OR OTHER ENVIRONMENTS

    Publication Number: US20210281813A1

    Publication Date: 2021-09-09

    Application Number: US16811585

    Filing Date: 2020-03-06

    Abstract: A method includes obtaining, using at least one processor, first and second input image frames, where the first and second input image frames are associated with first and second image planes, respectively. The method also includes obtaining, using the at least one processor, a depth map associated with the first input image frame. The method further includes producing another version of the depth map by performing one or more times: (a) projecting, using the at least one processor, the first input image frame to the second image plane in order to produce a projected image frame using (i) the depth map and (ii) information identifying a conversion from the first image plane to the second image plane and (b) adjusting, using the at least one processor, at least one of the depth map and the information identifying the conversion from the first image plane to the second image plane.
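
    A rough NumPy sketch of the projection-and-adjustment loop described above, assuming a pinhole camera model, known intrinsics and relative pose as the "conversion" between image planes, and a toy global-scale adjustment of the depth map; all names and numbers are illustrative and do not reflect the patent's actual adjustment scheme.

    ```python
    import numpy as np

    def project_to_second_plane(img1, depth1, K, R, t):
        """Warp img1 into the second camera's image plane using the depth map
        and the plane-to-plane conversion (R, t). Forward warping, nearest pixel."""
        h, w = depth1.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
        rays = np.linalg.inv(K) @ pix                       # back-project pixels
        pts = rays * depth1.reshape(1, -1)                  # 3D points, camera 1
        pts2 = R @ pts + t.reshape(3, 1)                    # move to camera 2
        uv = K @ pts2
        u = np.round(uv[0] / uv[2]).astype(int)
        v = np.round(uv[1] / uv[2]).astype(int)
        out = np.zeros_like(img1)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts2[2] > 0)
        out[v[ok], u[ok]] = img1.reshape(-1)[ok]
        return out

    def refine_depth(img1, img2, depth1, K, R, t, scales=(0.9, 1.0, 1.1), iters=3):
        """Toy adjustment step: repeatedly keep the global depth scale whose
        projection best matches the second frame."""
        depth = depth1.copy()
        for _ in range(iters):
            errs = [np.mean(np.abs(project_to_second_plane(img1, depth * s, K, R, t) - img2))
                    for s in scales]
            depth *= scales[int(np.argmin(errs))]
        return depth

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
        R, t = np.eye(3), np.array([0.05, 0.0, 0.0])        # small baseline
        img1 = rng.uniform(0, 255, (64, 64))
        depth_true = np.full((64, 64), 2.0)
        img2 = project_to_second_plane(img1, depth_true, K, R, t)
        depth_est = refine_depth(img1, img2, np.full((64, 64), 1.5), K, R, t)
        print(round(float(depth_est.mean()), 2))            # moves toward 2.0
    ```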

    MULTI-FRAME OPTICAL FLOW NETWORK WITH LOSSLESS PYRAMID MICRO-ARCHITECTURE

    Publication Number: US20230245328A1

    Publication Date: 2023-08-03

    Application Number: US17590998

    Filing Date: 2022-02-02

    CPC classification number: G06T7/269 G06T2207/10016 G06T2207/20081

    Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
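
    A small PyTorch sketch of the two-stage flow described in the abstract, using tiny untrained convolutional networks as stand-ins for the trained prediction and update models; grayscale frames are assumed, and the lossless pyramid micro-architecture named in the title is not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    class FlowPredictor(nn.Module):
        """Predicts the current-step flow from the previous-step flow."""
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(2, 2, kernel_size=3, padding=1)

        def forward(self, prev_flow):
            return self.net(prev_flow)

    class FlowUpdater(nn.Module):
        """Refines the predicted flow using the previous and current frames."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2 + 1 + 1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 2, 3, padding=1),
            )

        def forward(self, pred_flow, frame_prev, frame_curr):
            x = torch.cat([pred_flow, frame_prev, frame_curr], dim=1)
            return pred_flow + self.net(x)      # residual refinement

    if __name__ == "__main__":
        predictor, updater = FlowPredictor(), FlowUpdater()
        prev_flow = torch.zeros(1, 2, 64, 64)               # flow at previous step
        frame_prev = torch.rand(1, 1, 64, 64)
        frame_curr = torch.rand(1, 1, 64, 64)
        pred_flow = predictor(prev_flow)                     # prediction model
        curr_flow = updater(pred_flow, frame_prev, frame_curr)   # update model
        print(curr_flow.shape)                               # torch.Size([1, 2, 64, 64])
    ```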

    Mobile data augmentation engine for personalized on-device deep learning system

    Publication Number: US11631163B2

    Publication Date: 2023-04-18

    Application Number: US16946989

    Filing Date: 2020-07-14

    Abstract: A method includes processing, using at least one processor of an electronic device, each of multiple images using a photometric augmentation engine, where the photometric augmentation engine performs one or more photometric augmentation operations. The method also includes applying, using the at least one processor, multiple layers of a convolutional neural network to each of the images, where each layer generates a corresponding feature map. The method further includes processing, using the at least one processor, at least one of the feature maps using at least one feature augmentation engine between consecutive layers of the multiple layers, where the at least one feature augmentation engine performs one or more feature augmentation operations.
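
    A PyTorch sketch of the augmentation pipeline described above, with brightness/contrast jitter standing in for the photometric augmentation engine and noise plus channel dropout standing in for the feature augmentation engine between consecutive layers; the layer sizes and augmentation parameters are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    def photometric_augment(img, brightness=0.1, contrast=0.1):
        """Photometric augmentation stand-in: random brightness/contrast jitter."""
        b = (torch.rand(1) * 2 - 1) * brightness
        c = 1.0 + (torch.rand(1) * 2 - 1) * contrast
        return torch.clamp(img * c + b, 0.0, 1.0)

    def feature_augment(fmap, noise_std=0.05, drop_p=0.1):
        """Feature augmentation stand-in: additive noise plus channel dropout."""
        fmap = fmap + noise_std * torch.randn_like(fmap)
        keep = (torch.rand(fmap.shape[0], fmap.shape[1], 1, 1) > drop_p).float()
        return fmap * keep

    layers = nn.ModuleList([
        nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),
        nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU()),
        nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()),
    ])

    def forward_with_augmentation(img):
        x = photometric_augment(img)           # photometric augmentation engine
        for i, layer in enumerate(layers):
            x = layer(x)                       # layer i's feature map
            if i < len(layers) - 1:
                x = feature_augment(x)         # augmentation between consecutive layers
        return x

    if __name__ == "__main__":
        img = torch.rand(1, 3, 32, 32)
        print(forward_with_augmentation(img).shape)   # torch.Size([1, 16, 32, 32])
    ```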

    Multi-task fusion neural network architecture

    Publication Number: US11556784B2

    Publication Date: 2023-01-17

    Application Number: US16693112

    Filing Date: 2019-11-22

    Abstract: A method includes identifying, by at least one processor, multiple features of input data using a common feature extractor. The method also includes processing, by the at least one processor, at least some identified features using each of multiple pre-processing branches. Each pre-processing branch includes a first set of neural network layers and generates initial outputs associated with a different one of multiple data processing tasks. The method further includes combining, by the at least one processor, at least two initial outputs from at least two pre-processing branches to produce combined initial outputs. In addition, the method includes processing, by the at least one processor, at least some initial outputs or at least some combined initial outputs using each of multiple post-processing branches. Each post-processing branch includes a second set of neural network layers and generates final outputs associated with a different one of the multiple data processing tasks.
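
    A compact PyTorch sketch of the fusion architecture described above for two hypothetical tasks; the shared extractor, the single-layer branches, and the concatenation-based fusion of the initial outputs are illustrative choices, not the patent's specific design.

    ```python
    import torch
    import torch.nn as nn

    class MultiTaskFusion(nn.Module):
        """Two-task sketch: shared feature extractor, per-task pre-processing
        branches, fusion of the initial outputs, and per-task post-processing
        branches producing the final outputs."""
        def __init__(self, ch=8):
            super().__init__()
            self.extractor = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
            self.pre_a = nn.Conv2d(ch, ch, 3, padding=1)    # e.g. denoising branch
            self.pre_b = nn.Conv2d(ch, ch, 3, padding=1)    # e.g. sharpening branch
            self.post_a = nn.Conv2d(2 * ch, 3, 3, padding=1)
            self.post_b = nn.Conv2d(2 * ch, 3, 3, padding=1)

        def forward(self, x):
            feats = self.extractor(x)                 # common feature extractor
            init_a = self.pre_a(feats)                # initial outputs, task A
            init_b = self.pre_b(feats)                # initial outputs, task B
            fused = torch.cat([init_a, init_b], 1)    # combine initial outputs
            return self.post_a(fused), self.post_b(fused)   # final outputs per task

    if __name__ == "__main__":
        model = MultiTaskFusion()
        out_a, out_b = model(torch.rand(1, 3, 32, 32))
        print(out_a.shape, out_b.shape)
    ```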

    ARRAY-BASED DEPTH ESTIMATION

    Publication Number: US20210248769A1

    Publication Date: 2021-08-12

    Application Number: US17027106

    Filing Date: 2020-09-21

    Abstract: A method includes obtaining at least three input image frames of a scene captured using at least three imaging sensors. The input image frames include a reference image frame and multiple non-reference image frames. The method also includes generating multiple disparity maps using the input image frames. Each disparity map is associated with the reference image frame and a different non-reference image frame. The method further includes generating multiple confidence maps using the input image frames. Each confidence map identifies weights associated with one of the disparity maps. In addition, the method includes generating a depth map of the scene using the disparity maps and the confidence maps. The imaging sensors are arranged to define multiple baseline directions, where each baseline direction extends between the imaging sensor used to capture the reference image frame and the imaging sensor used to capture a different non-reference image frame.
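
    A NumPy sketch of the final fusion step described above, assuming disparity maps and confidence maps are already available for two reference/non-reference pairs along two baseline directions; the focal length and baseline lengths are illustrative values.

    ```python
    import numpy as np

    def disparity_to_depth(disp, focal_px, baseline_m, eps=1e-6):
        """Pinhole relation along one baseline: depth = f * B / disparity."""
        return focal_px * baseline_m / np.maximum(disp, eps)

    def fuse_depth(disparities, confidences, focal_px, baselines_m):
        """Confidence-weighted fusion of the per-pair depth estimates."""
        num = np.zeros_like(disparities[0])
        den = np.zeros_like(disparities[0])
        for disp, conf, base in zip(disparities, confidences, baselines_m):
            num += conf * disparity_to_depth(disp, focal_px, base)
            den += conf
        return num / np.maximum(den, 1e-6)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        shape = (120, 160)
        # Two reference/non-reference pairs along two baseline directions.
        disparities = [rng.uniform(1, 32, shape), rng.uniform(1, 32, shape)]
        confidences = [rng.uniform(0, 1, shape), rng.uniform(0, 1, shape)]
        depth = fuse_depth(disparities, confidences, focal_px=800.0,
                           baselines_m=[0.05, 0.08])
        print(depth.shape)
    ```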
