DIGITAL IMAGE SUB-DIVISION
    Invention Publication

    Publication Number: US20230360317A1

    Publication Date: 2023-11-09

    Application Number: US17662061

    Filing Date: 2022-05-04

    CPC classification number: G06T15/205 G06T7/85 G06V20/64 G06T2207/20021

    Abstract: A digital image processing method performed by a computer is disclosed. A digital image captured by a real camera having intrinsic and extrinsic parameters is received. The intrinsic parameters include a native principal point defined relative to an origin of a coordinate system of the digital image. The digital image is sub-divided into a plurality of sub-images. For each sub-image of the plurality of sub-images, the sub-image is associated with a synthesized recapture camera having synthesized intrinsic and extrinsic parameters mapped from the real camera. The synthesized intrinsic parameters include the native principal point defined relative to an origin of a coordinate system of the sub-image.
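
    A minimal sketch of the principal-point remapping described in the abstract, assuming the intrinsics are stored in a standard 3x3 matrix and the sub-images are axis-aligned tiles; the function and variable names are illustrative, not taken from the disclosure.

        import numpy as np

        def subdivide_with_recapture_cameras(image, K, tile_h, tile_w):
            """Split `image` into tiles and derive synthesized intrinsics per tile.

            K is a 3x3 intrinsic matrix; K[0, 2] and K[1, 2] hold the native
            principal point relative to the full image's coordinate origin.
            """
            sub_images = []
            for oy in range(0, image.shape[0], tile_h):
                for ox in range(0, image.shape[1], tile_w):
                    tile = image[oy:oy + tile_h, ox:ox + tile_w]
                    K_sub = K.copy()
                    # Re-express the native principal point relative to the
                    # sub-image's own coordinate origin (its top-left corner).
                    K_sub[0, 2] -= ox
                    K_sub[1, 2] -= oy
                    sub_images.append((tile, K_sub))
            return sub_images

    Each tile can then be treated as if it were captured by its own synthesized recapture camera that shares the real camera's other intrinsics but has the principal point expressed in the sub-image's coordinate system.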

    SYSTEMS AND METHODS FOR STRUCTURED LIGHT DEPTH COMPUTATION USING SINGLE PHOTON AVALANCHE DIODES

    Publication Number: US20230254603A1

    Publication Date: 2023-08-10

    Application Number: US18299201

    Filing Date: 2023-04-12

    CPC classification number: H04N25/705 G02B27/0172 G02B2027/0138

    Abstract: A system for structured light depth computation using single photon avalanche diodes (SPADs) is configurable to, over a frame capture time period, selectively activate an illuminator to perform interleaved structured light illumination operations. The interleaved structured light illumination operations comprise alternately emitting at least a first structured light pattern and at least a second structured light pattern from the illuminator. The system is also configurable to, over the frame capture time period, perform a plurality of sequential shutter operations to configure each SPAD pixel of a SPAD array to enable photon detection. The plurality of sequential shutter operations generates, for each SPAD pixel of the SPAD array, a plurality of binary counts indicating whether a photon was detected during each of the plurality of sequential shutter operations.
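
    A minimal sketch of the interleaving loop, assuming hypothetical illuminator.emit(...) and spad_array.shutter() interfaces (not from the disclosure), where each shutter operation returns a per-pixel binary map of photon detections.

        import numpy as np

        def capture_interleaved_frame(illuminator, spad_array, num_shutters,
                                      pattern_a, pattern_b):
            h, w = spad_array.shape
            # One binary count per SPAD pixel per sequential shutter operation.
            binary_counts = np.zeros((num_shutters, h, w), dtype=np.uint8)
            for i in range(num_shutters):
                # Alternately emit the first and second structured light
                # pattern from the illuminator over the frame capture period.
                illuminator.emit(pattern_a if i % 2 == 0 else pattern_b)
                # Arm every SPAD pixel for photon detection; the returned map
                # is 1 where an avalanche (photon detection) occurred, else 0.
                binary_counts[i] = spad_array.shutter()
            return binary_counts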

    GENERATE SUPER-RESOLUTION IMAGES FROM SPARSE COLOR INFORMATION

    Publication Number: US20230214962A1

    Publication Date: 2023-07-06

    Application Number: US18108788

    Filing Date: 2023-02-13

    CPC classification number: G06T3/4061 H04N23/10 H04N23/951 G06T3/4007

    Abstract: Techniques for generating a high-resolution, full-color output image from lower-resolution, sparse-color input images are disclosed. A camera whose sensor has a sparse Bayer pattern generates images. While the camera is generating the images, IMU data is acquired for each image; the IMU data indicates the pose the camera was in when it generated that image. The images and IMU data are fed into a motion model, which performs temporal filtering on the images and uses the IMU data to generate a red-only image, a green-only image, a blue-only image, and a monochrome image. The color images are up-sampled to match the resolution of the monochrome image. A high-resolution output color image is generated by combining the up-sampled images and the monochrome image.
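
    A minimal sketch of the final up-sampling and combination step, using bilinear interpolation and a simple luminance-transfer blend as illustrative stand-ins; the motion model and temporal filtering described above are not reproduced here.

        import numpy as np
        from scipy.ndimage import zoom

        def fuse_sparse_color(red, green, blue, mono):
            """red/green/blue: low-resolution planes in [0, 1]; mono: the
            high-resolution monochrome plane in [0, 1]."""
            target_h, target_w = mono.shape

            def upsample(plane):
                # Bilinear up-sampling to the monochrome image's resolution.
                return zoom(plane, (target_h / plane.shape[0],
                                    target_w / plane.shape[1]), order=1)

            rgb = np.stack([upsample(p) for p in (red, green, blue)], axis=-1)
            # Reinstate the high-frequency detail carried by the monochrome
            # image, which the up-sampled color planes lack.
            luma = rgb.mean(axis=-1, keepdims=True)
            return np.clip(rgb + (mono[..., None] - luma), 0.0, 1.0)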

    RAPID TARGET ACQUISITION USING GRAVITY AND NORTH VECTORS

    Publication Number: US20230148231A1

    Publication Date: 2023-05-11

    Application Number: US17524270

    Filing Date: 2021-11-11

    CPC classification number: G06T15/205 G06T7/33 G06T7/97

    Abstract: Techniques for aligning images generated by two cameras are disclosed. The alignment is performed by computing a relative 3D orientation between the two cameras. A first gravity vector for a first camera and a second gravity vector for a second camera are determined. A first camera image is obtained from the first camera, and a second camera image is obtained from the second camera. A first alignment process partially aligns the first camera's orientation with the second camera's orientation by aligning the gravity vectors, thereby eliminating two degrees of freedom of the relative 3D orientation. Visual correspondences between the two images are identified. A second alignment process then fully aligns the orientations by using the identified visual correspondences to identify and eliminate the third degree of freedom of the relative 3D orientation.
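
    A minimal sketch of the two-stage alignment, assuming the visual correspondences are supplied as matched unit bearing rays in each camera's frame and that the gravity vectors are not anti-parallel; none of the names below come from the disclosure.

        import numpy as np

        def rotation_between_vectors(a, b):
            """Smallest rotation taking unit vector a onto unit vector b."""
            a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
            v, c = np.cross(a, b), float(np.dot(a, b))
            vx = np.array([[0, -v[2], v[1]],
                           [v[2], 0, -v[0]],
                           [-v[1], v[0], 0]])
            return np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues form

        def rotation_about_axis(axis, angle):
            axis = axis / np.linalg.norm(axis)
            vx = np.array([[0, -axis[2], axis[1]],
                           [axis[2], 0, -axis[0]],
                           [-axis[1], axis[0], 0]])
            return np.eye(3) + np.sin(angle) * vx + (1 - np.cos(angle)) * (vx @ vx)

        def align_cameras(g1, g2, bearings1, bearings2):
            # Stage 1: align the gravity vectors, eliminating two of the three
            # degrees of freedom of the relative 3D orientation.
            R_g = rotation_between_vectors(g1, g2)
            axis = g2 / np.linalg.norm(g2)
            # Stage 2: the residual misalignment is a rotation about gravity;
            # estimate its angle from the visual correspondences.
            p = np.asarray(bearings1) @ R_g.T
            q = np.asarray(bearings2)
            p_t = p - np.outer(p @ axis, axis)  # strip the gravity component
            q_t = q - np.outer(q @ axis, axis)
            angles = np.arctan2(np.cross(p_t, q_t) @ axis,
                                np.sum(p_t * q_t, axis=1))
            return rotation_about_axis(axis, np.median(angles)) @ R_g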

    Smooth and Jump-Free Rapid Target Acquisition

    Publication Number: US20230115537A1

    Publication Date: 2023-04-13

    Application Number: US17500088

    Filing Date: 2021-10-13

    Abstract: Techniques for correcting an overlay misalignment between an external camera image and a system camera image are disclosed. A first system camera image and a first external camera image are acquired, and a first visual alignment is performed between those two images to produce an overlaid image. Some of the content in the overlaid image is surrounded by a bounding element. The position of the bounding element is modified based on movements of the system camera and/or the external camera. In response to performing a second visual alignment using new images, an update vector is computed. Relative movement between the two cameras is determined. Based on that movement and on the update vector, the bounding element is progressively transitioned to a corrected position in the overlaid image. The speed at which the bounding element is progressively transitioned is proportional to the amount of movement.
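
    A minimal sketch of the progressive transition step, applied once per frame; the gain constant and the scalar motion measure are illustrative assumptions.

        import numpy as np

        def step_bounding_element(position, update_vector, relative_motion,
                                  gain=0.5):
            """position: current 2D overlay position of the bounding element;
            update_vector: offset to the corrected position from the latest
            visual alignment; relative_motion: magnitude of relative movement
            between the system camera and the external camera this frame."""
            # The fraction of the remaining correction applied per frame is
            # proportional to the relative movement, so the bounding element
            # does not visibly jump while the cameras are still but converges
            # quickly once they move.
            alpha = float(np.clip(gain * relative_motion, 0.0, 1.0))
            return position + alpha * update_vector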

    SYSTEMS AND METHODS FOR EFFICIENT GENERATION OF SINGLE PHOTON AVALANCHE DIODE IMAGERY WITH PERSISTENCE

    Publication Number: US20220353489A1

    Publication Date: 2022-11-03

    Application Number: US17246477

    Filing Date: 2021-04-30

    Abstract: A system for efficiently generating SPAD imagery with persistence is configurable to capture an image frame, capture pose data associated with the capturing of the image frame, and access a persistence frame. The persistence frame includes a preceding composite image frame generated based on at least two preceding image frames. The at least two preceding image frames are associated with timepoints that precede a capture timepoint associated with the image frame. The system is configurable to generate a persistence term based on (i) the pose data, (ii) a similarity comparison based on the image frame and the persistence frame, or (iii) a signal strength associated with the image frame. The system is configurable to generate a composite image based on the image frame, the persistence frame, and the persistence term. The persistence term defines a contribution of the image frame and the persistence frame to the composite image.
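
    A minimal sketch of the compositing step; the particular way the persistence term is derived here from pose change, frame similarity, and signal strength is an illustrative assumption, not the claimed method.

        import numpy as np

        def composite_with_persistence(frame, persistence_frame,
                                       pose_delta, signal_strength):
            """frame, persistence_frame: arrays normalized to [0, 1];
            pose_delta: magnitude of camera motion since the persistence frame;
            signal_strength: estimated signal level of the new frame in [0, 1]."""
            # Similarity comparison between the image frame and the persistence frame.
            similarity = 1.0 - float(np.mean(np.abs(frame - persistence_frame)))
            # Weight the persistence frame more heavily when the camera barely
            # moved, the frames agree, and the new frame's signal is weak.
            w = float(np.clip(similarity * np.exp(-pose_delta)
                              * (1.0 - signal_strength), 0.0, 1.0))
            # The persistence term defines each input's contribution to the output.
            return w * persistence_frame + (1.0 - w) * frame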
