INFRARED AND OTHER COLORIZATION USING GENERATIVE NEURAL NETWORKS

    Publication Number: US20250117980A1

    Publication Date: 2025-04-10

    Application Number: US18484122

    Application Date: 2023-10-10

    Abstract: In various examples, infrared image data (e.g., frames of an infrared video feed) may be colorized by applying the infrared image data and/or a corresponding edge map to a generator of a generative adversarial network (GAN). The GAN may be trained with or without paired ground truth RGB and infrared (and/or edge map) images. In an example of the latter scenario, a first generator G(IR)→RGB and a second generator G(RGB)→IR may be trained in a first chain, their positions may be swapped in a second chain, and the second chain may be trained. In some embodiments, edges may be emphasized by weighting edge pixels (e.g., determined from a corresponding edge map) higher than non-edge pixels when backpropagating loss. After training, G(IR)→RGB may be used to generate RGB image data from infrared image data (and/or a corresponding edge map).
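
    As a minimal illustration of the edge-emphasis idea described above, the sketch below computes an L1 reconstruction loss in which edge pixels (taken from a corresponding edge map) are weighted higher than non-edge pixels when the loss is accumulated. The function name, array shapes, and the weight value are assumptions for illustration; the abstract does not specify a particular loss formulation.

```python
import numpy as np

def edge_weighted_l1(pred_rgb, target_rgb, edge_map, edge_weight=5.0):
    """L1 reconstruction loss that weights edge pixels higher than non-edge pixels.

    pred_rgb, target_rgb: (H, W, 3) float arrays
    edge_map:             (H, W) binary array, 1 at edge pixels
    edge_weight:          how much more an edge pixel contributes (assumed value)
    """
    per_pixel = np.abs(pred_rgb - target_rgb).mean(axis=-1)  # (H, W) per-pixel error
    weights = 1.0 + (edge_weight - 1.0) * edge_map           # 1 off edges, edge_weight on edges
    return float((weights * per_pixel).mean())
```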

    INFRARED AND OTHER COLORIZATION WITH RGB IMAGE DATA USING GENERATIVE NEURAL NETWORKS

    Publication Number: US20250117981A1

    Publication Date: 2025-04-10

    Application Number: US18484306

    Application Date: 2023-10-10

    Abstract: In various examples, infrared image data (e.g., frames of an infrared (IR) video feed) may be colorized by transferring color statistics from an RGB image with an overlapping field of view, by modifying one or more dimensions of an encoded representation of a generated RGB image, and/or otherwise. For example, segmentation may be applied to the IR and RGB image data, and one or more colors or color statistics may be transferred from a segmented region of the RGB image data to a corresponding segmented region of the IR image data. In some embodiments, synthesized RGB image data may be fine-tuned by transferring one or more colors or color statistics from corresponding real RGB image data, and/or by modifying one or more dimensions of an encoded representation of the synthesized RGB image data.
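
    The sketch below illustrates one simple way the described per-region transfer could look: matching the per-channel mean and standard deviation of a segmented region of the IR-derived image to the corresponding segmented region of the real RGB image. The function name, array layout, and the choice of mean/std matching are assumptions; the abstract does not commit to a specific statistic or formula.

```python
import numpy as np

def transfer_region_statistics(ir_rgb, real_rgb, ir_mask, rgb_mask, eps=1e-6):
    """Match per-channel mean/std of a segmented region of `ir_rgb` (a colorized
    or synthesized image) to the corresponding segmented region of `real_rgb`.

    ir_rgb, real_rgb:  (H, W, 3) float images in [0, 1]
    ir_mask, rgb_mask: (H, W) boolean masks selecting corresponding regions
    """
    out = ir_rgb.copy()
    src = real_rgb[rgb_mask]                 # reference region pixels, (N, 3)
    dst = ir_rgb[ir_mask]                    # region to recolor, (M, 3)
    src_mean, src_std = src.mean(axis=0), src.std(axis=0)
    dst_mean, dst_std = dst.mean(axis=0), dst.std(axis=0)
    # Normalize the target region, then re-scale/shift to the reference statistics.
    out[ir_mask] = (dst - dst_mean) / (dst_std + eps) * src_std + src_mean
    return np.clip(out, 0.0, 1.0)
```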

    OPTIMIZED VISUALIZATION STREAMING FOR VEHICLE ENVIRONMENT VISUALIZATION

    Publication Number: US20230316773A1

    Publication Date: 2023-10-05

    Application Number: US18173630

    Application Date: 2023-02-23

    CPC classification number: G06V20/58 B60W30/06 B60W2420/42 B60W2420/52

    Abstract: In various examples, sensor data may be captured by sensors of an ego-object, such as a vehicle traveling in a physical environment, and a representation of the sensor data may be streamed from the ego-object to a remote location to facilitate various remote experiences, such as streaming to a remote viewer (e.g., a friend or relative), streaming to a remote or fleet operator, streaming to a mobile app configured to self-park or summon an ego-object, rendering a 3D augmented reality (AR) or virtual reality (VR) representation of the physical environment, and/or others. In some embodiments, the stream includes one or more command channels used to control data collection, rendering, stream content, or even vehicle maneuvers, such as during an emergency, self-park, or summon scenario.
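
    A small sketch of what a message on such a command channel might look like, assuming a JSON-serialized command with a type and payload; the command names, fields, and transport are hypothetical and not specified by the abstract.

```python
import json
from dataclasses import dataclass
from enum import Enum

class CommandType(str, Enum):
    # Hypothetical command categories suggested by the abstract.
    SET_STREAM_CONTENT = "set_stream_content"
    SET_RENDER_VIEW = "set_render_view"
    EMERGENCY_STOP = "emergency_stop"
    SUMMON = "summon"

@dataclass
class StreamCommand:
    command: CommandType
    payload: dict

    def to_wire(self) -> bytes:
        # Serialize for transmission over a command channel alongside the sensor stream.
        return json.dumps({"command": self.command.value, "payload": self.payload}).encode()

# Example: a remote viewer requesting a top-down rendered view during self-parking.
message = StreamCommand(CommandType.SET_RENDER_VIEW, {"view": "top_down", "fov_deg": 90})
print(message.to_wire())
```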

    DISTORTION CORRECTION FOR ENVIRONMENT VISUALIZATIONS WITH WIDE ANGLE VIEWS

    Publication Number: US20250022223A1

    Publication Date: 2025-01-16

    Application Number: US18221018

    Application Date: 2023-07-12

    Abstract: In various examples, a visualization of an environment may be generated using a Panini projection that is optimized based on detected scene content. For example, image data of an environment may be perspective projected (e.g., using a rectilinear projection) to generate a reference projection image, which may be analyzed to detect the presence of vanishing points and/or horizontal lines (e.g., in a central region). The image data of the environment may be projected using a Panini projection that is optimized based on distances to detected objects, the absence of a detected vanishing point, and/or the presence of a detected horizontal line to generate a Panini projection image. In some embodiments, vertical compression is applied to the Panini projection image to correct for distortion of horizontal lines (e.g., based on the presence of a detected horizontal line).
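
    For reference, the sketch below implements the general Panini projection (parameter d), with an optional vertical-compression factor of the kind the abstract describes for correcting distorted horizontal lines. How d and the compression factor would be optimized from detected objects, vanishing points, or horizontal lines is not shown; the parameter defaults are illustrative.

```python
import numpy as np

def panini_project(azimuth, elevation, d=1.0, vertical_compression=1.0):
    """General Panini projection of ray directions to image-plane coordinates.

    azimuth, elevation:   angles in radians (azimuth from the optical axis,
                          elevation from the horizon)
    d:                    Panini parameter; d=0 reduces to a rectilinear
                          (perspective) projection, d=1 is the classic Panini
    vertical_compression: factor < 1 compresses y to straighten horizontal lines
    """
    s = (d + 1.0) / (d + np.cos(azimuth))   # radial scaling of the Panini projection
    x = s * np.sin(azimuth)
    y = s * np.tan(elevation) * vertical_compression
    return x, y
```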

    TEMPORAL MASKING FOR STITCHED IMAGES AND SURROUND VIEW VISUALIZATIONS

    Publication Number: US20250022218A1

    Publication Date: 2025-01-16

    Application Number: US18353441

    Application Date: 2023-07-17

    Abstract: In various examples, updates to a dynamic seam placement and/or fitted 3D bowl may be at least partially concealed using temporal masking. A future time at which a predicted change in dynamic seam placement and/or fitted 3D bowl exceeds a threshold may be determined. A predicted dynamic seam placement and/or fitted 3D bowl update may be temporally masked by triggering the update before arriving at the future time to compensate for the latency of the temporal filtering, and/or by adjusting the temporal filter size (e.g., shortening the temporal window over which temporal filtering is applied) in anticipation of the predicted update, effectively maintaining some of the smoothing effects of temporal filtering while reducing the latency.
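
    The sketch below illustrates how such scheduling logic might be expressed in code: if a large change is predicted, trigger the update early by roughly the temporal filter's latency and temporarily shorten the filter window. All names, the frame-based timing, and the specific policy are assumptions made for illustration.

```python
def plan_seam_or_bowl_update(predicted_change, change_threshold, frames_until_change,
                             filter_latency_frames, default_window, short_window):
    """Decide when to trigger a seam/bowl update and which temporal-filter window
    to use so that the update is concealed by temporal masking."""
    if predicted_change <= change_threshold:
        # No large change predicted: keep the normal smoothing window.
        return {"trigger_in_frames": None, "temporal_window": default_window}
    # Trigger early enough that the temporal filter has settled by the time the
    # predicted change arrives, and shorten the window to reduce its latency.
    trigger_in = max(0, frames_until_change - filter_latency_frames)
    return {"trigger_in_frames": trigger_in, "temporal_window": short_window}
```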

    IMAGE STITCHING WITH DYNAMIC SEAM PLACEMENT BASED ON OBJECT SALIENCY FOR SURROUND VIEW VISUALIZATION

    Publication Number: US20230316458A1

    Publication Date: 2023-10-05

    Application Number: US18173589

    Application Date: 2023-02-23

    CPC classification number: G06T3/4038 G06T7/74

    Abstract: In various examples, dynamic seam placement is used to position seams in regions of overlapping image data to avoid crossing salient objects or regions. Objects may be detected from image frames representing overlapping views of an environment surrounding an ego-object such as a vehicle. The images may be aligned to create an aligned composite image or surface (e.g., a panorama, a 360° image, bowl shaped surface) with regions of overlapping image data, and a representation of the detected objects and/or salient regions (e.g., a saliency mask) may be generated and projected onto the aligned composite image or surface. Seams may be positioned in the overlapping regions to avoid or minimize crossing salient pixels represented in the projected masks, and the image data may be blended at the seams to create a stitched image or surface (e.g., a stitched panorama, stitched 360° image, stitched textured surface).
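
    One way such seam placement could be realized is a seam-carving-style dynamic program that finds a low-cost vertical path through the overlap region, treating projected saliency as the cost to avoid. The sketch below is an illustrative assumption of this kind, not the patent's specific method.

```python
import numpy as np

def find_low_saliency_seam(saliency, overlap_cols):
    """Find a vertical seam through the overlap region that minimizes accumulated
    saliency, using a seam-carving-style dynamic program.

    saliency:     (H, W) float array over the aligned composite; higher = avoid
    overlap_cols: slice selecting the columns of the overlapping region
    Returns one column index (in full-image coordinates) per row.
    """
    cost = saliency[:, overlap_cols].astype(np.float64)
    H, W = cost.shape
    dp = cost.copy()
    for r in range(1, H):
        left = np.r_[np.inf, dp[r - 1, :-1]]
        up = dp[r - 1]
        right = np.r_[dp[r - 1, 1:], np.inf]
        dp[r] += np.minimum(np.minimum(left, up), right)
    # Backtrack from the cheapest endpoint in the last row.
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(dp[-1]))
    for r in range(H - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(0, c - 1), min(W, c + 2)
        seam[r] = lo + int(np.argmin(dp[r, lo:hi]))
    return seam + (overlap_cols.start or 0)
```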

    SELECTIVE OPERATING MODE SWITCHING FOR VISIBLE AND INFRARED IMAGING

    Publication Number: US20250142208A1

    Publication Date: 2025-05-01

    Application Number: US18494138

    Application Date: 2023-10-25

    Abstract: In various examples, an image processing pipeline may switch between different operating or switching modes based on the speed of ego-motion and/or the active gear (e.g., park vs. drive) of a vehicle or other ego-machine in which an RGB/IR camera is being used. For example, a first operating or switching mode that toggles between IR and RGB imaging modes at a fixed frame rate or interval may be used when the vehicle is in motion, in a particular gear (e.g., drive), and/or traveling above a threshold speed. In another example, a second operating or switching mode that toggles between IR and RGB imaging modes based on detected light intensity may be used when the vehicle is stationary, in park (or out of gear), and/or traveling below a threshold speed.
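
    A minimal sketch of the described mode selection, assuming a simple speed/gear check; the threshold values, gear labels, and returned mode parameters are hypothetical.

```python
def select_switching_mode(speed_mps, gear, speed_threshold_mps=1.0):
    """Pick an RGB/IR switching mode from the ego-machine's motion state."""
    moving = gear == "drive" or speed_mps > speed_threshold_mps
    if moving:
        # First mode: toggle IR/RGB at a fixed frame rate or interval.
        return {"mode": "fixed_interval", "toggle_every_n_frames": 2}
    # Second mode (stationary / in park / below threshold speed):
    # toggle based on detected light intensity instead of a fixed schedule.
    return {"mode": "light_adaptive", "lux_threshold": 10.0}
```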

    IMAGE HARMONIZATION FOR IMAGE STITCHING SYSTEMS AND APPLICATIONS

    Publication Number: US20250157170A1

    Publication Date: 2025-05-15

    Application Number: US18507740

    Application Date: 2023-11-13

    Abstract: In various examples, metadata-based image harmonization for image stitching systems and applications is disclosed. Systems and methods are disclosed that preprocess images with respect to rendering parameters, with the effect of blending those parameters at a border between images to facilitate a smooth rendering when those images are stitched together. An image signal processing (ISP) parameter harmonization function may input metadata parameters associated with a set of images to match and blend one or more of the rendering parameters across an overlapping border between images prior to applying those images to a stitching algorithm. A scaling of the metadata parameter may be performed using a parameter gain function. Pixels in both images located along the border are adjusted to the same boundary metadata parameter value and smoothed based on the parameter gain function. A discontinuity in rendering parameters is thereby avoided, substantially reducing corresponding artifacts in the resulting stitched image.
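
    The sketch below shows one way a single rendering parameter (e.g., a digital gain) could be blended across a shared vertical border: pixels near the border of both images are scaled toward a common boundary value with a linear gain ramp, while pixels far from the border keep their own rendering. The ramp shape, the choice of the boundary value as a simple average, and all names are assumptions for illustration.

```python
import numpy as np

def harmonize_border_gain(img_left, img_right, param_left, param_right, ramp_width=64):
    """Blend one ISP rendering parameter (e.g., digital gain) across the shared
    vertical border between a left and a right image before stitching.

    img_left, img_right:     (H, W, 3) float images, stitched left-to-right
    param_left, param_right: the per-image metadata parameter values
    ramp_width:              number of columns over which the blend is smoothed
    """
    boundary = 0.5 * (param_left + param_right)   # common value at the border
    w_l, w_r = img_left.shape[1], img_right.shape[1]

    def ramp(width, own, border_at_right):
        gain = np.full(width, 1.0)
        n = min(ramp_width, width)
        # Per-column scale factor going from 1.0 (keep own rendering) to
        # boundary/own (match the shared boundary value) at the border column.
        blend = np.linspace(own, boundary, n) / own
        if border_at_right:
            gain[width - n:] = blend
        else:
            gain[:n] = blend[::-1]
        return gain

    out_left = img_left * ramp(w_l, param_left, border_at_right=True)[None, :, None]
    out_right = img_right * ramp(w_r, param_right, border_at_right=False)[None, :, None]
    return out_left, out_right
```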
