Abstract:
System and methods used to inspect a moving web (112) include a plurality of image capturing devices (113) that image a portion of the web at an imaging area. The image data captured by each of the image capturing devices at the respective imaging areas is combined to form a virtual camera data array (105) that represents an alignment of the image data associated with each of the imaging areas to the corresponding physical positioning of the imaging areas relative to the web. The image output signals generated by each of the plurality of image capturing devices may be processed by a single image processor, or a number of image processors (114) that is less than the number of image capturing devices. The processor or processors are arranged to generate the image data forming the virtual camera array.
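The core idea of the virtual camera data array, i.e. ordering per-camera image data to match the physical positioning of the imaging areas across the web, can be sketched as follows. The function name, the `(offset, image)` tuple interface, and the assumption of equal-height strips concatenated along the cross-web axis are all illustrative, not taken from the abstract.

```python
import numpy as np

def build_virtual_camera_array(strips):
    """Combine per-camera image strips into one virtual camera data array.

    `strips` is a list of (cross_web_offset, image) tuples (hypothetical
    interface). Strips are ordered by their physical position across the
    web and concatenated along the cross-web axis, so the resulting array
    mirrors the physical layout of the imaging areas.
    """
    ordered = sorted(strips, key=lambda s: s[0])
    return np.concatenate([img for _, img in ordered], axis=1)
```

In this sketch a single process produces the whole array regardless of how many cameras contributed strips, which corresponds to the abstract's point that fewer image processors than image capturing devices may be used.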
Abstract:
The invention relates to a method for representing an environmental region (13) of a motor vehicle (1) in an image (16), in which real images of the environmental region (13) are captured by a plurality of real cameras (5, 6, 7, 8) of the motor vehicle (1) and the image (16) is generated from these real images, which at least partially represents the environmental region (13), wherein the image (16) is represented from a perspective of a virtual camera (14) arranged in the environmental region (13), and the image (16) is generated as a bowl shape, wherein at least one virtual elongated distance marker (18, 19, 20) is represented in the image (16), by which a distance to the motor vehicle (1) is symbolized in the virtual bowl shape. The invention also relates to a computer program product and a display system (2) for a motor vehicle (1).
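A bowl-shaped projection surface with elongated distance markers might be modeled as below: flat near the vehicle and rising with distance, with each marker sampled as a ring of 3D points lifted onto the surface at a fixed ground distance. The height function, its parameters, and the circular marker shape are assumptions for illustration; the abstract specifies only a bowl shape and elongated markers symbolizing distance.

```python
import math

def bowl_height(d, flat_radius=4.0, curvature=0.15):
    """Height of a bowl-shaped projection surface at ground distance d:
    flat near the vehicle, rising quadratically beyond flat_radius.
    (Illustrative parameters, not from the abstract.)"""
    return 0.0 if d <= flat_radius else curvature * (d - flat_radius) ** 2

def marker_points(distance, n=64):
    """Sample a circular distance marker at a given ground distance,
    lifted onto the bowl surface so it renders correctly from the
    virtual camera's perspective."""
    z = bowl_height(distance)
    return [(distance * math.cos(2 * math.pi * k / n),
             distance * math.sin(2 * math.pi * k / n), z)
            for k in range(n)]
```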
Abstract:
The invention relates to a surround-view system (1) for a vehicle (2). This surround-view system (1) comprises a first and a second camera (21, 22, 23, 24) and a control unit (10). The first and the second camera (21, 22, 23, 24) are configured to generate image data with a plurality of pixels. The image data of the first camera (21, 22, 23, 24) have a first pixel density, and the image data of the second camera (21, 22, 23, 24) have a second pixel density. Furthermore, the image data of the first and the second camera (21, 22, 23, 24) have at least one at least partially overlapping image region (31, 32, 33, 34). In addition, the control unit (10) is configured, for stitching the first and the second image data, to determine, select and/or weight the pixels of the image data in the overlapping image region (31, 32, 33, 34) on the basis of the pixel density, in order to generate a surround-view image with the highest possible resolution.
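One plausible reading of density-based weighting in the overlapping image region is a normalized per-camera weight, so that the higher-resolution camera dominates the blend. The function and the specific weighting formula are hypothetical; the abstract only states that pixels are determined, selected and/or weighted based on pixel density.

```python
import numpy as np

def blend_overlap(pix1, pix2, density1, density2):
    """Blend overlap pixels of two cameras, weighting each camera by its
    pixel density (hypothetical scheme). With equal densities this is a
    plain average; otherwise the denser camera contributes more."""
    w1 = density1 / (density1 + density2)
    return w1 * pix1 + (1.0 - w1) * pix2
```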
Abstract:
Systems and methods for property feature detection and extraction using digital images. The image sources could include aerial imagery, satellite imagery, ground-based imagery, imagery taken from unmanned aerial vehicles (UAVs), mobile device imagery, etc. The detected geometric property features could include tree canopy, pools and other bodies of water, concrete flatwork, landscaping classifications (gravel, grass, concrete, asphalt, etc.), trampolines, property structural features (structures, buildings, pergolas, gazebos, terraces, retaining walls, and fences), and sports courts. The system can automatically extract these features from images and can then project them into world coordinates relative to a known surface in world coordinates (e.g., from a digital terrain model).
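Projecting a detected image feature into world coordinates relative to a known surface can be sketched with a pinhole back-projection onto a horizontal plane. This is a simplification: the abstract refers to a known surface such as a digital terrain model, whereas the sketch below assumes a flat ground plane at a fixed height, and the function name and interface are illustrative.

```python
import numpy as np

def pixel_to_world(uv, K, R, t, ground_z=0.0):
    """Back-project a pixel onto the horizontal plane z = ground_z.

    K is the 3x3 camera intrinsics matrix; R, t describe the
    world-to-camera transform X_cam = R @ X_world + t. The pixel ray is
    intersected with the plane to obtain world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    ray_world = R.T @ ray_cam          # rotate the ray into world frame
    cam_center = -R.T @ t              # camera center in world frame
    s = (ground_z - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world
```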
Abstract:
A method and system for image processing are provided in the present disclosure. The method may include obtaining a plurality of source images generated by a plurality of imaging sensors. The method may also include processing each of the plurality of source images by: retrieving a plurality of source image blocks from the source image according to block position information associated with the corresponding imaging sensor; generating, for each of the plurality of source image blocks, a target image block based on the source image block; and forming a target image based on the generated target image blocks. The method may further include generating a combined image based on the target images.
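The per-image pipeline, retrieving source blocks according to block position information, generating a target block from each, and forming a target image, can be sketched as below. The regular grid layout, the `(row, col)` position encoding, and the per-block `transform` callable are assumptions for illustration; the disclosure leaves the block layout sensor-specific.

```python
import numpy as np

def process_source(image, block_positions, block_size, transform):
    """Retrieve source image blocks at the given grid positions, apply a
    per-block transform, and tile the results into a target image."""
    rows = max(r for r, c in block_positions) + 1
    cols = max(c for r, c in block_positions) + 1
    h, w = block_size
    target = np.zeros((rows * h, cols * w), dtype=image.dtype)
    for r, c in block_positions:
        block = image[r * h:(r + 1) * h, c * w:(c + 1) * w]
        target[r * h:(r + 1) * h, c * w:(c + 1) * w] = transform(block)
    return target
```

Running this once per imaging sensor yields the per-sensor target images, which the method then merges into the combined image.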
Abstract:
A system and a method for processing an overlapping region in a surround view. The system includes a plurality of cameras for capturing images; and a processor configured to determine whether a point in a bird-view image obtained from the captured images is located in an overlapping region, and, upon the condition that the point is located in the overlapping region, retrieve a blending mask corresponding to the coordinate of the point and determine a new pixel value of the point according to the blending mask and one or more original pixel values of the point. The present disclosure provides a solution which quickly blends the overlapping regions in a surround view with low computational complexity.
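The low-complexity blending step, looking up a precomputed per-pixel weight only where two views overlap, might look like the following. The array interface and the convention that the weight applies to the first view are assumptions; the abstract specifies only a mask lookup and a blend of original pixel values.

```python
import numpy as np

def blend_overlap_region(view_a, view_b, overlap_mask, blend_mask):
    """Per-pixel blend of two bird-view images. Where overlap_mask is
    True, the precomputed blend_mask weight (weight of view_a, in [0, 1])
    is applied; elsewhere view_a is copied unchanged. Because the weights
    are precomputed, the per-frame cost is a masked multiply-add."""
    out = view_a.astype(float).copy()
    w = blend_mask[overlap_mask]
    out[overlap_mask] = (w * view_a[overlap_mask]
                         + (1.0 - w) * view_b[overlap_mask])
    return out
```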
Abstract:
There is provided a method of processing digital images, comprising: selecting a reference digital image according to a uniform distribution requirement of pixel intensity values, performing for each certain overlapping image that overlaps with at least one other image at a respective overlapping region: computing a value for each respective gamma parameter for each channel of the certain overlapping image to obtain a correlation between pixel intensity values, corrected with the respective gamma parameters, of each channel of each overlapping region of the certain overlapping image, and pixel intensity values, corrected with respective gamma parameters, computed for each of the at least one other image, for each channel of each respective overlapping region, and creating corrected images by applying the computed value of each respective gamma parameter to the overlapping and non-overlapping regions of each certain overlapping image of the plurality of overlapping images.
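A minimal sketch of the gamma-matching idea: estimate a per-channel gamma so that this image's overlap region matches the reference overlap after the correction v → v^gamma, then apply that gamma to the entire image. The closed-form mean-matching estimate below is an assumption for illustration; the method as described optimizes a correlation between corrected pixel intensities, not a simple mean.

```python
import numpy as np

def matching_gamma(overlap_pixels, ref_overlap_pixels, eps=1e-6):
    """Estimate gamma so that the mean intensity of this image's overlap
    region matches the reference overlap after v -> v**gamma, with
    intensities normalized to (0, 1). Closed-form sketch only."""
    m = np.clip(np.mean(overlap_pixels), eps, 1.0 - eps)
    m_ref = np.clip(np.mean(ref_overlap_pixels), eps, 1.0 - eps)
    return np.log(m_ref) / np.log(m)

def apply_gamma(image, gamma):
    """Apply the computed gamma to the whole image, overlapping and
    non-overlapping regions alike, as the abstract describes."""
    return np.clip(image, 0.0, 1.0) ** gamma
```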
Abstract:
A system comprising: at least a first camera configured to have a first unobstructed field of view volume and to capture a first image defined by a first in-use field of view volume; at least a second camera configured to capture a second image defined by a second in-use field of view volume, and positioned in front of an obstructing object, within the first unobstructed field of view volume of the first camera but not within the first in-use field of view volume of the first camera; and a controller configured to define a new image by using at least a second image portion of the second image captured by the second camera instead of at least a portion of the first image captured by the first camera.
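The controller's substitution step, defining a new image by replacing the obstructed portion of the first camera's image with the corresponding portion from the second camera, can be sketched as below. The rectangular `(top, left, height, width)` region interface and the assumption that both images share the same pixel grid are hypothetical conveniences, not stated in the abstract.

```python
def compose_new_image(first_image, second_image, obstructed_region):
    """Define a new image from the first camera's image, substituting the
    pixels inside obstructed_region with the second camera's pixels.
    Images are row-major lists of lists; inputs are left unmodified."""
    out = [row[:] for row in first_image]
    top, left, h, w = obstructed_region
    for r in range(top, top + h):
        for c in range(left, left + w):
            out[r][c] = second_image[r][c]
    return out
```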