Abstract:
A system for high dynamic range (HDR) imaging, and methods for making and using the same, are disclosed. An HDR module in a camera initializes a set of lookup tables (LUTs) in YUV color space based on the exposure configurations of a set of images captured for HDR imaging. The HDR module calculates weights for the luminance (Y) components of the set of images in YUV color space. Based on the calculated weights, the HDR module blends the Y components of the set of images to generate blended Y components. The HDR module then combines the blended Y components with the corresponding UV components to generate a single image in YUV space. Thereby, the HDR module advantageously combines a set of images into a blended HDR image by blending only the Y components of the set of images.
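The Y-only blending described above can be sketched as follows. This is a minimal, hypothetical illustration, not the patented LUT construction: the triangular weight function and the mid-gray anchor of 128 are illustrative assumptions, and the UV planes are simply taken from a reference exposure.

```python
# Hypothetical sketch of exposure fusion in YUV space, blending only the
# luminance (Y) channel. Weighting and UV handling are illustrative choices.

def luma_weight(y):
    """Weight a Y sample by its distance from mid-gray (well-exposed)."""
    return max(0.0, 1.0 - abs(y - 128) / 128.0)

def blend_luma(y_stack):
    """Blend co-located Y samples from multiple exposures of one pixel."""
    weights = [luma_weight(y) for y in y_stack]
    total = sum(weights)
    if total == 0.0:
        return sum(y_stack) / len(y_stack)  # fall back to a plain average
    return sum(w * y for w, y in zip(weights, y_stack)) / total

def fuse_yuv(exposures):
    """exposures: list of (Y, U, V) planes as flat lists. Only Y is
    blended; UV is kept from the middle (reference) exposure."""
    ref = exposures[len(exposures) // 2]
    blended_y = [blend_luma(ys) for ys in zip(*(e[0] for e in exposures))]
    return blended_y, ref[1], ref[2]
```

Blending only the single Y plane, rather than all three channels, is what makes this scheme cheap: chrominance is passed through untouched.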
Abstract:
System and method can support an image processing device. The image processing device operates to obtain a first set of characterization values, which represents a first group of pixels that are associated with a denoising pixel in an image. Also, the image processing device can obtain a second set of characterization values, which represents a second group of pixels that are associated with a denoising reference pixel. Furthermore, the image processing device operates to use the first set of characterization values and the second set of characterization values to determine a similarity between the denoising pixel and the denoising reference pixel. Then, the image processing device can calculate a denoised value for the denoising pixel based on the determined similarity between the denoising pixel and the denoising reference pixel.
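A minimal sketch of one possible reading of this abstract: characterize each group of pixels by a small vector of statistics, score the similarity of the two vectors, and blend the reference pixel into the denoising pixel in proportion to that score. The (mean, spread) characterization, the Gaussian kernel, and the parameter h are illustrative assumptions, not the described method.

```python
import math

def characterize(patch):
    """Summarize a group of pixels by (mean, spread) characterization values.
    These two statistics are an illustrative choice of characterization."""
    mean = sum(patch) / len(patch)
    spread = sum(abs(p - mean) for p in patch) / len(patch)
    return (mean, spread)

def similarity(c1, c2, h=10.0):
    """Map the distance between two characterization vectors into (0, 1]."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return math.exp(-d2 / (h * h))

def denoise(pixel, ref_pixel, c_pixel, c_ref):
    """Blend the reference pixel in proportion to its similarity score."""
    s = similarity(c_pixel, c_ref)
    return (pixel + s * ref_pixel) / (1.0 + s)
```

Comparing compact characterization vectors rather than full pixel groups is the usual motivation for this structure: the per-pair similarity test becomes much cheaper.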
Abstract:
A system for automatic focusing with a lens, and methods for making and using the same, are provided. When performing a focusing operation, a controller calculates a focus measure value for each lens position of a plurality of lens positions. The focus measure values are calculated based on the window evaluation values and respective weights for image focusing windows within a set of image focusing windows. The controller then compares the calculated focus measure values of the plurality of lens positions in order to select an optimal lens position. The set of image focusing windows can be selected based on one or more sets of image focusing window selection rules derived from statistical data. In addition, the respective weights for the image focusing windows can also be calculated based on the statistical data.
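The comparison step above can be sketched as a weighted sum per lens position followed by an argmax. This is a hedged illustration: the window evaluation values and weights are taken as given inputs, and the statistical derivation of the windows and weights is outside the sketch.

```python
def focus_measure(window_values, weights):
    """Weighted sum of window evaluation values at one lens position."""
    return sum(w * v for w, v in zip(weights, window_values))

def best_lens_position(evals_by_position, weights):
    """evals_by_position: {lens_position: [window evaluation values]}.
    Return the lens position with the highest focus measure."""
    return max(evals_by_position,
               key=lambda p: focus_measure(evals_by_position[p], weights))
```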
Abstract:
An image processing method, a drone, and a drone-camera system are provided. The method includes acquiring, according to a current environmental parameter of the drone, a target sky image that matches the current environmental parameter; and determining a direction parameter of the camera device when capturing a to-be-stitched image. The to-be-stitched image is an image captured under the current environmental parameter. The method further includes stitching the target sky image with the to-be-stitched image according to the direction parameter to obtain a panoramic image.
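How the direction parameter drives the stitch can be illustrated with a deliberately simplified sketch: rotate the columns of the matched sky image in proportion to the camera's yaw, then stack it above the captured image. Real stitching would warp and blend the images; the yaw-to-column mapping here is an assumption for illustration only.

```python
def align_sky(sky_rows, yaw_deg):
    """Rotate each sky row horizontally in proportion to the camera yaw."""
    width = len(sky_rows[0])
    shift = int(round(yaw_deg / 360.0 * width)) % width
    return [row[shift:] + row[:shift] for row in sky_rows]

def stitch(sky_rows, image_rows, yaw_deg):
    """Place the aligned sky above the captured image (equal widths)."""
    return align_sky(sky_rows, yaw_deg) + image_rows
```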
Abstract:
An imaging system includes a first optical module comprising a first image sensor, a second optical module comprising a second image sensor, and an image processor. The first optical module has a first focal length range. The second optical module has a second focal length range. The first focal length range and the second focal length range are different. The image processor is configured to receive image data for a first image from the first optical module and/or image data for a second image from the second optical module and generate data to show the first image and/or the second image within a display.
Abstract:
Imaging systems having counterweights are provided. The counterweights may be operably coupled to one or more lens elements and may be configured to maintain a stability of the imaging system.
Abstract:
An unmanned aerial vehicle (UAV) with audio filtering components includes a background noise-producing component, a background microphone, and a noise emitter. The background noise-producing component is configured to produce a background noise. The background microphone is positioned within a proximity sufficiently close to collect interfering noise from the background noise-producing component. The background microphone is configured to collect audio data including the background noise. The noise emitter is disposed within a proximity sufficiently close to the background noise-producing component to reduce the interfering noise. The noise emitter is configured to emit an audio signal having a reverse phase of the audio data collected by the background microphone.
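The reverse-phase emission can be shown with a toy example: the emitter outputs the sample-wise negation of what the background microphone collected, so the two waveforms cancel where they overlap. Real active noise control must also compensate for propagation latency and gain, which this sketch ignores.

```python
def reverse_phase(samples):
    """Invert each audio sample (a 180-degree phase shift)."""
    return [-s for s in samples]

def residual(noise, anti_noise):
    """Superpose the noise and the emitted anti-noise signal."""
    return [n + a for n, a in zip(noise, anti_noise)]
```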
Abstract:
Systems and methods can support a data processing apparatus. The data processing apparatus can include a data processor that is associated with a data capturing device on a stationary object and/or a movable object. The data processor can receive data in a data flow from one or more data sources, wherein the data flow is configured based on a time sequence. Then, the data processor can receive a control signal, which is associated with a first timestamp, wherein the first timestamp indicates a first time. Furthermore, the data processor can determine a first data segment by applying the first timestamp on the data flow, wherein the first data segment is associated with a time period in the time sequence that includes the first time.
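Applying a timestamp to a time-sequenced data flow can be sketched as a segment lookup. This is a hypothetical illustration: the segment layout as ordered (start, end, payload) records covering half-open [start, end) periods is an assumption, not the described configuration.

```python
def find_segment(segments, timestamp):
    """segments: list of (start, end, payload) tuples, ordered by start
    time. Return the payload of the first segment whose [start, end)
    time period includes the timestamp, or None if none does."""
    for start, end, payload in segments:
        if start <= timestamp < end:
            return payload
    return None
```

With this structure, the control signal's first timestamp directly selects the data segment whose time period contains the indicated time.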
Abstract:
Methods and systems for evaluating a search area for encoding video are provided. The method comprises receiving video captured by an image capture device, the video comprising video frame components. Additionally, the method comprises receiving optical flow field data associated with a video frame component, wherein at least a portion of the optical flow field data is captured by sensors. The method also comprises determining a search area based on the optical flow field data.
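One plausible way to derive a search area from optical flow data, sketched under stated assumptions: center the motion-estimation search window on a block's average flow vector and size it by the flow's spread. The centering rule and scaling constant are illustrative choices, not the described determination.

```python
def search_area(flow_vectors, base=4):
    """flow_vectors: list of (dx, dy) flow samples for one block.
    Return (center_x, center_y, half_width) of the search window,
    where half_width grows with the spread of the flow field."""
    n = len(flow_vectors)
    cx = sum(dx for dx, _ in flow_vectors) / n
    cy = sum(dy for _, dy in flow_vectors) / n
    spread = max(abs(dx - cx) + abs(dy - cy) for dx, dy in flow_vectors)
    return (cx, cy, base + int(round(spread)))
```

The intended benefit is that coherent flow (e.g. pure camera pan) yields a small, well-placed search window, so the encoder searches fewer candidate positions.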