Abstract:
A histogram generator generates a histogram that indicates, for frame image data obtained by filming a real space, a count of pixels of a designated color in association with coordinates along a basic axis of a screen. A histogram smoother smooths the generated histogram. A three-dimensional coordinates generator selects a value associated with particular coordinates from among the counts indicated in the smoothed histogram, and determines a depth value for a target object using the selected value.
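As a rough illustration of this pipeline, the sketch below counts pixels of a designated color per coordinate along one screen axis, smooths the resulting histogram, and reads out the count at particular coordinates to derive a depth value. The color tolerance, the moving-average smoothing, and the count-to-depth mapping are illustrative assumptions, not details taken from the abstract.

```python
# Sketch of the abstract's flow, assuming an RGB frame stored as a NumPy array
# with shape (height, width, 3). Tolerance, window, and depth mapping are
# illustrative assumptions.
import numpy as np

def column_histogram(frame, target_color, tol=30):
    """Count pixels close to target_color for each coordinate on the basic (x) axis."""
    diff = np.abs(frame.astype(int) - np.asarray(target_color, dtype=int))
    mask = np.all(diff <= tol, axis=-1)
    return mask.sum(axis=0)                      # one count per column

def smooth_histogram(hist, window=5):
    """Moving-average smoothing of the per-coordinate counts."""
    kernel = np.ones(window) / window
    return np.convolve(hist, kernel, mode="same")

def depth_value_at(smoothed, x, scale=1000.0):
    """Toy mapping: a larger count at x (bigger apparent size) is treated as a nearer object."""
    return scale / max(float(smoothed[int(x)]), 1.0)
```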
Abstract:
This image capture device (100) includes: an image sensor (102); an integrated value calculating section (210) which calculates, on a frame-by-frame basis, a line integral value of luminance values for each of a plurality of horizontal lines included in a frame; a memory (220); an average calculating section (230) which calculates a line average value by averaging the line integral values on the same horizontal line over the newest frame and a number of other frames obtained earlier than the newest one and stored in the memory (220); a waveform data generating section (240) which generates waveform data by normalizing the line integral values in the memory (220) based on the line integral values and the line average value; and a flicker extracting section (250) which extracts information about the phase and frequency of flicker based on the waveform data.
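The following sketch mirrors that chain, assuming grayscale frames stacked as a NumPy array of shape (num_frames, height, width); the FFT-based frequency/phase step is a stand-in assumption for the unspecified internals of the flicker extracting section (250).

```python
# Line integrals -> per-line averages over stored frames -> normalized waveform
# -> flicker frequency and phase from the dominant spectral peak.
import numpy as np

def flicker_waveform(frames):
    # Line integral: sum of luminance values along each horizontal line, per frame.
    integrals = frames.sum(axis=2)                       # (num_frames, height)
    # Line average value: average of the same line over the stored frames.
    line_avg = integrals.mean(axis=0)                    # (height,)
    # Waveform data: newest frame's line integrals normalized by the line averages.
    return integrals[-1] / np.maximum(line_avg, 1e-9) - 1.0

def extract_flicker(waveform, line_rate_hz):
    """Estimate flicker frequency (Hz) and phase (rad) from the dominant spectral peak."""
    spectrum = np.fft.rfft(waveform - waveform.mean())
    k = int(np.argmax(np.abs(spectrum[1:]))) + 1         # skip the DC bin
    freq = k * line_rate_hz / len(waveform)
    phase = float(np.angle(spectrum[k]))
    return freq, phase
```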
Abstract:
An estimation apparatus that uses infrared light reflected from an object, the estimation apparatus including: a single input unit including a first pixel having first spectral sensitivity characteristics in the wavelength range of the infrared light and a second pixel having second spectral sensitivity characteristics, different from the first spectral sensitivity characteristics, in the wavelength range of the infrared light; and an estimator that estimates at least one of a color and a material of the object based on a first output value, which is an output value of the reflected light from the first pixel, and a second output value, which is an output value of the reflected light from the second pixel.
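The abstract does not specify the estimation rule, so the sketch below assumes a simple nearest-match lookup of the ratio between the two pixel outputs against pre-measured reference materials; the reference values are made up for illustration.

```python
# Illustrative sketch only: match the first/second output ratio to the closest
# hypothetical reference material.
def estimate_material(first_output, second_output, references):
    """references: dict mapping material name -> expected first/second output ratio."""
    ratio = first_output / max(second_output, 1e-9)
    return min(references, key=lambda name: abs(references[name] - ratio))

references = {"skin": 1.8, "cotton": 1.2, "plastic": 0.9}   # hypothetical values
print(estimate_material(0.54, 0.30, references))            # -> "skin"
```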
Abstract:
An image noise removing apparatus, which removes noise included in a second image after noise has been removed from a first image, includes: a spatial noise removing unit that removes the noise included in the second image using pixel values included in the second image, thereby generating a spatial noise-free image; a reliability calculating unit that calculates a reliability indicating how dynamic the second image is, based on the spatial noise-free image, the second image, and a first noise-free image generated from the first image with the noise therein removed; and a temporal blending unit that performs, based on the reliability, a weighted summation of the second image and the first noise-free image, thereby removing the noise included in the second image.
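A hedged sketch of this spatio-temporal scheme is shown below, assuming single-channel float frames; a median filter stands in for the spatial noise removing unit, and the reliability formula is an illustrative choice, not the one in the abstract.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_noise(first_noise_free, second_image):
    # Spatial noise-free image from the second image alone.
    spatial_noise_free = median_filter(second_image, size=3)
    # Reliability: close to 1 where the scene looks static (the spatially
    # denoised frame matches the first noise-free image), lower where dynamic.
    motion = np.abs(spatial_noise_free - first_noise_free)
    noise_level = np.abs(spatial_noise_free - second_image).mean() + 1e-6
    reliability = np.clip(1.0 - motion / (4.0 * noise_level), 0.0, 1.0)
    # Weighted summation of the second image and the first noise-free image.
    return reliability * first_noise_free + (1.0 - reliability) * second_image
```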
Abstract:
An image processing apparatus separates a captured image into a region of object (B) 1001B and a region of object (A) 1001A that is shallower in depth than the region of object (B) 1001B, as indicated by depth information. The apparatus duplicates pixels that constitute the region of object (B) 1001B and are positioned in the neighborhood of the boundary between the region of object (B) 1001B and the region of object (A) 1001A onto the neighborhood of the boundary outside the region of object (B) 1001B, thereby generating an extended region (B) 5001B. The apparatus then performs blur processing on the extended region (B) 5001B and the region of object (A) 1001A based on the depth of the region of object (A) 1001A indicated by the depth information, and, after the processing, composites a value of each pixel constituting the extended region (B) 5001B with a value of the corresponding pixel constituting the region of object (A) 1001A.
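A minimal sketch of the extend-then-blur-then-composite idea follows, assuming a single-channel float image and a depth map split into regions (A) and (B) by a threshold; the extension method, the depth-to-blur mapping, and the compositing weights are illustrative assumptions rather than the patent's procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation, distance_transform_edt

def blur_and_composite(image, depth, threshold, margin=8):
    region_a = depth < threshold                          # object (A), shallower
    region_b = ~region_a                                  # object (B), deeper
    # Extended region (B): copy the nearest (B) pixel value onto positions
    # outside (B), so background colors exist behind the (A) boundary.
    _, idx = distance_transform_edt(region_a, return_indices=True)
    extended_b = np.where(region_b, image, image[idx[0], idx[1]])
    # Blur amount chosen from the depth of region (A) (placeholder mapping).
    sigma = float(np.interp(depth[region_a].mean(), [0.0, threshold], [10.0, 1.0]))
    # Blur region (A) together with its coverage mask so its edge spreads, blur
    # the extended (B) by the same amount, then composite the two per pixel.
    alpha = gaussian_filter(region_a.astype(float), sigma)
    blurred_a = gaussian_filter(np.where(region_a, image, 0.0), sigma)
    blurred_ext_b = gaussian_filter(extended_b, sigma)
    composite = blurred_a + (1.0 - alpha) * blurred_ext_b
    # Keep the original sharp (B) pixels away from the boundary neighborhood.
    near_boundary = binary_dilation(region_a, iterations=margin)
    return np.where(near_boundary, composite, image)
```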