Abstract:
An endoscope apparatus includes a processor. The processor performs controlling a focus position of an objective optical system, acquiring images sequentially captured by an image sensor, and combining the images in N frames thus captured into a depth of field extended image in one frame. The processor controls the focus position such that focus positions at timings when the respective images in N frames are captured differ from each other. The processor combines the images in N frames that have been controlled to receive a constant quantity of light emission of illumination light or the images in N frames that have undergone a correction process to make image brightness constant, into the depth of field extended image.
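The N-frame combining described in this abstract can be sketched as a per-pixel focus-stacking rule. This is a minimal illustration, not the patented method: the sharpness measure (a discrete Laplacian) and the assumption that the frames already have constant brightness are choices made here for the example.

```python
def local_contrast(img, x, y):
    """Approximate sharpness at (x, y) as the absolute Laplacian response."""
    center = img[y][x]
    neighbors = img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
    return abs(4 * center - neighbors)

def extend_depth_of_field(frames):
    """Combine N equally bright frames (2-D lists of gray values), each
    captured at a different focus position, into one extended depth of
    field image by taking each interior pixel from the frame in which it
    is locally sharpest. Borders are copied from the first frame."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [row[:] for row in frames[0]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(frames, key=lambda f: local_contrast(f, x, y))
            out[y][x] = best[y][x]
    return out
```

A production pipeline would use a more robust focus measure and blend across frames; the hard per-pixel selection above is only the simplest instance of the idea.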
Abstract:
An imaging device includes a processor including hardware. The processor is configured to implement controlling a focus position of an objective optical system configured to form an image of a subject on an image sensor, acquiring L×N images per second captured by the image sensor, combining M acquired images into one extended depth of field image to extend a depth of field, and outputting L extended depth of field images per second. The processor sets one of the M images as a reference image, performs positioning of the other image or images of the M images with respect to the reference image, and combines the thus positioned M images into the one extended depth of field image.
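The positioning step (aligning the other images to the reference image before combining) can be illustrated in one dimension. The brute-force shift search and the averaging combiner below are stand-ins chosen for brevity, not the patented registration method:

```python
def best_shift(reference, moving, max_shift=2):
    """Estimate the integer shift aligning `moving` to `reference` by
    minimizing the mean absolute difference over the overlapping samples."""
    def sad(s):
        pairs = [(reference[i], moving[i - s])
                 for i in range(len(reference)) if 0 <= i - s < len(moving)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=sad)

def combine_aligned(images, ref_index=0):
    """Position M 1-D images against the chosen reference image, then
    average them (a stand-in for the depth-of-field combining step)."""
    ref = images[ref_index]
    aligned = []
    for img in images:
        s = best_shift(ref, img)
        aligned.append([img[(i - s) % len(img)] for i in range(len(ref))])
    return [sum(col) / len(col) for col in zip(*aligned)]
```

For 2-D images the same idea extends to a 2-D translation (or full motion model) search; real implementations typically use gradient-based or phase-correlation registration rather than exhaustive search.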
Abstract:
A focus control device includes a processor including hardware. The processor sets a plurality of regions, each including a plurality of pixels, to an image acquired by an imaging section, obtains a direction determination result for each region in some or all of the plurality of regions set, by determining whether a target focusing position that is a target of an in-focus object plane position is on a NEAR side or a FAR side relative to a reference position, determines an in-focus direction by performing weighted comparison between NEAR area information and FAR area information, based on the direction determination result and weight information, and controls the in-focus object plane position based on the in-focus direction.
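The weighted comparison between NEAR area information and FAR area information can be sketched as follows. The per-region direction rule and the weight values are illustrative assumptions, not taken from the abstract:

```python
def region_direction(eval_near, eval_far):
    """Direction determination for one region: whichever wobble position
    yields the higher focus evaluation value indicates the side on which
    the target focusing position lies."""
    return "NEAR" if eval_near > eval_far else "FAR"

def decide_focus_direction(direction_results, weights):
    """Weighted comparison of NEAR vs. FAR area information.
    direction_results: per-region 'NEAR'/'FAR' determinations.
    weights: per-region weight information (e.g. central regions could be
    weighted more heavily -- an assumption made for this sketch)."""
    near = sum(w for d, w in zip(direction_results, weights) if d == "NEAR")
    far = sum(w for d, w in zip(direction_results, weights) if d == "FAR")
    return "NEAR" if near > far else "FAR"
```

The in-focus object plane position would then be moved one step in the returned direction each control cycle.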
Abstract:
An imaging device includes an image sensor that includes a plurality of pixels that generate an image signal from an object image that is formed by an imaging optical system, a scaling section that performs a scaling process on an original image that includes image signals that correspond to the plurality of pixels, and an output section that outputs an image after the scaling process as a scaled image, the scaling section performing the scaling process using a different scaling factor corresponding to the position of a pixel of interest within the scaled image, and the image sensor including pixels in a number larger than the number of pixels of the scaled image as the plurality of pixels.
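The position-dependent scaling factor can be shown with a 1-D sketch: each output pixel is mapped back into a larger original image through a warping function, so the effective scaling factor varies with the position of the pixel of interest. The cubic warp used here is only an example of such a function:

```python
def scale_with_local_factor(original, out_len, factor_fn):
    """Build a scaled 1-D image whose scaling factor varies with the output
    pixel position. factor_fn maps a normalized output position in [-1, 1]
    to a normalized source coordinate in [-1, 1]; its local slope is the
    local scaling factor. `original` must hold more pixels than the output,
    matching the abstract's requirement on the image sensor."""
    n = len(original)
    out = []
    for i in range(out_len):
        u = 2 * i / (out_len - 1) - 1            # output position, -1..1
        src = (factor_fn(u) + 1) / 2 * (n - 1)   # map to source index
        j = min(n - 1, max(0, round(src)))       # nearest-neighbor sample
        out.append(original[j])
    return out
```

With `factor_fn = lambda u: u ** 3`, the center of the output samples the original densely (center magnified) while the periphery is compressed; nearest-neighbor sampling stands in for whatever interpolation the device actually uses.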
Abstract:
An imaging device includes an objective lens that includes a movable lens that is configured so that an in-focus object plane position is changed along with a change in angle of view, an image sensor that acquires an image, a focus control section that implements an autofocus (AF) operation by controlling a position of the movable lens, a reference lens position setting section that sets a reference lens position based on a moving range of the movable lens during a wobbling operation, the wobbling operation being performed every given cycle during the AF operation, and a magnification correction section that performs a magnification correction process that reduces a change in the angle of view of the image due to a change in the position of the movable lens during the wobbling operation relative to the reference lens position.
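The reference lens position and the magnification correction can be sketched as follows. The linear angle-of-view model and its gain `k` are assumptions made for this illustration; the abstract only states that the correction reduces the angle-of-view change relative to the reference position:

```python
def reference_from_wobble(near_pos, far_pos):
    """One plausible choice: set the reference lens position at the center
    of the movable lens's wobbling range."""
    return (near_pos + far_pos) / 2.0

def magnification_correction_factor(lens_pos, reference_pos, k=0.01):
    """Assumed linear model: the relative angle-of-view change is
    proportional (gain k) to the lens displacement from the reference
    position. Returns the digital zoom factor that cancels that change."""
    view_change = 1.0 + k * (lens_pos - reference_pos)
    return 1.0 / view_change
```

Applying the returned factor as a digital zoom on each wobble frame keeps the displayed angle of view steady while the lens oscillates.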
Abstract:
An imaging device includes a processor. The processor, in a manual focus mode, sets a focus evaluation area to have a larger size than the focus evaluation area set in an auto focus mode, generates assist information assisting adjustment of an in-focus object plane position based on a focus evaluation value obtained from an image of the focus evaluation area, and outputs the assist information to a display section.
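The mode-dependent evaluation area and the assist information can be sketched as below. The area fractions and the past-peak indicator are illustrative choices; the abstract specifies only that the MF-mode area is larger and that assist information is derived from the focus evaluation value:

```python
def focus_evaluation_area(width, height, mode):
    """Centered focus evaluation area: larger in manual focus (MF) mode
    than in auto focus (AF) mode. The fractions are illustrative."""
    frac = 0.8 if mode == "MF" else 0.3
    w, h = int(width * frac), int(height * frac)
    x0, y0 = (width - w) // 2, (height - h) // 2
    return (x0, y0, w, h)

def assist_info(eval_history):
    """Assist information for the display section: the current focus
    evaluation value, plus a flag indicating the operator has driven the
    lens past the evaluation peak (value now below an earlier maximum)."""
    current = eval_history[-1]
    past_peak = len(eval_history) >= 2 and current < max(eval_history[:-1])
    return {"value": current, "past_peak": past_peak}
```

Showing the evaluation value as a bar together with the past-peak flag is one way such assist information could be rendered for the operator.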
Abstract:
An image capturing device includes a processor. The processor is configured to implement: a switching control process for switching between a manual focus (MF) mode and an auto focus (AF) mode of performing auto focus control; a process for controlling driving of a focus lens; and a scene status determination process for performing a detection process for detecting a scene change during the MF mode and an estimation process for estimating distance change information indicating a distance change between an image capturing section and an object. The processor is configured to implement: controlling the driving of the focus lens based on lens drive information; switching control for switching from the MF mode to the AF mode when the scene change is detected; and controlling the driving of the focus lens to bring the object into focus based on the distance change information.
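The switching control and distance-change-driven lens control can be sketched as a small state machine. The scene-change threshold and the direct use of the distance change as a lens drive amount are simplifying assumptions for this example:

```python
class FocusController:
    """Sketch of the abstract's switching control: while in MF mode, a
    detected scene change switches the controller to AF mode; in AF mode
    the focus lens is driven from estimated distance change information."""

    def __init__(self, scene_threshold=0.5):
        self.mode = "MF"
        self.scene_threshold = scene_threshold  # illustrative threshold
        self.lens_pos = 0.0

    def update(self, scene_change_amount, distance_change):
        # Switching control: MF -> AF when a scene change is detected.
        if self.mode == "MF" and scene_change_amount > self.scene_threshold:
            self.mode = "AF"
        # In AF mode, drive the lens based on the distance change
        # information to keep the object in focus.
        if self.mode == "AF":
            self.lens_pos += distance_change
        return self.mode
```

A real controller would map distance change to lens drive through the optical system's calibration rather than adding it directly.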
Abstract:
An endoscope apparatus includes a processor comprising hardware. The processor performs a focus control process that controls a position of a focus lens to an in-focus position based on an in-focus evaluation value, the focus lens being included in an optical system that forms a captured image acquired by an imaging section, and the in-focus evaluation value being calculated from a first area within the captured image, and performs a change-in-scene detection process that detects whether or not a change in scene has occurred from a second area that includes an area that differs from the first area. The processor is set to a standby state when the position of the focus lens has been controlled to the in-focus position, and resumes the focus control process when a change in scene is detected while the processor is set to the standby state.
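The standby-and-resume behavior reduces to a two-state machine; the state names here are chosen for the sketch and do not appear in the abstract:

```python
def focus_state_machine(state, in_focus_reached, scene_changed):
    """Standby logic from the abstract: once the focus lens reaches the
    in-focus position the controller enters a standby state; a scene
    change detected (in the second area) while in standby resumes the
    focus control process."""
    if state == "FOCUSING" and in_focus_reached:
        return "STANDBY"
    if state == "STANDBY" and scene_changed:
        return "FOCUSING"
    return state
```

Evaluating scene change from a second area that differs from the focus evaluation area lets the apparatus wake from standby even when the change occurs outside the region used for focusing.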