Abstract:
An image generation apparatus includes a plurality of irradiators and a control circuit. The control circuit performs an operation including generating an in-focus image of an object in each of a plurality of predetermined focal planes, extracting a contour of one or more cross sections of the object represented in the plurality of in-focus images, generating one or more circumferences based on the contour of the one or more cross sections, generating a sphere image in the form of a three-dimensional image of one or more spheres, each sphere having one of the circumferences, generating a synthetic image by processing the sphere image such that a cross section appears, and displaying the resultant synthetic image on a display.
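A minimal sketch of this pipeline in Python, assuming the in-focus images are already available as 8-bit grayscale numpy arrays (the refocusing step itself is omitted). The use of OpenCV, the threshold value, and all function names are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np
import cv2

def circle_from_infocus_image(infocus_img, threshold=128):
    """Extract the cross-section contour and fit a circumference to it."""
    _, binary = cv2.threshold(infocus_img, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV >= 4
    largest = max(contours, key=cv2.contourArea)     # main cross section
    (cx, cy), r = cv2.minEnclosingCircle(largest)    # fitted circumference
    return (cx, cy), r

def sphere_voxels(center_xy, radius, z_center, shape):
    """Rasterize a sphere whose equator is the fitted circumference."""
    z, y, x = np.indices(shape)
    cx, cy = center_xy
    return ((x - cx) ** 2 + (y - cy) ** 2 + (z - z_center) ** 2) <= radius ** 2

def cut_away(volume, z_cut):
    """Synthetic cut-away view: drop one half so the cross section is exposed."""
    cut = volume.copy()
    cut[:z_cut] = False
    return cut
```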
Abstract:
An image output device according to the present disclosure includes: an image acquisition unit that acquires an image with a first resolution; a high-resolution image acquisition unit that acquires an image with a second resolution, which is higher than the first resolution; an enlargement input unit that accepts input of an enlargement ratio; a determination unit that determines whether or not an evaluation score determined based on the accepted enlargement ratio is higher than a certain value; and a transmission unit that transmits the image with the second resolution if the evaluation score is determined to be higher than the certain value, and does not transmit the image with the second resolution if the evaluation score is determined not to be higher than the certain value.
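A minimal sketch of the transmission decision, assuming for illustration that the evaluation score is simply the requested enlargement ratio compared against a fixed threshold; the actual scoring rule, image representation, and transport layer are not specified in the abstract.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImagePair:
    low_res: bytes    # image with the first resolution
    high_res: bytes   # image with the second (higher) resolution

def select_image_to_send(pair: ImagePair, enlargement_ratio: float,
                         threshold: float = 2.0) -> Optional[bytes]:
    """Return the second-resolution image only when the score exceeds the threshold."""
    evaluation_score = enlargement_ratio      # placeholder scoring rule (assumed)
    if evaluation_score > threshold:
        return pair.high_res                  # transmit the second-resolution image
    return None                               # do not transmit it
```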
Abstract:
An image acquisition device according to the present disclosure includes a lighting system and an irradiation direction decision section. In a module, a subject and an imaging element are integrally formed. The lighting system sequentially irradiates the subject with illumination light in a plurality of different irradiation directions with respect to the subject such that the illumination light transmitted through the subject is incident on the imaging element. The module acquires a plurality of images according to the plurality of different irradiation directions. Before the plurality of images are acquired according to the plurality of different irradiation directions, the irradiation direction decision section decides the plurality of different irradiation directions based on a difference between a first preliminary image and a second preliminary image. The first preliminary image is acquired when the subject is irradiated with first illumination light in a first irradiation direction, and the second preliminary image is acquired when the subject is irradiated with second illumination light in a second irradiation direction.
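A minimal sketch of how the irradiation directions might be decided from the two preliminary images, assuming the "difference" is a mean absolute pixel difference and that a larger difference justifies sampling more directions; the thresholds and candidate angle sets are illustrative only.

```python
import numpy as np

def decide_irradiation_directions(prelim_1, prelim_2,
                                  coarse_angles=(-30, 0, 30),
                                  fine_angles=(-30, -15, 0, 15, 30),
                                  diff_threshold=10.0):
    """Pick a set of illumination angles (degrees) from two preliminary images."""
    diff = np.mean(np.abs(prelim_1.astype(float) - prelim_2.astype(float)))
    # A large difference between the two preliminary images suggests the subject's
    # appearance is strongly direction dependent, so use finer angular sampling.
    return fine_angles if diff > diff_threshold else coarse_angles
```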
Abstract:
An image processing apparatus includes a divider that generates a plurality of pieces of third image information on the basis of a plurality of pieces of first image information and a plurality of pieces of second image information, a determiner that determines, on the basis of information regarding a sample, a filter to be used for the plurality of pieces of third image information, and a processor that deconvolutes each of the plurality of pieces of third image information using the determined filter. An image sensor that has received first resulting light emitted from a sample that has received first light emitted from a first angle outputs the plurality of pieces of first image information. The image sensor that has received second resulting light emitted from the sample that has received second light emitted from a second angle outputs the plurality of pieces of second image information.
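A minimal sketch of the divide / determine-filter / deconvolve flow, using a Wiener-style frequency-domain deconvolution as a stand-in for whichever filter the apparatus actually selects; the sample-dependent filter lookup, the averaging used to form the third image information, and all parameter values are assumptions for illustration.

```python
import numpy as np

def determine_filter(sample_info: dict, shape) -> np.ndarray:
    """Return a point-spread function chosen from sample information (illustrative)."""
    sigma = sample_info.get("thickness_um", 10) / 10.0   # assumed heuristic
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(image: np.ndarray, psf: np.ndarray, snr: float = 100.0):
    """Deconvolve one piece of third image information with the determined filter."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * G
    return np.real(np.fft.ifft2(F))

def process(first_pieces, second_pieces, sample_info):
    # "Divide": combine the two angle-specific stacks into third image pieces.
    third_pieces = [0.5 * (a + b) for a, b in zip(first_pieces, second_pieces)]
    psf = determine_filter(sample_info, third_pieces[0].shape)
    return [wiener_deconvolve(p, psf) for p in third_pieces]
```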
Abstract:
The present disclosure provides a technique that makes it possible to evaluate a state of one or more cell aggregations (spheroids). In the culture state determination device according to the present disclosure, a plurality of light sources sequentially illuminate a plurality of cell aggregations placed on an image sensor. The image sensor acquires captured images of the plurality of cell aggregations each time the plurality of light sources illuminate the plurality of cell aggregations. Control circuitry extracts a region including an image of a cell aggregation in each captured image; generates three-dimensional image information of the region using the plurality of captured images; extracts an outer shape of the cell aggregation and a cavity part inside the cell aggregation using the three-dimensional image information; calculates, in the three-dimensional image information, a first volume that is a volume based on the outer shape of each of the cell aggregations and a second volume that is a volume of the cavity part of each of the cell aggregations; and determines a culture state of the cell aggregations using the first volume and the second volume.
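A minimal sketch of the volume-based determination, assuming the three-dimensional image information for one cell aggregation is already available as a boolean voxel grid; the cavity-ratio rule and its threshold are illustrative, not the criterion defined in the disclosure.

```python
import numpy as np
from scipy import ndimage

def volumes_from_voxels(aggregation_voxels: np.ndarray, voxel_volume: float = 1.0):
    """Return (outer-shape volume, cavity volume) for one cell aggregation."""
    filled = ndimage.binary_fill_holes(aggregation_voxels)   # outer shape
    first_volume = filled.sum() * voxel_volume
    cavity = filled & ~aggregation_voxels                    # enclosed cavity part
    second_volume = cavity.sum() * voxel_volume
    return first_volume, second_volume

def determine_culture_state(first_volume: float, second_volume: float,
                            cavity_ratio_limit: float = 0.2) -> str:
    """Illustrative rule: too large a cavity fraction marks a poor culture state."""
    ratio = second_volume / first_volume if first_volume else 0.0
    return "good" if ratio < cavity_ratio_limit else "poor"
```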
Abstract:
A cell culture container includes a container that houses therein a liquid mixture including one or more cells and a culture solution, an irradiator that irradiates the liquid mixture with light, and an image sensor that receives transmitted light that is the light that has been emitted from the irradiator and has passed through the liquid mixture. The light emitted from the irradiator includes a plurality of rays, and the plurality of rays do not cross each other between the irradiator and the image sensor.
Abstract:
A socket includes a first base member that includes a module mount unit on which a module including an imaging device and an object can be placed, and an electric connector that electrically connects the imaging device to an external apparatus; a second base member having an opening; and an engagement unit that engages the first base member with the second base member with the module placed on the module mount unit sandwiched between the first and second base members. When the engagement unit engages the first base member with the second base member in this state, the electric connector is electrically connected to the imaging device, and the object receives illumination light from a light source through the opening.
Abstract:
An image generation device generates a plurality of reference in-focus images of an object placed on a surface of an image sensor by using a plurality of images captured with the sensor pixels of the image sensor while the object is irradiated with light by a plurality of illuminators. Each of the reference in-focus images is an in-focus image corresponding to one of a plurality of virtual reference focal planes located between the image sensor and the plurality of illuminators. The plurality of reference focal planes pass through the object and are spaced apart from one another. The image generation device generates a three-dimensional image of the object by using the reference in-focus images and displays the three-dimensional image on a display screen.
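A minimal sketch of multi-plane refocusing by shift-and-add, assuming a simple geometric model in which an illuminator at lateral offset (dx, dy) and height h shifts the shadow of a point at height z by (dx, dy) * z / (h - z); the device's actual optics and interpolation are not specified in the abstract, so this is only an assumed model.

```python
import numpy as np
from scipy import ndimage

def refocus(images, illuminator_offsets, illuminator_height, z):
    """Compute one reference in-focus image for the virtual focal plane at height z."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, illuminator_offsets):
        scale = z / (illuminator_height - z)
        # Shift each captured image so features at height z align, then average.
        acc += ndimage.shift(img.astype(float), (dy * scale, dx * scale), order=1)
    return acc / len(images)

def reference_stack(images, offsets, height, z_planes):
    """Stack the in-focus images of the spaced reference focal planes into a 3-D image."""
    return np.stack([refocus(images, offsets, height, z) for z in z_planes])
```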
Abstract:
Provided is an evaluation method for learning data that facilitates the generation of learning data capable of contributing to an improvement in the recognition rate of a model. The evaluation method includes a first evaluation step and a second evaluation step. The first evaluation step evaluates the performance of a trained model obtained by machine learning using learning data generated by data extension (data augmentation) processing. The second evaluation step evaluates a parameter of the data extension processing on the basis of the evaluation obtained in the first evaluation step and the possible range of the parameter.
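A minimal sketch of the two evaluation steps, assuming the training routine and validation metric are supplied by the caller; the way the measured accuracy is combined with the parameter's position in its allowed range is an illustrative rule, not the one defined in the disclosure.

```python
from typing import Callable, Sequence, Tuple

def evaluate_augmentation_parameter(
    train_and_score: Callable[[float], float],   # trains with the parameter, returns accuracy
    param_range: Tuple[float, float],
    candidates: Sequence[float],
):
    lo, hi = param_range
    results = []
    for p in candidates:
        accuracy = train_and_score(p)                 # first evaluation step
        margin = min(p - lo, hi - p) / (hi - lo)      # second step: range-based term (assumed)
        results.append((p, accuracy, accuracy * (0.5 + margin)))
    # Return candidate parameters ordered by the combined score, best first.
    return sorted(results, key=lambda t: t[2], reverse=True)
```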
Abstract:
An image generation device generates an image of an object placed on a surface of an image sensor by using a plurality of images, each of which is captured by the image sensor when the object is irradiated with light by a corresponding one of a plurality of illuminators. The object includes a first object and one or more second objects contained in the first object. The image generation device determines a section of the first object that includes the largest number of feature points of the second objects, generates an in-focus image using the section as a virtual focal plane, and causes the in-focus image to be displayed on a display screen.
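A minimal sketch of selecting the virtual focal plane, assuming candidate in-focus images have already been generated for a set of section heights and that "feature points" are detected with an ordinary corner detector; the detector choice and its parameters are assumptions, not the method defined in the disclosure.

```python
import numpy as np
import cv2

def best_focal_plane(infocus_candidates, z_planes, max_corners=500):
    """Return the section height whose in-focus image contains the most feature points."""
    best_z, best_count = None, -1
    for img, z in zip(infocus_candidates, z_planes):
        img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        corners = cv2.goodFeaturesToTrack(img8, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=3)
        count = 0 if corners is None else len(corners)
        if count > best_count:
            best_z, best_count = z, count
    return best_z, best_count
```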