Abstract:
An automation level determination section selects one of automation levels defined at a plurality of stages based on a deviation degree of reliability, the deviation degree corresponding to each of a plurality of kinds of driving behaviors that are estimation results obtained using a driving behavior model. A generator generates presentation information by applying the plurality of kinds of driving behaviors to an output template corresponding to the one selected automation level from among output templates corresponding respectively to the automation levels defined at the plurality of stages. An output unit outputs the generated presentation information.
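The selection-and-templating flow above can be sketched as follows. The three-stage levels, the deviation-degree definition (spread between the most and least reliable estimates), and the thresholds are all illustrative assumptions for demonstration, not details taken from the abstract.

```python
# Hypothetical sketch: pick an automation level from the spread
# (deviation degree) of reliability values across estimated driving
# behaviors, then fill the matching output template.

def deviation_degree(reliabilities):
    """Spread between the most and least reliable behavior estimates."""
    return max(reliabilities) - min(reliabilities)

def select_automation_level(reliabilities, thresholds=(0.2, 0.5)):
    """Map a small spread (ambiguous estimates) to a low automation
    level and a large spread (one clearly dominant behavior) to a
    high one. Thresholds are illustrative."""
    d = deviation_degree(reliabilities)
    if d < thresholds[0]:
        return 1  # estimates are ambiguous: stay mostly manual
    if d < thresholds[1]:
        return 2  # partial automation
    return 3      # one behavior clearly dominates: high automation

# One output template per automation level (hypothetical wording).
TEMPLATES = {
    1: "Monitor: candidate behaviors {behaviors}",
    2: "Assisting: likely behavior {behaviors}",
    3: "Automated: executing {behaviors}",
}

def generate_presentation(behaviors, reliabilities):
    """Apply the estimated behaviors to the template for the level."""
    level = select_automation_level(reliabilities)
    return TEMPLATES[level].format(behaviors=", ".join(behaviors))
```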
Abstract:
An image generation device generates an image of an object placed on a surface of an image sensor by using a plurality of images each of which is captured by the image sensor when the object is irradiated with a corresponding one of a plurality of illuminators. The object includes a first object and one or more second objects included in the first object. The image generation device determines the section of the first object that includes the largest number of feature points of the second objects, generates an in-focus image using the section as a virtual focal plane, and causes the in-focus image to be displayed on a display screen.
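The focal-plane selection step can be sketched as below. Representing each detected feature point of a second object as an `(x, y, section_index)` tuple is a simplification assumed for demonstration; the abstract does not prescribe this representation.

```python
# Illustrative sketch: choose the virtual focal plane as the section of
# the first object containing the most feature points of the enclosed
# second objects.

from collections import Counter

def best_focal_section(feature_points):
    """feature_points: iterable of (x, y, section_index) tuples detected
    on the second objects. Returns the section index holding the most
    feature points, i.e. the section to use as the virtual focal plane."""
    counts = Counter(section for _, _, section in feature_points)
    section, _ = counts.most_common(1)[0]
    return section
```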
Abstract:
An image processing apparatus is provided for estimating a position in an image that an operator observing the image is likely to observe, as a candidate for the next position. The image processing apparatus includes a next-observation estimating unit that estimates a position selected from among a plurality of positions, using a parameter indicating an operation history and information regarding an estimation result at least at a current time, on the basis of a probability value obtained from a predetermined probability distribution, and a displayed-image generating unit that generates an image to be displayed so that at least the candidate for the next position is visually recognizable.
Abstract:
An image generation apparatus includes a plurality of irradiators and a control circuit. The control circuit performs an operation including generating an in-focus image of an object in each of a plurality of predetermined focal planes, extracting a contour of one or more cross sections of the object represented in the plurality of in-focus images, generating one or more circumferences based on the contour of the one or more cross sections, generating a sphere image in the form of a three-dimensional image of one or more spheres, each sphere having one of the circumferences, generating a synthetic image by processing the sphere image such that a cross section appears, and displaying the resultant synthetic image on a display.
Abstract:
An image generating system that generates a focal image of a target object on a virtual focal plane located between a plurality of illuminators and an image sensor (b) carries out the following (c) through (f) for each of a plurality of pixels constituting the focal image, (c) carries out the following (d) through (f) for each of the positions of the plurality of illuminators, (d) calculates a position of a target point that is a point of intersection between a light receiving surface of the image sensor and a straight line connecting a position of the pixel on the focal plane and a position of the illuminator, (e) calculates a luminance value of the target point in the image captured under illumination by the illuminator on the basis of the position of the target point, (f) applies the luminance value of the target point to the luminance value of the pixel, and (g) generates the focal image of the target object on the focal plane by using a result of applying the luminance value at each of the plurality of pixels.
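Steps (c) through (g) can be sketched as the refocusing loop below. Placing the sensor at the plane z = 0, the focal plane between sensor and illuminators, and using nearest-neighbour sampling and a plain average over illuminators are simplifying assumptions for demonstration.

```python
# Minimal sketch of the per-pixel refocusing loop: for each illuminator,
# intersect the straight line through the focal-plane pixel and the
# illuminator with the sensor plane (z = 0), sample the image captured
# under that illuminator at the intersection, and average the samples.

def intersect_sensor(pixel_xyz, illum_xyz):
    """Intersection with the sensor plane z = 0 of the line through the
    focal-plane pixel and the illuminator (step (d))."""
    (px, py, pz), (ix, iy, iz) = pixel_xyz, illum_xyz
    t = pz / (pz - iz)  # line parameter at which the line reaches z = 0
    return px + t * (ix - px), py + t * (iy - py)

def refocus_pixel(pixel_xyz, illuminators, images):
    """Average the luminance sampled on the sensor over all illuminators
    (steps (e)-(f)). images[k] is the captured image under illuminator k,
    given as a 2-D list indexed [row][column]."""
    total = 0.0
    for illum_xyz, img in zip(illuminators, images):
        x, y = intersect_sensor(pixel_xyz, illum_xyz)
        total += img[round(y)][round(x)]  # nearest-neighbour sample
    return total / len(illuminators)
```

Repeating `refocus_pixel` for every pixel of the focal plane yields the focal image of step (g).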
Abstract:
A transfer learning apparatus includes a transfer target data evaluator and an output layer adjuster. The transfer target data evaluator inputs a plurality of labeled transfer target data items, each assigned a label of a corresponding evaluation item from among one or more evaluation items, to a neural network apparatus having been trained by using a plurality of labeled transfer source data items and including in an output layer output units, the number of which is larger than or equal to the number of evaluation items, and obtains evaluation values output from the respective output units. The output layer adjuster preferentially assigns, to each of the one or more evaluation items, the output unit whose evaluation value most frequently has the smallest difference from the label of that evaluation item, as the output unit that outputs the evaluation value of the evaluation item.
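The output-layer adjustment can be sketched as below: count, over the transfer-target data, how often each output unit is closest to each evaluation item's label, then assign items to the units that win most often. The greedy strongest-pair-first assignment is an illustrative choice, not a detail stated in the abstract.

```python
# Sketch of assigning output units to evaluation items by how often
# each unit's output is closest to the item's label.

def assign_output_units(outputs, labels):
    """outputs[n][u]: value of output unit u for sample n.
    labels[n][e]: label of evaluation item e for sample n.
    Returns a dict {evaluation_item: output_unit}."""
    n_units = len(outputs[0])
    n_items = len(labels[0])
    # wins[e][u]: how often unit u is the closest to item e's label
    wins = [[0] * n_units for _ in range(n_items)]
    for out, lab in zip(outputs, labels):
        for e, target in enumerate(lab):
            best = min(range(n_units), key=lambda u: abs(out[u] - target))
            wins[e][best] += 1
    # greedily assign the strongest (item, unit) pairs first
    assignment, taken = {}, set()
    pairs = sorted(((wins[e][u], e, u) for e in range(n_items)
                    for u in range(n_units)), reverse=True)
    for _, e, u in pairs:
        if e not in assignment and u not in taken:
            assignment[e] = u
            taken.add(u)
    return assignment
```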
Abstract:
An object recognition apparatus includes a light source, an image sensor, a control circuit, and a signal processing circuit. The control circuit causes the light source to emit first light toward a scene and subsequently emit second light toward the scene, the first light having a first spatial distribution, the second light having a second spatial distribution. The control circuit causes the image sensor to detect first reflected light and second reflected light in the same exposure period, the first reflected light being caused by reflection of the first light from the scene, the second reflected light being caused by reflection of the second light from the scene. The signal processing circuit recognizes an object included in the scene based on photodetection data output from the image sensor, and based on an object recognition model pre-trained by a machine learning algorithm.
Abstract:
A display control apparatus includes a memory and a circuit. The circuit obtains an electrophoretic image from the memory, causes a display to display the electrophoretic image as a first display image, receives a selection of a first pixel in the electrophoretic image displayed on the display, obtains one or more first useful proteins corresponding to the first pixel and one or more second useful proteins corresponding to one or more second pixels, a length between the first pixel and each of the one or more second pixels being less than or equal to a first length, and causes the display to display the electrophoretic image, the one or more first useful proteins, and the one or more second useful proteins as a second display image.
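The neighbourhood lookup can be sketched as below. The pixel-to-protein table and the use of Euclidean distance for the "length" between pixels are assumptions made for demonstration.

```python
# Illustrative sketch: given a selected pixel, collect the proteins at
# that pixel plus those at pixels within a first length of it.

import math

def proteins_near(selected, protein_map, first_length):
    """selected: (x, y) of the first pixel.
    protein_map: {(x, y): [protein names]} for annotated pixels
    (hypothetical table).
    Returns proteins at the selected pixel plus those at second
    pixels whose distance from it is at most first_length."""
    found = list(protein_map.get(selected, []))
    for pixel, proteins in protein_map.items():
        if pixel == selected:
            continue
        if math.dist(selected, pixel) <= first_length:
            found.extend(proteins)
    return found
```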
Abstract:
An image generating apparatus is provided with a first light source and a second light source, an image sensor, a mask including a light-transmitting part and a light-blocking part, and a dark image processing unit. The image sensor acquires a first image of a material when illuminated by the first light source, and acquires a second image of the material when illuminated by the second light source. The image sensor includes a first pixel region and a second pixel region. The light-blocking part is positioned between the first pixel region and the first light source. The dark image processing unit uses first pixel information corresponding to the first pixel region in the first image and second pixel information corresponding to the second pixel region in the second image to generate a third image.
Abstract:
An image generation device generates a plurality of reference in-focus images of an object placed on a surface of an image sensor by using a plurality of images captured by the image sensor using sensor pixels when the object is irradiated with light by a plurality of illuminators. Each of the reference in-focus images is an in-focus image corresponding to one of a plurality of virtual reference focal planes that are located between the image sensor and the plurality of illuminators. The plurality of reference focal planes pass through the object and are spaced apart from one another. The image generation device generates a three-dimensional image of the object by using the reference in-focus images and displays the three-dimensional image on a display screen.
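Assembling the three-dimensional image from the stack of reference in-focus images can be sketched as below. Linearly interpolating extra slices between adjacent reference planes is an illustrative choice; the abstract does not specify how the planes are combined into the volume.

```python
# Minimal sketch: stack the reference in-focus images (one per
# reference focal plane, ordered by height above the sensor) into a
# volume, linearly interpolating slices between adjacent planes.

def build_volume(in_focus_images, slices_between=1):
    """in_focus_images: list of 2-D lists, ordered by focal-plane
    height. Returns a list of 2-D slices forming the volume."""
    volume = []
    for a, b in zip(in_focus_images, in_focus_images[1:]):
        volume.append(a)
        for k in range(1, slices_between + 1):
            t = k / (slices_between + 1)
            # per-pixel linear blend between the two adjacent planes
            volume.append([[(1 - t) * pa + t * pb
                            for pa, pb in zip(row_a, row_b)]
                           for row_a, row_b in zip(a, b)])
    volume.append(in_focus_images[-1])
    return volume
```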