Abstract:
An information processing apparatus of the present application includes a drawing unit that draws a line on an image in a drawing area displayed in a fixed mode, which displays the image by fixing a position of the image; a judgment unit that judges whether a position of the line being drawn by the drawing unit is located on a boundary of the drawing area; a mode switching unit that switches from the fixed mode to a predetermined mode other than the fixed mode in a case in which the judgment unit judges that the position during drawing is located on the boundary of the drawing area; and a display control unit that executes control to display the image based on the mode switched to by the mode switching unit.
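A minimal Python sketch of the boundary check and mode switch described above; the DrawingArea class, the FIXED/SCROLL mode names, and the pixel tolerance are illustrative assumptions, not the apparatus's actual interfaces.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    FIXED = auto()    # the image is displayed with its position fixed
    SCROLL = auto()   # assumed "predetermined mode other than the fixed mode"


@dataclass
class DrawingArea:
    left: float
    top: float
    right: float
    bottom: float

    def on_boundary(self, x: float, y: float, tol: float = 2.0) -> bool:
        """Judge whether the current drawing position lies on the area boundary."""
        inside = (self.left - tol <= x <= self.right + tol
                  and self.top - tol <= y <= self.bottom + tol)
        near_edge = (abs(x - self.left) <= tol or abs(x - self.right) <= tol
                     or abs(y - self.top) <= tol or abs(y - self.bottom) <= tol)
        return inside and near_edge


def update_mode(area: DrawingArea, x: float, y: float, mode: Mode) -> Mode:
    """Switch out of the fixed mode once the stroke reaches the boundary."""
    if mode is Mode.FIXED and area.on_boundary(x, y):
        return Mode.SCROLL
    return mode
```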
Abstract:
A multi-class identifier identifies a kind of an image, and performs detailed identification for a specified group of kinds. The multi-class identifier includes: an identification fault counter that provides a test image including any of the class labels to the kind identifiers so that the kind identifiers individually identify the kind of the provided image, and counts, for each combination of an arbitrary number of kinds among the plurality of kinds, the number of times of incorrect determination among the kinds belonging to the combination; and a grouping processor that, for a group formed from a combination whose count result is equal to or greater than a predetermined threshold, adds a group label corresponding to the group to a learning image that includes the class label corresponding to any of the kinds belonging to the group.
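The grouping step can be illustrated with a short Python sketch. For brevity it counts misidentifications only for pairs of kinds (the abstract allows combinations of an arbitrary number of kinds), and the kind_identifiers callables and the data layout are assumptions.

```python
from collections import Counter


def group_confusable_kinds(test_samples, kind_identifiers, threshold):
    """Count incorrect determinations between pairs of kinds and return the
    pairs whose count reaches the threshold.

    test_samples: iterable of (features, true_kind) test images with class labels
    kind_identifiers: dict mapping kind -> callable(features) -> bool
    """
    confusion = Counter()
    for features, true_kind in test_samples:
        for kind, identify in kind_identifiers.items():
            if kind != true_kind and identify(features):
                # an incorrect determination between true_kind and kind
                confusion[frozenset((true_kind, kind))] += 1
    return [pair for pair, count in confusion.items() if count >= threshold]


def add_group_labels(learning_samples, groups):
    """Attach group labels to learning images whose class label belongs to a group."""
    labelled = []
    for features, class_label in learning_samples:
        group_labels = [i for i, g in enumerate(groups) if class_label in g]
        labelled.append((features, class_label, group_labels))
    return labelled
```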
Abstract:
An image capture device (1) includes an image acquiring unit (71), an image specifying unit (72), an image selecting unit (73), and a composite image generating unit (74). The image acquiring unit (71) acquires data of a plurality of images indicative of a sequence of actions of an object. The image specifying unit (72) specifies a predetermined partial action in the sequence of actions of the object from the plurality of images acquired by the image acquiring unit (71). The image selecting unit (73) selects data of a plurality of images corresponding to the predetermined action from among the data of the plurality of images based on the specifying result of the image specifying unit (72). The composite image generating unit (74) generates one composite image from the data of the plurality of images selected by the image selecting unit (73).
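A hedged Python sketch of the selection and combining steps; the abstract does not specify the combining method, so a per-pixel lighten blend via NumPy stands in for the composite image generating unit.

```python
import numpy as np


def select_action_frames(frames, action_flags):
    """Keep the frames that the specifying step marked as belonging to the
    predetermined partial action (action_flags[i] is True for those frames)."""
    return [f for f, flag in zip(frames, action_flags) if flag]


def generate_composite(frames):
    """Combine the selected frames into one composite image with a per-pixel
    lighten blend (a stand-in for the device's actual combining method)."""
    stack = np.stack(frames).astype(np.float32)
    return np.max(stack, axis=0).astype(np.uint8)
```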
Abstract:
A reconstructed image that better reflects the photo-shooting information of a main object is generated. A layer determiner defines the layers of a reconstructed image. A layer image generator reconstructs, for each layer, an image of an object included in the allocated depth range from a light field image and the depth map of the light field image. A conversion pixel extractor extracts the corresponding pixels on a conversion layer, which corresponds to an object to be modified. The object to be modified is designated by an operation acquired by a modification operation acquirer. A reconstructed image generator converts the layer images using a conversion matrix defined by a conversion matrix determiner and, by superimposing the layer images, generates an image whose composition has been modified.
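The layer handling can be sketched in Python as follows. The sketch starts from an already reconstructed image plus its depth map rather than from the raw light field, and the affine conversion matrix and nearest-neighbour resampling are simplifying assumptions.

```python
import numpy as np


def split_into_layers(reconstructed, depth_map, depth_ranges):
    """Build one RGBA layer image per allocated depth range; pixels outside
    the range get alpha 0."""
    layers = []
    for lo, hi in depth_ranges:
        mask = (depth_map >= lo) & (depth_map < hi)
        layer = np.zeros(reconstructed.shape[:2] + (4,), dtype=np.float32)
        layer[..., :3] = reconstructed
        layer[..., 3] = mask.astype(np.float32)
        layers.append(layer)
    return layers


def convert_layer(layer, matrix):
    """Move the layer's pixels with a 2x3 affine conversion matrix
    (forward mapping with nearest-neighbour rounding keeps the sketch short)."""
    h, w = layer.shape[:2]
    out = np.zeros_like(layer)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    dx, dy = (matrix @ coords).round().astype(int)
    ok = (dx >= 0) & (dx < w) & (dy >= 0) & (dy < h)
    out[dy[ok], dx[ok]] = layer[ys.ravel()[ok], xs.ravel()[ok]]
    return out


def superimpose(layers):
    """Composite the layer images from far to near using their alpha masks."""
    result = np.zeros(layers[0].shape[:2] + (3,), dtype=np.float32)
    for layer in layers:
        alpha = layer[..., 3:4]
        result = layer[..., :3] * alpha + result * (1.0 - alpha)
    return result
```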
Abstract:
A state estimation apparatus includes: at least one processor; and a memory configured to store a program executable by the at least one processor; wherein the at least one processor is configured to: acquire a biological signal of a subject, set, as a plurality of extraction time windows in a certain period in which the biological signal is being acquired, a plurality of time windows having mutually different time lengths, extract a feature value of the biological signal in each of the plurality of extraction time windows, and estimate a state of the subject based on the extracted feature values.
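A minimal Python sketch of the multi-window feature extraction, assuming the windows all end at the most recent sample and using the window mean as a placeholder feature value; the estimator callable stands in for whatever state estimation model the apparatus uses.

```python
import numpy as np


def estimate_state(signal, fs, window_lengths_s, estimator):
    """Extract one feature value per extraction time window of differing
    length, all ending at the most recent sample, then estimate the state.

    signal: 1-D array of the acquired biological signal
    fs: sampling frequency in Hz
    window_lengths_s: mutually different window lengths, e.g. (10, 30, 60)
    estimator: callable mapping the feature vector to a state label
    """
    features = []
    for length_s in window_lengths_s:
        window = signal[-int(length_s * fs):]
        features.append(float(np.mean(window)))  # placeholder feature value
    return estimator(np.array(features))
```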
Abstract:
An aspect of the disclosure relates to a training device including a memory storing a program, and at least one processor configured to execute the program stored in the memory, in which the processor is configured to: acquire pulse wave data to which biological reaction information is imparted; extract, as an identification reference point, a local maximum point or a local minimum point of a baseline derived from the pulse wave data, and set a correct answer label for the identification reference point based on the biological reaction information; set an analysis window for the extracted identification reference point and determine a feature vector of the identification reference point in the analysis window; and train a discriminator that identifies a cyclic alternating pattern (CAP), which indicates periodic brain wave activity, using training data including the feature vector and the correct answer label.
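A hedged Python sketch of the training flow, using SciPy's argrelextrema for the reference points and an SVM from scikit-learn as a stand-in discriminator; the window contents as feature vector and the labels_fn labeling callable are illustrative assumptions.

```python
import numpy as np
from scipy.signal import argrelextrema
from sklearn.svm import SVC


def build_training_data(baseline, labels_fn, half_window):
    """Use local maxima of the baseline as identification reference points,
    set a correct-answer label for each from the biological reaction
    information (labels_fn), and cut an analysis window around each point as
    its feature vector. Local minima could be used in the same way."""
    peaks = argrelextrema(baseline, np.greater)[0]
    X, y = [], []
    for p in peaks:
        if p - half_window < 0 or p + half_window >= len(baseline):
            continue
        X.append(baseline[p - half_window:p + half_window])
        y.append(labels_fn(p))  # e.g. 1 = CAP, 0 = non-CAP
    return np.array(X), np.array(y)


def train_discriminator(X, y):
    """Train a discriminator on (feature vector, correct answer label) pairs;
    an RBF-kernel SVM stands in for the device's actual model."""
    clf = SVC(kernel="rbf")
    clf.fit(X, y)
    return clf
```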
Abstract:
A bioinformation acquiring apparatus includes at least one processor; and a memory configured to store a program to be executed by the processor. The processor acquires bioinformation in chronological order; derives outlier level parameters, each indicating a level of inclusion of outliers of the bioinformation in pieces of bioinformation acquired within a first duration; derives correction terms based on the bioinformation after removal of the outliers from pieces of bioinformation acquired within a second duration that is longer than the first duration; selects, as a correction procedure, one or both of a first correction procedure and a second correction procedure based on the outlier level parameters, the first correction procedure using the correction terms, the second correction procedure involving interpolation that does not use the correction terms; and corrects the outliers of the bioinformation within the first duration by the selected correction procedure.
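A minimal Python sketch of the procedure selection, assuming a 3-sigma rule for flagging outliers, the outlier fraction in the short window as the outlier level parameter, and the median of the cleaned long window as the correction term; none of these specifics come from the abstract.

```python
import numpy as np


def correct_outliers(values, short_window, long_window, level_threshold=0.2):
    """Correct outliers in the most recent first duration (short_window
    samples), choosing the correction procedure from the outlier level.

    values: chronologically ordered bioinformation samples (1-D array)
    """
    recent = values[-short_window:].astype(float)
    history = values[-long_window:].astype(float)

    # flag outliers with a simple 3-sigma rule (placeholder criterion)
    mu, sigma = np.mean(history), np.std(history)
    is_outlier = np.abs(recent - mu) > 3 * sigma
    outlier_level = is_outlier.mean()  # fraction of outliers in the short window

    # correction term from the longer duration with its own outliers removed
    clean_history = history[np.abs(history - mu) <= 3 * sigma]
    correction_term = np.median(clean_history)

    corrected = recent.copy()
    if outlier_level >= level_threshold:
        # heavy contamination: replace outliers with the correction term
        corrected[is_outlier] = correction_term
    else:
        # light contamination: interpolate over the outliers instead
        good = ~is_outlier
        corrected[is_outlier] = np.interp(
            np.flatnonzero(is_outlier), np.flatnonzero(good), recent[good])
    return corrected
```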
Abstract:
An acceleration acquirer of a controller of an in-bed and out-of-bed determination device acquires, in a time series, acceleration of a subject from a sensor that is an acceleration sensor attached to the subject. A body motion determiner determines, based on the acquired acceleration, whether there is a body motion of the subject at each time. An evaluator evaluates a distribution of the acceleration at times at which it is determined that there is no body motion. An estimator estimates a body axis of the subject based on the evaluated distribution of the acceleration. An in-bed and out-of-bed determiner determines at least one of an in-bed state and an out-of-bed state of the subject based on the body axis of the subject estimated by the estimator.
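A hedged Python sketch of the body-axis evaluation, assuming the sensor's z-axis is aligned with the subject's body axis and that the mean acceleration over no-motion samples approximates gravity; the 45-degree lying threshold is likewise an illustrative choice.

```python
import numpy as np


def estimate_body_axis_tilt(acc_samples, motion_flags,
                            body_axis=np.array([0.0, 0.0, 1.0])):
    """Evaluate the distribution of acceleration at times with no body motion:
    the mean of those samples approximates gravity in the body-fixed sensor
    frame, and its angle to the assumed sensor-aligned body axis gives the
    tilt of the subject's body axis from vertical."""
    still = acc_samples[~motion_flags]          # shape (n, 3)
    gravity = still.mean(axis=0)
    gravity = gravity / np.linalg.norm(gravity)
    cos_tilt = abs(float(gravity @ body_axis))
    return float(np.degrees(np.arccos(np.clip(cos_tilt, 0.0, 1.0))))


def determine_in_bed(tilt_deg, lying_threshold_deg=45.0):
    """A near-vertical body axis suggests out of bed; a body axis tilted past
    the threshold suggests the subject is lying, i.e. in bed."""
    return tilt_deg > lying_threshold_deg
```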
Abstract:
A voice input unit has predetermined directivity for acquiring a voice. A sound source arrival direction estimation unit operating as a first direction detection unit detects a first direction, which is an arrival direction of a signal voice of a predetermined target, from the acquired voice. Moreover, a sound source arrival direction estimation unit operating as a second direction detection unit detects a second direction, which is an arrival direction of a noise voice, from the acquired voice. A sound source separation unit, a sound volume calculation unit, and a detection unit having an S/N ratio calculation unit detect a sound source separation direction or a sound source separation position, based on the first direction and the second direction.
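A minimal Python sketch of how the S/N-based detection could be wired up; separate_fn is a placeholder for the sound source separation unit, the candidate angles are assumed to be taken around the detected first (signal) direction, and the dB-ratio criterion is an assumption.

```python
import numpy as np


def detect_separation_direction(audio, candidate_angles_deg, separate_fn,
                                noise_dir_deg):
    """Scan candidate separation directions (assumed to be taken around the
    detected first direction), separate the component arriving from each
    candidate with separate_fn(audio, angle), and keep the direction whose
    S/N ratio against the noise-direction component is largest."""
    noise = separate_fn(audio, noise_dir_deg)
    noise_power = float(np.mean(noise ** 2)) + 1e-12
    best_angle, best_snr = None, -np.inf
    for angle in candidate_angles_deg:
        target = separate_fn(audio, angle)
        snr_db = 10.0 * np.log10(float(np.mean(target ** 2)) / noise_power)
        if snr_db > best_snr:
            best_angle, best_snr = angle, snr_db
    return best_angle, best_snr
```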
Abstract:
A determination result is easily obtained even when expression determination is performed on a face image that is not a front view. A robot includes a camera, a face detector, a face angle estimator, and an expression determiner. The camera acquires image data. The face detector detects a face of a person from the image data acquired by the camera. The face angle estimator estimates an angle of the face detected by the face detector. The expression determiner determines an expression of the face based on the angle estimated by the face angle estimator.
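A hedged Python sketch of how the robot's units fit together; each callable is a placeholder, since the abstract names the units but not their internals.

```python
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np


@dataclass
class ExpressionPipeline:
    """Minimal wiring of the robot's units; each callable is a placeholder
    for the corresponding detector, estimator, or determiner."""
    detect_face: Callable[[np.ndarray], Optional[np.ndarray]]   # image -> face crop or None
    estimate_angle: Callable[[np.ndarray], float]               # face crop -> face angle (degrees)
    determine_expression: Callable[[np.ndarray, float], str]    # (face crop, angle) -> expression label

    def run(self, image: np.ndarray) -> Optional[str]:
        face = self.detect_face(image)
        if face is None:
            return None
        angle = self.estimate_angle(face)
        # the determiner uses the estimated angle, so a result can be
        # obtained even when the face is not a front view
        return self.determine_expression(face, angle)
```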