Abstract:
In an object detection device, a plurality of object detection units each output a score indicating the probability that a predetermined object exists for each partial region set with respect to inputted image data. A weight computation unit uses weight computation parameters to compute, on the basis of the image data and the outputs of the plurality of object detection units, a weight for each of the plurality of object detection units, the weight being used when the scores outputted by the plurality of object detection units are merged. A merging unit merges the scores outputted by the plurality of object detection units for each partial region according to the weights computed by the weight computation unit. A first loss computation unit computes, as a first loss, the difference between a ground truth label of the image data and the scores merged by the merging unit. Then, a first parameter correction unit corrects the weight computation parameters so as to reduce the first loss.
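As a rough illustration of the weighted merging and weight-parameter correction described above, the following minimal PyTorch sketch assumes two frozen detectors whose per-region scores lie in [0, 1]; the WeightComputation module, the feature dimensions, and the binary cross-entropy loss are illustrative assumptions rather than the configuration of the abstract.

    import torch
    import torch.nn as nn

    class WeightComputation(nn.Module):
        """Hypothetical weight computation unit: maps each partial region's image
        features and detector scores to one weight per detector."""
        def __init__(self, feat_dim, n_detectors):
            super().__init__()
            self.fc = nn.Linear(feat_dim + n_detectors, n_detectors)

        def forward(self, feats, scores):
            logits = self.fc(torch.cat([feats, scores], dim=1))
            return torch.softmax(logits, dim=1)   # weights sum to 1 per region

    n_regions, feat_dim, n_detectors = 8, 16, 2
    feats = torch.randn(n_regions, feat_dim)            # stand-in image features per region
    scores = torch.rand(n_regions, n_detectors)         # frozen detectors' per-region scores
    labels = torch.randint(0, 2, (n_regions,)).float()  # ground-truth label per region

    weight_net = WeightComputation(feat_dim, n_detectors)
    optimizer = torch.optim.SGD(weight_net.parameters(), lr=0.1)

    weights = weight_net(feats, scores)                 # weight computation unit
    merged = (weights * scores).sum(dim=1)              # merging unit
    loss = nn.functional.binary_cross_entropy(merged, labels)  # first loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # corrects only the weight computation parameters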
Abstract:
In an object detection device, a plurality of object detection units output a score indicating the probability that a predetermined object exists for each partial region set with respect to inputted image data. On the basis of the image data, a weight computation unit uses weight computation parameters to compute weights for each of the plurality of object detection units, the weights being used when the scores outputted by the plurality of object detection units are merged. A merging unit merges the scores outputted by the plurality of object detection units for each partial region according to the weights computed by the weight computation unit. A loss computation unit computes, as a loss, the difference between a ground truth label of the image data and the scores merged by the merging unit. Then, a parameter correction unit corrects the weight computation parameters so as to reduce the computed loss.
Abstract:
Identification means 71 identifies an object indicated by data by applying the data to a model learned by machine learning. Determination means 72 determines, based on a result obtained by applying the data to the model, whether or not the data is transmission target data to be transmitted to a predetermined computer. Data transmission means 73 transmits the data determined to be transmission target data to the predetermined computer at a predetermined timing.
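A minimal sketch of the identify / determine / transmit flow follows, assuming the determination criterion is low model confidence and the predetermined timing is a fixed upload interval; both choices, as well as the model and server interfaces, are placeholders the abstract does not specify.

    import time

    def is_transmission_target(confidence, threshold=0.6):
        # hypothetical criterion: transmit samples the model is unsure about
        return confidence < threshold

    def run(model, stream, server, interval_sec=60.0):
        pending = []
        last_sent = time.monotonic()
        for sample in stream:
            label, confidence = model(sample)          # apply the data to the learned model
            if is_transmission_target(confidence):     # determination means
                pending.append(sample)
            if pending and time.monotonic() - last_sent >= interval_sec:
                server.upload(pending)                 # data transmission means, fixed timing
                pending = []
                last_sent = time.monotonic()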
Abstract:
Provided is an object detection device for efficiently and simply selecting an image for creating instructor data on the basis of the number of detected objects. The object detection device is provided with: a detection unit for detecting an object from each of a plurality of input images using a dictionary; an acceptance unit for displaying, on a display device, a graph indicating the relationship between the input images and the number of subregions in which objects are detected, and for displaying, on the display device, in order to create instructor data, one input image among the plurality of input images in accordance with a position on the graph accepted through operation of an input device; a generation unit for generating the instructor data from that input image; and a learning unit for learning a dictionary from the instructor data.
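The counting and selection steps could look like the sketch below, which assumes detect() returns the subregions in which the dictionary detects objects and reduces the graph interaction to choosing the image whose index is nearest an accepted x position; the plotting itself is omitted.

    def detection_counts(images, dictionary, detect):
        # number of detected subregions per input image (the y values of the graph)
        return [len(detect(image, dictionary)) for image in images]

    def select_image(images, counts, accepted_x):
        # accepted_x: position on the graph chosen via the input device, in image-index units
        idx = min(range(len(images)), key=lambda i: abs(i - accepted_x))
        return images[idx], counts[idx]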
Abstract:
When the presence of a subject is detected within the image-capturing range of an image sensor, a frame image is created to acquire the identifier of a corresponding product. A feature quantity of the frame image is stored in a storage device in connection with the acquired identifier, and product information associated with the identifier is acquired from a product information DB to perform sales processing.
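One possible shape for this flow is sketched below; capture_frame, read_identifier, extract_feature, and register_sale are hypothetical callables standing in for components the abstract names only abstractly.

    def on_subject_detected(capture_frame, read_identifier, extract_feature,
                            feature_store, product_info_db, register_sale):
        # runs once when a subject is detected within the sensor's image-capturing range
        frame = capture_frame()                             # create a frame image
        identifier = read_identifier(frame)                 # acquire the product identifier
        feature_store[identifier] = extract_feature(frame)  # store the feature with the identifier
        register_sale(product_info_db[identifier])          # sales processing with the product info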
Abstract:
An object detection apparatus, etc., capable of detecting an object area with greater precision is disclosed. Such an object detection apparatus is provided with: a part area indication means for indicating, from a plurality of images including the object, a part area, which is an area including a target part among the parts forming an object including a detection-target object; an appearance probability distribution generation means for generating an appearance probability distribution and an absence probability distribution of the part area based on the appearance frequency of the part area associated with each position in the images; and an object determination means for determining, in an input image, the area including the object with reference to the appearance probability distribution and the absence probability distribution of the part area.
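A minimal NumPy sketch of the appearance and absence distributions is given below, assuming part-area positions have been quantised onto a fixed grid; the particular definition of the absence distribution and the region score are illustrative assumptions, not the claimed method.

    import numpy as np

    def appearance_distributions(part_positions, grid_shape):
        # part_positions: (row, col) cells where the part area appeared across the images
        counts = np.zeros(grid_shape)
        for r, c in part_positions:
            counts[r, c] += 1.0
        appearance = counts / max(counts.sum(), 1e-9)  # appearance probability distribution
        absence = counts.max() - counts                # high where the part is rarely seen
        absence = absence / max(absence.sum(), 1e-9)   # one possible absence probability distribution
        return appearance, absence

    def region_score(appearance, absence, region):
        # region = (r0, r1, c0, c1); higher when the part tends to appear inside the region
        r0, r1, c0, c1 = region
        return appearance[r0:r1, c0:c1].sum() - absence[r0:r1, c0:c1].sum()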
Abstract:
The present disclosure provides an image processing apparatus capable of efficiently generating a highly accurate learning model. An image processing apparatus (1) includes a data acquisition unit (2), a data generation unit (4), a recognition accuracy calculation unit (6), and a learning data output unit (8). The data acquisition unit (2) acquires input image data. The data generation unit (4) converts the input image data by using a data conversion parameter and newly generates image data. The recognition accuracy calculation unit (6) calculates a recognition accuracy of the image data generated by the data generation unit (4) by using a learning model stored in advance. The learning data output unit (8) outputs, as learning data, the image data of which the recognition accuracy calculated by the recognition accuracy calculation unit (6) is lower than a first threshold.
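The generate-and-filter loop could be sketched as follows, where convert() applies a data conversion parameter (e.g. a brightness shift or rotation) and model_accuracy() returns a recognition accuracy in [0, 1]; both interfaces and the threshold value are assumptions for illustration.

    def collect_learning_data(input_images, conversion_params, convert,
                              model_accuracy, first_threshold=0.5):
        learning_data = []
        for image in input_images:                    # data acquisition unit (2)
            for param in conversion_params:
                generated = convert(image, param)     # data generation unit (4)
                accuracy = model_accuracy(generated)  # recognition accuracy calculation unit (6)
                if accuracy < first_threshold:        # keep only images the stored model finds hard
                    learning_data.append(generated)   # learning data output unit (8)
        return learning_data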
Abstract:
To more reliably land a flying body at a desired point, the flying body includes: a determiner that determines whether the flying body is taking off and ascending from a takeoff point or descending to land; a camera mounted in the flying body; a recorder that records a lower image captured by the camera if it is determined that the flying body is taking off and ascending; and a guider that, if it is determined that the flying body is descending to land, guides the flying body to the takeoff point during the descent using a lower image recorded in the recorder during takeoff and ascent and a lower image captured during the descent.
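One way to realise the guidance step is to record lower images tagged with altitude during ascent and, during descent, estimate the lateral offset between the current lower image and the recorded image nearest in altitude; the NumPy phase-correlation sketch below is one such comparison and is not taken from the abstract.

    import numpy as np

    def phase_correlation_offset(ref, cur):
        # estimate the (dy, dx) translation between two equally sized grayscale images
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(cur)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-9
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap into a signed range
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def guidance_command(recorded, altitude, current_image, gain=0.01):
        # recorded: list of (altitude, lower_image) pairs saved during takeoff and ascent
        _, ref_img = min(recorded, key=lambda rec: abs(rec[0] - altitude))
        dy, dx = phase_correlation_offset(ref_img, current_image)
        return -gain * dx, -gain * dy   # lateral velocity command back toward the takeoff point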
Abstract:
The server device receives model information from a plurality of terminal devices and generates an integrated model by integrating the model information received from the plurality of terminal devices. The server device generates an updated model by learning, using the integrated model, a model defined by the model information received from the update-target terminal device. Then, the server device transmits the model information of the updated model to that terminal device. Thereafter, the terminal device executes recognition processing using the updated model.
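A minimal sketch of the server-side round is given below, assuming the "model information" is a dict of NumPy parameter arrays and that integration is FedAvg-style parameter averaging; the learning of the update-target model is stood in for by a simple blend toward the integrated model, since the abstract does not specify the training procedure.

    import numpy as np

    def integrate(models):
        # integrate the model information received from all terminal devices (parameter averaging)
        return {k: np.mean([m[k] for m in models], axis=0) for k in models[0]}

    def update_target(target_model, integrated, alpha=0.5):
        # stand-in for learning the update-target terminal's model using the integrated model
        return {k: (1 - alpha) * v + alpha * integrated[k] for k, v in target_model.items()}

    def server_round(terminal_models, target_id):
        integrated = integrate(list(terminal_models.values()))
        updated = update_target(terminal_models[target_id], integrated)
        return updated   # model information transmitted back to the update-target terminal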