Abstract:
The first parameter generation unit 811 generates a first parameter, which is a parameter of a first recognizer, using first learning data including a combination of data to be recognized, a correct label of the data, and domain information indicating a collection environment of the data. The second parameter generation unit 812 generates, based on the first parameter, a second parameter, which is a parameter of a second recognizer, using second learning data including a combination of data to be recognized that is collected in a predetermined collection environment, a correct label of the data, and target domain information indicating the predetermined collection environment. The third parameter generation unit 813 generates a third parameter, to be used for pattern recognition of input data, by integrating the first parameter and the second parameter through learning using the first learning data.
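A minimal sketch of the three-stage parameter generation, using logistic regression as a stand-in recognizer; initializing the second parameter from the first and integrating by a convex combination whose mixing weight alpha is learned on the first learning data are assumptions, since the abstract does not fix the model or the integration method:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(X, y, w_init=None, lr=0.1, epochs=200):
    # Logistic-regression recognizer standing in for both recognizers.
    w = np.zeros(X.shape[1]) if w_init is None else w_init.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def loss(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# First parameter: trained on first learning data pooled over many domains.
X_src = rng.normal(size=(200, 5))
y_src = (X_src[:, 0] > 0).astype(float)
w1 = train_linear(X_src, y_src)

# Second parameter: fine-tuned from w1 on second learning data collected in
# the predetermined collection environment (the target domain).
X_tgt = rng.normal(loc=0.5, size=(40, 5))
y_tgt = (X_tgt[:, 0] > 0.5).astype(float)
w2 = train_linear(X_tgt, y_tgt, w_init=w1)

# Third parameter: a convex combination of w1 and w2 whose mixing weight is
# chosen by learning on the first learning data (an assumed integration scheme).
alpha = min(np.linspace(0, 1, 11),
            key=lambda a: loss(a * w1 + (1 - a) * w2, X_src, y_src))
w3 = alpha * w1 + (1 - alpha) * w2
```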
Abstract:
An object tracking apparatus, method and computer-readable medium for detecting an object from output information of sensors, tracking the object on the basis of a plurality of detection results, generating tracking information of the object represented in a common coordinate system, outputting the tracking information, and detecting the object on the basis of the tracking information.
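A minimal sketch of multi-sensor tracking in a common coordinate system; the SENSOR_OFFSETS calibration, the greedy nearest-neighbour association, and the gate radius are illustrative assumptions, not the patented method:

```python
import numpy as np

# Hypothetical calibration offsets mapping each sensor's coordinates into the
# common coordinate system; real systems would use full extrinsic transforms.
SENSOR_OFFSETS = {"sensor_a": np.array([0.0, 0.0]), "sensor_b": np.array([5.0, 0.0])}

def to_common(sensor, point):
    return np.asarray(point, dtype=float) + SENSOR_OFFSETS[sensor]

def update_tracks(tracks, detections, gate=1.0):
    # Greedy nearest-neighbour association in the common coordinate system;
    # production trackers typically add a motion model and optimal assignment.
    for sensor, point in detections:
        p = to_common(sensor, point)
        best = min(tracks, key=lambda t: np.linalg.norm(t["pos"] - p), default=None)
        if best is not None and np.linalg.norm(best["pos"] - p) < gate:
            best["pos"] = p                                  # update existing track
        else:
            tracks.append({"id": len(tracks), "pos": p})     # start a new track
    return tracks

# Two sensors observe the same object near (1, 2) in common coordinates.
tracks = update_tracks([], [("sensor_a", (1.0, 2.0)), ("sensor_b", (-3.9, 2.1))])
print(tracks)  # tracking information that can be fed back to guide detection
```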
Abstract:
A state acquisition unit (2020) acquires a state of a monitoring target in a captured image captured by a camera (3040). A monitoring point acquisition unit (2040) acquires, from a monitoring point information storage unit (3020), a monitoring point corresponding to the state of the monitoring target acquired by the state acquisition unit (2020). The monitoring point indicates a position to be monitored in the captured image. A presentation unit (2060) presents the monitoring point on the captured image.
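A minimal sketch of state-dependent monitoring points; the state names, pixel coordinates, and the text-grid stand-in for a captured image are all hypothetical illustrations of the storage unit's contents:

```python
# Hypothetical contents of the monitoring point information storage unit:
# each monitoring-target state maps to pixel positions that should be watched.
MONITORING_POINTS = {
    "queue_forming": [(5, 2), (5, 8)],     # e.g. head and tail of the queue
    "crowd_gathering": [(3, 5)],
}

def present_monitoring_points(image, state):
    # Overlay a marker at every monitoring point registered for this state.
    for x, y in MONITORING_POINTS.get(state, []):
        image[y][x] = "X"
    return image

captured_image = [["."] * 10 for _ in range(10)]   # stand-in for a camera frame
for row in present_monitoring_points(captured_image, "queue_forming"):
    print("".join(row))
```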
Abstract:
Provided is an image processing apparatus (2000) including an index value calculation unit (2020) and a presentation unit (2040). The index value calculation unit (2020) acquires a plurality of images captured by a camera (3000) (captured images), and calculates an index value indicating the degree of change in the state of a monitoring target in the captured images, using the acquired captured images. The presentation unit (2040) presents an indication based on the index value calculated by the index value calculation unit (2020) on the captured image captured by the camera (3000).
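A minimal sketch of the index-value calculation and presentation; the mean absolute frame difference and the threshold are assumed stand-ins, since the abstract does not commit to a particular change metric or form of indication:

```python
import numpy as np

def index_value(captured_images):
    # Degree of change of the monitoring target across the captured images,
    # measured here as the mean absolute pixel difference between frames.
    frames = np.asarray(captured_images, dtype=float)
    return float(np.mean(np.abs(np.diff(frames, axis=0))))

def indication(value, threshold=10.0):
    # A minimal presentation: a label the apparatus could render on the image.
    return "CHANGING" if value > threshold else "STABLE"

clip = np.stack([np.full((4, 4), v) for v in (0.0, 8.0, 30.0)])  # toy frames
print(indication(index_value(clip)))   # -> CHANGING
```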
Abstract:
A similarity computation unit (130) derives a first probability P indicating that a first moving body appearing in a first video is the same as a second moving body appearing in a second video, on the basis of the similarity of the feature values of the moving bodies. A non-appearance probability computation unit (140) derives a second probability Q indicating that the first moving body is not the same as the second moving body, on the basis of an elapsed time after the first moving body exits from the first video. A person determination unit (150) determines whether the first moving body is the same as the second moving body by comparing the probabilities P and Q.
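A minimal sketch of the P-versus-Q decision; the cosine similarity used for P and the exponential form of Q (with a hypothetical typical_transit_s) are assumptions the abstract leaves open:

```python
import numpy as np

def probability_p(feature1, feature2):
    # P: derived from feature-value similarity (cosine, rescaled into [0, 1]).
    cos = np.dot(feature1, feature2) / (np.linalg.norm(feature1) * np.linalg.norm(feature2))
    return (cos + 1.0) / 2.0

def probability_q(elapsed_s, typical_transit_s=30.0):
    # Q: probability of non-identity from the elapsed time since the first
    # moving body exited the first video; this exponential form is assumed.
    return 1.0 - np.exp(-elapsed_s / typical_transit_s)

def same_moving_body(feature1, feature2, elapsed_s):
    # The determination compares the two probabilities directly.
    return probability_p(feature1, feature2) > probability_q(elapsed_s)

f1, f2 = np.array([0.9, 0.1, 0.2]), np.array([0.8, 0.2, 0.1])
print(same_moving_body(f1, f2, elapsed_s=12.0))   # True: similar and recent
```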
Abstract:
[Problem] To provide a motion condition estimation device, a motion condition estimation method and a motion condition estimation program capable of accurately estimating the motion condition of monitored subjects even in a crowded environment. [Solution] A motion condition estimation device according to the present invention is provided with a quantity estimating means 81 and a motion condition estimating means 82. The quantity estimating means 81 uses a plurality of chronologically consecutive images to estimate a quantity of monitored subjects for each local region in each image. The motion condition estimating means 82 estimates the motion condition of the monitored subjects from chronological changes in the quantities estimated in each local region.
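A minimal sketch of means 81 and 82; the grid partition and the pre-computed density maps are assumed inputs (a real device would estimate the per-image density from a crowd-counting model, which the abstract does not specify):

```python
import numpy as np

def local_quantities(density_map, grid=(2, 2)):
    # Means 81: quantity of monitored subjects per local region of one image.
    h, w = density_map.shape
    gh, gw = h // grid[0], w // grid[1]
    return np.array([[density_map[i*gh:(i+1)*gh, j*gw:(j+1)*gw].sum()
                      for j in range(grid[1])] for i in range(grid[0])])

def motion_condition(density_maps):
    # Means 82: chronological change of each region's quantity; positive
    # entries mean subjects flowing into a region, negative flowing out.
    quantities = np.stack([local_quantities(d) for d in density_maps])
    return np.diff(quantities, axis=0)

t0 = np.zeros((4, 4)); t0[:2, :2] = 1.0        # crowd in the top-left region
t1 = np.zeros((4, 4)); t1[:2, 2:] = 1.0        # crowd moved to the top-right
print(motion_condition([t0, t1]))
```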
Abstract:
A detection system which detects a mobile object includes: an image input unit for receiving an input of a plurality of image frames having different capturing times; an inter-background model distance calculation unit for calculating differences between a first background model generated on the basis of an image frame at the time of processing, a second background model in which an influence of the image frame at the time of processing is smaller than that of the first background model, and a third background model in which an influence of the image frame at the time of processing is smaller than that of the second background model; and a mobile object detection unit for detecting a first region in an image frame on the basis of the calculated differences.
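A minimal sketch of three background models with differing sensitivity to the frame at the time of processing; the running-average update rates and the thresholded inter-model distance are simplifying assumptions:

```python
import numpy as np

class BackgroundModels:
    # Three running-average background models with decreasing learning rates,
    # so the newest frame influences the first model most and the third least.
    def __init__(self, shape, rates=(0.5, 0.1, 0.01)):
        self.models = [np.zeros(shape) for _ in rates]
        self.rates = rates

    def update(self, frame):
        for i, r in enumerate(self.rates):
            self.models[i] = (1 - r) * self.models[i] + r * frame

    def detect(self, threshold=5.0):
        # A pixel whose short-term model disagrees with the slower models is
        # treated as part of a moving region (a simplification of the patent's
        # inter-background-model distance calculation).
        first, second, third = self.models
        return (np.abs(first - second) > threshold) | (np.abs(first - third) > threshold)

bg = BackgroundModels((4, 4))
for _ in range(50):
    bg.update(np.zeros((4, 4)))                     # static scene
frame = np.zeros((4, 4)); frame[1, 1] = 100.0       # a mobile object appears
bg.update(frame)
print(bg.detect())                                  # True only at the moving pixel
```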
Abstract:
An action analysis device includes: an acoustic analysis unit 1 for analyzing input acoustic information and generating acoustic analysis information indicating a feature of the acoustic information; a time difference determination unit 2 for determining a time difference between when an acoustic event identified by the acoustic analysis information occurs and when an event corresponding to the acoustic event occurs in input video obtained by capturing an image of a crowd; and an action analysis unit 3 for analyzing an action of the crowd corresponding to the acoustic event, using the input video, the acoustic analysis information, and the time difference.
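A minimal sketch of using the determined time difference to align the acoustic event with the crowd's visual reaction; the per-frame motion scores and the two-second analysis window are hypothetical simplifications of action analysis unit 3:

```python
def analyze_action(motion_scores, acoustic_event, event_time_s, time_diff_s, fps=30):
    # The acoustic event is heard at event_time_s; the corresponding visual
    # event appears time_diff_s later, so analysis starts at the shifted frame.
    start = int((event_time_s + time_diff_s) * fps)
    window = motion_scores[start:start + 2 * fps]   # two seconds of reaction
    # Stand-in analysis: count frames whose motion score exceeds a threshold.
    reacting = sum(1 for score in window if score > 0.5)
    return {"event": acoustic_event, "reacting_frames": reacting}

# Toy "input video": one motion score per frame; the crowd reacts one second
# after a scream heard four seconds in.
scores = [0.0] * 150 + [0.9] * 60 + [0.0] * 90
print(analyze_action(scores, "scream", event_time_s=4.0, time_diff_s=1.0))
```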