Abstract:
To provide an image processing system, an image processing method, and a program capable of detecting a group with high irregularity. An image processing system is provided with: a group detector that detects a group based on an input image captured by an image capturing device at a first time; a repeating group analyzer that determines whether a detected group has been previously detected; and an alert module that issues a report when the repeating group analyzer determines that the detected group has been previously detected.
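The pipeline above (detect a group, check whether it was seen before, alert on a repeat) can be sketched as follows. All class and method names here are hypothetical, and a real system would match groups by an appearance-based re-identification feature rather than an exact string key.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Group:
    # Stand-in for a re-identification signature of a detected group.
    signature: str

class RepeatingGroupAnalyzer:
    """Remembers every group seen so far and flags repeat sightings."""
    def __init__(self):
        self._seen = set()

    def is_repeat(self, group: Group) -> bool:
        repeat = group.signature in self._seen
        self._seen.add(group.signature)
        return repeat

def process_frame(groups, analyzer, alerts):
    # Alert module: record a report for each previously detected group.
    for g in groups:
        if analyzer.is_repeat(g):
            alerts.append(g.signature)

analyzer = RepeatingGroupAnalyzer()
alerts = []
process_frame([Group("A"), Group("B")], analyzer, alerts)  # first sightings
process_frame([Group("A")], analyzer, alerts)              # "A" reappears
```

Running the two frames above leaves one alert, for the group that was detected a second time.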
Abstract:
A video surveillance system includes: a detection unit that detects a predetermined event on the basis of an image captured by a first imaging apparatus; and a control unit that controls a second imaging apparatus such that the second imaging apparatus captures an image of a predetermined position after the detection of the predetermined event.
Abstract:
An object information extraction apparatus (2000) includes an image acquisition unit (2020), a frequency determination unit (2040), and an information extraction unit (2060). The image acquisition unit (2020) acquires a plurality of images corresponding to a predetermined unit time of a video. The frequency determination unit (2040) generates frequency information for each of a plurality of partial areas included in each of the plurality of images. For each partial area, the information extraction unit (2060) extracts information of an object included in that partial area from as many of the images corresponding to the predetermined unit time as the number indicated by the frequency information for that partial area.
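A minimal sketch of the per-area extraction step, under the assumption (not stated in the abstract) that each image is represented as a mapping from partial-area ID to that area's crop, and that the frequency information simply says how many of the unit-time images to sample per area:

```python
def extract_per_area(images, frequency):
    """images: list of {area_id: crop} dicts for one unit time.
    frequency: {area_id: number of images to extract from}.
    Returns, per partial area, object information taken from only
    as many images as its frequency indicates."""
    extracted = {}
    for area_id, n in frequency.items():
        extracted[area_id] = [img[area_id] for img in images[:n]]
    return extracted

images = [{"a1": "crop1", "a2": "x1"},
          {"a1": "crop2", "a2": "x2"},
          {"a1": "crop3", "a2": "x3"}]
# A fast-changing area ("a1") is sampled from all three images;
# a static area ("a2") from just one, saving processing.
result = extract_per_area(images, {"a1": 3, "a2": 1})
```

The point of the design is that per-area frequencies let the apparatus spend extraction effort only where the scene actually changes.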
Abstract:
A device (100) for detecting circumventing behavior includes an estimation unit (101) that estimates a degree of crowd congestion in relation to each of a plurality of partial areas of a target image, and a detection unit (102) that detects circumventing behavior of a crowd by using a distribution state and a temporal transition of the degree of congestion estimated by the estimation unit (101).
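One way to read "distribution state and temporal transition" is that a circumvented area stays nearly empty over time while adjacent areas grow congested around it. The sketch below implements that reading for a 1-D row of partial areas; the thresholds and neighborhood are illustrative assumptions.

```python
def detect_circumvention(history, empty_thresh=0.1, busy_thresh=0.6):
    """history: list of per-frame congestion lists (one value per area).
    Flags areas that stay nearly empty across all frames while an
    adjacent area has become busy in the latest frame."""
    n = len(history[0])
    flagged = []
    for i in range(n):
        stays_empty = all(frame[i] <= empty_thresh for frame in history)
        neighbor_busy = any(
            history[-1][j] >= busy_thresh
            for j in (i - 1, i + 1) if 0 <= j < n
        )
        if stays_empty and neighbor_busy:
            flagged.append(i)
    return flagged

history = [
    [0.2, 0.05, 0.2],   # earlier frame: light congestion everywhere
    [0.5, 0.05, 0.5],   # crowd grows on both sides of area 1
    [0.7, 0.05, 0.7],   # area 1 remains avoided as neighbors fill up
]
```

Here `detect_circumvention(history)` flags the middle area: the crowd's spatial distribution thickens around it over time while the area itself never fills.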
Abstract:
Although persons on board in a driver's seat and a passenger seat can be detected, it is difficult to detect persons on board in a back seat, and thus the number of passengers cannot be accurately measured. Image obtaining means 100 photographs the inside of a vehicle from outside the vehicle (e.g., from a road shoulder). View determining means 102 estimates how the vehicle appears in the image photographed by the image obtaining means 100, and outputs a view determination result. Person detecting means 101 performs front-face, side-face, and angled-face detection on the image photographed by the image obtaining means 100 to detect persons, and outputs a person detection result. In-vehicle position estimating means 103 determines at which positions in the vehicle the detected persons are present, using the person detection result and the view determination result, and outputs in-vehicle position estimation information. Result integrating means 104 integrates the detection results obtained for a plurality of consecutively photographed images to calculate the number of passengers.
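The integration step can be sketched as a vote over frames: each frame contributes the set of seat positions it could see, and a seat counts as occupied once it has been detected in enough frames. The seat labels and vote threshold below are illustrative assumptions, not the patent's encoding.

```python
from collections import Counter

def integrate_counts(per_frame_seats, min_votes=2):
    """per_frame_seats: list of sets of seat labels detected per frame.
    A seat is counted as occupied if it was detected in at least
    min_votes of the consecutively photographed frames."""
    votes = Counter()
    for seats in per_frame_seats:
        votes.update(seats)
    occupied = {seat for seat, v in votes.items() if v >= min_votes}
    return len(occupied)

frames = [
    {"driver", "passenger"},            # front seats visible early on
    {"driver", "rear_left"},            # rear seat appears as the car passes
    {"driver", "passenger", "rear_left"},
]
```

Integrating the three frames counts three passengers, including the rear-seat occupant that no single frame alone would reliably establish.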
Abstract:
A video signature extraction device includes: an each-picture feature extraction unit which extracts a feature of each picture (a frame or a field) from an input video as an each-picture visual feature; a time axial direction change region extraction unit which analyzes image changes in the time direction for predetermined regions in a picture of the video, obtains a region having a large image change, and generates change region information designating that region; an each-region feature extraction unit which extracts a feature of the region corresponding to the change region information from the video as an each-region visual feature; and a multiplexing unit which multiplexes the each-picture visual feature, the each-region visual feature, and the change region information to generate a video signature.
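A toy sketch of the flow: per-picture features plus per-region features for the regions whose temporal change exceeds a threshold, multiplexed into one signature record per picture. The data shapes, the mean-absolute-difference change measure, and the threshold are all assumptions for illustration, not the patent's actual encoding.

```python
def temporal_change(prev, curr):
    # Mean absolute difference between consecutive region feature vectors.
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def build_signature(pictures, region_grids, change_thresh=10.0):
    """pictures: list of per-picture feature vectors.
    region_grids: per picture, {region_id: region feature vector}.
    Multiplexes picture features, change-region info, and region
    features for regions with a large change in the time direction."""
    signature = []
    for t in range(1, len(pictures)):
        change_regions = {
            rid: feat for rid, feat in region_grids[t].items()
            if temporal_change(region_grids[t - 1][rid], feat) > change_thresh
        }
        signature.append({
            "picture_feature": pictures[t],
            "change_region_info": sorted(change_regions),
            "region_features": change_regions,
        })
    return signature

pictures = [[1, 2], [3, 4]]
grids = [{"r0": [0, 0], "r1": [0, 0]},
         {"r0": [50, 50], "r1": [1, 1]}]  # r0 changes a lot, r1 barely
sig = build_signature(pictures, grids)
```

Only the strongly changing region contributes an each-region feature, which is the point: region features are kept only where they add discriminating power over the per-picture feature.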
Abstract:
An object tracking device includes: a location information acquisition means configured to acquire location information of an object detected by a sensor; a sensor speed acquisition means configured to acquire speed information of the sensor; a parameter control means configured to generate parameter control information, including information for controlling a parameter used in a tracking process of the object, on the basis of the speed information acquired by the sensor speed acquisition means; and an object tracking means configured to perform the tracking process using the parameter control information generated by the parameter control means and the location information acquired by the location information acquisition means.
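A minimal sketch of why the sensor's own speed would control a tracking parameter: if the sensor moves, objects shift in its frame even when stationary, so the association gate (how far a detection may lie from a track and still be matched) should widen with sensor speed. The 1-D positions, gate formula, and margin below are illustrative assumptions.

```python
def gating_distance(base_gate, sensor_speed, dt, margin=1.5):
    """Parameter control: widen the association gate by (a margin times)
    the distance the sensor itself travels between updates."""
    return base_gate + margin * sensor_speed * dt

def associate(track_pos, detections, gate):
    """Tracking step: return the nearest detection within the gate, else None."""
    best = min(detections, key=lambda d: abs(d - track_pos), default=None)
    if best is not None and abs(best - track_pos) <= gate:
        return best
    return None
```

With a base gate of 2.0 m, a detection 4.0 m from the track is rejected when the sensor is stationary, but accepted when the sensor moves at 10 m/s with a 0.2 s update interval (gate = 2.0 + 1.5 × 10 × 0.2 = 5.0 m), so ego-motion no longer breaks the association.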
Abstract:
An information processing apparatus (10) includes: a time and space information acquisition unit (110) that acquires high-risk time and space information indicating a spatial region, and a corresponding time slot, in which the possibility of an accident occurring or of a crime being committed is increased; a possible surveillance target acquisition unit (120) that identifies, on the basis of the high-risk time and space information, a video to be analyzed from among a plurality of videos generated by capturing images of a plurality of places, and analyzes the identified video to acquire information on a possible surveillance target; and a target time and space identification unit (130) that identifies, on the basis of the information on the possible surveillance target, at least one of a spatial region where surveillance is to be conducted (at least a portion of the indicated spatial region) or a time slot when surveillance is to be conducted, from among the spatial region and the time slot indicated by the high-risk time and space information.
Abstract:
An information processing device according to the present invention includes: a storage means that stores reference attribute information representing an attribute of a person corresponding to a target place; an extraction means that extracts person attribute information representing an attribute of a person in a captured image obtained by capturing an image of the target place; and a detection means that detects a predetermined person in the captured image based on the reference attribute information and the person attribute information.
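The detection step can be sketched as a comparison between stored reference attributes for the place and attributes extracted from the captured image. The set-based encoding and the "no overlap means flag" rule below are illustrative assumptions about what "based on the reference attribute information and the person attribute information" might look like.

```python
def detect_out_of_place(reference_attrs, person_attrs):
    """reference_attrs: set of attributes expected of persons at the
    target place (the stored reference attribute information).
    person_attrs: {person_id: set of attributes extracted from the image}.
    Flags as the predetermined person anyone sharing no attribute
    with the reference."""
    return [pid for pid, attrs in person_attrs.items()
            if not (attrs & reference_attrs)]

reference = {"uniform", "staff_badge"}
people = {
    "p1": {"uniform"},             # consistent with the target place
    "p2": {"backpack", "helmet"},  # no overlap with reference -> detected
}
```

Here `detect_out_of_place(reference, people)` flags only the person whose extracted attributes do not fit the place.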