Abstract:
An appearance presentation system is provided. The appearance presentation system presents the appearance of an object to be monitored at a position that the user designates in an image, so that the user can grasp in advance the extent to which the image to be captured with a camera is suitable for an image recognition process. A display control unit 10 displays, on a display device, an image obtained by superimposing an object indicator, which indicates the object to be monitored, on an image to be captured when the camera, whose position and posture are determined, shoots a predetermined region to be monitored. A position designation reception unit 4 receives the designation of the position of the object indicator in the image. An image generation unit 3 generates an image to be captured when the camera shoots a state in which the object to be monitored is placed at a position that is in the region to be monitored and that corresponds to the position designated in the image. The display control unit 10 then extracts the part corresponding to the object to be monitored from the image generated by the image generation unit 3, and displays that part on the display device.
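The core geometric step here is mapping between a designated pixel and a position in the monitored region, given the camera's determined position and posture. A minimal sketch of that mapping under a standard pinhole model (the function names, the intrinsic matrix K, and the ground-plane assumption z = 0 are illustrative, not taken from the abstract):

```python
import numpy as np

def project_to_image(point_world, R, t, K):
    """Project a 3D point in the monitored region into the camera image.

    R (3x3) and t (3,) encode the camera's determined position and
    posture; K is a 3x3 intrinsic matrix.  Returns pixel coordinates
    (u, v), or None if the point lies behind the camera.
    """
    p_cam = R @ np.asarray(point_world, dtype=float) + t
    if p_cam[2] <= 0:          # point behind the camera
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def pixel_to_ground(u, v, R, t, K):
    """Back-project a user-designated pixel onto the ground plane
    (z = 0) to obtain the corresponding monitored-region position."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam          # rotate the viewing ray into world frame
    cam_origin = -R.T @ t              # camera center in world coordinates
    s = -cam_origin[2] / ray_world[2]  # intersect the ray with z = 0
    return cam_origin + s * ray_world
```

With these two functions, a designated pixel can be turned into a region position for the image generation unit, and the generated object's position can be projected back to place the indicator.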
Abstract:
To provide a time synchronization information computation device, a time synchronization information computation method, and a time synchronization information computation program capable of synchronizing the times of a plurality of cameras without the need for a special device. There are provided a plurality of video acquisition means 100-1 to 100-N for acquiring videos; a plurality of visual event detection means 101-1 to 101-N, provided corresponding to the plurality of video acquisition means, for analyzing the videos acquired by the plurality of video acquisition means 100-1 to 100-N, detecting visual events, and generating visual event detection information including time information on when the visual events occur; and visual event integration means 102 for integrating the visual event detection information generated by the plurality of visual event detection means 101-1 to 101-N and synchronizing the times of the videos acquired by the plurality of video acquisition means 100-1 to 100-N.
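One way to read the integration step is that events visible to several cameras yield matched timestamps from which per-camera clock offsets follow. A minimal sketch under that reading (the greedy nearest-event matching, the matching window, and the median estimator are illustrative choices, not details from the abstract):

```python
from statistics import median

def estimate_offsets(event_times, ref=0, window=0.5):
    """Estimate per-camera clock offsets from detected visual events.

    event_times: one list of event timestamps per camera, each on that
    camera's own clock.  Each camera's events are matched greedily to
    the reference camera's nearest event within +/- `window` seconds,
    and the offset is the median timestamp difference.
    """
    offsets = [0.0] * len(event_times)
    ref_events = sorted(event_times[ref])
    for cam, times in enumerate(event_times):
        if cam == ref:
            continue
        diffs = []
        for t in sorted(times):
            nearest = min(ref_events, key=lambda r: abs(r - t))
            if abs(nearest - t) <= window:
                diffs.append(t - nearest)
        if diffs:
            offsets[cam] = median(diffs)
    return offsets

def synchronize(event_times):
    """Shift every camera's timestamps into the reference camera's clock."""
    offsets = estimate_offsets(event_times)
    return [[t - off for t in times]
            for times, off in zip(event_times, offsets)]
```

The median makes the offset estimate robust to occasional spurious or missed event detections.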
Abstract:
A guidance processing apparatus (100) includes an information acquisition unit (101) that acquires a plurality of different pieces of guidance information on the basis of states of a plurality of people within one or more images, and a control unit (102) that performs control of a plurality of target devices present in different spaces, or time-division control of a single target device, so as to set a plurality of different states corresponding to the plurality of pieces of guidance information.
Abstract:
An information processing device of the present invention includes a detection means that detects the content of an image, a determination means that determines a processing mode for the image based on the result of detection of the content of the image, and an execution means that executes processing for a captured image corresponding to the processing mode.
Abstract:
At least one processor generates a crowd state image: an image in which a person image corresponding to a person state is synthesized, at a predetermined size, with a previously prepared image. The previously prepared image is a background image that includes no person. The at least one processor specifies a training label for the crowd state image, and outputs a pair of the crowd state image and the training label.
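A minimal sketch of this synthesis step (the label contents shown here, a person count with positions, are one plausible choice; the abstract leaves the label's exact form open):

```python
import numpy as np

def synthesize_crowd_image(background, person_patch, positions):
    """Paste person patches, already scaled to the predetermined size,
    onto a person-free background image.

    Returns the crowd state image together with its training label.
    positions: (row, col) of each patch's top-left corner.
    """
    img = background.copy()                # keep the prepared background intact
    h, w = person_patch.shape[:2]
    for (y, x) in positions:
        img[y:y + h, x:x + w] = person_patch
    label = {"count": len(positions), "positions": list(positions)}
    return img, label
```

Varying the positions (and, in a fuller version, the patches and their states) yields as many labeled crowd state images as needed for training.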
Abstract:
An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
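One simple way the two analyses could combine (a simplifying assumption for illustration, not a claim about the patented estimator): the fixed camera's flow analysis gives a head count for the range the moving camera does not cover, and the attribute ratios observed inside the moving camera's range are extrapolated to it.

```python
def estimate_uncovered_distribution(flow_count, observed_ratio):
    """Estimate the attribute distribution outside the moving camera's
    capturing range.

    flow_count: number of people the fixed camera's crowd-flow analysis
    places in the uncovered range.  observed_ratio: attribute ratios
    measured by the moving camera, e.g. {"adult": 0.7, "child": 0.3}.
    Assumes the observed ratios extend unchanged to the uncovered range.
    """
    return {attr: flow_count * r for attr, r in observed_ratio.items()}
```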
Abstract:
An information processing apparatus (2000) includes a recognizer (2020). An image (10) is input to the recognizer (2020). The recognizer (2020) outputs, for a crowd included in the input image (10), a label (30) describing a type of the crowd and structure information (40) describing a structure of the crowd. The structure information (40) indicates a location and a direction of each object included in the crowd. The information processing apparatus (2000) acquires training data (50) which includes a training image (52), a training label (54), and training structure information (56). The information processing apparatus (2000) performs training of the recognizer (2020) using the label (30) and the structure information (40), which are obtained by inputting the training image (52) to the recognizer (2020), together with the training label (54) and the training structure information (56).
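Training on both outputs amounts to a joint objective: one term compares the predicted label with the training label, another compares the predicted structure with the training structure. A minimal sketch of such an objective (the cross-entropy and squared-error terms and the weight `alpha` are illustrative choices, not details from the abstract):

```python
import math

def joint_loss(pred_label_probs, true_label, pred_struct, true_struct,
               alpha=1.0):
    """Combined training signal for the recognizer.

    pred_label_probs: predicted probability per crowd-type label.
    pred_struct / true_struct: per-object (x, y, direction) tuples.
    The label term is a cross-entropy; the structure term is a mean
    squared error over object locations and directions.
    """
    label_loss = -math.log(pred_label_probs.get(true_label, 1e-12))
    struct_loss = sum(
        (px - tx) ** 2 + (py - ty) ** 2 + (pd - td) ** 2
        for (px, py, pd), (tx, ty, td) in zip(pred_struct, true_struct)
    ) / max(len(true_struct), 1)
    return label_loss + alpha * struct_loss
```

Minimizing this loss over the training data (50) drives the recognizer toward producing both the training label (54) and the training structure information (56).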
Abstract:
To detect a stagnant object by image analysis with high accuracy, the present invention provides an image processing apparatus 10 including: an acquisition unit 11 that acquires partial region information for each frame image, the partial region information indicating a situation of a target object in each of a plurality of partial regions within one image; and an extraction unit 12 that extracts, based on the partial region information for each frame image, a partial region from the plurality of partial regions, the partial region to be extracted satisfying at least one of a condition that the presence of a plurality of the target objects continues at a predetermined level or higher and a condition that the uniformity of the state of an aggregation constituted by the plurality of target objects continues at a predetermined level or higher.
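A minimal sketch of the extraction unit's two conditions (the thresholds, window length, and the per-region (count, uniformity) representation are illustrative assumptions):

```python
def extract_stagnant_regions(region_info, min_count=2, min_frames=5,
                             uniformity_thresh=0.8):
    """Extract partial regions satisfying either stagnation condition.

    region_info: {region_id: [(count, uniformity), ...]} with one
    (object count, uniformity score in [0, 1]) pair per frame image.
    A region is extracted when, over the last `min_frames` frames,
    either several target objects are continuously present or the
    aggregation's state stays continuously uniform.
    """
    stagnant = []
    for region, frames in region_info.items():
        if len(frames) < min_frames:
            continue                       # not enough history yet
        recent = frames[-min_frames:]
        presence_run = all(c >= min_count for c, _ in recent)
        uniform_run = all(u >= uniformity_thresh for _, u in recent)
        if presence_run or uniform_run:    # "at least one of" the conditions
            stagnant.append(region)
    return stagnant
```

Working on per-region summaries rather than raw pixels is what lets the check run cheaply for every frame image.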
Abstract:
A crowd type classification system according to an aspect of the present invention includes: a staying crowd detection unit that detects, from a plurality of local regions determined in an image acquired by an image acquisition device, a local region indicating a staying crowd; a crowd direction estimation unit that estimates a direction of the crowd from the image of the part corresponding to the detected local region, and appends the direction of the crowd to the local region; and a crowd type classification unit that classifies, for the local regions to which directions are appended, a type of the crowd including a plurality of staying persons by using a relative vector indicating a relative positional relationship between two local regions and the directions of the crowds in the two local regions, and outputs the type and positions of the crowds.
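The classification step compares each crowd's direction against the relative vector between the two local regions. A minimal sketch for one pair of staying crowds (the type names "facing" and "same_direction" and the angle tolerance are illustrative, not taken from the abstract):

```python
import math

def classify_pair(pos_a, dir_a, pos_b, dir_b, tol_deg=30.0):
    """Classify a pair of staying crowds from their relative vector
    and their estimated directions.

    pos_*: (x, y) centers of the two local regions.
    dir_*: crowd direction angles in degrees.
    """
    # Angle of the relative vector from region A to region B.
    rel = math.degrees(math.atan2(pos_b[1] - pos_a[1], pos_b[0] - pos_a[0]))

    def angdiff(a, b):
        """Smallest absolute difference between two angles, in degrees."""
        return abs((a - b + 180) % 360 - 180)

    if angdiff(dir_a, rel) <= tol_deg and angdiff(dir_b, rel + 180) <= tol_deg:
        return "facing"          # the two crowds look toward each other
    if angdiff(dir_a, dir_b) <= tol_deg:
        return "same_direction"  # e.g. both oriented toward one spot
    return "other"
```

Applying such a pairwise rule over all direction-appended local regions yields the crowd types and positions the system outputs.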