Abstract:
A video transmission apparatus (10) includes: a specified-position information acquisition unit (110) that acquires specified-position information indicating a specified position in an area to be surveilled, the area including the imaging regions corresponding to a plurality of imaging apparatuses; and a video transmission control unit (120) that selects at least one imaging apparatus from among the plurality of imaging apparatuses, using the specified-position information and imaging position information indicating a position or an imaging range of each of the plurality of imaging apparatuses, and transmits a video captured by the selected imaging apparatus to a video sharing apparatus that is to share the video.
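As a concrete reading of the selection step, here is a minimal Python sketch. It assumes circular imaging ranges around 2-D camera positions; the names ImagingApparatus and select_apparatuses are hypothetical and not from the abstract.

```python
import math
from dataclasses import dataclass

# Hypothetical data model: each imaging apparatus is described by its
# position and the radius of its imaging range (an assumption; the
# abstract only says "a position or an imaging range").
@dataclass
class ImagingApparatus:
    camera_id: str
    x: float
    y: float
    range_radius: float

def select_apparatuses(cameras, specified_position):
    """Return every camera whose imaging range covers the specified position."""
    sx, sy = specified_position
    return [cam for cam in cameras
            if math.hypot(cam.x - sx, cam.y - sy) <= cam.range_radius]

cameras = [
    ImagingApparatus("cam-1", 0.0, 0.0, 10.0),
    ImagingApparatus("cam-2", 30.0, 0.0, 10.0),
]
for cam in select_apparatuses(cameras, (3.0, 4.0)):
    print(f"transmit video from {cam.camera_id} to the video sharing apparatus")
```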
Abstract:
An object tracking apparatus, method, and computer-readable medium for detecting an object from output information of sensors, tracking the object on the basis of a plurality of detection results, generating tracking information of the object represented in a common coordinate system, outputting the tracking information, and detecting the object on the basis of the tracking information.
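A minimal sketch of the common-coordinate-system step, under assumptions: each sensor reports detections in its own local frame, a per-sensor offset maps them into a shared frame, and the fused position becomes the tracking information. All names (SENSOR_OFFSETS, to_common, fuse) are illustrative.

```python
# Map each sensor's local detections into a common coordinate system
# via a fixed per-sensor offset (assumed calibration, for illustration).
SENSOR_OFFSETS = {"sensor-a": (0.0, 0.0), "sensor-b": (5.0, -2.0)}

def to_common(sensor_id, detection):
    ox, oy = SENSOR_OFFSETS[sensor_id]
    return (detection[0] + ox, detection[1] + oy)

def fuse(points):
    # Fuse multiple detection results by averaging their positions.
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

detections = {"sensor-a": (2.0, 3.0), "sensor-b": (-2.9, 5.1)}
common = [to_common(s, d) for s, d in detections.items()]
tracking_info = {"object_id": 1, "position": fuse(common)}
print(tracking_info)  # tracking information in the common coordinate system
```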
Abstract:
A surveillance information generation apparatus (2000) includes a first surveillance image acquisition unit (2020), a second surveillance image acquisition unit (2040), and a generation unit (2060). The first surveillance image acquisition unit (2020) acquires a first surveillance image (12) generated by a fixed camera (10). The second surveillance image acquisition unit (2040) acquires a second surveillance image (22) generated by a moving camera (20). The generation unit (2060) generates surveillance information (30) relating to surveillance of an object, using the first surveillance image (12) and the second surveillance image (22).
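One possible shape of the generation step, sketched in Python. The abstract does not specify how the two images are combined, so this simply assembles detections from each camera's image; detect_objects and generate_surveillance_info are hypothetical names.

```python
def detect_objects(image):
    # Stand-in for a real detector; here each "image" is a dict that
    # already carries its detections (an assumption for illustration).
    return image.get("detections", [])

def generate_surveillance_info(first_image, second_image):
    return {
        "fixed_camera_detections": detect_objects(first_image),    # image (12)
        "moving_camera_detections": detect_objects(second_image),  # image (22)
    }

first_image = {"detections": [("person", (120, 80))]}   # from fixed camera (10)
second_image = {"detections": [("person", (40, 60))]}   # from moving camera (20)
print(generate_surveillance_info(first_image, second_image))  # surveillance information (30)
```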
Abstract:
A surveillance apparatus (2000) includes a first calculation unit (2020), an extraction unit (2040), and a notification unit (2060). The first calculation unit (2020) calculates a risk index value for a first region (40) on a current route (20). The risk index value of the first region (40) indicates the degree of concern that a risk caused by congestion of people (such as crowd panic) will occur in the first region (40). A captured image generated by a camera (50) is used to calculate the risk index value of the first region (40). The extraction unit (2040) extracts a bypass route (30) when the risk index value of the first region (40) is equal to or greater than a predetermined threshold value. The notification unit (2060) notifies a user that the route along which people are guided is to be switched to the extracted bypass route (30).
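The threshold-and-switch logic reads naturally as a small control loop. Below is a sketch under assumptions: the risk index is a toy occupancy ratio derived from the captured image, and RISK_THRESHOLD, risk_index, and surveil are hypothetical names.

```python
RISK_THRESHOLD = 0.7  # assumed value for the predetermined threshold

def risk_index(captured_image):
    """Toy risk index: people counted in the image over region capacity."""
    return captured_image["person_count"] / captured_image["region_capacity"]

def surveil(captured_image, current_route, bypass_routes):
    index = risk_index(captured_image)           # first calculation unit
    if index >= RISK_THRESHOLD:
        bypass = bypass_routes[0]                # extraction unit picks a bypass
        print(f"risk index {index:.2f}: switch from {current_route} to {bypass}")
    else:
        print(f"risk index {index:.2f}: keep {current_route}")

surveil({"person_count": 85, "region_capacity": 100}, "route-20", ["route-30"])
```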
Abstract:
To efficiently search for an object associated with a sensed event, an information processing apparatus includes: a sensor that analyzes a captured video and senses whether a predetermined event has occurred; a determining unit that, in response to sensing of the event occurrence, determines a type of object to be used as query information based on the type of the event; and a generator that detects an object of the determined type from the video and generates the query information based on the detected object.
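The event-type-to-object-type determination can be read as a lookup followed by filtering. A minimal sketch, where the mapping table and all names (EVENT_TO_QUERY_TYPE, generate_query) are assumptions:

```python
# Assumed mapping from sensed event type to the object type used as
# query information (illustrative entries only).
EVENT_TO_QUERY_TYPE = {
    "abandoned_bag": "bag",
    "intrusion": "person",
    "wrong_way_vehicle": "vehicle",
}

def generate_query(event_type, detected_objects):
    query_type = EVENT_TO_QUERY_TYPE[event_type]        # determining unit
    candidates = [o for o in detected_objects if o["type"] == query_type]
    # Generator: build query information from the detected objects.
    return {"type": query_type, "features": [o["feature"] for o in candidates]}

objects = [{"type": "bag", "feature": "red-bag-feature"},
           {"type": "person", "feature": "person-feature"}]
print(generate_query("abandoned_bag", objects))
```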
Abstract:
A monitoring device includes a crowd behavior analysis unit 21 and an abnormality degree calculation unit 24. The crowd behavior analysis unit 21 specifies a behavior pattern of a crowd from input video. The abnormality degree calculation unit 24 calculates an abnormality degree from a change in the behavior pattern.
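A toy reading of "abnormality degree from a change in the behavior pattern": represent the pattern as a histogram over movement directions and score the distance between consecutive patterns. This representation and the name abnormality_degree are pure assumptions for illustration.

```python
def abnormality_degree(previous_pattern, current_pattern):
    # L1 distance between two behavior-pattern histograms: a larger
    # change in the crowd's behavior yields a higher abnormality degree.
    return sum(abs(p - c) for p, c in zip(previous_pattern, current_pattern))

calm    = [0.25, 0.25, 0.25, 0.25]   # movement evenly spread over directions
scatter = [0.70, 0.10, 0.10, 0.10]   # sudden rush in one direction
print(abnormality_degree(calm, scatter))
```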
Abstract:
An action analysis device includes: an acoustic analysis unit 1 for analyzing input acoustic information and generating acoustic analysis information indicating a feature of the acoustic information; a time difference determination unit 2 for determining the time difference between when an acoustic event identified by the acoustic analysis information occurs and when an event corresponding to the acoustic event appears in input video obtained by capturing an image of a crowd; and an action analysis unit 3 for analyzing an action of the crowd corresponding to the acoustic event, using the input video, the acoustic analysis information, and the time difference.
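A minimal sketch of the three units' interaction, under assumptions: the acoustic event (e.g., a loud bang) and the corresponding visual event (the crowd starting to move) each carry a timestamp, and the action analysis interprets the lag between them. The labels and names here are hypothetical.

```python
def time_difference(acoustic_event_t, video_event_t):
    # Time difference determination unit 2: lag between the sound and
    # the corresponding event appearing in the video.
    return video_event_t - acoustic_event_t

def analyze_action(acoustic_label, lag_seconds):
    # Action analysis unit 3 (toy rule): a short lag after a bang is
    # read as an immediate crowd reaction to the sound.
    if acoustic_label == "bang" and lag_seconds < 2.0:
        return "crowd fleeing in immediate reaction to the sound"
    return "no clear reaction to the acoustic event"

lag = time_difference(acoustic_event_t=10.0, video_event_t=11.2)
print(analyze_action("bang", lag))
```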
Abstract:
An image processing system, an image processing method, and a program capable of suppressing errors in associating a person appearing in videos are provided. The image processing system includes: an image acquiring unit which accepts input of videos captured by a plurality of video cameras; a next camera predicting unit which predicts the video camera on which an object detected in a video is to appear next; and a display control unit which announces the confusability of the detected object, according to a similarity between that object and another object likely to appear in the video of the predicted video camera, and which causes a display device to display the video from the predicted video camera.
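A minimal sketch of the prediction-plus-confusability idea, under assumptions: the next camera is read from a fixed topology map, and the confusability is the highest similarity between the tracked object and the other objects expected on that camera. NEXT_CAMERA, similarity, and confusability are hypothetical names, and the scalar "features" are a stand-in for real appearance descriptors.

```python
# Assumed camera topology: which camera an object leaving each camera
# is expected to appear on next.
NEXT_CAMERA = {"cam-1": "cam-2", "cam-2": "cam-3"}

def similarity(a, b):
    # Toy appearance similarity in [0, 1] between scalar features.
    return 1.0 - min(1.0, abs(a - b))

def confusability(target_feature, expected_features):
    # Highest similarity to any other object expected on the next camera:
    # the more similar another object is, the easier they are to confuse.
    return max((similarity(target_feature, f) for f in expected_features),
               default=0.0)

next_cam = NEXT_CAMERA["cam-1"]                 # next camera predicting unit
score = confusability(0.42, [0.40, 0.90])       # other objects on next_cam
print(f"display {next_cam}; confusability {score:.2f}")
```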