Abstract:
An information processing apparatus (2000) includes a determination unit (2020) and a notification unit (2040). The determination unit (2020) determines whether a field of view of a second camera is correct, on the basis of a first captured image (40), a second captured image (50), and relationship information (information indicating the relationship to be satisfied between a field of view of a first camera and the field of view of the second camera). In a case in which the field of view of the second camera is not correct, the notification unit (2040) notifies that the field of view of the second camera is not correct. The first camera is provided in a head-mounted display worn by a person. The second camera is provided in a part other than the head-mounted display.
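The abstract does not specify the form of the relationship information, so the following is a minimal sketch under one assumption: the relationship is a required minimum overlap ratio between the two fields of view, modeled as axis-aligned rectangles in a shared frame. All names and the rectangle model are illustrative, not from the publication.

```python
# Hypothetical sketch of the field-of-view check. A field of view is an
# axis-aligned rectangle (x1, y1, x2, y2); the "relationship information"
# is assumed to be a minimum overlap ratio the second camera must satisfy.

def overlap_ratio(fov_a, fov_b):
    """Area of the intersection divided by the area of fov_b."""
    ax1, ay1, ax2, ay2 = fov_a
    bx1, by1, bx2, by2 = fov_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    area_b = (bx2 - bx1) * (by2 - by1)
    return (ix * iy) / area_b if area_b > 0 else 0.0

def field_of_view_is_correct(first_fov, second_fov, min_overlap):
    # determination unit: compare the observed relationship with the
    # required one given by the relationship information
    return overlap_ratio(first_fov, second_fov) >= min_overlap

def notify_if_incorrect(first_fov, second_fov, min_overlap):
    # notification unit: emit a notification only when the check fails
    if not field_of_view_is_correct(first_fov, second_fov, min_overlap):
        return "field of view of the second camera is not correct"
    return None
```

In practice the first captured image and second captured image would be used to estimate the two rectangles; here they are passed in directly to keep the sketch small.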
Abstract:
A surveillance system (1) includes an area information acquisition unit (101), a request information provision unit (102), and a participation consent reception unit (103). The area information acquisition unit (101) acquires information of a surveillance-desired area. The request information provision unit (102) provides participation request information for surveillance that is conducted by using an image capturing unit of a portable terminal in the surveillance-desired area. The participation consent reception unit (103) receives participation consent as a response to the participation request information, from the portable terminal.
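The three units above form a simple request/consent exchange, which can be sketched as follows. The class and method names, and the dictionary message format, are illustrative assumptions, not the publication's interfaces.

```python
# Minimal sketch of the acquisition / request / consent flow.

class SurveillanceCoordinator:
    def __init__(self):
        self.area = None
        self.consenting_terminals = set()

    def acquire_area(self, area):
        # area information acquisition unit (101)
        self.area = area

    def participation_request(self):
        # request information provision unit (102): request sent to
        # portable terminals in or near the surveillance-desired area
        return {"type": "participation_request", "area": self.area}

    def receive_consent(self, terminal_id):
        # participation consent reception unit (103): a terminal's
        # response to the participation request
        self.consenting_terminals.add(terminal_id)
```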
Abstract:
The present invention is directed to a helmet wearing determination system including an imaging means that is installed at a predetermined position and images a two-wheeled vehicle traveling on a road; and a helmet wearing determination means that processes an image captured by the imaging means, estimates a rider head region corresponding to the head of a person riding the two-wheeled vehicle, compares image characteristics of the rider head region with image characteristics of the head when a helmet is worn and/or when a helmet is not worn, and determines whether or not the rider wears a helmet.
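The comparison of image characteristics can be sketched as a nearest-template test. The feature vectors and template values below are purely illustrative assumptions; the abstract does not specify which image characteristics are used.

```python
import math

# Illustrative sketch: "image characteristics" of the estimated rider
# head region are modeled as a small feature vector, compared against
# assumed templates for the helmet-worn and no-helmet cases.

HELMET_TEMPLATE = (0.8, 0.2)     # assumed characteristics with a helmet
NO_HELMET_TEMPLATE = (0.3, 0.7)  # assumed characteristics without one

def wears_helmet(head_features):
    """True when the head region is closer to the helmet template."""
    return (math.dist(head_features, HELMET_TEMPLATE)
            < math.dist(head_features, NO_HELMET_TEMPLATE))
```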
Abstract:
In an object detection device, the plurality of object detection units output a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data. The weight computation unit uses weight computation parameters to compute a weight for each of the plurality of object detection units on the basis of the image data and the outputs of the plurality of object detection units, the weight being used when the scores outputted by the plurality of object detection units are merged. The merging unit merges the scores outputted by the plurality of object detection units for each partial region according to the weights computed by the weight computation unit. The first loss computation unit computes a difference between a ground truth label of the image data and the score merged by the merging unit as a first loss. Then, the first parameter correction unit corrects the weight computation parameters so as to reduce the first loss.
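The loop described above (weight computation, merging, loss, parameter correction) can be sketched as a toy training step. The shapes, the linear weight-computation model, the squared-error loss, and the gradient step are all illustrative assumptions; the publication does not fix these choices.

```python
import numpy as np

# Toy sketch: two detectors emit per-region scores; a linear model over
# an image feature produces per-detector weights; the merged score is
# compared with the ground truth and the weight parameters are nudged
# to reduce the loss.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def training_step(params, image_feat, detector_scores, truth, lr=0.1):
    # weight computation unit: weights from the image data
    logits = params @ image_feat              # (n_detectors,)
    w = softmax(logits)
    # merging unit: weighted sum of per-region scores
    merged = w @ detector_scores              # (n_regions,)
    # first loss computation unit: squared-error loss vs. ground truth
    err = merged - truth
    loss = float((err ** 2).mean())
    # first parameter correction unit: gradient step on the parameters
    d_merged = 2.0 * err / err.size           # dL/d(merged)
    d_w = detector_scores @ d_merged          # dL/dw
    d_logits = w * (d_w - w @ d_w)            # backprop through softmax
    params -= lr * np.outer(d_logits, image_feat)
    return loss
```

Repeating the step shifts weight toward the detector whose scores match the ground truth, so the loss falls over iterations.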
Abstract:
An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
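One way to read the estimation step is as a flow-weighted mixture: the fixed camera's flow says which observed areas feed the uncovered area, and the moving camera's attribute distributions for those areas are mixed accordingly. The data model below is an illustrative assumption, not the publication's method.

```python
# Hypothetical sketch of the estimating unit. `inflows` maps each
# observed source area to the fraction of the target area's crowd that
# the flow analysis says arrives from it; `observed` maps each observed
# area to its attribute distribution from the moving camera.

def estimate_distribution(inflows, observed):
    est = {}
    for area, frac in inflows.items():
        for attr, p in observed[area].items():
            est[attr] = est.get(attr, 0.0) + frac * p
    return est
```

For example, if 70% of the uncovered area's crowd flows in from area A and 30% from area B, the estimate is the corresponding mixture of A's and B's distributions.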
Abstract:
In an object detection device, a plurality of object detection units output a score indicating the probability that a predetermined object exists for each partial region set with respect to inputted image data. On the basis of the image data, a weight computation unit uses weight computation parameters to compute weights for each of the plurality of object detection units, the weights being used when the scores outputted by the plurality of object detection units are merged. A merging unit merges the scores outputted by the plurality of object detection units for each partial region according to the weights computed by the weight computation unit. A loss computation unit computes a difference between a ground truth label of the image data and the scores merged by the merging unit as a loss. Then, a parameter correction unit corrects the weight computation parameters so as to reduce the computed loss.
Abstract:
An information processing apparatus (10) includes an event detection unit (110), an input reception unit (120), and a processing execution unit (130). The event detection unit (110) detects a specific event from video data. The input reception unit (120) receives, from a user, input for specifying processing to be executed. The processing execution unit (130) executes first processing specified by input received by the input reception unit (120), and executes second processing of generating learning data used for machine learning and storing the generated learning data in a learning data storage unit (40). The processing execution unit (130) discriminates, in the second processing, based on a classification of the first processing specified by input received by the input reception unit (120), whether a detection result of a specific event is correct, and generates learning data including at least a part of the video data, category information indicating a category of the specific event detected by the event detection unit (110), and correct/incorrect information indicating whether the detection result of the specific event is correct or incorrect.
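The second processing above can be sketched as follows. The mapping from the user's chosen first processing to a correct/incorrect judgment, and the record's field names, are assumptions for illustration (e.g. dismissing an alert is taken to imply a false detection).

```python
# Hedged sketch of learning-data generation from the user's action.
# The action-to-correctness mapping is an assumed classification of the
# first processing, not taken from the publication.

ACTION_IMPLIES_CORRECT = {
    "dispatch_guard": True,   # acting on the alert implies a true detection
    "save_clip": True,
    "dismiss": False,         # dismissing implies a false detection
}

def make_learning_record(video_clip, category, first_processing):
    return {
        "video": video_clip,      # at least a part of the video data
        "category": category,     # category of the detected specific event
        "correct": ACTION_IMPLIES_CORRECT[first_processing],
    }
```

Each record would then be appended to the learning data storage unit for later training.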
Abstract:
A surveillance system (1) includes an area information acquisition unit (101), a position information acquisition unit (102), a candidate determination unit (103), and a notification unit (104). The area information acquisition unit (101) acquires information of a surveillance-desired area. The position information acquisition unit (102) acquires pieces of position information of a plurality of portable terminals (20), each portable terminal performing surveillance using an image capturing unit. The candidate determination unit (103) determines a candidate portable terminal (20) to be moved to the surveillance-desired area from among the plurality of portable terminals (20) based on the acquired pieces of position information of the plurality of portable terminals (20). The notification unit (104) outputs a notification to the candidate portable terminal requesting to move to the surveillance-desired area.
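The candidate determination can be sketched with a simple nearest-terminal policy. The abstract does not state how the candidate is chosen from the position information, so the nearest-first rule and the coordinate model below are illustrative assumptions.

```python
import math

# Minimal sketch: pick as candidate the portable terminal closest to
# the surveillance-desired area's centre, then build the notification
# asking it to move there.

def determine_candidate(area_center, terminal_positions):
    """terminal_positions: {terminal_id: (x, y)}."""
    return min(terminal_positions,
               key=lambda t: math.dist(terminal_positions[t], area_center))

def notification(terminal_id, area_center):
    return f"terminal {terminal_id}: please move to area at {area_center}"
```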