Abstract:
The present invention relates to a surveillance system having a method for tampering detection and correction. The surveillance system is able to detect tampering of the camera view and thereupon adjust its video analytics configuration parameters so that video analytics can still be performed even though the orientation of the camera (10) has been changed, as long as part of the region of interest (ROI) remains within the tampered camera view. The surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60).
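To make the correction idea concrete, the following is a minimal Python sketch of one way to clip a rectangular ROI to a shifted camera view and decide whether enough of it remains to keep running analytics; the rectangle format, the pre-estimated view shift, the overlap threshold and the adjust_roi_after_tamper helper are illustrative assumptions, not the patented method.

```python
def adjust_roi_after_tamper(roi, view_shift, frame_size, min_overlap=0.2):
    """Clip a rectangular ROI to the tampered (shifted) camera view.

    roi        -- (x, y, w, h) in the original view (hypothetical format)
    view_shift -- (dx, dy) estimated displacement of the camera view
    frame_size -- (width, height) of the camera frame
    Returns the clipped ROI if enough of it survives, else None.
    """
    x, y, w, h = roi
    dx, dy = view_shift
    fw, fh = frame_size

    # Translate the ROI into the shifted view's coordinates (sign convention assumed).
    x, y = x - dx, y - dy

    # Clip against the frame boundaries.
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(fw, x + w), min(fh, y + h)
    if x1 <= x0 or y1 <= y0:
        return None  # ROI fell completely outside the tampered view

    overlap = (x1 - x0) * (y1 - y0) / float(w * h)
    if overlap < min_overlap:
        return None  # too little of the ROI remains to run analytics

    return (x0, y0, x1 - x0, y1 - y0)

# Example: part of the ROI is pushed off-frame but enough remains to keep analysing.
print(adjust_roi_after_tamper((1000, 500, 300, 200), (-100, -50), (1280, 720)))
```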
Abstract:
With the growing market for video surveillance in the security area, there is a need for an automated system which provides a way to track and detect human intention based on a particular human motion. The present invention relates to a system and a method for identifying human behavioural intention based on effective motion analysis, wherein the system obtains a sequence of raw images taken from a live scene and processes the raw images in an activity analysis component. The activity analysis component is further provided with an activity enrollment component and an activity detection component.
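Purely as an illustration of the enrollment/detection split, the sketch below stores labelled motion descriptors at enrollment time and matches new descriptors against them at detection time; the fixed-length feature vectors, the Euclidean distance and the ActivityAnalysis class are hypothetical stand-ins for the actual activity analysis component.

```python
import numpy as np

class ActivityAnalysis:
    """Toy enrollment/detection split; descriptors are assumed to be
    precomputed fixed-length motion feature vectors (an assumption,
    not the invention's actual representation)."""

    def __init__(self):
        self.enrolled = {}  # label -> list of enrolled descriptors

    def enroll(self, label, descriptor):
        self.enrolled.setdefault(label, []).append(np.asarray(descriptor, float))

    def detect(self, descriptor, threshold=1.0):
        descriptor = np.asarray(descriptor, float)
        best_label, best_dist = None, np.inf
        for label, templates in self.enrolled.items():
            for template in templates:
                dist = np.linalg.norm(descriptor - template)
                if dist < best_dist:
                    best_label, best_dist = label, dist
        return best_label if best_dist <= threshold else None

analyser = ActivityAnalysis()
analyser.enroll("loitering", [0.8, 0.1, 0.1])
analyser.enroll("running",   [0.1, 0.1, 0.9])
print(analyser.detect([0.75, 0.15, 0.12]))  # -> "loitering"
```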
Abstract:
The present invention relates to a system (100) and method for monitoring human behaviour. The system (100) comprises a video acquisition module (102) configured to convert the video into a plurality of image frames; a lighting adaptation unit (501) configured to detect a low lighting condition in the image frames; an object detection unit (502) configured to detect any moving object in the image frames; an object tracking unit (503) configured to perform object tracking in the image frames; an event detection unit (504) configured to detect a predefined event in the image frames; and a monitoring module (104) configured to display where the detected event occurs and to alert an operator if any aggressive behaviour is detected.
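The sketch below strings together simplified stand-ins for the lighting adaptation (501), object detection (502) and event detection (504) units to show the shape of such a pipeline; the mean-intensity lighting check, frame differencing and motion-area event rule are placeholder assumptions, not the claimed implementations.

```python
import numpy as np

def lighting_adaptation(frame, low_light_threshold=40):
    """Flag and brighten low-light frames (simple mean-intensity check;
    the actual unit (501) may use a different criterion)."""
    if frame.mean() < low_light_threshold:
        frame = np.clip(frame.astype(np.int16) + 60, 0, 255).astype(np.uint8)
    return frame

def detect_objects(frame, background, diff_threshold=30):
    """Naive frame-differencing stand-in for the object detection unit (502)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > diff_threshold).astype(np.uint8)

def detect_event(motion_mask, area_threshold=500):
    """Hypothetical event rule: a large amount of motion counts as an event."""
    return int(motion_mask.sum()) > area_threshold

# Example run over synthetic grayscale frames.
background = np.full((120, 160), 100, np.uint8)
frame = background.copy()
frame[40:80, 60:110] = 200  # synthetic moving object

frame = lighting_adaptation(frame)
mask = detect_objects(frame, background)
if detect_event(mask):
    print("ALERT: predefined event detected")
```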
Abstract:
The present invention relates to a system (1000) and method for video surveillance and monitoring. The system (1000) comprises a recording unit (200) configured to record at least one video stream for storage, a video acquisition unit (300) configured to obtain the video stream from at least one video source such as a surveillance camera or a storage medium, and a configuration unit (400) configured to add details of the surveillance camera to a database and to choose at least one video analytics type depending on the surveillance camera, the monitoring areas, and the video analytics parameter values. The system (1000) further comprises a display unit (600) configured to compute display variable values and display an adaptive video stream for monitoring, and a video processing unit (500) configured to process a video stream with video analytics enabled and a video stream without video analytics enabled simultaneously.
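As one hedged example of "computing display variable values", the snippet below lays out an adaptive grid for several streams; the square-ish grid rule, the aspect-ratio handling and the compute_display_variables name are illustrative assumptions about what the display unit (600) might compute, not the claimed computation.

```python
import math

def compute_display_variables(num_streams, panel_width, panel_height, aspect=16 / 9):
    """Compute an adaptive grid layout for displaying several streams at once."""
    cols = math.ceil(math.sqrt(num_streams))
    rows = math.ceil(num_streams / cols)
    cell_w = panel_width // cols
    cell_h = panel_height // rows
    # Fit each stream inside its cell while preserving the aspect ratio.
    stream_w = min(cell_w, int(cell_h * aspect))
    stream_h = int(stream_w / aspect)
    return {"rows": rows, "cols": cols, "stream_size": (stream_w, stream_h)}

# Five streams on a 1920x1080 monitoring panel.
print(compute_display_variables(num_streams=5, panel_width=1920, panel_height=1080))
```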
Abstract:
A method for extracting foreground objects of a currently observed image is provided herewith. The method comprises segmenting background objects of a previously observed image into regions of homogeneous brightness and setting initial threshold values for each segmented region to initialize background image information, which includes the background image and the initial threshold values for the currently observed image; subtracting the currently observed image from the background image information and thresholding the image difference using the initial threshold values to extract the foreground of the currently observed image; and comparing the foreground of the currently observed image against the foreground of the previously observed image to update the initial threshold values of the background image information.
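A minimal sketch of region-wise thresholding in this spirit is given below; the brightness quantisation used for segmentation, the per-region threshold update rule and all function names are assumptions made for illustration rather than the claimed procedure.

```python
import numpy as np

def segment_by_brightness(background, bins=(0, 64, 128, 192, 256)):
    """Segment the background into regions of roughly homogeneous brightness
    by quantising pixel intensity (a simple stand-in for the segmentation step)."""
    return np.digitize(background, bins[1:-1])  # region label per pixel

def extract_foreground(current, background, region_labels, thresholds):
    """Per-region thresholding of the difference image."""
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    threshold_map = thresholds[region_labels]   # threshold per pixel via its region
    return (diff > threshold_map).astype(np.uint8)

def update_thresholds(thresholds, fg_now, fg_prev, step=2, lo=5, hi=80):
    """Nudge thresholds up when the current foreground grows relative to the
    previous one, down otherwise (one plausible update rule, not the claimed one)."""
    delta = step if fg_now.sum() > fg_prev.sum() else -step
    return np.clip(thresholds + delta, lo, hi)

# Synthetic example: gradient background, bright foreground patch.
background = np.tile(np.linspace(0, 255, 160, dtype=np.uint8), (120, 1))
current = background.copy()
current[30:60, 20:60] = 255                       # foreground object
labels = segment_by_brightness(background)
thresholds = np.full(labels.max() + 1, 25)        # initial threshold per region
fg = extract_foreground(current, background, labels, thresholds)
thresholds = update_thresholds(thresholds, fg, np.zeros_like(fg))
print(fg.sum(), thresholds)
```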
Abstract:
A method for detecting humans using attire is provided. The method comprises detecting a group motion blob in image sequences by subtracting a background image; detecting human attire in the group motion blob with the help of at least one predefined attire template stored in a database; and validating and extracting individual human images from the image sequences using the detected human attire.
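The snippet below illustrates the general flow with a background-subtracted motion blob and an exhaustive template scan; the sum-of-squared-differences matching and the synthetic "vest" template are placeholders for the predefined attire templates and the actual matching method.

```python
import numpy as np

def motion_blob_mask(frame, background, thresh=30):
    """Background subtraction to isolate the (group) motion blob."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

def match_attire(blob_region, template):
    """Exhaustive SSD template scan over the motion blob region; a stand-in
    for matching against the predefined attire templates in the database."""
    H, W = blob_region.shape
    h, w = template.shape
    best_score, best_pos = np.inf, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = blob_region[y:y + h, x:x + w].astype(np.float32)
            score = np.sum((patch - template.astype(np.float32)) ** 2)
            if score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

# Synthetic example: a bright "vest" patch inside a moving group blob.
background = np.zeros((60, 80), np.uint8)
frame = background.copy()
frame[10:50, 20:60] = 120          # group motion blob
frame[20:28, 30:38] = 250          # attire (e.g. a high-visibility vest)
template = np.full((8, 8), 250, np.uint8)

mask = motion_blob_mask(frame, background)
blob = np.where(mask, frame, 0)
print(match_attire(blob, template))   # position of the detected attire
```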
Abstract:
The present invention discloses a system for computing motion information from detected moving objects, the system comprising a background model module (9), an object presence module (10), and a descriptor module (11). The system is configured for computing a plurality of background models (103) based on inputs from a plurality of learning rates (101), determining whether a detected moving object is present in a captured image (201) for each learning rate (101), and computing at least one descriptor (300, 400, 500, 600) based on the presence of the detected moving objects in the captured image (201) for each learning rate (101).
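One way to picture this is a bank of running-average background models, one per learning rate, whose per-rate presence flags form a simple descriptor; the exponential update, the fixed set of rates and the presence-vector descriptor below are illustrative assumptions, not the claimed background models (103) or descriptors (300, 400, 500, 600).

```python
import numpy as np

class MultiRateBackground:
    """One running-average background model per learning rate; the per-rate
    object-presence flags form a simple descriptor (an illustrative choice)."""

    def __init__(self, learning_rates=(0.01, 0.05, 0.2), thresh=30):
        self.rates = learning_rates
        self.models = None
        self.thresh = thresh

    def update(self, frame):
        frame = frame.astype(np.float32)
        if self.models is None:
            self.models = [frame.copy() for _ in self.rates]
        presence = []
        for i, alpha in enumerate(self.rates):
            # Exponential moving average background for this learning rate.
            self.models[i] = (1 - alpha) * self.models[i] + alpha * frame
            diff = np.abs(frame - self.models[i])
            presence.append(bool((diff > self.thresh).any()))
        return presence  # descriptor: object presence at each learning rate

bg = MultiRateBackground()
still = np.full((60, 80), 100, np.uint8)
moving = still.copy()
moving[20:40, 30:50] = 220

for _ in range(50):            # let the models settle on the static scene
    bg.update(still)
print(bg.update(moving))       # e.g. [True, True, True] when the object appears
```

Because a persistent object is absorbed fastest by the highest learning rate, the way the per-rate presence flags change over successive frames carries coarse information about how long the object has been present and how it moves.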
Abstract:
The present invention discloses a system to detect a moving object in a dynamic environment from a sequence of captured images, comprising a learning rate module (9), an image filtering module (10), an object likelihood level module (11), and an object identification module (12). The system executes the following steps: determining a learning rate seed (301) based on the speed rate (306) of the moving object, applying the determined learning rate seed (301) to a data training module (13) for filtering background images (403) from the captured images, calculating an object likelihood level (505) by computing statistical properties of the filtered images (406), and identifying the moving object based on the object likelihood level (505), wherein the learning rate seed (301) and the object likelihood level (505) are updated as one or more new images are processed.
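The sketch below shows the same chain of steps in miniature: a speed-dependent learning rate seed, an exponential background filter driven by that seed, and a likelihood computed from simple statistics of the difference image; the speed-to-rate mapping and the pixel-fraction likelihood are illustrative assumptions only, not the claimed computations.

```python
import numpy as np

def learning_rate_seed(speed, base=0.01, scale=0.02, max_rate=0.5):
    """Derive a learning rate seed from the object's speed: faster objects
    tolerate a higher background update rate (an illustrative mapping only)."""
    return min(max_rate, base + scale * speed)

def filter_background(frame, background, alpha):
    """Exponential background update driven by the learning rate seed."""
    return (1 - alpha) * background + alpha * frame.astype(np.float32)

def object_likelihood(frame, background, thresh=25):
    """Likelihood from simple statistics of the filtered difference image:
    the fraction of pixels that deviate strongly from the background."""
    diff = np.abs(frame.astype(np.float32) - background)
    return float((diff > thresh).mean())

background = np.full((60, 80), 90, np.float32)
frame = np.full((60, 80), 90, np.uint8)
frame[25:45, 30:55] = 200                      # fast-moving object

alpha = learning_rate_seed(speed=8.0)          # speed in, say, pixels per frame
background = filter_background(frame, background, alpha)
print(round(object_likelihood(frame, background), 3))
```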
Abstract:
The present invention relates to a system (1000) and method for classifying the level of aggressiveness. The system is configured to classify an aggressive behaviour into a level of aggressiveness based on a video stream. The system comprises a video acquisition unit (10) configured to acquire at least one video stream from at least one video source; an image processing unit (20) configured to convert the at least one video stream into a sequence of image frames and to perform data formatting on the sequence of image frames to generate a plurality of volumetric rectangular prisms and an image representation for each of the volumetric rectangular prisms; a training unit (30) configured to perform data training on the plurality of volumetric rectangular prisms and the image representation of each of the volumetric rectangular prisms using a machine learning model and a deep learning model; and an online inferencing unit (40) configured to perform an online fusion of the machine learning model and the deep learning model.
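As a hedged sketch of the data formatting and fusion steps, the code below cuts a clip into volumetric rectangular prisms and fuses two placeholder scoring functions standing in for the machine learning and deep learning models; the prism size, the weighted score-level fusion and the level thresholds are assumptions for illustration, not the claimed training or inferencing.

```python
import numpy as np

def to_prisms(frames, prism_size=(8, 40, 40)):
    """Cut a clip (T, H, W) into non-overlapping volumetric rectangular prisms."""
    T, H, W = prism_size
    clip = np.asarray(frames)
    prisms = []
    for t in range(0, clip.shape[0] - T + 1, T):
        for y in range(0, clip.shape[1] - H + 1, H):
            for x in range(0, clip.shape[2] - W + 1, W):
                prisms.append(clip[t:t + T, y:y + H, x:x + W])
    return prisms

def ml_score(prism):
    """Placeholder 'machine learning model': motion energy of the prism."""
    return float(np.abs(np.diff(prism.astype(np.float32), axis=0)).mean()) / 255.0

def dl_score(prism):
    """Placeholder 'deep learning model': intensity variance as a proxy score."""
    return float(prism.astype(np.float32).std()) / 255.0

def fuse_and_classify(prisms, weight=0.5, levels=(0.2, 0.4)):
    """Weighted score-level fusion, then map the fused score to an
    aggressiveness level (thresholds are illustrative)."""
    fused = np.mean([weight * ml_score(p) + (1 - weight) * dl_score(p) for p in prisms])
    if fused < levels[0]:
        return "low", fused
    return ("medium", fused) if fused < levels[1] else ("high", fused)

clip = np.random.randint(0, 255, size=(16, 80, 80), dtype=np.uint8)
print(fuse_and_classify(to_prisms(clip)))
```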
Abstract:
The present invention relates to a machine learning-based system and method for processing a moving image. The system (10) comprises an input unit (11) for receiving two or more image frames of the moving image, wherein the moving image includes one or more subjects being monitored. A processing unit (12) processes the received image frames to predict an incident, wherein the incident is a fight or quarrel between two or more people in the moving image. An output unit (13) outputs a prediction result, wherein an alert message is output as the prediction result if the incident is predicted.
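A toy version of the input-processing-output chain is sketched below; the motion-energy heuristic stands in for the trained model in the processing unit (12), and the threshold and function names are illustrative assumptions rather than the actual prediction method.

```python
import numpy as np

def predict_incident(frames, motion_threshold=0.12):
    """Very rough stand-in for the processing unit (12): flag a possible
    fight/quarrel when inter-frame motion is unusually high. The real
    system would use a trained machine learning model instead."""
    clip = np.asarray(frames, dtype=np.float32)
    if clip.shape[0] < 2:
        raise ValueError("need two or more image frames")
    motion = np.abs(np.diff(clip, axis=0)).mean() / 255.0
    return motion > motion_threshold, motion

def output_unit(prediction, score):
    """Stand-in for the output unit (13): emit an alert when an incident is predicted."""
    if prediction:
        print(f"ALERT: possible fight/quarrel (motion score {score:.2f})")
    else:
        print(f"no incident predicted (motion score {score:.2f})")

frames = np.random.randint(0, 255, size=(10, 120, 160), dtype=np.uint8)
output_unit(*predict_incident(frames))
```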