Abstract:
The present invention relates to a system and method for processing a moving image. The system (10) comprises an input unit (11) for receiving an image frame of the moving image, a parsing unit (12) for parsing the image frame into one or more image patches of preset dimensions, and a filtering unit (13) for processing each image patch to identify and extract identification (ID) data if such data is captured in the image patches. A storage device (14) connected to the filtering unit (13) stores pre-classified noise patches, wherein the filtering unit (13) updates the storage device (14) with image patches identified as noise patches during each process cycle.
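For illustration only, the following Python sketch shows one way the parsing and filtering stages described above could operate; the patch size, the mean-intensity "hash" used to match stored noise patches, and the low-texture rule for flagging new noise patches are all assumptions, not details taken from the specification.

    # Illustrative sketch (not the patented implementation): parse a frame into
    # fixed-size patches and skip patches matching a stored set of noise patches.
    import numpy as np

    PATCH = 32  # preset patch dimensions (assumed)

    def parse_into_patches(frame: np.ndarray, size: int = PATCH):
        """Split a grayscale H x W frame into non-overlapping size x size patches."""
        h, w = frame.shape[:2]
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                yield (y, x), frame[y:y + size, x:x + size]

    def is_noise(patch: np.ndarray, noise_store: set, tol: int = 8) -> bool:
        """Very coarse match against pre-classified noise patches (mean-intensity hash, assumed)."""
        return int(patch.mean()) // tol in noise_store

    def filter_frame(frame: np.ndarray, noise_store: set):
        """Return candidate ID patches; update the store with newly seen noise patches."""
        candidates = []
        for pos, patch in parse_into_patches(frame):
            if is_noise(patch, noise_store):
                continue
            if patch.std() < 5:                    # low-texture patch: treat as noise (assumed rule)
                noise_store.add(int(patch.mean()) // 8)
                continue
            candidates.append((pos, patch))        # pass to the ID-extraction stage
        return candidates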
Abstract:
The present invention provides a system having a video camera, a location sensor and a monitor for monitoring streetlights and reporting the location of unilluminated streetlights, comprising: a video acquisition unit (2) to extract video images from the camera and convert them into image frames; a location acquisition unit (4) to extract location and orientation information from the location sensor; a video storage (5) to record the image frames from the video acquisition unit (2); a location storage (6) to record the location and orientation information from the location acquisition unit (4); a database having region of interest (ROI) information and streetlight location information; a streetlight detector (8) to identify any unilluminated streetlight based on the streetlight location information; a reporting unit (9) to display the location of unilluminated streetlights on the monitor (10); and a map generator unit (7), wherein the map generator (7) generates a map based on the region of interest (ROI) information and streetlight location information, using the image frames from the video storage (5) and the location and orientation information from the location acquisition unit (4), for display on the monitor (10), and the streetlight detector (8) detects unilluminated streetlights based on the map and reports them via the reporting unit (9). Further, the streetlight detector (8) uses the Hough transform technique to identify any unilluminated streetlight. Preferably, the map generator unit (7) is a geographic information system (GIS). Preferably, the location acquisition unit (4) is a Global Positioning System (GPS) receiver. Preferably, the streetlight detector is a photodetector.
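As a minimal sketch of the Hough-transform step, assuming OpenCV and a night-time grayscale frame: bright circular blobs are treated as lit streetlights, and any expected streetlight location (projected from the database into image coordinates) with no nearby blob is reported as unilluminated. The thresholds, radii and projection step are illustrative assumptions, not the patented detector.

    import cv2
    import numpy as np

    def find_lit_lamps(frame_gray: np.ndarray):
        """Detect bright, roughly circular light sources with the Hough circle transform."""
        _, bright = cv2.threshold(frame_gray, 200, 255, cv2.THRESH_BINARY)
        blurred = cv2.GaussianBlur(bright, (9, 9), 2)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                                   param1=100, param2=15, minRadius=3, maxRadius=40)
        return [] if circles is None else circles[0, :, :2]  # (x, y) centres of lit lamps

    def unlit_streetlights(frame_gray, expected_pixel_locations, radius=30):
        """Return expected streetlight positions (in image coordinates) with no lit blob nearby."""
        lit = find_lit_lamps(frame_gray)
        unlit = []
        for ex, ey in expected_pixel_locations:
            if not any(np.hypot(ex - lx, ey - ly) < radius for lx, ly in lit):
                unlit.append((ex, ey))
        return unlit  # mapped back to GPS coordinates by the reporting unit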
Abstract:
A surveillance camera system contemplated in our invention includes at least one steerable or PTZ camera (305) and a plurality of static or fixed-view cameras (303), comprising individual static cameras SC1, SC2, SC3, which are configured to automatically select and track an object (307) last detected by any one of the cameras. Our method comprises the following steps or process stages, enumerated as F1 to F6 for reference: F1 - Listen for an incoming event-detection message, i.e. a message issued when an object of interest is detected. F2 - If only one event is detected, the object-tracking data or information is derived from the object-detection message and the process proceeds to F6 below; if more than one event is detected, the process proceeds to F3. F3 - The user selects, from among the multiple objects detected, a global target, i.e. the particular target to be tracked by the system, overriding all other objects previously detected and tracked by the system. F4 - The local offset of the selected target is then determined. F5 - Object-tracking data is then obtained from the target, whereby the pan, tilt and zoom (PTZ) movement commands may be derived. F6 - The derived PTZ commands are sent to the steerable camera to start tracking the user-selected target.
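A hypothetical Python sketch of the F1-F6 control loop follows, assuming a message queue of detection events and a PTZ camera driver exposing pan/tilt/zoom commands; the message fields, the offset-to-command scaling and the select_global_target() user-interface hook are assumptions introduced for illustration.

    import queue

    def tracking_loop(events: queue.Queue, ptz_camera, select_global_target):
        while True:
            batch = [events.get()]                      # F1: wait for an event-detection message
            while not events.empty():
                batch.append(events.get_nowait())
            if len(batch) == 1:                         # F2: single event -> track it directly
                target = batch[0]
            else:
                target = select_global_target(batch)    # F3: user picks the global target
            dx = target["x"] - 0.5                      # F4: local offset from the frame centre
            dy = target["y"] - 0.5                      #     (normalised image coordinates)
            pan, tilt = 60.0 * dx, -40.0 * dy           # F5: derive PTZ movement commands
            zoom = 1.0 / max(target["size"], 0.05)
            ptz_camera.move(pan=pan, tilt=tilt, zoom=zoom)  # F6: send commands to the steerable camera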
Abstract:
The present invention relates to a system (100) for determining priority of visuals (220) and a method thereof. The system (100) comprises a plurality of visual capturing components (10) for capturing visuals (220); an event detecting component (50) for detecting events in the visuals (220) associated with each of the visual capturing components (10); and a visual priority determining component (110) for determining the priority of the visuals (220) of each visual capturing component (10) based on a set of parameters.
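One possible reading of the visual priority determining component is a weighted scoring of detected events per camera; the Python sketch below is an assumption-laden illustration, and the parameter names (severity, object count, recency) and weights are not taken from the specification.

    def visual_priority(events, weights=None):
        """Score one camera's visuals from its detected events and a set of parameters (assumed)."""
        weights = weights or {"severity": 0.5, "object_count": 0.3, "recency": 0.2}
        score = 0.0
        for e in events:
            score += (weights["severity"] * e.get("severity", 0)
                      + weights["object_count"] * e.get("object_count", 0)
                      + weights["recency"] * e.get("recency", 0))
        return score

    def rank_cameras(events_by_camera):
        """Return camera IDs ordered from highest to lowest visual priority."""
        return sorted(events_by_camera,
                      key=lambda cam: visual_priority(events_by_camera[cam]),
                      reverse=True)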
Abstract:
The present invention relates to a surveillance system having a method for tampering detection and correction. The surveillance system is able to detect tampering of the camera view and thereupon adjust its video analytics configuration parameters, so that video analytics can still be performed even though the orientation of the camera (10) has been changed, as long as part of the ROI remains within the tampered camera view. The surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60).
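A minimal sketch of one possible correction approach, assuming OpenCV: the reference view is matched against the current (shifted) view with ORB features, and the stored ROI polygon is re-projected through the estimated homography so analytics can continue on the partial ROI. This is an assumed technique for illustration, not the patented method.

    import cv2
    import numpy as np

    def reproject_roi(reference_gray, current_gray, roi_polygon):
        """Re-map an ROI polygon from the reference view into the tampered camera view."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(reference_gray, None)
        k2, d2 = orb.detectAndCompute(current_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        matches = sorted(matches, key=lambda m: m.distance)[:100]
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None                      # view changed too much; hand over to the post detection module
        roi = np.float32(roi_polygon).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(roi, H).reshape(-1, 2)  # adjusted ROI in the new view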
Abstract:
The present invention discloses a system and a method for identifying a text region in a video captured by a moving or still camera. The system comprises a video acquisition unit to obtain images recorded in an input video; an image processing unit comprising a seed point extraction unit and a text box determination unit to identify seed points of the images and locate potential text regions therefrom; and an inferencing unit comprising a classification module to characterize and verify the potential text regions. The seed point extraction unit comprises a statistical moment analysis module, a K-means clustering module, an N x N kernel convolution module and a linearity evaluation module.
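A hedged sketch of the seed-point extraction stage, assuming NumPy, OpenCV and scikit-learn: local statistical moments are computed per block, blocks are grouped with K-means, and an N x N convolution highlights dense high-contrast regions as seed points. The block size, the value of K, the "highest-variance cluster is text-like" rule and the density threshold are illustrative assumptions.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def seed_points(gray: np.ndarray, block: int = 8, k: int = 3, n: int = 5):
        h, w = gray.shape
        feats, coords = [], []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = gray[y:y + block, x:x + block].astype(np.float32)
                feats.append([patch.mean(), patch.std()])   # first- and second-order moments
                coords.append((y, x))
        feats = np.array(feats)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
        # Assume the cluster with the highest average variance corresponds to text-like blocks.
        text_cluster = int(np.argmax([feats[labels == c, 1].mean() for c in range(k)]))
        mask = np.zeros((h, w), np.float32)
        for (y, x), lab in zip(coords, labels):
            if lab == text_cluster:
                mask[y:y + block, x:x + block] = 1.0
        density = cv2.filter2D(mask, -1, np.ones((n, n), np.float32) / (n * n))  # N x N convolution
        return np.argwhere(density > 0.6)   # candidate seed points for the text box determination unit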
Abstract:
The present invention relates to a system (1000) and method for classifying the level of aggressiveness. The system is configured to classify an aggressive behaviour into a level of aggressiveness based on a video stream. The system comprises a video acquisition unit (10) configured to acquire at least one video stream from at least one video source; an image processing unit (20) configured to convert the at least one video stream into a sequence of image frames and perform data formatting on the sequence of image frames to generate a plurality of volumetric rectangular prisms and an image representation for each of the volumetric rectangular prisms; a training unit (30) configured to perform data training on the plurality of volumetric rectangular prisms and the image representation of each of the volumetric rectangular prisms using a machine learning model and a deep learning model; and an online inferencing unit (40) configured to perform an online fusion of the machine learning model and the deep learning model.
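An illustrative sketch of the data-formatting and late-fusion ideas, assuming NumPy: frames are stacked into short volumetric rectangular prisms (T x H x W clips), each prism also gets a flat image representation, and the machine learning and deep learning scores are fused by a weighted average. The prism depth, the fusion weight and the two models' predict_proba-style interfaces are assumptions, not the patented design.

    import numpy as np

    def to_prisms(frames: np.ndarray, depth: int = 16):
        """Group a (T, H, W) frame sequence into non-overlapping volumetric prisms."""
        usable = (len(frames) // depth) * depth
        return frames[:usable].reshape(-1, depth, *frames.shape[1:])

    def image_representation(prism: np.ndarray) -> np.ndarray:
        """Collapse a prism into a single image, here a temporal-average projection (assumed)."""
        return prism.mean(axis=0)

    def fused_aggression_level(prism, ml_model, dl_model, w: float = 0.5,
                               levels=("low", "medium", "high")):
        """Online fusion: weighted average of the two models' per-level probabilities."""
        p_ml = ml_model.predict_proba(image_representation(prism).reshape(1, -1))[0]
        p_dl = dl_model.predict_proba(prism[np.newaxis])[0]
        fused = w * np.asarray(p_ml) + (1 - w) * np.asarray(p_dl)
        return levels[int(np.argmax(fused))]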
Abstract:
The present invention relates to a machine learning-based system and method for processing a moving image. The system (10) comprises an input unit (11) for receiving two or more image frames of the moving image, wherein the moving image includes one or more subjects being monitored. A processing unit (12) processes the received image frames to predict an incident, wherein the incident is a fight or quarrel between two or more people in the moving image. An output unit (13) outputs a prediction result, wherein an alert message is outputted as the prediction result if the incident is predicted.
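A hypothetical end-to-end sketch, assuming a pretrained classifier that returns a fight/quarrel probability for a short window of frames; the window length, the alert threshold, the fight_probability() interface and the alert sink are illustrative assumptions, not the patented design.

    from collections import deque

    WINDOW = 16            # number of recent frames fed to the model (assumed)
    THRESHOLD = 0.8        # fight/quarrel probability needed to raise an alert (assumed)

    def run(frame_source, model, send_alert):
        buffer = deque(maxlen=WINDOW)
        for frame in frame_source:                        # input unit: stream of image frames
            buffer.append(frame)
            if len(buffer) < WINDOW:
                continue
            p_fight = model.fight_probability(list(buffer))   # processing unit: incident prediction (assumed interface)
            if p_fight >= THRESHOLD:
                send_alert(f"Possible fight/quarrel detected (p={p_fight:.2f})")  # output unit: alert message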