Video surveillance: Item monitoring against theft
| Main Author: | |
|---|---|
| Format: | Final Year Project / Dissertation / Thesis |
| Published: | 2023 |
| Subjects: | |
| Online Access: | http://eprints.utar.edu.my/5517/ http://eprints.utar.edu.my/5517/1/fyp_CS_2023_CYS.pdf |
Summary:

This research aims to address the limitations of traditional closed-circuit television (CCTV) systems, which lack intelligent computer vision and video analytics capabilities that can protect assets. Moreover, conventional systems rely on human operators constantly observing the captured scenes so that crimes can be discovered early and therefore prevented.
Here, we have developed an intelligent video surveillance system capable of identifying close-loitering events and monitoring a selected item in the scene by integrating several approaches: the You-Only-Look-Once (YOLO) version 5 object detection algorithm, close-contact detection, feature matching, and significant-movement and dissimilarity detections.
First, the system preprocesses each incoming frame, if necessary, to reduce computational cost. The initial frame of the video is saved as the background frame for tracking purposes. The system operator or owner then manually selects the item to be monitored by drawing a bounding box around it, isolating it from the rest of the scene. The selected region is used for initial feature extraction and descriptor computation. Next, the system invokes the YOLO person detection module once per second to detect the presence of humans. If a human is detected in the scene, the system checks whether the human is in close contact with the registered item by computing the intersection area of the human's bounding box and the registered object's bounding box. If the two boxes intersect, the close-contact timer is incremented by 1; otherwise, it is reset to 0.
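The close-contact check described above can be sketched in plain Python. The function names and box format below are illustrative, not taken from the thesis; the intersection of two axis-aligned bounding boxes and the increment-or-reset timer would look like:

```python
def intersection_area(box_a, box_b):
    """Area of overlap between two (x1, y1, x2, y2) boxes; 0 if disjoint."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def update_close_contact(timer, person_box, item_box):
    """Increment the close-contact timer while the boxes overlap, else reset to 0."""
    return timer + 1 if intersection_area(person_box, item_box) > 0 else 0
```

For example, boxes (0, 0, 4, 4) and (2, 2, 6, 6) overlap in a 2-by-2 region, so the timer keeps counting; disjoint boxes reset it immediately.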
After that, the system compares the current frame with the background frame to find common features using a feature matching algorithm. The corresponding matching counter is incremented by 1 if the average match score falls below a set threshold. The system then tracks the matched feature points into the current frame by calculating the optical flow; if significant changes in the positions of the matched points are detected, the motion counter is incremented by 1. Finally, the system measures the dissimilarity between the background and current frames to detect occlusion of, or the absence of, the registered item; if the dissimilarity exceeds the defined threshold, the corresponding counter is incremented by 1. Each of these counters is reset to 0 whenever its condition is not met.
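The motion and dissimilarity checks, together with the shared increment-or-reset counter logic, can be sketched as follows. These are simplified stand-ins (mean point displacement instead of a full optical-flow pipeline, mean absolute pixel difference instead of the thesis's dissimilarity measure), and all names and thresholds are illustrative:

```python
import math

def mean_displacement(prev_pts, curr_pts):
    """Average Euclidean distance between matched feature points across frames."""
    dists = [math.dist(p, q) for p, q in zip(prev_pts, curr_pts)]
    return sum(dists) / len(dists)

def frame_dissimilarity(background, current):
    """Mean absolute pixel difference between two equally sized grayscale frames."""
    diffs = [abs(b - c) for row_b, row_c in zip(background, current)
             for b, c in zip(row_b, row_c)]
    return sum(diffs) / len(diffs)

def update_counter(counter, value, threshold):
    """Increment a risk counter while a measurement exceeds its threshold, else reset."""
    return counter + 1 if value > threshold else 0
```

Each per-frame measurement feeds `update_counter`, so a risk must persist across consecutive frames before its counter grows large enough to matter.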
Based on the predetermined conditions for each type of risk, the appropriate alarm is activated when a risk occurs. These alarms serve as an early warning for the system operator or owner, allowing them to take the necessary actions to address the risk.
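One plausible way the predetermined conditions could map counters to alarms is a per-risk limit table; the risk names and limits below are hypothetical, not from the thesis:

```python
def check_alarms(counters, limits):
    """Return the names of risks whose counters have reached their limits."""
    return [risk for risk, count in counters.items()
            if count >= limits.get(risk, float("inf"))]

# Hypothetical example: close contact has persisted past its limit, occlusion has not.
counters = {"close_contact": 12, "occlusion": 2}
limits = {"close_contact": 10, "occlusion": 5}
triggered = check_alarms(counters, limits)  # ["close_contact"]
```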
In testing, the developed system demonstrated an accuracy of 92.5% in detecting abnormal behaviours across 40 video inputs.