The Cyclops Real-time Artificial Intelligence Data and Reporting system (RAIDAR) uses computer vision and deep learning to automatically detect speed enforcement areas.
Automatic detection of speed enforcement areas reduces the reliance on crowdsourcing the current locations of mobile speed cameras via manual reports from users.
This functionality needed to be available from a mobile device and from the 'Connected Car'.
WHAT WAS THE PROBLEM?
Deep learning and computer vision would need to be used to detect the speed enforcement area. Detection takes the form of identifying common features of a speed enforcement area, e.g. a police mobile camera van in the UK.
Initially, we decided to focus on UK Mobile Camera Vans, as these tend to be very distinct. When a positive match is detected, the system records the image from the video frame, the current time, the vehicle's current speed and direction of travel, and the current GPS coordinates.
The detection process attempts to identify a Mobile Camera Van in images captured from a live camera feed, processing around 30 images per second. When a Mobile Camera Van appears in the video feed, this first step produces a number of positive match results, which are then analysed. Once a set of positive matches has been collected, the system chooses the best match to submit to the server, based on bounding box size and confidence rating.
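The "choose the best match" step above can be sketched as follows. This is an illustrative assumption, not the production scoring: the `Match` structure and the score of bounding box area multiplied by confidence are hypothetical, chosen to favour the large, confident detection captured when the vehicle is closest to the van.

```python
from dataclasses import dataclass

@dataclass
class Match:
    """One positive detection from a single video frame (hypothetical structure)."""
    confidence: float   # model confidence, 0.0 - 1.0
    box_width: float    # bounding box size, normalised to the frame, 0.0 - 1.0
    box_height: float

def best_match(matches):
    """Pick the single match to submit to the server.
    Scoring by (box area x confidence) is an illustrative assumption."""
    return max(matches, key=lambda m: m.box_width * m.box_height * m.confidence)

# Example: three matches collected while driving past a camera van
collected = [
    Match(confidence=0.72, box_width=0.10, box_height=0.08),  # van still distant
    Match(confidence=0.91, box_width=0.35, box_height=0.30),  # closest, clearest frame
    Match(confidence=0.88, box_width=0.20, box_height=0.15),
]
print(best_match(collected).confidence)  # 0.91
```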
WHAT WERE THE CHALLENGES?
This had never been done for speed cameras before, so SourceCloud needed to train a model for use with CoreML and Vision. This requires an input set of images from which the algorithm can learn what a particular object looks like, and a separate set of images against which to test the result of the training.
The training set is then fed into the Image Classification or Object Detection algorithms, which use the annotations to focus on the elements of each image inside the bounded areas.
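To make the training/test workflow concrete, here is a minimal sketch. The annotation records (file names, labels, normalised box coordinates) are hypothetical examples of the kind of data an annotated training set contains, not the actual format used by the project.

```python
import random

# Hypothetical annotation records: one bounding box per labelled element,
# with coordinates normalised to the image dimensions as [x1, y1, x2, y2].
annotations = [
    {"image": "van_0001.jpg", "label": "camera_van",  "box": [0.42, 0.31, 0.55, 0.60]},
    {"image": "van_0001.jpg", "label": "police_text", "box": [0.45, 0.35, 0.52, 0.40]},
    {"image": "van_0002.jpg", "label": "camera_van",  "box": [0.10, 0.20, 0.40, 0.70]},
]

def train_test_split(records, test_fraction=0.2, seed=42):
    """Hold out a separate set of images to test the trained model against.
    Splitting by image (not by annotation) keeps boxes from one image together."""
    images = sorted({r["image"] for r in records})
    random.Random(seed).shuffle(images)
    n_test = max(1, int(len(images) * test_fraction))
    test_images = set(images[:n_test])
    train = [r for r in records if r["image"] not in test_images]
    test = [r for r in records if r["image"] in test_images]
    return train, test

train, test = train_test_split(annotations)
```

Splitting at the image level matters: if annotations from the same image landed in both sets, the test result would overstate how well the model generalises.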
There are a number of algorithms used to process images for the purpose of classification and object detection. SourceCloud decided to use YOLO (You Only Look Once), specifically YOLOv3.
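Detectors such as YOLOv3 emit many candidate boxes per image, so the raw output is typically filtered by a confidence threshold and then by non-maximum suppression (NMS), which keeps the most confident box and discards overlapping duplicates. This pure-Python sketch of that standard post-processing step is for illustration; the threshold values are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(detections, conf_threshold=0.5, iou_threshold=0.45):
    """detections: list of (box, confidence) pairs.
    Keep the most confident boxes and drop overlapping duplicates."""
    kept = []
    candidates = sorted((d for d in detections if d[1] >= conf_threshold),
                        key=lambda d: -d[1])
    for box, conf in candidates:
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, conf))
    return kept

# Two heavily overlapping boxes collapse to one; the distant box survives.
dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
print(len(non_max_suppression(dets)))  # 2
```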
WHAT WAS THE SOLUTION?
SourceCloud implemented a combinatorial approach, identifying Mobile Camera Vans by training a model to identify individual elements. The images were annotated, drawing bounding boxes around the entire vehicle, the text ‘POLICE’ when it appears, the image of a ‘Box Brownie’, emergency chevrons and the blue and white checker pattern, which is present on many police vehicles. A proof of concept application was created to integrate the trained model with CoreML and Vision. The application combined the confidence ratings, reported by Vision, for the presence of each of the desired elements. If this combined confidence was greater than a predefined overall confidence rating, then the application considered a Mobile Camera Van to be detected.
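The combined-confidence check described above can be sketched as follows. The element names, the sample confidence values, the mean as the combining function, and the 0.70 threshold are all illustrative assumptions; the actual weighting and threshold used by the application are not published.

```python
# Hypothetical per-element confidences as Vision might report them for one frame.
element_confidences = {
    "vehicle": 0.93,
    "police_text": 0.81,
    "box_brownie": 0.64,
    "chevrons": 0.77,
}

OVERALL_THRESHOLD = 0.70  # illustrative value, not the production setting

def combined_confidence(confidences):
    """One simple way to combine per-element scores: their mean."""
    return sum(confidences.values()) / len(confidences)

def van_detected(confidences, threshold=OVERALL_THRESHOLD):
    """A Mobile Camera Van is considered detected when the combined
    confidence exceeds the predefined overall threshold."""
    return combined_confidence(confidences) >= threshold

print(van_detected(element_confidences))  # True
```

The appeal of the combinatorial approach is robustness: no single element (e.g. the 'POLICE' text being obscured) decides the outcome on its own.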
The resulting data set (GPS coordinates, direction of travel, vehicle speed, detection time, bounding boxes, confidence rating and the captured image) is uploaded to the Cyclops platform through a new API.
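A detection upload of that shape might look like the following. The field names and values are hypothetical, sketched from the list of data items above; they are not the actual Cyclops API schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical payload shape covering the data items listed above.
payload = {
    "detected_at": datetime(2021, 6, 1, 9, 30, tzinfo=timezone.utc).isoformat(),
    "gps": {"lat": 51.5074, "lon": -0.1278},
    "heading_degrees": 182.0,        # direction of travel
    "speed_mph": 54.0,               # vehicle speed at detection time
    "confidence": 0.89,              # combined confidence rating
    "bounding_boxes": [
        {"label": "camera_van", "box": [0.42, 0.31, 0.55, 0.60]},
    ],
    "image": "<base64-encoded JPEG frame>",
}
body = json.dumps(payload)  # serialised request body for the upload API
```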
A standard live alert report is then generated and submitted to the Cyclops platform, alerting all other Cyclops system users that a mobile speed camera has been detected in their direction of travel.
WHAT WAS THE OUTCOME?
Using the proof of concept iOS application, we were able to successfully detect UK Mobile Camera Vans from a live camera feed.
WHAT TECHNOLOGY DID WE USE?