AirCamRTM: Enhancing Vehicle Detection for Efficient Aerial Camera-Based Road Traffic Monitoring

Abstract:

Efficient road traffic monitoring plays a fundamental role in resolving traffic congestion in cities. Unmanned Aerial Vehicles (UAVs), or drones, equipped with cameras are an attractive proposition for flexible and infrastructure-free traffic monitoring. However, real-time traffic monitoring from UAV imagery poses several challenges due to the large image sizes and the presence of non-relevant targets. In this paper, we propose the AirCam-RTM framework, which combines road segmentation and vehicle detection to focus only on relevant vehicles and, as a result, improves monitoring performance by approximately 2x and accuracy by approximately 18%. Furthermore, through a real experimental setup, we qualitatively evaluate the performance of the proposed approach and demonstrate how it can be used for real-time traffic monitoring and management using UAVs.

Introduction:

Road traffic monitoring (RTM) is an important component of intelligent transportation systems and is critical for providing and analyzing the traffic data used to characterize the performance of a roadway system. Information gathered from traffic monitoring can therefore identify areas of high traffic congestion that need to be addressed. The need to revolutionize the intelligent transportation sector has led to a number of technologies being employed, from GPS and WiFi to UAVs and surveillance cameras. Perhaps the most promising of these technologies are camera-equipped UAVs. The affordability of UAVs and the ease of data capture, along with advances in computer vision and deep learning, provide a great opportunity to integrate these technologies for road traffic monitoring. Such capabilities are useful for a wide range of emerging traffic monitoring applications, such as persistent monitoring of an area for traffic regulation, periodic data collection for the extraction of traffic statistics, and live traffic density estimation in the surrounding area of a moving target (e.g., for assisting emergency vehicle navigation).

Figure 1. In contrast to traditional aerial road traffic monitoring methods, AirCam-RTM enhances vehicle detection through road segmentation to improve monitoring performance and accuracy.

In the literature, most works focus on tackling a single component of the more complex process of traffic monitoring with UAVs. Research on the topic has mainly addressed the challenges of detecting vehicles in aerial and UAV imagery. Such works generally adopt generic detectors and retrain them for the task of vehicle detection and classification. They usually do not consider other algorithms, or pre- and post-processing methods, that could enable more efficient traffic monitoring. Some challenges faced by existing approaches are 1) how to attend only to the vehicles that matter to the traffic monitoring process, especially in complex urban scenes, and 2) how to reduce the computational complexity when monitoring hundreds of vehicles.

To meet the demand for more efficient monitoring with UAVs and facilitate real-time performance, we propose a composite visual processing pipeline, referred to as AirCam-RTM. The main contribution is the integration of road segmentation and vehicle detection into a synergistic, deep-learning-based framework that substantially improves overall traffic monitoring by focusing only on the regions of interest (figure above). First, we explore and evaluate different architectures for both vehicle detection and road segmentation on task-specific datasets and select the modules that provide the best accuracy-performance trade-off. Second, we integrate these techniques into a visual traffic monitoring pipeline and demonstrate how they operate synergistically to improve the overall performance. Through evaluation on a composite dataset, we show accuracy improvements of up to ~18% for road-vehicle detection and a ~2x speedup compared to a road-agnostic approach, making the pipeline more suitable for real-time operation. Finally, through a real experimental setup, we show how the framework can be used to extract important traffic state information.
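To make the road-masking idea concrete, the following Python snippet is a minimal, illustrative sketch of how detections could be restricted to the segmented road region. The function names, the box format, and the centre-pixel test are our own assumptions for illustration and are not the paper's actual implementation; any detector and segmentation network producing boxes and a binary road mask could be slotted in.

import numpy as np

def filter_detections_by_road(detections, road_mask):
    """Keep only vehicle detections whose box centre lies on the segmented road.

    detections : list of (x1, y1, x2, y2, score) boxes in pixel coordinates
    road_mask  : HxW boolean array, True where the segmentation network
                 labelled the pixel as road (assumed input, not the paper's API)
    """
    kept = []
    h, w = road_mask.shape
    for (x1, y1, x2, y2, score) in detections:
        # Use the box centre as a cheap proxy for "vehicle is on the road".
        cx = int(np.clip((x1 + x2) / 2, 0, w - 1))
        cy = int(np.clip((y1 + y2) / 2, 0, h - 1))
        if road_mask[cy, cx]:
            kept.append((x1, y1, x2, y2, score))
    return kept

if __name__ == "__main__":
    # Dummy data: a vertical road strip in a 1080p frame.
    road_mask = np.zeros((1080, 1920), dtype=bool)
    road_mask[:, 800:1100] = True

    detections = [
        (850, 200, 950, 260, 0.92),   # centre on the road  -> kept
        (100, 500, 180, 560, 0.88),   # centre off the road -> discarded
    ]
    print(filter_detections_by_road(detections, road_mask))

Discarding off-road candidates in this way (or, equivalently, cropping the frame to the road region before detection) is what yields the reduced per-frame workload and the accuracy gain over a road-agnostic approach described above.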
