E_WP3: Unified Detection, Localisation and Classification (DLC) in complex environments

The aim of this work package is to understand and model difficult and complex environments. Traditional algorithms for detection, classification or identification are based on simplistic models of noise, clutter or multipath, and most therefore fail to achieve useful or meaningful results. We aim to develop realistic, physics-based models of the full sensing chain, from the sensors themselves to the complex interaction with clutter and targets and the propagation through the environment. A physical understanding of the clutter, rather than ad-hoc and simplistic statistical models, will help us develop new DLC algorithms with optimal performance at reduced computational cost, as well as in-situ adaptation to the environment for greater robustness.

E_WP 3.1 Estimating targets in scenarios with spatio-temporally correlated clutter

To date, target tracking algorithms have relied on a Poisson clutter model that is spatially uniform and uncorrelated. While this assumption greatly simplifies the tracking algorithms, it does not reflect realistic environments. New algorithms are needed to estimate multiple targets in complex, realistically cluttered environments.
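
The gap between the Poisson assumption and correlated clutter can be illustrated with a simple point-process simulation. The sketch below (all rates and spreads purely illustrative, not part of the proposed algorithms) draws clutter from a homogeneous Poisson process and from a Thomas cluster process, whose points bunch around parent scattering centres in the way real clutter patches do.

```python
import numpy as np

rng = np.random.default_rng(0)
AREA = 100.0  # surveillance region is [0, 100] x [0, 100] (arbitrary units)

def poisson_clutter(rate=0.02):
    """Homogeneous Poisson clutter: spatially uniform and uncorrelated."""
    n = rng.poisson(rate * AREA * AREA)
    return rng.uniform(0.0, AREA, size=(n, 2))

def clustered_clutter(parent_rate=0.002, mean_children=10, spread=2.0):
    """Thomas cluster process: spatially correlated clutter, with returns
    bunched around parent scattering centres (e.g. foliage patches)."""
    parents = rng.uniform(0.0, AREA,
                          size=(rng.poisson(parent_rate * AREA * AREA), 2))
    points = [p + rng.normal(0.0, spread, size=(rng.poisson(mean_children), 2))
              for p in parents]
    return np.vstack(points) if points else np.empty((0, 2))

# Roughly equal expected densities, very different spatial structure.
print("Poisson clutter points:  ", len(poisson_clutter()))
print("Clustered clutter points:", len(clustered_clutter()))
```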

E_WP 3.2 Physical Modelling for DLC

Explicit physical modelling of target and clutter, and development of adaptive algorithms for detection, classification and identification.
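
As a minimal illustration of why the physical clutter model matters, the sketch below assumes compound-Gaussian (K-distributed) clutter, a common physics-motivated model for spiky returns, and shows how a detection threshold calibrated under a Gaussian clutter assumption misstates the false-alarm rate. The distribution choice and parameters are assumptions for illustration only, not a fixed design decision.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
shape_nu = 0.5        # K-distribution shape; small nu => spiky clutter
mean_power = 1.0

# Compound-Gaussian (K-distributed) intensity: gamma texture x exponential speckle.
texture = rng.gamma(shape_nu, mean_power / shape_nu, size=N)
k_intensity = texture * rng.exponential(1.0, size=N)

# Gaussian (Rayleigh-envelope) clutter of the same mean power.
gauss_intensity = rng.exponential(mean_power, size=N)

# Threshold set for a nominal 1e-3 false-alarm rate under the Gaussian model.
threshold = -mean_power * np.log(1e-3)
print("Nominal Pfa (Gaussian model):", np.mean(gauss_intensity > threshold))
print("Actual  Pfa (K clutter):     ", np.mean(k_intensity > threshold))
```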

E_WP 3.3 Man-made object detection

With the introduction of IEDs, new algorithms need to work with no prior target model. Clutter rejection then becomes a key issue for man-made object detection.
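
One standard detector that requires no prior target model is the Reed-Xiaoli (RX) anomaly detector, which scores each multi-band sample by its Mahalanobis distance from the background statistics. The sketch below, on synthetic 4-band data, is only meant to illustrate this style of clutter rejection, not the algorithms to be developed here.

```python
import numpy as np

def rx_scores(pixels):
    """Reed-Xiaoli (RX) anomaly detector: Mahalanobis distance of each
    multi-band sample from the background mean/covariance, with no target
    model. `pixels` has shape (n_samples, n_bands)."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    centred = pixels - mu
    sol = np.linalg.solve(cov, centred.T)          # cov^{-1} (x - mu)
    return np.einsum('ij,ji->i', centred, sol)     # (x - mu)^T cov^{-1} (x - mu)

# Toy scene: 4-band background clutter plus one spectrally distinct sample.
rng = np.random.default_rng(2)
background = rng.multivariate_normal(np.zeros(4), np.eye(4), size=999)
scene = np.vstack([background, np.full((1, 4), 4.0)])   # last row is "man-made"
scores = rx_scores(scene)
print("Most anomalous sample index:", int(np.argmax(scores)))  # expect 999
```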

This work relates to Dstl technical challenge #8 (Maximising Information Capture from LiDAR returns) and challenge #12 (Signal Processing Algorithms and Techniques to Manage Noisy 3D point clouds). In particular, we address the following problem: "How best to process Multi-Spectral full-waveform LiDar (MSL) data and resulting λ-D spectral point cloud, sensed using ground based/aerial LiDAR sensors to improve situational awareness?"

Recent work in Edinburgh has focussed on both large- and small-footprint sensing using active multi-spectral LiDARs (MSLs) to retrieve structural and physiological properties of vegetation. In this work, we consider how Anomaly Detection (AD) and Automatic Target Recognition (ATR) scenarios can benefit from spectrally enhanced LiDAR sensors in complex and cluttered environments: for example, a land-based scenario detecting targets hidden and camouflaged under dense foliage, or a bathymetric scenario looking for objects of interest off a shore or in shallow waters.

The issue of how to allocate computational resources in such multi-sensor systems will be considered in conjunction with E_WP6.3. These ideas could be used to assess and detect potential threats from the point of view of the land vehicle, with reference to mapping data where available.

This ongoing work outlines an efficient target localisation and recognition framework that operates over large distances and provides robust foliage penetration in cluttered urban and forest environments. We extend traditional discrete-return LiDAR sensors to Multi-Spectral full-waveform LiDAR (MSL) sensors that measure the intensity of light reflected from objects continuously over time, across several bands of the EM spectrum.
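
To make this concrete, the sketch below represents one beam of hypothetical MSL data as a (bands x time-bins) array of synthetic Gaussian-shaped echoes (a canopy return and a weaker, band-dependent return from a hidden surface) and reduces each band to discrete returns by simple peak detection; the waveform shapes, reflectances and thresholds are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

# A hypothetical MSL waveform cube for one beam: (n_bands, n_time_bins),
# synthesised as two Gaussian echoes (canopy + hidden surface) plus noise.
rng = np.random.default_rng(3)
t = np.arange(400)                       # time bins (range gates)
n_bands = 4
cube = np.empty((n_bands, t.size))
for b in range(n_bands):
    canopy = 0.8 * np.exp(-0.5 * ((t - 120) / 6.0) ** 2)
    ground = (0.3 + 0.1 * b) * np.exp(-0.5 * ((t - 260) / 4.0) ** 2)  # band-dependent reflectance
    cube[b] = canopy + ground + rng.normal(0.0, 0.02, t.size)

# Per-band return extraction: peaks above a noise-derived threshold.
for b in range(n_bands):
    noise_sigma = np.std(cube[b][:50])               # leading bins assumed target-free
    peaks, props = find_peaks(cube[b], height=5 * noise_sigma, distance=10)
    print(f"band {b}: returns at time bins {peaks.tolist()}, "
          f"peak intensities {np.round(props['peak_heights'], 2).tolist()}")
```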

When operating over a large urban/forest environment, we aim to detect anomalies in the signal domain by processing the Full-Waveform (FW) backscatter. Once such anomalies are localised, we perform a dense small-footprint LiDAR scan of the region to generate a dense point cloud and perform object classification.
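
The two-stage logic might be sketched as follows; the anomaly score, the dense-scan tasking and the classifier are deliberately crude placeholders chosen only to make the control flow explicit, not the components of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(4)

def waveform_anomaly_score(waveform):
    """Signal-domain score for one FW beam (placeholder: energy in the late
    part of the waveform, where a return behind foliage would appear)."""
    return float(np.sum(waveform[200:] ** 2))

def dense_scan(region):
    """Placeholder for tasking the small-footprint scanner over a region;
    here it simply fabricates a random point cloud."""
    return rng.normal(size=(500, 3))

def classify(point_cloud):
    """Placeholder classifier over the dense point cloud (the real system
    would use e.g. spin-image or histogram features plus a trained model)."""
    return "man-made" if len(point_cloud) > 0 else "unknown"

# Stage 1: coarse, large-footprint FW survey of the scene (here: synthetic beams).
beams = {region: rng.normal(0.0, 0.02, 400) for region in ["A", "B", "C"]}
beams["B"][250:260] += 0.5            # inject a late, foliage-penetrating return

# Stage 2: only anomalous regions receive the expensive dense scan + classification.
threshold = 3.0 * np.median([waveform_anomaly_score(w) for w in beams.values()])
for region, w in beams.items():
    if waveform_anomaly_score(w) > threshold:
        label = classify(dense_scan(region))
        print(f"region {region}: anomaly confirmed, classified as {label}")
```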

Figure 1: (A) Once the object is localised, a dense scan is carried out to obtain a dense point cloud. (B) shows a rendered forest scene and (C) the corresponding range image, with man-made objects (buildings and two T-90 tanks).

Figure 2: Anomalies are a function of reflectance (β), wavelength (λ) and range (m). Here we aim to detect a man-made object (target) hidden behind dense foliage; in this illustration, a T-90 is hidden underneath a conifer tree.

Figure 3: Comparison, using confusion matrices and F1 scores, of Local Regional Histograms (left) and robust Spin Images (right). Some object exemplars used in the study are also shown.
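
For reference, the F1 scores reported in Figure 3 follow directly from the confusion matrices; the sketch below computes per-class and macro F1 from a toy 3-class confusion matrix (invented numbers, not the study's results).

```python
import numpy as np

# Toy 3-class confusion matrix (rows: true class, columns: predicted class).
cm = np.array([[48,  2,  0],
               [ 3, 44,  3],
               [ 1,  4, 45]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)          # TP / predicted-positive per class
recall    = tp / cm.sum(axis=1)          # TP / actual-positive per class
f1 = 2 * precision * recall / (precision + recall)
print("per-class F1:", np.round(f1, 3), " macro-F1:", round(float(f1.mean()), 3))
```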