Track 3: Deep Neural Networks and Machine Learning Methods

WP3.1 Robust Generative Neural Networks

In this work package we will develop new underpinning machine learning methods that learn from sparse, heterogeneous, multi-modal data whilst remaining reliable, trustworthy and robust.
Generative models (GMs) can describe complex signal models by learning the distribution manifold via latent variables, and offer anomaly-detection capability by evaluating the likelihood of new, unseen data. Defence scenarios, however, are typically complex: data are limited, sources are heterogeneous, and data distributions evolve over time. The challenge in this research is to robustly learn GMs that can handle such multi-modal and dynamic data.
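To make the likelihood-based anomaly-detection idea concrete, here is a minimal sketch (not from the work package) in which a single fitted Gaussian stands in for a learned generative model: new samples whose log-likelihood falls below a threshold set on the training data are flagged as anomalous. All names and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: 2-D samples from the nominal distribution.
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

# Fit a simple Gaussian density as a stand-in for a learned generative model.
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
cov_inv = np.linalg.inv(cov)
log_det = np.linalg.slogdet(cov)[1]

def log_likelihood(x):
    """Log-density of x under the fitted Gaussian."""
    d = x - mu
    maha = np.einsum('...i,ij,...j->...', d, cov_inv, d)
    k = x.shape[-1]
    return -0.5 * (maha + log_det + k * np.log(2 * np.pi))

# Threshold at, say, the 1st percentile of training log-likelihoods.
threshold = np.percentile(log_likelihood(train), 1)

def is_anomaly(x):
    """Flag samples whose likelihood is lower than almost all training data."""
    return bool(log_likelihood(x) < threshold)

print(is_anomaly(np.array([0.1, -0.2])))   # in-distribution -> False
print(is_anomaly(np.array([6.0, 6.0])))    # far from the data -> True
```

A learned GM (e.g. a deep latent-variable model) would replace the Gaussian, but the decision rule — score new data by model likelihood, flag the low-likelihood tail — is the same.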
Generative adversarial networks (GANs) are state-of-the-art GMs that learn data distributions by playing a game between two neural networks: a generator that synthesises data and a discriminator that tries to distinguish the synthetic from the real. Prior work points to the great potential of GANs for anomaly detection, yet it has relied on predetermined features and heuristic decisions, and has not handled multi-modal sources. We will develop new GANs for multi-modal anomaly detection, leveraging the discriminator as the detector.
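The adversarial game can be sketched at toy scale. The following NumPy example (illustrative, not the WP's method) trains a one-parameter generator G(z) = z + g_b against a logistic discriminator on 1-D data, with hand-derived gradients; the generator's shift is driven towards the real data mean.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

REAL_MEAN = 2.0  # toy "real" data distribution: N(2, 1)

g_b = 0.0        # generator: G(z) = z + g_b, with z ~ N(0, 1)
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + b)

lr_d, lr_g, batch = 0.1, 0.02, 64

for _ in range(4000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + g_b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient descent on -log D(fake) (non-saturating loss);
    # d(-log D)/d g_b = -(1 - D(fake)) * w, averaged over the batch.
    d_fake = sigmoid(w * fake + b)
    g_b += lr_g * np.mean((1 - d_fake) * w)

print(f"generator shift after training: {g_b:.2f}")
```

In GAN-based anomaly detection the trained discriminator's score on a new sample is reused as (part of) the anomaly score; this toy is deliberately too small to show that, but it illustrates the two-player training the WP builds on.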

WP3.2 Verifiable Deep Learning
As Deep Neural Networks (DNNs) are deployed in ever more applications, many of them mission-critical, it becomes ever more important to be confident that their outputs can be relied upon for decision-making. This is a particularly salient concern when DNN-based systems are deployed and exposed to new, unexpected, and potentially adversarial inputs.
This WP addresses methods for certifying and verifying that DNNs are fit for purpose, even when extrapolating in the presence of novel inputs. From a certification perspective, we will explore both meta-learning-based approaches for increasing reliability in a probabilistic sense and formal approaches for reliability guarantees. For verification, we will develop explainable artificial intelligence (XAI) approaches, so that the system's reasoning process can be manually checked for validity, as well as theoretical approaches to guarantee that a DNN's decisions are robust to adversarial attacks.
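One standard flavour of formal robustness guarantee is interval bound propagation: push an L-infinity ball of possible inputs through the network and check that no point in the ball can flip the predicted class. The sketch below (illustrative weights, not the WP's models) certifies a tiny two-layer ReLU classifier.

```python
import numpy as np

# Illustrative weights for a tiny two-layer ReLU classifier (2 inputs, 2 classes).
W1 = np.array([[1.0, -0.5],
               [0.5,  1.0]])
b1 = np.zeros(2)
W2 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])
b2 = np.zeros(2)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def interval_bounds(x, eps):
    """Propagate the L-infinity ball [x - eps, x + eps] through the network."""
    lo, hi = x - eps, x + eps
    for W, bias in ((W1, b1), (W2, b2)):
        # Split weights by sign so each bound uses the worst-case input bound.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + bias, Wp @ hi + Wn @ lo + bias
        if W is W1:  # ReLU after the hidden layer only; ReLU is monotone
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def certified(x, eps):
    """True if the prediction provably cannot change inside the ball."""
    c = int(np.argmax(forward(x)))
    lo, hi = interval_bounds(x, eps)
    return bool(lo[c] > np.delete(hi, c).max())

x = np.array([2.0, 0.0])
print(certified(x, 0.1))   # True: class 0 is certified for eps = 0.1
print(certified(x, 1.5))   # False: the bounds are too loose at eps = 1.5
```

The guarantee is sound but conservative: a `False` answer means "could not certify", not "an adversarial example exists" — which is why tighter relaxations are an active research topic.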

  • L. Ericsson, H. Gouk and T. Hospedales, "How Well Do Self-Supervised Models Transfer?," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 5410-5419. doi: 10.1109/CVPR46437.2021.00537

WP3.3 Domain Adaptation with Low-Quality Target Data

Complex defence-specific tasks cannot be solved from data and labels alone; knowledge must be transferred and fused across sensors and modalities. When attempting to achieve local situational awareness in urban scenarios, user requirements change in response to available human intelligence: which targets to search for and how quickly, the mission duration, and the time of day or night. Consequently, we must tailor our knowledge-transfer and sensor-fusion strategies to the mission at hand and the available resources.

We will explore this topic through the lens of Domain Adaptation (DA). DA is a form of transfer learning (TL) that uses labelled data from one or more source domains to perform new tasks in a target domain. Most imaging techniques utilised in the defence industry, however, produce noisy, low-quality data, which widens the domain gap that DA must bridge. Our focus in this research will therefore be on DA in low-quality settings, one of the most challenging environments for DA algorithms. Many factors can degrade image quality, including noise, sensor-data integration, sensitivity, distortion and artefacts.

Low-quality DA remains largely unexplored. Our research aims to demonstrate its applicability to real-world environments such as defence. It will enable us to dynamically adjust sensing and fusion strategies in response to varying sensor/modality reliability and current mission requirements, so as to maximise the collection of pertinent information.
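As a concrete baseline for what "bridging a domain gap" means, the sketch below implements CORAL-style correlation alignment — a common, simple DA technique, chosen here for illustration rather than drawn from the WP. Source features are whitened and then re-coloured so their second-order statistics match the target domain; the domain names and data are synthetic.

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """Align the mean and covariance of source features to the target
    (CORAL-style whitening + re-colouring; a standard shallow DA baseline)."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)  # ridge keeps SPD
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def sqrtm(m, inv=False):
        """Matrix square root (or inverse root) of an SPD matrix via eigh."""
        vals, vecs = np.linalg.eigh(m)
        vals = 1.0 / np.sqrt(vals) if inv else np.sqrt(vals)
        return (vecs * vals) @ vecs.T

    # Whiten the centred source features, then re-colour with the target
    # covariance and shift onto the target mean.
    centred = source - source.mean(axis=0)
    return centred @ sqrtm(cs, inv=True) @ sqrtm(ct) + target.mean(axis=0)

rng = np.random.default_rng(0)
# Synthetic "clean source" and "degraded target" feature clouds.
source = rng.normal(size=(500, 3)) @ np.diag([1.0, 2.0, 0.5])
target = rng.normal(size=(500, 3)) @ np.diag([0.5, 1.0, 2.0]) + 1.0

aligned = coral(source, target)
```

After alignment, a classifier trained on `aligned` source features transfers more readily to the target domain; low-quality settings stress exactly this step, since noise and artefacts corrupt the statistics being matched.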