Track 3: Deep Neural Networks and Machine Learning Methods

WP3.1 Robust Generative Neural Networks
In this work package we will develop new underpinning machine learning methods that learn from sparse, heterogeneous, multi-modal data whilst remaining reliable, trustworthy and robust.
Generative models (GMs) can describe complex signals by learning the distribution manifold via latent variables, and offer anomaly detection capability by evaluating the likelihood of new, unseen data. Defence scenarios, however, are typically challenging due to limited data, heterogeneous sources, and data distributions that evolve over time. The challenge in this research is to robustly learn GMs that can handle such multi-modal and dynamic data.
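As a minimal illustration of the likelihood-based detection principle, the sketch below flags test samples whose log-likelihood under a fitted generative model falls below a threshold; a Gaussian mixture stands in for the deep generative models to be developed here, and the synthetic data and percentile cut-off are illustrative assumptions only.

    # Likelihood-thresholding sketch: a Gaussian mixture stands in for
    # a deep generative model; data and threshold are illustrative.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    nominal = rng.normal(0.0, 1.0, size=(1000, 4))   # nominal training data
    test = rng.normal(3.0, 1.0, size=(10, 4))        # shifted, likely anomalous

    gm = GaussianMixture(n_components=3, random_state=0).fit(nominal)
    train_ll = gm.score_samples(nominal)             # per-sample log-likelihood
    threshold = np.percentile(train_ll, 1)           # 1st-percentile cut-off (assumed)

    is_anomaly = gm.score_samples(test) < threshold  # low likelihood => anomaly
    print(is_anomaly)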
Generative adversarial networks (GANs) are state-of-the-art GMs that learn data distributions through a game between two neural networks: a generator that synthesizes data and a discriminator that tries to distinguish the synthetic data from the real. Prior work points to the great potential of GANs in anomaly detection, but has relied on predetermined features and heuristic decisions, and has not handled multi-modal sources. We will develop new GANs for multi-modal anomaly detection, leveraging the discriminator as the detector.
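The discriminator-as-detector idea can be sketched in a few lines for a single modality; the tiny architectures, training loop and synthetic "sensor" data below are illustrative assumptions, and extending the scheme to multi-modal sources is precisely the research question of this WP.

    # Single-modality GAN sketch in PyTorch; after training, a low
    # discriminator score marks inputs that look unlike the real data.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 8, 4
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real = torch.randn(256, data_dim)   # stand-in for nominal sensor data
    ones, zeros = torch.ones(256, 1), torch.zeros(256, 1)

    for step in range(200):
        # Discriminator update: separate real from generated samples.
        fake = G(torch.randn(256, latent_dim)).detach()
        d_loss = bce(D(real), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator update: try to fool the discriminator.
        g_loss = bce(D(G(torch.randn(256, latent_dim))), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    x_new = torch.randn(5, data_dim) + 3.0              # shifted test inputs
    anomaly_score = 1.0 - D(x_new).detach().squeeze(1)  # higher => more anomalous
    print(anomaly_score)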

WP3.2 Verifiable Deep Learning
As Deep Neural Networks (DNNs) are deployed in a growing number of increasingly mission-critical applications, it becomes ever more important to be confident that their outputs can be relied upon for decision-making. This concern is particularly salient when DNN-based systems are deployed and exposed to new, unexpected, and potentially adversarial inputs.
This WP addresses methods for certifying and verifying that DNNs are fit for purpose, even when extrapolating to novel inputs. From a certification perspective, we will explore both meta-learning approaches that increase reliability in a probabilistic sense and formal approaches that provide reliability guarantees. For verification, we will develop explainable artificial intelligence (XAI) approaches, so that the system's reasoning process can be manually checked for validity, as well as theoretical approaches that guarantee the DNN's decisions are robust to adversarial attacks.
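To illustrate what a formal robustness guarantee can look like, the sketch below uses interval bound propagation (one standard certification technique, named here as an example rather than a committed design choice) to check that no perturbation within an L-infinity ball of radius eps can change a small ReLU network's predicted class; the random weights and the radius are illustrative assumptions.

    # Interval bound propagation (IBP) sketch: propagate input intervals
    # through affine+ReLU layers and check the worst-case class margin.
    import numpy as np

    def interval_affine(l, u, W, b):
        # Bounds of W @ x + b when x lies in the box [l, u].
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

    def certify(x, eps, layers, true_class):
        # Returns True if the prediction provably cannot change for any
        # perturbation with L-infinity norm at most eps.
        l, u = x - eps, x + eps
        for i, (W, b) in enumerate(layers):
            l, u = interval_affine(l, u, W, b)
            if i < len(layers) - 1:              # ReLU on hidden layers only
                l, u = np.maximum(l, 0), np.maximum(u, 0)
        rivals = np.delete(u, true_class)        # upper bounds of other logits
        return l[true_class] - rivals.max() > 0  # certified decision margin

    rng = np.random.default_rng(0)               # illustrative random network
    layers = [(rng.normal(size=(16, 8)), np.zeros(16)),
              (rng.normal(size=(3, 16)), np.zeros(3))]
    x = rng.normal(size=8)
    print(certify(x, eps=0.01, layers=layers, true_class=0))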

WP3.3 Deep Reinforcement Learning and Multi-Task Fusion
There is a need to transfer and fuse knowledge across sensors and modalities in response to complex defence-specific tasks whose requirements cannot simply be derived from data and labels. For example, when striving for local situational awareness in urban scenarios, there are multiple user requirements that change in response to available human intelligence, e.g. which targets to search for and how quickly, the length of the mission, and the time of day or night. The knowledge-transfer and sensor-fusion strategy must therefore vary with the current mission and the available resources.
This research will address the problem of task-specific sensor management and fusion using deep reinforcement learning (DRL). DRL can dynamically adjust sensing and fusion strategies in response to the varying reliability of sensors/modalities and current mission requirements, in order to maximise the gathering of relevant information. Using DRL means that defence-relevant metrics, such as the accuracy-latency trade-off or time-to-detection, can be optimised directly, without being constrained to differentiable objectives or myopic sensing policies. These models can also fuse heterogeneous modalities, for example by embedding graph-structured data from social networks (see also WP1.3) and identifying favourable side information from different modalities.
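As a deliberately stripped-down, stateless stand-in for the full DRL formulation, the sketch below learns which of two simulated sensors to task when the reward (detection minus a latency cost) is non-differentiable; the sensor characteristics and learning rates are illustrative assumptions, and this WP will replace the tabular update with deep, multi-step policies over heterogeneous modalities.

    # Bandit-style sketch of sensor management with a non-differentiable
    # reward (detection vs latency); all numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    detect_prob = np.array([0.3, 0.8])   # assumed per-sensor detection probability
    latency_cost = np.array([0.1, 0.3])  # assumed per-sensor latency penalty

    q = np.zeros(2)                      # value estimate for tasking each sensor
    alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

    for step in range(5000):
        # Epsilon-greedy choice of which sensor to task next.
        a = int(rng.integers(2)) if rng.random() < epsilon else int(np.argmax(q))
        detected = rng.random() < detect_prob[a]
        reward = float(detected) - latency_cost[a]   # detection/latency trade-off
        q[a] += alpha * (reward - q[a])              # incremental value update

    print(q)  # sensor 1 should score higher despite its greater latency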