Archived UDRC Technical Challenges

  1. Enhancing Electronic Surveillance Performance in High Signal Densities

    Electronic Surveillance systems can be overwhelmed by environments of high signal density, inhibiting the ability to associate multiple pulses with a common emitter. Modern RADAR systems are becoming more complex in operation, incorporating features such as pseudo-random frequency changes, variations in transmission type and active beamforming. Such agility also inhibits pulse association. Moreover, modern communications systems are encroaching into traditional RADAR frequency bands; for example, a frequency-hopping transmission may be interpreted as a large number of single-pulse emitters. This challenge is to establish novel signal processing methods to overcome the problems faced by future Electronic Surveillance systems in high pulse density environments.
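
    As a toy illustration of pulse association, the sketch below builds an SDIF-style histogram of time-of-arrival differences, in which peaks accumulate at each emitter's pulse repetition interval (PRI) and its low-order multiples. It is a minimal sketch in Python/NumPy; all pulse parameters are illustrative, and agile emitters of the kind described above would defeat this simple approach.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two interleaved pulse trains with different PRIs (values illustrative).
    pri_a, pri_b = 13.0e-6, 29.0e-6                      # seconds
    t_a = np.arange(0.0, 2e-3, pri_a)
    t_b = np.arange(0.0, 2e-3, pri_b)
    toa = np.sort(np.concatenate([t_a + rng.normal(0, 50e-9, t_a.size),
                                  t_b + rng.normal(0, 50e-9, t_b.size)]))

    # Histogram all pairwise TOA differences inside a search window:
    # peaks accumulate at each PRI and its low-order multiples.
    d = toa[None, :] - toa[:, None]
    d = d[(d > 0) & (d < 60e-6)]
    hist, edges = np.histogram(d, bins=600)
    centres = 0.5 * (edges[:-1] + edges[1:])
    top = np.sort(centres[np.argsort(hist)[-6:]])
    print("candidate PRIs and multiples (us):", np.round(top * 1e6, 1))
    ```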

  2. Extending Frequency Coverage of Electronic Surveillance Systems

    In Electronic Surveillance, the required frequency range of receiver systems is increasing as RADAR and communications systems extend their frequency ranges of operation. Technologies such as compressed sensing offer the ability to sense over an extended frequency range, but there remains a challenge to develop efficient algorithms that process and understand wideband data collections and present meaningful information to a human operator. Simply applying more processing power is not a practical or sustainable solution. Alternative processing approaches are sought that can handle high data rates and perform data reduction whilst retaining the increased information content.

  3. Hierarchically Decomposable Signal Processing Quality Metrics

    Signal processing is generally conducted in a number of steps, each of which has associated metrics of quality. Enabling a set of distributed sensors to operate collectively and collaboratively in an intelligent manner requires the ability to go from high-level goals down to lower-level goals - and, as such, requires an understanding of the links between the 'quality' metrics of different steps in the processing chain. The challenge is to develop a sound (logical/mathematical) framework to link the metrics used at different stages of the signal processing chain. For example, this would enable a goal expressed at the track quality level, in terms of continuity, clarity or completeness, to be decomposed along the signal processing chain into the actions required at lower levels.

  4. Mass Movement Signal Processing

    A number of sensing techniques are able to produce very large track-based data sets of the movement of objects (some of which can carry other objects). This results in three sub-challenges: (a) deriving understanding from this information by 'mining' the data sets for patterns of 'known'/'hypothesised' interest (inc. temporal patterns of change) and 'anomalies'; (b) summarising such a large data set so that humans can understand/visualise the broad patterns of movement, and thus use this to support reasoning about the potential impact of, for example, a road being blocked; (c) understanding how the quality of the track-based picture impacts on the quality of understanding achieved (e.g. from (a) and (b)) for different types of object behaviour, track lifetimes and densities.

  5. 3D Synthetic Aperture Radar (SAR) Processing

    This challenge is to devise a sparse sampling processing algorithm which will produce 3D SAR images with reasonable vertical resolution from a limited number of aircraft SAR passes with relatively large vertical increments; for example, by using compressed sampling techniques. In the 3D technique an aircraft flies a number of passes past a vertically extended target at different heights. One refinement is to use Low Frequency (LF) SAR to image buildings internally in 3D. It can be shown that, in order to achieve good vertical resolution using conventional methods, an impracticably large number of passes is required with very fine vertical spacing. There is therefore a need to simplify the technique by exploiting sparse sampling methods.
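
    A minimal sketch of the sparse-reconstruction idea, assuming a simplified partial-Fourier stand-in for the pass-to-pass phase history: orthogonal matching pursuit (OMP) recovers a sparse vertical scatterer profile from far fewer passes than conventional Fourier processing would need. All dimensions and values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 128, 16, 3        # height bins, passes (measurements), scatterers

    # Simplified sensing model: each pass samples one vertical spatial frequency.
    rows = rng.choice(n, size=m, replace=False)
    A = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)

    x = np.zeros(n, complex)                             # sparse vertical profile
    idx = rng.choice(n, size=k, replace=False)
    x[idx] = rng.normal(size=k) + 1j * rng.normal(size=k)
    y = A @ x                                            # noise-free for clarity

    # Orthogonal matching pursuit: greedily grow the support, refit, repeat.
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        r = y - As @ coef

    print("true support     :", np.sort(idx))
    print("estimated support:", np.sort(support))
    ```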

  6. Shape Boundaries in SAR Images

    This challenge is to define and determine the location of a multiple-scale ragged edge in a very high resolution SAR image, for example by using concepts from the geometry of fractal sets (Hausdorff measure). Edge detection is an old problem which is nevertheless still a topic of current research interest, and it is a basic building block of shape recognition algorithms. SAR images are dominated by speckle, which is very noisy, so conventional edge detectors do not perform well on them. The speckle size sets the inner scale for these images - this will be very small for future high resolution systems. The outer scale is set by the targets, and there are multiple scales in between, hence the need for multi-scale techniques.
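
    One speckle-aware baseline is the ratio-of-averages edge detector, which compares local means on either side of each pixel and is robust to multiplicative noise where gradient detectors are not. A minimal single-scale sketch (Python/NumPy/SciPy) follows; the window size and synthetic scene are illustrative, and the multi-scale extension is the open part of the challenge.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(2)

    # Synthetic two-region intensity image under exponential speckle.
    img = np.ones((128, 128))
    img[:, 64:] = 4.0
    img *= rng.exponential(1.0, img.shape)

    # Ratio of means of windows to the left and right of each pixel;
    # min(r, 1/r) is small on an edge and near 1 in homogeneous speckle.
    w = 7
    m = uniform_filter(img, size=w)
    left = np.roll(m, w // 2 + 1, axis=1)
    right = np.roll(m, -(w // 2 + 1), axis=1)
    ratio = np.minimum(left / right, right / left)

    row = ratio[20, w:-w]                       # one row, wrap columns excluded
    print("edge detected near column:", w + np.argmin(row))   # expect ~64
    ```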

  7. To Decode Periodic Information in Incomplete and Noisy Signals

    Signals measured by a microphone or hydrophone contain more information than just the frequency content: temporal characteristics and periodicity are two examples. Many animals depend on reliable identification of this periodicity in the acoustic signals they perceive in their natural environment in order to navigate and communicate. Periodicity can be utilised to discriminate between man-made and natural objects or events, but the background can contain many persistent objects or events that appear target-like (but are not) and can confuse existing methods. The challenge is to develop signal processing that can decode weak, multiple, periodic information in noisy acoustic signals, including when the information is degraded or distorted by the background. This could be used to detect and identify objects observed in complex underwater environments with noise and clutter.
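
    A minimal baseline for the periodicity part of the problem is autocorrelation, which concentrates the energy of a repetitive pulse train at lags equal to its repetition period. The sketch below (Python/NumPy, illustrative parameters) detects a weak click train in white noise; the open challenge is doing this for multiple, distorted periodicities against real backgrounds.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs = 8000.0                                  # sample rate (Hz)
    n = int(fs * 4.0)                            # 4 s of data

    # Weak pulse train (16-sample clicks every 0.25 s) in strong white noise.
    x = rng.normal(0.0, 1.0, n)
    period = int(0.25 * fs)
    for start in range(0, n - 16, period):
        x[start:start + 16] += 2.0

    # Linear autocorrelation via FFT; a peak appears at the repetition period.
    X = np.fft.rfft(x, 2 * n)
    ac = np.fft.irfft(np.abs(X) ** 2)[:n]
    ac /= ac[0]

    lag = period // 2 + np.argmax(ac[period // 2:2 * period])
    print(f"estimated repetition period: {lag / fs:.3f} s (true 0.250 s)")
    ```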

  8. Maximising the Information Capture from Lidar Returns

    Sensors based on current Lidar instruments may suffer from a limited number of multiple returns and poor range estimation. Additionally, there is little information about the characteristics of the illuminated surface. However, latest-generation full-waveform Lidar systems are able to record the entire waveform of each received laser pulse. The challenge is to use this waveform data to create more accurate and reliable target images than has previously proven possible. Recent work has developed a new waveform decomposition approach, Rigorous Gaussian Detection (RGD). The challenge is to implement full-waveform processing in real time and assess the increase in precision of target detection. A further challenge is to investigate the use of waveform-derived information, such as pulse width and backscatter cross-section, to extract potential target information.
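
    The sketch below shows generic Gaussian decomposition of a synthetic waveform with SciPy's least-squares fitter - not the RGD algorithm itself, whose details are not given here - to illustrate how return time, pulse width and amplitude (and their uncertainties) fall out of a waveform fit. All values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gauss(t, a1, t1, s1, a2, t2, s2):
        """Sum of two Gaussian pulses."""
        return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 100.0, 512)                 # ns (illustrative)
    truth = (1.0, 40.0, 3.0, 0.4, 55.0, 4.0)         # e.g. canopy + ground
    wave = two_gauss(t, *truth) + rng.normal(0, 0.02, t.size)

    p0 = (0.8, 35.0, 5.0, 0.5, 60.0, 5.0)            # coarse initial guess
    popt, pcov = curve_fit(two_gauss, t, wave, p0=p0)
    err = np.sqrt(np.diag(pcov))                     # 1-sigma uncertainties
    print(f"return times: {popt[1]:.2f} +/- {err[1]:.2f} ns, "
          f"{popt[4]:.2f} +/- {err[4]:.2f} ns")
    print(f"pulse widths: {abs(popt[2]):.2f} ns, {abs(popt[5]):.2f} ns")
    ```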

  9. Recognition of Orthogonal Frequency Division Multiplexing (OFDM) in Low SNR and Dense Signal Environments

    The world of communications is rapidly changing and there is an ever-increasing demand for high data rate communications links. Modern communications standards such as Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE) use Orthogonal Frequency Division Multiplexing (OFDM) as their core technology. OFDM is spectrally efficient and resistant to multipath fading. Standard techniques used to recognise and identify OFDM signals involve periodicity and statistical normality tests. However, recognising and identifying OFDM signals in low (negative) SNR conditions and dense signal environments is a difficult problem requiring more advanced processing and spectrum sensing techniques. The rapidly evolving world of communications means that a blind solution to this technical challenge is required.
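
    A minimal sketch of the standard periodicity feature: the cyclic prefix makes the signal correlate with itself at a lag of one useful symbol length, and averaging that lagged correlation over many symbols pushes the detector below 0 dB SNR. Parameters below are illustrative, not those of any particular standard.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_fft, n_cp, n_sym = 64, 16, 400                 # illustrative numerology

    # Random QPSK OFDM burst: IFFT each symbol, prepend its cyclic prefix.
    syms = (rng.choice([-1.0, 1.0], (n_sym, n_fft))
            + 1j * rng.choice([-1.0, 1.0], (n_sym, n_fft))) / np.sqrt(2)
    td = np.fft.ifft(syms, axis=1) * np.sqrt(n_fft)  # unit-power time domain
    x = np.concatenate([td[:, -n_cp:], td], axis=1).ravel()

    # Add complex white noise at -3 dB SNR.
    snr = 10 ** (-3.0 / 10)
    sigma = np.sqrt(1.0 / (2 * snr))
    x = x + sigma * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

    # CP feature: lagged correlation at one useful-symbol length, averaged
    # over the whole burst; noise alone would give a value near zero.
    lag = n_fft
    c = np.abs(np.mean(x[:-lag] * np.conj(x[lag:])))
    p = np.mean(np.abs(x) ** 2)
    print(f"normalised CP metric: {c / p:.3f}")
    ```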

  10. Predictive Target Tracking with Quantified Uncertainty

    Multispectral or wide-area surveillance sensors can yield a high quantity of data. For tracking and mapping of objects such as aerosol clouds, this data could be dramatically reduced by using the approximate size, shape and location (or outline) of the target at a given time (i.e. the target state). However, probabilistic forward and backward prediction models can require integration over all possible data states that could reduce to the given instance of the target state, negating any computational savings achieved in the data reduction. The challenge is to distil the data to the key information whilst maintaining a sound understanding of the increased uncertainty in target state, yielding an interface to model likelihoods and prediction outside the sensor field of view. Success would be measured by understanding how data reduction affects uncertainty in predictive output, thus leading to higher accuracy and performance of source term prediction than current sub-sampling techniques.

  11. Reliable Identification of the Absence of Entities, Activities and Behaviours

    Conventional detection problems involve distinguishing a signature (object, signal or activity) from background. However, there may be situations in the future in which it would be vital for the military commander to know, with certainty, that there are no objects, signals or activities of particular classes within a defined area or volume being observed. The challenge here is to consider techniques for processing sensor signals to determine and declare the absence of signatures, and to measure the confidence in that declaration. The target is to achieve the declaration of absence, with understood confidence (inc. understood 2nd-order uncertainty), both from an individual sensor/source and from a set of sensor/source systems (including mobile systems). For example, in a Moving Target Indication (MTI) radar sensor a target is only declared if detected in n out of m bursts; hence the confidence of absence given no target report will vary with n (and with the expected Receiver Operating Characteristic (ROC) curve for hypothesised target characteristics). A specific challenge in this case is to calculate the confidence of absence against target characteristics, and to show how the confidence changes in relation to the depth of knowledge of the processing chain/sensor performance/context (e.g. weather effects).
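
    A minimal numerical sketch of the n-out-of-m example, assuming an illustrative per-burst detection probability, per-burst false alarm rate and prior: Bayes' rule turns the binomial probability of silence under each hypothesis into a confidence of absence.

    ```python
    from scipy.stats import binom

    pd_burst = 0.6        # per-burst detection probability (hypothesised target)
    pfa_burst = 1e-3      # per-burst false alarm probability
    n, m = 3, 5           # declare a target on n detections out of m bursts
    prior_present = 0.1   # prior belief that a target is present

    # Probability of "silence" (no n-of-m declaration) under each hypothesis.
    p_silent_present = binom.cdf(n - 1, m, pd_burst)
    p_silent_absent = binom.cdf(n - 1, m, pfa_burst)

    # Bayes: confidence of absence given that no declaration occurred.
    post_absent = (p_silent_absent * (1 - prior_present)) / (
        p_silent_absent * (1 - prior_present)
        + p_silent_present * prior_present)
    print(f"P(silent | present) = {p_silent_present:.3f}")
    print(f"P(absent | silence) = {post_absent:.4f}")
    ```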

  12. Signal Processing Algorithms and Techniques to Manage Noisy 3D Point Clouds

    The challenge is to develop algorithms and techniques to exploit 3D point clouds with range noise so that features can be extracted and labelled. The challenge can be broken down into three distinct areas (a minimal sketch of the third follows the list):

    • There is a requirement to reduce the range noise in the point clouds recorded against hard surfaces
    • There is a need to merge multiple point clouds collected from different viewing angles that have low commonality (overlap)
    • The goal is to fit primitive shapes to the point clouds to facilitate labelling and data reduction
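
    A minimal sketch of the third area, assuming synthetic data: RANSAC fits a plane to a noisy cloud contaminated with outliers, giving both a primitive label and a large data reduction (plane parameters plus inlier set).

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Noisy planar patch (z ~ 0) plus uniformly scattered outliers.
    plane = np.column_stack([rng.uniform(-1, 1, (400, 2)),
                             rng.normal(0, 0.02, 400)])
    pts = np.vstack([plane, rng.uniform(-1, 1, (100, 3))])

    best_count, best_normal = 0, None
    for _ in range(200):                       # RANSAC iterations
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        nrm = np.linalg.norm(normal)
        if nrm < 1e-9:
            continue                           # degenerate (collinear) sample
        normal /= nrm
        count = int((np.abs((pts - p0) @ normal) < 0.05).sum())
        if count > best_count:
            best_count, best_normal = count, normal

    print("inliers:", best_count, "of", len(pts))
    print("plane normal estimate:", np.round(best_normal, 3))  # ~ (0, 0, +/-1)
    ```
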
  13. Provide Accurate Contact Bearings from a Passive Underwater (UW) Acoustic Line Array

    The challenge is to provide accurate contact bearings from a passive UW acoustic line array. Passive acoustic line arrays are towed from ships and submarines, and bearings are derived through a beamforming process in which a set of beams is established at a pre-determined set of steer-angles. A particular challenge is coping with the uncertainties from the systematic bearing error that results from the conical beam geometry. Under this geometry, energy is received at a steer-angle that depends not just on the bearing but also on the elevation of the contact. Influences on the elevation angle include the depth separation between source and receiver, refraction of energy in the water column and reflection from the sea-bed and surface boundaries. Therefore, through a better understanding of the environment, specifically the sound-velocity profile and the likely position of the contact, it is anticipated that the degree of coning error may be reduced. Indeed, if the energy can be independently identified as coming from a surface contact, this information could be used to improve the bearing estimate. Furthermore, other sensors may be able to provide measurements of contact elevation which may also improve the bearing estimate.
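
    A minimal numerical sketch of the coning correction, assuming a flat geometry in which bearing is measured from the array axis, so that cos(cone angle) = cos(elevation) x cos(bearing): an independent elevation estimate then corrects the naive bearing. Angles are illustrative.

    ```python
    import numpy as np

    theta_cone = np.radians(60.0)   # steer angle reported by the beamformer
    elevation = np.radians(10.0)    # elevation from environment/other sensors

    bearing_naive = np.degrees(theta_cone)   # assumes contact at 0 elevation
    bearing_corr = np.degrees(np.arccos(np.cos(theta_cone) / np.cos(elevation)))

    print(f"naive bearing    : {bearing_naive:.2f} deg")
    print(f"corrected bearing: {bearing_corr:.2f} deg")
    print(f"coning error     : {bearing_naive - bearing_corr:.2f} deg")
    ```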

  14. Theoretical Techniques to Counter SAR Jamming

    In the past, digital adaptive beamforming algorithms for airborne and naval radars have been developed to exploit multiple sub-arrayed Active Electronically Scanned Array (AESA) antennas. These algorithms sample the received data and produce a weighted antenna pattern on receive to cancel out jammers. The task would be to investigate SAR image formation techniques to cancel low-power ground-based noise jammers for airborne radars which do not themselves have multiple sub-arrays. The emphasis would be on the theoretical possibility of doing this and less on the real-time processing aspects (as this would be an implementation issue).

  15. Optimum Processing of Spatially Separated (Distant and Close) Sensors with Partially Correlated Acoustic Signals

    This challenge is to develop processing techniques for separated sensors with partially correlated signals (as a result of propagation) that provide the optimum method for detection, classification and localisation. Conventional processing techniques, such as beamforming and cross-correlation, suffer significant degradation of detection, classification and localisation under these conditions. The work should consider two cases:

    • The source signal is unknown, source position is unknown and the propagation conditions could be partially correlated between the source and the separated sensors
    • The source signal and position is known, the propagation is to a target, the reflections from the target propagate to the separated sensors under partially correlated propagation conditions

    The technique should also characterise performance in partially correlated, fluctuating noise. A cross-correlation baseline is sketched below.
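
    A minimal sketch of that baseline, assuming illustrative signals: generalised cross-correlation with the phase transform (GCC-PHAT), which whitens the cross-spectrum, trading SNR optimality for robustness to the kind of spectral colouring that partial correlation introduces.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, true_delay = 4096, 37                          # samples

    s = rng.normal(size=n)                            # unknown broadband source
    x1 = s + 0.5 * rng.normal(size=n)
    x2 = np.roll(s, true_delay) + 0.5 * rng.normal(size=n)

    # Cross-spectrum, PHAT-whitened, back to a sharp correlation peak.
    G = np.fft.rfft(x1) * np.conj(np.fft.rfft(x2))
    cc = np.fft.irfft(G / (np.abs(G) + 1e-12))

    lags = np.arange(n)
    lags[lags > n // 2] -= n                          # map to signed lags
    est = -lags[np.argmax(cc)]
    print(f"estimated delay: {est} samples (true {true_delay})")
    ```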

  16. A Cognitive Approach to De-Conflicting RF Spectrum Users

    The RF spectrum is cluttered, and incompatibility issues exist between different users. Military communications and Electronic Counter Measures (ECM) struggle to operate side-by-side in the communications bands. Techniques are required to recognise interference and proactively seek mitigation by waveform adaptation, applied at either the source or the receiving system. These would provide a "cognitive" approach to interference mitigation, where the system automatically adapts during operation, in time, space, frequency or modulation, to reduce sensitivity to interference. The challenge should focus initially on compatibility between force protection ECM and communications. Indicative data may be generated and provided using validated simulation environments that are representative of difficult problems for which current solutions have undesirable operational constraints. Solutions that give greater freedom to the communication role are sought.

  17. Through-Water Electro-Optic Imagery Correction

    Any remote sensing technology requires atmospheric compensation to correct for scintillation effects and non-uniform absorption across the spectrum. Techniques already exist to perform atmospheric compensation in electro-optic imagery, but these cannot be directly applied to underwater imagery. Underwater remote sensing has the additional problems of omni-directional surface reflections, underwater scintillation and strong non-uniform absorption, dependent on local turbidity and marine constituents. Limited techniques are available to address these issues within a specified environment. Airborne through-water sensing has to correct for all atmospheric, underwater and boundary effects within the same image. Furthermore, the specific environmental parameters may be unknown, so cannot be fed into standard image compensation algorithms. A generic method for near real-time correction of airborne underwater imagery is sought, one that can compensate for both atmospheric and underwater effects in all environments and for all environmental features. Particular emphasis is placed on the through-water part of the problem, as this is the crucial challenge for the applications of interest. Of particular military interest are visual, hyperspectral and lidar imagery/data, so complete spectral correction and image recognition need to be considered.

  18. Novel Techniques for Active ASW Search Sonar Classification

    Active search sonar generally suffers from high clutter levels, particularly in shallow-water, reverberation-limited conditions where multi-path propagation is the norm. Novel techniques are required that can assist an operator in the classification process by identifying contacts of interest in such challenging conditions. Freedom to transmit novel active waveforms and adapt the processing chain accordingly may be assumed. Novel techniques that could derive contact depth, target length & aspect, evidence of propulsion system and other physical attributes would be attractive. This challenge would be suitable for a theoretical study with synthetic experiments.

  19. Fusion of Geolocation Target Motion Analysis (TMA) and Geographic Data

    Conventional target tracking techniques, both passive and active, inherently assume a target dynamic model, which is normally assumed to be unhindered by the physical parameters of the space in which the target operates. For example, vehicular traffic is not normally modelled as being constrained to the existing road network and to achievable acceleration rates. In the domain of tracking land and littoral targets via a passive geolocation sensor network, the fusion of tracks with geographic constraints, pattern-of-life or cultural rules could yield substantial benefits to tracking performance. The main use would be to augment the tracking process and supplement the data association method in the multiple-emitter situation. A secondary, but nonetheless important, use would be in target identification.

  20. MIMO Signal Processing for Perimeter Defence & Base Protection

    Improvements in perimeter defence & base protection are important in many application areas. Various solutions exist, but a common challenge is the inability to observe dead ground, i.e. ground that is not in the line-of-sight. RF technologies, e.g. Wi-Fi and mobile phones, offer the ability to overcome line-of-sight issues. The challenge for this project is to consider the signal processing needed to facilitate a multiple input multiple output (MIMO) RF system solution to the dead ground issue. The essential signal processing problem will be to estimate a set of suitable parameters that characterise objects in the scene, e.g. size, bulk motion, micro-motion. Performance metrication could then proceed via statistical estimation theory, e.g. how closely the proposed processing approaches the Cramer-Rao lower bound and to what extent this allows discrimination between objects of interest and confusers.
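
    A minimal sketch of such metrication, assuming a stand-in problem: the textbook Cramer-Rao bound for the frequency of a complex tone in white noise (the Rife-Boorstyn result), compared with the empirical spread of a zero-padded FFT peak estimator. Micro-motion parameters would be treated analogously.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_samp, f_true = 256, 0.1317                     # frequency in cycles/sample
    snr = 10 ** (0.0 / 10)                           # 0 dB

    # Rife-Boorstyn CRLB for the frequency of a complex tone in white noise.
    crlb_var = 6 / ((2 * np.pi) ** 2 * snr * n_samp * (n_samp ** 2 - 1))

    t = np.arange(n_samp)
    estimates = []
    for _ in range(500):                             # Monte Carlo trials
        w = (rng.normal(size=n_samp)
             + 1j * rng.normal(size=n_samp)) / np.sqrt(2 * snr)
        x = np.exp(2j * np.pi * f_true * t) + w
        spec = np.abs(np.fft.fft(x, 32 * n_samp))    # zero-padded periodogram
        estimates.append(np.argmax(spec) / (32 * n_samp))

    print(f"CRLB std          : {np.sqrt(crlb_var):.2e} cycles/sample")
    print(f"FFT estimator std : {np.std(estimates):.2e} cycles/sample")
    ```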

  21. Video Pattern Recognition

    The challenge is to identify particular video signals by recognising patterns in the footage, which may be high bandwidth (at least equivalent in resolution and bandwidth to broadcast Freeview TV). For example, a news video could be identified by the logos and message bars used. It is desirable to recognise patterns from a database or training set within 1 second, to at least 95% accuracy. An example of the challenge is to determine whether a video sample of the above resolution/bandwidth contains a certain feature or logo (e.g. a brand logo), which could be anywhere within the frame.
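
    A minimal baseline sketch, assuming OpenCV is available and using synthetic stand-ins for the frame and logo: normalised cross-correlation template matching locates a known logo anywhere in a frame with a single call.

    ```python
    import numpy as np
    import cv2

    rng = np.random.default_rng(9)

    # Synthetic greyscale frame with a known "logo" pasted into it.
    frame = rng.integers(0, 255, (720, 1280), dtype=np.uint8)
    logo = rng.integers(0, 255, (48, 120), dtype=np.uint8)
    frame[400:448, 900:1020] = logo

    # Normalised cross-correlation over all positions; peak = best match.
    score = cv2.matchTemplate(frame, logo, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    print(f"peak score {max_val:.2f} at (x, y) = {max_loc}")   # ~ (900, 400)
    ```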

  22. Accurate Geo-Location of Objects Regardless of Their Motion

    Synthetic aperture radar (SAR) and ground moving target indicator (GMTI) radars are commonly used for ground surveillance from the air. The first is ideal for imaging static scenes and stationary objects, as moving objects tend to become defocused and spatially shifted when imaged. The second is ideal for detecting and tracking moving objects which have a radial component of velocity along the radar boresight. The two modes are commonly incorporated in a single sensor, but employ different antenna beam steering, waveforms and signal processing, and so do not operate simultaneously. This prevents continuous all-weather surveillance. The challenge here is to devise a means of processing a common waveform/beam which simultaneously detects stationary and moving objects (regardless of their speed and direction of motion), including objects which repeatedly stop and start, so as to provide uninterrupted surveillance.

  23. Multi-Modal Sparse Signal Processing for Fleeting Signals

    Recent advances in compressive sensing show potential for achieving sub-Nyquist sensing using random Gaussian/Bernoulli sensing matrices, but assume that signals are persistent and randomly distributed in a low-dimensional feature space. In reality, most signals of interest have correlated (but unknown) structure over a large number of dimensions, and the sampling opportunities are constrained. As an example, one challenge is to (non-acoustically) detect an urban detonation event over a number of sensing modalities (such as CCTV, infra-red imagery, internet monitoring, etc.), where each sensing modality has sparse, constrained sensing opportunities.

  24. Low-Computation Blind Separation of Multiple Overlapping Signals

    Any situation where the number of interwoven signals is large compared with the number of sensors detecting them creates signal separation problems. With acoustic signals this is known as the 'Cocktail Party problem', but it also occurs in many other signal modalities. Most work in this area relies upon Independent Component Analysis (ICA) based techniques, but these create a high computational load, with the result that real-time blind signal separation is not feasible. The challenge is to develop low-complexity techniques which can perform blind acoustic signal separation in real time or near real time. Solutions which use domain knowledge or learning techniques to lower the rank of the computation may be applicable here.
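
    For reference, the sketch below shows the conventional ICA baseline, using scikit-learn's FastICA on a synthetic two-sensor instantaneous mixture; it is this class of technique whose computational load the challenge seeks to undercut.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(10)
    t = np.linspace(0, 8, 4000)
    s1 = np.sign(np.sin(3 * t))                        # square wave
    s2 = np.sin(5 * t) + 0.1 * rng.normal(size=t.size)
    S = np.column_stack([s1, s2])

    A = np.array([[1.0, 0.6], [0.4, 1.0]])             # unknown mixing matrix
    X = S @ A.T                                        # two-sensor observations

    S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
    corr = np.corrcoef(S.T, S_hat.T)[:2, 2:]           # recovery up to scale/order
    print("source/estimate correlations:\n", np.round(corr, 2))
    ```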

  25. Statistical Anomaly Detection in an Under-Sampled State Space

    Statistical anomalies are signals that appear to deviate from the underlying distributions of the background, as distinct from defined or rule-based anomalies. Statistical anomaly detection becomes challenging where the state space is high-dimensional and under-sampled. However, recent advances in compressive sensing provide benefits in an under-sampled state space. The challenge is to operate statistical anomaly detection alongside compressive techniques, particularly when the background statistics are non-Gaussian and/or non-stationary.
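
    One standard remedy in the under-sampled regime is covariance shrinkage. The sketch below (scikit-learn, illustrative Gaussian data) scores anomalies by Mahalanobis distance under a Ledoit-Wolf shrunk covariance, which stays well conditioned when samples are scarce relative to the dimension; the non-Gaussian, non-stationary case is the open part of the challenge.

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(11)
    d, n_train = 60, 100                     # dimension close to sample count
    background = rng.normal(size=(n_train, d))

    lw = LedoitWolf().fit(background)        # shrunk, well-conditioned covariance

    normal_pt = rng.normal(size=(1, d))
    anomaly = rng.normal(size=(1, d)) + 1.5  # shifted in every dimension
    print("background-like score:", lw.mahalanobis(normal_pt)[0])  # ~ d
    print("anomalous score      :", lw.mahalanobis(anomaly)[0])    # much larger
    ```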

  26. Decentralised Signal Processing Across Sparsely Distributed, Low-Bandwidth, Heterogeneous Networks

    The challenge is to develop signal processing across sparsely distributed, low-bandwidth, heterogeneous networks. Decentralised architectures, with their inherent problems of data incest, data conflict and bandwidth overload, are a notorious signal processing problem. Dense, homogeneous networks of simple sensors have been widely studied in the academic literature, but sparse, low-bandwidth, heterogeneous networks of complex data sources still present challenges. An example of such a network is a sparse CCTV and RFID entry-point network where nodes can communicate but no one node has access to all data. Here there is also a challenge to build decentralised sensor management into the solution, since the CCTV cameras can be assumed to have direction and field-of-view control.
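
    One standard tool against data incest is covariance intersection, which fuses two estimates consistently without knowing their cross-correlation. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    xa = np.array([1.0, 2.0]); Pa = np.array([[4.0, 0.0], [0.0, 1.0]])
    xb = np.array([1.5, 1.8]); Pb = np.array([[1.0, 0.0], [0.0, 3.0]])

    best = None
    for w in np.linspace(0.01, 0.99, 99):        # scalar search over omega
        P = np.linalg.inv(w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb))
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.solve(Pa, xa)
                     + (1 - w) * np.linalg.solve(Pb, xb))
            best = (np.trace(P), w, x, P)

    print(f"omega = {best[1]:.2f}, fused estimate = {np.round(best[2], 3)}")
    print("fused covariance (consistent for any cross-correlation):")
    print(np.round(best[3], 3))
    ```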

  27. Accreditable Machine Learning or Data-Driven Techniques

    Machine learning and data-driven techniques have been studied extensively in the academic literature. However, implementation of safety-critical systems which rely on such techniques is severely problematic, because testing and accreditation of the behaviour of the algorithms is unreliable. The challenge is to be able to calculate the confidence that a machine learning technique will work in any given scenario. This may require a new paradigm for such techniques, and similarity metrics for measuring the similarity of the scenario to the training set.
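
    One candidate for such a similarity metric is the kernel maximum mean discrepancy (MMD) between deployment data and the training set. A minimal sketch, with illustrative data and an arbitrary kernel bandwidth:

    ```python
    import numpy as np

    def rbf_mmd2(X, Y, sigma=1.0):
        """Biased estimate of squared MMD with an RBF kernel."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    rng = np.random.default_rng(12)
    train = rng.normal(0.0, 1.0, (300, 5))
    similar = rng.normal(0.0, 1.0, (200, 5))     # same distribution as training
    shifted = rng.normal(0.8, 1.0, (200, 5))     # covariate-shifted scenario

    print(f"MMD^2, in-distribution scenario: {rbf_mmd2(train, similar):.4f}")
    print(f"MMD^2, shifted scenario        : {rbf_mmd2(train, shifted):.4f}")
    ```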

  28. [Classified Technical Challenge]

  29. Reducing Size, Weight and Power Requirements Through Efficient Processing

    Modern Electronic Surveillance systems are predominantly signal processing based, with detection algorithms operating on computer processing hardware. As the processing requirements increase, so do the size, weight and power requirements of those systems. A challenge currently being faced is to reduce those size, weight and power requirements without compromising surveillance capability. Efficient signal processing solutions that can extract the requisite information from data with a reduced number of computational operations can enable significant reductions in size, weight and power. Such reductions can enable the fitting of capable electronic warfare systems to platforms that previously would not have been able to support such systems, whilst also providing significant benefits to existing platforms. As an example, Dstl is interested in using pattern-matching algorithms such as Gaussian Mixture Models (GMMs) and Support Vector Machines (SVMs). The training phase for these algorithms is computationally intensive for high-dimensional data and sometimes must be performed in real time. The challenge is to produce an optimised method for training GMM- and SVM-based algorithms on data with high dimensionality that can be implemented in software across a range of computer platforms, i.e. not an operating-system-specific code optimisation. A desirable output of this research would be an algorithm implementation in C or MATLAB.
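
    As a small illustration of one cost lever, the sketch below (scikit-learn, illustrative dimensions) compares training a GMM with full versus diagonal covariances; it is a baseline comparison, not the optimised trainer the challenge asks for.

    ```python
    import time
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(13)
    X = rng.normal(size=(5000, 64))                # high-dimensional data

    for cov in ("full", "diag"):                   # covariance structure
        t0 = time.perf_counter()
        GaussianMixture(n_components=8, covariance_type=cov,
                        max_iter=20, random_state=0).fit(X)
        print(f"covariance_type={cov!r}: {time.perf_counter() - t0:.2f} s")
    ```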

  30. To Exploit Raw Synthetic Aperture Sonar Information to Detect and Identify Objects of Interest

    For an effective minehunting capability, the information that is available in high resolution sonar data, such as Synthetic Aperture Sonar (SAS), needs to be fully exploited to discriminate between mines and mine-like (non-mine) objects in the environment, such as rocks and other sea-bed objects. The data is typically transformed into SAS imagery that represents the backscatter from the seabed, and this imagery is then used as the starting point for target recognition - this is not always satisfactory. It is well known in other applications, such as medical sonography and Synthetic Aperture Radar, that useful information is lost in this type of transformation. The challenge is to develop signal processing to transform high resolution sonar data into a new feature space that is optimised to discriminate between mines and non-mines, and not just to represent the backscatter from objects near to, and on, the seabed. In doing so, there should be a performance advantage when compared with conventional SAS imagery.

  31. Noise Reduction and Interference Cancellation Using Subspace Techniques

    Most urban areas suffer from high levels of electronic interference 'smog'. This becomes an issue when looking for weak signals. Various narrow-band techniques are available to reduce interference/noise; for example, an adaptive filter could be used, or a fixed filter that cancels the interfering signal by introducing an anti-phase signal. Such filters are limited by their bandwidth, and wide-band techniques are not well established or effective. In the wide-band case the signal is split into narrow-band segments and the cancellation carried out in each segment. Other methods include converting the signal to the frequency domain, removing the interference by thresholding or other means, and reverting back to the time domain. The receiver can be made to operate close to its thermal noise limit if most of the interference/environmental noise can be removed. Vector subspace techniques are known to have achieved of the order of 25 to 30 dB of noise reduction in speech signals. Are there fast vector subspace techniques available to carry this out in real time? The challenge is to find a fast noise/interference reduction algorithm that deals with both broad-band and narrow-band types of signal and is capable of real-time implementation with low computational power. An integral part of the challenge is to develop suitable performance metrics to compare different techniques; for example, computational load weighed against degree of cancellation in both the narrow-band and wide-band cases.
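
    A minimal sketch of the reference (slow) subspace method, assuming a synthetic narrow-band signal: embed the signal in a Hankel matrix, truncate its SVD to the signal rank and average the anti-diagonals back into a time series. The challenge is precisely to find fast, wideband-capable versions of this step.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)
    n, L, rank = 1024, 128, 2                   # samples, window, signal rank
    t = np.arange(n)
    clean = np.cos(2 * np.pi * 0.05 * t)        # one real tone => rank 2
    x = clean + 0.7 * rng.normal(size=n)

    # Hankel embedding, rank truncation, anti-diagonal averaging back.
    H = np.lib.stride_tricks.sliding_window_view(x, L)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    y, counts = np.zeros(n), np.zeros(n)
    for i in range(H_low.shape[0]):
        y[i:i + L] += H_low[i]
        counts[i:i + L] += 1
    y /= counts

    def snr_db(sig):
        return 10 * np.log10(np.mean(clean ** 2) / np.mean((sig - clean) ** 2))

    print(f"input SNR : {snr_db(x):5.1f} dB")
    print(f"output SNR: {snr_db(y):5.1f} dB")
    ```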

  32. Optimum Weight Vector Selection Through Regularisation of Ill-Conditioned Covariance/Correlation Matrices

    Most linear systems ultimately require a solution to Ax = b. This is fine as long as the matrix A is regular (a matrix is said to be regular if it has an inverse). When A is not regular, or is ill-conditioned, the solution becomes challenging. Thus, the challenge is to find an algorithm that will take an ill-conditioned matrix and change it into a well-conditioned matrix without losing too much information, and that performs fast enough to be used in a real-time application. Most proposed solutions in the literature rely on matrix decomposition techniques. A sidelobe canceller or a STAP (space-time adaptive processing) problem can be chosen as a test problem. The optimum weights of the adaptive filter are obtained by solving a linear system involving the covariance/correlation matrix. Interference degrades detection performance, so the optimum weight vector is the one that maximises the SINR (signal to interference plus noise ratio) for a specific probability of false alarm. Performance metrics should also be developed to compare different methods. The metrics could include computational complexity in terms of fast solutions (for real-time multi-channel operation) and a trade-off or comparison with decomposition techniques with regard to SINR loss factor and performance for an acceptable information loss (of, say, 3 dB).
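
    The simplest regularisation baseline is diagonal loading (Tikhonov regularisation) of the sample covariance before solving for the weights. A minimal sketch with an illustrative rank-deficient covariance and a common loading rule of thumb:

    ```python
    import numpy as np

    rng = np.random.default_rng(15)
    n, k = 32, 20                               # elements, snapshots (k < n)

    # Rank-deficient sample covariance from too few snapshots.
    snaps = rng.normal(size=(k, n)) @ np.diag(np.linspace(1.0, 0.01, n))
    R = snaps.T @ snaps / k
    s = np.ones(n) / np.sqrt(n)                 # steering vector

    loading = 1e-2 * np.trace(R) / n            # common loading rule of thumb
    R_dl = R + loading * np.eye(n)

    print(f"cond(R)    = {np.linalg.cond(R):.2e}")   # effectively singular
    print(f"cond(R_dl) = {np.linalg.cond(R_dl):.2e}")
    w = np.linalg.solve(R_dl, s)                # stable adaptive weights
    print(f"||w|| = {np.linalg.norm(w):.2f}")
    ```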

  33. Robust Association of Signals from Passive Sensors with Known Location

    The challenge is to provide a generic and robust association/tracking/fusion framework for signals derived from homogeneous and inhomogeneous passive sensors with known location. A particular problem relates to that of the submarine, where there is a requirement to associate detections made by a number of acoustic arrays and by visual and ESM sensors. It is likely that the acoustic sensors will have different frequency coverage. Very occasionally, measurements of range, course and speed may supplement the bearing and bearing rate which will always be available. Poor understanding of sensor errors, incorrect assumptions about likely contact dynamics and sparse, intermittent detections all conspire to make this a difficult problem. To date, heuristic approaches targeted at individual sensors have been attempted, but it is believed that a more generic approach may be beneficial. One possibility is to adopt a model-based approach, characterising the sensor, the contact dynamics and the nature of the detected energy. The research should propose a relevant set of metrics so that the approach may be evaluated against real data.

  34. Robust Discrimination of Shipping Contacts from the Background Noise of the Ocean

    For the ship or submarine, the tactically significant information provided by passive sonar is the presence and identity of other ships or submarines in the vicinity of ownship. Other noises in the ocean represent a background that has little tactical relevance. However, the background noise often confounds the robust detection, classification and localisation of the shipping contact signal. There are many sources of background noise in the ocean, but biological noise, and in particular cetacean noise, is by far the most troublesome. Cetacean noise, characterised by whistles and pulsed clicks, is aurally distinct from the rhythmic and often modulated sound of shipping. However, feature extraction is complicated by low SNR, intermittent signals and multiple propagation paths. The challenge is to develop robust automatic methods to discriminate shipping contacts from the background noise of the ocean. The researcher should propose an appropriate metric against which the robustness may be assessed, and contribute to the concept of operation.

  35. Material Identification and Probability Assessment for Hyperspectral Imagery

    Most current techniques for identifying materials in hyperspectral and multispectral imagery provide estimates of the fractional abundance of a material in a given pixel, or scores for spectral similarity to a material on an arbitrary scale. However, they do not produce a direct measure of the probability that the material is present. The challenge is to develop an algorithm that would provide such a measure. It is anticipated that the calculated probability will be insensitive to the abundance of the material unless this abundance is below a certain value. The algorithm may make use of knowledge of sources of error in the data collection and processing chain, and may also use spatial information (about the shape of the object to be identified, or about related materials in close proximity). The algorithm may be designed to operate on either calibrated radiance images or images that have been converted to scaled surface reflectance by atmospheric compensation.
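
    A minimal sketch of a standard starting point, assuming synthetic Gaussian background data: the spectral matched filter, whose output approximates the target abundance and whose null distribution under Gaussian assumptions offers one route to the calibrated probability the challenge asks for.

    ```python
    import numpy as np

    rng = np.random.default_rng(16)
    bands, n_bg = 50, 2000

    # Synthetic correlated background and a library target spectrum.
    bg = rng.normal(size=(n_bg, bands)) @ rng.normal(size=(bands, bands)) * 0.1
    mu, cov = bg.mean(0), np.cov(bg.T)
    target = rng.normal(size=bands)

    # Matched filter w ~ cov^-1 (t - mu), scaled to unit response at the target.
    d = target - mu
    w = np.linalg.solve(cov, d)
    w /= d @ w

    pix_bg = bg[0]                                  # pure background pixel
    pix_tg = pix_bg + 0.3 * d                       # 30% target abundance added
    print(f"background pixel score: {w @ (pix_bg - mu):+.3f}")
    print(f"target pixel score    : {w @ (pix_tg - mu):+.3f}")  # ~ 0.3 + bg score
    ```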

  36. Heterogeneous Signal Processing Architectures for Graphical Anomaly Detection

    Many intelligence problems require modelling the state space as a graph structure, with edges between nodes representing associations between entities. This enables the detection of anomalies in the graph structure, with models of normal behaviour evolving over extended periods. Such modelling offers potential solutions to understanding and predicting graph dynamics, with fitting of local models. However, thorough retrospective analysis as new data arrive incurs considerable computational complexity. Recent work [1] has proposed Bayesian change-point analysis methods as an attractive and different method for graphical anomaly detection. The challenge is to research, assess and report appropriate parallel, heterogeneous signal processing architectures which support, and minimise the scalability issues associated with, Bayesian change-point analysis applied to graphs and the proposed use of Sequential Monte Carlo methods to estimate new parameters. Success would be demonstrated by handling very large graphs in reasonable time and automatically adapting the signal processing to dynamic graph behaviour. Reference: [1] N. Heard, D. Hand, K. Plananioti and D. Weston, Data Mining (final report), Reference SA012, 31 March 2009.
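
    As a scalar illustration of the core computation, the sketch below evaluates a change-point posterior over a Gaussian mean shift (profile likelihood is used in place of the full marginal for brevity); sweeping such a scan over every edge and time step of a large graph is exactly the scalability burden the architectures must address.

    ```python
    import numpy as np

    rng = np.random.default_rng(17)
    n, tau_true, sigma = 200, 120, 1.0
    y = rng.normal(0.0, sigma, n)
    y[tau_true:] += 0.8                         # mean shift at the change point

    # Scan candidate change points; profile out the two segment means.
    log_post = np.full(n, -np.inf)
    for tau in range(10, n - 10):
        m1, m2 = y[:tau].mean(), y[tau:].mean()
        rss = ((y[:tau] - m1) ** 2).sum() + ((y[tau:] - m2) ** 2).sum()
        log_post[tau] = -0.5 * rss / sigma ** 2

    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print("MAP change point:", int(np.argmax(post)), f"(true {tau_true})")
    ```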

  37. Fast Multidimensional Anomaly Detection for RF Emission Characterisation

    The RF spectrum is cluttered with emissions from military users - both hostile and friendly - and civil users. These are commonly categorised as red, blue and white users respectively. Techniques are required for fast analysis of new emissions to assign a probability that they belong to one of the three user groups. This is a notoriously difficult problem when the only data available are the emission signal parameters. However, many environmental parameters could be collected along with the emission characteristics (such as emission time, signal strength, duration, etc.) that could be correlated with certain user categories. Thus, analysis of all these parameters could form a multidimensional pattern-of-life picture, within which one could perform statistical anomaly detection or clustering. Use of multiple networked sensors may give better results than single-sensor data, and heterogeneous sensors may contribute correlated data. However, classification latency is highly undesirable; a high-confidence score would be required in under 1 s. This is the key driver for the challenge. Example data could be generated by Dstl from collections in two environments (rural, urban) with representative "signals of interest" generated at specific times. This would allow collection of significant volumes of data under controlled conditions at UNCLASSIFIED.

  38. Reliable Automated Detection and Identification of Underwater Objects Using Unmanned Sensor Systems

    The aim of this challenge is to develop reliable processing techniques for automatically detecting and identifying large underwater objects such as unmanned underwater vehicles. Arrays of acoustic hydrophone sensors, electric field sensors or magnetic field sensors can be designed to detect radiated energy from objects. However, the information obtained from such sensors may be sparse, and conventional classification techniques based on comparing extracted features with historical data typically require an operator to eliminate false alarms. The challenge is to develop alternative processing techniques which enable underwater objects to be automatically identified with high confidence and a low false alarm rate. To achieve this aim, the processing techniques must differentiate between underwater objects and other similar contacts such as surface ships. One potential approach is to estimate the depth of a contact using matched field processing techniques. Matched field processing can be applied to data from arrays of acoustic sensors or to data from electric field and magnetic field sensors. However, matched field techniques require precise knowledge of the environment, are computationally intensive to perform and place constraints on the configuration of the array. Advances in matched field processing, alternative novel processing techniques and optimal sensor configurations are required which enable robust and reliable detection and localisation to be achieved in the presence of environmental uncertainty.