L_WP3: Signal separation and broadband distributed beamforming

Extracting signals of interest and suppressing interference from corrupted sensor measurements remain fundamental challenges in many networked battlespace applications.

L_WP3.1: Multichannel convolutive source separation, broadband distributed beamforming

We will focus on the development of algorithms based on advanced polynomial matrix (PM) techniques. The PM eigenvalue decomposition (PEVD) provides a powerful tool for analysing multichannel convolutive mixtures and broadband sensor arrays, and has the great advantage of requiring only second-order statistics. This avoids the much greater computational load and sample sizes associated with higher-order statistics. We will use the PEVD to design algorithms for source separation from multichannel convolutive mixtures. The potential of using the PEVD to identify subspaces in space-time, and hence to obtain reduced-rank solutions, will be exploited in tasks such as broadband angle-of-arrival estimation, beamforming, and distributed systems. Facilitated by the work in L_WP3.2, we will study the potential of extending our dictionary learning algorithm from memoryless to convolutive signals using the PM model. Sparsity constraints on the PEVD model will be considered for addressing the underdetermined source separation problem. Domain knowledge, as discussed in L_WP2, such as prior information on approximate bearing, expected periodicity, and array geometry, will be incorporated into the algorithm design, leading to a family of semi-blind or softly constrained algorithms. These are expected to provide robust performance for target signal extraction and interference cancellation in noisy environments or under array imperfections. Fast and low-cost implementations of PM techniques will be addressed as part of L_WP5.
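To illustrate the second-order-statistics foundation of the PEVD, the sketch below estimates the space-time covariance matrix, the sequence of lagged cross-correlation matrices R(τ) = E[x(n) x(n−τ)ᴴ] on which PEVD algorithms operate. The function name and lag range are illustrative assumptions, not part of the proposal; a minimal sketch for real-valued data:

```python
import numpy as np

def space_time_covariance(x, max_lag):
    """Estimate R(tau) = E[x(n) x(n - tau)^H] for tau = 0..max_lag.

    x       : (M, N) array holding M sensor channels of N samples each.
    Returns : list of (M, M) matrices R[0..max_lag]; negative lags follow
              from the Hermitian symmetry R(-tau) = R(tau)^H, so only
              non-negative lags need to be stored.
    """
    M, N = x.shape
    R = []
    for tau in range(max_lag + 1):
        # Average outer products x(n) x(n - tau)^H over the overlapping samples.
        R.append(x[:, tau:] @ x[:, :N - tau].conj().T / (N - tau))
    return R
```

For spatially and temporally white noise this estimate approaches the identity at lag zero and vanishes at non-zero lags, which is the sanity check a PEVD-based subspace method would rely on before diagonalising the polynomial covariance matrix.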

L_WP3.2: Underdetermined signal separation with unknown number of target signals

We will consider fundamental studies of sparsity-motivated techniques, building upon our work showing that source separation can be reformulated as a signal recovery problem in compressed sensing by sparsifying the sensor signals with a dictionary and then reconstructing the sources from the dictionary atoms using sparse coding algorithms. Adaptive methods, which aim to mitigate the need for training data, will be developed to perform source estimation and dictionary update jointly in an alternating manner. Multi-level hierarchical representations of the dictionary will be designed to improve the computational efficiency of these algorithms and to facilitate their fast implementation in L_WP5. Convolutive mixtures will be addressed in the time-frequency (T-F) domain, with the dictionary learned from convolutive signals (as in L_WP3.1). These methods will be compared with probabilistic T-F masking techniques, in which the T-F masks are formed according to the source occupation probability at each T-F point, estimated by evaluating statistical, spatial, temporal and/or spectral cues from the mixtures. The noise variance will be exploited to improve the reliability of the cues evaluated from noisy mixtures and weak signals, as will multivariate dependent source models. A variational Bayesian approach will be used to model each T-F point as a variational mixture of Gaussian distributions, owing to its robustness to initialisation and its advantage in dealing with an unknown number of target signals (a model uncertainty challenge discussed in L_WP2), as compared with maximum-likelihood expectation-maximisation approaches. The above algorithms will be extended to multimodal signals by exploiting cross-modal coherence. Applications to the MIMO systems in L_WP4 will also be considered.
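The T-F masking principle can be made concrete with a deliberately simple sketch: two spectrally disjoint tones are mixed on a single channel and separated by masking the STFT. The signal parameters and the oracle band-limited mask below are illustrative assumptions only; in the proposed methods the mask would instead be the estimated source occupation probability at each T-F point, derived from statistical, spatial, temporal and/or spectral cues.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(2 * fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)    # target source (low frequency)
s2 = np.sin(2 * np.pi * 3000 * t)   # interferer (high frequency)
x = s1 + s2                         # single-channel instantaneous mixture

# Transform to the T-F domain.
f, _, X = stft(x, fs=fs, nperseg=256)

# Oracle spectral cue: a binary mask keeping the band where the target lives.
# A probabilistic method would replace this with per-point occupation
# probabilities estimated from the mixture.
mask = (f < 1500)[:, None]

# Apply the mask and return to the time domain.
_, s1_hat = istft(X * mask, fs=fs, nperseg=256)
n = min(len(s1), len(s1_hat))
```

Because the two tones occupy disjoint frequency bands, the masked reconstruction correlates almost perfectly with the target and is nearly orthogonal to the interferer; real convolutive mixtures overlap in the T-F plane, which is precisely why the proposal turns to probabilistic masks and dictionary-based sparse models.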
