Tutorials are open to participants with either a full or a student registration, upon payment of the corresponding fee of 25 Euros per tutorial.
T1–T3 Morning tutorials (9:00–12:30)
T4–T7 Afternoon tutorials (14:00–17:30)
Both the morning and the afternoon tutorials include a half-time coffee break (covered by the registration).
Organizer: Tirza Routtenberg (Ben-Gurion University of the Negev)
Abstract: Graphs are fundamental mathematical structures that are widely used in various fields for network data analysis to model complex relationships within and between data, signals, and processes. In particular, graph signals arise in many modern applications, leading to the emergence of the area of graph signal processing (GSP) in the last decade. GSP theory extends concepts and techniques from traditional digital signal processing (DSP) to data indexed by generic graphs, including the graph Fourier transform (GFT), graph filter design, and sampling and recovery of graph signals. While the early research in this field has focused on purely deterministic settings, this tutorial aims to develop statistical signal processing (SSP) methods and bounds for GSP, named graph SSP (GSSP) theory. The tutorial focuses on estimating graph signals, which has numerous applications in various fields, including computer science, social science, sensor networks, energy systems, transportation, and biology. Furthermore, the tutorial will emphasize the development of GSP estimation methods for power system monitoring, serving as a practical test case while also enriching theoretical GSSP tools. This smart grid application is vital for addressing climate change, advancing energy solutions, and promoting societal well-being.
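As a minimal illustration of the GFT mentioned above, the following sketch (a hypothetical 4-node path graph; the adjacency matrix and signal are invented for this example, not taken from the tutorial) builds the combinatorial Laplacian and uses its eigenbasis as the graph Fourier transform:

```python
import numpy as np

# Hypothetical 4-node path graph: adjacency A, Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# The GFT basis is the eigenvector matrix of the symmetric Laplacian;
# the eigenvalues play the role of graph frequencies.
freqs, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 3.0, 4.0])   # a toy graph signal
x_hat = U.T @ x                      # forward GFT
x_rec = U @ x_hat                    # inverse GFT (U is orthogonal)
```

The smallest graph frequency is zero (the constant eigenvector), mirroring the DC component of the classical Fourier transform.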
Organizers: Vincent W. Neo (Imperial College London), Soydan Redif (American University of the Middle East), Stephan Weiss (University of Strathclyde), Patrick A. Naylor (Imperial College London)
Abstract: Multichannel broadband signals arise at the core of many essential military technologies such as radar, sonar and communications, and commercial applications like telecommunications, speech processing, healthcare monitoring and seismic surveillance. The success of these applications often depends on the performance of signal processing tasks such as source localization, channel coding, signal enhancement, and source separation. In multichannel broadband arrays, or with convolutively mixed signals, the array signals are generally correlated in time across different sensors. Therefore, the time delays for broadband signals cannot be represented by a phase shift alone but need to be explicitly modelled. The relative time shifts are captured using the polynomial space-time covariance matrix, where decorrelation over a range of time shifts can be achieved using a polynomial EVD (PEVD). This tutorial is dedicated to recent developments in PEVD for multichannel broadband signal processing applications. We believe this tutorial and its resources, such as code and demo webpages, will motivate and inspire many colleagues and aspiring PhD students working on broadband multichannel signal processing to try PEVD. The applications and demonstrations covered in this tutorial include direction of arrival estimation, beamforming, source identification, weak transient detection, voice activity detection, speech enhancement, source separation and subband coding.
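The space-time covariance matrix referred to above collects the lagged covariances R(tau) = E{x[n] x[n - tau]^T}. The sketch below (a hypothetical two-sensor array with an invented 3-sample inter-sensor delay, not code from the tutorial) estimates these lags from data and recovers the delay from the cross-correlation peak; a PEVD would then diagonalize the resulting polynomial matrix R(z) = sum_tau R(tau) z^{-tau}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, delay = 10_000, 3

# Hypothetical two-sensor array: sensor 0 receives the broadband source
# 3 samples earlier than sensor 1.
s = rng.standard_normal(N + delay)
x = np.vstack([s[delay:], s[:N]])            # shape (2, N)

def st_cov(x, tau):
    """Sample estimate of R(tau)[i, j] = E{x_i[n] * x_j[n - tau]}."""
    M, N = x.shape
    R = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            if tau >= 0:
                R[i, j] = x[i, tau:] @ x[j, :N - tau] / (N - tau)
            else:
                R[i, j] = x[i, :N + tau] @ x[j, -tau:] / (N + tau)
    return R

# A purely phase-based (narrowband) model cannot capture this lag structure;
# the off-diagonal cross-correlation peaks at the true inter-sensor delay.
lags = list(range(-5, 6))
xcorr = [st_cov(x, tau)[1, 0] for tau in lags]
est_delay = lags[int(np.argmax(xcorr))]
```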
Organizers: Sundeep Prabhakar Chepuri (Indian Institute of Science), Fan Liu (Southern University of Science and Technology)
Abstract: Integrated sensing and communications (ISAC) is envisioned to be an integral part of future wireless systems, especially when operating at the millimeter-wave (mmWave) and terahertz (THz) frequency bands. Operating at these high frequencies is challenging due to severe path loss, under which the non-line-of-sight paths can be too weak to be of any practical use, preventing reliable communication or sensing. Recent years have witnessed growing research and industrial interest in using reconfigurable intelligent surfaces (RISs) to modify the harsh propagation environment and establish reliable links for communication in Multiple-Input Multiple-Output (MIMO) systems. However, unlike the comprehensive treatment that RISs have received in the context of empowering wireless communications, a systematic presentation of their application to sensing as well as ISAC, along with the associated signal processing challenges, has not yet been provided. In this tutorial, we will provide an overview of the application and potential benefits of RISs for sensing and ISAC systems, highlighting the main signal processing challenges which arise from such uses of this emerging technology. Our goal is to expose the existing explored directions and exciting research opportunities arising from the use of RISs, which have already vastly impacted the wireless communications community, for sensing systems, traditionally studied by signal processing researchers and practitioners.
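As a back-of-the-envelope illustration of how an RIS can modify the propagation environment (a hypothetical 64-element co-phasing sketch with invented Rayleigh-fading links; not a method from the tutorial), choosing each element's phase shift to align the cascaded paths maximizes the effective channel gain:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64  # hypothetical number of RIS elements

# Rayleigh-fading cascaded links: BS -> RIS (h) and RIS -> user (g).
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Each element applies a unit-modulus phase shift; co-phasing every cascaded
# path h_n * g_n makes all terms add coherently, so the effective gain
# |sum_n h_n e^{j theta_n} g_n| reaches its maximum sum_n |h_n g_n|.
theta = -np.angle(h * g)
gain_opt = np.abs(np.sum(h * np.exp(1j * theta) * g))
gain_passive = np.abs(np.sum(h * g))   # unoptimized surface, for comparison
```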
Organizers: Amir Weiss (MIT), Alejandro Lancho (MIT) and Gary C.F. Lee (MIT)
Abstract: In a landscape that is increasingly dominated by big data and machine learning (ML), what is the future of signal processing? While the answer unfolds, it is now clear that ML methods, particularly deep neural networks (DNNs), not only provide solutions to newly emerging problems but also hold the promise of unprecedented gains and capabilities for solutions to classical, longstanding ones. This tutorial will focus on recent developments in contemporary algorithmic solutions for prominent estimation, filtering, and decoding problems that naturally combine concepts, tools, and techniques from both signal processing and ML. Specifically, a structured methodology for the development of ML-aided solutions to complex problems will be presented. This methodology will be demonstrated on the problems of localization, source separation, and interference rejection, where special attention will be given to the architectural choices of DNN components in the solutions. The session includes theoretical contributions, DNN-related engineering rules of thumb, and a short, starter-level coding session introducing the “RF Challenge”, which aims to encourage the community to develop new artificial-intelligence-inspired algorithms for radio-frequency signal processing.
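As a deliberately simple stand-in for data-driven interference rejection (a linear least-squares estimator trained on invented data; the tutorial's actual solutions use DNN components rather than this closed-form filter), the setting can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, dim = 5000, 16

# Hypothetical setup: mixtures of a target signal and a strong interferer
# confined to a fixed 2-dimensional subspace (an invented structure).
B = rng.standard_normal((dim, 2))
S = rng.standard_normal((n_train, dim))              # target snapshots
Y = S + rng.standard_normal((n_train, 2)) @ B.T * 3.0

# "Learn" a linear rejection filter from (mixture, target) pairs by least
# squares -- the simplest data-driven analogue of training a DNN estimator.
W, *_ = np.linalg.lstsq(Y, S, rcond=None)

# Evaluate on fresh mixtures.
s_test = rng.standard_normal((1000, dim))
y_test = s_test + rng.standard_normal((1000, 2)) @ B.T * 3.0
mse_raw = np.mean((y_test - s_test) ** 2)        # do nothing
mse_est = np.mean((y_test @ W - s_test) ** 2)    # learned filter
```

The learned filter suppresses the interference subspace; nonlinear or non-Gaussian structure is where DNN components outperform such linear baselines.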
Organizers: Alexander Bertrand (KU Leuven), Cem Ates Musluoglu (KU Leuven), Charles Hovine (KU Leuven)
Abstract: In this tutorial, we focus on distributed data-driven spatial filtering in Wireless Sensor Networks (WSNs). The goal is to estimate and track the spatial correlation across the network and exploit this to spatially combine the various sensor signals in order to generate a single output signal that is optimal in some sense (e.g., maximal SNR, minimal squared error, maximally correlated components, maximal variance, etc.). Such an optimal combination of signals is commonly required in various WSN types, such as acoustic sensor networks, body- or neuro-sensor networks, and communication networks. We will present a generic unifying algorithmic framework based on recent works (Musluoglu and Bertrand, ArXiv, 2022a, 2022b) which contains several existing distributed spatial filtering algorithms from the literature as special cases, while at the same time generalizing these to admit more general problem formulations. This so-called distributed adaptive signal fusion (DASF) framework is presented as a ‘meta’ algorithm that allows centralized spatial filtering or signal fusion problems to be efficiently translated to a distributed and time-adaptive (tracking) setting. The framework encompasses a large spectrum of problems such as signal enhancement, dimensionality reduction, beamforming or source separation, as used in diverse applications. We will illustrate it with familiar examples, such as principal component analysis, generalized eigenvalue decompositions, mean-squared error filtering, and minimum variance beamforming, as well as with an accessible demo of how such distributed algorithms can be designed and implemented in practice based on the recently released DASF toolbox.
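The centralized problems that such a framework addresses can be sketched in a few lines. Below, a max-SNR spatial filter is obtained as the principal generalized eigenvector of a signal-plus-noise and a noise-only covariance matrix (a hypothetical simulated 6-sensor scene; a centralized sketch, not the distributed DASF algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 6, 20_000

# Hypothetical scene: one desired source with steering vector a, buried in
# spatially white noise; each row of x is one sensor's signal.
a = rng.standard_normal(M)
s = rng.standard_normal(N)
noise = rng.standard_normal((M, N))
x = np.outer(a, s) + noise

Rxx = x @ x.T / N              # "signal + noise" sample covariance
Rnn = noise @ noise.T / N      # noise-only sample covariance (assumed known)

# Max-SNR filter: principal generalized eigenvector of (Rxx, Rnn), computed
# here by noise whitening with the Cholesky factor of Rnn.
Ln = np.linalg.cholesky(Rnn)
Aw = np.linalg.solve(Ln, np.linalg.solve(Ln, Rxx).T).T   # Ln^{-1} Rxx Ln^{-T}
vals, V = np.linalg.eigh(Aw)
w = np.linalg.solve(Ln.T, V[:, -1])   # de-whitened filter (eigh sorts ascending)
```

The resulting w points (up to scaling) along the whitened steering direction; the distributed question is how to compute the same filter when no node sees all of x.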
Organizers: Yu Rong (Arizona State University), Kumar Vijay Mishra (United States DEVCOM Army Research Laboratory) and Daniel W. Bliss (Arizona State University)
Abstract: Radar-based health monitoring meets the requirements of non-disturbing, ubiquitous, all-weather, penetrable, and privacy-preserving sensing. This has led to the emergence of a rich set of useful and interesting radar-based healthcare applications ranging from clinical to home care, sports training to automotive, and forensic to rescue operations. Unlike wearable sensors, a small-footprint radar measures physiological signals from the human body without any mechanical contact with the skin. Compared to vision sensors (e.g., cameras), radar signals are capable of penetrating clothing without raising privacy concerns. Lately, there has been a focus on radar-based sensing for more complex applications, while radar practitioners are also striving to achieve accurate and robust biometrics in complex, challenging environments. This requires exploiting techniques such as sensor fusion, complex array deployments, multiple wavelengths, and advanced signal processing algorithms. This tutorial will introduce the audience to the latest developments and technologies pertaining to radar remote sensing for physiological measurements, including robust measurement techniques, novel systems, new datasets, and testbeds/platforms in healthcare applications. We will highlight the latest trends in research on signal processing and systems for emerging healthcare applications including plethysmography, THz sensing, and sensor fusion. The tutorial will be highly relevant for participants from diverse backgrounds – academia, industry, and government – all of whom have an active interest and stake in enabling smart health in the new era at commercial and military levels.
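A minimal sketch of the contactless measurement principle (simulated chest displacement and an assumed 24 GHz continuous-wave carrier; all parameters are illustrative, not from the tutorial): chest motion modulates the radar phase, and the breathing rate appears as the dominant spectral peak in the respiration band:

```python
import numpy as np

fs, T = 20.0, 60.0                    # 20 Hz slow-time sampling, 60 s window
t = np.arange(0, T, 1 / fs)

# Simulated chest displacement: breathing at 0.25 Hz (15 breaths/min) plus a
# weaker heartbeat component at 1.2 Hz and measurement noise (all invented).
rng = np.random.default_rng(4)
disp = (4e-3 * np.sin(2 * np.pi * 0.25 * t)
        + 0.2e-3 * np.sin(2 * np.pi * 1.2 * t)
        + 0.05e-3 * rng.standard_normal(t.size))

# For a CW radar at wavelength lam, displacement d maps to phase 4*pi*d/lam.
lam = 3e8 / 24e9                      # assumed 24 GHz ISM-band radar
phase = 4 * np.pi * disp / lam

# Breathing rate = strongest spectral peak in the 0.1-0.5 Hz band.
spec = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(phase.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.5)
bpm = 60 * freqs[band][np.argmax(spec[band])]
```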
Organizers: Dirk Slock (EURECOM), Christo K. Thomas (Virginia Tech)
Abstract: We review a number of established and more recent variational Bayesian inference techniques, which we illustrate in particular through the Sparse Bayesian Learning (SBL) problem. SBL, which was initially proposed in the Machine Learning literature, is an efficient and well-studied framework for sparse or, more generally, underdetermined signal recovery. SBL uses hierarchical Bayes with a decorrelated Gaussian prior in which the variance profile is also to be estimated. This is more sparsity-inducing than, e.g., a Laplacian prior. However, SBL does not scale with problem dimensions due to the computational complexity associated with the matrix inversion in Linear Minimum Mean Squared Error (LMMSE) estimation. To address this issue, various low-complexity approximate Bayesian inference techniques have been introduced for the LMMSE component, including Variational Bayesian (VB) inference, Space Alternating Variational Estimation (SAVE), and Message Passing (MP) algorithms such as Belief Propagation (BP), Expectation Propagation (EP) and Approximate MP (AMP). These algorithms may converge to the correct LMMSE estimate, with various posterior variance estimation qualities. In this tutorial, we provide a detailed overview of these low-complexity approximate Bayesian inference techniques and their superiority (in terms of convergence, computational complexity, and robustness w.r.t. measurement matrices) compared to other state-of-the-art techniques. LMMSE bricks appear in the hierarchical Bayes approach of SBL and in the Gaussian approximations performed by EP (e.g., in AMP), which are asymptotically exact as justified by large-system analysis. Apart from the generalized linear model application, of which SBL is an instance, we also consider the bilinear model, which appears in (semi)blind channel estimation as, e.g., in Cell-Free Massive MIMO, and the dynamic instance of SBL, which leads to adaptive and extended Kalman filtering.
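A minimal EM-flavored SBL sketch (problem sizes, dictionary, and data are invented for illustration) makes the structure concrete: the inner LMMSE step contains the matrix inversion whose cost motivates the approximate inference methods above, while the outer loop learns the variance profile of the prior:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 40, 100, 3                  # underdetermined: 40 measurements, 100 unknowns

# Hypothetical sparse-recovery instance with a k-sparse ground truth.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = 3.0
sigma2 = 1e-3
y = Phi @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)

# EM form of SBL: alternate the LMMSE posterior of x -- the O(n^3) inversion
# that VB/SAVE/BP/EP/AMP approximate -- with the variance-profile update.
gamma = np.ones(n)
for _ in range(50):
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ Phi.T @ y / sigma2                      # posterior mean (LMMSE)
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)    # EM update, floored

est_support = set(np.argsort(gamma)[-k:])   # largest learned prior variances
```

Irrelevant coefficients see their learned variances shrink toward zero, which is the mechanism by which the Gaussian hierarchical prior induces sparsity.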