A function can be represented in terms of its values at points of its domain or in terms of its Fourier coefficients. In the first case, each value contains information about one position but all frequencies; in the second, about one frequency but all positions. Wavelets lie between the two extremes: wavelet coefficients contain information about a function that is local in both position and frequency. They analyse a function at multiple scales, with a resolution adapted to each scale.
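As a minimal illustration of this locality in both position and scale, the sketch below implements the orthonormal Haar transform, the simplest wavelet transform (the function names are ours, chosen for the example):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform.

    Each detail (wavelet) coefficient depends only on two
    neighbouring samples, so it is local in position; the level
    it sits at fixes its scale.
    """
    pairs = np.asarray(x, dtype=float).reshape(-1, 2)
    coarse = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return coarse, detail

def haar_transform(x):
    """Full multiscale Haar decomposition (length a power of two)."""
    details = []
    coarse = np.asarray(x, dtype=float)
    while len(coarse) > 1:
        coarse, d = haar_step(coarse)
        details.append(d)
    return coarse, details

# A step edge: only the single detail coefficient straddling the
# jump is nonzero -- the representation is local in both senses.
x = np.array([1., 1., 1., 1., 5., 5., 5., 5.])
coarse, details = haar_transform(x)
```

Because the transform is orthonormal, the energy of the signal is preserved across the coefficients at all scales.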
Wavelets have found a plethora of uses in subjects ranging from rigorous analysis of the renormalization group to signal and image processing. Their use in the latter has been transformative. This is largely because many real-world signals are 'sparse' in the wavelet basis, i.e. they have many zero or near-zero coefficients. This allows the signal to be compressed, because small coefficients can be thrown away, and denoised, because small coefficients are mostly noise and can be set to zero.
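The denoising idea can be sketched in a few lines: transform, shrink the small coefficients towards zero, and invert. This sketch uses a single level of the Haar transform and soft thresholding; the function names and the threshold are illustrative choices, not a prescribed method:

```python
import numpy as np

def haar_analysis(x):
    # One level of the orthonormal Haar transform: averages and details.
    pairs = np.asarray(x, dtype=float).reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

def haar_synthesis(coarse, detail):
    # Exact inverse of haar_analysis.
    out = np.empty(2 * len(coarse))
    out[0::2] = (coarse + detail) / np.sqrt(2)
    out[1::2] = (coarse - detail) / np.sqrt(2)
    return out

def soft_threshold(c, t):
    # Shrink coefficients towards zero; small (mostly-noise) ones vanish.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t):
    coarse, detail = haar_analysis(x)
    return haar_synthesis(coarse, soft_threshold(detail, t))
```

For a piecewise-constant signal the clean detail coefficients are zero, so thresholding removes small perturbations while leaving the signal itself untouched.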
Surprisingly, it also allows signal acquisition to be compressed, even though it is not known in advance which will be the large coefficients: instead of acquiring one million pixels, we can make only 100,000 measurements yet still reconstruct the 1MP image exactly, with high probability, provided it is sufficiently sparse. This leads to the general idea of 'compressed sensing', which can also be viewed as a version of Bayesian experimental design, and analysed using statistical physics techniques such as 'replicas'.
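A toy version of such a reconstruction can be sketched with iterative soft thresholding (ISTA), one standard solver for the underlying l1-regularised problem; the dimensions, seed, and parameters below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 100, 40, 3                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix, m < n
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # only m compressed measurements

def ista(A, y, lam=1e-3, n_iter=5000):
    # Iterative soft thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L       # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage step
    return x

x_hat = ista(A, y)
```

Despite having fewer measurements than unknowns, the sparsity of `x_true` lets the l1 penalty pick out the correct solution among the infinitely many vectors consistent with `y`.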
Wavelets have also spawned a host of other representations: complex wavelets, wavelet packets, bandelettes, curvelets, warplets, wedgelets, contourlets, ridgelets and beamlets, many of them concerned with multiscale descriptions of geometry.
More recently, wavelets have been used to analyse deep learning networks, particularly those used with image data. These networks are often built as a series of linear filters at different scales interspersed with nonlinear stages, and this can be thought of as a 'wavelet scattering transform'. Wavelet analysis sheds light on how and why deep learning achieves the results that it does.
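The basic pattern of the scattering transform can be sketched as: filter with wavelets at several scales, apply a pointwise modulus nonlinearity, then average. The filters below are crude difference-of-Gaussians band-passes standing in for a proper wavelet filter bank, and all names and parameters are illustrative:

```python
import numpy as np

def gaussian(n, sigma):
    # Normalised Gaussian window of length n.
    t = np.arange(n) - n // 2
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def band_pass(n, sigma):
    # A crude zero-mean 'wavelet': difference of Gaussians at adjacent scales.
    return gaussian(n, sigma) - gaussian(n, 2 * sigma)

def scattering(x, scales=(1, 2, 4, 8), width=33):
    # First-order scattering features: S1[j] = average(|x * psi_j|).
    # Linear filtering at several scales, a pointwise modulus
    # nonlinearity, then low-pass averaging -- the same pattern as the
    # early layers of a convolutional network.
    feats = []
    for s in scales:
        u = np.abs(np.convolve(x, band_pass(width, s), mode="same"))
        feats.append(u.mean())
    return np.array(feats)
```

The averaging makes the features approximately translation invariant while the multiscale filters retain frequency information, which is one reason such descriptors are stable and discriminative.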
A project could take many forms: a study of wavelet-based approximation theory; wavelet denoising; sparsity and compressed sensing; wavelet packets and probabilistic models of texture; the use of wavelets in random field theory, e.g. for efficient sampling; the use of exotic multiscale representations of geometry; the theory and practice of wavelet scattering transforms in deep learning. Each of these topics has a theoretical side with beautiful mathematics, and a practical side that could lead to implementation and testing on signal or image data.
Statistical Inference 2, and in particular Bayesian statistics, would be useful. Depending on the direction of the project, some knowledge of differential and Riemannian geometry and group theory would be an advantage. For the more practical directions, it is important to have a working knowledge of a programming language.