In computer vision, many problems can be framed as inferences about labellings of the pixels of an image, e.g. identifying objects ('dog', 'car') or computing depth from stereo images. A Bayesian formulation results in a posterior probability over labellings P(C | I, K), where C: D → L maps the pixels D to a label set L; I is the image data; and K represents other prior knowledge. As our estimate of the true labelling, we may choose the labelling that maximizes this probability or, equivalently, minimizes the 'energy' E(C | I, K) = −ln P(C | I, K). We thus have to solve an optimization problem: minimize the real-valued function E of a set of variables taking values in a finite set L.
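To make the formulation concrete, here is a toy sketch (the model is an illustrative assumption, not one from the literature): a one-dimensional 'image' of noisy intensities, binary labels L = {0, 1}, a quadratic data term, and a smoothness term penalizing neighbouring pixels with different labels. Minimizing E then picks out the most probable labelling, since P ∝ exp(−E).

```python
from itertools import product

# Illustrative binary labelling energy (assumed model):
#   E(C | I) = sum_p (I_p - C_p)^2  +  lam * sum_{p~q} [C_p != C_q]
# The first term fits each label to the data; the second rewards
# smoothness between neighbouring pixels.

def energy(labels, image, lam=0.5):
    data = sum((i - c) ** 2 for i, c in zip(image, labels))
    smooth = lam * sum(a != b for a, b in zip(labels, labels[1:]))
    return data + smooth

image = [0.1, 0.2, 0.9, 0.8]

# Brute-force the minimum-energy labelling over all 2^4 candidates.
best = min(product([0, 1], repeat=len(image)),
           key=lambda c: energy(c, image))
print(best)                  # → (0, 0, 1, 1)
print(energy(best, image))   # → 0.6
```

Brute force is exponential in the number of pixels, which is exactly why the efficiently solvable subclasses discussed below matter.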
As a class, such problems are not efficiently soluble. However, the functions E that arise in computer vision often have particular structures, and it turns out that important subclasses of problem are efficiently solvable, or at least efficiently approximable. As well as having great practical application, these problems and their solutions have beautiful mathematical structures. For example, an important class of binary labelling problems can be minimized exactly by computing a minimum cut in an associated graph, for which fast algorithms exist.
A project in this area could take many directions.
On the theoretical side, one could investigate the structure of the problems that can be solved with graph cuts; study minimum cut and maximum flow algorithms, their algorithmic complexity, or their relation to continuum problems; review 'reductions', ways to transform complex energies to solvable form; look at the solvability criterion and possible generalizations ('higher holonomy', 'hypergraphs'); or evaluate the quality of approximate solutions with tools from statistical physics.
On the practical side, one could study the application of these methods in computer vision problems: the probabilistic models involved; variants of the basic algorithms used for more complex problems; implementations and comparisons with other methods, e.g. 'simulated annealing'; or survey the many varieties of exact and approximate algorithms and the problems they have been used to solve.
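As an illustration of one such comparison method, simulated annealing can be sketched in a few lines. Everything below (the energy, the cooling schedule, the parameters) is an illustrative assumption, not taken from any particular vision system: Metropolis single-pixel flips on the binary energy E(C) = Σ_p (I_p − C_p)² + λ Σ_{p~q} [C_p ≠ C_q].

```python
import math
import random

def energy(labels, image, lam=0.5):
    """Illustrative binary labelling energy: data term + smoothness term."""
    data = sum((i - c) ** 2 for i, c in zip(image, labels))
    smooth = lam * sum(a != b for a, b in zip(labels, labels[1:]))
    return data + smooth

def anneal(image, lam=0.5, steps=5000, t0=1.0, seed=0):
    """Metropolis simulated annealing with a simple 1/(1+k) cooling schedule."""
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in image]
    e = energy(labels, image, lam)
    for k in range(steps):
        temp = t0 / (1 + k)         # temperature decreases over time
        p = rng.randrange(len(labels))
        labels[p] ^= 1              # propose flipping one label
        e_new = energy(labels, image, lam)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new               # accept the move
        else:
            labels[p] ^= 1          # reject: undo the flip
    return labels, e

labels, e = anneal([0.1, 0.2, 0.9, 0.8])
print(labels, e)
```

Unlike graph cuts, annealing gives no exactness guarantee in finite time, but it applies to arbitrary energies, which is why it serves as a natural baseline for comparison.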
Statistical Concepts II, and in particular Bayesian statistics, would be useful, as would Discrete Maths. Operations Research would also be helpful. For the more practical directions, some familiarity with a programming language would also be an advantage.