
Department of Mathematics

College of Arts and Sciences

Mathematics Colloquium


Irene Fonseca
Carnegie Mellon University

Title: From Phase Separation in Heterogeneous Media to Learning Training Schemes for Image Denoising
Date: Friday, April 17
Place and Time: Love 101, 3:05-3:55 pm

Abstract. What do these two themes have in common? Both are treated variationally, both deal with energies of different dimensionalities, concepts of geometric measure theory prevail in both, and higher-order penalizations are considered. Might learning training schemes for choosing these penalizations in imaging be of use in phase transitions?

Phase Separation in Heterogeneous Media: Modern technologies and biological systems, such as temperature-responsive polymers and lipid rafts, take advantage of engineered inclusions, or natural heterogeneities of the medium, to obtain novel composite materials with specific physical properties. To model such situations using a variational approach based on the gradient theory of phase transitions, the potential and the wells may have to depend on the spatial position, even in a discontinuous way, and different regimes should be considered. In the critical case, where the scale of the small heterogeneities is of the same order as the scale governing the phase transition and the wells are fixed, the interaction between homogenization and the phase-transition process leads to an anisotropic interfacial energy. The supercritical case for fixed wells is also addressed. In the subcritical case with moving wells, where the heterogeneities of the material are of a larger scale than that of the diffuse interface between different phases, it is observed that there is no macroscopic phase separation.

Learning Training Schemes for Image Denoising: Due to their ability to handle discontinuous images while having well-understood behavior, regularizations with total variation (TV) and total generalized variation (TGV) are among the best-known methods in image denoising. However, like other variational models that include a fidelity term, they depend crucially on the choice of their tuning parameters. A remedy is to choose these systematically through multilevel approaches, for example by optimizing performance on noisy/clean image training pairs. Such methods with space-dependent parameters that are piecewise constant on dyadic grids are considered, with the grid itself being part of the minimization. Improved performance on some representative test images, when compared with constant optimized parameters, is demonstrated.
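The idea of learning a TV tuning parameter from a noisy/clean training pair can be illustrated with a minimal one-dimensional sketch. This is not the scheme described in the talk (no space-dependent parameters, no dyadic grids, no TGV): it minimizes a smoothed TV model by plain gradient descent and picks the regularization weight `lam` by a simple grid search; all names and numerical choices below are illustrative.

```python
import math
import random

def tv_denoise(f, lam, iters=500, step=0.1, eps=1e-2):
    """Gradient descent on 0.5*sum (u_i - f_i)^2 + lam * sum sqrt((u_{i+1}-u_i)^2 + eps).

    eps smooths the non-differentiable TV term so plain gradient descent applies.
    """
    u = list(f)
    n = len(u)
    for _ in range(iters):
        # Gradient of the fidelity (data) term.
        g = [u[i] - f[i] for i in range(n)]
        # Gradient of the smoothed TV term, accumulated pair by pair.
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            t = lam * d / math.sqrt(d * d + eps)
            g[i] -= t
            g[i + 1] += t
        u = [u[i] - step * g[i] for i in range(n)]
    return u

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# A training pair: piecewise-constant "clean" signal and a noisy observation.
random.seed(0)
clean = [0.0] * 20 + [1.0] * 20
noisy = [c + random.gauss(0.0, 0.2) for c in clean]

# "Learning" the tuning parameter: choose lam that performs best on the pair.
lams = [0.0, 0.05, 0.1, 0.2, 0.4]
best = min(lams, key=lambda lam: mse(tv_denoise(noisy, lam), clean))
print("selected lam:", best)
```

Since `lam = 0.0` leaves the noisy signal unchanged, the selected parameter can never do worse on the training pair than no denoising at all; the bilevel-type schemes in the talk replace this crude grid search with an optimization over spatially varying, piecewise-constant parameter maps.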