Joint Sparse Recovery of Misaligned Multimodal Images Via Adaptive Local and Nonlocal Cross-Modal Regularization
Abstract
Given only a few noisy linear measurements of distinct, misaligned modalities, we aim to recover the underlying multimodal image using a sparsity-promoting algorithm.
Unlike previous multimodal sparse recovery approaches that employ side information under the naive assumption that the modalities are perfectly calibrated or that the deformation parameters are known, we adaptively estimate the deformation parameters from images separately recovered from the incomplete measurements. To this end, we develop a multiscale dense registration method that alternates between fitting block-wise intensity mapping models and estimating a shift vector field, from which the deformation parameters are obtained and refined through a weighted least-squares approximation.
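To make the weighted least-squares step concrete, the following is a minimal illustrative sketch, not the paper's exact procedure: it assumes an affine deformation model, block-center coordinates, block-wise shift estimates, and per-block confidence weights (all names are hypothetical).

```python
# Illustrative sketch (assumed affine model, not the authors' exact method):
# fit a parametric deformation to a dense block-wise shift field by WLS.
import numpy as np

def fit_affine_wls(coords, shifts, weights):
    """Fit an affine map x -> A @ x + t to observed block-wise shifts.

    coords  : (N, 2) block-center coordinates (x, y)
    shifts  : (N, 2) estimated shift vectors at those coordinates
    weights : (N,)   confidence weights (e.g., from the intensity-mapping fit)
    """
    N = coords.shape[0]
    # Design matrix for the parameter vector [a11, a12, a21, a22, t1, t2];
    # rows alternate between the x- and y-equations of each block center.
    X = np.zeros((2 * N, 6))
    X[0::2, 0:2] = coords
    X[0::2, 4] = 1.0
    X[1::2, 2:4] = coords
    X[1::2, 5] = 1.0
    y = (coords + shifts).reshape(-1)   # target (warped) positions, interleaved
    w = np.repeat(weights, 2)           # same weight for both coordinates of a block
    # Weighted normal equations: (X^T W X) p = X^T W y
    XtW = X.T * w
    params = np.linalg.solve(XtW @ X, XtW @ y)
    A = params[:4].reshape(2, 2)
    t = params[4:]
    return A, t
```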
The co-registered images are then jointly recovered in a plug-and-play framework in which a collaborative filter exploits the local and nonlocal cross-modal correlations inherent in the multimodal image.
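As a rough illustration of the plug-and-play idea, and under stated assumptions rather than as the paper's actual solver, the sketch below runs a gradient step on each modality's data-fidelity term and then applies a joint denoiser standing in for the cross-modal collaborative filter; `A_ops`, `y_list`, and `denoise_joint` are placeholder names.

```python
# Illustrative plug-and-play sketch (assumptions, not the paper's exact solver).
def pnp_joint_recovery(A_ops, y_list, denoise_joint, n_iters=50, step=1.0):
    """Jointly recover a stack of co-registered modalities.

    A_ops         : list of (forward, adjoint) measurement operators, one per modality
    y_list        : list of measurement vectors, one per modality
    denoise_joint : callable mapping the multimodal stack to a filtered stack
                    (placeholder for the cross-modal collaborative filter)
    """
    # Back-projection initialization from the incomplete measurements.
    x = [Aadj(y) for (Afwd, Aadj), y in zip(A_ops, y_list)]
    for _ in range(n_iters):
        # Per-modality gradient step on the data-fidelity term ||A x - y||^2.
        x = [xi - step * Aadj(Afwd(xi) - y)
             for xi, (Afwd, Aadj), y in zip(x, A_ops, y_list)]
        # Joint regularization step: the denoiser exploits local and nonlocal
        # correlations across the stacked modalities.
        x = denoise_joint(x)
    return x
```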
Our experiments with this fully automatic registration and joint recovery pipeline show better detection and sharper recovery of fine details that cannot be retrieved by separate, per-modality reconstruction.