Dipartimento di Matematica "F. Brioschi"

Dottorato di Ricerca in Ingegneria Matematica, XVII ciclo

theory, algorithms and applications

April 2005

Defended on May 24, 2005 at the Department of Mathematics, Politecnico di Milano, Milano, Italy.

Examination Commission:

- Prof. Emilio Barucci (Università degli Studi di Pisa)
- Prof. Patrizio Campisi (Università degli Studi di Roma III)
- Prof. Stefano Tubaro (Politecnico di Milano), Chair of the Commission

Supervisor: Prof. Karen Egiazarian (Tampere University of Technology)

Coordinator: Prof. Marco Fuhrman (Politecnico di Milano)

Download complete thesis as PDF file (12.3 MB)

When estimating an image from its noisy observations, a trade-off between noise suppression (variance) and oversmoothing (bias) must be considered. Common images are nonstationary and often characterized by localized features. Images should therefore be treated adaptively: for example, stronger noise suppression can be achieved where the original image is smooth than in the vicinity of sharp transitions such as edges, where oversmoothing must be avoided.

Thus, the desired balance between variance and bias depends on the image's local features. How to control this balance is a key problem in adaptive signal processing, and a novel strategy to achieve such adaptation is the main subject of this thesis.
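As a minimal numerical illustration of this bias-variance trade-off (not taken from the thesis; the toy signal, noise level, and window sizes are arbitrary choices), one can smooth noisy samples of a piecewise-smooth 1D signal with moving-average windows of increasing size and compare the error in a smooth region against the error near the jump:

```python
import numpy as np

# Toy piecewise-smooth signal with a jump of height 1 at t = 0.5.
rng = np.random.default_rng(0)
n = 512
t = np.linspace(0.0, 1.0, n)
signal = np.sin(2 * np.pi * t) + np.where(t < 0.5, 0.0, 1.0)
noisy = signal + 0.3 * rng.standard_normal(n)

results = {}
for h in (3, 15, 61):  # moving-average window sizes (scales)
    estimate = np.convolve(noisy, np.ones(h) / h, mode="same")
    mse_smooth = np.mean((estimate[50:200] - signal[50:200]) ** 2)   # smooth region
    mse_edge = np.mean((estimate[246:266] - signal[246:266]) ** 2)   # around the jump
    results[h] = (mse_smooth, mse_edge)
    print(f"h={h:2d}  MSE(smooth region)={mse_smooth:.4f}  MSE(near edge)={mse_edge:.4f}")
```

A larger window reduces the error in the smooth region (variance shrinks faster than bias grows) but increases it near the jump (bias dominates): no single fixed scale is good everywhere, which is exactly what motivates pointwise adaptation.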

The presented approach is based on the Intersection of Confidence Intervals (ICI) rule for pointwise-adaptive estimation. The method was originally developed for 1D signals; the idea was later generalized to 2D image processing, where adaptive-size quadrant windows were used.

The main intention of these techniques is to find, in a data-driven way, the largest local neighborhood of the estimation point in which the underlying model fits the data.

Our main assumption is that this vicinity is a star-shaped body which can be approximated by a sectorial decomposition with, say, K non-overlapping sectors. We use special directional kernels defined on a sectorial support. By endowing these kernels with a scale parameter, we can use the ICI rule for the pointwise selection of an adaptive scale, which defines the length of the sectorial support.

Anisotropy is enabled by allowing different adaptive scales for different directions. Thus, the ICI rule is exploited K times, once for each sector. In this way, a complex multidimensional shape-adaptation problem is reduced to a number of scalar optimizations.
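The ICI rule itself is simple to state: given pointwise estimates obtained at increasing scales, with their standard deviations, one forms the confidence intervals [estimate - Gamma*std, estimate + Gamma*std] and selects the largest scale for which all intervals up to that scale still share a common point. A minimal sketch (the function name and the toy numbers are illustrative, not from the thesis):

```python
def ici_scale(estimates, stds, gamma=2.0):
    """Return the index of the largest scale whose confidence interval
    still intersects all the intervals at smaller scales (ICI rule)."""
    lower, upper = -float("inf"), float("inf")
    selected = 0
    for i, (y, s) in enumerate(zip(estimates, stds)):
        lower = max(lower, y - gamma * s)  # running intersection of the intervals
        upper = min(upper, y + gamma * s)
        if lower > upper:                  # intersection became empty: stop
            break
        selected = i
    return selected

# Toy example: the estimates drift away at large scales (growing bias),
# while their standard deviations shrink (decreasing variance).
estimates = [0.02, 0.01, 0.00, 0.40, 0.90]
stds      = [0.50, 0.25, 0.12, 0.06, 0.03]
print(ici_scale(estimates, stds))  # → 2
```

In the anisotropic scheme this scalar selection is simply repeated for each of the K sectors, yielding one adaptive scale per direction.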

The directional estimates corresponding to the adaptive-scale sectors are combined into the final anisotropic estimate. The resulting estimator is truly anisotropic, and its support can take quite exotic shapes. It is highly sensitive to change-points and reveals fine image elements from noisy observations, a remarkable advantage of the proposed strategy.
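The combination step weights each directional estimate inversely to its variance (the sigma^-2 fusing discussed in the thesis); for unbiased, uncorrelated estimates this choice of convex weights minimizes the variance of the result. A sketch under those assumptions (the numbers are illustrative only):

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance (sigma^-2) weighted combination of directional estimates.

    For unbiased, uncorrelated estimates these convex weights minimize the
    variance of the fused estimate; the fused variance is the harmonic-sum
    1 / sum(1/var_k)."""
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()
    fused = float(np.dot(weights, estimates))
    fused_var = float(1.0 / inv_var.sum())
    return fused, fused_var

# K = 4 directional estimates of the same pixel, with their variances:
y, v = fuse([0.8, 1.2, 1.0, 1.1], [0.04, 0.16, 0.08, 0.08])
```

Note that the fused variance is smaller than the variance of the best single directional estimate, which is the point of combining all K sectors rather than keeping only one.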

Several algorithms based on this estimator are developed. Denoising is the main and most natural application, but deconvolution and derivative estimation are also problems where the anisotropic adaptation approach can play a significant role in achieving improved restoration performance.

The relevance of the work behind this thesis is twofold: not only has a quite general theoretical framework for a novel estimator been developed, but also several algorithmic solutions to concrete problems of applicative interest have been developed, optimized, and compared against other state-of-the-art techniques. On many occasions, the proposed techniques have proven to outperform the best methods in the literature known to us.

This thesis aims to cover these two rather different aspects, the theoretical and the algorithmic, in order to provide a complete view of the scope of the research that was carried out. In order not to overload the reader, it was decided not to go too deep into the details of either of these two aspects. Instead, our motivation was to emphasize the links between the theoretical modeling and the practical implementations. Consequently, some topics are purposely neglected in the theoretical exposition (e.g. the use of a vector scale parameter and some convergence analyses of purely abstract importance), and some aspects are not discussed in the description of the algorithms (e.g. the procedures for the optimization of the algorithm parameters and some computational issues). Likewise, non-essential remarks have been dropped or relegated to footnotes.

Nevertheless, we try to keep the exposition at a general-enough level: for example, the local polynomial approximation is presented as a particular case of the general form of the moving least-squares method, described in the context of Hilbert-space approximations, and the asymptotic accuracy analyses are also carried out in a rather general way. Although some finer aspects are inevitably "rounded off", the resulting formulas are lean and allow -- in the author's opinion -- a much easier understanding of the general applicability of the developed methods.

It was also decided not to follow the standard "definition-theorem-proof-corollary" formalism typical of most mathematical texts. Instead, a more informal and discursive style is used.

The thesis is structured in two parts. First, we consider the theoretical background on which the developed technique is based: the LPA, the ICI, and the anisotropic fusing of the directional LPA-ICI estimates are presented. Denoising is considered as the basic underlying problem at which the approach is targeted. A recursive filtering strategy and an extension of the anisotropic denoising method to the gradient-estimation problem are also discussed. In the second part we present a number of algorithms and applications in which the proposed techniques can be successfully exploited: besides denoising, deblurring, inverse halftoning, and other related problems are considered.

Contents

Foreword
Introduction
Outline
Notation and conventions

Part I: Theoretical background and basic methods

1 Local approximations, the moving least-squares method, and the local polynomial approximation (LPA)
  1.1 Analysis, reconstruction and approximation in Hilbert spaces
    1.1.1 Projection onto an orthonormal basis
    1.1.2 Non-orthogonal case: frames
  1.2 The space L^2 with a windowing measure
    1.2.1 Definition of the space
    1.2.2 Best approximation in L^2(R^d, mu)
  1.3 Pointwise evaluation of the approximation and its kernel representation
  1.4 Moving least-squares (MLS) method
    1.4.1 Translation-invariance
    1.4.2 Moving least-squares denoising
  1.5 Local polynomial approximation (LPA)
    1.5.1 Characterization of an LPA
    1.5.2 Function and derivative estimation kernels
    1.5.3 Vanishing moments
    1.5.4 Zero-order LPA
  1.6 Finite case and matrix notation
    1.6.1 Best approximation (weighted least-squares solution)
    1.6.2 Pointwise evaluation of the best approximation and corresponding kernel
    1.6.3 Vector form for LPA function and derivative estimation kernels
  1.7 Some examples of LPA kernels

2 Adaptive nonparametric estimation
  2.1 Parametric vs. nonparametric estimation
  2.2 Scale
    2.2.1 LPA kernels as smoothers
  2.3 Accuracy analysis of the LPA kernels
    2.3.1 Bias and variance
    2.3.2 Asymptotic error analysis
    2.3.3 Ideal scale
  2.4 Intersection of Confidence Intervals (ICI) rule
    2.4.1 The idea
    2.4.2 ICI adaptive-scale selection rule
    2.4.3 ICI algorithm pseudo-code
    2.4.4 Choice of Gamma
    2.4.5 Examples with symmetric windows

3 Directional LPA
  3.1 Motivation
  3.2 Directional LPA: a general definition
  3.3 Discrete directional-LPA kernel construction
  3.4 Peculiarities of the directional-LPA kernel design
  3.5 Some examples of directional-LPA kernels

4 Anisotropic LPA-ICI
  4.1 Motivation and idea
    4.1.1 Estimates with support optimization
    4.1.2 Estimates with kernel-scale optimization
  4.2 Anisotropic estimator based on directional adaptive scale
    4.2.1 Anisotropic LPA-ICI estimator
    4.2.2 Adaptive anisotropic kernel and adaptive anisotropic neighborhood
    4.2.3 Anisotropic estimation: formal modelling
  4.3 Anisotropic LPA-ICI pseudo-code
  4.4 Illustrations
  4.5 Fusing: why sigma^-2?
    4.5.1 Fusing unbiased estimates
    4.5.2 Uniform kernels for a uniform anisotropic kernel
  4.6 Ideal scale h and the use of ICI for fused estimates
    4.6.1 Practical impact of the Gamma parameter
    4.6.2 MSE of the anisotropic fused estimate
    4.6.3 A simplified analysis
    4.6.4 Speculations and results on the value of Gamma
  4.7 Uniform fusing for overlapping discrete kernels
  4.8 Variance of the fused estimate
    4.8.1 Non-overlapping kernels
    4.8.2 Origin-overlapping kernels
    4.8.3 Uniform fusing
  4.9 Robust ICI for anisotropic estimation
    4.9.1 WOS filters
    4.9.2 Anisotropic LPA-ICI WOS filters
    4.9.3 Binary WOS as thresholding of a linear filter
  4.10 Algorithm complexity and implementation issues
    4.10.1 Complexity
    4.10.2 Implementation aspects

5 LPA-ICI for signal-dependent or space-variant noise
  5.1 Signal-dependent noise
    5.1.1 Poisson observations
  5.2 Space-variant noise
    5.2.1 Variance for heteroskedastic observations
    5.2.2 Variance's asymptotics
    5.2.3 Confidence intervals for non-Gaussian distributed estimates
    5.2.4 Conclusions

6 Recursive anisotropic LPA-ICI
  6.1 An iterative system
  6.2 Estimation neighborhoods' enlargement
  6.3 Variance of the l-th iteration's directional estimates

7 Directional derivative estimation and anisotropic gradient
  7.1 Derivative estimation
    7.1.1 Derivative estimation LPA kernels
  7.2 Anisotropic gradient estimation
    7.2.1 Motivation
    7.2.2 An illustrative example in the continuous domain
    7.2.3 The same example in the discrete domain
    7.2.4 Continuous domain anisotropic gradient
    7.2.5 Discrete domain anisotropic gradient
    7.2.6 More examples

Part II: Algorithms, applications and further examples

8 Denoising
  8.1 Additive white Gaussian noise
  8.2 Recursive LPA-ICI implementation
    8.2.1 Simulations
  8.3 Signal-dependent noise
    8.3.1 Recursive variance update
    8.3.2 Poisson denoising experiments
    8.3.3 Simulation results
    8.3.4 Other types of noise

9 Deconvolution
  9.1 Additive white Gaussian noise
    9.1.1 Introduction
    9.1.2 Adaptive RI-RWI deblurring algorithm
    9.1.3 Derivative estimation and edge detection from noisy blurred observations
  9.2 Poisson deconvolution
    9.2.1 Introduction
    9.2.2 Poissonian RI-RWI algorithm
    9.2.3 Linear inverse with directional adaptive LPA-ICI filtering
    9.2.4 Poissonian RI inverse
    9.2.5 Poissonian RWI inverse
    9.2.6 Comments
    9.2.7 Numerical experiments
  9.3 Inverse halftoning
    9.3.1 Halftoning and inverse halftoning
    9.3.2 Error diffusion
    9.3.3 Linear model of error diffusion
    9.3.4 Anisotropic LPA-ICI inverse halftoning
    9.3.5 Simulation results

10 Other applications
  10.1 Video denoising
    10.1.1 Introduction
    10.1.2 Coordinate system
    10.1.3 Video-denoising simulation
  10.2 Shading from depth map: Z-buffer shading

11 Hybrid methods: LPA-ICI SA-DCT

Bibliography


Foi, A.,