
Program

All presentations are 30 minutes plus an additional 5 minutes for discussion.

Times listed are Central European Summer Time (UTC+2).

Time            Monday 6              Tuesday 7
09.45 - 10.00   Welcome
10.00 - 10.35   Siltanen              Dostal
10.35 - 11.10   Morigi                Donatelli
11.10 - 11.30   Virtual coffee break  Virtual coffee break
11.30 - 12.05   Loris                 di Serafino
12.05 - 12.40   Morini                Chouzenoux
12.40 - 14.15   Virtual lunch         Virtual lunch
14.15 - 14.50   Loli Piccolomini      Calatroni
14.50 - 15.25   Ochs                  Rebegoldi
15.25 - 16.00   Verri                 Conclusions

 

Luca Calatroni, CNRS, Université Côte d’Azur, Inria Sophia Antipolis-Mediterranée

Title: “Covariance-based Super-Resolution microscopy with intensity estimation and automatic parameter selection”

Abstract: “Super-resolution fluorescence microscopy overcomes the physical barriers due to light diffraction, allowing for the observation of otherwise indistinguishable sub-pixel entities. State-of-the-art super-resolution methods achieve adequate spatio-temporal resolution under rather challenging experimental conditions by means of costly devices and/or very specific fluorescent molecules. In this talk, we present a method for covariance-based super-resolution microscopy with intensity estimation and automatic parameter selection which is well suited for live-cell imaging and which allows for improved spatio-temporal resolution with common microscopes and conventional fluorescent dyes. Our approach codifies the assumption of a sparse distribution of the fluorescent molecules, as well as the temporal and spatial independence between emitters, in a covariance domain where the location of emitters is estimated by solving a possibly non-convex optimisation problem. In order to deal with real data, the proposed approach is further enriched by the estimation of both a non-constant background and the noise statistics. A separate intensity estimation step, in which intensity information is retrieved, is then considered. This is a valuable piece of information for the biological interpretation of the results and for their use in 3D super-resolution imaging. To make the reconstruction model fully automated, we further detail automatic parameter selection strategies based on algorithmic restarting and on discrepancy-type approaches. Several results for both simulated and real data are reported, and comparisons with analogous models such as SRRF and (Learned-)SPARCOM are given. This is joint work with V. Stergiopoulou, L. Blanc-Féraud (CNRS, I3S, Sophia-Antipolis), H. Goulart (IRIT, ENSEEIHT, Toulouse) and S. Schaub (IMEV, Villefranche-sur-Mer).”
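
As a rough, self-contained illustration of the covariance-domain idea (not the pipeline presented in the talk), the sketch below computes second-order temporal statistics of a fluorescence image stack; all array names, shapes and the simple variance surrogate are assumptions made for illustration only.

```python
# Illustrative sketch only: second-order temporal statistics of a fluorescence
# image stack, in the spirit of covariance-based super-resolution. This is NOT
# the method presented in the talk; array names and shapes are assumptions.
import numpy as np

def temporal_covariance(stack):
    """stack: (T, H, W) array of T frames. Returns the (H*W, H*W) covariance
    matrix of the vectorised frames (feasible only for small patches)."""
    T, H, W = stack.shape
    X = stack.reshape(T, H * W).astype(np.float64)
    X -= X.mean(axis=0, keepdims=True)          # remove temporal mean
    return (X.T @ X) / (T - 1)                  # empirical covariance

def variance_map(stack):
    """Cheaper surrogate: per-pixel temporal variance (the covariance diagonal),
    which already enhances independently blinking emitters."""
    return stack.astype(np.float64).var(axis=0)

# Example with synthetic data:
rng = np.random.default_rng(0)
frames = rng.poisson(lam=5.0, size=(200, 16, 16)).astype(float)
C = temporal_covariance(frames)    # (256, 256) covariance matrix
V = variance_map(frames)           # (16, 16) variance image
```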

 

Emilie Chouzenoux, Université Paris-Saclay

Title: “Proximal gradient algorithm in the presence of adjoint mismatch. Application to Computed Tomography”

Abstract: “The proximal gradient algorithm is a popular iterative algorithm to deal with penalized least-squares minimization problems. Its simplicity and versatility allow one to embed nonsmooth penalties efficiently. In the context of inverse problems arising in signal and image processing, a major concern lies in the computational burden when implementing minimization algorithms. For instance, in tomographic image reconstruction, a bottleneck is the cost of applying the forward linear operator and its adjoint. Consequently, it often happens that these operators are approximated numerically, so that the adjoint property is no longer fulfilled. In this talk, we focus on the stability properties of the proximal gradient algorithm when such an adjoint mismatch arises. By making use of tools from convex analysis and fixed point theory, we establish conditions under which the algorithm can still converge to a fixed point. We provide bounds on the error between this point and the solution to the minimization problem. We illustrate the applicability of our theoretical results through numerical examples in the context of computed tomography. This is a joint work with M. Savanier, J.C. Pesquet, C. Riddell, and Y. Trousset.”
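
As a minimal sketch (not the analysis presented in the talk), the iteration below shows where an adjoint mismatch enters a proximal gradient step for penalized least squares: a surrogate backprojector B is used in place of the true adjoint A.T. The soft-thresholding penalty, step size and operator names are illustrative assumptions.

```python
# Minimal sketch of a proximal gradient iteration with an adjoint mismatch:
# the surrogate backprojector B replaces the exact adjoint A.T. Illustrative
# only; penalty choice, step size and operators are assumptions.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def mismatched_proximal_gradient(A, B, y, lam, step, n_iter=200):
    """Minimise (1/2)||A x - y||^2 + lam*||x||_1, but with gradient
    B @ (A x - y) instead of A.T @ (A x - y)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = B @ (A @ x - y)                  # mismatched "adjoint" step
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
B = A.T + 0.01 * rng.standard_normal((100, 60))   # perturbed adjoint
y = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))
x_hat = mismatched_proximal_gradient(A, B, y, lam=0.1,
                                     step=1.0 / np.linalg.norm(A, 2) ** 2)
```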

 

Daniela di Serafino, Università di Napoli “Federico II”

Title: “Directional TGV-based image restoration under Poisson noise”

Abstract: “The problem of restoring noisy and blurry images where the texture mainly follows a single direction (directional images) arises in several applications, e.g., in microscopy and computed tomography for carbon or glass fibers. We focus on images corrupted by Poisson noise, whose restoration is modeled as the minimization of the generalized Kullback-Leibler divergence, and consider the Directional Total Generalized Variation (DTGV) regularization, with the aim of analyzing its behaviour in this case. We also propose a technique for the identification of the main texture direction, which improves upon the techniques described in [R.D. Kongskov and Y. Dong, LNCS 10302, 2017; R.D. Kongskov, Y. Dong and K. Knudsen, BIT 59, 2019], where DTGV is applied to images with impulse and Gaussian noise. We solve the resulting nonsmooth optimization problem by a suitable formulation of ADMM. Numerical experiments on both phantom and real images demonstrate the effectiveness of our approach. This is joint work with Germana Landi and Marco Viola.”
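
A schematic form of the kind of variational model described above, in illustrative notation (the exact formulation used in the talk may differ): H denotes the blurring operator, b a background term, g the observed image, λ the regularization parameter and θ the estimated main texture direction.

```latex
% Schematic variational model for Poisson-corrupted directional images
% (illustrative notation; the exact formulation in the talk may differ).
\min_{u \ge 0} \; D_{\mathrm{KL}}(Hu + b,\, g) \;+\; \lambda \, \mathrm{DTGV}^{2}_{\theta}(u),
\qquad
D_{\mathrm{KL}}(z, g) \;=\; \sum_{i} \Big( z_i - g_i + g_i \log \tfrac{g_i}{z_i} \Big).
```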

 

Marco Donatelli, Università dell'Insubria

Title: “Graph Laplacian l2 - lq regularization for image deblurring”

Abstract: “The use of the graph Laplacian as a regularization operator for image denoising has attracted a lot of attention in recent years. We show how it can be used for image deblurring with the classical l2 - lq minimization under a non-negativity constraint on the solution. Firstly, we describe how to construct the graph Laplacian from the observed noisy and blurred image. Once the graph Laplacian has been built, we efficiently solve the proposed minimization problem by splitting the convolution operator and the graph Laplacian via the alternating direction method of multipliers (ADMM). We propose automatic strategies that do not require the tuning of any parameter. Moreover, thanks to the projection onto properly constructed subspaces of fairly small dimension, the proposed algorithms can be used for solving large-scale problems. This also allows applying our method to computed tomography applications. Selected numerical examples show the good performance of the proposed algorithm. Joint work with Alessandro Buccini.”
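
As a minimal illustration of the first step (building a graph Laplacian from an image), the sketch below uses a standard intensity-similarity weighting over a small neighbourhood; this is only one common construction and not necessarily the one adopted in the talk.

```python
# Illustrative sketch: build a sparse graph Laplacian from an image, with edge
# weights based on intensity similarity in a small neighbourhood. This is only
# one common construction; the weighting used in the talk may differ.
import numpy as np
from scipy.sparse import lil_matrix, diags

def image_graph_laplacian(img, radius=1, sigma=0.1):
    H, W = img.shape
    n = H * W
    Wmat = lil_matrix((n, n))
    for i in range(H):
        for j in range(W):
            p = i * W + j
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    if di == 0 and dj == 0:
                        continue
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        q = ii * W + jj
                        w = np.exp(-((img[i, j] - img[ii, jj]) ** 2) / (2 * sigma ** 2))
                        Wmat[p, q] = w
    Wmat = Wmat.tocsr()
    degree = np.asarray(Wmat.sum(axis=1)).ravel()
    return diags(degree) - Wmat            # L = D - W

# Example on a small synthetic image:
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
L = image_graph_laplacian(img)             # (256, 256) sparse Laplacian
```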

 

Zdenek Dostal, Technical University of Ostrava

Title: “The solution of huge elliptic problems and shape optimization”

Abstract: TBA

 

Elena Loli Piccolomini, Università di Bologna

Title: “Variational and Neural Networks based approaches in inverse problems in imaging”

Abstract: “In the last 5-6 years, deep learning techniques have profitably changed the approach to the solution of inverse problems in imaging, alongside the more traditional variational techniques. In this talk I will consider a hybrid Plug-and-Play (PnP) approach, exploiting both optimization and neural networks, applied to the deblurring of medical images and to CT image reconstruction from projections. We propose to plug in a gradient-based denoiser prior combined with an external gradient-based regularization prior. By combining the advantages of both, it is possible to obtain very accurate results.”
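
A minimal sketch of the Plug-and-Play idea in a proximal-gradient-style loop, with a plain Gaussian filter standing in for a learned denoiser; the operators, parameters and toy problem are assumptions, not the scheme presented in the talk.

```python
# Illustrative Plug-and-Play sketch: a proximal-gradient-style loop where the
# proximal map is replaced by a denoiser. Here the denoiser is a plain Gaussian
# filter as a stand-in for a learned network; operators and parameters are
# assumptions, not the scheme presented in the talk.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_gradient(y, forward, adjoint, step=0.5, sigma_denoise=1.0, n_iter=50):
    """Least-squares data fit with an implicit denoiser prior."""
    x = adjoint(y)
    for _ in range(n_iter):
        grad = adjoint(forward(x) - y)                          # data-fidelity gradient
        x = gaussian_filter(x - step * grad, sigma_denoise)     # "denoiser" step
    return x

# Toy deblurring example: forward model = Gaussian blur (self-adjoint).
blur = lambda u: gaussian_filter(u, 2.0)
x_true = np.zeros((64, 64)); x_true[20:44, 20:44] = 1.0
y = blur(x_true) + 0.01 * np.random.default_rng(2).standard_normal((64, 64))
x_rec = pnp_gradient(y, blur, blur)
```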

 

Ignace Loris, Université Libre de Bruxelles

Title: “Non-Euclidean primal-dual proximal algorithms for large scale optimization in inverse problems”

Abstract: “Non-Euclidean versions of some primal-dual iterative optimization algorithms are presented. In these algorithms the proximal operator is based on Bregman divergences instead of Euclidean distances. Double-loop iterations are also proposed, which can be used for the minimization of a convex cost function consisting of a sum of several parts: a differentiable part, a proximable part, and the composition of a linear map with a proximable function. While the number of inner iterations is fixed in advance in these algorithms, convergence is guaranteed by virtue of an inner-loop warm-start strategy, showing that inner-loop “starting rules” can be just as effective as “stopping rules” for guaranteeing convergence. The algorithms are applicable to the numerical solution of convex optimization problems encountered in inverse problems, imaging and statistics; they reduce to the classical proximal gradient algorithm in certain special cases and also generalize other existing algorithms.”
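
For reference, the Bregman (non-Euclidean) proximal operator alluded to above can be written schematically as follows; φ is a strictly convex distance-generating function, and choosing φ(x) = ‖x‖²/2 recovers the usual Euclidean proximal operator.

```latex
% Bregman (non-Euclidean) proximal operator, written schematically:
% phi is a strictly convex distance-generating function and D_phi its
% Bregman divergence; phi(x) = ||x||^2/2 gives back the classical prox.
\operatorname{prox}^{\varphi}_{g}(x)
  \;=\; \arg\min_{u} \Big\{ \, g(u) + D_{\varphi}(u, x) \, \Big\},
\qquad
D_{\varphi}(u, x) \;=\; \varphi(u) - \varphi(x) - \langle \nabla \varphi(x),\, u - x \rangle .
```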

 

Serena Morigi, Università di Bologna

Title: “Regularization and neural attention for nonlinear Electrical Impedance Tomography inverse problems”

Abstract: “Neural networks are often augmented with an attention mechanism that mimics cognitive attention by telling the network where to focus within the input. We propose a neural attention model in which a sparsity-inducing regularization term is designed to augment the objective function and benefit from more structural prior knowledge. This calls for efficient algorithms enabling its use in a neural network trained with backpropagation. The potential of the new attention mechanism is evaluated on the inverse Electrical Impedance Tomography problem, which involves collecting electrical measurements on the boundary of a region in order to determine the spatially varying electrical conductivity distribution within it.”
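
A schematic sparsity-augmented training objective of the kind described above (illustrative notation only, not necessarily the exact formulation used in the talk): a_θ(x) denotes the attention weights produced by the network and R is a sparsity-inducing penalty, e.g. the ℓ1 norm.

```latex
% Schematic sparsity-augmented training objective (illustrative only):
% a_theta(x) are the attention weights and R is a sparsity-inducing penalty.
\min_{\theta} \;
\frac{1}{N} \sum_{i=1}^{N}
  \Big[ \, \ell\big( f_{\theta}(x_i), y_i \big)
        + \lambda \, R\big( a_{\theta}(x_i) \big) \, \Big],
\qquad
R(a) = \| a \|_1 .
```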

 

Benedetta Morini, Università di Firenze

Title: “Solving systems of nonlinear equations via spectral residual methods: stepsize selection and applications”

Abstract: "Spectral residual methods are derivative-free and low-cost per iteration procedures for solving systems of nonlinear equations. They are generally coupled with a nonmonotone linesearch strategy and compare well with Newton-based methods for large nonlinear systems and sequences of non-linear systems.  The residual vector is used as the search direction and the steplength is inspired by the Barzilai Borwein method. Analogously to spectral gradient methods for minimization, choosing the steplength has a crucial impact on the performance of the procedure. In this work we address, both theoretically and experimentally, the steplength selection and provide results on a real application such as a rolling contact problem. This is a joint work with Enrico Meli, Margherita Porcelli and Cristina Sgattoni.”

 

Peter Ochs, University of Tübingen

Title: “Towards Differentiation of Solution Mappings of Non-smooth Optimization Problems”

Abstract: “Gradient-based (hyper-)parameter optimization is crucial for large scale applications in Machine Learning and Computer Vision. Such algorithms enjoy several advantages, for example stability, simplicity, and efficiency. Their main building block is the computation of the derivative (gradient) of the objective function with respect to the parameters. However, in many practical situations, the effect of varying (and therefore differentiating) parameters is only implicitly given; e.g., in bilevel optimization the feedback from the parameters is subject to a full reconstruction process (the lower-level problem), such as an MRF or a variational model in Machine Learning or Computer Vision. In applications, these (lower-level) problems are often non-smooth and the dimensionality of the parameters and of the optimization variable is large, which requires iterative algorithms for their solution. This talk presents several approaches for computing derivatives with respect to the parameters in such difficult settings, where non-smooth features can still be handled by classical differentiation strategies.”
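
For orientation, the smooth baseline that such approaches generalise is differentiation of the solution map via the implicit function theorem (illustrative notation; the non-smooth setting addressed in the talk requires different tools).

```latex
% Baseline (smooth) case via the implicit function theorem: if
% x*(theta) = argmin_x E(x, theta) with E twice differentiable, then
% grad_x E(x*(theta), theta) = 0 implicitly defines x*(theta) and
\frac{\mathrm{d}\, x^{*}(\theta)}{\mathrm{d}\theta}
  \;=\; - \big( \nabla_{xx}^{2} E(x^{*}(\theta), \theta) \big)^{-1}
          \, \nabla_{x\theta}^{2} E(x^{*}(\theta), \theta) .
```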

 

Simone Rebegoldi, Università di Firenze

Title: “Stochastic trust-region method with adaptive sample sizes for finite-sum minimization problems”

Abstract: “In this talk, we present SIRTR (Stochastic Inexact Restoration Trust-Region method) for solving finite-sum minimization problems. At each iteration, SIRTR approximates both the function and the gradient by sampling. The function sample size is computed using a deterministic rule inspired by the inexact restoration method, whereas the gradient sample size can be smaller than the sample size employed in the function approximation. Notably, our approach may allow the sample sizes to decrease at some iterations. We show that SIRTR eventually reaches full precision in evaluating the objective function, and we provide a worst-case complexity result on the number of iterations required to achieve full precision. Numerical results on nonconvex binary classification problems confirm that SIRTR is able to provide accurate approximations well before the maximum sample size is reached, and without requiring a problem-dependent tuning of the parameters involved.”
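
As a minimal illustration of the sampling idea only (not the SIRTR method itself), the sketch below builds subsampled function and gradient estimates for a finite-sum objective, with possibly different sample sizes for the two; all names and the toy logistic loss are assumptions.

```python
# Illustrative sketch of subsampled function/gradient estimates for a
# finite-sum problem f(x) = (1/N) sum_i f_i(x). This only shows the sampling
# idea with possibly different sample sizes for function and gradient;
# it is NOT the SIRTR method presented in the talk.
import numpy as np

def subsampled_estimates(x, data, loss_i, grad_i, n_f, n_g, rng):
    """Return sampled estimates of f(x) and grad f(x) using n_f and n_g
    randomly drawn terms, respectively (n_g may be smaller than n_f)."""
    N = len(data)
    idx_f = rng.choice(N, size=min(n_f, N), replace=False)
    idx_g = rng.choice(N, size=min(n_g, N), replace=False)
    f_est = np.mean([loss_i(x, data[i]) for i in idx_f])
    g_est = np.mean([grad_i(x, data[i]) for i in idx_g], axis=0)
    return f_est, g_est

# Toy logistic-loss example (binary labels in {-1, +1}):
rng = np.random.default_rng(3)
data = [(rng.standard_normal(5), rng.choice([-1.0, 1.0])) for _ in range(1000)]
loss_i = lambda x, d: np.log1p(np.exp(-d[1] * (d[0] @ x)))
grad_i = lambda x, d: -d[1] * d[0] / (1.0 + np.exp(d[1] * (d[0] @ x)))
f_hat, g_hat = subsampled_estimates(np.zeros(5), data, loss_i, grad_i,
                                    n_f=200, n_g=50, rng=rng)
```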

 

Samuli Siltanen, University of Helsinki

Title: “Two materials, two energies: regularising X-ray tomography with an inner product penalty”

Abstract: “In classical tomography one measures the attenuation of X-rays as they travel through a physical object. After a couple of processing steps involving a logarithm, such data can be seen as a collection of line integrals of a non-negative function called the X-ray attenuation coefficient. Recovering that function from its line integrals is an ill-posed inverse problem. This talk focuses on dual-energy X-ray tomography, where attenuation is measured using two different wavelengths. The energy dependency of the attenuation coefficient is different for different materials, enabling a more detailed reconstruction. A novel method is introduced for decomposing an object into two materials. Assume that the materials are not mixed (at a given location inside the object there is only one material) but can be intertwined in a complicated way. Then the inner product between the two characteristic functions of the material domains vanishes. Using that inner product as a penalty term in variational regularisation enables the use of efficient interior point methods for minimising the resulting regularised functional. It is shown that the proposed approach outperforms the baseline method (Joint Total Variation regularization).”
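
A schematic two-material, two-energy formulation with the inner-product penalty described above (illustrative notation; the exact functional and regulariser used in the talk may differ): x1 and x2 are the non-negative material components, A the X-ray transform, c_ij known material/energy attenuation coefficients and R an additional regulariser.

```latex
% Schematic dual-energy model with an inner-product penalty (illustrative
% notation; the exact functional in the talk may differ). The key property is
% that <x1, x2> = 0 when the non-negative material domains do not overlap.
\min_{x_1, x_2 \ge 0} \;
  \tfrac12 \| c_{11} A x_1 + c_{12} A x_2 - y_{\mathrm{low}} \|_2^2
+ \tfrac12 \| c_{21} A x_1 + c_{22} A x_2 - y_{\mathrm{high}} \|_2^2
+ \alpha \, \mathcal{R}(x_1, x_2)
+ \beta \, \langle x_1, x_2 \rangle .
```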

 

Alessandro Verri, Università di Genova

Title: “Going deep into shallowness (a somewhat heretical view)”

Abstract: “I discuss the type of research we pursue at the newly established Machine Learning Genoa Center. While fully aware of the great potential of the machine learning techniques developed over the last decades, I will argue that the elusive quest for Artificial Intelligence is still on, and illustrate our stubborn approach to finding solutions to open problems through a few examples.”