
Workshop Programme

for period 10 - 14 February 2014

Inverse Problems - follow-up meeting

10 - 14 February 2014

Timetable

Monday 10 February
08:30-09:35 Registration
09:35-09:45 Welcome from John Toland (INI Director)
09:45-10:30 Lassas, M (University of Helsinki)
  Seeing Through Space Time Sem 1
 

We consider inverse problems for the Einstein equation with a time-dependent metric on a 4-dimensional globally hyperbolic Lorentzian manifold. We formulate the concept of active measurements for relativistic models by coupling the Einstein equations with equations for scalar fields.

The inverse problem we study is whether observations of the solutions of the coupled system in an open subset of space-time, with sources supported in an open set, determine the properties of the metric in a larger domain. To study this problem we define the concept of light observation sets and show that these sets determine the conformal class of the metric. This corresponds to passive observations from a distant region of space filled with light sources.

This is joint work with Y. Kurylev and G. Uhlmann.

 
10:30-11:00 Morning Coffee
11:00-11:45 Tanner, J (University of Oxford)
  Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion Sem 1
 

Co-authors: Jeffrey D. Blanchard (Grinnell College), Ke Wei (University of Oxford)

Compressed sensing and matrix completion are techniques by which simplicity in data can be exploited for more efficient data acquisition. For instance, if a matrix is known to be (approximately) low rank then it can be recovered from few of its entries. The design and analysis of computationally efficient algorithms for these problems has been extensively studied over the last 8 years. In this talk we present a new algorithm that balances low per-iteration complexity with fast asymptotic convergence. This algorithm has been shown to have faster recovery time than any other known algorithm in the area, both for small-scale problems and for massively parallel GPU implementations. The new algorithm adapts the classical nonlinear conjugate gradient algorithm and shows the efficacy of a linear algebra perspective on compressed sensing and matrix completion.
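
As a rough illustration of the algorithm family the talk builds on, the sketch below implements plain normalized iterative hard thresholding in Python; CGIHT replaces the steepest-descent direction with conjugate-gradient directions restricted to the current support. The operator sizes, step rule and demo problem are illustrative assumptions, not the speakers' implementation.

    import numpy as np

    def niht(A, b, k, iters=200):
        # Normalized iterative hard thresholding: gradient step + keep the k
        # largest entries. CGIHT (the talk's algorithm) replaces this
        # steepest-descent direction with conjugate-gradient directions
        # restricted to the current support.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = A.T @ (b - A @ x)                    # negative gradient of 0.5*||Ax-b||^2
            S = np.argsort(np.abs(x + g))[-k:]       # proxy for the active support
            Ag = A[:, S] @ g[S]
            mu = (g[S] @ g[S]) / (Ag @ Ag + 1e-12)   # exact line search on the support
            x = x + mu * g
            x[np.argsort(np.abs(x))[:-k]] = 0.0      # hard threshold to k terms
        return x

    # Demo: recover a 5-sparse vector from 60 Gaussian measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 200)) / np.sqrt(60)
    x0 = np.zeros(200); x0[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
    print(np.linalg.norm(niht(A, A @ x0, 5) - x0))   # recovery error, near zero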

 
11:45-12:30 Arridge, S (University College London)
  Quantitative PhotoAcoustics Using the Transport Equation Sem 1
 

Quantitative photoacoustic tomography involves the reconstruction of a photoacoustic image from surface measurements of photoacoustic wave pulses, followed by the recovery of the optical properties of the imaged region. The latter is, in general, a nonlinear, ill-posed inverse problem, for which model-based inversion techniques have been proposed. Here, the full radiative transfer equation is used to model the light propagation, and the acoustic propagation and image reconstruction are solved using a pseudo-spectral time-domain method. Direct inversion schemes are impractical when dealing with real, three-dimensional images. In this talk an adjoint field method is used to efficiently calculate the gradient in a gradient-based optimisation technique for the simultaneous recovery of absorption and scattering coefficients.

Joint work with B. Cox, T. Saratoon, T. Tarvainen.
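
The adjoint trick mentioned above is generic: one forward solve plus one adjoint solve yields the full gradient of the data misfit, at a cost independent of the number of unknowns. A minimal sketch, with a toy diagonal system standing in for the radiative transfer forward model (all quantities below are invented for illustration):

    import numpy as np

    # Toy forward model: A(m) u = q with A(m) = diag(m), so u(m) = q / m.
    # Misfit J(m) = 0.5 * ||u(m) - d||^2. The adjoint method delivers the
    # full gradient from one forward and one adjoint solve, whatever dim(m).
    rng = np.random.default_rng(1)
    q, d = rng.uniform(1, 2, 50), rng.uniform(0, 1, 50)
    m = rng.uniform(1, 2, 50)

    u = q / m                       # forward solve
    lam = -(u - d) / m              # adjoint solve: A(m)^T lam = -(u - d)
    grad = lam * u                  # grad_i = lam^T d(A(m)u)/dm_i = lam_i * u_i

    # Finite-difference check of one component.
    J = lambda m_: 0.5 * np.sum((q / m_ - d) ** 2)
    eps, i = 1e-6, 7
    mp = m.copy(); mp[i] += eps
    print(grad[i], (J(mp) - J(m)) / eps)   # the two values should agree closely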

 
12:30-13:30 Lunch at Wolfson Court
13:30-14:15 Lassas, M (University of Helsinki)
  Reconstruction of the wave speed in a geophysical inverse problem Sem 1
 

We analyze the inverse problem, originally formulated by Dix in geophysics, of reconstructing the wave speed inside a domain from boundary measurements associated with the single scattering of seismic waves. We consider a domain $M$ with a varying and possibly anisotropic wave speed which we model as a Riemannian metric $g$. For our data, we assume that $M$ contains a dense set of point scatterers and that in a subset $U\subset M$, modeling the domain that contains the measurement devices (e.g., on the Earth's surface in seismic measurements), we can produce sources and measure the wave fronts of the single scattered waves diffracted from the point scatterers. The inverse problem we study is to recover the metric $g$ in $M$ up to a change of coordinates. To do this we show that the shape operators related to wave fronts produced by the point scatterers within $M$ satisfy a certain system of differential equations which may be solved along geodesics of the metric. In this way, assuming we know $g$ as well as the shape operator of the wave fronts in the region $U$, we may recover $g$ in certain coordinate systems (i.e. Riemannian normal coordinates centered at point scatterers).

The reconstruction of the Riemannian metric reduces to the problem of determining unknown coefficient functions in a system of Riccati equations that the shape operators satisfy. This generalizes the well-known geophysical method of Dix to metrics which may depend on all spatial variables and be anisotropic. In particular, the novelty of this solution lies in the fact that it can be used to reconstruct the metric even in the presence of caustics.

These results were obtained in collaboration with Maarten de Hoop, Sean Holman, Einar Iversen, and Bjorn Ursin.
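
A scalar caricature of the shape-operator system is the Riccati equation $S' + S^2 + K(t) = 0$ along a geodesic. The sketch below, with an invented curvature term standing in for the paper's coefficient functions, shows how such an equation is propagated numerically from the measurement region:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Scalar model of the shape-operator equation S' + S^2 + K(t) = 0 along
    # a geodesic: given the curvature term K, the wavefront shape operator S
    # is propagated from the measurement region; conversely, measured S
    # constrains K = -S' - S^2. K below is an invented placeholder.
    K = lambda t: 0.5 * np.cos(t)
    sol = solve_ivp(lambda t, S: -S**2 - K(t), (0.0, 5.0), [1.0],
                    dense_output=True, rtol=1e-8)
    print(sol.sol(np.linspace(0.0, 5.0, 6))[0])   # S along the geodesic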

 
14:15-15:00 Scherzer, O (Universität Wien)
  Mathematical Modeling of Optical Coherence Tomography Sem 1
 

Co-authors: Peter Elbau (University of Vienna), Leonidas Mindrinos (University of Vienna)

In this talk we present mathematical methods to formulate Optical Coherence Tomography (OCT) on the basis of electromagnetic theory. OCT produces high-resolution images of the inner structure of biological tissues. Images are obtained by measuring the time delay and the intensity of backscattered or back-reflected light from the sample, taking into account also the coherence properties of light. A general mathematical problem for OCT is presented, treating the sample field as a solution of Maxwell's equations. Moreover, we present some imaging formulas.

 
15:00-15:30 Afternoon Tea
15:30-16:15 Siltanen, S (University of Helsinki)
  A Data-Driven Edge-Preserving D-bar Method for Electrical Impedance Tomography Sem 1
 

Co-authors: Sarah Hamilton (University of Helsinki), Andreas Hauptmann (University of Helsinki)

Electrical Impedance Tomography (EIT) is a non-invasive, inexpensive, and portable imaging modality where an unknown physical body is probed with electric currents fed through electrodes positioned on the surface of the body. The resulting voltages at the electrodes are measured, and the goal is to recover the internal electric conductivity of the body from the current-to-voltage boundary measurements. The reconstruction task is a highly ill-posed nonlinear inverse problem, which is very sensitive to noise and requires the use of regularized solution methods. EIT images typically have low spatial resolution due to smoothing caused by regularization. A new edge-preserving EIT algorithm is proposed, based on applying a deblurring flow stopped at minimal data discrepancy. The method makes heavy use of a novel data fidelity term based on the so-called CGO sinogram. This nonlinear data preprocessing step provides superior robustness over traditional EIT data formats such as the current-to-voltage matrix or the Dirichlet-to-Neumann operator.

Related Links: http://arxiv.org/abs/1312.5523 - Arxiv preprint

 
16:15-17:00 Cox, B (University College London)
  Photoacoustic tomography: progress and open problems Sem 1
 

Photoacoustic tomography (PAT) is an emerging biomedical imaging modality which exploits the photoacoustic effect, whereby light absorption gives rise to ultrasound waves. It is already being used in a number of applications, such as preclinical and breast imaging, and for cancer and drug research. There are two inverse problems in PAT: an acoustic inversion and a diffuse optical inversion, which can be decoupled because of the differences in the timescale of acoustic and optical propagation. A great deal of work has been done on the former, and progress has been made on the latter in recent years. However, there remain several open image reconstruction problems of considerable practical importance, both acoustic and optical. This talk will give an overview of PAT, describe the various experimental systems available for making PAT measurements, highlight the progress made to date, and introduce some remaining unsolved inverse problems of interest.

 
17:00-18:00 Welcome Wine Reception
Tuesday 11 February
09:00-09:45 Schotland, J (University of Michigan)
  Topological reduction of the inverse Born series Sem 1
 

I will discuss a fast direct method to solve the inverse scattering problem for diffuse waves. Applications to optical tomography will be described.

 
09:45-10:30 Calvetti, D (Case Western Reserve University)
  Sequential Monte Carlo and particle methods in inverse problems Sem 1
 

Co-authors: Andrea Arnold (CWRU), Erkki Somersalo (CWRU)

In sequential Monte Carlo methods, the posterior distribution of an unknown of interest is explored in a sequential manner, by updating the Monte Carlo sample as new data arrive. In a similar fashion, particle filtering encompasses different sampling techniques to track the time course of a probability density that evolves in time based on partial observations of it. Methods that combine particle filters and sequential Monte Carlo have been developed for some time, mostly in connection with estimating unknown parameters in stochastic differential equations. In this talk, we present some new ideas suitable for treating large scale, non-stochastic, severely stiff systems of differential equations combining sequential Monte Carlo methods with classical numerical analysis concepts.
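
To fix the vocabulary (propagate, weight, resample) that the talk builds on, here is a minimal bootstrap particle filter for an invented scalar state-space model; the dynamics and noise levels are illustrative only:

    import numpy as np
    rng = np.random.default_rng(2)

    # Bootstrap particle filter for x_t = 0.9 x_{t-1} + process noise,
    # y_t = x_t + observation noise. Invented toy model for illustration.
    T, N = 50, 1000
    x_true = np.zeros(T); y = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t-1] + 0.3 * rng.standard_normal()
        y[t] = x_true[t] + 0.2 * rng.standard_normal()

    particles = rng.standard_normal(N)
    est = np.zeros(T)
    for t in range(1, T):
        particles = 0.9 * particles + 0.3 * rng.standard_normal(N)   # propagate
        w = np.exp(-0.5 * ((y[t] - particles) / 0.2) ** 2)           # weight by likelihood
        w /= w.sum()
        particles = rng.choice(particles, size=N, p=w)               # resample
        est[t] = particles.mean()                                    # posterior mean estimate

    print(np.sqrt(np.mean((est - x_true) ** 2)))   # tracking error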

 
10:30-11:00 Morning Coffee
11:00-11:45 Somersalo, E (Case Western Reserve University)
  Bayesian preconditioning for truncated Krylov subspace regularization with an application to Magnetoencephalography (MEG) Sem 1
 

Co-authors: Daniela Calvetti (Case Western Reserve University), Laura Homa (Case Western Reserve University)

We consider the computational problem arising in magnetoencephalography (MEG), where the goal is to estimate the electric activity within the brain non-invasively from extra-cranial measurements of the magnetic field components. The problem is severely ill-posed due to the intrinsic non-uniqueness of the solution, and suffers further from a weak data signal, high dimensionality, and the complexity of the noise, part of which is generated by the brain itself. We propose a new algorithm based on the truncated conjugate gradient least squares (CGLS) algorithm with statistically inspired left and right preconditioners. We demonstrate that by carefully accounting for the spatiotemporal statistical structure of the brain noise, and by adopting a suitable prior within the Bayesian framework, we can design a robust and efficient method for the numerical solution of the MEG inverse problem, improving the spatial and temporal resolution of events of short duration.
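
The core computational pattern can be sketched generically: substitute $x = Lw$, where $L$ is a factor of the prior, run truncated CGLS on the priorconditioned system $ALw = b$ (the iteration count acts as the regularisation parameter), and map back. In the sketch below a small dense matrix and a random-walk prior stand in for the MEG lead field and the speakers' statistically derived preconditioners:

    import numpy as np

    def cgls(A, b, iters):
        # Truncated CGLS for min ||A w - b||; early stopping regularises.
        w = np.zeros(A.shape[1])
        r = b.copy()
        s = A.T @ r; p = s.copy(); gamma = s @ s
        for _ in range(iters):
            q = A @ p
            alpha = gamma / (q @ q)
            w += alpha * p; r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return w

    rng = np.random.default_rng(3)
    A = rng.standard_normal((40, 100))          # stand-in for the MEG lead field
    L = np.tril(np.ones((100, 100)))            # prior factor: x = L w is a random walk
    x_true = L @ (0.1 * rng.standard_normal(100))
    b = A @ x_true + 0.01 * rng.standard_normal(40)

    w = cgls(A @ L, b, iters=15)                # priorconditioned, truncated iteration
    x = L @ w                                   # map back to the physical variable
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))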

 
12:30-13:30 Lunch at INI
Session: Open for Business Afternoon
13:30-14:15 Siltanen, S (University of Helsinki)
  Four-dimensional X-ray tomography Sem 1
 

In recent years, mathematical methods have enabled three-dimensional medical X-ray imaging using a much lower radiation dose than before. One example of products based on this approach is the 3D dental X-ray imaging device called VT, manufactured by Palodex Group. The idea is to collect fewer projection images than traditional computerized tomography machines and then use advanced mathematics to reconstruct the tissue from such incomplete data. The idea can be taken further by placing several X-ray source-detector pairs "filming" the patient from many directions at the same time. This allows, in principle, recovery of the three-dimensional inner structure as a function of time. There are many potential commercial applications of such a novel imaging modality: cardiac imaging, angiography, small-animal imaging and nondestructive testing. However, new regularized inversion methods are needed for imaging based on this special type of data. A novel level-set type method is introduced for that purpose, enforcing continuity in space-time in a robust and reliable way. Tentative computational results are shown, based on both simulated and measured data. The results suggest that the new imaging modality is promising for practical applications.

 
14:15-15:00 Horesh, L (IBM Research)
  Optimal Design in Large-Scale Inversion - From Compressive to Comprehensive Sensing Sem 1
 

Co-authors: Eldad Haber (UBC), Luis Tenorio (CSM)

In the quest to improve the inversion fidelity of large-scale problems, great consideration has been devoted to the effective solution of ill-posed problems under various regularization configurations. Nevertheless, complementary issues, such as the determination of optimal configurations for data acquisition, or more generally of any other controllable parameters of the apparatus and process, have frequently been overlooked. While optimal design for well-posed problems has been extensively studied in the past, little consideration has been given to its ill-posed counterpart. This stands in stark contrast to the fact that a broad range of real-life problems are of this nature. In this talk, some of the intrinsic difficulties associated with design for ill-posed inverse problems will be described; a coherent formulation to address these challenges will then be laid out; and finally the importance of design for various inversion problems will be demonstrated.

Related Links: http://ocrdesign.wix.com/home - Design in Inversion - Open Collaboration Research

http://users.ices.utexas.edu/~omar/santafe2013/slides/Horesh.ppsx - Optimal Design for Large-Scale Ill-Posed Problems - Slide deck

 
15:00-15:30 Afternoon Tea
15:30-16:15 Ronchi, E (Tracerco)
  Tracerco Discovery. The world's first gamma ray subsea CT scanner for pipeline integrity and flow assurance Sem 1
 

Tracerco Discovery is the first instrument in the world capable of performing tomographic reconstruction of subsea pipelines online and down to 3000 m depth. It combines advanced nuclear physics and mathematics with state-of-the-art engineering to yield one of the most advanced instruments available today for diagnosing pipeline walls and contents within the oil and gas industry.

The talk will provide an overview of the Discovery technology and its implementation into current and upcoming instruments. Both simulated and experimental results obtained during commissioning and subsea trials will be presented, highlighting some of the technical and scientific challenges encountered and overcome during the design and the production of the instrument.

 
16:15-17:00 Open discussion
17:00-18:00 Wine reception
Wednesday 12 February
09:00-09:45 Hansen, A (University of Cambridge)
  Compressed sensing in the real world - The need for a new theory Sem 1
 

Compressed sensing rests on three pillars: sparsity, incoherence and uniform random subsampling. In addition, the concepts of uniform recovery and the Restricted Isometry Property (RIP) have had a great impact. Intriguingly, in an overwhelming number of inverse problems where compressed sensing is used or can be used (such as MRI, X-ray tomography, electron microscopy, reflection seismology, etc.) these pillars are absent. Moreover, easy numerical tests reveal that with the successful sampling strategies used in practice one observes neither uniform recovery nor the RIP. In particular, none of the existing theory can explain the success of compressed sensing in a vast area where it is used.

In this talk we will demonstrate how real-world problems are not sparse, yet asymptotically sparse, coherent, yet asymptotically incoherent, and moreover, that uniform random subsampling yields highly suboptimal results. In addition, we will present easy arguments explaining why uniform recovery and the RIP are not observed in practice. Finally, we will introduce a new theory that aligns with the actual implementation of compressed sensing used in applications. This theory is based on asymptotic sparsity, asymptotic incoherence and random sampling with different densities. It supports two intriguing phenomena observed in practice: 1. the success of compressed sensing is resolution dependent; 2. the optimal sampling strategy is signal-structure dependent. The last point opens up a whole new area of research, namely the quest for optimal sampling strategies.
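
The practical upshot, sampling low frequencies densely and letting the density decay with frequency, can be sketched in a few lines; the power-law density below is an illustrative choice, not the speaker's prescription:

    import numpy as np
    rng = np.random.default_rng(4)

    # Variable-density subsampling of 1D Fourier coefficients: the sampling
    # probability decays with frequency, so the coherent low frequencies are
    # kept almost surely while high frequencies are thinned out.
    n, m = 512, 128                        # signal length, measurement budget
    freq = np.fft.fftfreq(n) * n
    density = 1.0 / (1.0 + np.abs(freq))   # illustrative power-law density
    density *= m / density.sum()           # expected sample count ~ m
    mask = rng.random(n) < np.clip(density, 0.0, 1.0)
    print(mask.sum(), mask[:8])            # lowest frequencies sampled w.p. ~1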

 
09:45-10:30 Oksanen, L (University College London)
  Hyperbolic inverse problems and exact controllability Sem 1
 

We will discuss our recent stability results on the hyperbolic inverse boundary value problem and also on the hyperbolic inverse initial source problem. The latter problem arises as a part of the photoacoustic tomography problem. The control theoretic concept of exact controllability plays an important role in the results.

 
10:30-11:00 Morning Coffee
11:00-11:45 Darbon, J (University of California, Los Angeles)
  On Convex Finite-Dimensional Variational Methods in Imaging Sciences, and Hamilton-Jacobi Equations Sem 1
 

We consider standard finite-dimensional variational models used in signal/image processing that consist in minimizing an energy involving a data fidelity term and a regularization term. We present new theoretical observations which give a precise description of how the solutions of the optimization problem depend on the amount of smoothing and on the data itself. The dependence of the minimal values of the energy is shown to be governed by Hamilton-Jacobi equations, while the minimizers $u(x,t)$ for the observed images $x$ and smoothing parameters $t$ are given by $u(x,t) = x - t \nabla H(\nabla_x E(x,t))$, where $E(x,t)$ is the minimal value of the energy and $H$ is a Hamiltonian related to the data fidelity term. Various vanishing-smoothing-parameter results are derived, illustrating the role played by the prior in such limits.
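
For the quadratic data fidelity, $E(x,t)$ is the Moreau envelope of the regulariser and the formula reduces to $u(x,t) = x - t\nabla_x E(x,t)$. A one-dimensional numeric check with the regulariser $J(u)=|u|$, whose minimiser is soft thresholding:

    import numpy as np

    # With quadratic fidelity, E(x,t) = min_u ( (x-u)^2/(2t) + |u| ) is the
    # Moreau envelope of |.|; the minimiser is soft thresholding, and the
    # formula above with H(p) = p^2/2 reads u(x,t) = x - t * dE/dx.
    t = 0.7
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    E = lambda x: np.where(np.abs(x) <= t, x**2 / (2 * t), np.abs(x) - t / 2)

    x = np.linspace(-3, 3, 601)
    dEdx = np.gradient(E(x), x)                       # numerical gradient of the value function
    print(np.max(np.abs(soft(x) - (x - t * dEdx))))   # ~0 up to discretisation error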

 
11:45-12:30 Haber, E (University of British Columbia)
  On Large Scale Inverse Problems that Cannot be solved Sem 1
 

In recent years data collection systems have improved and we are now able to collect large volumes of data over vast regions in space. This leads to large-scale inverse problems that involve multiple scales and many data. To invert these data sets, we must rethink our numerical treatment of the problems, from the discretization to the optimization technique used and the efficient parallelization of these problems. In this talk we introduce a new multiscale asynchronous method for the treatment of such data and apply it to airborne electromagnetic data.

 
12:30-13:30 Lunch at Wolfson Court
13:30-14:15 Wu, H (Stanford University)
  Alternating Projection, Ptychographic Imaging and connection graph Laplacian Sem 1
 

Co-authors: Yu-Chao Tu (Mathematics, Princeton University), Stefano Marchesini (Lawrence Berkeley Lab)

In this talk, we demonstrate the global convergence of the alternating projection (AP) algorithm in ptychographic imaging to a unique solution, up to a global phase factor. Additionally, we survey the intimate relationship between the AP algorithm and the notion of "phase synchronization". Based on this relationship, the recently developed connection graph Laplacian technique is applied to quickly construct an accurate initial guess and to accelerate convergence for large-scale diffraction data problems.
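
A minimal sketch of alternating projection in the simpler, closely related setting of support-constrained Fourier phase retrieval (Gerchberg-Saxton flavour); ptychography adds overlapping illumination windows, and the toy sizes below are assumptions:

    import numpy as np
    rng = np.random.default_rng(5)

    # Alternating projection for toy phase retrieval: recover a support-
    # limited real signal from its Fourier magnitudes by alternating the
    # magnitude projection and the support projection.
    n, s = 128, 16
    x_true = np.zeros(n); x_true[:s] = rng.standard_normal(s)
    mag = np.abs(np.fft.fft(x_true))                  # measured magnitudes

    x = rng.standard_normal(n)                        # random initial guess
    for _ in range(500):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))            # enforce measured magnitudes
        x = np.fft.ifft(X).real
        x[s:] = 0.0                                   # enforce known support
    print(np.linalg.norm(np.abs(np.fft.fft(x)) - mag))  # residual data misfit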

 
Thursday 13 February
09:00-09:45 Rondi, L (Università degli Studi di Trieste)
  On stability for the direct scattering problem and its applications to cloaking Sem 1
 

We consider the direct acoustic scattering problem with sound-hard scatterers. We discuss the stability of the solutions with respect to variations of the scatterer. The main tool we use for this purpose is convergence in the sense of Mosco. As a consequence we obtain uniform decay estimates for scattered fields for a large class of admissible sound-hard scatterers. As a particular case, we show how a sound-hard screen may be approximated by thin sound-hard obstacles. This is joint work with Giorgio Menegatti.

We show that a sound-hard screen may also be approximated by using a thin lossy layer. This is a crucial step, together with transformation optics, for the construction of approximate full and partial cloaking by inserting a lossy layer between the region to be cloaked and the observer. This is joint work with Jingzhi Li, Hongyu Liu and Gunther Uhlmann.

 
09:45-10:30 Reyes, J (Cardiff University)
  Conditional stability of the Calderón problem for less regular conductivities Sem 1
 

Co-authors: Pedro Caro (University of Helsinki), Andoni García (University of Jyväskylä)

A recent log-type conditional stability result with Hölder norm for the Calderón problem will be presented, assuming continuously differentiable conductivities with Hölder continuous first-order derivatives in a Lipschitz domain of Euclidean space of dimension greater than or equal to three.

This is joint work with Pedro Caro (University of Helsinki) and Andoni García (University of Jyväskylä). We follow the idea of decay in average used by B. Haberman and D. Tataru to prove uniqueness for either continuously differentiable conductivities or Lipschitz conductivities whose logarithm has small gradient, in a Lipschitz domain of $\mathbb{R}^n$ with $n\geq 3$.

 
10:30-11:00 Morning Coffee
11:00-11:45 Lesnic, D (University of Leeds)
  Determination of an additive source in the heat equation Sem 1
 

Co-authors: Dinh Nho Hao (Hanoi Institute of Mathematics, Vietnam), Areena Hazanee (University of Leeds, UK), Mikola Ivanchov (Ivan Franko National University of Lviv, Ukraine), Phan Xuan Thanh (Hanoi University of Science and Technology, Vietnam)

Water contaminants arising from distributed or non-point sources deliver pollutants indirectly through environmental changes; for example, a fertilizer is carried into a river by rain, which in turn affects aquatic life. In this inverse problem of water pollution, an unknown source in the governing equation needs to be determined from measurements of the concentration or other projections of the dependent variable of the model. A similar inverse problem arises in heat transfer.

Inverse source problems for the heat equation, especially in the one-dimensional transient case, have received considerable attention in recent years. In most previous studies, in order to ensure a unique solution, the unknown heat source was assumed to depend on only one of the independent variables (space or time), or on the dependent variable (concentration/temperature). It is the purpose of our analysis to investigate an extended case in which the unknown source depends on both space and time but is additively separated into two unknown coefficient source functions, one dependent on space and the other on time. The additional overspecified conditions can be a couple of local or nonlocal measurements of the concentration/temperature in space or time.

The unique solvability of this linear inverse problem in classical Hölder spaces is proved; however, the problem is still ill-posed since small errors in the input data cause large errors in the output source. In order to obtain a stable reconstruction, Tikhonov regularization or the iterative conjugate gradient method is employed. Numerical results will be presented and discussed.
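
The regularisation step in its generic discrete form: the linear inverse problem becomes $Ax \approx b$ and the Tikhonov solution minimises $\|Ax-b\|^2 + \alpha^2\|x\|^2$. A minimal sketch, with a smoothing toy kernel standing in for the heat-equation forward map:

    import numpy as np

    # Tikhonov regularisation for a discretised linear source problem
    # A x ~ b: minimise ||A x - b||^2 + alpha^2 ||x||^2 via the stacked
    # least-squares system [A; alpha*I] x = [b; 0]. A toy smoothing kernel
    # stands in for the heat-equation forward operator.
    n = 100
    grid = np.linspace(0, 1, n)
    A = np.exp(-80.0 * (grid[:, None] - grid[None, :]) ** 2) / n
    x_true = np.sin(2 * np.pi * grid)
    rng = np.random.default_rng(6)
    b = A @ x_true + 1e-4 * rng.standard_normal(n)

    for alpha in (1e-6, 1e-4, 1e-2):
        Astack = np.vstack([A, alpha * np.eye(n)])
        bstack = np.concatenate([b, np.zeros(n)])
        x = np.linalg.lstsq(Astack, bstack, rcond=None)[0]
        print(alpha, np.linalg.norm(x - x_true))   # too small: noisy; too large: oversmoothed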

 
11:45-12:30 Belyaev, A (Heriot-Watt University)
  On Implicit Image Differentiation and Filtering Sem 1
 

The main goal of this talk is to demonstrate advantages of using compact (implicit) finite differencing, filtering, and interpolating schemes for image processing applications.

Finite difference schemes can be categorized as "explicit" and "implicit". Explicit schemes express the nodal derivatives as a weighted sum of the function nodal values. For example, $f'_i = (f_{i+1} - f_{i-1})/(2h)$ is an explicit finite difference approximation of the first-order derivative. By comparison, compact (implicit) finite difference schemes equate a weighted sum of nodal derivatives to a weighted sum of the function nodal values. For instance, $f'_{i-1} + 4f'_i + f'_{i+1} = 3(f_{i+1} - f_{i-1})/h$ is an implicit (compact) scheme. Some implicit schemes correspond to Padé approximations and produce significantly more accurate approximations at small scales than explicit schemes of the same stencil width. Other implicit schemes are designed to deliver accurate approximations of function derivatives over a wide range of spatial scales. Compact (implicit) finite difference schemes, as well as implicit filtering and interpolating schemes, constitute advanced but standard tools for accurate numerical simulations of problems involving linear and nonlinear wave propagation phenomena.
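
To make the comparison concrete, the sketch below applies both schemes on a periodic grid; the implicit scheme costs one circulant (in general, tridiagonal) solve but is markedly more accurate at the same stencil width:

    import numpy as np
    from scipy.linalg import solve_circulant

    # Periodic grid: compare the explicit central difference with the compact
    # (Pade) scheme f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h.
    n = 64
    h = 2 * np.pi / n
    xg = np.arange(n) * h
    f, df_exact = np.sin(3 * xg), 3 * np.cos(3 * xg)

    df_explicit = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)   # O(h^2)

    c = np.zeros(n); c[0], c[1], c[-1] = 4.0, 1.0, 1.0         # circulant scheme matrix
    rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / h
    df_implicit = solve_circulant(c, rhs)                      # O(h^4), one linear solve

    print(np.max(np.abs(df_explicit - df_exact)))   # explicit error
    print(np.max(np.abs(df_implicit - df_exact)))   # compact error, far smaller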

In this talk, I show how Fourier-Padé-Galerkin approximations can be adapted for designing high-quality implicit finite difference schemes, establish a link between implicit schemes and standard explicit finite differences used for image gradient estimation, and demonstrate the usefulness of implicit differencing and filtering schemes for various image processing tasks including image deblurring, feature detection, and sharpening.

Some of the results to be presented in this talk can be found in my recent paper: A. Belyaev, "Implicit image differentiation and filtering with applications to image sharpening," SIAM Journal on Imaging Sciences, 6(1):660-679, 2013.

Related Links: http://epubs.siam.org/doi/abs/10.1137/12087092X - link to the paper mentioned in the abstract

 
12:30-13:30 Lunch at Wolfson Court
13:30-14:15 Fokas, T (University of Cambridge)
  Analytical Methods for certain Medical Imaging Techniques Sem 1
 

One of the most important recent developments in the field of medical imaging has been the elucidation of analytical, as opposed to statistical, techniques. In this talk, analytical techniques for Positron Emission Tomography (PET), Single Photon Emission Computerised Tomography (SPECT), Magnetoencephalography (MEG) and Electroencephalography (EEG) will be reviewed. Numerical implementations using real data will also be presented.

 
19:30-22:00 Conference Dinner at Trinity College
Friday 14 February
09:00-09:45 Chauris, H (Mines Paris Tech)
  Towards a more robust automatic velocity analysis method Sem 1
 

Co-author: C.-A. Lameloise (MINES ParisTech)

In the context of seismic imaging, we analyse artefacts related to a classical objective functional, the "Differential Semblance Optimization" (DSO) approach. This functional has been designed to automatically retrieve a velocity model needed to image complex structures with seismic waves. In practice, it may fail due to the presence of a number of artefacts.

We propose two complementary approaches: first, we give evidence that a quantitative migration scheme is useful to compensate for uneven subsurface illumination. Second, we propose to slightly modify the objective function such that its gradient does not exhibit spurious oscillations for models containing interfaces or discontinuities.

 
09:45-10:30 Burenkov, V (Cardiff University)
  Adaptive regularization of convolution type equations in anisotropic spaces with fractional order of smoothness Sem 1
 

Co-authors: Tamara Tararykova (Cardiff University, UK), Theophile Logon (Cocody University, Côte d'Ivoire)

Under consideration are multidimensional convolution type equations with kernels whose Fourier transforms satisfy certain anisotropic conditions characterizing their behaviour at infinity. Regularized approximate solutions are constructed by using a priori information about the exact solution and the error, characterized by membership in certain anisotropic Nikol'skii-Besov spaces with fractional order of smoothness, F and G respectively. The regularized solutions are defined in a way related to minimizing a Tikhonov smoothing functional involving the norms of the spaces F and G. Moreover, the choice of the spaces F and G is adapted to the properties of the kernel. It is important that the anisotropic smoothness parameter of the space F may be arbitrarily small, so the a priori regularity assumption on the exact solution may be very weak. However, the regularized solutions still converge to the exact one in the appropriate sense (though, of course, the weaker the a priori assumptions on the exact solution, the slower the convergence). In particular, for a sufficiently small smoothness parameter of the space F, the exact solution is allowed to be an unbounded function with a power singularity, which is the case in some problems arising in geophysics. Estimates are obtained characterizing the smoothness of the regularized solutions and their rate of convergence to the exact solution. Similar results are obtained for the case of periodic convolution type equations.

 
10:30-11:00 Morning Coffee
11:00-11:45 Sidorov, D (Russian Academy of Sciences)
  Volterra Integral Equations of the First Kind with Jump Discontinuous Kernels Sem 1
 

Sufficient conditions are derived for the existence and uniqueness of continuous solutions of Volterra operator integral equations of the first kind with jump discontinuous kernels. The method of steps, a well-known principle in the theory of functional equations, is employed in combination with the method of successive approximations. We also address the case when the solution is not unique, prove the existence of parametric families of solutions, and construct them as power-logarithmic asymptotic expansions. The proposed theory is demonstrated for scalar Volterra equations of the first kind with jump discontinuous kernels, with applications to the modelling of evolving dynamical systems.

Related Links: http://studia.complexica.net/index.php?option=com_content&view=article&id=209%3Avolterra-equations-of-the-first-kind-with-discontinuous-kernels-in-the-theory-of-evolving-systems-control-pp-135-146&catid=58%3Anumber-3&Itemid=103&lang=fr - Related paper in Studia Informatica Universalis
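
The sequential structure exploited by the method of steps is already visible in the simplest discretisation: the midpoint rule turns a first-kind Volterra equation into a lower-triangular system solved by marching forward in $t$. A generic sketch for a smooth kernel (the paper's jump-discontinuous kernels require the finer analysis described above):

    import numpy as np

    # First-kind Volterra equation: integral_0^t K(t,s) x(s) ds = f(t).
    # The midpoint rule gives a lower-triangular system solved by marching
    # forward in t -- the discrete shadow of the sequential method of steps.
    K = lambda t, s: 1.0 + t - s                    # smooth test kernel, K(t,t) = 1 != 0
    x_exact = lambda s: np.cos(2 * s)

    n, T = 400, 2.0
    h = T / n
    s_mid = (np.arange(n) + 0.5) * h                # quadrature midpoints
    t_nodes = np.arange(1, n + 1) * h

    def f_of(t, fine=2000):                         # synthetic data by fine quadrature
        ss = (np.arange(fine) + 0.5) * (t / fine)
        return (t / fine) * np.sum(K(t, ss) * x_exact(ss))
    f = np.array([f_of(t) for t in t_nodes])

    x = np.zeros(n)
    for i in range(n):                              # forward substitution
        acc = h * np.sum(K(t_nodes[i], s_mid[:i]) * x[:i])
        x[i] = (f[i] - acc) / (h * K(t_nodes[i], s_mid[i]))
    print(np.max(np.abs(x - x_exact(s_mid))))       # small discretisation error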

 
11:45-12:30 Schönlieb, C (University of Cambridge)
  Optimizing the optimizers - what is the right image and data model? Sem 1
 

When assigned the task of reconstructing an image from given data, the first challenge one faces is the derivation of a truthful image and data model. Such a model can be determined by a priori knowledge about the image, the data and their relation to each other. The source of this knowledge is either our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or we can strive to learn parametric models from the data itself. The common question arises: how can we optimise our model choice?

Starting from the first modelling strategy, this talk will lead us from the total variation as the most successful image regularisation model today to non-smooth second- and third-order regularisers, with data models for Gaussian and Poisson distributed data as well as impulse noise. Applications to image denoising, inpainting and surface reconstruction are given. After a critical discussion of these different image and data models, we will turn towards the second modelling strategy and propose to combine it with the first one using a bilevel optimization method. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions, and optimising total generalised variation regularisation for its application in photography.

Joint work with Luca Calatroni, Jan Lellmann, Juan Carlos De Los Reyes and Tuomo Valkonen.
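
The "learning the model" idea in its simplest form: choose the regularisation parameter that minimises reconstruction error against a known ground truth. In the sketch below a quadratic ($H^1$) denoiser stands in for total variation and grid search stands in for the bilevel solver; all sizes and noise levels are illustrative:

    import numpy as np
    rng = np.random.default_rng(7)

    # Lower level: u(lam) = argmin_u 0.5*||u - f||^2 + 0.5*lam*||D u||^2,
    # solved exactly in Fourier space on a periodic 1D grid (D = forward
    # difference). Upper level: pick lam minimising the error against the
    # known clean signal -- here by grid search instead of a bilevel solver.
    n = 256
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    clean = np.sign(np.sin(2 * t))                   # piecewise-constant test signal
    noisy = clean + 0.3 * rng.standard_normal(n)
    eig = 2.0 - 2.0 * np.cos(2 * np.pi * np.fft.fftfreq(n))  # eigenvalues of D^T D

    def denoise(lam):
        return np.fft.ifft(np.fft.fft(noisy) / (1.0 + lam * eig)).real

    lams = np.logspace(-3, 2, 40)
    errs = [np.linalg.norm(denoise(lam) - clean) for lam in lams]
    print(lams[int(np.argmin(errs))], min(errs))     # learned parameter, achieved error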

 
12:30-13:30 Lunch at Wolfson Court
