
Seminars (INV)

Videos and presentation materials from other INI events are also available.

INVW01 25th July 2011
11:00 to 11:45
C1 - Photoacoustic and Thermoacoustic Tomography I
Photoacoustic Tomography (PAT) and Thermoacoustic Tomography (TAT) are examples of hybrid inverse methods arising in medical imaging that combine a high-resolution modality with another one that can image high contrast between tissues. PAT and TAT combine the high resolution of ultrasound with the high-contrast capabilities of electromagnetic waves.

In these lectures we will describe the mathematical model for PAT and TAT and some of the mathematical progress that has been made in understanding these modalities.
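For reference, a standard acoustic model behind both modalities (the lectures may work with a variant; the notation below is generic) takes the absorbed electromagnetic energy as an initial pressure f and propagates it with the wave equation

\[
\partial_t^2 p(x,t) = c(x)^2 \, \Delta p(x,t), \qquad p(x,0) = f(x), \quad \partial_t p(x,0) = 0,
\]

where c is the sound speed; the inverse problem is to recover f (and, in quantitative versions, optical or electrical parameters) from the pressure p measured on a detection surface surrounding the object.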
INVW01 25th July 2011
11:45 to 12:30
A Nachman C2 - Reconstructions from Partial Boundary Data I
The classical inverse boundary value problem of Calderón consists in determining the conductivity inside a body from the Dirichlet-to-Neumann map. The problem is of interest in medical imaging and geophysics, where one seeks to image the conductivity of a body by making voltage and current measurements at its surface.

Bukhgeim-Uhlmann and Kenig-Sjöstrand-Uhlmann have shown that (in dimensions three and higher) uniqueness in the above problem holds even if measurements are available on possibly very small subsets of the boundary. This course will explain in detail a constructive proof of these results, obtained in joint work with Brian Street.

The topics I hope to cover are: 1. Review of the reconstruction method for Calderón's problem with full data. 2. New Green's functions for the Laplacian. 3. Boundedness properties of the corresponding single layer operators. 4. New solutions of the Schrödinger equation. 5. Unique solvability of the main boundary integral equation involving only the partial Cauchy data.
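In the notation commonly used for the problem described above (Ω the body, γ the conductivity), the governing equation and the Dirichlet-to-Neumann map are

\[
\nabla \cdot (\gamma \nabla u) = 0 \ \text{in } \Omega, \qquad u|_{\partial\Omega} = \varphi, \qquad \Lambda_\gamma \varphi = \gamma \, \partial_\nu u \big|_{\partial\Omega},
\]

and the partial data results concern knowledge of Λ_γφ restricted to a subset of ∂Ω, for φ supported on a (possibly different) subset.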
INVW01 25th July 2011
14:30 to 15:15
C1 - Photoacoustic and Thermoacoustic Tomography II
Photoacoustic Tomography (PAT) and Thermoacoustic Tomography (TAT) are examples of hybrid inverse methods arising in medical imaging that combine a high-resolution modality with another one that can image high contrast between tissues. PAT and TAT combine the high resolution of ultrasound with the high-contrast capabilities of electromagnetic waves.

In these lectures we will describe the mathematical model for PAT and TAT and some of the mathematical progress that has been made in understanding these modalities.
INVW01 25th July 2011
15:15 to 16:00
A Nachman C2 - Reconstructions from Partial Boundary Data II
The classical inverse boundary value problem of Calderón consists in determining the conductivity inside a body from the Dirichlet-to-Neumann map. The problem is of interest in medical imaging and geophysics, where one seeks to image the conductivity of a body by making voltage and current measurements at its surface.

Bukhgeim-Uhlmann and Kenig-Sjöstrand-Uhlmann have shown that (in dimensions three and higher) uniqueness in the above problem holds even if measurements are available on possibly very small subsets of the boundary. This course will explain in detail a constructive proof of these results, obtained in joint work with Brian Street.

The topics I hope to cover are: 1. Review of the reconstruction method for Calderón's problem with full data. 2. New Green's functions for the Laplacian. 3. Boundedness properties of the corresponding single layer operators. 4. New solutions of the Schrödinger equation. 5. Unique solvability of the main boundary integral equation involving only the partial Cauchy data.
INVW01 26th July 2011
09:00 to 09:45
C1 - Photoacoustic and Thermoacoustic Tomography III
Photoacoustic Tomography (PAT) and Thermoacoustic Tomography (TAT) are examples of hybrid inverse methods arising in medical imaging that combine a high-resolution modality with another one that can image high contrast between tissues. PAT and TAT combine the high resolution of ultrasound with the high-contrast capabilities of electromagnetic waves.

In these lectures we will describe the mathematical model for PAT and TAT and some of the mathematical progress that has been made in understanding these modalities.
INVW01 26th July 2011
09:45 to 10:30
A Nachman C2 - Reconstructions from Partial Boundary Data III
The classical inverse boundary value problem of Calderón consists in determining the conductivity inside a body from the Dirichlet-to-Neumann map. The problem is of interest in medical imaging and geophysics, where one seeks to image the conductivity of a body by making voltage and current measurements at its surface.

Bukhgeim-Uhlmann and Kenig-Sjöstrand-Uhlmann have shown that (in dimensions three and higher) uniqueness in the above problem holds even if measurements are available on possibly very small subsets of the boundary. This course will explain in detail a constructive proof of these results, obtained in joint work with Brian Street.

The topics I hope to cover are: 1. Review of the reconstruction method for Calderón's problem with full data. 2. New Green's functions for the Laplacian. 3. Boundedness properties of the corresponding single layer operators. 4. New solutions of the Schrödinger equation. 5. Unique solvability of the main boundary integral equation involving only the partial Cauchy data.
INVW01 26th July 2011
11:00 to 11:45
C1 - Photoacoustic and Thermoacoustic Tomography IV
Photoacoustic Tomography (PAT) and Thermoacoustic Tomography (TAT) are examples of hybrid inverse methods arising in medical imaging that combine a high-resolution modality with another one that can image high contrast between tissues. PAT and TAT combine the high resolution of ultrasound with the high-contrast capabilities of electromagnetic waves.

In these lectures we will describe the mathematical model for PAT and TAT and some of the mathematical progress that has been made in understanding these modalities.
INVW01 26th July 2011
11:45 to 12:30
A Nachman C2 - Reconstructions from Partial Boundary Data IV
The classical inverse boundary value problem of Calderón consists in determining the conductivity inside a body from the Dirichlet-to-Neumann map. The problem is of interest in medical imaging and geophysics, where one seeks to image the conductivity of a body by making voltage and current measurements at its surface.

Bukhgeim-Uhlmann and Kenig-Sjöstrand-Uhlmann have shown that (in dimensions three and higher) uniqueness in the above problem holds even if measurements are available on possibly very small subsets of the boundary. This course will explain in detail a constructive proof of these results, obtained in joint work with Brian Street.

The topics I hope to cover are: 1. Review of the reconstruction method for Calderón's problem with full data. 2. New Green's functions for the Laplacian. 3. Boundedness properties of the corresponding single layer operators. 4. New solutions of the Schrödinger equation. 5. Unique solvability of the main boundary integral equation involving only the partial Cauchy data.
INVW01 26th July 2011
14:00 to 15:00
R Kress Highlighted lecture 1 - Iterative methods in inverse obstacle scattering revisited
The inverse problem we consider is to determine the shape of an obstacle from the knowledge of the far field pattern for scattering of time-harmonic plane waves. For the sake of simplicity, we will concentrate on the case of scattering from a sound-soft obstacle or a perfect conductor. After reviewing some basics, we will interpret Huygens' principle as a system of two integral equations, called the data equation and the field equation, for the unknown boundary of the scatterer and the induced surface flux. Reflecting the ill-posedness of the inverse obstacle scattering problem, these integral equations are ill-posed. They are linear with respect to the unknown flux and nonlinear with respect to the unknown boundary and offer, in principle, three immediate possibilities for their iterative solution via linearization and regularization.

We will discuss the mathematical foundations of these algorithms and describe the main ideas of their numerical implementation. Further, we will illuminate various relations between them and exhibit connections and differences to the traditional regularized Newton type iterations as applied to the boundary to far field map. Numerical results in 3D are presented.
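As a schematic illustration of the linearize-and-regularize idea common to such iterations (this is a generic Tikhonov-regularized Newton-type update, not the lecturer's specific algorithm; the forward map F, its Jacobian and all names are placeholders, and a real-valued discretization is assumed):

import numpy as np

def regularized_newton(F, jacobian, y_meas, q0, alpha=1e-2, n_iter=20):
    # Generic linearize-and-regularize loop: at each step solve a Tikhonov-regularized
    # linear least-squares problem for the update dq of the discretized unknown q.
    q = q0.copy()
    for _ in range(n_iter):
        r = y_meas - F(q)      # residual between measured and predicted data
        J = jacobian(q)        # Jacobian of the forward map at the current iterate
        dq = np.linalg.solve(J.T @ J + alpha * np.eye(q.size), J.T @ r)
        q = q + dq
    return q

Each of the three iterative variants mentioned above broadly fits this pattern, differing in which of the two integral equations is linearized and regularized.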
INVW01 27th July 2011
09:00 to 09:45
A Kirsch C3 - The Factorization Method for Inverse Problems I

In this talk we introduce the Factorization Method for solving certain inverse problems. We will mainly consider inverse scattering problems but indicate the applicability of this method to other types of inverse problems at the end of the course. First, we explain the Factorization Method for a simple finite dimensional example of an inverse scattering problem (scattering by point sources). Then we turn to a scattering problem for time-harmonic acoustic waves where plane waves are scattered by an inhomogeneous medium. We will briefly discuss the direct problem with respect to uniqueness and existence and derive the Born approximation. In the inverse scattering problem one tries to determine the index of refraction from the knowledge of the far field patterns.

First we consider the Born approximation which linearizes the inverse problem. We apply the Factorization Method to this approximation for the determination of the support of the refractive contrast before we, finally, investigate this method for the full nonlinear problem.

This talk will be rather elementary. Knowledge of some basic facts on Hilbert spaces (including the space L2(D) and the notion of compactness) is sufficient for understanding this talk.
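As a complement to the description above, here is a minimal numerical sketch of the Picard-series test to which the Factorization Method leads (a standard (F*F)^(1/4)-type criterion under suitable sign conditions on the contrast; in general a modified operator F_# is used, quadrature weights are omitted, and all names are illustrative):

import numpy as np

def factorization_indicator(F, directions, k, grid_points):
    # Singular system of the discretized far field operator F (N x N, with the
    # same N measurement and incidence directions).
    _, s, Vh = np.linalg.svd(F)
    indicators = []
    for z in grid_points:
        # Far field pattern of a point source placed at the sampling point z.
        phi_z = np.exp(-1j * k * directions @ z)
        coeffs = Vh @ phi_z                        # expansion coefficients of phi_z
        series = np.sum(np.abs(coeffs) ** 2 / s)   # Picard series
        # The series stays small only for points inside the scatterer, so its
        # reciprocal serves as an indicator function to be plotted over the grid.
        indicators.append(1.0 / series)
    return np.array(indicators)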

INVW01 27th July 2011
09:45 to 10:30
S Siltanen C4 - Introduction to computational inversion I
Inverse problems arise from indirect measurements of physical quantities. Examples include recovering the internal structure of objects from boundary measurements, for example X-ray attenuation from projection images or electric conductivity distribution from current-to-voltage measurements at the boundary. A defining feature of inverse problems is "ill-posedness", or extreme sensitivity to measurement and modeling errors: two quite different objects may produce almost exactly the same data. This is why specially regularized reconstruction methods are needed for the practical solution of inverse problems. This course explains how to detect ill-posedness in practical measurements and how to design noise-robust computational inversion algorithms. X-ray tomography is used as a guiding example, and Tikhonov regularization is the basic numerical methodology. Matlab software is provided for the participants to enable numerical experiments.

The zip archive contains Matlab files relating to tomography with an explicitly constructed measurement matrix. Feel free to experiment with these files. If you use them as part of your research, please include a reference to the origin and the author of the files.
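The files themselves are not reproduced here; the core computation they illustrate can be sketched as follows (a Python analogue of the described Matlab workflow, with a hypothetical 3x3 measurement matrix A and an arbitrary regularization parameter):

import numpy as np

def tikhonov_reconstruction(A, m, alpha):
    # Minimize ||A x - m||^2 + alpha ||x||^2 via the regularized normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ m)

# Toy example: three "projections" of a three-pixel object.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([0.0, 2.0, 1.0])
m = A @ x_true                                      # noise-free data, for illustration only
print(tikhonov_reconstruction(A, m, alpha=1e-3))    # close to x_true for small alpha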

INVW01 27th July 2011
11:00 to 11:45
A Kirsch C3 - The Factorization Method for Inverse Problems II

In this talk we introduce the Factorization Method for solving certain inverse problems. We will mainly consider inverse scattering problems but indicate the applicability of this method to other types of inverse problems at the end of the course. First, we explain the Factorization Method for a simple finite dimensional example of an inverse scattering problem (scattering by point sources). Then we turn to a scattering problem for time-harmonic acoustic waves where plane waves are scattered by an inhomogeneous medium. We will briefly discuss the direct problem with respect to uniqueness and existence and derive the Born approximation. In the inverse scattering problem one tries to determine the index of refraction from the knowledge of the far field patterns.

First we consider the Born approximation which linearizes the inverse problem. We apply the Factorization Method to this approximation for the determination of the support of the refractive contrast before we, finally, investigate this method for the full nonlinear problem.

This talk will be rather elementary. Knowledge of some basic facts on Hilbert spaces (including the space L2(D) and the notion of compactness) is sufficient for understanding this talk.

INVW01 27th July 2011
11:45 to 12:30
S Siltanen C4 - Introduction to computational inversion II
Inverse problems arise from indirect measurements of physical quantities. Examples include recovering the internal structure of objects from boundary measurements, for example X-ray attenuation from projection images or electric conductivity distribution from current-to-voltage measurements at the boundary. A defining feature of inverse problems is "ill-posedness", or extreme sensitivity to measurement and modeling errors: two quite different objects may produce almost exactly the same data. This is why specially regularized reconstruction methods are needed for the practical solution of inverse problems. This course explains how to detect ill-posedness in practical measurements and how to design noise-robust computational inversion algorithms. X-ray tomography is used as a guiding example, and Tikhonov regularization is the basic numerical methodology. Matlab software is provided for the participants to enable numerical experiments.

The zip archive contains Matlab files relating to tomography with an explicitly constructed measurement matrix. Feel free to experiment with these files. If you use them as part of your research, please include a reference to the origin and the author of the files.

INVW01 28th July 2011
09:00 to 09:45
A Kirsch C3 - The Factorization Method for Inverse Problems III

In this talk we introduce the Factorization Method for solving certain inverse problems. We will mainly consider inverse scattering problems but indicate the applicability of this method to other types of inverse problems at the end of the course. First, we explain the Factorization Method for a simple finite dimensional example of an inverse scattering problem (scattering by point sources). Then we turn to a scattering problem for time-harmonic acoustic waves where plane waves are scattered by an inhomogeneous medium. We will briefly discuss the direct problem with respect to uniqueness and existence and derive the Born approximation. In the inverse scattering problem one tries to determine the index of refraction from the knowledge of the far field patterns.

First we consider the Born approximation which linearizes the inverse problem. We apply the Factorization Method to this approximation for the determination of the support of the refractive contrast before we, finally, investigate this method for the full nonlinear problem.

This talk will be rather elementary. Knowledge of some basic facts on Hilbert spaces (including the space L2(D) and the notion of compactness) is sufficient for understanding this talk.

INVW01 28th July 2011
09:45 to 10:30
C5 - Coherent interferometric imaging in random media I
I will describe the mathematical problem of imaging remote sources or reflectors in heterogeneous (cluttered) media with passive and active arrays of sensors. Because the inhomogeneities in the medium are not known and cannot be estimated, we model the uncertainty about the clutter with spatial random perturbations of the wave speed. The goal of the lectures is to carry out analytically a comparative study of the resolution and signal-to-noise ratio (SNR) of two array imaging methods: the widely used Kirchhoff migration (KM) and coherent interferometry (CINT). By noise in the images we mean fluctuations that are due to the random medium.

Kirchhoff migration and its variants are widely used in seismic inversion, radar and elsewhere. It forms images by superposing the wave fields received at the array, delayed by the travel times from the array sensors to the imaging points. KM works well in smooth and known media, where there is no wave scattering and the travel times can be estimated accurately. It also works well with data that has additive, uncorrelated measurement noise, provided the array is large, because the noise is averaged out by the summation over the many sensors. KM images in clutter are unreliable and difficult to interpret because of the significant wave distortion by the inhomogeneities. The distortion is very different from additive, uncorrelated noise, and it cannot be reduced by simply summing over the sensors in the array. CINT images efficiently in clutter at ranges that do not exceed one or two transport mean free paths. Beyond such ranges the problem becomes much more difficult, especially in the case of active arrays, because the clutter backscatter overwhelms the echoes from the reflectors that we wish to image. Coherent imaging in such media may work only after pre-processing the data with filters of clutter backscatter.

The CINT method forms images by superposing time delayed, local cross-correlations of the wave fields received at the array. The local cross correlations are computed in appropriate time windows and over limited array sensor offsets. It has been shown with analysis and verified with numerical simulations that the time and offset thresholding in the computation of the cross-correlations is essential in CINT, because it introduces a smoothing that is necessary to achieve statistical stability, at the expense of some loss in resolution. By statistical stability we mean negligibly small fluctuations in the CINT image even when cumulative fluctuation effects in the random medium are not small.
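For orientation, a bare-bones version of the KM imaging function (one source, homogeneous background of speed c0 assumed; all names are illustrative) is:

import numpy as np

def kirchhoff_migration(traces, t_axis, receivers, source, image_points, c0):
    # traces[r, :] is the time trace recorded at receiver r on the grid t_axis.
    image = np.zeros(len(image_points))
    for i, y in enumerate(image_points):
        for r, x_r in enumerate(receivers):
            # Round-trip travel time: source -> image point -> receiver r.
            tau = (np.linalg.norm(source - y) + np.linalg.norm(x_r - y)) / c0
            image[i] += np.interp(tau, t_axis, traces[r])
    return image

CINT replaces the traces above by local cross-correlations of pairs of traces, computed over limited time windows and sensor offsets, before migrating them to the imaging points.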
INVW01 28th July 2011
11:00 to 11:45
A Kirsch C3 - The Factorization Method for Inverse Problems IV

In this talk we introduce the Factorization Method for solving certain inverse problems. We will mainly consider inverse scattering problems but indicate the applicability of this method to other types of inverse problems at the end of the course. First, we explain the Factorization Method for a simple finite dimensional example of an inverse scattering problem (scattering by point sources). Then we turn to a scattering problem for time-harmonic acoustic waves where plane waves are scattered by an inhomogeneous medium. We will briefly discuss the direct problem with respect to uniqueness and existence and derive the Born approximation. In the inverse scattering problem one tries to determine the index of refraction from the knowledge of the far field patterns.

First we consider the Born approximation which linearizes the inverse problem. We apply the Factorization Method to this approximation for the determination of the support of the refractive contrast before we, finally, investigate this method for the full nonlinear problem.

This talk will be rather elementary. Knowledge of some basic facts on Hilbert spaces (including the space L2(D) and the notion of compactness) is sufficient for understanding this talk.

INVW01 28th July 2011
11:45 to 12:30
C5 - Coherent interferometric imaging in random media II
I will describe the mathematical problem of imaging remote sources or reflectors in heterogeneous (cluttered) media with passive and active arrays of sensors. Because the inhomogeneities in the medium are not known, and they cannot be estimated in detail from the data gathered at the array, we model the uncertainty about the clutter with spatial random perturbations of the wave speed. The goal of the lectures is to carry out analytically a comparative study of the resolution and signal-to-noise ratio (SNR) of two array imaging methods: the widely used Kirchhoff migration (KM) and coherent interferometry (CINT). By noise in the images we mean fluctuations that are due to the random medium.

Kirchhoff migration [2, 3] and its variants are widely used in seismic inversion, radar [10] and elsewhere. It forms images by superposing the wave fields received at the array, delayed by the travel times from the array sensors to the imaging points. KM works well in smooth and known media, where there is no wave scattering and the travel times can be estimated accurately. It also works well with data that has additive, uncorrelated measurement noise, provided the array is large, because the noise is averaged out by the summation over the many sensors, as expected from the law of large numbers. KM images in heterogeneous (cluttered) media are unreliable and difficult to interpret because of the significant wave distortion by the inhomogeneities. The distortion is very different from additive, uncorrelated noise, and it cannot be reduced by simply summing over the sensors in the array. CINT images efficiently in clutter [5, 6, 7], at ranges that do not exceed one or two transport mean free paths [11]. Beyond such ranges the problem becomes much more difficult, especially in the case of active arrays, because the clutter backscatter overwhelms the echoes from the reflectors that we wish to image. Coherent imaging in such media may work only after pre-processing the data with filters of clutter backscatter, as is done in [9, 1].

The CINT method was introduced in [5, 6, 7] for mitigating wave distortion effects induced by clutter. It forms images by superposing time delayed, local cross-correlations of the wave fields received at the array. Here, local cross-correlations are computed in appropriate time windows and over limited array sensor offsets. It has been shown with analysis and verified with numerical simulations [5, 6, 7, 8, 4] that the time and offset thresholding in the computation of the cross-correlations is essential in CINT, because it introduces a smoothing that is necessary to achieve statistical stability, at the expense of some loss in resolution. By statistical stability we mean negligibly small fluctuations in the CINT image even when cumulative fluctuation effects in the random medium are not small.

INVW01 28th July 2011
14:00 to 15:00
Y Kurylev Highlighted lecture 2 - Stability of Inverse Problems and related Topics from Differential Geometry
This talk (joint with M. Lassas and T. Yamaguchi) deals with the problem of stabilisation of the anisotropic inverse spectral problem. We treat such problems as inverse problems on an unknown Riemannian manifold and formulate conditions for stability in terms of geometric constraints. Analysing the influence of these constraints, we obtain a wider class of objects than Riemannian manifolds, namely orbifolds. We then consider the relation between the stability of inverse problems on manifolds and the uniqueness of the inverse problems on orbifolds.
INVW01 28th July 2011
16:00 to 16:20
I Bleyer Contributed Talks. 1. A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator
INVW01 28th July 2011
16:20 to 16:40
Contributed Talks. 2. Inverse problem for the wave equation with disjoint sources and receivers
INVW01 28th July 2011
16:40 to 17:00
Contributed Talks. 3. Vector-valued Reproducing Kernel Hilbert Spaces with applications in function extension and image colorization
INVW01 29th July 2011
09:00 to 09:45
S Siltanen C4 - Introduction to computational inversion III
Inverse problems arise from indirect measurements of physical quantities. Examples include recovering the internal structure of objects from boundary measurements, for example X-ray attenuation from projection images or electric conductivity distribution from current-to-voltage measurements at the boundary. A defining feature of inverse problems is "ill-posedness", or extreme sensitivity to measurement and modeling errors: two quite different objects may produce almost exactly the same data. This is why specially regularized reconstruction methods are needed for the practical solution of inverse problems. This course explains how to detect ill-posedness in practical measurements and how to design noise-robust computational inversion algorithms. X-ray tomography is used as a guiding example, and Tikhonov regularization is the basic numerical methodology. Matlab software is provided for the participants to enable numerical experiments.

The zip archive contains Matlab files relating to tomography with an explicitly constructed measurement matrix. Feel free to experiment with these files. If you use them as part of your research, please include a reference to the origin and the author of the files.

INVW01 29th July 2011
09:45 to 10:30
C5 - Coherent interferometric imaging in random media III
I will describe the mathematical problem of imaging remote sources or reflectors in heterogeneous (cluttered) media with passive and active arrays of sensors. Because the inhomogeneities in the medium are not known, and they cannot be estimated in detail from the data gathered at the array, we model the uncertainty about the clutter with spatial random perturbations of the wave speed. The goal of the lectures is to carry out analytically a comparative study of the resolution and signal-to-noise ratio (SNR) of two array imaging methods: the widely used Kirchhoff migration (KM) and coherent interferometry (CINT). By noise in the images we mean fluctuations that are due to the random medium.

Kirchhoff migration [2, 3] and its variants are widely used in seismic inversion, radar [10] and elsewhere. It forms images by superposing the wave fields received at the array, delayed by the travel times from the array sensors to the imaging points. KM works well in smooth and known media, where there is no wave scattering and the travel times can be estimated accurately. It also works well with data that has additive, uncorrelated measurement noise, provided the array is large, because the noise is averaged out by the summation over the many sensors, as expected from the law of large numbers. KM images in heterogeneous (cluttered) media are unreliable and difficult to interpret because of the significant wave distortion by the inhomogeneities. The distortion is very different from additive, uncorrelated noise, and it cannot be reduced by simply summing over the sensors in the array. CINT images efficiently in clutter [5, 6, 7], at ranges that do not exceed one or two transport mean free paths [11]. Beyond such ranges the problem becomes much more difficult, especially in the case of active arrays, because the clutter backscatter overwhelms the echoes from the reflectors that we wish to image. Coherent imaging in such media may work only after pre-processing the data with filters of clutter backscatter, as is done in [9, 1].

The CINT method was introduced in [5, 6, 7] for mitigating wave distortion effects induced by clutter. It forms images by superposing time delayed, local cross-correlations of the wave fields received at the array. Here, local cross-correlations are computed in appropriate time windows and over limited array sensor offsets. It has been shown with analysis and verified with numerical simulations [5, 6, 7, 8, 4] that the time and offset thresholding in the computation of the cross-correlations is essential in CINT, because it introduces a smoothing that is necessary to achieve statistical stability, at the expense of some loss in resolution. By statistical stability we mean negligibly small fluctuations in the CINT image even when cumulative fluctuation effects in the random medium are not small.

INVW01 29th July 2011
11:00 to 11:45
S Siltanen C4 - Introduction to computational inversion IV
Inverse problems arise from indirect measurements of physical quantities. Examples include recovering the internal structure of objects from boundary measurements, for example X-ray attenuation from projection images or electric conductivity distribution from current-to-voltage measurements at the boundary. A defining feature of inverse problems is "ill-posedness", or extreme sensitivity to measurement and modeling errors: two quite different objects may produce almost exactly the same data. This is why specially regularized reconstruction methods are needed for the practical solution of inverse problems. This course explains how to detect ill-posedness in practical measurements and how to design noise-robust computational inversion algorithms. X-ray tomography is used as a guiding example, and Tikhonov regularization is the basic numerical methodology. Matlab software is provided for the participants to enable numerical experiments.

The zip archive contains Matlab files relating to tomography with an explicitly constructed measurement matrix. Feel free to experiment with these files. If you use them as part of your research, please include a reference to the origin and the author of the files.

INVW01 29th July 2011
11:45 to 12:30
C5 - Coherent interferometric imaging in random media IV
I will describe the mathematical problem of imaging remote sources or reflectors in heterogeneous (cluttered) media with passive and active arrays of sensors. Because the inhomogeneities in the medium are not known, and they cannot be estimated in detail from the data gathered at the array, we model the uncertainty about the clutter with spatial random perturbations of the wave speed. The goal of the lectures is to carry out analytically a comparative study of the resolution and signal-to-noise ratio (SNR) of two array imaging methods: the widely used Kirchhoff migration (KM) and coherent interferometry (CINT). By noise in the images we mean fluctuations that are due to the random medium.

Kirchhoff migration [2, 3] and its variants are widely used in seismic inversion, radar [10] and elsewhere. It forms images by superposing the wave fields received at the array, delayed by the travel times from the array sensors to the imaging points. KM works well in smooth and known media, where there is no wave scattering and the travel times can be estimated accurately. It also works well with data that has additive, uncorrelated measurement noise, provided the array is large, because the noise is averaged out by the summation over the many sensors, as expected from the law of large numbers. KM images in heterogeneous (cluttered) media are unreliable and difficult to interpret because of the significant wave distortion by the inhomogeneities. The distortion is very different from additive, uncorrelated noise, and it cannot be reduced by simply summing over the sensors in the array. CINT images efficiently in clutter [5, 6, 7], at ranges that do not exceed one or two transport mean free paths [11]. Beyond such ranges the problem becomes much more difficult, especially in the case of active arrays, because the clutter backscatter overwhelms the echoes from the reflectors that we wish to image. Coherent imaging in such media may work only after pre-processing the data with filters of clutter backscatter, as is done in [9, 1].

The CINT method was introduced in [5, 6, 7] for mitigating wave distortion effects induced by clutter. It forms images by superposing time delayed, local cross-correlations of the wave fields received at the array. Here, local cross-correlations are computed in appropriate time windows and over limited array sensor offsets. It has been shown with analysis and verified with numerical simulations [5, 6, 7, 8, 4] that the time and offset thresholding in the computation of the cross-correlations is essential in CINT, because it introduces a smoothing that is necessary to achieve statistical stability, at the expense of some loss in resolution. By statistical stability we mean negligibly small fluctuations in the CINT image even when cumulative fluctuation effects in the random medium are not small.

INVW01 29th July 2011
14:00 to 15:00
M Burger Highlighted lecture 3 - Inverse Problems in Biomedical Imaging
For several decades, medical imaging applications, in particular computerized tomography, have been a source of applications and development for inverse problems. The recent evolution of the field towards dynamic and multimodal imaging creates a variety of novel challenges for inverse problems, with respect to modelling, analysis, as well as computation. In this talk we shall highlight some mathematical issues of emerging biomedical imaging techniques such as Dynamic PET, SPECT, MRI and Optical Techniques.
INVW02 1st August 2011
10:00 to 10:45
Using the formalism of inverse problems in the theory of integrable models
INVW02 1st August 2011
11:15 to 12:00
U Leonhardt Transformation optics: cloaking and perfect imaging
The field of transformation optics and metamaterials has been named by Science as one of the top ten research insights of the last decade (in fact, it was the only one in physics and engineering that made it into the top ten). What is it? In transformation optics manmade dielectric materials, called metamaterials, are used to implement a coordinate transformation of space (or in some cases of space and time). What can it do? For example, such transformation devices can make things invisible. They can also create perfect images with a resolution no longer limited by the wave nature of light.
INVW02 1st August 2011
14:00 to 14:45
Inverse gravimetry approach to attenuated tomography
INVW02 1st August 2011
16:15 to 17:00
On approximate cloaking by nonsingular transformation media
We give a comprehensive study on regularized approximate electromagnetic cloaking in the spherical geometry via the transformation optics approach. The following aspects are investigated: (i) near-invisibility cloaking of passive media as well as active/radiating sources; (ii) the existence of cloak-busting inclusions without a lossy medium lining; (iii) overcoming the cloak-busting by employing a lossy layer outside the cloaked region; (iv) the frequency dependence of the cloaking performance. We address these issues and connect the obtained asymptotic results to singular ideal cloaking. Numerical verifications and demonstrations are provided to show the sharpness of our analytical study.
INVW02 2nd August 2011
10:00 to 10:45
Cloaked wave amplifiers via transformation optics
The advent of transformation optics and metamaterials has made possible devices producing extreme effects on wave propagation. Here we give theoretical designs for devices, Schrödinger hats, acting as invisible concentrators of waves. These exist for any wave phenomenon modeled by either the Helmholtz or Schrödinger equations, e.g., polarized waves in electromagnetism, pressure waves in acoustics and matter waves in quantum mechanics, and occupy one part of a parameter space continuum of wave-manipulating structures which also contains standard transformation optics based cloaks, resonant cloaks and cloaked sensors. For electromagnetic and acoustic Schrödinger hats, the resulting centralized wave is a localized excitation. In quantum magnetism, the result is a new charged quasiparticle, a quasmon, which causes conditional probabilistic illusions. We discuss possible solid state implementations.
INVW02 2nd August 2011
11:15 to 12:00
Transmission Eigenvalues and Upper Triangular Compactness
The interior transmission eigenvalue problem can be formulated as a 2×2 system of PDEs, where one of the two unknown functions must satisfy too many boundary conditions, and the other too few. The system is not self-adjoint and the resolvent is not compact.

Under the hypothesis that the contrast satisfies a coercivity condition on the boundary of the domain, we show that the corresponding operator has an Upper Triangular Compact Resolvent and that the analytic Fredholm theorem holds for such operators.

As corollaries, we can show that the set of (complex) interior transmission eigenvalues is a (possibly empty) discrete set which depends continuously on the contrast, and that eigenfunctions must be linearly independent. This is different from previous results because the contrast need not have a constant sign (or be real valued) in the interior of the domain.
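For orientation, the scalar interior transmission eigenvalue problem in its most common form (of which the 2×2 system mentioned above is an equivalent reformulation) seeks values of k and nontrivial pairs (w, v) with

\[
\Delta w + k^2 n(x)\, w = 0 \ \text{in } D, \qquad \Delta v + k^2 v = 0 \ \text{in } D, \qquad
w = v, \quad \partial_\nu w = \partial_\nu v \ \text{on } \partial D,
\]

where D is the support of the inhomogeneity and n = 1 + (contrast) is the refractive index.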
INVW02 2nd August 2011
14:00 to 14:45
Reconstructing Conductivity from Minimal Internal Data
We consider the problem of recovering the electric conductivity of a body from knowledge of the magnitude of one current in the interior. We show that the corresponding equipotential surfaces are area minimizing in a conformal metric determined by the given data, prove identifiability and give numerical reconstructions. We also extend the uniqueness results to the case when the object may contain perfectly conducting and/or insulating regions. (Joint work with Amir Moradifam, Alexandru Tamasan and Alexandre Timonov.)
INVW02 2nd August 2011
14:45 to 15:30
M Salo An inverse problem for the p-Laplacian
We study an inverse problem for strongly nonlinear elliptic equations modelled after the p-Laplacian. It is proved that the boundary values of a conductivity coefficient are uniquely determined by a nonlinear Dirichlet-to-Neumann map. The proofs work with the nonlinear equation directly instead of being based on linearization, and involve complex geometrical optics type solutions based on p-harmonic exponentials and certain p-harmonic functions introduced by Wolff. This is joint work with Xiao Zhong (University of Jyväskylä).
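In the notation usually used for this problem, the equation and the nonlinear Dirichlet-to-Neumann map read

\[
\mathrm{div}\big(\sigma(x)\, |\nabla u|^{p-2} \nabla u\big) = 0 \ \text{in } \Omega, \qquad
\Lambda_\sigma : u|_{\partial\Omega} \longmapsto \sigma\, |\nabla u|^{p-2} \partial_\nu u \big|_{\partial\Omega},
\]

with 1 < p < ∞; the case p = 2 reduces to the linear Calderón problem.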
INVW02 2nd August 2011
16:15 to 17:00
C Guillarmou Inverse problems for systems in 2 dimensions
Using the new method of Bukhgeim, we study the Calderón problem in 2 dimensions for certain elliptic systems on Riemann surfaces with boundary.
INVW02 3rd August 2011
10:00 to 10:45
The inverse crack problem
I shall outline some of the open issues related to the three dimensional inverse problem of cracks and present some new stability results obtained in joint work with Eva Sincich.
INVW02 3rd August 2011
11:15 to 12:00
An Inverse Spectral Theorem of Ambarzumyan
We prove a substantial extension of an inverse spectral theorem of Ambarzumyan, and show that it can be applied to arbitrary compact Riemannian manifolds, compact quantum graphs and finite combinatorial graphs, subject to the imposition of Neumann (or Kirchhoff) boundary conditions.
INVW02 3rd August 2011
12:00 to 12:45
A Vasy New non-elliptic methods in the analysis of the Laplacian on conformally compact (asymptotically hyperbolic) spaces
I will explain how to analyze the resolvent of the Laplacian on conformally compact spaces by transforming the spectral family to a family of operators on a compact manifold without boundary. One easy consequence of this approach is high energy estimates for the resolvent, uniform in strips, which are crucial, for instance, for understanding the decay of waves. The methods involved are also applicable in many other settings.
INVW02 3rd August 2011
14:00 to 14:45
Transmission Eigenvalues for a Spherically Stratified Medium
We consider the transmission eigenvalue problem for a spherically stratified medium and note that this eigenvalue problem is not self-adjoint. Nevertheless, considerable information on the spectral theory of transmission eigenvalues can be obtained using methods from the theory of entire functions. In particular we will show that if the index of refraction is constant then complex eigenvalues exist and show that "most" of these complex eigenvalues lie near the real axis. We also consider the inverse spectral problem associated with this eigenvalue problem and show that the solution of this problem depends heavily on whether the index of refraction is less than or greater than one.
INVW02 3rd August 2011
14:45 to 15:30
The Factorization Method for an Electromagnetic Inverse Scattering Problem
First, we recall some facts from the (direct) scattering of time-harmonic electromagnetic waves by an inhomogeneous medium. We study the question of existence of weak solutions by an integro-differential equation, introduce the far field pattern and derive important properties of the far field operator. Then we turn to the inverse problem to determine the contrast of the refractive index from the knowledge of the far field patterns for all incident plane waves. We apply the Factorization Method which provides an explicit characterization of the shape of the contrast by the given far field data.
INVW02 3rd August 2011
16:00 to 16:45
Transmission Eigenvalues in Inverse Scattering Theory
The transmission eigenvalue problem is a new class of eigenvalue problems that has recently appeared in inverse scattering theory for inhomogeneous media. Such eigenvalues provide information about material properties of the scattering object and can be determined from scattering data, hence they can play an important role in a variety of problems in target identification. The transmission eigenvalue problem is non-selfadjoint and nonlinear, which makes its mathematical investigation very interesting.

In this lecture we will describe how the transmission eigenvalue problem arises in scattering theory, how transmission eigenvalues can be computed from scattering data and what is known mathematically about these eigenvalues. The investigation of the transmission eigenvalue problem for anisotropic media will be discussed and Faber-Krahn type inequalities for the first real transmission eigenvalue will be presented. We conclude our presentation with some recent preliminary results on transmission eigenvalues for absorbing and dispersive media, i.e. with complex valued index of refraction, as well as for anisotropic media with contrast that changes sign.

Our presentation contains a collection of results obtained with several collaborators, in particular with David Colton, Drossos Gintides, Houssem Haddar and Andreas Kirsch.
INVW02 4th August 2011
09:15 to 10:00
Carleman estimate for stratified media: the case of a diffusive interface
INVW02 4th August 2011
10:00 to 10:45
Some issues on the inverse conductivity problem with a complex coefficient
In this talk I will present some results concerning the possibility of recovering some features of a complex-valued coefficient from boundary data of its solutions.
INVW02 4th August 2011
11:15 to 12:00
The Identification problem in SPECT: uniqueness, non-uniqueness and stability
We study the problem of recovering both the attenuation a and the source f in the attenuated X-ray transform in the plane. We study the linearization as well. It turns out that there is a natural Hamiltonian flow that determines which singularities we can recover. If the perturbation δa is supported in a compact set that is non-trapping for that flow, then the problem is well posed. Otherwise, it may not be, and at least in the case of radial a, f, it is not. We present uniqueness and non-uniqueness results both for the linearized and the non-linear problem, as well as a Hölder stability estimate.
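In one common convention, the attenuated X-ray transform of f with attenuation a in the plane is

\[
R_a f(s, \theta) = \int_{\mathbb{R}} f(s\theta^{\perp} + t\theta)\,
\exp\!\Big(-\int_t^{\infty} a(s\theta^{\perp} + \tau\theta)\, d\tau\Big)\, dt,
\]

and the identification problem asks which pairs (a, f) give rise to the same data R_a f.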
INVW02 4th August 2011
12:00 to 12:45
A Lechleiter Sampling methods for time domain inverse scattering problems
We consider inverse scattering problems for the wave equation in the time domain: find the shape of a Dirichlet scattering object from time domain measurements of scattered waves. For this time domain inverse problem, we introduce sampling methods, a well-known family of techniques for corresponding frequency domain inverse scattering problems.

The problem setting and the time domain algorithm incorporate two basic features: the data consists of measurements of causal waves, and the inversion algorithm works directly on the time domain data without using a Fourier transformation.

Time-domain sampling methods naturally incorporate a continuum of frequencies in the inversion algorithm. Consequently, they have the potential to improve the reconstruction quality compared to methods working at a single frequency.
INVW02 5th August 2011
09:15 to 10:00
Inverse scattering from cusp on generalized arithmetic surfaces
INVW02 5th August 2011
10:00 to 10:45
L Robbiano Carleman estimate for Zaremba boundary condition
The Zaremba boundary condition is a mixed boundary condition of the following type: on part of the boundary we impose the Dirichlet boundary condition and on the other part we impose the Neumann boundary condition. For such a problem we prove a logarithmic-type estimate for the decay of the energy of solutions of a damped wave equation. In the talk we shall explain the plan of the proof. The main part is to prove a Carleman estimate in a neighborhood of the boundary where the type of boundary condition changes.
INVW02 5th August 2011
11:15 to 12:00
Enhancement of near-cloaking using GPT vanishing structures
INVW02 5th August 2011
12:00 to 12:45
Stability of Calderón Inverse Problem
In this talk we study the different kinds of stability in the inverse Calderón problem in EIT. In particular, we study the stability in the case of partial data.
INVW02 5th August 2011
14:00 to 14:45
Adaptive time-frequency detection and filtering for imaging in strongly heterogeneous background media
We consider the problem of detecting and imaging the location of compactly supported reflectors embedded in strongly heterogeneous background media. Imaging in such regimes is quite challenging as the incoherent wave field that is produced from reflections by the background medium overwhelms the scattered field from the object that we wish to image. To detect the presence of a reflector in such regimes we introduce an adaptive time-frequency representation of the array response matrix followed by a Singular Value Decomposition (SVD). The detection is adaptive because the time windows that contain the primary echoes from the reflector are not determined in advance. Their location and width are determined by searching through the time-frequency binary tree of the LCT. After detecting the presence of the reflector we filter the array response matrix to retain information only in the time windows that have been selected. We also project the filtered array response matrix to the subspace associated with the top singular value and then image using travel time migration. We show with extensive numerical simulations that this approach to detection and imaging works well in heavy clutter that is calibrated using random matrix theory so as to simulate regimes close to experiments. While the detection and filtering algorithm that we present works well in general clutter it has been analyzed theoretically only for the case of randomly layered media.
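A minimal sketch of the filtering and projection step described above (window selection and the subsequent travel time migration are omitted; names are illustrative):

import numpy as np

def project_to_top_singular_subspace(windowed_response):
    # SVD of the (already time-frequency filtered) array response matrix.
    U, s, Vh = np.linalg.svd(windowed_response, full_matrices=False)
    # Keep only the contribution of the top singular value, which is expected to
    # carry the coherent echo from the reflector; the remainder is treated as clutter.
    return s[0] * np.outer(U[:, 0], Vh[0, :])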
INV 9th August 2011
14:00 to 15:00
Artificial black holes
In the talk, conditions for the existence of black holes in the case of two space dimensions are given, and the stability of black holes under perturbations of the metric is studied. Also the inverse problem of the determination of the ergosphere by boundary measurements is considered. A new inverse problem for hyperbolic equations is considered.
INV 16th August 2011
14:00 to 15:00
E Its & A Its Riemann-Hilbert approach to scattering problems in elastic media
We are developing a Riemann-Hilbert (RH) approach to scattering problems in elastic media. The approach is based on a version of the RH method introduced in the nineties by A. Fokas for studying boundary problems for linear and integrable nonlinear PDEs. A suitable Lax pair formulation of the elastodynamic equation is obtained. The integral representations obtained from this vector Lax pair are applied to Rayleigh wave propagation in an elastic quarter space and half space. This reduces the problem to the analysis of a certain underdetermined matrix RH problem on a torus. We show that the problem can in fact be reformulated as a well-posed RH problem with a shift. Some results of the described analysis will be discussed. Part of this work is done jointly with J. Kaplunov.
INVW03 22nd August 2011
09:45 to 10:30
Computational Conformal / Quasi-conformal Geometry and Its Applications
Conformal (C) / Quasi-conformal (QC) geometry has a long history in pure mathematics, and is an active field in both modern geometry and modern physics. Recently, with the rapid development of 3D digital scanning technology, the demand for effective geometric processing and shape analysis is ever increasing. Computational conformal / quasi-conformal geometry plays an important role for these purposes. Applications can be found in different areas such as medical imaging, computer visions and computer graphics.

In this talk, I will first give an overview of how conformal geometry can be applied in medical imaging and computer graphics. Examples include brain registration and texture mapping, where the mappings are constructed to be as conformal as possible to reduce geometric distortions. In reality, most registrations and surface mappings involve non-conformal distortions, which require more general theories to study. A direct generalization of conformal mapping is quasiconformal mapping, where the mapping is allowed to have bounded conformality distortion. In the second part of my talk, the theory of quasiconformal geometry and its applications will be presented. In particular, I will talk about how QC can be used for registration of biological surfaces, shape analysis, medical morphometry and the inpainting of surface diffeomorphisms.

INVW03 22nd August 2011
11:00 to 11:45
Calderón's inverse problem in 2D Electrical Impedance Tomography

Calderón's problem asks if and how one can determine the conductivity structure of a material from boundary current-voltage measurements. In two dimensions the problem admits a complete solution. This includes the uniqueness proof for (very) rough coefficients, the development of new reconstruction algorithms and their computer implementation.

In this talk we give an overview of the recent progress on the EIT problem in two dimensions. The talk is based on joint works with M. Lassas, L. Päivärinta, S. Siltanen, J. Müller and A. Perämäki.

INVW03 22nd August 2011
11:45 to 12:30
Travel Time Tomography and Tensor Tomography
We will give a survey of some recent results on travel time tomography, which consists in determining the index of refraction of a medium by measuring the travel times of sound waves going through the medium. We will also consider the related problem of tensor tomography, which consists in determining a function, a vector field or tensors of higher rank from their integrals along geodesics.
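One standard way to write the underlying transform is the geodesic ray transform of a symmetric m-tensor field f,

\[
I f(\gamma) = \int_0^{\tau(\gamma)} f_{i_1 \cdots i_m}(\gamma(t))\, \dot\gamma^{i_1}(t) \cdots \dot\gamma^{i_m}(t)\, dt,
\]

taken over all maximal geodesics γ joining boundary points: m = 0 corresponds to functions, m = 1 to vector fields, and m = 2 arises as the linearization of travel time tomography.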
INVW03 22nd August 2011
14:00 to 14:45
Laplace-Beltrami Eigen-Geometry and Applications to 3D Medical Imaging
The rapid development of 3D data acquisition technologies stimulates research on 3D surface analysis. Intrinsic descriptors of 3D surfaces are crucial for processing and analyzing surfaces. In this talk, I will present our recent work on 3D surface analysis using the Laplace-Beltrami (LB) eigen-system. The intrinsically defined LB operator provides a powerful tool to study surface geometry through its eigen-system. By combining it with other variational PDEs on surfaces, I will show our results on skeleton construction, feature extraction, pattern identification and surface mapping in 3D brain imaging using LB eigen-geometry. The nature of the LB eigen-system guarantees that our methods are robust to rotations and translations of the surfaces.
INVW03 22nd August 2011
14:45 to 15:30
Graphical Models and Discrete Optimization in Biomedical Imaging: Theory and Applications
Image-based bio-markers have become powerful diagnostic tools due to the rapid and amazing development of medical hardware. In such a context, efficient processing and understanding of the corresponding images has gained significant attention over the past decade. The task to be addressed is extremely challenging due to: (i) the curse of non-linearity (images and desired bio-markers exhibit a non-linear relationship); (ii) the curse of dimensionality (number of degrees of freedom versus their inference); (iii) the curse of non-convexity (designed objective functions present numerous local minima); and (iv) the curse of modularity (variability of organs, imaging modalities). In this talk, we will provide some preliminary answers to the aforementioned challenges by exploiting graphical models and discrete optimization algorithms. Furthermore, concrete examples will be presented towards addressing fundamental problems in biomedical perception like knowledge-based segmentation and deformable image fusion.
INVW03 22nd August 2011
16:00 to 16:30
Image Visualization and Restoration by Curvature Motions

The role of curvatures in visual perception goes back to 1954 and is due to Attneave. It can be argued on neurological grounds that the human brain could not possibly use all the information provided by states of stimulation. The information that stimulates the retina is concentrated at regions where color changes abruptly (contours), and furthermore at angles and peaks of curvature. Yet a direct computation of curvatures on a raw image is impossible. We show in this presentation how curvatures can be accurately estimated, at subpixel resolution, by a direct computation on level lines after their independent smoothing. This view of shape analysis requires a representation of an image in terms of its level lines. At the same time, it involves short-time smoothing (namely Curve Shortening or Affine Shortening) simultaneously for level lines and images.

In this setting, we found an explicit connection between the geometric approach of Curve / Affine Shortening and the viscosity approach of Mean / Affine Curvature Motion, based on a complete image processing pipeline that we term Level Lines (Affine) Shortening, LL(A)S for short. We show that LL(A)S provides an accurate visualization tool for image curvatures, which we call an Image Curvature Microscope. As an application we give some illustrative examples of image visualization and restoration: noise, JPEG artifacts, and aliasing will be shown to be nicely smoothed out by the subpixel curvature motion.

INVW03 22nd August 2011
16:30 to 17:00
E Konukoglu Efficient Probabilistic Model Personalization Integrating Uncertainty on Data and Parameters: Application to Eikonal-Diffusion Models in Cardiac Electrophysiology
Biophysical models are increasingly used for medical applications at the organ scale. However, model predictions are rarely associated with a confidence measure although there are important sources of uncertainty in computational physiology methods, for instance the sparsity and noise of the clinical data used to adjust the model parameters (personalization), and the difficulty of accurately modeling soft tissue physiology. Recent theoretical progress in stochastic models makes their use computationally tractable, but estimating patient-specific parameters with such models remains a challenge.

In this talk I will describe an efficient Bayesian inference method for model personalization (parameter estimation) using polynomial chaos and compressed sensing. I will demonstrate the method in the context of cardiac electrophysiology and show how this can help in quantifying the impact of the data characteristics and uncertainty on the personalization (and thus prediction) results.

The described method can be beneficial for the clinical use of personalized models as it explicitly takes into account the uncertainties on the data and the model parameters while still enabling simulations that can be used to optimize treatment. Such uncertainty handling can be pivotal for the proper use of modeling as a clinical tool, because there is a crucial requirement to know the confidence one can have in personalized models.

INVW03 22nd August 2011
17:00 to 17:30
Variational Methods in image processing, integro-differential equations and applications to histology and MR imaging
Multiscale analysis can give useful insight into various natural and manmade phenomena. In this talk, we will discuss some new techniques of multiscale analysis in the context of digital images.

We will discuss multiscale image processing using variational and partial differential equations. We will describe novel integro-differential equations based on the Rudin-Osher-Fatemi decomposition and its variants. In the second part of the talk, we will discuss the problem of tracing blood vessel boundaries in placental histology images using a combination of global/local registration and Chan-Vese segmentation.

INVW03 23rd August 2011
09:00 to 09:45
C Schoenlieb Alternating total variation minimisation for PET reconstruction
We present a novel reconstruction technique for image reconstruction in positron emission tomography (PET). This technique provides an effective combination of accurately inverting the Radon transform and of implementing an appropriate regularisation for noise removal. In contrast to the majority of existing algorithms which apply denoising to the reconstructed image, our work applies a regularisation both in the measurement and the image space. For this task we use an alternating total variation algorithm. This is joint work with P. E. Barbano and T. Fokas.
INVW03 23rd August 2011
09:45 to 10:30
Photoacoustic Imaging with Focused Illumination
In this talk we consider analytical reconstruction formulas for photoacoustic experiments where the illumination is focused to a plane. In standard photoacoustic experiments, where the whole specimen is uniformly illuminated, the total energy required can be prohibitively large, and thus focusing becomes necessary.

Such focusing experiments require novel reconstruction techniques for imaging, which will be the core topic of this talk. Moreover, the reconstruction algorithms also depend on the measurement setup - we will discuss standard point detectors, realizable for instance by piezo crystals, and integrating detectors, realizable for instance by Mach-Zehnder interferometers. In addition, we review photoacoustic imaging formulas to put the work in perspective.

This is joint work with P. Elbau and R. Schulze (RICAM, Linz, Austria).

INVW03 23rd August 2011
11:00 to 11:45
Robust principal component analysis based four-dimensional computed tomography
We present a new spatiotemporal model for 4D-CT from a matrix perspective: the Robust PCA based 4D-CT model. Instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we explore the maximum temporal coherence of the spatial structure among phases. This Robust PCA based 4D-CT model is also applicable to other imaging problems for motion reduction and/or change detection. A dynamic data acquisition procedure, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy while using fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases without redundant measurements of the common background structure.
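
For orientation (a standard formulation, not quoted from the abstract): if the reconstructed phases are stacked as the columns of a matrix $D$, Robust PCA separates a low-rank background $L$ from sparse motion or change $S$ by solving $$ \min_{L,S}\ \|L\|_* + \lambda\,\|S\|_1 \quad\text{subject to}\quad L + S = D, $$ where $\|\cdot\|_*$ is the nuclear norm and $\|\cdot\|_1$ the entrywise $\ell_1$ norm; in the 4D-CT setting the equality constraint is replaced by a data-fidelity term involving the projection (Radon) operator of each phase, so that the decomposition is estimated directly from undersampled projections.
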
INVW03 23rd August 2011
11:45 to 12:30
U Asher Adaptive and stochastic algorithms for piecewise constant EIT and DC resistivity problems with many measurements
We develop fast numerical methods for the practical solution of the famous EIT and DC-resistivity problems in the presence of discontinuities and potentially many experiments or data.

Based on a Gauss-Newton (GN) approach coupled with preconditioned conjugate gradient (PCG) iterations, we propose two algorithms. One determines adaptively the number of inner PCG iterations required to stably and effectively carry out each GN iteration. The other algorithm, useful especially in the presence of many experiments, employs a randomly chosen subset of experiments at each GN iteration that is controlled using a cross validation approach. Numerical examples demonstrate the efficacy of our algorithms.
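
Schematically (my notation), each GN iteration solves the regularized normal equations $$ \big(J_k^{\top} J_k + \beta R\big)\,\delta m = -\,J_k^{\top} r_k, \qquad m_{k+1} = m_k + \delta m, $$ where $r_k$ stacks the residuals of all experiments and $J_k$ the corresponding sensitivities. The first algorithm adapts how many PCG iterations are spent on this linear solve at each outer step, while the second assembles $J_k^{\top} J_k$ and $J_k^{\top} r_k$ from a randomly chosen subset of the experiments only.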

INVW03 23rd August 2011
14:05 to 14:55
Photoacoustic Tomography: Ultrasonically Breaking through the Optical Diffusion Limit
We develop photoacoustic tomography (PAT) for functional and molecular imaging by physically combining optical and ultrasonic waves via energy transduction. Key applications include early-cancer and functional imaging. Light provides rich tissue contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to depths within one optical transport mean free path (~1 mm in the skin). Ultrasonic imaging, by contrast, provides good image resolution but suffers from poor contrast in early-stage tumors as well as strong speckle artifacts. PAT, embodied in the forms of computed tomography and focused scanning, overcomes the above problems because ultrasonic scattering is ~1000 times weaker than optical scattering. In PAT, a pulsed laser beam illuminates the tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The short-wavelength ultrasonic waves are then detected to form high-resolution tomographic images. PAT broke through the diffusion limit for penetration and achieved high-resolution images at depths up to 7 cm in tissue. Further depths can be reached by thermoacoustic tomography (TAT) using microwaves or RF waves instead of light for excitation.
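
For reference (the standard acoustic model, not quoted from the talk), the pressure $p$ generated by the absorbed optical energy satisfies the initial value problem $$ \partial_t^2 p(x,t) - c(x)^2\,\Delta p(x,t) = 0, \qquad p(x,0) = f(x), \quad \partial_t p(x,0) = 0, $$ where $f$ is proportional to the locally absorbed energy (through the Grüneisen coefficient); PAT records $p$ on a detection surface and reconstructs $f$, so that the image carries optical contrast at ultrasonic resolution.
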
INVW03 23rd August 2011
15:00 to 15:30
Fast Conformal Mapping for Prone / Supine Registration in CT Colonography
In this talk I will present a challenging mathematical problem in biomedical imaging: that of registering (bringing into spatial alignment) two highly deformable surfaces of the colon. Solving this problem has clinical application in CT Colonography, also known as "virtual colonoscopy", used to screen patients for colorectal lesions. Our registration approach relies on conformally flattening each colon surface using Ricci flow, which is a partial differential equation that deforms the metric of a Riemannian manifold.

We then map 3D differential geometric shape descriptors to the flattened surfaces, and perform a cylindrical registration to derive the final registration. With the registration in place, we can determine corresponding points between the different surfaces; this correspondence is accurate to within approximately 6 millimeters.

INVW03 23rd August 2011
15:30 to 16:00
Imaging biomarkers in Alzheimer's Disease: the technical and regulatory challenges of global deployment
Drug development involves huge and extremely expensive global experiments, which at late phase can involve many thousands of patients recruited at hundreds of hospitals all over the world. Imaging biomarkers have the potential to provide standardized, quantitative measurements of patient eligibility for the study, drug side effects, and drug efficacy, but deploying them in such global studies poses great challenges. Alzheimer's Disease (AD) is an area of huge unmet medical need, large sums are being invested in testing potential new treatments for AD, and drug companies are being very ambitious in their use of sophisticated image analysis methods in these studies. The technical and regulatory challenges of these applications are quite different from the normal focus of image analysis research, but the potential benefit of overcoming these challenges is better tools to help bring safe and effective medicines to patients.
INVW03 23rd August 2011
16:30 to 17:00
D-M Koh Cancer Imaging: State-of-the-art and unmet challenges
In the past decade, the field of cancer imaging has continued to expand and grow in response to new challenges posed by our increasing understanding of the molecular basis of cancer and introduction of novel targeted treatments to the clinic. Technological advancement in imaging software and hardware has enabled more rapid acquisition and processing of images, which has led to the growth of functional imaging techniques. We are now able to apply a number of imaging techniques in oncological practice for early diagnosis of disease, detection of small volume disease, providing a roadmap for treatment planning, enabling novel assessment of treatment response, as well as for disease prognostication. However there are a number of imaging challenges which are currently unmet. There is a need for continued validation and qualification of imaging biomarkers using histology, patient outcome data and corroborative multi-parametric imaging. There is a recognised gap in translating imaging findings to treatment delivery, particularly in radiotherapy. There is also a desire to move from simplistic unidimensional tumour burden estimates to volumetric disease quantification across the body. Last but not least, the presence of physiological motion continues to pose diagnostic and therapeutic challenges to pinpoint biologically relevant disease for focussed treatments.
INVW03 23rd August 2011
17:00 to 18:00
Medical Imaging Day Discussion
Open for Business Panel Discussion with LV.Wang, G.Slabaugh, D.Hill & D-M.Koh. This discussion aims to encourage exchange of information and discussion on possible collaboration between academia and industry on future challenges in medical imaging.
INVW03 24th August 2011
09:00 to 09:45
S Arridge Inverse Problems in Optical Tomography
Optical Tomography has developed enormously in the last 20 years. In this modality, light in the visible or near infrared part of the spectrum is injected into an object and its transmitted intensity measured on the boundary of the domain. Several inverse problems can be described which correspond to parameter identification problems, inverse source problems, or both. Both linear and non-linear approaches can be used. In this talk I will describe several of these problems, their applications, and methods for their solution.
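
As a point of reference (a standard model, not quoted from the talk), diffuse optical tomography is often based on the diffusion approximation to the radiative transfer equation, $$ -\nabla\cdot\big(\kappa(x)\,\nabla\Phi(x)\big) + \mu_a(x)\,\Phi(x) = q_0(x) \quad\text{in } \Omega, $$ with a Robin-type boundary condition; the parameter identification problem is to recover the absorption $\mu_a$ and/or the diffusion coefficient $\kappa$ from boundary measurements of $\Phi$, while the inverse source problem seeks the interior source $q_0$.
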
INVW03 24th August 2011
09:45 to 10:30
Multi-physics Inverse Problems and Internal Functionals
Hybrid (multi-physics) inverse problems aim at combining the high contrast of one imaging modality (such as e.g. Electrical Impedance Tomography or Optical Tomography in medical imaging) with the high resolution of another modality (such as e.g. based on ultrasound or magnetic resonance). Mathematically, these problems often take the form of inverse problems with internal information. This talk will review several results of uniqueness and stability obtained recently for such inverse problems.
INVW03 24th August 2011
11:00 to 11:45
Groups of diffeomorphisms and shape spaces
This tutorial will describe the basic steps of the construction of shape spaces via the action of diffeomorphic transformations. We will discuss how right-invariant distances and Riemannian metrics can be projected onto distances or metrics on shape spaces, and review how they can be built in a computational framework. We will then discuss particular cases, with a special focus on point sets, and show applications of this framework to registration problems and data analysis in shape spaces.
INVW03 24th August 2011
11:45 to 12:30
G Dassios Neuronal Current Decomposition via Vector Surface Ellipsoidal Harmonics
Electroencephalography (EEG) and Magnetoencephalography (MEG) provide the two most efficient imaging techniques for the study of the functional brain, because of their time resolution. Almost all analytical studies of EEG and MEG are based on the spherical model of the brain, while studies in more realistic geometries are restricted to numerical treatments alone. The human brain can best be approximated by an ellipsoid with average semi-axes equal to 6, 6.5 and 9 centimeters. An analytic study of brain activity in ellipsoidal geometry, though, is not a trivial problem, and a complete closed-form solution does not seem possible for either EEG or MEG. In the present work we introduce vector surface ellipsoidal harmonics, we discuss their peculiar orthogonality properties, and finally we use them to decompose the neuronal current within the brain into the part that is detectable by EEG and the part that is detectable by MEG measurements. The decomposition of a vector field in vector surface ellipsoidal harmonics leads to three subspaces R, D and T, depending on the character of the surface harmonics that span these subspaces. We see that both the electric field obtained from EEG and the magnetic field obtained from MEG have no T-component. Furthermore, the T-component of the neuronal current does not influence the EEG recordings, while the MEG recordings depend on all three components of the current.
INVW03 24th August 2011
13:45 to 14:30
Expansion methods for medical imaging
INVW03 25th August 2011
09:00 to 09:45
L Cohen Geodesic methods for Biomedical Image Segmentation
Tubular and tree structures appear very commonly in biomedical images, for example vessels, microtubules or neuron cells. Minimal paths have long been used as an interactive tool to segment these structures as cost-minimizing curves. The user usually provides start and end points on the image and gets the minimal path as output. These minimal paths correspond to minimal geodesics according to some adapted metric. They are a way to find a (set of) curve(s) globally minimizing the geodesic active contours energy. The geodesic distance can be computed by solving the Eikonal equation with the fast and efficient Fast Marching method. In the past years we have introduced different extensions of these minimal paths that improve either the interactive aspects or the results. For example, the metric can take into account both scale and orientation of the path. This leads to solving an anisotropic minimal path problem in a 2D or 3D+radius space. On a different level, the user interaction can be minimized by iteratively adding what we call keypoints, for example to obtain a closed curve from a single initial point. The result is then a set of minimal paths between pairs of keypoints. This can also be applied to branching structures in both 2D and 3D images. We also proposed different criteria to obtain automatically a set of end points of a tree structure by giving only one starting point. More recently, we introduced a new general idea that we call Geodesic Voting or Geodesic Density. The approach consists in computing geodesics between a given source point and a set of points scattered in the image. The geodesic density is defined at each pixel of the image as the number of geodesics that pass over this pixel. The target structure corresponds to image points with a high geodesic density. We will illustrate different possible applications of this approach. The work we will present also involved F. Benmansour, Y. Rouchdy and J. Mille at CEREMADE.
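
For reference (standard formulation, my notation), the minimal-path framework computes the geodesic distance map $U$ from a source point $s$ by solving the Eikonal equation $$ |\nabla U(x)| = P(x), \qquad U(s) = 0, $$ where $P > 0$ is the image-based metric (cost); a minimal path to any point is then obtained by gradient descent on $U$ back towards $s$. The Fast Marching method solves this in roughly $O(N\log N)$ operations on an $N$-pixel grid, which is what makes the interactive and voting schemes above practical.
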
INVW03 25th August 2011
09:45 to 10:30
X-ray Tomography and Discretization of Inverse Problems

In this talk we consider the question of how inverse problems posed for continuous objects, for instance for continuous functions, can be discretized, that is, approximated by finite dimensional inverse problems. We will consider linear inverse problems of the form $m = Au + \epsilon$. Here, the function $m$ is the measurement, $A$ is an ill-conditioned linear operator, $u$ is an unknown function, and $\epsilon$ is random noise.

The inverse problem is to determine $u$ when $m$ is given. In particular, we consider X-ray tomography with sparse or limited-angle measurements, where $A$ corresponds to integrals of the attenuation function $u(x)$ over lines in a family $\Gamma$.

The traditional solutions for the problem include generalized Tikhonov regularization and the estimation of $u$ using Bayesian methods. To solve the problem in practice, $u$ and $m$ are discretized, that is, approximated by vectors in a finite dimensional vector space. We show positive results on when this approximation can successfully be done and consider examples of problems that can appear. As examples, we consider total variation (TV) and Besov norm penalty regularization, and Bayesian analysis based on total variation and Besov priors.
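
For concreteness (standard forms, my notation), the Tikhonov and TV regularized solutions of $m = Au + \epsilon$ read $$ u_\alpha = \arg\min_u\ \|Au - m\|_{L^2}^2 + \alpha\,\|u\|_{L^2}^2, \qquad u_\alpha^{\mathrm{TV}} = \arg\min_u\ \|Au - m\|_{L^2}^2 + \alpha\,\|\nabla u\|_{L^1}, $$ and the Bayesian counterparts replace the penalty by the negative logarithm of a TV or Besov prior. Roughly speaking, the discretization question is whether the minimizers (or posterior distributions) of the discretized problems converge to those of the continuous problem as the discretization is refined.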

INVW03 25th August 2011
11:00 to 11:45
Shape Analysis of Population of Manifolds in Computational Anatomy
The accelerated development of imaging techniques in biomedical engineering is challenging mathematicians and computer scientists to develop appropriate methods for the representation and the statistical analysis of various geometrically structured data like submanifolds.

We will first explain how the concepts of homogeneous spaces and Riemannian manifolds embedded in the large deformation diffeomorphic metric mapping (LDDMM) setting, together with the introduction of mathematical currents by Glaunes and Vaillant in this setting, have provided a powerful and effective framework to support local statistical analysis in more and more complex shape spaces.

We will then discuss a new extension to the case where the submanifolds are the supports of informative fields that also need to be analyzed in a common geometrical-functional representation (joint work with Nicolas Charon).

INVW03 25th August 2011
11:45 to 12:30
Challenges of combining image derived information across modalities, over scale, over time and across populations
INVW03 25th August 2011
14:00 to 14:30
A Moradifam Conductivity imaging from one interior measurement in the presence of perfectly conducting and insulating inclusions
We consider the problem of recovering an isotropic conductivity outside some perfectly conducting or insulating inclusions from the interior measurement of the magnitude of one current density field $|J|$. We prove that the conductivity outside the inclusions, and the shape and position of the perfectly conducting and insulating inclusions, are uniquely determined (except in an exceptional case) by the magnitude of the current generated by imposing a given boundary voltage. We have found an extension of the notion of admissibility to the case where perfectly conducting and insulating inclusions may be present. This makes it possible to extend the results on uniqueness of the minimizers of the least gradient problem $F(u)=\int_{\Omega} a\,|\nabla u|$ with $u|_{\partial \Omega}=f$ to cases where $u$ has flat regions (is constant on open sets). This is a joint work with Adrian Nachman and Alexandru Tamasam.
INVW03 25th August 2011
14:30 to 15:00
The attenuated X-ray transform on curves
We discuss inversion formulae for the attenuated X-ray transform on curves in the two-dimensional unit disc. This tomographic problem has applications in the medical imaging modality SPECT, and has more recently arisen in the problem of determining the internal permittivity and permeability parameters of a conductive body from external measurements.
INVW03 25th August 2011
15:00 to 15:30
K Chen A new multi-modality model for effective intensity standardization and image registration

Image registration and segmentation tasks lie at the heart of medical imaging. In registration, our concern is to align two or more images using deformable transforms that have desirable regularities. In a multimodal image registration scenario, where two given images have similar features but non-comparable intensity variations, the sum of squared differences is not suitable to measure image similarities.

In this talk, we first propose a new variational model based on combining intensity and geometric transformations, as an alternative to using mutual information and an improvement on the work by Modersitzki and Wirtz (2006, LNCS, vol. 4057), and then develop a fast multigrid algorithm for solving the underlying system of fourth-order nonlinear partial differential equations. We demonstrate the effective smoothing property of the adopted primal-dual smoother by a local Fourier analysis. An earlier use of mean curvature to regularise image denoising models was in T. F. Chan and W. Zhu (2008), and previous work on developing a multigrid algorithm for the Chan-Zhu model was by Brito-Chen (2010). Numerical tests will be presented to show both the improvements achieved in image registration quality and the multigrid efficiency. Joint work with Dr Noppadol Chumchob.

INVW03 26th August 2011
09:00 to 09:45
J Darbon Network Flows and Non-Linear Discrete Total Variation Evolutions
We consider Discrete Total Variation Flows. Using a combinatorial point of view, we show that these differential inclusions can be exactly computed and we give some properties of the trajectories. An application to contrast-preserving image denoising is presented.
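
For reference (the standard definition, my notation), the total variation flow referred to here is the differential inclusion $$ \frac{du}{dt}(t) \in -\,\partial\,\mathrm{TV}\big(u(t)\big), \qquad u(0) = u_0, $$ where $\partial\,\mathrm{TV}$ is the subdifferential of the (discrete) total variation; the combinatorial, network-flow point of view is what allows these trajectories to be computed exactly.
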
INVW03 26th August 2011
09:45 to 10:30
Personalization of electromechanical models of the heart
Personalization of biophysical models consists in estimating parameters from patient specific data. In this presentation, various strategies for the estimation of parameters of electromechanical models of the heart will be covered. In particular the issue of observability will be raised since only a subset of biophysical parameters can be estimated from common measurements. Personalization results of electrophysiological models from measured isochrones and mechanical models from cine MR images will be presented.
INVW03 26th August 2011
11:00 to 11:45
A free-discontinuity approach to inverse problems
Phase-field methods and length or perimeter penalization have been successfully applied to many imaging problems, such as the Mumford-Shah approach to segmentation and its phase-field counterpart by Ambrosio and Tortorelli.

In this talk we shall illustrate how these techniques may also be used to treat inverse problems where a discontinuous function has to be recovered. As an example we consider the inverse problem of determining insulating cracks or cavities by performing a few electrostatic measurements on the boundary. We show the validity of these methods by a convergence analysis and by numerical experiments. The numerical experiments have been performed jointly with Wolfgang Ring (University of Graz, Austria).

INVW03 26th August 2011
11:45 to 12:30
The geometry of the Riemannian manifold of Landmark points, with applications to Medical Imaging
In the past few years there has been a growing interest, in diverse scientific communities, in endowing Shape Spaces with Riemannian metrics, so as to be able to measure similarities between shapes and perform statistical analysis on data sets (e.g. for object recognition, target detection and tracking, classification, and automated medical diagnostics).

The knowledge of curvature on a Riemannian manifold is essential in that it allows one to infer about the existence of conjugate points, the behavior of geodesic curves, the well-posedness of the problem of computing the implicit mean (and higher statistical moments) of samples on the manifold, and more. In shape analysis such issues are of fundamental importance since they allow one to build templates, i.e. shape classes that represent typical situations in different applications (e.g. in the field of computational anatomy).

The actual differential geometry of Shape Spaces has started to emerge only very recently: in this talk we will explore the sectional curvature for the Shape Space of landmark points, endowed with the Riemannian metric induced by the action of a diffeomorphism group. Applications to Medical Imaging will be discussed and numerical results will be shown.

INVW03 26th August 2011
14:00 to 14:45
Medical Morphometry using Computational Quasiconformal Geometry
Medical morphometry is an important area in medical imaging for disease analysis. Its goal is to systematically analyze anatomical structures of different subjects, and to generate diagnostic images to help doctors visualize abnormalities. Quasiconformal (QC) Teichmuller theory, which studies the distortions of the deformation patterns between shapes, has become an important tool for this purpose. In practice, objects are usually represented discretely by triangulation meshes. In this talk, I will first describe how quasi-conformal geometry can be discretized onto discrete meshes. This gives a discrete analogue of QC geometry on discrete meshes which represent anatomical structures. Then, I will talk about how computational QC geometry can be applied to practical applications in medical shape analysis.
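
For background (the standard definition, not quoted from the talk), a quasiconformal map $f$ of a planar domain satisfies the Beltrami equation $$ \frac{\partial f}{\partial \bar z} = \mu(z)\,\frac{\partial f}{\partial z}, \qquad \|\mu\|_\infty < 1, $$ where the Beltrami coefficient $\mu$ measures the local deviation from conformality; in the discrete setting $\mu$ is approximated face by face on the triangulation mesh, which is the basis of the computational QC geometry used here.
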
INVW03 26th August 2011
14:45 to 15:30
Generalized Sampling and Infinite-Dimensional Compressed Sensing
I will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases (and frames). Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via Compressed Sensing techniques, however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle I'll introduce the concept of Infinite-Dimensional Compressed Sensing.
INV 30th August 2011
14:00 to 15:00
Transmission Eigenvalues
INV 6th September 2011
14:00 to 15:00
Sturm-Liouville operators with inner singularities and indefinite inner products
We consider SL differential operators with a potential having a singularity in the interior of the interval. In some situations (e.g. for a Bessel type singularity) the differential expression generates a self-adjoint operator in a Pontryagin space. These considerations are related to M.G.Krein's method for solving the inverse spectral problem for SL operators. Joint work with Malcolm Brown and Matthias Langer.
INV 13th September 2011
14:00 to 15:00
Electromagnetic mediums with a double light cone structure
If we are given an electromagnetic medium we can compute the speed of a propagating signal. For example, in a homogeneous medium we can compute the phase velocity of a plane wave. A more challenging problem is to understand the converse problem: if we know the behaviour of the signal speed in all possible directions, what can we say about the medium? The problem has a natural formulation on a 4-manifold representing spacetime. Then the Fresnel surface describes propagation speed in an electromagnetic medium. For example, in an isotropic medium the Fresnel surface is a Lorentz light cone. Conversely, A. Favaro and L. Bergamin have proven that an isotropic medium is the only medium with this property (within a suitable class of linear media). The purpose of this talk is to describe the analogous problem when the Fresnel surface is the union of two Lorentz light cones. Uniaxial crystals such as calcite are one example. In addition, we find two other medium classes with the same property.
INV 20th September 2011
14:00 to 15:00
Reconstruction scheme for active thermography
A singular-sources-type reconstruction scheme is given for active thermography to identify unknown inclusions in known media. The media can be anisotropic and inhomogeneous. If the media are isotropic and the known background media are less conductive than the inclusions, then it is also possible to reconstruct the conductivities of the inclusions at their boundaries. However, if the known background media are more conductive, the scheme cannot always recover the conductivities of the inclusions at their boundaries.
INV 27th September 2011
14:00 to 15:00
Stability of Calderon Problem in 2D
The Calderón inverse problem asks for the determination of the conductivity of a body from boundary measurements (namely the so-called Dirichlet-to-Neumann map).

In dimension 2 the best available result is due to Astala and Päivärinta. They were able to combine the approach based on the scattering transform introduced by Nachman with the theory of quasiconformal maps to show that, in any planar domain, any conductivity essentially bounded from above and below can be identified by boundary measurements. In the talk we will show that if the oscillation of the conductivities is controlled in some Besov space and the boundary of the domain has Minkowski dimension less than 2, then the identification is stable. We will also discuss the relation between the concept of G-convergence and Dirichlet-to-Neumann maps to show the sharpness of the result.

INV 27th September 2011
15:30 to 16:30
S Ivanov Boundary distance: volume and geodesic ray transform

INV 29th September 2011
17:00 to 18:00
G Uhlmann Cloaking: science meets science fiction
We describe recent theoretical and experimental progress on making objects invisible to detection by electromagnetic waves, acoustic waves and quantum waves. We emphasize the method of transformation optics. For the case of electromagnetic waves, Maxwell's equations have transformation laws that allow for design of electromagnetic materials that steer light around a hidden region, returning it to its original path on the far side. Not only would observers be unaware of the contents of the hidden region, they would not even be aware that something was being hidden. The object, which would have no shadow, is said to be cloaked. We recount some of the history of the subject and discuss some of the issues involved.
INV 4th October 2011
14:00 to 15:00
Gauge equivalence in inverse stationary transport
This talk concerns the inverse problem of reconstructing the coefficients appearing in the stationary transport equation from boundary measurements. The attenuation coefficient is assumed anisotropic. Even in the case when all spatial-angular boundary measurements are available (the full albedo operator is known), the coefficients can only be recovered up to a gauge transformation. The equivalence classes are shown to depend continuously on the boundary data. The Euclidean and simple Riemannian cases are considered.

This is joint work with Plamen Stefanov and Stephen McDowall.

INV 11th October 2011
14:00 to 15:00
Homogenization methods for metamaterials: An introduction
INV 25th October 2011
14:00 to 15:00
W Rundell The reconstruction of sources and inclusions by rational approximation
The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems that the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the elliptic case. In particular, we consider steady-state electrostatic or thermal imaging, giving boundary value problems for Laplace's equation, or the case of inverse scattering with near/far field data for the Helmholtz equation. Our inclusions can be interior forces with compact support or obstacles with a fixed, given boundary condition, although we shall concentrate on the former situation. We propose a series of algorithms that, under certain assumptions, allow for the determination of the support set by solving a simpler ``equivalent point source'' problem.
INV 26th October 2011
14:00 to 15:00
Hybrid Inverse Problems and Internal Functionals
Hybrid (coupled-physics) inverse problems aim at combining the high contrast of one imaging modality (such as EIT or OT) with the high resolution of another modality (such as e.g. ultrasound or MRI). Mathematically, several problems take the form of inverse problems with internal information. This talk will review several results of uniqueness and stability obtained in the context of photo-acoustic tomography and ultrasound modulation.
INV 27th October 2011
14:00 to 15:00
Local injectivity for generalized Radon transforms
For a given smooth, positive function $m(x, \xi, \eta)$ we consider a weighted Radon transform $R$ defined by $Rf(\xi, \eta) = \int f(x, \xi x + \eta)\, m(x, \xi, \eta)\, dx$ for functions $f(x, y)$ that are defined in some neighborhood of the origin and are supported in $y\ge x^2$. The question is for which $m(x, \xi, \eta)$ it is true that $R$ is injective. A similar problem when the family of lines $y = \xi x + \eta$ is replaced by a family of curves is also considered.
INV 3rd November 2011
14:00 to 15:00
Photoelastic Tomography
INV 8th November 2011
14:00 to 15:00
P Perry Solving Nonlinear Dispersive Equations in Dimension Two by the Method of Inverse Scattering
The Davey-Stewartson II equation and the Novikov-Veselov equations are nonlinear dispersive equations in two dimensions, respectively describing the motion of surface waves in shallow water and geometrical optics in nonlinear media. Both are integrable by the $\overline{\partial}$-method of inverse scattering, and may be considered respective analogues of the cubic nonlinear Schrodinger equation and the KdV equation in one dimension. We will prove global well-posedness for the defocussing DS II equation in the space $H^{1,1}(R^2)$ consisting of $L^2$ functions with $\nabla u$ and $(1+|\, \cdot \,|) u(\, \cdot \, )$ square-integrable. Using the same scattering and inverse scattering maps, we will also show that the inverse scattering method yields global, smooth solutions of the Novikov-Veselov equation for initial data of conductivity type, solving an open problem posed recently by Lassas, Mueller, Siltanen, and Stahel.
INV 10th November 2011
14:00 to 15:00
Biomechanical Imaging in Tissue - Using Time Dependent Data
Biomechanical Tissue Imaging is inspired by the doctor's palpation exam, where the doctor presses against the skin to feel for abnormally stiff regions within the body. This talk will describe inventive technologies in this imaging area that utilize the concept of interior radiation force; an example of one of these technologies is Supersonic Imaging, which was developed in Paris. In these technologies, low-amplitude (tens of microns) propagating wave motion is produced in the body, and the technologies output a movie of this motion. We will discuss elastic and viscoelastic mathematical models and the essential properties that must be included so that solutions mimic the data produced by the experiment. Algorithms that utilize the fundamental features of the model and the time-dependent data will be presented. Images of cancerous tissue, corroborated with ultrasound images and including cancerous inclusions a few millimeters in diameter, will be shown. Statistical properties of the images and sensitivity results will be included if time permits.
INV 15th November 2011
14:00 to 15:00
Common singularities in SPECT and SAR inverse problems
INV 17th November 2011
14:00 to 15:00
Y Capdebosq On the scattered field generated by a ball inhomogeneity of constant index
Consider the solution of a scalar Helmholtz equation where the potential (or index) takes two positive values, one inside a disk or a ball (when d=2 or 3) of radius epsilon and another one outside. For this classical problem, it is possible to derive sharp estimates of the size of the scattered field caused by this inhomogeneity, for any frequency and any contrast. We will see that uniform estimates with respect to frequency and contrast do not tend to zero with epsilon, because of a quasi-resonance phenomenon. However, broadband estimates can be derived: uniform bounds for the scattered field for any contrast, and any frequencies outside of a set which tends to zero with epsilon.
INV 22nd November 2011
14:00 to 15:00
Nonlinear Fourier transform and electrical impedance tomography
INV 23rd November 2011
14:00 to 15:00
D Mugnolo On the heat equation subject to nonlocal constraints
I will consider a heat equation subject to integral constraints on the total mass and the barycenter, instead of more common boundary conditions. The natural operator theoretical setting is that of a space of distributions on the torus. By variational methods I will show well-posedness and some relevant spectral properties of this problem. This is joint work with Serge Nicaise (Valenciennes, France).
INV 24th November 2011
14:00 to 15:00
Data Assimilation with Numerical model error
Data assimilation addresses the inverse problem of determining, given a set of uncertain observations and a numerical approximation to a physical system, the set of parameters, especially the initial condition, that leads to a forward computation (the 'analysis') which best fits the observations. Data assimilation is widely used in meteorological applications. In this talk I will briefly describe the 4D-VAR method for data assimilation, and then show how its results are influenced by using a variety of different numerical schemes with associated numerical model error. Joint work with Sian Jenkins, Melina Freitag and Nathan Smith (Bath).
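
Schematically (standard notation, not quoted from the talk), strong-constraint 4D-VAR seeks the initial state $x_0$ minimizing $$ J(x_0) = \tfrac12\,(x_0 - x_b)^{\top} B^{-1} (x_0 - x_b) + \tfrac12 \sum_{k=0}^{K} \big(H_k(x_k) - y_k\big)^{\top} R_k^{-1} \big(H_k(x_k) - y_k\big), \qquad x_{k+1} = M_k(x_k), $$ where $x_b$ is the background state with covariance $B$, the $y_k$ are observations with error covariances $R_k$, the $H_k$ are observation operators and the $M_k$ are steps of the numerical model; errors in $M_k$ therefore feed directly into the analysis, which is the effect examined in the talk.
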
INV 29th November 2011
14:00 to 15:00
Gaussian beams and geometric aspects of Inverse problems
Geometry plays an important role in inverse problems. For example, reconstruction of a second-order self-adjoint elliptic differential operator on a manifold can, through a gauge transformation, be reduced to the reconstruction of a Schrödinger operator corresponding to the Beltrami-Laplace operator, i.e. of the topology of the manifold, the Riemannian metric on it, and the potential. The difficulties are mostly related to geometric aspects of the problem. If we consider applied inverse problems, we also see that the main problems lie in geometry. For example, in the main problem of geophysics, the so-called migration problem, it is necessary to reconstruct high-frequency wave fields in media with complicated geometry and many caustics of different structure. The difficulties of reconstructing wave fields close to caustics are also of a geometric character. To solve such geometric problems it is necessary to have instruments closely related to the geometry of the corresponding problem. One such instrument is Gaussian beam solutions. In the talk the geometric properties of these solutions and their use in direct and inverse problems will be shown. Problems with more complicated Finsler geometry will also be discussed.
INV 1st December 2011
14:00 to 15:00
Sectional Photoacoustic Imaging
The literature on reconstruction formulas for photoacoustic tomography is vast. The various reconstruction formulas differ in the measurement devices used and the geometry on which the data are sampled. In standard photoacoustic imaging the object under investigation is illuminated uniformly. Recently, sectional photoacoustic imaging techniques, which use focusing techniques for initializing and measuring the pressure along a plane, have appeared in the literature. This talk surveys existing exact reconstruction formulas for sectional photoacoustic imaging and provides novel ones. Sectional imaging is a research topic on its own, but can also be used as an approach for identification of the absorption density and the speed of sound from photoacoustic measurements. The mathematical model for such an experiment is developed and exact reconstruction formulas for both parameters are presented. This is joint work with P. Elbau, A. Kirsch, R. Schulze.
INV 6th December 2011
14:00 to 15:00
Identification of the blood perfusion coefficient
In 1948, H.H. Pennes postulated that the effect of the temperature difference between the blood supply and the tissue acts as an energy sink term, giving rise to the so-called bio-heat conduction equation. This, in essence, is similar to the heat transfer fin equation, where the sink term represents convective heat loss to the surroundings. In this talk, we investigate the identification of the variable blood perfusion coefficient in the transient bio-heat conduction equation. In this inverse coefficient identification problem the additional information sufficient to render a unique solution could be a boundary, internal or integral temperature measurement. Furthermore, stability of the solution is achieved by mollification or Tikhonov's regularization with a suitable choice of the regularization parameter.
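
In one common form (standard notation, not quoted from the abstract) the transient bio-heat equation reads $$ \rho c\,\frac{\partial T}{\partial t} = \nabla\cdot\big(k\,\nabla T\big) + \rho_b c_b\,\omega_b\,(T_a - T) + Q, $$ where $T$ is the tissue temperature, $T_a$ the arterial blood temperature, $\rho$, $c$, $k$ the tissue density, specific heat and conductivity, $\rho_b$, $c_b$ the blood density and specific heat, $Q$ a metabolic heat source, and $\omega_b$ the blood perfusion coefficient that the additional temperature data are used to identify.
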
INVW05 12th December 2011
10:00 to 11:00
W Mulder Where should we focus?

Seismic imaging or migration maps singly scattered data into the subsurface, providing an image of the interfaces between rock formations with different impedances. The corresponding linear inverse problem is the minimization of the least-squares error subject to the Born approximation of the acoustic wave equation.
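
Schematically (standard notation, not the speaker's), this linearized problem reads $$ \min_{\delta m}\ \tfrac12\,\big\|F'[m_0]\,\delta m - d\big\|_2^2, $$ where $m_0$ is the smooth background velocity model, $\delta m$ the reflectivity perturbation, $d$ the (preprocessed) singly scattered data and $F'[m_0]$ the Born modelling operator; the migration image is essentially the adjoint $F'[m_0]^{*}d$ applied to the data.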

Substantial preprocessing is usually required to remove data that do not obey the single scattering assumption. Also, an accurate background velocity is needed. Migration velocity analysis exploits the redundancy in the data to estimate the background velocity model. Data for different shot-receiver distances or offsets should provide the same image of the subsurface. Its implementation for the full wave equation invokes action at a distance via a subsurface shift in space or time. Figure 1 shows a real-data example. The corresponding cost functional tries to focus energy at zero subsurface shift, thereby suppressing the unphysical action at a distance.

Although removal of surface multiples is a common technique, interbed multiples as well as remnant surface multiples may still lead the focusing algorithms astray. Focusing in the data domain is a recent generalization that, in principle, should not suffer from the presence of surface and interbed multiples. Further development is, however, still required to mature the method.

Figure 1. Example of seismic velocity inversion with focusing based on horizontal shifts in the depth domain, starting from the best velocity model that increases linearly with depth. The left panel shows the extended image at a lateral position of x = 2 km, as a function of horizontal subsurface offset hx and depth z. The iteration count is displayed in the left upper corner. The central panel displays the migration image. The right one shows the reconstructed smooth background velocity model.

INVW05 12th December 2011
11:00 to 12:00
Introduction to Radar Imaging
Radar imaging is a technology that has been developed, very successfully, within the engineering community during the last 50 years. Radar systems on satellites now make beautiful images of regions of our earth and of other planets such as Venus. One of the key components of this impressive technology is mathematics, and many of the open problems are mathematical ones. This lecture will explain, from first principles, some of the basics of radar and the mathematics involved in producing high-resolution radar images.
INVW05 12th December 2011
13:30 to 14:30
S Arridge (Inverse Problems in) BioMedical Imaging

Biomedical Imaging is a large topic that may be divided into direct imaging methods versus indirect imaging. By direct imaging we mean methods such as microscopy wherein the data is acquired and presented as an image; by indirect imaging we mean methods such as tomography wherein data is acquired through a detector and images are reconstructed by solving an inverse problem. Common to both approaches are tasks such as segmentation, registration, and pattern recognition, and confounding processes such as noise, blurring and obscuration.

Direct Biomedical Imaging can be contrasted with Computer Vision due to the different nature of the resolution, contrast, and confounding processes involved. Indirect Imaging can be compared to other classes of inverse problems, and again we may point out particular features to do with the typically large scale of biomedical images, their sometimes non-unique or badly ill-posed nature, and in some cases their non-linear character.

In this talk I will try to give an overview of some current topics in these areas.

INVW05 12th December 2011
14:30 to 15:00
Acousto-Optic Imaging and Related Inverse Problems
We propose a tomographic method to reconstruct the optical properties of a highly scattering medium from incoherent acousto-optic measurements. The method is based on the solution to an inverse problem for the diffusion equation and makes use of the principle of interior control of boundary measurements by an external wave field. This is joint work with Guillaume Bal.
INVW05 12th December 2011
15:30 to 16:00
Reconstruction for an offset cone beam x-ray tomography system
The RTT airport baggage x-ray tomography system uses an unusual geometry in which the detector array is offset from the circle on which the sources lie. Rather than using a single rotating source, the sources are switched on and off. This enables the system to operate at the high speed required for airport baggage scanning but presents challenges for reconstruction. We discuss the strategy for choosing the source firing sequence and present a reconstruction algorithm using rebinning onto multiple curved surfaces.
INVW05 12th December 2011
16:00 to 16:30
T Varslot & A Kingston & G Myers & A Sheppard Theoretically-exact CT-reconstruction from experimental data
We demonstrate how an optimisation-based autofocus technique may be used to overcome physical instabilities that have, until now, made high-resolution theoretically-exact tomographic reconstruction impractical. We show that autofocus-corrected, theoretically-exact helical CT is a viable option for high-resolution micro-CT imaging at cone-angles approaching ±50 degrees. The elevated cone-angle enables better utilisation of the available X-ray flux and therefore shorter image acquisition time than conventional micro-CT systems. By using the theoretically-exact Katsevich 1PI inversion formula, we are not restricted to a low-cone-angle regime; we can in theory obtain artefact-free reconstructions from projection data acquired at arbitrarily high cone-angles. However, this reconstruction method is sensitive to misalignments in the tomographic data, which result in geometric distortion and streaking artefacts. We use a parametric model to quantify the deviation between the actual acquisition trajectory and an ideal helix, and use an autofocus method to estimate the relevant parameters. We define optimal units for each parameter, and use these to ensure consistent alignment accuracy across different cone-angles and different magnification factors.
INVW05 12th December 2011
16:30 to 17:00
X Luo & W Li & N Hill & R Ogden & A Smythe Inverse estimation of fibre reinforced soft tissue of human gallbladder wall
Cholecystectomy (surgical removal of the gallbladder) for gallbladder pain is the most common elective abdominal operation performed in the western world. However, the outcome is not entirely satisfactory as the mechanism of gallbladder pain is unclear. We have developed a mechanical model of the gallbladder aiming to understand its mechanical behaviour. To apply this model to clinical situations, it is often necessary to estimate the material properties from non-invasive medical images. In this work, we present a non-gradient-based optimization inverse approach for estimating the elastic modulus of human gallbladders from ultrasound images. Two forward problems are considered. One utilizes a linear orthotropic material model and estimates the elastic moduli in the circumferential and longitudinal directions. The other is a nonlinear Holzapfel-Gasser-Ogden model in which two families of fibres are embedded circumferentially in an otherwise homogeneous neo-Hookean elastin matrix. These forward problems are solved using the finite element package Abaqus, and a Python/Matlab based optimization algorithm is developed to search for the global minimum of the error functional, which measures the difference between the geometries from the numerical predictions and the images. We will compare and analyse the results for six gallbladder samples, and discuss the outstanding challenges.
INVW05 13th December 2011
09:00 to 09:30
Modulated plane wave methods for Helmholtz problems in heterogeneous media

A major challenge in seismic imaging is full waveform inversion in the frequency domain. If an acoustic model is assumed the underlying problem formulation is a Helmholtz equation with varying speed of sound. Typically, in seismic applications the solution has many wavelengths across the computational domain, leading to very large linear systems after discretisation with standard finite element methods. Much progress has been achieved in recent years by the development of better preconditioners for the iterative solution of these linear systems. But the fundamental problem of requiring many degrees of freedom per wavelength for the discretisation remains.

For problems in homogeneous media, that is, spatially constant wave velocity, plane wave finite element methods have gained significant attention. The idea is that instead of polynomials on each element we use a linear combination of oscillatory plane wave solutions. These basis functions already oscillate with the right wavelength, leading to a significant reduction in the required number of unknowns. However, higher-order convergence is only achieved for problems with constant or piecewise constant media.

In this talk we discuss the use of modulated plane waves in heterogeneous media, products of low-degree polynomials and oscillatory plane wave solutions for a (local) average homogeneous medium. The idea is that high-order convergence in a varying medium is recovered due to the polynomial modulation of the plane waves. Wave directions are chosen based on information from raytracing or other fast solvers for the eikonal equation. This approach is related to the Amplitude FEM originally proposed by Giladi and Keller in 2001. However, for the assembly of the systems we will use a discontinuous Galerkin method, which allows a simple way of incorporating multiple phase information in one element. We will discuss the dependence of the element sizes on the wavelength and the accuracy of the phase information, and present several examples that demonstrate the properties of modulated plane wave methods for heterogeneous media problems.
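
As a sketch (my notation), on each element the trial space is spanned by functions of the form $$ \varphi_{j,m}(x) = P_m(x)\, e^{\,\mathrm{i}\,k\, d_j \cdot x}, $$ where the $d_j$ are unit propagation directions obtained from ray tracing or an eikonal solver for the locally averaged medium, $k$ is the corresponding local wavenumber and the $P_m$ are low-degree polynomials; the polynomial modulation is what is expected to restore high-order accuracy when the true wave speed varies within the element.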

INVW05 13th December 2011
09:30 to 10:00
Seismic inverse scattering by reverse time migration
We will consider the linearized inverse scattering problem from seismic imaging. While the first reverse time migration algorithms were developed some thirty years ago, they have only recently become popular for practical applications. We will analyze a modification of the reverse time migration algorithm that turns it into a method for linearized inversion, in the sense of a parametrix. This is proven using tools from microlocal analysis. We will also discuss the limitations of the method and show some numerical results.
INVW05 13th December 2011
10:00 to 10:30
Local analysis of the inverse problem associated with the Helmholtz equation -- Lipschitz stability and iterative reconstruction
We consider the Helmholtz equation on a bounded domain, and the Dirichlet-to-Neumann map as the data. Following the work of Alessandrini and Vessella, we establish conditions under which the inverse problem defined by the Dirichlet-to-Neumann map is Lipschitz stable. Recent advances in developing structured massively parallel multifrontal direct solvers for the Helmholtz equation have motivated the further study of iterative approaches to solving this inverse problem. We incorporate structure through conormal singularities in the coefficients and consider partial boundary data. Essentially, the coefficients are finite linear combinations of piecewise constant functions. We then establish convergence (radius and rate) of the Landweber iteration in appropriately chosen Banach spaces, avoiding the fact that the coefficients can originally be merely $L^{\infty}$, to obtain a reconstruction. Here, Lipschitz (or possibly Hölder) stability replaces the so-called source condition. We accommodate the exponential growth of the Lipschitz constant using approximations by finite linear combinations of piecewise constant functions and the frequency dependencies to obtain a convergent projected steepest descent method containing elements of a nonlinear conjugate gradient method. We point out some correspondences with discretization, compression, and multigrid techniques. Joint work with E. Beretta, L. Qiu and O. Scherzer.
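
To make the iteration concrete, here is a minimal sketch in Python (my own illustration, not the authors' code) of a plain Landweber iteration for a nonlinear forward map; the projected steepest descent variant discussed in the talk additionally projects each iterate onto the chosen finite-dimensional set of piecewise constant coefficients.

    import numpy as np

    def landweber(forward, adjoint_jacobian, data, u0, step, n_iter):
        # Plain Landweber iteration: u_{k+1} = u_k - step * F'(u_k)^* (F(u_k) - data).
        # forward(u)             -> simulated data F(u)
        # adjoint_jacobian(u, r) -> F'(u)^* r, the adjoint of the Jacobian applied to a residual
        # Both callables are problem specific and must be supplied by the user.
        u = np.array(u0, dtype=float)
        for _ in range(n_iter):
            residual = forward(u) - data                    # data misfit
            u = u - step * adjoint_jacobian(u, residual)    # steepest-descent update
        return u

For the Helmholtz problem, forward would wrap the Dirichlet-to-Neumann computation and adjoint_jacobian its linearization; step plays the role of the relaxation parameter, whose admissible range is tied to the stability constants discussed above.
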
INVW05 13th December 2011
10:30 to 11:00
Re-routing of elastodynamic waves by means of transformation optics in planar, cylindrical, and spherical geometries

Transformation optics has proven a powerful tool to achieve cloaking from electromagnetic and acoustic waves. There are still technical issues with applications of transformation optics to elastodynamics, due to the fact that the elastodynamic wave equation does not in general possess suitable invariances under the required transformations. However, for a few types of materials, invariances of the appropriate kind have been shown to exist.

In the present talk we consider a few canonical scattering and reflection problems, and show that by coating the planar, cylindrical or spherical reflecting or scattering bodies with a fiber-reinforced layer of a metamaterial with a suitable gradient in material properties, the reflection or scattering of shear waves from the body can be significantly reduced.

It has been suggested that constructions inspired by transformation optics could potentially provide protection for infrastructure from seismic waves. Even if waves from earthquakes may have wavelengths making some such suggestions implausible, passive protection from shorter elastic bulk waves from other sources may be achieved by a scheme based on transformation optics. Other suggested applications are in the car and aeronautics industries.

The problems considered here, albeit rather special model problems, hopefully may provide some additional insight into protection against mechanical waves by means of transformation elastodynamics.

A result of the analysis in the spherical case is that, to maximize the number of modes to which the coated spherical body is “invisible,” rigid body rotations of the innermost part of the coating should be allowed. (However, this is only essential in the low frequency range.) It is also worth noting that since the transition matrices of the scatterers described here have, as it were, quite well-populated null-spaces, they provide simple examples of cases where complete knowledge of the scatterer and of the scattered field does not even remotely suffice to reconstruct the incident field.

INVW05 13th December 2011
14:00 to 14:30
Identification of non-linearities in transport-diffusion models of crowded motion
INVW05 13th December 2011
14:30 to 15:00
P van Leeuwen Particle filters in highly nonlinear high-dimensional systems
Bayes' theorem formulates the data-assimilation problem as a multiplication problem and not an inverse problem. In this talk we exploit this by using an extremely efficient particle filter on a highly nonlinear geophysical fluid flow problem of dimension 65,000. We show how collapse of the particles can be avoided, and discuss statistics showing that the particle filter is performing correctly.
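
For orientation, here is a minimal sketch in Python (my own illustration, not the speaker's method) of one cycle of a basic bootstrap particle filter; it shows the multiply-by-the-likelihood-then-resample structure, but plain resampling of this kind is exactly what collapses in high dimensions, which is the problem the more sophisticated filter in the talk is designed to avoid.

    import numpy as np

    def bootstrap_update(particles, y, propagate, likelihood, rng):
        # One cycle of a bootstrap particle filter.
        # particles : (N, d) array of state samples
        # y         : the new observation
        # propagate : callable advancing all particles one model step (including model noise)
        # likelihood: callable returning p(y | particle) for each particle
        particles = propagate(particles)        # forecast with the stochastic model
        w = likelihood(y, particles)            # Bayes: weight prior samples by the likelihood
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)  # resample by weight
        return particles[idx]
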
INVW05 13th December 2011
15:30 to 16:00
Spatial categorical inversion: Seismic inversion into lithology/fluid classes

Modeling of discrete variables in a three-dimensional reference space is a challenging problem. Constraints on the model expressed as invalid local combinations and as indirect measurements of spatial averages add even more complexity.

Evaluation of offshore petroleum reservoirs covering many square kilometers and buried at several kilometers depth contains problems of this type. Focus is on identification of hydrocarbon (gas or oil) pockets in the subsurface - these appear as rare events. The reservoir is classified into lithology (rock) classes - shale and sandstone - and the latter contains fluids - either gas, oil or brine (salt water). It is known that these classes are vertically thin with large horizontal continuity. The reservoir is considered to be in equilibrium - hence fixed vertical sequences of fluids - gas/oil/brine - occur due to gravitational sorting. Seismic surveys covering the reservoir are made and, through processing of the data, angle-dependent amplitudes of reflections are available. Moreover, a few wells are drilled through the reservoir and exact observations of the reservoir properties are collected along the well trace.

The inversion is phrased in a hierarchical Bayesian inversion framework. The prior model, capturing the geometry and ordering of the classes, is of Markov random field type. A particular parameterization coined Profile Markov random field is defined. The likelihood model linking lithology/fluids and seismic data captures major characteristics of rock physics models and the wave equation. Several parameters in this likelihood model are considered to be stochastic and they are inferred from seismic data and observations along the well trace. The posterior model is explored by an extremely efficient MCMC-algorithm.

The methodology is defined and demonstrated on observations from a real North Sea reservoir.

INVW05 13th December 2011
16:00 to 16:30
Practical and principled methods for large-scale data assimilation and parameter estimation

Uncertainty quantification can begin by specifying the initial state of a system as a probability measure. Part of the state (the 'parameters') might not evolve, and might not be directly observable. Many inverse problems are generalisations of uncertainty quantification such that one modifies the probability measure to be consistent with measurements, a forward model and the initial measure. The inverse problem, interpreted as computing the posterior probability measure of the states, including the parameters and the variables, from a sequence of noise-corrupted observations, is reviewed in the talk. Bayesian statistics provides a natural framework for a solution but leads to very challenging computational problems, particularly when the dimension of the state space is very large, as when it arises from the discretisation of a partial differential equation.

In this talk we show how the Bayesian framework provides a unification of the leading techniques in use today. In particular the framework provides an interpretation and generalisation of Tikhonov regularisation, a method of forecast verification and a way of quantifying and managing uncertainty. A summary overview of the field is provided and some future problems and lines of enquiry are suggested.
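As a concrete instance of that unification (stated here only for the linear-Gaussian case, added for illustration), Tikhonov regularisation coincides with the posterior mode: if $y = Au + \eta$ with noise $\eta \sim N(0,\Gamma)$ and prior $u \sim N(u_0, C)$, then the MAP estimate solves

$$ \min_u \; \tfrac{1}{2}\,\|\Gamma^{-1/2}(y - Au)\|^2 + \tfrac{1}{2}\,\|C^{-1/2}(u - u_0)\|^2 , $$

i.e. a Tikhonov functional whose regularisation operator is determined by the prior covariance.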

INVW05 13th December 2011
16:30 to 17:00
D Oliver The ensemble Kalman filter for distributed parameter estimation in porous media flow
INVW05 13th December 2011
17:00 to 17:30
Besov Priors for Bayesian Inverse problems
We consider the inverse problem of estimating a function $u$ from noisy measurements of a known, possibly nonlinear, function of $u$. We use a Bayesian approach to find a well-posed probabilistic formulation of the solution to the above inverse problem. Motivated by the sparsity promoting features of the wavelet bases for many classes of functions appearing in applications, we study the use of the Besov priors within the Bayesian formalism. This is joint work with Stephen Harris (Edinburgh) and Andrew Stuart (Warwick).
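In the wavelet characterisation commonly used for such priors (one standard convention; normalisations vary between papers), the prior on $u = \sum_{j,k} c_{j,k}\,\psi_{j,k}$ takes the form

$$ \pi(u) \propto \exp\Big(-\lambda \sum_{j,k} 2^{j p (s + d/2 - d/p)}\,|c_{j,k}|^p\Big), $$

which for $p = 1$ penalises a weighted $\ell^1$ norm of the wavelet coefficients and hence promotes sparsity.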
INVW05 14th December 2011
09:00 to 09:30
Some applications of least-squares inversion in exploration geophysics
In exploration geophysics, we often obtain subsurface images from the data we record at the surface of the Earth. This imaging problem can be formulated as a data misfit problem. However, we face a number of numerical challenges when we want to apply this approach with seismic or electromagnetic data. We first need to efficiently compute approximate solutions of the elastodynamic or electromagnetic equations. Secondly, we need to solve the inverse problem with a local optimization method because of the large problem size. During this presentation, after having briefly discussed the numerical solutions of the partial differential equations governing the physics, we shall describe the inverse formulation. Then, we shall present some of the applications and the difficulties we encounter in practice.
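In schematic form (a generic statement of the data-misfit formulation, not the speaker's specific implementation), the problem reads

$$ \min_m \; J(m) = \tfrac{1}{2}\sum_{s} \big\| R\,u_s(m) - d_s \big\|_2^2 \quad \text{subject to} \quad A(m)\,u_s = q_s , $$

where $A(m)$ is the discretised elastodynamic or electromagnetic operator, $u_s$ the field generated by source $q_s$, $R$ the restriction to the receivers and $d_s$ the recorded data; the gradient needed by a local optimisation method is obtained by the adjoint-state approach, i.e. one additional adjoint solve per source.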
INVW05 14th December 2011
09:30 to 10:00
Alternative formulations for full waveform inversion

Classical full waveform inversion is a powerful tool to retrieve the Earth properties (P- and S-velocities) from seismic measurements at the surface. It simply consists of minimizing the misfit between observed and computed data. However, the associated objective function suffers from many local minima, mainly due to the oscillatory aspect of seismic data. A local gradient approach does not usually converge to the global minimum.

We first review classical full waveform inversion and its limitations. We then present two alternatives to avoid local minima in the determination of the background (large-scale) velocity model. The first method is referred to as the Normalized Integration Method (Liu et al., 2011). The objective function measures the misfit between the integrals of the envelopes of the observed and computed signals. Because only functions that increase with time are compared, the objective function has a more convex shape.

The second method is a differential version of the full waveform inversion. This method is closely related to the differential semblance optimization method (Symes, 2008) used in seismic imaging to automatically determine the Earth properties from reflected data.

We illustrate the two methods on basic 2-D examples to discuss the advantages and limitations.

INVW05 14th December 2011
10:00 to 10:30
2D nonlinear inversion of walkaway data

Well-seismic data such as vertical seismic profiles (VSP) provide detailed information about the elastic properties of the subsurface in the vicinity of the well. Heterogeneity of sedimentary terrains can lead to non-negligible multiple scattering, one of the manifestations of the nonlinearity involved in the mapping between elastic parameters and seismic data. Unfortunately this technique is severely hampered by the 1D assumption.

We present a 2D extension of the 1D nonlinear inversion technique in the context of acoustic wave propagation. In the case of a subsurface with gentle lateral variations, we propose a regularization technique which aims at ensuring the stability of the inversion in a context where the recorded seismic waves provide a very poor illumination of the subsurface. We deal with a very large nonlinear inverse problem. Solving this difficult problem is rewarded by a vertical resolution much higher than that obtained by standard seismic imaging techniques at distances of about one hundred meters from the well.

INVW05 14th December 2011
10:30 to 11:00
Imaging using reciprocity principles: extended images, multiple scattering and nonlinear inversion
INVW05 14th December 2011
13:30 to 14:15
The future of imaging and inversion in a complex Earth
It is now over 25 years since the introduction of 3D seismic data acquisition and processing. These techniques have proven to be very useful. In fact, in a recent industry-wide survey, 3D seismic was regarded as the single most valuable technology for the hydrocarbon industry over the last two decades. The objectives of seismic surveys are to provide a structural image as well as to estimate Earth properties of the sub-surface. Due to the high demand for hydrocarbons, the industry has increasingly been exploring substantially more complex or difficult areas, such as deep water or sub-salt reservoirs. As a result, a step-change in technology for inversion and imaging has occurred, made possible by increasingly powerful computational platforms. New imaging methods such as full waveform inversion (Tarantola, Pratt) and Reverse Time Migration (RTM) utilize the full richness of recorded data (as opposed to conventional imaging methods which use simple reflections only). Consequently, industrial scientists have become increasingly aware of the limitations of what has been called 3D seismic data. These data have been limited in three respects: i) bandwidth; ii) the lateral extent of source and receiver arrays; iii) aliasing in terms of source and receiver spacing.

In my presentation I will show how recent advances overcome some of these limitations.

INVW05 14th December 2011
14:15 to 15:00
Earth imaging - a developing picture
INVW05 14th December 2011
15:30 to 16:15
Full Waveform Inversion in Laplace Domain
Seismic Full Waveform Inversion (FWI) consists in the estimation of the Earth's subsurface structure based on measurements of physical fields near its surface. It is based on the minimization of an objective function measuring the difference between predicted and observed data. FWI is mostly formulated in the time or Fourier domain. However, FWI diverges if the starting model is far from the true model. This is a consequence of the lack of low frequencies in the seismic sources, which limits the recovery of the large-scale structures in the velocity model. Re-formulating FWI in the Laplace domain using a logarithmic objective function yields a fast and efficient method capable of recovering the long-wavelength velocity structure starting from a very simple initial solution, independently of the frequency content of the data. In this presentation we will present FWI formulated in the Laplace domain and its application to synthetic and field seismic data.
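For orientation, the Laplace-domain formulation in the style of Shin and Cha (a sketch of the standard form; the talk may differ in normalisation) replaces the time-domain residual by a logarithmic residual of the Laplace-transformed wavefield,

$$ \tilde u(s) = \int_0^{T} u(t)\, e^{-st}\, dt, \qquad E(m) = \tfrac{1}{2}\sum_{\text{src, rec}} \big(\ln \tilde u_{\text{pred}}(s; m) - \ln \tilde u_{\text{obs}}(s)\big)^2 , $$

where the damping constant $s$ acts like a very low effective frequency, which is why long-wavelength velocity structure can be recovered from a crude starting model.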
INVW05 14th December 2011
16:15 to 17:00
Seismic inverse problems - towards full wavefield acquisition?
INVW05 15th December 2011
09:00 to 09:30
Inversion of Pressure Transient Testing Data
In the oilfield, pressure transient testing is an ideal tool for determining average reservoir parameters, but this technique does not fully quantify the uncertainty in the spatial distribution of these parameters. We wish to determine plausible parameter distributions, consistent with both the pressure transient testing data and prior geological knowledge. We used a Langevin-based MCMC technique, adapted to the large number of parameters and data, to identify geological features and characterize the uncertainty.
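A generic Metropolis-adjusted Langevin (MALA) step of the kind underlying such samplers is sketched below; log_post and grad_log_post are placeholders for the simulator-based posterior, and nothing here is specific to the pressure transient application.

import numpy as np

def mala_step(theta, log_post, grad_log_post, eps, rng):
    # One Metropolis-adjusted Langevin step with step size eps.
    g = grad_log_post(theta)
    prop = theta + 0.5 * eps**2 * g + eps * rng.standard_normal(theta.shape)
    g_prop = grad_log_post(prop)

    def logq(a, b, gb):
        # log density of proposing a from b (Gaussian centred at the Langevin drift).
        return -np.sum((a - b - 0.5 * eps**2 * gb)**2) / (2 * eps**2)

    log_alpha = (log_post(prop) + logq(theta, prop, g_prop)
                 - log_post(theta) - logq(prop, theta, g))
    if np.log(rng.random()) < log_alpha:
        return prop, True    # accepted
    return theta, False      # rejected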
INVW05 15th December 2011
09:30 to 10:00
P Sacks Inverse problems for the potential form wave equation in an annulus
INVW05 15th December 2011
10:00 to 10:30
Geometrical implications of the compound representation for weakly scattered amplitudes
It is known that a random walk model yields the multiplicative representation of a coherent scattered amplitude in terms of a complex Ornstein–Uhlenbeck process modulated by the square root of the cross-section. A corresponding biased random walk enables the derivation of the dynamics of a weak coherent scattered amplitude as a stochastic process in the complex plane. Strong and weak scattering patterns differ regarding the correlation structure of their radial and angular fluctuations. Investigating these geometric characteristics yields two distinct procedures to infer the scattering cross-section from the phase and intensity fluctuations of the scattered amplitude. These inference techniques generalize an earlier result demonstrated in the strong scattering case. Their significance for experimental applications, where the cross-section enables tracking of anomalies, is discussed.
INVW05 15th December 2011
10:30 to 11:00
SV Utyuzhnikov Inverse Source Problems of Active Sound Control for Composite Domains

In the active noise shielding problem, a quite arbitrary domain (bounded or unbounded) is shielded from the field (noise) generated outside via the introduction of additional sources. Along with noise, the presence of internal (wanted) sound sources is admitted. Active shielding is achieved by constructing additional (secondary) sources in such a way that the total contribution of all sources leads to noise attenuation. In contrast to passive control, there is no mechanical insulation in the system. In practice, active and passive noise control strategies can often be combined, because passive insulation is more efficient for higher frequencies, whereas active shielding is more efficient for lower frequencies.

The problem is formulated as an inverse source problem with the secondary sources positioned outside the domain to be shielded. The solution to the problem is obtained in both the frequency and time domains, and is based on Calderón–Ryaben'kii surface potentials [1]. A key property of these potentials is that they are projections. The constructed solution to the problem requires only the knowledge of the total field at the perimeter of the shielded domain [1-3]. In practice, usually only the total field can be measured. The methodology automatically differentiates between the wanted and unwanted components of the field. A unique feature of the proposed methodology is its capability to cancel the unwanted noise across the volume and keep the wanted sound unaffected. It is important that the technique requires no detailed information on either the properties of the medium or the noise sources.

The technique can also be extended to a composite protected region (multiply connected) [4]. Moreover, the overall domain can be split arbitrarily into a collection of subdomains, and those subdomains are selectively allowed to either communicate freely or otherwise be shielded from their peers. In doing so, no reciprocity is assumed, i.e., for a given pair of subdomains one may be allowed to hear the other, but not vice versa. Possible applications of this approach to engineering problems such as oil prospecting are discussed.

INVW05 15th December 2011
14:00 to 14:30
Lipschitz stability of an inverse problem for the Helmholtz equation
Consider the inverse problem of determining the potential q from the Neumann-to-Dirichlet map of a Schrödinger-type equation.

A relevant question, especially in applications, is the stability of the inversion. In this work, a Lipschitz-type stability estimate is established assuming a priori that q is piecewise constant with a bounded and known number of unknown values.

INVW05 15th December 2011
14:30 to 15:00
P Childs Numerics of waveform inversion for seismic data

Depth imaging and inversion of seismic data is becoming commonplace within the seismic industry. However the inversion procedures used today have a highly non-convex objective function and will often fail unless careful multiscale processing has been included in the workflow. In this talk, we will review some approaches to improving the robustness of the procedure.

Because the PDE-constrained inversion procedure used in industry can be very expensive due to the large number of PDE solves required, we will review and address some of the numerical challenges in this area. The talk will concentrate mainly on computational developments and will be illustrated with industrial examples from full waveform inversion of seismic data.

INVW05 15th December 2011
15:30 to 16:00
A wave equation based Kirchhoff operator and its inverse

In seismic imaging one tries to compute an image of the singularities in the earth's subsurface from seismic data. Seismic data sets used in the exploration for oil and gas usually consist of a collection of sources and receivers, which are both positioned at the surface of the earth. Since each receiver records a time series, the ideal seismic data set is five dimensional: sources and receivers both have two spatial coordinates and these four spatial coordinates are complemented by one time variable.

Singularities in the earth give rise to scattering of incident waves. The most common situation is that of reflection against an interface of discontinuity. Reflected and incoming waves are related via reflection coefficients, which depend in general on two angles, namely the angle of incidence and the azimuth angle. Reflection coefficients are therefore also dependent on five variables, namely three location variables and two angles.

The classical Kirchhoff integral can be seen as an operator mapping these angle-azimuth dependent reflection coefficients to singly scattered data generated and recorded at the surface. It essentially depends on asymptotic quantities which can be computed via ray tracing. For a known velocity model, seismic imaging comes down to finding a left inverse of the Kirchhoff operator.

In this talk I will construct such a left inverse explicitly. The construction uses the well known concepts of subsurface offset and subsurface angle gathers and is completely implementable in a wave equation framework. Being able to perform such true amplitude imaging in a wave equation based setting has significant advantages in truly complex geologies, where an asymptotic approximation to the wave equation does not suffice. The construction also naturally leads to a reformulation of the classical Kirchhoff operator into a wave equation based variant, which can be used e.g. for wave equation based least squares migration. Finally, I will discuss invertibility of the new Kirchhoff operator, i.e. I will construct a right inverse as well.

INVW05 15th December 2011
16:00 to 16:30
L Demanet Can we determine low frequencies from high frequencies?
Data usually come in a high frequency band in wave-based imaging, yet one often wishes to determine large-scale features of the model that predicted them. When is this possible? Both the specifics of wave propagation and signal structure matter in trying to deal with this multifaceted question. I report on some recent progress with Paul Hand and Hyoungsu Baek. The answers are not always pretty.
INVW05 15th December 2011
16:30 to 17:00
Band-limited ray tracing
We present a new band-limited ray tracing method that aims to overcome some of the limitations of standard high-frequency ray tracing in complex velocity models, particularly those containing complex boundaries. Our method is based on a band-limited Snell's law, which is derived from the Kirchhoff integral formula by localization around a boundary location of interest using the Fresnel volume.
INVW05 15th December 2011
17:00 to 17:30
Level Set Methods for Inverse Problems
INVW05 16th December 2011
09:00 to 09:30
Numerical analysis of structural identifiability of electrochemical systems
Development of an experiment-based model often encounters the so-called identifiability problem. Namely, if there is a system of (e.g., differential) equations at our disposal and a set of experiments to perform, the question arises whether the planned experiments allow for reliable identification of the parameters of the model, such as reaction rates or diffusivities. Since in many cases the initial answer is negative, one has to modify the experimental design. In the present research we consider the identifiability of a system of reaction-diffusion equations and explicitly calculate the experimental conditions which allow for the most reliable identification of the model’s parameters. In our approach, solution of the identifiability problem requires finding the global maximum of a specially designed function, and it is shown that the identifiability criterion equals the ratio of the parameters’ uncertainty to the experimental error under a worst-case scenario, i.e., it characterizes the precision of the identification procedure. Since the outcome of our identifiability test is not simply “yes” or “no”, but a number, one can modify the experimental conditions in order to minimize the uncertainty.
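A common local surrogate for such an analysis, added here only as an illustration and not the global worst-case criterion of the talk, inspects the singular values of a finite-difference sensitivity matrix: near-zero singular values flag parameter combinations the planned experiment cannot identify reliably. The function model is a placeholder for the reaction-diffusion solver.

import numpy as np

def sensitivity_matrix(model, params, t, h=1e-6):
    # Columns are finite-difference sensitivities of the model output w.r.t. each parameter.
    params = np.asarray(params, dtype=float)
    y0 = model(params, t)
    return np.column_stack([(model(params + h * e, t) - y0) / h for e in np.eye(len(params))])

def identifiability_report(S):
    # Large condition numbers indicate practically unidentifiable parameter combinations
    # for the chosen experimental design.
    sv = np.linalg.svd(S, compute_uv=False)
    return sv, sv[0] / sv[-1]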
INVW05 16th December 2011
09:30 to 10:00
G Vitale Force Traction Microscopy: an inverse problem with pointwise observations
Force Traction Microscopy is an inversion method that allows one to obtain the stress field applied by a living cell on its environment on the basis of pointwise knowledge of the displacement produced by the cell itself. This classical biophysical problem, usually addressed in terms of Green functions, can alternatively be tackled using a variational framework and a finite element discretization. In this case, a Tikhonov functional with suitable regularization is minimized. This setting naturally suggests the introduction of a new equation, based on the adjoint operator of the elasticity problem. The pointwise observations require exploiting the theory of elasticity extended to forcing terms that are Borel measures. In this work we show the proof of well-posedness of the above problem, borrowing techniques from the field of Optimal Control. We also illustrate a numerical strategy for the inversion method that discretizes the partial differential equations associated with the optimal control problem. A detailed discussion of the numerical approximation of a test problem (with known solution) that contains most of the mathematical difficulties of the real one allows a precise evaluation of the degree of confidence that one can have in the numerical results.
INVW05 16th December 2011
10:00 to 10:30
Inverse Problems in the Prediction of Reservoir Petroleum Properties using Multiple Kernel Learning

In reservoir engineering a common inverse problem is that of estimating reservoir properties such as Porosity and Permeability by matching the simulation model to the dynamic production data. Using this model, future predictions can then be made and the uncertainty of these predictions quantified using Bayes' rule.

Multiple Kernel Learning (MKL) is an inverse problem that maps input data into a feature space with the use of kernel functions. MKL is a predictive tool that has been applied in the petroleum industry to estimate the spatial distribution of Porosity and Permeability. The parameters of the kernels and the choice of the kernels are determined by matching to hard data for Porosity and Permeability found at the wells, thus producing a static model that is used as input into the dynamic model.

In this paper we show how we combine the above mentioned inverse problems. We estimate the Porosity and Permeability in a static model and then match to the dynamic production data to tune the parameters in the Multiple Kernel Learning framework. Specifically, we integrate the MLE estimate from the MKL objective function into the history matching function.
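Stripped of the reservoir context, the basic building block is a nonnegative combination of base kernels used inside a kernel regression. A minimal sketch follows (the function names and the fixed weights are hypothetical; in the talk the weights and kernel parameters are tuned against the well and production data).

import numpy as np

def rbf(X, Y, gamma):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def mkl_predict(X_train, y_train, X_test, gammas, weights, reg=1e-3):
    # Combined kernel K = sum_i w_i K_i, followed by kernel ridge regression.
    K = sum(w * rbf(X_train, X_train, g) for w, g in zip(weights, gammas))
    Ks = sum(w * rbf(X_test, X_train, g) for w, g in zip(weights, gammas))
    alpha = np.linalg.solve(K + reg * np.eye(len(y_train)), y_train)
    return Ks @ alpha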

INVW05 16th December 2011
10:30 to 11:00
S Rouquette Estimation of the heat flux parameters during a static Gas Tungsten Arc Welding experiment

The Gas Tungsten Arc (GTA) welding process is mainly used for assembling metallic structures which require a high level of safety (and hence excellent joint quality). This welding process is based on an electrical arc created between a tungsten electrode and the base metal (the work-pieces to assemble). An inert gas flow (argon and/or helium) shields the tungsten electrode and the molten metal against oxidation. The energy required for melting the base metal comes from the heat generated by the electrical arc. The GTAW process involves a combination of physical phenomena: heat transfer, fluid flow and self-induced electromagnetic force. Mechanisms involved in the weld pool formation and geometry are surface tension, impinging arc pressure, buoyancy force and Lorentz force. It is well known that for welding currents below 200 A, GTAW phenomena are well described by a heat transfer and fluid flow model together with the Marangoni force on the weld pool. Knowledge of the heat flux at the arc plasma - work-piece interface is one of the key parameters for establishing a predictive multiphysics GTAW simulation.

In this work, we investigate the estimation of the heat source by an inverse technique with a heat transfer and fluid flow model of the GTAW process. The heat source is described by a Gaussian function involving two parameters: the process efficiency and the Gaussian radius. These two parameters are not known accurately and need to be estimated. An inverse technique regularized with the Levenberg-Marquardt Algorithm (LMA) is therefore employed for the estimation of these two parameters. All the stages of the LMA are described. A sensitivity analysis has been performed in order to determine whether the thermal data and thermocouple locations are suitable for estimating the two parameters simultaneously. The linear dependence between the two estimated parameters is studied. Then the sensitivity matrix is built and the inverse heat flux problem (IHFP) is solved. The robustness of the stated IHFP is investigated through a few numerical cases. Lastly, the IHFP is solved with experimental thermal data and the results are discussed.
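A bare-bones Levenberg-Marquardt loop for the two parameters (process efficiency and Gaussian radius) is sketched below; residual(p) stands in for the full heat transfer and fluid flow simulation minus the thermocouple measurements, and the update rule is the standard damped Gauss-Newton step rather than the authors' exact implementation.

import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-2, h=1e-6):
    # residual(p) returns the vector of (simulated - measured) temperatures.
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        # Finite-difference Jacobian (two columns: efficiency and Gaussian radius).
        J = np.column_stack([(residual(p + h * e) - r) / h for e in np.eye(len(p))])
        JtJ = J.T @ J
        step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), -J.T @ r)
        if np.linalg.norm(residual(p + step)) < np.linalg.norm(r):
            p, lam = p + step, lam / 2   # accept the step, reduce the damping
        else:
            lam *= 4                     # reject the step, increase the damping
    return p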

INVW05 16th December 2011
14:00 to 14:30
Diffeomorphic Image Registration
The deformation of an image so that its appearance more closely matches that of another image (image registration) has applications in many fields, from medical image analysis through evolutionary biology and fluid dynamics to astronomy. Over recent years there has been a great deal of interest in smooth, invertible (i.e., diffeomorphic) warps, not least because the underlying Euler-Poincare PDEs are geodesic equations on the diffeomorphism group with respect to a group-invariant metric. In this talk I will summarise the work in the field from the inverse problems point of view and highlight areas of future work.
INVW05 16th December 2011
14:30 to 15:00
C Nolan Microlocal Analysis of Bistatic Synthetic Aperture Radar Imaging
INVW05 16th December 2011
15:30 to 16:00
Finite difference resistivity modeling on unstructured grids with large conductivity contrasts
The solution of the 3-D electrical forward problem faces several difficulties. Besides the singularity at the source location, major issues are caused by the definition of the computational domain to match a particular topography, and by high conductivity contrasts. To address these issues, we combine two methods. First, we implement a specific finite difference method that takes into account specified interfaces in elliptic problems. Here, the contrasts are defined along grid lines. Second, we extend the method to unstructured meshes by integrating it into the generalized finite difference technique. In practice, once the conductivity model is defined, the approach does not need to explicitly specify where the large contrasts are located. Several numerical tests are carried out for various Poisson problems and show a high degree of accuracy.
INVW05 16th December 2011
16:00 to 16:30
Bar Code Scanning -- An Inverse Problem for Words
Bar codes are ubiquitous -- they are used to identify products in stores, parts in a warehouse, and books in a library, etc. In this talk, the speaker will describe how information is encoded in a bar code and how it is read by a scanner. The presentation will go over how the decoding process, from scanner signal to coded information, can be formulated as an inverse problem. The inverse problem involves finding the "word" hidden in the signal. What makes this inverse problem, and the approach to solve it, somewhat unusual is that the unknown has a finite number of states.
INVW05 16th December 2011
16:30 to 17:00
Position tomography and seismic inversion
Active source seismic data may depend on more parameters than the spatial dimension of the earth model, and thus must satisfy some internal consistency conditions. Membership in the kernel of an annihilation operator provides one useful way to express these conditions. For linearized data simulation with smooth reference model, annihilators may belong to well-studied classes of oscillatory integral operators, which have rich geometric structure. I will describe generally how annihilators arise and lead to inversion algorithms, and specifically how space-shift annihilators and associated position tomography problems may be used to determine the reference model in the linearized description of reflected waveform inversion.
INVW07 7th February 2014
11:00 to 11:45
E-M Brinkmann Exploiting joint sparsity information by coupled Bregman iterations
Co-authors: Eva-Maria Brinkmann (WWU Münster), Michael Möller (Arnold & Richter Cinetechnik), Tamara Seybold (Arnold & Richter Cinetechnik)

Many applications are concerned with the reconstruction or denoising of multichannel images (color, spectral, time) with natural prior information of correlated sparsity patterns. The most striking example is joint edge sparsity across the different channels of a color image.

We discuss how such prior information can be encoded in Bregman distances for frequently used one-homogeneous functionals, and introduce a novel concept of infimal convolution of Bregman distances. We then discuss appropriate modifications of Bregman iterations towards a coupled reconstruction scheme. First results are presented for color image denoising.
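For orientation, a single-channel linearized Bregman iteration for an l1-regularised problem is sketched below; the coupled, infimal-convolution variant discussed in the talk is not reproduced here, and delta must be chosen small enough for convergence.

import numpy as np

def linearized_bregman(A, b, mu, delta, n_iter=500):
    # Approximately solves min ||u||_1 subject to A u = b by linearized Bregman iteration.
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ u)                                   # dual (Bregman) variable update
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0)   # soft shrinkage of the dual variable
    return u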

INVW07 7th February 2014
11:45 to 12:15
M Betcke A priorconditioned LSQR algorithm for linear ill-posed problems with edge-preserving regularization
Co-authors: Simon Arridge (University College London), Lauri Harhanen (Aalto University)

In this talk we present a method for solving large-scale linear inverse problems regularized with a nonlinear, edge-preserving penalty term such as total variation or Perona–Malik. In the proposed scheme, the nonlinearity is handled with a lagged diffusivity fixed point iteration, which involves solving a large-scale linear least squares problem in each iteration. The size of the linear problem calls for iterative methods, e.g. Krylov methods, which are matrix-free, i.e. the forward map can be defined through its action on a vector. Because the convergence of Krylov methods for problems with discontinuities is notoriously slow, we propose to accelerate it by means of priorconditioning. Priorconditioning is a technique which embeds the information contained in the prior (expressed as a regularizer in the Bayesian framework) directly into the forward operator and hence into the solution space. We derive a factorization-free priorconditioned LSQR algorithm, allowing implicit application of the preconditioner through efficient schemes such as multigrid. We demonstrate the effectiveness of the proposed scheme on a three-dimensional problem in fluorescence diffuse optical tomography using an algebraic multigrid preconditioner.
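A minimal illustration of the lagged diffusivity fixed point, for a 1-D denoising problem with an identity forward map, is sketched below; the priorconditioned LSQR solver and the multigrid preconditioner of the talk are replaced here by a direct sparse solve.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lagged_diffusivity_denoise(b, alpha=1.0, beta=1e-3, n_outer=30):
    # 1-D TV-like denoising: min 0.5*||u - b||^2 + alpha * sum sqrt(|Du|^2 + beta).
    n = len(b)
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    u = b.copy()
    for _ in range(n_outer):
        w = 1.0 / np.sqrt((D @ u)**2 + beta)        # lagged diffusivity weights
        A = sp.eye(n) + alpha * (D.T @ sp.diags(w) @ D)
        u = spla.spsolve(sp.csc_matrix(A), b)       # inner linear solve (Krylov/multigrid in practice)
    return u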

INVW07 7th February 2014
13:45 to 14:30
A statistical perspective on sparse regularization and geometric modelling
Consider a typical inverse problem where we wish to reconstruct an unknown function from a set of measurements. When the function is discretized it is usual for the number of data points to be insufficient to uniquely determine the unknowns – the problem is ill-posed. One approach is to reduce the size of the set of eligible solutions until it contains only a single solution – the problem is regularized. There are, however, infinitely many possible restrictions, each leading to a unique solution. Hence the choice of regularization is crucial, but the best choice, even amongst those commonly used, is still difficult. Such regularized reconstruction can be placed into a statistical setting where data fidelity becomes a likelihood function and regularization becomes a prior distribution. Reconstruction then becomes a statistical inference task solved, perhaps, using the posterior mode. The common regularization approaches then correspond to different choices of prior distribution. In this talk the ideas of regularized estimation, including ridge, lasso, bridge and elastic-net regression methods, will be defined. Application of sparse regularization to basis function expansions, and other dictionary methods such as wavelets, will be discussed. Their link to smooth and sparse regularization, and to Bayesian estimation, will be considered. As an alternative to locally constrained reconstruction methods, geometric models impose a global structure. Such models are usually problem specific, compared to more generic locally constrained methods, but when the parametric assumptions are reasonable they will make better use of the data, provide simpler models and can include parameters which may be used directly, for example in monitoring or control, without the need for extra post-processing. Finally, the matching of modelling and estimation styles with numerical procedures, to produce efficient algorithms, will be discussed.
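For reference, the penalised estimators named above share the form $\hat\beta = \arg\min_\beta \|y - X\beta\|_2^2 + \lambda P(\beta)$ with (standard definitions, not the talk's specific notation)

$$ P_{\text{ridge}}(\beta)=\|\beta\|_2^2,\quad P_{\text{lasso}}(\beta)=\|\beta\|_1,\quad P_{\text{bridge}}(\beta)=\sum_j |\beta_j|^\gamma,\quad P_{\text{elastic net}}(\beta)=\alpha\|\beta\|_1+(1-\alpha)\|\beta\|_2^2 , $$

and in the Bayesian reading ridge corresponds to a Gaussian prior and the lasso to a Laplace prior on $\beta$, the penalised estimate being the posterior mode.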
INVW07 7th February 2014
14:30 to 15:00
A primal dual method for inverse problems in MRI with non-linear forward operators
Co-authors: Martin Benning (University of Cambridge), Dan Holland (University of Cambridge), Lyn Gladden (University of Cambridge), Carola-Bibiane Schönlieb (University of Cambridge), Florian Knoll (New York University), Kristian Bredies (University of Graz)

Many inverse problems inherently involve non-linear forward operators. In this talk, I concentrate on two examples from magnetic resonance imaging (MRI). One is modelling the Stejskal-Tanner equation in diffusion tensor imaging (DTI), and the other is decomposing a complex image into its phase and amplitude components for MR velocity imaging, in order to regularise them independently. The primal-dual method of Chambolle and Pock is advantageous for convex problems where sparsity in the image domain is modelled by total variation type functionals; I have recently extended it to non-linear operators. Besides motivating the algorithm by the above applications, through earlier collaborative efforts using alternative convex models, I will sketch the main ingredients for proving local convergence of the method. Then I will demonstrate very promising numerical performance.
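For orientation, the basic primal-dual iteration of Chambolle and Pock for the convex problem $\min_x F(Kx) + G(x)$ with a linear operator $K$ reads (the talk's contribution is the extension to non-linear $K$, where the operator is re-linearised during the iteration)

$$ y^{k+1} = \operatorname{prox}_{\sigma F^*}\big(y^k + \sigma K\bar x^k\big),\qquad x^{k+1} = \operatorname{prox}_{\tau G}\big(x^k - \tau K^* y^{k+1}\big),\qquad \bar x^{k+1} = x^{k+1} + \theta\,(x^{k+1}-x^k). $$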

INVW07 7th February 2014
15:30 to 16:00
K Chen Restoration of images with blur and noise - effective models for known and unknown blurs
In recent years, the interdisciplinary field of imaging science has been experiencing an explosive growth in active research and applications.

In this talk I shall present some recent and new work on modelling the inverse problem of removing noise and blur in a given observed image. Here we assume that Gaussian additive noise is present and the blur is defined by some linear filters. Inverting the filtering process does not lead to unique solutions without suitable regularization. There are several cases to discuss:

Firstly I discuss the problem of how to select optimal coupling parameters, given an accurate estimate of the noise level, in a total variation (TV) optimisation model.

Secondly I show a new algorithm for imposing the positivity constraint for the TV model for the case of a known blur.

Finally I show how to generalise the new idea to blind deconvolution, where the blur operator is unknown and must be restored along with the image. Again the TV regularisers are used. However, with the splitting idea, our work can be extended to include other high order regularizers such as the mean curvature.
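A representative form of the blind model (a sketch in the spirit of TV-regularised blind deconvolution, not necessarily the exact functional of the talk) is

$$ \min_{u,\,k}\; \tfrac{1}{2}\,\|k * u - f\|_2^2 + \alpha\,\mathrm{TV}(u) + \beta\,\mathrm{TV}(k), \qquad k \ge 0,\ \textstyle\int k = 1, $$

usually minimised by alternating between updates of the image $u$ and of the blur kernel $k$.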

Once an observed image is improved, further tasks such as segmentation and co-registration become feasible. There will be potentially ample applications to follow up.

Joint work with B. Williams, J. P. Zhang, Y.Zheng, S. Harding (Liverpool) and E. Piccolomini, F. Zama (Bologna). Other collaborators in imaging in general include T. F. Chan, R. H. Chan, B. Yu, N. Badshah, H. Ali, L. Rada, C. Brito, L. Sun, F. L. Yang, N. Chumchob, M. Hintermuller, Y. Q. Dong, X. C. Tai, etc.

Related Links: http://www.liv.ac.uk/~cmchenke - Home page

INVW07 7th February 2014
16:00 to 16:30
Deghosting seismic data by sparse reconstruction
In marine environments, seismic reflection data is typically acquired with acoustic sensors attached to multiple streamers towed relatively close to the sea surface. Upward going waves reflect from the sea surface and destructively interfere with the primary signal. Ideally we would like to deconvolve these “ghost” events from our data. However, their phase delay depends on the angle of propagation at the receiver, and unfortunately, streamer separation is such that most frequencies of interest are aliased, so this angle cannot be easily determined.

In this talk, I will show how the problem can be addressed with the machinery of compressed sensing. I will illustrate with data examples how the trade-offs involved in the choice of basis function, the choice of sparse solver, the dimensionality in which the problem is framed, and the accuracy of the physics in the forward model all affect the quality and cost of the reconstruction.

INVW07 7th February 2014
16:30 to 17:00
Compressed sensing in the real world - The need for a new theory
Compressed sensing is based on the three pillars: sparsity, incoherence and uniform random subsampling. In addition, the concepts of uniform recovery and the Restricted Isometry Property (RIP) have had a great impact. Intriguingly, in an overwhelming number of inverse problems where compressed sensing is used or can be used (such as MRI, X-ray tomography, Electron microscopy, Reflection seismology etc.) these pillars are absent. Moreover, easy numerical tests reveal that with the successful sampling strategies used in practice one does not observe uniform recovery nor the RIP. In particular, none of the existing theory can explain the success of compressed sensing in a vast area where it is used. In this talk we will demonstrate how real world problems are not sparse, yet asymptotically sparse, coherent, yet asymptotically incoherent, and moreover, that uniform random subsampling yields highly suboptimal results. In addition, we will present easy arguments explaining why uniform recovery and the RIP is not observed in practice. Finally, we will introduce a new theory that aligns with the actual implementation of compressed sensing that is used in applications. This theory is based on asymptotic sparsity, asymptotic incoherence and random sampling with different densities. This theory supports two intriguing phenomena observed in reality: 1. the success of compressed sensing is resolution dependent, 2. the optimal sampling strategy is signal structure dependent. The last point opens up for a whole new area of research, namely the quest for the optimal sampling strategies.
INVW06 10th February 2014
09:45 to 10:30
Seeing Through Space Time
We consider inverse problems for the Einstein equation with a time-dependent metric on a 4-dimensional globally hyperbolic Lorentzian manifold. We formulate the concept of active measurements for relativistic models. We do this by coupling the Einstein equations with equations for scalar fields.

The inverse problem we study is whether observations of the solutions of the coupled system in an open subset of the space-time, with the sources supported in an open set, determine the properties of the metric in a larger domain. To study this problem we define the concept of light observation sets and show that these sets determine the conformal class of the metric. This corresponds to passive observations from a distant area of space which is filled by light sources.

This is joint work with Y. Kurylev and G. Uhlmann.

INVW06 10th February 2014
11:00 to 11:45
Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion
Co-authors: Jeffrey D. Blanchard (Grinnell College), Ke Wei (University of Oxford)

Compressed sensing and matrix completion are techniques by which simplicity in data can be exploited for more efficient data acquisition. For instance, if a matrix is known to be (approximately) low rank then it can be recovered from few of its entries. The design and analysis of computationally efficient algorithms for these problems has been extensively studied over the last 8 years. In this talk we present a new algorithm that balances low per-iteration complexity with fast asymptotic convergence. This algorithm has been shown to have faster recovery time than any other known algorithm in the area, both for small scale problems and massively parallel GPU implementations. The new algorithm adapts the classical nonlinear conjugate gradient algorithm and shows the efficacy of a linear algebra perspective on compressed sensing and matrix completion.
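As a baseline against which such accelerations can be understood (this is plain iterative hard thresholding, not the authors' conjugate gradient variant), a sketch:

import numpy as np

def iht(A, b, k, n_iter=300):
    # Iterative hard thresholding: x <- H_k(x + A^T (b - A x)).
    # Assumes A is scaled so that its spectral norm is below 1.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (b - A @ x)
        small = np.argsort(np.abs(x))[:-k]   # indices of all but the k largest entries
        x[small] = 0.0                        # hard threshold: keep only a k-sparse vector
    return x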

INVW06 10th February 2014
11:45 to 12:30
S Arridge Quantitative PhotoAcoustics Using the Transport Equation
Quantitative photoacoustic tomography involves the reconstruction of a photoacoustic image from surface measurements of photoacoustic wave pulses followed by the recovery of the optical properties of the imaged region. The latter is, in general, a nonlinear, ill-posed inverse problem, for which model-based inversion techniques have been proposed. Here, the full radiative transfer equation is used to model the light propagation, and the acoustic propagation and image reconstruction are solved using a pseudo-spectral time-domain method. Direct inversion schemes are impractical when dealing with real, three-dimensional images. In this talk an adjoint field method is used to efficiently calculate the gradient in a gradient-based optimisation technique for simultaneous recovery of absorption and scattering coefficients.

Joint work with B. Cox, T. Saratoon, T. Tarvainen.

INVW06 10th February 2014
13:30 to 14:15
Reconstruction of the wave speed in a geophysical inverse problem
We analyze the inverse problem, originally formulated by Dix in geophysics, of reconstructing the wave speed inside a domain from boundary measurements associated with the single scattering of seismic waves. We consider a domain $M$ with a varying and possibly anisotropic wave speed which we model as a Riemannian metric $g$. For our data, we assume that $M$ contains a dense set of point scatterers and that in a subset $U\subset M$, modeling the domain that contains the measurement devices, e.g., the Earth's surface in seismic measurements, we can produce sources and measure the wave fronts of the single scattered waves diffracted from the point scatterers. The inverse problem we study is to recover the metric $g$ in $M$ up to a change of coordinates. To do this we show that the shape operators related to wave fronts produced by the point scatterers within $M$ satisfy a certain system of differential equations which may be solved along geodesics of the metric. In this way, assuming we know $g$ as well as the shape operator of the wave fronts in the region $U$, we may recover $g$ in certain coordinate systems (i.e. Riemannian normal coordinates centered at point scatterers).

The reconstruction of the Riemannian metric reduces to the problem of determining unknown coefficient functions in a system of Riccati equations that the shape operators satisfy. This generalizes the well-known geophysical method of Dix to metrics which may depend on all spatial variables and be anisotropic. In particular, the novelty of this solution lies in the fact that it can be used to reconstruct the metric also in the presence of caustics.

These results were obtained in collaboration with Maarten de Hoop, Sean Holman, Einar Iversen, and Bjorn Ursin.

INVW06 10th February 2014
14:15 to 15:00
O Scherzer Mathematical Modeling of Optical Coherence Tomography
Co-authors: Peter Elbau (University of Vienna), Leonidas Mindrinos (University of Vienna)

In this talk we present mathematical methods to formulate Optical Coherence Tomography (OCT) on the basis of electromagnetic theory. OCT produces high-resolution images of the inner structure of biological tissues. Images are obtained by measuring the time delay and the intensity of backscattered or back-reflected light from the sample, taking into account also the coherence properties of light. A general mathematical problem for OCT is presented, considering the sample field as a solution of Maxwell's equations. Moreover, we present some imaging formulas.

INVW06 10th February 2014
15:30 to 16:15
A Data-Driven Edge-Preserving D-bar Method for Electrical Impedance Tomography
Co-authors: Sarah Hamilton (University of Helsinki), Andreas Hauptmann (University of Helsinki)

Electrical Impedance Tomography (EIT) is a non-invasive, inexpensive, and portable imaging modality where an unknown physical body is probed with electric currents fed through electrodes positioned on the surface of the body. The resulting voltages at the electrodes are measured, and the goal is to recover the internal electric conductivity of the body from the current-to-voltage boundary measurements. The reconstruction task is a highly ill-posed nonlinear inverse problem, which is very sensitive to noise, and requires the use of regularized solution methods. EIT images typically have low spatial resolution due to smoothing caused by regularization. A new edge-preserving EIT algorithm is proposed, based on applying a deblurring flow stopped at minimal data discrepancy. The method makes heavy use of a novel data fidelity term based on the so-called CGO sinogram. This nonlinear data preprocessing step provides superior robustness over traditional EIT data formats such as current-to-voltage matrix or Dirichlet-to-Neumann operator.

Related Links: http://arxiv.org/abs/1312.5523 - Arxiv preprint

INVW06 10th February 2014
16:15 to 17:00
B Cox Photoacoustic tomography: progress and open problems
Photoacoustic tomography (PAT) is an emerging biomedical imaging modality which exploits the photoacoustic effect, whereby light absorption gives rise to ultrasound waves. It is already being used in a number of applications, such as preclinical and breast imaging, and for cancer and drug research. There are two inverse problems in PAT: an acoustic inversion and a diffuse optical inversion, which can be decoupled because of the differences in the timescale of acoustic and optical propagation. A great deal of work has been done on the former, and progress has been made on the latter in recent years. However, there remain several open image reconstruction problems of considerable practical importance, both acoustic and optical. This talk will give an overview of PAT, describe the various experimental systems available for making PAT measurements, highlight the progress made to date, and introduce some remaining unsolved inverse problems of interest.
INVW06 11th February 2014
09:00 to 09:45
Topological reduction of the inverse Born series
I will discuss a fast direct method to solve the inverse scattering problem for diffuse waves. Applications to optical tomography will be described.
INVW06 11th February 2014
09:45 to 10:30
D Calvetti Sequential Monte Carlo and particle methods in inverse problems
Co-authors: Andrea Arnold (CWRU), Erkki Somersalo (CWRU)

In sequential Monte Carlo methods, the posterior distribution of an unknown of interest is explored in a sequential manner, by updating the Monte Carlo sample as new data arrive. In a similar fashion, particle filtering encompasses different sampling techniques to track the time course of a probability density that evolves in time based on partial observations of it. Methods that combine particle filters and sequential Monte Carlo have been developed for some time, mostly in connection with estimating unknown parameters in stochastic differential equations. In this talk, we present some new ideas suitable for treating large scale, non-stochastic, severely stiff systems of differential equations combining sequential Monte Carlo methods with classical numerical analysis concepts.

INVW06 11th February 2014
11:00 to 11:45
Bayesian preconditioning for truncated Krylov subspace regularization with an application to Magnetoencephalography (MEG)
Co-authors: Daniela Calvetti (Case Western Reserve University), Laura Homa (Case Western Reserve University)

We consider the computational problem arising in magnetoencephalography (MEG), where the goal is to estimate the electric activity within the brain non-invasively from extra-cranial measurements of the magnetic field components. The problem is severely ill-posed due to the intrinsic non-uniqueness of the solution, and suffers further from the challenges of a weak data signal, its high dimensionality and the complexity of the noise, part of which is due to the brain itself. We propose a new algorithm that is based on the truncated conjugate gradient algorithm for least squares (CGLS) with statistically inspired left and right preconditioners. We demonstrate that by carefully accounting for the spatiotemporal statistical structure of the brain noise, and by adopting a suitable prior within the Bayesian framework, we can design a robust and efficient method for the numerical solution of the MEG inverse problem which can improve the spatial and temporal resolution of events of short duration.

INVW06 11th February 2014
13:30 to 14:15
Four-dimensional X-ray tomography
In recent years, mathematical methods have enabled three-dimensional medical X-ray imaging using a much lower radiation dose than before. One example of products based on such an approach is the 3D dental X-ray imaging device called VT, manufactured by Palodex Group. The idea is to collect fewer projection images than traditional computerized tomography machines and then use advanced mathematics to reconstruct the tissue from such incomplete data. The idea can be taken further by placing several pairs of X-ray source and detector "filming" the patient from many directions at the same time. This allows in principle recovering the three-dimensional inner structure as a function of time. There are many potential commercial applications of such a novel imaging modality: cardiac imaging, angiography, small animal imaging and nondestructive testing. However, new regularized inversion methods are needed for imaging based on such a special type of data. A novel level-set type method is introduced for that purpose, enforcing continuity in space-time in a robust and reliable way. Tentative computational results are shown, based on both simulated and measured data. The results suggest that the new imaging modality is promising for practical applications.
INVW06 11th February 2014
14:15 to 15:00
Optimal Design in Large-Scale Inversion - From Compressive to Comprehensive Sensing
Co-authors: Eldad Haber (UBC), Luis Tenorio (CSM)

In the quest for improving inversion fidelity of large-scale problems, great consideration has been devoted to the effective solution of ill-posed problems in various regularization configurations. Nevertheless, complementary issues, such as the determination of optimal configurations for data acquisition or, more generally, any other controllable parameters of the apparatus and process, were frequently overlooked. While optimal design for well-posed problems has been extensively studied in the past, little consideration has been directed to its ill-posed counterpart. This is strikingly in contrast to the fact that a broad range of real-life problems are of such nature. In this talk, some of the intrinsic difficulties associated with design for ill-posed inverse problems shall be described; further, a coherent formulation to address these challenges will be laid out; and finally the importance of design for various inversion problems shall be demonstrated.

Related Links: http://ocrdesign.wix.com/home - Design in Inversion - Open Collaboration Research

http://users.ices.utexas.edu/~omar/santafe2013/slides/Horesh.ppsx - Optimal Design for Large-Scale Ill-Posed Problems - Slide deck

INVW06 11th February 2014
15:30 to 16:15
Tracerco Discovery. The world's first gamma ray subsea CT scanner for pipeline integrity and flow assurance
Tracerco Discovery is the first instrument in the world capable of performing tomographic reconstruction of subsea pipelines online and down to 3000 m depth. It combines advanced nuclear physics and mathematics with state-of-the-art engineering to yield one of the most advanced instruments available today for diagnosing pipeline walls and contents within the oil and gas industry.

The talk will provide an overview of the Discovery technology and its implementation into current and upcoming instruments. Both simulated and experimental results obtained during commissioning and subsea trials will be presented, highlighting some of the technical and scientific challenges encountered and overcome during the design and the production of the instrument.

INVW06 12th February 2014
09:00 to 09:45
Compressed sensing in the real world - The need for a new theory
Compressed sensing is based on the three pillars: sparsity, incoherence and uniform random subsampling. In addition, the concepts of uniform recovery and the Restricted Isometry Property (RIP) have had a great impact. Intriguingly, in an overwhelming number of inverse problems where compressed sensing is used or can be used (such as MRI, X-ray tomography, Electron microscopy, Reflection seismology etc.) these pillars are absent. Moreover, easy numerical tests reveal that with the successful sampling strategies used in practice one does not observe uniform recovery nor the RIP. In particular, none of the existing theory can explain the success of compressed sensing in a vast area where it is used. In this talk we will demonstrate how real world problems are not sparse, yet asymptotically sparse, coherent, yet asymptotically incoherent, and moreover, that uniform random subsampling yields highly suboptimal results. In addition, we will present easy arguments explaining why uniform recovery and the RIP is not observed in practice. Finally, we will introduce a new theory that aligns with the actual implementation of compressed sensing that is used in applications. This theory is based on asymptotic sparsity, asymptotic incoherence and random sampling with different densities. This theory supports two intriguing phenomena observed in reality: 1. the success of compressed sensing is resolution dependent, 2. the optimal sampling strategy is signal structure dependent. The last point opens up for a whole new area of research, namely the quest for the optimal sampling strategies.
INVW06 12th February 2014
09:45 to 10:30
Hyperbolic inverse problems and exact controllability
We will discuss our recent stability results on the hyperbolic inverse boundary value problem and also on the hyperbolic inverse initial source problem. The latter problem arises as a part of the photoacoustic tomography problem. The control theoretic concept of exact controllability plays an important role in the results.
INVW06 12th February 2014
11:00 to 11:45
On Convex Finite-Dimensional Variational Methods in Imaging Sciences, and Hamilton-Jacobi Equations
We consider standard finite-dimensional variational models used in signal/image processing that consist in minimizing an energy involving a data fidelity term and a regularization term. We propose new remarks from a theoretical perspective which give a precise description on how the solutions of the optimization problem depend on the amount of smoothing effects and the data itself. The dependence of the minimal values of the energy is shown to be ruled by Hamilton-Jacobi equations, while the minimizers $u(x,t)$ for the observed images $x$ and smoothing parameters $t$ are given by $u(x,t) = x - t \nabla H(\nabla_x E(x,t))$ where $E(x,t)$ is the minimal value of the energy and $H$ is a Hamiltonian related to the data fidelity term. Various vanishing smoothing parameter results are derived illustrating the role played by the prior in such limits.
INVW06 12th February 2014
11:45 to 12:30
On Large Scale Inverse Problems that Cannot be solved
In recent years data collection systems have improved and we are now able to collect large volumes of data over vast regions in space. This leads to large-scale inverse problems that involve multiple scales and many data. To invert these data sets, we must rethink our numerical treatment of the problems, starting from the discretization, to the optimization technique to be used and the efficient way we can parallelize these problems. In this talk we introduce a new multiscale asynchronous method for the treatment of such data and apply it to airborne electromagnetic data.
INVW06 12th February 2014
13:30 to 14:15
Alternating Projection, Ptychographic Imaging and connection graph Laplacian
Co-authors: Yu-Chao Tu (Mathematics, Princeton University), Stefano Marchesini (Lawrence Berkeley Lab)

In this talk, we demonstrate the global convergence of the alternating projection (AP) algorithm to a unique solution up to a global phase factor in ptychographic imaging. Additionally, we survey the intimate relationship between the AP algorithm and the notion of ``phase synchronization''. Based on this relationship, the recently developed connection graph Laplacian technique is applied to quickly construct an accurate initial guess and to accelerate convergence for large-scale diffraction data problems. This is joint work with Stefano Marchesini and Yu-Chao Tu.
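A toy alternating projection loop, for a single Fourier-magnitude constraint and a support constraint, is sketched below; ptychography replaces the single magnitude constraint by many overlapping, probe-windowed ones, and the graph-Laplacian initialisation discussed in the talk is not reproduced here.

import numpy as np

def alternating_projections(mag, support, n_iter=500, rng=None):
    # mag: measured Fourier magnitudes; support: boolean mask where the object may be nonzero.
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(mag.shape) * support
    for _ in range(n_iter):
        X = np.fft.fftn(x)
        X = mag * np.exp(1j * np.angle(X))      # project onto the magnitude constraint
        x = np.fft.ifftn(X).real * support      # project onto the (real, supported) object set
    return x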

INVW06 13th February 2014
09:00 to 09:45
On stability for the direct scattering problem and its applications to cloaking
We consider the direct acoustic scattering problem with sound-hard scatterers. We discuss the stability of the solutions with respect to variations of the scatterer. The main tool we use for this purpose is convergence in the sense of Mosco. As a consequence we obtain uniform decay estimates for scattered fields for a large class of admissible sound-hard scatterers. As a particular case, we show how a sound-hard screen may be approximated by thin sound-hard obstacles. This is joint work with Giorgio Menegatti.

We show that a sound-hard screen may also be approximated by using a thin lossy layer. This is a crucial step, together with transformation optics, for the construction of approximate full and partial cloaking by inserting a lossy layer between the region to be cloaked and the observer. This is joint work with Jingzhi Li, Hongyu Liu and Gunther Uhlmann.

INVW06 13th February 2014
09:45 to 10:30
Conditional stability of the Calderón problem for less regular conductivities
Co-authors: Pedro Caro (University of Helsinki), Andoni García (University of Jyväskylä)

A recent log-type conditional stability result with Hölder norm for the Calderón problem will be presented, assuming continuously differentiable conductivities with Hölder continuous first-order derivatives in a Lipschitz domain of the Euclidean space with dimension greater than or equal to three.

This is joint work with Pedro Caro from the University of Helsinki and Andoni García from the University of Jyväskylä. We follow the idea of decay in average used by B. Haberman and D. Tataru to obtain their uniqueness result for either continuously differentiable conductivities or Lipschitz conductivities whose logarithm has small gradient, in a Lipschitz domain of $\mathbb{R}^n$ with $n\geq 3$.

INVW06 13th February 2014
11:00 to 11:45
D Lesnic Determination of an additive source in the heat equation
Co-authors: Dinh Nho Hao (Hanoi Institute of Mathematics, Vietnam), Areena Hazanee (University of Leeds, UK), Mikola Ivanchov (Ivan Franko National University of Lviv, Ukraine), Phan Xuan Thanh (Hanoi University of Science and Technology, Vietnam)

Water contaminants arising from distributed or non-point sources deliver pollutants indirectly through environmental changes, e.g. a fertilizer is carried into a river by rain, which in turn affects the aquatic life. In this inverse problem of water pollution, an unknown source in the governing equation needs to be determined from measurements of the concentration or other projections of the dependent variable of the model. A similar inverse problem arises in heat transfer.

Inverse source problems for the heat equation, especially in the one-dimensional transient case, have received considerable attention in recent years. In most previous studies, in order to ensure a unique solution, the unknown heat source was assumed to depend on only one of the independent variables, namely space or time, or on the dependent variable, namely concentration/temperature. It is the purpose of our analysis to investigate an extended case in which the unknown source is assumed to depend on both space and time, but is additively separated into two unknown coefficient source functions, one dependent on space and the other on time. The additional overspecified conditions can be a couple of local or nonlocal measurements of the concentration/temperature in space or time.

The unique solvability of this linear inverse problem in classical Hölder spaces is proved; however, the problem is still ill-posed since small errors in the input data cause large errors in the output source. In order to obtain a stable reconstruction, the Tikhonov regularization or the iterative conjugate gradient method is employed. Numerical results will be presented and discussed.
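
As a schematic illustration of the regularised reconstruction step (an editorial sketch, not the discretisation used by the authors), a Tikhonov-regularised least-squares solve for a generic ill-conditioned linear forward map might look as follows in Python; the matrix, noise level and regularisation parameter below are placeholder assumptions.

    import numpy as np

    def tikhonov_solve(A, b, alpha):
        """Minimise ||A x - b||^2 + alpha ||x||^2 via the normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # Placeholder ill-conditioned forward map (source coefficients -> measurements) and noisy data.
    rng = np.random.default_rng(1)
    A = np.vander(np.linspace(0.0, 1.0, 50), 20, increasing=True)
    x_true = np.sin(np.linspace(0.0, np.pi, 20))
    b = A @ x_true + 1e-4 * rng.standard_normal(50)
    x_rec = tikhonov_solve(A, b, alpha=1e-8)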

INVW06 13th February 2014
11:45 to 12:30
A Belyaev On Implicit Image Differentiation and Filtering
The main goal of this talk is to demonstrate the advantages of using compact (implicit) finite differencing, filtering, and interpolating schemes for image processing applications.

Finite difference schemes can be categorized as "explicit" and "implicit". Explicit schemes express the nodal derivatives as a weighted sum of the function nodal values. For example, $f'_i=(f_{i+1}-f_{i-1})/(2h)$ is an explicit finite difference approximation of the first-order derivative. By comparison, compact (implicit) finite difference schemes equate a weighted sum of nodal derivatives to a weighted sum of the function nodal values. For instance, $f'_{i-1}+4f'_i+f'_{i+1}=3(f_{i+1}-f_{i-1})/h$ is an implicit (compact) scheme. Some implicit schemes correspond to Padé approximations and produce significantly more accurate approximations at small scales compared with explicit schemes having the same stencil widths. Some other implicit schemes are designed to deliver accurate approximations of function derivatives over a wide range of spatial scales. Compact (implicit) finite difference schemes, as well as implicit filtering and interpolating schemes, constitute advanced but standard tools for accurate numerical simulations of problems involving linear and nonlinear wave propagation phenomena.
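
As an illustration of the two schemes quoted above (an editorial Python sketch, not code from the talk), the snippet below compares the explicit central difference with the fourth-order compact scheme, the latter solved as a tridiagonal system; the first-order one-sided closure at the boundary nodes is a simplifying assumption.

    import numpy as np
    from scipy.linalg import solve_banded

    def explicit_derivative(f, h):
        """Explicit central difference f'_i = (f_{i+1} - f_{i-1}) / (2h) at interior nodes."""
        return (f[2:] - f[:-2]) / (2.0 * h)

    def compact_derivative(f, h):
        """Compact scheme f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h at interior
        nodes, with a crude first-order one-sided closure at the two boundary nodes."""
        n = len(f)
        rhs = np.empty(n)
        rhs[1:-1] = 3.0 * (f[2:] - f[:-2]) / h
        rhs[0] = (f[1] - f[0]) / h
        rhs[-1] = (f[-1] - f[-2]) / h
        ab = np.zeros((3, n))          # banded storage: super-, main and sub-diagonal
        ab[0, 2:] = 1.0
        ab[1, :] = 4.0
        ab[1, 0] = ab[1, -1] = 1.0
        ab[2, :n - 2] = 1.0
        return solve_banded((1, 1), ab, rhs)

    x = np.linspace(0.0, 2.0 * np.pi, 41)
    d_compact = compact_derivative(np.sin(x), x[1] - x[0])   # compare with np.cos(x)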

In this talk, I show how Fourier-Padé-Galerkin approximations can be adapted for designing high-quality implicit finite difference schemes, establish a link between implicit schemes and standard explicit finite differences used for image gradient estimation, and demonstrate the usefulness of implicit differencing and filtering schemes for various image processing tasks including image deblurring, feature detection, and sharpening.

Some of the results to be presented in this talk can be found in my recent paper: A. Belyaev, "Implicit image differentiation and filtering with applications to image sharpening", SIAM Journal on Imaging Sciences, 6(1):660-679, 2013.

Related Links: http://epubs.siam.org/doi/abs/10.1137/12087092X - link to the paper mentioned in the abstract

INVW06 13th February 2014
13:30 to 14:15
T Fokas Analytical Methods for certain Medical Imaging Techniques
One of the most important recent developments in the field of medical imaging has been the elucidation of analytical, as opposed to statistical, techniques. In this talk, analytical techniques for Positron Emission Tomography (PET), Single Photon Emission Computerised Tomography (SPECT), Magnetoencephalography (MEG) and Electroencephalography (EEG) will be reviewed. Numerical implementations using real data will also be presented.
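
As a generic illustration of analytical reconstruction only (filtered back-projection for the parallel-beam Radon transform, a stand-in rather than the specific PET/SPECT/MEG/EEG formulas reviewed in the talk), one might use scikit-image as in the sketch below; the phantom and filter choice are illustrative assumptions.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # Forward model: parallel-beam Radon transform of a phantom (a stand-in for tomographic data).
    image = shepp_logan_phantom()
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sinogram = radon(image, theta=theta)

    # Analytical reconstruction: filtered back-projection with a ramp filter.
    reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
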
INVW06 14th February 2014
09:00 to 09:45
H Chauris Towards a more robust automatic velocity analysis method
Co-author: C.-A. Lameloise (MINES ParisTech)

In the context of seismic imaging, we analyse artefacts related to a classical objective functional, the "Differential Semblance Optimization" approach (DSO). This functional has been defined to automatically retrieve a velocity model needed to image complex structures with seismic waves. In practice, it may fail due to the presence of a number of artefacts.

We propose two complementary approaches: first, we give evidence that a quantitative migration scheme is useful to compensate for uneven subsurface illumination. Second, we propose to slightly modify the objective function such that its gradient does not exhibit spurious oscillations for models containing interfaces or discontinuities.

INVW06 14th February 2014
09:45 to 10:30
Adaptive regularization of convolution type equations in anisotropic spaces with fractional order of smoothness
Co-authors: Tamara Tararykova (Cardiff University, UK), Theophile Logon (Cocody University, Côte d'Ivoire)

Under consideration are multidimensional convolution type equations with kernels whose Fourier transforms satisfy certain anisotropic conditions characterizing their behaviour at infinity. Regularized approximate solutions are constructed by using a priori information about the exact solution and the error, characterized by membership in some anisotropic Nikol'skii-Besov spaces with fractional order of smoothness, F and G respectively. The regularized solutions are defined in a way which is related to minimizing a Tikhonov smoothing functional involving the norms of the spaces F and G. Moreover, the choice of the spaces F and G is adapted to the properties of the kernel.

It is important that the anisotropic smoothness parameter of the space F may be arbitrarily small, and hence the a priori regularity assumption on the exact solution may be very weak. However, the regularized solutions still converge to the exact one in the appropriate sense (though, of course, the weaker the a priori assumptions on the exact solution, the slower the convergence). In particular, for a sufficiently small smoothness parameter of the space F, the exact solution is allowed to be an unbounded function with a power singularity, which is the case in some problems arising in geophysics. Estimates are obtained characterizing the smoothness of the regularized solutions and the rate of convergence of the regularized solutions to the exact one. Similar results are obtained for the case of periodic convolution type equations.
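
As a greatly simplified one-dimensional illustration of Tikhonov-regularised deconvolution (an editorial sketch with plain L2 penalties, not the anisotropic Nikol'skii-Besov framework of the talk), a convolution equation can be regularised in the Fourier domain as follows; the kernel, noise level and regularisation parameter are placeholder assumptions.

    import numpy as np

    def tikhonov_deconvolve(data, kernel, alpha):
        """Minimiser of ||k * u - data||^2 + alpha ||u||^2 for periodic convolution, via FFTs."""
        K = np.fft.fft(kernel, n=len(data))
        U = np.conj(K) * np.fft.fft(data) / (np.abs(K) ** 2 + alpha)
        return np.fft.ifft(U).real

    # Illustrative smoothing kernel, piecewise-constant exact solution and noisy data.
    x = np.linspace(-1.0, 1.0, 256, endpoint=False)
    kernel = np.exp(-50.0 * x ** 2)
    kernel /= kernel.sum()
    u_true = np.sign(np.sin(4.0 * np.pi * x))
    data = np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(u_true)).real
    data += 1e-3 * np.random.default_rng(0).standard_normal(len(data))
    u_rec = tikhonov_deconvolve(data, kernel, alpha=1e-3)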

INVW06 14th February 2014
11:00 to 11:45
Volterra Integral Equations of the First Kind with Jump Discontinuous Kernels
Sufficient conditions are derived for the existence and uniqueness of continuous solutions of Volterra operator integral equations of the first kind with jump discontinuous kernels. The method of steps, a well-known principle in the theory of functional equations, is employed in combination with the method of successive approximations. We also address the case when the solution is not unique, prove the existence of parametric families of solutions, and construct them as power-logarithmic asymptotic expansions. The proposed theory is demonstrated for scalar Volterra equations of the first kind with jump discontinuous kernels, with applications to the modelling of evolving dynamical systems.

Related Links: http://studia.complexica.net/index.php?option=com_content&view=article&id=209%3Avolterra-equations-of-the-first-kind-with-discontinuous-kernels-in-the-theory-of-evolving-systems-control-pp-135-146&catid=58%3Anumber-3&Itemid=103&lang=fr - Related paper in Studia Informatica Universalis
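
As a purely numerical illustration of the first-kind Volterra equations described above (an editorial sketch, not the analytical method-of-steps construction of the talk), the snippet below discretises int_0^t K(t, s) x(s) ds = f(t) with the midpoint rule and forward substitution; for simplicity the test kernel here is smooth rather than jump discontinuous.

    import numpy as np

    def solve_volterra_first_kind(K, f, T, n):
        """Midpoint-rule discretisation of  int_0^t K(t, s) x(s) ds = f(t)  on [0, T];
        the resulting lower-triangular system is solved by forward substitution."""
        h = T / n
        s = (np.arange(n) + 0.5) * h        # midpoints, where x is approximated
        t = (np.arange(n) + 1.0) * h        # collocation points
        x = np.zeros(n)
        for i in range(n):
            acc = h * np.dot(K(t[i], s[:i]), x[:i])
            x[i] = (f(t[i]) - acc) / (h * K(t[i], s[i]))
        return s, x

    # Smooth test kernel K(t, s) = exp(t - s) with exact solution x = 1, so f(t) = exp(t) - 1.
    s, x = solve_volterra_first_kind(lambda t, s: np.exp(t - s),
                                     lambda t: np.exp(t) - 1.0, T=1.0, n=200)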

INVW06 14th February 2014
11:45 to 12:30
Optimizing the optimizers - what is the right image and data model?
When assigned the task of reconstructing an image from given data, the first challenge one faces is the derivation of a truthful image and data model. Such a model can be determined by a priori knowledge about the image, the data and their relation to each other. The source of this knowledge is either our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or we can strive to learn parametric models from the data itself. The common question arises: how can we optimise our model choice?

Starting from the first modelling strategy, this talk will lead us from total variation as the most successful image regularisation model today to non-smooth second- and third-order regularisers, with data models for Gaussian and Poisson distributed data as well as impulse noise. Applications to image denoising, inpainting and surface reconstruction are given. After a critical discussion of these different image and data models, we will turn towards the second modelling strategy and propose to combine it with the first one using a bilevel optimisation method. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions and optimising total generalised variation regularisation for its application in photography.

Joint work with Luca Calatroni, Jan Lellmann, Juan Carlos De Los Reyes and Tuomo Valkonen.
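
As a crude stand-in for the bilevel parameter learning discussed above (an editorial sketch, not the authors' method), the snippet below picks a total variation denoising weight by a grid search against a known ground truth using scikit-image; the image, noise level and candidate weights are illustrative assumptions.

    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_tv_chambolle

    # Ground-truth image and a noisy observation (illustrative Gaussian noise level).
    clean = img_as_float(data.camera())
    noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(clean.shape)

    # Crude stand-in for bilevel parameter learning: choose the TV weight that best
    # matches the known ground truth over a small grid of candidate values.
    weights = [0.02, 0.05, 0.1, 0.2]
    errors = [np.mean((denoise_tv_chambolle(noisy, weight=w) - clean) ** 2) for w in weights]
    denoised = denoise_tv_chambolle(noisy, weight=weights[int(np.argmin(errors))])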
