# Workshop Programme

## for period 12 - 16 December 2011

### Inverse Problems in Science and Engineering


Timetable

Monday 12 December | ||||

09:00-09:45 | Registration & Morning Coffee | |||

09:45-10:00 | Welcome & Announcements | |||

10:00-11:00 | Mulder, W (Shell Global Solutions International & Delft University of Technology) |
|||

Where should we focus? | Sem 1 | |||

Seismic imaging or migration maps singly scattered data into the subsurface, providing an image of the interfaces between rock formations with different impedances. The corresponding linear inverse problem is the minimization of the least-squares error subject to the Born approximation of the acoustic wave equation. Substantial preprocessing is usually required to remove data that do not obey the single scattering assumption. Also, an accurate background velocity is needed. Migration velocity analysis exploits the redundancy in the data to estimate the background velocity model. Data for different shot-receiver distances or offsets should provide the same image of the subsurface. Its implementation for the full wave equation invokes action at distance via a subsurface shift in space or time. Figure 1 shows a real-data example. The corresponding cost functional tries to focus energy at zero subsurface shift, thereby suppressing the unphysical action at distance. Although removal of surface multiples is a common technique, interbed multiples as well as remnant surface multiples may still lead the focusing algorithms astray. Focusing in the data domain is a recent generalization that, in principle, should not suffer from the presence of surface and interbed multiples. Further development is, however, still required to mature the method.

Figure 1. Example of seismic velocity inversion with focusing based on horizontal shifts in the depth domain, starting from the best velocity model that increases linearly with depth. The left panel shows the extended image at a lateral position of x = 2 km, as a function of horizontal subsurface offset hx and depth z. The iteration count is displayed in the upper left corner. The central panel displays the migration image. The right one shows the reconstructed smooth background velocity model. |
||||
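In schematic form (notation assumed here, not taken from the abstract), the two minimization problems mentioned above can be written as:

```latex
% Linearized least-squares imaging under the Born approximation:
% F[c_0] is the Born modelling operator for a background velocity c_0,
% r the reflectivity image, d the preprocessed singly scattered data.
\min_{r}\; J_{\mathrm{LS}}(r) \;=\; \tfrac{1}{2}\,\bigl\| F[c_0]\, r - d \bigr\|_2^2

% Focusing functional for migration velocity analysis: penalize energy
% of the extended image I(x,h;c_0) away from zero subsurface shift h.
\min_{c_0}\; J_{\mathrm{MVA}}(c_0) \;=\; \tfrac{1}{2} \int \bigl| h\, I(x,h;c_0) \bigr|^2 \, dx\, dh
```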

11:00-12:00 | Cheney, M (Rensselaer Polytechnic Institute) |
|||

Introduction to Radar Imaging | Sem 1 | |||

Radar imaging is a technology that has been developed, very successfully, within the engineering community during the last 50 years. Radar systems on satellites now make beautiful images of regions of our earth and of other planets such as Venus. One of the key components of this impressive technology is mathematics, and many of the open problems are mathematical ones. This lecture will explain, from first principles, some of the basics of radar and the mathematics involved in producing high-resolution radar images. |
||||

12:00-12:30 | Informal Discussion Time | |||

12:30-13:30 | Lunch at Wolfson Court | |||

13:30-14:30 | Arridge, S (University College London) |
|||

(Inverse Problems in) BioMedical Imaging | Sem 1 | |||

Biomedical Imaging is a large topic that may be divided into direct imaging methods versus indirect imaging. By direct imaging we mean methods such as microscopy wherein the data are acquired and presented as an image; by indirect imaging we mean methods such as tomography wherein data are acquired through a detector and images are reconstructed by solving an inverse problem. Common to both approaches are tasks such as segmentation, registration, and pattern recognition, and confounding processes such as noise, blurring and obscuration. Direct Biomedical Imaging can be contrasted with Computer Vision owing to the different nature of the resolution, contrast, and confounding processes involved. Indirect Imaging can be compared to other classes of inverse problems, and again particular features stand out: the typically large scale of biomedical images, their sometimes non-unique or severely ill-posed nature, and in some cases their non-linear character. In this talk I will try to give an overview of some current topics in these areas. |
||||

14:30-15:00 | Schotland, J (University of Michigan) |
|||

Acousto-Optic Imaging and Related Inverse Problems | Sem 1 | |||

We propose a tomographic method to reconstruct the optical properties of a highly scattering medium from incoherent acousto-optic measurements. The method is based on the solution to an inverse problem for the diffusion equation and makes use of the principle of interior control of boundary measurements by an external wave field. This is joint work with Guillaume Bal. |
||||

15:00-15:30 | Afternoon Tea | |||

15:30-16:00 | Lionheart, WRB; Betcke, M; Thompson, W; Wadeson, N (University of Manchester) |
|||

Reconstruction for an offset cone beam x-ray tomography system | Sem 1 | |||

The RTT airport baggage x-ray tomography system uses an unusual geometry in which the detector array is offset from the circle on which the sources lie. Rather than a single rotating source, the sources are switched on and off. This enables the system to operate at the high speed required for airport baggage scanning but presents challenges for reconstruction. We discuss the strategy for choosing the source firing sequence and present a reconstruction algorithm using rebinning onto multiple curved surfaces. |
||||

16:00-16:30 | Varslot, T; Kingston, A; Myers, G; Sheppard, A (Australian National University) |
|||

Theoretically-exact CT-reconstruction from experimental data | Sem 1 | |||

We demonstrate how an optimisation-based autofocus technique may be used to overcome physical instabilities that have, until now, made high-resolution theoretically-exact tomographic reconstruction impractical. We show that autofocus-corrected, theoretically-exact helical CT is a viable option for high-resolution micro-CT imaging at cone-angles approaching ±50 degrees. The elevated cone-angle enables better utilisation of the available X-ray flux and therefore shorter image acquisition time than conventional micro-CT systems. By using the theoretically-exact Katsevich 1PI inversion formula, we are not restricted to a low-cone-angle regime; we can in theory obtain artefact-free reconstructions from projection data acquired at arbitrarily high cone-angles. However, this reconstruction method is sensitive to misalignments in the tomographic data, which result in geometric distortion and streaking artefacts. We use a parametric model to quantify the deviation between the actual acquisition trajectory and an ideal helix, and use an autofocus method to estimate the relevant parameters. We define optimal units for each parameter, and use these to ensure consistent alignment accuracy across different cone-angles and different magnification factors. |
||||

16:30-17:00 | Luo, X; Li, W; Hill, N; Ogden, R; Smythe, A (University of Glasgow/Sheffield) |
|||

Inverse estimation of fibre reinforced soft tissue of human gallbladder wall | Sem 1 | |||

Cholecystectomy (surgical removal of the gallbladder) for gallbladder pain is the most common elective abdominal operation performed in the western world. However, the outcome is not entirely satisfactory, as the mechanism of gallbladder pain is unclear. We have developed a mechanical model of the gallbladder aiming to understand its mechanical behaviour. To apply this model to clinical situations, it is often necessary to estimate the material properties from non-invasive medical images. In this work, we present a non-gradient-based optimization inverse approach for estimating the elastic modulus of human gallbladders from ultrasound images. Two forward problems are considered. One utilizes a linear orthotropic material model and estimates the elastic moduli in the circumferential and longitudinal directions. The other is a nonlinear Holzapfel-Gasser-Ogden model in which two families of fibres are embedded circumferentially in an otherwise homogeneous neo-Hookean elastin matrix. These forward problems are solved using the finite element package Abaqus, and a python/Matlab based optimization algorithm is developed to search for the global minimum of the error functional, which measures the difference between the geometries from the numerical predictions and the images. We will compare and analyse the results for six gallbladder samples, and discuss the outstanding challenges. |
||||

17:00-18:00 | Drinks Reception | |||

18:45-19:15 | Dinner at Fitzwilliam College (residents only) |

Tuesday 13 December | ||||

09:00-09:30 | Betcke, T (University College London) |
|||

Modulated plane wave methods for Helmholtz problems in heterogeneous media | Sem 1 | |||

A major challenge in seismic imaging is full waveform inversion in the frequency domain. If an acoustic model is assumed the underlying problem formulation is a Helmholtz equation with varying speed of sound. Typically, in seismic applications the solution has many wavelengths across the computational domain, leading to very large linear systems after discretisation with standard finite element methods. Much progress has been achieved in recent years by the development of better preconditioners for the iterative solution of these linear systems. But the fundamental problem of requiring many degrees of freedom per wavelength for the discretisation remains. For problems in homogeneous media, that is, spatially constant wave velocity, plane wave finite element methods have gained significant attention. The idea is that instead of polynomials on each element we use a linear combination of oscillatory plane wave solutions. These basis functions already oscillate with the right wavelength, leading to a significant reduction in the required number of unknowns. However, higher-order convergence is only achieved for problems with constant or piecewise constant media. In this talk we discuss the use of modulated plane waves in heterogeneous media, products of low-degree polynomials and oscillatory plane wave solutions for a (local) average homogeneous medium. The idea is that high-order convergence in a varying medium is recovered due to the polynomial modulation of the plane waves. Wave directions are chosen based on information from raytracing or other fast solvers for the eikonal equation. This approach is related to the Amplitude FEM originally proposed by Giladi and Keller in 2001. However, for the assembly of the systems we will use a discontinuous Galerkin method, which allows a simple way of incorporating multiple phase information in one element. 
We will discuss the dependence of the element sizes on the wavelength and the accuracy of the phase information, and present several examples that demonstrate the properties of modulated plane wave methods for heterogeneous media problems. |
||||
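As a sketch of the basis construction described above (notation assumed, not taken from the abstract): on each element the trial space is spanned by products of low-degree polynomials and plane waves oscillating at the wavenumber of the locally averaged medium,

```latex
% Modulated plane-wave basis function on an element with centroid x_0:
% p_j is a low-degree polynomial, d_j a unit propagation direction
% (e.g. obtained from ray tracing or an eikonal solver), and k(x_0)
% the wavenumber of the local average homogeneous medium.
\phi_j(\mathbf{x}) \;=\; p_j(\mathbf{x})\; e^{\, i\, k(\mathbf{x}_0)\, \mathbf{d}_j \cdot \mathbf{x}}
```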

09:30-10:00 | Stolk, C (University of Amsterdam) |
|||

Seismic inverse scattering by reverse time migration | Sem 1 | |||

We will consider the linearized inverse scattering problem from seismic imaging. While the first reverse time migration algorithms were developed some thirty years ago, they have only recently become popular for practical applications. We will analyze a modification of the reverse time migration algorithm that turns it into a method for linearized inversion, in the sense of a parametrix. This is proven using tools from microlocal analysis. We will also discuss the limitations of the method and show some numerical results. |
||||

10:00-10:30 | de Hoop, M (Purdue University) |
|||

Local analysis of the inverse problem associated with the Helmholtz equation -- Lipschitz stability and iterative reconstruction | Sem 1 | |||

We consider the Helmholtz equation on a bounded domain, and the Dirichlet-to-Neumann map as the data. Following the work of Alessandrini and Vessella, we establish conditions under which the inverse problem defined by the Dirichlet-to-Neumann map is Lipschitz stable. Recent advances in developing structured massively parallel multifrontal direct solvers of the Helmholtz equation have motivated the further study of iterative approaches to solving this inverse problem. We incorporate structure through conormal singularities in the coefficients and consider partial boundary data. Essentially, the coefficients are finite linear combinations of piecewise constant functions. We then establish convergence (radius and rate) of the Landweber iteration in appropriately chosen Banach spaces, avoiding the fact that the coefficients are originally only $L^{\infty}$, to obtain a reconstruction. Here, Lipschitz (or possibly Hoelder) stability replaces the so-called source condition. We accommodate the exponential growth of the Lipschitz constant using approximations by finite linear combinations of piecewise constant functions and the frequency dependencies to obtain a convergent projected steepest descent method containing elements of a nonlinear conjugate gradient method. We point out some correspondences with discretization, compression, and multigrid techniques. Joint work with E. Beretta, L. Qiu and O. Scherzer. |
||||
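A minimal sketch of the iteration discussed above, in assumed notation:

```latex
% Landweber iteration for the inverse problem F(c) = d, where F maps
% the (piecewise constant) coefficients c to the Dirichlet-to-Neumann
% data and DF denotes its Frechet derivative:
c_{k+1} \;=\; c_k \;-\; \mu\, DF(c_k)^{*}\,\bigl( F(c_k) - d \bigr)

% A Lipschitz stability estimate of the schematic form
% \| c_1 - c_2 \| \le C\, \| \Lambda_{c_1} - \Lambda_{c_2} \|
% replaces the usual source condition in the convergence analysis.
```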

10:30-11:00 | Olsson, P (Chalmers University of Technology) |
|||

Re-routing of elastodynamic waves by means of transformation optics in planar, cylindrical, and spherical geometries | Sem 1 | |||

Transformation optics has proven a powerful tool to achieve cloaking from electromagnetic and acoustic waves. There are still technical issues with applications of transformation optics to elastodynamics, due to the fact that the elastodynamic wave equation does not in general possess suitable invariances under the required transformations. However, for a few types of materials, invariances of the appropriate kind have been shown to exist. In the present talk we consider a few canonical scattering and reflection problems, and show that by coating the planar, cylindrical or spherical reflecting or scattering bodies with a fiber-reinforced layer of a metamaterial with a suitable gradient in material properties, the reflection or scattering of shear waves from the body can be significantly reduced. It has been suggested that constructions inspired by transformation optics could potentially provide protection for infrastructure from seismic waves. Even if waves from earthquakes may have wavelengths making some such suggestions implausible, passive protection from shorter elastic bulk waves from other sources may be achieved by a scheme based on transformation optics. Other suggested applications are in the car and aeronautics industries. The problems considered here, albeit rather special model problems, hopefully may provide some additional insight into protection against mechanical waves by means of transformation elastodynamics. A result of the analysis in the spherical case is that, to maximize the number of modes to which the coated spherical body is “invisible,” rigid body rotations of the innermost part of the coating should be allowed. (However, this is only essential in the low frequency range.) 
It is also worth noting that since the transition matrices of scatterers described here have, as it were, quite well-populated null-spaces, they provide simple examples of cases where complete knowledge of the scatterer and of the scattered field does not even remotely suffice to reconstruct the incident field. |
||||

11:00-11:30 | Morning Coffee | |||

11:30-12:30 | Informal Discussion Time | |||

12:30-13:30 | Lunch at Wolfson Court | |||

14:00-14:30 | Pietschmann, J-F; Burger, M; Wolfram, M-T (Universität Münster/Vienna) |
|||

Identification of non-linearities in transport-diffusion models of crowded motion | Sem 1 | |||

14:30-15:00 | van Leeuwen, P (University of Reading) |
|||

Particle filters in highly nonlinear high-dimensional systems | Sem 1 | |||

Bayes' theorem formulates the data-assimilation problem as a multiplication problem rather than an inverse problem. In this talk we exploit this fact using an extremely efficient particle filter on a highly nonlinear geophysical fluid flow problem of dimension 65,000. We show how collapse of the particles can be avoided, and discuss statistics showing that the particle filter is performing correctly. |
||||
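The particular high-dimensional scheme of the talk is not reproduced here; the following is a generic bootstrap particle filter with systematic resampling on a standard 1D toy model (all names and the model are illustrative assumptions), showing the weight/resample cycle whose collapse the talk addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(weights, rng):
    """Systematic resampling: low-variance selection of particle indices."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def particle_filter(observations, n_particles=500, obs_std=1.0, proc_std=1.0, rng=rng):
    """Bootstrap filter for the toy model
    x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + 8 cos(1.2 t) + noise,
    y_t = x_t^2 / 20 + noise.  Returns the posterior mean at each step."""
    x = rng.normal(0.0, 2.0, n_particles)
    means = []
    for t, y in enumerate(observations):
        # propagate each particle through the nonlinear dynamics
        x = 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t) \
            + rng.normal(0.0, proc_std, n_particles)
        # weight by the Gaussian observation likelihood (log-space for stability)
        logw = -0.5 * ((y - x**2 / 20.0) / obs_std) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # resample to avoid weight degeneracy (particle collapse)
        x = x[systematic_resample(w, rng)]
        means.append(x.mean())
    return np.array(means)
```

The equal-weight trick of the talk replaces this plain resampling step; the skeleton above only illustrates why unweighted resampling alone degenerates in very high dimensions.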

15:00-15:30 | Afternoon Tea | |||

15:30-16:00 | Omre, H (University of Science and Technology, Trondheim) |
|||

Spatial categorical inversion: Seismic inversion into lithology/fluid classes | Sem 1 | |||

Modeling of discrete variables in a three-dimensional reference space is a challenging problem. Constraints on the model expressed as invalid local combinations and as indirect measurements of spatial averages add even more complexity. Evaluation of offshore petroleum reservoirs covering many square kilometers and buried at several kilometers depth contains problems of this type. Focus is on identification of hydrocarbon (gas or oil) pockets in the subsurface - these appear as rare events. The reservoir is classified into lithology (rock) classes - shale and sandstone - and the latter contains fluids - either gas, oil or brine (salt water). It is known that these classes are vertically thin with large horizontal continuity. The reservoir is considered to be in equilibrium - hence fixed vertical sequences of fluids - gas/oil/brine - occur due to gravitational sorting. Seismic surveys covering the reservoir are made and, through processing of the data, angle-dependent amplitudes of reflections are available. Moreover, a few wells are drilled through the reservoir and exact observations of the reservoir properties are collected along the well trace. The inversion is phrased in a hierarchical Bayesian inversion framework. The prior model, capturing the geometry and ordering of the classes, is of Markov random field type. A particular parameterization coined Profile Markov random field is defined. The likelihood model linking lithology/fluids and seismic data captures major characteristics of rock physics models and the wave equation. Several parameters in this likelihood model are considered to be stochastic and they are inferred from seismic data and observations along the well trace. The posterior model is explored by an extremely efficient MCMC-algorithm. The methodology is defined and demonstrated on observations from a real North Sea reservoir. |
||||

16:00-16:30 | Farmer, CL (University of Oxford) |
|||

Practical and principled methods for large-scale data assimilation and parameter estimation | Sem 1 | |||

Uncertainty quantification can begin by specifying the initial state of a system as a probability measure. Part of the state (the 'parameters') might not evolve, and might not be directly observable. Many inverse problems are generalisations of uncertainty quantification such that one modifies the probability measure to be consistent with measurements, a forward model and the initial measure. The inverse problem, interpreted as computing the posterior probability measure of the states, including the parameters and the variables, from a sequence of noise corrupted observations, is reviewed in the talk. Bayesian statistics provides a natural framework for a solution but leads to very challenging computational problems, particularly when the dimension of the state space is very large, as when arising from the discretisation of a partial differential equation. In this talk we show how the Bayesian framework provides a unification of the leading techniques in use today. In particular the framework provides an interpretation and generalisation of Tikhonov regularisation, a method of forecast verification and a way of quantifying and managing uncertainty. A summary overview of the field is provided and some future problems and lines of enquiry are suggested. |
||||
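One standard instance of the unification mentioned above (textbook material, not specific to the talk): with a Gaussian prior and Gaussian observation noise, the maximum a posteriori estimate of the Bayesian posterior is exactly a Tikhonov-regularised least-squares problem,

```latex
% Prior u ~ N(u_0, C); data y = G(u) + eta with noise eta ~ N(0, Gamma).
% The MAP estimate minimizes the negative log-posterior:
\min_{u}\; \tfrac{1}{2}\,\bigl\| \Gamma^{-1/2}\bigl( G(u) - y \bigr) \bigr\|^{2}
\;+\; \tfrac{1}{2}\,\bigl\| C^{-1/2}\bigl( u - u_0 \bigr) \bigr\|^{2}
```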

16:30-17:00 | Oliver, D (University Centre for Integrated Petroleum Research in Bergen, Norway) |
|||

The ensemble Kalman filter for distributed parameter estimation in porous media flow | Sem 1 | |||

17:00-17:30 | Dashti, M (University of Warwick) |
|||

Besov Priors for Bayesian Inverse problems | Sem 1 | |||

We consider the inverse problem of estimating a function $u$ from noisy measurements of a known, possibly nonlinear, function of $u$. We use a Bayesian approach to find a well-posed probabilistic formulation of the solution to the above inverse problem. Motivated by the sparsity promoting features of the wavelet bases for many classes of functions appearing in applications, we study the use of Besov priors within the Bayesian formalism. This is joint work with Stephen Harris (Edinburgh) and Andrew Stuart (Warwick). |
||||

18:45-19:15 | Dinner at Fitzwilliam College (residents only) |

Wednesday 14 December | ||||

09:00-09:30 | Plessix, R-E (Shell Global Solutions International) |
|||

Some applications of least-squares inversion in exploration geophysics | Sem 1 | |||

In exploration geophysics, we often obtain subsurface images from the data we record at the surface of the Earth. This imaging problem can be formulated as a data misfit problem. However, we face a number of numerical challenges when we want to apply this approach with seismic or electromagnetic data. We first need to efficiently compute some approximations of the elastodynamic or electromagnetic equations. Secondly, we need to solve the inverse problem with a local optimization because of the large problem size. During this presentation, after having briefly discussed the numerical solutions of the partial differential equations governing the physics, we shall describe the inverse formulation. Then, we shall present some of the applications and the difficulties we encounter in practice. |
||||

09:30-10:00 | Chauris, H (Ecole des Mines de Paris) |
|||

Alternative formulations for full waveform inversion | Sem 1 | |||

Classical full waveform inversion is a powerful tool to retrieve the Earth properties (P- and S-velocities) from seismic measurements at the surface. It simply consists of minimizing the misfit between observed and computed data. However, the associated objective function suffers from many local minima, mainly due to the oscillatory aspect of seismic data. A local gradient approach does not usually converge to the global minimum. We first review the classical full waveform inversion and its limitations. We then present two alternatives to avoid local minima in the determination of the background (large scale) velocity model. The first method is referred to as the Normalized Integration Method (Liu et al., 2011). Its objective function measures the misfit between the integrals of the envelopes of the observed and computed signals. Because we only compare functions that increase with time, the objective function has a more convex shape. The second method is a differential version of the full waveform inversion. This method is closely related to the differential semblance optimization method (Symes, 2008) used in seismic imaging to automatically determine the Earth properties from reflected data. We illustrate the two methods on basic 2-D examples to discuss the advantages and limitations. |
||||

10:00-10:30 | Delprat-Jannaud, F (Institut Francais du Petrole) |
|||

2D nonlinear inversion of walkaway data | Sem 1 | |||

Well-seismic data such as vertical seismic profiles (VSP) provide detailed information about the elastic properties of the subsurface in the vicinity of the well. Heterogeneity of sedimentary terrains can lead to non-negligible multiple scattering, one of the manifestations of the nonlinearity involved in the mapping between elastic parameters and seismic data. Unfortunately this technique is severely hampered by the 1D assumption. We present a 2D extension of the 1D nonlinear inversion technique in the context of acoustic wave propagation. In the case of a subsurface with gentle lateral variations, we propose a regularization technique which aims at ensuring the stability of the inversion in a context where the recorded seismic waves provide a very poor illumination of the subsurface. The resulting nonlinear inverse problem is of huge size. Solving this difficult problem is rewarded by a vertical resolution much higher than the one obtained by standard seismic imaging techniques at distances of about one hundred meters from the well. |
||||

10:30-11:00 | Vasconcelos, I (Schlumberger Cambridge Research) |
|||

Imaging using reciprocity principles: extended images, multiple scattering and nonlinear inversion | Sem 1 | |||

11:00-11:30 | Morning Coffee | |||

11:30-12:30 | Informal Discussion Time | |||

12:30-13:30 | Lunch at Wolfson Court | |||

13:30-14:15 | Robertsson, J | |||

The future of imaging and inversion in a complex Earth | ||||

It is now over 25 years since the introduction of 3D seismic data acquisition and processing. These techniques have proven to be very useful. In fact, in a recent industry wide survey, 3D seismic was regarded as the single most valuable technology for the hydrocarbon industry over the last two decades. The objectives of seismic surveys are to provide a structural image as well as to estimate Earth properties of the sub-surface. Due to the high demand for hydrocarbons, industry has increasingly been exploring substantially more complex or difficult areas, such as deep water or sub-salt reservoirs. As a result, a step-change in technology for inversion and imaging has occurred, made possible by increasingly powerful computational platforms. New imaging methods such as full waveform inversion (Tarantola, Pratt) and Reverse Time Migration (RTM) utilize the full richness of recorded data (as opposed to conventional imaging methods which use simple reflections only). Consequently, industrial scientists have become increasingly aware of the limitations of what has been called 3D seismic data. This has been limited in three respects: i) bandwidth; ii) the lateral extent of source and receiver arrays; iii) aliasing in terms of source and receiver spacing. In my presentation I will show how recent advances overcome some of these limitations. |
||||

13:30-17:30 | OPEN FOR BUSINESS SESSION: Inverse Problems in Oil and Gas Exploration: Contemporary Views | |||

14:15-15:00 | Ellis, D | |||

Earth imaging - a developing picture | Sem 1 | |||

15:00-15:30 | Afternoon tea | |||

15:30-16:15 | Calandra, H | |||

Full Waveform Inversion in Laplace Domain | Sem 1 | |||

Seismic Full Waveform Inversion (FWI) consists in estimating the Earth's subsurface structure based on measurements of physical fields near its surface. It is based on the minimization of an objective function measuring the difference between predicted and observed data. FWI is mostly formulated in the time or Fourier domain. However, FWI diverges if the starting model is far from the true model. This is a consequence of the lack of low frequencies in the seismic sources, which limits the recovery of the large-scale structures in the velocity model. Reformulating FWI in the Laplace domain using a logarithmic objective function yields a fast and efficient method capable of recovering the long-wavelength velocity structure starting from a very simple initial solution, independently of the frequency content of the data. In this presentation we will present FWI formulated in the Laplace domain and its application to synthetic and field seismic data. |
||||
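Schematically (assumed notation; the talk's exact formulation may differ), the Laplace-domain field and the logarithmic objective are

```latex
% Laplace transform of the time-domain wavefield u(t) at damping constant s > 0:
\tilde{u}(s) \;=\; \int_0^{\infty} u(t)\, e^{-s t}\, dt

% Logarithmic least-squares objective over sources, receivers and
% Laplace constants s, for model parameters m:
J(m) \;=\; \tfrac{1}{2} \sum_{\mathrm{src},\,\mathrm{rcv},\, s}
\Bigl( \ln \tilde{u}_{\mathrm{pred}}(s; m) \;-\; \ln \tilde{u}_{\mathrm{obs}}(s) \Bigr)^{2}
```

The damped exponential kernel acts like a zero-frequency probe at several decay rates, which is why long-wavelength structure can be recovered without low-frequency content in the data.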

16:15-17:00 | ten Kroode, F; Smit, D | |||

Seismic inverse problems - towards full wavefield acquisition? | Sem 1 | |||

17:00-17:30 | Panel Discussion | |||

17:30-19:00 | Wine reception | |||

18:45-19:15 | Dinner at Fitzwilliam College (residents only) |

Thursday 15 December | ||||

09:00-09:30 | Booth, R (Schlumberger Brazil Research & Geoengineering Center) |
|||

Inversion of Pressure Transient Testing Data | Sem 1 | |||

In the oilfield, pressure transient testing is an ideal tool for determining average reservoir parameters, but this technique does not fully quantify the uncertainty in the spatial distribution of these parameters. We wish to determine plausible parameter distributions, consistent with both the pressure transient testing data and prior geological knowledge. We used a Langevin-based MCMC technique, adapted to the large number of parameters and data, to identify geological features and characterize the uncertainty. |
||||

09:30-10:00 | Sacks, P (Iowa State University) |
|||

Inverse problems for the potential form wave equation in an annulus | Sem 1 | |||

10:00-10:30 | *Fayard, P; Field, TR (McMaster University) |
|||

Geometrical implications of the compound representation for weakly scattered amplitudes | Sem 1 | |||

It is known that a random walk model yields the multiplicative representation of a coherent scattered amplitude in terms of a complex Ornstein–Uhlenbeck process modulated by the square root of the cross-section. A corresponding biased random walk enables the derivation of the dynamics of a weak coherent scattered amplitude as a stochastic process in the complex plane. Strong and weak scattering patterns differ regarding the correlation structure of their radial and angular fluctuations. Investigating these geometric characteristics yields two distinct procedures to infer the scattering cross-section from the phase and intensity fluctuations of the scattered amplitude. These inference techniques generalize an earlier result demonstrated in the strong scattering case. Their significance for experimental applications, where the cross-section enables tracking of anomalies, is discussed. |
||||

10:30-11:00 | Utyuzhnikov, SV (University of Manchester) |
|||

Inverse Source Problems of Active Sound Control for Composite Domains | Sem 1 | |||

In the active noise shielding problem, a quite arbitrary domain (bounded or unbounded) is shielded from the field (noise), generated outside, via introducing additional sources. Along with noise, the presence of internal (wanted) sound sources is admitted. Active shielding is achieved by constructing additional (secondary) sources in such a way that the total contribution of all sources leads to the noise attenuation. In contrast to passive control, there is no mechanical insulation in the system. In practice, active and passive noise control strategies could often be combined, because passive insulation is more efficient for higher frequencies, whereas active shielding is more efficient for lower frequencies. The problem is formulated as an inverse source problem with the secondary sources positioned outside the domain to be shielded. The solution to the problem is obtained in both the frequency and time domains, and is based on Calderón–Ryaben’kii surface potentials [1]. A key property of these potentials is that they are projections. The constructed solution to the problem requires only the knowledge of the total field at the perimeter of the shielded domain [1-3]. In practice, usually the total field can only be measured. The methodology automatically differentiates between the wanted and unwanted components of the field. A unique feature of the proposed methodology is its capability to cancel the unwanted noise across the volume and keep the wanted sound unaffected. It is important that the technique requires no detailed information of either the properties of the medium or the noise sources. The technique can also be extended to a composite protected region (multiply connected) [4]. Moreover, the overall domain can arbitrarily be split into a collection of subdomains, and those subdomains are selectively allowed to either communicate freely or otherwise be shielded from their peers. 
In doing so, no reciprocity is assumed, i.e., for a given pair of subdomains one may be allowed to hear the other, but not vice versa. Possible applications of this approach to engineering problems such as oil prospecting are discussed. |
||||

11:00-11:30 | Morning Coffee | |||

11:30-12:30 | Informal Discussion Time | |||

12:30-13:30 | Lunch at Wolfson Court | |||

14:00-14:30 | Qiu, L (Purdue University) |
|||

Lipschitz stability of an inverse problem for the Helmholtz equation | Sem 1 | |||

Consider the inverse problem of determining the potential q from the Neumann-to-Dirichlet map of a Schrödinger-type equation. A relevant question, especially in applications, is the stability of the inversion. In this work, a Lipschitz-type stability estimate is established under the a priori assumption that q is piecewise constant with a bounded, known number of unknown values. |
||||

14:30-15:00 | Childs, P (Schlumberger Cambridge Research) |
|||

Numerics of waveform inversion for seismic data | Sem 1 | |||

Depth imaging and inversion of seismic data is becoming commonplace within the seismic industry. However, the inversion procedures used today have a highly non-convex objective function and will often fail unless careful multiscale processing is included in the workflow. In this talk, we will review some approaches to improving the robustness of the procedure. Because the PDE-constrained inversion procedure used in industry can be very expensive due to the large number of PDE solves required, we will review and address some of the numerical challenges in this area. The talk will concentrate mainly on computational developments and will be illustrated with industrial examples from full waveform inversion of seismic data. |
||||
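The multiscale idea can be illustrated on a toy problem. The sketch below (our own construction, not the speaker's code) fits a single travel-time shift to oscillatory data: at high frequency the misfit is riddled with spurious minima (cycle skipping), so the search sweeps from low to high frequency, shrinking the search bracket as the band rises.

```python
import numpy as np

# Toy 1-D "waveform inversion": recover a travel-time shift tau.
# At high frequency the misfit in tau is oscillatory (cycle skipping);
# at low frequency its basin of attraction is wide, so a low-to-high
# frequency sweep keeps the iterate in the correct basin.

t = np.linspace(0.0, 1.0, 400)
tau_true = 0.31                          # "true" shift (synthetic)

def misfit(tau, omega):
    observed = np.cos(omega * (t - tau_true))
    predicted = np.cos(omega * (t - tau))
    return np.sum((predicted - observed) ** 2)

def refine(tau0, omega, half_width, n=2001):
    # local search within a bracket around the current estimate
    lo, hi = max(0.0, tau0 - half_width), min(1.0, tau0 + half_width)
    grid = np.linspace(lo, hi, n)
    vals = np.array([misfit(g, omega) for g in grid])
    return grid[np.argmin(vals)]

tau = 0.5                                # poor starting model
# sweep: the first, low-frequency stage may search the whole interval
# because its misfit is unimodal there; later stages only refine locally
for omega, half_width in [(3.0, 0.5), (20.0, 0.15), (120.0, 0.03)]:
    tau = refine(tau, omega, half_width)

print(f"recovered shift: {tau:.4f} (true value {tau_true})")
```

Running the last, high-frequency stage alone from the same starting point would lock onto the wrong oscillation cycle; the continuation in frequency is what makes the local searches land in the global basin.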

15:00-15:30 | Afternoon Tea | |||

15:30-16:00 | ten Kroode, F (Shell International Exploration and Production) |
|||

A wave equation based Kirchhoff operator and its inverse | Sem 1 | |||

In seismic imaging one tries to compute an image of the singularities in the earth's subsurface from seismic data. Seismic data sets used in the exploration for oil and gas usually consist of a collection of sources and receivers, both positioned at the surface of the earth. Since each receiver records a time series, the ideal seismic data set is five dimensional: sources and receivers each have two spatial coordinates, and these four spatial coordinates are complemented by one time variable. Singularities in the earth give rise to scattering of incident waves. The most common situation is that of reflection against an interface of discontinuity. Reflected and incoming waves are related via reflection coefficients, which depend in general on two angles, namely the angle of incidence and the azimuth angle. Reflection coefficients are therefore also dependent on five variables, namely three location variables and two angles. The classical Kirchhoff integral can be seen as an operator mapping these angle-azimuth dependent reflection coefficients to singly scattered data generated and recorded at the surface. It essentially depends on asymptotic quantities which can be computed via ray tracing. For a known velocity model, seismic imaging comes down to finding a left inverse of the Kirchhoff operator. In this talk I will construct such a left inverse explicitly. The construction uses the well-known concepts of subsurface offset and subsurface angle gathers and is completely implementable in a wave equation framework. Being able to perform such true amplitude imaging in a wave equation based setting has significant advantages in truly complex geologies, where an asymptotic approximation to the wave equation does not suffice. The construction also naturally leads to a reformulation of the classical Kirchhoff operator into a wave equation based variant, which can be used e.g. for wave equation based least-squares migration. 
Finally, I will discuss the invertibility of the new Kirchhoff operator, i.e. I will construct a right inverse as well. |
||||

16:00-16:30 | Demanet, L (MIT) |
|||

Can we determine low frequencies from high frequencies? | Sem 1 | |||

In wave-based imaging, data usually come in a high-frequency band, yet one often wishes to determine large-scale features of the model that predicted them. When is this possible? Both the specifics of wave propagation and the structure of the signals matter in addressing this multifaceted question. I report on some recent progress with Paul Hand and Hyoungsu Baek. The answers are not always pretty. |
||||

16:30-17:00 | Yarman, CE (WesternGeco (Schlumberger)) |
|||

Band-limited ray tracing | Sem 1 | |||

We present a new band-limited ray tracing method that aims to overcome some of the limitations of standard high-frequency ray tracing in complex velocity models, particularly those containing complex boundaries. Our method is based on a band-limited Snell's law, which is derived from the Kirchhoff integral formula by localization around a boundary location of interest using the Fresnel volume. |
||||

17:00-17:30 | Dorn, O (University of Manchester) |
|||

Level Set Methods for Inverse Problems | Sem 1 | |||

19:30-22:00 | Conference Dinner at St Catherine's College |

Friday 16 December | ||||

09:00-09:30 | Vikhansky, A (QMUL) |
|||

Numerical analysis of structural identifiability of electrochemical systems | Sem 1 | |||

The development of an experiment-based model often encounters the so-called identifiability problem. Namely, given a system of (e.g., differential) equations and a set of experiments to perform, the question arises whether the planned experiments allow for reliable identification of the parameters of the model, such as reaction rates or diffusivities. Since in many cases the initial answer is negative, one has to modify the experimental design. In the present research we consider the identifiability of a system of reaction-diffusion equations and explicitly calculate the experimental conditions that allow for the most reliable identification of the model's parameters. In our approach, solving the identifiability problem requires finding the global maximum of a specially designed function, and it is shown that the identifiability criterion equals the ratio of the parameters' uncertainty to the experimental error under a worst-case scenario, i.e., it characterizes the precision of the identification procedure. Since the outcome of our identifiability test is not simply “yes” or “no” but a number, one can modify the experimental conditions in order to minimize the uncertainty. |
||||
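As a concrete illustration of such an identifiability test (our sketch; the two-parameter decay model and the two experimental designs are hypothetical, not the speaker's system), the snippet below scores each design by the worst-case amplification of measurement error into parameter error, i.e. the reciprocal of the smallest singular value of the sensitivity matrix.

```python
import numpy as np

# Numerical identifiability check for a two-parameter model y(t; k1, k2):
# the worst-case ratio of parameter uncertainty to measurement error is
# governed by the smallest singular value of the sensitivity matrix
# S_ij = d y(t_i) / d k_j.

def model(t, k):
    return np.exp(-k[0] * t) + np.exp(-k[1] * t)

def sensitivity(t, k, h=1e-6):
    # central finite-difference sensitivities, one column per parameter
    cols = []
    for j in range(len(k)):
        kp, km = k.copy(), k.copy()
        kp[j] += h
        km[j] -= h
        cols.append((model(t, kp) - model(t, km)) / (2 * h))
    return np.column_stack(cols)

def worst_case_amplification(t, k):
    s = np.linalg.svd(sensitivity(t, k), compute_uv=False)
    return 1.0 / s[-1]          # large value -> poorly identifiable design

k_true = np.array([1.0, 5.0])
design_short = np.linspace(0.0, 0.3, 20)   # only early times observed
design_long = np.linspace(0.0, 4.0, 20)    # covers both decay scales

a_short = worst_case_amplification(design_short, k_true)
a_long = worst_case_amplification(design_long, k_true)
print(f"short design: {a_short:.1f}, long design: {a_long:.1f}")
```

Because the criterion is a number rather than a yes/no answer, designs can be ranked and improved, which is the point made in the abstract.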

09:30-10:00 | Vitale, G (Politecnico di Torino) |
|||

Force Traction Microscopy: an inverse problem with pointwise observations | Sem 1 | |||

Force Traction Microscopy is an inversion method that allows one to obtain the stress field applied by a living cell to its environment on the basis of pointwise knowledge of the displacement produced by the cell itself. This classical biophysical problem, usually addressed in terms of Green's functions, can alternatively be tackled in a variational framework followed by a finite element discretization. In this setting, a Tikhonov functional under suitable regularization is varied in view of its minimization, which naturally suggests the introduction of a new equation based on the adjoint operator of the elasticity problem. The pointwise observations require one to exploit the theory of elasticity extended to forcing terms that are Borel measures. In this work we prove well-posedness of the above problem, borrowing techniques from the field of Optimal Control. We also illustrate a numerical strategy for the inversion method that discretizes the partial differential equations associated with the optimal control problem. A detailed discussion of the numerical approximation of a test problem (with known solution) that contains most of the mathematical difficulties of the real one allows a precise evaluation of the degree of confidence one can have in the numerical results. |
||||
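A minimal numerical analogue of this inversion (ours, with a Gaussian smoothing matrix standing in for the elastic Green operator, and all names illustrative) recovers a localised traction from noisy pointwise displacement observations via Tikhonov regularisation:

```python
import numpy as np

# Recover a force-like source f from pointwise, noisy observations
#   d = (G f)|_obs + noise,
# where G is a smoothing surrogate "Green" matrix, by minimising the
# Tikhonov functional ||G f - d||^2 + alpha ||f||^2.

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)

# surrogate Green operator: row-normalised Gaussian smoothing kernel
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
G /= G.sum(axis=1, keepdims=True)

f_true = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # localised traction
obs = np.arange(0, n, 5)                  # pointwise observation sites
d = (G @ f_true)[obs] + 0.001 * rng.standard_normal(obs.size)

# Tikhonov solution via the regularised normal equations
Gobs = G[obs, :]
alpha = 1e-4
f_rec = np.linalg.solve(Gobs.T @ Gobs + alpha * np.eye(n), Gobs.T @ d)

err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
print(f"relative reconstruction error: {err:.2f}")
```

The sparse observation set mimics the pointwise (measure-valued) data of the real problem; the regularisation parameter alpha trades data fit against stability exactly as in the variational formulation described above.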

10:00-10:30 | *Backhouse, L; Demyanov, V; Christie, M (Heriot Watt University) |
|||

Inverse Problems in the Prediction of Reservoir Petroleum Properties using Multiple Kernel Learning | Sem 1 | |||

In reservoir engineering, a common inverse problem is that of estimating reservoir properties such as porosity and permeability by matching the simulation model to the dynamic production data. Using this model, future predictions can then be made and the uncertainty of these predictions quantified using Bayes' rule. Multiple Kernel Learning (MKL) is a predictive technique that maps input data into a feature space with the use of kernel functions; it has been applied in the petroleum industry to estimate the spatial distribution of porosity and permeability. The parameters of the kernels and the choice of the kernels are determined by matching to hard data for porosity and permeability found at the wells, thus producing a static model that is used as input to the dynamic model. In this paper we show how we combine the above-mentioned inverse problems: we estimate porosity and permeability in a static model and then match to the dynamic production data to tune the parameters in the Multiple Kernel Learning framework. Specifically, we integrate the maximum likelihood estimate from the MKL objective function into the history matching function. |
||||
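The kernel-combination step can be sketched as follows (a hypothetical toy, not the authors' workflow): a kernel ridge regression predictor built from a weighted sum of a short-scale and a long-scale RBF kernel, with the weight tuned against held-out "well" data.

```python
import numpy as np

# Multiple-kernel sketch: predictor K = w*K_short + (1-w)*K_long, with the
# weight w chosen to minimise held-out error on a synthetic porosity log.

rng = np.random.default_rng(1)

def rbf(a, b, scale):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * scale ** 2))

def krr_predict(x_train, y_train, x_test, w, lam=1e-4):
    # kernel ridge regression with the weighted kernel combination
    K = w * rbf(x_train, x_train, 0.05) + (1 - w) * rbf(x_train, x_train, 0.5)
    Kt = w * rbf(x_test, x_train, 0.05) + (1 - w) * rbf(x_test, x_train, 0.5)
    coef = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return Kt @ coef

# synthetic "porosity log": smooth trend plus short-scale variation
def porosity(x):
    return 0.2 + 0.05 * np.sin(2 * np.pi * x) + 0.02 * np.sin(20 * np.pi * x)

wells = rng.uniform(0.0, 1.0, 40)          # sparse hard-data locations
holdout = rng.uniform(0.0, 1.0, 200)       # validation sites

best_w, best_err = 0.0, np.inf
for w in np.linspace(0.0, 1.0, 21):        # tune the kernel weight
    pred = krr_predict(wells, porosity(wells), holdout, w)
    err = np.sqrt(np.mean((pred - porosity(holdout)) ** 2))
    if err < best_err:
        best_w, best_err = w, err

print(f"best kernel weight {best_w:.2f}, RMSE {best_err:.4f}")
```

In the paper's setting the validation criterion would be the history-match misfit to production data rather than held-out hard data, but the tuning loop has the same shape.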

10:30-11:00 | Rouquette, S (University of Montpellier 2) |
|||

Estimation of the heat flux parameters during a static Gas Tungsten Arc Welding experiment | Sem 1 | |||

The Gas Tungsten Arc (GTA) welding process is mainly used for assembling metallic structures that require a high level of safety (and hence excellent joint quality). This welding process is based on an electrical arc created between a tungsten electrode and the base metal (the work-pieces to assemble). An inert gas flow (argon and/or helium) shields the tungsten electrode and the molten metal against oxidation. The energy required for melting the base metal comes from the heat generated by the electrical arc. The GTAW process involves a combination of physical phenomena: heat transfer, fluid flow and self-induced electromagnetic force. The mechanisms involved in the weld pool formation and geometry are surface tension, impinging arc pressure, buoyancy force and Lorentz force. It is well known that for welding currents below 200 A, GTAW phenomena are well described by a heat transfer – fluid flow model with the Marangoni force on the weld pool. Knowledge of the heat flux at the arc plasma – work-piece interface is one of the key parameters for establishing a predictive multiphysics GTAW simulation. In this work, we investigate the estimation of the heat source by an inverse technique with a heat transfer and fluid flow model of the GTAW process. The heat source is described by a Gaussian function involving two parameters: the process efficiency and the Gaussian radius. These two parameters are not known accurately and need to be estimated, so an inverse technique regularized with the Levenberg-Marquardt algorithm (LMA) is employed to estimate them. All the stages of the LMA are described. A sensitivity analysis has been carried out to determine whether the thermal data and thermocouple locations are suitable for estimating the two parameters simultaneously, and the linear dependence between the two estimated parameters is studied. The sensitivity matrix is then built and the inverse heat flux problem (IHFP) is solved. 
The robustness of the stated IHFP is investigated through a few numerical cases. Lastly, the IHFP is solved with experimental thermal data and the results are discussed. |
||||
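The Levenberg-Marquardt estimation of the two source parameters can be sketched as follows (our toy; the talk's forward model is a full heat-transfer/fluid-flow simulation, replaced here by an explicit Gaussian flux profile, and all names are illustrative):

```python
import numpy as np

# Compact Levenberg-Marquardt loop estimating the two heat-source
# parameters: efficiency eta and Gaussian radius r, from synthetic
# "thermocouple" data with additive noise.

rng = np.random.default_rng(2)
x = np.linspace(-3.0, 3.0, 60)            # sensor positions (arbitrary units)

def forward(p):
    eta, r = p
    return eta * np.exp(-x ** 2 / (2 * r ** 2))

p_true = np.array([0.75, 1.2])
data = forward(p_true) + 0.005 * rng.standard_normal(x.size)

def jacobian(p, h=1e-7):
    # central finite-difference Jacobian of the forward model
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        pp, pm = p.copy(), p.copy()
        pp[j] += h
        pm[j] -= h
        J[:, j] = (forward(pp) - forward(pm)) / (2 * h)
    return J

p = np.array([0.5, 2.0])                  # rough initial guess
lam = 1e-2                                # LM damping parameter
for _ in range(50):
    res = data - forward(p)
    J = jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ res)
    if np.sum((data - forward(p + step)) ** 2) < np.sum(res ** 2):
        p, lam = p + step, lam * 0.5      # accept step, relax damping
    else:
        lam *= 2.0                        # reject step, increase damping

print(f"estimated eta={p[0]:.3f}, r={p[1]:.3f}")
```

The damping parameter lam plays the regularising role mentioned in the abstract: it interpolates between Gauss-Newton steps (small lam) and steepest-descent steps (large lam).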

11:00-11:30 | Morning Coffee | |||

11:30-12:30 | Informal Discussion Time | |||

12:30-13:30 | Lunch at Wolfson Court | |||

14:00-14:30 | Marsland, S (Massey University) |
|||

Diffeomorphic Image Registration | Sem 1 | |||

The deformation of an image so that its appearance more closely matches that of another image (image registration) has applications in many fields, from medical image analysis through evolutionary biology and fluid dynamics to astronomy. Over recent years there has been a great deal of interest in smooth, invertible (i.e., diffeomorphic) warps, not least because the underlying Euler-Poincaré PDEs are geodesic equations on the diffeomorphism group with respect to a group-invariant metric. In this talk I will summarise the work in the field from the inverse problems point of view and highlight areas of future work. |
||||

14:30-15:00 | Nolan, C (University of Limerick) |
|||

Microlocal Analysis of Bistatic Synthetic Aperture Radar Imaging | Sem 1 | |||

15:00-15:30 | Afternoon Tea | |||

15:30-16:00 | Chauris, H (Mines Paristech) |
|||

Finite difference resistivity modeling on unstructured grids with large conductivity contrasts | Sem 1 | |||

The resolution of the 3-D electrical forward problem faces several difficulties. Besides the singularity at the source location, major issues are caused by the definition of the computational domain to match a particular topography and by high conductivity contrasts. To address these issues, we combine two methods. First, we implement a specific finite difference method that takes specified interfaces in elliptic problems into account; here, the contrasts are defined along grid lines. Second, we extend the method to unstructured meshes by integrating it into the generalized finite difference technique. In practice, once the conductivity model is defined, the approach does not need to explicitly specify where the large contrasts are located. Several numerical tests are carried out for various Poisson problems and show a high degree of accuracy. |
||||
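A 1-D analogue of the interface treatment (our illustration, not the authors' 3-D code) shows how cell-wise conductivities, entering the stencil through face values, handle a four-order-of-magnitude contrast along grid lines without special-casing the interface:

```python
import numpy as np

# Solve -(sigma u')' = 0 on [0,1], u(0)=0, u(1)=1, with sigma jumping by
# four orders of magnitude at x = 0.5.  The contrast lies on a grid line
# (a cell face), so the standard flux-based stencil resolves it exactly.

n = 200                                    # number of cells
h = 1.0 / n
x_faces = np.linspace(0.0, 1.0, n + 1)     # node positions
x_cells = 0.5 * (x_faces[:-1] + x_faces[1:])
sigma = np.where(x_cells < 0.5, 1.0, 1e4)  # 1 : 10^4 contrast

# assemble the interior system A u = b (Dirichlet ends eliminated)
A = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for i in range(n - 1):                     # unknown: u at interior node i+1
    A[i, i] = sigma[i] + sigma[i + 1]
    if i > 0:
        A[i, i - 1] = -sigma[i]
    if i < n - 2:
        A[i, i + 1] = -sigma[i + 1]
b[-1] = sigma[n - 1] * 1.0                 # u(1) = 1 boundary contribution

u = np.linalg.solve(A, b)

# exact solution: piecewise linear with a single continuous flux q
q = 1.0 / (0.5 / 1.0 + 0.5 / 1e4)
exact = np.where(x_faces[1:-1] < 0.5,
                 q * x_faces[1:-1] / 1.0,
                 1.0 - q * (1.0 - x_faces[1:-1]) / 1e4)
fd_error = np.abs(u - exact).max()
print(f"max nodal error: {fd_error:.2e}")
```

Because the jump falls on a face, the discrete fluxes reproduce the exact piecewise-linear solution up to solver precision; when an interface cuts between grid lines, face conductivities are instead formed by harmonic averaging, which is the situation the specialised schemes in the talk address.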

16:00-16:30 | Santosa, F (University of Minnesota) |
|||

Bar Code Scanning -- An Inverse Problem for Words | Sem 1 | |||

Bar codes are ubiquitous -- they are used to identify products in stores, parts in a warehouse, and books in a library, etc. In this talk, the speaker will describe how information is encoded in a bar code and how it is read by a scanner. The presentation will go over how the decoding process, from scanner signal to coded information, can be formulated as an inverse problem. The inverse problem involves finding the "word" hidden in the signal. What makes this inverse problem, and the approach to solve it, somewhat unusual is that the unknown has a finite number of states. |
||||
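A toy version of this finite-state inversion (our construction, not the speaker's formulation) renders a binary "word" as a blurred scanner signal and decodes it by exhaustive search over all candidate words:

```python
import numpy as np
from itertools import product

# Bar-code toy: the scanner signal is a blurred rendering of a binary
# word; decoding searches the finite set of candidate words for the best
# fit, so the unknown has finitely many states and the inversion is
# combinatorial rather than continuous.

def render(word, samples_per_bar=8, blur=3.0):
    # piecewise-constant bar pattern convolved with a Gaussian "spot"
    bars = np.repeat(np.array(word, dtype=float), samples_per_bar)
    k = np.arange(-10, 11)
    kernel = np.exp(-k ** 2 / (2 * blur ** 2))
    kernel /= kernel.sum()
    return np.convolve(bars, kernel, mode="same")

rng = np.random.default_rng(3)
true_word = (1, 0, 1, 1, 0, 0, 1, 0)
signal = render(true_word) + 0.05 * rng.standard_normal(8 * 8)

# exhaustive search over all 2^8 candidate words
best = min(product([0, 1], repeat=8),
           key=lambda w: np.sum((render(w) - signal) ** 2))
print("decoded word:", best)
```

Real symbologies restrict the candidate set far below 2^n via code words and check digits, which both shrinks the search and adds error detection.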

16:30-17:00 | Symes, W (Rice University) |
|||

Position tomography and seismic inversion | Sem 1 | |||

Active source seismic data may depend on more parameters than the spatial dimension of the earth model, and thus must satisfy certain internal consistency conditions. Membership in the kernel of an annihilation operator provides one useful way to express these conditions. For linearized data simulation with a smooth reference model, annihilators may belong to well-studied classes of oscillatory integral operators, which have a rich geometric structure. I will describe generally how annihilators arise and lead to inversion algorithms, and specifically how space-shift annihilators and the associated position tomography problems may be used to determine the reference model in the linearized description of reflected waveform inversion. |
||||

18:45-19:15 | Dinner at Fitzwilliam College (residents only) |