## Team: Research Topics

### Inverse Problems

**Nonlinear Inverse Scattering Problems (ISPs)**

The rapid and dramatic development of medical imaging modalities such as CT, MRI and OCT was based on solving linear inverse problems. Today, one of the grand challenges in tomographic imaging is to find an efficient and reliable solution to the nonlinear inverse problem. The nonlinearity in question arises from the multiple scattering of the waves that are used to probe the medium. In mono-energetic X-ray imaging, the inverse problem is linear because a well-defined X-ray trajectory can be associated with each measurement. The integral attenuation along the ray gives rise to a linear ray transform of the medium (the Radon transform and its various generalizations). However, in physical situations involving multiple scattering, it is not possible to define rays or trajectories along which the particles or fields propagate.
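The linearity of the ray transform can be illustrated with a minimal sketch (the function name and the restriction to two projection angles are illustrative, not from any particular library): line integrals of a superposition of two media equal the superposition of their line integrals.

```python
import numpy as np

def ray_transform(f):
    """Toy discrete ray transform: line integrals of image f along
    horizontal and vertical rays (projections at 0 and 90 degrees)."""
    return np.concatenate([f.sum(axis=0), f.sum(axis=1)])

rng = np.random.default_rng(0)
f = rng.random((8, 8))  # first medium
g = rng.random((8, 8))  # second medium

# Linearity: the transform of a superposition is the superposition of
# the transforms -- the defining property of the linearized problem.
lhs = ray_transform(2.0 * f + 3.0 * g)
rhs = 2.0 * ray_transform(f) + 3.0 * ray_transform(g)
print(np.allclose(lhs, rhs))  # True
```

This property is exactly what fails once multiple scattering sets in, as discussed next.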

Therefore, as soon as multiple scattering sets in, the inverse problem becomes nonlinear. This is true, in particular, in problems involving classical sound and electromagnetic waves. Here the nonlinearity of the inverse problem is easily understood: the underlying equations are linear in the field but not in the medium. In other words, one can imagine a homogeneous medium with a scattering object A inside and another medium with a scattering object B. We can measure the scattered field in each medium. If we now make a medium in which both scatterers A and B are present simultaneously, the scattered field is not given by the sum of the scattered fields measured in the first two cases. This lack of a superposition principle with respect to the medium inhomogeneities makes the inverse scattering problem fundamentally nonlinear.
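The failure of superposition can be checked numerically in a toy discrete scattering model (an assumed setup for illustration only: a small random matrix stands in for the background Green's function, and diagonal matrices stand in for the two scatterers):

```python
import numpy as np

# Toy model: with background Green's function G0 and potential V, the
# total field obeys u_tot = u_in + G0 V u_tot, so the scattered field is
#     u_sc = G0 V u_tot,   u_tot = (I - G0 V)^{-1} u_in.
n = 6
rng = np.random.default_rng(1)
G0 = 0.1 * rng.standard_normal((n, n))  # stand-in Green's function
u_in = np.ones(n)                        # incident field

def scattered_field(V):
    u_tot = np.linalg.solve(np.eye(n) - G0 @ V, u_in)
    return G0 @ V @ u_tot

VA = np.diag(rng.random(n))  # scatterer A alone
VB = np.diag(rng.random(n))  # scatterer B alone

# The field scattered by A and B together differs from the sum of the
# fields scattered by each alone: the cross terms (waves bouncing
# between A and B) break superposition in the medium.
together = scattered_field(VA + VB)
separate = scattered_field(VA) + scattered_field(VB)
print(np.allclose(together, separate))  # False
```

Expanding the solve as a multiple-scattering series makes the discrepancy explicit: the difference begins at second order, with terms like `G0 VA G0 VB u_in` that involve both scatterers at once.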

The mainstream approach to solving nonlinear ISPs numerically is about 300 years old and goes back to Newton. The method is based on defining a certain functional of the unknown contrast and the data (the cost function), which is constructed from some norm of the equation error plus a regularizing term. The goal is then to find the contrast that minimizes the cost function. Newton-type methods start by linearizing the cost function near some initial guess for the unknown contrast, solving the linearized equations to obtain an update of the initial guess, and then repeating the process iteratively. The two major problems of the Newtonian approach are the existence of multiple local minima of the cost function and low computational efficiency in the presence of large data sets.
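The iteration described above can be sketched as follows (the forward map, its Jacobian, and the Tikhonov-style regularization below are generic placeholders, not the group's actual operators):

```python
import numpy as np

def gauss_newton(F, J, d, v0, alpha=1e-3, n_iter=20):
    """Minimize ||F(v) - d||^2 + alpha ||v||^2 over the contrast v
    by repeated linearization (regularized Gauss-Newton)."""
    v = v0.copy()
    for _ in range(n_iter):
        r = F(v) - d                         # equation error at current guess
        Jv = J(v)                            # linearization of F near v
        A = Jv.T @ Jv + alpha * np.eye(v.size)
        dv = np.linalg.solve(A, -(Jv.T @ r + alpha * v))
        v = v + dv                           # update the guess and repeat
    return v

# Toy forward map: mildly nonlinear in the contrast v.
F = lambda v: v + 0.3 * v**2
J = lambda v: np.diag(1.0 + 0.6 * v)

v_true = np.array([0.5, -0.2, 0.8])
d = F(v_true)                                # noiseless synthetic data
v_rec = gauss_newton(F, J, d, v0=np.zeros(3))
print(np.round(v_rec, 3))
```

Note that each pass through the loop requires one forward-map evaluation and one Jacobian; in a realistic tomographic setting these are the expensive forward solves discussed below.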

The existence of local minima is a fundamental difficulty that cannot be easily overcome. The various Gauss-Newton methods can be formulated in a manner that guarantees that the equation error decreases with each iteration. However, the process can descend into a local minimum that is very far from the true solution. In a multidimensional space, the descent can take many iterations and be very time consuming. In fact, it is easy to find Gauss-Newton numerical simulations in which the error of the equation steadily declines while the error of the target grows. If the true contrast is not known (as is expected to be the case in realistic applications), there is no way to tell whether the iterations are converging to the true solution, and the resultant reconstruction can be misleading.
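A one-dimensional toy example (illustrative only, not the group's actual problem) already exhibits this pathology: the cost has two minima, the iteration descends to the wrong one, and the equation error goes to zero while the distance to the true solution grows.

```python
# Minimize the equation error |F(v) - d|^2 for F(v) = v**2, d = 1.
# The cost has two minima, v = +1 and v = -1; only v = +1 is "true".
F = lambda v: v**2
dF = lambda v: 2.0 * v
d, v_true = 1.0, 1.0

v = -0.5                                     # initial guess on the wrong side
for _ in range(50):
    r = F(v) - d
    v -= dF(v) * r / (dF(v)**2 + 1e-12)      # scalar Gauss-Newton step

print(abs(F(v) - d))    # equation error: driven to zero
print(abs(v - v_true))  # target error: grew from 1.5 to about 2
```

Monitoring the equation error alone would suggest perfect convergence here, which is exactly why such reconstructions can be misleading when the true contrast is unknown.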

The problem of large data sets is also a serious impediment. Each Gauss-Newton iteration involves solving the forward problem (that is, evaluating the cost function at each successive approximation of the contrast) and computing the functional derivative of the cost function with respect to the contrast. These operations are computationally costly. For example, if finite-element or finite-difference methods are employed for solving the forward problem, the solution must be repeated for every source or illumination pattern used (and there may be thousands of them with modern experimental techniques). This makes it problematic to utilize the large data sets that are readily available with modern measurement techniques, even with the use of supercomputers.

**Data-Compatible T-Matrix Completion (DCTMC)**

DCTMC is a novel class of algorithms for solving nonlinear ISPs. The main idea of DCTMC is to relax the requirement that the unknown potential be strictly local. We then define a class of T-matrices that are compatible with the data (this step turns out not to be limited by the size of the data set) and iteratively complete the unknown elements of T so that the latter corresponds to an interaction V that is as close to diagonal as is numerically possible. Here we utilize the one-to-one correspondence between T and V.
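The one-to-one correspondence between T and V that DCTMC relies on can be verified numerically in a small discrete sketch (the matrix forms below are the standard scattering-theory relations; the random stand-in for the Green's function is an assumption for illustration):

```python
import numpy as np

# Discrete form of the T-V correspondence:
#     T = V (I - G0 V)^{-1}      (interaction -> T-matrix)
#     V = T (I + G0 T)^{-1}      (T-matrix -> interaction)
n = 5
rng = np.random.default_rng(2)
G0 = 0.1 * rng.standard_normal((n, n))  # stand-in Green's function
V = np.diag(rng.random(n))              # diagonal (local) interaction

I = np.eye(n)
T = V @ np.linalg.inv(I - G0 @ V)       # forward map  V -> T
V_back = T @ np.linalg.inv(I + G0 @ T)  # inverse map  T -> V

print(np.allclose(V_back, V))  # True: the map is invertible
```

The inversion follows algebraically from `I + G0 T = (I - G0 V)^{-1}`, so recovering a V from a completed T is always well defined; the nontrivial part of DCTMC is driving the completed T toward one whose corresponding V is diagonal.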

DCTMC is described in detail here and here.

Shown below is a set of reconstructions (tomographic slices of a rectangular sample) for the ISP of inverse diffraction with scalar waves. This ISP is encountered, in particular, in ultrasound and seismic imaging. Reconstructions are performed for various values of the contrast. FR denotes linearized reconstruction using the first Rytov approximation, FB denotes linearized reconstruction using the first Born approximation, and NL denotes fully nonlinear reconstruction using DCTMC. As the contrast (chi_0) increases, the linearized reconstructions break down while the DCTMC reconstruction remains stable.

**Image Reconstruction Methods for Optical Tomography**

This topic is under development...