MEDICAL IMAGING TECHNIQUES INCLUDING ADAPTIVE RECONSTRUCTION

20260030816 · 2026-01-29

    Abstract

    A computer-implemented method for adaptive image reconstruction can include: generating, by a processor, a preliminary reconstruction image using a preliminary reconstruction configuration; automatically adjusting the preliminary reconstruction configuration to an updated reconstruction configuration, by, at least: obtaining, by the processor, preliminary information from the preliminary reconstruction image; accessing, by the processor, a database of reconstruction configurations, the database providing a mapping of characteristics of images and objects in the images to reconstruction configurations; and performing, by the processor, a lookup operation to identify the updated reconstruction configuration based on the preliminary information; and generating, by the processor, a reconstruction image using the updated reconstruction configuration. Intermediate multifrequency images are generated during generating the preliminary reconstruction image and/or the reconstruction image and can be used to obtain preliminary information from the preliminary reconstruction image or reconstruction information from the reconstruction image, respectively.

    Claims

    1. A computer-implemented method for adaptive image reconstruction, comprising: generating, by a processor, a preliminary reconstruction image using a preliminary reconstruction configuration; automatically adjusting the preliminary reconstruction configuration to an updated reconstruction configuration, by, at least: obtaining, by the processor, preliminary information from the preliminary reconstruction image; accessing, by the processor, a database of reconstruction configurations, the database providing a mapping of characteristics of images and objects in the images to reconstruction configurations; and performing, by the processor, a lookup operation to identify the updated reconstruction configuration based on the preliminary information; and generating, by the processor, a reconstruction image using the updated reconstruction configuration.

    2. The method of claim 1, wherein generating the reconstruction image using the updated reconstruction configuration comprises: applying an inverse scattering algorithm based on propagation through free space; and applying a phase mask at each propagation step of the inverse scattering algorithm to give a total field comprising an incident field plus a scattered field.

    3. The method of claim 2, wherein the inverse scattering algorithm is adapted to a particular coordinate system.

    4. The method of claim 2, wherein a size of steps used in the inverse scattering algorithm for the generating of the reconstruction image is based on anisotropic pixel selection.

    5. The method of claim 4, wherein at least two iterations of the inverse scattering algorithm use a different anisotropic pixel selection.

    6. The method of claim 4, wherein the anisotropic pixel selection includes a distance in one direction between surfaces of the steps used in the inverse scattering algorithm.

    7. The method of claim 6, wherein the generating of the reconstruction image is performed at multiple frequencies, wherein the distance in one direction between surfaces of the steps for the anisotropic pixel selection varies across the multiple frequencies.

    8. The method of claim 7, wherein the distance in one direction between surfaces of the steps for the anisotropic pixel selection further varies across iterations for a particular frequency.

    9. The method of claim 1, further comprising: receiving an indication of a coordinate system from a selection of a cartesian coordinate system and a curvilinear coordinate system; and filtering available reconstruction configurations at the database of reconstruction configurations according to the coordinate system before generating the preliminary reconstruction image using the preliminary reconstruction configuration.

    10. The method of claim 1, wherein the adaptive imaging method is adapted to a curvilinear coordinate system, wherein available reconstruction configurations including for the preliminary reconstruction configuration and the updated reconstruction configuration are based on the curvilinear coordinate system.

    11. The method of claim 10, further comprising: receiving data collected using a cylindrical array of transmitters and/or receivers; and transforming the data into cylindrical coordinates.

    12. The method of claim 1, further comprising: receiving data collected using vertically limited diffraction-less beams.

    13. The method of claim 12, wherein the preliminary reconstruction image and/or the reconstruction image is generated by applying a 2D reconstruction algorithm.

    14. The method of claim 12, wherein the preliminary reconstruction image and/or the reconstruction image is generated by applying a 3D reconstruction algorithm.

    15. The method of claim 12, wherein thicknesses of the diffraction-less beams are varied.

    16. The method of claim 12, wherein the diffraction-less beams are rotationally limited.

    17. The method of claim 1, further comprising receiving data captured by an imaging procedure using a multifrequency data collection method.

    18. The method of claim 17, wherein the multifrequency data collection method includes transmitting a pulse containing multiple frequencies towards an object being imaged.

    19. The method of claim 1, wherein the preliminary reconstruction configuration comprises a set of instructions, comprising forming a sequence of intermediate preliminary reconstruction images, wherein each intermediate preliminary reconstruction image has an increased resolution relative to its previous intermediate preliminary reconstruction image.

    20. The method of claim 19, wherein the sequence of intermediate preliminary reconstruction images starts at a low frequency intermediate preliminary reconstruction image and proceeds in a stepwise fashion to higher frequency intermediate preliminary reconstruction images.

    21. The method of claim 19, wherein the sequence of intermediate preliminary reconstruction images is carried out at frequencies in a non-monotonic progression.

    22. The method of claim 1, wherein the updated reconstruction configuration comprises a set of instructions, comprising forming a sequence of intermediate reconstruction images, wherein each intermediate reconstruction image has an increased resolution relative to its previous intermediate reconstruction image.

    23. The method of claim 22, wherein the sequence of intermediate reconstruction images starts at a low frequency intermediate reconstruction image and proceeds in a stepwise fashion to higher frequency intermediate reconstruction images.

    24. The method of claim 22, wherein the sequence of intermediate reconstruction images is carried out at frequencies in a non-monotonic progression.

    25. The method of claim 1, wherein the preliminary reconstruction configuration comprises a set of instructions, comprising: capturing an entire volume of a 3-dimensional image space; using a maximum distance between cross-sections of the 3-dimensional image space; using a limited range of frequencies to generate the preliminary reconstruction image; and using a limited number of iterations to generate the preliminary reconstruction image.

    26. The method of claim 25, wherein the updated reconstruction configuration comprises a set of instructions, comprising: capturing a portion of the volume of the 3-dimensional image space; using a limited distance between cross-sections of the 3-dimensional image space; using an expanded range of frequencies to generate the reconstruction image; and using an increased number of iterations to generate the reconstruction image.

    27. The method of claim 1, further comprising: detecting one or more errors in the preliminary reconstruction image in real time while generating the preliminary reconstruction image; stopping generating the preliminary reconstruction image; selecting a new preliminary reconstruction configuration from the database of reconstruction configurations based on the detected one or more errors in the preliminary reconstruction image; and restarting the method beginning with generating the preliminary reconstruction image using the new preliminary reconstruction configuration.

    28. The method of claim 27, wherein generating the preliminary reconstruction image comprises assigning values to a plurality of voxels, wherein detecting one or more errors in the preliminary reconstruction image in real time while generating the preliminary reconstruction image comprises: evaluating values of voxels located near each other; and detecting an error if the values of voxels located near each other are above or below a voxel value threshold.

    29. The method of claim 28, wherein the object being imaged is a patient's breast, wherein detecting an error if the values of voxels located near each other are above or below a voxel value threshold comprises detecting the presence of a silicone implant if the values of voxels located near each other are above a silicone implant voxel value threshold.

    30. The method of claim 28, wherein the object being imaged is a patient's breast, wherein evaluating values of voxels located near each other comprises evaluating values of voxels near a patient's chest wall, wherein detecting an error if the values of voxels located near each other are above or below a voxel value threshold comprises detecting the presence of fatty tissue or non-biological material near the patient's chest wall if the values of voxels located near the patient's chest wall are above a chest wall voxel value threshold.

    31. The method of claim 1, further comprising: detecting one or more errors in the reconstruction image in real time while generating the reconstruction image; and stopping generating the reconstruction image.

    32. The method of claim 31, further comprising: selecting a new preliminary reconstruction configuration from the database of reconstruction configurations based on the detected one or more errors in the reconstruction image; and restarting the method beginning with generating the preliminary reconstruction image using the new preliminary reconstruction configuration.

    33. The method of claim 31, further comprising: selecting a new reconstruction configuration from the database of reconstruction configurations based on the detected one or more errors in the reconstruction image; and restarting the method beginning with generating the reconstruction image using the new reconstruction configuration.

    34. The method of claim 31, wherein generating the reconstruction image comprises large-scale optimization, wherein detecting one or more errors in the reconstruction image in real time while generating the reconstruction image comprises: evaluating values of a residual of the reconstruction image; and detecting a local minimum if the values of the residual are above a local minimum threshold value.

    35. The method of claim 1, further comprising: obtaining, by the processor, reconstruction information from the reconstruction image; and outputting the reconstruction image and reconstruction information.

    36. The method of claim 35, wherein the reconstruction image is a multifrequency image, wherein intermediate multifrequency images are generated during generating the reconstruction image, wherein obtaining, by the processor, reconstruction information from the reconstruction image comprises: calculating a frequency-independent wavenumber by removing a frequency dependent part of a wavenumber from a Helmholtz equation based model using information across frequencies of the intermediate multifrequency images.

    37. The method of claim 36, wherein obtaining, by the processor, reconstruction information from the reconstruction image further comprises: estimating mass density using the frequency-independent wavenumber.

    38. The method of claim 37, wherein obtaining, by the processor, reconstruction information from the reconstruction image further comprises: identifying objects using the estimated mass density.

    39. The method of claim 35, wherein the reconstruction image is a multifrequency image, wherein intermediate multifrequency images are generated during generating the reconstruction image, wherein obtaining, by the processor, reconstruction information from the reconstruction image comprises: identifying frequency-dependent characteristics using values across frequencies of the intermediate multifrequency images, including attenuation and speed of sound.

    40. The method of claim 39, wherein obtaining, by the processor, reconstruction information from the reconstruction image further comprises: estimating porosity based on the identified frequency-dependent characteristics.

    41. The method of claim 1, wherein the preliminary reconstruction image is a multifrequency image, wherein intermediate multifrequency images are generated during generating the preliminary reconstruction image, wherein obtaining, by the processor, preliminary information from the preliminary reconstruction image comprises: calculating a frequency-independent wavenumber by removing a frequency dependent part of a wavenumber from a Helmholtz equation based model or paraxial approximation using information across frequencies of the intermediate multifrequency images.

    42. The method of claim 35, further comprising: obtaining an attenuation image of the reconstruction image; generating, by the processor, a reflection image; performing a morphological operation with respect to the attenuation image to generate processed attenuation image; fusing the processed attenuation image with the reflection image to generate a final reflection image; and outputting the final reflection image.

    43. The method of claim 42, wherein the morphological operation comprises erosion.

    44. The method of claim 1, wherein the object being imaged is a patient's breast, wherein obtaining, by the processor, preliminary information from the preliminary reconstruction image comprises estimating mammographic density by: separating exterior voxels from breast voxels of the preliminary reconstruction image; segmenting high speed value breast voxels from other breast voxels of the preliminary reconstruction image to generate a first segmented image, wherein high speed value breast voxels are breast voxels having a speed value above a threshold; removing, from the first segmented image, high speed value breast voxels corresponding to skin tissue of the patient's breast to generate a second segmented image; and calculating mammographic density by determining a percentage of the high speed value breast voxels in the second segmented image.

    45. The method of claim 44, wherein obtaining, by the processor, preliminary information from the preliminary reconstruction image further comprises obtaining a size of an object being imaged, wherein the database of reconstruction configurations provides a mapping of object size and mammographic density to reconstruction configurations.

    46. The method of claim 1, wherein obtaining, by the processor, preliminary information from the preliminary reconstruction image comprises identifying a size of an object being imaged by: separating a volume of the object being imaged from an exterior volume; and quantifying the size of the object.

    47. A system for adaptive image reconstruction, comprising: a processor; a storage system; a database of reconstruction configurations; a program stored at the storage system and comprising instructions that when executed by the processor direct the system to at least: generate a preliminary reconstruction image using a preliminary reconstruction configuration; automatically adjust the preliminary reconstruction configuration to an updated reconstruction configuration, by, at least: obtaining preliminary information from the preliminary reconstruction image; accessing the database of reconstruction configurations, the database providing a mapping of characteristics of images and objects in the images to reconstruction configurations; and performing a lookup operation to identify the updated reconstruction configuration based on the preliminary information; and generate a reconstruction image using the updated reconstruction configuration.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0009] FIG. 1 illustrates an ultrasound-based tomographic system that may be used to collect data used in the described image reconstruction and analysis techniques.

    [0010] FIG. 2 illustrates a process flow diagram of a process that can be carried out by an acquisition control system.

    [0011] FIG. 3 illustrates an example architecture for adaptive imaging.

    [0012] FIGS. 4A and 4B illustrate an example computer-implemented method for adaptive image reconstruction.

    [0013] FIG. 5 illustrates an example adaptive imaging process for generating transmission/attenuation image(s).

    [0014] FIGS. 6A-6D illustrate results of a cylindrical coordinate system-based algorithm.

    [0015] FIG. 7 shows reflection images of a breast, including the fusing of a processed attenuation image and reflection image.

    [0016] FIGS. 8A and 8B illustrate light wave propagation for diffraction-less beams.

    [0017] FIG. 9 shows a log plot of attenuation coefficient vs frequency from Bamber, 1986, Attenuation and absorption. Physical Principles of Medical Ultrasonics, ed. C R Hill.

    [0018] FIGS. 10A-10D illustrate identification of tissue type through the power law variation of attenuation with frequency.

    [0019] FIG. 11 illustrates propagation between a transmitter and receiver.

    [0020] FIG. 12 shows a plot of Cs vs compressional speed.

    [0021] FIGS. 13A-13D provide images showing application of porosity estimation using Biot as described herein.

    [0022] FIG. 14 is a schematic showing the reflection coefficients relevant to the layered-medium Green's function approach.

    [0023] FIG. 15 illustrates a rectangular scattering matrix according to an embodiment of the present invention.

    [0024] FIG. 16 illustrates a N-by-N scattering region according to an embodiment of the present invention.

    [0025] FIG. 17 illustrates a coalesced N-by-N scattering region according to the present invention.

    [0026] FIG. 18 shows a transducer coupling according to an embodiment of the present invention.

    [0027] FIG. 19 shows the flowchart for the solution of the forward problem by means of the Parabolic FFT Marching method.

    [0028] FIGS. 20A/B/C show the flowchart for the solution of the Jacobian of the forward problem for the Parabolic FFT Marching method.

    [0029] FIGS. 21A/B/C show the flowchart for the action of the Hermitian conjugate of the Jacobian for the Parabolic FFT Marching method.

    [0030] FIGS. 22A/B/C/D show the flowchart for the application of the conjugate gradient method applied to the generalization of the Propagation-Backpropagation method of Inverse Scattering.

    [0031] FIG. 23 is the flowchart for the original Propagation-Backpropagation Method described in the paper A Propagation-Backpropagation Method for Ultrasound Tomography, Frank Natterer, which is incorporated herein by reference.

    [0032] FIGS. 24A/B/C/D/E show the flowchart for the brightness maximization based phase aberration correction algorithm.

    [0033] FIG. 25 shows the basic geometry for the paper A Propagation-Backpropagation Method for Ultrasound Tomography, Frank Natterer, which is incorporated herein by reference. This is referred to as FIG. 1 in that paper. The square Q.sub.j is the square of sidelength 2 whose boundary is made up of .sub.j (side-scatter directions), .sub.j.sup. (backscatter direction), and .sub.j.sup.+ (forward scatter direction). It encompasses the region 22, which contains the support of the object function . .sub.j is the direction of the incident wavefield propagation.

    [0034] FIGS. 26A and 26B show an elongated rectangle Q.sub.j enclosing the region , with boundaries .sub.j (side-scatter directions), .sub.j.sup. (Backscatter direction), and .sub.j.sup.+ (forward scatter direction). .sub.j is the direction of the incident wavefield propagation (for FIG. 26A).

    [0035] FIG. 27 shows the geometry of an acoustic transducer array illuminating an anatomical region through an aberrating layer of fat. A region of interest (ROI), selected by the user, is also shown. The transducer elements that contribute to the image in the ROI are denoted e.sub.m1 to e.sub.m2.

    [0036] FIG. 28 shows the method for solving for the object function by means of a SQUARE nonlinear map, iteratively, in the scattered field domain.

    [0037] FIG. 29 shows the iterative method for solving for the object function by means of a SQUARE nonlinear map in the object function () domain.

    [0038] FIG. 30 shows the geometric set up for a typical calibration procedure.

    [0039] FIG. 31 shows a comparison of the normalized total fields' (predicted and measured) magnitude versus receiver position.

    [0040] FIG. 32 shows the comparison of the normalized total fields' phase. Here, the two fields are the predicted field using the starting values for the scattering parameters, and the measured field.

    [0041] FIG. 33 shows the detailed model used for the receiver in the acoustic 2D case, in a typical calibration setup.

    [0042] FIG. 34 shows the geometric setup used for the calibration of the receiver and transmitter.

    [0043] FIG. 35 shows the scattering parameters used in the 2D acoustic calibration procedure typical of the calibration procedure used in inverse scattering, both for acoustic and EM modalities.

    [0044] FIG. 36 illustrates a generic imaging algorithm, according to the present invention.

    [0045] FIGS. 37A-37D detail an expanded imaging algorithm with respect to that shown in FIG. 36.

    [0046] FIG. 38 shows the generic solution to the forward problem according to the present invention.

    [0047] FIGS. 39A and 39B show the generic solution to the forward problem (or any square system) using a biconjugate gradient algorithm according to an embodiment of the present invention.

    [0048] FIGS. 40A and 40B show the generic solution to the forward problem using the stabilized conjugate gradient algorithm, according to an embodiment of the present invention.

    [0049] FIG. 41 illustrates the application of the Lippmann-Schwinger Operator to the internal field estimate according to an embodiment of the present invention.

    [0050] FIG. 42 illustrates the application of the Lippmann-Schwinger Operator in the presence of layering to the internal field estimate according to an embodiment of the present invention.

    [0051] FIG. 43 illustrates the application of the Hermitian conjugate of the Lippmann-Schwinger operator.

    [0052] FIG. 44 illustrates the scattering subroutine, according to an embodiment of the present invention.

    [0053] FIG. 45 illustrates the propagation or transportation of fields from image space to detector position.

    [0054] FIGS. 46A/B/C illustrate application of the Jacobian Matrix in the generic case (free space or no layering), according to an embodiment of the present invention.

    [0055] FIGS. 47A/B/C illustrate application of the Hermitian Jacobian Matrix in the generic case according to an embodiment of the present invention.

    [0056] FIGS. 48A/B illustrate application of the Jacobian Matrix with correlations according to an embodiment of the present invention.

    [0057] FIG. 49 illustrates the application of the second Lippmann-Schwinger operator in the generic case.

    [0058] FIG. 50 illustrates the application of the Hermitian conjugate of the second Lippmann-Schwinger operator.

    [0059] FIG. 51 illustrates the application of the Hermitian conjugate of the transportation of fields from image space to detector position.

    [0060] FIGS. 52A/B illustrate application of the Hermitian conjugate of the Jacobian Matrix in the presence of layering according to an embodiment of the present invention.

    [0061] FIG. 53 shows an example computing system through which the processes described herein may be carried out.

    DETAILED DESCRIPTION

    [0062] Medical imaging techniques including adaptive reconstruction are provided. Adaptive reconstruction involves the adaptability of the algorithms and operations for imaging an object based on specified criteria and/or tests. Imaging techniques incorporating adaptive reconstruction can be referred to as adaptive imaging.
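    For illustration only (and not as part of any claimed embodiment), the two-pass adaptive flow recited in claim 1 can be sketched as follows. All names (`reconstruct`, `extract_information`, `ConfigDatabase`, `adaptive_reconstruct`) and the toy "reconstruction" are hypothetical stand-ins modeling only the control flow, not any actual imaging algorithm.

```python
# Hypothetical sketch of the adaptive reconstruction control flow of claim 1.
# The toy "reconstruction" below merely scales data; a real implementation
# would run an inverse scattering algorithm.

def reconstruct(raw_data, config):
    # Toy stand-in for image reconstruction: scale data by an "iterations" knob.
    return [x * config["iterations"] for x in raw_data]

def extract_information(image):
    # Toy characteristic extraction: classify by mean voxel value.
    return "large" if sum(image) / len(image) > 5 else "small"

class ConfigDatabase:
    """Maps image/object characteristics to reconstruction configurations."""
    def __init__(self, mapping):
        self.mapping = mapping

    def lookup(self, key):
        return self.mapping[key]

def adaptive_reconstruct(raw_data, preliminary_config, database):
    # Pass 1: fast preliminary image from the preliminary configuration.
    preliminary_image = reconstruct(raw_data, preliminary_config)
    # Obtain preliminary information from the preliminary image.
    info = extract_information(preliminary_image)
    # Lookup operation: identify the updated configuration from the database.
    updated_config = database.lookup(info)
    # Pass 2: final image from the updated configuration.
    return reconstruct(raw_data, updated_config)
```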

    [0063] FIG. 1 illustrates an ultrasound-based tomographic system that may be used to collect data used in the described image reconstruction and analysis techniques. Referring to FIG. 1, imaging system 100 can include a transmitter 101, receiver 102, and transceiver 103. When operating in ultrasound frequencies, imaging system 100 can perform both reflection and transmission ultrasound methods to gather data. The reflection portion (e.g., transceiver 103) directs pulses of sound wave energy into tissues and receives the reflected energy from those pulses; hence it is referred to as reflection ultrasound. Detection of the sound pulse energies on the opposite side of a tissue after the sound has passed through the tissue is referred to as transmission ultrasound.

    [0064] The transmitter 101 and a receiver 102 are provided on opposite sides to enable the performing of transmission ultrasound. The transmitter 101 and the receiver 102 may be in the form of an array of transmitters and receivers. The transmitter array can emit broad-band plane pulses (e.g., 0.3-2 MHz) while the receiver array includes elements that digitize the time signal. The transceiver 103, which can include a set of reflection transducers, can be used to perform reflection measurements. The reflection transducers of the transceiver 103 can include transducers of varying focal lengths, providing a large depth of focus when combined. The focus can be fixed or variable in either or both directions. For example, in some cases, the focus may be in the vertical direction only, and the horizontal focus may be dynamic as controlled by a beamformer. In another embodiment, beamforming is included for the vertical direction.

    [0065] The transmitter 101 and/or transceiver 103 can include a waveform generator that can produce repetitive narrow bandwidth signals or a wide bandwidth signal. The advantage of using a suitable wide bandwidth signal is that in one transmit-receive event, information at multiple frequencies of interest may be collected. Indeed, by using multiple frequencies, it will typically be possible to obtain more data for use in reconstructing the image. Accordingly, a computing system performing image reconstruction can receive data captured by an imaging procedure using a multifrequency data collection method, which includes transmitting a pulse containing multiple frequencies towards an object being imaged.
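    As a non-normative illustration of recovering information at multiple frequencies from a single broadband record, a single-bin DFT can measure the amplitude at each frequency of interest. The two-tone "pulse," sample rate, and amplitudes below are synthetic and hypothetical, not taken from the specification.

```python
import cmath
import math

def frequency_component(signal, sample_rate, freq_hz):
    """Amplitude of `signal` at `freq_hz` via a single-bin DFT: a minimal way
    to pull several frequencies of interest out of one broadband record."""
    n = len(signal)
    acc = 0j
    for t, s in enumerate(signal):
        acc += s * cmath.exp(-2j * math.pi * freq_hz * t / sample_rate)
    return 2.0 * abs(acc) / n

# Synthetic "pulse containing multiple frequencies": two tones captured in
# one transmit-receive event (hypothetical amplitudes and sample rate).
rate = 10_000_000  # 10 MHz sampling
pulse = [math.sin(2 * math.pi * 500_000 * t / rate)
         + 0.5 * math.sin(2 * math.pi * 1_000_000 * t / rate)
         for t in range(1000)]

amp_500k = frequency_component(pulse, rate, 500_000)    # ~1.0
amp_1m = frequency_component(pulse, rate, 1_000_000)    # ~0.5
```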

    [0066] The lowest frequency used for the signal is selected such that the wavelength of the signal is not significantly smaller than the object to be imaged. In one embodiment, for example, the wavelength may be approximately twice the relevant characteristic length of the object (e.g., breast or other anatomical body part such as abdomen, extremities, etc.) or another, smaller length. As a non-limiting example, a selected lowest frequency for imaging a breast may be 0.300 MHz. The highest frequency is selected such that the acoustic signal may be propagated through the object without being absorbed to such an extent as to render detection of the scattered signal impossible or impractical. Thus, depending upon the absorption properties and size of the object which is to be scanned, use of multiple frequencies within this range constraint will typically enhance the ability to more accurately reconstruct the image of the object. The use of multiple frequencies or signals containing many frequencies also has the advantage of obtaining data that may be used to accurately reconstruct frequency dependent material properties.
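    For a rough sense of these frequency bounds, the acoustic wavelength follows lambda = c/f. A minimal sketch, assuming a water-like sound speed of 1500 m/s (an illustrative value; only the 0.3 MHz example frequency comes from the text above):

```python
def wavelength_mm(speed_m_per_s, freq_hz):
    """Acoustic wavelength in millimetres: lambda = c / f."""
    return speed_m_per_s / freq_hz * 1000.0

C_WATER = 1500.0  # m/s, approximate sound speed in a water-like coupling bath

low_band = wavelength_mm(C_WATER, 0.3e6)   # 0.3 MHz -> 5.0 mm
high_band = wavelength_mm(C_WATER, 2.0e6)  # 2.0 MHz -> 0.75 mm
```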

    [0067] A coupling medium (e.g., in the form of a liquid or gel for matching of refractive indices) can be used between the object of interest and the imaging system 100. For example, a receptacle 110 can be provided to present a water (or other liquid or gel) bath in which a patient may rest at least the region of interest (e.g., the part 112 being imaged). Because the motion artifacts associated with patient movement can affect the image quality, mechanisms can be provided to facilitate retention and positioning of the breast or other body part. For example, in the case of breast tissue, an adhesive pad with a magnet can be placed near the nipple region of the breast and docked to a magnetized retention rod that gently holds the breast in a consistent position during the scan. As another example, a membrane over the bath between the breast and the liquid is used to hold the breast (and allow for alternative liquids in the bath).

    [0068] 360° of data can be obtained through rotation of the system. The system (particularly arms containing the transmitter 101 and the receiver 102) may rotate 360° to acquire measurements from effectively all the angles (e.g., data sufficient to provide a 360° view even if not taken at every angle between 0° and 360°) and collect tomographic views of ultrasound wave data. The reflection transducer data can be collected with one or more horizontal reflection transducers of transceiver 103 that acquire data in steps or continuously as they rotate 360° along with the transmitter 101 and receiver 102. The transducers may also be tilted with a non-zero polar or azimuth angle.

    [0069] In a specific implementation, the system rotates around the patient while both transmission and reflection information are captured. It is not necessary to acquire an entire 360° scan; images can be reconstructed with limited information. For example, a patient can lie prone with their breast pendent in a controlled-temperature water bath (e.g., 31° C.) within the field of view of the transmitter 101, receiver 102, and transceiver 103 as the transmitter 101, receiver 102, and transceiver 103 rotate 360° around the patient. Then, in one example case, 180 projections of ultrasound wave data may be obtained. In another example case, from 200 up to 360 projections or beyond of the ultrasound wave data may be obtained. After performing a rotation, an array chassis holding the transmitter 101, receiver 102, and transceiver 103 can be raised or lowered in a desired sequence to acquire data at another level. The sequence of levels for acquisition may be monotonic or non-monotonic, depending on implementation.
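    The relationship between the number of projections and the angular increment per view is straightforward; the following minimal sketch (projection counts echo the examples above; the uniform-spacing assumption is illustrative) computes the step size:

```python
def angular_step_deg(num_projections, arc_deg=360.0):
    """Angular increment between successive tomographic projections,
    assuming uniform spacing over the acquisition arc."""
    return arc_deg / num_projections

# 180 projections over a full rotation -> 2.0 degrees per view;
# 360 projections -> 1.0 degree per view.
```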

    [0070] Other detector configurations may be used. For example, additional detectors in continuous or discontinuous ring or polygon configurations may be used. Of course, any configuration selected will have tradeoffs in speed and cost. In addition, in some cases, reflection arrays (the transducers for the reflection measurements) can do double-duty and perform independent transmission and receiver functions as well as reflection measurements. In another embodiment, the reflection array consists of a matrix of elements with multiple rows and columns.

    [0071] An acquisition control system can be used to operate the various active components (e.g., the transducers) and can control their physical motion (when system 100 is arranged in a rotating configuration). An acquisition control system can automate a scan in response to a start signal from an operator. This automated acquisition process does not require operator interaction during the scanning procedure. Once the scan is complete, the acquisition control system (or other computing system having access to the data) can compute the reflection, speed of sound, and attenuation results from the collected data.
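    The speed-of-sound and attenuation results mentioned above come from full inverse scattering in practice; purely as a first-order illustration under simplifying assumptions (straight-ray propagation, water-only reference shot, hypothetical numbers), transmission data admit simple estimates from arrival time and amplitude ratio:

```python
import math

def sound_speed_estimate(path_m, c_water, dt_s):
    """First-order speed of sound through tissue from the arrival-time
    difference `dt_s` relative to a water-only reference path
    (positive dt_s = earlier arrival through faster tissue)."""
    t_water = path_m / c_water
    return path_m / (t_water - dt_s)

def attenuation_db_per_cm(amp_ref, amp_meas, path_cm):
    """Bulk attenuation from the amplitude ratio against a reference shot."""
    return 20.0 * math.log10(amp_ref / amp_meas) / path_cm

c_tissue = sound_speed_estimate(0.2, 1500.0, 5e-6)  # ~1558 m/s
atten = attenuation_db_per_cm(1.0, 0.5, 10.0)       # ~0.60 dB/cm
```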

    [0072] Accordingly, the data captured by system 100 can be used to reconstruct an image using inverse scattering techniques. Patents describing techniques for inverse scattering include U.S. Pat. Nos. 4,662,222; 5,339,282; 6,005,916; 5,588,032; 6,587,540; 6,636,584; 7,570,742; 7,684,846; 7,699,713; 7,771,360; 7,841,982; 8,246,543; and 8,366,617, which are hereby incorporated by reference in their entirety, except for that which is inconsistent with the apparatus and techniques described herein. In addition, the techniques described in Three-dimensional nonlinear inverse scattering: Quantitative transmission algorithms, refraction corrected reflection, scanner design and clinical results, Wiskin et al., Proceedings of Meetings on Acoustics, Vol. 19, 075001 (2013), are hereby incorporated by reference in their entirety.

    [0073] Results can be provided to various computing systems including a viewing station and/or a picture archival and communication system (PACS). Thus, images can be automatically acquired, stored for processing, and available for physician review and interpretation at a review workstation.

    [0074] FIG. 2 illustrates a process flow diagram of a process that can be carried out by an acquisition control system. Referring to FIG. 2, operation of imaging system 100 of FIG. 1 can be carried out using an acquisition control system performing process 200. Here, in response to receiving an indication to initiate automated scanning (e.g., from an operator or from some programmatic trigger), an acquisition control system can initialize and send a transmission wave from a specified angle about a patient (210), for example from one or more transmitters (such as transmitter 101). As the receiver(s) 102 sense the signal transmitted through the patient (220), raw transmission data 221 is captured. Then, spatially compounded extended depth of focus B mode scans, for example using transceiver(s) 103, are acquired (230) to obtain raw reflection data 231. Of course, in some cases, the B mode scans may be performed before the transmission scans. Additionally, in some embodiments, reflection transducers may have different focal lengths to extend the overall depth of focus within the imaging volume. In one embodiment, the focus is in the vertical direction. Of course, as mentioned above, the focus can be fixed or variable in either or both directions. In some cases, the transmitters (and transceivers) can be used to transmit a pulse containing multiple frequencies toward an object being imaged.

    [0075] The acquisition control system determines whether the detectors are in the final position (240). For a rotating system, the acquisition control system can communicate with a motor control of the platform on which the active components are provided so that a current and/or next position of the platform is known and able to be actuated. For a fixed system, the acquisition control system determines the selection of the active arrays according to an activation program. In some implementations, the system may be a combination of rotating and fixed. For example, the system may be partially rotating and/or allow for vertical movement only. Accordingly, the detection of final position may be based on information provided by the motor control, position sensors, and/or a position program (e.g., using a counter to determine whether the appropriate number of scans has been carried out or following a predetermined pattern for activating transceivers). If the detectors are not in the final position, the acquisition control system causes the array to be repositioned (250), for example, by causing the platform to rotate or by selecting an appropriate array of transceivers of a fixed platform configuration. After the array is repositioned, the transmission wave is sent (210) and received (220) so that the raw transmission data 221 is collected and the B mode scans can be acquired (230) for raw reflection data 231. This repeats until the detectors are determined to be in the final position.

    [0076] FIG. 3 illustrates an example architecture for adaptive imaging. Referring to FIG. 3, an adaptive imaging architecture 300, which may be implemented on a computing system such as described with respect to FIG. 53, includes a reconstruction manager 310. Reconstruction manager 310 includes the instructions for controlling an adaptive imaging algorithm. For example, reconstruction manager 310 initiates appropriate preprocessing 320 of imaging data such as transmission data 312 and/or reflection data 314, directs generation of transmission/speed of sound/attenuation image(s) 330, directs generation of reflection image(s) 340, applies appropriate image evaluation processes 350 with respect to the generated images, and applies appropriate post processing 360 for the generated images. Reconstruction manager 310 enables automatic adjustments to the applied algorithms based on various aspects of the object being imaged, in part based on the image evaluation processes 350 and the image itself, such that each process can be adapted, allowing for dynamic selection of optimal parameters. As an illustrative example, as part of generating transmission/attenuation image(s) 330, the reconstruction manager provides an initial reconstruction configuration (e.g., from configuration directory 370) for generating an image and then uses information obtained from the image to update the reconstruction configuration so as to generate a better image. This process can include preliminary image reconstruction 332, preliminary reconstruction image evaluation 334, and image reconstruction 336. For example, reconstruction manager 310 can perform operations 410, 420, and 430 described with respect to FIGS. 4A and 4B. Within each reconstruction process (e.g., including preliminary image reconstruction 332 and image reconstruction 336), aspects of the reconstruction can be dynamic and adaptable based on various characteristics identified, for example, using specified tests. For example, reconstruction manager 310 can provide a particular reconstruction configuration to use for generating an image along with the particular tests to apply during the reconstruction that can cause changes to certain parameters and/or cause the process to stop and be restarted with the same or a different reconstruction configuration. In the illustrated example, the preliminary image reconstruction 332 involves reconstruction configuration Config A 380 and test(s) 382; and the image reconstruction 336 involves reconstruction configuration Config B 390 and test(s) 392.

    [0077] FIGS. 4A and 4B illustrate an example computer-implemented method for adaptive image reconstruction. Referring to FIG. 4A, a method 400 for adaptive image reconstruction can include generating (410), by a processor, a preliminary reconstruction image using a preliminary reconstruction configuration; automatically adjusting (420) the preliminary reconstruction configuration to an updated reconstruction configuration; and generating (430), by the processor, a reconstruction image using the updated reconstruction configuration.

    [0078] Referring to FIG. 4B, automatically adjusting (420) the preliminary reconstruction configuration to an updated reconstruction configuration can include obtaining (422), by the processor, preliminary information from the preliminary reconstruction image; accessing (424), by the processor, a database of reconstruction configurations, the database providing a mapping of characteristics of images and objects in the images to reconstruction configurations; and performing (426), by the processor, a lookup operation to identify the updated reconstruction configuration based on the preliminary information.
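
    As a non-limiting illustration, operations 410-430 and 422-426 can be sketched in Python. The toy reconstruct() and extract_info() functions, and the keying of the configuration database by a single size category, are hypothetical simplifications of the disclosed steps, not the actual inverse scattering or evaluation algorithms:

```python
# Illustrative sketch of the adaptive adjustment loop of FIGS. 4A/4B.
# reconstruct() and extract_info() are toy stand-ins for the disclosed
# inverse scattering and image evaluation steps.

def reconstruct(data, config):
    # Toy stand-in: "resolution" scales with the number of frequencies used.
    return {"values": data, "resolution": len(config["frequencies"])}

def extract_info(image):
    # Toy stand-in: classify object size from the mean voxel value.
    mean = sum(image["values"]) / len(image["values"])
    return "large" if mean > 0.5 else "small"

def adaptive_reconstruct(data, preliminary_config, config_db):
    preliminary = reconstruct(data, preliminary_config)   # (410)
    size = extract_info(preliminary)                      # (422)
    updated_config = config_db[size]                      # (424)/(426) lookup
    return reconstruct(data, updated_config)              # (430)
```

    In this sketch, a configuration database mapping "small" and "large" to recipes with different frequency counts causes the final image to be generated with the recipe matched to the estimated size.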

    [0079] Generating (430) a reconstruction image using the updated reconstruction configuration can include applying an inverse scattering algorithm based on propagation through free space (which can be modeled as empty water); and applying a phase mask at each propagation step of the inverse scattering algorithm to give a total field comprising an incident field plus a scattered field. This method for generating the reconstruction image is applicable for generating (410) the preliminary reconstruction image as well. As will be described in more detail herein, the inverse scattering algorithm can be adapted to a particular coordinate system. In addition, various aspects of the inverse scattering algorithm can be adapted in accordance with the adaptive imaging processes described herein.

    [0080] After the reconstruction image is generated, reconstruction information can be obtained from the reconstruction image; and the reconstruction image and reconstruction information can be output, for example, to a display.

    [0081] Returning to FIG. 3, as an illustrative example implementing adaptive imaging architecture 300, the process may be started by command prompts from a technician. Certain preselected parameters may provide a starting point for operations and/or provide user input to the adaptive imaging processes. A preliminary configuration is selected from the configuration directory 370 by default, based on characteristics of the data, and/or based on input received from the technician. The configuration directory 370 contains all possible recipes. Recipes are sets of frequencies to be used in the reconstruction of images from the transmission data. A recipe can include a range of frequencies and the intervals between each frequency. A recipe can also include the number of iterations at each frequency and each data level. Other aspects can be part of the recipes, including parameters such as sections, anisotropic pixel selection, coordinate system, and stopping criteria.

    [0082] Based at least in part on the recipes that may be selected for the preliminary configuration, preprocessing for the raw transmission data 312 is carried out. For example, preprocessing can include performing a Fourier transform. The Fourier transform is a mathematical operation that transforms the data from the time domain to the frequency domain. After the Fourier transform, the data is represented with magnitude on the y-axis and frequency on the x-axis for operations carried out using the Cartesian coordinate system. In another embodiment, the order of the data is changed to allow for quicker sequential access in memory. For a cylindrical or spherical coordinate system, additional preprocessing may be performed to transform the coordinates to cylindrical or spherical. In some cases, the data is collected on a system that has a geometric shape corresponding to cylindrical, spherical, or ellipsoidal coordinate surfaces.

    [0083] After preprocessing, the generation of a preliminary image is carried out. The preliminary configuration enables an initial image to be generated quickly. The preliminary image is a relatively fast image, formed using relatively fewer frequencies for relatively fewer iterations, and is generally blurry. In one embodiment, the preliminary image is formed using all levels of data. A data level refers to one particular level of data collection. In one embodiment, the array chassis (see, e.g., discussion of the system of FIG. 1) rotates through 360 degrees, collecting transmission and/or reflection data at various azimuthal angles. In another embodiment, the rotational and vertical movement are combined, so a data level refers to one 360 degree rotation before the array changes azimuthal direction. In another embodiment, the arrays change direction before all 360 degrees are traversed. "All data levels" means every cross-section from the raw data, whether or not the cross-sections intercept the object. Using all data levels provides quantitative data regarding the size of the object, which can be included in the preliminary information obtained during the preliminary reconstruction image evaluation 334.

    [0084] Some of the magnitude and frequency data from the preprocessed data are selected for each image, based on the recipe from preliminary configuration (e.g., Config A 380). In some cases, the preliminary reconstruction configuration includes a set of instructions involving capturing an entire volume of a 3-dimensional image space; using a maximum distance between cross-sections of the 3-dimensional image space; using a limited range of frequencies to generate the preliminary reconstruction image; and using a limited number of iterations to generate the preliminary reconstruction image. Other intermediate preliminary reconstruction images may be included.

    [0085] As an illustrative example, the recipe can include first generating an initial attenuation image (time of flight), then generating a 2D image (e.g., based on the initial attenuation image), then generating a 3D coarse image (e.g., based on the 2D image), and then generating a 3D fine image (e.g., based on the 3D coarse image), where, for example, the 2D image is generated for each frequency of a set of frequencies selected between 0.3 MHz and 0.8 MHz, the 3D coarse image is generated for each frequency of a set of frequencies selected between 0.4 MHz and 0.8 MHz, and the 3D fine image is generated for each frequency of a set of frequencies selected between 0.8 MHz and 1.3 MHz. Other embodiments can use different sets of frequencies, which may overlap for the three grids mentioned here. Note the coarse grid refers to voxels with generally larger dimensions and/or anisotropy.
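
    The example recipe above can be represented, for illustration, as a list of stages with frequency bands. The stage names and the 0.1 MHz step are assumed values for this sketch, not values specified by the disclosure:

```python
# Hypothetical encoding of the example recipe of paragraph [0085].
def expand_recipe(recipe, step_mhz=0.1):
    """Expand (lo, hi) frequency bands (MHz) into per-stage frequency lists."""
    steps = {}
    for stage, (lo, hi) in recipe:
        count = round((hi - lo) / step_mhz) + 1
        # Round to avoid floating-point drift in the generated frequencies.
        steps[stage] = [round(lo + i * step_mhz, 3) for i in range(count)]
    return steps

preliminary_recipe = [
    ("2d", (0.3, 0.8)),        # 2D image band
    ("3d_coarse", (0.4, 0.8)), # 3D coarse image band
    ("3d_fine", (0.8, 1.3)),   # 3D fine image band
]
```

    Expanding the recipe yields one frequency list per stage, over which the per-frequency images described above would be generated.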

    [0086] Accordingly, multiple images are generated for the preliminary image, each based on a different frequency signal/band. For each image, large-scale optimization can be performed (running inversions). In particular, as will be described in more detail herein, acoustic wave propagation through an object is simulated and the properties of the object are optimized to make the simulated propagation fit the observed propagation from the transmission data. The optimization problem can be solved using a stochastic gradient algorithm, and the optimization results are used to assign values to pixels and form images. The pixels are squares on a 2D grid. Each 2D grid corresponds to a cross-section, or data level, of the object being imaged. In some cases, as described in more detail herein, an adaptable anisotropic pixel may be used as a parameter in the reconstruction algorithm.

    [0087] The reconstruction manager 310 provides the stopping criteria (e.g., included in test(s) 382), which in various embodiments may involve various data (e.g., residuals) associated with the minimization procedure and the subsets chosen for each iteration by the stochastic gradient algorithm; these characteristics can be dynamic across iterations. The number of iterations for optimization may generally increase with each image formed (e.g., from the TOF image, to the 2D image, to the coarse 3D image, to the fine 3D image), leading to higher quality images at the end of the sequence.

    [0088] As part of the test(s) 382, detection of local minima can be performed early in the optimization process. For example, detection of a local minimum during the 3D coarse image generation, where the residual is above a threshold, can trigger the operations to cease and restart using a different recipe. This test of residual behavior may also be referred to as a process for detecting whether a salt and pepper image is being formed; such an image is a pseudo-random juxtaposition of high and low values in place of physiologically correct estimates of the speed of sound. In such a case, the test may communicate this failure to the reconstruction manager 310, which can then obtain a different preliminary configuration and restart the transmission image formation process.

    [0089] Other tests 382 can also evaluate the object being imaged. For example, there may be a test to detect the presence of a silicone (or other) implant if there is a high residual or a large number of voxels with values above a certain threshold (e.g., using pixel counts and speed of sound estimation). The test may determine that the number of voxels having values above the certain threshold is too large (above a certain count, for example). Tests can also be performed to identify whether high speed voxels occur near the chest wall in the case of breast tissue (or other anomalous values that may be different than expected at a particular location) and whether this number is above a certain threshold. Indeed, tests 382 can include pixel (voxel) counts and speed of sound estimation, evaluation of gradients and residual magnitude, ratios of magnitudes of gradients and their behavior such as rate of decrease, and the like. Accordingly, tests can be included based on identified objects and based on the image formation process, which provide information used by the reconstruction manager 310 to select an updated reconstruction configuration (used to restart the preliminary image reconstruction 332 and/or generate the reconstruction image 336).

    [0090] Indeed, in addition to stopping criteria and certain tests performed during the preliminary reconstruction, evaluation 334 of the resulting preliminary image is conducted. For example, for an image of a breast, quantitative measures such as the percentage of fibroglandular tissue (sometimes referred to as mammographic density) or the fibroglandular tissue volume or ratio (FGV, FGR), especially when skin volume is omitted, and the size of the breast can be obtained. These measures are then used to update the reconstruction configuration so as to best address the type of tissue and other object characteristics during the reconstruction algorithms. For example, the number of sections/data levels can be selected based on the size of the object/breast. Here, sections may refer to the number of data levels used in the reconstruction; in one embodiment, the top 24 mm (say, 12 data levels at 2 mm separation) may be used for the reconstruction of the top section, the bottom 32 mm may be used for the reconstruction of the bottom part of the image, and intermediate consecutive data levels may be used for the reconstruction of the middle part of the image. In addition, the frequency ranges may be adjusted based on the size, mammographic density, and results of test(s) 382.
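
    The sectioning example above (top 24 mm, bottom 32 mm, 2 mm level separation) can be sketched as follows; the function name and the grouping of all remaining levels into a single middle section are illustrative assumptions:

```python
def section_levels(num_levels, spacing_mm=2, top_mm=24, bottom_mm=32):
    """Split data levels (indexed top to bottom) into top/middle/bottom
    sections, using the illustrative values from paragraph [0090]."""
    top_n = top_mm // spacing_mm        # e.g., 12 levels for the top 24 mm
    bottom_n = bottom_mm // spacing_mm  # e.g., 16 levels for the bottom 32 mm
    levels = list(range(num_levels))
    return {
        "top": levels[:top_n],
        "middle": levels[top_n:num_levels - bottom_n],
        "bottom": levels[num_levels - bottom_n:],
    }
```

    Each section can then be reconstructed with its own recipe, matching the statement that the number of sections depends on the size of the object.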

    [0091] In the following discussion, examples are not meant to be restrictive to a particular body part. Adaptive imaging is applicable to a variety of imaging applications including, but not limited to, the pediatric, orthopedic, and whole body imaging contexts. Tissue types vary with different applications (pediatric, etc.) and need not refer to breast imaging only. The presence of bone and air is a known complication that can be addressed using the described techniques. For example, the percentage of bone and air in a particular imaged volume can determine different recipes similar to that described in more detail below.

    [0092] The updated reconstruction configuration (e.g., Config B 390) is then used to generate the reconstructed image 336. Compared to the preliminary image, the transmission image is a relatively slower image, using more frequencies, for more iterations, and is high-resolution. The process for forming the transmission image is similar to the process for the preliminary image. That is, acoustic wave propagation through an object is simulated and the properties of the object are optimized to make the simulated propagation fit the observed propagation from the transmission data. The optimization problem can be solved using a stochastic gradient algorithm and the optimization results are used to assign values to pixels and form images.

    [0093] In some cases, the updated reconstruction configuration includes a set of instructions involving capturing a portion of the volume of the 3-dimensional image space and using an increased number of iterations to generate the reconstruction image. As an illustrative example, the recipe for the updated reconstruction configuration can include generating a 3D coarse image, generating a first 3D fine image, and generating a second 3D fine image, where the 3D coarse image is generated for each frequency of a set of frequencies selected between 0.4 MHz and 0.8 MHz, the first 3D fine image is generated for each frequency of a set of frequencies selected between 0.8 MHz and 1.2 MHz, and the second 3D fine image is generated for each frequency of a set of frequencies selected between 1.2 MHz and 1.3 MHz. In some cases, the 3D coarse image and/or 3D fine image generated during the preliminary image reconstruction 332 is used as a starting point for the 3D coarse image during the image reconstruction 336. As with the preliminary image, multiple images are generated, each based on a different frequency signal/band, and the various adaptations and stopping criteria tests described with respect to the preliminary image reconstruction are applicable. More or fewer images and/or processes may be part of the updated reconstruction configuration (e.g., Config B 390) and tests 392. However, unlike the preliminary image reconstruction, fewer than all data levels may be used. Here, the number of data levels/sections can be based on the size of the object.

    [0094] FIG. 5 illustrates an example adaptive imaging process for generating transmission/attenuation image(s). Referring to FIG. 5, an example adaptive imaging process 500, which may be implemented as code executed by a computing system such as computing system 5300 of FIG. 53, can begin with an estimate of data properties 502. The estimate of data properties 502 may be received via a user input or be provided with the captured data (e.g., associated with a patient). From the estimate of data properties 502, an initial number of data levels (e.g., based on breast size) is determined for use by the fast reconstruction algorithm. An initial fast image is generated (504), for example using a coarse grid, 2D reconstruction, and/or time of flight algorithm. In some cases, the initial image is a transmission image. In some cases, a reflection image and/or data can be used to supplement the transmission image.

    [0095] The generation of the initial fast image can be carried out such as described with respect to preliminary image reconstruction 332 of FIG. 3 and preliminary image reconstruction 410 of FIG. 4A. From the initial fast image, the size of the breast can be estimated (506) and the mammographic density estimated (508). The breast size estimation (506) can involve determining the size category for the breast, for example, using categories of small, medium, and large. The estimation of the mammographic density (508) involves determining the percentage of fibroglandular tissue and determining the density category for the breast, for example, using categories of fatty, scattered/heterogeneous, and dense. Based on the breast size and mammographic density, the recipe for inversion and the number of sections can be determined (510). For example, the adaptive imaging process 500 can use a lookup table (LUT) or matrix, which in the case of breast imaging can be as follows.

    Example LUT

    TABLE-US-00001

                                         Size
    Density                    Small     Medium    Large
    Fatty                      Config 1  Config 2  Config 3
    Scattered/heterogeneous    Config 4  Config 5  Config 6
    Dense                      Config 7  Config 8  Config 9

    [0096] In some cases, each configuration/recipe for the different combinations of breast tissue characteristics is different. In some cases, a same or similar recipe may be used for a subset of different combinations of breast tissue characteristics. Percentages of bone and air may also be part of the characteristics used in whole body/pediatric/partial body imaging. Similarly, for other anatomical regions of the body, a similar LUT could be created. For instance, for human body extremities and musculoskeletal tissue, it could be based on the expected ratio of fat vs muscle vs bone tissue.
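
    A minimal sketch of the lookup described above, assuming the size and density categories have already been encoded as strings; the dictionary keys and configuration names are placeholders for full recipes:

```python
# Hypothetical encoding of the Example LUT: (size, density) -> configuration.
LUT = {
    ("small", "fatty"): "Config 1",
    ("medium", "fatty"): "Config 2",
    ("large", "fatty"): "Config 3",
    ("small", "scattered"): "Config 4",
    ("medium", "scattered"): "Config 5",
    ("large", "scattered"): "Config 6",
    ("small", "dense"): "Config 7",
    ("medium", "dense"): "Config 8",
    ("large", "dense"): "Config 9",
}

def select_config(size_category, density_category):
    """Lookup operation: map preliminary-image characteristics to an
    updated reconstruction configuration."""
    return LUT[(size_category, density_category)]
```

    For other anatomical regions, the same structure could be keyed on different characteristics (e.g., expected ratios of fat, muscle, and bone).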

    [0097] Using the matrix/table one or more reconstruction configuration/recipes can be selected for use on each section of the breast. In the illustrated example, a coarse transmission image 512 is generated and a fine grid transmission image 514 is generated. These images can be generated such as described with respect to image reconstruction 336 of FIG. 3 and image reconstruction 430 of FIG. 4A.

    Stopping Criteria and Tests

    [0098] The reconstruction configurations can be selected and/or changed on the basis of various stopping criteria and tests (e.g., test(s) 382, 392). The criteria and tests can be based on image characteristics and can trigger the restart of an inversion process using a different set of parameters (e.g., frequencies, iterations, anisotropic pixels, or coordinate systems more suited to the data). The tests can include whether there is a silicone implant, whether there is a salt and pepper local minimum developing, and whether the image is going to be blurry.

    [0099] Indeed, as mentioned above, as part of the test(s) 382, detection of local minima can be performed early in the optimization process, for example, detection of local minima during the 3D coarse image generation where the residual is above a threshold can trigger the operations to cease and restart using a different recipe. As another example in breast tissue, there is a test to detect the presence of a silicone implant if there is a high residual (e.g., using pixel counts and speed of sound estimation). Tests can be performed to identify whether abdominal fat is identified near the chest wall (or other anomalous values that may be different than expected at a particular location). Indeed, tests 382 can include pixel counts and speed of sound estimation, evaluation of gradients and residual magnitude, ratios of magnitudes of gradients and their behavior such as rate of decrease, and the like.

    [0100] Thus, according to certain embodiments, an adaptive imaging method can include detecting one or more errors in the preliminary reconstruction image in real time while generating the preliminary reconstruction image; stopping generating the preliminary reconstruction image; selecting a new preliminary reconstruction configuration from the database of reconstruction configurations based on the detected one or more errors in the preliminary reconstruction image; and restarting the method beginning with generating the preliminary reconstruction image using the new preliminary reconstruction configuration.

    [0101] In some cases, generating the preliminary reconstruction image comprises assigning values to a plurality of voxels, wherein detecting one or more errors in the preliminary reconstruction image in real time while generating the preliminary reconstruction image comprises: evaluating values of voxels located near each other; and detecting an error if the values of voxels located near each other are above or below a voxel value threshold. In an example implementation where the object being imaged is a patient's breast, detecting an error if the values of voxels located near each other are above or below a voxel value threshold can include detecting the presence of a silicone implant if the values of voxels located near each other are above a silicone implant voxel value threshold. Another test involving evaluating values of voxels located near each other includes evaluating values of voxels near a patient's chest wall, wherein detecting an error if the values of voxels located near each other are above or below a voxel value threshold comprises detecting the presence of fatty tissue or non-biological material near the patient's chest wall if the values of voxels located near the patient's chest wall are above a chest wall voxel value threshold.
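
    One way to approximate the "values of voxels located near each other" test is a run-length check over adjacent voxel values; the function name and both thresholds are hypothetical tuning choices for this sketch, not values from the disclosure:

```python
def detect_clustered_high_values(voxels, value_threshold, run_threshold):
    """Flag an error (e.g., a possible silicone implant) when enough
    *adjacent* voxels exceed the value threshold, approximating the
    evaluation of values of voxels located near each other."""
    run = best = 0
    for v in voxels:
        # Extend the current run of above-threshold neighbors, or reset it.
        run = run + 1 if v > value_threshold else 0
        best = max(best, run)
    return best >= run_threshold
```

    The same shape of test, with different thresholds, could be applied to voxels near the chest wall to flag fatty tissue or non-biological material at an unexpected location.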

    [0102] Similarly, tests are applied during generating the updated reconstruction image. For example, the adaptive imaging method can include detecting one or more errors in the reconstruction image in real time while generating the reconstruction image; and stopping generating the reconstruction image. Similar to tests during the preliminary reconstruction, detecting one or more errors in the reconstruction image in real time while generating the reconstruction image can include evaluating values of a residual of the reconstruction image; and detecting a local minimum if the values of the residual are above a local minimum threshold value.
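
    A sketch of a residual-based local minimum test, assuming a residual value is recorded per iteration; the stall tolerance and window size are hypothetical parameters, not disclosed values:

```python
def local_minimum_detected(residual_history, threshold, window=3):
    """Flag a local minimum when the residual has stopped decreasing over
    the last `window` iterations yet remains above the threshold,
    suggesting a salt-and-pepper image is forming."""
    if len(residual_history) < window + 1:
        return False
    recent = residual_history[-(window + 1):]
    # "Stalled" if no pair of consecutive residuals shows a real decrease.
    stalled = all(b >= a * 0.999 for a, b in zip(recent, recent[1:]))
    return stalled and recent[-1] > threshold
```

    On detection, the reconstruction would be stopped and restarted with a new preliminary or updated configuration, as described above.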

    [0103] Once stopped, either a new preliminary reconstruction configuration can be selected (e.g., to start the process over at step 330) or a new updated reconstruction configuration can be selected (e.g., to restart process 350). For example, an adaptive imaging method can include selecting a new preliminary reconstruction configuration from the database of reconstruction configurations based on the detected one or more errors in the reconstruction image; and restarting the method beginning with generating the preliminary reconstruction image using the new preliminary reconstruction configuration. Of course, in other cases, the adaptive imaging method can include selecting a new reconstruction configuration from the database of reconstruction configurations based on the detected one or more errors in the reconstruction image; and restarting the method beginning with generating the reconstruction image using the new reconstruction configuration.

    [0104] Although not described in detail, generation of reflection images can also include adaptive parameters and tests for the reflection image configurations can include, but are not limited to, whether there is a bright spot in the reflection image and whether there is a dark spot in the reflection image.

    Adaptive Sectioning and Frequencies

    [0105] The breast is a 3D object but has different characteristics relevant to wave propagation near the chest wall and in the sub-areolar regions. Based on the acquisition system, it is possible to include data redundancy in the vertical direction.

    [0106] This means it is possible to collect data every 2 or 4 mm (which also could vary with the particular part of the volume of the breast that is being imaged). This vertical redundancy supports image quality in the presence of compromised SNR and data quality. By simultaneously inverting (imaging) from data at several different close levels, it is possible to increase redundancy relative to not incorporating that information simultaneously in the inversion. It is noted that trying to invert from many levels simultaneously has an over-regularizing effect that creates a smoother image that may not be appropriate for the reader or for diagnostic purposes. By separating the breast into sections, it is possible to optimize for the particular tissue/anatomy. In various embodiments, the breast can be divided into 2, 3, . . . , N sections.

    Recipes

    [0107] It is possible to determine a recipe for imaging the breast. A recipe includes the sequence of operations, including the frequencies selected for use in the simulations. For example, a subset of frequencies within a range of frequencies corresponding to the frequencies used in the data acquisition can be selected (e.g., 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, . . . , 1.35 MHz). The change between frequencies is determined empirically and is dynamic; it may change during the inversion process. In addition, the order in which frequencies are used is not required to be consecutive/monotonic. That is, the frequency steps do not have to be monotonically increasing. In one embodiment, the frequencies rise, then fall, then continue to rise, and this cycle can repeat. For example, a sequence can be 0.4, 0.425, 0.45, 0.475, 0.45, 0.425, 0.45, 0.475, 0.5, . . . .

    [0108] In one embodiment, a set of frequencies is chosen from 0.4, 0.425, . . . , 1.3 MHz, where the step/difference between frequencies is chosen here as 0.025 MHz. In another embodiment, a subset of frequencies is imaged simultaneously. The spacing between the simultaneous frequencies being used can also be adaptive, both intra- and inter-breast. That is, the spacing may change depending on the section. The code/program determines which set of frequencies and how many iterations to carry out at a given frequency to optimize image quality and also avoid local minima.
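
    The rise-fall-rise cycle in the example sequence of paragraph [0107] can be generated as follows; the rise/fall step counts are inferred from that example and are illustrative assumptions only:

```python
def cyclic_schedule(start, step=0.025, rise=3, fall=2, total=9):
    """Generate a non-monotonic frequency schedule: rise `rise` steps,
    fall `fall` steps, and repeat until `total` frequencies are produced."""
    seq = [start]
    f = start
    while len(seq) < total:
        for _ in range(rise):
            if len(seq) >= total:
                break
            f = round(f + step, 4)  # round to suppress floating-point drift
            seq.append(f)
        for _ in range(fall):
            if len(seq) >= total:
                break
            f = round(f - step, 4)
            seq.append(f)
    return seq
```

    With start 0.4 MHz, a 0.025 MHz step, three rising steps, and two falling steps, this reproduces the example sequence 0.4, 0.425, 0.45, 0.475, 0.45, 0.425, 0.45, 0.475, 0.5.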

    [0109] Accordingly, with reference to FIG. 3 and FIG. 4A, a preliminary reconstruction configuration can include a set of instructions comprising forming a sequence of intermediate preliminary reconstruction images, where each intermediate preliminary reconstruction image has an increased resolution relative to the previous intermediate preliminary reconstruction image. In some cases, the sequence of intermediate preliminary reconstruction images starts at a low frequency preliminary reconstruction image and proceeds in a stepwise fashion to higher frequency intermediate preliminary reconstruction images. In some cases, the sequence of intermediate preliminary reconstruction images is carried out at frequencies in a non-monotonic progression.

    [0110] Similarly, an updated reconstruction configuration can include a set of instructions comprising forming a sequence of intermediate reconstruction images, where each intermediate reconstruction image has an increased resolution relative to the previous intermediate reconstruction image. In some cases, the sequence of intermediate reconstruction images starts at a low frequency reconstruction image and proceeds in a stepwise fashion to higher frequency intermediate reconstruction images. In some cases, the sequence of intermediate reconstruction images is carried out at frequencies in a non-monotonic progression. The sequence of frequencies for the updated reconstruction configuration may be the same as or different from that used in the preliminary reconstruction configuration. In addition, for reconstruction configurations involving multiple levels of reconstruction (see e.g., examples provided for Config A 380 and Config B 390), where the intermediate images increase in resolution (e.g., from coarse to fine), the frequency sequence used in the reconstruction images may differ between the coarse reconstruction and the fine reconstruction.

    Frequency Recipes

    [0111] How the frequencies are chosen can impact whether local minima are encountered (see also stopping criteria). The frequencies are f_j, j=1, . . . , N_k, where k=1, . . . , N_freq. In one embodiment the frequencies are equally spaced. In another embodiment the frequencies are chosen so that delta lambda (Δλ) is constant, since the change in wavelength will determine whether cycle skipping takes place. In such an embodiment, to maintain a constant Δλ, the relationship between f (frequency) and λ (wavelength) is not linear. That is, since

    [00001] Δf_j = −(c_o/λ_j²) Δλ_j

    so that at a particular frequency f_j, the step size in frequency is

    [00002] Δf_j = −(f_j/λ_j) Δλ_j, or Δf_j = −(f_j²/c_o) Δλ_j,

    that is, the step size in frequency can in fact increase quadratically with frequency to keep the change in the wavelength the same. If the relative increase in wavelength is required to be constant, then the formula

    [00003] Δf_j = −(Δλ_j/λ_j) f_j,

    indicates the step size increases linearly with frequency. The optimal step size may be determined empirically.
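    A minimal numerical sketch of the constant-Δλ stepping above, assuming a background speed c_o = 1.5 mm/μs (the value used elsewhere in this disclosure) and an illustrative Δλ. It shows the frequency step Δf = f²Δλ/c_o growing with frequency while the wavelength decrement stays approximately constant; the function name and numerical values are assumptions of this sketch.

```python
# Frequency steps that keep the wavelength decrement approximately constant.
C0 = 1.5         # background sound speed, mm/us (lambda in mm for f in MHz)
D_LAMBDA = 0.05  # constant wavelength step, mm (illustrative)

def constant_dlambda_steps(f_start, f_stop):
    """Frequencies such that each step shrinks the wavelength by ~D_LAMBDA."""
    freqs = [f_start]
    f = f_start
    while f < f_stop:
        f += f * f * D_LAMBDA / C0   # delta-f = f^2 * delta-lambda / c_o
        freqs.append(f)
    return freqs

freqs = constant_dlambda_steps(0.4, 1.3)
# later frequency steps are larger than earlier ones (quadratic growth in f)
```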

    Adaptable Anisotropic Pixel

    [0112] An anisotropic pixel selection can be carried out as part of an adaptive imaging process. That is, a size of steps used in an inverse scattering algorithm (e.g., propagation and backscattering) for generating a reconstruction image can be based on an anisotropic pixel selection, where the anisotropy is through the geometry (e.g., having different lengths in different directions). An anisotropic pixel selection includes a distance in one direction between surfaces of the steps used in the inverse scattering algorithm. The one direction may be in the direction of propagation, perpendicular to the direction of propagation or some other angle with respect to the direction of propagation. In some cases, at least two iterations of the inverse scattering algorithm use different anisotropic pixel selections. For example, the distance in one direction between surfaces of the steps used in the inverse scattering algorithm can change from one iteration to another iteration. As an illustrative example, a first distance may be used for five iterations and then a second distance is used for the next three iterations followed by a return to the first distance or a change to a third distance, which may be larger or smaller than the first and/or second distance.

    [0113] The anisotropic pixel selection can vary between frequencies and iterations in any combination. In addition to the distance in one direction that can vary, the direction in which that distance is adjusted may also vary between frequencies and/or iterations.
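    The iteration-block example above (a first distance for five iterations, a second distance for the next three, then a return to the first) can be sketched as a simple schedule; all names and distances below are hypothetical.

```python
# Illustrative per-iteration schedule of anisotropic step distances (mm)
# along one chosen direction; names and values are hypothetical.

def step_schedule(n_iters, d1=0.5, d2=1.0, block1=5, block2=3):
    """Alternate blocks of iterations using distance d1 then d2."""
    pattern = [d1] * block1 + [d2] * block2
    return [pattern[i % len(pattern)] for i in range(n_iters)]

sched = step_schedule(12)
# iterations 0-4 use d1, 5-7 use d2, 8-11 return to d1
```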

    Coordinate System

    [0114] As mentioned above, the adaptive imaging process can be adapted to different coordinate systems. Available coordinate systems include, but are not limited to, rectangular, circular-cylinder, elliptic-cylinder, parabolic-cylinder, spherical, prolate spheroidal, oblate spheroidal, parabolic, conical, ellipsoidal, and paraboloidal. Indeed, available coordinate systems can be a cartesian coordinate system, a curvilinear coordinate system, or even a hybrid/expanding coordinate system. As long as an exact solution exists, it is possible to perform an inverse scattering algorithm based on propagation over a short distance through free space (or empty water) via a closed or semi-closed form solution, for example, and application of a phase mask. The reconstruction configurations for the different coordinate systems utilize an appropriate phase mask. For example, cartesian coordinates have the plane wave, cylindrical coordinates have Bessel functions and exponentials, spherical coordinates have spherical Bessel functions (half integer order) and spherical harmonics (associated Legendre functions and exponentials), and expanding coordinates have narrow waist Gaussian functions multiplied by solutions to a paraxial approximation equation.

    [0115] The ability to adapt the processes for a particular coordinate system such as curvilinear supports the receipt of data collected using a cylindrical array of transmitters and/or receivers. Such data can be transformed into cylindrical coordinates during preprocessing (e.g., operation 320 of FIG. 3) and subsequently analyzed in cylindrical coordinates.

    [0116] In some cases, an indication of a particular coordinate system from a selection of a cartesian coordinate system and a curvilinear coordinate system can be received; and available reconstruction configurations at the database of reconstruction configurations can be filtered according to the particular coordinate system before generating the preliminary reconstruction image using the preliminary reconstruction configuration. In some cases, the system is adapted to a curvilinear coordinate system, wherein available reconstruction configurations, including the preliminary reconstruction configuration and the updated reconstruction configuration, are based on the curvilinear coordinate system.
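    The filtering of available reconstruction configurations by coordinate system could be sketched as below; the database layout, keys, and names are assumptions for illustration, not the disclosed data structure.

```python
# Hypothetical in-memory stand-in for the database of reconstruction
# configurations; real systems would query persistent storage.
CONFIG_DB = [
    {"name": "config_a", "coords": "cartesian",   "levels": 1},
    {"name": "config_b", "coords": "curvilinear", "levels": 2},
    {"name": "config_c", "coords": "curvilinear", "levels": 1},
]

def filter_configs(db, coord_system):
    """Keep only configurations matching the indicated coordinate system."""
    return [cfg for cfg in db if cfg["coords"] == coord_system]

available = filter_configs(CONFIG_DB, "curvilinear")
```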

    [0117] An example for spherical expanding coordinates via Gaussian beam is as follows.

    [0118] Note the system involves propagation of energy primarily in the z direction. Here, the paraxial approximation equation can be obtained by factoring out the predominant wave propagating in the z direction:

    [0119] Acoustic energy field: u(r) = u_p(r) e^(−iκz).

    [0120] Given the Helmholtz equation

    [00004] ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² + k²(r)u = 0

    [0121] We can make the assumption that the energy moves predominantly in the z direction and factor that out, as above, i.e.: u(r) = u_p(r) e^(−iκz)

    [0122] This gives us

    [00005] (∂²u_p/∂x² + ∂²u_p/∂y² − 2iκ ∂u_p/∂z − κ²u_p + ∂²u_p/∂z² + k²u_p) e^(−iκz) = 0

    [0123] Finally, the variation in the acoustic field in the z direction is considered small compared to the variation that is factored out:

    [00006] κ|∂u_p/∂z| ≫ |∂²u_p/∂z²|

    [0124] This gives the paraxial approximation:

    [00007] ∂²u_p/∂x² + ∂²u_p/∂y² − 2iκ ∂u_p/∂z + (k² − κ²)u_p = 0

    [0125] Put κ = k for the following (the energy is propagating at the appropriate frequency):

    [0126] Gaussian Beam factor:

    [0127] Note the Helmholtz equation does have the exact solution (Green's function in 3D):

    [00008] u = e^(−ik|r − r′|)/(4π|r − r′|),

    [0128] Now using r′ = (0, 0, −ib)

    [0129] gives, with the Fresnel approximation:

    [00009] u = u_o ib e^(−ik(x² + y²)/(2(z + ib)))/(z + ib),

    [0130] Now put

    [00010] b = κw_o²/2.

    In this formulation this is a Gaussian beam with waist w_o.

    [0131] So we thus have the diverging solution:

    [00011] u(x) = e^(−ik(x² + y²)/(2q(z)))/q(z), where q(z) = z + iκw_o²/2 = z + ib, b = πw_o²/λ = κw_o²/2

    [0132] Now factor out the Gaussian beam with waist size w_o and use expanded coordinates for the remaining part of the wave, where for now β is an arbitrary complex value:

    [00012] x′ = βx/(z + ib), y′ = βy/(z + ib), z′_2 − z′_1 = β²(z_2 − z_1)/((z_2 + ib)(z_1 + ib)), u(x) = v(x′) e^(−ik(x² + y²)/(2q(z)))/q(z)

    [0133] The function variation v(x′) uses the expanding coordinates for reasons seen below.

    [0134] We take a particular form of the diverging beam:

    [0135] Now we assume that the Gaussian waist is infinitesimal, b = ε → 0, so the wave is approximately spherical. Note the transformation:

    [00013] x′ = βx/(z + iε), y′ = βy/(z + iε), z′_2 − z′_1 = β²(z_2 − z_1)/((z_2 + iε)(z_1 + iε))

    [0136] In the limit as ε goes to zero and β ∈ ℝ (β real); β will be constrained by the calculations below.

    [00014] x′ = lim_(ε→0) βx/(z + iε) = βx/z, y′ = βy/z, z′_2 − z′_1 = β²(z_2 − z_1)/(z_2 z_1)

    [0137] For calculation purposes we use:

    [00015] z′ − z′_1 = β²(z − z_1)/((z + iε)(z_1 + iε))

    [0138] Use

    [00016] ∂/∂z = (∂x′/∂z)∂/∂x′ + (∂y′/∂z)∂/∂y′ + (∂z′/∂z)∂/∂z′ = −(βx/z²)∂/∂x′ − (βy/z²)∂/∂y′ + (β²/z²)∂/∂z′

    [0139] since

    [00017] ∂z′/∂z = β²[zz_1 − z_1(z − z_1)]/(z²z_1²) = β²/z², and ∂/∂x = (∂x′/∂x)∂/∂x′ = (β/z)∂/∂x′, ∂/∂y = (∂y′/∂y)∂/∂y′ = (β/z)∂/∂y′

    [0140] And so

    [00018] ∂²/∂x² = (β/z)² ∂²/∂x′²,

    with a similar expression for the y coordinate, and

    [00019] e^(−ik(x² + y²)/(2q(z)))/q(z)

    solves the paraxial equation.

    [0141] Now factor out the expanding wave and transform to the expanded coordinate system:

    [00020] u(x) = v(x′)w(x) = v(x′) e^(−ik(x² + y²)/(2q(z)))/q(z),

    and we get:

    [0142] Using,

    [00021] ∂²(v(x′)w(x))/∂x² = (∂²v/∂x²)w(x) + 2(∂v/∂x)(∂w/∂x) + v(x′)(∂²w/∂x²),

    and similarly for y, using the chain rules above, so that:

    [00022] ∂²v/∂x′² + ∂²v/∂y′² − 2ik ∂v/∂z′ = 0

    [0143] Propagation in the primed coordinates:

    [0144] The total field is

    [00023] u(x) = v(x′) e^(−ik(x² + y²)/(2z))/z

    [0145] The first factor satisfies the paraxial equation (by the above):

    [00024] ∂²v/∂x′² + ∂²v/∂y′² − 2ik ∂v/∂z′ = 0,

    so use the transverse Fourier transform, F_T:

    [00025] v̂(k_x, k_y, z′) = ∫v(x′, y′, z′) e^(−ik_T·x_T) dx_T = F_T(v)

    [0146] The equation is now

    [00026] −(k_x² + k_y²)v̂ − 2ik_o ∂v̂/∂z′ = 0, and so ∂v̂/∂z′ = −(1/(2ik_o))(k_x² + k_y²)v̂

    has the exact solution:

    [00027] v̂(k_x, k_y, z′) = e^(−(k_x² + k_y²)(z′ − z′_o)/(2ik_o)) A,

    for a constant A; that is:

    [00028] v̂(k_x, k_y, z′) = e^(−(k_x² + k_y²)(z′ − z′_o)/(2ik_o)) v̂(k_x, k_y, z′_o)

    [0147] is the exact solution. Using Δz′ = (z′ − z′_o),

    [00029] v(x′, y′, z′ + Δz′) = F_T⁻¹(v̂(k_x, k_y, z′ + Δz′)),

    putting in all the operators, where

    [00030] F_T⁻¹

    is the inverse transverse Fourier transform:

    [00031] v(x′, y′, z′ + Δz′) = F_T⁻¹(e^(−(k_x² + k_y²)Δz′/(2ik_o)) v̂(k_x, k_y, z′)) = F_T⁻¹(e^(−(k_x² + k_y²)Δz′/(2ik_o)) F_T(v(x′, y′, z′)))

    [0148] In operator-theoretic notation (read from right to left) this is just the transport in free space formula:

    [00032] v(x′, y′, z′ + Δz′) = F_T⁻¹[P_o]F_T v

    [0149] with the free space propagator:

    [00033] P_o = e^(−(k_x² + k_y²)Δz′/(2ik_o))

    [0150] The total solution is:

    [00034] u(x) = v(x′) e^(−ik(x² + y²)/(2q(z)))/q(z), q(z) = z + iκw_o²/2 = z + ib, b = πw_o²/λ = κw_o²/2

    [0151] Letting ε go to zero to get the coordinate system that is used:

    [00035] lim_(ε→0) βx/(z + iε) = βx/z, y′ = βy/z, z′_2 − z′_1 = β²(z_2 − z_1)/(z_2 z_1)

    [0152] The coordinate transformation is valid for z_(j−1) < z < z_j.

    [0153] This gives, for the original coordinates:

    [00036] y′ = βy/z, x′ = βx/z; β = z_j; R_j = z/z_j; x′ = lim_(ε→0) βx/(z + iε) → x′ = x/R_j

    [0154] The z′ and x′, y′ coordinates are valid for z_(j−1) < z < z_j.

    [0155] Use β = z_j; R = z/z_j, so that R_(j−1) = z_(j−1)/z_j;

    [0156] so now

    [00037] z′_j − z′_(j−1) = β²(z_j − z_(j−1))/(z_j z_(j−1)) = (z_j − z_(j−1))/R_(j−1); y′ = y/R_(j−1); x′ = x/R_(j−1); Δz′_j = Δz_j/R_(j−1) = (z_j − z_(j−1))/R_(j−1)

    [0157] Propagate from z_(j−1) → z_j in primed coordinates using the propagator derived above, then evaluate the Gaussian part at

    [00038] y_(j+1) = (z/β)y′_(j+1) = R y′_(j+1); x = (z/β)x′ = R x′;

    and z = z_j + Δz_j = z_(j+1) (recall that z′_(j−1)(z = z_(j−1)) = 0 by construction), so

    [00039] e^(−ik(x_(j+1)² + y_(j+1)²)/(2z_(j+1)))/z_(j+1),

    then use

    [00040] u(x) = v(x′(x_(j+1))) e^(−ik(x_(j+1)² + y_(j+1)²)/(2z_(j+1)))/z_(j+1)

    [0158] Then apply the phase mask for z_(j−1) → z_j, which is

    [00041] e^(−i ∫_(z_(j−1))^(z_j) (k − k_o) dz),

    where dz is the infinitesimal along the x, y coordinate path in x, y, z space:

    [00042] x = (z/β)x′ = R x′ and y = (z/β)y′ = R y′, u(x) → u(x) e^(−i ∫_(z_(j−1))^(z_j) (k − k_o) dz)

    [0159] In this way we march along for z_o, . . . , z_(j−1), z_j. Notice that the x, y coordinates do expand, due to the R factors increasing as z increases.
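    One step of the free-space-propagator-plus-phase-mask march of [0148]-[0158] can be sketched numerically in plain (unexpanded) Cartesian coordinates as a standard split-step operation; the grid parameters, helper name, and sign conventions below are assumptions of this sketch, not the disclosed implementation.

```python
import numpy as np

def split_step(v, k_map, k0, dx, dz):
    """Advance the field v(x, y) by dz: FFT, multiply by the free-space
    paraxial propagator exp(-(kx^2 + ky^2) dz / (2 i k0)), inverse FFT,
    then apply the phase mask exp(-i (k - k0) dz) for the medium."""
    ny, nx = v.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    propagator = np.exp(-(KX**2 + KY**2) * dz / (2j * k0))
    v = np.fft.ifft2(propagator * np.fft.fft2(v))
    return v * np.exp(-1j * (k_map - k0) * dz)

# Homogeneous medium (k_map == k0): the mask is unity and only diffraction acts;
# a uniform field is an eigenfunction and passes through unchanged.
k0 = 2 * np.pi * 1.0 / 1.5          # ~1 MHz at c0 = 1.5 mm/us, in rad/mm
v0 = np.ones((64, 64), dtype=complex)
v1 = split_step(v0, np.full((64, 64), k0), k0, dx=0.5, dz=1.0)
```

Because both the propagator and the mask are pure phase factors, each step conserves energy on the grid, which is a useful sanity check on an implementation.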

    [0160] FIGS. 6A-6D illustrate results of a cylindrical coordinate system-based algorithm.

    [0161] Referring to FIG. 6A, a simulation model was created from a breast image and is shown along with the position of the cylindrical algorithm circular array and the circle defining the point source boundary condition.

    [0162] The solution was first computed by solution of the integral equation. The simulation model was 1024 by 1024, ⅛-wavelength pixels at 1.5 MHz (128 by 128 mm). The point source solution was computed with the Bi-Stab FFT algorithm with a k-average preconditioner, which converged in 40 steps. The total field on the receiver circle was then computed from the rectangular grid values using bi-quadratic interpolation.

    [0163] The total field on the receiver circle was then computed via a cylindrical FFT-propagator, phase-mask algorithm.

    [0164] FIG. 6B shows a comparison of a total field magnitude on the receiver circle; and FIG. 6C shows a magnified solution match. It can be seen that there is agreement of the cylindrical propagation algorithm with the simulation solution (referred to as IE solution) for all scattering angles.

    [0165] FIG. 6D illustrates a cylindrical problem space with a radius of 100 mm and a height of 60 mm. Referring to FIG. 6D, the source is shown as a truncated, focused line source 30 mm high. The line source is also apodized with a Hamming window. The line source focal range=100 mm. The small, offset circle containing the source has a radius, a.sub.1=5 mm. The line source position in the small circle is x.sub.1=2.5 mm, y.sub.1=1.5 mm. This offset was done to ensure proper propagation of an asymmetric field.

    [0166] The goal is now to evaluate the field on the 5 mm radius cylinder by Green's theorem and then to propagate the field onto the 100 mm radius cylinder using the cylindrical propagator algorithm. The Green's theorem calculation from the focused line source to the 5 mm cylinder is given by:

    [00043] f_1(φ_1, z) = ∫_(−h_L/2)^(h_L/2) w(z′) e^(−ik_0(√((a_1 cos φ_1 − x_L)² + (a_1 sin φ_1 − y_L)² + (z − z′)²) − √(z′² + f_L²/4)))/(4π√((a_1 cos φ_1 − x_L)² + (a_1 sin φ_1 − y_L)² + (z − z′)²)) dz′

    [0167] where w(z′) is a Hamming window, a_1 = 5 mm, h_L = 30 mm, f_L = 100 mm, x_L = 2.5 mm, y_L = 1.5 mm,

    [00044] k_0 = 2πf/c_0, f = 1.5 MHz, c_0 = 1.5 mm/microsecond.

    [0168] Returning again to FIG. 3, after generating the transmission image, reconstruction manager 310 directs generation of reflection image(s) 340 from the reflection data 314. During the preprocessing 320, the intensity of acoustic reflection based on the time it takes the waves to travel back to the source can be obtained from the reflection data 314 (e.g., a Fourier transform may be applied such as described with respect to the transmission data 312). In addition, other preprocessing operations may be carried out such as coordinate system adjustments, etc. Any suitable reconstruction algorithm may be used to generate the reflection image. In some cases, the reflection image is corrected for refraction based on the transmission image. For example, more detail is added to interfaces within the object being imaged through use of the attenuation image (from the transmission data). The interface data can also be added after image formation. As an example implementation, an attenuation image can be obtained of the reconstruction image (e.g., generated in process 336), a reflection image can be generated (e.g., via suitable reconstruction algorithm), a morphological operation can be performed with respect to the attenuation image to generate a processed attenuation image, and the processed attenuation image can be fused with the reflection image to generate a final reflection image.

    [0169] FIG. 7 shows reflection images of a breast, including the fusing of a processed attenuation image and a reflection image. As shown in FIG. 7, the original reflection images correspond to the reflection image generated by any suitable reconstruction algorithm. Attenuation images are obtained and processed using a morphological operation. Examples of morphological operations that may be carried out on the attenuation images include, but are not limited to, erosion, dilation, opening, and closing. The processed attenuation images are then fused with the original reflection images to generate the final reflection images shown. As a comparison, the speed of sound (SOS) image is shown in the figure to illustrate that the information now visible in the final reflection image is not an artifact added to the image. Advantageously, when the attenuation images highlight interface information, that interface information can improve the reflection image.
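    The attenuation/reflection fusion described above might be sketched as follows, with a plain 3x3 min-filter standing in for the morphological erosion and a weighted overlay standing in for the fusion rule; both choices, and all names and values, are assumptions for illustration.

```python
import numpy as np

def erode3(img):
    """3x3 grayscale erosion (minimum filter) with edge padding."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.min(windows, axis=0)

def fuse_reflection(reflection, attenuation, weight=0.3):
    """Erode the attenuation image to emphasize interfaces, normalize it to
    [0, 1] when possible, and blend it into the reflection image."""
    processed = erode3(attenuation)
    rng = processed.max() - processed.min()
    if rng > 0:
        processed = (processed - processed.min()) / rng
    return (1 - weight) * reflection + weight * processed
```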

    [0170] With the transmission image(s) and the reflection image(s) generated by processes 330 and 340, appropriate image evaluation processes 350 with respect to the generated images can be applied. The quantitative data obtained from the generated images can be similar to that obtained with respect to the preliminary image, but with more resolution/accuracy. For example, the quantitative data obtained from the generated images can be used for diagnostic and other purposes. Post processing 360 can be applied to the images to remove noise, perform decluttering, as well as apply desired metadata to make the images into a useful package for viewing and/or as training data for machine learning. As an example, the final reflection image(s) and transmission image(s) can be converted to DICOM format.

    [0171] In some implementations, data for transmission data 312 and/or reflection data 314 can be acquired using a system such as described with respect to FIG. 1 and/or FIG. 2. Indeed, data captured by an imaging procedure using a multifrequency data collection method such as by transmitting a pulse containing multiple frequencies can be received.

    [0172] In some cases, adaptive imaging architecture 300 includes or is in communication with a data resource of data collected using vertically limited diffraction-less beams. For such implementations, image reconstruction is carried out on the data collected using vertically limited diffraction-less beams. For example, in some of such implementations, the preliminary reconstruction image and/or the reconstruction image is generated by applying a 2D reconstruction algorithm. Through the adjusting of the reconstruction configuration as described herein, it is possible to reduce artifacts and speed up reconstruction.

    [0173] In other of such implementations, the preliminary reconstruction image and/or the reconstruction image is generated by applying a 3D reconstruction algorithm. In some cases, the thickness of the diffraction-less beams are varied (and the 3D reconstruction can reflect such variance). In some cases, the diffraction-less beams are rotationally limited.

    [0174] Creating diffraction-less beams to sonicate a breast (or other tissue) has the advantage that only second order scattering will be out of plane. The acoustic energy must scatter out of the plane, then scatter back into the plane of the receiver array.

    [0175] FIGS. 8A and 8B illustrate wave propagation for diffraction-less beams. FIG. 8A illustrates a cross-section of non-diverging diffraction-less beams and FIG. 8B illustrates a cross-section of a present acoustic total field. A system for creating diffraction-less beams can form beams similar to the optical light sheets used in lightsheet microscopy. As illustrated in FIG. 8A, it can be seen that the beam remains narrow and can hold form for approximately 171λ, which would be 252 mm at 1 MHz (acoustic). These beams can be concatenated to create the horizontal sheets.

    Formulation:

    [0176] The well known expression generates a Gaussian-profile Bessel beam:

    [00045] ψ(ρ, z) ≅ −(ikA/(2zQ(z))) e^(ik(z + ρ²/(2z))) J_0(ik_ρkρ/(2zQ(z))) e^(−(1/(4Q(z)))(k_ρ² + k²ρ²/z²))

    [0177] where Q(z) = (q − ik/(2z)), k_ρ is the transverse (x-y) wavenumber, and ρ is the transverse radius.

    [0178] It is also possible to use superposition as these represent solutions to the wave equation. They are reasonable approximations when considering finite apertures.

    [00046] ψ_j(ρ, z) ≅ −(ikA_j/(2zQ_j(z))) e^(ik(z + ρ²/(2z))) J_0(ik_ρkρ/(2zQ_j(z))) e^(−(1/(4Q_j(z)))(k_ρ² + k²ρ²/z²))

    [0179] with Q_j(z) = (q_j − ik/(2z))

    [0180] It is known these beams do not diffract or spread out as normal propagating wavefields do. This spreading requires the 3D algorithms to account for first order scattering from planes outside of the central plane. Only second order scattering effects occur with the confined beam. Using these Gaussian-Bessel beams (and superpositions of them) means that only second order scattering will occur: scattering out of plane followed by scattering back into plane.

    [0181] Such a superposition is:

    [00047] ψ(ρ, z) ≅ Σ_j ψ_j(ρ, z)

    [0182] This formulation can create a pseudo-plane wave with limited diffraction in the vertical direction (a sheet of acoustic energy). This is similar to light sheet microscopy fields.

    [0183] It is also possible to use a time domain plane wave (pseudo plane wave).

    [0184] Advantageously, the confinement of the acoustic signal means that out of plane scattering is second order and so a 2D algorithm of any kind will produce a much better image than with standard unconstrained ultrasound incident fields. These beams can be produced in the time domain as well, as constrained pulses and 2D plane wave pulses.

    Evaluation Processes

    [0185] As noted above with respect to operations for evaluating a preliminary reconstruction image 334 (including obtaining (422) preliminary information from the preliminary reconstruction image) and image evaluation 350, mammographic density can be measured (see also operation 508 of FIG. 5). In some cases, estimating mammographic density includes separating exterior voxels from breast voxels of the preliminary reconstruction image; segmenting high speed value breast voxels from other breast voxels of the preliminary reconstruction image to generate a first segmented image, wherein high speed value breast voxels are breast voxels having a speed value above a threshold; removing, from the first segmented image, high speed value breast voxels corresponding to skin tissue of the patient's breast to generate a second segmented image; and calculating mammographic density by determining a percentage of the high speed value breast voxels in the second segmented image.
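    The enumerated density-estimate steps can be sketched directly; the threshold value, units, and the availability of precomputed breast and skin masks are assumptions of this sketch.

```python
import numpy as np

def mammographic_density(sos, breast_mask, skin_mask, speed_threshold=1.48):
    """Percentage of non-skin breast voxels whose speed of sound (mm/us here,
    hypothetical threshold) exceeds speed_threshold (dense tissue)."""
    tissue = breast_mask & ~skin_mask          # exterior and skin voxels removed
    high = tissue & (sos > speed_threshold)    # high-speed breast voxels
    return 100.0 * high.sum() / tissue.sum()
```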

    [0186] Other information can also be obtained from the preliminary reconstruction image and/or final/updated reconstruction image. As also noted above, multifrequency images are collected. While generating a final multifrequency image, intermediate multifrequency images can be kept and used to obtain information from the preliminary reconstruction image and/or final/updated reconstruction image. As such, certain characteristics of the imaged object that vary with frequency can be used to extract useful information. While attenuation and speed of sound images are described in detail, the principle applies to images based on the backscatter (reflection) data, including with respect to the attenuation coefficient slope estimate (ACS) and envelope statistics and derived parameters such as the effective scatterer diameter (ESD), effective acoustic concentration (EAC), and the μ parameter (effective scatterer density) from the homodyned-K (HK) distribution. Some of the estimates can be determined directly from the backscattered signal. As provided in detail herein, certain parameters can be estimated based on tomographic reconstructed images using the transmitted signal.

    Tissue Type Identification

    [0187] It has been verified in the literature that certain mammalian tissue types behave in a particular manner with frequency. It is known that attenuation varies with frequency via the power law dependency. Advantageously, by using the Kramers-Kronig relationship that relates the frequency dependence of the attenuation to the speed of sound frequency dependence, it is possible to derive a corresponding power law for the speed of sound (and use the described reconstruction images to obtain useful information to identify tissue/lesions). From the power law relationship, it can be seen that the rate of change with frequency for the SOS is less than for attenuation. In fact, the frequency dependence for speed can be shown to be

    [00048] 1/c(x, ω) − 1/c(x, ω_o) = α_o(x) tan(aπ/2)(ω^(a−1) − ω_o^(a−1))

    when α(ω) = α_o ω^a (see below).

    [0188] This is reflected in measured values as well, where the dependence of speed is hard to measure, whereas the dependence of attenuation is measured relatively easily. There is strong evidence from the literature that attenuation varies with frequency in a way that may help to characterize biological tissue.

    [0189] In particular, the standard tissue model used for attenuation is

    [00049] α(ω) = α_o ω^a, ω = 2πf

    [0190] which leads to the linear relationship:

    [00050] log α(x, ω) = log α_o(x) + a log ω.

    [0191] This is acknowledged in the literature as being an approximation since the true model involves a summation of terms which represent relaxation processes at various frequencies. The phenomenological results of investigations indicate the power law to be generally satisfied.

    [0192] FIG. 9 shows a log plot of attenuation coefficient vs frequency from Bamber, 1986, Attenuation and absorption, Physical Principles of Medical Ultrasonics, ed. C R Hill. The plot of FIG. 9 shows that the attenuation of most tissue types follows a power law variation with frequency.

    [0193] The tissue type (bone, skin, ducts, glands, fat, fibroglandular tissue, cancer, etc.) may affect the attenuation coefficient as well as the power law. Speed varies with frequency by virtue of the Kramers-Kronig relations. Note that these relations are non-local in their exact form; they involve integration over the entire real line. There are, however, semi-local forms that can be used in an approximate manner. Note also that an argument based on operator series shows that for the special case of a power law variation of attenuation with frequency, i.e., α(ω) = α_o ω^a, the speed of sound must also vary with frequency according to the formula (proved below):

    [00051] 1/c(x, ω) − 1/c(x, ω_o) = α_o(x) tan(aπ/2)(ω^(a−1) − ω_o^(a−1)).

    [0194] These formulas show that the variation of speed with frequency will not necessarily be large since a ≈ 1.3. This is borne out by experimental evidence as shown in FIGS. 10A-10D.

    [0195] FIGS. 10A-10D illustrate identification of tissue type through the power law variation of attenuation with frequency. FIG. 10A shows a log-log plot of attenuation vs frequency (0.775, 0.8, 0.825, 0.85, 0.875, 0.9 MHz) for an image with a known cancer lesion. The region of inspection (ROI) is a 2 mm diameter region centered in the known cancer lesion as determined by the speed of sound map. FIG. 10B shows a log-log plot of attenuation vs frequency for an image with fat. FIG. 10C shows panels of intermediate multifrequency images of a point in cancer, and FIG. 10D shows panels of intermediate multifrequency images of a point in fat. Each of the multifrequency images corresponds to an image at a particular frequency (e.g., starting from the top with attenuation/SOS at 1.3 MHz, with subsequent panels showing the attenuation images at sequentially lower frequencies down to 0.400 MHz).

    [0196] The calculation of these parameters over multiple frequencies can be carried out over a volume or region of interest to increase signal-to-noise (SNR) and stability. As can be seen, there is a different relationship between attenuation and frequency for a fat ROI as compared to a cancer ROI.

    [0197] Using the formula: log α(x, ω) = log α_o(x) + a log ω.

    [0198] The coefficient here is a = 10.2.

    [0199] Power law result: Note these are empirical results subject to noise and are for illustrative purposes only. More accurate estimates can be obtained using a volume of interest in place of a single voxel.

    TABLE-US-00002
      Tissue    Power law exponent (a)
      Fat       10.2
      Cancer    6.16

    [0200] Advantageously, this strong separation through the power law parameters can be used to assign a cancer risk.
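    The log-log fit of [0197] can be sketched with an ordinary least-squares slope; the sample data below are synthetic, generated from a known exponent for verification, not measured values.

```python
import math

def power_law_exponent(freqs_mhz, attenuations):
    """Least-squares slope of log(attenuation) vs log(frequency),
    i.e., the exponent a in alpha = alpha_0 * omega^a."""
    xs = [math.log(f) for f in freqs_mhz]
    ys = [math.log(a) for a in attenuations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: data generated with a = 2 should recover slope 2.
freqs = [0.775, 0.8, 0.825, 0.85, 0.875, 0.9]
atten = [0.1 * f ** 2 for f in freqs]
a_hat = power_law_exponent(freqs, atten)
```

Per [0196] and [0199], in practice the fit would be computed over a volume or region of interest rather than a single voxel to improve SNR and stability.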

    [0201] Accordingly, it is possible to obtain reconstruction information from a reconstruction image by identifying frequency-dependent characteristics using values across frequencies of the intermediate multifrequency images, including attenuation and speed of sound. For example, it is possible to calculate a frequency-independent wavenumber by removing a frequency dependent part of a wavenumber from a Helmholtz equation based model using information across frequencies of the multifrequency image. By quantitatively determining the spectral behavior of the speed of sound and attenuation images, it is possible to determine any correlation of the fitted linear coefficients to the tissue type and/or abnormality. It is also possible to estimate mass density and porosity using such spectral behavior (e.g., the frequency-independent wavenumber). Indeed, as described herein, it is possible to estimate porosity based on identified frequency-dependent characteristics.

    [0202] As an illustrative example of estimating mass density, once the attenuation is determined spatially (see e.g., examples in the imaging algorithms), mass density can be determined by first isolating the frequency independent wavenumber, solving a simple forward problem to estimate density distribution, and dividing out the density to yield the bulk modulus.

    [0203] It should be noted that in the linear approximation (Born approximation) of the forward problem a simple calculation indicates that different parts of the data originate from either the monopole scattering from the bulk modulus or the dipole scattering from the density variations.

    [0204] As illustrated, the data comes from different scattering potentials depending on where it is located. Note this is not an inversion scheme; rather, it shows that there is some separation in the scattered data that may be exploited by a suitable scheme with suitable regularization (see below for regularization schemes). Consider a configuration such as shown in FIG. 11, which illustrates propagation between a transmitter and receiver. The Lippmann-Schwinger (LS) equation reads:

    [00052] $f_i(\mathbf{r}) = f(\mathbf{r}) - k_o^2 \int_{\mathbb{R}^3} \gamma_\kappa(\mathbf{r}')\, f(\mathbf{r}')\, g_{k_o}(R)\, d\mathbf{r}' + \int_{\mathbb{R}^3} \nabla'\cdot\big(\gamma_\rho(\mathbf{r}')\,\nabla' f(\mathbf{r}')\big)\, g_{k_o}(R)\, d\mathbf{r}'$

    [0205] This follows directly from the differential equation. Integrating by parts and using the Gauss theorem to remove the total divergence gives:

    [00053] $f_i(\mathbf{r}) = f(\mathbf{r}) - k_o^2 \int \gamma_\kappa(\mathbf{r}')\, f(\mathbf{r}')\, g_{k_o}(R)\, d\mathbf{r}' + \int \gamma_\rho(\mathbf{r}')\, \nabla' f(\mathbf{r}') \cdot \nabla' g_{k_o}(R)\, d\mathbf{r}'$

    [0206] Using $R = |\mathbf{r} - \mathbf{r}'|$, the incident plane wave $f_i(\mathbf{r}) = e^{i\mathbf{k}_i\cdot\mathbf{r}}$, and the standard far-field approximation:

    [00054] $g_{k_o}(R) = \dfrac{e^{-ikR}}{4\pi R} = \dfrac{e^{-ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|} \approx \dfrac{e^{-ikr}}{4\pi r}\, e^{-ik\,\mathbf{u}_r\cdot\mathbf{r}'}$

    and

    [0207] $\nabla' g_{k_o}(R) \approx -ik\,\mathbf{u}_r\, g_{k_o}(R)$ for the gradient of the Green's function under the same approximation.

    [0208] Define $\mathbf{k}_r = k\mathbf{u}_r$, the vector pointing toward the receiver array element.

    [0209] Substituting the plane wave and the Green's function approximation in the integral equation gives:

    [00055] $f_s(\mathbf{r}) \approx \dfrac{e^{-ikr}}{4\pi r} \left( k^2 \int \gamma_\kappa(\mathbf{r}')\, e^{-i(\mathbf{k}_r - \mathbf{k}_i)\cdot\mathbf{r}'}\, d\mathbf{r}' + \mathbf{k}_i\cdot\mathbf{k}_r \int \gamma_\rho(\mathbf{r}')\, e^{-i(\mathbf{k}_r - \mathbf{k}_i)\cdot\mathbf{r}'}\, d\mathbf{r}' \right)$

    [0210] The inner product $\mathbf{k}_i\cdot\mathbf{k}_r$ is $k^2$ times the cosine of the angle between them, so:

    [00056] $f_s(\mathbf{r}) \approx \dfrac{e^{-ikr}}{4\pi r}\, k^2 \left( \int \gamma_\kappa(\mathbf{r}')\, e^{-i(\mathbf{k}_r - \mathbf{k}_i)\cdot\mathbf{r}'}\, d\mathbf{r}' + \cos\theta \int \gamma_\rho(\mathbf{r}')\, e^{-i(\mathbf{k}_r - \mathbf{k}_i)\cdot\mathbf{r}'}\, d\mathbf{r}' \right)$

    [0211] These integrals are Fourier transforms and give an expression for the scattered field:

    [00057] $f_s(\mathbf{r}) \approx \dfrac{e^{-ikr}}{4\pi r}\, k^2 \left( \tilde{\gamma}_\kappa(\mathbf{k}_r - \mathbf{k}_i) + \cos\theta\, \tilde{\gamma}_\rho(\mathbf{k}_r - \mathbf{k}_i) \right)$

    where $\theta$ is the angle between $\mathbf{k}_r$ and $\mathbf{k}_i$.

    [0212] This gives the relation in the linear approximation between scattered data and the object functions for density and bulk modulus. To complete the separation, the following correlations between mass density and speed can be utilized, particularly for the speed values of interest for mammalian tissue.

    [00058] $K = (1706.507\,c^2 - 1611.737)\ \mathrm{MPa}, \qquad K = 893\,c^3 - 349\,c^2 = c^2(893\,c - 349)\ \mathrm{MPa}$

    [0213] The above correlations are particularly useful as a regularizing term in density reconstruction and can be used during the preliminary image reconstruction. For example, it is possible to estimate the bulk modulus and shear modulus for breast and mammalian tissue from these correlations. Note:

    [0214] $K = \rho c^2$, which together with the correlation above (K scaling approximately as $c^3$) indicates that the cube of the speed image can be considered a stiffness image.
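As a minimal numeric sketch, the two empirical speed-to-stiffness correlations above can be evaluated directly; the unit convention assumed here (c in mm/μs, K in MPa) is inferred from the water-like values the formulas return:

```python
# Minimal sketch of the two empirical speed-to-stiffness correlations,
# assuming c is in mm/us (water ~1.5) and K is returned in MPa.

def bulk_modulus_quadratic(c):
    """K = 1706.507*c**2 - 1611.737 (MPa)."""
    return 1706.507 * c**2 - 1611.737

def bulk_modulus_cubic(c):
    """K = 893*c**3 - 349*c**2 = c**2*(893*c - 349) (MPa)."""
    return c**2 * (893.0 * c - 349.0)

# At a water-like speed c = 1.5 mm/us (1500 m/s) both correlations give
# roughly the bulk modulus of water (~2.2 GPa):
print(bulk_modulus_quadratic(1.5))  # ~2227.9 MPa
print(bulk_modulus_cubic(1.5))      # ~2228.6 MPa
```

The near-agreement of the two fits at water speed is a useful sanity check on the assumed units.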

    [0215] Bulk modulus and shear wave estimation is possible from the speed of sound images (e.g., as reconstruction information from the reconstruction images). For example, the shear modulus turns out to be more sensitive in some sense to the presence of a hard lesion, since a cancer can have a shear speed 30 times greater than normal tissue.

    [0216] Accepting (based on

    [00059] $c_s = \sqrt{\mu/\rho}$) that $\dfrac{c}{c_s} = \sqrt{\dfrac{K}{\mu}}$

    [0217] gives (see below)

    [00060] $\mu \approx \dfrac{K}{9\times 10^4}$

    [0218] so that

    [00061] $\mu \approx \dfrac{893\,c^3}{9\times 10^4}$

    [0219] Accordingly, it can be seen that a shear modulus estimation is possible.

    [0220] In particular, the estimate of $\mu$ based on the shear wave speed of tissue being approximately 5 m/sec is

    [00062] $\mu = \dfrac{K}{(c/c_s)^2} \approx \dfrac{K}{(300)^2} = \dfrac{K}{9\times 10^4}$

    [0221] The Bulk modulus formula:

    [00063] $K = 893\,c^3 - 349\,c^2 = c^2(893\,c - 349)$, c in mm/μs, K in MPa.

    [0222] Using this formula, it is possible to show stiffness directly in the images.

    [0223] Here stiffness is bulk modulus. Thus it is possible to show a quantitative representation of stiffness that others are showing qualitatively. Note that it is possible to determine the Young's modulus as approximately $3\mu$, with $\mu$ the shear modulus, using the perturbation formula (E is Young's modulus):

    [00064] $E = \dfrac{\mu(3\lambda + 2\mu)}{\lambda + \mu} = 3\mu\left(1 + \dfrac{2\mu}{3\lambda}\right)\left(1 - \dfrac{\mu}{\lambda} + \cdots\right)$

    [0224] The perturbation expansion to first order in $\mu/\lambda$ gives $E \approx 3\mu$.

    [0225] Note also, as illustrated by the plot of FIG. 12, the shear wave speed greatly magnifies the differences in speed over the compressional wave speed range 1560 to 1610 m/s. This is the range for differentiating between fibroadenomas and cancer.

    [0226] Using the formula:

    [00065] $\mu = \dfrac{K}{(c/c_s)^2}$, where $K = 893\,c^3 - 349\,c^2 = c^2(893\,c - 349)\ \mathrm{MPa}$
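A minimal numeric sketch of this shear-modulus estimate, using the representative speeds mentioned in the text (compressional c ≈ 1500 m/s, shear c_s ≈ 5 m/s); the specific values are illustrative:

```python
# Minimal sketch of mu = K / (c/c_s)^2 with representative speeds from
# the text: c ~ 1500 m/s compressional, c_s ~ 5 m/s shear, so
# (c/c_s)^2 ~ 9e4.

def shear_modulus(K_mpa, c=1500.0, c_s=5.0):
    """Shear modulus (MPa) from bulk modulus K (MPa) and speeds (m/s)."""
    return K_mpa / (c / c_s) ** 2

K = 2228.625             # MPa, from K = c**2*(893*c - 349) at c = 1.5 mm/us
print(shear_modulus(K))  # ~0.0248 MPa, i.e. ~25 kPa (soft-tissue scale)
```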

    Mass Density Estimation

    [0227] Accordingly, it is possible to utilize the multifrequency images in attenuation and remove the frequency-dependent part of the equivalent wavenumber that results from the square-root transformation utilized to justify neglect of density. Once the frequency-independent part of the equivalent wavenumber is determined, there remains a nonlinear second-order differential equation that can be transformed in two stages into a linear second-order equation (essentially a Helmholtz equation) with known wavenumber and known boundary conditions (it is an elliptic equation).

    [0228] The problem can be solved once (since it is not an inverse problem), and the spatial distribution of the density can be retrieved.

    [0229] Once the density is known it is possible to estimate the bulk modulus by virtue of the formula:

    [00066] $c = \sqrt{K/\rho}$

    [0230] The density dependent wave equation is known to be

    [00067] $\nabla^2 p + \dfrac{\omega^2}{c_o^2}\, p = \nabla\cdot(\gamma_\rho \nabla p) - \dfrac{\omega^2}{c_o^2}\,\gamma_\kappa\, p$

    where $\gamma_\kappa \equiv (\kappa - \kappa_o)/\kappa_o$ and $\gamma_\rho \equiv (\rho - \rho_o)/\rho$ are the gamma object functions for compressibility and density, respectively. Pressure is p, frequency is $\omega = 2\pi f$, and background speed of sound is $c_o$. The wavenumber is

    [00068] $k_o^2 \equiv \omega^2/c_o^2.$

    Using

    [00069] $\tilde{p} = p/\sqrt{\rho}$

    the equation becomes $\nabla^2\tilde{p}(x) + \tilde{k}^2(x)\,\tilde{p}(x) = 0$ with pseudo-wavenumber given by:

    [00070] $\tilde{k}^2(x) = k^2(x) - \dfrac{3}{4}\left(\dfrac{\nabla\rho}{\rho}\right)^2 + \dfrac{1}{2}\,\dfrac{\nabla^2\rho}{\rho}.$

    Note the true wavenumber

    [00071] $k(x) \equiv \dfrac{\omega}{c(x)} + i\,\alpha(x).$

    As noted, the attenuation generally follows a power law with exponent of about 1.2 or so for mammalian tissue. Accordingly, it is possible to remove the frequency-dependent part. The Kramers-Kronig relations can be exactly integrated in this case and give a power law for the speed of sound with exponent 0.2. $k_{ind}(x)$ is the frequency-independent part that remains when the frequency-dependent part is subtracted out.
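The power-law separation across the multifrequency images can be sketched as a per-voxel log-log fit; the frequencies, amplitude, and exponent below are synthetic illustrative values, not measured data:

```python
import numpy as np

# Sketch: fit the attenuation power law alpha(f) = a0 * f**y for one
# voxel across multifrequency images via log-log least squares, then
# subtract the fitted frequency-dependent part.  Frequencies, amplitude
# and the exponent y = 1.2 are illustrative.

freqs = np.array([0.5e6, 1.0e6, 1.5e6, 2.0e6])   # Hz
a0, y_true = 3.2e-8, 1.2
alpha = a0 * freqs**y_true                       # noiseless voxel values

# log(alpha) = log(a0) + y*log(f): linear fit in log-log coordinates
y_fit, log_a0 = np.polyfit(np.log(freqs), np.log(alpha), 1)

# Remove the frequency-dependent part; what remains (zero here, since
# the synthetic data is a pure power law) is the frequency-independent
# contribution used to form k_ind(x).
alpha_ind = alpha - np.exp(log_a0) * freqs**y_fit
```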

    [0231] Because multiple frequency images are generated, there is access to this information. We use the identity

    [00072] $\dfrac{1}{2}\,\dfrac{\nabla^2\rho}{\rho} = \dfrac{1}{2}\nabla^2(\ln\rho) + \dfrac{1}{2}\left(\dfrac{\nabla\rho}{\rho}\right)^2$

    (proved by direct calculation).

    [0232] The equation now reads:

    [00073] $\tilde{k}^2(x) - k^2(x) \equiv k_{ind}^2 = -\dfrac{1}{4}\,\nabla\ln\rho\cdot\nabla\ln\rho + \dfrac{1}{2}\,\nabla^2\ln\rho.$

    To solve this equation, we first transform to a new variable y:

    [00074] $\dfrac{1}{2}\ln\rho = -\ln y$, i.e., $y = \rho^{-1/2}.$

    The equation now reads

    [00075] $-\dfrac{\nabla^2 y}{y} = k_{ind}^2(x)$

    which can be rewritten as

    [00076] $\nabla^2 y(x) + k_{ind}^2(x)\, y(x) = 0.$

    [0233] The method can include generating images at the multiple frequencies; estimating the attenuation at multiple frequencies; removing the frequency-dependent part of the wavenumber, leaving $k_{ind}(x)$; solving

    [00077] $\nabla^2 y(x) + k_{ind}^2(x)\, y(x) = 0,$

    the forward Helmholtz problem for y (with suitable boundary conditions); and estimating density as

    [00078] $\rho = e^{c}\, y^{-2} = K\, y^{-2},$

    for a suitably determined constant c, with K determined by the known density of water at the boundary and in the water bath.
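The steps of [0233] can be sketched in one dimension. This is a hedged stand-in: the grid, wavenumber, and boundary values are illustrative, and the density recovery assumes the transformation $y \propto \rho^{-1/2}$, so that $\rho = K/y^2$:

```python
import numpy as np

# 1D sketch of the density-estimation steps: solve the forward Helmholtz
# boundary-value problem y'' + k_ind(x)**2 * y = 0 by finite differences,
# then recover density as rho = K / y**2 (assuming y ~ rho**-0.5), with
# K fixed by the known water density at the boundary.  Grid, k_ind and
# boundary values here are illustrative.

def solve_helmholtz_1d(k_ind, h, y_left, y_right):
    """Dirichlet BVP: (y[i-1] - 2*y[i] + y[i+1])/h**2 + k_ind[i]**2 * y[i] = 0."""
    n = len(k_ind)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = -2.0 / h**2 + k_ind[i] ** 2
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        else:
            b[i] -= y_left / h**2      # fold left boundary value into RHS
        if i < n - 1:
            A[i, i + 1] = 1.0 / h**2
        else:
            b[i] -= y_right / h**2     # fold right boundary value into RHS
    return np.linalg.solve(A, b)

h = 0.5e-3                             # 0.5 mm grid spacing
k_ind = np.full(200, 20.0)             # frequency-independent wavenumber (1/m)
y = solve_helmholtz_1d(k_ind, h, y_left=1.0, y_right=1.0)

rho_water = 1000.0                     # kg/m^3 in the water bath
K = rho_water * 1.0**2                 # from rho = K/y**2 with y = 1 at boundary
rho = K / y**2                         # recovered density profile
```

Because the problem is a forward (not inverse) elliptic solve, it only needs to be solved once, as the text notes.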
    Biot Porous Media

    [0234] It is noted that the integral equation formulation for the Biot theory in the acoustic approximation is provided in the imaging algorithm section (see the vector Lippmann-Schwinger equation, a Fredholm equation of the second kind), as well as the paraxial approximation to the Helmholtz equation.

    [0235] Expanding upon the imaging algorithm provided below, a conversion to the paraxial form is provided. An explicit method for determination of the porosity parameters based on the spectral determination of speed of sound and attenuation is included.

    [0236] The full elastic Biot model is shown as follows:

    [00079] $\begin{pmatrix} \hat{\rho}_{11} & \hat{\rho}_{12} \\ \hat{\rho}_{21} & \hat{\rho}_{22} \end{pmatrix} \begin{pmatrix} \partial_t^2 \mathbf{u} \\ \partial_t^2 \mathbf{U} \end{pmatrix} = \begin{pmatrix} P & Q \\ Q & R \end{pmatrix} \begin{pmatrix} \nabla(\nabla\cdot\mathbf{u}) \\ \nabla(\nabla\cdot\mathbf{U}) \end{pmatrix} - \begin{pmatrix} N\,\nabla\times(\nabla\times\mathbf{u}) \\ 0 \end{pmatrix}$

    [0237] This is reduced via homogenization theory to effective parameters (see below for definition of the parameters):

    [00080] U s + ( + ) grad div ( U s ) = - 2 ( 1 1 U s + 1 2 U l ) + i ( U s - U l ) n / grad div [ ( / n - 1 ) U s - U l ] = - 2 ( 2 1 U s + 2 2 U l ) - i ( U s - U l ) , strain inertial viscous energy terms dissipation where H = H 1 + iH 2 = 1 / K ( )

    with

    [00081] 1 1 = ( 1 - n ) s + ( nH 2 / - l ) 1 2 = n ( l - H 2 / ) = nH 1 2 1 = ( l - nH 2 / ) 2 2 = nH 2 / .

    [0238] This is a full elastic representation and the details are in Boutin et al. (C. Boutin, G. Bonnet, and P. Y. Bard, Green functions and associated sources in infinite and stratified poroelastic media, Geophysical Journal International, vol. 90, pp. 521-550, 1987).

    [0239] FIGS. 13A-13D provide images showing application of porosity estimation using Biot as described herein.

    [0240] As shown in FIG. 13A, segmentation of marrow in the tibia based on the SOS and constrained by an ellipsoid gives an average value of 1402.2 m/s, which agrees with literature values. FIG. 13B shows bone segmentation (trabecular) in the tibia. FIG. 13C shows segmentation of trabecular bone in coronal and sagittal views in the femur.

    [0241] Table 1 below gives the quantitative accuracy of the bone marrow speed of sound and the speed of sound of the Biot slow wave.

TABLE-US-00003
TABLE 1: Comparison of segmented vs. literature values for marrow and Biot slow wave SOS

                             Literature values    QTUS measured values
    Human bone marrow SOS    ~1410 m/s            1402, 1389 m/s
    Biot slow wave SOS       ~1470 m/s [2, 3]     1472, 1466 m/s

    Homogenization in Biot Context:

    [0242] Note that homogenization is often used in mathematical analysis of wave or diffusion phenomena in disordered or periodically structured media. When the characteristic size $\varepsilon$ of the periodicity, for example, satisfies $\varepsilon \ll 1$ compared with the O(1) size of the macrostructure, analytic expressions relating coefficients at the macroscopic scale to the microstructure and the concomitant coefficients can be formulated. Furthermore, as the ratio $\varepsilon \rightarrow 0$, characteristic homogenization parameters are created. These macroscopic parameters are useful for studying wave and diffusion parameters in microstructured media. In particular, in porous media:

    [0243] Using the notation of Boutin et al., the following model for wave propagation in porous media results:

    [00082] ( + 2 ) 2 P s + 2 P s - 0 2 P = - .Math. F ( x ) 2 P - P - 0 P s = - V ( x )

    [0244] where $P_s \equiv \nabla\cdot\mathbf{U}_s$ is the divergence of particle motion in the solid matrix, and $\gamma$, $\gamma_0$, $\beta$ are parameters defined in the following way:

    [00083] = ( 1 - n ) s + l [ n + l 2 K ( ) / .Math. ] = K ( ) / .Math. o = + l 2 K ( ) / .Math. = + l 2

    [0245] where $\alpha = 1 - K_b/K_s$, with $K_b = \lambda + 2\mu/3$, the bulk modulus, and $\lambda$, $\mu$ are the Lamé parameters in standard linear elastic theory. Also $\beta = (\alpha - n)/K_s + n/K_l$, and: [0246] $\omega$ is the frequency of the interrogating wave [0247] $\rho_l$ is the density of the liquid phase [0248] $\rho_s$ is the density of the solid phase [0249] n is the porosity (saturation parameter) [0250] $K(\omega)$ is the generalized, explicitly frequency-dependent Darcy coefficient introduced via homogenization theory

    [0251] In the above, an approximate value based on homogenization theory for straight ducts can be used:

    [00084] $K(\omega) = \left(\dfrac{ik}{\nu\,\kappa^*}\right) \dfrac{J_2\!\left(\sqrt{-8i\kappa^*}\right)}{J_0\!\left(\sqrt{-8i\kappa^*}\right)}; \qquad k = \dfrac{n a^2}{8}; \qquad \kappa^* = \dfrac{\omega k}{n\nu}$

    with $J_0$, $J_2$ being Bessel functions, a the radius of the ducts, and $\nu$ the kinematic viscosity of the fluid.

    [0252] This is an acoustic approximation to the full elastic equations but presents an opportunity for an exact solution to the Green's function equation:


    $B\, G_{Biot} + I_{2\times 2}\,\delta(x) = 0$, where $I_{2\times 2}$ is the 2D identity and


    $G_{Biot}: \mathbb{R}^{2,3} \rightarrow \mathbb{R}^{2\times 2}$

    [0253] is the matrix Biot Green's operator. B is the operator matrix

    [00085] B = [ 2 - - 0 - 0 2 2 + 2 ]

    [0254] Using the adjugate of this matrix (i.e., the signed matrix of cofactors, not the linear-operator adjoint), the following solution is obtained for the Biot Green's operator.

    [00086] G Biot = 1 4 0 ( 2 2 - 1 2 ) { ( w .Math. d ) [ 0 0 ] + ( w .Math. v ) [ _ 2 0 0 - _ ] }

    [0255] This is used in the Generalized acoustic Biot Lippmann-Schwinger equation:

    [00087] r 0 ( x ) = r ( x ) - [ - b _ 1 0 0 b _ 2 ] G 2 x 2 ( x - ) D ( ) r ( ) d 3 D ( x ) = [ 1 0 0 2 ]

    [0256] D(x) contains the object functions that encode the tissue characteristics. Also, $r: \mathbb{R}^{2,3} \rightarrow \mathbb{R}^2$ and $s: \mathbb{R}^{2,3} \rightarrow \mathbb{R}^2$ are vector-valued functions on $\mathbb{R}^3$ or $\mathbb{R}^2$ given by

    [00088] $r(x) = \begin{pmatrix} P(x) \\ P_s(x) \end{pmatrix}, \qquad s = \begin{pmatrix} V(x) \\ \nabla\cdot F(x) \end{pmatrix}$

    where w: R.sup.3.fwdarw.R.sup.2 is the 2D vector defined by:

    [00089] $w \equiv \begin{pmatrix} \dfrac{e^{-i\xi_1|x|}}{|x|} \\[2mm] \dfrac{e^{-i\xi_2|x|}}{|x|} \end{pmatrix}$

    [0257] where it is seen that the $\xi_i$ are effective wavenumbers in porous media and the vectors

    [00090] $d = \begin{pmatrix} -\xi_1^2 \\ \xi_2^2 \end{pmatrix}, \qquad v = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$

    Are so defined.

    [0258] The $\xi_i$ are the effective wavenumbers.

    [0259] Using this Green's function the forward problem is well defined.

    [0260] Furthermore, the Jacobian calculation and adjoint of the Jacobian calculation proceeds as described herein (see e.g., step 3850 of FIG. 38 and SCIENTIFIC BACKGROUND ON SQUARING AN OVERDETERMINED SYSTEM TO APPLY BiSTAB)

    [0261] These operations respectively provide a step length and the gradient direction.

    [0262] This allows for the full implementation of the inversion algorithm.

    [0263] In one embodiment the imaged parameters are: Darcy coefficient and porosity. FIG. 13D shows an attenuation image of a human (ex vivo) cadaver knee. The patella, tibia, fibula, and femur are present; the tibiofemoral space is clear. The anomalous attenuation is related to the Biot generalized Darcy coefficient.

    [0264] Other embodiments can include more or fewer parameters in addition to the standard wave speed and attenuation. The other parameters that are not imaged via this disclosure can be determined from literature values.

    [0265] These calculations can be carried out on CUDA-capable or similar AMD cards/processors for speed.

    [0266] Acoustic theory appropriate for multiple organs

    [0267] Note that although orthopedic images are shown here, the porosity and density parameters apply to lungs, kidneys, liver, and other organs or abdominal imaging.

    [0268] Paraxial approximation for Biot wave theory:

    [0269] Recall that the pressure in the solid is $P_s \equiv \nabla\cdot\mathbf{U}_s$. The inhomogeneous Biot equation from above is:

    [00091] ( [ ( + 2 ) 2 + 2 ] - o 2 - o 2 - ) ( P s P ) = ( - F - V )

    [0270] Rewriting this as:

    [00092] ( ( + 2 ) - o 0 ) 2 + ( 2 0 - o - ) ( P s P ) = ( - F - V )

    [0271] And using the matrix definitions:

    [00093] A ( ( + 2 ) - o 0 ) , B ( 2 0 - o - )

    [0272] Gives the following form for the Biot acoustic approximation.

    [00094] $\left( A\,\nabla^2 + B \right)\begin{pmatrix} P_s \\ P \end{pmatrix} = 0$

    [0273] This can be solved using the Biot Acoustic Green's function developed above. This leads to a generalized Lippmann-Schwinger equation that is rigorous and leads to an inversion algorithm by itself and the concomitant step length and gradient direction calculations.

    [0274] As can be seen from the images, it is possible to detect the slow Biot nonstandard compressional wave (P wave); the trabecular bone region interior to the bone has been segmented, and a value commensurate with the predicted speed of sound for this slow Biot wave has been obtained, yielding clinically valuable effective parameters.

    Paraxial Approximation:

    [0275] This vector Helmholtz equation can be approximately factored into

    [00095] ( ( + 2 ) - o 0 ) 2 + ( 2 0 - o - ) ( A + i B ) ( A - i B )

    [0276] Where

    [0277] $\beta = (\alpha - n)/K_s + n/K_l$ is a volume-averaged bulk modulus

    [0278] and $\alpha$ is a relative difference modulus, and $\gamma_o$ is a frequency-dependent relative difference modulus with the effective Darcy coefficient K:

    [00096] $\alpha = 1 - K_b/K_s, \qquad \gamma_o \equiv \alpha + \rho_l^2\,\omega K(\omega)/i$

    [0279] The matrix square root can be found in the standard way for A and B:

    [0280] This leads to the parabolic PDE (paraxial approximation):

    [00097] $\left( \sqrt{A}\,\nabla - i\sqrt{B} \right) P \approx 0$

    [0281] By direct calculation then:

    [00098] B = ( 0 o ( i - ) 2 + i ) , and A = ( + 2 o ( - + 2 ) ( + 2 ) - 0 )

    [0282] We are working in the acoustic approximation, so $\mu \approx 0$ and

    [00099] A = ( + 2 o ( - + 2 ) ( + 2 ) - 0 ) ( o ( - ) - 0 )

    [0283] So that finally the acoustic approximation ($\mu \approx 0$) Biot theory reads:

    [00100] ( ( o ( - ) - 0 ) - i ( 0 o ( i - ) 2 + i ) ) ( P s P ) 0

    Paraxial Approximation in Suitable Form

    [0284] The wave equation for the Biot acoustic model in the paraxial approximation now reads in a form that allows for determination of the effective speed of sound and other parameters:

    [00101] $\left( I_{2\times 2}\,\nabla - i\sqrt{A^{-1}B} \right)\begin{pmatrix} P_s \\ P \end{pmatrix} \approx 0$

    [0285] Now the inverse of $\sqrt{A}$ can be calculated and the appropriate calculation carried out explicitly to yield:

    [00102] A - 1 B = ( o ( - ) - 0 ) - 1 ( 0 o ( i - ) 2 + i )

    [0286] So that the factored wave equation uses:

    [00103] A - 1 B = 1 ( - o ( - ) - o ( i - ) 2 + - i o ( - ) - o ( i - ) 2 + i )

    [0287] and the acoustic approximation $\mu \approx 0$, as below.

    [0288] That is, explicitly:

    [00104] ( I 2 x 2 - i 1 ( - o ( - ) - o ( i - ) 2 + - i o ( - ) - o ( i - ) 2 + i ) ) ( P s P ) = 0

    [0289] If we neglect terms that have $\omega^2$ or $\omega$ in the denominator, we get:

    [00105] ( I 2 x 2 - i 1 ( - i o ( - ) - 0 i ) ) ( P s P ) = 0

    [0290] Note also the simplification:

    [00106] - o ( - ) - = o +

    [0291] So that

    [00107] ( I 2 x 2 - i ( 1 i o ( + ) 0 i 1 ) ) ( P s P ) = 0

    [0292] It is possible to separate out parameters

    [00108] ( I 2 x 2 - i ( 1 0 0 1 ) ( i o ( + ) 0 i ) ( 0 0 ) ) ( P s P ) = 0

    [0293] Note this form makes clear that there is only one component that truly corresponds to a propagating wave (although the complex values of the effective parameters indicate a mixture of modes).

    [0294] Note that, as expected, the relevant parameters $\gamma$, $\beta$, $\lambda$ all occur in square-root form, and the frequency occurs linearly in this approximation.

    [0295] The bottom equation is diffusion-like since

    [0296] $\beta = (\alpha - n)/K_s + n/K_l$ is a volume-averaged bulk modulus and $\rho_l$ is real.

    [0297] Also, $\alpha$ is a relative difference modulus, and $\gamma_o$ is a frequency-dependent relative difference modulus with the effective Darcy coefficient K:

    [00109] $\alpha = 1 - K_b/K_s, \qquad \gamma_o \equiv \alpha + \rho_l^2\,\omega K(\omega)/i$

    [0298] The top equation reads:

    [00110] P s - i P s + o ( + ) P = 0

    [0299] The last term is diffusion-like (although its square root is complex, so there is a propagation component) and corresponds to evanescent waves that won't propagate to the receiver arrays.

    [0300] Recall that $\beta = (\alpha - n)/K_s + n/K_l$, and

    [0301] that $\alpha = 1 - K_b/K_s$, $\gamma_o \equiv \alpha + \rho_l^2\,\omega K(\omega)/i$.

    [0302] Also recall

    $\gamma_o$, $\gamma$, $\beta$ are parameters defined in the following way:

    [00111] = ( 1 - n ) s + l [ n + l 2 K ( ) / .Math. ] = K ( ) / .Math. o = + l 2 K ( ) / .Math. = + l 2

    [0303] Ignoring the evanescent waves, the equation approximately yields:

    [00112] $\dfrac{\partial P_s}{\partial z} - i\,\omega\sqrt{\dfrac{\gamma}{\lambda}}\; P_s = 0,$

    [0304] This has the form of paraxial wave propagation with an effective wavenumber

    [00113] $k_{eff} = \dfrac{\omega}{c_{eff}} = \omega\sqrt{\dfrac{\gamma}{\lambda}}$, where $c_{eff} = \sqrt{\dfrac{\lambda}{\gamma}},$

    [0305] which define the effective wavenumber and speed of sound. Note the numerator is interpreted as the known (first) Lamé coefficient. The denominator is complex valued and has effective components involving porosity and the effective Darcy coefficient K. This K also encapsulates information about the size of the ducts in the porous medium and its tortuosity.

    [0306] Under certain assumptions, it is possible to measure the effective parameters which contain porosity (n), and the effective Darcy coefficient.

    [0307] These parameters may thus be effectively estimated in parts of the body and give phenomenological basis for the measurements related to and affected by osteoporotic conditions.

    [0308] This may allow monitoring and/or early detection of osteoporosis.

    [0309] To consider the frequency dependence of $\gamma$, the following is provided.

    [00114] $k_R = \dfrac{\omega}{c_{meas}} = \mathrm{Re}(k_{eff}); \quad k_{eff} = \dfrac{\omega}{c_{eff}} = \omega\sqrt{\dfrac{\gamma_R + i\gamma_I}{\lambda}} = k_R - i\alpha; \quad c_{eff} = \sqrt{\dfrac{\lambda}{\gamma_R + i\gamma_I}} = \sqrt{\lambda}\,\dfrac{\sqrt{\gamma_R - i\gamma_I}}{|\gamma|}; \quad c_{eff} = \sqrt{\lambda}\,\dfrac{\sqrt{|\gamma| + \gamma_R} - i\sqrt{|\gamma| - \gamma_R}}{\sqrt{2}\,|\gamma|}$

    [0310] From which the real and imaginary parts of the c.sub.eff can be read off.

    [0311] Also note that

    [00115] $\gamma_R = (1-n)\rho_s + \rho_l n + \rho_l^2\,\omega K_I(\omega); \qquad \gamma_I = -\rho_l^2\,\omega K_R(\omega); \qquad \mathrm{Re}\,\gamma = (1-n)\rho_s + \rho_l n + \rho_l^2\,\omega K_I(\omega) = \dfrac{\lambda}{\omega^2}\left(\dfrac{\omega^2}{c_{meas}^2} - \alpha^2\right); \qquad \mathrm{Im}\,\gamma = -\rho_l^2\,\omega K_R(\omega) = -\dfrac{2\lambda\alpha}{\omega\,c_{meas}}$

    [0312] Therefore, the following overdetermined system can be set up to solve for the porosity and the effective complex Darcy coefficients (K, real and imaginary parts).

    [00116] $\begin{pmatrix} \rho_l - \rho_s & 0 & \rho_l^2\omega_1 \\ 0 & -\rho_l^2\omega_1 & 0 \\ \vdots & \vdots & \vdots \\ \rho_l - \rho_s & 0 & \rho_l^2\omega_N \\ 0 & -\rho_l^2\omega_N & 0 \end{pmatrix} \begin{pmatrix} n \\ K_R \\ K_I \end{pmatrix} = \begin{pmatrix} -\rho_s + \dfrac{\lambda}{\omega_1^2}\left(\dfrac{\omega_1^2}{c_{meas}^2} - \alpha_1^2\right) \\ -\dfrac{2\lambda\alpha_1}{\omega_1 c_{meas}} \\ \vdots \\ -\rho_s + \dfrac{\lambda}{\omega_N^2}\left(\dfrac{\omega_N^2}{c_{meas}^2} - \alpha_N^2\right) \\ -\dfrac{2\lambda\alpha_N}{\omega_N c_{meas}} \end{pmatrix}$

    [0313] This can be solved in multiple ways, including with the pseudo-inverse in the sense of Penrose:

    [00117] $x^* = \arg\min_x \tfrac{1}{2}\,\|Ax - b\|^2, \qquad x^* = (A^T A)^{-1} A^T b, \qquad x = \begin{pmatrix} n \\ K_R \\ K_I \end{pmatrix}$
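This least-squares step can be sketched numerically. The row structure pairs one real-part and one imaginary-part equation per frequency; the densities, frequencies, and "true" parameters below are normalized illustrative values, and `np.linalg.lstsq` computes the same pseudo-inverse solution in a numerically stable way:

```python
import numpy as np

# Sketch: solve the overdetermined system for x = (n, K_R, K_I) in the
# least-squares / Penrose pseudo-inverse sense.  Densities, frequencies
# and the "true" parameters are normalized, illustrative values.

rho_s, rho_l = 1.96, 0.95                  # normalized solid/liquid densities
omegas = np.linspace(1.0, 3.0, 8)          # normalized frequencies
x_true = np.array([0.7, 0.4, 0.15])        # (n, K_R, K_I) used to make data

rows = []
for w in omegas:
    rows.append([rho_l - rho_s, 0.0, rho_l**2 * w])   # real-part equation
    rows.append([0.0, -rho_l**2 * w, 0.0])            # imaginary-part equation
A = np.array(rows)                         # 2N x 3, greatly overdetermined
b = A @ x_true                             # noiseless synthetic right-hand side

# x = argmin ||A x - b||^2; equivalent to (A^T A)^-1 A^T b for full-rank A
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                                   # recovers (0.7, 0.4, 0.15)
```

With noisy data the overdetermination (2N rows for 3 unknowns) averages the noise down, which is the point made in the text.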

    [0314] Note that a parametric dependence of K on frequency yields the relationships:

    [0315] For frequencies:

    [00118] $\omega_j,\ j = 1, \ldots, N: \qquad 2N \geq 1 + 2(N_K + 1) = 3 + 2N_K$

    [0316] where the constraints on the parametric form of K are given by:

    [00119] $K_R(\omega) = \kappa_o + \kappa_1\omega; \quad K_R(\omega) = \kappa_o + \sum_j \kappa_j\omega^j,\ j = 1, \ldots, N_K; \quad K_R(\omega) = \sum_{j=0}^{N_K} \kappa_j\omega^j; \quad N_K \leq \dfrac{2N - 3}{2} = N - \dfrac{3}{2}; \quad N_K < N - 2$

    [0317] This places constraints on the number of parameters, but $N_K \approx 2$ is likely and N is 20 or more, so in fact $N_K \ll N$ and the system is greatly overdetermined, which allows noisy data to be compensated for.

    [0318] Accordingly, it is possible to detect clinically relevant parameters by measuring the speed of sound in trabecular bone as outlined above by segmentation or similar means, estimating porosity, and estimating the effective Darcy coefficient K.

    [0319] In one embodiment the relationship:

    [0320] $\gamma = (1-n)\rho_s + \rho_l\left(n - i\rho_l\,\omega K(\omega)\right)$ is used: [0321] (1)

    [00120] $c_{eff} = \sqrt{\lambda/\gamma}$

    is measured by segmentation based on obtained images. [0322] (2) $\gamma$ is determined from

    [00121] $c_{eff} = \sqrt{\lambda/\gamma}$

    where $\lambda$ is the first Lamé coefficient for bone and is assumed known, since the bone matrix itself is assumed to change in a known way with osteoporosis. For example, the bone itself loses volume content, but the chemical nature of the small amount of bone remaining is the same. [0323] (3) The relation:

    [00122] $c_{eff} = \sqrt{\dfrac{\lambda}{\gamma_R + i\gamma_I}} = \sqrt{\lambda}\,\dfrac{\sqrt{\gamma_R - i\gamma_I}}{|\gamma|}$

    shows that $c_{eff}$ has a complex part. See above and $\sqrt{\gamma_R - i\gamma_I} = \sqrt{\tfrac{1}{2}}\left(\sqrt{|\gamma| + \gamma_R} - i\sqrt{|\gamma| - \gamma_R}\right)$. [0324] (4) The relationship

    [00123] $\gamma = \dfrac{\lambda}{c_{eff}^2}$

    is used to determine the real and imaginary parts of $\gamma$. [0325] (5) $\mathrm{Re}\,\gamma = (1-n)\rho_s + \rho_l n$ is used to estimate porosity. [0326] (6) $\mathrm{Im}\,\gamma = -\rho_l^2\,\omega K_R(\omega)$ is used to estimate the Darcy coefficient, since the other parameters are known.
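Step (5) can be inverted for the porosity in closed form; a minimal sketch with illustrative density values:

```python
# Sketch of step (5): invert Re(gamma) = (1 - n)*rho_s + rho_l*n for the
# porosity n.  Density values here are illustrative, not measured.

def porosity_from_gamma(re_gamma, rho_s, rho_l):
    """n = (rho_s - Re gamma) / (rho_s - rho_l)."""
    return (rho_s - re_gamma) / (rho_s - rho_l)

rho_s, rho_l = 1960.0, 950.0     # kg/m^3, illustrative solid/fluid densities
n = 0.65                         # assumed porosity for a round-trip check
re_gamma = (1 - n) * rho_s + rho_l * n
print(porosity_from_gamma(re_gamma, rho_s, rho_l))  # recovers 0.65
```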

    [0327] The imaginary part of c.sub.eff is known to contain attenuation. This can be removed based on the known power law for attenuation of bone.

    [0328] The remaining anomalous attenuation is used in the above calculations.

    [0329] This anomalous attenuation has been observed in our images, and it is related to the porosity parameters as well (see above).

    [0330] The generalized Darcy coefficient (based on homogenization) is treated as a phenomenological parameter that is calculated for various stages of osteoporosis and interpreted after the fact in clinical situations after numerous data have been obtained.

    [0331] In one particular embodiment, only the real part is used to estimate porosity.

    [0332] Over time clinical results would be tabulated so that the relative effective values could be used for diagnosis or as an aid.

    [0333] Another embodiment involves using $\lambda + 2\mu$ in place of $\lambda$ as an effective first Lamé coefficient in the above derivation.

    [0334] Another embodiment: The Darcy coefficient (based on homogenization) is treated as a phenomenological parameter that is calculated for various stages of osteoporosis and interpreted after the fact in clinical situations after numerous data have been obtained.

    [0335] Of course, instead of using the paraxial approximation, the full acoustic Biot approximation may be used. In this case the more complete formula for the effective wavenumber from the integral equation developed above may be used, or the simplified wavenumber developed above for the paraxial approximation may be used.

    [0336] Furthermore, as a body of images and effective values is developed, it is possible to train a U-Net, encoder-decoder, or other relevant neural network (NN) to determine porosity.

    [0337] Transfer learning could be used to utilize ResNet or a multitude of other universal models (LLMs) that are trained on the described data.

    [0338] In another embodiment the image used for segmentation is created with machine learning (ML), including, but not limited to, deep neural nets (DNNs), encoder-decoder architectures, GANs (including, e.g., CycleGAN), transfer learning, etc.

    [0339] The acoustic Biot approximation equations can also be solved in the standard way with finite-difference frequency-domain (FDFD) methods, or with finite element methods.

    [0340] This disclosure shows the ability to estimate porosity and other medically relevant parameters using the acoustic Biot model.

    [0341] In an embodiment, a method of imaging an object having variable density can include determining images of the object at multiple frequencies; estimating the attenuation of the object at multiple frequencies from the images of the object at multiple frequencies; estimating the speed of sound of the object at multiple frequencies from the images of the object at multiple frequencies; expressing the attenuation of the object at multiple frequencies and the speed of sound of the object at multiple frequencies as an elastic Biot model; performing a paraxial approximation of the elastic Biot model; and obtaining an image of the object based on the paraxial approximation of the elastic Biot model, wherein the image shows the variable density of the object.

    Imaging Algorithms

    [0342] There will be several types of algorithms discussed herein: [0343] (1) sine basis, rectangular coordinate, convolutional algorithms [0344] (2) cylindrical and rectangular coordinate recursion [0345] (3) Parabolic (Spectral and Finite difference) marching methods [0346] (4) Refraction corrected reflectivity and brightness functional gradient method adaptive focus. [0347] (5) Calibration algorithms employed in the cases (1), (2), and (3), which can be used to optimize resolution and quantitative accuracy (e.g., enhancement of inversion capability).

    [0348] These methods can be used for generating images from wave field energy applied to an unknown scattering object, which is then measured at some finite distance away from said scattering object. The incorporation of these algorithms supports the production of accurate reconstructions of distributions of parameters which characterize the scattering object. These parameters may be reflectivity in the case of methods such as (4) above, or they may be speed of sound, attenuation, compressibility, electromagnetic dielectric constants, conductivity, or Lamé parameters in the case of the more advanced and time-intensive algorithms such as described with respect to (1)-(3) above. Furthermore, the reconstructions may or may not be quantitatively accurate depending upon the computational complexity of a given algorithm and the amount of computational effort expended to obtain the reconstruction image.

    [0349] Certain algorithms described herein use the scattering potential $\gamma$ as the sole independent variable in the nonlinear minimization problem related to inverse scattering (see example 1 below for details).

    [0350] A functional $F \equiv \|R\|^2$ is defined herein, where the residual R is the difference between two values at the detectors: these two values being (1) the scattered field value at the detectors predicted by the forward problem on the basis of a postulated scattering potential distribution and (2) the measured value of the scattered field, i.e., $R \equiv f^{calc} - f^{meas}$.

    [0351] The essence of the described method (apart from the appropriate techniques to substantially reduce the computational cost of the algorithm) is the iterative construction of $\gamma^{(n)}$, for n = 1, 2, . . . , such that $\gamma^{(n)} \rightarrow \gamma^{true}$, until finally $F \rightarrow 0$. Given a guess $\gamma^{(n)}$, one calculates the derivative of F with respect to $\gamma$ in order to calculate the next guess, $\gamma^{(n+1)}$. The actual calculation of this derivative is detailed below. The functional to minimize is interpreted as depending on the scattering potential alone and not both the internal fields and the scattering potential. Symbolically, if one calls the functional that is minimized F, one can write $F \equiv F(\gamma, f)$ to indicate the dependence upon $\gamma$, the scattering potential (see glossary), and f, the total field inside of the object. Here, the internal fields f are considered as intermediate variables dependent upon $\gamma$, i.e., $f = f(\gamma)$, so that the functional $F \equiv F(\gamma, f(\gamma)) \equiv F(\gamma)$.

    [0352] This employment of one variable instead of two involves much more than merely neglecting (or holding constant) one of the variables. Rather, as indicated above, changing the variable $\gamma$ has a nontrivial effect upon the other variables $f_{\theta\omega}(\gamma)$, the field due to the incident wave from position $\theta$ and at frequency $\omega$, for each possible $\theta$ and $\omega$. The net result is a functional that is highly nonlinear in the remaining variable $\gamma$, and therefore much more difficult to solve numerically.

    [0353] As used herein, real-time is defined as performance which allows for practical, clinical implementation of breast scanners, of geophysical imaging apparatus, or of non-destructive imaging of composite material. The data is collected and processed on-site, not sent off-site to be processed.

    [0354] It can be seen that the construction and utilization of suitable Green's functions (e.g., the layered Green's functions) enables improved applicability of the inverse scattering algorithm. The layered Green's function takes the place of the free space Green's function in the presence of multiple layering in the environment surrounding the space to be imaged. This layered Green's function allows the quantitative imaging of objects located within an arbitrary distribution of layers of constant speed. The presence of these layers within the Green's function obviates the need to encompass them within the computation grid.

    [0355] The described convolutional structure is incorporated not only into the free-space Green's function, but also into the layered Green's function, the acoustic Biot Green's function, the elastic (including shear wave motion) Green's function, and into all combinations of these Green's functions. Furthermore, the direct application of this convolutional structure to the inverse scattering algorithm is used in conjunction with operations including the use of biconjugate gradients and BiConjugate Gradients Stabilized [BiSTAB].

    [0356] For the inverse scattering problem, iterative methods are used for the overall nonlinear system and for the linear systems that arise during its solution. Three different nonlinear iterations are used: the Gauss-Newton (GN) iteration, the Fletcher-Reeves (FR) nonlinear conjugate gradient iteration, and a modification of the Fletcher-Reeves algorithm, the Ribiere-Polak (RP) iteration. All three nonlinear iterations are described in [Fletcher, R. D., 1980, Practical Methods of Optimization, Vol. I, Unconstrained Optimization, John Wiley and Sons, New York], herein incorporated by reference.

    [0357] The GN iteration is the fastest in CPU time per step but is not guaranteed to be globally convergent unless CPU time intensive exact line searches are used. Empirically it was found that the GN method sometimes fails or requires more steps in the presence of high contrast in the scattering parameters. The more CPU intensive FR and RP iterations have been found to succeed for many of these high contrast/large size problems. The optimum strategy is often to use a combination: start with FR or RP when far from the solution and then switch to the faster GN iteration as the solution is neared. All three methods require utilization of the Jacobian of the scattering equations.

    [0358] As provided herein, the Jacobian can be implemented entirely with shift invariant operations (FFT computable), thus avoiding explicit storage and time consuming direct matrix calculations. This is opposed to other techniques which require explicit storage of the Jacobian (requiring a very large amount of memory) and which implement its effect on a given vector by a direct matrix product (requiring enormous CPU time).

    [0359] During the computation of a nonlinear step with the above methods, linear systems are encountered. The use of the minimum residual conjugate gradient (MRCG) method and the biconjugate gradient (BCG) method (and the stabilized BCG, or BiSTAB), for the efficient solution of these linear systems, are among the other ideas employed to create a workable and practical method of imaging in real time. Biconjugate gradients (or BiSTAB) is used to solve the forward problems. These problems are amenable to the BCG method because they have the same number of unknowns as they have equations to solve (they are square systems). The use of BCG requires approximately the square root of the number of iterations required by the traditional conjugate gradients (CG) used in our previous patent. This BCG implementation of the forward problem also plays a critical role in the shift invariant operator implementation of the Jacobian.

    [0360] The MRCG algorithm can be utilized to solve the overdetermined (non-square) Jacobian equation that is encountered when computing a GN linearization. This obviates the need for a computationally intensive matrix inversion and avoids the introduction of a regularizing parameter since the MRCG algorithm is self-regularizing. This also results in a substantial savings in time. The FR and RP iterations are themselves nonlinear versions of the MRCG algorithm.

    [0361] The examples given herein assume that the different frequencies, ω, and source positions, φ, are all computed in serial fashion. It is important to note, however, that the different frequencies and different views are independent computations (in both the forward problem and Jacobian calculations), and therefore can be computed in parallel. The implementation of this parallelization is explained in detail below.

    [0362] A Born-like approximation can be made within the Jacobian. This is not the standard Born approximation common in optics and acoustical imaging (diffraction imaging). This approximation has a much greater radius of convergence than the standard Born approximation. [Borup, 1992]. Its purpose is to give a much faster way of computing the Jacobian in the presence of high contrast objects than would be possible if the exact Jacobian (see glossary for definition of terms) were used in the Gauss-Newton algorithm (or FR and RP algorithms).

    [0363] Since operators are provided based on Green's theorem that allow the scattering at the border of the convolution range to be extrapolated to receivers at any position external to the border, it is possible for receivers with complicated shapes (non-point receivers) to be used in data collection. Furthermore, empirical measurements of the radiation patterns of the receiver elements (and their mutual coupling and cross talk) can be built into these projection operators.

    [0364] The described algorithms can handle geometries where the source or receiver locations do not completely circumscribe the object or where the solid angles defined by the source or receivers with respect to the body are small. This applies to not only the acoustic case, but also to the elastic and electromagnetic scenarios. The presence of layering is now incorporated into the Green's function so that it actually helps to increase the resolution in the incomplete view problem.

    [0365] In addition to exploiting the Cartesian convolution structure of the scattering equations, it is possible to utilize cylindrical coordinates since, when expressed in cylindrical coordinates, the scattering equations become separably symmetric in the radial coordinate while retaining convolutional form in the angular coordinate. A general 1-D operator requires order N^2 arithmetic operations. One dimensional convolutions can be implemented by FFT in order N log_2(N) arithmetic operations, while separably symmetric kernels can be implemented recursively with order N arithmetic. Thus, cylindrical coordinates offer considerable improvement in speed. An even greater savings can be realized by going a step further and radially recurring multiple view scattering operators. This allows the calculation of the solution of the forward problems for all views (source positions) in order N^3 log_2(N) arithmetic, as compared to order N_bcg N^3 log_2(N) arithmetic for the 2-D FFT, BCG approach (N_bcg is the number of BCG iterations required for adequate convergence). This approach to the forward scattering problems also has the advantage that it is non-iterative: it requires a fixed amount of computation. This is particularly fortunate since there are cases where the BCG iteration fails to converge.
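    The order N log_2(N) versus order N^2 tradeoff claimed above for 1-D convolution can be illustrated directly. A minimal sketch in Python/NumPy (illustrative names, not from the patent); both routines compute the same circular convolution:

```python
import numpy as np

def conv_direct(g, f):
    """Direct 1-D circular convolution: order N^2 arithmetic."""
    N = len(f)
    out = np.zeros(N, dtype=complex)
    for n in range(N):
        for m in range(N):
            out[n] += g[(n - m) % N] * f[m]
    return out

def conv_fft(g, f):
    """The same circular convolution via FFT: order N log2(N) arithmetic."""
    return np.fft.ifft(np.fft.fft(g) * np.fft.fft(f))

rng = np.random.default_rng(0)
g = rng.standard_normal(64) + 1j * rng.standard_normal(64)
f = rng.standard_normal(64) + 1j * rng.standard_normal(64)
assert np.allclose(conv_direct(g, f), conv_fft(g, f))
```

    The same diagonalization-by-FFT idea is what makes the 2-D scattering convolutions of this disclosure cheap.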

    [0366] In addition to cylindrical coordinate recursion, a scattering matrix recursion in rectangular coordinates is possible. This new approach can be shown to require only order N^3 operations for computing all views of a 2-D scattering problem.

    [0367] The following explanation of notation is given to facilitate obtaining an understanding of certain algorithms described herein. The scattering potential changes from point to point within an object or body, as well as changing with the frequency of the incident field. Thus γ_j ≡ γ_ω(x_j), x_j ∈ R^3 (or R^2), is used to signify the scattering potential at pixel j or point j for the incident field at frequency ω. γ_ω can be considered to be the vector composed of the values at all pixels: γ_ω ≡ (γ_1, γ_2, …, γ_N)^T, where T indicates transpose. For purposes of exposition and simplicity, the following considers the case where γ is independent of frequency, although this is not necessary.

    Notation: Vector Field Notation

    [0368] The following notation denotes a vector field describing elastic waves, or electromagnetic waves:

    [00124] f(r) ≡ (f_x(x, y, z), f_y(x, y, z), f_z(x, y, z))^T ≡ (f_x, f_y, f_z)^T
    f^i(r) ≡ (f_x^i(x, y, z), f_y^i(x, y, z), f_z^i(x, y, z))^T ≡ (f_x^i, f_y^i, f_z^i)^T

    [0369] T denotes transpose. f(r) represents the total field; f^i(r) represents the incident field. The incident field is the field that would be present if there were no object present to image. In the case of layering in the ambient medium, it is the field that results from the unblemished, piecewise constant layering.

    [0370] The scattered field is the difference between the total field and the incident field; it represents that part of the total field that is due to the presence of the inhomogeneity, e.g., the mine for the undersea ordnance locater, the hazardous waste canister for the hazardous waste locator, the school of fish for the echo-fish locator/counter, and the malignant tumor for the breast scanner:

    [00125] f^s(r) ≡ f(r) - f^i(r)
    f^s(r) ≡ (f_x^s(x, y, z), f_y^s(x, y, z), f_z^s(x, y, z))^T ≡ (f_x^s, f_y^s, f_z^s)^T

    [0371] T denotes transpose.

    [0372] f_ωφ^inc(r) denotes the scalar incident field coming from direction (source position) φ at frequency ω. The r could represent either a three-dimensional or a two-dimensional vector of position.

    Scalar Field Notation

    [0373] A scalar field is indicated by a nonbold f(r), r ∈ R^3.

    Example 1

    Acoustic Scattering: Scalar Model Equations

    [0374] This first example is designed to give the basic structure of the algorithm and to point out why this particular implementation is so fast compared to the present state of the art. The background medium is assumed to be homogeneous (no layering). This example will highlight the exploitation of convolutional form via the FFT, the use of the Frechet derivative in the Gauss-Newton and FR-RP algorithms, the use of the biconjugate gradient algorithm (BCG) for the forward problems, and the independence of the different view and frequency forward problems. It will also set up some examples which will elucidate the patent terminology. The field can be represented by a scalar quantity f. The object is assumed to have finite extent. The speed of sound within the object is c = c(x). The speed of sound in the background medium is the constant c_0. The mass density is assumed to be constant. The attenuation is modeled as the imaginary part of the wavespeed. These are simplifying assumptions, which we make so as to focus on the important aspects of the imaging algorithm. By no means is the imaging algorithm restricted to the scalar case, either in theory or, more importantly, in practical implementation.

    [0375] We consider the two dimensional case first of all (equivalently, the object and incident fields are assumed to be constant in one direction). Given an acoustic scatterer with constant mass density and constant cross-section in z, illuminated by a time harmonic incident field, f_ωφ^inc, with e^(iωt) time dependence and source position (incident angle) φ, the total field satisfies the following integral equation.

    [00126] f_ωφ^inc(ρ) = f_ωφ(ρ) - ∫ γ(ρ′) f_ωφ(ρ′) g_ω(|ρ - ρ′|) dx′ dy′   (1)
    where γ = c_0^2/c^2 - 1

    [0376] Here ρ = (x, y), and f_ωφ is the field internal to the object at frequency ω and resulting from the incident field from source position φ: f_ωφ^inc. The 2-D Helmholtz Green's function is given by:

    [00127] g_ω(|ρ - ρ′|) = (k^2/4i) H_0^(2)(k|ρ - ρ′|),   k = ω/c_0

    [0377] where H_0^(2) is the Hankel function of the second kind and zeroth order.
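    The kernel of [00127] can be evaluated with a standard special-function library. A minimal sketch, assuming SciPy's `hankel2` for H_0^(2) and folding the k^2/4i factor into the kernel as the equation does (names are illustrative):

```python
import numpy as np
from scipy.special import hankel2

def green_2d(k, r):
    """2-D Helmholtz Green's kernel of [00127], with the k^2 factor folded in:
    g(r) = (k^2 / 4i) * H0^(2)(k r), valid for r > 0 (log-singular at r = 0)."""
    return (k**2 / 4j) * hankel2(0, k * r)

# Evaluate at a few ranges for k = omega / c0 = 2*pi (one wavelength per unit).
radii = np.array([0.25, 0.5, 1.0, 2.0])
values = green_2d(2 * np.pi, radii)
```

    In a discretized code, the singular point r = 0 is handled by the basis-function integration of equation (8) rather than by sampling the kernel directly.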

    [0378] Now it is required to compare the field measured at the detectors with the field as predicted by a given guess, γ^(n), for γ. To accomplish this, first define the scattered field at the detectors as the total field minus the incident field.

    [00128] f_ωφ^sc(ρ) ≡ f_ωφ(ρ) - f_ωφ^inc(ρ)

    [0379] This represents the field due entirely to the scattering potential. Using this relation to rewrite (1) gives

    [00129] f_ωφ^sc(ρ) = ∫ γ(ρ′) f_ωφ(ρ′) g_ω(|ρ - ρ′|) dx′ dy′   (2)

    [0380] These two equations, (1) and (2), are the basis of the imaging algorithm. They must be discretized and then implemented on the computer in order to solve practical imaging problems. The purpose of the apparatus and method herein described is to solve the discretized form of these equations without making any linearizing assumptions (such as are used in conventional diffraction tomography), and in real time, with presently available computers.

    Discretization of the Acoustic Free-Space Lippmann-Schwinger Integral Equation and Green's Function: 2-D Free Space Case

    [0381] Let us for a moment drop the frequency and source position subscripts. The product of the scattering potential and the scalar field is given by

    [00130] (γf)(x, y) = γ(x, y) f(x, y)

    [0382] Discretization of the integral equation is achieved by first decomposing this function into a linear combination of certain (displaced) basis functions

    [00131] γ(x, y) f(x, y) ≈ Σ_{n=1}^{N_x} Σ_{m=1}^{N_y} a_nm S(x - nδ, y - mδ)   (3)

    [0383] where δ is the grid spacing and it has been assumed that the scatterer lies within the support [0, N_xδ] × [0, N_yδ], a rectangular subregion.

    [0384] The basis functions S can be arbitrary except that we should have cardinality at the grid nodes:

    [00132] S(0, 0) = 1

    [0385] whereas for all other choices of n, m,

    [00133] S(nδ, mδ) = 0,   (n, m) ≠ (0, 0)

    [0386] An example of such a function is the 2-D hat function. The algorithm uses the sinc function as its basic building block: the Whittaker sinc function, which is defined as:

    [00134] sinc(x) = sin(πx)/(πx)

    [0387] The two dimensional basis functions are defined by the tensor product:

    [00135] S(x - nδ, y - mδ) = sinc((x - nδ)/δ) · sinc((y - mδ)/δ)   (4)
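    The cardinality conditions [00132]-[00133] are easy to verify for the tensor-product sinc basis of equation (4). A small sketch (note that NumPy's `np.sinc(t)` is the normalized sin(πt)/(πt); the helper name is illustrative):

```python
import numpy as np

def S(x, y, n, m, delta=1.0):
    """Tensor-product sinc basis of eq. (4):
    S = sinc((x - n*delta)/delta) * sinc((y - m*delta)/delta)."""
    return np.sinc((x - n * delta) / delta) * np.sinc((y - m * delta) / delta)

# Cardinality at the grid nodes: 1 at its own node, 0 at every other node.
assert np.isclose(S(2.0, 3.0, 2, 3), 1.0)
assert np.isclose(S(5.0, 3.0, 2, 3), 0.0)
assert np.isclose(S(2.0, 7.0, 2, 3), 0.0)
```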

    [0388] If the equality in equation (3) is presumed to hold at the grid points x = nδ, y = mδ, the coefficients a_nm can be determined to be precisely the sampled values of the product γf:

    [00136] γ(x, y) f(x, y) = Σ_{n=1}^{N_x} Σ_{m=1}^{N_y} γ(nδ, mδ) f(nδ, mδ) S(x - nδ, y - mδ)   (5)

    [0389] Now this expression for the product γf can be substituted into (1):

    [00137] f^inc(x, y) = f(x, y) - (k^2/4i) Σ_n Σ_m γ_nm f_nm ∫ H_0^(2)(k|ρ - ρ′|) S(x′ - nδ, y′ - mδ) d^2ρ′   (6)

    [0390] In particular this equation holds at

    [00138] (x, y) = (nδ, mδ)

    [0391] for which we get:

    [00139] f_nm^inc = f_nm - Σ_{n′=1}^{N_x} Σ_{m′=1}^{N_y} γ_{n′m′} f_{n′m′} g_{n-n′, m-m′},   n = 1, …, N_x,   m = 1, …, N_y   (7)

    [0392] where the 2-D discrete Green's function is defined as:

    [00140] g(n - n′, m - m′) ≡ (k^2/4i) ∫ H_0^(2)(k|(nδ - x′, mδ - y′)|) S(x′ - n′δ, y′ - m′δ) d^2ρ′   (8)

    [0393] Although it is not obvious that g depends on n, n′, m, and m′ only through the differences as stated, it is indeed the case, since substituting the transformation

    [00141] x′ → x′ + n′δ,   y′ → y′ + m′δ

    [0394] into this last equation shows explicitly that the discretized Green's function g does in fact depend only on the differences n - n′ and m - m′. Thus the discrete equation (7) has inherited the convolutional form of the continuous equation (1). It is this fact which allows the use of the 2-D FFT to compute the summation in (7) in only order N_x N_y log_2(N_x N_y) arithmetic operations, and it is therefore critical to the real-time speed of the algorithm.
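    As a sketch of how the convolutional form of (7) is exploited, the operator f ↦ f - G(γf) can be applied with two 2-D FFTs in order N_x N_y log_2(N_x N_y) arithmetic. For simplicity this toy uses circular (wrap-around) convolution on the grid, where a practical code would zero-pad the Green's kernel; all names are illustrative:

```python
import numpy as np

def apply_forward_operator(g_hat, gamma, f):
    """Apply f - G(gamma * f), where G is 2-D circular convolution with the
    discrete Green's kernel. g_hat is the precomputed 2-D FFT of that kernel."""
    return f - np.fft.ifft2(g_hat * np.fft.fft2(gamma * f))
```

    Only the N_x × N_y kernel is ever stored, never the (N_x N_y)^2 dense matrix.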

    Extension of the Method to Remote Receivers with Arbitrary Characteristics

    [0395] There are cases in which the receivers do not lie within the range of the convolution and/or are not simple point receivers. It is well known that, given source distributions within an enclosed boundary, the scattered field everywhere external to the boundary can be computed from the values of the field on the boundary by an application of Green's theorem with a suitably chosen Green's function, i.e.:

    [00142] f^s(ρ) = ∮_B f^s(ρ′) P(ρ, ρ′) dl′,   ρ outside boundary B   (12)

    [0396] where P is the Green's function, the integral is taken around the boundary, and dl′ is the differential arclength (in the 3-D case, the integral is over an enclosing surface). Equation (12) allows for the construction of a matrix operator which maps the boundary values on the rectangular support of the convolution (2N_x + 2N_y - 4 values in the discrete case) to values of the scattered field external to the rectangle. Furthermore, this propagator matrix can be generalized to incorporate more complex receiver geometries. For example, suppose that the receiver can be modeled as an integration of the scattered field over some support function, i.e.:

    [00143] v_n = signal from receiver n = ∫ f^s(ρ) S_n(ρ) ds   (13)

    [0397] where S_n is the support of receiver n. Then from (12):

    [00144] v_n = ∮_B f^s(ρ′) { ∫ S_n(ρ) P(ρ, ρ′) ds } dl′,   ρ outside boundary B
         = ∮_B f^s(ρ′) P(n, ρ′) dl′,   where P(n, ρ′) = ∫ S_n(ρ) P(ρ, ρ′) ds   (14)

    [0398] Discretizing the integral gives the matrix equation:

    [00145] v_n = Σ_{l=1}^{N_b} P_nl f_l^s,   n = 1, …, N_d   (15)

    [0399] where N_b = 2N_x + 2N_y - 4 is the number of boundary (border) pixels and N_d is the number of receivers. Equation (15) defines the matrix that we shall henceforth refer to as P, the propagator matrix for a given distribution of external receivers. The equation:

    [00146] v_ωφ^meas = P_ω (G_ω[γ] f_ωφ)   (16)

    [0400] Note that P_ω is a function of frequency, but is not a function of source position. The measurement vector v_ωφ^meas is of dimension N_d.

    [0401] The added flexibility of this propagator matrix formulation is particularly advantageous when interfacing our algorithms with real laboratory or field data. Oftentimes the precise radiation patterns of the transducers used will not be known a priori. In this event, the transducers must be characterized by measurements. The results of these measurements can be easily incorporated into the construction of the propagator matrix P, allowing the empirically determined transducer model to be accurately incorporated into the inversion.
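    A minimal sketch of the propagator-matrix formulation of equations (14)-(15): the receiver signals are a matrix product with the border field values, and an empirically measured receiver pattern enters simply as a weighting of point-receiver rows. Both helper names are hypothetical:

```python
import numpy as np

def receiver_signals(P, f_border):
    """Eq. (15): v_n = sum_l P[n, l] * f_border[l]; P is N_d x N_b."""
    return P @ f_border

def weighted_propagator(P_point, weights):
    """Eq. (14): each physical receiver integrates the field over its support,
    so its row is a weighted combination of point-receiver rows."""
    return weights @ P_point
```

    Because the map is linear in the border field, a measured transducer pattern folds into P once, up front, rather than into every forward solve.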

    [0402] Equation (16) then provides, in compact notation, the equations which we wish to solve for the unknown scattering potential, γ. First, we consider the forward problem, i.e. the determination of the field f_ωφ for a known object function γ and known incident field, f_ωφ^inc. Then we establish how this forward problem is incorporated into the solution of the inverse problem, i.e., the determination of γ when the incident fields and the received signals from a set of receivers are known. Note that the field internal to the object is also, along with the object function γ, an unknown in the inverse problem.

    [0403] Since (7) is linear in f_ωφ, it can be solved by direct inversion of the linear system:

    [00148] f_ωφ = (I - G_ω[γ])^(-1) f_ωφ^inc   (17)

    [0404] In discrete form, this is a matrix equation (after linearly ordering the 2-D supports of the domain and range into vectors). Since the dimension of the range and domain of the linear system is N_x N_y, the system matrix contains a total of (N_x N_y)^2 elements. The arithmetic required to solve the system by standard direct means is thus order (N_x N_y)^3. In other words, a doubling of the edge dimension of the problem space will increase the CPU time required by a factor of 2^6 = 64 times! The arithmetic work required will quickly become intolerable as the size of the problem increases. It is precisely this large growth rate which has convinced many researchers not to pursue inverse scattering approaches based on integral equations. One could, of course, go to an iterative method. A single iteration of an iterative linear system solver, such as biconjugate gradients, requires order (N_x N_y)^2 operations (essentially a matrix-vector product). It can be shown that the growth rate in the number of iterations required for sufficient convergence of the BCG algorithm is order N for this equation. Thus, the overall computational complexity is order N^5: only one order of N has been saved over direct inversion. Since inverse problems in 2-D generally require order N views, the iteration must be done N times and we are back to order N^6 computation for BCG.

    [0405] The key to overcoming this objection is the convolutional form of (7). If this is exploited by use of the FFT algorithm, the computation needed to perform a matrix-vector product is only order N_x N_y log_2(N_x N_y). This allows the BCG algorithm to be applied to a single view with order N^3 log_2(N) operations and to all views with order N^4 log_2(N) operations. Due to the slow growth rate of log_2(N), this is essentially a reduction of two orders of N over nonconvolutional methods. Also, this convolutional approach avoids the necessity of storing the order N^4 computer words needed to contain the linear system in full matrix form. Only order N^2 words are needed, due to the shift invariance of the discrete Green's kernel in (7). It is these two savings, more than any other advance that we have made, that allow us to perform inverse scattering in reasonable time for meaningfully sized problems. The use of BCG over the original CG algorithm also represents a major advance since it converges in essentially the square root of the number of iterations needed by CG. This combination of FFT and CG algorithms was originally developed in [Borup, 1989, Ph.D. dissertation, Univ. of Utah, Salt Lake City], herein incorporated by reference.
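    The FFT matvec and the iterative solver fit together as follows. This sketch stands SciPy's `LinearOperator` and `bicgstab` in for the BCG/BiSTAB solver described here (an assumption about tooling, not the patent's own code); the matrix (I - G[γ]) is never formed or stored:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

def solve_forward(g_hat, gamma, f_inc, maxiter=500):
    """Solve (I - G[gamma]) f = f_inc iteratively; each matvec costs one
    pair of FFTs, i.e. order NxNy log2(NxNy) per iteration."""
    shape = gamma.shape
    n = gamma.size

    def matvec(v):
        f = v.reshape(shape)
        return (f - np.fft.ifft2(g_hat * np.fft.fft2(gamma * f))).ravel()

    A = LinearOperator((n, n), matvec=matvec, dtype=complex)
    f, info = bicgstab(A, f_inc.ravel().astype(complex), maxiter=maxiter)
    return f.reshape(shape), info
```

    A weak-scattering toy (small kernel, modest γ) converges in a handful of iterations; high-contrast objects are where the FR/RP safeguards discussed above become relevant.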

    The Imaging or Inverse Problem

    [0406] In order to solve the imaging problem, we need a set of equations relating the unknown scattering potential, γ, and total fields, f_ωφ, with measurements of the scattered field on a set of external detectors. These detector equations are given in (16). Equation (16) and the internal field equations are the equations which are solved simultaneously to determine γ and the f_ωφ. There are N_x N_y unknowns corresponding to the γ values at each of the grid points, and N_ω N_φ N_x N_y unknowns corresponding to the unknown fields, where N_ω is the number of frequencies and N_φ is the number of source positions (angles). We have improved upon the state of the art by considering the internal field equations to define f_ωφ for a given γ. Thus, the total number of unknowns is reduced to N_x N_y.

    [0407] The total number of measurement equations is N_ω N_φ N_d, where N_d is the number of detectors. In the general case, where the sources and detectors do not completely surround the object, the problem of determining γ is ill-posed in a precise mathematical sense; therefore, in order to guarantee a solution, the number of equations is chosen to be larger than the number of pixel values, N_ω N_φ N_d > N_x N_y, for overdetermination. The system is then solved in the least squares sense. More specifically, the solution of (9), (16) for γ and the set of fields, f_ωφ, in the least squares sense is obtained by minimizing the real valued, nonlinear functional:

    [00149] min_γ Σ_ω Σ_φ ||r_ωφ||^2 ≡ min_γ Σ_ω Σ_φ ||v_ωφ^meas - P_ω G_ω [f_ωφ(γ)] γ||^2   (18)

    [0408] subject to the satisfaction of the total field equations, (9), as constraints. The vector r_ωφ of dimension N_d is referred to as the residual for frequency ω and angle φ.

    [0409] The methods used to solve the nonlinear optimization problem in our inverse scattering algorithms are, thus far, all gradient methods, in that second derivative information is not used (as it is in the full Newton and quasi-Newton methods). The principal computation involves the calculation of the gradient vector of (18). A straightforward calculation gives the gradient formula g(x) = J^H(x) r(x), where the superscript H denotes the Hermitian transpose (complex conjugate transpose of the matrix) and J is the Jacobian of the nonlinear system:

    [00150] J(x) = [ ∂a_1/∂x_1  …  ∂a_1/∂x_N
                     ⋮                ⋮
                    ∂a_M/∂x_1  …  ∂a_M/∂x_N ]   (19)

    [0410] The simplest gradient algorithm is the Gauss-Newton (GN) iteration. The GN iteration for finding a solution to a(x) = y, i.e. r = 0, is given by:

    [00151] r^(n) = y - a(x^(n))   (20.1)
    Δx^(n) = (J_n^H J_n)^(-1) J_n^H r^(n)   (20.2)
    x^(n+1) = x^(n) + Δx^(n)   (20.3)

    [0411] where a is the vector of nonlinear equations. This iteration is well defined assuming that the columns of J_n remain linearly independent. Since (20.2) is equivalent to the quadratic minimization problem:

    [00152] min_{Δx^(n)} ||J_n Δx^(n) - r^(n)||^2   (21)

    [0412] it can be solved by the minimum residual conjugate gradient method (MRCG). This approach also ensures that small singular values in J_n will not amplify noise, provided care is taken not to overconverge the iteration.

    [0413] Here, the fields f_ωφ are considered to be dependent variables, with the dependence upon γ given implicitly by

    [00153] f_ωφ^(n) = (I - G_ω[γ^(n)])^(-1) f_ωφ^inc.

    [0414] In order to find the Jacobian expression we must then differentiate the residual vector defined in (18) with respect to γ. The details of this calculation are given in [Borup, 1992]. The result is:

    [00154] J_ωφ = ∂r_ωφ/∂γ = -P_ω G_ω (I - [γ] G_ω)^(-1) [f_ωφ]   (22)

    [0415] The final Gauss-Newton algorithm for minimizing (18) subject to equality constraints is: [0416] GN 1. Select an initial guess, γ^(0). [0417] GN 2. Set n=0. [0418] GN 3. Solve the forward problems using biconjugate gradients:

    [00155] BCG: f_ωφ^(n) = (I - G_ω[γ^(n)])^(-1) f_ωφ^inc,   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0419] GN 4. Compute the detector residuals:

    [00156] r_ωφ^(n) = v_ωφ^meas - P_ω G_ω [f_ωφ^(n)] γ^(n),   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0420] GN 5. If

    [00157] Σ_ω Σ_φ ||r_ωφ^(n)||^2 < ε, stop. [0421] GN 6. Use MRCG to find the least squares solution to the quadratic minimization problem

    [00158] min_{Δγ^(n)} Σ_ω Σ_φ ||r_ωφ^(n) + P_ω G_ω (I - [γ^(n)] G_ω)^(-1) [f_ωφ^(n)] Δγ^(n)||^2 [0422] for Δγ^(n). [0423] GN 7. Update γ: γ^(n+1) = γ^(n) + Δγ^(n). [0424] GN 8. Set n = n+1, go to GN 3.

    [0425] The crux of the GN iteration is GN 6, where the overdetermined quadratic minimization problem is solved for the scattering potential correction. This correction is approximated by applying a set of M iterations of the MRCG algorithm. The details of GN 6 are: [0426] GN 6.1 Initialize the zeroth iterate of MRCG: Δγ_0 = 0. [0427] GN 6.2 Initialize the MRCG residuals equal to the GN outer loop residuals:

    [00159] r_ωφ,0 = r_ωφ^(n) [0428] Note that iterates pertaining to the MRCG iteration are indexed without the ( ) in order to distinguish them from outer GN loop iterates. [0429] GN 6.3 Compute the gradient of the MRCG quadratic functional:

    [00160] g_0 = Σ_ω Σ_φ [f*_ωφ^(n)] (I - G*_ω[γ*^(n)])^(-1) G*_ω P_ω^H r_ωφ,0 [0430] GN 6.4 Set the initial search direction equal to minus the gradient:

    [00161] p_0 = -g_0 [0431] GN 6.5 Set m=0. [0432] GN 6.6 Compute

    [00162] t_ωφ,m = P_ω G_ω (I - [γ^(n)] G_ω)^(-1) [f_ωφ^(n)] p_m,   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0433] GN 6.7 Compute the quadratic step length:

    [00163] a_m = ||g_m||^2 / Σ_ω Σ_φ ||t_ωφ,m||^2 [0434] GN 6.8 Update the solution of the quadratic minimization:

    [00164] Δγ_{m+1} = Δγ_m + a_m p_m [0435] GN 6.9 Update the MRCG residuals:

    [00165] r_ωφ,m+1 = r_ωφ,m + a_m t_ωφ,m [0436] GN 6.10 Compute the gradient of the MRCG quadratic functional:

    [00166] g_{m+1} = Σ_ω Σ_φ [f*_ωφ^(n)] (I - G*_ω[γ*^(n)])^(-1) G*_ω P_ω^H r_ωφ,m+1 [0437] GN 6.11 Compute:

    [00167] β_m = ||g_{m+1}||^2 / ||g_m||^2 [0438] GN 6.12 Update the MRCG search direction:

    [00168] p_{m+1} = -g_{m+1} + β_m p_m [0439] GN 6.13 If m=M, go to GN 6.15. [0440] GN 6.14 m=m+1, go to GN 6.6. [0441] GN 6.15 Equate the solution of the quadratic minimization problem with the last MRCG iterate:

    [00169] Δγ^(n) = Δγ_M [0442] GN 6.16 Return to the GN algorithm at GN 7.
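    Steps GN 6.1 through 6.15 amount to conjugate gradients on the normal equations (CGNR). A generic sketch with the stacked linearized operator and its adjoint passed as callables; for brevity the per-(ω, φ) residual blocks are stacked into one vector, and all names are illustrative:

```python
import numpy as np

def mrcg_step(jmat, jadj, r0, M):
    """M iterations of MRCG (GN 6.1-6.15) for min ||r0 + J p||^2.
    jmat applies the linearized operator J; jadj applies its Hermitian adjoint."""
    dgamma = np.zeros_like(jadj(r0))                 # GN 6.1
    r = r0.copy()                                    # GN 6.2
    g = jadj(r)                                      # GN 6.3
    p = -g                                           # GN 6.4
    for _ in range(M):                               # GN 6.5 / 6.13-6.14
        t = jmat(p)                                  # GN 6.6
        a = np.vdot(g, g).real / np.vdot(t, t).real  # GN 6.7
        dgamma = dgamma + a * p                      # GN 6.8
        r = r + a * t                                # GN 6.9
        g_new = jadj(r)                              # GN 6.10
        beta = np.vdot(g_new, g_new).real / np.vdot(g, g).real  # GN 6.11
        p = -g_new + beta * p                        # GN 6.12
        g = g_new
    return dgamma                                    # GN 6.15
```

    On an exactly linear problem, M equal to the number of unknowns recovers the full least squares solution; in the GN setting M is kept small, which also provides the self-regularization noted earlier.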

    [0443] A problem with the algorithm above is the presence of the inverse of the total field operator and its adjoint, (I - [γ^(n)] G_ω)^(-1) and (I - G*_ω[γ*^(n)])^(-1), in the computation of the Jacobian and its adjoint in GN 6.3, 6.6, and 6.10. Since we do not invert (or even store) these large linear systems, the occurrence of these inverse operators must be dealt with. This problem can, however, be overcome by computing the action of each operator on a given vector, as needed during the MRCG iteration, by a few iterations of BCG. When performed in this way, the shift invariance of the Green's function is exploited in a maximal way. No large matrices need to be inverted or stored. This is because the Jacobian implementation now consists exclusively of shift invariant kernels (diagonal kernels, such as pointwise multiplication by γ or f, and the shift invariant kernel composed of convolution with the Green's function). Such shift invariant kernels can be implemented efficiently with the FFT, as previously described.

    [0444] An even greater increase in numerical efficiency can be obtained in cases for which the approximation:

    [00170] (I - [γ^(n)] G_ω)^(-1) ≈ (I + [γ^(n)] G_ω)   (23)

    [0445] (which is similar to the Born approximation of the forward problem) can be used in the Jacobian. This has been found to be the case for many acoustic scattering problems for biological tissue, and for EM problems for which the contrast in the dielectric constant is small.
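    With (23), the Jacobian-times-vector product of GN 6.6 needs no inner BCG solve. A sketch under the same circular-convolution and dense-P assumptions as the earlier toys (names illustrative):

```python
import numpy as np

def jacobian_apply_born(g_hat, gamma, f, p, P):
    """Approximate t = P G (I + [gamma] G) [f] p, using (23) in place of
    (I - [gamma] G)^(-1). conv applies G as an FFT-based circular convolution."""
    conv = lambda u: np.fft.ifft2(g_hat * np.fft.fft2(u))
    u = f * p                   # [f] p  (pointwise multiplication)
    u = u + gamma * conv(u)     # (I + [gamma] G) applied once
    return P @ conv(u).ravel()  # then G, then the propagator P
```

    The cost drops from several BCG iterations to two fixed convolutions per Jacobian application, at the price of the approximation's limited radius of validity.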

    [0446] The other two gradient algorithms that are used to solve the inverse scattering equations are the Fletcher-Reeves and Ribiere-Polak algorithms. For the inverse scattering equations, they are given by the iteration: [0447] RP 1 Select an initial guess, γ^(0). [0448] RP 2 Solve the forward problems using biconjugate gradients (BCG):

    [00171] f_ωφ^inc = (I - G_ω[γ^(0)]) f_ωφ^(0),   ω = 1, …, N_ω,   φ = 1, …, N_φ, [0449] for the internal fields

    [00172] f_ωφ^(0) [0450] RP 3 Compute the detector residuals:

    [00173] r_ωφ^(0) = P_ω G_ω [f_ωφ^(0)] γ^(0) - v_ωφ^meas,   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0451] RP 4 Compute the gradient:

    [00174] g^(0) = Σ_ω Σ_φ [f*_ωφ^(0)] (I - G*_ω[γ*^(0)])^(-1) G*_ω P_ω^H r_ωφ^(0) [0452] RP 5 Compute the search direction: p^(0) = -g^(0). [0453] RP 6 n=0. [0454] RP 7 Compute the Jacobian operating on the search direction:

    [00175] t_ωφ^(n) = P_ω G_ω (I - [γ^(n)] G_ω)^(-1) [f_ωφ^(n)] p^(n),   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0455] RP 8 Compute the step length (quadratic approximation):

    [00176] a_n = -Re Σ_ω Σ_φ (r_ωφ^(n), t_ωφ^(n)) / Σ_ω Σ_φ ||t_ωφ^(n)||^2 [0456] RP 9 Update the solution: γ^(n+1) = γ^(n) + a_n p^(n). [0457] RP 10 Solve the forward problems using biconjugate gradients (BCG):

    [00177] f_ωφ^inc = (I - G_ω[γ^(n+1)]) f_ωφ^(n+1),   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0458] RP 11 Compute the detector residuals:

    [00178] r_ωφ^(n+1) = P_ω G_ω [f_ωφ^(n+1)] γ^(n+1) - v_ωφ^meas,   ω = 1, …, N_ω,   φ = 1, …, N_φ. [0459] RP 12 If

    [00179] Σ_ω Σ_φ ||r_ωφ^(n+1)||^2 < ε,

    stop. [0460] RP 13 Compute the gradient:

    [00180] g^(n+1) = Σ_ω Σ_φ [f*_ωφ^(n+1)] (I - G*_ω[γ*^(n+1)])^(-1) G*_ω P_ω^H r_ωφ^(n+1) [0461] RP 14 Compute:

    [00181] β_n = { ||g^(n+1)||^2 / ||g^(n)||^2,   Fletcher-Reeves;   (g^(n+1), g^(n+1) - g^(n)) / ||g^(n)||^2,   Ribiere-Polak } [0462] RP 15 Update the search direction: p^(n+1) = -g^(n+1) + β_n p^(n). [0463] RP 16 n=n+1, go to RP 7.

    [0464] The distinction between FR and RP lies in the calculation of β_n in RP 14. It is generally believed that the RP calculation is more rapidly convergent. Hence, the RP iteration may be used in most cases rather than FR.

    [0465] Comparison of the linear MRCG iteration and the nonlinear RP iteration reveals that they are very similar. In fact, RP is precisely a nonlinear version of MRCG. Note that the only difference between them lies in the fact that the RP residuals, computed in steps RP 10 and RP 11, involve recomputation of the forward problems, while the MRCG residuals are simply recurred in GN 6.9 (additional, trivial, differences exist in the computation of the a's and β's). In other words, the RP iteration updates the actual nonlinear system at each iteration, while the MRCG simply iterates on the quadratic functional (GN linearization). The overall GN-MRCG algorithm contains two loops, the outer linearization loop and the inner MRCG loop, while the RP algorithm contains only one loop. Since the RP algorithm updates the forward solutions at each step, it tends to converge faster than GN with respect to total iteration count (number of GN outer iterations times the number, M, of inner loop MRCG iterates). The GN method is, however, generally faster in CPU time, since an MRCG step is cheaper than an RP step due to the forward recomputation required in the RP step. The overall codes for the GN-MRCG algorithm and the RP algorithm are so similar that a GN-MRCG code can be converted to an RP code with about 10 lines of modification.
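    The FR and RP recurrences (steps RP 5, RP 14, RP 15) can be exercised on a quadratic test functional, where the RP 8 step length is exact and both β choices reduce to linear CG, converging in at most n steps. A minimal sketch, not the scattering functional itself:

```python
import numpy as np

def nlcg_quadratic(A, b, x0, iters, rule="RP"):
    """Nonlinear CG with the FR / RP beta of step RP 14, minimizing
    F(x) = 0.5 x'Ax - b'x (A symmetric positive definite)."""
    x = x0.copy()
    g = A @ x - b                      # gradient (RP 4 / RP 13)
    p = -g                             # RP 5
    for _ in range(iters):
        if g @ g < 1e-28:              # converged (cf. stopping test RP 12)
            break
        Ap = A @ p
        a = -(g @ p) / (p @ Ap)        # exact quadratic step (RP 8)
        x = x + a * p                  # RP 9
        g_new = A @ x - b
        if rule == "FR":
            beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves
        else:
            beta = (g_new @ (g_new - g)) / (g @ g)  # Ribiere-Polak
        p = -g_new + beta * p          # RP 15
        g = g_new
    return x
```

    In the inverse scattering setting, the gradient evaluations above are replaced by the forward-solve-plus-adjoint computations of RP 10 through RP 13, which is exactly the extra cost per step discussed in the preceding paragraph.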

    [0466] Before leaving this example, it is important to note that the examples given assume that the different frequencies, ω, and views, φ, are all computed in serial fashion. The different frequencies and different views are, however, independent in both the forward problem and Jacobian calculations, and can therefore be computed independently and in parallel. This should be clear by examining, for example, the computations in GN.3 and GN.6.3. The forward problem calculations, GN.3, are completely independent, and in the gradient calculation, GN.6.3, the computations are independent in frequency and view up to the point at which these contributions are summed to give the gradient. The GN and RP algorithms could thus be executed on a multinode machine in parallel, with node intercommunication required only 2 to 3 times per step in order to collect sums over frequency and view number and to distribute frequency/view-independent variables, such as scattering-potential iterates, gradient iterates, etc., to the nodes.
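    The frequency/view independence can be sketched as follows; the per-pair contribution below is a toy stand-in for a forward solve plus Jacobian-transpose product, and all names are ours:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def grad_contribution(omega, phi, gamma):
    """Toy stand-in for one (frequency, view) gradient contribution.

    In the actual algorithm this would be a forward solve followed by a
    Jacobian-transpose product, independent of every other (omega, phi).
    """
    return np.cos(omega * gamma) * np.sin(phi)

gamma = np.linspace(0.0, 1.0, 8)          # scattering-potential iterate
pairs = [(w, p) for w in (1.0, 2.0) for p in (0.1, 0.2, 0.3)]

# Serial evaluation, as in the examples above:
serial = sum(grad_contribution(w, p, gamma) for w, p in pairs)

# Parallel evaluation; only the final sum couples the workers:
with ThreadPoolExecutor() as ex:
    parallel = sum(ex.map(lambda wp: grad_contribution(*wp, gamma), pairs))
```

    Only the final reduction (and the broadcast of the shared iterate) requires communication, which is the 2-to-3-exchanges-per-step pattern described above.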

    Example 2

    Image Reconstruction in Layered Ambient Media Using Optimization of a Bilinear or Quadratic Objective Function Containing all Detector Measurement Equations, and with Internal Fields Represented as a Function of the Scattering Potential

    [0467] Example 1 has the characteristic that the object function γ is considered to be the sole independent variable, and the internal field resulting from the incident field and interaction is considered to be a function of this γ. Thus the scattered field can be written as

    [00182] f_{\omega\phi}^{sc} = f_{\omega\phi}^{sc}(\gamma) \quad \text{for each frequency } \omega \text{ and view } \phi \qquad (24)

    [0468] The difference between Example 2 and Example 1 is that the background medium, in which the object is assumed to be buried, was assumed to be homogeneous in the previous case, whereas it is here assumed that the background is layered. The residual is defined in the same way as in the previous example, with d_{ωφ} used to represent the N_d-length vector of the measured scattered field at the detector positions. That is, the residual vectors for all ω, φ are defined in the following way.

    [00183] r_{\omega\phi} \equiv f_{\omega\phi}^{sc}(\gamma) - d_{\omega\phi} \quad \text{for each frequency } \omega \text{ and view } \phi \qquad (25)

    [0469] The functional F(γ), dependent upon γ, is defined in the same way as in Example 1:

    [00184] F(\gamma) = \sum_{\omega} \sum_{\phi} \|r_{\omega\phi}\|_2^2 \qquad (26)

    [0470] This is the functional, dependent upon the object function in a highly nonlinear manner, which must be minimized in order to determine the value of γ at each gridpoint in the object space.

    [0471] It is again necessary to compute the Jacobian:

    [00185] J_{\omega\phi}(f_{\omega\phi}^{sc}) = T\left((I - G_{\omega}[\gamma])^{-1}(G_{\omega}[f_{\omega\phi}])\right) \quad \text{for each frequency } \omega \text{ and view } \phi \qquad (27)

    [0472] where again φ refers to the multiple views and ω to the multiple frequencies available to us. That is, we again assume that the noise level in the system is zero. We apply the Gauss-Newton algorithm to the associated least squares problem and get the same result. This leads to the overdetermined system described above (the notation is identical to the previous example; the difference lies in the actual form of the layered Green's function).

    [00186] \left[ T\left([I - G_k[\gamma]]^{-1} G_k\right) \otimes I \right] \begin{bmatrix} [f_{1k}] \\ \vdots \\ [f_{\phi k}] \end{bmatrix} \delta\gamma^{(n)} = - \begin{bmatrix} r_{1k}^{(n)} \\ \vdots \\ r_{\phi k}^{(n)} \end{bmatrix} \quad \text{for each frequency } k \qquad (28)

    [0473] which must be solved for the γ-update δγ^{(n)}. The left-hand side of the above formula is given by (29)

    [00187] T\left([I - G_k[\gamma]]^{-1} G_k\right) \otimes I = \begin{bmatrix} T G_k (I - [\gamma^{(n)}] G_k)^{-1} & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & T G_k (I - [\gamma^{(n)}] G_k)^{-1} \end{bmatrix}

    [0474] Again, it is possible to use the complex analytic version of the Hestenes overdetermined conjugate gradient algorithm, adapted for least squares problems to iteratively solve this system. This is equivalent to finding the minimum norm solution.
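    A least-squares conjugate gradient of this general Hestenes type can be sketched as CGLS/CGNR, i.e. conjugate gradients applied implicitly to the normal equations (this is our generic implementation, not the patent's specific complex-analytic variant):

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-12):
    """Solve the overdetermined system A x = b in the least-squares sense.

    Starting from x = 0 this converges to the minimum-norm least-squares
    solution; A may be complex, as in the scattering problems above.
    """
    x = np.zeros(A.shape[1], dtype=complex)
    r = b - A @ x                      # data-space residual
    s = A.conj().T @ r                 # normal-equations residual A^H r
    p = s.copy()
    norm_s = np.vdot(s, s).real
    for _ in range(iters):
        q = A @ p
        alpha = norm_s / np.vdot(q, q).real
        x += alpha * p
        r -= alpha * q
        s = A.conj().T @ r
        norm_s_new = np.vdot(s, s).real
        if norm_s_new < tol:
            break
        p = s + (norm_s_new / norm_s) * p
        norm_s = norm_s_new
    return x
```

    In the imaging algorithms, the products with A and A^H would be applied matrix-free via the FFT machinery described below rather than with explicit matrices.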

    [0475] The formula for the Jacobian in the layered-medium situation, in the presence of multiple frequencies, is

    [00188] J_{\omega\phi}(f_{\omega\phi}^{sc}) = T\left((I - G_{\omega}[\gamma])^{-1}(G_{\omega}[f_{\omega\phi}])\right) \quad \text{for each frequency } \omega \text{ and view } \phi \qquad (30)

    [0476] where G_ω is the layered Green's function for the frequency ω. Therefore, in effect, to determine the γ-update, δγ, we merely solve the multiple-view problem for each particular frequency; that is, we solve the overdetermined system:

    [00189] \left[ T\left([I - G_k[\gamma]]^{-1} G_k\right) \otimes I \right] \begin{bmatrix} [f_{1k}] \\ \vdots \\ [f_{\phi k}] \end{bmatrix} \delta\gamma^{(n)} = - \begin{bmatrix} r_{1k}^{(n)} \\ \vdots \\ r_{\phi k}^{(n)} \end{bmatrix} \qquad (31) [0477] for each frequency k, [0478] which in component form is:

    [00190] \left[ T\left([I - G_k[\gamma]]^{-1} G_k\right) \right] [f_{jk}] \, \delta\gamma^{(n)} = -r_{jk}^{(n)} \quad \text{for each frequency } k \text{ and view } j \qquad (32)

    [0479] For multiple-view and multiple-frequency inversion, then, the inversion scheme in layered media reads: [0480] STEP 1. Choose an initial guess for the scattering potential, γ^{(0)} [0481] STEP 2. Set n=0 [0482] STEP 3. Solve the forward problems

    [00191] (I - G_{\omega}[\gamma]) f_{\omega\phi} = f_{\omega\phi}^{inc} \quad \text{for each frequency } \omega \text{ and view } \phi, [0483] using biconjugate gradients, and use the result to compute the forward maps

    [00192] f_{\omega\phi}^{sc}(\gamma^{(n)}) \quad \text{and} \quad r_{\omega\phi}^{(n)} = f_{\omega\phi}^{sc}(\gamma^{(n)}) - d_{\omega\phi} \quad \text{for each frequency } \omega \text{ and view } \phi, [0484] STEP 4. Determine the L_2 norm of the total residual vector:

    [00193] r_{tr}^{(n)} \equiv \begin{bmatrix} r_{11}^{(n)} \\ \vdots \\ r_{1\phi}^{(n)} \\ \vdots \\ r_{\omega 1}^{(n)} \\ \vdots \\ r_{\omega\phi}^{(n)} \end{bmatrix} [0485] STEP 5.

    [00194] \text{If } \|r_{tr}^{(n)}\|^2 = \sum_{\omega} \sum_{\phi} \|r_{\omega\phi}^{(n)}\|^2 < \epsilon \text{ then stop.} [0486] STEP 6. Solve the N_d by N overdetermined system:

    [00195] \begin{bmatrix} J_{11}^{(n)} \\ \vdots \\ J_{1\phi}^{(n)} \\ \vdots \\ J_{\omega\phi}^{(n)} \end{bmatrix} \delta\gamma^{(n)} = - \begin{bmatrix} r_{11}^{(n)} \\ \vdots \\ r_{1\phi}^{(n)} \\ \vdots \\ r_{\omega\phi}^{(n)} \end{bmatrix} [0487] in the least squares sense for δγ^{(n)}, using the conjugate gradient algorithm adapted for least squares problems [0488] STEP 7. Update γ via the formula

    [00196] \gamma^{(n+1)} = \gamma^{(n)} + \delta\gamma^{(n)} [0489] STEP 8. Set n=n+1, go to begin
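    STEPS 1 through 8 can be sketched in miniature with a toy forward map standing in for the multi-view, multi-frequency scattering operator (the example map, the target, and all names are ours; `lstsq` plays the role of the least-squares conjugate gradient solver of STEP 6):

```python
import numpy as np

def forward(gamma):
    """Toy stand-in for the stacked forward maps f_sc(gamma)."""
    return np.array([gamma[0]**2 + gamma[1],
                     gamma[0] - gamma[1]**2,
                     gamma[0] * gamma[1]])

def jacobian(gamma):
    """Toy stand-in for the stacked Jacobian blocks."""
    return np.array([[2 * gamma[0], 1.0],
                     [1.0, -2 * gamma[1]],
                     [gamma[1], gamma[0]]])

d = forward(np.array([0.5, -0.3]))   # "measured" data from a known target
gamma = np.array([0.4, -0.2])        # STEP 1: initial guess
for n in range(50):                  # STEP 2 / STEP 8: iteration loop
    r = forward(gamma) - d           # STEP 3: forward solve and residuals
    if np.linalg.norm(r) < 1e-12:    # STEPS 4-5: stopping test
        break
    # STEP 6: overdetermined system, solved in the least-squares sense
    dgamma = np.linalg.lstsq(jacobian(gamma), -r, rcond=None)[0]
    gamma = gamma + dgamma           # STEP 7: gamma update
```

    For this zero-residual toy problem the Gauss-Newton iteration converges quadratically to the target once the iterate is in its neighborhood.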

    [0490] The actual values of the angles of incidence are chosen to diminish as much as possible the ill-conditioning of the problem. Equally spaced values over 0 to 2π are ideal, but experimental constraints may prohibit such values, in which case the multiple frequencies are critical.

    [0491] For biological contrasts (γ < 0.15, say) the following approximation, which is essentially identical to the assumption in the previous example, is valid:

    [00197] [I - G_{\omega}[\gamma]]^{-1} \approx [I + G_{\omega}[\gamma]] \qquad (33)

    [0492] This is the same Born-like approximation, restricted to the Jacobian calculation only, that is assumed in Example 1. This approximation has a much larger radius of convergence than the standard Born approximation, and it substantially reduces the computational complexity of the problem.
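    The quality of approximation (33) is easy to check numerically on a stand-in operator: a small random matrix of modest norm takes the place of G_ω[γ], and the neglected terms are the quadratic and higher terms of the Neumann series.

```python
import numpy as np

# (I - A)^{-1} = I + A + A^2 + ...  ~  I + A  when ||A|| is small;
# A is a random stand-in for G_omega[gamma] at low contrast.
rng = np.random.default_rng(0)
A = 0.02 * rng.standard_normal((6, 6))
exact = np.linalg.inv(np.eye(6) - A)
born_like = np.eye(6) + A
rel_err = np.linalg.norm(exact - born_like) / np.linalg.norm(exact)
# rel_err is O(||A||^2), i.e. far smaller than the contrast itself
```

    The quadratic scaling of the error is what makes the approximation acceptable inside the Jacobian while the forward problem itself is still solved exactly.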

    How to Implement the Convolution and Correlation in Layered Medium Imaging

    [0493] All of the above calculations have been carried out without using the fact that the operation of matrix multiplication with G is in fact the sum of a convolution and a correlation, which can be transformed into a pair of convolutions. Symbolically, G^L = G_R + G_V, where G_R is the correlation and G_V is a convolution operator. The actual numerical implementation of the above formulas, therefore, is somewhat different than would naively appear. The implementation of the correlation with a convolution is a little more challenging than the convolution in the homogeneous case, in that a change of variables must be used to convert the correlation to a convolution before the Fast Fourier Transforms (FFTs) can be applied. A reflection operator must be applied to the field before the convolution is applied; this operator is denoted by Z, and is defined by:

    [00198] ( Z f ) ( x , y , z ) = f ( x , y , h - z ) ( 34 )

    [0494] The use of this operator is incorporated into the action of the Lippmann-Schwinger equation on the field f. That is, the equation for the internal fields, which in Example 1 read (I − G[γ])f = f^{inc}, now becomes (I − G_V[γ] − G_R Z[γ])f = f^{inc}.

    [0495] This change has non-trivial ramifications for the construction of the algorithms discussed above. The FFT implementation of this is given below. To prevent the notation from getting out of hand, we use [·] to indicate that a diagonal matrix is constructed out of an n-vector, so that [γ^{(n)}]f^{(n)} denotes pointwise multiplication of the vectors γ^{(n)} and f^{(n)}. Also, F is used to denote the Fourier transform, and * is used to indicate convolution. In this notation, the Lippmann-Schwinger equation becomes, in the symbolic form adopted above,

    [00199] (I - G_V[\gamma] - G_R Z[\gamma]) f = f - G_V * ([\gamma] f) - G_R * Z([\gamma] f) = f - F^{-1}\left\{ F(G_V) \cdot F([\gamma] f) + F(G_R) \cdot F(Z([\gamma] f)) \right\} \qquad (35)

    [0496] Substantial savings are achieved through the observations that the Fourier transforms of G_V and G_R need only be computed once and then stored, and that the reflection operator, Z, commutes with Fourier transformation in the x and y (horizontal) directions, since it only involves reflection in the z (vertical) direction.
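    A 1-D (vertical) sketch of this trick follows, using a periodic grid and a reflection about z = 0 rather than the patent's z → h − z (the shift only contributes a phase factor); the kernels and field are arbitrary stand-ins:

```python
import numpy as np

n = 8
g_v = np.arange(1.0, n + 1)        # stand-in convolution kernel G_V
g_r = np.ones(n)                    # stand-in correlation kernel G_R
f = np.sin(np.arange(n))            # stand-in for [gamma] f

# Reflection z -> -z on the periodic grid (the Z operator, up to a shift):
Zf = np.roll(f[::-1], 1)

# Both parts become FFT multiplications once Z is applied:
conv_part = np.real(np.fft.ifft(np.fft.fft(g_v) * np.fft.fft(f)))
corr_part = np.real(np.fft.ifft(np.fft.fft(g_r) * np.fft.fft(Zf)))

# Direct O(n^2) circular convolution / correlation for comparison:
conv_direct = np.array([sum(g_v[(i - j) % n] * f[j] for j in range(n))
                        for i in range(n)])
corr_direct = np.array([sum(g_r[(i + j) % n] * f[j] for j in range(n))
                        for i in range(n)])
```

    The correlation indices (i + j) become convolution indices (i − j) under the substitution j → −j, which is exactly the change of variables that Z implements; a production implementation would also zero-pad to avoid the periodic wrap-around assumed here.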

    [0497] There are changes elsewhere as well, for example in the computation of the adjoint. The biconjugate gradient algorithm requires the use of the adjoint in the solution of the above equation. This adjoint differs from the homogeneous-ambient-medium case through the introduction of the correlation operator. Also, there are substantial changes in the implementation of the overdetermined conjugate gradient algorithm used to solve (in the least squares sense) the overdetermined system of equations. In this section, as before, we incorporate the k_sc² factor into the G_V and G_R terms. Of course, rather than carry out the inversion of a matrix, we use biconjugate gradients and Fast Fourier Transforms (FFTs), which require the Lippmann-Schwinger operator, which we denote by LS, and its adjoint; these are defined by the following formulas:

    [00200] (LS) f \equiv \left( I - F^{-1}\left[ (\tilde{G}_V \cdot F) + (\tilde{G}_R \cdot F Z) \right] \right) ([\gamma] f)

    [0498] where G̃_V ≡ F(G_V) and G̃_R ≡ F(G_R) are the Fourier transforms of G_V and G_R, and need only be calculated once, then stored. The unitarity of F is used below to determine the adjoint (LS)^H.

    [00201] (LS)^H = \left( \left( I - F^{-1}\left[ (\tilde{G}_V \cdot F) + (\tilde{G}_R \cdot F Z) \right] \right) [\gamma] \right)^H = [\bar{\gamma}] \left( I - \left[ (F^{H} \overline{\tilde{G}}_V \cdot (F^{-1})^{H}) + (Z^{H} F^{H} \overline{\tilde{G}}_R \cdot (F^{-1})^{H}) \right] \right) = [\bar{\gamma}] \left( I - (F^{-1} \overline{\tilde{G}}_V \cdot F) - (Z F^{-1} \overline{\tilde{G}}_R \cdot F) \right) \qquad (37)

    [0499] where we have used the unitarity of the Fourier transformation F (F^H = F^{-1}), the fact that the adjoint of pointwise multiplication is pointwise multiplication by the complex conjugate, and the fact that Z^H = Z. For practical problems it is best to obtain a reasonable first guess using low-frequency data, then to build on this using the higher-frequency information.
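    The adjoint identity (37) can be verified numerically on a periodic 1-D grid by checking the inner-product definition of the adjoint; the kernels and contrast below are random stand-ins, and the reflection is taken about z = 0 for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
Gv = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # ~G_V (Fourier domain)
Gr = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # ~G_R (Fourier domain)
gam = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # [gamma] diagonal

F, Fi = np.fft.fft, np.fft.ifft
Z = lambda f: np.roll(f[::-1], 1)      # reflection on the periodic grid, Z^H = Z

def LS(f):
    """Lippmann-Schwinger operator of [00200]."""
    gf = gam * f
    return gf - Fi(Gv * F(gf)) - Fi(Gr * F(Z(gf)))

def LSH(f):
    """Its adjoint per (37): conjugated kernels, Z moved outside."""
    return np.conj(gam) * (f - Fi(np.conj(Gv) * F(f)) - Z(Fi(np.conj(Gr) * F(f))))

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
lhs = np.vdot(LS(f), g)   # <LS f, g>
rhs = np.vdot(f, LSH(g))  # <f, LS^H g>
```

    The scaling factors of numpy's non-unitary FFT cancel between F^H and (F^{-1})^H, so the code mirrors the unitary derivation directly.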

    The Actual Construction of the Layered Green's Function

    [0500] Up to this point, it has been assumed that the layered Green's function exists. For simplicity in describing the construction of this function, we deal with the scalar case, although the inclusion of shear wave phenomena is conceptually no more difficult than this case. The details are given in [Wiskin, 1991]. The general idea is based upon standard ideas in the literature concerning reflection coefficients in layered media. See for example [Muller, G., 1985, Journal of Geophysics, vol. 58], which is herein incorporated by reference. Here, the combination of the reflection coefficients into a bona fide Green's function, the utilization of this function in a forward problem, and then, more importantly, its use in the inverse problem solution are described. The procedure involves decomposing the point response in free space into a continuum of plane waves. These plane waves are multiply reflected in the various layers, accounting for all reverberations via the proper plane-wave reflection/transmission coefficients. The resulting plane waves are then re-summed (via a Weyl-Sommerfeld type integral) into the proper point response, which in essence is the desired Green's function in the layered medium. The final result is:

    [00202] G_V(x - x'; |z - z'|) = \int_{-\infty}^{\infty} e^{iu(x - x')} C(u) \left[ e^{-i b_{sc} |z - z'|} + R^{+} R^{-} e^{i b_{sc} |z - z'|} \right] du, \qquad (38)
    and
    G_R(x - x'; z + z') = \int_{-\infty}^{\infty} e^{iu(x - x')} C(u) \left[ R^{+} e^{-i b_{sc} (z + z')} + R^{-} e^{i b_{sc} (z + z')} \right] du, \qquad (39)
    where C(u) = (1 - R^{+} R^{-})^{-1} \frac{iu}{b_{sc}},

    [0501] R^- and R^+ are the recursively defined reflectivity coefficients described in Muller's paper,

    [0502] u is the horizontal slowness,

    [00203] u = \frac{\sin \theta}{c}

    [0503] Recall that Snell's law guarantees that u will remain constant as a given incident plane wave passes through several layers.

    [0504] ω is the frequency,

    [0505] b_sc is the vertical slowness for the particular layer hosting the scattering potential.

    [0506] This Green's function for imaging inhomogeneities residing within ambient layered media must be discretized by convolving with the sinc (really jinc) basis functions described below. This is done analytically in [Wiskin, 1991], and the result is given below.

    Discretization of Layered Medium Lippmann-Schwinger Equation

    [0507] Unfortunately, the basis functions that were used in the free space case (Example 1) cannot be used for the discretization in the layered medium case, because of the arbitrary distribution of the layers above and below the layer which contains the scattering potential. The sinc functions may continue to be used in the horizontal direction, but the vertical direction requires the use of a basis function with compact support (i.e., one that is nonzero only over a finite region). The sampled (i.e., discrete) Green's operator in the layered case is defined by

    [00204] \check{G}^L(j, k, m) = (G^L * B)(x_{jkm}) \qquad (40)

    [0508] where the three dimensional basis functions are of the form:

    [00205] B(x) = B(x, y, z) = S_{(j,\delta)}(x) \, S_{(k,\delta)}(y) \, \Lambda_m(z) \qquad (41)

    [0509] (I is the set of integers) and where:

    [00206] x = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \in \mathbb{R}^3, \qquad x_{jkm} = \begin{pmatrix} j\delta \\ k\delta \\ m\delta \end{pmatrix}, \quad j, k, m \in I

    [0510] The basis functions in the horizontal (x-y) plane, S_{(j,δ)}, are based upon the Whittaker sinc function:

    [00207] \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}

    [0511] The basis functions in the vertical (z) direction, on the other hand, are given by

    [00208] \Lambda_m(z) = \Lambda(z - m\delta),

    [0512] where Λ(z) is the tent function:

    [00209] \Lambda(z) = \begin{cases} \frac{1}{\delta}(z + \delta), & z \in [-\delta, 0] \\ -\frac{1}{\delta}(z - \delta), & z \in [0, \delta] \end{cases}
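    This mixed basis (sinc horizontally, tent vertically) can be sketched as follows; the grid spacing δ and the function names are ours:

```python
import numpy as np

def sinc_basis(j, delta, x):
    """Whittaker sinc centered on grid node j*delta (horizontal directions).

    np.sinc(t) is sin(pi t)/(pi t), so this basis function equals 1 at its
    own node and 0 at every other node (the interpolating property).
    """
    return np.sinc((x - j * delta) / delta)

def tent_basis(m, delta, z):
    """Tent function centered on m*delta (vertical direction), compactly
    supported on [(m-1)*delta, (m+1)*delta] and 0 elsewhere."""
    return np.clip(1.0 - np.abs(z - m * delta) / delta, 0.0, None)
```

    Both families interpolate on the grid, but only the tent has compact support, which is what the arbitrary layering above and below the scattering layer requires of the vertical basis.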

    [0513] The form of the discretized Lippmann Schwinger equation is

    [00210] f^{inc}(n\delta, m\delta, l\delta) = f^{inc}_{nml} = f_{nml} - k_{sc}^2 \sum_{n', m', l'} \check{G}^V(n - n', m - m', l - l') \, \gamma_{n'm'l'} f_{n'm'l'} - k_{sc}^2 \sum_{n', m', l'} \check{G}^R(n - n', m - m', l + l') \, \gamma_{n'm'l'} f_{n'm'l'}, \qquad (42)

    [0514] where the vectors

    [00211] \check{G}^R \text{ and } \check{G}^V

    are the result of first convolving the Green's function with the basis functions, and then sampling at the gridpoints. The superscript R refers to the correlation part, whereas the superscript V refers to the convolution part of the Green's function. Both are computed via Fast Fourier Transforms, and are very fast. In the free space case the correlation part is zero. That is, we make the definition for L = V or R (i.e., convolution or correlation):

    [00212] \check{G}^L(n, m, l) \equiv \int_{\mathbb{R}^3} S(x) S(y) \Lambda(z) \, G^L(n\delta - x, m\delta - y, l\delta - z) \, dx \, dy \, dz \qquad (43)

    [0515] Substitution into equation (17) yields the form (42).

    [0516] In matrix notation, this equation reads:

    [00213] f = \left( I - k_{sc}^2 \check{G}^V * [\gamma] - k_{sc}^2 \check{G}^R * Z[\gamma] \right)^{-1} f^{inc} \qquad (44)

    [0517] where now * represents discrete convolution in 3D. Now that we have constructed the integral equations in discrete form, we must derive the closed-form expression for the layered Green's operators. The construction of the closed-form expression of the sampled and convolved acoustic layered Green's function

    [00214] \check{G}^L,

    from the layered Green's function given in equations (38) and (39) above, is performed by carrying out the integration in (43) analytically. This process is carried out in [Wiskin, 1991]. The final result is, for the convolutional part,

    [00215] \check{G}_V(m, n) = \begin{cases} A \int_{u=0}^{u_\omega} u J_0 \, \frac{C(u)}{2 i b_{sc} \delta} \left\{ R^{-} R^{+} \left\{ -1 + \frac{e^{i b_{sc} \delta} - 1}{i b_{sc} \delta} \right\} + \left\{ 1 - e^{-i b_{sc} \delta} \frac{e^{i b_{sc} \delta} - 1}{i b_{sc} \delta} \right\} \right\} du, & n = 0 \\ A \int_{u=0}^{u_\omega} u J_0 \, \frac{C(u)}{2 b_{sc}^2 \delta^2} \left\{ 2 - e^{-i b_{sc} \delta} - e^{i b_{sc} \delta} \right\} \left\{ e^{-i b_{sc} \delta |n|} + R^{-} R^{+} e^{i b_{sc} \delta |n|} \right\} du, & |n| \ge 1 \end{cases} \qquad (45)
    for m \in [0, M-1], n \in [-N+1, N-1], and
    \check{G}_R(m, n) = A \int_{u=0}^{u_\omega} u J_0 \, C(u) \frac{\left\{ 2 - e^{i b_{sc} \delta} - e^{-i b_{sc} \delta} \right\}}{2 b_{sc}^2 \delta^2} \left\{ R^{+} e^{-i b_{sc}(h + 2a)} e^{-i b_{sc} \delta n} + R^{-} e^{i b_{sc}(h + 2a)} e^{i b_{sc} \delta n} \right\} du, \quad m \in [0, M-1], \; n \in [-N+1, N-1]

    [0518] In the above formulas, J_0 ≡ J_0(u|m|) is the zeroth-order Bessel function, and the upper limit u_ω is defined as:

    [00216] u_\omega = 1/c = k_\omega / \omega

    [0519] for wavenumber k_ω = ω/c. Also, C(u) ≡ (1 − R^-R^+)^{-1} S^c, where

    [00217] S^c = \frac{iu}{b_{sc}}

    [0520] with b_sc defined as the vertical slowness in the layer containing the scattering point. When this layer is assumed to have acoustic wave velocity c_sc, it is given explicitly by:

    [00218] b_{sc} = \sqrt{\frac{1}{c_{sc}^2} - u^2}

    [0521] These expressions give the discretized 3D acoustic layered Green's function, which is used in the layered-media discretized Lippmann-Schwinger equation to obtain the solution field within the inhomogeneity.

    [0522] As can be seen, the correlation part of the Green's function is included. This correlation part is zero for the free space case. The correlational part is directly applicable, as it occurs above, to the fish echo-locator/counter and to the mine detection device in the acoustic approximation. (There are specific scenarios where the acoustic approximation will be adequate, even though shear waves are clearly supported to some degree in all sediments.) The inclusion of shear waves into the layered media imaging algorithm (the technical title of the algorithm at the heart of the hazardous waste detection device, the fish echo-locator, and the buried mine identifier) is accomplished in Example 6 below. A generalized Lippmann-Schwinger equation is proposed. This general equation is a vector counterpart to the acoustic layered Green's function, and is discretized before it can be implemented. The process of discretization is virtually identical to the method revealed above. Furthermore, the BCG method for solving the forward problem, the use of the sinc basis functions, and the use of the Fast Fourier Transform (FFT) are all carried out identically as they are in the acoustic (scalar) case.

    Example 3

    Imaging with Electromagnetic Wave Energy

    [0523] The use of electromagnetic energy does not greatly affect the basic idea behind the imaging algorithm, as this example will demonstrate. Furthermore, we will see that it is possible to combine the layered Green's function with the electromagnetic free space Green's function to image materials within layered dielectrics. In fact, this process of combining the layered Green's function with Green's functions derived for other structures and/or modalities than the free space acoustic case can be extended almost indefinitely. Another example is given below, where the combination of the acoustic Biot Green's function with the layered Green's function is carried out. Further extensions that are not detailed here are: (1) combining elastic (including shear wave energy) wave equations with the layered Green's function, (2) combining the elastic Biot equations with the layered Green's function, and (3) combining the elastic and electromagnetic wave equations to model transducers. For simplicity, we consider a homogeneous, isotropic, non-magnetic background material. The time dependence will be e^{iωt}, the magnetic properties of free space are summarized in ẑ_o ≡ iωμ_o, and the electric properties are summarized in ŷ_o = iωε_o. Within the object being imaged the magnetic properties are ẑ ≡ ẑ_o ≡ iωμ_o (the equivalence with the free space value is the non-magnetic media assumption). The electric properties of the object being imaged are summarized in ŷ ≡ σ + iωε = σ + iωε_rε_0, which is the complex admittivity of the object. The larger σ is, the less able the medium is to support electromagnetic wave transport. These properties are combined in

    [00219] k_o^2 = -\hat{y}_0 \hat{z}_0 = \omega^2 \mu_0 \epsilon_0 = \frac{\omega^2}{c_o^2} \qquad (47)

    [0524] which is a measure of the number of wave cycles present per unit distance for a given frequency ω, when the wave speed of propagation is c_o. The object's electrical properties (recall it is assumed non-magnetic for simplicity) are summarized in the object function, γ (the normalized difference from the surrounding medium):

    [00220] \gamma(r) \equiv \frac{\hat{y} - \hat{y}_o}{\hat{y}_o} = \frac{\sigma + i\omega\epsilon_r\epsilon_o}{i\omega\epsilon_o} - 1 = -\frac{i\sigma}{\omega\epsilon_o} + \epsilon_r - 1 \qquad (48)

    [0525] The total electric field equations are:

    [00221] E^i(r) = E(r) - (k_o^2 + \nabla\nabla\cdot) \int \gamma(r') E(r') \frac{e^{-i k_o |r - r'|}}{4\pi |r - r'|} \, d^3 r' \qquad (49)

    [0526] where E.sup.i(r) is the 3-D incident field, E(r) is the 3-D total field.

    [0527] The construction of the sinc-basis discretization and the GN and RP algorithms for this 3-D EM case is essentially equivalent to the 2-D scalar acoustic case described above. See [Borup, 1989] for the details of the discretization and solution of the forward problem by FFT-BCG. The vector/tensor nature of the fields and Green's function is the only new feature, and this is easily dealt with.

    [0528] For simplicity, we now look at the situation where there is no z dependence in either the object function, i.e., γ(x, y, z) = γ(x, y), or the incident field. The vector ρ = (x, y) is used for the position vector in (x, y)-space.

    [00222] E^i(\rho) = E(\rho) - (k_o^2 + \nabla\nabla\cdot) \int \gamma(\rho') E(\rho') \frac{1}{4i} H_0^{(2)}(k_o |\rho - \rho'|) \, d^2\rho' \qquad (50)

    [0529] In matrix form, equation (50) looks like:

    [00223] E^i(\rho) = E(\rho) - \begin{bmatrix} k_o^2 + \frac{\partial^2}{\partial x^2} & \frac{\partial^2}{\partial x \partial y} & 0 \\ \frac{\partial^2}{\partial y \partial x} & k_o^2 + \frac{\partial^2}{\partial y^2} & 0 \\ 0 & 0 & k_o^2 \end{bmatrix} \int \gamma(\rho') E(\rho') \frac{1}{4i} H_0^{(2)}(k_o |\rho - \rho'|) \, d^2\rho' \qquad (51)

    [0530] From this form of the equation, it is clear that the electric field in the z-direction is uncoupled from the electric field in the x-y plane. Thus, in this situation the field consists of two parts: the so-called transverse electric (TE) mode, in which the electric field is transverse to the z direction, and the transverse magnetic (TM) mode, in which the electric field has a nonzero component only in the z direction. The TM mode is governed by the scalar equation:

    [00224] E_z^i(\rho) = E_z(\rho) - \frac{k_o^2}{4i} \int \gamma(\rho') E_z(\rho') H_0^{(2)}(k_o |\rho - \rho'|) \, d^2\rho' \qquad (52) \quad \text{where } E^i(\rho) = \begin{pmatrix} E_x^i \\ E_y^i \\ E_z^i \end{pmatrix} \text{ and } E(\rho) = \begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix} \text{ are functions of } (x, y)

    [0531] which is identical to the 2-D acoustic equation discussed previously. Thus, the 2-D TM electromagnetic imaging algorithm is identical to the 2-D acoustic case discussed in detail above.

    [0532] The TE mode is given by the following equation:

    [00225] \begin{pmatrix} E_x^i(\rho) \\ E_y^i(\rho) \end{pmatrix} = \begin{pmatrix} E_x(\rho) \\ E_y(\rho) \end{pmatrix} - \frac{1}{4i} \begin{bmatrix} k_o^2 + \frac{\partial^2}{\partial x^2} & \frac{\partial^2}{\partial x \partial y} \\ \frac{\partial^2}{\partial y \partial x} & k_o^2 + \frac{\partial^2}{\partial y^2} \end{bmatrix} \int \gamma(\rho') \begin{pmatrix} E_x(\rho') \\ E_y(\rho') \end{pmatrix} H_0^{(2)}(k_o |\rho - \rho'|) \, d^2\rho' \qquad (53)

    [0533] The field is a two-component vector and the Green's function is a 2×2 tensor Green's function:

    [00226] G = \begin{bmatrix} g_{xx} & g_{xy} \\ g_{yx} & g_{yy} \end{bmatrix} = \frac{1}{4i} \begin{bmatrix} k_o^2 + \frac{\partial^2}{\partial x^2} & \frac{\partial^2}{\partial x \partial y} \\ \frac{\partial^2}{\partial y \partial x} & k_o^2 + \frac{\partial^2}{\partial y^2} \end{bmatrix} H_0^{(2)}(k_o |\rho - \rho'|) \qquad (54)

    [0534] In compact notation:

    [00227] E^i = (I - G\Gamma) E \qquad (55) \quad \text{where: } E^i = \begin{bmatrix} E_x^i \\ E_y^i \end{bmatrix}, \; E = \begin{bmatrix} E_x \\ E_y \end{bmatrix}, \; I = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}, \; \Gamma = \begin{bmatrix} \gamma & 0 \\ 0 & \gamma \end{bmatrix}

    [0535] This equation also has a convolution form and can thus be solved by the FFT-BCG algorithm as described in [Borup, 1989]. The construction of the GN-MRCG and RP imaging algorithms for this case is identical to the 2-D acoustic case described above, with the exception that the fields are two-component vectors and the Green's operator is a 2×2 tensor Green's function with convolutional components.
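    Applying such a 2×2 convolutional Green's operator reduces to four scalar FFT convolutions per operator application; the kernels, contrast, and fields below are random stand-ins (and we use g_yx = g_xy, consistent with (54)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
gxx, gxy, gyy = (rng.standard_normal((n, n)) for _ in range(3))
gamma = 0.1 * rng.standard_normal((n, n))        # object function stand-in
Ex, Ey = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# 2-D circular convolution of a kernel g with a field f via FFT:
conv = lambda g, f: np.real(np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(f)))

# Incident field E^i = (I - G Gamma) E, equation (55), block by block:
Exi = Ex - conv(gxx, gamma * Ex) - conv(gxy, gamma * Ey)
Eyi = Ey - conv(gxy, gamma * Ex) - conv(gyy, gamma * Ey)
```

    In the inverse problem this forward map would be applied repeatedly inside the FFT-BCG solver, with zero-padding handling the non-periodic physical domain.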

    [0536] Finally, note that the presence of layering parallel to the z direction can also be incorporated into these 2-D algorithms in essentially the same manner as above. Special care must be taken, of course, to ensure that the proper reflection coefficient is in fact used. The reflection coefficient for the TM case is different from that for the TE case.

    Example 5

    Extension to Biot Theory (Acoustic Approximation)

    [0537] Note: an extensive discussion of Biot theory in the acoustic approximation, and a resulting imaging method with corresponding implementations, are described above in the section entitled "Biot porous media". In the current example provided below, an initial imaging theory is discussed.

    [0538] In the article [Boutin, 1987, Geophys. J. R. astr. Soc., vol. 90], herein incorporated by reference, a Green's function is developed for the determination of a point response to a scattering point located in an isotropic, homogeneous, porous medium that supports elastic waves. The implementation of this Green's function into an imaging algorithm has never been carried out before. In this section, we have adapted their approach to derive an acoustic approximation to the fully elastic Biot theory that enables us to present a simplified practical tool for the imaging of porosity-like parameters in a geophysical context. The implementation of the full elastic Biot imaging algorithm, using the Green's function of [Boutin, 1987] in place of the one derived in [Wiskin, 1992], is no different from the discretization employed here. The use of the sinc basis functions, the FFTs, the biconjugate gradients, and so on, is identical.

    [0539] We have developed a model that incorporates the parameters of fluid content, porosity, permeability, etc., but instead of the 3+1 degrees of freedom of the two-phase elastic model, has only 1+1, corresponding to the two independent variables: the pressure fields within the solid (modelled as a liquid in this approximation) and the truly liquid phase, respectively. As with the previous example, the use of the Biot theory Green's function can be combined profitably with layering, to image material in a layered background.

    [0540] Define (see [Boutin, 1987] for a more complete discussion of the parameters): [0541] ω as the frequency of the interrogating wave [0542] ρ_l as the density of the liquid phase [0543] ρ_s as the density of the solid phase [0544] n as a saturation parameter [0545] K(ω) as the generalized, explicitly frequency-dependent Darcy coefficient introduced via homogenization theory. [0546] ρ_o and the remaining effective parameters are defined in the following way:

    [00228] = ( 1 - n ) s + 1 [ n + 1 2 K ( ) / i ] = K ( ) / i ) o = + 1 2 K ( ) / i = + 1 2

    [0547] The acoustic two-phase Green's function is the Green's function obtained by solving the above system with a distributional right-hand side, where δ(x) is the Dirac delta distribution. That is, we seek the solution to the following matrix operator equation: B G_Biot + I_{2×2} δ(x) = 0, where G_Biot: R^{2,3} → R^2 is a function on R^3 or R^2 and I_{2×2} is the two-dimensional identity matrix. B is given by:

    [00229] B = [ 2 - B - 0 - 0 2 2 + 2 ]

    [0548] This author obtained

    [00230] G B i o t = 1 4 0 ( 2 2 - 1 2 ) { ( w .Math. d ) [ 0 0 ] + ( w .Math. v ) [ p _ 2 0 0 - _ ]

    [0549] where w: R^3 → R^2 is the 2-component vector defined by:

    [00231] w = \begin{pmatrix} \dfrac{e^{-i k_1 |x|}}{|x|} \\ \dfrac{e^{-i k_2 |x|}}{|x|} \end{pmatrix}

    [0550] and x ∈ R^3 for the acoustic two-phase 3D model, and where d and v are 2-component vectors (representing the two phases) given by

    [00232] d = ( - 1 2 2 2 ) , v = ( 1 - 1 )

    [0551] A very similar analysis gives an equation and corresponding Green's operator for the acoustic two-phase 2D (two spatial independent variables) case, in which w: R^2 → R^2 and x ∈ R^2. The operator is very similar, except that the w vector contains Hankel functions of the zeroth order:

    [00233] w = \begin{pmatrix} \frac{i}{4} H_0^{(2)}(k_1 |x|) \\ \frac{i}{4} H_0^{(2)}(k_2 |x|) \end{pmatrix}

    [0552] Notice that the existence of the two wavenumbers k_1 and k_2 in this acoustic approximation to Biot theory guarantees the existence of a fast and a slow compressional wave, a distinguishing characteristic of the classical elastic Biot theory.

    [0553] More importantly, this algorithm provides us with a computationally feasible means of inverting for phenomenological parameters derived from the classical elastic two-phase Biot theory.

    [0554] Finally, it is important to point out that convolving the above functions with the sinc basis function analytically is done in exactly the same manner with G_Biot = G_{2×2} above as it was done above, because x only occurs in the form of the scalar acoustic Green's operator. This is why it is possible to create very efficient algorithms based upon the sinc-Galerkin method and FFTs, as described in [Borup, 1992], using the Green's operator described above.

    [0555] It is now possible to obtain an algorithm for the inversion of the 2D Biot porous medium by analogy with the algorithm based upon the Lippmann-Schwinger equation used earlier. This new algorithm is based upon a two phase Lippmann-Schwinger type equation derived in a future publication.

    [00234] r^0(x) = r(x) - \int \begin{bmatrix} -\bar{b}_1 & 0 \\ 0 & \bar{b}_2 \end{bmatrix} G_{2 \times 2}(x - \xi) \, D(\xi) \, r(\xi) \, d^3\xi, \quad \text{where } D(x) = \begin{bmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{bmatrix}

    [0556] and the object functions γ_j are given by

    [00235] \gamma_j(x) = (-1)^j \left( \frac{b_j(x)}{\bar{b}_j} - 1 \right), \quad j = 1, 2

    [0557] This equation is discretized in the same manner as above, and the convolution character is preserved. With the explicit form of the discretized Biot Lippmann-Schwinger integral equation, the imaging problem is solved by applying the conjugate gradient iterative solution method to the Gauss-Newton equations in a manner exactly analogous to the discussion above. See above for the paraxial approximation to this imaging theory (e.g., in the section entitled "Biot porous media" and the section entitled "paraxial approximation").

    [0558] Finally, as mentioned above, it is certainly possible to construct, using the above principles, a layered Green's function for the Biot theory in exact analogy with the construction for the acoustic layered Green's Operator. This operator will be a matrix operator (as is the acoustic two phase Green's Operator constructed above), and will consist of a convolutional and correlational part (as does the acoustic single phase Green's Operator constructed earlier). Because of these convolutional and correlational parts, the FFT methods discussed above are directly applicable, making the algorithm feasible from a numerical standpoint.

    Example 6

    Non-Perturbative Inversion of Elastic Inhomogeneous Structures Buried within a Layered Half Space

    [0559] In elastic media (by convention in this patent, media that support shear wave activity) the relevant parameters are λ and μ (the Lamé parameters), ρ (density), and absorption. The inversion for these elastic parameters (i.e., the first and second Lamé parameters λ and μ) follows virtually the same prescription as was outlined above for the acoustic scalar case. In a manner similar to the electromagnetic inversion problem discussed above, it is possible to break up the arbitrary 3-dimensional vector u(x, y, z), representing the displacement, into components that propagate independently. The exact description of this procedure is given in [Wiskin, 1991] and [Muller, 1985]. This example will not deal with this decomposition, since the idea behind it is more easily seen by looking at the electromagnetic example given earlier.

    [0560] The idea here is to incorporate the above solution to the layered inversion problem directly into the Green's operator. As before, in the electromagnetic and the acoustic case, this Green's function is used in the integral equation formulation of the inversion problem of imaging an inhomogeneous body extending over an arbitrary number of layers.

    [0561] The following is the general elastic partial differential equation which governs the displacement vector field u in the general case where $\lambda$ and $\mu$ both depend upon $x \in R^3$. When $\lambda$ and $\mu$ are independent of x, we have the homogeneous elastic case.

    [00236] $-\omega^2 \rho(x)\, u_j(x) - \dfrac{\partial}{\partial x_j}\Big[\lambda(x)\, \dfrac{\partial u_k(x)}{\partial x_k}\Big] - \dfrac{\partial}{\partial x_i}\Big[\mu(x)\Big(\dfrac{\partial u_j(x)}{\partial x_i} + \dfrac{\partial u_i(x)}{\partial x_j}\Big)\Big] = f_j(x)$

    [0562] $f_j(x)$ represents the applied body force, and $\mu(x)$ and $\lambda(x)$ are the Lamé parameters; their dependence upon $x \in R^3$ is the result of both the inhomogeneity to be imaged and the ambient layered medium. In particular, $u_i(\vec{y})$, $\vec{y} \in R^3$, is the $i$-th component ($i = 1, 2, 3$) of the displacement field at the point $\vec{y}$.

    [0563] $u_i^0(\vec{y})$ is the $i$-th component of the incident field.

    [0564] $\rho_1(\vec{x}) + \rho_0(z) = \rho(\vec{x})$ is the total density variation; it consists of the 3-D variation in $\rho_1$ and the vertical 1-D variation in $\rho_0$.

    [0565] $\lambda_1(\vec{x}) + \lambda_0(z) = \lambda(\vec{x})$ is the total variation in $\lambda$, the first Lamé parameter; $\lambda_1$ has 3-D variation, and $\lambda_0$ has 1-D vertical variation.

    [0566] $\mu_1(\vec{x}) + \mu_0(z) = \mu(\vec{x})$ is the total variation in the second Lamé parameter $\mu$; $\mu_1$ has variation in 3-D, and $\mu_0$ has variation in 1-D (vertical).

    [0567] $\rho(\vec{x}) = \text{density} = \rho_1(\vec{x}) + \rho_0(z)$, where $\rho_0(z)$ represents the layered medium without the object, and $\rho_1(\vec{x})$ represents the object to be imaged.

    [0568] $\lambda_0(z)$ and $\mu_0(z)$ are the z-dependent Lamé parameters representing the layered medium.

    [0569] $\lambda_1(\vec{x})$ and $\mu_1(\vec{x})$ are the Lamé parameters representing the object to be imaged.

    [0570] The above differential equation is converted to the following elastic Generalized Lippmann-Schwinger equation

    [00237] $u_i(\vec{y}) = u_i^0(\vec{y}) - \displaystyle\int_{Vol} G_{im}^{(LAY)}(\vec{y}, \vec{x})\, u_m(\vec{x})\, d\vec{x}, \qquad i = 1, 2, 3$

    [0571] by means of the kernel

    [00238] $G_{im}^{(LAY)}(\vec{y}, \vec{x})$

    which is constructed below. The volume integral is performed over the finite volume that contains the image space (and the object being imaged, for example, an ore body, a submarine mine, or an oil reservoir). This is the basic integral equation which forms the theoretical foundation for the layered Green's operator approach to inverse scattering in a layered medium.

    [0572] This kernel is a 3 by 3 matrix of functions which is constructed by a series of steps:

    Step 1:

    [0573] Beginning with the acoustic (scalar or compressional) Green's function given by

    [00239] $G(k_0 |r - r'|) = \dfrac{e^{-i k_0 |r - r'|}}{4 \pi |r - r'|}$

    Step 2:

    [0574] The free space elastic Green's matrix is a 3 by 3 matrix of functions, built up from the free space Green's function. Its component functions are given as

    [00240] $G_{im}(k_T; k_L; |r - r'|) = G(k_T |r - r'|)\, \delta_{im} + \dfrac{1}{k_T^2}\, \dfrac{\partial^2}{\partial x_i \partial x_m} \big[ G(k_T |r - r'|) - G(k_L |r - r'|) \big]$

    Step 3:

    [0575] Next the layered Green's function, $G_{im}^{L}(\vec{y}, \vec{x})$, for a layered elastic medium is defined. It is defined in terms of $G_{im}$, the components $i, m = 1, \ldots, 3$, of the elastic Green's function in homogeneous elastic media given above. The components of the layered Green's matrix are integrals over Bessel functions and reflection coefficients, in essentially the same manner as the acoustic layered Green's function consisted of integrals, over wavenumber, of the acoustic reflection coefficients. This dyadic is patterned after [Muller, 1985], in the manner discussed in [Wiskin, 1992].

    Step 4:

    [0576] Finally the layered Green's kernel

    [00241] $G_{im}^{(LAY)}(\vec{y}, \vec{x})$

    is constructed for $\vec{x} \in R^3$:

    [00242] $G_{im}^{(LAY)}(\vec{y}, \vec{x}) = -\omega^2\, G_{im}^{L}(\vec{y}, \vec{x})\, \rho_1(\vec{x}) - \dfrac{\partial}{\partial x_m} \Big\{ \Big( \dfrac{\partial}{\partial x_j} G_{ij}^{L}(\vec{y}, \vec{x}) \Big)\, \lambda_1(\vec{x}) \Big\} + \dfrac{\partial}{\partial x_j} \Big\{ \Big( \dfrac{\partial}{\partial x_m} G_{ij}^{L}(\vec{y}, \vec{x}) + \dfrac{\partial}{\partial x_j} G_{im}^{L}(\vec{y}, \vec{x}) \Big)\, \mu_1(\vec{x}) \Big\}$

    [0577] is the layered Green's function for the elastic layered medium. The progressive constructions can be represented in the following way, beginning with the acoustic free space Green's function:

    (acoustic free space Green's function) → (elastic free space Green's function) → (elastic layered Green's function) → (kernel for the elastic layered Lippmann-Schwinger equation)

    [0578] Alternatively, in words:

    TABLE-US-00004 Acoustic free space Green's function → elastic free space Green's function → elastic layered Green's function → kernel for elastic layered Lippmann-Schwinger equation

    [0579] Using the last kernel in this series in the generalized elastic layered Lippmann-Schwinger Equation gives the vector displacement.

    [00243] $u_i(\vec{y}) = u_i^0(\vec{y}) - \displaystyle\int_{Vol} G_{im}^{(LAY)}(\vec{y}, \vec{x})\, u_m(\vec{x})\, d\vec{x}$

    [0580] which is then discretized using the sinc basis in exactly the same way as for the previous examples; the FFT, biconjugate gradient, and conjugate gradient algorithms are then applied to this vector equation, in the same way as done above. Thus the elastic modality (including shear waves) is accounted for.
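    The discretized scattering operators here are non-Hermitian (in these formulations they are typically complex symmetric, which is an assumption of this sketch), which is why the biconjugate gradient method rather than plain conjugate gradients is invoked. A small self-contained sketch of BiCG specialized to a complex symmetric system (random toy matrix, not the actual scattering operator):

```python
import numpy as np

def bicg_complex_symmetric(A, b, tol=1e-10, max_iter=200):
    """Biconjugate gradients for A x = b. For complex symmetric A (A == A.T),
    choosing the shadow residual equal to the residual collapses BiCG to a
    single recurrence (sometimes called COCG)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(max_iter):
        rr = r @ r                      # bilinear (unconjugated) inner product
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        beta = (r @ r) / rr
        p = r + beta * p
    return x

rng = np.random.default_rng(1)
n = 40
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = np.eye(n) + 0.02 * (C + C.T)        # well-conditioned, complex symmetric toy operator
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x = bicg_complex_symmetric(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)   # True
```

In the actual algorithm the matrix-vector product `A @ p` would be replaced by the FFT-based application of the discretized Lippmann-Schwinger operator, so the matrix is never formed.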

    [0581] The construction of the layered Green's function in the full elastic case (with shear wave energy included) is slightly more complicated than the purely scalar case. For this reason, we look at the construction in more detail; see also [Wiskin, 1993].

    [0582] For discretization of the resulting equations see the electromagnetic case discussed in example 3.

    [0583] The Construction of the Continuous Form of the Layered Green's Operator $G^L$ in 3-D, with Full Elastic Mode Conversion

    [0584] Now we proceed with the construction of the continuous variable form of the Layered Green's operator. The closed form expression for the discretized form of the bandlimited approximation to this continuous variable Green's operator is constructed in a manner exactly analogous to the vector electromagnetic case discussed in example 3.

    [0585] By analogy with the acoustic (scalar) situation, the construction of the layered Green's operator can be viewed as a series of steps beginning with the 3D free space Green's operator.

    [0586] Step a) Decompose a unit strength point source into a plane wave representation (Weyl-Sommerfeld Integral). For the elastodynamic (vector) case we must consider the three perpendicular directions of particle motion, which we take to be the horizontal direction, the direction of propagation, and the direction perpendicular to the previous two. Muller has given the representation for a point source in elastic media in [Muller, 1985].

    [0587] Step b) Propagate and reflect the plane waves through the upper and lower layers to determine the total field at the response position. The total field will consist of two parts: $u^{up}(r)$, the upward propagating particle velocity, and $u^{d}(r)$, the downward propagating particle velocity at position r. This propagation and reflection is carried out analytically by means of the reflection coefficients, which in the elastic case are the matrices $R^-$ and $R^+$ (both 2×2 matrices) and the scalars $r^-$ and $r^+$. The matrices correspond to the P-SV waves (compressional and vertically polarized shear) for the case of horizontal stratification (x is the horizontal and z the vertical coordinate; z is positive downward). The scalar coefficients correspond to the SH (horizontally polarized) shear waves, which propagate without mode conversion to the other types for the case of horizontally layered media.

    [0588] $R^-$ and $r^-$ represent the wave field reflected from the layers below the source position, and $R^+$ and $r^+$ represent the cumulative reflection coefficient from the layers above the source position, for the matrix and scalar cases respectively.

    [0589] These total reflection coefficients are formed recursively from the standard reflection coefficients found in the literature (e.g., Muller [1985], and Aki and Richards, Quantitative Seismology, Freeman and Co., 1980, herein included as a reference).
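    The recursive pattern can be sketched as follows. This toy version handles normal incidence on an acoustic (impedance-contrast) stack with a phase delay per layer; it illustrates the bottom-up recursion only, and is not the full P-SV matrix reflectivity of Muller (all numerical values are made up):

```python
import numpy as np

def interface_r(z1, z2):
    """Normal-incidence reflection coefficient between impedances z1 (above) and z2 (below)."""
    return (z2 - z1) / (z2 + z1)

def cumulative_reflectivity(impedances, thicknesses, velocities, omega):
    """Total reflection response of a lossless layer stack, built recursively
    from the deepest interface upward (textbook two-interface recursion)."""
    n_iface = len(impedances) - 1
    R = interface_r(impedances[-2], impedances[-1])   # deepest interface
    for k in range(n_iface - 2, -1, -1):
        r = interface_r(impedances[k], impedances[k + 1])
        # two-way phase accumulated crossing the layer below interface k
        phase = np.exp(2j * omega * thicknesses[k + 1] / velocities[k + 1])
        R = (r + R * phase) / (1 + r * R * phase)
    return R

imp = [1.5, 2.0, 3.2, 2.7, 4.0]     # acoustic impedances, top to bottom (made-up)
thk = [0.0, 10.0, 5.0, 8.0, 0.0]    # layer thicknesses (the two ends are half spaces)
vel = [1.5, 1.8, 2.5, 2.2, 3.0]     # layer velocities (made-up)
R = cumulative_reflectivity(imp, thk, vel, omega=2.0)
print(abs(R) <= 1.0)   # a lossless stack cannot reflect more energy than it receives
```

Each step composes the local interface coefficient with everything already accumulated below, which is exactly the "formed recursively" structure referred to above.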

    [0590] FIG. 14 is a schematic showing the reflection coefficients relevant to the layered medium Green's function approach. With reference to FIG. 14, as in the scalar case, in order to have a single reference position, $R^+$ and $R^-$ are both given with respect to the top of the layer that contains the scattering point, which is denoted by sc. That is, $R^+$ represents the reflection coefficient for an upgoing wave at the interface between layer sc and layer sc−1.

    [0591] In our case the expressions $S_u$ and $S_d$ (and their matrix counterparts) are those derived from the Sommerfeld integral representation of a point source of unit strength, given in Muller [see also Aki and Richards, 1980]. For example:

    [00244] $S_d = e^{i b_{sc} (z' - z_{sc})} = e^{i b_{sc} \tilde{z}'}, \qquad S_u = e^{-i b_{sc} (z' - z_{sc})} = e^{-i b_{sc} \tilde{z}'}$

    [0592] for the scalar case.

    [0593] As shown in detail by Muller, the total contribution of the upward travelling wave is given by

    [00245] $S_u + r^- r^+ S_u + (r^- r^+)^2 S_u + \cdots = \big[ 1 + r^- r^+ + (r^- r^+)^2 + \cdots \big] S_u = \big[ 1 - r^- r^+ \big]^{-1} S_u$

    [0594] for the transverse shear wave (SH or horizontal shear wave), and

    [00246] $S_u + R^- R^+ S_u + (R^- R^+)^2 S_u + \cdots = \big[ I + R^- R^+ + (R^- R^+)^2 + \cdots \big] S_u = \big[ I - R^- R^+ \big]^{-1} S_u$

    [0595] for the P-SV matrix case. Note that some care must be exercised regarding the convergence of the matrix series; however, for practical situations we can omit this detail.
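    The identity behind these sums is the Neumann (geometric) series $[I - R^- R^+]^{-1} = I + R^- R^+ + (R^- R^+)^2 + \cdots$, which converges whenever the spectral radius of the round-trip product is below one; physically, each bounce between the upper and lower stacks loses energy. A quick numerical check with made-up 2×2 "reflection matrices":

```python
import numpy as np

rng = np.random.default_rng(2)
# Made-up 2x2 reflection matrices; the round-trip product is rescaled so that
# its spectral norm is 0.5, guaranteeing the series converges.
R_minus = rng.standard_normal((2, 2))
R_plus  = rng.standard_normal((2, 2))
M_raw = R_minus @ R_plus
M = 0.5 * M_raw / np.linalg.norm(M_raw, 2)
assert max(abs(np.linalg.eigvals(M))) < 1   # spectral radius < 1

# Partial sums of I + M + M^2 + ... versus the direct inverse
S = np.zeros((2, 2))
term = np.eye(2)
for _ in range(200):
    S = S + term
    term = term @ M
print(np.allclose(S, np.linalg.inv(np.eye(2) - M)))   # True
```

With spectral norm 0.5, two hundred terms put the truncation error far below floating-point precision, which is why the "care" about convergence can indeed be omitted in practical, lossy-media situations.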

    [0596] Step c) The process of forming the total field at the response point must be broken up into two cases: [0597] Case 1: the response point is above the scatter point, that is, $z - z' < 0$ (z is the distance from the interface above the scattering layer to the response point; the z axis is positive downward), and [0598] Case 2: the response point is below the scattering point, that is, $z - z' > 0$.

    [0599] Furthermore, each case consists of an upward travelling and a downward travelling wave:

    [0600] Upward travelling wavefield:

    [0601] First, in case 1:

    [0602] $[I - R^- R^+]^{-1} S_u$ and $[1 - r^- r^+]^{-1} S_u$ represent the contributions to $u^{up}$ from the upward travelling part of the source, for the matrix and scalar cases respectively. A similar expression can be formed for the contribution from the downward travelling part of the source; it is

    [00247] $[I - R^- R^+]^{-1} R^- S_d, \quad \text{and} \quad [1 - r^- r^+]^{-1} r^- S_d$

    [0603] Thus, the total upward component of the wave field at the response point is formed from:

    [00248] $u^{up} = \begin{pmatrix} (I - R^- R^+)^{-1} (S_u + R^- S_d) \\ (1 - r^- r^+)^{-1} (S_u + r^- S_d) \end{pmatrix}$

    [0604] Case 2: the response point is below the scatter point, that is, $z - z' > 0$.

    [0605] The result here is similar, the change occurring in the coefficients of $S_u$ and $S_d$:

    [00249] $u^{up} = \begin{pmatrix} (I - R^- R^+)^{-1} (R^- R^+ S_u + R^- S_d) \\ (1 - r^- r^+)^{-1} (r^- r^+ S_u + r^- S_d) \end{pmatrix}$

    Downward Component of Wavefield

    [0606] Case 1: A similar expression gives the downward component of the total wave field at the response point r, for case 1:

    [00250] $u^{down} \equiv u^{d} = \begin{pmatrix} (I - R^+ R^-)^{-1} R^+ R^- S_d \\ (1 - r^+ r^-)^{-1} r^+ r^- S_d \end{pmatrix} + \begin{pmatrix} (I - R^+ R^-)^{-1} R^+ S_u \\ (1 - r^+ r^-)^{-1} r^+ S_u \end{pmatrix} \quad \text{or} \quad u^{down} \equiv u^{d} = \begin{pmatrix} (I - R^+ R^-)^{-1} (R^+ R^- S_d + R^+ S_u) \\ (1 - r^+ r^-)^{-1} (r^+ r^- S_d + r^+ S_u) \end{pmatrix}$

    [0607] Case 2: For case 2 the result is similar; here the response point resides below the scatter point, that is, $z - z' > 0$:

    [00251] $u^{down} \equiv u^{d} = \begin{pmatrix} (I - R^+ R^-)^{-1} (S_d + R^+ S_u) \\ (1 - r^+ r^-)^{-1} (S_d + r^+ S_u) \end{pmatrix}$

    [0608] Step d) The final step in the process is the recombination of the plane waves to obtain the total wavefield (upgoing and downgoing waves) at the response position:

    [0609] For the scalar case 1, $z - z' < 0$, and:

    [00252] $G^L(x - x', y - y', z \mid z') = u^{up} + u^{d},$

    [0610] with

    [00253] $u^{up} = \displaystyle\int J_0(\omega u |r - r'|)\, [1 - R^- R^+]^{-1} [S_u + R^- S_d]\, e^{i b_{sc} (z - z_{sc})}\, du$
    $u^{d} = \displaystyle\int J_0(\omega u |r - r'|)\, [1 - R^- R^+]^{-1} [R^+ S_u + R^+ R^- S_d]\, e^{-i b_{sc} (z - z_{sc})}\, du$

    [0611] so that

    [00254] $G^L(x - x', y - y', z \mid z') = \displaystyle\int J_0(\omega u |r - r'|)\, [1 - R^- R^+]^{-1} [S_u + R^- S_d] \big[ e^{i b_{sc} (z - z_{sc})} + R^+ e^{-i b_{sc} (z - z_{sc})} \big]\, du,$

    [0612] while for case 2, $z - z' > 0$, similar algebra gives:

    [00255] $G^L(x - x', y - y', z \mid z') = \displaystyle\int J_0(\omega u |r - r'|)\, [1 - R^- R^+]^{-1} [R^+ S_u + S_d] \big[ e^{-i b_{sc} (z - z_{sc})} + R^- e^{i b_{sc} (z - z_{sc})} \big]\, du.$

    [0613] For case 1 this can be rewritten as:

    [00256] $G^L(r - r', z \mid z') = A \displaystyle\int_{u=0}^{\infty} C(u; r - r') \big\{ S_u e^{i b_{sc} \tilde{z}} + R^- S_d e^{i b_{sc} \tilde{z}} + R^+ S_u e^{-i b_{sc} \tilde{z}} + R^+ R^- S_d e^{-i b_{sc} \tilde{z}} \big\}\, du$

    [0614] where the coefficient function $C(u; r - r')$ is given by:

    [00257] $C(u; r - r') = J_0(\omega u |r - r'|)\, (1 - R^- R^+)^{-1}\, \dfrac{i u}{b_{sc}}$

    [0615] For case 2, G.sup.L can be written as:

    [00258] $G^L(r - r', z \mid z') = A \displaystyle\int_{u=0}^{\infty} C(u; r - r') \big\{ R^+ R^- S_u e^{i b_{sc} \tilde{z}} + R^- S_d e^{i b_{sc} \tilde{z}} + R^+ S_u e^{-i b_{sc} \tilde{z}} + S_d e^{-i b_{sc} \tilde{z}} \big\}\, du$

    [0616] Finally, using the definitions for $S_u$ and $S_d$ and recognizing that products such as

    [00259] $S_d\, e^{i b_{sc} \tilde{z}}$

    [0617] can be rewritten as

    [00260] $S_d\, e^{i b_{sc} \tilde{z}} = e^{i b_{sc} \tilde{z}'} e^{i b_{sc} \tilde{z}} = e^{i b_{sc} (\tilde{z} + \tilde{z}')}$

    [0618] the operator $G^L$ turns out (after rearrangement of the above equations) to be the sum of a convolution and a correlation kernel:

    [00261] $G^L(r - r', z \mid z') = G_R(r - r', \tilde{z} + \tilde{z}') + G_V(r - r', \tilde{z} - \tilde{z}')$

    [0619] Case I has $(\tilde{z} - \tilde{z}') < 0$, where $\tilde{z} = z - z_{sc}$ and $\tilde{z}' = z' - z_{sc}$, with $z_{sc}$ being the z co-ordinate of the top of the layer that contains the scatterer. In fact $G_R$ and $G_V$ turn out to be given by the following formulae (they appear to differ from the integrals above because of the rearrangement that leads to the decomposition into convolutional and correlational parts):

    [00262] $G_R(r - r', \tilde{z} + \tilde{z}') = A \displaystyle\int_{u=0}^{\infty} J_0(\omega u |r - r'|)\, C(u) \big\{ R^+ e^{-i b_{sc} (\tilde{z} + \tilde{z}')} + R^- e^{i b_{sc} (\tilde{z} + \tilde{z}')} \big\}\, du$
    $G_V(r - r', \tilde{z} - \tilde{z}') = A \displaystyle\int_{u=0}^{\infty} J_0(\omega u |r - r'|)\, C(u) \big\{ R^+ R^- e^{-i b_{sc} (\tilde{z} - \tilde{z}')} + e^{i b_{sc} (\tilde{z} - \tilde{z}')} \big\}\, du,$

    [0620] where, now:

    [00263] $C(u) = (1 - R^+ R^-)^{-1}\, \dfrac{i u}{b_{sc}},$

    [0621] $R^-$ and $R^+$ are the recursively defined reflectivity coefficients described in Muller's paper,

    [0622] u is the horizontal slowness,

    [00264] $u = \dfrac{\sin \theta}{c}$

    [0623] Recall that Snell's law guarantees that u will remain constant as a given incident plane wave passes through several layers.

    [0624] $\omega$ is the frequency.

    [0625] $b_{sc}$ is the vertical slowness for the particular layer hosting the scattering potential.
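    The Snell's-law invariance of the horizontal slowness across a stack is easy to verify numerically (made-up layer speeds):

```python
import math

c = [1500.0, 1800.0, 2400.0, 2100.0]   # layer speeds (made-up), m/s
theta1 = math.radians(20.0)            # incidence angle in the first layer
u = math.sin(theta1) / c[0]            # horizontal slowness, s/m

# Refract through each layer: theta_k satisfies sin(theta_k) / c_k = u
angles = [math.asin(u * ck) for ck in c]
slownesses = [math.sin(t) / ck for t, ck in zip(angles, c)]
print(all(abs(s - u) < 1e-15 for s in slownesses))   # True
```

If u * c_k were to exceed 1 for some layer, the wave would be evanescent there (total internal reflection); the made-up speeds above keep every angle real.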

    [0626] Case II consists of the case where $(\tilde{z} - \tilde{z}') > 0$. The $G_R$ and $G_V$ are now given by the following:

    [00265] $G_V(r - r', \tilde{z} - \tilde{z}') = A \displaystyle\int_{u=0}^{\infty} J_0(\omega u |r - r'|)\, C(u) \big\{ R^+ R^- e^{i b_{sc} (\tilde{z} - \tilde{z}')} + e^{-i b_{sc} (\tilde{z} - \tilde{z}')} \big\}\, du$ and
    $G_R(r - r', \tilde{z} + \tilde{z}') = A \displaystyle\int_{u=0}^{\infty} J_0(\omega u |r - r'|)\, C(u) \big\{ R^- e^{i b_{sc} (\tilde{z} + \tilde{z}')} + R^+ e^{-i b_{sc} (\tilde{z} + \tilde{z}')} \big\}\, du,$

    [0627] These expressions can be combined into one equation by the judicious use of absolute values. The correlational part of the Green's operator can be transformed into a convolution by a suitable change of variables. The resulting Green's function for imaging inhomogeneities residing within ambient layered media must be discretized by convolving with the sinc (really jinc) basis functions described below. This is done analytically and the result is given below.

    [0628] The same process is carried out as detailed above, in order to determine the P-SV total wavefield (the 2 by 2 matrix case), and is not repeated.

    [0629] The matrix case is handled in exactly the same manner, with allowance for the differing algebra associated with matrices; the convolutional structure is preserved for the same reason as shown above in the scalar situation.

    [0630] On avoiding convergence problems due to the presence of derivatives in the elastic Green's function (i.e. with shear wave motion)

    [0631] One difficulty with this formulation is the presence of four derivatives acting upon the acoustic free space Green's function in the construction of the dyadic $G^{(LAY)}$. We have overcome this problem (see related code in Appendix E) by the following method:

    [0632] Given an inhomogeneous distribution of isotropic density $\rho$ and Lamé parameters $\lambda$ and $\mu$, imbedded in a homogeneous medium with density $\rho_0$ and Lamé parameters $\lambda_0$ and $\mu_0$, the total infinitesimal displacement vector u satisfies the partial differential equation

    [00266] $\rho \omega^2 u_i + (\lambda u_{j,j})_{,i} + [\mu (u_{i,j} + u_{j,i})]_{,j} = 0 \qquad (1)$

    [0633] while the incident displacement field u.sup.i satisfies

    [00267] $\rho_0 \omega^2 u_i^i + (\lambda_0 u_{j,j}^i)_{,i} + [\mu_0 (u_{i,j}^i + u_{j,i}^i)]_{,j} = 0 \qquad (2)$

    [0634] where $\rho_0$, $\lambda_0$, and $\mu_0$ are the homogeneous parameters of the imbedding medium. Subtracting (2) from (1) and rearranging results in

    [00268] $u_i^s + \Big( \dfrac{1}{k_p^2} - \dfrac{2}{k_s^2} \Big) (u_{j,j}^s)_{,i} + \dfrac{1}{k_s^2} \big[ (u_{i,j}^s + u_{j,i}^s) \big]_{,j} = -f_i \qquad (3)$

    [0635] for the scattered displacement field $u^s = u - u^i$. The inhomogeneous term f is given by

    [00269] $f_i = \Big( \dfrac{\rho}{\rho_0} - 1 \Big) u_i + \Big( \dfrac{1}{k_p^2} - \dfrac{2}{k_s^2} \Big) \Big\{ \Big( \dfrac{\lambda}{\lambda_0} - 1 \Big) u_{j,j} \Big\}_{,i} + \dfrac{1}{k_s^2} \Big\{ \Big( \dfrac{\mu}{\mu_0} - 1 \Big) (u_{i,j} + u_{j,i}) \Big\}_{,j} \qquad (4)$

    [0636] where the respective shear-wave and compression-wave velocities c.sub.s and c.sub.p and corresponding wave numbers ks and kp are given by

    [00270] $c_s^2 = \dfrac{\mu_0}{\rho_0}; \quad k_s^2 = \dfrac{\omega^2}{c_s^2}; \quad c_p^2 = \dfrac{\lambda_0 + 2 \mu_0}{\rho_0}; \quad k_p^2 = \dfrac{\omega^2}{c_p^2} \qquad (5)$

    [0637] for the imbedding medium. Introducing the scattering potentials

    [00271] $\gamma_\rho = \dfrac{\rho}{\rho_0} - 1; \quad \gamma_\lambda = \Big( \dfrac{1}{k_p^2} - \dfrac{2}{k_s^2} \Big) \Big( \dfrac{\lambda}{\lambda_0} - 1 \Big); \quad \gamma_\mu = \dfrac{1}{k_s^2} \Big( \dfrac{\mu}{\mu_0} - 1 \Big) \qquad (6)$

    [0638] gives

    [00272] $f_i = \gamma_\rho u_i + \{ \gamma_\lambda u_{j,j} \}_{,i} + \{ \gamma_\mu (u_{i,j} + u_{j,i}) \}_{,j} \qquad (7)$
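    The background speeds, wavenumbers, and scattering potentials of Eqs. (5)-(6) can be evaluated directly; a small numeric sketch with made-up background and object parameters (and assuming the parameter/background ratio form of the potentials as written above):

```python
import math

omega = 2 * math.pi * 50.0               # angular frequency (made-up)
rho0, lam0, mu0 = 2000.0, 2.0e9, 1.0e9   # background density and Lame parameters (made-up)
rho,  lam,  mu  = 2200.0, 2.6e9, 1.3e9   # object parameters (made-up)

# Eq. (5): background wave speeds and wavenumbers
cs2 = mu0 / rho0
cp2 = (lam0 + 2 * mu0) / rho0
ks2 = omega**2 / cs2
kp2 = omega**2 / cp2

# Eq. (6): scattering potentials (each vanishes when object == background)
g_rho = rho / rho0 - 1
g_lam = (1 / kp2 - 2 / ks2) * (lam / lam0 - 1)
g_mu  = (1 / ks2) * (mu / mu0 - 1)

print(cp2 > cs2)                                   # compressional waves are always faster; True
print(math.isclose(math.sqrt(cp2 / cs2), 2.0))     # here (lam0 + 2*mu0)/mu0 = 4, so cp/cs = 2; True
```

Setting the object parameters equal to the background values drives all three potentials to zero, which is the sanity check that the integral equation then reduces to u = u^i.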

    [0639] Solution of (3) by the introduction of the free-space Green's function results in

    [00273] $u_i^s = G_{ij} * f_j = \displaystyle\int_{v} f_j(r')\, G_{ij}(r - r')\, dv' \qquad (8)$

    [0640] where * denotes 3-D convolution, v is the support of the inhomogeneity, and the Green's function is given by [Aki and Richards, Quantitative Seismology, Freeman and Co., 1980, herein included as a reference]:

    [00274] $G_{ij}(r - r') = k_s^2\, g(k_s |r - r'|)\, \delta_{ij} + \dfrac{\partial^2}{\partial x_i \partial x_j} \big\{ g(k_s |r - r'|) - g(k_p |r - r'|) \big\} \qquad (9)$

    [0641] where g(kR) is the scalar Helmholtz Green's function

    [00275] $g(kR) = \dfrac{e^{i k R}}{4 \pi R}. \qquad (10)$
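    That g(kR) solves the homogeneous Helmholtz equation away from the source can be checked numerically: for a spherically symmetric function, the Laplacian is (1/R) d^2(Rg)/dR^2, so R*g(kR) must satisfy (d^2/dR^2 + k^2)(Rg) = 0 for R > 0. A finite-difference check:

```python
import cmath

def g(k, R):
    """Scalar Helmholtz Green's function e^{ikR} / (4 pi R), Eq. (10)."""
    return cmath.exp(1j * k * R) / (4 * cmath.pi * R)

k, R, h = 2.0, 1.3, 1e-4
w = lambda r: r * g(k, r)                       # radial auxiliary function R*g
# Central-difference second derivative of R*g(kR)
d2 = (w(R + h) - 2 * w(R) + w(R - h)) / h**2
residual = d2 + k**2 * w(R)                     # should vanish away from R = 0
print(abs(residual) < 1e-6)                     # True
```

At R = 0 the same computation would instead pick up the delta-function source term of Eq. (13) below, which is why the check is done away from the origin.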

    [0642] where $e^{-i \omega t}$ time dependence has been assumed. Inserting (7) into (8) and integrating by parts yields the following integral wave equation:

    [00276] $u_i^s = G_{ij} * \{ \gamma_\rho u_j \} + G_{ij,j} * \{ \gamma_\lambda u_{k,k} \} + G_{ij,k} * \{ \gamma_\mu (u_{j,k} + u_{k,j}) \}. \qquad (11)$

    [0643] For now, consider the case where $\gamma_\mu = 0$ and note that:

    [00277] $G_{ij,j} = k_s^2\, \dfrac{\partial}{\partial x_i} g(k_s R) + \dfrac{\partial}{\partial x_i} \big( \nabla^2 g(k_s R) - \nabla^2 g(k_p R) \big). \qquad (12)$

    [0644] using

    [00278] $(\nabla^2 + k^2)\, g(kR) = -\delta(|r - r'|) \qquad (13)$

    [0645] reduces (12) to:

    [00279] $G_{ij,j} = k_p^2\, \dfrac{\partial}{\partial x_i} g(k_p R), \qquad (14)$

    [0646] which will henceforth be denoted $C_i$. We have now arrived at the integral equation:

    [00280] $\begin{bmatrix} u_x^i \\ u_y^i \\ u_z^i \end{bmatrix} = \begin{bmatrix} u_x \\ u_y \\ u_z \end{bmatrix} - \begin{bmatrix} G_{xx} & G_{xy} & G_{xz} & C_x \\ G_{yx} & G_{yy} & G_{yz} & C_y \\ G_{zx} & G_{zy} & G_{zz} & C_z \end{bmatrix} * \begin{bmatrix} \gamma_\rho & 0 & 0 & 0 \\ 0 & \gamma_\rho & 0 & 0 \\ 0 & 0 & \gamma_\rho & 0 \\ 0 & 0 & 0 & \gamma_\lambda \end{bmatrix} \begin{bmatrix} u_x \\ u_y \\ u_z \\ \nabla \cdot u \end{bmatrix} \qquad (15)$

    [0647] since $u_{k,k} = \nabla \cdot u$

    [0648] The integral equation (15) can be solved by application of the 3-D FFT to compute the indicated convolutions, coupled with the biconjugate gradient iteration, or a similar conjugate gradient method [Jacobs, 1981]. One problem with (15), however, is the need to compute $\nabla \cdot u$ at each iteration. Various options for this include taking finite differences or the differentiation of the basis functions (sinc functions) by FFT. Our experience with the acoustic integral equation in the presence of density inhomogeneity indicates that it is best to avoid numerical differentiation. Instead, the system is augmented as:

    [00281] $\begin{bmatrix} u_x^i \\ u_y^i \\ u_z^i \\ \nabla \cdot u^i \end{bmatrix} = \begin{bmatrix} u_x \\ u_y \\ u_z \\ \nabla \cdot u \end{bmatrix} - \begin{bmatrix} G_{xx} & G_{xy} & G_{xz} & C_x \\ G_{yx} & G_{yy} & G_{yz} & C_y \\ G_{zx} & G_{zy} & G_{zz} & C_z \\ C_x & C_y & C_z & D \end{bmatrix} * \begin{bmatrix} \gamma_\rho & 0 & 0 & 0 \\ 0 & \gamma_\rho & 0 & 0 \\ 0 & 0 & \gamma_\rho & 0 \\ 0 & 0 & 0 & \gamma_\lambda \end{bmatrix} \begin{bmatrix} u_x \\ u_y \\ u_z \\ \nabla \cdot u \end{bmatrix} \qquad (16)$

    where

    $D = C_{x,x} + C_{y,y} + C_{z,z} = k_p^2 \nabla^2 g(k_p R) = -k_p^2 \big\{ k_p^2\, g(k_p R) + \delta(R) \big\} \qquad (17)$

    [0649] Iterative solution of (16) for the four unknown fields $u_x$, $u_y$, $u_z$, and $\nabla \cdot u$ now involves no numerical differentiation of u, since all derivative operators have now been applied analytically to the Green's function. The incident component $\nabla \cdot u^i$ is assumed known. Equation (16) can be written symbolically as

    [00282] $U^i = \big( I - \mathcal{G}\, [\gamma]_{diag} \big)\, U \qquad (18)$

    [0650] where $U^i$ and $U$ are the augmented 4-vectors in Eq. (16), $[\gamma]_{diag}$ is the diagonal operator composed of $\gamma_\rho$ and $\gamma_\lambda$, and $\mathcal{G}$ is the 4×4 dyadic Green's operator in Eq. (16).
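    The principle motivating the augmentation is that derivative operators are applied analytically to the Green's function rather than numerically to the field. A one-dimensional illustration with a smooth toy kernel (a Gaussian standing in for g; not the actual Helmholtz kernel): since d/dx (g * q) = g' * q, convolving with the analytically differentiated kernel gives the derivative of the result without ever differentiating the field numerically.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

g       = np.exp(-x**2)                    # smooth toy kernel (stand-in for the Green's function)
g_prime = -2 * x * np.exp(-x**2)           # its analytic derivative
q       = np.exp(-((x - 1) ** 2) / 0.5)    # compactly supported "source" field

conv = np.convolve(g, q, mode="same") * dx              # (g * q)(x)
d_numeric  = np.gradient(conv, dx)                      # derivative taken numerically afterwards
d_analytic = np.convolve(g_prime, q, mode="same") * dx  # derivative moved onto the kernel

print(np.max(np.abs(d_numeric - d_analytic)) < 1e-3)    # True
```

On a smooth example the two agree to grid accuracy, but on a rough iterate of an iterative solver the numerical derivative amplifies noise, which is the instability the augmented system avoids.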

    [0651] Inclusion of $\gamma_\mu$ Scattering to Give the General Elastic-Wave Integral Equations.

    [0652] We now give a method for solving the integral equations in the general case of inhomogeneous material properties $\rho$, $\lambda$, and $\mu$.

    [0653] Because the inclusion of the more complicated term in $\gamma_\mu$ causes the matrix notation used above to require breaking the equations into parts on several lines, we elect, for efficiency of space, to use the more compact tensor notation. This should cause no difficulty, because the translation is clear on comparing the special case of $\gamma_\mu = 0$ in the general tensor equations which follow with the previous matrix notation for this same case.

    [0654] First we give again (see Eq. (11)) the integral equations (which in practice have been discretized by sinc basis functions) for inhomogeneous $\rho$, $\lambda$, and $\mu$. Here * means convolution (the integration operation):

    [00283] $u_i^s = G_{ij} * [\gamma_\rho u_j] + G_{ij,j} * [\gamma_\lambda u_{k,k}] + G_{ij,k} * [\gamma_\mu (u_{j,k} + u_{k,j})] \qquad (19)$

    [0655] We note that both the displacement field $u_j$ and its derivatives $u_{j,k}$ appear. We choose not to compute derivatives of the fields numerically, and so avoid the associated numerical instability. Instead, we augment the above equation with additional equations that compute the derivatives in a more stable way by solving for them directly. Solving for the symmetric pairs $(u_{i,l} + u_{l,i})$ directly is more efficient, replacing the computation of nine fields with only six. Forming the six unique symmetric-derivative pairs, we obtain an integral equation for these pairs.

    [00284] $u_{i,l}^s + u_{l,i}^s = (G_{ij,l} + G_{lj,i}) * [\gamma_\rho u_j] + (G_{ij,kl} + G_{lj,ki}) * [\gamma_\mu (u_{j,k} + u_{k,j})] + (G_{ij,jl} + G_{lj,ji}) * [\gamma_\lambda u_{k,k}] \qquad (20)$

    [0656] Note that

    [00285] $G_{ij,jl} = k_p^2\, \dfrac{\partial^2}{\partial x_i \partial x_l}\, g(k_p R) \qquad (21)$

    [0657] We now augment the system Eq. (19) for the three components of u.sub.i.sup.s with the system Eq. (20) which has six pairs of unique component derivatives.

    [0658] These nine equations can also be placed in a matrix form similar to that of the augmented system given in Eq. (16). The augmented system of nine components could be solved for $\gamma_\rho$, $\gamma_\lambda$, and $\gamma_\mu$, assuming knowledge of the fields $u^i$ and their derivatives

    [00286] $u_{j,k}^i.$

    Since the fields and derivatives are also unknown, we must solve for them simultaneously with $\gamma_\rho$, $\gamma_\lambda$, and $\gamma_\mu$. This is done by adding these constraint equations. These nine equations are composed of the three equations for $u_i$

    [00287] $u_i^i = u_i - G_{ij} * (\gamma_\rho u_j) - G_{ij,j} * [\gamma_\lambda u_{k,k}] - G_{ij,k} * [\gamma_\mu (u_{j,k} + u_{k,j})] \qquad (22)$

    [0659] and the six equations for $u_{j,k} + u_{k,j}$

    [00288] $(u_{i,l}^i + u_{l,i}^i) = (u_{i,l} + u_{l,i}) - (G_{ij,l} + G_{lj,i}) * [\gamma_\rho u_j] - (G_{ij,kl} + G_{lj,ki}) * [\gamma_\mu (u_{j,k} + u_{k,j})] - (G_{ij,jl} + G_{lj,ji}) * [\gamma_\lambda u_{k,k}] \qquad (23)$

    [0660] These 9 equations can also be placed in a matrix form similar to that of the augmented system given in Eq (16).

    [0661] Solving these twelve equations (Eqs. (19), (22), (23)) for ($\gamma_\rho$, $\gamma_\lambda$, $\gamma_\mu$), the fields, and their derivatives is accomplished using the Fréchet derivative methods described next. As in Eq. (16) for the $\gamma_\mu = 0$ case, Eq. (23) can be written symbolically as

    [00289] $U^i = \big( I - \mathcal{G}\, [\gamma]_{diag} \big)\, U \qquad (24)$

    [0662] where the operators $\mathcal{G}$ and $[\gamma]_{diag}$ are now 9×9, and the vectors $U$ and $U^i$ consist of the three components of u and its six symmetric derivative sum-pairs.

    Example 7

    Cylindrical Coordinate Methods for Numerical Solution of the SIE

    [0663] The above examples all use the FFT-BCG algorithm for imaging. This method is one of three methods discussed in this patent. The other two methods are [0664] (1) the cylindrical coordinate recursion method (Cylindrical Recursion) and [0665] (2) the rectangular recursion of the scattering matrix method (Rectangular Recursion for short reference).

    [0666] We now discuss the cylindrical recursion method:

    7.1 Cylindrical Coordinate SIE Formulation and Discretization

    [0667] We begin with the acoustic scattering integral equation (SIE) for the constant density case:

    [00290] $f^i(\vec{\rho}) = f(\vec{\rho}) - \dfrac{k_0^2}{4i} \displaystyle\int \gamma(\vec{\rho}\,')\, f(\vec{\rho}\,')\, H_0^{(2)}(k_0 |\vec{\rho} - \vec{\rho}\,'|)\, d^2 \rho' \qquad (1.1)$

    [0668] Let each field be expanded in a Fourier series in the angular variable:

    [00291] $f(\vec{\rho}) = \sum_n f_n(\rho)\, e^{i n \theta}.$

    [0669] Then using Graf's addition theorem:

    [00292] $H_0^{(2)}(k_0 |\vec{\rho} - \vec{\rho}\,'|) = \begin{cases} \sum_n H_n^{(2)}(k_0 \rho)\, J_n(k_0 \rho')\, e^{i n (\theta - \theta')}, & \rho > \rho' \\ \sum_n H_n^{(2)}(k_0 \rho')\, J_n(k_0 \rho)\, e^{i n (\theta - \theta')}, & \rho < \rho' \end{cases} \qquad (1.2)$

    [0670] results in the cylindrical coordinate form of (1.1):

    [00293] $f_n^i(\rho) = f_n(\rho) - \dfrac{\pi k_0^2}{2i} \displaystyle\int_0^a \Big\{ \sum_m \gamma_{n-m}(\rho')\, f_m(\rho') \Big\}\, B_n(k_0 \rho)\, C_n(k_0 \rho')\, \rho'\, d\rho' \qquad (1.3)$

    [0671] where the kernel is separably symmetric:

    [00294] $B_n(k_0 \rho)\, C_n(k_0 \rho') = \begin{cases} H_n^{(2)}(k_0 \rho)\, J_n(k_0 \rho'), & \rho > \rho' \\ H_n^{(2)}(k_0 \rho')\, J_n(k_0 \rho), & \rho < \rho' \end{cases} \qquad (1.4)$

    [0672] Henceforth assume that $k_0 = 1$. Discretizing the radial integral by the trapezoidal rule, using an increment of $\Delta$, results in:

    [00295] $f_{l,n}^i = f_{l,n} - \Big\{ h_{l,n} \sum_{l'=1}^{l} (\gamma f)_{l',n}\, j_{l',n} + j_{l,n} \sum_{l'=l+1}^{L} (\gamma f)_{l',n}\, h_{l',n} \Big\} \qquad (1.5)$

    where

    $f_{l,n} = f_n(\rho_l), \qquad h_{l,n} = \sqrt{\dfrac{\pi \Delta}{2i}}\, H_n^{(2)}(\rho_l), \qquad j_{l,n} = \sqrt{\dfrac{\pi \Delta}{2i}}\, J_n(\rho_l) \qquad (1.6)$

    $(\gamma f)_{l,n} = \rho_l \sum_m \gamma_{l,n-m}\, f_{l,m} \qquad (1.7)$

    [0673] Notice that the extra $\rho_l$ resulting from the $\rho'$ in the integral in (1.3) has been placed in the definition of $(\gamma f)_{l,n}$. Equation (1.5) is the final discrete linear system for the solution of the scattering integral equation in cylindrical coordinates. It can be rewritten as:

    [00296] $f_l^i = f_l - \Big\{ [h_l] \sum_{l'=1}^{l} [j_{l'}]\, [\gamma_{l'} *]\, f_{l'} + [j_l] \sum_{l'=l+1}^{L} [h_{l'}]\, [\gamma_{l'} *]\, f_{l'} \Big\} \qquad (1.8)$

    [0674] where the vector components are the Fourier coefficients:

    [00297] $f_l = [f_{l,-N}, \ldots, f_{l,N}]^T, \quad \text{or, equivalently,} \quad \{f_l\}_n = f_{l,n}, \quad n = -N, \ldots, N \qquad (1.9)$

    [0675] where N is the range of the truncated Fourier series (The vectors are length 2N+1). The notation [x] denotes a diagonal matrix formed from the vector elements:

    [00298] $[x] = \begin{bmatrix} x_{-N} & & 0 \\ & \ddots & \\ 0 & & x_N \end{bmatrix} \qquad (1.10)$

    [0676] and the notation $[\gamma_l *]$ denotes a convolution (including the $\rho_l$ factor):

    [00299] $\{ [\gamma_l *]\, f_l \}_n = \rho_l \sum_{m=-N}^{N} \gamma_{l,n-m}\, f_{l,m}, \qquad n = -N, \ldots, N. \qquad (1.11)$
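    This operation is an ordinary truncated linear convolution of Fourier coefficient vectors, so a standard convolution routine computes it; the only subtlety is the index offsets. A sketch with made-up coefficients (the bare ρ_l prefactor follows the definition above and is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
rho_l = 2.5                                  # radius of ring l (made-up value)
f     = rng.standard_normal(2 * N + 1)       # f_{l,m}, m = -N..N   (stored at index m + N)
gamma = rng.standard_normal(4 * N + 1)       # gamma_{l,p}, p = -2N..2N (stored at p + 2N)

# Direct double sum: {[gamma_l*] f}_n = rho_l * sum_m gamma_{l,n-m} f_{l,m}
direct = np.array([
    rho_l * sum(gamma[(n - m) + 2 * N] * f[m + N] for m in range(-N, N + 1))
    for n in range(-N, N + 1)
])

# Same operation via np.convolve, keeping only the central 2N+1 coefficients
full = np.convolve(gamma, f)            # index offsets add: result runs over -3N..3N
fast = rho_l * full[2 * N:4 * N + 1]    # slice out n = -N..N
print(np.allclose(direct, fast))        # True
```

For large N the convolution itself could in turn be done by FFT, exactly as in the rectangular-grid algorithm.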

    [0677] Writing (1.8) out in full gives the matrix equation:

    [00300] $\begin{bmatrix} f_1^i \\ \vdots \\ f_L^i \end{bmatrix} = A \begin{bmatrix} f_1 \\ \vdots \\ f_L \end{bmatrix} \qquad (1.12)$

    where

    $A = \begin{bmatrix} I & & 0 \\ & \ddots & \\ 0 & & I \end{bmatrix} - \begin{bmatrix} [j_1][h_1] & [j_1][h_2] & \cdots & [j_1][h_L] \\ [h_2][j_1] & & & \vdots \\ \vdots & & & \vdots \\ [h_L][j_1] & [h_L][j_2] & \cdots & [j_L][h_L] \end{bmatrix} \begin{bmatrix} [\gamma_1 *] & & 0 \\ & \ddots & \\ 0 & & [\gamma_L *] \end{bmatrix} \qquad (1.13)$

    [0678] Notice that the kernel is composed of an L×L block matrix with (2N+1)×(2N+1) diagonal matrix components, and that the L×L block matrix is symmetric-separable, i.e.:

    [00301] $L_{nm} = \begin{cases} [j_n][h_m], & n \le m \\ [h_n][j_m], & n > m \end{cases} \qquad (1.14)$

    [0679] where $L_{nm}$ is one of the blocks of the L×L block matrix.

    [0680] For reference in the next section, we make the definitions:

    [00302] $s_l = \sum_{l'=1}^{l} [j_{l'}]\, [\gamma_{l'} *]\, f_{l'} \qquad (1.15)$

    $p_l = \sum_{l'=l+1}^{L} [h_{l'}]\, [\gamma_{l'} *]\, f_{l'} \qquad (1.16)$
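    The payoff of the symmetric-separable structure is that row l of the kernel matvec needs only these two running partial sums. A scalar sketch (one Fourier coefficient per radius, made-up values; s_l and p_l are taken as the partial sums defined above, which are reconstructed here, so treat the exact limits as an assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
L = 12
h = rng.standard_normal(L)     # h_l samples
j = rng.standard_normal(L)     # j_l samples
g = rng.standard_normal(L)     # g_l = ([gamma_l*] f)_l, treated as given

# Symmetric-separable kernel: K[l, lp] = h_l j_lp for lp <= l, and j_l h_lp for lp > l
K = np.array([[h[l] * j[lp] if lp <= l else j[l] * h[lp] for lp in range(L)]
              for l in range(L)])

# Partial sums s_l = sum_{lp <= l} j_lp g_lp and p_l = sum_{lp > l} h_lp g_lp
s = np.cumsum(j * g)
p = np.cumsum((h * g)[::-1])[::-1] - h * g   # suffix sums over lp > l

print(np.allclose(K @ g, h * s + j * p))     # True
```

This is why the recursion in the next section can sweep once over l instead of forming the full L×L kernel.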

    7.2 Recursive Solution of the Cylindrical Coordinate SIE

    [0681] The recursion for the 2-D cylindrical equation is:

    Initialize:

    [00303] $p_L = 0, \qquad \{ s_L \}_n = f_{L,n}^s / h_{L,n} \qquad (2.1)$

    [0682] For $l = L, \ldots, 1$:

    [00304] $s_{l-1} = s_l - [j_l][\gamma_l *] \big\{ [h_l] s_l + [j_l] p_l + f_l^i \big\}$
    $p_{l-1} = p_l + [h_l][\gamma_l *] \big\{ [h_l] s_l + [j_l] p_l + f_l^i \big\}$

    [0683] Next $l$.

    [0684] Thus far, we have assumed that the number of angular coefficients is the same for each radius $\rho_l$. In fact, the number of angular coefficients should decrease as $l$ decreases. We find that for $\Delta = \lambda/4$, an accurate interpolation is achieved with $N_l = 2l + 1$, i.e., with $n = -N_l, \ldots, N_l$ Fourier coefficients. To introduce this modification, we must slightly redefine the operators as:

    [00305] $\{ [j_l]\, x \}_n = \begin{cases} j_{l,n}\, x_n, & |n| \le N_l \\ 0, & |n| > N_l \end{cases} \qquad (2.2)$

    $\{ [\gamma_l *]\, x \}_n = \begin{cases} \rho_l \sum_{m=-N_l}^{N_l} \gamma_{l,n-m}\, x_m, & |n| \le N_l \\ 0, & |n| > N_l \end{cases} \qquad (2.3)$

    [0685] where the vector operated on is always length 2N+1, where N is now the maximum number of Fourier coefficients, $N = N_L$. The total field at each radius $l$ is given by:

    [00306] $f_{l,n} = \begin{cases} h_{l,n}\, s_{l,n} + j_{l,n}\, p_{l,n} + f_{l,n}^i, & |n| \le N_l \\ 0, & |n| > N_l \end{cases} \qquad (2.4)$

    [0686] In order to eliminate the need to know the starting values $s_L$, we note that at each iteration $s_l$ and $p_l$ are linear functions of $s_L$:

    [00307] s_l = A_l s_L + b_l,  p_l = C_l s_L + d_l   (2.5)

    [0687] where the matrices A_l, C_l (dimension (2N+1)×(2N+1)) and the vectors b_l, d_l (dimension 2N+1) have the initial values:

    [00308] A_L = I,  C_L = 0,  b_L = 0,  d_L = 0   (2.6)

    [0688] Using (2.5) and (2.6) in (1.1) and equating common terms leads to a matrix and a vector recursion:

    [0689] Initialize:

    [00309] A_L = I,  C_L = 0,  b_L = 0,  d_L = 0   (2.7)

    For l=L, . . . , 1

    [00310] A_{l-1} = A_l − [j_l][γ_l*]{[h_l]A_l + [j_l]C_l}
    b_{l-1} = b_l − [j_l][γ_l*]{[h_l]b_l + [j_l]d_l + f_l^i}
    C_{l-1} = C_l + [h_l][γ_l*]{[h_l]A_l + [j_l]C_l}
    d_{l-1} = d_l + [h_l][γ_l*]{[h_l]b_l + [j_l]d_l + f_l^i}

    [0690] Then using the fact that s_0 = 0 leads to the solution:

    [00311] A_0 s_L + b_0 = 0,  s_L = −A_0^{-1} b_0,  f^s_{L,n} = h_{L,n} s_{L,n}   (2.8)

    [0691] for the scattered field at the outer boundary. Iteration (1.1) with (1.4) can then be used to evaluate the total field internal to the object.

    [0692] Notice that the LHS matrix recursion in (2.7) is independent of the incident field. Once it is used to compute A.sub.0, the RHS vector recursion can be done for any number of incident fields. Concurrent solution for any number of incident fields can be obtained by replacing the RHS vector recursion with the matrix recursion:

    [00312] B_L = 0,  D_L = 0   (2.9)

    B_{l-1} = B_l − [j_l][γ_l*]{[h_l]B_l + [j_l]D_l + F_l^i}
    D_{l-1} = D_l + [h_l][γ_l*]{[h_l]B_l + [j_l]D_l + F_l^i}

    [0693] where the matrices B_l, D_l are (2N+1)×N_v, where N_v is the number of views, and the matrix of incident fields F_l^i is given by:

    [00313] F_l^i = [ f^{i,1}_{l,−N} … f^{i,N_v}_{l,−N} ; ⋮ ; f^{i,1}_{l,N} … f^{i,N_v}_{l,N} ]   (2.10)

    [0694] where the superscript after i is the view number (note that although the incident matrix is written as (2N+1)×N_v, the entries are zero for row indices N_l < |n| ≤ N). A more compact recursion can be obtained by concatenating the two recursions to give:

    [00314] G_L = [A_L, B_L] = [I, 0],  H_L = [C_L, D_L] = [0, 0]   (2.11)

    [0695] For l=L, . . . ,1

    [00315] G_{l-1} = G_l − [j_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F_l^i]}
    H_{l-1} = H_l + [h_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F_l^i]}

    [0696] where the matrices G_l, H_l are (2N+1)×(2N+1+N_v), and in the notation [A, B] the first matrix is (2N+1)×(2N+1) and the second is (2N+1)×N_v. Then G_0 = [A_0, B_0] and the solution for all views is given by:

    [00316] F_L^s = [ f^{s,1}_{L,−N} … f^{s,N_v}_{L,−N} ; ⋮ ; f^{s,1}_{L,N} … f^{s,N_v}_{L,N} ] = [h_L] A_0^{-1} B_0   (2.12)
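    The two-stage structure above (an incident-field-independent matrix recursion plus a per-view right-hand-side recursion, or equivalently the concatenated G/H recursion) can be sketched as follows. All operator entries are random placeholders; the only point demonstrated is that the concatenated recursion (2.11) reproduces the separate A, B recursions columnwise.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, Nv = 3, 7, 2      # layers, coefficients, views (placeholder sizes)

h = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
jb = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
g = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
Fi = rng.standard_normal((L, N, Nv)) + 1j * rng.standard_normal((L, N, Nv))

def conv(k, X):
    # columnwise circular convolution standing in for [gamma_l*]
    return np.fft.ifft(np.fft.fft(k)[:, None] * np.fft.fft(X, axis=0), axis=0)

# separate recursions (2.7) and (2.9)
A, C = np.eye(N, dtype=complex), np.zeros((N, N), dtype=complex)
B, D = np.zeros((N, Nv), dtype=complex), np.zeros((N, Nv), dtype=complex)
for l in range(L - 1, -1, -1):
    tA = conv(g[l], h[l][:, None] * A + jb[l][:, None] * C)
    tB = conv(g[l], h[l][:, None] * B + jb[l][:, None] * D + Fi[l])
    A, C = A - jb[l][:, None] * tA, C + h[l][:, None] * tA
    B, D = B - jb[l][:, None] * tB, D + h[l][:, None] * tB

# concatenated recursion (2.11): G tracks [A, B], H tracks [C, D]
G = np.hstack([np.eye(N), np.zeros((N, Nv))]).astype(complex)
H = np.zeros((N, N + Nv), dtype=complex)
for l in range(L - 1, -1, -1):
    Z = np.hstack([np.zeros((N, N)), Fi[l]])    # the [0, F_l^i] block
    t = conv(g[l], h[l][:, None] * G + jb[l][:, None] * H + Z)
    G, H = G - jb[l][:, None] * t, H + h[l][:, None] * t

# all-views scattered-field coefficients, as in (2.12)
Fs = h[L - 1][:, None] * np.linalg.solve(A, B)
```

    Note that the A-side loop never touches Fi: once A_0 is in hand, the B/D recursion can be rerun for any number of incident-field sets, which is the computational advantage pointed out in the text.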

    [0697] A slightly modified recursion can be shown to yield the scattering matrix of the object. Recall that the total field is given by:

    [00317] f_l = [h_l]s_l + [j_l]p_l + f_l^i   (2.13)

    [0698] Any externally generated incident field can be expanded in a Bessel series, which implies that there exists a sequence g^i_{−N}, . . . , g^i_N such that:

    [00318] f^i(ρ, φ) = 2i Σ_n g^i_n J_n(ρ) e^{inφ}   (2.14)

    [0699] (recall that we are assuming that k.sub.0=1) which means that

    [00319] f_l^i = [j_l] g^i   (2.15)

    [0700] Redefining p_l → p_l + g^i then gives f_l = [h_l]s_l + [j_l]p_l and leads easily to the iteration:

    [00320] G_L = [I, 0],  H_L = [0, I]   (2.16)

    [0701] For l = L, . . . , 1

    [00321] G_{l-1} = G_l − [j_l][γ_l*]{[h_l]G_l + [j_l]H_l}
    H_{l-1} = H_l + [h_l][γ_l*]{[h_l]G_l + [j_l]H_l}

    [0702] where the matrices are now (2N+1)×2(2N+1). The last iterate G_0 = [A_0, B_0] yields the scattering matrix:

    [00322] S(γ) = A_0^{-1} B_0   (2.17)

    [0703] for the body γ, which relates the incident field coefficients to the scattering coefficients:

    [00323] g^s = S g^i, where f^s(ρ, φ) = 2i Σ_n g^s_n H_n^{(2)}(ρ) e^{inφ}   (2.18)

    [0704] for all incident field coefficient vectors g^i. Notice that in the previous notation:

    [00324] g^i_n = f^i_{L,n} / j_{L,n},  g^s_n = f^s_{L,n} / h_{L,n}   (2.19)

    7.3 Computational Formulas for the Jacobian and its Adjoint

    [0705] In order to apply the Gauss-Newton iteration to the solution of the imaging problem, we must first derive recursive formulas for the application of the Jacobian and its adjoint. The Jacobian of the scattering coefficient vector s_L, operating on a perturbation δγ in γ, is given by:

    [00325] Jδγ = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂s_L/∂γ_{l,n}   (3.1)

    [0706] From (2.8) we get:

    [00326] s_L′ = −A_0^{-1} (b_0′ + A_0′ s_L)   (3.2)

    [0707] where the prime denotes differentiation followed by summation over the elements of δγ:

    [00327] s_L′ = Jδγ = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂s_L/∂γ_{l,n}   (3.3a)

    b_0′ = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂b_0/∂γ_{l,n}   (3.3b)

    A_0′ = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂A_0/∂γ_{l,n}   (3.3c)

    [0708] Equation (3.2) provides the formula for computing Jδγ if recursive formulas for A_0′ and b_0′ can be found. Define the notation:

    [00329] A_l′ = Σ_{λ=l+1}^{L} Σ_{n=−N_λ}^{N_λ} δγ_{λ,n} ∂A_l/∂γ_{λ,n}   (3.4a)

    C_l′ = Σ_{λ=l+1}^{L} Σ_{n=−N_λ}^{N_λ} δγ_{λ,n} ∂C_l/∂γ_{λ,n}   (3.4b)

    [0709] (note the lower limit of the λ summation). Then using the LHS recursion in (2.7), it is simple to show that:

    [00330] A_{l-1}′ = A_l′ − [j_l]{[γ_l*]{[h_l]A_l′ + [j_l]C_l′} + [δγ_l*]{[h_l]A_l + [j_l]C_l}}   (3.5a)

    C_{l-1}′ = C_l′ + [h_l]{[γ_l*]{[h_l]A_l′ + [j_l]C_l′} + [δγ_l*]{[h_l]A_l + [j_l]C_l}}   (3.5b)

    [0710] A further reduction in computational requirements can be achieved by noting from (3.2) that we do not need A_0′ but rather A_0′ S_L, where S_L is the matrix whose columns are the vectors s_L for each view. The matrix A_0′ is (2N+1)×(2N+1) while the matrix A_0′ S_L is (2N+1)×N_v. Thus define:

    [00334] A_l^s = A_l S_L,  C_l^s = C_l S_L,  A_l^s′ = A_l′ S_L,  C_l^s′ = C_l′ S_L   (3.6)

    [0711] Then we get the recursion:

    [00335] A_L^s = A_L S_L = I S_L = S_L,  C_L^s = 0,  A_L^s′ = 0,  C_L^s′ = 0   (3.7a)

    [0712] For l = L, . . . , 1

    [00336] A_{l-1}^s′ = A_l^s′ − [j_l]{[γ_l*]{[h_l]A_l^s′ + [j_l]C_l^s′} + [δγ_l*]{[h_l]A_l^s + [j_l]C_l^s}}   (3.7b)

    C_{l-1}^s′ = C_l^s′ + [h_l]{[γ_l*]{[h_l]A_l^s′ + [j_l]C_l^s′} + [δγ_l*]{[h_l]A_l^s + [j_l]C_l^s}}   (3.7c)

    A_{l-1}^s = A_l^s − [j_l][γ_l*]{[h_l]A_l^s + [j_l]C_l^s}   (3.7d)

    C_{l-1}^s = C_l^s + [h_l][γ_l*]{[h_l]A_l^s + [j_l]C_l^s}   (3.7e)

    [0713] Similarly, for the b's and d's we get the recursion:

    [00337] B_L = 0,  D_L = 0,  B_L′ = 0,  D_L′ = 0   (3.8a)

    [0714] For l = L, . . . , 1

    [00338] B_{l-1}′ = B_l′ − [j_l]{[γ_l*]{[h_l]B_l′ + [j_l]D_l′} + [δγ_l*]{[h_l]B_l + [j_l]D_l + F_l^i}}   (3.8b)

    D_{l-1}′ = D_l′ + [h_l]{[γ_l*]{[h_l]B_l′ + [j_l]D_l′} + [δγ_l*]{[h_l]B_l + [j_l]D_l + F_l^i}}   (3.8c)

    B_{l-1} = B_l − [j_l][γ_l*]{[h_l]B_l + [j_l]D_l + F_l^i}   (3.8d)

    D_{l-1} = D_l + [h_l][γ_l*]{[h_l]B_l + [j_l]D_l + F_l^i}   (3.8e)

    [0715] where the matrices are all (2N+1)×N_v. The final Jacobian is then:

    [00339] S_L′ = −A_0^{-1} (B_0′ + A_0′ S_L)   (3.9)

    [0716] where the columns of S_L′ are the vectors Jδγ for each view n = 1, . . . , N_v.

    [0717] We can of course concatenate these two recursions to give:

    [00340] G_L = [S_L, 0],  H_L = [0, 0],  G_L′ = [0, 0],  H_L′ = [0, 0]   (3.10a)

    [0718] For l = L, . . . , 1

    [00341] G_{l-1}′ = G_l′ − [j_l]{[γ_l*]{[h_l]G_l′ + [j_l]H_l′} + [δγ_l*]{[h_l]G_l + [j_l]H_l + [0, F_l^i]}}   (3.10b)

    H_{l-1}′ = H_l′ + [h_l]{[γ_l*]{[h_l]G_l′ + [j_l]H_l′} + [δγ_l*]{[h_l]G_l + [j_l]H_l + [0, F_l^i]}}   (3.10c)

    G_{l-1} = G_l − [j_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F_l^i]}   (3.10d)

    H_{l-1} = H_l + [h_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F_l^i]}   (3.10e)

    [0719] where, at the last iterate, G_0′ = [A_0′ S_L, B_0′]. The matrices (G's and H's) in this recursion are all (2N+1)×2N_v. This is the form of the Jacobian recursion used in the imaging programs.
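    The pattern used throughout (3.5)-(3.10), carrying primed (differentiated) quantities alongside the unprimed recursion via the product rule, can be illustrated on a toy version of the A, C recursion and checked against a finite difference. All sizes and operator values below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 3, 5   # illustrative sizes

h = rng.standard_normal((L, N))
jb = rng.standard_normal((L, N))
g0 = rng.standard_normal((L, N))

def conv(k, X):
    # columnwise circular convolution standing in for [gamma_l*]
    return np.fft.ifft(np.fft.fft(k)[:, None] * np.fft.fft(X, axis=0), axis=0)

def A0(g):
    """Unprimed recursion, returning A_0 as a function of gamma."""
    A, C = np.eye(N, dtype=complex), np.zeros((N, N), dtype=complex)
    for l in range(L - 1, -1, -1):
        t = conv(g[l], h[l][:, None] * A + jb[l][:, None] * C)
        A, C = A - jb[l][:, None] * t, C + h[l][:, None] * t
    return A

def dA0(g, dg):
    """Primed recursion: march (A', C') alongside (A, C) by the product rule."""
    A, C = np.eye(N, dtype=complex), np.zeros((N, N), dtype=complex)
    dA, dC = np.zeros((N, N), dtype=complex), np.zeros((N, N), dtype=complex)
    for l in range(L - 1, -1, -1):
        t = conv(g[l], h[l][:, None] * A + jb[l][:, None] * C)
        dt = (conv(dg[l], h[l][:, None] * A + jb[l][:, None] * C)
              + conv(g[l], h[l][:, None] * dA + jb[l][:, None] * dC))
        A, C = A - jb[l][:, None] * t, C + h[l][:, None] * t
        dA, dC = dA - jb[l][:, None] * dt, dC + h[l][:, None] * dt
    return dA

# directional perturbation delta-gamma and a central finite-difference check
dg = np.zeros((L, N))
dg[1, 2] = 1.0
eps = 1e-5
fd = (A0(g0 + eps * dg) - A0(g0 - eps * dg)) / (2 * eps)
```

    The agreement of the marched derivative with the finite difference is the same consistency that the (3.7)/(3.8) recursions rely on, at much lower cost than differencing the full forward solve.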

    Example 8

    Rectangular Scattering Matrix Recursion

    [0720] This section describes a new recursive algorithm that uses scattering matrices for rectangular subregions. The idea for this approach is an extension and generalization of the cylindrical coordinate recursion (CCR) method discussed in the previous section. The computational complexity is reduced even further relative to CCR. The CCR algorithm derives from the addition theorem for the Green's function expressed in cylindrical coordinates. The new approach generalizes this by using Green's theorem to construct propagation operators (a kind of addition theorem analogue) for arbitrarily shaped, closed regions. In the following, it is applied to the special case of rectangular subregions, although any disjoint set of subregions could be used.

    [0721] FIG. 15 illustrates a rectangular scattering matrix according to the present invention. Consider two rectangular subregions A and B of R², as shown in FIG. 15. Although the regions are drawn as disjoint, assume that they touch at the center of the figure. Let C denote the external boundary of the union of A and B. Define the scattering operator S_A of region A as the operator that gives the outward moving scattered field on the boundary A, given the inward moving field, due to external sources, evaluated on the boundary A. Similarly, define the scattering matrix for boundary B. The goal is to find the scattering matrix for boundary C, given S_A and S_B, the scattering matrices for A and B.

    [0722] Let the incident field due to sources external to boundary C, evaluated on boundary A, be denoted f_A^i, and similarly define f_B^i. For the total problem (A and B both containing scatterers) there exists a net, inward moving field at boundary A. Denote this field f_A^in, and similarly define f_B^in. The total field leaving boundary A is then f_A^out = S_A f_A^in. Knowledge of the radiated field on a closed boundary due to internal sources allows the field external to the boundary to be computed at any point. Let the operator that maps f_A^out onto the boundary B be denoted T_BA (rectangular translation operator from A to B). Similarly, let the operator mapping f_B^out to boundary A be denoted T_AB.

    [0723] The total, inward moving field at boundary A has two parts: that due to the incident field external to C, and that due to sources internal to boundary B. From the foregoing definitions, it should be obvious that the inward moving fields satisfy:

    [00349] f_A^in = f_A^i + T_AB S_B f_B^in   (1a)

    f_B^in = f_B^i + T_BA S_A f_A^in   (1b)

    [0724] Solving for the total inward moving fields gives:

    [00350] [f_A^in; f_B^in] = [I, −T_AB S_B; −T_BA S_A, I]^{-1} [f_A^i; f_B^i]   (2)

    [0725] The total scattered field at boundary A has two components: one from inside A and one from inside B. It should be obvious that the total scattered fields at boundaries A and B are given by:

    [00351] f_A^s = S_A f_A^in + T_AB S_B f_B^in   (3a)

    f_B^s = T_BA S_A f_A^in + S_B f_B^in   (3b)

    or

    [f_A^s; f_B^s] = [S_A, T_AB S_B; T_BA S_A, S_B] [f_A^in; f_B^in]   (4)

    [0726] Combining (2) and (4) gives:

    [00352] [f_A^s; f_B^s] = [S_A, T_AB S_B; T_BA S_A, S_B] [I, −T_AB S_B; −T_BA S_A, I]^{-1} [f_A^i; f_B^i]   (5)

    [0727] Assuming that the scattering operators are invertible, we have the equivalent form:

    [00353] [f_A^s; f_B^s] = [I, T_AB; T_BA, I] [S_A^{-1}, −T_AB; −T_BA, S_B^{-1}]^{-1} [f_A^i; f_B^i]   (6)

    [0728] The scattered field on the boundary C can be obtained from f_A^s and f_B^s by simple truncation (and possible re-ordering, depending on how the boundaries are parameterized). Let the operator that does this be denoted:

    [00355] f_C^s = [C_A C_B] [f_A^s; f_B^s]   (7)

    [0729] Let the incident field on boundary C due to external sources be denoted f_C^i. There exist operators D_A and D_B that operate on f_C^i to give f_A^i and f_B^i (similar to the external translation operators T_AB and T_BA). Thus:

    [00359] f_C^s = [C_A C_B] [I, T_AB; T_BA, I] [S_A^{-1}, −T_AB; −T_BA, S_B^{-1}]^{-1} [D_A; D_B] f_C^i   (8)

    [0730] from which we see that the scattering matrix for boundary C is given by:

    [00360] S_C = [C_A C_B] [I, T_AB; T_BA, I] [S_A^{-1}, −T_AB; −T_BA, S_B^{-1}]^{-1} [D_A; D_B]   (9)

    [0731] Equation (9) then gives the core computation for our rectangular scattering matrix recursion algorithm. Technical details concerning the existence and discrete construction of the translation and other needed operators have been omitted in this write-up. We have, however, written working first-cut programs that perform (9).
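    The algebraic equivalence of forms (5) and (6) asserted above is easy to verify numerically, with random (hence almost surely invertible) matrices standing in for S_A, S_B, T_AB, T_BA. This is only a check of the identity, not a discretization of actual boundary operators.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6   # samples per sub-boundary (placeholder)

def rnd():
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

SA, SB, TAB, TBA = rnd(), rnd(), rnd(), rnd()
I = np.eye(n)
fi = rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)

# form (5): scatter the resolved inward-moving fields
M5 = np.block([[SA, TAB @ SB], [TBA @ SA, SB]]) @ np.linalg.inv(
    np.block([[I, -TAB @ SB], [-TBA @ SA, I]]))

# form (6): the equivalent expression, valid when S_A and S_B are invertible
M6 = np.block([[I, TAB], [TBA, I]]) @ np.linalg.inv(
    np.block([[np.linalg.inv(SA), -TAB], [-TBA, np.linalg.inv(SB)]]))

fs5, fs6 = M5 @ fi, M6 @ fi
```

    The two forms agree because the second block matrix in (6) factors as the second block matrix in (5) times diag(S_A^{-1}, S_B^{-1}).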

    An O(N³) Algorithm for Computing all Scattering Views Based on Rectangular Scattering Matrix Recursion

    [0732] Consider a region containing scattering material covered by an N×N array of rectangular subregions, as shown in FIG. 16.

    [0733] Again, although drawn as disjoint, assume that the subregions touch. Assume that the scattering matrix for each subregion is known (for example, if each subregion is only λ/2 on edge, the calculation of S is trivial given the material enclosed). Now coalesce 2×2 sets of these scattering matrices into larger scattering matrices (this coalescing of 4 subregions into one is similar to the algorithm defined above for coalescing two subregions). There are N/2 × N/2 such 2×2 blocks to coalesce. When done, we have scattering matrices for the set of larger subregions shown in FIG. 17.

    [0734] This process is continued until, at the final stage, the scattering matrix for the total region is computed. Note that the physical parameters of the scattering media (speed of sound, absorption, etc.) are used only in the first stage (computation of the N×N array of scattering matrices).

    [0735] Assuming that N is a power of two, the algorithm will terminate after log₂(N) such stages with the scattering matrix for the whole region. A careful accounting of the computation required reveals that the total computation is O(N³). The resulting scattering matrix then allows fast calculation of the scattered field anywhere external to the total region for any incident field (angle of view).

    [0736] Although derived assuming that N is a power of 2, the algorithm can be generalized by including 3×3 (or any other sized) coalescing at a stage, thereby allowing algorithms for any N (preferably N should have only small, say 2, 3, 5, prime factors). Also, there is no reason that the starting array of subregions cannot be N×M (by using more general n×m coalescing at some stages).

    [0737] In addition, the existence of layering can be included. If layering occurs above and below the total region so that the total inhomogeneity resides in a single layer, then the algorithm proceeds as before. Once the total region scattering matrix has been obtained, its interaction with the external layering can be computed. If inhomogeneous layer boundaries lie along horizontal borders between rows of subscatterers, then the translation matrices can be modified when coalescing across such boundaries, properly including the layer effects. This is an advantage over our present layered Green's function algorithms, which require that the inhomogeneity lie entirely within a single layer (this can be fixed in our present algorithms, but at the expense of increased computation).

    [0738] The O(N.sup.3) computation of this approach is superior to the O(N.sup.4 log.sub.2(N)) computation of our original FFT-BCG approach and our present recursion based on cylindrical coordinates which is O(N.sup.3 log.sub.2(N)).

    Example 9

    Modeling System Transfer Function Including Driving Voltage, Transmitting Transducers, Receiving Transducers, Preamplifiers, Analog to Digital Converter, Etc.

    [0739] Let the transfer function of the transmitting waveform generator, power amplifier, transmitting multiplexer, transmitting transducer, ocean/sediment, receiving transducers, preamplifiers, differential amplifier, differential waveform generator, and analog to digital converter be, respectively, T_TWG, T_PA, T_TM, T_TT, T_O/S, T_RT, T_RM, T_DA, T_DWG, and T_DAC. These separate transfer functions can be identified. Then the total transfer function is:

    [00361] T_total = T_DAC (T_DA1 T_RM T_RT T_O/S T_TT T_TM T_PA T_TWG − T_DA2 T_DWG)

    [0740] Note that the term T.sub.DA2 T.sub.DWG is subtracted in order to remove direct path energy and to remove reverberations in the transducers and the platform; this subtraction effectively increases the analog to digital converter dynamic range. The signal in the differential waveform generator is programmed to produce a net zero signal output from the analog to digital converter for the case of no sediment present.

    [0741] Recall that the equation for the scattered field f^(sc) (from the sediment) at a transducer (not the output voltage) at a given temporal frequency is given in terms of the transducer-to-sediment Green's function D, the sediment's acoustic properties γ, and the internal field in the sediment f by

    [00362] f^(sc) = D γ f

    [0742] The field (at a given temporal frequency) within the sediment itself, f, is given in terms of the incident field f^(inc), the sediment's acoustic properties γ, and the sediment-to-sediment Green's function C by

    [00363] f^(inc) = (I − C γ) f

    [0743] On combining these two equations, we eliminate the internal field and find the scattered field in terms of the incident field and the sediment properties:

    [00364] f^(sc) = D γ (I − C γ)^{-1} f^(inc)

    [0744] These equations involving C and D are true for both the free space Green's function [Nonperturbative Diffraction Tomography Via Gauss-Newton Iteration Applied to the Scattering Integral Equation, Borup, D. T. et al., Ultrasonic Imaging] and our newly developed layered Green's functions [Johnson, S. A., D. T. Borup, M. J. Berggren, J. W. Wiskin, and R. S. Eidens, 1992, Modelling of inverse scattering and other tomographic algorithms in conjunction with wide bandwidth acoustic transducer arrays for towed or autonomous sub-bottom imaging systems, Proc. of Mastering the Oceans through Technology (Oceans 92), 1992, pp. 294-299]. We now combine the scattering equations with the transfer functions. First, identify f^(sc) with T_O/S T_TT T_TM T_PA T_TWG and f^(inc) with T_TT T_TM T_PA T_TWG. Next we note that measuring f^(inc) is a direct way of finding the product T_TT T_TM T_PA T_TWG. We also note that T_total can be written as

    [00365] T_total(γ, T_TWG) = T_DAC (T_DA1 T_RM T_RT D γ (I − C γ)^{-1} T_TT T_TM T_PA T_TWG − T_DA2 T_DWG).

    [0745] Then T_total(γ, T_TWG) is a nonlinear operator that transforms (γ, T_TWG) into recorded signals T_total-measured. Thus, for a given set of T_total-measured measurements and for a given T_TWG, we may in principle find γ by a nonlinear inverse operator

    [00366] γ = T_total^{-1}(T_total-measured(γ, T_TWG)).

    [0746] Since the exact form of T_total^{-1} is not known, we find γ by a Gauss-Newton iteration method. This requires that the Jacobian of T_total(γ, T_TWG) be computed. The Jacobian is readily computed in closed form [Borup, 1992] and is given by

    [00367] J(γ^(n)) = ∂T_total(γ^(n), T_TWG)/∂γ.

    [0747] Then the Gauss-Newton iteration for computing γ is given by: (1) set n=1, estimate a value for γ^(n); (2) compute J(γ^(n)); (3) solve J^T(γ^(n)) J(γ^(n)) δγ^(n) = J^T(γ^(n)) [T_total-measured − T_total(γ^(n), T_TWG)] for δγ^(n); (4) update γ^(n) by the formula γ^(n+1) = γ^(n) + δγ^(n); (5) if ‖T_total-measured − T_total(γ^(n), T_TWG)‖ < ε then set γ = γ^(n+1) and quit, else go to step 2.
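    Steps (1)-(5) can be sketched on a toy nonlinear least-squares problem. The exponential model, the data, and the starting guess below are illustrative stand-ins for T_total and γ; only the Gauss-Newton structure (closed-form Jacobian, normal equations, update, stopping test) mirrors the text.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 20)
gamma_true = np.array([2.0, 1.3])                    # hypothetical "true" parameters
T_meas = gamma_true[0] * np.exp(-gamma_true[1] * t)  # noiseless synthetic data

def T_model(p):
    return p[0] * np.exp(-p[1] * t)

def jac(p):
    # closed-form Jacobian d T_model / d gamma, as in step (2)
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

p = np.array([1.5, 1.0])                    # step (1): initial estimate
for _ in range(50):                         # steps (2)-(5)
    r = T_meas - T_model(p)                 # residual
    if np.linalg.norm(r) < 1e-12:           # step (5): stopping test
        break
    J = jac(p)
    dp = np.linalg.solve(J.T @ J, J.T @ r)  # step (3): normal equations
    p = p + dp                              # step (4): update
```

    For a zero-residual problem like this, the iteration converges rapidly to the true parameters; in the imaging problem the same loop is driven by the recursively computed Jacobian.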

    [0748] The extra dynamic range provided by the differential waveform generator/analog to digital converter circuit raises questions as to the optimal setup procedure (e.g., how many bits to span the noise present with no signal). We have modeled the signal to noise ratio that can be achieved by a beamformer which delays and sums multiple channel signals, each channel being such a circuit. We find, as a rule of thumb, that the lowest order one or two bits should span either the noise or the signal, whichever is smaller (i.e., for signal to noise ratios greater than unity the noise should be spanned, but for signal to noise ratios less than unity the signal should be spanned). Upon using commercial 16 or 18 bit analog to digital converters, this method may well extend their range to 20 bits or more.

    2. Model Electrical Crosstalk, Acoustic Crosstalk

    [0749] Electrical cross talk can be removed by use of knowledge of the cross coupling matrix M. Let V_n^(true) be the true electrical signal at transducer n and let V_m^(meas) be the measured signal at transducer m. Then V_n^(meas) = M_nm V_m^(true) (summation over m implied). We observe for small cross talk that matrix M has the form M = D_2(I + ε)D_1, where I is the identity matrix, D_1 and D_2 are diagonal matrices, and ε is the differential cross talk matrix whose elements are small in value. We seek V_n^(true) = [M^{-1} V^(meas)]_n. By the binomial theorem, (I + ε)^{-1} ≈ (I − ε). Thus, M^{-1} ≈ D_1^{-1}(I − ε)D_2^{-1}. Once D_1, D_2, and M are measured, the problem of removing cross talk is quite inexpensive numerically (even if ε is not small, the exact inverse M^{-1} can be computed once and stored). If the matrix M turns out to be noninvertible (as can be the case for large magnitude coupling) then we can alternatively concatenate M onto the inverse scattering equation to give:

    [00368] v^(meas) = M P G[f]

    [0750] to which the inverse scattering algorithm can be directly applied.
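    A quick numerical check of the binomial-approximation inverse M^{-1} ≈ D_1^{-1}(I − ε)D_2^{-1} described above, using a hypothetical small zero-diagonal coupling matrix ε (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
# hypothetical coupling model M = D2 (I + eps) D1, eps small and zero-diagonal
D1 = np.diag(1.0 + 0.1 * rng.standard_normal(n))
D2 = np.diag(1.0 + 0.1 * rng.standard_normal(n))
eps = 1e-3 * rng.standard_normal((n, n))
np.fill_diagonal(eps, 0.0)
I = np.eye(n)
M = D2 @ (I + eps) @ D1

v_true = rng.standard_normal(n)
v_meas = M @ v_true

# binomial (first-order) approximate inverse: M^-1 ~= D1^-1 (I - eps) D2^-1
M_inv_approx = np.linalg.inv(D1) @ (I - eps) @ np.linalg.inv(D2)
v_rec = M_inv_approx @ v_meas   # recovered signals; residual error is O(||eps||^2)
```

    The recovery error is second order in ε, which is why the approximation is adequate for small cross talk; for strong coupling, the exact inverse would be computed once and stored, as the text notes.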

    [0751] We believe that cross talk can be removed by good fabrication techniques including careful shielding. Nevertheless, the above numerical method can be used if necessary.

    [0752] Acoustic cross talk can be removed by several methods: (1) acoustic baffling of each transducer; (2) calibration of individual (isolated) transducers, computing the acoustic coupling in the array from wave equation methods, then inverting the model by a cross talk matrix as above; (3) direct measurement of the coupling in a finished array to find the cross talk matrix and then inverting as shown above. A more difficult type of cross talk to remove is the direct mechanical coupling between transducers. This problem will be attacked by using vibration damping techniques in mounting each transducer on the frame. We believe that such damping methods will eliminate direct mechanical coupling. As a backup, we note that modeling of the coupled system is theoretically possible and has been successfully accomplished by the university's AIM Lab for circular mounting geometries (by derivation of a new total system Green's function for the imaging system that includes cross coupling between elements).

    Example 10

    Inclusion of Transducer Coupling

    [0753] In the event that significant coupling exists between the object to be imaged and the transducers (and/or coupling between transducers is not negligible), a computationally efficient means of incorporating this coupling into the inverse scattering algorithm is needed. Equivalently, the transducers must be included as part of the scattering problem. This section details a computational algorithm for achieving this incorporation.

    [0754] FIG. 18 shows a transducer coupling according to an embodiment of the present invention. Consider an object to be imaged, γ, illuminated by a transmitter, T, with the scattering received by a receiver, R, as shown in FIG. 18. Let C denote a closed surface separating γ from the transceivers.

    [0755] Let S be the scattering matrix of γ which, given the incident field generated from sources outside of C, gives the outward moving scattered field evaluated on C. This operator can be computed by solving a sufficient number of forward scattering problems for the object γ.

    [0756] Let P.sub.R denote the operator that computes the field impinging on R due to sources inside of C from the scattered field evaluated on C. This is a simple propagation operator computable by an angular spectrum technique.

    [0757] Let P.sub.T denote the operator that computes the field impinging on T due to sources inside of C from the scattered field evaluated on C. This is a simple propagation operator computable by an angular spectrum technique.

    [0758] Let A.sub.RT denote the operator that computes the field impinging on R due to scattering from T (it operates on the net total field incident on T). This operator is computed by a moment method analysis of the transmitter structure.

    [0759] Let A.sub.TR denote the operator that computes the field impinging on T due to scattering from R (it operates on the net total field incident on R). This operator is computed by a moment method analysis of the receiver structure.

    [0760] Let B.sub.T denote the operator that computes the field on C due to scattering from T (it operates on the net total field incident on T). This operator can be computed by a moment method analysis of the transmitter structure.

    [0761] Let B.sub.R denote the operator that computes the field on C due to scattering from R (it operates on the net total field incident on R). This operator can be computed by a moment method analysis of the receiver structure.

    [0762] Assume that the transmitter T also produces a field, f^i, due to externally applied excitation (electrical). Denote by f_C^i the values of this field on C. Denote by f_R^i the values of this field on the receiver surface. We assume that these field values are known (i.e., we know the free-space radiation characteristics of the transmitter).

    [0763] Given these definitions, the total field incident from outside of C evaluated on C is given by:

    [00371] f_C^tot = f_C^i + B_T f_T^tot + B_R f_R^tot   (1)

    [0764] where the superscript tot indicates the field incident on the particular element due to all other sources. For T and R we have:

    [00372] f_T^tot = P_T S f_C^tot + A_TR f_R^tot   (2)

    f_R^tot = f_R^i + P_R S f_C^tot + A_RT f_T^tot   (3)

    [0765] Note that the formula for f_T^tot has no superscript i term, since the incident field emanates from its interior. Solving (1)-(3) for the tot fields gives:

    [00374] [f_C^tot; f_T^tot; f_R^tot] = [I, −B_T, −B_R; −P_T S, I, −A_TR; −P_R S, −A_RT, I]^{-1} [f_C^i; 0; f_R^i]   (4)

    [0766] The size of this matrix operator is O(N×N), and so computation of its inverse does not require much CPU time (mere seconds). In order to compute the signal received by the receiver transducer, we take f_R^tot computed in (4) and compute the surface currents (EM case) or surface velocities (acoustic case), from which the signal from the receiver can be computed.

    [0767] This procedure for analyzing a scatterer in the presence of coupling between the T/R pair includes all orders of multiple interaction allowing transducers with complex geometries to be incorporated into our inverse scattering algorithms.
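    The block system (4), and its consistency with equations (1)-(3), can be sketched with random placeholder operators. The weak-coupling scaling keeps the toy system well posed; none of these matrices come from an actual moment-method or angular-spectrum analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5   # samples per surface (placeholder)

def op():
    return 0.2 * rng.standard_normal((n, n))   # weak-coupling placeholder operator

BT, BR, PTS, PRS, ATR, ART = op(), op(), op(), op(), op(), op()
fCi, fRi = rng.standard_normal(n), rng.standard_normal(n)
I = np.eye(n)

# block system (4) assembled from equations (1)-(3); note the zero block in the
# right-hand side, since f_T^tot has no incident-field term
Mblk = np.block([
    [I,    -BT,  -BR],
    [-PTS,  I,   -ATR],
    [-PRS, -ART,  I],
])
rhs = np.concatenate([fCi, np.zeros(n), fRi])
fC, fT, fR = np.split(np.linalg.solve(Mblk, rhs), 3)
```

    Substituting the solved fields back into (1)-(3) closes the loop, which is exactly what makes all orders of multiple interaction between the T/R pair and the object implicit in the single block solve.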

    Example 11

    Frequency Dependent Scattering Parameters

    [0768] Throughout the previous sections it has been assumed that γ is independent of frequency. Suppose now that γ is a function of frequency. In the event that only a single frequency is needed (transmission mode with encircling transducers and only one complex parameter to be imaged) this is not a problem: the algorithm will simply image the 2 or 3-D distribution of γ evaluated at that frequency. However, in reflection mode, or when imaging multiple parameters, multiple frequencies are needed. This increases the number of unknowns to N_ω·N_x·N_y if we naively seek a separate image at each frequency. Since multiple frequencies were already needed to complete the data for a frequency independent unknown, we have no way of correspondingly increasing the number of equations by the same factor. Instead, consider approximating the frequency variation with a set of parameters at each pixel:

    [00376] γ_nm(ω) ≈ γ_nm^(0) φ^(0)(ω) + γ_nm^(1) φ^(1)(ω) + . . . + γ_nm^(q−1) φ^(q−1)(ω)

    [0769] where the basis functions φ^(n) are selected based on the physics of the frequency dependence (we may, for example, simply use the monomial basis φ^(n)(ω) = ω^n, or perhaps a rational function basis might be chosen, since simple relaxation dispersion is a rational function). The formula for the GN update for this case is:

    [00377] P_γ G (I − [γ] G)^{-1} [f] M_{ω,j} δγ^(j) = −r

    [0770] where summation over j is assumed and the matrix M is given by:

    [00378] M = [ φ^(1)(ω_1), φ^(2)(ω_1), …, φ^(q−1)(ω_1) ; φ^(1)(ω_2), φ^(2)(ω_2), …, φ^(q−1)(ω_2) ; ⋮ ; φ^(1)(ω_{N_ω}), φ^(2)(ω_{N_ω}), …, φ^(q−1)(ω_{N_ω}) ]

    [0771] Solution for the q·N_x·N_y unknowns, assuming that q is sufficiently smaller than N_ω, can then be carried out via the GN-MRCG or RP algorithms. We have already verified the success of this approach using quadratic polynomial models of the frequency variation (q = 3, φ^(n)(ω) = ω^n).
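    A minimal sketch of the basis-expansion idea for q = 3 with the monomial basis φ^(n)(ω) = ω^n: fit the per-pixel coefficients by least squares against a hypothetical frequency-dependent γ(ω). Frequencies and coefficient values are illustrative.

```python
import numpy as np

omega = np.linspace(1.0, 2.0, 8)   # measurement frequencies (illustrative)
q = 3                               # quadratic frequency model

# hypothetical frequency-dependent parameter at one pixel
gamma_w = 0.5 - 0.3 * omega + 0.05 * omega**2

# basis matrix with columns phi^(n)(omega) = omega**n, n = 0..q-1
M = np.column_stack([omega**k for k in range(q)])
coef, *_ = np.linalg.lstsq(M, gamma_w, rcond=None)
gamma_fit = M @ coef   # q numbers per pixel reproduce all N_omega values
```

    Only q unknowns per pixel are carried through the inversion, while the data at all N_ω frequencies still constrain them; this is the dimension reduction the paragraph describes.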

    Example 12

    Parabolic Marching Methods

    [0772] Having seen in the previous pages how the full nonlinear inversion yields substantial increases in resolution and utility of the image, we are now prepared to discuss the advanced marching technique employed in both the forward problem and the Jacobian calculations (as well as the closely related Hermitian conjugate of the Jacobian calculation).

    [0773] Scientific Background and Detailed Description of the Advanced Parabolic Marching Method

    [0774] The parabolic equation method is a very efficient method for modelling acoustic wave propagation through low contrast acoustic materials (such as breast tissue). The original or classical method requires for its applicability that energy propagate within approximately ±20° of the incident field direction. Later versions allow propagation at angles up to ±90° from the incident field direction. Further modifications provide accurate backscattering information, and thus are applicable to the higher contrasts encountered in nondestructive imaging, EM, and seismic applications. [M. D. Collins, A two-way parabolic equation for acoustic backscattering in the ocean, Journ. Acoustical Society of America, 1992, 91, 1357-1368; F. Natterer and F. Wubbeling, A Finite Difference Method for the Inverse Scattering Problem at Fixed Frequency, Lecture Notes in Physics, 1993, vol. 422:157-166, herein included as reference]. The source/receiver geometry we use in this device, and the relatively low contrast of breast tissue, allow us to utilize this efficient approximation. The resulting speed-up relative to the Gauss-Newton conjugate gradient method [Borup et al., 1992] described in the previous patent depends upon the contrast but is estimated to be 100-400 times. Furthermore, the coarse grain parallelization employed in the integral equation method is equally applicable to the parabolic algorithm. The basic structure of the parabolic inversion algorithm is also essentially unchanged. The main difference is in the use of the parabolic equation approximation, and more exactly, the split-step Fourier method [R. H. Hardin and F. D. Tappert, Applications of the split-step Fourier method to the solution of nonlinear and variable coefficient wave equations, SIAM Rev. 15, 423 (1973); and M. D. Collins, A split-step Pade solution for the parabolic equation method, J. Acoust. Soc. Am. 93, 1736-1742 (1993), herein included as reference], in the construction of both the Jacobian of the residual function and the solution to the forward problems for each view. The source/receiver geometry requirements and the relatively low contrast of breast tissue allow us to utilize this efficient approximation, for breast cancer scanner applications, for example. In particular, because we may elect to use only transmission data in the breast problem, we are able to use this parabolic approximation in a straightforward, simple manner [see Hardin and Tappert].

    [0775] To derive and elucidate the parabolic equation method, we begin with the 2-D Helmholtz wave equation governing wave propagation in inhomogeneous media:

    [00379] \left\{\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+k^2(x,y)\right\}f(x,y)=0 \tag{1}

    [0776] Now Fourier transforming y to \xi results in:

    [00380] \left[\frac{\partial^2}{\partial x^2}+\left(\frac{1}{2\pi}\,\hat{\kappa}(\xi)\ast{}-\xi^2\right)\right]\tilde{f}(x,\xi)=0 \tag{2}

    [0777] where:

    [00381] \hat{\kappa}(\xi)=\hat{\kappa}(x,\xi)=\int_{-\infty}^{\infty}k^2(x,y)\,e^{-i\xi y}\,dy \tag{3}

    [0778] and the \ast notation denotes convolution:

    [00382] \hat{f}(\xi)\ast\hat{g}(\xi)=\int_{-\infty}^{\infty}\hat{f}(\xi-\xi')\,\hat{g}(\xi')\,d\xi' \tag{4}

    [0779] Eqn (2) can be factored in the sense of pseudo-differential operators [M. E. Taylor, Pseudo-differential Operators, Princeton University Press, Princeton, 1981, herein included as reference]

    [00383] \left(\frac{\partial}{\partial x}+i\sqrt{\frac{1}{2\pi}\,\hat{\kappa}(\xi)\ast{}-\xi^2}\right)\left(\frac{\partial}{\partial x}-i\sqrt{\frac{1}{2\pi}\,\hat{\kappa}(\xi)\ast{}-\xi^2}\right)\tilde{f}(x,\xi)=0 \tag{5}

    [0780] An intuitive feel for the manner in which the parabolic equation arises can be seen by looking at a particularly simple case: when k is a constant, we have, upon Fourier transforming (1):

    [00384] \left(\frac{\partial}{\partial x}+i\sqrt{k^2-\xi^2}\right)\left(\frac{\partial}{\partial x}-i\sqrt{k^2-\xi^2}\right)\tilde{f}(x,\xi)=0 \tag{6}

    [0781] where the square root is now simply the square root of a scalar. From (6), it is clear that in this special case, two solutions exist:

    [00385] \left(\frac{\partial}{\partial x}+i\sqrt{k^2-\xi^2}\right)\tilde{f}(x,\xi)=0,\qquad \tilde{f}(x,\xi)=g(\xi)\,e^{-ix\sqrt{k^2-\xi^2}} \tag{7}

    [0782] for an arbitrary function g, which represents a right-moving wave for e^{i\omega t} time dependence, and:

    [00386] \left(\frac{\partial}{\partial x}-i\sqrt{k^2-\xi^2}\right)\tilde{f}(x,\xi)=0,\qquad \tilde{f}(x,\xi)=g(\xi)\,e^{ix\sqrt{k^2-\xi^2}} \tag{8}

    [0783] representing a wave which moves to the left. Suppose that we know that the field is due to sources entirely on the LHS of the x-y plane. Then, for x>0 we must have:

    [00387] \tilde{f}(x,\xi)=g(\xi)\,e^{-ix\sqrt{k^2-\xi^2}} \tag{9}

    [0784] Note that knowledge of \tilde{f} on the line x=0 (boundary condition) completes the solution since:

    [00388] \tilde{f}(0,\xi)=g(\xi) \tag{10}

    [0785] and so, on inverse Fourier transforming:

    [00389] f(x,y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}(0,\xi)\,e^{-ix\sqrt{k^2-\xi^2}}\,e^{i\xi y}\,d\xi,\qquad x>0 \tag{11}

    [0786] In fact, for any x_0>0:

    [00390] f(x,y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}(x_0,\xi)\,e^{-i(x-x_0)\sqrt{k^2-\xi^2}}\,e^{i\xi y}\,d\xi,\qquad x>x_0>0 \tag{12}

    [0787] The idea behind the parabolic equation method is to try to factor a general inhomogeneous (i.e. k=k(x, y)) problem into forward and backward moving (in x) factors and then solve only the forward (+x) moving part (assuming that the field source is to the left (on the x-axis) of the scatterer).

    [0788] Let x_n=n\Delta,\ n=0,\ldots; then (12) is:

    [00391] f_{n+1}(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta\sqrt{k^2-\xi^2}}\,e^{i\xi y}\,d\xi \tag{13}

    [0789] Since (13) is local about x=(n+1/2)\Delta, we consider the discretization/approximation:

    [00392] f_{n+1}(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta\sqrt{k^2(x_{n+1/2},\,y)-\xi^2}}\,e^{i\xi y}\,d\xi \tag{14}

    [0790] for propagating forward through k that is inhomogeneous in x and y, i.e., k=k(x,y), with \tilde{f}_n(\xi) the transform of f(x_n,y). Computationally, (14) would be an inverse Fourier transform except for the y-dependence of k. Defining

    [00393] k_n^2=k^2(x_{n+1/2})

    [0791] gives

    [00394] f_{n+1}(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta\sqrt{k^2(x_{n+1/2},\,y)-\xi^2}}\,e^{i\xi y}\,d\xi

    Advanced Parabolic Marching Method

    [0792] Now we desire to take the y-dependence out from under the square root in order that it may then be factored out from under the integral, which will result in the integral being an inverse Fourier transform, and yield a substantial increase in efficiency. This goal can be achieved approximately in the following manner:

    [00395] e^{-i\Delta k_{n,m}\sqrt{1-(\xi/k_{n,m})^2}}\approx e^{-i\Delta(k_{n,m}-k_n)}\,e^{-i\Delta k_n\sqrt{1-(\xi/k_n)^2}}

    [0793] so that, defining k_{n,m}=k(x_{n+1/2},y_m) and y_m=m\Delta, this then yields:

    [00396] f_{n+1}(y_m)\approx\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta(k_{n,m}-k_n)}\,e^{-i\Delta k_n\sqrt{1-(\xi/k_n)^2}}\,e^{i\xi y_m}\,d\xi \tag{15}

    or

    f_{n+1}(y_m)\approx\frac{e^{-i\Delta(k_{n,m}-k_n)}}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta k_n\sqrt{1-(\xi/k_n)^2}}\,e^{i\xi y_m}\,d\xi

    [0794] We have thus approximately maintained the y variation in k, while simultaneously giving a Fourier transform formulation for the field f_{n+1}:

    [00397] f_{n+1}(y_m)\approx\frac{e^{-i\Delta(k_{n,m}-k_n)}}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta k_n\sqrt{1-(\xi/k_n)^2}}\,e^{i\xi y_m}\,d\xi

    [0795] and in fact using the notation:

    [00398] \check{f}(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}(\xi)\,e^{i\xi y}\,d\xi

    [0796] to indicate the inverse Fourier transform, gives:

    [00399] f_{n+1}(y)=\left(\tilde{f}_n(\xi)\,\tilde{P}_n(\xi)\right)^{\vee}

    [0797] where P_n is the range-dependent propagator defined as:

    [00400] \tilde{P}_n(\xi)=e^{-i\Delta\sqrt{k^2(x_{n+1/2})-\xi^2}}

    [0798] This is one basic equation for the parabolic split step method. Various alternative forms can be used in similar algorithms. A common form for the parabolic equation is derived by using the binomial approximation:

    [00401] \sqrt{1-(\xi/k_n)^2}\approx 1-\frac{1}{2}\left(\frac{\xi}{k_n}\right)^2

    [0799] in the above integral. This yields a standard form of the split-step parabolic equation method:

    [00402] f_{n+1}(y_m)=\frac{e^{-i\Delta k_{n,m}}}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{\,i\Delta\xi^2/(2k_0)}\,e^{i\xi y_m}\,d\xi \tag{16}

    [0800] Numerical experiments have indicated the superiority of (15) over (16). In Fourier notation, (16) is:

    [00403] f_{n+1}(y_m)=e^{-i\Delta k_{n,m}}\,F^{-1}\!\left\{e^{\,i\Delta\xi^2/(2k_0)}\,F\{f_n\}(\xi)\right\}(y_m) \tag{17}

    [0801] This is the usual form of the split-step PE method. Our more general equation is (in Fourier notation):

    [00404] f_{n+1}(y_m)=e^{-i\Delta(k_{n,m}-k_n)}\,F^{-1}\!\left\{e^{-i\Delta k_n\sqrt{1-(\xi/k_n)^2}}\,F\{f_n\}(\xi)\right\}(y_m)

    [0802] An interesting interpretation of the split-step method is evident from this: the algorithm step consists of an exact propagation over a distance \Delta in the background medium k_0 (which includes diffraction), followed by a phase-shift correction, i.e., multiplication by

    [00405] e^{-i\Delta(k_{n,m}-k_0)} \tag{18}

    [0803] which corrects the phase of a plane wave travelling forward.
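    For concreteness, the marching step just described can be sketched numerically. The following is a minimal illustration only, not the device implementation: it assumes NumPy, a 1-D transverse grid, and hypothetical names, and it applies the background-medium propagator in the transverse Fourier domain followed by the per-sample phase-screen correction of equation 18:

```python
import numpy as np

def split_step_march(f, k_col, k0, dx, dy):
    """One split-step Fourier marching step: exact propagation over dx in the
    background medium k0 (diffraction included), then a per-sample phase-screen
    correction exp(-1j*dx*(k(x,y) - k0)) for the slab's actual wavenumber."""
    xi = 2 * np.pi * np.fft.fftfreq(f.size, d=dy)   # transverse wavenumbers
    kz = np.sqrt(k0**2 - xi**2 + 0j)
    kz = np.where(kz.imag > 0, -kz, kz)             # decaying evanescent branch
    f_prop = np.fft.ifft(np.exp(-1j * dx * kz) * np.fft.fft(f))
    return np.exp(-1j * dx * (k_col - k0)) * f_prop

# sanity check: in a homogeneous slab (k_col == k0) a unit-amplitude plane wave
# just accumulates the phase exp(-1j*k0*dx) per step
k0, dx, dy, ny = 2 * np.pi, 0.01, 0.01, 256
f = np.ones(ny, dtype=complex)
for _ in range(10):
    f = split_step_march(f, k0 * np.ones(ny), k0, dx, dy)
err = np.abs(f - np.exp(-1j * k0 * 0.1)).max()
```

    In a homogeneous slab the phase screen is unity and the step reduces to exact plane-wave propagation, which the check above exploits.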

    Relation to the Generalized Born Approximation

    [0804] The above split step Fourier method can be seen to be a generalization of the Generalized Born method as developed at TechniScan scientific research division in the following manner. Suppose that we assume that the field variation in y is very slow, i.e.,

    [00406] F\{f_n\}(\xi)\approx 0\quad\text{for}\quad\xi\neq 0

    [0805] so that we may replace

    [00407] e^{-i\Delta k_n\sqrt{1-(\xi/k_n)^2}}\approx e^{-i\Delta k_n}

    [0806] in the above integral, which gives:

    [00408] \frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta\sqrt{k^2-\xi^2}}\,e^{i\xi y}\,d\xi\approx f_n(y)\,e^{-i\Delta k_n} \tag{19}

    [0807] Then (15) becomes:

    [00409] f_{n+1}(y_m)=e^{-i\Delta k_{n,m}}\,f_n(y_m),\qquad f_{n+1}(y_m)=e^{-i\Delta\sum_{n'=0}^{n}k_{n',m}} \tag{20}\quad(\text{since }f_0(y)=1)

    [0808] which is:

    [00410] f(x,y)=e^{-i\int_0^x k(x',y)\,dx'} \tag{21}

    [0809] This is the generalized Born approximation to the field. Thus, the PE formula 17 reduces to the GB formula if straight line propagation and no diffraction are assumed.
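    The straight-line phase accumulation of equations 20-21 can be illustrated with a short sketch (illustrative only, assuming NumPy; the slab thicknesses and wavenumbers are hypothetical):

```python
import numpy as np

def generalized_born_field(k_profile, dx):
    """Generalized Born field along a straight ray: the discrete form of
    f(x,y) = exp(-1j * integral of k along the ray), eq. 20-21."""
    return np.exp(-1j * np.cumsum(k_profile) * dx)

# two homogeneous slabs of 0.5 units each: accumulated phase k1*L1 + k2*L2 = 2.5
dx = 0.001
k_profile = np.concatenate([2.0 * np.ones(500), 3.0 * np.ones(500)])
f = generalized_born_field(k_profile, dx)
exit_phase = -np.angle(f[-1])    # 2.5 rad, inside (-pi, pi], so no wrapping here
```

    Note that the field magnitude stays exactly 1: straight-ray phase integration carries no diffraction, which is precisely what the split-step PE step adds back.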

    [0810] The PE method should be significantly superior to GB, particularly if the PE total field is rescattered:

    [00411] f^{s}(\mathbf{r})=k_0^2\int\gamma(\mathbf{r}')\,f^{PE}(\mathbf{r}')\,\frac{H_0^{(2)}\!\left(k_0\left|\mathbf{r}-\mathbf{r}'\right|\right)}{4i}\,ds' \tag{22}

    [0811] where f^{PE} is computed from (17) by the split-step algorithm. This will also improve the calculation of side- and backscatter. The PE formulation has no backscatter (or large-angle scattering) in it. Equation 22 is a way to put it back in, in a manner that gives a good approximation for weak scattering.

    [0812] It is important to note that the above formulation of the parabolic equation inversion method does not require the storage of any fields (unlike the integral equation method of the previous patent). Of course, this method (as in the previous method) does not require the storage of a large Jacobian, or its adjoint.

    Inverse Problem and Construction of Jacobian

    [0813] The construction of the inversion algorithm requires the formula for the Jacobian:

    [00412] \frac{\partial f_{n+1}(y_m)}{\partial k_{i,j}}=e^{-i\Delta(k_{n,m}-k_0)}\left(-i\Delta\,\delta_{[n,m],[i,j]}\right)F^{-1}\!\left\{e^{-i\Delta k_0\sqrt{1-(\xi/k_0)^2}}\,F\{f_n\}(\xi)\right\}(y_m)+e^{-i\Delta(k_{n,m}-k_0)}\,F^{-1}\!\left\{e^{-i\Delta k_0\sqrt{1-(\xi/k_0)^2}}\,F\!\left\{\frac{\partial f_n}{\partial k_{i,j}}\right\}(\xi)\right\}(y_m)

    [0814] More exactly, we use the conjugate gradient algorithms in our inversion, and consequently require only the action of the Jacobian defined above on the perturbation \delta k_{i,j}, i.e., the total variation of f with respect to k:

    [00413] \delta f_{n+1}(y_m)\approx\sum_{i,j}\frac{\partial f_{n+1}}{\partial k_{i,j}}(y_m)\,\delta k_{i,j}

    [0815] The recursion formula for the action of the Jacobian on the perturbation in k is given by

    [00414] \delta f_{n+1}(y_m)=e^{-i\Delta(k_{n,m}-k_0)}\left(-i\Delta\,\delta k_{n,m}\right)F^{-1}\!\left\{e^{-i\Delta k_0\sqrt{1-(\xi/k_0)^2}}\,F\{f_n\}(\xi)\right\}(y_m)+e^{-i\Delta(k_{n,m}-k_0)}\,F^{-1}\!\left\{e^{-i\Delta k_0\sqrt{1-(\xi/k_0)^2}}\,F\{\delta f_n\}(\xi)\right\}(y_m)

    [0816] It is advisable to rewrite the recursion expression for the field values f_{n+1}(y_m) as:

    [00415] f_{n+1}(y_m)=t_{n,m}\,\frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{f}_n(\xi)\,e^{-i\Delta\sqrt{k_0^2-\xi^2}}\,e^{i\xi y_m}\,d\xi

    [0817] where either

    [00416] t_{n,m}=\frac{2}{1+\eta_{n,m}}\,e^{-i\Delta(k_{n,m}-k_0)}\qquad\text{or}\qquad t_{n,m}=\frac{2}{1+\eta_{n-1,m}/\eta_{n,m}}\,e^{-i\Delta(k_{n,m}-k_n)}

    [0818] is the transmission coefficient plus phase mask characterizing the medium. The form used depends upon whether the model used as the background medium includes a priori known layering or not. In this case the equation for the Jacobian itself reads:

    [00417] \frac{\partial f_{n+1}(y_m)}{\partial t_{i,j}}=\delta_{[n,m],[i,j]}\,F^{-1}\!\left\{e^{-i\Delta\sqrt{k_0^2-\xi^2}}\,F\{f_n\}(\xi)\right\}(y_m)+t_{n,m}\,F^{-1}\!\left\{e^{-i\Delta\sqrt{k_0^2-\xi^2}}\,F\!\left\{\frac{\partial f_n}{\partial t_{i,j}}\right\}(\xi)\right\}(y_m)

    [0819] and the total variation in terms of t is:

    [00418] \delta f_{n+1}(y_m)\approx\sum_{i,j}\frac{\partial f_{n+1}}{\partial t_{i,j}}(y_m)\,\delta t_{i,j}

    [0820] The recursion formula for \delta f is therefore:

    [00419] \delta f_{n+1}(y_m)=\delta t_{n,m}\,F^{-1}\!\left\{e^{-i\Delta\sqrt{k_0^2-\xi^2}}\,F\{f_n\}(\xi)\right\}(y_m)+t_{n,m}\,F^{-1}\!\left\{e^{-i\Delta\sqrt{k_0^2-\xi^2}}\,F\{\delta f_n\}(\xi)\right\}(y_m)
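    This recursion for \delta f can be checked on a toy version of the marching scheme by comparing it against a finite-difference perturbation of the forward recursion f_{n+1}=t_n\odot F^{-1}\{P\,F\{f_n\}\}. The following sketch is illustrative only (NumPy, with a random stand-in propagator and random masks):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 8                                   # marching steps, transverse samples
P = np.exp(-1j * rng.uniform(0, 1, M))        # stand-in one-step propagator (Fourier diagonal)
A = lambda g: np.fft.ifft(P * np.fft.fft(g))  # A = F^-1 P F
t = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))   # phase masks t_n

def forward(t):
    f = np.ones(M, dtype=complex)             # f_0 = 1 (unit incident field)
    for n in range(N):
        f = t[n] * A(f)                       # f_{n+1} = t_n ⊙ (A f_n)
    return f

# linearized recursion: df_{n+1} = dt_n ⊙ (A f_n) + t_n ⊙ (A df_n)
dt = 1e-6 * (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)))
f = np.ones(M, dtype=complex)
df = np.zeros(M, dtype=complex)
for n in range(N):
    v = A(f)
    df = dt[n] * v + t[n] * A(df)
    f = t[n] * v

fd = forward(t + dt) - forward(t)             # finite-difference reference
rel_err = np.abs(df - fd).max() / np.abs(fd).max()
```

    The residual between the linearized recursion and the finite difference is second order in the perturbation size, as expected of a Jacobian action.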

    [0821] The Formula for the Hermitian Conjugate of the Jacobian

    [0822] The recursion for the total variation in the data, \delta f_N, can be written as:

    [00420] \delta f_N=\delta t_N\odot v_N+W_N\,\delta f_{N-1}

    [0823] where \odot denotes the Hadamard (elementwise) product, which is defined in the following manner:

    [00421] t\odot v\equiv\begin{bmatrix}t_1\\ \vdots\\ t_{N-1}\\ t_N\end{bmatrix}\odot\begin{bmatrix}v_1\\ \vdots\\ v_{N-1}\\ v_N\end{bmatrix}=\begin{bmatrix}t_1v_1\\ \vdots\\ t_{N-1}v_{N-1}\\ t_Nv_N\end{bmatrix}

    [0824] Also, the matrix W_j is defined as:

    [00422] W_j=[t_j]\,A_j

    [0825] where [t_j] represents the diagonal matrix:

    [00423] [t_j]\equiv\begin{bmatrix}t_{j,1}&0&\cdots&0\\ 0&t_{j,2}&\cdots&0\\ \vdots&&\ddots&\vdots\\ 0&0&\cdots&t_{j,M}\end{bmatrix}

    [0826] whose diagonal terms consist of the elements of the vector t_j, and A_j is the matrix defined as (where juxtaposition always indicates matrix multiplication):

    [00424] A_j\equiv F^{-1}P_jF

    [0827] Also the vector v_j is defined as

    [00425] v_j=A_jf_{j-1}

    [0828] i.e.,

    [00426] v_j=F^{-1}P_jFf_{j-1}

    [0829] Note that with the definitions:

    [00427] A\equiv\begin{bmatrix}a_1\\ \vdots\\ a_M\end{bmatrix}\qquad [t]\equiv\begin{bmatrix}t_1&0&\cdots&0\\ 0&t_2&\cdots&0\\ \vdots&&\ddots&\vdots\\ 0&0&\cdots&t_M\end{bmatrix}

    [0830] for the M\times M matrices A and [t], it follows that the matrix product is given by:

    [00428] [t]A=\begin{bmatrix}t_1a_1\\ \vdots\\ t_Ma_M\end{bmatrix}

    [0831] That is, the i-th row of [t]A is t_i multiplied by the i-th row of A.

    [0832] With these notational assumptions, the recursion for the total variation becomes:

    [00429] \delta f_N=[v_N]\,\delta t_N+W_N[v_{N-1}]\,\delta t_{N-1}+W_NW_{N-1}[v_{N-2}]\,\delta t_{N-2}+\cdots+\left(W_NW_{N-1}\cdots W_1\right)[v_0]\,\delta t_0

    [0833] The v_j=F^{-1}P_jFf_{j-1} are computed and stored as the forward fields are computed within the subroutine jach, which computes the action of the Hermitian conjugate of the Jacobian on (the complex conjugate of) the residual vector.

    [0834] The updates for one view, then, are constructed in sequence using the formulae:

    [00430] \delta t_N=[\,\overline{v}_N\,]\,\overline{\delta f_N},\qquad \delta t_{N-1}=[\,\overline{v}_{N-1}\,]\,W_N^{T}\,\overline{\delta f_N}=[\,\overline{v}_{N-1}\,]\,FP_NF^{-1}[\,t_N\,]\,\overline{\delta f_N},\quad\text{etc.}

    [0835] See FIGS. 19-24.
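    The forward/reverse sweep structure above can be verified on a small example with the standard inner-product test \langle r, J\,\delta t\rangle=\langle J^H r,\delta t\rangle. The following is a toy sketch only (NumPy, random stand-ins for the propagators P_j and masks t_j, not the subroutine jach itself):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 3, 8
F, Finv = np.fft.fft, np.fft.ifft
P = np.exp(-1j * rng.uniform(0, 1, (N, M)))                  # per-step propagators
t = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))   # phase masks

# forward sweep, storing v_j = F^-1 P_j F f_{j-1}
f, v = np.ones(M, dtype=complex), []
for j in range(N):
    v.append(Finv(P[j] * F(f)))
    f = t[j] * v[j]

def J(dt):
    """Jacobian action: df_{j+1} = dt_j ⊙ v_j + t_j ⊙ F^-1 P_j F df_j."""
    df = np.zeros(M, dtype=complex)
    for j in range(N):
        df = dt[j] * v[j] + t[j] * Finv(P[j] * F(df))
    return df

def JH(r):
    """Hermitian-conjugate action as a reverse sweep with conjugated factors.
    (F^-1 diag(P) F)^H = F^-1 diag(conj(P)) F: the FFT scale factors cancel."""
    r = r.copy()
    dt = np.zeros((N, M), dtype=complex)
    for j in reversed(range(N)):
        dt[j] = np.conj(v[j]) * r
        r = Finv(np.conj(P[j]) * F(np.conj(t[j]) * r))
    return dt

dt = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
r = rng.normal(size=M) + 1j * rng.normal(size=M)
lhs = np.vdot(r, J(dt))     # <r, J dt>
rhs = np.vdot(JH(r), dt)    # <J^H r, dt>
```

    Agreement of the two inner products to machine precision confirms that the reverse sweep really is the Hermitian conjugate of the forward linearization.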

    Scientific Background for the Generalized Born Approximation:

    [0836] The exact scattering 2D integral equation is given by:

    [00431] f^{scat}(\mathbf{r})=k_0^2\int\gamma(\mathbf{r}')\,f(\mathbf{r}')\,\frac{H_0^{(2)}\!\left(k_0\left|\mathbf{r}-\mathbf{r}'\right|\right)}{4i}\,ds' \tag{1}

    [0837] where f is the exact total field. The generalized Born approximation is obtained by approximating f in (1) with a straight-line phase integrated field approximation. For a plane wave traveling in the +x direction, this approximation is:

    [00432] f(x,y)\approx e^{-ik_0x}\,e^{-i\omega\int_{-\infty}^{x}\left(1/c(x',y)-1/c_0\right)dx'} \tag{2}

    [0838] which has the proper phase (time delay) assuming straight-line propagation. For a point-source incident field, the approximation is:

    [00433] f(\mathbf{r})\approx\frac{H_0^{(2)}\!\left(\omega\int_{\mathbf{r}_{tran}}^{\mathbf{r}}dl/c(l)\right)}{4i} \tag{3}

    [0839] where \mathbf{r}_{tran} is the position of the point source. For a point receiver at \mathbf{r}_{rec}, equation 1 using equation 3 gives:

    [00434] f^{scat}(\mathbf{r}_{rec})=k_0^2\int\gamma(\mathbf{r}')\,\frac{H_0^{(2)}\!\left(\omega\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l)\right)}{4i}\,\frac{H_0^{(2)}\!\left(k_0\left|\mathbf{r}_{rec}-\mathbf{r}'\right|\right)}{4i}\,ds' \tag{4}

    [0840] where the path of integration from \mathbf{r}_{tran} to \mathbf{r}' is a straight line.

    Transformation to the Time Domain

    [0841] Using the asymptotic approximation:

    [00435] H_0^{(2)}(x)\ \xrightarrow{\,x\to\infty\,}\ \sqrt{\frac{2i}{\pi x}}\,e^{-ix} \tag{5}

    [0842] gives the approximation:

    [00436] f^{scat}(\mathbf{r}_{rec},\mathbf{r}_{tran},\omega)=\frac{-i\omega}{8\pi c_0}\int\gamma(\mathbf{r}')\,\frac{e^{-i\omega\int_C dl/c(l)}\,e^{-i\omega\left|\mathbf{r}_{rec}-\mathbf{r}'\right|/c_0}}{\sqrt{\left|\mathbf{r}_{tran}-\mathbf{r}'\right|\left|\mathbf{r}_{rec}-\mathbf{r}'\right|}}\,ds' \tag{6}

    [0843] where the path integral is given by:

    [00437] \int_C dl/c(l)=\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l)

    [0844] Transforming to the time domain gives:

    [00438] f^{scat}(\mathbf{r}_{rec},\mathbf{r}_{tran},t)=\frac{-1}{8\pi c_0}\,\frac{\partial}{\partial t}\int\gamma(\mathbf{r}')\,\frac{\delta\!\left(t-\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l)-\left|\mathbf{r}_{rec}-\mathbf{r}'\right|/c_0\right)}{\sqrt{\left|\mathbf{r}_{tran}-\mathbf{r}'\right|\left|\mathbf{r}_{rec}-\mathbf{r}'\right|}}\,ds' \tag{7}

    [0845] In reflection mode (\mathbf{r}_{rec} on the same side of the body as \mathbf{r}_{tran}), there is a problem with equation 7 of this section in that the scattering from point \mathbf{r}' arrives at the receiver at the wrong time. The transmitter pulse arrives at \mathbf{r}' at the properly delayed time,

    [00439] \int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l),

    [0846] but then the response travels back to the receiver as if the body were absent (time back to receiver = \left|\mathbf{r}_{rec}-\mathbf{r}'\right|/c_0). Thus, equation 7 in this section is acausal. To correct this, in reflection mode, we time-delay the receiver path as well:

    [00440] f^{scat}(\mathbf{r}_{rec},\mathbf{r}_{tran},t)=\frac{-1}{8\pi c_0}\,\frac{\partial}{\partial t}\int\gamma(\mathbf{r}')\,\frac{\delta\!\left(t-\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l)-\int_{\mathbf{r}'}^{\mathbf{r}_{rec}}dl/c(l)\right)}{\sqrt{\left|\mathbf{r}_{tran}-\mathbf{r}'\right|\left|\mathbf{r}_{rec}-\mathbf{r}'\right|}}\,ds' \tag{8}

    [0847] Equation 8 has been found to be quite accurate in reflection mode. In transmission mode, we retain 7 because in transmission, the scattered field is, in fact, acausal (in the sense that part of the scattered field arrives as if no body were present).

    [0848] In reality, transducers have a limited bandwidth. Let s(t) be the system response of the transducers. Then equation 8 becomes:

    [00441] f^{scat}(\mathbf{r}_{rec},\mathbf{r}_{tran},t)=\frac{-1}{8\pi c_0}\,\frac{\partial}{\partial t}\,s(t)*\int\gamma(\mathbf{r}')\,\frac{\delta\!\left(t-\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l)-\int_{\mathbf{r}'}^{\mathbf{r}_{rec}}dl/c(l)\right)}{\sqrt{\left|\mathbf{r}_{tran}-\mathbf{r}'\right|\left|\mathbf{r}_{rec}-\mathbf{r}'\right|}}\,ds' \tag{9}

    [0849] where the * denotes convolution in time. Equation 9 provides a very powerful algorithm for time-domain, reflection-mode scattering calculation. Its main advantage over, say, the parabolic algorithm is that it is in the time domain. Reflection-mode scattering using the parabolic method requires that each frequency be computed separately, while equation 9 gives the full time waveform with one computation.

    [0850] The addition of attenuation into equation 9 is trivial:

    [00442] f^{scat}(\mathbf{r}_{rec},\mathbf{r}_{tran},t)=\frac{-1}{8\pi c_0}\,\frac{\partial}{\partial t}\,s(t)*\int\gamma(\mathbf{r}')\,e^{-\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}\alpha(l)\,dl}\,e^{-\int_{\mathbf{r}'}^{\mathbf{r}_{rec}}\alpha(l)\,dl}\,\frac{\delta\!\left(t-\int_{\mathbf{r}_{tran}}^{\mathbf{r}'}dl/c(l)-\int_{\mathbf{r}'}^{\mathbf{r}_{rec}}dl/c(l)\right)}{\sqrt{\left|\mathbf{r}_{tran}-\mathbf{r}'\right|\left|\mathbf{r}_{rec}-\mathbf{r}'\right|}}\,ds' \tag{10}

    [0851] where \alpha(\mathbf{r}) is the inhomogeneous attenuation.

    [0852] Imaging algorithms can be derived from equations 9-10 by using these equations as the nonlinear operator (in \gamma) for predicting the scattering data and applying our standard Fletcher-Reeves or Polak-Ribiere approach.
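    The delay-and-amplitude structure of equation 9 can be sketched for a homogeneous background and point scatterers. This is an illustration only: the 1/(8\pi c_0) factor and the time derivative are omitted, and the pulse s(t), geometry, and sample rate are hypothetical:

```python
import numpy as np

c0, fs, n = 1500.0, 50e6, 4096          # speed (m/s), sample rate (Hz), samples
t = np.arange(n) / fs

def s(t):
    """Toy band-limited system response: 1 MHz tone burst centered at 5 us."""
    return np.exp(-0.5 * ((t - 5e-6) / 1e-6) ** 2) * np.cos(2e6 * np.pi * (t - 5e-6))

def gb_trace(scatterers, r_tx, r_rx):
    """Delay-and-amplitude skeleton of the time-domain generalized Born model:
    each scatterer adds s(t - tau_tx - tau_rx) / sqrt(R_tx * R_rx)."""
    trace = np.zeros_like(t)
    for gamma, r in scatterers:
        R_tx, R_rx = np.linalg.norm(r - r_tx), np.linalg.norm(r - r_rx)
        trace += gamma * s(t - (R_tx + R_rx) / c0) / np.sqrt(R_tx * R_rx)
    return trace

r0 = np.array([0.0, 0.0])               # pulse-echo: transmitter == receiver
trace = gb_trace([(1.0, np.array([0.0, 0.03]))], r0, r0)
peak_t = t[np.argmax(np.abs(trace))]    # round trip 2*0.03/1500 = 40 us, + 5 us pulse center
```

    The whole time trace comes from a single pass over the scatterers, which is the frequency-by-frequency advantage over the parabolic method noted above.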

    Basis for Fast Computational Algorithm Based on Warping of the Metric in Image Space

    [0853] The 1-D Generalized Born formula in the frequency domain is:

    [00443] f^{s}(\omega,-d)=\frac{\omega\,e^{-i\omega d/c_0}}{2ic_0}\int_0^a\gamma(x)\,e^{-i\omega\left(\frac{2}{c_0}\int_0^xS_r(x')\,dx'+d/c_0\right)}\,dx \tag{1}

    assuming f^{i}(\omega,x)=e^{-i\omega(x+d)/c_0}, where: [0854] S_r(x)=\text{relative slowness}=\sqrt{\gamma(x)+1}

    [0855] Inverse Fourier transformation of 1 to time gives:

    [00444] -2c_0\,F^{-1}\!\left\{\frac{f^{s}(\omega,-d)}{i\omega}\right\}(t)=\int_0^a\gamma(x)\,\delta\!\left(t-\frac{2}{c_0}\int_0^xS_r(x')\,dx'-2d/c_0\right)dx \tag{2}\quad\text{or}\quad -2c_0\int_0^t f^{s}(t'+2d/c_0,-d)\,dt'=\int_0^a\gamma(x)\,\delta\!\left(t-\frac{2}{c_0}\int_0^xS_r(x')\,dx'\right)dx \tag{4}

    [0856] Change of variables:

    [00445] z=\int_0^{x(z)}S_r(x')\,dx',\qquad dx=\frac{dz}{S_r(x(z))} \tag{5}

    to get:

    [00446] -2c_0\int_0^t f^{s}(t'+2d/c_0,-d)\,dt'=\int_0^{z(a)}\frac{\gamma(x(z'))}{S_r(x(z'))}\,\delta\!\left(t-\frac{2}{c_0}z'\right)dz' \tag{6}

    which gives:

    [00447] -4\int_0^{2z/c_0}f^{s}(t'+2d/c_0,-d)\,dt'=\frac{\gamma(x(z))}{S_r(x(z))} \tag{7}

    [0857] where x(z) is defined in (5), which is also:

    [00448] x(z)=\int_0^z\frac{dz'}{S_r(x(z'))} \tag{8}

    [0858] which provides a recursive formula for x(z) for the discretized case. For example, trapezoidal integration of (8) gives:

    [00449] x_n-\frac{\Delta}{2}\,\frac{1}{S_r(x_n)}=\frac{\Delta}{2}\left\{\frac{1}{S_r(0)}+2\sum_{l=1}^{n-1}\frac{1}{S_r(x_l)}\right\},\qquad x_n=x(z_n),\quad z_n=n\Delta \tag{9}

    [0859] which is easy to solve for x_n if, for example, S_r is piecewise constant.
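    A short sketch of the recursion (illustrative only, assuming NumPy; the fixed-point inner loop is one simple way to handle the implicit x_n term and is not the only option):

```python
import numpy as np

def warp_coordinates(S_r, dz, n_steps):
    """Solve x(z) = integral_0^z dz'/S_r(x(z')) by the trapezoidal recursion of
    eq. 9:  x_n - (dz/2)/S_r(x_n) = (dz/2)*(1/S_r(0) + 2*sum_{l=1}^{n-1} 1/S_r(x_l)).
    The implicit x_n is found by a few fixed-point iterations, which converge
    quickly for small dz and smooth (e.g. piecewise-constant) S_r."""
    x = [0.0]
    running = 1.0 / S_r(0.0)                 # 1/S_r(0) + 2 * sum over interior nodes
    for _ in range(n_steps):
        rhs = 0.5 * dz * running
        xn = rhs + 0.5 * dz / S_r(x[-1])     # initial guess from previous node
        for _ in range(20):                  # fixed-point solve of the implicit term
            xn = rhs + 0.5 * dz / S_r(xn)
        x.append(xn)
        running += 2.0 / S_r(xn)
    return np.array(x)

# constant relative slowness S_r = 2 gives the exact answer x(z) = z/2
x = warp_coordinates(lambda x: 2.0, dz=0.01, n_steps=100)
```

    For piecewise-constant S_r the fixed-point loop settles immediately within each slab, matching the observation that the recursion is easy to solve in that case.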

    [0860] This simple one-dimensional example illustrates the technique of changing the metric to obtain a fast algorithm. The 2-D case is entirely analogous, with the direction of the incident plane wave rotated to correspond to the x-axis in the above algorithm.

    Scientific Background for Propagation-Backpropagation Method and Propagation-CG Generalization

    [0861] This section is by nature technical.

    [0862] The Helmholtz equation (also referred to as the reduced wave equation) is desired to be solved exactly (i.e., without any type of linearization or perturbation assumption) in order to reconstruct certain parameters in an object. The unknown object is illuminated with some type of wave energy (whether acoustic or electromagnetic). The wave equation that is solved is of the form:

    [00450] \Delta f+k_o^2\left(1-\hat{\gamma}\right)f=0

    [0863] where k_o is the wavenumber in free space. Note that Natterer uses the notation \Delta for the Laplacian; therefore, in the following, this notation will be used. The \hat{\gamma} is the object function:

    [00451] \hat{\gamma}(r)=1-\frac{c_o^2}{c^2(r)}=-\left(\frac{c_o^2}{c^2(r)}-1\right)=-\gamma(r)

    [0864] where \hat{\gamma} is the object function definition employed by Natterer. It is the negative of the standard definition of \gamma as defined and used in this patent. Furthermore, in the papers included as reference, written by Natterer, and Natterer and Wubbeling, a different symbol is used to represent \hat{\gamma}.

    [0865] For purposes of this discussion we will define v_\theta in the following manner:

    [00452] f\equiv e^{ik_0\theta\cdot r}\left(1+v_\theta\right)

    [0866] where \theta is a unit vector in R^2.

    [0867] In other words,

    [00453] f\equiv e^{ik_0\theta\cdot r}+e^{ik_0\theta\cdot r}v_\theta

    [0868] so that

    [00454] e^{ik_0\theta\cdot r}

    [0869] is the incident plane wave and v_\theta is

    [00455] e^{-ik_0\theta\cdot r}f^{sc}

    [0870] where f^{sc} is the scattered field.

    [0871] NOTE: The definition of \hat{\gamma}(r) here is designed to correspond to the object function used in A Propagation-Backpropagation Method for Ultrasound Tomography, Frank Natterer. This definition is the negative of the standard used in this patent, and previous patents.

    [0872] When the definition for the field is substituted into the Helmholtz equation, the result is the equation that v_\theta must solve, written here for the j-th illumination direction \theta_j:

    [00456] \Delta v_j+2ik_0\theta_j\cdot\nabla v_j-k_0^2\left(1+v_j\right)\hat{\gamma}=0

    [0873] FIG. 25 shows the basic geometry for the paper A Propagation-Backpropagation Method for Ultrasound Tomography, Frank Natterer, which is included herein as reference. This is referred to as FIG. 1 in this paper. The square Q_j is the square of sidelength 2 whose boundary is made up of \Gamma_j (side-scatter directions), \Gamma_j^- (backscatter direction), and \Gamma_j^+ (forward-scatter direction). It encompasses the region \Omega, which contains the support of the object function \hat{\gamma}. \theta_j is the direction of the incident wavefield propagation. FIGS. 26A and 26B show an elongated rectangle Q_j enclosing the region \Omega, with boundaries \Gamma_j (side-scatter directions), \Gamma_j^- (backscatter direction), and \Gamma_j^+ (forward-scatter direction); \theta_j is the direction of the incident wavefield propagation (for FIG. 26A). The geometry of FIG. 25 shows the incident field direction \theta_j, the boundary in the backscattered direction, \Gamma_j^-, the boundary in the side-scattered direction, \Gamma_j, as well as the boundary in the forward-scattered direction, \Gamma_j^+. The ultimate goal is to determine the distribution of appropriate scattering coefficients, \gamma or \hat{\gamma}, given the measured fields

    [00463] g_j\ \text{on}\ \partial Q_j=\Gamma_j^+\cup\Gamma_j\cup\Gamma_j^-,\qquad j=1,\ldots,N_{view},

    where N_{view} is the number of views and \partial Q_j is the boundary of Q_j. This goal will be achieved by applying the Paige-Saunders least-squares conjugate gradient algorithm to the functional which is the difference between the measured field on \Gamma_j^+ and the calculated scattered field on \Gamma_j^+. The scattered field on \Gamma_j^- and on the sides \Gamma_j is also incorporated into the algorithm, since these values are used as boundary values in the numerical solution of the partial differential equations enumerated below.

    [0874] As is well known, we are required to calculate the derivative of v_j with respect to \hat{\gamma} in order to utilize the method of conjugate gradients. This derivative will be denoted by

    [00467] \frac{\partial v_j}{\partial\hat{\gamma}},

    [0875] and is also referred to as the Frechet derivative. It is the functional analysis equivalent of the Jacobian in the calculus of several variables. This Frechet derivative is a linear operator which acts upon object function perturbations, \delta\hat{\gamma}, and delivers up a calculated total field on Q: i.e.,

    [00468] \frac{\partial v_j}{\partial\hat{\gamma}}\,\delta\hat{\gamma}

    [0876] is a total calculated field. Now, in accordance with the papers by F. Natterer and F. Wubbeling included herein as references, we will also introduce the notation

    [00469] R_j(\hat{\gamma})\equiv v_j\big|_{\Gamma_j^+}.

    That is, R_j(\hat{\gamma}) is v_j restricted to \Gamma_j^+, the forward-scattering part of the boundary of Q. By definition, the derivative

    [00471] \frac{\partial R_j}{\partial\hat{\gamma}}

    [0877] is the restriction of

    [00472] \frac{\partial v_j}{\partial\hat{\gamma}}

    [0878] to the forward-scattered direction:

    [00473] \frac{\partial R_j}{\partial\hat{\gamma}}\,\delta\hat{\gamma}=\left(\frac{\partial v_j}{\partial\hat{\gamma}}\,\delta\hat{\gamma}\right)\bigg|_{\Gamma_j^+}

    [0879] We will calculate these operators explicitly below. To be exact, the conjugate gradient algorithm we employ requires the calculation of

    [00474] \frac{\partial v_j}{\partial\hat{\gamma}}

    [0880] acting on \delta\hat{\gamma}, for specific, known \delta\hat{\gamma}, i.e., the calculation of the function

    [00475] \frac{\partial v_j}{\partial\hat{\gamma}}\,\delta\hat{\gamma}.

    [0881] To make the equations easier to read, this function will be notated as \delta v_j, that is:

    [00476] \delta v_j\equiv\frac{\partial v_j}{\partial\hat{\gamma}}\,\delta\hat{\gamma}

    [0882] is a function representing a total field on region Q.

    [0883] First, consider the forward problem in the direction \theta_j, which we have denoted (as in F. Natterer's papers) by R_j: given some object function \hat{\gamma}, which describes the distribution of parameters within the image grid Q, determine the solution v_j to the following boundary value problem:

    [00477] \Delta v_j+2ik_o\theta_j\cdot\nabla v_j-k_o^2\left(1+v_j\right)\hat{\gamma}=0\qquad\text{for }j=1,\ldots,N_{view}

    [0884] subject to the conditions from measured values g_j:

    [00478] v_j=g_j\ \text{on}\ \Gamma_j\cup\Gamma_j^-\qquad\text{and}\qquad\frac{\partial v_j}{\partial\nu}=\frac{\partial g_j}{\partial\nu}\ \text{on}\ \Gamma_j^-

    [0885] Then R_j(\hat{\gamma}) is v_j restricted to \Gamma_j^+, i.e., R_j(\hat{\gamma})\equiv v_j\big|_{\Gamma_j^+}, where v_j is the solution to the above boundary value problem.

    [0886] NOTE: The solution of this boundary value problem requires the knowledge of the total field on the sides and in the backscatter direction, and the normal derivative of the total field on the backscatter-direction boundary \Gamma_j^-, by virtue of the finite-difference marching method employed to solve the partial differential equation. Thus, from a physical point of view, the aperture is 360°. The side-scattered and back-scattered fields are both included in the solution.

    [0887] The system that we require to solve for \hat{\gamma} is a nonlinear system, of the form:

    [00481] R_j(\hat{\gamma})=g_j\big|_{\Gamma_j^+} \tag{1}

    [0888] Recall that g_j is the measured data in the direction \theta_j. Because it is a nonlinear system, it must be solved iteratively by means of the Newton-Raphson method. To this end, given a guess \hat{\gamma}^r as an approximation to the solution of (1), consider R_j(\hat{\gamma}^r+\delta\hat{\gamma}^r). One can write:

    [00482] R_j\!\left(\hat{\gamma}^r+\delta\hat{\gamma}^r\right)\approx R_j\!\left(\hat{\gamma}^r\right)+\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\delta\hat{\gamma}^r,\qquad\text{where}\quad\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]

    [0889] is the Jacobian map which linearly approximates R_j at \hat{\gamma}^r. Note that this is a linear map:

    [00483] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]:\ (\text{OBJECT FUNCTIONS})\longrightarrow(\text{MEASURED FIELDS})

    [0890] We can use this fact to obtain an explicit representation of this linear map.

    [0891] Let (v_j+\delta v_j) be the total field resulting from applying the incident field in direction \theta_j to the object function (\hat{\gamma}^r+\delta\hat{\gamma}^r). That is,

    \Delta\!\left(v_j+\delta v_j\right)+2ik_o\theta_j\cdot\nabla\!\left(v_j+\delta v_j\right)-k_o^2\left(1+v_j+\delta v_j\right)\left(\hat{\gamma}^r+\delta\hat{\gamma}^r\right)=0

    [0892] Restricting (v_j+\delta v_j) to the forward-scattering border gives the symbolic equation:

    [00485] R_j\!\left(\hat{\gamma}^r+\delta\hat{\gamma}^r\right)=\left(v_j+\delta v_j\right)\big|_{\Gamma_j^+}

    [0893] Using the fact that

    [00486] \Delta v_j+2ik_o\theta_j\cdot\nabla v_j-k_o^2\left(1+v_j\right)\hat{\gamma}^r=0

    [0894] gives

    [00487] \Delta\!\left(\delta v_j\right)+2ik_o\theta_j\cdot\nabla\!\left(\delta v_j\right)-k_o^2\left(\delta v_j\right)\hat{\gamma}^r=k_o^2\left[\left(1+v_j\right)\delta\hat{\gamma}^r+\delta\hat{\gamma}^r\,\delta v_j\right]

    [0895] The last term on the right-hand side contains the quadratic terms in the perturbations and so will be ignored, since we are interested in the linear variation of v_j with \hat{\gamma}. Therefore, using the definition

    [00488] \delta v_j=\frac{\partial v_j}{\partial\hat{\gamma}}\,\delta\hat{\gamma},

    [0896] which is the part of the variation of v_j which is linear in \delta\hat{\gamma}, it follows that \delta v_j is the solution to the following initial value problem with known, nonzero right-hand side:

    [00489] \Delta\!\left(\delta v_j\right)+2ik_o\theta_j\cdot\nabla\!\left(\delta v_j\right)-k_o^2\,\delta v_j\,\hat{\gamma}^r=k_o^2\left(1+v_j\right)\delta\hat{\gamma}^r

    [0897] with the boundary values:

    [00490] \delta v_j=0\ \text{on}\ \Gamma_j\cup\Gamma_j^-,

    [0898] and initial value

    [00491] \frac{\partial\,\delta v_j}{\partial\nu}=0\ \text{on}\ \Gamma_j^-

    [0899] The boundary values follow from the following considerations: since \delta v_j is the linear part of the total variation of v_j, it follows that the perturbed field is v_j+\delta v_j+\text{higher order terms}\ldots

    [0900] It follows that, at the boundaries, v_j+\delta v_j\approx g_j; but v_j=g_j at the boundaries, by definition of v_j; therefore \delta v_j=0 at the boundaries, as stated.

    [0901] For purposes of the Paige-Saunders method, or for direct application as backpropagation, it is important to determine a similar explicit representation for the Hermitian adjoint, or Hermitian transpose, of the Jacobian map:

    [00492] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}

    [0902] Again, it is the action of the Hermitian transpose on a given function which is actually used by the conjugate-gradient-type algorithms:

    [00493] \left[\frac{\partial v_j}{\partial\hat{\gamma}}\right]^{H}:\ (\text{TOTAL FIELDS})\longrightarrow(\text{OBJECT FUNCTIONS})

    [0903] Using the definition of R_j as the restriction of the total field to \Gamma_j^+, it follows that the Hermitian conjugate of \left[\partial R_j/\partial\hat{\gamma}\right]

    [0904] is a linear map:

    [00496] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}:\ (\text{MEASURED FIELDS ON }\Gamma_j^+)\longrightarrow(\text{OBJECT FUNCTIONS})

    [0905] The calculation of the action of \left[\partial R_j/\partial\hat{\gamma}\right]^{H} on a given measured field on \Gamma_j^+ is a somewhat tedious process carried out in Natterer, A Propagation-Backpropagation Method for Ultrasound Tomography [included in this patent as reference]. The final result is: given g_j, a function on \Gamma_j^+, the action of \left[\partial R_j/\partial\hat{\gamma}\right]^{H} on g_j is

    [00501] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}\!\left(g_j\right)=k_o^2\,\overline{\left(1+v_j\right)}\,z

    [0908] where z is the solution to the initial boundary value problem:

    [00502] \Delta z+2ik_o\theta_j\cdot\nabla z-k_o^2\,\overline{\hat{\gamma}}\,z=0

    [0909] (where \overline{\hat{\gamma}}(r) denotes the complex conjugate of \hat{\gamma}) with boundary values: [0910] z=0 on

    [00503] \Gamma_j\cup\Gamma_j^+,

    and initial value

    [00504] \frac{\partial z}{\partial\nu}=g_j\ \text{on}\ \Gamma_j^+

    [0911] NOTE that g_j is used only on the forward-scattering border \Gamma_j^+, which is as it should be, since this is the only place that the function g_j is defined. Note that g_j is back-propagated across the region Q in order to obtain the function z, which is then used to obtain the function

    [00506] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}g_j

    [0912] which is defined on all of Q.

    Newton-Raphson Method Applied to Inversion

    [0913] Now consider the system:

    [00507] g_j=R_j\!\left(\hat{\gamma}^r+\delta\hat{\gamma}^r\right)\approx R_j\!\left(\hat{\gamma}^r\right)+\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\delta\hat{\gamma}^r

    [0914] Rewriting this gives the following linear system which must be solved in order to obtain \delta\hat{\gamma}^r:

    [00508] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\left(\delta\hat{\gamma}^r\right)=g_j-R_j\!\left(\hat{\gamma}^r\right)

    [0915] The vastly underdetermined form of this system leads one to define the function d such that the following equation holds:

    [00509] \delta\hat{\gamma}^r\equiv\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}d

    [0916] Then the corresponding system for d is:

    [00510] \left[\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}\right]d=g_j-R_j\!\left(\hat{\gamma}^r\right)

    [0917] and the expression for \delta\hat{\gamma}^r is given by:

    [00511] \delta\hat{\gamma}^r=\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}d=\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}\left[\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}\right]^{-1}\left(g_j-R_j\!\left(\hat{\gamma}^r\right)\right)

    [0918] Now, the efficient method for the determination of \delta\hat{\gamma}^r will involve some form of approximation:

    [00512] C_j\approx\left[\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]^{H}\right]^{-1}

    [0919] For example, C_j=\text{Identity} has been shown to work well (this is the approach taken by Natterer in A Propagation-Backpropagation Method for Ultrasound Tomography). Other choices include C_j= some diagonal matrix. One could also use the Paige-Saunders least-squares conjugate gradient method to find the minimum-norm solution to:

    [00513] \left[\frac{\partial R_j}{\partial\hat{\gamma}}\right]\delta\hat{\gamma}^r=g_j-R_j\!\left(\hat{\gamma}^r\right)

    [0920] In any case, once the update \delta\hat{\gamma}^r has been found, it is added to the previous guess with some multiplicative factor \lambda\leq 1 to obtain the new estimate for \hat{\gamma}:

    [00514] \hat{\gamma}^{r+1}=\hat{\gamma}^{r}+\lambda\left(\delta\hat{\gamma}^{r}\right)

    [0921] This process is repeated for j=1,\ldots,N_{view}. This approach differs significantly from the approach described earlier in that a vastly underdetermined problem is solved for each direction, as opposed to solving an overdetermined problem for the update \delta\gamma.

    [0922] Note that the forward problem can be updated after 1, 2, or any finite number of directions have been carried out. The tradeoff is that it is much more computationally expensive to re-solve the forward problem for each direction; however, the speed-up in convergence may make it worth the computational effort.
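    The per-view minimum-norm update can be illustrated with a toy linear stand-in for the Jacobian (illustrative only: a small random matrix replaces the PDE-based operator, and the sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_meas = 50, 8                  # 8 data values vs 50 unknowns: vastly underdetermined
J = rng.normal(size=(n_meas, n_pix))   # toy stand-in for the Jacobian [dR_j/dgamma]
g = rng.normal(size=n_meas)            # measured data for this view

gamma = np.zeros(n_pix)
residual = g - J @ gamma
# minimum-norm (Newton-Raphson) update: dgamma = J^H [J J^H]^-1 (g - R(gamma))
d = np.linalg.solve(J @ J.conj().T, residual)
gamma = gamma + J.conj().T @ d

new_residual = np.linalg.norm(g - J @ gamma)   # this view is now matched exactly
```

    Replacing the small solve with C_j = Identity reduces the update to plain backpropagation of the residual, which is the simplification noted above; the null-space components of gamma are untouched either way and are filled in as the remaining views are cycled.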

    Scientific Background and Detailed Description for Brightness Functional Approach to Phase Aberration Correction with Conjugate Gradient Methods

    [0923] The scientific background to the phase aberration correction based upon the brightness functional is simply that the L.sub.2 norm (functional) of the B-scan image intensity is maximized when the phase shifts (time delays) are such that the image is maximally focused [L. Nock and G. E. Trahey, Phase aberration correction in medical ultrasound using speckle brightness as a quality factor, Journ. Acoustical Society of America, 1989, 85, 1819-1833, herein included as reference].

    [0924] FIG. 27 shows the geometry of a linear B-scan acoustic transducer array illuminating an anatomical region through an aberrating layer of fat. We develop the algorithm here for the linear array for simplicity. Modification for convex, sector scanning arrays etc. is trivial. A region of interest (ROI), selected by the user, is also shown. The image in the ROI is formed by M beams produced by the beamformer hardware. The transducer elements that contribute to the formation of the M beams in the ROI are denoted e.sub.m1 to e.sub.m2.

    [0925] The goal of the algorithm is to focus the image in the ROI by finding a set of time delays applied to the signals from each transducer element e.sub.m1 to e.sub.m2 such that the brightness functional in the ROI (square of the L2 norm of the image intensity, B(t), over the ROI) is maximized:

    [00515] B(t) = Σ_{(n,m)∈ROI} I_{n,m}^2,  maximize B(t) over t

    [0926] where I.sub.n,m is the image intensity at pixel (n,m) and t=(t.sub.m: m=m.sub.1, . . . , m.sub.2) is the vector of transducer element time delays. Our algorithm applies gradient based optimization methods (steepest descent, Fletcher-Reeves conjugate gradients, Polak-Ribière conjugate gradients, etc.) to solve the maximization problem.

    [0927] Clinical B-scanners operate by breaking the image up in range into a number of focal zones. The beamformer then focuses the transmitter and receiver at a focal range equal to the center of the focal zone. Thus, each beam (laterally scanned) has one delay set (one delay for each element contributing to the beam) over the focal zone range. If the ROI is contained entirely within one focal zone (as in FIG. 27), then the delay perturbations for focusing need to be added to only one beamformer delay set per beam. In the case that the ROI overlaps two or more focal zones, the time delay perturbations must be added to the beam delay sets for each focal zone.

    [0928] The only hardware needed to implement this phase aberration correction algorithm is a B-scanner with a computer interface, allowing the image to be read from the B-scanner into the computer memory and allowing the beamformer hardware delays in the B-scanner to be reset from the computer.

    [0929] Nearly all clinical ultrasound scanners use and display the envelope of the RF beam signals as the image intensity. This is done by low pass filtering the modulus of the analytic RF signal or, in some low cost systems, by low pass filtering the rectified video. The enveloping process is often followed by further processing that includes time variable bandwidth filtering to suppress noise and by logarithmic compression to extend dynamic range. We emphasize that proper use of the brightness phase aberration correction algorithm must de-emphasize, or eliminate entirely, the logarithmic compression; otherwise the logarithmically emphasized excess brightness in the side lobes of the point response function will defeat the attempt to focus the image by maximizing the strength of the central lobe of the point response function of the image.
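    As a concrete illustration of the enveloping step, the following NumPy sketch computes the envelope of a simulated RF line via the FFT-based analytic signal. It is a minimal sketch only; the pulse center frequency, Gaussian envelope width, and sampling rate are illustrative assumptions, and no logarithmic compression is applied, per the caveat above.

```python
import numpy as np

def envelope(rf):
    """Envelope of a real RF line via the FFT-based analytic signal.

    Equivalent to |rf + i * Hilbert(rf)|: the negative-frequency half
    of the spectrum is zeroed, the positive half doubled.
    """
    n = len(rf)
    spec = np.fft.fft(rf)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# A 5 MHz pulse with a Gaussian envelope, sampled at 50 MHz
# (hypothetical values for demonstration).
t = np.arange(512) / 50e6
true_env = np.exp(-0.5 * ((t - t.mean()) / 1e-6) ** 2)
rf = true_env * np.cos(2 * np.pi * 5e6 * t)

env = envelope(rf)
print(np.max(np.abs(env - true_env)))  # demodulation error (small for a narrowband pulse)
```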

    [0930] The following steps outline the algorithm for brightness based phase-aberration correction.

    [0931] Choose Δt=the time perturbation to be added to a selected element delay for derivative calculation.

    [0932] Choose the line search time increment, δt, and the number of line search steps, N.sub.s.

    [0933] 1. Load the beamformer with precalculated delays based on 1540 m/s tissue average speed.

    [0934] 2. Acquire the initial image from the B-scanner.

    [0935] 3. Select the ROI (region of interest) comprising one or more transmit and receive beam locations and focal ranges.

    [0936] This selection determines a subset of transducer elements that are used in the image formation of this ROI, e.sub.m1, . . . , e.sub.m2, where m.sub.1 is the first element and m.sub.2 is the last, e.g., m.sub.1, m.sub.2 ∈ {1, . . . , N.sub.tran} for an N.sub.tran element array.

    [0937] Set itertype=SD, FR, RP, or RPP for steepest descent, Fletcher-Reeves, Polak-Ribière, or Polak-Ribière with the Powell modification.

    [0938] Set h.sub.m=0, m=m.sub.1, . . . , m.sub.2, set p.sub.m=0, m=m.sub.1, . . . , m.sub.2.

    [0939] Set r.sub.0=1.

    [0940] Set t.sub.m=0, m=m.sub.1, . . . , m.sub.2 the element time delay perturbation vector.

    [0941] 4. For I=1, 2, . . . :

    [0942] 5. Compute b.sub.0=sum of squares of the image intensity on the ROI.

    [0943] If I=1, b.sub.initial=b.sub.0.

    [0944] 6. For m=m.sub.1, . . . , m.sub.2:

    [0945] 6.a Calculate a new delay set with Δt added to all delays for which element e.sub.m is the transducer element used.

    [0946] 6.b Load the new delay set into the beamformer.

    [0947] 6.c Acquire the new image from the B-scanner.

    [0948] 6.d Compute b.sub.m=sum of squares of the image intensity on the ROI.

    [0949] 6.e g.sub.m=(b.sub.m−b.sub.0)/Δt.

    [0950] 6.f Next m.

    [0950] 6.f Next m.

    [0951] 7. If (itertype=SD) set p.sub.m=g.sub.m, m=m.sub.1, . . . , m.sub.2, go to 13.

    [0952] 8. Compute

    [00516] r_1 = Σ_{m=m_1}^{m_2} g_m^2

    [0953] 9. If (itertype=FR), β=r.sub.1/r.sub.0.

    [0954] 10. If (itertype=RP),

    [00517] β = Σ_{m=m_1}^{m_2} g_m(g_m − h_m)/r_0

    [0955] 11. If (itertype=RPP) then: if β<0, set β=0.

    [0956] 12. Update the search direction and save the gradient in vector h:

    [00518] p_m → g_m + β p_m, m=m_1, . . . , m_2;  h_m → g_m, m=m_1, . . . , m_2;  r_0 → r_1

    [0957] 13. Compute

    [00519] M = max_{m=m_1, . . . , m_2} |p_m|.

    [0958] 14. Line search by Trial and Error. For n=1, . . . , N.sub.s:

    [0959] 14.a f_m = t_m + n δt p_m/M, m=m_1, . . . , m_2

    [0960] (Note that this formula ensures that the maximum time perturbation per element is N.sub.s δt.)

    [0961] 14.b Compute the new beamformer delay set with element time delay perturbation vector f.

    [0962] 14.c Acquire the new B-scan image and compute the brightness bn over the ROI.

    [0963] 14.d Next n.

    [0964] 15. Determine the n for which b.sub.n is maximum, then set: t_m → t_m + n δt p_m/M, m=m_1, . . . , m_2.

    [0965] (Steps 14 through 15 can easily be replaced by a gradient, quadratic, Fibonacci, or other line search for efficiency. The above trial and error method is included for concreteness only.)

    [0966] 16. Load new delays into beamformer with vector t of element time delay perturbations added.

    [0967] 17. Display B-scan image.

    [0968] 18. Check convergence criteria, such as whether the percent change in the brightness functional is less than some small and arbitrary number such as 5%. Do another gradient convergence step? If yes, increase I by 1 and go to step 5.
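    The loop of steps 4 through 18 can be sketched in simulation. In the following NumPy fragment, a synthetic brightness function peaked at a hypothetical set of aberrating delays stands in for "acquire an image and sum squared intensity over the ROI"; the 8-element array, the step sizes, and the steepest-descent (itertype=SD) branch are illustrative assumptions, not values from the specification.

```python
import numpy as np

# Stand-in for the scanner: a synthetic brightness functional that
# peaks when the delay perturbations t cancel the aberrating delays.
rng = np.random.default_rng(1)
t_aberr = rng.uniform(-0.5, 0.5, size=8)   # hypothetical layer delays

def brightness(t):
    return 10.0 / (1.0 + np.sum((t - t_aberr) ** 2))

dt_grad = 1e-4        # derivative perturbation (step "Choose dt")
dt_line = 0.05        # line search time increment
n_search = 10         # number of line search steps N_s

t = np.zeros(8)       # element delay perturbations, initially zero
for it in range(50):
    b0 = brightness(t)
    # Step 6: one-sided finite-difference gradient, one element at a time.
    g = np.array([(brightness(t + dt_grad * e) - b0) / dt_grad
                  for e in np.eye(8)])
    p = g                                  # step 7: steepest descent
    M = np.max(np.abs(p))                  # step 13
    # Steps 14-15: trial-and-error line search along p.
    trials = [t + n * dt_line * p / M for n in range(1, n_search + 1)]
    best = max(trials, key=brightness)
    if brightness(best) <= b0:
        break                              # step 18: no further gain
    t = best

print(float(np.max(np.abs(t - t_aberr))))  # residual delay error
```

    The conjugate-gradient branches (FR, RP, RPP) would replace the line p = g with the β-weighted update of step 12.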

    [0969] Clearly, computing the gradient by perturbation of each time delay in sequence is only one of many ways, and is straightforward. It is possible to use a basis set for the gradient based upon the singular value decomposition of the Hessian of the brightness functional (which is closely related to the Jacobian of the brightness vector, i.e., the vector whose modulus squared is the brightness functional).

    [0970] The number of significant singular vectors will generally be somewhat less than m.sub.2-m.sub.1, so that using the singular vectors as a basis for finding the gradient will in general be much more efficient.

    Scientific Background for Imaging with Diffusion Equation Models

    [0971] Electric Conductivity Imaging by Frequency Domain, Nonlinear Inversion (the method we use for wave equation inversion, now modified for the Diffusion Equation)

    [0972] We start with the receiver and media diffusion equations, modified to eliminate the electric field E internal to the image grid, which then becomes a nonlinear expression for the conductivity γ (or its reciprocal, resistivity) in terms of the incident field and the measured field at a fixed frequency ω. Define the residual field R (note the frequency, source, and receiver indices of R are suppressed, but understood to be active) by the standard formula:

    [00520] R = E_m − E_b − D[γ](I − C[γ])^{−1} E_b = 0

    [0973] Here, E.sub.m(r) is the measured electric field, E.sub.s=E.sub.m−E.sub.b is the measured scattered electric field, and E.sub.b(r) is the incident field, or response in the (homogeneous) background medium.

    [0974] It should be noted that the definitions provided in this section are isomorphic to the wave equation definitions given in the previous examples and only the Green's functions have changed (but these are available from Morse and Feshbach, Methods of Theoretical Physics, Vols. 1 and 2, McGraw-Hill).

    [0975] This provides the correct value for the measured electric field E.sub.m when substituting the correct values for γ and E.sub.b and on setting R=0. We solve for γ by finding the γ that minimizes R, by minimizing the objective functional

    [00521] F(γ) = (1/2)‖E_m − E_b − D[γ](I − C[γ])^{−1} E_b‖^2 = (1/2)‖R‖^2

    [0976] In general, we can define the norm of the residual vector R to be general enough to weight each frequency component of F(γ) to increase the convergence rate in some cases:

    [00522] F(γ) = (1/2)‖R‖^2 = Σ_{frequencies ω} Σ_{sources s} Σ_{receivers m} |W_ω R_{ωsm}|^2

    [0977] Define the Jacobian of the residual R with respect to γ by

    [00523] (∂/∂γ)R = −D(I − [γ]C)^{−1}[E]

    [0978] Then the Gauss-Newton algorithm for finding the γ that minimizes F(γ) is isomorphic to the earlier wave equation case, and is given by: [0979] (a) Select an initial guess γ.sup.(0). Set n=0. [0980] (b) Solve the forward problem for E.sup.(n) by use of the biconjugate gradient (BCG) or stabilized biconjugate gradient (BiSTAB) algorithms.

    [00524] E^(n) = (I − C[γ^(n)])^{−1} E_b

    [0981] (c) Compute the receiver residuals

    [00525] R^(n) = E_m − E_b − D[γ^(n)](I − C[γ^(n)])^{−1} E_b

    [0982] (d) For some small number ε on the order of the noise to signal ratio,

    [0983] if

    [00526] ‖R^(n)‖/‖E_m‖ < ε,

    [0984] then stop. Else

    [0985] (e) Minimize

    [00527] ‖(∂/∂γ)R^(n) δγ^(n) + R^(n)‖^2

    [0986] for δγ^(n) by using the conjugate gradient algorithm.

    [0987] (f) Update γ by γ.sup.(n+1)=γ.sup.(n)+δγ.sup.(n).

    [0988] (g) Set n=n+1, go to (b).
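    Steps (a) through (g) can be exercised on a toy discrete problem. The NumPy sketch below is illustrative only: the random operators D and C, the problem sizes, and the dense least-squares solve standing in for the conjugate gradient step (e) are assumptions for demonstration, not the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 16                                # grid unknowns, receiver samples
C = 0.1 * rng.standard_normal((n, n))       # hypothetical internal operator C
D = rng.standard_normal((m, n))             # hypothetical receiver operator D
E_b = rng.standard_normal(n)                # incident field on the grid
gamma_true = 0.1 * rng.standard_normal(n)   # "true" object function

def scattered(gamma):
    """Forward solve E = (I - C[gamma])^{-1} E_b, then D[gamma]E."""
    E = np.linalg.solve(np.eye(n) - C * gamma, E_b)   # step (b)
    return D @ (gamma * E), E

E_s, _ = scattered(gamma_true)              # synthetic measured E_m - E_b

gamma = np.zeros(n)                         # step (a): initial guess
for _ in range(10):
    pred, E = scattered(gamma)
    R = E_s - pred                          # step (c): receiver residuals
    if np.linalg.norm(R) / np.linalg.norm(E_s) < 1e-12:
        break                               # step (d): converged
    # Jacobian (d/dgamma)R = -D (I - [gamma]C)^{-1} [E]
    J = -D @ np.linalg.solve(np.eye(n) - gamma[:, None] * C, np.diag(E))
    dgamma = np.linalg.lstsq(J, -R, rcond=None)[0]    # step (e)
    gamma = gamma + dgamma                  # step (f); step (g) loops

print(np.linalg.norm(gamma - gamma_true))   # reconstruction error
```

    Because the synthetic data are consistent with the model, the zero-residual Gauss-Newton iteration converges rapidly; with real, noisy data the stopping test of step (d) terminates the loop earlier.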

    [0989] We note that for many problems the Jacobian can be approximated by the first two terms of the Neumann series

    [00528] (∂/∂γ)R ≈ −D(I + [γ]C)[E]

    [0990] This is like a Born approximation to the Jacobian, but it is not the Born approximation to the scattering equations. We find that this Jacobian has a larger radius of convergence than the Born approximation itself.

    [0991] This method tends to be self-regularizing if the iterations are halted when the changes in

    [00529] ‖R^(n)‖/‖E_m‖

    [0992] become small relative to their early changes. Other regularization methods may also be used, such as the squaring method described below:

    Scientific Background on Squaring an Overdetermined System to Apply BiSTAB:

    [0993] The basic algorithm for reconstruction of γ is reviewed below. We start with the receiver and media propagation equations, modified to eliminate E, which then becomes a nonlinear expression for the object function γ in terms of the incident field and the measured field at a fixed frequency ω. Define the residual field R (note the frequency, source, and receiver indices of R are suppressed) by

    [00530] R = (E_m − E_b) − D[γ](I − C[γ])^{−1} E_b = 0

    [0994] Here, E.sub.m(r) is the measured field, E.sub.s=E.sub.m−E.sub.b is the measured scattered field, and E.sub.b(r) is the incident field, or response in the (homogeneous) background medium. [γ] is the diagonal matrix formed from the vector γ.

    [0995] This provides the correct value for the measured field E.sub.m when substituting the correct values for γ and E.sub.b and on setting R=0. We solve for γ by finding the γ that minimizes R, by minimizing the objective functional

    [00531] F(γ) = (1/2)‖E_m − E_b − D[γ](I − C[γ])^{−1} E_b‖^2 = (1/2)‖R‖^2

    [0996] In general, we can define the norm of R to be general enough to weight each frequency component of F(γ) to increase the convergence rate in some cases:

    [00532] F(γ) = (1/2)‖R‖^2 = Σ_{frequencies ω} Σ_{sources s} Σ_{receivers m} |W_ω R_{ωsm}|^2

    [0997] Define the Jacobian of the residual R with respect to γ by

    [00533] (∂/∂γ)R = −D(I − [γ]C)^{−1}[E]

    [0998] Then the Gauss-Newton algorithm for finding the γ that minimizes F(γ) is given by: [0999] (a) Select an initial guess γ.sup.(0). Set n=0. [1000] (b) Solve the forward problem for E.sup.(n) by use of the biconjugate gradient (BCG) or stabilized biconjugate gradient (BiSTAB) algorithms.

    [00534] E^(n) = (I − C[γ^(n)])^{−1} E_b [1001] (c) Compute the receiver residuals

    [00535] R^(n) = E_m − E_b − D[γ^(n)](I − C[γ^(n)])^{−1} E_b [1002] (d) For some small number ε [1003] on the order of the noise to signal ratio, [1004] if

    [00536] ‖R^(n)‖/‖E_m‖ < ε, [1005] then stop. Else [1006] (e) Minimize

    [00537] ‖(∂/∂γ)R^(n) δγ^(n) + R^(n)‖^2 [1007] for δγ^(n) by using the conjugate gradient algorithm. [1008] (f) Update γ by γ.sup.(n+1)=γ.sup.(n)+δγ.sup.(n). [1009] (g) Set n=n+1, go to (b).

    [1010] We note that for many problems the Jacobian can be approximated by the first two terms of the Neumann series

    [00538] (∂/∂γ)R ≈ −D(I + [γ]C)[E]

    [1011] This is like a Born approximation to the Jacobian, but it is not the Born approximation to the scattering equations. We have found that this algorithm with approximate Jacobian has a larger radius of convergence than the Born approximation per se.

    [1012] This method tends to be self regularizing if the iterations are halted when the changes in

    [00539] ‖R^(n)‖/‖E_m‖

    [1013] become small relative to their early changes. Other regularization methods may also be used such as Singular Value Decomposition based methods. The difficulty with this algorithm is that it is based upon using some form of general conjugate gradients to solve an overdetermined system. The way to avoid this bottleneck is to create a square system from the overdetermined system and use BiSTAB on the square system. This can be done in several ways when the system is linear.

    [1014] Investigation of a new inversion method based on finding square Jacobian.

    [1015] It is known that solving the rectangular, overdetermined, linear system Ax=b can be accomplished by multiplying both sides of the equation by the adjoint A^H, thereby giving the square system (A^H A)x=(A^H b). This system is called the normal system and the corresponding equations the normal equations. The solution is the least squares solution to the original overdetermined system. However, the new system has singular values that are the squares of the singular values of the original system (i.e., A^H A is ill conditioned). This means the conjugate gradient (CG) method, one of the most efficient of all methods, converges slowly in solving the normal equations. Although CG methods for solving the overdetermined system directly exist, they essentially form the product A^H A implicitly. Thus, the speed advantages of CG are compromised. The system Ax=b in our example represents the system

    [00540] (∂/∂γ)R^(n) δγ^(n) + R^(n) = 0

    [1016] i.e., equation 4.2, 18(e), where (∂/∂γ)R^(n) is the Jacobian J^(n).

    [1017] We note J^(n)H J^(n) is square but ill conditioned. In our example Ax=b, let x=By and let AB be square. Then on substitution, ABy=b. Now, y can be solved for by BiCG and then x is found by a simple multiplication by B. Further, suppose that B can be chosen so that y≈b; then AB≈I, where I is the square identity matrix. In this case, solving ABy=b is very well conditioned and BiCG should converge in only a few iterations. We next generalize A and B to be nonlinear operators (not matrices). Then the Jacobian of the operator AB is square. We also may choose B to be an improved generalized Born approximation, the Born approximation, or backpropagation. These methods are discussed in the detailed description section below in conjunction with FIGS. 28 and 29.

    General Discussion of Conditioning

    [1018] Suppose that the goal is to find

    [00541] min_t ‖At − r‖^2,

    where A is an m by n matrix (m<n), t is an n-dimensional vector, and r is an m-dimensional residual vector. By making a change of variables Bs=t, from t to s, where B is n by m, the system to be solved is now

    [00542] ABs = r

    [1019] and AB is a square system. Therefore the biconjugate gradient (BiCG) method or BiCG squared algorithm can be applied to this system. BiCG squared has much better convergence behavior than standard conjugate gradient methods for non-square systems. Consequently the convergence to a solution will generally be much quicker, whereupon t̂=Bŝ computes the answer to the original problem. Due to the position of the conditioning matrix B, after the original coefficient matrix A, this procedure is sometimes referred to as postconditioning.
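    The special case B=A^T can be demonstrated in a few lines. In the NumPy sketch below, a dense solve stands in for the BiCG iteration (an assumption for brevity; the sizes and random data are likewise illustrative), and the recovered t̂=Bŝ is checked against the minimum norm solution.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 15                        # underdetermined: m < n
A = rng.standard_normal((m, n))
r = rng.standard_normal(m)

# Postcondition with B = A^T: AB = A A^T is square (m x m) and
# symmetric positive definite, so a Krylov solver (a dense solve
# stands in for BiCG here) converges quickly.
B = A.T
s = np.linalg.solve(A @ B, r)       # solve the square system (AB)s = r
t_hat = B @ s                       # recover the original unknown

# t_hat satisfies A t = r and is the minimum-norm such solution.
print(np.allclose(A @ t_hat, r))
print(np.allclose(t_hat, np.linalg.pinv(A) @ r))
```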

    [1020] Two special cases:

    [1021] 1. B=A.sup.T, in which case s is the minimum norm solution to an underdetermined problem

    [00543] A(γ) = ⊕_ω G^sc[γ](I − G[γ])^{−1} f^inc,

    [1022] where ⊕.sub.ω indicates the direct sum in the sense of vector spaces over all frequencies, i.e., concatenation.

    [1023] In fact, for operators A.sub.j, which act on vectors v.sub.j: A.sub.jv.sub.j, the operator

    [00544] [⊕_j A_j] = diag(A_1, A_2, . . . , A_N)

    [1024] acts on the total vector

    [00545] [⊕_j v_j] = (v_1, v_2, . . . , v_N)^T to yield: [⊕_j A_j][⊕_j v_j] = (A_1v_1, A_2v_2, . . . , A_Nv_N)^T

    [1025] Of course, this is a nonlinear operator, and the corresponding Jacobian is

    [00546] ∂(AB)/∂g = A′(γ)(∂(Bg)/∂g) = (by linearity) A′(γ)B   (1)

    [1026] This Jacobian is evaluated at some iterated guess, and is an object that is used in the BiCG minimization routine. B in this case is the Born or Rytov reconstruction.

    [1027] The general procedure for the solution of the Jacobian equation:

    [00547] A′(γ)B g^sc = −r^(n)

    [1028] then involves the application of said Jacobian (1), and its conjugate transpose:

    [00548] B^H A′(γ)^H r^(n)

    [1029] In the above formula, g.sup.sc is the scattered field to which the Born reconstruction procedure is applied to yield γ,

    [00549] γ = Bg^sc

    [1030] where the standard Born approximation is utilized:

    [00550] g^sc(x) = ∫ G(x, x′) γ(x′) f^inc(x′) dx′

    [1031] which upon discretization reads:

    [00551] g^sc = G[f^inc]γ

    [1032] where the standard notation has been used for [f^inc], the diagonal matrix whose diagonal elements are the components of the vector f.sup.inc:

    [00552] [f^inc] ≡ diag(f_1^inc, f_2^inc, . . . , f_N^inc)

    [1033] Consequently the operator B can be written explicitly:

    [00553] B = [1/f^inc] G^{−1}

    [1034] although it is important to realize that the actual inversion is never carried out; rather, the standard Born reconstruction based upon the Fourier Diffraction Theorem is used, for computational efficiency.
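    The discretized Born relations above can be checked directly. The NumPy sketch below is illustrative only: G, f^inc, and γ are small random stand-ins, and B is applied by a linear solve rather than by the Fourier Diffraction Theorem reconstruction the text actually uses (and, as the text notes, G^{-1} is never formed explicitly).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12
G = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # hypothetical Green's matrix
f_inc = 1.0 + 0.2 * rng.standard_normal(n)          # incident field (nonzero)
gamma = 0.1 * rng.standard_normal(n)                # object function

# Discretized Born approximation: g_sc = G [f_inc] gamma
g_sc = G @ (f_inc * gamma)

# B = [1/f_inc] G^{-1}, applied without ever forming G^{-1}:
gamma_rec = np.linalg.solve(G, g_sc) / f_inc

print(np.allclose(gamma_rec, gamma))
```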

    Scientific Background for Calibration Techniques

    [1035] The idea behind the use of nonlinear optimization for calibration is to collect scattering data from several known phantoms. A computational model of the scattering process, functionally dependent on the unknown calibration parameters, is then varied with respect to these parameters until a match with the data is obtained. If the objects are sufficiently well known and sufficient in number and location, then the resulting match should provide an accurate estimate of the unknown calibration parameters.

    Generic Gauss-Newton Minimization Algorithm for Calibration:

    [1036] Suppose that a dataset, f̂.sup.s and f̂.sup.i, of scattered and incident fields is measured for several view angles θ, receiver positions ρ, and known test phantoms g. A typical phantom used in practice is an agar cylinder. Note that the incident measurements f̂.sup.i are not dependent on the test object.

    [1037] Assume further that a computational model for the scattering problem exists which predicts the measured data, given such calibration parameters as position, radius, and speed of sound in the agar cylinder, and position and orientation of the receiver array (e.g., the orientation defined by Euler angles with respect to a selected fixed coordinate system). This might be a general forward solver such as an integral equation solution or an FDTD algorithm. A simpler, less computationally involved method, in the event that the test phantoms are cylinders or collections of cylinders, is the use of the Bessel series analytic solution. Let this computational solution be denoted:

    [00554] f^s(C), f^i(C)

    [1038] for the M scattering parameters in the vector C:

    [00555] C ≡ (c_1, c_2, . . . , c_M)^T

    [1039] The optimization approach is to minimize the L.sub.2 norm of the mismatch between the theoretically predicted fields f.sup.s(C), f.sup.i(C) and the experimentally measured fields f̂.sup.s, f̂.sup.i by adjusting the values of the scattering parameters in vector C:

    [00556] min_C F = min_C [Σ|f^s(C) − f̂^s|^2 + Σ|f^i(C) − f̂^i|^2]

    [1040] It is also possible and desirable to enforce bounds on the scattering parameters:

    [00557] L_j ≤ c_j ≤ U_j, j=1, . . . , M

    [1041] Minimization of the functional is computed by first explicitly deriving the elements of the Jacobian for the scattering equations:

    [00558] ∂f^s(c)/∂c_m, ∂f^i(c)/∂c_m

    [1042] The Gauss-Newton method for the minimization of the functional

    [00559] F = Σ|f^s(c) − f̂^s|^2 + Σ|f^i(c) − f̂^i|^2

    [1043] is the following

    [1044] Step 1: Choose an initial guess for the parameter vector, C.sub.0, and put n=0.

    [1045] Step 2: Form the residual vectors

    [00560] r_n^s = f_n^s − f̂_n^s,  r_n^i = f_n^i − f̂_n^i

    [1046] where the vectors used in the definition of the residuals are the predicted values f.sup.s(c.sub.n), f.sup.i(c.sub.n) and the measured values f̂.sup.s, f̂.sup.i, each stacked over the view angles and receiver positions:

    [00561] f_n^s = (. . . f^s(c_n) . . .),  f̂_n^s = (. . . f̂^s . . .)

    [1047] Step 3: Form the Jacobian matrices:

    [00562] [∂f^s(c_n)/∂c_m; ∂f^i(c_n)/∂c_m] = [J_n^s(c_n); J_n^i(c_n)]

    [1048] and solve the system:

    [00563] [J_n^s(c_n); J_n^i(c_n)] ΔC_n = −[r_n^s; r_n^i]

    Step 4: Update the parameter values:

    [00564] C_{n+1} = C_n + ΔC_n

    [1049] and update n: n→n+1, and go to Step 2 if a convergence criterion has not been met.
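    Steps 1 through 4 can be sketched on a toy calibration problem. In the NumPy fragment below, a two-parameter exponential stands in for the scattering model f(C) (an illustrative assumption; the text's model is the Bessel-series cylinder solution), and the Jacobian is taken by finite differences rather than the analytic derivatives derived above.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 40)

def model(c):
    """Hypothetical stand-in for the scattering model f(C):
    amplitude c[0] and decay c[1] as the calibration parameters."""
    return c[0] * np.exp(-c[1] * x)

c_true = np.array([1.3, 0.7])
f_meas = model(c_true)                       # simulated measured fields

c = np.array([1.0, 1.0])                     # Step 1: initial guess
for _ in range(20):
    r = model(c) - f_meas                    # Step 2: residual vector
    # Step 3: Jacobian df/dc_m by finite differences (a generic
    # numerical stand-in for the analytic Jacobian of the text).
    eps = 1e-7
    J = np.column_stack([(model(c + eps * e) - model(c)) / eps
                         for e in np.eye(2)])
    dc = np.linalg.lstsq(J, -r, rcond=None)[0]
    c = c + dc                               # Step 4: update parameters
    if np.linalg.norm(dc) < 1e-10:
        break                                # convergence criterion

print(c)
```

    Bound constraints L_j ≤ c_j ≤ U_j could be enforced by clipping the update or by the constrained methods listed below.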

    [1050] Modification of this algorithm to incorporate equality constraints can be achieved by the method of Lagrange multipliers.

    [1051] Inequality constraints can be included by the use of slack variables, Sequential Quadratic Programming, Projected Gradient, Augmented Lagrangian or Generalized Reduced Gradient methods. Of course Penalty function methods can also be employed in theory, but straightforward application of this procedure is generally ill-posed in the numerical sense. Enforcement of these constraints tends to be very important for the present problem. For instance, receiver positions might be constrained to movements less than the assumed precision in their physical measurement (these physical measurements determine the starting values, c.sub.0).

    Modifications

    [1052] Perhaps the most essential modification to the basic Gauss-Newton algorithm above, for the present problem, is the incorporation of rescaling and truncation operators. An example of rescaling would be the case where one of the unknowns is an angle, α (linear array angle offset). In order to give this unknown angle the same weight as an unknown distance, we solve for the product αr, where r is a relevant distance which may be a function of receiver position. A need for truncation is illustrated by the fact that multiple test phantoms are needed to fully test the receiver transducer pattern. For a given phantom, only a certain range of receiver pattern parameters is tested. Also, the sensitivity of each receiver position to a given pattern parameter can be estimated geometrically. This information can be incorporated into the Jacobian with a combination of rescaling (emphasizing some receiver values and deemphasizing others) and truncation operators (removal of data not affected by a given parameter). Rescaling and truncation improve the conditioning of the Jacobian and avoid the emphasis of noise in data not relevant to a given parameter.

    Description of Calibration Procedure for Acoustic Imaging

    [1053] This section gives a specific example of how the calibration procedure is employed for a particularly simple system as employed at TechniScan Inc. The example can easily be applied to electromagnetic waves with the appropriate changes in notation.

    [1054] In order to obtain a well calibrated inverse scattering image, the acoustic pressure field due to the transmitter and the sensitivity field of the receiver (equal to the pressure field produced by the receiver acting as a transmitter) must be known. Presently available transducers (PZT and PVDF) suffer from piezoelectric parameter variations, preventing the construction of a transducer with a field pattern prescribed to within better than 10%. Thus, transducer field measurements are needed to calibrate the transducer fields prior to (or concurrent with) optimization for the target acoustic parameters.

    [1055] The inverse scattering algorithm also requires accurate measurement of geometrical parameters such as the physical location of the transmitter and receiver transducers relative to the region to be imaged. As a concrete example, suppose that multiple view data for the imaging of an object is obtained by a single receiver mechanically scanned to form a linear receiver array, a fixed transmitter location, and a rotated object holder: See FIG. 46 for a picture of the relevant geometry.

    [1056] In FIG. 46, the object to be imaged is assumed to be fixed in the primed coordinates. The unprimed coordinates are defined by segment 1 connecting the transmitter center to the rotation axis of the object. d.sub.t is the unknown length of segment 1, b.sub.t is the unknown transmitter misalignment angle, q.sub.m, m=1, . . . , M is the rotation angle of the object, d.sub.r is the unknown distance from the rotation center to the receiver scan line (in the direction of segment 1), a is the unknown misalignment angle of the receiver scan line with respect to the normal to segment 1, b.sub.r is the unknown misalignment angle of the receiver with respect to the scan line, D is the known receiver scan increment, and y.sub.n, n=−N, . . . , N is the distance from the intersection of segment 1 with the receiver scan line to the nth receiver position, which is known if y.sub.0=y.sub.s and D are known. We assume that the rotation increment, dq, is known. The exact placement of the object to be imaged in the primed coordinate system is not needed since its absolute shift and rotation will be imaged. The 6 geometrical unknowns are thus: d.sub.t, d.sub.r, b.sub.t, b.sub.r, a, y.sub.s. In this 2D example, the third dimension is ignored. In a real experiment, displacements and angle offsets in this dimension are also unknown. However, a basic assumption of our present approach is that, even if fields with 3D variation are produced by the transmitter and receiver, if the object is 2D (no variation in the third dimension) then there exist 2D transducers which will produce the same eikonal (total field measurement divided by the incident field measurement) data.

    Calibration by Optimization

    [1057] In order to determine the transducer fields and geometry of the experiment, optimization techniques are used. Consider the case where the transducer fields are known and it is desired to calibrate for the geometry. Assume that a calibration phantom consisting of a saline filled plastic circular cylinder is placed in the sample holder. Assume that the thickness of the plastic is known but the exact speed of sound in the saline is unknown. Assume also that the cylinder is centered precisely on the rotation axis so that only one view is needed (if this is not so, one could rotate the object holder and include x, y displacements of the cylinder axis with respect to the rotation axis as unknowns). Adding the unknown scattering potentials of the saline and plastic to the list of unknowns gives the 8 values d.sub.t, d.sub.r, b.sub.t, b.sub.r, a, y.sub.s, g.sub.s, g.sub.p to be determined. As a further simplification, let us take the transducers to be line sources which are perfectly collinear with the cylinder axis. For these omnidirectional transducers, there are no b.sub.t, b.sub.r unknowns.

    [1058] As a simulation example, data was generated by the analytic cylinder solution for the following parameter values: [1059] frequency=312.5 kHz [1060] C.sub.0=1485 m/s (speed of sound in water) [1061] Outer diameter of plastic cylinder=1 [1062] Thickness of plastic=0.73 mm [1063] Number of receiver positions=128 [1064] Receiver sample interval, D=1.195 mm (y.sub.n=(n−64.5)D+y.sub.s)

    [00565] g_s = −0.12, g_p = −0.5, d_r = 5 cm, a = 3 degrees, y_s = 3 mm, d_t = 15.0 cm

    [1065] The data used to optimize for the parameters is the log of the eikonal:

    [00566] data_n = ln(f_n/f_n^i) = ln(|f_n/f_n^i|) + i Phase(f_n/f_n^i)

    [1066] The optimization functional is then:

    [00567] F(g_s, g_p, a, y_s, d_r, d_t) = Σ_n |ln(f_n/f_n^i)(g_s, g_p, a, y_s, d_r, d_t) − data_n|^2
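    The log-eikonal data term can be illustrated in a few lines. The NumPy fragment below uses random complex stand-ins for the measured total and incident fields; note that np.log of a complex array returns the principal branch, so the imaginary part is the wrapped phase (in practice the phase would typically be unwrapped across receivers).

```python
import numpy as np

rng = np.random.default_rng(5)
f  = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # total fields
fi = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # incident fields

# data_n = ln(f_n / f_n^i) = ln|f_n / f_n^i| + i * Phase(f_n / f_n^i)
data = np.log(f / fi)

# The identity above, checked term by term:
print(np.allclose(data.real, np.log(np.abs(f / fi))))
print(np.allclose(data.imag, np.angle(f / fi)))
```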

    [1067] The starting guess for the optimization is:

    [00568] g_s = −0.1, g_p = −0.4, d_r = 4.5 cm, a = 0.0 degrees, y_s = 0.0 mm, d_t = 15.5 cm

    [1068] Computing the data for these two parameter sets gives the comparison shown in FIGS. 31 and 32.

    [1069] Solution after 3 Polak-Ribière steps applied to the minimization of the functional gives:

    TABLE-US-00005
    For 3 P-R steps: [00569] Normalized residual F(final solution)/Σ_n|data_n|^2 = 0.1016598; [00570] Normalized gradient ‖∇F(final solution)‖/‖∇F(starting guess)‖ = 0.2283959; g.sub.s = −0.116789 (2.68% error); g.sub.p = −0.59277 (18.6% error); d.sub.r = 4.756 cm (4.9% error); a = 2.29916 degrees (23.4% error); y.sub.s = 2.18673 mm (27.1% error); d.sub.t = 15.0 cm (0% error)
    For 5 P-R steps: Normalized residual = 5.11123E−02; Normalized gradient = 0.122296; g.sub.s = −0.117978 (1.75% error); g.sub.p = −0.57 (14.0% error); d.sub.r = 4.87 cm (2.6% error); a = 3.08706 degrees (2.9% error); y.sub.s = 2.92 mm (2.7% error); d.sub.t = 15.0 cm (0% error)
    For 9 P-R steps: Normalized residual = 1.01880E−02; Normalized gradient = 1.22823E−02; g.sub.s = −0.118573 (1.17% error); g.sub.p = −0.518917 (3.78% error); d.sub.r = 5.0 cm (0% error); a = 3.08706 degrees (2.9% error); y.sub.s = 2.96672 mm (1.1% error); d.sub.t = 15.0 cm (0% error)

    Overview of Calibration Steps for Acoustic 2D Geometry

    [1070] First, the incident field, the scattered field, and the total field corresponding to FIG. 30 are read into memory.

    Step 1:

    [1071] Creation of the receiver model. For this purpose, a procedure is carried out whereby the receiver is rotated in its center position with the transmitter at some fixed position. The receiver is rotated at equi-angular intervals from -π/2 to π/2. For the 2D model, the receiver is modelled as 51 weighted line receivers separated by some fixed distance d_rec, as shown in FIGS. 33 and 34.

    Step 2:

    [1072] Next, a transmitter model is constructed, consisting of an infinitely long, infinite-impedance cylinder with a source function on its surface. The equivalent-source surface of the transmitter model is discretized by an np = 31 point spline function, interpolated to n > np points for the computation of the incident field.

    [1073] The np node values on the surface of the non-physical infinite-impedance cylinder are the unknowns which are optimized to give a best fit, in the least-squares sense, to the incident field as measured on the receiver array, as shown in FIG. 47.
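Because the incident field depends linearly on the equivalent-source node values, this fit reduces to a linear least-squares solve. A minimal NumPy sketch, where the matrix M (receivers by nodes) is an assumed stand-in for the map that the impedance-cylinder and receiver models would provide:

```python
import numpy as np

def fit_source_nodes(M, measured_incident):
    """Solve for the np = 31 node values on the equivalent-source surface
    that best fit the measured incident field in the least-squares sense.

    M is an assumed linear map (n_receivers x np) from spline node values
    to the incident field sampled on the receiver array."""
    nodes, *_ = np.linalg.lstsq(M, measured_incident, rcond=None)
    return nodes

# hypothetical example: 64 receiver samples, 31 spline nodes
rng = np.random.default_rng(2)
M = rng.standard_normal((64, 31))
true_nodes = rng.standard_normal(31)
recovered = fit_source_nodes(M, M @ true_nodes)
```

With more receiver samples than nodes, the overdetermined system pins down the node values uniquely whenever M has full column rank.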

    Step 3:

    [1074] The next step is the use of the incident field generated by this impedance cylinder model, together with exact Bessel solution models for scattering from a cylinder, to create a scattered field, which is matched to the measured scattered field using the receiver model calculated above. The setup is as shown in FIG. 35.

    Software Descriptions

    [1075] Having described some examples in the previous section which illustrate mathematical and physical models which may be used to model scattered wavefield energy and to process data so as to reconstruct an image of material parameters using inverse scattering theory, reference is next made to FIG. 36.

    [1076] This figure schematically illustrates a flow diagram for a computing system carrying out an imaging process. The flow diagrams provided herein are merely illustrative of one implementation using the scalar Helmholtz wave equation or vector wave equation or similar wave equation for modeling scattered wavefield energy as described in Example 1.

    [1077] As shown in FIG. 36, and as schematically illustrated in step 3602, for each transmitter from which a signal is sent, the system causes a receiver multiplexer to sequence through each receiver array so as to detect the scattered wavefield energy at each receiver position.

    [1078] Once the scattered wavefield energy has been detected, processed, and digitized, the system next determines the incident field f_ωφ^inc, as indicated at step 3604. The incident field is determined from the wavefield energy (e.g., frequency and amplitude components of the transmitted waveform) and the transducer geometry from which the incident field is transmitted. The process of determining this incident field may involve direct measurement of the incident field or calculation of the incident field using the appropriate Green's function, whether it be Biot, elastic, electromagnetic, scalar, vector, free space, or some combination or permutation of these. The incident field is determined at each pixel in the imaged space for each source position φ = 1, . . . , Φ and for each frequency ω, which varies from 1 to Ω. In step 3606, the scattered wavefield is measured, recorded, and stored at the wavefield transducer receivers for each frequency ω, varying from 1 to Ω, each source position φ = 1, . . . , Φ, and each pixel element at each wavefield transducer receiver position. In step 3608, the initial estimate for the scattering potential γ can be set either at 0 or at an average value derived from the average material parameters estimated a priori for the object. When applicable, a lower resolution image of γ, such as produced by a time-of-flight method as described in U.S. Pat. No. 4,105,018 (Johnson), may also be used to advantage (in the acoustic case) to help minimize the number of iterations. Similar approximations for other forms of wavefield energy may be used to advantage to help minimize the number of iterations.

    [1079] The system next determines in step 3610 an estimate of the internal fields for each pixel in the image space, for each frequency ω, and for each source position φ. This estimate is derived from the incident field and the estimated scattering potential. This step corresponds to step 3622 in FIG. 37A as well as to step 3610 in FIG. 36.

    [1080] For a given and fixed frequency ω and fixed source position φ, the forward problem consists of using equation 1 or 17 (provided in Example 1) to determine the total field f_ωφ, given the scattering potential γ, the incident field f_ωφ^inc, and the appropriate Green's function. This Green's function could be the scalar Green's function, which is essentially the Hankel function of the second kind of zero order; the elastic Green's function, which is derived from the scalar Helmholtz Green's function; the Biot Green's function; or the electromagnetic vector or scalar Green's function, as well as a layered Green's function derived from one of the preceding Green's functions in the manner described above. The process whereby this is done is elucidated in FIGS. 39A and 39B.

    [1081] Before applying this algorithm, equation 1 or 17 is discretized so that it may be input into a processor. This process is described in detail in the Examples. The number of frequencies that are used in the inverse scattering problem is a function of the source receiver geometry, among other things.

    [1082] All forward problems for all possible frequencies and all possible source positions can be solved, for example, for the two-dimensional case or for the three-dimensional case, whichever is relevant. FIG. 38 gives in detail the procedure for solving for the internal fields at all possible source positions and all possible frequencies. From step 3622 in FIG. 37A we enter step 3840 of FIG. 38. In step 3840 of FIG. 38, all of the files that contain the Green's functions for all possible frequencies, and the files that contain the incident fields for all possible frequencies and all possible source positions, are rewound. In step 3842 of FIG. 38, ω is set equal to 1, i.e., we indicate that we are about to calculate the forward problem for the first frequency. In step 3844 of FIG. 38, the Green's function for this particular frequency is read into memory. In step 3846 of FIG. 38, the source position index φ is set equal to 1, indicating that we are solving the forward problem for the first source position. In step 3848, the incident field for this particular source position φ and this particular frequency ω is read into memory. In step 3850, the forward problem for this particular frequency and this particular source position is solved using either biconjugate gradients (BCG) or BiSTAB. This process is detailed in FIGS. 39A and 39B for the biconjugate gradient or BCG algorithm and in FIGS. 40A and 40B for the BiSTAB or biconjugate gradient stabilized algorithm. In FIG. 39A, in step 3950, which follows step 3850 in FIG. 38, the predetermined maximum number of biconjugate gradient steps and the tolerance ε are read into memory, and the forward problem, which in discrete form reads (I - G_ω γ)f_ωφ = f_ωφ^inc, is solved. For purposes of generality, we consider the BCG or biconjugate gradient algorithm applied to the system Ax = y, where A is a square matrix and x and y are vectors of suitable dimension.
Step 3952 consists of reading in an initial guess x⁰ and calculating an initial residual, obtained by applying A to the initial guess x⁰ and subtracting the result from the right-hand side y. In step 3954, the initial search direction p⁰, bi-search direction q, and bi-residual s are calculated and stored. In step 3956, we set n equal to 0. In step 3958, we calculate the step length α_n by taking the inner product of the two vectors s and r, the bi-residual and residual respectively, and dividing by the inner product of two vectors: 1) q, the bi-search direction, and 2) the result of the application of the operator A to the present search direction p. In step 3960 of FIG. 39A, the guess for x is updated as per the equation shown there. In step 3962, the residual r is updated as per the formula given, and the bi-residual s is updated as per the formula given. (H indicates Hermitian conjugate.) In particular, the updated bi-residual is given by the previous bi-residual sⁿ minus the quantity α*_n, the complex conjugate of α_n, multiplied by the vector that results from applying the Hermitian conjugate of the operator A to the vector q, the bi-search direction. This is carried out by a call to FIG. 43, which we discuss below. In step 3964, check whether the stop condition has been satisfied, i.e., whether the present updated residual is less than or equal to the tolerance ε (given a priori) multiplied by the initial residual in absolute value. Step 3966 consists of the determination of β, the direction parameter, which is given as the ratio of two numbers: the numerator is the inner product of the newly updated bi-residual with the updated residual, and the denominator is the inner product of the previous residual with the previous bi-residual, which has already been calculated in step 3958.
In step 3968, use the calculated β to obtain a new bi-search direction and a new search direction, given by the formulas shown. In step 3970, n is incremented by 1 and control returns to step 3958, where a new step length is calculated, and so on. This loop is repeated until the condition in step 3964 is satisfied, at which time control is returned to step 3850 in FIG. 38. The forward problem for this particular frequency and this particular source position has now been solved, and the result, i.e., the internal field, is stored.
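The loop of steps 3952-3970 condenses into a short routine. The following is a minimal NumPy sketch of the BCG iteration as described (residual r, bi-residual s, search direction p, bi-search direction q, step length α, direction parameter β); the test system is a hypothetical well-conditioned non-Hermitian matrix, not a discretized Lippmann-Schwinger operator.

```python
import numpy as np

def bcg(A, y, x0, tol=1e-10, max_steps=200):
    """Bi-conjugate gradient solve of A x = y for square, non-Hermitian A."""
    x = x0.astype(complex).copy()
    r = y - A @ x                      # initial residual (step 3952)
    s = r.copy()                       # bi-residual; shadow choice s0 = r0
    p, q = r.copy(), s.copy()          # search / bi-search directions (3954)
    r0_norm = np.linalg.norm(r)
    rho = np.vdot(s, r)
    for _ in range(max_steps):         # n = 0, 1, ... (steps 3956 / 3970)
        Ap = A @ p
        alpha = rho / np.vdot(q, Ap)               # step length (step 3958)
        x = x + alpha * p                          # update guess (step 3960)
        r = r - alpha * Ap                         # update residual (3962)
        s = s - np.conj(alpha) * (A.conj().T @ q)  # bi-residual via A^H
        if np.linalg.norm(r) <= tol * r0_norm:     # stop test (step 3964)
            break
        rho_new = np.vdot(s, r)
        beta = rho_new / rho                       # direction parameter (3966)
        p = r + beta * p                           # new directions (step 3968)
        q = s + np.conj(beta) * q
        rho = rho_new
    return x

# hypothetical well-conditioned non-Hermitian test system
rng = np.random.default_rng(3)
n = 20
A = 2.0 * np.eye(n) + 0.3 * (rng.standard_normal((n, n))
                             + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = bcg(A, y, np.zeros(n))
```

The dense `A @ p` and `A.conj().T @ q` products here are exactly the places that FIG. 41 and FIG. 43 replace with FFT convolutions in the patent's flow.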

    [1083] In step 3852, back in FIG. 38, it is determined if φ is equal to Φ, i.e., whether we have computed the forward problems for this particular frequency for all possible source positions. If not, then φ is incremented by 1 in step 3854 and control is returned to step 3848 of FIG. 38. If so, we have calculated the forward problem for all possible source positions, and control is transferred to step 3856, where it is determined if ω is equal to Ω, i.e., whether we have calculated the forward problem for all possible frequencies. If not, ω is incremented by 1 in step 3857 and control is returned to step 3844. If so, we have completed the computation of the internal fields for all possible source positions and all possible frequencies and have stored the results; i.e., we have finished the subroutine for the two-dimensional or three-dimensional problem, respectively. Control is now transferred to the main program for the two-dimensional case or for the three-dimensional case. This corresponds to step 3622 in FIG. 37A, and also to step 3612 of FIG. 36. FIGS. 37A, 37B, 37C, and 37D together comprise an expanded version of the inverse scattering algorithm in its entirety, which is given in compressed form by FIG. 36. Before proceeding further to describe in detail step 3612 of FIG. 36 (also step 3626 of FIG. 37A), we mention that the BCG algorithm just described may be replaced by the BiSTAB (biconjugate gradient stabilized) algorithm of FIGS. 40A and 40B. Furthermore, in both the BiSTAB algorithm and the biconjugate gradient algorithm, the operation of the operator A acting on the vector x is carried out in a very specific manner, which is described in FIG. 41. Furthermore, in FIG. 43, the Hermitian conjugate of the operator A, denoted A^H, is applied to vectors in its domain. Specifically, in step 4100 of FIG. 41, we come in from any step in FIGS. 40A and 40B or FIGS. 39A and 39B (respectively, the biconjugate gradient stabilized algorithm or the bi-conjugate gradient algorithm) which requires the calculation of the operation of the operator A acting on the vector x. In step 4102, the pointwise product of the scattering potential γ and the present estimate of the internal field at frequency ω and source position φ is formed and stored. In step 4104, the Fast Fourier Transform (FFT) of this pointwise product is taken. In step 4106, the result of the FFT taken in step 4104 is pointwise multiplied by G_ω, the FFT of the Green's function at frequency ω, which has been stored and retrieved. In step 4108, the inverse FFT of the pointwise multiplication formed in the previous step is taken. In step 4110, this result is stored in vector y. In step 4112, the difference, taken pointwise, of f_ωφ, the present value for the internal field at frequency ω and source position φ, and the constructed vector y is taken. Step 4114 returns control to either the BCG or the BiSTAB algorithm, whichever has been called by step 3850 of FIG. 38. The application of the operator described in FIG. 41 is carried out multiple times for a given call of the biconjugate gradient or BiSTAB algorithm; it is imperative, therefore, that this step be computed extremely quickly. This explains the presence of the Fast Fourier Transforms to compute the convolutions that are required. This operator, written symbolically as (I - G_ω γ), we refer to as the Lippmann-Schwinger operator. FIG. 43 calculates the adjoint or Hermitian conjugate of the Lippmann-Schwinger operator and applies it to a given vector. Step 580 of FIG. 43 takes control from the appropriate step in the biconjugate gradient algorithm. It is important to note that the BiSTAB algorithm does not require the computation of the adjoint of the Lippmann-Schwinger operator; therefore, FIG. 43 is relevant only to FIGS. 39A and 39B, i.e., the biconjugate gradient algorithm. In step 580 of FIG. 43, the Fast Fourier Transform (FFT) of the field f, to which the adjoint of the Lippmann-Schwinger operator is being applied, is taken. In step 582 of FIG. 43, the FFT computed above is pointwise multiplied by G*_ω, the complex conjugate of the FFT of the Green's function at frequency ω. In step 584, the inverse FFT of this pointwise product is taken and stored in a vector. In step 586, the pointwise product of the complex conjugate of γ, the present estimate of the scattering potential, is taken with that vector, and the result is stored in vector y. In step 590, the difference is formed between the original field, to which the adjoint of the Lippmann-Schwinger operator is being applied, and the vector y just calculated. Now (I - G_ω γ)^H f_ωφ has been calculated, and control is returned to step 3962 of FIGS. 39A and 39B, the biconjugate gradient algorithm, where use is made of the newly created vector A^H q. Next, the stop condition is checked at step 3964.
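The two FFT recipes (FIG. 41 for the Lippmann-Schwinger operator and FIG. 43 for its Hermitian conjugate) can be written down directly. Below is a minimal NumPy sketch on an assumed periodic 2D grid, with `G_hat` standing in for the stored FFT of the Green's function and `gamma` for the scattering potential; the adjoint identity <Af, g> = <f, A^H g> is checked numerically.

```python
import numpy as np

def ls_apply(f, gamma, G_hat):
    """FIG. 41 sketch: apply (I - G gamma) to f. Pointwise product with
    gamma (step 4102), FFT (4104), multiply by the stored FFT of the
    Green's function (4106), inverse FFT (4108), subtract from f (4112)."""
    return f - np.fft.ifft2(np.fft.fft2(gamma * f) * G_hat)

def ls_apply_adjoint(f, gamma, G_hat):
    """FIG. 43 sketch: apply (I - G gamma)^H to f. FFT (step 580),
    multiply by conj(G_hat) (582), inverse FFT (584), pointwise product
    with conj(gamma) (586), subtract from f (590)."""
    return f - np.conj(gamma) * np.fft.ifft2(np.fft.fft2(f) * np.conj(G_hat))

# numerical check of the adjoint identity <A f, g> = <f, A^H g>
rng = np.random.default_rng(1)
shape = (16, 16)
gamma = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
G_hat = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
f = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
g = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
lhs = np.vdot(ls_apply(f, gamma, G_hat), g)
rhs = np.vdot(f, ls_apply_adjoint(g, gamma, G_hat))
```

Note the order swap the text describes: the forward operator multiplies by gamma before the FFT, while the adjoint multiplies by conj(gamma) after the inverse FFT.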

    [1084] We now return to step 3622 of FIG. 37A, i.e., step 3612 of FIG. 36, where we determine the scattered field at the detector locations for all possible source positions, i.e., incident fields, and also all possible frequencies ω. This process is carried out in FIG. 44, to which we now turn.

    [1085] Step 1150 of FIG. 44 transfers control to step 1152 of FIG. 44, where ω is set equal to 1. Control is then transferred to step 1154, where the Green's function for this particular ω is read into memory. Control is then transferred to step 1156, where φ is set equal to 1. Control is then transferred to step 1158, where the internal field estimate for this source position and this frequency is read into memory. Control is then transferred to step 1160, where the incident field is subtracted from the estimated total internal field. In step 1162, the truncation or propagation operator shown in FIG. 45 is performed. When the truncation or propagation has been performed, control is transferred back to FIG. 44, and in step 1164 the calculated scattered field is stored. In step 1166, it is determined if φ is equal to Φ; if it is not, then φ is incremented by one in step 1170 and control is returned to step 1158 of FIG. 44. If φ is equal to Φ, then control is transferred to step 1172, where it is determined if ω is equal to Ω. If it is not, ω is incremented by one in step 1174 and control is returned to step 1154 of FIG. 44. If ω is equal to Ω, then control is returned to step 3626 of FIG. 37A, the inverse scattering subroutine.

    [1086] Now consider FIG. 45, which was called by FIG. 44 above.

    [1087] Step 1118 of FIG. 45 transfers control to step 1122, where the input is truncated to the border pixels. Control is passed to step 1124, which determines whether the receivers are distant from the image space or juxtaposed to it. If the detectors are in fact juxtaposed to the image space, then control is transferred to step 1126 and from there to the calling subroutine. If the detectors are not juxtaposed to the image space, the fields are propagated to the detector positions. The details of this propagation are given in the EXTENSION OF THE METHOD TO REMOTE RECEIVERS WITH ARBITRARY CHARACTERISTICS section of this patent. Actually, there are several methods which could be used to accomplish this; an application of Green's theorem is one possibility. The beauty and utility of the method shown in that section, however, is that the normal derivatives of the Green's function, or the fields themselves, do not need to be known or estimated.

    [1088] The input to FIG. 45 is a vector of length N_x × N_y. FIG. 45 represents the subroutine that is called by the Jacobian calculation or immediately after the calculation of the internal fields. The input in the first case, the Jacobian calculation, consists of box 720 of FIG. 46B. The input in the second case is the scattered field calculated for a particular frequency ω and a particular source position φ. The purpose of FIG. 45 is to propagate the input (the scattered field in the image space) to the detector positions. In step 1124, it is determined whether the receivers are distant. If the detector positions are located at some distance from the image space, the subroutine passes control from step 1124 to step 1128.

    [1089] In step 1128, the matrix P, constructed earlier, is used to propagate the field to remote detectors. In step 1130 of FIG. 45, control is returned to the program which called the subroutine. The matrix P will change when correlation is present; the fundamental form of the propagation algorithm remains unchanged, however.
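The truncate-or-propagate branch of FIG. 45 can be sketched in a few lines. This is a minimal NumPy stand-in: the border-pixel truncation is step 1122, and `P` is an assumed precomputed propagation matrix (n_detectors by n_border) standing in for the operator of step 1128.

```python
import numpy as np

def to_detectors(scattered, P=None):
    """Truncate the scattered field on an Nx x Ny image grid to its border
    pixels (step 1122); if the receivers are remote rather than juxtaposed,
    propagate with the precomputed matrix P (step 1128)."""
    nx, ny = scattered.shape
    mask = np.zeros((nx, ny), dtype=bool)
    mask[0, :] = mask[-1, :] = True    # top and bottom rows
    mask[:, 0] = mask[:, -1] = True    # left and right columns
    border = scattered[mask]           # border pixels in row-major order
    return border if P is None else P @ border

field = np.arange(25.0).reshape(5, 5)          # toy 5 x 5 image space
border = to_detectors(field)                   # receivers juxtaposed
remote = to_detectors(field, P=np.ones((8, border.size)) / border.size)
```

A 5 x 5 grid has 16 border pixels, so the hypothetical averaging matrix P maps those 16 values to 8 remote detector readings.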

    [1090] These scattered fields are stored in the vector d_ωφ. Control is then transferred to step 3628 in FIG. 37A, which subtracts the measured scattered field from the calculated scattered field and stores the result in one vector r_ωφ, the residual vector for frequency ω and source position φ. This step is repeated until all possible frequencies and all possible source positions have been accounted for, whereupon the residual vectors r_ωφ for all possible ω's and φ's are stored in one large residual vector, shown in FIG. 37A as step 3628. We now apply the Hermitian conjugate of the Jacobian to the residual vector r just calculated in order to establish the search direction P^n with n = 0, i.e., the initial search direction. The search direction is given explicitly by the negative of the action of the Hermitian conjugate of the Jacobian upon the residual vector r, in step 3638.

    [1091] There are in effect at least two possibilities regarding the use of the Jacobian. First, the full Jacobian could be used (e.g., step 3632), or, secondly, an approximation to the Jacobian could be used (e.g., step 3636). The determinator of which path to follow is the programmer, who bases the decision on the magnitude of the scattering potential γ which is to be solved for in the full inverse scattering problem. If γ is large, the exact Jacobian must be used. On the other hand, if γ is reasonably small, although still larger than required for the Born or Rytov approximation, the approximation to the full exact Jacobian shown in FIG. 37A, step 3636, may be used. The application of the exact or approximate Jacobian is discussed later.

    [1092] Now we step through FIGS. 37A to 37D to get an overall view of how the algorithm works.

    [1093] Step 3638 computes the initial search direction P^(0). Step 3640 of FIG. 37A zeros the initial guess for δγ, which is the Gauss-Newton (or Ribiere-Polak, or other) update to γ, the scattering potential. δγ can initially be set to 0. A large portion of the rest of FIGS. 37A, 37B, 37C, and 37D is concerned with the calculation of δγ. This is used in FIG. 36 also, of course.

    [1094] Step 3612 of FIG. 36 has estimated the scattered field at the detector positions. Step 3614 of FIG. 36 now determines if the residual (which is given by the magnitude squared of the difference between the measured scattered field and the calculated scattered field, for each possible source position and each possible frequency) is less than some predetermined tolerance. If the condition is satisfied, then control is transferred to step 3616, where the reconstructed image of the scattering potential is either displayed or stored. If the condition is not met, then a new estimate of the scattering potential is calculated, in a manner to be made explicit below, from the updated internal field calculated in step 3610 of FIG. 36, the most recent estimate of the scattering potential, the estimated scattered field at the detectors calculated in step 3612 of FIG. 36, and the measured scattered field at the detectors. This updated scattering potential estimate is derived in step 3618 and is now used in step 3610 of FIG. 36 to calculate a new update of the estimate of the internal fields, for each possible frequency ω and each possible source position φ. Then, in step 3612 of FIG. 36, a new estimated scattered field at the detectors is calculated by means of Green's theorem, as shown in FIG. 44. This new estimate of the scattered field is then used in step 3614, as before, to determine whether the magnitude of the residual vector between the measured and calculated scattered fields is less than ε, the predetermined tolerance. If this tolerance is exceeded, the loop just described is repeated until either the tolerance is satisfied or the maximum number of iterations of this loop has been exceeded. This maximum number is predetermined by the operator.
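Stripped of the wave-equation machinery, the FIG. 36 control loop has the shape below. This is a schematic NumPy sketch, not the patent's implementation: `forward` and `jacobian_H` are caller-supplied placeholders for the forward solve and the Hermitian-conjugate Jacobian, the fixed 0.5 step size is an assumption, and the toy linear "forward problem" exists only so the loop visibly converges.

```python
import numpy as np

def reconstruct(measured, forward, jacobian_H, gamma0, eps=1e-6, max_iter=50):
    """Predict the scattered field from the current gamma (steps 3610-3612),
    test the squared residual against the tolerance (step 3614), and
    otherwise update gamma from the residual via the Hermitian-conjugate
    Jacobian (step 3618)."""
    gamma = gamma0.copy()
    for _ in range(max_iter):
        r = forward(gamma) - measured          # calculated minus measured
        if np.linalg.norm(r) ** 2 < eps:       # step 3614 tolerance test
            break
        gamma = gamma - 0.5 * jacobian_H(r)    # simple gradient correction
    return gamma

# toy linear stand-in for the forward problem: d = J gamma (J hypothetical)
J = np.array([[1.0, 0.2], [0.1, 1.0], [0.0, 0.5]])
gamma_true = np.array([0.3, -0.2])
measured = J @ gamma_true
gamma = reconstruct(measured, lambda g: J @ g, lambda r: J.T @ r, np.zeros(2))
```

In the patent's flow, the gradient correction line is replaced by the full Gauss-Newton (or Ribiere-Polak) inner iteration of FIGS. 37A-37D; the surrounding loop structure is the same.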

    [1095] We have just concluded a general description of the overall inverse scattering algorithm, leaving out many important details. We now look at each of the steps in more detail. First, we explain how the δγ^(0) shown in step 3640 of FIG. 37A is related to step 3618 of FIG. 36. δγ^(0) in FIG. 36 is the first estimate of δγ in step 3714, the vector which is solved for in steps 3644 through 3714 in FIGS. 37A-37D. This is the correction term which is applied to the present estimated scattering potential γ^(n) in step 3714 of FIG. 37D in order to get the updated scattering potential γ^(n+1). Once this new updated scattering potential has been determined, control transfers to step 3716 of FIG. 37D, where it is determined if the maximum number of Gauss-Newton corrections has been exceeded. Each pass through steps 3644 through 3714 to calculate δγ is one Gauss-Newton step. These Gauss-Newton steps are repeated until one of two things occurs: either the maximum number of Gauss-Newton corrections is exceeded, or the tolerance condition of step 3694 is satisfied. It is important to reiterate what was noted earlier, namely, that the description of the Gauss-Newton algorithm here does not preclude the use of a suitable Ribiere-Polak, Fletcher-Reeves, or other non-linear system solver. The changes in detail do not change the overall character of the minimization algorithm.

    [1096] It should be noted that the most general case is presented, where layering as discussed in Example 2 can be accounted for. In addition, the Lippmann-Schwinger operator described earlier incorporates a convolution part and a correlation part. If it is known a priori that there is negligible layering in the geophysical, non-destructive testing, or other scenario envisioned, then it is appropriate to use code covering the case of no correlation part in the Green's function. This code also computes the two-dimensional free-space Green's function from scratch. This is in contradistinction to the two-dimensional case in the presence of layering, where three separate programs are used to solve the inverse scattering problem, the forward problem, and the construction of the layered Green's function. The construction of this layered Green's function, which is discussed in Example 2 for the acoustic scalar case, requires a substantial amount of time for its computation. However, it is important to note that once the Green's function has been constructed for a given layered scenario, in a geophysical context, for example, it is not necessary to recalculate this Green's function iteratively. Rather, once the Green's function has been determined by methods discussed in the literature (see, for example, [Johnson et al., 1992] or [Wiskin et al., 1993]), it is simply reused at each iteration.

    [1097] As noted, Example 2 shows explicitly how the convolution and correlation are preserved even though the layers above and below the layer containing the image space are arbitrarily distributed. The preservation of convolution and correlation is a critical element of the ability to image objects in a layered or Biot environment (Biot referring to the Biot theory of wave propagation in porous media) in real time. The reflection coefficients which are used in the construction of the layered Green's function in the acoustic, elastic, and electromagnetic cases are well known in the literature; see, for example, [Muller, 1985] or [Aki and Richards, 1980]. The incorporation of this Green's function for layered media in the acoustic, elastic, and electromagnetic cases for the express purpose of imaging objects buried within such a layered medium is novel. The ability to image in real time is critical to practical application of this technology in the medical, geophysical, non-destructive testing, and microscopic environments.

    [1098] We now elucidate the steps shown in FIG. 37A. We have already discussed in detail the steps up to 3640. Step 3642 of FIG. 37A begins the conjugate gradient loop. Step 3644 of FIG. 37A sets n equal to 0.

    [1099] This n is the counter that keeps track of the number of conjugate gradient steps that have been performed in the pursuit of δγ, the update to the scattering potential estimate. Step 3646 of FIG. 37A again determines if the full Jacobian will be used. As in the previous step 3630 of FIG. 37A, the answer depends on the magnitude of γ. In the case that γ is large, the approximation given in step 3650 is not appropriate; instead, step 3648 is performed. Furthermore, the actual implementation of the algorithm depends on whether correlation is present. The two cases are sufficiently different to warrant separate flow charts. FIGS. 46A/B/C deal with the free space case. Considering this case first, JACFLAG is set to one and step 3652 is computed via FIGS. 46A/B/C. Step 700 of FIG. 46A transfers algorithm control to step 702, where the disk file containing the Green's functions and the disk file containing the internal fields for all possible frequencies and all possible source positions are rewound. Control is then transferred to step 704, where ω, the index for the frequency, is set equal to 1. In step 706 thereafter, the complex Fast Fourier Transform (FFT) of the Green's function at this particular frequency is retrieved. In step 708, φ, the counter corresponding to the source positions, is set equal to 1. Step 710 in FIGS. 46A/B/C retrieves the internal field for this particular source position φ, and step 712 takes the pointwise product of this internal field, retrieved in step 710, with the δγ which is the input to FIGS. 46A/B/C and is also the same δγ being sought in the main program in FIGS. 37A-37D. In step 3652 of FIG. 37A, which is where we have come from into FIGS. 46A/B/C, the Jacobian is applied to p, the search direction for δγ. That is, p is the input to FIGS. 46A/B/C; i.e., the δγ in this call of FIGS. 46A/B/C is the p of step 3652 in FIG. 37A. In step 714 of FIGS.
46A/B/C, either the inverse of a certain operator, or the first-order binomial approximation of that inverse, is applied to the pointwise product produced in step 712. This operator, (I - γG_ω), is similar to, but not the same as, the Lippmann-Schwinger operator, which is represented as (I - G_ω γ). Note that the determinator of which operator, (I - γG_ω)^(-1) or (I + γG_ω), is applied is the JACFLAG variable defined in FIG. 37A. JACFLAG = 1 corresponds to application of (I - γG_ω)^(-1).

    [1100] It is important to note that the inverse of this operator, (I−γG).sup.−1, is never explicitly calculated in step 714. The actual computation of the inverse of this operator, which in discrete form is represented as a matrix, would be computationally very expensive. Therefore, the inverse action of the operator (I−γG).sup.−1 acting on the point-wise product calculated in step 712 is calculated by using bi-conjugate gradients or BiSTAB, as indicated in step 714 of FIG. 46A/B/C. This bi-conjugate gradient algorithm is the same bi-conjugate gradient algorithm shown in FIGS. 39A and 39B that was used to calculate the application of the Lippmann-Schwinger operator on a vector f. However, it is important to note that the operator represented by the matrix A in step 3950 of FIG. 39A is not the Lippmann-Schwinger operator; rather, it is the above (I−γG) operator in this case. As before, the BiSTAB or bi-conjugate gradient stabilized algorithm may be used in place of the bi-conjugate gradient algorithm. This BiSTAB or bi-conjugate gradient stabilized algorithm is shown in FIGS. 40A and 40B. The application of the algorithm shown in FIG. 41 for the actual application of the operator A in BiSTAB, FIGS. 40A and 40B, or BCG, FIGS. 39A and 39B, will of course differ, in that the point-wise multiplication by γ is performed after the convolution with the Green's function G appropriate to this frequency. However, the changes that must be made to FIG. 41 are clear in this case. Namely, the point-wise product shown in step 4102 is taken after the computation of the inverse Fast Fourier Transform (FFT) in step 4108 instead of before. A similar comment applies where the Hermitian conjugate of (I−γG) is computed. In this case the Hermitian conjugate of the operator (I−γG) requires that the point-wise product with the conjugate of γ, which is carried out in step 586 of FIG. 43, be calculated before the Fast Fourier Transform (FFT) is taken in step 580 of FIG. 43. Step 3654 of FIG. 
37A now calculates the magnitude of the vector computed in step 3652, namely the result of applying the matrix A to the search direction p. Step 3654 of FIG. 37A also computes the magnitude of the vector which results from applying the Hermitian conjugate of the matrix A to the residual vector r. Again, A is either the exact Jacobian or the approximation to the Jacobian shown in step 3650. Step 3656 computes the square of the ratio of the two magnitudes computed in step 3654; this ratio is denoted by α. Step 3658 of FIG. 37A now updates the Gauss-Newton correction sought in the loop from step 3644 to step 3714 of FIGS. 37A-37D. The update is given explicitly in step 3658 of FIG. 37A. At the same time the residual is updated via the formula shown explicitly in step 3658 of FIG. 37A. Step 3660 transfers control to step 3686 in FIG. 37C, which determines whether the full, exact Jacobian will be used (e.g., step 3688) or the approximation shown explicitly in step 3690. Depending upon which A is defined, i.e., either the exact Jacobian or the approximation, JACFLAG is set to 1 or 0 and step 3692 is carried out. Step 3692 is the application of the Hermitian conjugate of the matrix A, which is defined in either step 3690 or step 3688, depending upon whether the exact Jacobian is used. The Hermitian conjugate of A is applied to the updated residual vector r. Step 3694 calculates the ratio of two numbers. The first number is the magnitude of the updated residual calculated in step 3692; the second number is the magnitude of the scattered field over all source positions and all frequencies. Step 3694 compares this ratio to the pre-determined tolerance. If the tolerance is larger than this calculated number, control is transferred to step 3696, which stores and/or displays the image of the scattering potential. 
If the tolerance is not satisfied, control is transferred to step 3698, where the quantity shown explicitly in step 3698 is calculated.
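    The matrix-free inverse action described in step 714 can be illustrated in outline. The following sketch (illustrative Python/NumPy, not part of the disclosure; the one-dimensional grid, the stand-in Green's kernel, and the stand-in scattering potential are assumptions made for demonstration) applies (I−γG) by an FFT convolution followed by a point-wise multiplication, and hands that action to a BiCGSTAB solver so the inverse is never formed explicitly:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 64
rng = np.random.default_rng(0)
gamma = 0.05 * rng.standard_normal(n)             # stand-in scattering potential
g_hat = np.fft.fft(np.exp(-np.arange(n) / 2.0))   # FFT of a stand-in Green's kernel

def apply_I_minus_gamma_G(f):
    """Action of (I - gamma*G): convolve with G via the FFT, then multiply
    point-wise by gamma (the multiplication follows the convolution, as in
    the operator discussed above)."""
    f = np.asarray(f).ravel()
    conv = np.fft.ifft(g_hat * np.fft.fft(f))
    return f - gamma * conv

A = LinearOperator((n, n), matvec=apply_I_minus_gamma_G, dtype=complex)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y, info = bicgstab(A, b)   # inverse action without ever forming the inverse
```

Because the stand-in scattering is weak here, the iteration converges quickly; the same matrix-free pattern applies with the bi-conjugate gradient routine of FIGS. 39A and 39B.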

    [1101] Step 3700 calculates the new search direction, which is given by the explicit formula shown. Note that the Hermitian conjugate applied to the residual vector r has already been calculated in the previous step 3698. Control now passes to step 3704 of FIG. 37D, and it is determined in step 3710 whether the maximum number of iterations allowed for the conjugate gradient solution has been exceeded. If not, then n, the counter for the conjugate gradient steps, is incremented by one (e.g., step 3712), and control is transferred to step 3646 of FIG. 37A. If, however, the maximum number of conjugate gradient iterations has in fact been exceeded, control is transferred to step 3714 of FIG. 37D. This is the step, discussed previously, in which the scattering potential is updated by the vector computed by the conjugate gradient iterations. Control is then transferred to step 3716 of FIG. 37D, where it is determined whether the maximum number of Gauss-Newton corrections has been calculated. If the maximum number of Gauss-Newton (see the earlier note regarding Gauss-Newton vs. Polak-Ribière) steps has been exceeded, then control is transferred to step 3720, where the present scattering potential estimate is stored and/or displayed. If the maximum number of Gauss-Newton corrections has not been exceeded, then the index parameter that counts the number of Gauss-Newton corrections that have been computed is incremented by 1 (e.g., step 3718) and control is transferred to step 3684 (arrow), which transfers control to step 3622 of FIG. 37A. Again, as before, in step 3622 the forward problems for all frequencies and all source positions are calculated. More specifically, control is transferred to step 3840 of FIG. 38. This figure has been discussed in detail above.

    [1102] The loop just described and displayed in FIGS. 37A-37D is now reiterated as many times as necessary to satisfy the tolerance condition shown in step 3694 of FIG. 37C, or until the maximum number of Gauss-Newton iteration steps has been performed.

    [1103] It is appropriate to study in detail the application of the Lippmann-Schwinger operator and the Jacobian in the situation where layering is present. It is important to note that the discussion heretofore has emphasized the special situation where layering is not present, so that the free space (or similar) Green's function, which involves no correlation calculation, has been used. This specialization is appropriate for pedagogical reasons; however, the importance of the layered Green's function makes it necessary to discuss in detail the application of the Lippmann-Schwinger operator in this case. Before doing this, however, we look at FIGS. 47A/B/C, where the Hermitian conjugate of the Jacobian calculation is considered in the free space case. Just as in the calculation of the Jacobian itself (or its approximation), the variable JACFLAG determines whether BCG (or a variant thereof) is applied or whether the binomial approximation shown is used.

    [1104] In step 600 the full Hermitian conjugate of the Jacobian calculation is undertaken, with the disk files containing the Green's functions and the disk files containing the internal fields for each frequency and each source position being rewound. In step 604 the frequency index is set equal to 1, i.e., the first frequency is considered. In step 606 the complex Fast Fourier Transform (FFT) of the Green's function at this particular frequency is retrieved from disk, where it has been stored previously. In step 608 the source position index is set equal to 1, indicating that we are dealing with the first source position. Step 610 retrieves the internal field f for this source position at this frequency. Step 612 propagates the scattered field from the detector positions to the border pixels of the image space. Also in step 612, the field values at the non-border pixels of the image space are set to 0. In step 614 the Fast Fourier Transform (FFT) of the field resulting from step 612 is computed and stored in vector x. In step 616 the complex conjugate of the Fast Fourier Transform (FFT) of the Green's function G is point-wise multiplied by x, the vector computed in step 614. As before, the actual computation of the inverse of the Hermitian conjugate operator (I−γG)** is computationally prohibitive. Therefore, the application of this inverse to the vector x is carried out by using bi-conjugate gradients or BiSTAB as discussed above. In step 620 either the inverse of the operator (I−γG)** or the approximation (I+γG)** is applied to this result. The system (I−γG)**y=x is solved for y using BCG; see FIGS. 39A and 39B. In step 622 of FIG. 47B the point-wise multiplication by the complex conjugate of f, the internal field for this frequency and source position, is carried out. This point-wise multiplication is the product with the result of step 620. Step 624 stores the result of this point-wise multiplication. Step 626 determines whether the source position index has reached the maximum number of source positions. 
If the maximum number of source positions has not yet been reached, then the source position index is incremented by 1 to indicate movement to the next source position. Transfer of control is now made to step 610, where the internal field for this particular source position is retrieved. The loop from step 610 to step 626 is performed iteratively until the maximum number of source positions has been reached. When this occurs, it is determined in step 630 whether the frequency index has reached the maximum number of frequencies. If the maximum number of frequencies has been reached, then control is transferred to step 634, which transfers control back to FIG. 37A, where the Hermitian conjugate of the Jacobian, or its approximation, was being applied. As shown in FIG. 47C, if the maximum number of frequencies has not been reached, then transfer is made to step 632, where the frequency index is incremented by 1 and control is transferred to step 606 of FIG. 47A, which is the retrieval of the complex Fast Fourier Transform (FFT) of the Green's function at the new frequency. This loop is now repeated until all frequencies and all source positions have been exhausted. At this time control is transferred to the inverse scattering routine in FIGS. 37A-37D.

    [1105] It is appropriate now to turn to that part of FIG. 46A/B/C which details the reduced Jacobian calculation used by FIGS. 37A-37D, i.e., the Gauss-Newton iteration, in the situation where it is permissible to use the approximation to the Jacobian in place of the exact Jacobian. This calculation is called the reduced Jacobian calculation by virtue of the fact that no inverse operator is required to be calculated. Therefore, the total computational complexity of this part of FIG. 46A/B/C (i.e., step 715) is less than that of the corresponding step 714 of FIG. 46A/B/C, in which the full, exact Jacobian calculation is carried out. When the Jacobian calculation in FIG. 46A/B/C is being carried out to compute step 3652 of FIG. 37A, the input is in fact p.sup.n, the search direction for the conjugate gradient step in determining the correction which in turn is the update to the scattering potential at this particular Gauss-Newton step. In step 715a) the point-wise product of the internal field and the input vector is stored in vector x. In step 715b) the Fast Fourier Transform (FFT) of this point-wise product is computed, and in step 715c) this FFT is point-wise multiplied by the FFT of the Green's function for this particular frequency and the inverse FFT of the point-wise product is taken, to yield G convolved with the product formed in step 715a); this result is stored in vector y. In step 715d) the point-wise product of vector y with γ is taken, and the vector x computed and stored in step 715a) is added to it. The Fast Fourier Transform (FFT) of this sum is calculated in step 716 of FIG. 46A/B/C and the algorithm finishes exactly as described previously. Recall that propagation takes place in the scenario where the receivers are at some finite distance from the image space, in contradistinction to the truncation operation, which takes place when the receivers are at the border pixels of the image space. 
The truncation or propagation operator is discussed in FIG. 21 for the general case in which correlation, i.e., layering, is present or absent.
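    The binomial approximation underlying the reduced calculation can be sketched as follows (illustrative Python/NumPy with an assumed one-dimensional discretization and stand-in γ and Green's kernel; not the patent's code). For weak scattering, (I−γG).sup.−1 ≈ I+γG, which needs only one convolution and one point-wise product:

```python
import numpy as np

def approx_inverse(x, gamma, g_hat):
    """Binomial approximation (I - gamma*G)^(-1) x ~ (I + gamma*G) x:
    one FFT convolution with the Green's kernel, one point-wise product,
    and no inner iterative solver."""
    y = np.fft.ifft(g_hat * np.fft.fft(x))   # y = G x (circular convolution)
    return x + gamma * y
```

This is cheaper than the full inverse action of step 714, but it is accurate only when the scattering is weak, which is why JACFLAG selects between the two.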

    [1106] It is very important to note that the correlation that is calculated in the case of layering is, without exception, calculated as a convolution by a change of variables. It is the presence of the convolution which allows the introduction of the Fast Fourier Transform (FFT) and allows this algorithm to be completed in real time when the Fast Fourier Transform (FFT) is used in conjunction with the other specific algorithms and devices of this apparatus.
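    The change of variables referred to above can be made concrete in a small sketch (illustrative Python/NumPy; the one-dimensional circular correlation is an assumption used only for demonstration). Reflecting one argument, j→−j, turns the correlation into a circular convolution, which the FFT then evaluates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# circular correlation by its definition: c[k] = sum_j a[j] * b[(j + k) mod n]
c_def = np.array([np.dot(a, np.roll(b, -k)) for k in range(n)])

# change of variables j -> -j: the correlation of a with b equals the circular
# convolution of a reflected copy of a with b, so the FFT applies
a_reflected = np.roll(a[::-1], 1)     # a_reflected[j] = a[(-j) mod n]
c_fft = np.fft.ifft(np.fft.fft(a_reflected) * np.fft.fft(b)).real
```

Both routes give the same vector; the O(n log n) FFT route is what makes the correlational part of the layered Green's function affordable in real time.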

    [1107] It is now appropriate to consider in detail the action of the Lippmann-Schwinger operator and the Jacobian in the case when layering is present. FIGS. 48A/B display this case.

    [1108] It is also important to note, regarding the Jacobian calculation with correlation, that both the reduced and the exact forms have their counterparts in the Hermitian conjugate. This is discussed in FIGS. 52A/B. In step 654, the rewinding of the disk with the Green's functions and the disk with the internal fields is carried out, and the frequency index is set equal to 1. In step 658 the complex Fast Fourier Transform (FFT) of the Green's function at this particular frequency is retrieved. Step 658 also sets the source position index equal to 1. Step 660 retrieves the internal field for this source position. Step 662 calculates the Hermitian conjugate of the truncation operator, which puts values in the border positions of the image space of the appropriate field and zeros everywhere else. Step 664 point-wise multiplies the result of step 662 by the complex conjugate of the Fast Fourier Transform (FFT) of the Green's function. Step 668 takes the inverse Fast Fourier Transform (FFT) and stores this in vector y. Now, in step 670, depending upon the value of JACFLAG, either the full inverse [(I−γG)**].sup.−1 (via BCG or BiSTAB, etc.) or the approximation (I+γG)** is applied directly to y. The second alternative is carried out in the standard manner. That is, one takes the point-wise multiplication of the complex conjugate of γ, the scattering potential estimate at this time, with the vector y. One then takes the Fast Fourier Transform (FFT) of the result and takes the point-wise multiplication of the complex conjugate of the FFT of the Green's function with this result. Finally one takes the inverse FFT of this product, adds the vector y, and stores the result in vector y (overwriting y). Step 672 multiplies the final result, point-wise, by the complex conjugate of the internal field for this particular frequency and source position, and stores the result. Step 672 then transfers control to step 674, where it is determined whether the source position index has reached the maximum number of source positions. 
If it has not, the source position index is incremented by 1 in step 676 and control returns to step 660. If it has, control is transferred to step 680, where it is determined whether the frequency index has reached the maximum number of frequencies; if it has not, the frequency index is incremented by 1 in step 682 and control is transferred to step 656. If it has, control is transferred to step 684, where control is returned to the inverse scattering main algorithm, FIGS. 37A-37D.

    [1109] It is important to note at this point that when the operator (I−γG) is applied to the vector T.sub.1, and also whenever the Lippmann-Schwinger operator is applied to any input vector in the layering scenario, FIG. 42 must be followed. Step 556 of FIG. 42 forms the vectors which store the Fast Fourier Transform (FFT) of the convolutional and the correlational parts of the layered Green's function separately. These convolutional and correlational parts are computed by Appendices C and G for the two- and three-dimensional cases, respectively, and are given explicitly by the formulas herein.

    [1110] Step 558 passes control to step 560, where the vector of point-wise products shown explicitly there is formed. This point-wise product of γ, the scattering potential estimate, and f, the internal field, is Fourier transformed in step 564; in parallel, in step 562, the reflection operator is applied to this point-wise product γf computed in step 560. In step 566, following step 562, the Fourier transform of the reflected product γf is taken. In steps 570 and 568 there is point-wise multiplication by the correlational part and the convolutional part of the layered Green's function, respectively. The correlational part of the layered Green's function is point-wise multiplied by the FFT computed in step 566; the Fast Fourier transformed convolutional part of the layered Green's function is point-wise multiplied by the FFT computed in step 564. In step 572, the vector W, which is the difference of the results of steps 568 and 570, is computed and stored. In step 574, the inverse Fast Fourier Transform (FFT) of the vector W is computed. In step 576, the final answer, which is the action of the Lippmann-Schwinger operator acting upon the field f, is computed for the layered medium case, i.e., the case in which correlation exists. This process of applying the Lippmann-Schwinger operator in the presence of layering, established in FIG. 42, is used explicitly in the calculation of the forward problems in step 3622 of FIG. 37A in the presence of layering. It is also carried out explicitly in Appendices B and P for the two- and three-dimensional cases, respectively. It is also required in some of the calculations of the application of the Jacobian in the layered medium case.
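    The flow of FIG. 42 can be summarized in a short sketch (illustrative Python/NumPy; the one-dimensional grid, the index-reversal standing in for the reflection operator, and the sign convention taking W as the difference of the convolutional and correlational contributions are demonstration assumptions, not the patent's implementation):

```python
import numpy as np

def reflect(u):
    # stand-in for the reflection operator about the layering plane
    return u[::-1].copy()

def layered_lippmann_schwinger(f, gamma, gc_hat, gr_hat):
    """Action of the layered-medium Lippmann-Schwinger operator on f.
    gc_hat / gr_hat: FFTs of the convolutional and correlational parts of
    the layered Green's function (formed once, as in step 556)."""
    u = gamma * f                              # step 560: point-wise product
    w = gc_hat * np.fft.fft(u) \
        - gr_hat * np.fft.fft(reflect(u))      # steps 562-572: vector W
    return f - np.fft.ifft(w)                  # steps 574-576
```

With gr_hat set to zero this reduces to the free-space form f − G(γf), which is a convenient consistency check on an implementation.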

    [1111] The operator described by FIG. 49, (I−γG), is called the second Lippmann-Schwinger operator. This operator is in fact the transpose of the Lippmann-Schwinger operator proper, i.e., (I−Gγ). Note that it is not the Hermitian adjoint of the Lippmann-Schwinger operator, precisely because no complex conjugate is taken.

    [1112] The input to FIG. 49 is Fast Fourier transformed in step 1222. The result is then point-wise multiplied by the FFT of the Green's function for this particular frequency. The result is inverse Fast Fourier transformed and stored in vector y in step 1226. The point-wise product γ·y is then computed and stored. Finally, the difference between the input vector x and γ·y is formed in step 1230, and control is transferred back to the calling subroutine.

    [1113] FIG. 50 details the application of the Hermitian conjugate of the second Lippmann-Schwinger operator.

    [1114] In FIG. 50 control is transferred to step 1242, where the point-wise product is formed between the complex conjugate of the scattering potential estimate, γ*, and the input to this subroutine, x. The Fast Fourier Transform (FFT) is then applied to this product in step 1244. In step 1246 the point-wise multiplication by G*, the complex conjugate of the FFT of the Green's function at this frequency, is effected. In step 1248 the inverse Fast Fourier Transform (FFT) of the product is taken, and the result is stored in y. In step 1252 the difference is formed between the input to this subroutine, x, and the computed vector y, and the result is stored in the output vector.
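    The routines of FIGS. 49 and 50 pair up as operator and Hermitian conjugate, which can be checked numerically through the inner-product identity an adjoint must satisfy. The sketch below (illustrative Python/NumPy; the grid, γ, and Green's kernel are stand-ins, not the patent's data) implements both:

```python
import numpy as np

n = 32
rng = np.random.default_rng(2)
gamma = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
g_hat = np.fft.fft(np.exp(-np.arange(n) / 4.0)).astype(complex)

def second_ls(x):
    """FIG. 49 sketch: y = IFFT(G_hat * FFT(x)); output x - gamma*y."""
    y = np.fft.ifft(g_hat * np.fft.fft(x))
    return x - gamma * y

def second_ls_herm(x):
    """FIG. 50 sketch: multiply by conj(gamma), FFT, multiply by conj(G_hat),
    inverse FFT; the output is x minus that result."""
    y = np.fft.ifft(np.conj(g_hat) * np.fft.fft(np.conj(gamma) * x))
    return x - y
```

For random vectors x and y, the identity ⟨Lx, y⟩ = ⟨x, L**y⟩ holds to machine precision, confirming that the two figures describe one operator and its Hermitian conjugate.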

    [1115] FIG. 21 delineates the propagation operator that takes fields in the image space, and propagates them to the remote receivers at some distance from the image space. This propagator is discussed elsewhere in more detail.

    [1116] In FIG. 21, the input is truncated to the border pixels in step 1122. Next it is determined whether the receivers are remote or juxtaposed to the image space. If the detectors are in fact juxtaposed to the image space, control is returned to the calling subroutine.

    [1117] If the detectors are remote, the field is propagated with the aid of the propagator matrix P (described in section EXTENSION OF THE METHOD TO REMOTE RECEIVERS WITH ARBITRARY CHARACTERISTICS), in step 1128. Control is then transferred to the calling subroutine in the situation where the detectors are remote.
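    The two branches of FIG. 21 can be sketched as follows (illustrative Python/NumPy; the 2-D image array and the propagator matrix P are demonstration assumptions, with P simply supplied by the caller rather than constructed as in the referenced section):

```python
import numpy as np

def truncate_to_border(field):
    """Keep the field values at the border pixels of the image space and
    zero the non-border pixels (step 1122)."""
    out = np.zeros_like(field)
    out[0, :], out[-1, :] = field[0, :], field[-1, :]
    out[:, 0], out[:, -1] = field[:, 0], field[:, -1]
    return out

def to_receivers(field, P=None):
    """FIG. 21 sketch: truncate to the border; if the receivers are remote,
    apply the propagator matrix P to reach them; otherwise return the
    border values directly."""
    border = truncate_to_border(field)
    if P is None:                 # receivers juxtaposed to the image space
        return border
    return P @ border.ravel()     # remote receivers
```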

    [1118] FIG. 51 details the Hermitian conjugate of the propagation operator discussed in FIG. 21:

    [1119] In FIG. 51, which shows the Hermitian conjugate of the propagation operator P, it is first determined in step 1262 whether the receivers are at some remote location. If not, then control is transferred directly to step 1268. If the detectors are in fact remote, then control is passed to step 1266, where the fields are propagated via the propagation matrix to the border pixels of the image space. In either case, control then passes to step 1270 and on to the calling subroutine.

    [1120] FIGS. 48A/B determine the effect of the Jacobian on an input vector x in the presence of layering. Step 1050 transfers control to step 1052, where the frequency index is set equal to 1. Also, the files containing the Green's function for each frequency (both the correlational and the convolutional parts), the internal field estimates, and the propagator matrix are rewound. Control is then transferred to step 1054, where the propagator matrix for this frequency is read in and the Green's function for this particular frequency is read into memory. Also in step 1054 the source position index is set equal to 1. Control is then transferred to step 1056, where the file that contains the internal field estimate for this particular frequency and source position is read into memory. Control is then transferred to step 1058, where the point-wise product of the internal field estimate and the input vector x is computed. Next, in step 1060, it is determined whether the approximate Jacobian or the full Jacobian will be used. If the full Jacobian is to be used, then control is transferred to step 1062, where bi-conjugate gradients or bi-conjugate gradients stabilized is used to solve for the action of the inverse of the operator (I−γG) acting on T.sub.1. Recall that T.sub.1 is the temporary vector which holds the point-wise product computed in step 1058. Control is then transferred to step 1066. If in step 1060 the determination is made to perform the approximate Jacobian operation, then control is transferred to step 1064. In step 1064 the Fast Fourier Transform (FFT) of the vector T.sub.1 is taken and stored in temporary vector T.sub.2. Also, the reflection operator is applied to the vector T.sub.1 and the result is stored in vector T.sub.3, 
and the Fast Fourier Transform (FFT) of this reflected result is taken and stored in T.sub.3. T.sub.2 is then overwritten with the point-wise product of the convolutional part of the Green's function and the previous T.sub.2, added to the point-wise product of T.sub.3 with the correlational part of the Green's function for this particular frequency. The inverse Fast Fourier Transform (FFT) is applied to vector T.sub.2 and the result is stored in vector T.sub.2. Finally, the point-wise product of γ and T.sub.2 is formed, the result is added to vector T.sub.1, and the sum is stored in vector T.sub.1. Regardless of whether step 1064 or step 1062 has just been completed, in step 1066 the reflection operator is applied to the result of either step 1062 or step 1064 and the reflected result is Fast Fourier transformed into vector T.sub.3. Also, the original vector T.sub.1 is Fast Fourier transformed and the result is stored in vector T.sub.2. T.sub.2 is then point-wise multiplied by the convolutional part of the layered Green's function at this particular frequency, and the result is added to the point-wise product of T.sub.3 with the correlational part of the layered Green's function at this frequency. The result is stored in vector T.sub.2. Finally, an inverse Fast Fourier Transform (FFT) is applied to the resulting vector T.sub.2. Control is then transferred to step 1068, where the propagation or truncation operator given in FIG. 21 is applied to the result. In step 1070 it is determined whether the source position index has reached the maximum number of source positions. If it has not, control is transferred to step 1072, where the source position index is incremented by 1, and control is then transferred back to step 1056. If it has, control is transferred to step 1074, where it is determined whether the frequency index has reached the maximum number of frequencies. If it has not, control is transferred to step 1076, where the frequency index is incremented by 1 and control is transferred back to step 1054. 
If it has, control is transferred to step 1080, from which control is returned to the calling subroutine.

    [1121] In FIG. 52A the Hermitian conjugate of the Jacobian with correlation present is applied to an input vector x. In step 654 the frequency index is initialized and the files containing the Green's functions for all frequencies are rewound. In step 658 the Green's function, both the convolutional and the correlational parts, for this particular frequency is read into memory, and the source position index is set equal to 1. In step 660 the file containing the internal field estimate for this particular source position and frequency is read into memory. In step 662 the Hermitian conjugate of the propagation operator is applied to the vector x and the result is stored in vector T.sub.1. In step 664 the reflection operator is applied to the vector T.sub.1 and the result is stored in vector T.sub.3; Fast Fourier Transforms are then applied to both vector T.sub.1 and vector T.sub.3. Finally, the point-wise product of the complex conjugate of the FFT of the convolutional part of the layered Green's function with T.sub.1 is formed, and the result is added to the point-wise product of T.sub.3 with the complex conjugate of the FFT of the correlational part of the layered Green's function. The result is stored in vector T.sub.1. Control is then transferred to step 668, where the inverse Fast Fourier Transform (FFT) of T.sub.1 is taken and the result is stored in vector T.sub.1. In step 670 either the approximation or the full action of the inverse of the second Lippmann-Schwinger operator, with Hermitian conjugate taken, is applied as before. If the inverse operator is applied, it is approximated by the solution of a system via bi-conjugate gradients or bi-conjugate gradients stabilized. Otherwise, in step 670 the approximation (I+γG)** is applied to the vector T.sub.1 resulting from step 668: the point-wise product with γ*, i.e., the complex conjugate of the scattering potential estimate, is taken; 
the Fast Fourier Transform (FFT) of the result is then point-wise multiplied by G*, the complex conjugate of the FFT of the Green's function, with care being taken to ensure that the convolutional and the correlational parts are handled separately as before; the inverse FFT is taken; and this result is added to T.sub.1 and stored in T.sub.1. Control is then transferred to step 672, where the result is point-wise multiplied by the complex conjugate of the internal field estimate and added to the accumulated result from the previous frequencies and source positions. Control is then transferred to step 674, where it is determined whether the source position index has reached the maximum number of source positions; if not, it is incremented by 1 and control is transferred to step 660. If it has, control is transferred to step 680, where it is determined whether the frequency index has reached the maximum number of frequencies. If not, the frequency index is incremented by 1 and control is transferred back to step 658. If it has, control is transferred back to the calling subroutine.

    Detailed Description of FIG. 19

    [1122] Subroutine scatter is called from the main program in FIG. 19 in order to effect the forward calculation of the fields for each view. Control is transferred to box 2000, where inv is set=1 (inv is the index of views). Next, in box 2002, is the calculation of the rotation angle θ=(inv−1)Δθ, and the setting up of the rotation matrix M(θ), which is then applied to the transmission coefficient matrix T to yield T.sub.θ(x)=M(θ)T(x), x∈R.sup.3.

    [1123] Next control is moved to box 2004, where the field for the first slice is set to predetermined values s(i,k), and the y index, j, is set=1: f.sub.0(x, z)≡f(x, jΔy, z)|.sub.j=0=s(x, z).

    [1124] Control is then transferred to box 2006, where the Fourier transform is applied to field slice f.sub.j−1: F(f.sub.j−1).

    [1125] Control is then transferred to box 2008, where the point-wise product is taken between the propagator P.sub.j (with k.sub.z denoting the Fourier variable in the z-direction), defined as

    [00571] P.sub.j = propagator = e.sup.−iΔy√(k.sub.0.sup.2−k.sub.z.sup.2),

    [1126] and the result from box 2006, to get P.sub.jF(f.sub.j−1). Now transfer to box 2012, where we take the inverse Fourier transform in the z-direction to get φ.sub.j=F.sup.−1(P.sub.jF(f.sub.j−1)). Then move to box 2014, where the j.sup.th field slice is given by the point-wise product of the transmission coefficient T.sub.j and φ.sub.j: f.sub.j=T.sub.j(x)φ.sub.j. Then transfer control to decision box 2016, whereupon: if j is strictly less than N.sub.y, then control moves to box 2018, where index j is increased by 1, and control is directed along arrow 2020 to box 2006. If j is exactly N.sub.y, control is transferred to decision box 2024. Then, if inv<N.sub.view, control is transferred to box 2030, where inv is incremented by 1 to inv+1, and control is then transferred along arrow 2026 to box 2002. When inv=N.sub.view, control is returned to the main program.
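    The marching loop of FIG. 19 can be sketched compactly (illustrative Python/NumPy; the one-dimensional slices, the k.sub.z grid, and the sign convention of the propagator are demonstration assumptions, and the per-view rotation of the transmission coefficients is omitted):

```python
import numpy as np

def scatter_view(T, s, k0, dy):
    """One view of the FIG. 19 march. T: (Ny, Nz) transmission coefficients,
    s: (Nz,) incident field slice. Each step FFTs the previous slice,
    multiplies by the propagator exp(-i*dy*sqrt(k0^2 - kz^2)), inverse
    FFTs, then multiplies point-wise by the slice's transmission coefficient."""
    ny, nz = T.shape
    kz = 2 * np.pi * np.fft.fftfreq(nz)
    ky = np.sqrt((k0**2 - kz**2).astype(complex))
    # phase advance for propagating components, decay for evanescent ones
    prop = np.exp(-1j * dy * ky.real) * np.exp(-dy * np.abs(ky.imag))
    f = s.astype(complex)
    for j in range(ny):
        f = np.fft.ifft(prop * np.fft.fft(f))   # boxes 2006-2012
        f = T[j] * f                            # box 2014
    return f
```

With T identically 1 and k.sub.0 large enough that every component propagates, the march preserves the norm of the field, which is a convenient sanity check on an implementation.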

    Detailed Description of FIGS. 20A/B/C

    [1127] Control is passed to subroutine Jac from the parent program in box 2102. It will calculate the action of the Jacobian of the forward problem on the input vector δt. Control is transferred to box 2104, where Δθ is determined via the formula Δθ=2π/N.sub.views; the spatially dependent transmission coefficient T=t+1 (T=T(x, y, z)) and the input δt (or the corresponding perturbation of the object function) are also initialized, and inv is set=1, i.e., the index corresponding to view number. Control is then transferred to box 2106, where one sets θ=(inv−1)Δθ, and thus to box 2108, which sets up the rotation matrix M(θ) and applies the rotation matrix to the transmission coefficient matrix T: T.sub.θ(x)=M(θ)T(x), x∈R.sup.3. Control is then transferred to box 2110, which initializes the field at the left side of the pixelated cube using the formula f.sub.0(x, z)=f(x, jΔy, z)|.sub.j=0=s(x, z).

    [1128] Now begin the march across the pixelated image space by transferring control to box 2112, where j (the y index) is set=1. Then control moves to box 2114, where we apply the Fourier transform to field slice f.sub.j−1: F(f.sub.j−1). Then control is transferred to box 2116 of FIG. 20B, where we multiply point-wise by the propagator P.sub.j

    [00572] P.sub.j = propagator = e.sup.−iΔy√(k.sub.0.sup.2−k.sub.z.sup.2),

    [1129] to get P.sub.jF(f.sub.j−1). We now move to box 2118, where the inverse Fourier transform in the z-direction is applied to get φ.sub.j=F.sup.−1(P.sub.jF(f.sub.j−1))≡Af.sub.j−1 (this slice φ.sub.j will be used later, and so is not overwritten). Control is then transferred to box 2120, where Aδf.sub.j−1=F.sup.−1(P.sub.jF(δf.sub.j−1)) is computed in exactly the same manner, i.e., the Fourier transform is applied to δf.sub.j−1, then the propagator is point-wise multiplied, and finally the inverse Fourier transform is taken.

    [1130] Control is then transferred to box 2122, where the point-wise product between φ.sub.j (calculated above) and δt.sub.j, namely [δt.sub.j]φ.sub.j, is formed. Also the point-wise product involving t.sub.j, [t.sub.j]Aδf.sub.j−1, is formed. The definition of the point-wise product is given explicitly by

    [00573] ([t.sub.j]v.sub.j).sub.ik ≡ t.sub.ijk v.sub.ijk, i, k varying, j fixed.

    [1131] Finally, the sum of the resulting vectors is formed: δf.sub.j=[δt.sub.j]φ.sub.j+[t.sub.j]Aδf.sub.j−1.

    [1132] Control is then transferred to box 2124, where the field at slice y=j is formed (to obviate the need to store/read the fields on disk, which is very time consuming). The field is given by the point-wise product of the transmission coefficient T.sub.j and the slice φ.sub.j: f.sub.j=T.sub.j(x)φ.sub.j.

    [1133] Control is now transferred to decision box 2126: if j=N.sub.y (i.e., the right-hand side of the pixelated image space has been reached), then control is transferred to decision box 2130. If, however, the answer to the query j=N.sub.y? is no, then control is transferred to box 2128, where the slice index is incremented by 1, and control is transferred along arrow 2138 to box 2114 of FIG. 20A, where the Fourier transform is applied to the field slice f.sub.j−1.

    [1134] If control has been transferred to decision box 2130 of FIG. 20C, and if the index for the view, inv=N.sub.view, then control moves to box 2134, where the Fourier transform is applied to the error on the detectors: fc=F(r.sub.N). The resulting fc is point-wise multiplied by a scaling which, in some cases, leads to more rapid convergence. This scaling is not critical to the implementation of the algorithm. The vector fc (scaled or not) is the output of the subroutine.
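    The product-rule structure of boxes 2118-2124 can be sketched as follows (illustrative Python/NumPy; the one-dimensional slices, the assumed propagator convention, and the omission of per-view rotation are demonstration choices, not the patent's code; δt enters through δT=δt since T=t+1):

```python
import numpy as np

def march_operator(nz, k0, dy):
    # shared propagation step A: FFT, multiply by the propagator, inverse FFT
    kz = 2 * np.pi * np.fft.fftfreq(nz)
    ky = np.sqrt((k0**2 - kz**2).astype(complex))
    prop = np.exp(-1j * dy * ky.real) * np.exp(-dy * np.abs(ky.imag))
    return lambda u: np.fft.ifft(prop * np.fft.fft(u))

def forward_view(T, s, k0, dy):
    # the plain forward march: f_j = T_j * A(f_{j-1})
    A = march_operator(T.shape[1], k0, dy)
    f = s.astype(complex)
    for j in range(T.shape[0]):
        f = T[j] * A(f)
    return f

def jac_action(T, dT, s, k0, dy):
    """Directional derivative of the march with respect to the transmission
    coefficients: df_j = [dT_j]*phi_j + [T_j]*A(df_{j-1}), phi_j = A(f_{j-1})."""
    A = march_operator(T.shape[1], k0, dy)
    f = s.astype(complex)
    df = np.zeros(T.shape[1], dtype=complex)
    for j in range(T.shape[0]):
        phi = A(f)                        # box 2118, reused (not overwritten)
        df = dT[j] * phi + T[j] * A(df)   # boxes 2120-2122 (product rule)
        f = T[j] * phi                    # box 2124: recompute the field slice
    return df
```

A finite-difference comparison against the forward march is a quick way to confirm the product-rule recursion.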

    Detailed Description of FIGS. 21A/B/C

    [1135] Subroutine JacH(r) is called from the parent program; it will calculate the action of the Hermitian conjugate of the Jacobian on the residual vector r. Control is transferred from box 2202 to box 2204, where Δθ is determined: Δθ=2π/N.sub.views; also dt is initialized to 0 and inv is set=1. Control is transferred then to box 2206, where θ is defined according to θ=(inv−1)Δθ. Control is transferred to box 2208, where the rotation matrix M(θ) is defined and applied to the transmission coefficient matrix T to yield T.sub.θ: T.sub.θ(x)=M(θ)T(x), x∈R.sup.3. Then control is transferred to box 2210, where the field at the left side of the pixelated cube is initialized: f.sub.0(x, z)=f(x, jΔy, z)|.sub.j=0=s(x, z).

    [1136] Next control is transferred to box 2212, where j is set=1, and the march across the pixelated image space is begun.

    [1137] Next control is transferred to box 2214, where the Fourier transform is applied to the field slice f.sub.j−1: F(f.sub.j−1), and control is transferred to box 2216, where point-wise multiplication by the propagator P.sub.j is carried out, where

    [00574] P.sub.j = propagator = e.sup.−iΔy√(k.sub.0.sup.2−k.sub.z.sup.2).

    [1138] The result, P.sub.jF(f.sub.j−1), is then transferred to box 2218, where the inverse Fourier transform in the z (vertical) direction is applied, yielding φ.sub.j=F.sup.−1(P.sub.jF(f.sub.j−1)). This φ.sub.j will be used below, and so is saved.

    [1139] From this, control moves to box 2220, where the field slice at y=j (recall that field values are recomputed to obviate the time-consuming storage/retrieval of these on disk) is given by the point-wise product of the transmission coefficient T.sub.j and φ.sub.j: f.sub.j=T.sub.j(x)φ.sub.j. Control now goes to decision box 2224, whereupon, if j is strictly less than N.sub.y, control is switched to box 9292, where index j is increased by 1, and control reverts to box 2214, where the Fourier transform of the next field slice is taken. When j is equal to N.sub.y, control is transferred to box 2226 of FIG. 21B. This is the beginning of the backward-moving loop (i.e., j decreases from N.sub.y to 1). Here we form the Fourier transform of the conjugate of the error vector at the detectors, and store the result in fc: fc=F(r.sub.N).

    [1140] Now use the φ.sub.j (calculated in the loop above) and the fields fc to calculate the action of the Hermitian conjugate of the Jacobian on r, the error in the field at the detectors (far side of the image space), in the following manner:

    [1141] Control then moves to box 2226, where fc is redefined as fc·(t.sub.v+1) and the point-wise product between the resulting fc and φ.sub.j is taken and stored: (δt.sub.v).sub.j=[φ.sub.j]fc. Control is then transferred to box 2228, where we point-wise multiply fc by the rotated transmission coefficient T, Fourier transform the result, point-wise multiply by the propagator P.sub.j, then inverse Fourier transform (in the z-direction) the result; finally, the resulting slice is stored in fc (overwriting the previous contents): fc=F.sup.−1(P.sub.jF(T(x)fc)). Next, control is transferred to box 2230, in order to test whether j=0. If not, then control is transferred to box 2238, where the y index j is decreased by one (j←j−1), and control is transferred along arrow 2236 back to box 2226. If, however, j=0, then the loop is done, and control moves to box 2232, where the Hermitian conjugate of the rotation matrix is applied to the matrix of transmission coefficient corrections tv computed in the above loop, where:

    [00575] tv = [(δt.sub.v).sub.1 (δt.sub.v).sub.2 . . . (δt.sub.v).sub.N]

    [1142] in order to rotate the (estimate of the) object back to its original orientation:

    [00576] [ t_1, t_2, … , t_N ] ← Rot^H(tv)

    [1143] Then control moves to box 2234, where we add the contribution for this view to the total transmission-coefficient adjustment dt:

    [00577] dt ← dt + [ t_1, t_2, … , t_N ]

    [1144] Now control is transferred over to decision box 2240, where it is determined whether index inv = N_views. If it is not, then control is transferred over to box 2242, where index inv is increased by 1 (inv ← inv+1), and control is transferred up arrow 2242 to box 2246 to do the next view. If it is determined that index inv does in fact equal N_views, then control is passed along arrow 2244 to return to the calling program, with output → (t).
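    By way of illustration only, the forward/backward pattern above can be sketched in NumPy. This is a loose sketch, not the patented recursion itself: the grid, wavenumber, transmission screens T_j, propagator P, and the per-slice sensitivity accumulation are all hypothetical stand-ins chosen only to show the split-step forward loop and the slice-by-slice adjoint loop.

```python
# Illustrative split-step forward/adjoint sketch (assumed geometry/symbols).
import numpy as np

nx, ny = 64, 8
rng = np.random.default_rng(1)
T = np.exp(1j * 0.01 * rng.standard_normal((ny, nx)))   # stand-in transmission screens T_j(x)
kx = 2 * np.pi * np.fft.fftfreq(nx)
k0 = 2 * np.pi / 4.0                                    # arbitrary wavenumber
kz = np.sqrt(np.maximum(k0**2 - kx**2, 0.0))
P = np.exp(1j * kz)                                     # one-slice angular-spectrum propagator

# Forward loop (cf. boxes 2214-2224): screen, store, propagate.
f = np.ones(nx, dtype=complex)                          # incident field slice
fields = []                                             # recomputed rather than stored on disk, per the text
for j in range(ny):
    f = T[j] * f                                        # pointwise transmission: f_j <- T_j(x) f_j
    fields.append(f.copy())
    f = np.fft.ifft(P * np.fft.fft(f))                  # propagate to the next slice

# Backward loop (cf. boxes 2226-2230): carry the detector error back,
# accumulating a schematic per-slice product tv_j.
r = rng.standard_normal(nx) + 1j * rng.standard_normal(nx)  # detector error vector (stand-in)
fc = r.copy()
tv = np.zeros((ny, nx), dtype=complex)
for j in reversed(range(ny)):
    fc = np.fft.ifft(np.conj(P) * np.fft.fft(fc))       # adjoint of the propagation step
    tv[j] = np.conj(fields[j]) * fc                     # schematic per-slice sensitivity tv_j
    fc = np.conj(T[j]) * fc                             # adjoint of the transmission screen
```

The forward loop mirrors the screen-then-propagate recursion of boxes 2214-2224; the backward loop mirrors boxes 2226-2230, producing one tv_j per slice, which would then be rotated and accumulated into dt as in boxes 2232-2234.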

    Detailed Description of FIGS. 22A/B/C/D

    [1145] Control is transferred from the calling program through arrow 2300 on FIG. 22A to box 2302, where the data g_j, for j=1, . . . , N_view, is read in, and the predetermined maximum number of steps taken in the conjugate gradient method is read in as well. Then control is transferred to box 2304 of FIG. 22A, where the initial guess for the object function γ is read into memory. Also, in this box 2304, the index j is set to 1 to indicate that the first view is being considered. Then control is transferred down to box 2306, where the forward problem for direction j is solved, that is, the partial differential equation for ψ_j is solved:

    [00578] ∇²ψ_j + 2ik_o θ_j·∇ψ_j − k_o² (1 + ψ_j) γ = 0

    [1146] with ψ_j = g_j on the boundary

    [00579] ∂Ω_j ∖ Γ_j^− ,

    and with the initial condition

    [00580] ∂ψ_j/∂ν = ∂g_j/∂ν

    [1147] on

    [00581] Γ_j^− .

    The notation ∂/∂ν designates the interior normal derivative. The forward operator R_j is defined as follows:

    [00582] R_j(γ) ≡ ψ_j |_{Γ_j^+} ,

    that is, R_j(γ) is ψ_j restricted to

    [00583] Γ_j^+ , Γ_j^+

    being the forward-scattering part of the boundary of Ω.

    [1148] Now control is transferred over to decision box 2308, where it is determined whether j ≤ N_view. If it is not, then the algorithm continues to box 2310, where control is returned to a calling subprogram, or the final image is stored and displayed. That is, if j > N_view, then the algorithm has updated the object function for all incident directions, and control is transferred back to the calling program or the image is stored/displayed.

    [1149] If j ≤ N_view, then control is transferred over to box 2312, where an initial guess for δγ, the update to γ for this particular direction j, is read in.

    [1150] Control is then transferred to box 2314, where the solution to the initial value problem

    [00584] ∇²(δψ_j) + 2ik_o θ_j·∇(δψ_j) − k_o² γ δψ_j = k_o² (1 + ψ_j) δγ

    with the boundary values:

    [1151] δψ_j = 0 on

    [00585] ∂Ω_j ∖ Γ_j^−

    and initial value

    [00586] ∂(δψ_j)/∂ν = 0

    [1152] on

    [00587] Γ_j^−

    is determined by the method of finite differences or the marching method described in this patent (the F. Natterer and F. Wübbeling paper). The ψ_j is the same as in box 2306. The solution to this problem, the function δψ_j, is the result of applying the Jacobian of R_j to the current guess for the object-function correction, δγ.

    [1153] Next control is transferred to box 2316, where the residual r^(0) ← (g_j − R_j(γ^r)) is defined. Next control is transferred to box 2318, where the boundary/initial value problem:

    [00588] ∇²z + 2ik_o θ_j·∇z − k_o² γ̄ z = 0

    [1154] with boundary values:

    [1155] z=0 on

    [00589] ∂Ω_j ∖ Γ_j^+ ,

    and initial values

    [00590] ∂z/∂ν = r^(0)

    [1156] on

    [00591] Γ_j^+

    is solved. Then, the function

    [00592] p^(0) = −([R′_j]^H r^(0))

    [1157] is defined as

    [00593] p^(0) = −k_o² (1 + ψ̄_j) z ,

    where z is the solution to the above initial value problem. Note: in the above formula, ψ̄_j is the complex conjugate of ψ_j.

    [1158] Now, control is transferred to box 2320, where the definition w_j^(0) ← p^(0) is enforced (the general w_j^(n) will be defined and used later in this algorithm); also, the loop index n is set to 0. Then control is transferred to box 2322, where we define y_j^(n) ← y_j, with y_j the solution to the boundary value problem

    [00594] ∇²y_j + 2ik_o θ_j·∇y_j − k_o² γ y_j = k_o² (1 + ψ_j) p^(n)

    [1159] with the boundary values:

    [1160] y_j = 0 on ∂Ω_j ∖ Γ_j^−, and initial value

    [00595] ∂y_j/∂ν = 0

    [1161] on

    [00596] Γ_j^−

    [1162] Then control is transferred to box 2324, where α_n is defined as an appropriate ratio. Then control is transferred to box 2326, where the correction δγ to the object function and the residual r are updated. Control is then transferred to decision box 2328, where it is determined whether or not the magnitude of the vector of residuals is smaller than some predetermined and fixed ε. If it is not, then control is transferred to decision box 2332, where it is determined whether the index n is larger than N*. If either of these two criteria is met, then it is assumed that the subprocess that converges to δγ_j is finished, and control is transferred to box 2334, where the object function is updated by the formula γ^(r+1) = γ^r + δγ^r. Then in box 2336 the view index j is incremented by 1, and control is then transferred back to box 2306 to solve the forward problem for the next direction.

    [1163] If neither of these conditions is met, then control is transferred to box 2338, where the action of the Hermitian adjoint of the Jacobian

    [00597] [R′_j]

    [1164] on the function r^(n+1),

    [00598] (symbolically represented as [R′_j]^H r^(n+1))

    [1165] is calculated. We use the notation

    [00599] w_j^(n+1) ≡ [R′_j(γ)]^H r^(n+1)

    [1166] This w_j^(n+1) is explicitly calculated in two stages: [1167] (1) Calculate z as the solution to the boundary value problem

    [00600] ∇²z + 2ik_o θ_j·∇z − k_o² γ̄ z = 0

    with boundary values: [1168] z = 0 on ∂Ω_j ∖ Γ_j^+, and initial values

    [00601] ∂z/∂ν = r^(n+1)

    [1169] on

    [00602] Γ_j^+ , [1170] (2)

    [00603] w_j^(n+1) = −k_o² (1 + ψ̄_j) z

    [1171] Control is then transferred to box 2340, where the constant β_n is calculated via the formula:

    [00604] β_n = ‖w_j^(n+1)‖² / ‖w_j^(n)‖² .

    [1172] Control is then transferred to box 2342, where the search direction p^(n) is updated via the formula

    [00605] p^(n+1) ← w_j^(n+1) + β_n p^(n) .

    Now one loop of the conjugate gradient algorithm has been completed, and control is transferred up to box 2344, where the loop index n is increased by 1; control is then transferred to box 2322, where the next solution to the given boundary value problem is calculated. Note that γ^r and ψ_j are both known. This loop is repeated until one of the criteria in box 2328 or 2332 is finally met, whereupon control is transferred back to box 2306 and the next view is dealt with.
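    The inner loop of boxes 2314-2342 is, in structure, a conjugate-gradient iteration on the normal equations (the α ratio, residual update, adjoint action, β ratio, and search-direction update). The following sketch is illustrative only: the dense matrix J is a hypothetical stand-in for the PDE-based Jacobian action and its Hermitian adjoint described above.

```python
# Illustrative CGNR/CGLS sketch of boxes 2314-2342 (J is a stand-in Jacobian).
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 20
J = rng.standard_normal((m, n))                  # Jacobian of R_j (hypothetical stand-in)
g = rng.standard_normal(m)                       # data for this view

dgamma = np.zeros(n)                             # initial guess (box 2312)
r = g - J @ dgamma                               # residual r^(0) (box 2316)
w = J.T @ r                                      # w^(0) = J^H r^(0) (boxes 2318-2320)
p = w.copy()                                     # initial search direction
for _ in range(50):
    y = J @ p                                    # Jacobian applied to p (box 2322)
    alpha = (w @ w) / (y @ y)                    # the "appropriate ratio" (box 2324)
    dgamma = dgamma + alpha * p                  # update correction (box 2326)
    r = r - alpha * y                            # update residual
    if np.linalg.norm(J.T @ r) < 1e-12:          # convergence test (boxes 2328/2332)
        break
    w_new = J.T @ r                              # adjoint action (box 2338)
    beta = (w_new @ w_new) / (w @ w)             # beta_n (box 2340)
    p = w_new + beta * p                         # search-direction update (box 2342)
    w = w_new
```

After the loop, dgamma plays the role of δγ_j, the per-view object-function correction that is added to γ^r in box 2334.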

    [1173] Detailed Description of Flow Chart for Propagation-Backpropagation Method, FIG. 23

    [1174] Control is transferred to box 2350 of FIG. 23, where the initial guess or estimate for γ is read in. From there control is transferred to box 2352, where the data (scattered or total field) is read in. Also in this box, the forward problem with an object function of γ^r is carried out for the j-th direction.

    [1175] Next control is transferred to box 2354, where the initial value and boundary value problem shown in that box is solved. This is part of the back-propagation stage of the process. Then control is transferred over to box 2356, where the back-propagation process is completed by the definition given there. Then control is transferred down to box 2358, where γ^r is updated with a multiplicative factor. Then control is transferred over to box 2364, where it is determined if j is greater than or equal to N_view. If so, then control is transferred over to box 2366, where it is determined if r is greater than or equal to N_iter. If r is greater than or equal to N_iter, then control is transferred down to box 2368, where control is transferred over to the calling program and/or the image of γ is displayed. If the criterion in box 2364 is not satisfied, control is transferred to box 2362, where j is increased by 1. Then control is taken back to box 2352, where the data (scattered or total field) for the new direction j is taken, and the corresponding forward problem is solved for direction j. If the r counter is not greater than or equal to N_iter, control is transferred to box 2360, where j is reset to 1, since it has already been determined in box 2364 that all the views have been processed. Also the iteration counter r is increased by 1, and control is returned to box 2352, where the forward problem corresponding to the direction j=1 is performed. Note that one may wish to bypass the redoing of the forward problem after each iteration in the r variable. This would require the storing of each ψ_j for all of the directions; furthermore, since the computation of the forward problem is very fast, it is approximately as efficient to recompute the forward problem at each iteration of r as it is to go through several iterations before recomputing the forward solutions.
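    The view/iteration bookkeeping of FIG. 23 can be sketched, in a linearized toy setting, as a block-iterative update over views. This is only a schematic: the per-view matrices A_j and the relaxation factor are hypothetical stand-ins for the forward solve (box 2352) and the back-propagation update (boxes 2354-2358).

```python
# Schematic of the FIG. 23 loop: for each direction j, form the data
# mismatch, back-propagate it, update gamma; sweep all views N_iter times.
import numpy as np

rng = np.random.default_rng(4)
n, n_view = 8, 4
A = [rng.standard_normal((6, n)) for _ in range(n_view)]  # stand-in forward operators per view
gamma_true = rng.standard_normal(n)
g = [A_j @ gamma_true for A_j in A]                       # per-view data (consistent toy problem)

gamma = np.zeros(n)                                       # initial estimate (box 2350)
err0 = np.linalg.norm(gamma - gamma_true)
for r in range(200):                                      # iteration counter r (boxes 2360/2366)
    for j in range(n_view):                               # view loop (boxes 2352-2364)
        resid = g[j] - A[j] @ gamma                       # forward problem + data mismatch
        omega = 1.0 / np.linalg.norm(A[j], 2) ** 2        # multiplicative relaxation factor (box 2358)
        gamma = gamma + omega * (A[j].T @ resid)          # back-propagation update (boxes 2354-2358)
```

Each inner pass corresponds to one trip through boxes 2352-2362; each outer pass corresponds to incrementing r in box 2360 and resetting j to 1.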

    Detailed Description of Flow Chart for Maximization of Brightness Phase Aberration Correction FIGS. 24A/B/C/D/E

    [1176] First, flow is transferred down arrow 2400 from the calling program. Then in box 2402 the time increment δt for the gradient calculation, the line-search time step Δt, N_s (the predetermined maximum number of line-search steps), and L (the predetermined total number of CG iterations to be performed) are read in. Control is then transferred over to box 2404, where the precalculated beamformer delays based upon an assumed speed of sound in the tissue are loaded into the beamformer hardware. Then control is transferred to box 2405, where the b-scan image is acquired. Then control is transferred to box 2406, where the Region of Interest (ROI) is selected by the user. The element range used to form the image in the ROI, m_1 to m_2, is determined for this ROI. Then control is moved to box 2408, where the type of iteration is chosen: itertype=SD corresponds to steepest descent, itertype=FR corresponds to Fletcher-Reeves, RP corresponds to Ribiere-Polak, and finally RPP corresponds to Ribiere-Polak with the Powell modification. Next control is transferred to box 2410, where the vector h with components h_m, for m=m_1, . . . , m_2, is set identically to 0; similarly, all of the components p_m of the vector p are set to 0.

    [1177] Next control is transferred to box 2412 of FIG. 24B, where the initial gradient magnitude, r_o, is set to 1, and the vector t (with components t_m = the time-delay perturbation for element m) is set to 0. Now control is transferred to box 2413, where the index l is set equal to zero, and the sum of squares of the image intensity in the ROI is computed and stored in b_o. Now control is transferred to decision box 2414, where it is determined if index l=0. If it is, then control is transferred to box 2415, b_o is stored in b_initial, and control is transferred down to box 2416. If l≠0, then control is transferred directly to box 2416, where δt (NOT Δt) is added to all beamformer delays which correspond to transducer element e_m. The new (perturbed) delay set is uploaded to the beamformer, and control goes to box 2417, where the new image is acquired from the b-scanner. Next control goes to box 2418, where the brightness in the ROI is computed and stored in b_m. Now control is transferred over to box 2419, where the gradient component g_m is approximated by a finite-difference formula. Then control is transferred to box 2420, where it is determined if m < m_2; if it is, then control goes to box 2421, where index m is increased by 1 and control goes back to box 2416. If not, then control is transferred down to box 2422, where the gradient of brightness is plotted. Next control is transferred to decision box 2424; if itertype=SD, then the vector p is set equal to the gradient g and control is transferred directly down to box 2446 via arrow 2440. If itertype is not SD, then transfer goes to box 2426, where the gradient magnitude r_1 is computed as the sum of squares of the components of the vector g, the gradient. Next control is transferred down to decision box 2428, where itertype is compared to FR. If they are equal, then control is transferred to box 2430, where the constant β is defined as the ratio r_1/r_o, and control is then transferred to box 2446 via arrow 2438. If they are not equal, then control goes to decision box 2432, where itertype is compared with RP.

    [1178] At decision box 2432 it is determined if itertype=RP. If it is true, then control is switched over to box 2434, where the constant β is calculated as shown. If itertype is not RP, then control is transferred down to box 2444, where it is determined if itertype=RPP. If it is, then control is transferred over to box 2442, where β is checked to make sure that it is non-negative; if it is not, then β is set to 0 and control is transferred over to box 2446. If β is, in fact, non-negative, then control is transferred over to box 2446, where the search direction p is updated as shown in box 2446. Also in box 2446 the vector g with components g_m, with m=m_1, . . . , m_2, is stored in vector h, and r_o is redefined to be equal to r_1; that is, the value r_1 is stored in r_o. Control is then transferred over to box 2448, where the constant M is calculated as the maximum of the absolute values of the components of direction p. Also in this box, n, the line-search index, is set to 1. Then control is transferred down to box 2450, where the vector with components f_m is determined. Then control is transferred down to box 2452, where the net delay set with element perturbation vector f is computed and uploaded to the beamformer hardware. The corresponding image is acquired from the b-scanner and the resultant brightness in the ROI, b_n, is computed (as the sum of squares mentioned before). Then control is transferred to box 2454, where it is determined if n is greater than or equal to N_s, the predetermined number of steps in the line search. If so, then control is transferred down to box 2458, where the normalized values b_n/b_initial are plotted. If not, then control is transferred over to box 2456, where the index n is increased by 1, and control is then transferred up to box 2450, where the perturbation vector f is recalculated with the new value of n.

    [1179] Once control has been passed down to box 2458 and the normalized b_n/b_initial, for n=1, 2, . . . , N_s, has been plotted, control is transferred to box 2460, where the n for which b_n is maximal is determined. Then control moves to box 2462, where the delays t_m (the components of vector t) are uploaded into the beamformer hardware. These delays are the ones which maximize the brightness of the image along the present search direction. Next control is transferred down to decision box 2466, where it is determined if l is strictly less than L, the predetermined number of iterations in the brightness-maximizing subroutine. If it is true, then control is transferred to box 2464, where the index l is increased by 1, and control is transferred up to box 2413, where the image is acquired for the given beamformer delays. Also, the index m is reset to equal m_1, in order that the gradient can be calculated once more.

    [1180] If the predetermined number of steps has been carried out, then control is transferred over to box 2468, where the image is acquired from the b-scanner using the time delays in t, and control reverts to the calling program, or else a stop is executed.
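    The gradient/line-search structure of FIGS. 24A-E can be illustrated with a toy brightness functional. Everything here is a hypothetical stand-in: a real system acquires a b-scan and sums squared ROI intensity, whereas roi_brightness below is a synthetic surrogate, and only the steepest-descent branch (itertype=SD) is sketched.

```python
# Toy sketch of brightness maximization: finite-difference gradient of ROI
# brightness w.r.t. per-element delay perturbations, then a discrete line
# search along the (normalized) gradient direction.
import numpy as np

rng = np.random.default_rng(3)
m_elems = 8
true_aberration = 0.3 * rng.standard_normal(m_elems)    # unknown phase screen (stand-in)

def roi_brightness(delays):
    # Hypothetical surrogate for "acquire b-scan, sum squared ROI intensity":
    # brightness peaks when the delays cancel the aberration.
    return float(np.sum(np.cos(delays - true_aberration)))

dt = 1e-4                                               # gradient increment (box 2402)
Dt = 0.05                                               # line-search step (box 2402)
t = np.zeros(m_elems)                                   # delay perturbations (box 2412)
for _ in range(20):                                     # outer iterations (index l, bound L)
    b0 = roi_brightness(t)
    g = np.array([(roi_brightness(t + dt * e) - b0) / dt
                  for e in np.eye(m_elems)])            # finite differences (boxes 2416-2421)
    p = g / max(np.abs(g).max(), 1e-12)                 # normalize by M = max |p_m| (box 2448)
    bn = [roi_brightness(t + n * Dt * p)
          for n in range(1, 11)]                        # line search (boxes 2450-2454)
    n_best = 1 + int(np.argmax(bn))                     # maximizing n (box 2460)
    t = t + n_best * Dt * p                             # upload maximizing delays (box 2462)
```

The loop mirrors the flow chart: perturb one element at a time to estimate the gradient, step along the normalized search direction, keep the step that maximizes ROI brightness, and repeat for L iterations.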

    [1181] FIG. 37 shows the method for solving for the object function in the scattered field domain as discussed in METHOD 1 directly below. It uses an iterative procedure applied to a square non-linear operator to first determine an equivalent scattered field, and then transform to the object function solution. Since the nonlinear system is square, it is possible to use BiSTAB (BiConjugate Gradients Stabilized) in place of generic Conjugate Gradients in the solution algorithm, which is a great advantage.

    [1182] FIG. 29 shows the method for solving for the object function iteratively in the object function domain. The method is discussed as METHOD 2 directly below. Again, since the system is square, it follows that BiSTAB is the proper algorithm to use.

    [1183] METHOD 1: iteration in the scattered field manifold

    [1184] Solving for γ by inverting E_s = F(γ), where F(γ) = D[γ](I − C[γ])^(−1)E_b, requires inverting from a range to a domain that are not the same size. Define E_s^Born = D[γ]E_b. It is now possible to solve for E_s^Born from the square system E_s = F(B(E_s^Born)). After finding E_s^Born, we find γ by the direct substitution γ = B(E_s^Born). This has the advantage of allowing the use of square-system solvers, such as biconjugate gradients (BCG) or stabilized biconjugate gradients (BiSTAB), which are faster and better conditioned than rectangular-system solvers, e.g., conjugate gradients (CG). The function B can be any one-to-one mapping such as the Born approximation or backpropagation. Thus, we will also investigate and develop the potentially faster nonlinear, square-system method described in FIG. 37.


    Method 2: Iteration in the Object Function Manifold

    [1186] Use the Born approximation as an example.

    [00606] E_s = D γ_Born E_b = [D E_b] γ_Born

    [1187] Then

    [00607] γ_Born = [D E_b]^(−1) E_s

    [1188] where [D E_b]^(−1) represents the Born reconstruction. Multiply both sides of

    [00608] E_s = D[γ](I − C[γ])^(−1) E_b

    [1189] by [D E_b]^(−1); that is, apply the Born reconstruction procedure to both sides.

    [1190] Now, solve the square nonlinear system

    [00609] γ_Born = [D E_b]^(−1) D[γ](I − C[γ])^(−1) E_b ,

    [1191] for γ, in the γ domain, by means of BiSTAB. The Jacobian will be square. There may be less work than with Method 1.

    [1192] Actually, this will involve approximately the same workload as Method 1 at each iteration. This is because in Method 2 each iteration requires the application of the Born reconstruction procedure, whereas in Method 1 we are also required to calculate the Born approximation to the scattered field at each iteration, and then finish off with one application of the Born reconstruction procedure in order to get γ.

    Method 3: Use Previous Guess γ^(n−1)

    [1193] To understand the third method and put all three methods into a consistent framework, consider the nonlinear operator Φ, which maps object functions to scattered fields:

    [00610] Φ(γ) = D[γ](I − C[γ])^(−1) E_b

    [1194] In terms of this operator, the equation we wish to solve for γ is

    [00611] Φ(γ) = E_m

    [1195] where E_m is the measured field (as above). The standard approach is to solve the least-squares minimization problem

    [00612] min_γ ‖E_m − Φ(γ)‖² ,

    as discussed above. This involves the use of conjugate gradient methods. However, if we now define the Born approximation operator, B

    [00613] B : E_m ↦ γ^B

    [1196] which maps a measured field to the corresponding Born approximation, we can apply B to both sides of Φ(γ) = E_m, to yield:

    [00614] BΦ(γ) = γ^B

    [1197] This provides a nonlinear operator BΦ acting on M_γ ≡ {manifold of all possible γ's}. This operator also has as its range this same M_γ. As with any nonlinear operator, we will solve it iteratively. Each iteration involves the solution of the linear system of equations:

    [00615] [Jac_BΦ](δγ) = −(BΦ(γ^(n)) − γ^B)

    [1198] where the Jacobian Jac_BΦ satisfies

    [00616] Jac_BΦ = B·Jac_Φ ,

    [1199] by virtue of the linearity of B.

    [1200] Method 2) above proposes the use of BiSTAB to solve the system [Jac_BΦ](δγ) = −(BΦ(γ^(n)) − γ^B). Then adjust via γ^(n+1) ← γ^(n) + δγ^(n).

    [1201] Similarly, Method 1) above defines a different operator, F(B(E_s)), sets F(B(E_s)) = E_m, and solves this system for E_s, which is then used to give γ via γ = B(E_s).

    [1202] The advantage of having a square operator (the domain and range of the operator are of equal dimension) is that the BiSTAB method can be used to solve the linear equation for the update δγ.

    [1203] Within this context, Method 3) can be explained as follows:

    [1204] Suppose we have (as before):

    [00617] Φ(γ) = D[γ](I − C[γ])^(−1) E_b

    [1205] The (nonlinear) system we are required to solve for an estimate of γ is

    [00618] Φ(γ) = E_m

    [1206] which in traditional methods involves some kind of minimization:

    [00619] min_γ ‖E_m − Φ(γ)‖²

    using rectangular CG.

    [1207] In Methods 1) and 2) we made particular use of the Born reconstruction operator:

    [00620] B : E_m ↦ γ^B

    [1208] to ameliorate the difficulties associated with ill-conditioning of the rectangular system by enabling use of BiCG. Now, however, imagine that we can define some nonlinear operator:

    [00621] Φ̂ : E_m ↦ M_γ

    [1209] As yet, this operator is undefined, but a particular example might be the Born reconstruction operator. With this more general, and as yet undetermined, operator Φ̂, the equation BΦ(γ) = γ^B becomes Φ̂Φ(γ) = γ̂. Now suppose that Φ̂ can vary with the iteration number: Φ̂^(K)Φ(γ) = γ̂.

    [1210] This provides a nonlinear operator Φ̂Φ acting on M_γ ≡ {manifold of all possible γ's}. This operator also has as its range this same M_γ. As with any nonlinear operator, we will solve it iteratively. Each iteration involves the solution of the linear system of equations:

    [00622] [Jac_Φ̂Φ](δγ) = −(Φ̂Φ(γ^(n)) − γ̂)

    [1211] where γ̂ ≡ Φ̂(E_m). Using γ̂^(n) ≡ Φ̂^(n)(E_m) = γ^(n−1) as the definition of Φ̂^(n) means that each iteration solves (using BiSTAB) the system [Jac_Φ̂Φ](δγ) = −(Φ̂Φ(γ^(n)) − γ^(n−1)). Then adjust γ^(n+1) ← γ^(n) + δγ^(n).

    [1212] Method 4) uses very similar ideas, but instead of solving Φ̂^(K)Φ(γ) = γ̂, it solves ΦΦ̂(E_s) = E_m iteratively for E_s, and hence γ = Φ̂(E_s). Both of these methods, 3) and 4), are, in actual fact, finding fixed points of square (nonlinear) maps:

    Method 3) Schematic

    Solve:

    [00623] Φ̂Φ(γ) = γ , subject to Φ̂(E_m) = γ

    [1213] Determine Φ̂^j → Φ̂ iteratively, instead of using the Born reconstruction operator.

    Method 4) Schematic

    Solve:

    [1214] ΦΦ̂(E_m) = E_m , then determine γ = Φ̂(E_m)

    [1215] Again, Φ̂^j → Φ̂ is determined iteratively, in a manner similar to that described for Method 3).

    [1216] In one implementation using imaging algorithms as described above, generating, by a processor, a preliminary reconstruction image and/or reconstruction image comprises using a processing unit programmed to process data derived from incident wavefield energy that have been transmitted at one or more frequencies from one or more transmitter positions, each said transmitter position propagating wavefield energy at at least one transmitter orientation defined by Euler angles with respect to a selected fixed coordinate system, wherein said propagated wavefield energy is scattered by matter within a region and is detected by one or more receivers at one or more receiver positions and receiver orientations thereof, said receiver orientations defined by Euler angles with respect to said selected fixed coordinate system, by, at least: (a) choosing a forward scattering model which generates a total wavefield energy given said incident wavefield energy and one or more physical characteristics of said matter defined at selected points within said region; (b) propagating said incident wavefield energy toward said region from said one or more transmitter positions and transmitter orientations thereof, said incident wavefield energy having one or more frequencies; (c) detecting detected wavefield energy at each of said one or more receiver positions and respective receiver orientations thereof; (d) electronically processing said detected wavefield energy so as to transform said detected wavefield energy into detected scattered wavefield energy that is stored in a computer memory and that corresponds to said detected wavefield energy; (e) selecting a region characteristics estimate of selected physical characteristics at said selected points within the region and storing said region characteristics estimate in said computer memory; (f) said processing unit calculating a fixed target characteristics estimate from the detected wavefield energy, said fixed target 
characteristics estimate being defined such that when operated on by a selected fixed approximation of the forward scattering model, the detected wavefield energy results; (g) said processing unit performing a convergence step comprising the following steps: (1) preparing, for each of said one or more frequencies at each said transmitter position and respective transmitter orientations thereof, an estimate of said total wavefield energy at said selected points derived from said incident wavefield energy for said selected points stored in the computer memory and said region characteristics estimate for said selected points; (2) deriving, for each of said one or more frequencies at each said transmitter position and respective transmitter orientations thereof, a calculated scattered wavefield energy for one or more of said receiver positions and respective receiver orientations thereof from at least one of: (i) said region characteristics estimate at said selected points, and (ii) said estimate of said total wavefield energy at said selected points; (3) deriving from said calculated scattered field one or more variable region characteristics by solving a system whose right hand side is said calculated scattered wavefield energy, and whose left hand side is the result of applying said fixed approximation of the scattering model to said one or more variable region characteristics; (4) comparing said one or more variable region characteristics to said fixed target characteristics estimate to derive a comparator; (5) when said comparator is greater than a selected tolerance, said processing unit computing and storing in said computer memory an updated region characteristics estimate from: (i) said estimate of said internal field; (ii) said fixed target characteristics estimate; (iii) said one or more variable region characteristics; and (iv) the square Jacobian of the one or more variable region characteristics with respect to the region characteristics estimate, said 
square Jacobian utilization being implemented via a method of the family of gradient methods specifically designed for square systems, and then setting said region characteristics estimate equal to said updated region characteristics estimate; (h) repeating said processing unit convergence step until said comparator is less than or equal to said selected tolerance, said processing unit thereafter storing said updated region characteristics estimate in said computer memory.

    [1217] FIG. 53 shows an example computing system through which the described methods may be carried out. In some implementations, the computing system may be embodied, at least in part, as a viewing station and/or picture archival and communication system (PACS). PACS refers to the software tools and environment providing storage, retrieval, management, distribution, and presentation of medical images. PACS software provides an access point across multiple sites and platforms to the imaging (and imaging-related) data stored on one or more storage systems managed by the PACS software. The PACS' storage systems contain archives for storage and retrieval of images (using DICOM or other formats), documentation, and reports, which are generally accessed via secure networks and may reside on local hardware or in the cloud (for example, AWS, Azure, or another hosting platform, but not limited to specific platforms). One or more secure networks (e.g., intranet, local area network, wide area network, etc.) are used to communicate between various devices and computing systems and the cloud. The PACS generally includes and/or communicates with one or more imaging systems, such as ultrasound systems, magnetic resonance imaging (MRI) systems, and computed tomography (CT) scan equipment, and one or more computing devices (including workstations and portable computing devices).

    [1218] Referring to FIG. 53, the computing system 5300 can include a processor 5310 and a storage system 5320 in which a program 5330 may be stored. Examples of processor 5310 include general purpose central processing units, application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof. The processor 5310 processes data according to instructions of program 5330.

    [1219] Storage system 5320 includes any computer readable storage media readable by the processor 5310 and capable of storing software, including program 5330. Storage system 5320 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of storage media include random access memory (RAM), read only memory (ROM), magnetic disks, optical disks, CDs, DVDs, flash memory, solid state memory, phase change memory, or any other suitable storage media. Certain implementations may involve either or both virtual memory and non-virtual memory. In no case do storage media consist of a propagated signal or carrier wave. In addition to storage media, in some implementations, storage system 5320 may also include communication media over which software may be communicated internally or externally.

    [1220] Storage system 5320 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system 5320 may include additional elements, such as a controller, capable of communicating with processor 5310.

    [1221] Program 5330 includes the instructions for performing adaptive imaging such as described herein, including for a reconstruction manager 310 and the various reconstruction algorithms described herein.

    [1222] A database 5340 storing speed of sound, reflection, and other imaging data (i.e., images of speed of sound, attenuation, and/or reflection) from an imaging system can be coupled to the system via wired or wireless connections. In some cases, the database 5340 can be part of a PACS with which system 5300 communicates. In some cases, database 5340 is part of storage system 5320. Database 5340 and/or storage system 5320 can also store reconstruction configurations such as described herein (e.g., configuration directory 370).

    [1224] Visual output can be provided via a display 5350. Input/Output (I/O) devices (not shown) such as a keyboard, mouse, network card or other I/O device may also be included. It should be understood that any computing device implementing the described system may have additional features or functionality and is not limited to the configurations described herein.

    [1225] Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.