SYSTEM AND METHOD FOR NEXT-GENERATION MRI SPINE EVALUATION
20170273593 · 2017-09-28
Inventors
- Atilla Peter Kiraly (Plainsboro, NJ, US)
- David Liu (Richardson, TX, US)
- Shaohua Kevin Zhou (Plainsboro, NJ, US)
- Dorin Comaniciu (Princeton Junction, NJ, US)
- Gunnar Krüger (Watertown-Boston, MA, US)
CPC classification
A61B2576/02
HUMAN NECESSITIES
A61B5/055
HUMAN NECESSITIES
G06T19/20
PHYSICS
G16H50/20
PHYSICS
A61B5/7264
HUMAN NECESSITIES
International classification
Abstract
A method of visualizing spinal nerves includes receiving a 3D image volume depicting a spinal cord and a plurality of spinal nerves. For each spinal nerve, a 2D spinal nerve image is generated by defining a surface within the 3D volume comprising the spinal nerve. The surface is curved such that it passes through the spinal cord while encompassing the spinal nerve. Then, the 2D spinal nerve images are generated based on voxels on the surface included in the 3D volume. A visualization of the 2D spinal nerve images is presented in a graphical user interface that allows the 2D spinal nerve images to be viewed simultaneously.
Claims
1. A method of visualizing spinal nerves, the method comprising: receiving a 3D image volume depicting a spinal cord and a plurality of spinal nerves; for each spinal nerve, generating a 2D spinal nerve image by: defining a surface within the 3D volume comprising the spinal nerve, wherein the surface is curved such that it passes through the spinal cord while encompassing the spinal nerve, and generating the 2D spinal nerve image based on voxels on the surface included in the 3D volume; presenting a visualization of the 2D spinal nerve images in a graphical user interface that allows the 2D spinal nerve images to be viewed simultaneously.
2. The method of claim 1, wherein the visualization presents the 2D spinal nerve images stacked according to their location along the spine.
3. The method of claim 1, further comprising: identifying a spinal cord landmark and a plurality of spinal nerve landmarks within the 3D image volume, wherein the surface is defined within the 3D volume as a curved surface comprising the spinal cord landmark and the plurality of spinal nerve landmarks.
4. The method of claim 3, further comprising: receiving image data acquired using a scout scan of the spinal cord and the plurality of spinal nerves; determining the orientation of the 3D image volume based on the image data.
5. The method of claim 1, further comprising: determining a spinal cord centerline based on the 3D volume; and identifying a location on a transverse process within the 3D volume, wherein the surface is defined by rotating the location on the transverse process about a pivot to arrive at an approximate location associated with the spinal nerve.
6. The method of claim 5, wherein the pivot is defined based on a vertebral body included in the 3D image volume.
7. The method of claim 1, further comprising: receiving one or more clinical findings input by a user via the graphical user interface; and using the one or more clinical findings as input into a clinical decision support reasoning engine to determine one or more treatment recommendations.
8. The method of claim 7, further comprising: applying a deep learning network to locate one or more degenerative spinal elements in the 3D image volume, wherein the one or more degenerative spinal elements are additionally used as input into the clinical decision support reasoning engine to determine the one or more treatment recommendations.
9. The method of claim 8, wherein the degenerative spinal elements comprise one or more of discs, spine, and bone marrow.
10. The method of claim 8, wherein the deep learning network comprises 3 convolutional layers and 3 fully connected layers trained to label volumetric regions as containing normal or degenerative spinal elements.
11. A computer-implemented method for generating inputs for clinical decision support engine based on unfolded spinal image data, the method comprising: receiving a 3D image volume depicting a spinal cord, a plurality of vertebra, and a plurality of spinal nerves; unfolding the plurality of spinal nerves depicted in the 3D image volume to yield a plurality of unfolded spinal nerves; generating a plurality of unfolded spinal nerve images depicting the plurality of unfolded spinal nerves; unfolding the plurality of vertebra depicted in the 3D image volume to yield a plurality of unfolded vertebra; generating a plurality of unfolded vertebra images depicting the plurality of unfolded vertebra; determining one or more degenerative spinal elements based on the plurality of unfolded spinal nerve images and the plurality of unfolded vertebra images.
12. The method of claim 11, wherein the one or more degenerative spinal elements are determined based on input received from a clinician in response to presentation of the plurality of unfolded spinal nerve images and the plurality of unfolded vertebra images on a display.
13. The method of claim 11, wherein the one or more degenerative spinal elements are determined automatically by a machine learning network using the plurality of unfolded spinal nerve images and the plurality of unfolded vertebra images as input.
14. The method of claim 11, further comprising: documenting the one or more degenerative spinal elements in a clinical decision support reasoning engine.
15. The method of claim 11, wherein the plurality of spinal nerves depicted in the 3D image volume is unfolded by: defining a surface within the 3D volume comprising the spinal nerve, wherein the surface is curved such that it passes through the spinal cord while encompassing the spinal nerve; and identifying voxels on the surface included in the 3D volume.
16. The method of claim 11, wherein the plurality of spinal nerves depicted in the 3D image volume is unfolded by: segmenting the plurality of spinal nerves depicted in the 3D image volume to yield a plurality of segmented spinal nerves; for each of the plurality of segmented spinal nerves, identifying a centerline of the segmented spinal nerve; and unfolding each of the plurality of segmented spinal nerves based on its centerline.
17. The method of claim 16, wherein each unfolded spinal nerve image is generated by selecting voxels from the 3D image volume on each side of the centerline.
18. A computer-implemented method of viewing spinal imaging data, the method comprising: identifying a plurality of landmarks in the 3D image volume, wherein the landmarks comprise a spinal cord landmark, and one or more vertebral body landmarks; selecting a left sagittal slice from the 3D image volume, wherein the left sagittal slice comprises the vertebral body landmarks on a left lateral side of the spinal cord landmark; selecting a right sagittal slice from the 3D image volume, wherein the right sagittal slice comprises the vertebral body landmarks on a right lateral side of the spinal cord landmark; presenting a visualization in a graphical user interface that displays the left sagittal slice and the right sagittal slice side-by-side.
19. The method of claim 18, further comprising: receiving an update request via the graphical user interface, wherein the update request is a request to move the left sagittal slice and the right sagittal slice in a distal or proximal direction with respect to the spinal cord landmark; in response to the update request, selecting a new left sagittal slice and a new right sagittal slice from the 3D image volume; and presenting a new visualization in the graphical user interface that displays the new left sagittal slice and the new right sagittal slice side-by-side.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there are shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
DETAILED DESCRIPTION
[0023] The following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for evaluating Magnetic Resonance Imaging (MRI) images of the spine by automating the scanning, reformation, and examination processes employed during evaluation. Current diagnosis and reporting methods can be improved in both accuracy and speed by systematically addressing bottlenecks and weaknesses in the workflow. However, any changes should not greatly disrupt the radiologists' current workflow, as such disruption has been shown to reduce their accuracy. The techniques described herein employ a complete series of methods that address these issues and assist clinical decision support (CDS) tools and workflows without interfering with the current workflow.
[0024] Briefly, the methods described herein comprise three features: scanning automation, reformation automation, and examination automation. Scanning automation is provided by assisting clinicians and technicians in determining the orientation of subsequent scans based on a scout scan. Additional scanning automation techniques generally known in the art may also be incorporated into the techniques described herein. Reformation automation is provided in four respects. First, starting with spine parsing of the volumetric T1 or T2 image for structure (see
[0025] The examination automation techniques described herein include six features. First, abnormalities are automatically detected using deep learning methods. Second, the portion of the examination the radiologist is currently performing is determined, for example, based on previous preferences set by the radiologist, application inputs such as window/level and slice, and/or eye-tracking methods that determine which region is under examination. Third, the findings presented to the radiologist may be limited to the current task at hand or to findings relevant to the current findings being input by the radiologist. Fourth, regions with large appearance changes between different time points may be highlighted. Fifth, a patch view display (see, e.g.,
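The patch view concept can be illustrated with a short sketch. Assuming a 2D image stored as a list of rows and candidate abnormality locations already supplied (for example, by the deep learning detector mentioned above, which is not implemented here), a hypothetical helper extracts fixed-size, zero-padded patches suitable for side-by-side display:

```python
def extract_patch(image, center, size):
    """Extract a square patch of side `size` centered at `center` (x, y).

    `image` is a 2D list of rows. Out-of-bounds pixels are zero-padded
    so every patch has the same shape, as a patch view display expects.
    The function name and conventions are illustrative, not from the
    patent itself.
    """
    half = size // 2
    cx, cy = center
    ny, nx = len(image), len(image[0])
    patch = []
    for y in range(cy - half, cy - half + size):
        row = []
        for x in range(cx - half, cx - half + size):
            row.append(image[y][x] if 0 <= x < nx and 0 <= y < ny else 0)
        patch.append(row)
    return patch
```

In practice each detected abnormality location would yield one such patch, and the patches would be tiled in the graphical user interface.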
[0027] The definition of this curved surface 105 shown in
[0028] Alternatively, the spinal cord centerline can be located along with the transverse process (shown by markers 120 and 130). In general, any technique known in the art may be used for locating the spinal cord centerline and the transverse process. For example, in some embodiments, the centerline is determined using a fully automatic spinal cord tracing-based method as described in U.S. Pat. No. 8,423,124 to Kiraly et al., issued Apr. 16, 2013, entitled “Method and system for spine visualization in 3D medical images,” the entirety of which is incorporated herein by reference. Once the centerline is determined, the plane can then be defined by rotating the location about a pivot to arrive at the approximate nerve location (as shown by the dotted lines in
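The pivot-based construction described above can be sketched with a small in-plane example. Assuming axial (x, y) coordinates, a hypothetical helper rotates the transverse-process location about a pivot point (e.g., one derived from the vertebral body, per claim 6) to approximate the spinal nerve location; the specific coordinates and angle below are illustrative:

```python
import math

def rotate_about_pivot(point, pivot, angle_deg):
    """Rotate a 2D point about a pivot by the given angle (degrees).

    In this sketch, `point` is an in-plane location on the transverse
    process and `pivot` is a point derived from the vertebral body;
    the rotated result approximates the spinal nerve location.
    """
    theta = math.radians(angle_deg)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (
        pivot[0] + dx * math.cos(theta) - dy * math.sin(theta),
        pivot[1] + dx * math.sin(theta) + dy * math.cos(theta),
    )

# Rotating (10, 0) about the origin by 90 degrees lands near (0, 10).
nerve_approx = rotate_about_pivot((10.0, 0.0), (0.0, 0.0), 90.0)
```

Sweeping the angle through a range of values would trace out the family of candidate planes suggested by the dotted lines in the figure.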
[0030] Next, at steps 145 and 150, a 2D image is generated for each spinal nerve. These images are referred to herein as “2D spinal nerve images.” At step 145, a surface is defined within the 3D volume as described above with reference to
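Step 150, generating a 2D spinal nerve image from voxels on the defined surface, can be sketched as a resampling operation. Assuming the surface has been discretized into a 2D grid of (x, y, z) coordinates, a hypothetical helper performs nearest-neighbor lookups into the volume (the indexing convention here is an assumption):

```python
def sample_surface(volume, surface_points):
    """Nearest-neighbor sample a 3D volume on a 2D grid of surface points.

    `volume` is indexed as volume[z][y][x]; `surface_points` is a grid
    (list of rows) of (x, y, z) coordinates lying on the curved surface.
    Returns the 2D spinal nerve image as a list of rows; points falling
    outside the volume are set to 0.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    image = []
    for row in surface_points:
        image_row = []
        for (x, y, z) in row:
            xi, yi, zi = round(x), round(y), round(z)
            if 0 <= xi < nx and 0 <= yi < ny and 0 <= zi < nz:
                image_row.append(volume[zi][yi][xi])
            else:
                image_row.append(0)
        image.append(image_row)
    return image
```

A production implementation would more likely use trilinear interpolation rather than nearest-neighbor lookup, but the flattening of a curved surface into a 2D image is the same.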
[0031] Continuing with reference to
[0032] As an alternative to the techniques described above with reference to
[0033] Unfolding of spinal bones may be performed by adopting techniques previously applied to rib unfolding techniques. Examples of such techniques are described in U.S. Patent Application Publication No. 2013/0070996, “Method and System for Up-Vector Detection for Ribs in Computed Tomography Volumes”, D. Liu et al., published on Mar. 21, 2013 and U.S. Pat. No. 7,627,159, “2D Visualization for Rib Analysis”, A. Kiraly et al., issued on Dec. 1, 2009, each of which is hereby incorporated by reference for all purposes. These techniques allow for the unfolding of the 3D rib cage image into a 2D image to improve examination time and reduce ambiguity in interpreting the CT data of the ribs.
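The centerline-based unfolding recited in the claims (segmenting each structure, finding its centerline, and selecting voxels on each side of the centerline), which also underlies the rib unfolding techniques cited above, can be sketched in a simplified 2D form. The function and its conventions are illustrative assumptions, not the cited implementations:

```python
import math

def unfold_along_centerline(image, centerline, half_width):
    """Unfold a structure by sampling on each side of its centerline.

    `image` is a 2D list of rows indexed [y][x]; `centerline` is a list
    of (x, y) points. For each centerline point, voxels are gathered
    along the local perpendicular direction, producing one row of the
    unfolded image per centerline point. Out-of-bounds samples are 0.
    """
    unfolded = []
    ny, nx = len(image), len(image[0])
    for i, (cx, cy) in enumerate(centerline):
        # Local tangent estimated from neighboring centerline points.
        px, py = centerline[max(i - 1, 0)]
        qx, qy = centerline[min(i + 1, len(centerline) - 1)]
        tx, ty = qx - px, qy - py
        norm = math.hypot(tx, ty) or 1.0
        # Unit perpendicular (normal) to the centerline.
        ux, uy = -ty / norm, tx / norm
        row = []
        for offset in range(-half_width, half_width + 1):
            x = round(cx + offset * ux)
            y = round(cy + offset * uy)
            row.append(image[y][x] if 0 <= x < nx and 0 <= y < ny else 0)
        unfolded.append(row)
    return unfolded
```

The 3D case differs only in that each centerline point contributes a 2D cross-section rather than a 1D row, yielding the straightened ("unfolded") rendering described in the cited references.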
[0035]
[0036] The approaches shown in
[0037]
[0038] The CDS Reasoning Engine 430 combines the image information with Population Statistics/Guidelines 435 to make one or more Treatment Recommendations 440. As is generally known in the art, a CDS system assists clinicians in analyzing and diagnosing patient data at the point of care. In some embodiments, the CDS Reasoning Engine 430 comprises a knowledge base of if-then rules that allow inferences to be made based on the various data inputs 410, 415, 420, and 425. In other embodiments, the CDS Reasoning Engine 430 uses machine learning to find patterns in clinical data. Examples of machine learning models that may be used by the CDS Reasoning Engine 430 include support vector machines and artificial neural networks. Although the machine learning models may provide a diagnosis in an automated manner, a clinician may interface with the CDS Reasoning Engine 430 following model processing to verify or correct predicted treatment recommendations. Aside from decision support, the various data inputs 410, 415, 420, and 425 may also be used by the CDS to support clinical coding and documentation. For example, based on the Image Analytics 410 input, a degenerative disc can be identified, coded, and documented automatically for reference to an insurance agency.
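The if-then rule variant of such a reasoning engine can be sketched minimally. The findings, rule contents, and thresholds below are illustrative placeholders (not clinical guidance and not the patent's actual knowledge base):

```python
def recommend(findings, rules):
    """Minimal if-then CDS sketch: apply rules to a findings dict.

    `findings` maps finding names to values (e.g., image analytics
    output, patient history); each rule pairs a predicate over the
    findings with a treatment recommendation. All matching rules fire.
    """
    return [rec for predicate, rec in rules if predicate(findings)]

# Hypothetical example rules keyed on an illustrative finding name.
rules = [
    (lambda f: f.get("disc_degeneration_grade", 0) >= 3,
     "refer for surgical consultation"),
    (lambda f: 0 < f.get("disc_degeneration_grade", 0) < 3,
     "conservative therapy and follow-up imaging"),
]

recs = recommend({"disc_degeneration_grade": 2}, rules)
```

A clinician reviewing the engine's output would then verify or override the fired recommendations, matching the verification step described above.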
[0040] Parallel portions of a deep learning application may be executed on the architecture 500 as “device kernels” or simply “kernels.” A kernel comprises parameterized code configured to perform a particular function. The parallel computing platform is configured to execute these kernels in an optimal manner across the architecture 500 based on parameters, settings, and other selections provided by the user. Additionally, in some embodiments, the parallel computing platform may include additional functionality to allow for automatic processing of kernels in an optimal manner with minimal input provided by the user.
[0041] The processing required for each kernel is performed by a grid of thread blocks (described in greater detail below). Using concurrent kernel execution, streams, and synchronization with lightweight events, the architecture 500 of
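The relationship between work items, thread blocks, and the grid follows the standard CUDA-style launch arithmetic, which a short sketch makes concrete (the helper name and default block size are illustrative assumptions):

```python
def launch_config(num_elements, threads_per_block=256):
    """Compute a 1D kernel launch configuration.

    Mirrors the common pattern of covering `num_elements` work items
    with a grid of thread blocks: the grid size is the ceiling of
    num_elements / threads_per_block, so the last block may be only
    partially full (kernels therefore guard against out-of-range
    indices).
    """
    blocks = (num_elements + threads_per_block - 1) // threads_per_block
    return blocks, threads_per_block

# 1000 elements with 256-thread blocks require a grid of 4 blocks.
grid, block = launch_config(1000)
```

Each thread then derives its global index from its block index and thread index, which is what lets the platform schedule blocks across the device's processing units independently.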
[0042] The device 510 includes one or more thread blocks 530 which represent the computation unit of the device 510. The term thread block refers to a group of threads that can cooperate via shared memory and synchronize their execution to coordinate memory accesses. For example, in
[0043] Continuing with reference to
[0044] Each thread can have one or more levels of memory access. For example, in the architecture 500 of
[0045] The embodiments of the present disclosure may be implemented with any combination of hardware and software. For example, aside from the parallel processing architecture presented in
[0046] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
[0047] An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
[0048] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
[0049] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.
[0050] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for.”