SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR IMPROVED MINI-SURGERY USE CASES
20220387129 · 2022-12-08
Inventors
CPC classification
A61B2017/00221
HUMAN NECESSITIES
A61B34/20
HUMAN NECESSITIES
A61B6/5247
HUMAN NECESSITIES
A61B90/37
HUMAN NECESSITIES
H04N13/254
ELECTRICITY
G01B11/2545
PHYSICS
H04N13/239
ELECTRICITY
H04N2013/0081
ELECTRICITY
A61B8/4245
HUMAN NECESSITIES
A61B90/39
HUMAN NECESSITIES
International classification
A61B90/00
HUMAN NECESSITIES
A61B34/20
HUMAN NECESSITIES
Abstract
An imaging system aka 3d camera operative in conjunction with a tube having two open ends, the system comprising active portions small enough to fit into the tube and an electronic subsystem including a hardware processor operative to receive image/s from the active portions and to generate therefrom at least one 3D image of a scene visible via one of the tube's open ends. The system may comprise a tracker configured to be secured to the tube, and a method for monitoring location, e.g. absolute location, of the tube, accordingly.
Claims
1.-30. (canceled)
31. An imaging system aka 3d camera operative in conjunction with a tube (e.g. retractor or trocar) having two open ends, the system comprising: active portions small enough to fit into the tube; and an electronic subsystem including a hardware processor operative to receive at least one image from said active portions and to generate therefrom at least one 3D image of a scene (aka miniature scene) visible via one of the tube's open ends (aka portion of a surgical field aka topology).
32. The system of claim 31 and also comprising a tracker configured to be secured to the tube, thereby to monitor an absolute location of the retractor.
33. The system of claim 31 and wherein said active portions comprise: at least one image sensor/s or cameras oriented to have a partially or totally overlapping field of view, and at least one structured light projector/s projecting a known pattern onto the field of view of the image sensor/s.
34. The system of claim 31 and wherein at least one dimension of said active portions is smaller than the tube's inner diameter.
35. The system of claim 31 which includes at least one component which is larger in size than the tube's inner diameter.
36. The system of claim 31 and wherein the imaging system includes at least one mechanical subsystem configured to secure the camera at a fixed location and orientation vs. markers that track the tube.
37. The system of claim 33 and wherein the hardware processor receives data from the at least one image sensor and generates said 3D image from said data.
38. The system of claim 37 and wherein at least one image sensor is deployed at an offset from at least one structured light projector and wherein the offset is known to the hardware processor and is used for triangulation which generates said 3D image from said data.
39. The system of claim 31 and wherein the hardware processor assigns absolute coordinates to the 3d image of the surgical field.
40. The system of claim 31 and wherein the hardware processor is also configured to monitor a tool which is tracked, hence its absolute coordinates are known, and is moving.
41. The system of claim 31 and wherein the hardware processor is configured to recognize a location of the scene within a larger topology e.g. a 3D representation of one or more vertebrae.
42. The system of claim 31 and wherein said tool is deployed inside the retractor.
43. The system of claim 31 and wherein said tool is deployed outside the retractor.
44. The system of claim 31 and also comprising at least one tracker attached to the tube, and wherein the hardware processor is configured to use data from the tracker to be presented to a human user, thereby enabling the human user to monitor a current position of the tube.
45. The system of claim 31 and wherein the hardware processor is configured to superimpose the 3D image of the miniature scene onto an earlier captured image of a larger scene which is larger than, and includes, the miniature scene, thereby to generate a superimposed image, and to display the superimposed image to a human user.
46. The system of claim 36 wherein the mechanical subsystem is larger in size than the tube's inner diameter.
47. A computer program product, comprising a non-transitory tangible computer readable medium having computer readable program code embodied therein, said computer readable program code adapted to be executed to implement an imaging method operative in conjunction with a tube having two open ends, the method comprising: receiving at least one image from active portions, of a 3d camera, which are small enough to fit into the tube and using a hardware processor to generate therefrom at least one 3D image of a scene.
48. The system of claim 38 wherein said at least one image sensors comprises two image sensors, and wherein said triangulation comprises stereo triangulation.
49. The system of claim 38 wherein said at least one projector comprises but a single projector, said at least one image sensor comprises but a single image sensor, and wherein pattern correlation and measurement of sub-pattern displacement are used for said triangulation or for depth estimation.
50. The system of claim 41 and wherein at least one pre-operative image, having a resolution, represents the larger pre-mapped topology, and wherein the pre-operative image comprises a CT image.
51. An imaging method operative in conjunction with a tube having two open ends, the method comprising: providing a 3d camera with active portions small enough to fit into the tube and using an electronic subsystem including a hardware processor operative to receive at least one image from said active portions and to generate therefrom at least one 3D image of a scene.
52. The method of claim 51 and wherein the tube bears fiducial markers and wherein the tube's location in space is known to said hardware processor due to said markers.
53. The method of claim 52 and wherein at least when an inferior edge of at least one lamina and/or ipsilateral base of spinous process are identified, the 3d camera is secured to a top end of the tube, and the inferior edge of the lamina, as viewed through the bottom end of the tube, is measured, thereby to yield a measured surface; and wherein at least one vertebra's 3D location is presented to a human user, thereby to facilitate performance of Tubular Laminotomy.
54. The method of claim 53 wherein said vertebra's 3D location is derived by matching the measured surface to a portion of a 3D image of at least a portion of the lamina and from the tube's known location in space.
55. The method of claim 52 wherein the camera is secured to a top end of the tube and measures an inferior articulating facet, as viewed through the bottom end of the tube, and wherein at least one vertebra's 3D location is presented to a human user, thereby to facilitate performance of MIS TLIF (Transforaminal Interbody Fusion).
56. The method of claim 55, said vertebra's 3D location being derived from a 3D image of the facet and from the tube's known location in space.
57. The system of claim 31 and also comprising at least one tool tracker and wherein the hardware processor is configured to use data from the tracker to be presented to a human user, thereby enabling the human user to monitor a current position of the tool.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0102] Example embodiments are illustrated in the various drawings.
[0105] Certain embodiments of the present invention are illustrated in the following drawings; in the block diagrams, arrows between modules may be implemented as APIs and any suitable technology may be used for interconnecting functional components or modules illustrated herein in a suitable sequence or order e.g. via a suitable API/Interface. For example, state of the art tools may be employed, such as but not limited to Apache Thrift and Avro which provide remote call support. Or, a standard communication protocol may be employed, such as but not limited to HTTP or MQTT, and may be combined with a standard data format, such as but not limited to JSON or XML.
[0106] Methods and systems included in the scope of the present invention may include any subset or all of the functional blocks shown in the specifically illustrated implementations by way of example, in any suitable order e.g. as shown. Flows may include all or any subset of the illustrated stages, suitably ordered e.g. as shown. Tables herein may include all or any subset of the fields and/or records and/or cells and/or rows and/or columns described.
[0107] Computational, functional or logical components described and illustrated herein can be implemented in various forms, for example, as hardware circuits such as but not limited to custom VLSI circuits or gate arrays or programmable hardware devices such as but not limited to FPGAs, or as software program code stored on at least one tangible or intangible computer readable medium and executable by at least one processor, or any suitable combination thereof. A specific functional component may be formed by one particular sequence of software code, or by a plurality of such, which collectively act or behave or act as described herein with reference to the functional component in question. For example, the component may be distributed over several code sequences such as but not limited to objects, procedures, functions, routines and programs, and may originate from several computer files which typically operate synergistically.
[0108] Each functionality or method herein may be implemented in software (e.g. for execution on suitable processing hardware such as a microprocessor or digital signal processor), firmware, hardware (using any conventional hardware technology such as Integrated Circuit technology) or any combination thereof.
[0109] Functionality or operations stipulated as being software-implemented may alternatively be wholly or fully implemented by an equivalent hardware or firmware module, and vice-versa. Firmware implementing functionality described herein, if provided, may be held in any suitable memory device and a suitable processing unit (aka processor) may be configured for executing firmware code. Alternatively, certain embodiments described herein may be implemented partly or exclusively in hardware, in which case all or any subset of the variables, parameters, and computations described herein may be in hardware.
[0110] Any module or functionality described herein may comprise a suitably configured hardware component or circuitry. Alternatively or in addition, modules or functionality described herein may be performed by a general purpose computer, or more generally by a suitable microprocessor, configured in accordance with methods shown and described herein, or any suitable subset, in any suitable order, of the stages included in such methods, or in accordance with methods known in the art.
[0111] Any logical functionality described herein may be implemented as a real time application, if and as appropriate, and which may employ any suitable architectural option, such as but not limited to FPGA, ASIC or DSP or any suitable combination thereof.
[0112] Any hardware component mentioned herein may in fact include either one or more hardware devices e.g. chips, which may be co-located or remote from one another.
[0113] Any method described herein is intended to include within the scope of the embodiments of the present invention also any software or computer program performing all or any subset of the method's stages, including a mobile application, platform or operating system e.g. as stored in a medium, as well as combining the computer program with a hardware device to perform all or any subset of the stages of the method, suitably ordered e.g. as described herein.
[0114] Data can be stored on one or more tangible or intangible computer readable media stored at one or more different locations, different network nodes or different storage devices at a single node or location.
[0115] It is appreciated that any computer data storage technology, including any type of storage or memory and any type of computer components and recording media that retain digital data used for computing for an interval of time, and any type of information retention technology, may be used to store the various data provided and employed herein. Suitable computer data storage or information retention apparatus may include apparatus which is primary, secondary, tertiary or off-line; which is of any type or level or amount or category of volatility, differentiation, mutability, accessibility, addressability, capacity, performance and energy use; and which is based on any suitable technologies such as semiconductor, magnetic, optical, paper and others.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
[0116] Conventional MISS reduces the surgeon's visual orientation by limiting the actual open areas to relatively small (about 25 mm diameter) openings, showing a limited view of the spine and internal structure. In procedures where the surgeon elects to use only a single incision—for insertion of two screws, rods, and cage, while placing the other screws percutaneously—the surgeon may require guidance for accurate placement of the implants. Often this means additional radiation for the patient and sometimes the surgeon, either by CT (full or partial) or fluoroscopy.
[0117] There is a growing need for a navigation system that allows accurate information flow or guidance and implant placement without added radiation to either staff or patient. Certain embodiments herein answer this need e.g. by optical and ultrasound scanning and/or tracking of spine vertebrae bone surface.
[0118] The MISS navigation system includes, according to certain embodiments, all or any subset of the following hardware components: [0119] 1. 3D camera aka 3DC that tracks the surgery tools, MISS retractor, and optional ultrasound transceiver [0120] 2. Surgery Tool tracking unit/s aka STT that each include an Inertial Measurement Unit (IMU), wireless communication module, indicator marks for user feedback, and fiducial/sphere markers for tool tracking [0121] 3. A computer that runs the surgery software and connects to both the 3D camera aka 3DC and tool tracking units by either wired or wireless communications [0122] 4. At least one miniature 3D (M3D) camera/s attached to retractor/s inside tubes [0123] 5. Tool tracking unit/s attached to MISS retractor tube/s for tracking by 3D camera aka 3DC [0124] 6. Optional ultrasound transceiver including an attached tool tracking unit secured thereto.
[0125] Retractor sets, as available e.g. from medikrebsusa.com, typically include several retractors of different diameters and/or different lengths (ranging, say, from 4 cm to 7 cm); these various lengths may be used selectably or selectively by a surgeon e.g. depending on the thickness of the fat layer that may separate vertebrae from the skin and outer world. Retractor lengths may vary, say, from 30 mm to 90 mm in, say, 10 mm intervals. External diameters may vary, say, from 12 mm to 26 mm in 2 mm intervals. For example, medfix.com markets a retractor set which includes 4 lengths (50, 60, 70, 80 mm)×3 diameters (18, 22, 26 mm)—a total of 12 tubes.
[0126] According to some embodiments, a group of cameras is provided, including one camera per retractor diameter, since a retractor set typically includes plural retractors having respective different diameters.
[0127] According to other embodiments, a single camera is provided, which is smaller than the smallest expected or common tube diameter (e.g., the single camera's active parts may have a dimension which is less than 12 mm if the narrowest retractor in a retractor set is known to have a tube whose external/internal diameters are 14 mm/12 mm respectively).
[0128] According to some embodiments, adapters are provided to allow a single camera to be used for tubes of several different lengths and/or of several different diameters, all larger than the dimension of the single camera's active parts. For example, the M3D camera described in Example 2 herein, is 12 mm in diameter. This fits well in a 14 mm external/12 mm internal diameter tube. However, for use with a 26 mm external/24 mm internal tube, for example, an adapter may be provided, such as a ring with an internal diameter of 12 mm and external diameter of 24 mm.
[0129] The system described in co-owned PCT patent application PCT/IL2019/050775, incorporated herein by reference, is an example of a system which provides the first three components above: 3D camera, tool tracking units, and computer. The additional M3D camera/s (4) and tool tracking units attached to MISS retractor tube/s (5) add the capability to ‘see’ bone structure through relatively narrow openings common in MISS, and to track MISS tubes during surgery.
[0130] Typically, each M3D camera secured to a tube, having range imaging functionality, provides accurate 3D measurement of a bone section exposed by the surgeon through the opening or via the tube.
[0131] Typically, the surgeon marks the exposed area on the pre-operational CT and the bone structure is registered to the bone structure derived from the CT. The output of the registration stage is a set of rigid transformations relating each vertebra in the pre-operational CT to the vertebra's location and orientation as measured by the 3DC and M3D cameras in the OR (operating room) setting.
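The registration stage described above outputs one rigid transformation (rotation plus translation) per vertebra. As an illustration only, the standard SVD-based (Kabsch) least-squares solution for corresponding 3D point pairs is sketched below; this is a generic well-known method, not necessarily the algorithm of the co-owned PCT application, and the function name and use of NumPy are the editor's assumptions.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P (Nx3, e.g. CT frame)
    onto corresponding points Q (Nx3, e.g. camera frame), i.e. Q ~ (R @ P.T).T + t.
    Standard SVD (Kabsch) solution with reflection correction."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # ensure a proper rotation (det = +1), not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Given matched bone-surface points from the pre-operational CT and from the M3D/3DC measurement, the returned (R, t) pair plays the role of the per-vertebra rigid transformation mentioned in the text.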
[0132] In the case of multiple incisions with multiple retractors, the registration over multiple areas of the same vertebra allows accurate registration of the vertebra position to the CT. However, some MISS procedures use a single incision per vertebra. In this case, one needs to ensure enough bone exposure, visible through the single incision, to allow good registration of the full vertebra. See the detailed description for an example of ‘enough’ bone surface.
[0133] Since the openings in MISS are small and the exposed bone area is limited, additional information that can be integrated with the M3D and 3DC cameras information may be used. The additional information may be obtained from existing navigation solutions such as markers attached to bone structures and/or intraoperative CT scan, fluoroscopy with markers, etc. In certain embodiments, the additional information is obtained from accurate tracking of an ultrasound probe location and angle with the 3D camera aka 3DC. Matching the bone surface structure in the ultrasound image with the bone surface derived from the preoperational imagery e.g. CT, provides an ‘internal’ view of the vertebrae positions that enhances the system's accuracy.
[0134] The M3D camera is configured and operative for use in MISS surgery including being secured onto the retractor and viewing vertebrae via the retractor opening. For example, the camera system (e.g. as shown in
[0136] The structured light projector 204 typically projects light towards bone surface 205, the projection forming a known pattern that illuminates bone surface 205. The pattern may be a random dot pattern, lines, binary coded images, or any of other patterns known in the art. The two cameras 202 and 203 determine the 3D shape of bone surface 205 by comparing the pixel coordinates of the detected illumination pattern between the two cameras. The co-owned PCT application describes an example method for 3D distance computation from pixel offsets.
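The stereo principle just described can be sketched numerically: for two parallel, calibrated cameras, the depth of a projected dot follows from its disparity (the pixel-coordinate offset of the dot between the two images) as z = f·b/disparity. The default parameter values below are those of Example I; the function name is the editor's, and this is a simplified sketch that ignores lens distortion and rectification.

```python
def stereo_depth_mm(disparity_px, focal_mm=1.83, baseline_mm=14.0, pixel_mm=0.003):
    """Depth (mm) of one projected dot from its total pixel disparity between
    two parallel cameras: z = f * b / (disparity_px * pixel_size)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_mm * baseline_mm / (disparity_px * pixel_mm)
```

For instance, with the Example I parameters a surface at 25 mm produces a total disparity of f·b/(z·pixel) ≈ 342 pixels.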
EXAMPLE I
[0137] Providing an off-the-shelf example for low-cost implementation, units A, B may each comprise an off-the-shelf miniature structured light projector e.g. AMS BELICE-850 dot pattern illuminator for 3D stereoscopic imaging (www.ams.com). The single projector's package size may be 3.4 mm×3.5 mm×3.56 mm. The projector may produce 5500 dots in a camera field of view (FOV) of 68°×48°. The configuration typically includes two adjacent projectors—type A and B—placed 4.0 mm apart, where type A and B produce a rectangular pattern of points at 5° and 15° inclination to the line connecting the projectors, respectively.
[0138] An example of a suitable off-the-shelf miniature device to implement video cameras 202, 203 (which may be deployed parallel to one another) is the Misumi MD-BL24M7 B/W camera: 640×480 pixels, f=1.83 mm lens, F#=5.0 (numerical aperture), 7 mm width, optional IR LED and IR pass filter (www.misumi.com.tw). The camera's minimal viewing angle is 56°×44°, so the example projector FOV of 68°×48° is slightly larger than the camera FOV, ensuring the full camera FOV is covered by the projected patterns. The camera PCB can be changed to allow integration of two cameras and two projector units on one PCB, where the two cameras are placed on both sides of the projector, e.g. as shown in
[0139] The baseline distance b between the cameras 202, 203 and the projection angle of the projector 204 may be specified to match, say, a required FOV and distance d from bone surface 205 while allowing insertion into the retractor tube 201. Taking a retractor tube inside diameter of w=20 mm, for example, and circular FOV of θ=44°, puts the M3D camera at a distance of d=cotan(θ/2)*w/2=24.8 mm from bone surface 205. The depth of field (DOF) of the cameras 202, 203 may be >5 mm to allow for varying distance to the features of bone surface 205.
[0140] Typically, DOF ≈ 2·d²·F#·c/f² (https://en.wikipedia.org/wiki/Depth_of_field), where d is the distance between cameras 202, 203 and bone surface 205, F# is the numerical aperture, c is the circle of confusion (typically the camera's pixel size), and f is the focal length. Taking the data above (d=25 mm, F#=5, c=0.003 mm (3 micron pixel), f=1.83 mm), we get DOF=5.6 mm.
[0141] The projector 204 may send, say, about 5000 points on a rectangular FOV, or about 60 points across for a circular FOV, giving spatial sampling with a resolution of 0.33 mm for a 20 mm diameter FOV. The depth resolution may depend on the offset between the images of the same projected dot between the two cameras 202, 203. For example, taking a baseline distance between image sensor centers b=14 mm, the offset between images of the same dot, for bone surface 205 distance d=25 mm and focal length f=1.83 mm with parallel cameras, may be Δ=0.51 mm. The geometry gives δd/d = δΔ/Δ, so changing the offset by two pixels (one for each camera), δΔ=0.006 mm, may yield δd=0.3 mm as the depth resolution of this example setup.
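The worked numbers of paragraphs [0139]-[0141] (working distance, depth of field, per-camera dot offset, and depth resolution) can be reproduced in a few lines; all parameter values are taken from the text, and the small differences from the quoted figures arise because the text rounds d to 25 mm.

```python
import math

w = 20.0      # retractor inner diameter, mm
theta = 44.0  # circular FOV, degrees
f = 1.83      # focal length, mm
F = 5.0       # f-number (numerical aperture)
c = 0.003     # circle of confusion ~ pixel size, mm
b = 14.0      # baseline between image sensor centers, mm

# working distance so the 44-degree FOV just covers the 20 mm opening
d = (w / 2) / math.tan(math.radians(theta / 2))  # ~24.8 mm

# depth of field: DOF ~ 2 d^2 F# c / f^2
dof = 2 * d**2 * F * c / f**2                    # ~5.5 mm (text: 5.6 at d=25)

# per-camera image offset of one dot: Delta = f*(b/2)/d
delta = f * (b / 2) / d                          # ~0.52 mm (text: 0.51 at d=25)

# depth sensitivity: delta_d/d = delta_Delta/Delta, with a 2-pixel change
d_delta = 2 * c                                  # 0.006 mm
depth_res = d * d_delta / delta                  # ~0.29 mm (text: 0.3)
```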
[0142] The parameters of the camera, projector, and their optics are given as examples only. Example I demonstrates that even off-the-shelf components can yield a setup satisfying the available space and restrictions of MISS surgery so as to provide a system that can give 3D information having the required accuracy over the full available FOV.
[0143] The projector and cameras may be characterized by all or any subset of the following: [0144] 1. Spatial resolution and depth resolution equal the required system resolution, typically 0.3 mm [0145] 2. Projector and camera FOVs at the exit of the tube are slightly larger than the internal diameter of the retractor tube [0146] 3. Camera DOF is larger than the expected variation in depth of the bone surface, typically >5 mm [0147] 4. Distance of the M3D camera from the bone surface is determined according to FOV, DOF, and resolution
[0148] In the “basic” configuration described in
[0149] It is, however, also possible to use optics configured to achieve higher resolution depth images e.g. all or any subset of the following: [0150] 1. Variable focus camera lens—typically allows higher resolution by requiring a smaller depth of field and adjusting the focal length of the lens to ‘scan’ the various distances from the object. An example of an off-the-shelf variable lens is the Corning Varioptics A16F lens (www.corning.com), which has a 20-diopter change of focus; the lens's small 6.2 mm diameter is typically compatible with embodiments herein. [0151] 2. Building endoscopic ‘optical wave guides’ that typically allow wider angular separation between the images of the two cameras and consequently a wider effective baseline distance between them. For example, the setup described in Example I includes baseline distance b=14 mm and object distance d=25 mm, giving an effective baseline angle (angle between rays collected by the two cameras) of beta = 2·atan(b/(2d)) ≈ 31°. Placing folding mirrors or prisms that collect light at roughly double this relative angle will provide about 2× better depth resolution. The wider baseline can be achieved either by placing the folding optics utilizing the full internal width of the retractor, by placing the M3D setup closer to the bone surface, or both. [0152] 3. Fiber optic design—using fiber bundles for structured light and/or image collection. The fiber bundles may be integrated into the retractor side walls and allow minimum interference to placing tools through the retractor while operating the M3D camera.
[0153] Options 1-3 described above—separately or in any combination—can improve the M3D camera performance at the expense of added complexity and/or cost. The M3D design is not limited to the examples above and can utilize zoom and/or telephoto lenses and/or flexible image guiding optics, or other camera options known in the art.
[0154] The sizes listed in Example 1 are example sizes which may be used in MISS retractors. However, the design is not limited to these sizes. Some MISS procedures use endoscopic tube entry ports—narrow tubes that are typically 8 mm in diameter and support use of endoscopic tools and vision systems. In this case, the M3D camera design may change to fit the smaller tube diameter. Reference 2 (https://www.sciencedirect.com/science/article/abs/pii/S1361841512000473?via%3Dihub) describes an endoscopic 3D scanner based on structured light, having a diameter of 3.6 mm. Other optical components (camera/s, scanner) and designs may be used to allow fitting the M3D camera in a more confined tube.
Example 2
[0155] Example 2, also based on off-the-shelf hardware, is a design with a customized laser projector that allows the M3D camera 20 to fit in smaller retractor tubes, such as a 14 mm external diameter/12 mm internal diameter retractor tube. Cameras 202 and 203 may each be, for example, a Misumi MD-B311M5-77 B/W miniature camera (4.9 mm diameter, 1920×1080 pixels, 1.4×1.4 µm pixel size, f=1.83 mm lens), with distance between camera centers b=5 mm (the two cameras adjacent to each other). Laser projector 204 may be a customized 6 mm diameter projector using a single-mode VCSEL light source (for example, II-VI APA8501010001, https://www.ii-vi.com/product/850nm-low-power-sm-chip/?sfw=pass1603358166), a focusing lens with f=6 mm and diameter 6 mm (for example, Edmund Optics 36-626, https://www.edmundoptics.com/p/6mm-dia-x-6mm-f1-small-diameter-plastic-aspheric-lens/34651/), and a random-dot diffractive-optics pattern generator (for example, Holoor MS-469-850-N-X, https://www.holoor.co.il/structured-light-doe/) diced to a 4 mm×4 mm square, so the diagonal is 5.6 mm.
[0156] The cross section of the M3D camera in this example is shown in
[0157] Both Example 1 and Example 2 are only examples of a miniature 3D camera which may be used in MISS operations according to certain embodiments herein.
[0158] The term “miniature” is used herein to include any camera whose active portions are small enough to fit into a surgical tube e.g. one or more dimensions of all or any subset of the two image sensors, projector, LED light, and their optics are typically smaller than the tube's inner diameter. The remaining portions of the camera e.g. electronics and/or mechanics may be larger in size than the tube's inner diameter. Typically, methods described in the co-owned PCT patent application are used to assign ‘global’ or ‘world’ or absolute (rather than merely relative) coordinates to the 3D image of the surgical field imaged by the miniature camera.
[0159] According to certain embodiments, the Surgery Tool Tracking unit or tracker is secured or attached to the MISS retractor tube. The M3D camera enables 3D view and registration of spine vertebrae bone features through the MISS retractor tube/s. To register the 3D structure measured by the M3D camera to the coordinate system of the 3D camera aka 3DC, the M3D camera location and orientation need to be measured by the 3D camera aka 3DC. For this purpose, each retractor tube has a tracker or Surgery Tool Tracking unit aka STT attached thereto. A detailed description of an example STT and example methods for use of an STT are provided in PCT/IL2019/050775. The 3D camera aka 3DC tracks the fiducials/markers on the STT or tracker, enabling transfer of the 3D coordinates from M3D to 3DC coordinates. Alternatively or in addition, the IMU that is part of each STT unit or tracker provides added information on the tube orientation, and can be used to track and alert tube motion during the surgery. Alternatively or in addition, the feedback LEDs on each STT or tracker may also be used to align the tube to a desired angle, in a similar way that the STT or tracker is used to align surgery tools to the planned angle, e.g. as described in PCT/IL2019/050775.
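Transferring M3D measurements into 3DC coordinates, as described above, amounts to composing two rigid transforms: the STT (tracker) pose as tracked by the 3DC, and the fixed mounting of the M3D camera relative to its tracker. A minimal homogeneous-coordinates sketch follows; the matrix names and the NumPy 4×4 representation are the editor's assumptions, not notation from the application.

```python
import numpy as np

def to_world(p_m3d, T_3dc_from_stt, T_stt_from_m3d):
    """Map a point measured in M3D-camera coordinates into the 3DC ('world')
    frame by composing two 4x4 homogeneous rigid transforms:
        T_3dc_from_m3d = T_3dc_from_stt @ T_stt_from_m3d
    T_3dc_from_stt -- STT (tracker) pose as measured by the 3DC
    T_stt_from_m3d -- fixed mounting of the M3D camera on the tracker
    """
    p_h = np.append(np.asarray(p_m3d, dtype=float), 1.0)  # homogeneous point
    return (T_3dc_from_stt @ T_stt_from_m3d @ p_h)[:3]
```

With an identity mounting calibration and a purely translational tracker pose, a point at the M3D origin simply lands at the tracker's translation in 3DC coordinates.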
[0162] System operation according to an embodiment of the invention is now described. The M3D camera integrates with the rest of the system components to allow registration of a specific bone feature on a single vertebra to the pre-operation CT. An example MISS sequence, aka flow1, may include all or any subset of the following stages, suitably ordered e.g. as follows: [0163] 1. Surgeon makes first incision and inserts MISS retractor into the incision. The retractor is locked in position, for example by the surgeon activating a mechanical lock mechanism (see https://www.aesculapimplantsystems.com/products/spine-solutions/thoracolumbar-solutions/spyder-mis-retractor-system) [0164] 2. Surgeon exposes and clears a section of bone including identifiable features through the retractor tube [0165] 3. Surgeon inserts M3D camera into retractor tube and locks the camera in position [0166] 4. Surgeon marks the exposed bone feature on pre-operation CT using system software which may include all or any subset of the software functionalities described in the co-owned PCT application which is hereby incorporated by reference. [0167] 5. Software performs registration of vertebra—measure bone feature 3D shape from M3D camera typically combined with 3D position of retractor tube from 3D camera aka 3DC [0168] 6. Software presents to surgeon the retractor superimposed on CT image typically with successful registration indication [0169] 7. Surgeon takes out M3D camera from retractor tube to clear tube for other surgery tools (or the M3D design may use minimal cross section and may be kept inside retractor tube during surgery) [0170] 8. Surgeon shows surgery tools to 3D camera aka 3DC for tool registration and tracking [0171] 9. Surgeon verifies vertebra registration e.g. by placing tool tip on known bone feature and seeing the software present the tool tip at the known bone feature on monitor showing tool and CT [0172] 10. 
Surgeon repeats stages 1-6 and 9 for registering each vertebra using all existing retractors [0173] 11. Software and system continuously track the positions and angles of retractors and tools using 3D camera aka 3DC and markers e.g. the STT on the tubes e.g. retractors and/or DRFs on tools, e.g. as described in our PCT [0174] 12. Software alerts surgeon to any changes in retractor positions that may affect registration, e.g. detected motion >2 mm (or other chosen threshold) [0175] 13. Surgeon repeats stages 3-6 and 9 for any retractor with motion alert to verify proper registration
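The motion alert of stages 11-13 can be sketched as a simple Euclidean-distance check of the tracked retractor position against the chosen threshold (2 mm in the example above); the function and variable names are the editor's, and a real implementation would run on the continuous 3DC tracking stream.

```python
import math

MOTION_THRESHOLD_MM = 2.0  # example threshold from stage 12; configurable

def retractor_moved(ref_pos_mm, cur_pos_mm, threshold_mm=MOTION_THRESHOLD_MM):
    """Return True if the tracked retractor drifted beyond the threshold,
    signalling that registration (stages 3-6 and 9) should be repeated."""
    return math.dist(ref_pos_mm, cur_pos_mm) > threshold_mm
```

Here `ref_pos_mm` is the retractor position recorded at registration time and `cur_pos_mm` the latest tracked position, both in 3DC coordinates.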
[0176] The registration accuracy of bone features from CT to the 3D camera aka 3DC typically depends on the CT slice thickness (typically 1 mm for a high density scan), 3D camera aka 3DC resolution (typically 0.3 mm), exposed area size (typically 10 mm×10 mm) and bone contrast in the CT image. From past trials and from the literature (see https://link.springer.com/article/10.1007%2Fs00586-004-0797-y, aka Reference 3), the expected registration error for a proper choice of features and points is 0.5 mm.
[0177] Taking a 0.5 mm error over an exposed bone area of 10 mm gives an angular error (in two angular directions) of δθ = arctan(0.5/10) ≈ 3°. For a single incision on one side of a vertebra, taking a typical distance of 25 mm between the two pedicles of the same vertebra yields an error of 1.25 mm for the counter-side pedicle location, in addition to the 3° angular error in pedicle screw direction. This makes the single-incision registration usable also for the counter-side pedicle without additional information.
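The error-propagation arithmetic above can be reproduced directly (a sketch; the variable names are illustrative and the numbers are the typical values quoted in the text):

```python
import math

registration_error_mm = 0.5   # expected registration error (Reference 3)
exposed_area_mm = 10.0        # extent of the exposed bone area
pedicle_distance_mm = 25.0    # typical distance between the two pedicles of one vertebra

# angular error in each of two angular directions
angular_error_deg = math.degrees(math.atan(registration_error_mm / exposed_area_mm))

# resulting positional error projected to the counter-side pedicle
counter_side_error_mm = pedicle_distance_mm * (registration_error_mm / exposed_area_mm)

print(round(angular_error_deg, 1), counter_side_error_mm)  # ≈ 2.9 (deg), 1.25 (mm)
```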
[0178] Use of M3D camera and two incisions per vertebra—preferably on both left and right sides of the same vertebra—provides enhanced accuracy in the registration of each vertebra. The positional accuracy can be kept close to the system's resolution at around 0.3 mm.
[0179] Following the registration stage 6 the system 3D camera aka 3DC may be used for tracking of tools and retractor positions continuously during the operation. The M3D camera can be re-inserted into or re-secured to each retractor at any time during the surgery to verify the registration accuracy e.g. as described herein with reference to stages 12, 13.
[0180] Continuous tracking of each vertebra is also possible using the M3D camera, with a design that includes minimum interference with the use of the retractors as entry points for the surgery. The ‘endoscopic’ or fiber-optic design options described above may be used for leaving large usable internal space in the retractor and/or for allowing continuous vertebra tracking.
[0181] Preferred methods for executing two conventional MISS surgeries are now described by way of example; the M3D camera is used during the surgeries and enhances the surgeon's performance.
[0182] Step by step images of conventional MISS are available on the AOSpine website (https://aospine.aofoundation.org/education).
MISS Example 1: Tubular Laminotomy
[0183] According to certain embodiments, this surgery may include all or any subset of the following stages, suitably ordered e.g. as shown: [0184] 1. Tube placement for L4 laminotomy—the tube typically carries, bears, includes or is marked with a set of at least one fiducial markers that are identified by the system 3D camera aka 3DC and located in space. These markers may include any feature, marking or object, typically of known pattern and/or size, that is deployed in a camera's field of view and is then used as a point of reference or measure. [0185] 2. Removal of soft tissue and identification of inferior edge of the lamina and ipsilateral base of spinous process—in this stage, following tissue removal, the M3D camera is inserted into or secured to the tube and measures the inferior edge of the lamina. Using the 3D image of the lamina edge and the location of the tube in space allows locating the full vertebra (in this case L4) in 3D space, and directs the surgeon to adjust the tube if the relative vertebra location requires adjustment. [0186] 3. Laminotomy—drilling of the medial portion of the lamina in a cranial direction and bone removal. During a surgical process e.g. the bone removal process the M3D camera can be entered into the tube multiple times e.g. to measure the bone removal process and provide feedback to the surgeon regarding the amount of bone removed (in 3D) and how close the final bone is to the pre-operational design goal.
MISS Example 2: MIS TLIF (Transforaminal Interbody Fusion)
[0187] According to certain embodiments, this surgery may include all or any subset of the following stages, suitably ordered e.g. as shown: [0188] 1. Tube placement for L4 TLIF—the tube carries or bears or includes or is marked with a set of fiducial markers that are identified by the system 3D camera aka 3DC and located in space [0189] 2. Removal of soft tissue and identification of inferior articulating facet. Typically, following tissue removal, the M3D camera is inserted into or secured to the tube and measures the inferior articulating facet. Using the 3D image of the facet and the location of the tube in space allows locating the full vertebra (in this case L4) in 3D space, and directs the surgeon to adjust the tube if the relative vertebra location requires adjustment. [0190] 3. Resection of the inferior facet. During the bone removal process, the M3D camera can be entered into the tube multiple times to measure the bone removal process and provide feedback to the surgeon regarding the amount of bone removed (in 3D) and how close the final bone is to the pre-operational design goal. [0191] 4. Resection of superior facet. The superior facet is resected from its tip down to the superior border of the pedicle. Also in this stage, the M3D camera can be entered into the tube multiple times to measure the bone removal process and provide feedback to the surgeon regarding the amount of bone removed (in 3D) and/or how close the final bone is to a pre-operational design goal.
[0192] The system software, according to an embodiment of the invention, is now described in detail. The system software may inter alia comprise any software functionalities described in co-owned published PCT patent application PCT/IL2019/050775.
[0193] The M3D software may, alternatively or in addition, include all or any subset of the following modules: [0194] 1. Depth computation module—acquires depth information from the two cameras 202 and 203. Each camera may be used to get depth information independently e.g. by computing pixel offsets of the pattern projected by projector 204. Pre-calibration of the relative position and/or orientation of the projector and/or each camera, and/or both cameras with respect to each other, can be performed, say, during the M3D camera testing following M3D camera assembly. Image processing software for each camera 202, 203 can provide depth of objects in the camera field of view e.g. by converting pixel offsets (computed by cross-correlation of the projected points in the camera image with the original projected point image) of sections of the projected pattern (for example, small groups of semi-random projected dots) to distance from the camera e.g. by triangulation. The computation of depth from pixel offsets may be performed using open source software packages such as OpenCV (https://opencv.org/). The two images from cameras 202 and 203 may be combined to form a continuous 3D depth map of the image area. The software typically combines the depth information from the two cameras, with projector lighting and with non-patterned LED lighting, into a single depth map e.g. a high accuracy (<0.3 mm) depth map. The integration of the depth maps allows accuracy that is about ×2 better than using only the projected pattern for 3D depth estimation. [0195] 2. Integration with global coordinate system—the depth map of the M3D camera is placed in the 3D camera aka 3DC global coordinate system e.g. by taking into account the relative 3D transformation (translation and/or rotation) between the M3D camera and the 3DC camera. 
The position and/or orientation of the retractor may be measured by the STT connected to the retractor (or other tube), and the M3D camera's position and/or orientation relative to the retractor is typically secured or set e.g. by mechanical means that keep the camera locked in a pre-determined position and orientation. The mechanical means may comprise a pin entering a slot, a set screw, or any other mechanical setting mechanism known in the art. [0196] 3. User interface, registration—the user typically marks the area of bone that is cleared on a pre-operation CT of the patient. For example, in a Laminotomy procedure, this refers to the inferior edge of lamina of the vertebra undergoing surgery, and the extent of tissue removal that exposes the bare bone. The depth map from the M3D camera may be matched to the marked area on the CT and may provide registration between the vertebra CT and the actual position of the vertebra vs. the MIS tube. [0197] 4. User interface, bone removal—the user marks a desired amount of bone to be removed on the pre-operation CT. During surgery, each time the user inserts the M3D camera into the tubular retractor, the system typically indicates the amount of bone removed. For example, the software may show the current 3D bone image superimposed on the pre-operational CT image of the vertebra. [0198] 5. User interface, retractor motion—the 3D camera aka 3DC continuously tracks the retractor position and orientation (e.g. by tracking the STT) and updates the vertebra position with respect to the retractor accordingly. The user may check the position at any time during surgery by inserting the M3D camera into the retractor tube and performing re-registration of the vertebra.
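Module 1's pixel-offset-to-depth conversion is standard triangulation: depth is inversely proportional to the offset (disparity), scaled by the calibrated focal length and baseline. The sketch below illustrates this and the fusion of the two cameras' depth maps into one (an illustrative sketch, not the system's actual implementation; function names and calibration values are assumptions):

```python
import numpy as np


def depth_from_pixel_offsets(offsets_px, focal_px, baseline_mm):
    """Triangulate depth from pattern pixel offsets (disparities).

    offsets_px  : disparity of each projected dot vs. the reference pattern
    focal_px    : camera focal length in pixels (from pre-calibration)
    baseline_mm : projector-to-camera baseline (from pre-calibration)
    """
    offsets = np.asarray(offsets_px, dtype=float)
    depth = np.full(offsets.shape, np.nan)
    valid = offsets > 0                        # zero/negative offsets carry no depth
    depth[valid] = focal_px * baseline_mm / offsets[valid]
    return depth


def fuse_depth_maps(depth_a, depth_b):
    """Combine the two cameras' depth maps into one continuous map:
    average where both are valid, otherwise take whichever is valid."""
    return np.where(np.isnan(depth_a), depth_b,
                    np.where(np.isnan(depth_b), depth_a,
                             0.5 * (depth_a + depth_b)))
```

For example, with a 1000 px focal length and a 5 mm baseline, a 10 px offset triangulates to 500 mm.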
[0199] Reference is now made to
[0200] Generally, integration of an ultrasound probe with the system shown and described herein yields enhancement of accuracy and the ability to locate the right position for skin incisions with no additional radiation. The ultrasound scan can be repeated multiple times during surgery to verify registration and navigation.
[0201] The surgery flow aka flow 2 with integration of an ultrasound probe e.g. the probe of
[0218] Stages 1-3 in flow 2 integrate the ultrasound data with the pre-operational CT for radiation-free vertebrae registration prior to any incision.
[0219] The methods of References 4 and 5 herein can be used to perform stage 3 in flow 2. Specifically, the methods and algorithms described in both references can be implemented in the US2CT software in stage 3.
[0220] Reference 4 (https://link.springer.com/article/10.1007%2Fs11548-019-02020-1) is a report on matching ultrasound scanning of lumbar spine with CT scanning for posterior bone surface registration. The results show registration errors <2 mm, mostly around 1 to 1.5 mm.
[0221] Reference 5 (https://arxiv.org/abs/1609.01483) describes enhancing the accuracy of detection of spinal bone surfaces by using minimum variance beam forming of the ultrasound transducer beam, coupled with phase-symmetry analysis of the images. The authors claim registration to CT image with <1 mm error.
[0222] The combination of accurate tracking of the ultrasound probe by the 3D camera aka 3DC, enhanced bone surface contrast using beamforming and optimized detection, with software (aka US2CT, as described in References 4 and 5) that reduces noise and enhances bone structure features—especially posterior cortical bone surfaces of the spinous process and articular processes—will allow sub-millimeter accuracy for registration and navigation during MISS procedures, with no additional radiation to patient or staff.
[0223]
[0228] The incision is positioned to allow a tool inserted via an outer end, far from the vertebra, of the tube, to perform a surgical operation on (e.g. remove a portion of, clear tissue from, insert a pedicle screw, etc.) the exposed vertebra via an inner (aka bottom) end of the tube, adjacent the vertebra and inside the incision. The tool thus may for example be a bone removal tool and/or a tool to place pedicle screws and/or any other surgical tool. [0229] 4b. Surgeon exposes a portion of a vertebra by cleaning tissue therefrom, e.g. via the tube. [0230] 4c. Surgeon marks the exposed portion of vertebra on pre-operative CT image e.g. via touching a touch screen presenting the pre-operative CT image of the spine.
The surgeon may touch just one point on screen, or may trace around the entire area that s/he has exposed, so as to mark the perimeter of the exposed area. [0231] 5. Surgeon mounts the M3D camera in or on the tube. [0232] 6. The M3D camera captures and/or measures a 3D image of the portion of the exposed vertebra visible through the inner end of the tube (e.g. of the exposed vertebra) and saves this image e.g. as described herein and/or in the co-owned PCT application. [0233] 7. The system aligns the operatively-captured 3D image of the portion of the exposed vertebra, to the 3D representation of the pre-operatively CT-imaged vertebrae extracted in stage 2 above, typically including searching a vicinity of the region found in stage 2b until, e.g. via surface matching, a portion of the 3D representation of the vertebrae extracted in stage 2 is found, which portion is similar to a 3D rotation (orientation) and/or 3D translation of the captured 3D image of the body portion. Typically, the system software conventionally computes rigid coordinate transformation (3D rotation and/or 3D translation) between the vertebra and the tube and between the tube and the ‘world’ or global coordinate system of the external 3D camera (3DC); it is appreciated that rotations and translations are both rigid transformations. A display of at least the exposed vertebra, in 3D, and/or 2D DICOM cross sections, may be provided e.g. as described in the co-owned PCT application. The tube's location and orientation may be determined e.g. by virtue of the fiducial markers thereupon. [0234] 8. The surgeon removes the camera and takes a surgery tool; the tool's location and orientation are tracked (e.g. by virtue of the fiducial markers). Typically, each surgery tool, whether in the tube or out, bears markers e.g. as described herein and is tracked by the 3D camera 3DC, so that its tip position and orientation are accurately known.
[0235] It is appreciated that the fiducial markers may include any feature which is easily detected by image processing. If at least 3 such markers are present in a given image (or, for tracking, in many frames), the markers' positions allow the tool pose, or the orientation of the overhead camera or 3DC relative to the features, to be derived. Any suitable tool tracking techniques may be employed by the 3D camera (3DC) e.g. as described in the PCT. Typically, when the system tracks an item such as a tool or a tube, the tracked items are connected or secured to or associated with different dynamic reference frames (DRFs) that include fiducial markers tracked by 3DC.
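Recovering a pose from three or more known marker positions, as described above, is the classic least-squares rigid-fit problem, conventionally solved with the Kabsch/SVD algorithm. The sketch below shows that method under stated assumptions (it is one standard technique, not necessarily the system's actual algorithm; names are illustrative):

```python
import numpy as np


def rigid_fit(model_pts, observed_pts):
    """Least-squares rigid transform (R, t) mapping known model marker
    positions onto their observed 3D positions (Kabsch algorithm).
    Requires >= 3 non-collinear corresponding markers."""
    A = np.asarray(model_pts, dtype=float)
    B = np.asarray(observed_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Given a tool's marker layout (model) and the markers' positions seen by the 3DC (observed), the returned R and t give the tool pose, from which a known tip offset yields tip position and orientation.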
[0236] It is appreciated that some (“external”) tools may be tracked even if they are not inserted into the tube, e.g. an awl or screwdriver used to insert a pedicle screw on another area of the vertebrae. [0237] 9. The surgeon manipulates the tool. The tool is tracked by the 3DC camera e.g. as described in the co-owned PCT patent application. Advantageously, despite the fact that in an MIS procedure vertebrae are not exposed, the imagery generated by the M3D camera typically allows the system software to continuously superimpose the current tool tip position onto the display generated in stage 7, so as to display an image of the tool tip orientation/location superimposed onto the vertebrae, accurately at all times. Thus the surgeon, by viewing the current position of the tool tip on the screen, can conveniently navigate the tool to where s/he wants to operate and receive real-time feedback on tool tip orientation and location. [0238] 10. The surgeon may remove the tool from the tube, re-insert the camera, and view a 3D image of the exposed vertebra (e.g., if the tool is a bone removal tool, of the trimmed vertebra from which some bone has been removed). [0239] 11. If the tool inserted in stage 8 is (say) a bone removal tool, and the surgeon's manipulation of the bone removal tool removes some bone from the exposed vertebra, the method of
[0241]
[0247] It is appreciated that any suitable method may be employed by the system software, to register the original vertebra (e.g. as imaged in the pre-operative CT showing vertebra in pre-operative state, before bone removal or any other surgical operation) to the 3D image which may be generated by the camera in the tube.
[0248] Thus the system knows the starting bone structure; a 3D image of the vertebra before bone removal has begun, is available (e.g. may be derived from the pre-operative CT of the vertebra). The new 3D image after bone removal, captured by the camera in the tube each time the surgeon completes all or some of the required bone removal, is registered to a previous 3D image of the vertebra e.g. the most recent available 3D image of the vertebra, just prior to the most recent round of bone removal.
[0249] Any suitable method may be used for registration. As described in the co-owned PCT application, typically, the input to registration includes markings generated by the surgeon who, having cleaned tissue off the bone features s/he has chosen as tracking features, marks these “tracking features” on the pre-operative image and on the spine or imagery thereof generated by the miniature camera. In order to establish alignment or registration between the pre-operational scan and the 3D camera images, the bone features extracted from the CT scan and marked on the image may be identified in the 3D camera image. For example, the surgeon may mark a specific point on the patient's spine with a tool typically including or bearing tracking or fiducial markers. The 3D camera tracks the tool tip, and the surgeon may mark the relevant point also on the pre-operative scan data. Even marking only 1 or 2 or 4 or 6 or 10 points per vertebra is enough to allow system software to match, or align e.g. by surface matching, the pre-operative scan to the 3D camera image. Typically, the surgeon identifies specific points, areas or traces on each vertebra, both on the CT and by marking them to the camera, for example by placing the tip of a suitable tool at each point or by tracing a line on an exposed bone area. The software matches the pre-operative image with the 3D positions of the points or traces as seen by the camera, and typically informs the surgeon if there is good registration between the pre-operative imagery and camera image. If the software determines that registration is not high quality, the surgeon may repeat registration, and/or may add additional points, traces or features. For example, the surgeon may expose bone features, clearing away tissue. The surgeon may trace the exposed bone features for the 3D camera using a tracked tool tip, and label the same areas on the pre-operative scan using any suitable computer input device. 
This way the correspondence or alignment (e.g. registration between the pre-operational scan and at least one 3D camera image generated as surgery begins) between bone features is established, and the 3D image can be registered to the CT scan. The computer or processor or logic may use a suitable registration algorithm including known 3D point cloud registration methods such as Iterative Closest Point (ICP). When the computer signals it has achieved a “good” registration result, for example with Root Mean Square Error (RMSE) fitness better than, say, 0.5 mm between the CT and 3D points, the surgeon can start the operation and the system may use the registration of the bone features for continuous tracking of each individual vertebra.
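A minimal point-to-point ICP loop of the kind referenced above can be sketched as follows (an illustrative sketch for small point clouds, with brute-force nearest neighbours; a production system would use an optimized library implementation, and the 0.5 mm RMSE threshold is the example value from the text):

```python
import numpy as np


def best_fit_transform(A, B):
    """Least-squares rigid transform mapping points A onto points B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca


def icp(source, target, iterations=30):
    """Iterative Closest Point: align `source` onto `target`.
    Returns the aligned points and the final RMSE, which can be
    compared against a fitness threshold such as 0.5 mm."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    for _ in range(iterations):
        # brute-force nearest neighbour in the target cloud
        dists = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[dists.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
    rmse = np.sqrt(((src - matched) ** 2).sum(axis=1).mean())
    return src, rmse
```

With a reasonable initial alignment (e.g. from the surgeon's marked points), the loop converges and the RMSE serves as the "good registration" signal described above.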
[0250] Once these two images of the same vertebra are registered, the later image may be subtracted from an earlier image, yielding a 3D image of ‘missing’ or already-removed bone volume. Typically, if the vertebra is imaged, say, 5 times, following 5 rounds of bone removal, yielding images 1-5 in addition to the pre-operative image of the vertebra (aka “image 0”), then first, image 1 is aligned or “registered” to image 0, and the system may display the bone removed in round 1 by subtracting image 1 from image 0. The surgeon elects to continue to round 2, typically removes the camera from the tube, and continues bone removal, then returns the camera to the tube and generates image 2. The system aligns image 2 to image 1, but since image 1's alignment to image 0 is known as well, alignment of image 2 to image 0 may be derived, typically using conventional linear algebraic techniques, from the alignment of image 2 to image 1 and from the alignment of image 1 to image 0. Then, image 2 may be subtracted from image 0 to yield a 3D image of all bone removed thus far (i.e. all bone removed in round 1, plus all bone removed in round 2). This 3D image of missing bone volume (or similar images derived after rounds 3-5 are completed) may be shown to the surgeon superimposed on the pre-operational CT image; superimposing may be achieved using any suitable visualization scheme such as, for example, rendering one of the images semi-transparent and/or adding color to the difference.
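The chaining of alignments in paragraph [0250] is ordinary composition of 4×4 homogeneous transforms, and the "missing bone" image a subtraction of the registered volumes. The sketch below illustrates both under stated assumptions (boolean voxel grids stand in for the actual 3D images; function names are illustrative):

```python
import numpy as np


def compose(T_1_to_0, T_2_to_1):
    """Derive the alignment of image 2 to image 0 from the alignments
    2 -> 1 and 1 -> 0 (4x4 homogeneous transform matrices)."""
    return T_1_to_0 @ T_2_to_1


def removed_bone(voxels_0, voxels_2):
    """Boolean voxel grids already registered to a common frame:
    bone present pre-operatively (image 0) but absent now (image 2)."""
    return voxels_0 & ~voxels_2
```

Composing two pure translations, for instance, simply adds their offsets; the resulting difference volume is what would be rendered semi-transparent or colored over the pre-operative CT.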
[0251] The surgeon may observe the images (images 1-5 in the above example) visually presenting the surgical progress e.g. the extent of removed bone, and decide when to stop (e.g. when to stop removing bone). In the above example, if the surgeon is satisfied with image 5, the surgeon will not embark on a bone removal round 6, and will instead close the surgical field and terminate surgery.
EXAMPLES
[0252] Tubular Laminotomy may be performed, wherein soft tissue is removed and, subsequently, an inferior edge of at least one lamina and/or ipsilateral base of spinous process having been identified, the M3D camera is secured to a top end of the tube, and the inferior edge of the lamina, as viewed through the bottom end of the tube, is measured. Then, the full vertebra's 3D location is derived, e.g. by matching the measured surface to the surface of this area as marked on the pre-operative image, using the 3D image of the lamina edge and the tube's known (due to the fiducial markers on the tube) location in space.
[0253] According to another example, MIS TLIF (Transforaminal Interbody Fusion) is performed, including removal of soft tissue and identification of the inferior articulating facet, followed by securing the M3D camera to a top end of the tube and measurement by the 3D camera of the inferior articulating facet, as viewed through the bottom end of the tube. Then, the full vertebra's 3D location is derived from a 3D image of the facet and from the tube's known (due to the fiducial markers on the tube) location in space.
[0254] The system may direct the surgeon to adjust the tube if the vertebra location, relative to axes marked by the surgeon, requires adjustment. Typically, the system software generates a display of a tool tip which helps the surgeon determine if and how to move the tool.
[0255] A particular advantage of certain embodiments is that an adequate depth of field for mini-surgery use cases is achievable, e.g. if it is desired to achieve a depth of plus/minus 5 mm (total 10 mm), which is achievable by a surgeon and/or corresponds to, or takes into account, a maximum vertical distance between various features of a portion of human bone whose diameter is the diameter of a tube, e.g. up to 25 mm. This may be achieved, in the embodiments herein, by providing a suitable tube length, one which provides a distance that is an order of magnitude larger (×10), e.g. 50 mm, between the near end of the tube, where the camera is mounted, and the far end of the tube.
[0256] References to CT herein are merely by way of example. Instead, any scan modality which yields 3D information regarding bone surface of vertebrae may be employed, such as but not limited to MRI, ultrasound, combination of 2D X-ray images (fluoro), combinations of any of the above, or other known alternatives.
[0257] According to certain embodiments, the system software is operative to recognize a location of the scene within a larger topology e.g. a 3D representation of one or more vertebrae. Typically, mapping a 3D topology imaged by the M3D camera to bone surface extracted from pre-operative imagery, e.g. CT scan/s, provides relative coordinates of vertebra with respect to the M3D camera. Conversion to ‘world’ coordinates may be performed by using the known location and orientation of the M3D camera provided by the 3DC system camera that tracks the tubular retractor and the surgery tools. Typically, the surgeon marks the exposed bone surface and this marked area is used for registration or alignment e.g. as described herein. It is appreciated that extraction of bone features from pre-operational imagery or conversion of pre-operative imagery into 3D structures may be performed in accordance with any of the teachings in the co-owned PCT application. Surface matching may be used to best match a surface measured by a 3D camera during surgery, to a typically larger and three-dimensional bone surface extracted from a pre-operative image, such as but not limited to a CT image.
[0258] Any suitable method may be employed for conversion of pre-operative imagery into 3D structures. The areas used for registration or alignment (e.g. by surface matching) may include whatever is visible (e.g. to the miniature camera) via the tube. The surgeon typically marks the exposed area on the pre-operation imagery and this is the basis for registration or alignment which may, as described in the published PCT application, include all or any subset of: i) vertebrae bone features for real time tracking; ii) extraction of bone features from pre-operational imagery; and iii) registration & tracking of individual vertebrae. The individual features to be tracked vary between surgeries. For example, the area for registration may be a lamina's inferior edge, or may be an inferior articulating facet.
[0259] It is appreciated that terminology such as “mandatory”, “required”, “need” and “must” refer to implementation choices made within the context of a particular implementation or application described herewithin for clarity and are not intended to be limiting, since, in an alternative implementation, the same elements might be defined as not mandatory and not required, or might even be eliminated altogether.
[0260] Components described herein as software may, alternatively, be implemented wholly or partly in hardware and/or firmware, if desired, using conventional techniques, and vice-versa. Each module or component or processor may be centralized in a single physical location or physical device or distributed over several physical locations or physical devices.
[0261] Included in the scope of the present disclosure, inter alia, are electromagnetic signals in accordance with the description herein. These may carry computer-readable instructions for performing any or all of the operations or stages of any of the methods shown and described herein, in any suitable order, including simultaneous performance of suitable groups of operations as appropriate. Included in the scope of the present disclosure, inter alia, are machine-readable instructions for performing any or all of the operations of any of the methods shown and described herein, in any suitable order; program storage devices readable by machine, tangibly embodying a program of instructions executable by the machine to perform any or all of the operations of any of the methods shown and described herein, in any suitable order i.e. not necessarily as shown, including performing various operations in parallel or concurrently rather than sequentially as shown; a computer program product comprising a computer useable medium having computer readable program code, such as executable code, having embodied therein, and/or including computer readable program code for performing, any or all of the operations of any of the methods shown and described herein, in any suitable order; any technical effects brought about by any or all of the operations of any of the methods shown and described herein, when performed in any suitable order; any suitable apparatus or device or combination of such, programmed to perform, alone or in combination, any or all of the operations of any of the methods shown and described herein, in any suitable order; electronic devices each including at least one processor and/or cooperating input device and/or output device and operative to perform, e.g. 
in software, any operations shown and described herein; information storage devices or physical records, such as disks or hard drives, causing at least one computer or other device to be configured so as to carry out any or all of the operations of any of the methods shown and described herein, in any suitable order; at least one program pre-stored e.g. in memory or on an information network such as the Internet, before or after being downloaded, which embodies any or all of the operations of any of the methods shown and described herein, in any suitable order, and the method of uploading or downloading such, and a system including server/s and/or client/s for using such; at least one processor configured to perform any combination of the described operations or to execute any combination of the described modules; and hardware which performs any or all of the operations of any of the methods shown and described herein, in any suitable order, either alone or in conjunction with software. Any computer-readable or machine-readable media described herein is intended to include non-transitory computer- or machine-readable media.
[0262] Any computations or other forms of analysis described herein may be performed by a suitable computerized method. Any operation or functionality described herein may be wholly or partially computer-implemented e.g. by one or more processors. The invention shown and described herein may include (a) using a computerized method to identify a solution to any of the problems or for any of the objectives described herein, the solution optionally including at least one of a decision, an action, a product, a service or any other information described herein that impacts, in a positive manner, a problem or objectives described herein; and (b) outputting the solution.
[0263] The system may, if desired, be implemented as a network, e.g. web-based system employing software, computers, routers and telecommunications equipment as appropriate.
[0264] Any suitable deployment may be employed to provide functionalities e.g. software functionalities shown and described herein. For example, a server may store certain applications, for download to clients, which are executed at the client side, the server side serving only as a storehouse. Any or all functionalities e.g. software functionalities shown and described herein may be deployed in a cloud environment. Clients, e.g. mobile communication devices such as smartphones, may be operatively associated with, but external to, the cloud.
[0265] The scope of the present invention is not limited to structures and functions specifically described herein and is also intended to include devices which have the capacity to yield a structure, or perform a function, described herein, such that even though users of the device may not use the capacity, they are, if they so desire, able to modify the device to obtain the structure or function.
[0266] Any “if-then” logic described herein is intended to include embodiments in which a processor is programmed to repeatedly determine whether condition x, which is sometimes true and sometimes false, is currently true or false and to perform y each time x is determined to be true, thereby to yield a processor which performs y at least once, typically on an “if and only if” basis e.g. triggered only by determinations that x is true, and never by determinations that x is false.
[0267] Any determination of a state or condition described herein, and/or other data generated herein, may be harnessed for any suitable technical effect. For example, the determination may be transmitted or fed to any suitable hardware, firmware or software module, which is known or which is described herein to have capabilities to perform a technical operation responsive to the state or condition. The technical operation may, for example, comprise changing the state or condition, or may more generally cause any outcome which is technically advantageous given the state or condition or data, and/or may prevent at least one outcome which is disadvantageous given the state or condition or data. Alternatively or in addition, an alert may be provided to an appropriate human operator or to an appropriate external system.
[0268] Features of the present invention, including operations which are described in the context of separate embodiments, may also be provided in combination in a single embodiment. For example, a system embodiment is intended to include a corresponding process embodiment, and vice versa. Also, each system embodiment is intended to include a server-centered “view” or client-centered “view”, or “view” from any other node of the system, of the entire functionality of the system, computer-readable medium, or apparatus, including only those functionalities performed at that server or client or node. Features may also be combined with features known in the art and particularly, although not limited to, those described in the Background section or in publications mentioned therein.
[0269] Conversely, features of the invention, including operations, which are described for brevity in the context of a single embodiment or in a certain order, may be provided separately or in any suitable sub-combination, including with features known in the art (particularly, although not limited to those described in the Background section or in publications mentioned therein) or in a different order. “e.g.” is used herein in the sense of a specific example which is not intended to be limiting. Each method may comprise all or any subset of the operations illustrated or described, suitably ordered e.g. as illustrated or described herein.
[0270] Devices, apparatus or systems shown coupled in any of the drawings may in fact be integrated into a single platform in certain embodiments, or may be coupled via any appropriate wired or wireless coupling, such as but not limited to optical fiber, Ethernet, Wireless LAN, HomePNA, power line communication, cell phone, Smart Phone (e.g. iPhone), Tablet, Laptop, PDA, Blackberry GPRS, satellite including GPS, or other mobile delivery. It is appreciated that in the description and drawings shown and described herein, functionalities described or illustrated as systems and sub-units thereof can also be provided as methods and operations therewithin, and functionalities described or illustrated as methods and operations therewithin can also be provided as systems and sub-units thereof. The scale used to illustrate various elements in the drawings is merely exemplary and/or appropriate for clarity of presentation, and is not intended to be limiting.
[0271] Any suitable communication may be employed between separate units herein, e.g. wired data communication and/or short-range radio communication with sensors such as cameras, e.g. via WiFi, Bluetooth or Zigbee.
[0272] It is appreciated that implementation via a cellular app as described herein is but an example, and, instead, embodiments of the present invention may be implemented, say, as a smartphone SDK; as a hardware component; as an STK application, or as suitable combinations of any of the above.
[0273] Any processing functionality illustrated (or described herein) may be executed by any device having a processor, such as but not limited to a mobile telephone, set-top-box, TV, remote desktop computer, game console, tablet, mobile computer e.g. laptop or other computer terminal, or embedded remote unit, which may either be networked itself (e.g. may itself be a node in a conventional communication network) or may be conventionally tethered to a networked device (to a device which is a node in a conventional communication network, or is tethered directly or indirectly/ultimately to such a node).
[0274] Any operation or characteristic described herein may be performed by another actor outside the scope of the patent application and the description is intended to include apparatus whether hardware, firmware or software which is configured to perform, enable or facilitate that operation or to enable, facilitate or provide that characteristic.
[0275] The terms processor or controller or module or logic as used herein are intended to include hardware such as computer microprocessors or hardware processors, which typically have digital memory and processing capacity, such as those available from, say, Intel and Advanced Micro Devices (AMD). Any operation or functionality or computation or logic described herein may be implemented entirely or in any part on any suitable circuitry, including any such computer microprocessor/s, as well as in firmware or in hardware or any combination thereof.
[0276] It is appreciated that elements illustrated in more than one drawing, and/or elements in the written description, may still be combined into a single embodiment, except if otherwise specifically clarified herewithin. Any of the systems shown and described herein may be used to implement, or may be combined with, any of the operations or methods shown and described herein.
[0277] It is appreciated that any features, properties, logic, modules, blocks, operations or functionalities described herein, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment, except where the specification or general knowledge specifically indicates that certain teachings are mutually contradictory and cannot be combined.
[0278] Conversely, any modules, blocks, operations or functionalities described herein, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination, including with features known in the art. Each element, e.g. operation, described herein may have all characteristics and attributes described or illustrated herein, or, according to other embodiments, may have any subset of the characteristics or attributes described herein.