Reflective Coating for Material Calibration
20220024136 · 2022-01-27
Inventors
- Aaron Weber (Arlington, MA)
- Desai Chen (Arlington, MA)
- Harrison Wang (New York, NY, US)
- Gregory Ellson (Cambridge, MA, US)
- Wojciech Matusik (Lexington, MA)
CPC classification
G01B11/2545
PHYSICS
G01B11/00
PHYSICS
B33Y30/00
PERFORMING OPERATIONS; TRANSPORTING
G06N5/01
PHYSICS
C23C16/52
CHEMISTRY; METALLURGY
B33Y50/02
PERFORMING OPERATIONS; TRANSPORTING
B29C64/393
PERFORMING OPERATIONS; TRANSPORTING
International classification
B29C64/393
PERFORMING OPERATIONS; TRANSPORTING
B33Y30/00
PERFORMING OPERATIONS; TRANSPORTING
B33Y50/02
PERFORMING OPERATIONS; TRANSPORTING
G01B11/00
PHYSICS
Abstract
A method includes generating correction data for a construction material that is used by an additive-manufacturing machine to manufacture an object. This correction data compensates for an interaction of the construction material with first radiation that has been used to illuminate the construction material.
Claims
1. An apparatus for using instructions representative of an object to selectively deposit layers of construction material to manufacture said object, said apparatus comprising a printhead having a nozzle that ejects said construction material towards said object and a controller that controls operation of said printhead based at least in part on a depth to a surface of said object, wherein said controller is configured to receive first information and second information, wherein said first information incorrectly characterizes said depth, and wherein said controller is configured to use said second information in connection with said first information to estimate said depth.
2. The apparatus of claim 1, wherein said second information is indicative of said construction material.
3. The apparatus of claim 1, wherein said layers of construction material comprise layers of different kinds of construction material and wherein said second information is indicative of said different kinds of construction material.
4. The apparatus of claim 1, wherein said construction material comprises a composite construction material having plural constituents arranged in a spatial distribution and wherein said second information is indicative of said spatial distribution.
5. The apparatus of claim 1, wherein said second information is obtained from an optical measurement.
6. The apparatus of claim 1, wherein said second information is obtained from a contact profilometer.
7. The apparatus of claim 1, wherein said second information comprises information indicative of topography of a surface of said object.
8. The apparatus of claim 1, wherein said second information represents interaction of said object with incident radiation and wherein said controller is configured to output an estimate of depth to said object based on said first and second information.
9. The apparatus of claim 1, wherein said second information comprises calibration data.
10. The apparatus of claim 1, wherein said second information comprises a family of characteristic curves.
11. The apparatus of claim 1, further comprising a tangible and non-transitory computer-readable medium on which said second information is stored, said medium being accessible to said controller.
12. A kit comprising a construction material, a container, and a pointer, where said construction material is one that is to be used by an additive-manufacturing machine to form an object, wherein said container holds said construction material, and wherein said pointer identifies construction-material data.
13. The kit of claim 12, wherein said pointer comprises a QR code.
14. The kit of claim 12, wherein said construction-material data comprises calibration data.
15. The kit of claim 12, wherein said construction-material data comprises a family of characteristic curves.
16. The kit of claim 12, wherein said construction material is one of a plurality of construction materials for use by said additive-manufacturing machine, wherein said container is one of a plurality of containers, each of which contains a different one of said construction materials, wherein said pointer is one of a plurality of pointers, wherein said construction-material data includes a plurality of data portions, each of which corresponds to one of said construction materials, and wherein each of said pointers points to one of said data portions.
17. A method comprising using an additive-manufacturing process to manufacture an object, wherein using said additive-manufacturing process comprises using instructions representative of said object to cause a printhead to selectively deposit layers of construction material to manufacture said object, receiving first information that incorrectly characterizes a depth to a surface of said object, receiving, from a machine-learning system, a parameterized transform that compensates for interaction of said object with incident radiation, based on said first information and said parameterized transform, obtaining an estimate of said depth, and, using said estimate, controlling said depth.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0073] During the course of an object's manufacture, an actuator 150 causes motion of the object 130 relative to the print head 120. In the illustrated embodiment, the actuator 150 translates the object 130 in a vertical direction z and in two horizontal directions x, y. The two horizontal directions define a “transverse plane.”
[0074] To promote more accurate manufacture, a controller 110 relies on feedback to control both the operation of the print head 120 and the movement imparted by the actuator 150. Such a controller 110 relies at least in part on information on the topography of the object's surface 132. This information is derived from an edge profilometer 160 that includes an emitter 161 and a camera 163.
[0075] In some embodiments, the additive-manufacturing apparatus 100 includes a machine-learning system 112 that, in some implementations, includes a neural network 114. The operation of these components is described in detail below.
[0076] In other embodiments, the additive-manufacturing apparatus 100 includes a mixer 125 that draws resins from an installed resin module 126A. The installed resin module 126A is taken from a kit 123 that includes plural resin modules 126A-126C.
[0077] Affixed to each of the resin modules 126A-126C in the kit 123 is a corresponding pointer 127A-127C that identifies the particular resin contained in that resin module 126A-126C. Characteristics of each resin are contained in resin data 128 stored in a materials database 129. This materials database 129, along with the resin data 128, comes as part of the kit 123.
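The pointer-to-data association just described can be modeled as a simple keyed lookup. The sketch below is illustrative only — the pointer strings, resin names, and data fields are hypothetical, not taken from the patent:

```python
# Minimal sketch of a materials database keyed by pointer values.
# All identifiers and values here are hypothetical examples.
materials_database = {
    "RESIN-A-001": {"resin": "resin A", "calibration": [0.98, 1.02, 1.01]},
    "RESIN-B-002": {"resin": "resin B", "calibration": [1.05, 0.97, 1.00]},
}

def resolve_pointer(pointer: str) -> dict:
    """Return the resin data identified by a scanned pointer (e.g., a QR code)."""
    return materials_database[pointer]

data = resolve_pointer("RESIN-A-001")
```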
[0078] A suitable pointer 127A-127C is one encoded in a bar code or a QR code. The controller 110 thus reads the pointer 127A of the installed resin module 126A and identifies the particular resin that is contained within it. As a result, the controller 110 accesses the relevant resin data 128 in connection with operating the printhead 120 and in connection with calibrating the printhead 120 to accommodate inks having different properties.

In a typical embodiment, the emitter 161 is an LED pattern-projector that generates an in-focus image of an optical slit or mask on the object 130. As a result, the emitter 161 illuminates the object's surface 132 with a pattern having an edge 183. This provides a sharp transition between illuminated and non-illuminated portions of the object's surface 132.
[0079] The camera 163 records the location of the edge 183. In a typical embodiment, the camera 163 is an area-scan camera. Such a camera 163 has an array of sensors that defines an array 165, which is shown in
[0081] More generally, the illustrated edge profilometer 160 features a first component at a first elevation angle and a second component at a second elevation angle that is less than the first elevation angle. The elevation angle is defined such that a point directly overhead as seen from the object, i.e., a point at the zenith, has an elevation angle of 90° and a point at the horizon has an elevation angle of 0° or 180°. In a preferred embodiment, the first elevation angle is ninety degrees and the second elevation angle lies in either a first interval or a second interval. The first interval is (90°, 180°) and the second interval is (0°, 90°). The use of parentheses indicates an open interval that does not include its endpoints. In such embodiments, the first component is referred to as an “overhead” component and the second component is referred to as an “off-axis” component.
[0082] If the first component is an emitter 161, then the second component is a camera 163. If the first component is a camera 163, then the second component is an emitter 161.
[0083] As shown in
[0084] An image of the pattern forms on this array 165 with the pattern's edge 183 falling at a particular location along the array 165. For a known geometry, the location of the edge 183 along this array 165 provides a basis for estimating the z-coordinate of the object's surface 132 in the particular region of the object 130 that is being inspected. The value of this z-coordinate will be referred to herein as “depth.” This value could, of course, also have been referred to as a “height,” the distinction between the two being a result of an arbitrary choice of a datum. Thus, although the term “depth” is used herein for consistency, the value of the z-coordinate is ultimately a distance to a reference datum.
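As a rough illustration of how an edge position on the sensing array maps to depth, consider an overhead pattern projector and an off-axis camera at elevation angle θ. Under an orthographic, unit-magnification viewing approximation, a vertical surface displacement Δz shifts the imaged edge by Δz·cos θ along the sensor, so depth change follows from the pixel shift. This relation is a simplified sketch, not the patent's actual triangulation calibration:

```python
import math

def depth_change_mm(pixel_shift: float, pixel_pitch_mm: float, elevation_deg: float) -> float:
    """Depth change implied by an edge shift on the sensing array.

    Assumes an overhead pattern projector and an off-axis camera at the
    given elevation angle, with orthographic (unit-magnification) imaging,
    so that delta_z = pixel_shift * pixel_pitch / cos(elevation).
    """
    return pixel_shift * pixel_pitch_mm / math.cos(math.radians(elevation_deg))

# A 10-pixel edge shift with 0.01 mm pixels and a 60-degree camera:
dz = depth_change_mm(10, 0.01, 60)  # about 0.2 mm
```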
[0085] In general, the depth of the object's surface 132 changes over time. This can result from activity by the print head 120, which deposits resin onto the object 130 and thus reduces the depth, or from movement by the actuator 150. In either case, the edge's location moves along the sensing array 165. This movement provides a feedback signal that the controller 110 relies upon for controlling either or both the actuator 150 and the print head 120.
[0086] To improve scanning rate, it is useful for the array 165 to be relatively small. After all, the process of optical triangulation upon which the controller 110 relies requires many data samples to be processed in real-time. Thus, an excessively large array 165 imposes a greater computational burden. On the other hand, as the array 165 becomes smaller, it becomes increasingly likely that the edge 183 will no longer fall within the array 165.
[0087] In some embodiments, the camera 163 has an array 165 with a selectable length. This length depends in part on the desired region-of-interest. Since the array 165 is formed by a set of adjacent rows of pixels, this can be implemented by enabling only a subset of the rows. Such an embodiment provides the opportunity to trade scanning speed for depth-measurement range. To measure a greater range of depth, the array 165 can be made longer by re-enabling selected pixels. However, since frame rate depends on how many rows are being used in the array 165, a longer array 165 will cause a smaller frame rate.
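The speed-versus-range tradeoff can be captured by noting that frame rate scales roughly inversely with the number of enabled rows. The sketch below uses a hypothetical readout rate and ignores per-frame overhead:

```python
def frame_rate_hz(row_readout_rate_hz: float, enabled_rows: int) -> float:
    """Approximate frame rate when only a subset of sensor rows is enabled.

    Assumes readout time dominates and scales linearly with row count
    (a simplification; real sensors add fixed overhead per frame).
    """
    return row_readout_rate_hz / enabled_rows

# Halving the enabled rows roughly doubles the frame rate:
fast = frame_rate_hz(1_000_000, 100)
slow = frame_rate_hz(1_000_000, 200)
```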
[0089] A useful feature of the edge profilometer 160 is that even when the edge 183 falls outside the array 165, it is still possible to distinguish an array 165 that is fully illuminated from one that is not illuminated at all. This distinction provides the controller 110 with information concerning which side of the array 165 the edge 183 has surpassed. In particular, this information tells the controller 110 whether the area being inspected by the camera 163 is too high or too low to see the edge 183 and hence, how to move the object 130 to bring the edge 183 back into view. In some cases, this information tells the controller 110 whether or not to deposit more resin.

The edge profilometers 160 illustrated thus far each have an overhead component and an off-axis component. Alternative embodiments feature redundancy in the off-axis component. For example,
[0090] The embodiments shown in
[0093] In another embodiment, shown in
[0094] The camera's field of view covers a particular range of rows and columns. At each row and column, there exists a pixel intensity. These pixel intensities collectively define a profile. A row profile shows how pixel intensity varies with row index along a particular column; a column profile shows how pixel intensity varies with column index along a particular row.
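In code, the two kinds of profile are just slices of the camera image along its two axes. A minimal numpy sketch with a synthetic frame:

```python
import numpy as np

# Synthetic camera frame: dark above row 4, bright from row 4 down.
image = np.zeros((8, 6))
image[4:, :] = 255.0

row_profile = image[:, 2]     # intensity versus row, along column 2
column_profile = image[3, :]  # intensity versus column, along row 3
```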
[0097] However, in many cases, the material from which the object 130 is made is slightly translucent, as shown in
[0098] For each column in the column profile 181, it is useful to estimate the row at which the step occurs. This is carried out by a regression that fits the measurements of the row profile 185 to a logistic curve 189 from a family of logistic curves. In the illustrated embodiment, the family of logistic curves is given by
f(x) = A + (B − A)/(1 + exp(−k(x − x.sub.0)))
where A and B define the extrema of the logistic curve 189, k is the slope, and x.sub.0 is the midpoint. The row that is closest to being at the midpoint of the logistic curve 189 can be used as a surrogate for the depth of the object's surface at the lateral location being measured. In addition to the row profile 185,
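A sketch of the midpoint-estimation regression is shown below, using a coarse grid search over x.sub.0 with the slope held fixed for brevity; a full implementation would also fit k, and all the numbers here are synthetic:

```python
import numpy as np

def logistic(x, A, B, k, x0):
    """Family of logistic curves: A and B are the extrema, k the slope, x0 the midpoint."""
    return A + (B - A) / (1.0 + np.exp(-k * (x - x0)))

rows = np.arange(40, dtype=float)
# Synthetic row profile: a logistic step with midpoint 17.3.
profile = logistic(rows, 10.0, 200.0, 1.2, 17.3)

# Estimate the extrema from the data, then search for the best-fitting midpoint.
A_hat, B_hat = profile.min(), profile.max()
candidates = np.arange(1.0, 39.0, 0.1)
errors = [np.sum((logistic(rows, A_hat, B_hat, 1.2, x0) - profile) ** 2)
          for x0 in candidates]
midpoint = candidates[int(np.argmin(errors))]  # surrogate for surface depth
```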
[0099] In the absence of subsurface scatter and other noise sources, including noise resulting from having a camera 163 and/or an emitter 161 with imperfect focus, the camera 163 should record a distinct step 187 as shown in
[0101] The first half-step's extent therefore defines an “intermediate value.” To the extent this intermediate value can be estimated from the column profile 181 shown in
[0102] The illustrated column profile 181 can be viewed as a vector of intensity values (x.sub.1, x.sub.2, . . . x.sub.n). This vector is referred to herein as the “edge vector” and denoted by “x”. The edge profilometer 160 provides this vector to the controller 110. The controller 110 then uses a mapping function D(x), sometimes referred to as a “parametrized transformation,” to transform the edge vector x into a measured depth.
[0103] Referring back to
[0104] Deriving a mapping function D(x) begins with capturing ground-truth data and providing it to the machine-learning system 112 shown in
[0105] A first way to capture ground-truth data is to use an alternate three-dimensional scanner. Examples include a contact scanner or contact profilometer, a micro-CT scanner, an atomic force microscope, an OCT scanner, and a confocal 3D-scanner. Whichever alternate scanner is chosen should have a resolution no less than that of the edge profilometer 160.
[0106] When using an alternate scanner, it is particularly useful to spatially register the data obtained from the alternate scanner to that obtained by the edge profilometer 160. Registration in the transverse directions is easily accomplished using fiducial markers or, equivalently, by registering features in the surface 132. However, because of subsurface scatter, registration in the vertical direction is more difficult. This is because alternative scanners do not experience subsurface scattering in the same way. For example, an atomic force microscope does not experience subsurface scattering at all.
[0107] A suitable method for addressing the problem posed by vertical registration is to use a test object that mimics the actual object's form but suppresses subsurface scattering. A suitable test object would be a metal plate.
[0108] By using a metal plate, it is possible to compensate for subsurface scattering by comparing the depth data from the edge profilometer 160 in the region with no subsurface scattering to the depth data from the external high-resolution scanner.
[0109] Another method of obtaining ground-truth data avoids using an alternate scanner altogether. This method includes printing the object 130 and capturing corresponding scan data using the still-uncoated object 130, as shown in
[0110] An optional further step is that of coating the object 130 with a thin layer of a fluorescent material. Examples of a suitable material include an optical brightener that fluoresces in response to incident light and does so in the visible range so that the resulting fluorescence can be captured by the camera. Alternatively, the coating can be a highly-scattering material to provide a stronger signal to the camera 163.
[0111] The foregoing methods are particularly useful for obtaining calibration data that can be used in the kit 123, and in particular, in the materials database 129 as part of the resin data 128. In such cases, the procedure would include making a measurement using a slab of the bulk material, coating the slab with a metal coating 136 or a fluorescent layer, and making another measurement. The difference between the two measurements is indicative of the extent of subsurface scattering.
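The two-measurement comparison amounts to a per-point subtraction: the apparent depth of the bare, translucent slab minus the depth of the same slab after coating. A minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical depth maps (mm) of the same slab, before and after coating.
uncoated_depth = np.array([5.12, 5.10, 5.15, 5.11])  # biased by subsurface scatter
coated_depth = np.array([5.00, 5.01, 5.02, 5.00])    # coating suppresses the scatter

scatter_bias = uncoated_depth - coated_depth  # per-point scattering offset
mean_bias = float(scatter_bias.mean())        # overall extent of subsurface scattering
```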
[0112] In either case, the method continues with scanning the now-coated object 130, using the same scanning setup. For each surface point, corresponding row profiles 185 such as those shown in
[0113] Another difficulty that arises is inhomogeneous subsurface scattering. One way this occurs is in the case of an object 130 that is made of different materials in different regions. Another way this can arise is by coupling of light to structures outside of the object 130. For example, there may be relatively thin parts of the object 130 that rest on the build platform. When scanning these regions, it is quite possible for light to pass all the way through the object 130 and reflect off the build platform itself. As a result, the optical properties of the build platform come into play. It is therefore particularly useful for the mapping function D(x) to work reliably by providing reliable depth estimates for measurements made under such conditions.
[0114] One way to solve this problem is to devise a universal mapping function D(x) that correctly computes the depth value for different types of row profiles 185 corresponding to different material types or different spatial distributions of materials.
[0115] Another way to solve this problem is to carry out an equalization procedure. This can be done by adjusting the additives in different materials such that their row profiles 185 are as similar as possible. This can include adjusting the types of additives and the concentrations of those additives. Since all parts of the structure would have roughly the same subsurface scattering properties, this would allow the same mapping function to be used regardless of the material from which the area under measurement was made.
[0116] Alternatively, one can avoid the use of additives altogether by determining multiple types of mapping functions D.sub.i(x) and choosing the correct mapping function based on which portion of the object is being inspected. For example, in an object that is made of a build material and a support material, there would be two mapping functions: D.sub.BUILD(x) and D.sub.SUPPORT(x). The controller 110 would then select the correct mapping function based on what is being scanned. This choice could be made based on advance knowledge of the object and knowledge of the regions to be scanned. It could also be made based on spectroscopic data that is collected in real time during the scanning process itself.
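Selecting among mapping functions such as D.sub.BUILD(x) and D.sub.SUPPORT(x) is a simple dispatch on the region being scanned. The mapping functions below are hypothetical placeholders; real ones would come from the machine-learning process described in the text:

```python
# Hypothetical per-material mapping functions (placeholders for learned models).
def d_build(edge_vector):
    return sum(edge_vector) * 0.010   # placeholder transform for build material

def d_support(edge_vector):
    return sum(edge_vector) * 0.012   # placeholder transform for support material

MAPPING_FUNCTIONS = {"build": d_build, "support": d_support}

def estimate_depth(edge_vector, region_material: str) -> float:
    """Pick the mapping function that matches the material being scanned."""
    return MAPPING_FUNCTIONS[region_material](edge_vector)
```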
[0117] One way to convert a row profile 185 into actual depth data is to fit a logistic curve 189 and to then identify its midpoint. While this method is effective, the computationally intensive nature of curve fitting can tax the controller's real-time processing ability. Additionally, this method imposes the additional step of having to convert the results based on calibrated data.
[0118] In an alternative method, illustrated in
[0119] Preferably, each unit curve 191 has also been scaled to the maximum and minimum values of the row profile 185. By convolving each unit curve 191 with a row profile, it becomes possible to identify the unit curve that most closely matches the row profile 185. Based on the resulting set of convolutions, it becomes possible to identify a best estimate 193 of the step's location and the corresponding depth value.
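One way to realize the unit-curve search is normalized cross-correlation: center and normalize both the row profile and each pre-computed unit curve, then take the candidate with the highest inner product. This is a sketch of the idea with synthetic data, not the patent's exact procedure:

```python
import numpy as np

def unit_curve(n, midpoint, slope=1.5):
    """A pre-computed logistic step of unit amplitude."""
    x = np.arange(n, dtype=float)
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

n = 50
profile = 30.0 + 170.0 * unit_curve(n, 21.0)  # synthetic row profile

# Center and normalize the profile once.
p = profile - profile.mean()
p /= np.linalg.norm(p)

best_midpoint, best_score = None, -np.inf
for midpoint in np.arange(5.0, 45.0, 0.5):
    t = unit_curve(n, midpoint)
    t = t - t.mean()
    t /= np.linalg.norm(t)
    score = float(t @ p)  # normalized cross-correlation
    if score > best_score:
        best_midpoint, best_score = float(midpoint), score
```

Centering and normalizing both signals makes the score insensitive to the profile's offset and amplitude, so only the step's location and shape matter.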
[0120] A method that relies on convolution with a family of pre-computed unit curves 191 is advantageous because the computational steps for carrying out convolution are simpler and more rapidly carried out than those for carrying out curve fitting. Another advantage that arises is the ease with which it becomes possible to accommodate different materials and different thicknesses. When using the foregoing convolution method, it is only necessary to adjust the choice of unit curves 191.
[0121] Since each resin has different properties, it is useful to provide different families of pre-computed unit curves 191. These can be stored in the resin data 128. When a new resin module 126 is made available for the printer's use, the controller 110 uses information in the pointer 127 to identify the portion of the resin data 128 that has the correct unit curves for the resin in that resin module 126.
[0122] A suitable method for deriving the mapping function D(x) is to carry out machine learning based on collected edge profiles x and collected disparity vectors y. A machine-learning process would use these vectors to identify a mapping function D that satisfies the property: y=D(x). Depending on the choices of edge profiles x and disparity vectors y, the learned mapping-function could be one that works for a single virtual material, which was made by suitably doping different materials with additives so as to equalize their optical properties. Or the learned mapping-function could be one that works for a particular material, in which case the correct mapping function would have to be selected based on the region being scanned, or it could be one that accommodates variations that result from interaction of the material with adjacent structures, for example the build platform.
[0123] A number of different models can be used to model the mapping from an edge profile x to a disparity vector y. Examples of such models include support-vector regression, linear regression, and polynomial regression. Neural networks can also be used. These include single layer networks or multiple layer networks. Additionally, it is possible to use regression trees as well as various random forest methods.
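As an illustration of the simplest of these models, a linear mapping from edge profiles to disparities can be fit by ordinary least squares. The training set here is synthetic and noise-free, purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 200, 5

# Synthetic training set: edge profiles X and disparities y generated from
# a known linear rule (hypothetical numbers, for illustration only).
X = rng.normal(size=(n_samples, n_features))
w_true = np.array([0.5, -1.0, 2.0, 0.0, 0.3])
y = X @ w_true + 0.7  # constant offset plays the role of a bias term

# Fit y ≈ X·w + b by ordinary least squares.
A = np.hstack([X, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat, b_hat = coef[:-1], coef[-1]
```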
[0124] To carry out machine learning using the neural network 114 shown in
[0125] The neural network 114 uses the training data to successively refine estimates of weights between its nodes. It does so by using a suitable optimization algorithm. An example of such an algorithm is a stochastic gradient descent based on training data.
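The weight-refinement loop can be sketched as mini-batch stochastic gradient descent on a squared-error loss. A single-layer (linear) model with synthetic data is used here for brevity; the patent's network may be deeper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true  # synthetic, noise-free targets

w = np.zeros(4)  # initial weight estimate
lr, batch = 0.05, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)            # random mini-batch
    grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / batch  # squared-error gradient
    w -= lr * grad                                        # gradient step

loss = float(np.mean((X @ w - y) ** 2))  # final training error
```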
[0126] Having obtained suitable weights, the neural network 114 estimates the resulting model's predictive capacity using the validation data and also provides an estimate of uncertainty in its prediction of depth values.
[0127] To promote more efficient computation, it is useful to truncate the edge profile x to include only the edge's immediate neighborhood. The bounds of this neighborhood can easily be estimated using simple thresholding to obtain its approximate end points.
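The thresholding step can be as simple as keeping only the samples that lie strictly between the profile's two plateaus. A minimal sketch with a synthetic profile:

```python
import numpy as np

# Synthetic edge profile: low plateau, ramp, high plateau.
x = np.concatenate([np.full(20, 5.0),
                    np.linspace(5.0, 200.0, 10),
                    np.full(20, 200.0)])

# Keep samples more than 5% of the full range away from either plateau.
span = x.max() - x.min()
low, high = x.min() + 0.05 * span, x.max() - 0.05 * span
inside = np.where((x > low) & (x < high))[0]
start, stop = int(inside[0]), int(inside[-1]) + 1

truncated = x[start:stop]  # the edge's immediate neighborhood
```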
[0128] Information from a spatial neighborhood can improve depth estimation. One way to do this is to carry out post-processing steps, such as applying a noise-reduction filter, for example a smoothing filter, to the depth data over a small spatial neighborhood. Another way is to learn an estimating function based on a vector that has been derived from a small spatial neighborhood of the edge.
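A noise-reduction filter of the kind mentioned above can be a simple moving average over a small neighborhood of depth samples:

```python
import numpy as np

def smooth_depth(depth, window=5):
    """Moving-average smoothing over a small spatial neighborhood."""
    kernel = np.ones(window) / window
    return np.convolve(depth, kernel, mode="same")

# An isolated 5 mm spike is averaged down to 1 mm by a 5-sample window.
depth = np.zeros(11)
depth[5] = 5.0
smoothed = smooth_depth(depth)
```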
[0129] Having described the invention and a preferred embodiment thereof, what is new and secured by letters patent is: