Evaluate and quantify the drift of a measuring instrument

The question of drift in measuring instruments is an essential one in metrology. Calibration establishes the "metrological state" of the instrument on the date it is carried out, but what does it tell us about tomorrow, a month from now, or the end of the calibration validity period? With certain instrumentation (in particular "fixed" instruments, for instance "physical" standards and limit gauges) the situation is relatively straightforward, but it is not so simple with instruments that take measurements. The FD X 07-014 [1] booklet, moreover, distinguishes between these two types of device: those where drift can be observed directly (since calibration uncertainties tend to be minor, which is generally true of size standards) and the others (measuring instruments). This article deals with the specific case of measuring instruments.

With the advent of VIM 2008 [2], calibration ceased to be more or less confined to a table of deviations between measured values and standard values. Common practice since that date has been to establish the relationship between the measured value y and the standard value x, so as to investigate the instrument over its whole range. This relationship, termed the calibration model, often takes the form of a polynomial of suitable degree (usually 1 or 2):

y = a0 + a1·x + … + an·x^n

NB: for the rest of this article, the convention used is to designate the coefficients of this type of calibration model by the letter a, the index being associated with the power of x.
The French College of Metrology (C.F.M.) has published a technical guide [3] on this subject. It also offers a free download of M-CARE, an Excel application that implements the recommendations in the guide. Yet few laboratories, to our knowledge, currently offer this service. It is to be hoped that industry will become more exacting in this matter, and set about creating these types of model with the care and attention they deserve.
It is already possible to acquire very useful information using Excel. The LINEST matrix function gives an estimate of the coefficients of the polynomial model. As well as the values themselves, it also gives their associated measurement uncertainties. These coefficients and their uncertainties are calculated via the so-called Ordinary Least Squares method (often abbreviated to OLS). It should be remembered, however, that this coefficient estimation method is unsuitable for the vast majority of calibration situations. The estimates are only meaningful if the following three conditions are satisfied:
1. There is no uncertainty associated with the x values (which is not the case during calibration, since there are always measurement standard uncertainties).
2. The measurement uncertainty is constant across the full range of measurements (a condition that is rarely met in metrology, as uncertainties are often proportional to the measured value).
3. There is no covariance between the x values, between the y values, or between the x and y values, whereas such covariances are frequent. (We shall not develop this point further in this article: see [4].)
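For readers who prefer scripting to spreadsheets, what LINEST computes can be sketched in a few lines of Python; NumPy's polyfit plays the role of LINEST here, and the calibration data are invented for illustration:

```python
# Sketch of an OLS fit of a degree-1 calibration model, returning coefficient
# estimates and their standard uncertainties (what Excel's LINEST reports).
# The data below are illustrative, not from a real calibration.
import numpy as np

x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # standard (reference) values
y = np.array([0.02, 2.01, 4.05, 6.02, 8.07, 10.04])  # instrument readings

# polyfit returns the highest power first: [a1, a0] for degree 1.
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
a1, a0 = coeffs
# NB: np.polyfit scales this covariance by chi2/(N - deg - 2), a slightly
# different convention from LINEST's N - deg - 1 degrees of freedom.
u_a1, u_a0 = np.sqrt(np.diag(cov))

print(f"a0 = {a0:.4f} ± {u_a0:.4f}")
print(f"a1 = {a1:.4f} ± {u_a1:.4f}")
```

As expected for a healthy instrument, a0 comes out close to 0 and a1 close to 1.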
Nevertheless, these "infringements" of the correct use of the coefficient estimation method are not detrimental to the approach that we propose for evaluating and taking into account the drift of measuring instruments, if there really is drift…

Does the calibration model actually drift?
It is not easy to demonstrate the drift of a model obtained from calibration. The coefficients of the model have uncertainties, and these uncertainties are correlated. As with any statistical estimation, the reality is "hidden" behind the estimates, which makes it difficult to determine whether or not a drift between two calibrations really exists just from a simple algebraic comparison of the estimates obtained from the latest calibration with those from previous ones. Are the inevitable deviations due to drift, or simply the effect of uncertainties?
To elucidate this vital question, we propose a simple and effective graphical analysis. In order to simplify the demonstration, we shall restrict our investigation to a linear model, but it is perfectly feasible to deal with higher-order models in the same way. We shall therefore model the calibration results on a linear relationship between x and y in the form y = a0 + a1·x, bearing in mind that theoretically a0 is close to 0 and a1 close to 1.
In order to determine whether or not this calibration model, characterized by its a0 and a1 coefficients, changes over time, we shall construct a graph with the a0 values on the abscissa and the a1 values on the ordinate, one point per calibration. The points are plotted in chronological order of calibration and joined by a line.

Case where drift is not demonstrated
If the segments between two consecutive points keep crossing, it is safe to conclude that the apparent differences between coefficients are in reality the result of estimation uncertainties. This situation allows us to conclude, without too high a risk, that the instrument is not exhibiting drift, a hypothesis that should be either confirmed or disproved at further calibrations by simply adding the new points. The "drift" factor in the analysis of the causes of uncertainty of the process concerned may be deemed "negligible".

Case where drift is probable
Conversely, if the segments do not cross (or cross only rarely), there is very good reason to suppose that the instrument is drifting. In this instance, it is advisable to model the behavior of the a0 and a1 parameters as a function of time. Obviously this modeling will not be totally predictive, but it allows us to estimate the portion of uncertainty that is due to drift in an uncertainty calculation.
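The visual "do the segments cross?" criterion can also be checked programmatically; a minimal sketch, with invented (a0, a1) points and a standard orientation test for proper segment intersection:

```python
# Illustrative check of the chronological (a0, a1) polyline for crossings.
# The points below are invented for the example.

def ccw(p, q, r):
    """Signed area test: > 0 if p -> q -> r turns counter-clockwise."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def properly_cross(s, t):
    """True if segments s and t intersect at a single interior point."""
    (p1, p2), (p3, p4) = s, t
    return (ccw(p3, p4, p1) * ccw(p3, p4, p2) < 0
            and ccw(p1, p2, p3) * ccw(p1, p2, p4) < 0)

# (a0, a1) estimates from successive calibrations, in chronological order.
points = [(0.010, 1.000), (0.020, 1.003), (0.012, 0.999), (0.019, 1.004)]
segments = list(zip(points, points[1:]))

# Consecutive segments share an endpoint, so only compare pairs that are at
# least two apart along the polyline.
crossings = sum(properly_cross(segments[i], segments[j])
                for i in range(len(segments))
                for j in range(i + 2, len(segments)))
print(f"{crossings} crossing(s) among {len(segments)} segments")
```

Many crossings point toward estimation noise rather than drift; few or none support the drift hypothesis.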

Modeling the behavior of a0 and a1 with time
The calibration model makes it possible to establish the relationship between measured values (y) and true values (x). This relationship involves the coefficients (a0 and a1), which, we shall presume (Figure 2), change with time. We now, therefore, want to model the temporal evolution of these coefficients.
The regression of each of the coefficients with time can be carried out using the OLS method (for example with Excel), with the temporal uncertainties (days) on the abscissa being negligible, and the uncertainties on the ordinate being independent and of the same amplitude for all calibrations.
To carry out this regression, one marks the calibration dates on the abscissa and the appropriate parameter (a0 or a1) on the ordinate. The "LINEST" function will calculate the coefficients, with the dates converted into a "number of days". The parameters are thus given in "variation per day".
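The same date-to-days regression can be sketched in Python (the calibration dates and a0 values below are invented for illustration):

```python
# Regress one calibration model coefficient (here a0) against time in days.
import numpy as np
from datetime import date

# Hypothetical calibration history: dates and the a0 estimate from each one.
dates = [date(2020, 1, 15), date(2020, 7, 1), date(2021, 1, 10),
         date(2021, 6, 20), date(2022, 1, 5)]
a0 = np.array([0.010, 0.014, 0.019, 0.022, 0.027])

# Convert dates to "number of days" since the first calibration.
t = np.array([(d - dates[0]).days for d in dates], dtype=float)

slope, intercept = np.polyfit(t, a0, deg=1)  # slope is in "a0 units per day"
print(f"a0 drifts by about {slope:.2e} per day")
```

A positive slope here would indicate a steadily growing offset of the instrument.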

Figure 3: Evolution of the slope (a1) of the instrument calibration model
In this example, we have ten calibrations and therefore ten experimental points with which to describe the behavior of each of the calibration model coefficients. Figure 3 and Figure 4 show that we can choose to model both behaviors with straight lines. If observation revealed that either of the progressions was not linear, then a more suitable model would have to be chosen, and the calculations shown below carried out with the chosen models.
The coefficients of these linear models for the evolution of the coefficients (a0 or a1) are annotated with the letter b as follows, where t represents time in days:

a0(t) = b0,0 + b0,1·t
a1(t) = b1,0 + b1,1·t

Figure 4: Evolution of the intercept (a0) of the calibration model over successive calibrations
As always, the fewer calibrations we have, the more imprecise will be our knowledge of the coefficients of these models. This knowledge will develop as more calibrations are carried out. Nevertheless, the uncertainties of these coefficients, uncertainties whose significance is proportionate to the lack of information, will be taken into account.
So, using traditional statistical notation†, under the assumption of normality, the estimators of the coefficients b_{j,0} and b_{j,1} of a linear regression by OLS (where j = 0 or j = 1 refers to the index of the calibration model coefficient a_j, a_{j,i} designates the value of a_j obtained at the i-th calibration, carried out at time t_i, and σ_j² is the residual variance) satisfy:

$$ \begin{pmatrix} \hat B_{j,0} \\ \hat B_{j,1} \end{pmatrix} \sim \mathcal N\!\left( \begin{pmatrix} b_{j,0} \\ b_{j,1} \end{pmatrix},\ \Sigma_j \right) \tag{1} $$

with variance-covariance matrix

$$ \Sigma_j = \sigma_j^2 \begin{pmatrix} \dfrac{1}{n} + \dfrac{\bar t^{\,2}}{S_{tt}} & -\dfrac{\bar t}{S_{tt}} \\[1ex] -\dfrac{\bar t}{S_{tt}} & \dfrac{1}{S_{tt}} \end{pmatrix}, \qquad S_{tt} = \sum_{i=1}^{n} (t_i - \bar t)^2 . $$

The covariance between $\hat B_{j,0}$ and $\hat B_{j,1}$ is given by:

$$ \operatorname{Cov}\bigl(\hat B_{j,0}, \hat B_{j,1}\bigr) = -\,\frac{\sigma_j^2\,\bar t}{S_{tt}} $$

Parameter estimates are obtained from the data (t_i, a_{0,i}, a_{1,i}), i = 1 … n, as follows:

$$ \hat b_{j,1} = \frac{\sum_{i}(t_i - \bar t)\,(a_{j,i} - \bar a_j)}{S_{tt}}, \qquad \hat b_{j,0} = \bar a_j - \hat b_{j,1}\,\bar t . $$

The estimate $\hat\Sigma_j$ of the variance-covariance matrix $\Sigma_j$ is obtained by substituting $\sigma_j^2$ by its estimate $\hat\sigma_j^2 = \frac{1}{n-2}\sum_i \bigl(a_{j,i} - \hat b_{j,0} - \hat b_{j,1} t_i\bigr)^2$ in the expression for $\Sigma_j$.
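These expressions are easy to verify numerically; a minimal Python sketch on invented data, cross-checked against NumPy's own OLS fit:

```python
# Manual OLS for a = b0 + b1*t, reproducing the textbook estimator and
# variance-covariance expressions. Data are invented for illustration.
import numpy as np

t = np.array([0.0, 168.0, 361.0, 522.0, 721.0])   # days since first calibration
a = np.array([0.010, 0.014, 0.019, 0.022, 0.027])  # successive coefficient estimates

n = len(t)
t_bar, a_bar = t.mean(), a.mean()
S_tt = np.sum((t - t_bar) ** 2)

b1_hat = np.sum((t - t_bar) * (a - a_bar)) / S_tt  # slope estimate
b0_hat = a_bar - b1_hat * t_bar                    # intercept estimate

resid = a - (b0_hat + b1_hat * t)
sigma2_hat = np.sum(resid ** 2) / (n - 2)          # residual variance, n - 2 dof

var_b0 = sigma2_hat * (1.0 / n + t_bar ** 2 / S_tt)
var_b1 = sigma2_hat / S_tt
cov_b0_b1 = -sigma2_hat * t_bar / S_tt             # negative whenever t_bar > 0

print(b0_hat, b1_hat, cov_b0_b1)
```

Note the sign of the covariance: with positive times, the intercept and slope estimates are always negatively correlated, which is exactly why coefficients cannot be simulated independently later on.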

"Drift" term estimate for uncertainty calculation
When a linear calibration model drifts, the true model of the instrument has, on date t, moved away from that obtained on the last calibration date t0. As the days pass it gets further from the known model being utilized. Any correction applied to a model that has become obsolete only adds more uncertainties. We shall call the last known model the "Model at t0", and the "Model at t0+Δt" is the "provisional" model Δt days later - the date of the next scheduled calibration, for instance. The coefficients of the "Model at t0" will therefore be represented by a0 and a1. The coefficients of the "Model at t0+Δt" will be represented by a0(Δt) and a1(Δt).

† A capital letter (X) denotes a random variable and a lower-case letter (x) a number. When a lower-case letter has a hat (x̂), it represents the estimate of the corresponding unknown quantity x, i.e. the realization of the random variable X.
It is easy to imagine two straight lines, one for the "Model at t0", the other for the "Model at t0+Δt" moving away from the original position due to the drift. The more time goes by, the wider the deviation between the two models becomes...
As the two models are probably not parallel (unless a0 alone changes), the deviation between the "Model at t0" and the presumed "Model at t0+Δt" depends on both the time elapsed and the value measured across the nominal measurement range of the instrument. Figure 5 shows the deviations between the models extrapolated at 100, 500 and 1000 days with regard to the "Model at t0" across the measurement range of the instrument.
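Subtracting the two straight-line models makes the structure of Figure 5 explicit. Writing a0, a1 for the "Model at t0" and a0(Δt), a1(Δt) for the model Δt days later, the deviation at a measured value x is:

$$ d(x, \Delta t) = \bigl[a_0(\Delta t) + a_1(\Delta t)\,x\bigr] - \bigl[a_0 + a_1\,x\bigr] = \bigl[a_0(\Delta t) - a_0\bigr] + \bigl[a_1(\Delta t) - a_1\bigr]\,x $$

For a fixed Δt the deviation is thus an affine function of x, and it widens as Δt grows and the extrapolated coefficients move away from a0 and a1.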

Figure 5: Evolution of the calibration model
What we clearly have here is a systematic phenomenon. There is nothing random about the deviation between the "Model at t0" and the "Model at t0+Δt". It is a function of the measured value and of the number of days that have elapsed since the last calibration.

An interesting parallel...
Let us take the example of a laboratory: its temperature is considered to be random and, over time, to lie between two limits. In fact, at any given time, it is what it is. We could make a point of knowing what that temperature is and taking it into account so as to make appropriate corrections. Alternatively, we can use the accepted approach, which is to treat the temperature of the laboratory as "random", since we need to be able to calculate the uncertainty that qualifies a result regardless of the day on which the measurement is made. It is therefore the moment at which the measurement is made that gives the temperature its random nature, and not the temperature itself. In the case currently under discussion, the situation is much the same.
Literally, the deviation will be whatever it is, for a given value x in the measurement field and a given measurement day t. As we are proceeding with an uncertainty calculation valid whatever the value of x in the measurement field, and whatever the day t in the period between now and the next calibration, one can consider that the "whatever x is" and "whatever t is" give a random nature to the inevitable deviation between the correction applied and the "true" correction, which nobody knows.

Evolution of deviations from the "Model at t0" with time
Therefore, to evaluate the term for the drift of an instrument, we suggest simply calculating the deviations between the two corrections ("Model at t0" and "Model at t0+Δt") at points dispersed across the whole measurement field of the instrument, then calculating the mean and standard deviation of the deviations obtained. These two terms, mean and standard deviation, are the contributions made by the drift of the instrument to the uncertainty of the measurement process. One could say that they represent the error one risks by not using an appropriate calibration model.
The deviation mean varies according to the number of days between now and the last calibration. Because it varies over the period, we suggest including it in the uncertainty calculation and assuming, for example, that it has a uniform distribution between 0 (at the date of the last calibration) and a value determined by calculation from the periodicity in question. As for the standard deviation, it is taken directly into account in the analysis, as a measurement can be made at any time and at any point of the nominal measurement range of the instrument.

Table 2: Results of analysis of correction deviations between "Model at t0" and "Model at t0+Δt"
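A sketch of this deviation-based evaluation in Python; all model coefficients, the measurement range and the grid below are hypothetical, chosen only to make the mechanics visible:

```python
# Compare corrections from the "Model at t0" with those from the extrapolated
# "Model at t0 + dt" on a grid of points spread over the measurement range,
# then summarize the deviations by their mean and standard deviation.
import numpy as np

a0_t0, a1_t0 = 0.010, 1.0010   # model at the last calibration (hypothetical)
b00, b01 = 0.010, 2.0e-5       # a0(t) = b00 + b01*t (hypothetical evolution)
b10, b11 = 1.0010, -3.0e-6     # a1(t) = b10 + b11*t (hypothetical evolution)

def deviations(dt, x):
    """Deviation d(x, dt) between extrapolated and last-known corrections."""
    a0_dt = b00 + b01 * dt
    a1_dt = b10 + b11 * dt
    return (a0_dt + a1_dt * x) - (a0_t0 + a1_t0 * x)

x_grid = np.linspace(0.0, 10.0, 21)  # measurement range of the instrument
for dt in (100, 500, 1000):
    d = deviations(dt, x_grid)
    print(f"dt={dt:4d} days  mean={d.mean():+.2e}  std={d.std(ddof=1):.2e}")
```

The mean and standard deviation printed for each Δt are the two drift contributions proposed above.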

How should the uncertainty of our model based on the evolution of coefficients be used in our calculations?
However many calibrations we have at our disposal to evaluate whether an instrument is suffering from drift and, where relevant, however the calibration model coefficients behave over time, no model will be free of uncertainty. Taking into account the uncertainties of the model coefficients is not a trivial matter. Numerical simulation is one possible solution, but the context here is very specific. The aim of the exercise is to simulate the possible coefficients to give us a0(Δt) and a1(Δt), then to calculate the mean and standard deviation of the deviations between the "Model at t0" and the "Model at t0+Δt".

Brief reflection on numerical simulation
While the simulation of a single random variable can be achieved simply using spreadsheets, the need here is to simulate correlated random variables. Whatever the evolution model, the model coefficients are not independent of each other. In other words, since we are talking here of the evolution of the coefficients a0 and a1 of the calibration model, which have themselves been modeled by straight lines with coefficients (b0,0, b0,1) (respectively (b1,0, b1,1)), a random value of b0,0 is not necessarily compatible with a random value of b0,1. One can, on the other hand, assume that the pairs (b0,0, b0,1) and (b1,0, b1,1) are independent of each other, on the basis that the evolution of a0 and that of a1 do not have the same technical origin. It is worth putting this hypothesis to the test, and if it is disproved, the terms of the covariance between b0,0, b0,1, b1,0 and b1,1 should be evaluated and used. On this subject, it is a pity that the "LINEST" function of Excel does not give the covariance between coefficient estimates, which must therefore be calculated manually using the formula given in 2.2.1. The residual error variance is given in the results matrix of the "LINEST" function (Table 3). It can also be calculated directly, but in this case attention must be paid to the number of degrees of freedom (n − 2 versus n − 1 for a linear model). [5] provides information on a method to generate correlated random numbers, to which the reader might usefully refer.
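As an alternative to implementing the correlated generator of [5] by hand, here is a sketch in Python, where NumPy draws correlated (b0,0, b0,1) pairs directly from a bivariate normal distribution; the mean vector and covariance matrix are invented for the example:

```python
# Draw correlated (intercept, slope) pairs for one evolution model.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical estimates for the evolution model of a0: intercept b00,
# slope b01, and their variance-covariance matrix (from the OLS formulas).
mean = np.array([0.010, 2.0e-5])
cov = np.array([[4.0e-8, -9.0e-11],
                [-9.0e-11, 4.0e-13]])

# multivariate_normal produces correlated pairs in one call; internally
# this amounts to a Cholesky-type factorization of cov.
draws = rng.multivariate_normal(mean, cov, size=100_000)
emp_cov = np.cov(draws.T)   # empirical covariance, to check the draws
print(emp_cov)
```

The empirical covariance of the draws reproduces the requested matrix, including the negative intercept-slope covariance.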
There are therefore two steps to the calculations.
Step 1: Simulate values (b0,0, b0,1) (respectively (b1,0, b1,1)) according to the normal distribution (1) (cf. 2.2.1), where the coefficients and the residual variance are replaced by their estimates. Using these four simulated values, calculate the a0(Δt) and a1(Δt) coefficients of the "Model at t0+Δt", as many times as required. Then calculate the mean and standard deviation of the model deviations (Table 1 for Δt = 100, Δt = 500 and Δt = 1000). In this way, for each Δt, one obtains as many means and standard deviations as there are simulations.
Step 2: Construct histograms of the means (Figure 7) and standard deviations (Figure 8) obtained through the simulations (step 1), then determine, at a chosen confidence level (as a rule 95%), the maximized values of the "Mean" and "Standard deviation" parameters (Table 4). This is done by searching the simulated values for a value (of the mean and of the standard deviation) that is exceeded by only the complementary proportion (as a rule 5%) of the simulations.

NB: As can be seen from our simulations, the maximized values (Figure 9), like the mean values (Figure 6), seem to follow a linear model based on the number of days. This observation, which should be verified in each particular case, can be very useful when it comes to fixing periodicity, as the choice can now be based on the significance of drift in the instrument versus the other causes of uncertainty of the process…

If the test gives statistically identical variances (test statistic not exceeding its critical value), we can learn more about the variance of the remaining errors by calculating a weighted average of the two variances. The value thus obtained, which is more reliable than taking only one of the two values, becomes the new known value (a priori) for future calibration tests. The procedure is cumulative, and thus becomes increasingly effective in detecting changes in residual error as more calibrations are conducted. If the test gives statistically different variances (test statistic exceeding its critical value), it will be necessary to model the evolution of the residual error variance in order to take into account any deterioration over the days/months/years since the last calibration. The modeling principle for this temporal evolution is the same as that already detailed for the calibration model coefficients. It becomes the third component in the contribution of instrument drift to the overall uncertainty of a measurement process.
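The two steps can be condensed into a short Python sketch. All means, covariance matrices, the measurement range and Δt below are invented; the 95% quantiles of the simulated means and standard deviations give the maximized values:

```python
# Step 1: simulate coefficient pairs and summarize model deviations.
# Step 2: take the 95th percentile of each summary as its maximized value.
import numpy as np

rng = np.random.default_rng(seed=2)
n_sim = 20_000

# Hypothetical evolution models: mean and variance-covariance matrix of
# (intercept, slope) for a0(t), then for a1(t), t counted from t0.
mean0 = np.array([0.010, 2.0e-5])
cov0 = np.array([[4.0e-8, -9.0e-11], [-9.0e-11, 4.0e-13]])
mean1 = np.array([1.0010, -3.0e-6])
cov1 = np.array([[1.0e-8, -2.0e-11], [-2.0e-11, 1.0e-13]])

x_grid = np.linspace(0.0, 10.0, 21)  # instrument measurement range
dt = 500                             # days since the last calibration

# Step 1: draw (b00, b01) and (b10, b11), build the "Model at t0+dt",
# and record the mean / std of its deviations from the "Model at t0".
b0 = rng.multivariate_normal(mean0, cov0, size=n_sim)
b1 = rng.multivariate_normal(mean1, cov1, size=n_sim)
a0_dt = b0[:, 0] + b0[:, 1] * dt   # simulated a0(dt)
a1_dt = b1[:, 0] + b1[:, 1] * dt   # simulated a1(dt)
a0_t0, a1_t0 = mean0[0], mean1[0]  # model at t0 (evolution models at t = 0)

dev = (a0_dt[:, None] + a1_dt[:, None] * x_grid) - (a0_t0 + a1_t0 * x_grid)
means = dev.mean(axis=1)
stds = dev.std(axis=1, ddof=1)

# Step 2: maximized values at the 95% level.
mean_95 = np.quantile(means, 0.95)
std_95 = np.quantile(stds, 0.95)
print(f"maximized mean = {mean_95:.2e}, maximized std = {std_95:.2e}")
```

Repeating the calculation for several Δt values then shows how the maximized contributions grow with the time since the last calibration, which is the information needed to fix the periodicity.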

Conclusion
Drift in measuring instruments is a subject that is dealt with relatively little in the literature. Some rather summary methods suggest calculating, calibration point by calibration point, the difference between the measured values from the calibration at date n−1 and the calibration at date n, taking the largest absolute value into the uncertainty calculations and applying a (typically uniform) probability distribution. These methods, seemingly simple, do not actually show us drift. The subtractions do not reveal any actual evolution of the instrument between the two dates, as each term of the subtraction contains calibration error. The deviation obtained might therefore be nothing more than the result of two different calibration errors, without the instrument having altered in any way. Moreover, the method forces us to analyze the same measurement points at each calibration. The number of points during calibration will always be limited for the obvious reason of cost, and the lower the number of points, the greater the calibration model uncertainty. By changing the points used at each calibration, as our method allows, more information about the instrument can be gained with each calibration. So when there is no drift in the instrument, the various calibration results can be used to estimate the model parameters, thereby reducing the uncertainties related to the reference device. To conclude, these calculations allow us to set calibration periodicity rationally, using our knowledge of the contribution made by drift to the overall uncertainty of the process in which the instrument is used.