Sensors with Digital Output – A Metrological Challenge

The Internet of Things (IoT), like many other new applications, requires sensors that can process data inside the sensor and exchange the pre-processed data more or less directly with their environment. Such sensors typically have a digital output and thus challenge current calibration systems, which usually have analogue input channels. Furthermore, most calibration standards were written for an analogue world and are not suited to sensors with internal A/D converters and data pre-processing. Based on the authors' experience with the calibration of accelerometers with digital output, this paper gives an overview of the challenges we will face in a digital sensor world. What will calibration systems for such transducers look like? How can a measurement uncertainty be calculated if the signal processing inside a sensor is a black box? The paper addresses these challenges and tries to give an outlook on how to meet them.


Digital Transducers
Most transducers designed for use in consumer devices or in automotive applications are based on MEMS technology today and no longer have an analogue output. Instead, these transducers combine a MEMS sensor element with an ASIC, which implements signal conditioning, A/D conversion and data transmission functions, in one package. Such sensors have a digital interface, and the communication with measurement applications or controller units is purely digital. The manufacturer 'adjusts' the sensitivity of these sensors within its high-volume production process in a kind of calibration procedure. Most of these sensors will be mounted in customer devices and will usually never be recalibrated.
So why should we care about the calibration of digital sensors in a calibration laboratory? Because there are already laboratory applications on the market that use transducers with a digital output. Examples are sensors inside crash test dummies with an RS485 interface or machine monitoring sensors that communicate with PC software via a CAN-bus interface.
Furthermore, sensors manufactured in high-volume production as described above are cheap, while their performance is increasing from generation to generation. Thus, the authors expect to see more and more laboratory applications where such sensors will be used. And although these sensors are adjusted to a defined sensitivity during the production process, the customer will usually get neither a calibration certificate nor a traceability statement from the manufacturer. Therefore, in certain applications there will be plenty of legal reasons why these transducers have to be calibrated in an ISO 17025 accredited calibration laboratory.

Digital Transducers vs. Analogue Transducers - What's Different?
In a calibration setup for an analogue transducer as shown in Fig. 1, the device under test (DUT) can be regarded as an electric source. The calibration system has to provide a signal conditioner (e.g. a charge amplifier), an A/D converter and calibration software for the signal processing. All three parts of the calibration system are accessible and can themselves be calibrated, or in the case of the software, validated. The calibration method is usually well described in standards like the ISO 16063 series [1] for the calibration of vibration and shock transducers. Furthermore, we can easily synchronize the A/D converters in the reference channel and the DUT channel and can thus determine the magnitude and phase response of the DUT precisely.
In contrast, the digital transducer as shown in Fig. 2 is more or less a black box for the calibration system. Signal conditioning and A/D conversion are now part of the DUT, and the only accessible part of the calibration system is the software. The calibration system may still use an analogue reference transducer, but this raises the question of how to synchronize the A/D converter inside the DUT with the A/D converter in the reference signal path of the calibration system. Is the digital DUT output free of jitter, and can we determine a meaningful phase response of the DUT? Additionally, technological advances in the design of digital sensors allow more and more signal processing capabilities to be integrated into the transducer (see Fig. 3). This can be simple signal weighting filters or advanced functions like FFTs. This raises new questions. What has to be calibrated in this case? A raw data stream (if available)? The filtered data stream? Can the signal processor be updated, and do software revisions have to be taken into account? Currently there are no standards available that describe methods for calibrating such sensors.
However, the challenge is not new to the metrological community, since it is known from the calibration of complex measurement devices like sound level meters or vibration meters. Compared to such devices, a digital accelerometer with integrated signal processing would, for example, be like a vibration meter without a display but with a digital data interface.
New calibration standards for each class of such transducers may be needed, or existing standards may be extended, but that task can be solved with acceptable effort. From the experience of the authors, the real challenge will be to provide a calibration system that can handle the wide variety of interfaces and functions of such transducers.

A Calibration System for Digital Transducers
This section describes a calibration system for digital vibration and shock transducers. However, the main idea of how to handle the variety of digital sensor interfaces applies to other types of transducers as well.

Requirements for the Digital Input Channel
As described in section 1, the calibration system must be capable of communicating with a wide variety of digital transducers. This requires communication on two layers, a hardware layer and a software layer (see Fig. 4). While the hardware layer is responsible for the low-level data transfer over a physical interface like SPI or I2C, the software layer is responsible for initializing the DUT, setting parameters like sampling rate or measurement range, and reading actual data from certain registers in order to transfer it to higher-level functions in the calibration software. Table 1 shows an example of three types of triaxial accelerometers to illustrate this. The accelerometers offer different hardware interfaces for communication: two of them can communicate via SPI or I2C, while the third has only an SPI interface. If the calibration system offered, for example, an SPI interface (hardware layer) as DUT input channel, this interface would still have to handle different clock timings (here 10 MHz and 8 MHz) in order to communicate with all three types of sensors in table 1. However, the variety of common sensor interfaces is much broader than in this example: I3C, CAN, CAN-FD, SENT, PSI5, ZACwire, DTI and CAN-MD are some more interfaces that can be found in datasheets. The calibration system needs to be flexible enough to handle these interfaces on the hardware layer level.
The software layer is responsible for the logical communication with the DUT. For example, the sensors in table 1 can be used in different measurement ranges. To select a measurement range, the software layer needs to write a data word into a corresponding setup register during the initialization of the DUT. Furthermore, the software layer is responsible for reading out the actual measurement data. The red frames in the register map in table 1 show the addresses of the registers where the actual x-axis measurement values of the sensor are stored. Different types of sensors store the data in different registers and, even worse, some store the values of the same axis in different registers with different resolutions (here the ADXL362). The software layer handles this logical communication with the DUT and transports the raw data stream to the higher signal processing functions of the calibration software. Thus, the calibration system needs a flexible architecture on the hardware layer as well as on the software layer.
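The split between the two layers can be sketched in code: each sensor type gets its own 'software driver' behind a common interface, while the hardware layer is abstracted away behind a bus object. This is a minimal sketch; all class names, register addresses and range codes are illustrative assumptions, not taken from a real datasheet, and the mock bus merely stands in for the physical SPI/I2C interface.

```python
from abc import ABC, abstractmethod

class SensorDriver(ABC):
    """Software layer: hides the register map of one sensor type."""
    def __init__(self, bus):
        self.bus = bus  # hardware-layer object (SPI/I2C); here a mock

    @abstractmethod
    def configure(self, measurement_range): ...

    @abstractmethod
    def read_x(self): ...

class ADXL362LikeDriver(SensorDriver):
    # Register addresses and range codes are illustrative, not from a datasheet.
    REG_RANGE = 0x2C
    REG_XDATA_L, REG_XDATA_H = 0x0E, 0x0F  # 12-bit value split over two registers

    def configure(self, measurement_range):
        code = {2: 0b00, 4: 0b01, 8: 0b10}[measurement_range]  # g-ranges
        self.bus.write(self.REG_RANGE, code)

    def read_x(self):
        lo = self.bus.read(self.REG_XDATA_L)
        hi = self.bus.read(self.REG_XDATA_H)
        raw = (hi << 8) | lo
        if raw & 0x800:          # sign-extend the 12-bit two's-complement value
            raw -= 0x1000
        return raw

class MockBus:
    """Stands in for the FPGA-backed hardware layer during a dry run."""
    def __init__(self): self.regs = {}
    def write(self, addr, value): self.regs[addr] = value
    def read(self, addr): return self.regs.get(addr, 0)
```

A second sensor type with a different register map would simply be another subclass of `SensorDriver`; the calibration software above this layer stays untouched.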

From an Analogue to a Digital Calibration System
The calibration system is based on an established calibration system for analogue shock and vibration transducers (see Fig. 5). The system uses an analogue reference transducer for secondary calibrations according to [3]. Alternatively, a laser vibrometer can be used as reference sensor, since the system can also be operated as a primary calibration system for accelerometers according to [2]. The signal conditioners and the A/D converter in the reference signal path were calibrated traceably to a national standard. Furthermore, the system provides closed-loop control of the vibration exciter and can thus represent the measurand acceleration precisely and traceably. Based on this setup, the task is to add an input channel for digital DUTs.
For this purpose the authors are using a specialized input card (UTB = universal tester board) that was originally designed for test systems in sensor production. This well-proven card can be connected directly to the backplane of the vibration controller (see Fig. 6) of the calibration system shown in Fig. 5 and has access to the system clock and the trigger of the A/D converters in the analogue input channels. Furthermore, the UTB can handle both the hardware layer and the software layer of the communication with digital transducers (see Fig. 7). For this purpose, an FPGA on the board can be flexibly reconfigured to provide any required digital hardware interface: any function of the physical layer, like signal timings or the management of data frames, can be programmed into the logic of the FPGA. For the software layer, the UTB provides a real-time processor on which the logical communication with the digital transducer is managed by a kind of 'software driver' (called 'sequence' in Fig. 7). The driver runs on the real-time processor and transfers the sensor data to signal processing units in the controller firmware and/or in the calibration software via a standardized protocol. As shown in the example above, each sensor type can have different setup parameters and register maps, and thus the system needs to provide a dedicated 'software driver' for each sensor type. However, since programming such a 'software driver' does not require much effort and leaves the rest of the firmware as well as the software of the system untouched, the suggested calibration system has a very flexible architecture (see Fig. 8).

Measurement Uncertainty
The determination of a measurement uncertainty budget (MUB) for a calibration system is a complex process that must account for many influence variables. Especially in an electrodynamic calibration system like the one described in section 2, both the mechanical and the electronic parts of the system need to be considered. The basic influence variables of such a system can be found in the standards [2] and [3]. However, since the system in section 2 is based on a well-known analogue calibration system, all mechanical variables from an existing MUB and most of the electrical variables can be reused (see table 2). The MUB may have to be extended by some new variables covering the special conditions of the calibration of a digital transducer.
Table 2. Influence variables for the determination of a measurement uncertainty budget for DTI transducers

Differences in a MUB for Digital Calibration Systems Compared to Analogue Systems
The main difference between the calibration of a digital transducer and that of an analogue transducer is that the signal conditioning, A/D conversion and signal processing inside the digital transducer may be more or less a black box for the determination of a MUB. Does the sensor manufacturer provide sufficient data regarding
• the resolution of the A/D converter?
• the sampling rate of the A/D converter?
• the signal conditioning inside the transducer?
• the signal processing inside the transducer?
• the influence of the stability of the power supply?
Furthermore, if the phase response of the digital transducer shall be measured: is it possible to synchronize the clock of the A/D converter inside the digital transducer with the clock of the A/D converter in the analogue reference sensor path of the calibration system? Is there a possibility to trigger the sampling process in both A/D converters with one hardware trigger signal? What information does the manufacturer provide about the synchronization and triggering of the A/D converter inside the transducer that may help to determine a measurement uncertainty for the phase response measurement?
Practical experience of the authors shows that especially the determination of a measurement uncertainty for the phase response can lead to unexpected difficulties: even if it is possible to synchronize both A/D converters, for example by means of the clock of the digital bus, the data processing inside the transducer can lead to unexpected jitter in the raw data stream. This can happen, for example, if a digital accelerometer provides, besides the measurement of acceleration, additional functions like the measurement of a temperature or a magnetic field. While the register with the acceleration values is usually updated at regular intervals, the signal processor inside the transducer may drop some acceleration data while it is updating the temperature or magnetic field register with new measurement values. This can provoke a significant jitter in the data stream that needs to be considered in the measurement uncertainty estimation of the phase response measurement. However, this type of information can often not be found in the sensor datasheets or other documents published by the sensor manufacturer.
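The effect of such dropped samples on a phase measurement can be reproduced in a small simulation. The sketch below uses assumed, arbitrary numbers (sampling rate, drop pattern) and a plain least-squares sine fit standing in for the system's actual sine approximation; it shows how occasionally dropped samples in the raw data stream produce an apparent phase shift, although the underlying signal is unchanged.

```python
import numpy as np

def fit_phase(samples, f, fs):
    """Phase of a sine of known frequency f, assuming uniform sampling at fs
    (least-squares three-parameter fit in the spirit of ISO 16063 sine
    approximation)."""
    t = np.arange(len(samples)) / fs
    A = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t),
                         np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(A, samples, rcond=None)[0]
    return np.arctan2(a, b)      # model: R*sin(2*pi*f*t + phi) + offset

fs, f, n = 5000.0, 80.0, 2000    # illustrative rates, not from a datasheet
x = np.sin(2*np.pi*f*np.arange(n)/fs)

phi_clean = fit_phase(x, f, fs)  # ideal stream: recovered phase is ~0

# Transducer drops every 100th acceleration sample, e.g. while updating a
# temperature register; the analysis still assumes uniform sampling.
keep = np.ones(n, dtype=bool)
keep[::100] = False
phi_jitter = fit_phase(x[keep][:n//2], f, fs)  # spurious phase shift appears
```

Because each dropped sample shifts all following samples by one sample period relative to the assumed time axis, the fitted phase drifts over the record, which is exactly the kind of error that must enter the uncertainty estimation of the phase response.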

Example: Determination of a Measurement Uncertainty Budget for the Calibration of DTI Transducers
So-called DTI transducers are mainly used in the automotive industry inside crash test dummies. The idea behind these transducers is to reduce the number of sensor cables inside the dummy, because at certain places inside a dummy several transducers have to be installed at the same point. For example, the acceleration in three axes as well as the angular rate in three axes has to be measured in the dummy neck. With analogue sensors, a bundle of six sensor cables has to be routed through the dummy to an analogue data recorder. The DTI technology bundles the analogue sensors together with an electronic module, which provides signal conditioning and A/D conversion of the sensor signals, in one sensor block. From the electronic module the sensor data is transferred via an RS485 bus interface to a digital data recorder with only one cable for all six axes.
Compared to the calibration of similar analogue sensors, the digital calibration has to consider the behavior of DTI acceleration sensors at low excitation amplitudes. This is due to the fact that these sensors are low-sensitivity shock transducers: the gain of the signal conditioner inside the DTI sensor is fixed, and the A/D converter has only a 16-bit resolution. Thus, if the excitation amplitude is low, as during a calibration on a vibration exciter at low frequencies (due to the limited stroke of the shaker), there may be only a few quantization steps of the A/D converter left for the conversion of the analogue signal to a digital data stream (see Fig. 9). Knowing the resolution of the A/D converter and the full-scale measuring range of the DTI transducer, it is possible to calculate a worst-case quantization error for a certain excitation amplitude (see example in Fig. 10).
Fig. 9. A/D conversion error with significant signal amplitude and low signal amplitude
However, in practice the authors observed lower errors, because the sine approximation filtering of the sensor signals used in the calibration system seems to reduce the quantization error. Since a mathematical approach to calculate the reduced maximum quantization error failed, an 'experimental approach' was chosen to estimate the quantization error. For this purpose, a set of measurements was performed at a frequency of 80 Hz, where other disturbances that may influence the measurement uncertainty are known to be very low for this calibration system. The excitation amplitude was decreased step by step, and the remaining number of quantization steps that the A/D converter can provide for the conversion of the signal was calculated. Furthermore, the difference between the output value of the DTI transducer and the acceleration measured by the analogue reference transducer was calculated and plotted against the number of quantization steps.
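The worst-case calculation from the resolution and the full-scale range can be sketched as follows; the full-scale range and the excitation amplitude in the example are illustrative assumptions, not data of a real DTI sensor.

```python
def quantization_budget(full_scale, adc_bits, amplitude):
    """Worst-case quantization error for a sine excitation.

    full_scale : symmetric measuring range +/-full_scale in m/s^2 (assumed)
    adc_bits   : resolution of the A/D converter in bits
    amplitude  : amplitude of the sine excitation in m/s^2
    Returns (quantization steps spanned peak-to-peak, relative worst-case error).
    """
    lsb = 2.0 * full_scale / 2**adc_bits   # size of one quantization step
    steps = 2.0 * amplitude / lsb          # steps used by the signal peak-to-peak
    rel_error = 0.5 * lsb / amplitude      # worst case: +/- half an LSB
    return steps, rel_error

# Illustrative numbers: +/-19620 m/s^2 (about 2000 g) range, 16-bit converter,
# 10 m/s^2 excitation amplitude -> roughly 33 steps, about 3 % worst case.
steps, err = quantization_budget(full_scale=19620.0, adc_bits=16, amplitude=10.0)
```

This is the upper bound referred to in the text; as described above, the sine approximation filtering reduces the error observed in practice below this bound.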
The results for three accelerometers with different measurement ranges can be found in Fig. 11.
The results show that a calibration with excitation amplitudes that leave fewer than ten quantization steps for the A/D conversion makes no sense, because the quantization error grows to an unacceptable level. However, even if more quantization steps of the A/D converter are left, the error varies within a certain range depending on the number of remaining quantization steps. For the determination of a value for the influence variable 'KLSB' in the MUB (see table 2 above), the maximum error was estimated by a simplified envelope curve around the measured values (see Fig. 12).

Fig. 12. Envelope function for the approximation of the quantization error
The envelope function is then used in a calculation scheme for the measurement uncertainty budget. The budget is divided into several frequency ranges, and for each frequency range a maximum or optimal amplitude for calibration purposes was fixed (see table 3). For the calibration of DTI transducers, the MUB calculation sheet has an additional input field in order to determine the uncertainty contribution of the influence variable 'KLSB', describing the quantization error as a function of the nominal sensitivity of the device under test given in xx LSB/(m/s²). Multiplying the nominal sensitivity by the fixed amplitude of the frequency range yields the number of available quantization steps. Finally, this number is used to look up the estimated quantization error from the envelope curve described in Fig. 12. As expected, this uncertainty contribution is only significant at low frequencies, where the maximum displacement amplitude of the vibration exciter limits the acceleration amplitude.
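The look-up step can be sketched as follows. The envelope used here is only a placeholder, since the empirically determined curve of Fig. 12 is not reproduced in this text; the ten-step limit follows from the results above, and all numeric values in the usage example are assumptions.

```python
def klsb_contribution(nominal_sensitivity, amplitude,
                      envelope=lambda steps: min(0.5, 1.0 / steps)):
    """Uncertainty contribution of the influence variable 'KLSB'.

    nominal_sensitivity : sensitivity of the DUT in LSB per (m/s^2)
    amplitude           : fixed calibration amplitude of the frequency
                          range, in m/s^2
    envelope            : relative error vs. available quantization steps;
                          the default is only a stand-in for the empirically
                          determined envelope curve (Fig. 12).
    """
    steps = 2.0 * amplitude * nominal_sensitivity  # available quantization steps
    if steps < 10:
        raise ValueError("fewer than ten quantization steps left: "
                         "calibration not meaningful")
    return envelope(steps)

# Assumed example: 1.67 LSB/(m/s^2) at a fixed amplitude of 10 m/s^2.
u_klsb = klsb_contribution(nominal_sensitivity=1.67, amplitude=10.0)
```

In the real MUB sheet the envelope argument would be replaced by the measured curve of Fig. 12, and the returned value enters the budget only in the low-frequency ranges where the exciter displacement limits the amplitude.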

Conclusions
The number of sensors with a digital output available on the market is increasing quickly. It is predictable that some of these transducers will be used in applications where a traceable calibration of the sensor in an accredited calibration laboratory is inevitable. From a metrological point of view, the change from transducers with analogue output to transducers with digital output is less a revolution than an evolution. However, this evolution requires calibration systems with a new hardware and software architecture in order to be capable of handling the variety of transducer interfaces and sensor types. Furthermore, the determination of a measurement uncertainty for the calibration of a digital transducer will become more laborious because of the lack of information about the internals of the digital transducer or because of specific properties of the transducer. An example of the latter was given above for the calibration of a DTI accelerometer, where the fixed gain of the signal conditioner combined with the limited amplitude of the vibration exciter can lead to a situation where the measurement range of the A/D converter is not used in an optimal way and the measurement uncertainty is increased. In an analogue calibration system, the signal conditioner can simply be switched to a higher gain in the affected frequency range, and the quantization error is usually negligible over the whole frequency range.
The measurement of the phase response of digital transducers will be the most challenging part, because the synchronization of the A/D converters as well as jitter in the raw data stream caused by the transducer can lead to erratic results. This raises the question of whether a new standard should be established which defines minimum requirements a digital transducer has to fulfil regarding its 'calibration capability'.