Calibrating circuits
Calibration may be a useful solution... or not. You need to know when it is and when it is not.
Some time ago, Michele and I did a study about the precision of temperature measurement in the context of industrial machines. When we presented the conclusions to different teams, we were invariably asked: would it be possible to reduce the error by calibrating the temperature sensors?
I would like to share some reflections about when calibrating a circuit is a good idea. However, I should point out that much of what follows is speculation. I would like to think it is good speculation, but it is speculation nonetheless and would need confirmation. Feel free to send comments.
Calibrating the sensor
In our imaginary system, we will use platinum sensors (PT1000), which are very stable temperature-dependent resistors.
This sensor is made by depositing a very thin layer of platinum over a ceramic substrate. It is easy to understand that any variation in the metal trace width or thickness changes the conductor cross-section and thus the resistance. It is essential for the manufacturer to carefully control both.
In this context, it is reasonable to think that the trace has an «equivalent section», the average of all its infinitesimal sections, and that the slope of the temperature-to-resistance curve changes according to it. If this hypothesis were true, it would be enough to measure the sensor resistance at just one fixed temperature: once you know it, you know how the actual sensor will behave.
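To make this idea concrete, here is a small sketch of what a single-point correction would look like under that hypothesis; the names and numbers are mine, purely for illustration:

```python
# Sketch: single-point calibration under the "equivalent section" hypothesis.
# Assumption: the real sensor behaves as R_real(T) = k * R_nominal(T),
# where k is a constant scale factor set by the average trace cross-section.

R0_NOMINAL = 1000.0  # ohm, nominal PT1000 resistance at 0 degC

def scale_factor(r_measured_at_0c: float) -> float:
    """Estimate k from a single measurement taken in a 0 degC bath."""
    return r_measured_at_0c / R0_NOMINAL

def correct_reading(r_measured: float, k: float) -> float:
    """Remove the scale error before converting resistance to temperature."""
    return r_measured / k

# Example: a sensor reading 1002.3 ohm at 0 degC has k = 1.0023; every later
# reading is divided by 1.0023 before applying the nominal sensor model.
```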
The Callendar-Van Dusen equation relates temperature and resistance. For temperatures above 0 °C it reduces to R(T) = R0·(1 + A·T + B·T²); below 0 °C an additional C·(T - 100)·T³ term appears inside the parentheses.
The constants for IEC 60751 sensors with a temperature coefficient of 3850 ppm/K are A = 3.9083 × 10⁻³ °C⁻¹, B = -5.775 × 10⁻⁷ °C⁻² and C = -4.183 × 10⁻¹² °C⁻⁴.
R0 is the resistance at 0 °C and has a nominal value of 1000 Ω, which gives the sensor its name.
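For reference, a small Python sketch of the relation above 0 °C and its inverse; the function names are mine:

```python
import math

# IEC 60751 coefficients for 3850 ppm/K platinum sensors
A = 3.9083e-3   # 1/degC
B = -5.775e-7   # 1/degC^2
R0 = 1000.0     # ohm, PT1000 nominal resistance at 0 degC

def resistance(t_degc: float) -> float:
    """Callendar-Van Dusen resistance for T >= 0 degC."""
    return R0 * (1.0 + A * t_degc + B * t_degc ** 2)

def temperature(r_ohm: float) -> float:
    """Invert the quadratic R = R0*(1 + A*T + B*T^2) for T >= 0 degC."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohm / R0))) / (2.0 * B)

print(resistance(250.0))    # ~1941 ohm
print(temperature(1385.1))  # ~100 degC
```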
I guess that the manufacturer places the sensor in a well-controlled temperature environment, waits until it reaches thermal equilibrium and then measures its resistance with high precision. Depending on the measured value, the sensor goes into a different quality-grade box. This is important because it is reasonable to expect that the best-grade devices have an approximately uniform distribution, while the lower grade shows a Gaussian with no samples in the central (most accurate) section.
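A sketch of what that binning could look like at 0 °C, assuming the usual IEC 60751 tolerance classes (Class A ≈ ±0.15 °C and Class B ≈ ±0.30 °C at 0 °C); the thresholds in ohms are my own conversion:

```python
# Sketch of manufacturer-style binning at 0 degC (my own simplification).
# Assumed IEC 60751 tolerances at 0 degC: Class A = +/-0.15 degC, Class B = +/-0.30 degC.
R0 = 1000.0
SENSITIVITY_0C = 3.9083                 # ohm/degC at 0 degC (R0 * A)
CLASS_A_OHM = 0.15 * SENSITIVITY_0C     # ~0.59 ohm
CLASS_B_OHM = 0.30 * SENSITIVITY_0C     # ~1.17 ohm

def grade(r_at_0c: float) -> str:
    """Assign a quality grade from the deviation measured in the 0 degC bath."""
    deviation = abs(r_at_0c - R0)
    if deviation <= CLASS_A_OHM:
        return "Class A"
    if deviation <= CLASS_B_OHM:
        return "Class B"
    return "reject / lower class"
```

Note that once the Class A parts are pulled out, whatever remains in the Class B box has, by construction, no samples in the central section of the distribution.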
All the temperature set-point errors and the resistance measurement errors must be taken into account to guarantee that, under worst-case conditions, the sensor is correctly characterized in terms of error.
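One way to picture this, with invented uncertainty figures: the bin limits have to be tightened (guard-banded) by the worst-case sum of the bath and ohmmeter errors.

```python
# Guard-banding sketch with invented uncertainty figures.
BATH_UNCERTAINTY_DEGC = 0.02        # assumed calibration-bath set-point error
OHMMETER_UNCERTAINTY_OHM = 0.05     # assumed resistance-measurement error
SENSITIVITY_0C = 3.9083             # ohm/degC at 0 degC

guard_band_ohm = BATH_UNCERTAINTY_DEGC * SENSITIVITY_0C + OHMMETER_UNCERTAINTY_OHM
effective_class_a_limit = 0.15 * SENSITIVITY_0C - guard_band_ohm
# Only sensors inside this tightened limit can be guaranteed to meet the
# class tolerance under worst-case conditions.
```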
Chemically pure
We can continue speculating. What would happen if the platinum were not chemically pure? The resistance variation with temperature would not be the one predicted by the Callendar-Van Dusen expression but an unknown one.
This means that the manufacturer probably has to characterize the sensor at more than one point.
«The only constant in life is change»
Heraclitus was right in general, but even more so for temperature sensors that operate at high temperature. Although platinum is a noble metal, it shows some drift (oxidation?). The manufacturer specifies «Long term stability: Max R0 drift 0.04 % after 1000 h at 500 °C», which is equivalent to a ±0.2 °C error at 250 °C.
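To see where a figure like ±0.2 °C can come from, divide the resistance drift by the local slope of the sensor curve; the sketch below does this at 250 °C, under my own reading of the datasheet wording (drift referred to the working-point resistance):

```python
# Sketch: converting a resistance drift into an equivalent temperature error.
# The drift (in ohm) is divided by the local slope dR/dT of the sensor curve.
A = 3.9083e-3
B = -5.775e-7
R0 = 1000.0

def slope(t_degc: float) -> float:
    """dR/dT of the Callendar-Van Dusen curve for T >= 0 degC, in ohm/degC."""
    return R0 * (A + 2.0 * B * t_degc)

def drift_error_degc(drift_ohm: float, t_degc: float) -> float:
    """Temperature error equivalent to a given resistance drift at T."""
    return drift_ohm / slope(t_degc)

# Example at 250 degC, where the slope is about 3.62 ohm/degC: a drift of
# 0.04 % of the working-point resistance (~1941 ohm -> ~0.78 ohm) maps to
# roughly 0.2 degC.
```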
Playing the sensor calibrator's game
We can keep insisting on our idea of calibrating temperature sensors in order to improve the measurement accuracy. We could purchase low-cost sensors and do the same thing the manufacturer does.
Every sensor should be uniquely identified and its measurement recorded. When a given sensor is placed in a machine, it should be identified and its calibration data retrieved and used to correct the measurement. If the sensor is replaced for any reason, the correct data has to be used.
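A sketch of the bookkeeping this implies; the record structure and names are hypothetical:

```python
# Hypothetical per-sensor calibration registry.
# Each sensor carries a unique ID; the machine looks up its correction
# before converting resistance to temperature.
from dataclasses import dataclass

@dataclass
class CalRecord:
    sensor_id: str
    r0_measured: float   # ohm, measured in the 0 degC bath
    cal_date: str

registry: dict[str, CalRecord] = {}

def register(record: CalRecord) -> None:
    registry[record.sensor_id] = record

def corrected_resistance(sensor_id: str, r_raw: float) -> float:
    """Scale the raw reading by the individually measured R0."""
    rec = registry[sensor_id]   # replacing the sensor means a different lookup
    return r_raw * (1000.0 / rec.r0_measured)
```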
A good option would be the devices of the M310 series (link), manufactured by Heraeus/Yageo. The actual cost is about 5.3 € for Class A and 3.7 € for Class B in quantities of 1 k (data taken from Mouser).
Does it make sense to calibrate the sensor, with all the difficulties and tracking requirements this implies, when we can buy the best-class sensor for a cost increase of 1.6 € per sensor?
Calibrating electronics
Most times, the measurement of the sensor resistance is referenced to a high-precision resistor. We could use one with 0.1 % initial tolerance and a thermal coefficient of 25 ppm/°C. In a system expected to work from -10 to 60 °C (a variation of ±35 °C around 25 °C), in the worst case the temperature will produce an extra ±0.09 % variation in the resistance value.
Does it make sense to calibrate a 0.1 % device to improve its precision if, due to temperature, it may change by 0.09 %?
Surprisingly, the answer could be «yes» if we use Wheatstone dividers and «no» if we use a single reference resistor. If all resistors are manufactured with the same technology, it is reasonable to expect similar temperature coefficients. This means that the ratio of resistance values is very likely to be quite insensitive to temperature changes. This should be confirmed.
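A quick numeric check of this intuition, with illustrative values: if the two fixed resistors of the divider share the same temperature coefficient, their ratio barely moves, while a single absolute reference drifts by the full 25 ppm/°C.

```python
# Sketch: why matched temperature coefficients help a ratiometric measurement.
# Illustrative values: both fixed resistors drift with the same 25 ppm/degC.
TC = 25e-6            # 1/degC, common temperature coefficient
DT = 35.0             # degC, excursion from the 25 degC reference point

r_ref  = 1000.0 * (1.0 + TC * DT)   # reference leg of the divider
r_meas = 2000.0 * (1.0 + TC * DT)   # other leg, same technology, same TC

print(r_meas / r_ref)   # 2.0: the ratio is unchanged by the common drift
print(TC * DT)          # 8.75e-4, i.e. the ~0.09 % drift of a single reference
```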
In this particular scenario, storing the calibration data could be easier if the measurement is done by an on-board MCU. We could calibrate by using a single very high precision external resistor (ideally near the end of the measured resistance range).
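A sketch of the MCU-side procedure, assuming a hypothetical precision resistor that can be switched into the sensing channel; names and values are assumptions:

```python
# Sketch: single-point gain calibration with one high-precision external resistor.
# Assumption: the MCU can switch the precision resistor in place of the sensor.
R_CAL = 1900.0   # ohm, hypothetical 0.01 % calibration resistor,
                 # chosen near the top of the measured range

def calibrate_gain(raw_reading_ohm: float) -> float:
    """Gain correction factor, to be stored in non-volatile memory."""
    return R_CAL / raw_reading_ohm

def measure(raw_reading_ohm: float, gain: float) -> float:
    """Apply the stored correction to every sensor reading."""
    return raw_reading_ohm * gain
```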
The second most significant effect in the Pareto is typically the amplifier's voltage offset (see the previous article). Calibrating the amplifier offset would be dangerous because the input offset is caused by IC process tolerances and is very likely to be temperature dependent. We can guarantee nothing in this respect.
Playing the electronic calibrator's game
The cost of a 0.1 % resistor with a 25 ppm/°C TC is about 0.02 $ each. A typical 1 % tolerance resistor costs about 0.001 $, twenty times less, but the difference is a mere 0.02 $ per device. If we need this level of precision, the optimum solution is surely to solve the problem by proper design, not by calibration. However, this requires a precise analysis and some testing to validate it.
Summary and conclusions
You can only calibrate things whose variation you understand. You really need to understand the nature of the issues that affect performance.
Before entering the calibration game, a detailed technical and economic analysis is a must.