I was given a Wemos D1 Mini (ref: https://www.wemos.cc/en/latest/d1/d1_mini.html), which is an ESP8266-based board similar to many other boards in the Arduino ecosystem. The ESP8266 has built-in WiFi (802.11 b/g/n), a 10-bit ADC, and of course digital and analog pins.
As part of teaching myself more about this ecosystem and electronics in general, I wanted to use the built-in ADC to run some experiments. The first experiments were to build a potential (or voltage) divider so that I could experiment with making a voltmeter. The Wemos already has a potential divider on its A0 pin, made from 220K and 100K resistors, which extends the effective input voltage range from the bare ADC's 0-1.0V to roughly 0-3.3V (about the same as VCC: the voltage the board runs at from USB, or the voltage expected on its power pins).
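Assuming the usual arrangement (the 220K in series from the pin, the 100K to ground), the scaling works out to $V_{ADC} = V_{A0} \times \frac{100K}{220K + 100K} = 0.3125 \times V_{A0}$, so about 3.2V at A0 lands right at the bare ADC's 1.0V full scale.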
The initial experiments were not great. The calculated voltages were often all over the place, sometimes high, sometimes low. At first, I assumed I had done something wrong. After much exploration, futzing around, and searching, I ran across several pages that helped me understand what the ADC was actually doing.
What follows is the approach I took, along with some of the data and experiments.
First, I authored a function (rms) to take the root mean square of a bunch of
measurements:
/* Cap the sample count so the sum of squares can't overflow the 32-bit
 * accumulator: roughly (2^(32 - 1)) / (1024^2) => 2048 */
static const uint_fast16_t MAX_RMS_SAMPLE_COUNT = 2048 - 1;

uint_fast16_t rms(int (*func)(uint8_t), uint8_t intArg, uint_fast16_t sample_count) {
  if (sample_count > MAX_RMS_SAMPLE_COUNT) {
    sample_count = MAX_RMS_SAMPLE_COUNT;
  }
  if (sample_count == 0) {
    return 0;  /* avoid dividing by zero below */
  }
  uint_fast32_t v = 0;
  for (uint_fast16_t i = 0; i < sample_count; ++i) {
    /* widen func's output (an int, 0..1023 here) to uint_fast32_t so we
     * can safely square it
     */
    uint_fast32_t sample = func(intArg);
    v += sample * sample;
  }
  v /= sample_count;
  float sampleValue = v;
  sampleValue = sqrt(sampleValue);
  return sampleValue;
}
This is how I call it:
uint_fast16_t readADC(int adcPin, uint_fast16_t sample_count) {
  return rms(analogRead, adcPin, sample_count);
}
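For instance (purely illustrative; A0 is the D1 mini's analog pin, and 256 matches the sample count I settle on later):
void loop() {
  uint_fast16_t raw = readADC(A0, 256);  /* RMS of 256 readings from A0 */
  Serial.println(raw);
  delay(1000);
}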
I have a Calibration structure
which I populate:
struct Calibration {
  uint_fast16_t DACposition;
  uint_fast16_t expectedADC;
  uint_fast16_t measuredADC;
  uint_fast16_t adjustedADC;
  int_fast16_t measuredDifference;
  int_fast16_t adjustedDifference;
};
void populateCalibration(struct Calibration *c, boolean withVoltageDivider, uint_fast16_t sample_count) {
  c->DACposition = dacPosition;
  if (withVoltageDivider) {
    c->expectedADC = (float(dacPosition) / 4.0) / undo_voltage_divider_factor;
  } else {
    c->expectedADC = dacPosition / 4;
  }
  c->measuredADC = readADC(ADC_PIN, sample_count);
  c->adjustedADC = adjustADC(c->measuredADC);
  c->measuredDifference = c->measuredADC - c->expectedADC;
  c->adjustedDifference = c->adjustedADC - c->expectedADC;
}
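populateCalibration relies on adjustADC() and undo_voltage_divider_factor, which I haven't listed. Here's a minimal sketch of how I read their roles, consistent with the offset-then-slope correction worked out later in this post (the offset value is a placeholder; the slope comes from the 971/1024 observation below, and the divider ratio from the 20K/5.1K divider used later):
/* Ratio of the external 20K/5.1K divider: (20K + 5.1K) / 5.1K, about 4.92.
 * Dividing the DAC-derived value by this gives the ADC-side expectation. */
static const float undo_voltage_divider_factor = (20.0 + 5.1) / 5.1;

static const float ADC_OFFSET = 2.0;            /* placeholder; the real value comes from the data below */
static const float ADC_SLOPE  = 971.0 / 1024.0; /* the ADC reads high by roughly this factor */

uint_fast16_t adjustADC(uint_fast16_t measuredADC) {
  /* apply the offset first, then the slope, as discussed in the analysis below */
  float adjusted = (float(measuredADC) + ADC_OFFSET) * ADC_SLOPE;
  if (adjusted < 0.0) {
    adjusted = 0.0;
  }
  return (uint_fast16_t)(adjusted + 0.5);  /* round to the nearest count */
}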
Please forgive any FIXMEs.
Example handlers:
void handleSetDACPosition() {
  /* get args */
  String v = server->arg("newDACValue");
  if (v == "") {
    server->send(400, "text/plain", "need newDACValue param.");
    return;
  }
  uint_fast16_t newPos = v.toInt();
  setDacPosition(newPos);
  server->send(200, "text/plain", String(dacPosition));
}
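The handler registration isn't shown; with the ESP8266WebServer library it would look something like this (the route path and the server construction are my assumptions):
#include <ESP8266WebServer.h>

ESP8266WebServer *server = new ESP8266WebServer(80);  /* assumed; the original setup isn't shown */

void setupRoutes() {
  server->on("/setDACPosition", handleSetDACPosition);  /* route name is illustrative */
  server->begin();
  /* remember to call server->handleClient() from loop() */
}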
import pandas
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# set the style to something nice(r)
plt.style.use('fivethirtyeight') # bmh is also nice; https://vegibit.com/matplotlib-in-jupyter-notebook/
# read the data
df = pandas.read_csv("without-divider.csv", usecols=[0,1,2,4])
Let's see what the data looks like.
df.head()
What does it look like?
plt.plot(df.expectedADC, df.measuredDifference, linewidth=1.0)
Interesting, with what appear to be some modalities (more on that later).
One way to handle calibrations is to set an offset for every possible value. I consider this a "brute force" approach. The following routine does that.
#! /usr/bin/env python3
import textwrap

fn = "without-divider.csv"
output = list()
with open(fn) as fh:
    keys = next(fh).strip().split(',')
    for i, l in enumerate(fh, 0):
        l = l.strip()
        if not l:
            continue
        v = dict(zip(keys, l.split(',')))
        if int(v['expectedADC']) != i:
            raise ValueError(v['expectedADC'] + ' != ' + str(i))
        diff = -int(v['measuredDifference'])
        if diff < -128 or diff > 127:
            # int_fast8_t only guarantees -128..127
            raise ValueError("Have to use int_fast16_t")
        output.append('%d' % (diff,))

txt = ", ".join(output)
with open("adc_offset_adjustment_lut.h", "w") as fh:
    fh.write("#ifndef ADC_OFFSET_ADJUSTMENT_LUT__H\n")
    fh.write("#define ADC_OFFSET_ADJUSTMENT_LUT__H\n")
    fh.write("static const int_fast8_t adc_offset_adjustments[] = {\n")
    fh.write("\n".join(textwrap.wrap(txt)))
    fh.write("\n};\n\n")
    fh.write("#endif // ADC_OFFSET_ADJUSTMENT_LUT__H\n")
cat adc_offset_adjustment_lut.h
You use a lookup table (LUT) like this:
#include "adc_offset_adjustment_lut.h"

uint_fast16_t adjustADCWithLUT(uint_fast16_t adcValue) {
  if (adcValue >= sizeof(adc_offset_adjustments) / sizeof(adc_offset_adjustments[0])) {
    /* problem: the reading is outside the table */
    Serial.println("logic error: adcValue >= LUT size.");
    while (1) { delay(1000); }
  }
  return adcValue + adc_offset_adjustments[adcValue];
}
What does that look like? I put that data into a different CSV file.
df_brute_force = pandas.read_csv("brute-force-without-divider.csv", usecols=[0,1,2,3,4,5])
df_brute_force.head()
plt.plot(df_brute_force.expectedADC, df_brute_force.measuredDifference, df_brute_force.expectedADC, df_brute_force.adjustedDifference, linewidth=1.0)
As you can see, there's just a bunch of noise in the system, and since we're compensating with fixed per-point offsets, we make individual readings less reliable even as we make the overall system better. (Pareto optimization?)
Let's try using math to solve this problem. Most ADCs have problems with offset and slope (gain) error. Let's tackle those.
# calculate offset by determining the least value of y
min(df.measuredDifference)

# what x value is associated with the highest y value?
def get_x_for_largest_y(dataframe, x_name="expectedADC", y_name="measuredDifference"):
    largest_y = 0
    associated_x = None
    x_index = None
    for (index, (x, y)) in enumerate(zip(dataframe[x_name], dataframe[y_name])):
        if y > largest_y:
            associated_x = x
            x_index = index
            largest_y = y
    return (x_index, associated_x, largest_y)

(x_index, associated_x, largest_y) = get_x_for_largest_y(df)
(x_index, associated_x, largest_y)
Another thought I had: a measured value of 1024 should really have been about 971. Since it's reading high, we could start by adjusting the slope. Let's see what that does. $pre\_slope = 971.0/1024.0$, roughly 0.9482421875.
Let's try changing the slope first with no offset or other corrections.
We'll call that without-divider-pre-slope.csv.
Standby.
pre_slope_df = pandas.read_csv("without-divider-pre-slope.csv", usecols=[0,1,2,3,4,5])
pre_slope_df.head()
plt.plot(pre_slope_df.expectedADC, pre_slope_df.measuredDifference, pre_slope_df.expectedADC, pre_slope_df.adjustedDifference, linewidth=1.0)
That looks pretty good! Let's re-analyze.
get_x_for_largest_y(pre_slope_df)
If we stop at that index, what is our average adjusted difference?
stop_idx = get_x_for_largest_y(pre_slope_df)[0]
offset = sum(pre_slope_df.adjustedDifference[:stop_idx+1])/len(pre_slope_df.adjustedDifference[:stop_idx+1])
# remember to "undo" the slope here: this offset gets applied _before_ the slope in the firmware
offset *= (1.0 / (971.0/1024.0))
offset
# what might it look like if we simply shifted everything up (or down)
# take the lowest unadjusted number and compare it to zero.
min(pre_slope_df.measuredDifference)
Interesting. We'll still use the pre-adjusted number. If we add this number before we apply the slope, let's see what happens.
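To spell out the correction being tested here: $adjusted = (measured + offset) \times slope$, with $slope = 971/1024$ and $offset$ being the number settled on above (which is why the slope had to be "undone" when deriving it).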
offset_then_slope_df = pandas.read_csv("without-divider-offset-then-slope.csv", usecols=[0,1,2,3,4,5])
offset_then_slope_df.head()
plt.plot(offset_then_slope_df.expectedADC, offset_then_slope_df.measuredDifference, offset_then_slope_df.expectedADC, offset_then_slope_df.adjustedDifference, linewidth=1.0)
That looks good enough for me. What is the average adjusted vs expected?
sum(offset_then_slope_df.adjustedDifference[:stop_idx+1])/len(offset_then_slope_df.adjustedDifference[:stop_idx+1])
That looks fine. Going forward, let's use that dataframe and perform some more analysis.
df = offset_then_slope_df
def find_jumps(dataframe):
    # find every place the measuredDifference jumps up by more than 2 LSB
    # This is ... not very great.
    # It doesn't pick up the one at 132 and doesn't find the last one either
    pairs = list(zip(dataframe.expectedADC, dataframe.measuredDifference))
    jumps = []
    prior = pairs[0]
    for (x, y) in pairs[1:]:
        if y - prior[1] > 2:
            jumps.append((x, y, y - prior[1]))
        prior = (x, y)
    return jumps

jumps = find_jumps(df)
jumps

print("Difference between jumps:", jumps[0][0], "-", 0, "=", jumps[0][0] - 0)
for i in range(1, len(jumps)):
    print("Difference between jumps:", jumps[i][0], "-", jumps[i-1][0], "=", jumps[i][0] - jumps[i-1][0])
# due to super weirdness near the end of the chart, exclude all data from that point forwards
# do this optionally
elide_data = True
if elide_data:
    expected_data = df.expectedADC[:x_index+1]
    measured_difference_data = df.measuredDifference[:x_index+1]
    adjusted_difference_data = df.adjustedDifference[:x_index+1]
else:
    expected_data = df.expectedADC
    measured_difference_data = df.measuredDifference
    adjusted_difference_data = df.adjustedDifference
def make_xkcd_plot(dataframe):
    # make a nice xkcd-style plot (for fun)
    with plt.xkcd():
        plt.plot(dataframe.expectedADC, dataframe.measuredDifference, label="measuredDifference")
        plt.plot(dataframe.expectedADC, dataframe.adjustedDifference, label="adjustedDifference")
        plt.xlabel("expectedADC")
        plt.ylabel("difference")
        plt.legend()

def make_big_plots(expected_data, measured_difference_data, adjusted_difference_data):
    fig = plt.figure(figsize=(20, 30))
    ax1 = fig.add_subplot(2, 1, 1)
    ax2 = fig.add_subplot(2, 1, 2)
    ax1.plot(expected_data, measured_difference_data, linewidth=1.0)
    ax2.plot(expected_data, adjusted_difference_data, linewidth=1.0)
    ax1.set_xlabel('expectedADC')
    ax1.set_ylabel('measuredDifference')
    # ax1.set_ylim([-3.0, 3.0])
    ax1.grid(True)
    ax2.set_xlabel('expectedADC')
    ax2.set_ylabel('adjustedDifference')
    ax2.set_ylim([-5.0, 5.0])
    # ax2.set_yticks(np.arange(-5.0, 5.0, 0.5), minor=True)
    ax2.grid(True)
    return (fig, ax1, ax2)

(fig, ax1, ax2) = make_big_plots(expected_data, measured_difference_data, adjusted_difference_data)

modalities = [136, 400, 664, 929]
divisor = 18
for x in modalities:
    ax1.annotate('modality', xy=(x, x/divisor), xytext=(x-5, x/divisor + 5),
                 arrowprops=dict(facecolor='black', shrink=0.05))
Looks like the inflection point is around row 970/971. There is nothing we can do at this point to make things better that I'm aware of, because we can't pre-adjust the ADC which is what we'd need to do here. So we're stuck with a partial range. Boo.
I did measure the ADC output vs. expected and it's awful close.
As you can see, there is also a modality. This isn't a huge surprise, but I've not seen it written about anywhere else.
The modality works out to multiples of (around) 264. I have no idea why.
The first modality is between X index 0 and 136 (not quite 1/2 of 264), roughly. As you can see, the ADC "jumps" up a bit at that point. The same thing happens every 264 points later. I tried compensating for that and found a 1/2 modality (or rather, the 264 modality was really 2x132), and so on.
I found it wasn't worth it.
I simplified everything into an offset and a slope.
This gets me very close (typically within 2 LSBs). One LSB is $3300mV / 1024$ or about 3.2mV.
I'm totally OK with this being accurate to within a 12.8mV (4 LSB) window on average.
Note that according to the data below (assuming we exclude the weirdness at the top end), we're accurate to +/- 4 LSB in absolute terms; that's an 8 LSB spread, or about 25.6mV.
# what is the rough distribution of the _adjusted_ difference?
bin_counts, bin_values, _ = plt.hist(adjusted_difference_data)
list(zip(list(bin_counts),list(bin_values)))
Quite happy with that.
This next section all involves the voltage divider (potential divider). The resistors I'm using, which all claim to be 1%, are a 20K and a 5.1K.
That works out as: $(20K + 5.1K)/5.1K = 4.92156862745098$
Therefore, at 3300mV nominal we should expect to see $3300mV / 4.92... = 670.5mV$, and indeed we do, at about 674mV. Confirmed by multimeter (an Innova 3320 that I calibrated against 2.048V and 4.096V voltage references). Don't laugh; it's what I've got.
What do I get on the 3.3V reference? 3.30-3.31V; it keeps moving around. Interesting. Let me split the difference at 3305mV. How much difference does that make? 0.15%.
Anyway, the following is a run with the DAC hooked up through the voltage divider. One thing to note is that the "expected" values are still a bit off. I calculate them as noted above by dividing the DAC position by 4.0 (to account for the 12-bit DAC vs. 10-bit ADC), and then again by the voltage divider ratio above (4.92...).
There is a $19.686...:1$ ratio ($4.0 \times 4.92...$) between the DAC and the ADC. At full tilt (DAC at 4095) and a vRef of 3300mV, I'd expect to see an ADC reading of 208 if all were perfect ($1023/4.92 = 208$). With adjustments, I see 206. That is only 2 LSB! One LSB on the low side of the voltage divider is $3300mV/1023 = 3.226mV$; on the high side we have to multiply that by 4.92...: $3.226mV \times 4.9216 = 15.876mV$ per LSB, which across 1023 counts is a full scale of roughly $16.2V$. That is the maximum voltage we can (presumably) safely measure, although in practice we're eliminating some 7% of the top range due to the ADC reading high. That means our 'effective' range is $0.93 \times 16.2 \approx 15.1V$, awful close to automotive voltages.
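In firmware terms, converting an adjusted ADC reading back into millivolts at the divider's input is just those two scale factors (a sketch; the function name is mine):
/* 3300.0 / 1023.0 is millivolts per LSB at the ADC input; multiplying by the
 * divider ratio (~4.92) scales back up to the high side of the divider. */
float adcToInputMillivolts(uint_fast16_t adjustedADC) {
  return adjustedADC * (3300.0 / 1023.0) * undo_voltage_divider_factor;
}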
How close do we get?
Let's see.
Reminder: the ADC itself only covers 0-3300mV, which is only $3300/16241 \approx 0.20$, about 20% of that full range.
UPDATE: I could move the ADC vRef to the 5.0vRef (which is more like... 4.68V) but the noisy vRef would make things worse.
df_v = pandas.read_csv("with-divider.csv")
df_v.head()
#(fig, ax1, ax2) = make_big_plots(df_v.expectedVoltage, df_v.measuredVoltageDifference, df_v.adjustedVoltageDifference)
plt.plot(df_v.expectedVoltage, df_v.adjustedVoltageDifference, linewidth=1.0)
It's weird that it looks to me like it's reading low, but remember this is only like... 20% of the range.
bin_counts, bin_values, _ = plt.hist(df_v.adjustedVoltageDifference)
Another quick sanity-check: grab a 9V battery. REMEMBER TO DISCONNECT THE DAC or you'll let the magic smoke out. AMHIK. My multimeters both say 9.10V. This thing says 8.8V. That's 300mV off. Not great. I might have to check vs. a third multimeter.
Instead, let me hook up a 4.096V voltage reference (LM4040). What does it say? When hooked up to the 5V ref on the Wemos, I get 4064mV, or 0.78125% low. $9100 * (4064/4096) = 9029$, still higher than $8800$.
I also checked a 3x1.5V Alkaline pack measured at 4.43/4.44. I got 4.381V. 1.1% low.
This tells me that there is more understanding and work to be done. When I use a multimeter to measure the same battery pack through the same potential divider, I get $0.903V \times 25100/5100 = 4.444V$, which is right on the money.
I don't understand what's going wrong here.
It's a new day! I'm going to try 220K and 1M Ohm resistors. Something something impedance. Let's see what happens.
df_v = pandas.read_csv("with-divider-big.csv")
df_v.head()
plt.plot(df_v.expectedVoltage, df_v.adjustedVoltageDifference, linewidth=1.0)
bin_counts, bin_values, _ = plt.hist(df_v.adjustedVoltageDifference)
OK, how about much smaller ones (1K and 5.1K)?
df_v = pandas.read_csv("with-divider-small.csv")
df_v.head()
plt.plot(df_v.expectedVoltage, df_v.adjustedVoltageDifference, linewidth=1.0)
bin_counts, bin_values, _ = plt.hist(df_v.adjustedVoltageDifference)
So at full tilt (3300mV) input, with a 1K/5.1K potential divider (ratio: $(5.1+1.0)/1.0 = 6.1:1$), we should see $3300mV / 6.1 = 541mV$. What does my multimeter say? It says 532mV which, when worked backwards, is 3239mV (1.85% low). That's not too bad.
idx = get_x_for_largest_y(df_v, x_name="expectedVoltage", y_name="measuredDifference")[0]
df_v.iloc[idx]
Now, I'm currently taking 256 samples with a 500 microsecond delay between readings (and then applying RMS). I noticed that the readings take a while to stabilize in the shell:
With one call to get readings per second on a 9V battery, this is what I see:
5175.16
8599.03
8638.39
8697.42
8717.10
8736.77
8736.77
8756.45
8756.45
8756.45
That's kinda weird. I don't understand that, either.
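For reference, one way the 500 microsecond inter-sample delay could be wired into the rms() helper from earlier is a delaying wrapper around analogRead (a sketch; the function names and the placement of the delay are my assumptions):
/* Space the samples out by ~500us so rms() averages over a longer window. */
int analogReadSlow(uint8_t pin) {
  int v = analogRead(pin);
  delayMicroseconds(500);
  return v;
}

uint_fast16_t readADCSlow(int adcPin, uint_fast16_t sample_count) {
  return rms(analogReadSlow, adcPin, sample_count);
}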