Summary

I was given a Wemos D1 Mini (ref: https://www.wemos.cc/en/latest/d1/d1_mini.html), which is an ESP8266-based board similar to many other boards in the Arduino ecosystem. The ESP8266 contains built-in WiFi (802.11 b/g/n), a 10-bit ADC, and of course digital GPIO pins (plus that single analog input).

As part of teaching myself more about this ecosystem and electronics in general, I wanted to use the built-in ADC to run some experiments. The first was to build a potential (or voltage) divider so that I could experiment with making a voltmeter. The Wemos already has a potential divider built from 220K and 100K resistors, extending the effective input range from the ADC's native 0-1.0V to 0-3.3V (the same as VCC; the voltage that the board itself supplies through USB, or the voltage expected on the onboard voltage pins).

The initial experiments were not great. The calculated voltages were all over the place, high or low. At first, I assumed I had done something wrong. After much exploration, futzing around, and searching, I learned these things:

  1. the built-in ADC is not great. It shares some on-die infrastructure with the WiFi, so using one can impact the other.
  2. ADCs often exhibit one (or more) of three forms of error:
    1. offset error: the readings are consistently high or low by a fixed amount (additive; e.g., +1 unit)
    2. slope error: the readings are consistently high or low by a proportion (multiplicative; e.g., 5% high or low)
    3. non-linearity: the error varies from one end of the range to the other in a curvy, non-uniform way
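To make these three error forms concrete, here's a toy model in Python; every constant is invented for illustration and nothing here is measured from the D1 Mini. Offset and slope invert cleanly once characterized; non-linearity is the one that needs a curve fit or a lookup table.

```python
# Toy 10-bit ADC error models (invented constants, illustration only).

def with_offset(code, offset=1):
    # additive: every reading shifts by a fixed amount
    return code + offset

def with_slope(code, gain=1.05):
    # multiplicative: readings run ~5% high across the whole range
    return round(code * gain)

def with_nonlinearity(code):
    # a gentle bow: zero error at the ends, worst near mid-scale
    return code + round(4 * (code / 1023) * (1 - code / 1023))

# The first two can be undone exactly once you know the constants:
def undo_offset(reading, offset=1):
    return reading - offset

def undo_slope(reading, gain=1.05):
    return round(reading / gain)
```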

After fighting with things for far too long, I ran across several pages that I thought were useful.

  1. Overall, the consensus is that the ESP8266 ADC is fine but not great.
  2. Some folks "solved" the non-linearity problem by fitting a polynomial to a 3- or 4-point curve. (This is very clever; I tried it with 3 points and it was a big help, but still insufficient.) (NOTE: numpy can fit polynomials given some data points.)
  3. Others took the "brute force" approach and used a DAC (more or less the opposite of an ADC) and mapped an observed value to an expected one. In other words, if the DAC is hooked up to the ADC and the DAC is emitting exactly 2.70V, and the measurement shows 2.95, you can compensate for that to some degree.
  4. Various other approaches like oversampling, means, p90, etc.
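As a sketch of the polynomial-fit idea (point 2 above): numpy can pin a curve through a few calibration points. The three (measured, expected) pairs below are invented for illustration, not taken from this board.

```python
import numpy as np

# Hypothetical 3-point calibration: raw ADC codes vs. the codes we expected.
measured = np.array([2.0, 530.0, 1075.0])
expected = np.array([0.0, 512.0, 1023.0])

# Three points pin a 2nd-degree polynomial exactly.
coeffs = np.polyfit(measured, expected, deg=2)

def correct(reading):
    """Map a raw reading back onto the expected scale."""
    return float(np.polyval(coeffs, reading))
```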

The approach I took is this:

  1. I purchased from Adafruit (ref: https://www.adafruit.com/) the MCP4725 12-bit DAC breakout board (reference: https://www.adafruit.com/product/935)
  2. I also purchased a precision LM4040 Voltage reference, also from Adafruit: https://www.adafruit.com/product/2200
  3. I used the ESP8266 web server and hooked up routines so that I can "drive" the bulk of the process over HTTP. The goal is to have an HTTP endpoint (or, really, endpoints) that:
    1. Set the DAC
    2. Wait a very small period of time (~5ms)
    3. Take a measurement from the ADC
    4. Compare the measurement to what it should be.
    5. Respond with a CSV output line containing all of the above and more.
  4. I authored bash to drive this.
  5. Run
  6. Chart results
  7. Learn
  8. Make adjustments
  9. Go to 5.
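The bash driver itself isn't shown, but the loop can be sketched in Python. The hostname and the endpoint path here are assumptions (the post only shows the handleSetDACPosition() handler), so treat this as a shape, not the real tooling:

```python
import urllib.request

BASE = "http://d1mini.local"  # hypothetical hostname; adjust for your network


def set_dac(position):
    # Endpoint path is an assumption based on handleSetDACPosition().
    url = "%s/setDACPosition?newDACValue=%d" % (BASE, position)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()


def parse_csv_line(header, line):
    # Turn one CSV response line into a dict keyed by the header row.
    return dict(zip(header.strip().split(","), line.strip().split(",")))


def sweep(step=4):
    # Walk the 12-bit DAC range; the device settles and measures per point.
    for position in range(0, 4096, step):
        set_dac(position)
        # ...fetch the measurement endpoint, then parse_csv_line(...) it...
```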

What follows is some of that data and some of those experiments.

First, I authored a function (rms) to take the root mean square of a bunch of measurements:

/* headroom: (2^31) / (1023^2) ≈ 2051, so cap at 2047 samples to keep the
 * 32-bit accumulator from overflowing */
static const uint_fast16_t MAX_RMS_SAMPLE_COUNT = 2048 - 1;
uint_fast16_t rms(int (*func)(uint8_t), uint8_t intArg, uint_fast16_t sample_count) {
    if (sample_count > MAX_RMS_SAMPLE_COUNT) {
        sample_count = MAX_RMS_SAMPLE_COUNT;
    }
    if (sample_count == 0) {
        return 0; /* avoid dividing by zero below */
    }
    uint_fast32_t v = 0;
    for (uint_fast16_t i = 0; i < sample_count; ++i) {
        /* widen func's output (an int; analogRead returns 0..1023) to 32
         * bits so we can safely square and accumulate
         */
        uint_fast32_t sample = func(intArg);
        v += sample * sample;
    }
    v /= sample_count;
    return sqrt((float)v);
}

This is how I call it:

uint_fast16_t readADC(uint8_t adcPin, uint_fast16_t sample_count) {
    return rms(analogRead, adcPin, sample_count);
}
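A quick Python sanity check of the same math (not device code): for a steady, strictly non-negative signal, the RMS of repeated samples sits just above the mean, so it acts like a noise-tolerant average here.

```python
import math

def rms(samples):
    # Same math as the C++ rms(): mean of the squares, then the square root.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A steady reading of ~512 counts with a couple of counts of noise.
noisy = [510, 512, 514, 511, 513, 512, 512, 510, 514, 512]
```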

I have a Calibration structure which I populate:

struct Calibration {
    uint_fast16_t DACposition;
    uint_fast16_t expectedADC;
    uint_fast16_t measuredADC;
    uint_fast16_t adjustedADC;
    int_fast16_t measuredDifference;
    int_fast16_t adjustedDifference;
};

void populateCalibration(struct Calibration *c, boolean withVoltageDivider, uint_fast16_t sample_count) {
    c->DACposition = dacPosition;
    if (withVoltageDivider) {
        c->expectedADC = (float(dacPosition)/4.0) / undo_voltage_divider_factor;
    } else {
        c->expectedADC = dacPosition/4;
    }
    c->measuredADC = readADC(ADC_PIN, sample_count);
    c->adjustedADC = adjustADC(c->measuredADC);
    c->measuredDifference = (int_fast16_t)c->measuredADC - (int_fast16_t)c->expectedADC;
    c->adjustedDifference = (int_fast16_t)c->adjustedADC - (int_fast16_t)c->expectedADC;
}

Please forgive any FIXMEs.

Example handlers:

void handleSetDACPosition() {
    /* get args */
    String v = server->arg("newDACValue");
    if (v == "") {
        server->send(400, "text/plain", "need newDACValue param.");
        return;
    }

    uint_fast16_t newPos = v.toInt();
    setDacPosition(newPos);
    server->send(200, "text/plain", String(dacPosition));
}
In [1]:
import pandas
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

# set the style to something nice(r)
plt.style.use('fivethirtyeight') # bmh is also nice; https://vegibit.com/matplotlib-in-jupyter-notebook/

# read the data
df = pandas.read_csv("without-divider.csv", usecols=[0,1,2,4])

Let's see what it looks like.

In [2]:
df.head()
Out[2]:
DACposition expectedADC measuredADC measuredDifference
0 0 0 0 0
1 4 1 1 0
2 8 2 5 3
3 12 3 5 2
4 16 4 4 0

What does it look like?

In [3]:
plt.plot(df.expectedADC, df.measuredDifference, linewidth=1.0)
Out[3]:
[<matplotlib.lines.Line2D at 0x7fdff2d99c50>]

Interesting, with what appear to be some modalities (more on that later).

One way to handle calibrations is to set an offset for every possible value. I consider this a "brute force" approach. The following routine does that.

In [4]:
#! /usr/bin/env python3
import textwrap

fn="without-divider.csv"
output = list()
with open(fn) as fh:
    keys = next(fh).strip().split(',')
    for i, l in enumerate(fh, 0):
        l = l.strip()
        if not l: continue
        v = dict(zip(keys, l.split(',')))
        if int(v['expectedADC']) != i:
            raise ValueError(v['expectedADC'] + ' != ' + str(i))
        diff = -int(v['measuredDifference'])
        # int_fast8_t is only guaranteed to hold -128..127
        if diff < -128 or diff > 127:
            raise ValueError("diff %d won't fit; use int_fast16_t" % diff)
        output.append('%d' % (diff,))

txt = ", ".join(output)

with open("adc_offset_adjustment_lut.h", "w") as fh:
    fh.write("#ifndef ADC_OFFSET_ADJUSTMENT_LUT__H\n")
    fh.write("static const int_fast8_t adc_offset_adjustments[] = {\n")
    fh.write("\n".join(textwrap.wrap(txt)))
    fh.write("};\n\n")
    fh.write("#define ADC_OFFSET_ADJUSTMENT_LUT__H\n")
    fh.write("#endif // ADC_OFFSET_ADJUSTMENT_LUT__H\n")
In [5]:
cat adc_offset_adjustment_lut.h
#ifndef ADC_OFFSET_ADJUSTMENT_LUT__H
static const int_fast8_t adc_offset_adjustments[] = {
0, 0, -3, -2, 0, -1, -1, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 2, 2, 2, 2,
2, 2, 3, 3, 2, 1, 1, 2, 1, 2, 2, 2, 2, 3, 2, 1, 1, 1, 1, 1, 1, 2, 2,
2, 1, 1, 0, 1, 1, 1, 1, 1, 2, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0,
1, 0, 1, 1, 0, -1, -1, 0, 0, 0, 0, 0, 0, 1, -1, -1, -1, -1, -1, -1, 0,
0, 0, -1, -1, -1, -2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -2,
-1, -1, -1, -2, -2, -2, -1, -1, -1, -1, -2, -2, -2, -3, -2, -2, -2,
-2, -2, -2, -2, -3, -3, -4, -3, -3, -2, -2, -2, -5, -6, -6, -7, -7,
-7, -7, -6, -6, -6, -6, -6, -7, -7, -8, -7, -7, -7, -7, -7, -7, -7,
-7, -8, -8, -8, -7, -7, -7, -7, -7, -8, -8, -8, -7, -8, -8, -8, -8,
-7, -8, -9, -8, -8, -8, -8, -7, -7, -8, -9, -9, -8, -9, -8, -8, -8,
-8, -8, -9, -9, -9, -9, -9, -9, -8, -8, -8, -8, -9, -8, -10, -9, -9,
-9, -9, -9, -8, -9, -9, -9, -9, -9, -10, -9, -9, -9, -10, -10, -10,
-9, -9, -9, -9, -9, -10, -10, -11, -10, -10, -10, -10, -10, -9, -9,
-11, -11, -11, -10, -11, -11, -10, -10, -10, -11, -11, -11, -12, -11,
-12, -12, -11, -11, -12, -12, -12, -11, -11, -12, -12, -12, -11, -13,
-13, -12, -12, -12, -13, -13, -13, -13, -13, -13, -13, -13, -13, -13,
-13, -13, -13, -14, -14, -14, -13, -13, -14, -14, -14, -14, -13, -13,
-13, -13, -14, -14, -14, -15, -14, -14, -14, -14, -14, -13, -13, -15,
-15, -15, -15, -14, -14, -14, -14, -14, -14, -15, -15, -15, -15, -15,
-15, -14, -14, -15, -14, -14, -15, -14, -15, -15, -15, -15, -15, -15,
-15, -16, -15, -15, -14, -15, -16, -16, -16, -16, -16, -16, -16, -15,
-15, -15, -15, -17, -17, -17, -17, -16, -16, -16, -16, -16, -16, -16,
-17, -17, -17, -17, -17, -16, -16, -17, -17, -17, -17, -16, -17, -17,
-17, -17, -18, -18, -18, -17, -17, -17, -17, -17, -17, -18, -19, -18,
-18, -18, -18, -17, -17, -17, -21, -21, -21, -22, -22, -21, -22, -21,
-21, -21, -22, -22, -22, -22, -22, -22, -22, -22, -22, -21, -22, -23,
-23, -23, -23, -22, -22, -22, -22, -22, -22, -22, -23, -23, -23, -23,
-23, -23, -23, -23, -23, -23, -23, -23, -23, -23, -23, -23, -23, -24,
-24, -23, -23, -23, -23, -24, -24, -23, -24, -24, -24, -24, -24, -24,
-23, -23, -23, -25, -25, -25, -24, -24, -25, -24, -24, -24, -25, -25,
-25, -25, -25, -25, -25, -24, -24, -25, -25, -25, -25, -25, -26, -25,
-25, -26, -26, -26, -25, -25, -26, -26, -25, -25, -26, -27, -27, -27,
-27, -26, -26, -26, -26, -26, -28, -28, -27, -27, -27, -27, -27, -26,
-26, -27, -28, -28, -28, -28, -28, -27, -27, -27, -28, -28, -28, -27,
-27, -27, -29, -29, -29, -28, -28, -28, -28, -28, -28, -29, -30, -29,
-29, -29, -29, -29, -28, -28, -29, -29, -29, -29, -30, -29, -29, -29,
-29, -29, -30, -29, -29, -29, -29, -29, -30, -30, -29, -30, -30, -30,
-29, -29, -29, -29, -28, -28, -30, -30, -30, -30, -30, -30, -30, -29,
-29, -29, -29, -30, -30, -30, -30, -30, -30, -30, -29, -30, -30, -30,
-30, -30, -30, -30, -30, -30, -31, -30, -30, -30, -30, -30, -30, -30,
-30, -31, -31, -31, -31, -31, -31, -30, -30, -30, -32, -32, -32, -31,
-32, -32, -31, -31, -31, -32, -33, -33, -32, -32, -32, -32, -32, -32,
-32, -32, -32, -32, -32, -33, -32, -32, -32, -32, -33, -33, -33, -33,
-32, -32, -32, -33, -34, -36, -36, -37, -37, -36, -37, -37, -37, -37,
-37, -37, -36, -37, -37, -37, -37, -37, -38, -37, -37, -37, -38, -38,
-37, -37, -37, -37, -36, -36, -37, -38, -38, -38, -37, -38, -38, -37,
-37, -37, -37, -37, -38, -38, -38, -38, -37, -37, -37, -37, -38, -38,
-38, -38, -38, -38, -38, -38, -38, -38, -38, -38, -38, -38, -38, -38,
-38, -38, -39, -39, -39, -38, -38, -38, -38, -38, -38, -39, -39, -39,
-39, -39, -39, -38, -38, -38, -38, -39, -40, -40, -39, -39, -39, -39,
-39, -39, -39, -39, -39, -40, -40, -40, -40, -40, -40, -41, -41, -40,
-40, -40, -40, -40, -41, -41, -41, -41, -41, -40, -40, -40, -40, -40,
-40, -42, -42, -41, -41, -41, -41, -41, -40, -40, -41, -42, -42, -42,
-42, -43, -43, -42, -42, -42, -42, -41, -42, -43, -42, -43, -43, -43,
-43, -43, -42, -42, -42, -43, -44, -44, -44, -43, -43, -43, -43, -43,
-43, -43, -43, -44, -44, -44, -44, -43, -43, -43, -44, -44, -44, -43,
-43, -44, -44, -43, -43, -44, -44, -44, -44, -44, -44, -44, -43, -44,
-45, -45, -44, -45, -44, -44, -44, -44, -44, -44, -46, -45, -45, -45,
-45, -45, -45, -45, -45, -45, -45, -46, -46, -45, -45, -45, -45, -45,
-46, -46, -46, -45, -45, -45, -46, -46, -47, -47, -46, -46, -46, -46,
-46, -46, -46, -47, -47, -47, -47, -47, -47, -47, -46, -47, -47, -47,
-47, -48, -47, -47, -47, -47, -47, -47, -47, -48, -47, -47, -48, -48,
-47, -47, -47, -50, -50, -50, -52, -52, -52, -51, -51, -51, -51, -51,
-51, -51, -51, -51, -52, -52, -52, -52, -52, -51, -52, -52, -52, -52,
-51, -52, -52, -52, -52, -53, -53, -53, -52, -53, -53, -52, -52, -52,
-53, -53, -53, -52, -52, -51, -50, -49, -48, -47, -46, -45, -44, -43,
-42, -41, -40, -39, -38, -37, -36, -35, -34, -33, -32, -31, -30, -29,
-28, -27, -26, -25, -24, -23, -22, -21, -20, -19, -18, -17, -16, -15,
-14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1};

#define ADC_OFFSET_ADJUSTMENT_LUT__H
#endif // ADC_OFFSET_ADJUSTMENT_LUT__H

You use a lookup table (LUT) like this:

#include "adc_offset_adjustment_lut.h"
uint_fast16_t adjustADCWithLUT(uint_fast16_t adcValue) {
    if (adcValue >= sizeof(adc_offset_adjustments)/sizeof(adc_offset_adjustments[0])) {
        /* problem */
        Serial.println("logic error: adcValue >= LUT size.");
        while (1) { delay(1000); }
    }
    return adcValue + adc_offset_adjustments[adcValue];
}
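The same lookup in Python, with the table truncated to its first few generated entries purely for illustration:

```python
# First few entries of the generated table (the real one has 1024).
adc_offset_adjustments = [0, 0, -3, -2, 0]

def adjust_adc_with_lut(adc_value):
    # Index by the measured code; out-of-range codes are a logic error.
    if adc_value >= len(adc_offset_adjustments):
        raise IndexError("adcValue beyond LUT range")
    return adc_value + adc_offset_adjustments[adc_value]
```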

What does that look like? I put that data into a different CSV file.

In [6]:
df_brute_force = pandas.read_csv("brute-force-without-divider.csv", usecols=[0,1,2,3,4,5])
df_brute_force.head()
Out[6]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference
0 0 0 0 0 0 0
1 4 1 0 0 -1 -1
2 8 2 3 5 1 3
3 12 3 6 9 3 6
4 16 4 4 6 0 2
In [ ]:
 
In [7]:
plt.plot(df_brute_force.expectedADC, df_brute_force.measuredDifference, df_brute_force.expectedADC, df_brute_force.adjustedDifference, linewidth=1.0)
Out[7]:
[<matplotlib.lines.Line2D at 0x7fe0747c8dd8>,
 <matplotlib.lines.Line2D at 0x7fe0747d30f0>]

As you can see, there's just a bunch of noise in the system, and since we're compensating with a fixed per-code offset, we make individual readings less reliable even as we make the overall system better. (A Pareto trade-off?)

Let's try using math to solve this problem. Most ADCs have problems with offset and slope. Let's tackle those.

In [ ]:
 
In [ ]:
 
In [8]:
# calculate offset by determining the least value of y
min(df.measuredDifference)
Out[8]:
-3
In [9]:
# what x value is associated with the highest y value?
def get_x_for_largest_y(dataframe, x_name="expectedADC", y_name="measuredDifference"):
    largest_y = 0
    associated_x = None
    x_index = None
    for (index, (x,y)) in enumerate(zip(dataframe[x_name], dataframe[y_name])):
        if y > largest_y:
            associated_x = x
            x_index = index
            largest_y = y
    return (x_index, associated_x, largest_y)

(x_index, associated_x, largest_y) = get_x_for_largest_y(df)
(x_index, associated_x, largest_y)
Out[9]:
(959, 959, 53)

Another thought: consider that a measured value of 1024 should really have been about 971. Since it's reading high, we could start by adjusting the slope. Let's see what that does. $pre_slope = 971.0/1024.0$, roughly 0.9482421875.
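The slope-only correction, sketched in Python (the 971/1024 ratio comes straight from the paragraph above):

```python
# Scale every raw reading down by the observed slope error.
PRE_SLOPE = 971.0 / 1024.0  # roughly 0.9482

def apply_pre_slope(measured):
    return round(measured * PRE_SLOPE)
```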

Let's try changing the slope first with no offset or other corrections. We'll call that without-divider-pre-slope.csv. Standby.

In [10]:
pre_slope_df = pandas.read_csv("without-divider-pre-slope.csv", usecols=[0,1,2,3,4,5])
pre_slope_df.head()
Out[10]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference
0 0 0 0 0 0 0
1 4 1 0 0 -1 -1
2 8 2 0 0 -2 -2
3 12 3 1 0 -2 -3
4 16 4 2 1 -2 -3
In [11]:
plt.plot(pre_slope_df.expectedADC, pre_slope_df.measuredDifference, pre_slope_df.expectedADC, pre_slope_df.adjustedDifference, linewidth=1.0)
Out[11]:
[<matplotlib.lines.Line2D at 0x7fe074780588>,
 <matplotlib.lines.Line2D at 0x7fe073f18f98>]

That looks pretty good! Let's re-analyze.

In [12]:
get_x_for_largest_y(pre_slope_df)
Out[12]:
(968, 968, 54)

If we stop at that index, what is our average adjusted difference?

In [13]:
stop_idx = get_x_for_largest_y(pre_slope_df)[0]
offset = sum(pre_slope_df.adjustedDifference[:stop_idx+1])/len(pre_slope_df.adjustedDifference[:stop_idx+1])
# we have to remember to "undo" the slope for this since it will be applied _before_ the slope
offset *= (1.0 / (971.0/1024.0))
offset
Out[13]:
-2.271325615182926
In [14]:
# what might it look like if we simply shifted everything up (or down)
# take the lowest unadjusted number and compare it to zero.
min(pre_slope_df.measuredDifference)
Out[14]:
-3

Interesting. We'll still use the previously computed number. Let's see what happens if we add it before we apply the slope.
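A Python sketch of the combined correction, assuming the device adds the offset first and then applies the slope; the offset's sign is flipped here (the readings were running low), which reproduces the first few adjustedADC values in the next cell's head():

```python
PRE_SLOPE = 971.0 / 1024.0
OFFSET = 2.2713  # magnitude from Out[13]; sign flipped so low readings move up

def adjust(measured):
    # Offset first (it was computed in pre-slope units), then the slope.
    return round((measured + OFFSET) * PRE_SLOPE)
```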

In [15]:
offset_then_slope_df = pandas.read_csv("without-divider-offset-then-slope.csv", usecols=[0,1,2,3,4,5])
offset_then_slope_df.head()
Out[15]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference
0 0 0 0 2 0 2
1 4 1 0 2 -1 1
2 8 2 0 2 -2 0
3 12 3 1 3 -2 0
4 16 4 2 4 -2 0
In [16]:
plt.plot(offset_then_slope_df.expectedADC, offset_then_slope_df.measuredDifference, offset_then_slope_df.expectedADC, offset_then_slope_df.adjustedDifference, linewidth=1.0)
Out[16]:
[<matplotlib.lines.Line2D at 0x7fe073e7b5c0>,
 <matplotlib.lines.Line2D at 0x7fe073e7b898>]

That looks good enough for me. What is the average adjusted vs expected?

In [17]:
sum(offset_then_slope_df.adjustedDifference[:stop_idx+1])/len(offset_then_slope_df.adjustedDifference[:stop_idx+1])
Out[17]:
-0.05572755417956656

That looks fine. Going forward, let's use that dataframe and perform some more analysis.

In [18]:
df = offset_then_slope_df
In [19]:
def find_jumps(dataframe):
    # find every place the measuredDifference jumps up by more than 2 LSB
    # This is ... not very great.
    # It doesn't pick up the one at 132 and doesn't find the last one either
    pairs = list(zip(dataframe.expectedADC, dataframe.measuredDifference))
    jumps = []
    prior = pairs[0]
    for (x,y) in pairs[1:]:
        if y - prior[1] > 2:
            jumps.append( (x,y,y-prior[1]) )
        prior = (x,y)
    return jumps
In [20]:
jumps = find_jumps(df)
jumps
Out[20]:
[(136, 5, 3), (400, 21, 3), (664, 36, 3), (929, 51, 3)]
In [21]:
print("Difference between jumps:", jumps[0][0], "-", 0, "=", jumps[0][0] - 0)
for i in range(1, len(jumps)):
    print("Difference between jumps:", jumps[i][0], "-", jumps[i-1][0], "=", jumps[i][0] - jumps[i-1][0])
Difference between jumps: 136 - 0 = 136
Difference between jumps: 400 - 136 = 264
Difference between jumps: 664 - 400 = 264
Difference between jumps: 929 - 664 = 265
In [ ]:
 
In [22]:
# due to super weirdness near the end of the chart, exclude all data from that point forwards
# do this optionally
elide_data = True
if elide_data:
    expected_data = df.expectedADC[:x_index+1]
    measured_difference_data = df.measuredDifference[:x_index+1]
    adjusted_difference_data = df.adjustedDifference[:x_index+1]
else:
    expected_data = df.expectedADC
    measured_difference_data = df.measuredDifference
    adjusted_difference_data = df.adjustedDifference
In [ ]:
 
In [23]:
def make_xkcd_plot(dataframe):
    # make a nice xkcd-style plot (for fun)
    with plt.xkcd():
        plt.plot(dataframe.expectedADC, dataframe.measuredDifference, label="measuredDifference")
        plt.plot(dataframe.expectedADC, dataframe.adjustedDifference, label="adjustedDifference")
        plt.xlabel("expectedADC")
        plt.ylabel("difference")
        plt.legend()
In [24]:
def make_big_plots(expected_data, measured_difference_data, adjusted_difference_data):
    fig = plt.figure(figsize=(20,30))
    ax1 = fig.add_subplot(2,1,1)
    ax2 = fig.add_subplot(2,1,2)

    ax1.plot(expected_data, measured_difference_data, linewidth=1.0)
    ax2.plot(expected_data, adjusted_difference_data, linewidth=1.0)

    ax1.set_xlabel('expectedADC')
    ax1.set_ylabel('measuredDifference')
    # ax1.set_ylim([-3.0,3.0])
    ax1.grid(True)

    ax2.set_xlabel('expectedADC')
    ax2.set_ylabel('adjustedDifference')
    ax2.set_ylim([-5.0,5.0])
    # ax2.set_yticks(np.arange(-5.0, 5.0, 0.5), minor=True)
    ax2.grid(True)
    return (fig, ax1, ax2)

(fig, ax1, ax2) = make_big_plots(expected_data, measured_difference_data, adjusted_difference_data)

modalities = [ 136, 400, 664, 929 ]
divisor = 18
for x in modalities:
  ax1.annotate('modality', xy=(x, x/divisor), xytext=(x-5, x/divisor + 5), arrowprops=dict(facecolor='black', shrink=0.05))

Looks like the inflection point is around row 970/971. There's nothing we can do to improve things past that point that I'm aware of, because we'd need to pre-adjust the ADC itself, which we can't. So we're stuck with a partial range. Boo.

I did measure the ADC output vs. expected and it's awfully close.

As you can see, there is also a modality. This isn't a huge surprise, but I've not seen it written about anywhere else.

The modality works out to multiples of (around) 264. I have no idea why.

The first modality is between X index 0 and 136 (not quite 1/2 of 264), roughly. As you can see, the ADC "jumps" up a bit at that point. The same thing happens every 264 points later. I tried compensating for that and found a 1/2 modality (or rather, the 264 modality was really 2x132), and so on.

I found it wasn't worth it.

I simplified everything into an offset and a slope.

This gets me very close (typically within 2 LSBs). One LSB is $3300mV / 1024$ or about 3.2mV.

I'm totally OK with this being accurate to within 12.8mV on average.

Note that according to the data below (assuming we exclude the weirdness at the top end), we're accurate to +/- 4 LSB in absolute terms. 8 LSB = 25.6mV.
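The LSB-to-millivolt arithmetic above, as a quick Python check:

```python
FULL_SCALE_MV = 3300.0
ADC_COUNTS = 1024

mv_per_lsb = FULL_SCALE_MV / ADC_COUNTS  # about 3.22 mV per count
typical_window = 4 * mv_per_lsb          # +/-2 LSB -> ~12.9 mV wide
absolute_window = 8 * mv_per_lsb         # +/-4 LSB -> ~25.8 mV wide
```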

In [25]:
# what is the rough distribution of the _adjusted_ difference?
bin_counts, bin_values, _ = plt.hist(adjusted_difference_data)
In [26]:
list(zip(list(bin_counts),list(bin_values)))
Out[26]:
[(46.0, -3.0),
 (94.0, -2.3),
 (211.0, -1.6),
 (0.0, -0.9000000000000004),
 (278.0, -0.20000000000000018),
 (219.0, 0.5),
 (0.0, 1.1999999999999993),
 (92.0, 1.8999999999999995),
 (19.0, 2.5999999999999996),
 (1.0, 3.3)]

Quite happy with that.

With Voltage Divider

This next section all involves the voltage divider (potential divider). The resistors I'm using all claim to be 1%. They are:

  • $20K Ohm$ on the high side
  • $5.1K Ohm$ on the low side

Which works out as: $(20K + 5.1K)/5.1K = 4.92156862745098$

Therefore, at 3300mV nominal we should expect to see $3300mV / 4.92... = 670.5mV$, and indeed we do, at about 674mV. Confirmed by multimeter (an Innova 3320 that I calibrated against 2.048V and 4.096V voltage references. Don't laugh, it's what I've got).
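The divider arithmetic above, as a quick Python check:

```python
# 20K on the high side, 5.1K on the low side.
R_HIGH = 20_000.0
R_LOW = 5_100.0

ratio = (R_HIGH + R_LOW) / R_LOW   # division ratio, about 4.92:1
v_in_mv = 3300.0                   # nominal input
v_at_adc_mv = v_in_mv / ratio      # what the ADC pin should see
```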

What do I get on the 3.3V reference? 3.30-3.31V; it keeps moving around. Interesting. Let me split the difference and call it 3305mV. How much difference does that make? About 0.15%.

Anyway, the following is a run with the DAC hooked up through the voltage divider. One thing to note: the "expected" values are still a bit off. I calculate them as noted above by dividing the DAC position by 4.0 (to account for the 12-bit DAC vs. 10-bit ADC), and then by the voltage divider ratio above (4.92...).

There is a $19.686...:1$ ratio ($4.0 * 4.92...$) between the DAC and the ADC. At full tilt (DAC of 4095) and a vRef of 3300mV, I'd expect an ADC reading of 208 if all were perfect ($1023/4.92 = 208$). With adjustments, I see 206. That is only 2 LSB! One LSB on the low side of the voltage divider is $3300mV/1023 = 3.226mV$; on the high side we have to multiply by the ratio: $3.226mV * 4.9216 = 15.88mV$ per LSB. Full scale is therefore about $1023 * 15.88mV = 16.2V$, which is (presumably) the maximum voltage we can safely measure, although in practice we lose some 7% of the top range due to the ADC reading high. That makes our 'effective' range about $0.93 * 16.2 = 15.1V$, awfully close to automotive voltages.

How close do we get?

Let's see.

Reminder: the DAC can only drive 0-3300mV, which through the divider exercises only $3300/16200 = 0.20$ -=> about 20% of the measurable range.

UPDATE: I could move the ADC vRef to the 5.0 vRef (which is more like... 4.68V), but that noisy vRef would make things worse.

In [27]:
df_v = pandas.read_csv("with-divider.csv")
df_v.head()
Out[27]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference expectedVoltage measuredVoltage adjustedVoltage measuredVoltageDifference adjustedVoltageDifference
0 0 0 0 2 0 2 0.0 0.0 31.75 0.0 31.75
1 4 0 0 2 0 2 0.0 0.0 31.75 0.0 31.75
2 8 0 0 2 0 2 0.0 0.0 31.75 0.0 31.75
3 12 0 0 2 0 2 0.0 0.0 31.75 0.0 31.75
4 16 0 0 2 0 2 0.0 0.0 31.75 0.0 31.75
In [28]:
#(fig, ax1, ax2) = make_big_plots(df_v.expectedVoltage, df_v.measuredVoltageDifference, df_v.adjustedVoltageDifference)
plt.plot(df_v.expectedVoltage, df_v.adjustedVoltageDifference, linewidth=1.0)
Out[28]:
[<matplotlib.lines.Line2D at 0x7fe073b5ce80>]

It's weird that it looks to me like it's reading low, but remember this is only like... 20% of the range.

In [29]:
bin_counts, bin_values, _ = plt.hist(df_v.adjustedVoltageDifference)

Another quick sanity-check: grab a 9V battery. REMEMBER TO DISCONNECT THE DAC or you'll let the magic smoke out. AMHIK. My multimeters both say 9.10V. This thing says 8.8V. That's 300mV off. Not great. I might have to check vs. a third multimeter.

Instead, let me hook up a 4.096V voltage reference (LM4040). What does it say? When hooked up to the 5V ref on the Wemos, I get 4064mV, or 0.78125% low. $9100 * (4064/4096) = 9029$, still higher than $8800$.

I also checked a 3x1.5V Alkaline pack measured at 4.43/4.44. I got 4.381V. 1.1% low.

This tells me that there is more understanding and work to be done. When I use a multimeter to measure the same battery pack through the same potential divider, I get $0.903V x 25100/5100 = 4.444V$, which is right on the money.

I don't understand what's going wrong here.

Bigger Resistors?

It's a new day! I'm going to try 220K and 1M Ohm resistors. Something something impedance. Let's see what happens.

In [30]:
df_v = pandas.read_csv("with-divider-big.csv")
df_v.head()
Out[30]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference expectedVoltage measuredVoltage adjustedVoltage measuredVoltageDifference adjustedVoltageDifference
0 0 0 0 2 0 2 0.0 0.00 35.78 0.00 35.78
1 4 0 0 2 0 2 0.0 0.00 35.78 0.00 35.78
2 8 0 0 2 0 2 0.0 0.00 35.78 0.00 35.78
3 12 0 1 3 1 3 0.0 17.89 53.67 17.89 53.67
4 16 0 1 3 1 3 0.0 17.89 53.67 17.89 53.67
In [31]:
plt.plot(df_v.expectedVoltage, df_v.adjustedVoltageDifference, linewidth=1.0)
Out[31]:
[<matplotlib.lines.Line2D at 0x7fe07396afd0>]
In [32]:
bin_counts, bin_values, _ = plt.hist(df_v.adjustedVoltageDifference)

OK, how about much smaller ones? (1K and 5.1K)?

In [33]:
df_v = pandas.read_csv("with-divider-small.csv")
df_v.head()
Out[33]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference expectedVoltage measuredVoltage adjustedVoltage measuredVoltageDifference adjustedVoltageDifference
0 0 0 0 2 0 2 0.0 0.0 39.35 0.0 39.35
1 4 0 0 2 0 2 0.0 0.0 39.35 0.0 39.35
2 8 0 0 2 0 2 0.0 0.0 39.35 0.0 39.35
3 12 0 0 2 0 2 0.0 0.0 39.35 0.0 39.35
4 16 0 0 2 0 2 0.0 0.0 39.35 0.0 39.35
In [34]:
plt.plot(df_v.expectedVoltage, df_v.adjustedVoltageDifference, linewidth=1.0)
Out[34]:
[<matplotlib.lines.Line2D at 0x7fe073863710>]
In [35]:
bin_counts, bin_values, _ = plt.hist(df_v.adjustedVoltageDifference)

So at full tilt (3300mV) input, with a 1K/5.1K potential divider (ratio: $(5.1+1.0)/1.0 = 6.1:1$) we should see $3300mV / 6.1 = 541mV$. What does my multimeter say? It says 532mV which, worked backwards, is $532 * 6.1 = 3245mV$ (about 1.7% low). That's not too bad.

In [36]:
idx = get_x_for_largest_y(df_v, x_name="expectedVoltage", y_name="measuredDifference")[0]
df_v.iloc[idx]
Out[36]:
DACposition                  4044.00
expectedADC                   165.00
measuredADC                   180.00
adjustedADC                   172.00
measuredDifference             15.00
adjustedDifference              7.00
expectedVoltage              3246.77
measuredVoltage              3541.94
adjustedVoltage              3384.52
measuredVoltageDifference     295.16
adjustedVoltageDifference     137.74
Name: 1011, dtype: float64

Now, I'm currently taking 256 samples with a 500 microsecond delay between readings (and then applying RMS). I noticed that the readings take a while to stabilize in the shell:

With one call to get readings per second on a 9V battery, this is what I see:

5175.16
8599.03
8638.39
8697.42
8717.10
8736.77
8736.77
8756.45
8756.45
8756.45

That's kinda weird. I don't understand that, either.

In [ ]: