A New ADC!

I purchased and set up a new ADS1015 from Adafruit. I left the smaller resistors in place, but (temporarily?) disabled smoothing via RMS (to see the data 'raw') and adjusted the "step" size to account for the fact that this is a 12-bit ADC and I have a 12-bit DAC.

It's so exciting! Let's dive right in.

In [3]:
import pandas
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

# set the style to something nice(r)
# bmh is also nice; https://vegibit.com/matplotlib-in-jupyter-notebook/
plt.style.use('fivethirtyeight')

With the new ADC installed, I set it up to read directly from the DAC, without the potential divider. I made only the minimal code changes necessary: no RMS or any other data smoothing, no offset or slope adjustments, etc...

In [5]:
# read the data
df = pandas.read_csv("without-divider-small-newadc.csv", usecols=[0,1,2,4])
plt.plot(df.expectedADC, df.measuredDifference, linewidth=1.0)
Out[5]:
[<matplotlib.lines.Line2D at 0x7f77cde73b38>]

That's weird, but also awesome. Look at how little noise there is! The ideal in this situation would be a perfectly straight line starting at the origin (0,0) and with no slope. The results are very encouraging, but since there is a clear (constant) slope, I'm going to assume I did something wrong.

Insert thinking here.

Let's start by setting the DAC to its maximum (4095). Remember that the DAC vRef is 3300mV. Done. Checked with a multimeter (exactly 3.30V).

The ADC defaults to GAIN_TWOTHIRDS (2/3 gain), according to the docs, which makes the full-scale range 3/2 of 4096mV, i.e. 6144mV.

3300mV input to the ADC, scaled at 3/2 of 4096mV: $3300mV / (3/2 * 4096mV) * 4095 \approx 2200$. What does the ADC read? It reads 1098, which is just about 1/2 of what it should be. I'm still doing something wrong.

Insert more thinking here.

Aha! This ADC can read negative voltages! It's plus or minus 6144mV. So the total range isn't 0 to 6144mV; it's -6144mV to +6144mV.

Since, for our purposes, we're currently always asking the ADC to compare against ground, we can 'shift' everything to the right.
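To sanity-check that hypothesis, here's a quick sketch of both interpretations (my own framing, using the numbers from the text): treating the code space as covering 0 to 6144mV predicts ~2200, while treating it as covering -6144mV to +6144mV predicts ~1100, right where the 1098 reading landed.

```python
# One way to model the two interpretations (numbers from the text).
v_in = 3300.0       # mV, DAC at maximum
fs = 1.5 * 4096     # 6144 mV full-scale magnitude at GAIN_TWOTHIRDS

expected_unipolar = v_in / fs * 4095        # if codes spanned 0..6144mV: ~2200
expected_bipolar = v_in / (2 * fs) * 4095   # codes span -6144..+6144mV: ~1100
print(round(expected_unipolar), round(expected_bipolar))
```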

One moment please while I make adjustments to the code.

In [3]:
# read the data
df = pandas.read_csv("without-divider-small-newadc-fixed.csv", usecols=[0,1,2,4])
plt.plot(df.expectedADC, df.measuredDifference, linewidth=1.0)
Out[3]:
[<matplotlib.lines.Line2D at 0x7fbac488cda0>]
In [4]:
bin_counts, bin_values, _ = plt.hist(df.measuredDifference)
In [5]:
list(zip(list(bin_counts),list(bin_values)))
Out[5]:
[(3.0, -7.0),
 (4.0, -6.0),
 (4.0, -5.0),
 (3.0, -4.0),
 (4.0, -3.0),
 (4.0, -2.0),
 (4.0, -1.0),
 (130.0, 0.0),
 (1769.0, 1.0),
 (2171.0, 2.0)]

Much better. SO much better. Across (almost) the entire input range (at GAIN_TWOTHIRDS, the least gain) it looks like we see a range of 0 to 3. Amazing!

Before we investigate that weird drop at the end, let's continue exploration. The goal at this point isn't maximum awesome, it's to explore, so there will be brief explorations for many of the various permutations.

5V vRef vs. i2c vRef

Let's run another experiment:

Does the vRef input of the ADC make a difference? For the prior experiments, it was hooked up to the 5V ref on the Wemos D1 Mini. Let's see what happens using the 3.3V reference off of the $i^2c$ connection.

I won't bore you with the details, but the answer is "maybe". Mostly it didn't seem to matter, but (later, when using the voltage divider) it did seem to matter. And I would sometimes get weird, non-reproducible results, suggesting some variation in inputs (and the only input that might change across tests is the vRef).

I'll continue using the 5V reference for the ADC and not the 3.3V that comes off of the i2c connector, partly because while the DAC could output 5V, it's a flaky 5V, whereas the 3.3V seems more stable. I'm trading range for (some) more stability.

Smoothing?

It's worth noting that the prior results were obtained while using a root-mean-square (RMS) with a sample count of 4 (vs. 256 when using the prior very noisy ADC). I don't think it's necessarily worth using a smoothing function at this time, but using a small count (4) costs so little it's not worth changing, either.

Voltage Divider

Let's see what things look like when we go through the voltage divider! The voltage divider is roughly 6.1:1 (5.1K on the high side, 1.0K on the low side), and we input a nominal 3300mV (max) which means max voltage perceived by the ADC would be: $3300mV / 6.1 = 541mV$.
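The divider arithmetic above, as a quick sketch (resistor values from the text):

```python
# Quick check of the divider math (5.1K high side, 1.0K low side).
r_high, r_low = 5100.0, 1000.0      # ohms
v_in = 3300.0                        # mV, nominal max DAC output
ratio = (r_high + r_low) / r_low     # 6.1:1
v_adc = v_in / ratio                 # ~541 mV seen by the ADC
print(ratio, round(v_adc))
```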

Using a gain of GAIN_FOUR (+/- 1024mV) should work nicely.

Let's run a full test suite! But before we do that, let's think about what results we might get. As above, there are four things to concern ourselves with:

  1. Noise (it looks like that won't be an issue)
  2. Offset (ditto)
  3. Slope
  4. Non-linearity

I don't think that non-linearity is likely to be an issue (given the prior data), or anything else other than slope.

If we do get a slope here (and we don't get a slope when measuring without the potential divider) then that suggests that the resistor values aren't perfect.

We've chosen 5.1K and 1.0K but those are 1% resistors. If they're both off by 1% that could easily cause a slope (positive or negative).
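A quick sketch of how far 1% tolerances can move the divider ratio (my own worst-case arithmetic, not measured values): with the two errors pulling in opposite directions, the ratio shifts by roughly ±1.7%, more than enough to explain a percent-level slope.

```python
# How much slope can 1% resistors cause? Worst-case divider-ratio error.
r_high, r_low = 5100.0, 1000.0
tol = 0.01

def divider_gain(rh, rl):
    # fraction of the input voltage that appears on the low side
    return rl / (rh + rl)

nominal = divider_gain(r_high, r_low)
# tolerances pulling in opposite directions give the extremes
low = divider_gain(r_high * (1 + tol), r_low * (1 - tol))
high = divider_gain(r_high * (1 - tol), r_low * (1 + tol))
print(round(low / nominal - 1, 4), round(high / nominal - 1, 4))
```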

Let's see what the data says.

Standby.

In [39]:
df = pandas.read_csv("with-divider-small-newadc-gain-four.csv")
plt.plot(df.expectedADC, df.measuredVoltageDifference, linewidth=1.0)
# REMEMBER: The following results are in mV!
Out[39]:
[<matplotlib.lines.Line2D at 0x7f77c25947f0>]

There appears to be a very mild slope, which could be because the resistors aren't perfect (5.1K and 1.0K). The slope appears to be approximately -20mV at about 20% of ADC (just using my eyeball to measure), which works out to -1%.

Let's put a +1% multiplier in and see what happens.

NOTE: I'm ignoring that weird dropoff at the end (for now).

In [40]:
df = pandas.read_csv("with-divider-small-newadc-gain-four-corrected.csv")
plt.plot(df.expectedADC, df.measuredVoltageDifference, linewidth=1.0)
# REMEMBER: The following results are in mV!
Out[40]:
[<matplotlib.lines.Line2D at 0x7f77c25713c8>]

I'm pretty happy with that. Yes, it's reading consistently high (probably averaging +3mV) but.... eh. The only remaining concern is that weird big dropoff at the end (not sure what that's about).

In [8]:
vals = df.measuredVoltageDifference[:-30]
sum(vals), len(vals), sum(vals)/len(vals)
Out[8]:
(17143.279999999722, 4066, 4.216251844564614)
In [9]:
df.measuredVoltageDifference
Out[9]:
0        3.08
1        3.08
2        3.08
3        3.08
4        6.16
5        3.08
6        3.08
7        3.08
8        3.08
9        0.00
10       3.08
11       3.08
12       3.08
13       3.08
14       3.08
15       3.08
16       3.08
17       3.08
18       3.08
19       3.08
20       3.08
21       3.08
22       3.08
23       0.00
24       3.08
25       3.08
26       3.08
27       3.08
28       3.08
29       3.08
        ...  
4066     0.00
4067     3.08
4068     0.00
4069     0.00
4070     0.00
4071    -3.08
4072     0.00
4073    -3.08
4074    -6.16
4075    -6.16
4076    -6.16
4077    -6.16
4078    -6.16
4079    -9.24
4080    -9.24
4081    -9.24
4082   -12.33
4083    -9.24
4084   -12.33
4085   -15.41
4086   -15.41
4087   -12.33
4088   -15.41
4089   -18.49
4090   -15.41
4091   -15.41
4092   -21.57
4093   -21.57
4094   -18.49
4095   -21.57
Name: measuredVoltageDifference, Length: 4096, dtype: float64

Capacitor?

The experiments above were all run with a 0.1uF capacitor on the low side of the potential divider. Does that make a difference? Let's remove the capacitor and re-run with no other changes:

In [10]:
df = pandas.read_csv("with-divider-small-newadc-gain-four-no-capacitor.csv")
plt.plot(df.expectedADC, df.measuredVoltageDifference, linewidth=1.0)
# REMEMBER: The following results are in mV!
Out[10]:
[<matplotlib.lines.Line2D at 0x7fbabc0fcf60>]

Conclusion? No, the capacitor neither helps nor hurts (at least with this ADC).

CPU Speed?

Does running at 160MHz (vs 80MHz) matter? Maybe another test for another time.

computeVolts

The library I'm using (from Adafruit) has a built-in computeVolts method. Let's see what that looks like. Instead of running a comprehensive test, I think it's reasonable to pick 2 points:

  1. set the DAC to full output (3300mV).
  2. read the ADC (not using the voltage divider); we'll use GAIN_ONE.
  3. also call computeVolts and compare.

Results: no difference. I got 3294mV and computeVolts returned 3.29V.

Doing the same test but with the voltage divider (and with gain set to GAIN_FOUR): 531.63mV vs 0.53V. Same. Very interesting.

9V Battery?

Let's see what this thing says when testing a 9V battery through the voltage divider.

  1. Disconnect DAC from voltage divider (don't want to fry another one).
  2. Set appropriate GAIN ($9000mV / 6.1 \approx 1475mV$). 1475mV is good for GAIN_TWO (+/- 2048mV).
  3. Measure.
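Gain selection like step 2 can be sketched as a tiny helper (hypothetical names; the full-scale values are the ADS1015's documented ranges):

```python
# Pick the smallest ADS1015 full-scale range (in mV) that still covers
# the expected input. Ranges correspond to GAIN_SIXTEEN..GAIN_TWOTHIRDS.
FS_MV = [256, 512, 1024, 2048, 4096, 6144]

def pick_full_scale(expected_mv):
    for fs in FS_MV:
        if expected_mv <= fs:
            return fs
    raise ValueError("input exceeds the ADC's largest range")

print(pick_full_scale(9000 / 6.1))   # 9V through the ~6.1:1 divider
```

This picks 2048mV (GAIN_TWO) for the battery case above, and 1024mV (GAIN_FOUR) for the ~541mV divider case from earlier.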

Results: Very strange. It reports about 8800mV and then over the course of 20 or so seconds climbs to about 8900mV (and maybe keeps climbing).

Let me put the cap back in.

Slower growth, but it's starting at 8904mV, ending around 8923mV. I'm not sure what to think about that. The multimeter says 9070. Let's hook up a 4096mV voltage reference.

3857mV. Very consistently. The multimeter says 3870mV output. This makes me even more suspicious of my 5V reference. I wonder if this is too much juice for USB. Let me use an external 5V reference.

Updated Power

I began to suspect variations in the 3300mV and 5000mV references that come off of the Wemos, since those were (until now) powered by USB.

I rigged up an external power supply that should be more than capable of supplying a reasonable 3300mV and 5000mV.

This got me thinking. How much difference does the external power supply make?

In [19]:
df = pandas.read_csv("without-voltage-divider-external-power.csv")
plt.plot(df.expectedADC, df.adjustedDifference, linewidth=1.0)
Out[19]:
[<matplotlib.lines.Line2D at 0x7f77c320eac8>]

Not... great, and we'll have to start over with some of our adjustments.

Let's double-check how good the power supply's 5V and 3.3V are: according to my multimeter, the 5V supply is 4.998V (very close) and the 3.3V (unused as yet) is 3.27V (not great). By comparison, the 3.3V reference off the Wemos (powered by 5V) is 3335mV (ish), or about 1% high.

Since our readings are high, and the inputs to those readings come from the DAC, if the DAC's input voltage is 1% higher than expected then the ADC would be reading 1% higher as well.

The code currently assumes 3300mV (max) for the DAC output, but that might not be right. How about this instead?

  1. Hook the same 3.3V reference that the DAC uses up to a second channel on the ADC.
  2. Ask the ADC what the voltage is on that channel.
  3. Dynamically change the values in the code to use that for our math.
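The steps above can be sketched as follows (the real routine runs on the microcontroller; names here are hypothetical):

```python
# Sketch of the dynamic-vRef idea: use the measured reference instead of
# a hard-coded 3300 when converting DAC positions to millivolts.
def dac_position_to_mv(position, vref_mv):
    # 12-bit DAC: position 4095 corresponds to the full reference voltage
    return position * vref_mv / 4095.0

# example value: what the ADC might report on the channel wired to the
# DAC's reference (the Wemos rail measured ~1% high earlier)
measured_vref_mv = 3335.0
print(round(dac_position_to_mv(4095, measured_vref_mv)))
```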

One moment while I code that routine up.

In [42]:
df = pandas.read_csv("dynamic-vref.csv", usecols=[0,1,2,3,4,5])
plt.plot(df.expectedADC, df.adjustedDifference, linewidth=1.0)
Out[42]:
[<matplotlib.lines.Line2D at 0x7f77c23eb7f0>]

That's better, but there is still a slope. What is it? NOTE: let's throw out the last 20-ish rows because things got weird there.

In [49]:
df.tail(20).head(5)
Out[49]:
DACposition expectedADC measuredADC adjustedADC measuredDifference adjustedDifference
4076 4076 1643 1650 1650 7 7
4077 4077 1644 1651 1651 7 7
4078 4078 1644 1651 1651 7 7
4079 4079 1645 1651 1651 6 6
4080 4080 1645 1652 1652 7 7

It looks like it peaks at about 7 high at 1644 ($1651/1644 = 1.0042579$). So if we invert that we get: $1644/1651 = 0.9957601$.
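As straight arithmetic (numbers from the table above):

```python
# The correction factor: scale measured readings back down to expected.
measured, expected = 1651, 1644   # from the table above
error_ratio = measured / expected
correction = expected / measured
print(round(error_ratio, 6), round(correction, 6))
```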

Let's put that in as a correction and re-run.

In [56]:
df = pandas.read_csv("dynamic-vref-and-correction.csv")
plt.plot(df.expectedADC, df.adjustedDifference, linewidth=1.0)
Out[56]:
[<matplotlib.lines.Line2D at 0x7f77c1e039b0>]
In [57]:
# also plot the voltage.
plt.plot(df.expectedVoltage, df.adjustedVoltageDifference, linewidth=1.0)
Out[57]:
[<matplotlib.lines.Line2D at 0x7f77c1933898>]

I think that looks fantastic. I'm very happy with that. Let's go through the voltage divider. I'll start by setting the DAC to max (~5V), perform an auto-adjustment (as described above), and take a measurement.

I get 4956mV without the voltage divider.

When using the ADC and calculating to voltage myself, I get: 4862 after correction (about 1.9% low). When calling computeVolts I get 4898 (about 1.17% low).
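Those percentages check out (a trivial sketch; pct_error is my own hypothetical helper):

```python
# Percentage error of each conversion against the 4956 mV reading.
def pct_error(measured_mv, reference_mv):
    return (measured_mv / reference_mv - 1) * 100

print(round(pct_error(4862, 4956), 2))   # my own conversion
print(round(pct_error(4898, 4956), 2))   # computeVolts
```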

Conclusion: Clearly the process I'm using to convert from the ADC value to voltage is not quite the same as what computeVolts is using.

Expansion

  1. read the ADC. This yields a uint16_t (there may or may not be smoothing involved)
  2. adjust the reading. This is the step we've performed in the prior methods.
  3. Call adc2Voltage or computeVolts.

What does adc2Voltage look like vs computeVolts? adc2Voltage:

float adc2Voltage(uint_fast16_t adcValue, boolean withVoltageDivider) {
    // scale the raw reading by the reference voltage over the full ADC range
    float vOut = (float(adcValue) * float(ADC_vRef)) / float(max_adc_range);
    if (withVoltageDivider) {
        // scale back up to the pre-divider voltage
        vOut *= undo_voltage_divider_factor;
    }
    return vOut;
}

computeVolts (I won't reproduce the code here) takes a very different approach which I will reproduce here in spirit and with much obfuscation.

float adafruitConvertADC2Voltage(uint_fast16_t adcValue) {
    return adcValue * (rangeInVoltages / 2048);
}
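To make the difference concrete, here's a Python sketch of the two models (GAIN_ONE assumed; the constants are my guesses, not the author's actual values). The library-style model depends only on the gain's fixed full-scale range; the rail-referenced model moves with whatever vRef it's handed.

```python
# Two conversion models, side by side (GAIN_ONE: 4096 mV full-scale).
def rail_model_mv(adc_value, adc_vref_mv, max_adc_range=2048):
    # like adc2Voltage: scaled by a measured rail reference
    return adc_value * adc_vref_mv / max_adc_range

def internal_model_mv(adc_value, fs_range_mv=4096):
    # like computeVolts: scaled by the gain's fixed full-scale range
    return adc_value * fs_range_mv / 2048

adc = 1650
print(internal_model_mv(adc))                                   # fixed
print(rail_model_mv(adc, 3300.0), rail_model_mv(adc, 3335.0))   # moves with the rail
```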

computeVolts does not account for the voltage divider, so assume that's always false. We set ADC_vRef based off of the same logic that the Adafruit library uses.

AHA.

This external ADC doesn't use the rail vRef; it has its own internal voltage reference. That's what I've been doing wrong!

adc2Voltage with this external ADC isn't... a thing I should even be using. Let me make some minor adjustments.

I'm also going to take out the 1% adjustment on the voltage divider side.

In [66]:
df = pandas.read_csv("dynamic-vref-and-correction-v2.csv")
plt.plot(df.expectedADC, df.adjustedVoltageDifference, linewidth=1.0)
# NOTE: the results are in mV
Out[66]:
[<matplotlib.lines.Line2D at 0x7f77c1156278>]

Awesome. Now let's enable an 'auto-gain' function and re-run. TODO: Figure out what's up with that last big drop. That's kinda weird. Note that we're using 'DACposition' here and not 'expectedADC' because the 'expected' ADC is constantly moving around (due to auto-gain!)

In [87]:
df = pandas.read_csv("dynamic-vref-and-correction-autogain.csv")
plt.plot(df.DACposition, df.adjustedVoltageDifference, linewidth=1.0)
# NOTE: the results are in mV
Out[87]:
[<matplotlib.lines.Line2D at 0x7f77bfdbbe10>]

Reminder! The values on the y-axis of this chart are in mV. It does (kinda) look like there is a negative slope to this. We're already making some corrections; perhaps they could be better. But I'm not going to worry about it.

Back to the Potential Divider

Let's make a run with the DAC feeding the potential divider.

In [91]:
df = pandas.read_csv("autogain-with-divider.csv")
plt.plot(df.DACposition, df.adjustedVoltageDifference, linewidth=1.0)
# NOTE: the results are in mV
Out[91]:
[<matplotlib.lines.Line2D at 0x7f77bf900320>]

The initial drop from near zero is ... weird. It's also got a negative slope. Let's ignore the drop (approximately -12mV) for now. At 2300mV expected we're reading -40 (approximately). If we adjust for the -12mV, then that's a rise of about -30 with a run of 2300, or -1.3-ish percent. I'll put the +1% multiplier back in from earlier. The -12 ... that's just weird.
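The eyeball arithmetic above, spelled out (values verbatim from the text; the text rounds the rise to roughly -30 and calls it -1.3-ish percent):

```python
# Slope estimate after removing the ~-12 mV offset near zero.
offset_mv = -12.0          # the weird drop near zero (eyeballed)
reading_mv = -40.0         # difference at ~2300 mV expected (eyeballed)
run_mv = 2300.0
slope_pct = (reading_mv - offset_mv) / run_mv * 100
print(round(slope_pct, 2))
```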

In [97]:
df = pandas.read_csv("autogain-with-divider-and-slopefix.csv")
plt.plot(df.DACposition, df.adjustedVoltageDifference, linewidth=1.0)
# NOTE: the results are in mV
Out[97]:
[<matplotlib.lines.Line2D at 0x7f77bf5c2d68>]

Not quite perfect, but... that feels awfully good.

Next up:

  1. I'd like to understand the very first and last parts of this graph.
  2. RMS vs. mean vs. trimmed mean (e.g. take 20 samples, sort, throw out 2 on each end, and average the 16 that remain) vs. ????
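A quick sketch of the candidates in item 2 (the sample values are made up for illustration):

```python
import math

def rms(samples):
    # root-mean-square of the samples
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def trimmed_mean(samples, trim=2):
    # sort, drop `trim` samples from each end, average the rest
    kept = sorted(samples)[trim:len(samples) - trim]
    return sum(kept) / len(kept)

# made-up readings: mostly ~100 with two outlier spikes
samples = [100, 101, 99, 100, 140, 60, 100, 101, 99, 100,
           100, 101, 99, 100, 100, 101, 99, 100, 100, 101]
print(sum(samples) / len(samples), rms(samples), trimmed_mean(samples))
```

The trimmed mean shrugs off the two spikes, while RMS (on an all-positive signal like this) lands near the plain mean but gets pulled up slightly by the outliers' squares.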