Resistance is not futile...
Part 1 of this tutorial showed why using an LED is not as straightforward as using a light bulb. Now we take a look at different ways to operate an LED.
The simplest and best-known way to operate an LED is to add a series resistor. The value of this resistor is quite easy to determine:
- Start with the operating point of the LED. E.g. an LED might have a nominal current of 20 mA and a correlated forward voltage of 2.1 V (from the data sheet).
- Assume that the LED is operating at that point. Then take your supply voltage into consideration. All the voltage that doesn't drop over the LED must drop over the series resistor. With a supply voltage of, for example, 5 V, the voltage over the resistor would be 5 - 2.1 = 2.9 V.
- The current through the resistor is the same as through the LED. In our example this is 20 mA (0.02 A).
- Knowing U and I, we can calculate the resistance, since R = U/I (Mr Ohm again...). In our case that is 2.9 V / 0.02 A = 145 Ohms. The next readily available resistor value would be 150 Ohms.
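The steps above can be sketched in a few lines of Python. The variable names and the E12 table of standard resistor values are mine, not from the tutorial; only the 5 V / 2.1 V / 20 mA numbers come from the example.

```python
# Series-resistor calculation for the example above:
# 5 V supply, LED at 20 mA / 2.1 V (data sheet operating point).
V_SUPPLY = 5.0   # supply voltage in volts
V_LED = 2.1      # LED forward voltage at the operating point
I_LED = 0.020    # desired LED current in amperes (20 mA)

v_resistor = V_SUPPLY - V_LED   # voltage that must drop over the resistor
r_exact = v_resistor / I_LED    # Ohm's law: R = U / I
print(f"Exact value: {r_exact:.0f} Ohms")   # 145 Ohms

# Pick the next standard value (E12 series) that is at least r_exact,
# so the actual current stays at or below the nominal 20 mA.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
candidates = [e * 10 ** d for d in range(7) for e in E12]
r_chosen = min(r for r in candidates if r >= r_exact)
print(f"Chosen standard value: {r_chosen:.0f} Ohms")  # 150 Ohms
```

Rounding up rather than down is a deliberate choice: a slightly larger resistor errs toward less current, which is the safe direction for the LED.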
Now we take another look at the U/I diagram of the LED. But this time we swap I and U axes.
What does that swapping mean? Well, imagine we had a way to drive a defined current into the LED. We could then measure the corresponding voltage across the LED.
Now what happens if there is a resistor in series with the LED? We know that for a resistor, any current through it results in a proportional voltage (U = I x R). So we can add that linear voltage ramp on top of the LED curve. The next diagram shows the resistor curve in yellow, the LED curve in blue as before and the combined curve of the series circuit in orange.
Finally we swap I and U back again. What we get is a "tamed" LED. With an LED, the problem is the steep part of the U/I diagram. Every little change in voltage results in a big difference in current. By adding a resistor, the curve is flattened and the problem is defused.
For comparison: in the diagram with only an LED, moving down from the 20 mA operating point by 0.1 V reduces the current to about 9 mA - i.e. to less than 50%! With the series resistor added, the same voltage drop only reduces the operating current to about 19 mA.
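This comparison can be checked numerically with a simple exponential (Shockley-style) LED model. The constants below are not data sheet values; they are fitted so the curve passes through the two points this tutorial uses (2.1 V / 20 mA and roughly 2.0 V / 9 mA), so treat this as a sketch of the idea, not a model of a real part.

```python
import math

# Assumed LED model: I(V) = I_S * exp(V / V_SLOPE), fitted to the
# tutorial's numbers (NOT taken from any data sheet).
V_SLOPE = 0.125                          # volts per e-fold of current
I_S = 0.020 / math.exp(2.1 / V_SLOPE)    # anchors 20 mA at 2.1 V

def led_current(v_led):
    """LED current in amperes for a given LED voltage."""
    return I_S * math.exp(v_led / V_SLOPE)

def operating_point(v_supply, r):
    """Bisect on the LED voltage until v_led + I(v_led) * r == v_supply."""
    lo, hi = 0.0, v_supply
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mid + led_current(mid) * r > v_supply:
            hi = mid
        else:
            lo = mid
    return mid

# Bare LED: 0.1 V less and the current collapses from 20 mA to about 9 mA.
print(f"bare LED at 2.0 V: {led_current(2.0) * 1000:.1f} mA")

# With the 150 Ohm resistor: lowering the 5 V supply by 0.1 V
# barely moves the current.
for v_supply in (5.0, 4.9):
    v_led = operating_point(v_supply, 150.0)
    print(f"{v_supply} V supply: {led_current(v_led) * 1000:.1f} mA")
```

The bisection simply searches for the LED voltage at which Kirchhoff's voltage law is satisfied, i.e. where the blue LED curve meets the load line of the resistor.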
This method of limiting the current of an LED works very well and is extremely simple to implement. But it does have its limits. If the supply voltage varies by more than a few percent, the current will still drift noticeably (in our example about 6 mA / V).
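That sensitivity figure follows directly from the resistor value. A tiny sketch (neglecting the LED's own small dynamic resistance, which would reduce the figure slightly):

```python
# How much the LED current changes per volt of supply variation.
# With the series resistor dominating, dI/dV_supply is roughly 1 / R;
# the LED's own dynamic resistance is neglected in this estimate.
R = 150.0                 # series resistor from the example, in ohms
sensitivity = 1.0 / R     # amperes per volt
print(f"{sensitivity * 1000:.1f} mA per volt")  # 6.7 mA per volt
```

1/150 Ω works out to about 6.7 mA/V, which matches the "about 6 mA / V" quoted above once the LED's slope is taken into account.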
And if the supply voltage is only a little higher than the forward voltage of the LED, the series resistor would have to be so small that it would hardly flatten the critical slope of the LED's U/I curve.
Example: In my project "Simple Torchlight for Kids" I used a 9 V battery, a white LED (Uf = 3.5 V) and a 220 Ohm resistor. Note that a 9 V battery contains six cells of 1.5 V each (6 x 1.5 = 9 V). As the cells discharge, their voltage drops to about 1.1 V each when empty. So a 9 V battery starts at 9 V and gradually goes down to under 7 V (6 x 1.1 = 6.6 V).
Someone suggested in the comments to use two LEDs in series (forward voltage: 3.5 + 3.5 = 7 V). The voltage over the series resistor would then range from 2 V down to almost 0 V, which means the current at a battery voltage of 7 V would be almost 0 mA. So you wouldn't get much light. Even with a fresh battery, the current would only be about (9 - 7) V / 220 Ohms = 9 mA. The two LEDs would each light at only about half intensity, together giving roughly the light of a single LED. And that defeats the purpose...
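The two configurations can be compared over the battery's discharge range. The function below is my own illustration, using the simplification of a fixed 3.5 V forward voltage per LED (no curve model); the 9 V, 220 Ohm and 6.6 V endpoint numbers come from the example above.

```python
# Torchlight example: 9 V battery, 220 Ohm resistor, white LEDs with
# Uf = 3.5 V. Compare one LED against two in series as the battery
# runs down from 9 V (fresh) to about 6.6 V (empty).
def led_current_ma(v_batt, n_leds, uf=3.5, r=220.0):
    """Approximate LED current in mA, assuming a fixed forward
    voltage per LED; zero below the combined forward voltage."""
    i = (v_batt - n_leds * uf) / r
    return max(i, 0.0) * 1000

for v in (9.0, 7.8, 6.6):
    print(f"{v:.1f} V battery: "
          f"one LED {led_current_ma(v, 1):.1f} mA, "
          f"two LEDs {led_current_ma(v, 2):.1f} mA")
```

The single LED runs from about 25 mA down to about 14 mA over the battery's life, while the two-LED chain starts at only about 9 mA and goes completely dark before the battery is empty.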
In such difficult cases, we would need a magic resistor that changes its value in response to drifts in temperature and supply voltage...