Once you've built a prototype, you need to be sure that the unit is working to your requirements. When thinking about this specification, it's useful to consider similar commercially available units. Also, it's really satisfying to be able to match or exceed their specifications for a fraction of the cost...

There's a surprisingly large number of parameters to test and specify for a bench PSU. Let's start by examining the key specifications for the Thurlby Thandar PL320, a decent 30 volt, 2 amp model:

Parameter                Specification
Output range             0-32 volts nominal
                         0-2.1 amps nominal
Load regulation          <0.01% of maximum output for 90% load change
Line regulation          <0.01% of maximum output for 10% line voltage change
Ripple and noise         Typically <1mV RMS
Output impedance         CV: typically <5mΩ at 1kHz
                         CC: typically 50kΩ
Transient response       <20µs to within 50mV of setting for 90% load change
Temperature coefficient  Typically <100ppm/°C
Meter resolution         Voltage: 10mV over the entire range
                         Current: 1mA over the entire range
Meter accuracy           Voltage: ±(0.1% of reading + 1 digit)
                         Current: ±(0.3% of reading + 1 digit)

Output range

An amount of over-range is built in, which is normal practice. This unit has 6.667% (voltage) and 5% (current), probably chosen for no reason other than to give nice round numbers.

With my supply, I decided to let the 'fine' controls define the over-range, meaning that with the fine control at minimum, the coarse control will cover 0 to 18 volts. The fine control has a range of 200mV (which is less than normal, but makes precise setting really easy), so the maximum output is 18.2V. You normally need a pre-set adjustment of the maximum output voltage, to take up tolerances in the voltage reference and associated passive components.

A similar rule applies with the current setting, but as my current meters only go up to 1.999 amps, I might restrict the range to 1.8 amps. The fine control has 200mA of range, which would give 2 amps.
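The over-range is just the margin between the true maximum output and the nominal rating, expressed as a percentage. A quick sketch in Python (the 32V/30V and 2.1A/2A figures are the Thurlby's; 18.2V/18V is my design):

```python
def overrange_percent(actual_max, nominal):
    """Over-range expressed as a percentage of the nominal rating."""
    return 100.0 * (actual_max - nominal) / nominal

# Thurlby Thandar PL320: 32 V actual on a 30 V nominal range
print(round(overrange_percent(32.0, 30.0), 3))   # 6.667
# ...and 2.1 A on a 2 A nominal range
print(round(overrange_percent(2.1, 2.0), 3))     # 5.0
# My supply: 18 V coarse range plus a 200 mV fine control
print(round(overrange_percent(18.2, 18.0), 3))   # 1.111
```

Note how much smaller my over-range is - a direct consequence of the deliberately small 200mV fine control.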

Load Regulation

This is a simple and intuitive parameter - how much the output voltage changes when a load is connected to the output. This is a steady-state measurement, unlike output impedance which is discussed below. Of course, the supply must remain in CV mode. This parameter is sometimes called 'load effect'.

A load of 90% seems typical, and the result is expressed in %. For the Thurlby Thandar supply, 0.01% of 30 volts is 3mV. As the built-in meters have a resolution of 10mV, you shouldn't see a change in output voltage when the load is connected, unless of course the voltage was close to a 'threshold' - ie 30.001V.

This implies you need a decent voltmeter to measure this. In this case, a 4¾ digit meter (with a maximum count of 40,000) would be required. You should make this measurement at the maximum rated output voltage because the regulation is usually worse at this setting. I can cheat, because my supply should have the same impedance at all output voltages. You can also cheat by using zener diodes to drop the voltage down so that you can select a higher-resolution range on your meter, but I'm not sure if that's allowed!
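To make the meter requirement concrete, here's a sketch of the arithmetic: the allowed change is a percentage of maximum output, and your meter needs a count small enough to resolve it (a 1mV step at 30V implies at least 30,000 counts, hence the 40,000-count, 4¾ digit meter mentioned above):

```python
def load_regulation_mv(v_max, percent):
    """Allowed output change: a percentage of maximum output, in mV."""
    return v_max * percent / 100.0 * 1000.0

dv = load_regulation_mv(30.0, 0.01)
print(round(dv, 3))   # 3.0 (mV)

# Counts needed to read 30 V in 1 mV steps - finer than the 3 mV change
counts = 30.0 / 0.001
print(int(counts))    # 30000, so a 40,000-count meter is required
```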

Line Regulation

This requires a variac to allow adjustment of the mains input voltage. Every supply will have a certain minimum allowable voltage, below which internal regulators and/or voltage references stop working within spec. As long as you stay away from this (often not spec'd) value, then it's hard to see how the output can be affected with modern circuit techniques.

As with load regulation, the Thurlby unit claims a <3mV performance.

Ripple and noise

This is sometimes called PARD, which stands for Periodic And Random Deviations. It's difficult to measure on an oscilloscope - if you measure the observed peak-to-peak noise and attempt to calculate the RMS value, you'll generally end up on the high side. The problem is that the measurement bandwidth isn't specified here, but your oscilloscope will have a bandwidth of at least 20MHz, probably more... One approach is to repeat the measurement on a commercial supply, look up its spec and find out what your error was. Then, use that as a scale-factor. Note - this is highly non-scientific!

It's useful to trigger the 'scope at line rate to show up any 50/100Hz (or 60/120Hz) components. Then, try careful triggering on the actual waveform - you may discover low-level oscillations...

An ac voltmeter will generally have a limited bandwidth, and it's important to know this and include it in your results. True-rms voltmeters with a decent bandwidth are rather expensive. An article in the April 2000 edition of Electronics World showed how to build an ac voltmeter with a 5MHz bandwidth - although 1mV is probably rather too close to its noise floor.

Output impedance

In CV mode

[Diagram: impedance test set-up]

This is similar to the load regulation specification, but it's measured at 1kHz. A test set-up is required to switch a load on and off 1,000 times a second while the voltage drop is observed on an oscilloscope.

The switch is normally a MOSFET - for example the BUZ11 has an RDS(ON) of 0.011Ω and is rated at over 20 amps. All for less than a pound! The value of R is chosen to give a reasonable current drain - perhaps 50% or 90% of the rated output current. You might want to repeat the test at a range of different output voltages, and quote the highest value.

Note the sense connections, and the wiring of the test equipment. A power supply is only spec'd wrt the sense terminals! If a supply doesn't have them, then use the actual binding posts to make the connections.

Below left is what you might see on the oscilloscope screen if you're unlucky! In practice, you'll need to AC-couple the input and select a much more sensitive range. At 1kHz, you won't get any 'droop' caused by the 'scope AC-coupling capacitor. The deviations (ΔV) should be quite small; indeed the spikes at the switching points will dominate the waveform. Below right is a text-book perfect waveform - yours won't look quite so clean:

[Oscilloscope trace: what you might see...]

[Oscilloscope trace: AC-coupled and magnified]

You'll see this waveform again when looking at transient response. For now, all we're interested in is ΔV - ignore the spikes.

You can hopefully see what's happening - when the load is connected, the output falls quite noticeably. The error amplifier realises this, and increases the drive to the pass transistors. Before long, the output has stabilised close to the original voltage. Any difference between the two is due to the output impedance of the unit, which is a result of non-infinite loop-gain of the error amplifier, and non-zero open-loop impedance of the pass transistors. Knowing the test load resistance, and ΔV, you can calculate the output impedance:

Measure the off-load voltage accurately with a DVM. Measure ΔV from the oscilloscope. (Do it this way round - don't measure the on-load voltage with a DVM, because the test is meant to be at 1kHz - a supply might well perform better at DC!). From that, we can work out ΔI:

ΔI = (VOUT - ΔV) / R
Hence, ZOUT = ΔV / ΔI = (ΔV × R) / (VOUT - ΔV)

Remember to include the resistance of the switch! That's why MOSFETs are preferred for this. Let's put some real numbers into the equations: I'll set my supply to 18 volts, and use a load resistor of 10Ω which is on the low end of its tolerance range (thus absorbing the RDS(ON) of the MOSFET for the sake of simplicity) - this conveniently gives a current of 1.8 amps, or 90% of the rated output. Suppose I find that ΔV is 10mV:

ZOUT = (0.01 x 10) / (18 - 0.01)
Hence, ZOUT = 5.559mΩ

That's rather close to the Thurlby Thandar spec. So, it seems 10mV is the magic number to aim for!
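The calculation above is easy to get wrong at 3am on the bench, so here's a sketch of it in Python, using the same 18V / 10Ω / 10mV figures from the worked example:

```python
def cv_output_impedance(v_out, delta_v, r_load):
    """Z_out = dV / dI, where dI = (V_out - dV) / R_load."""
    delta_i = (v_out - delta_v) / r_load
    return delta_v / delta_i

# Worked example from the text: 18 V setting, 10 ohm load, 10 mV droop
z = cv_output_impedance(18.0, 0.010, 10.0)
print(round(z * 1000, 3))   # 5.559 (milliohms)
```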

In CC mode

Variations in output current with voltage make the impedance of a current source non-infinite. Measuring the output impedance in Constant Current mode is similar, but this time we fix the output current, and change the output voltage. Another key difference is that this test is done at DC - that's because the output capacitor required in any power supply to ensure stability and good CV transient response would be the dominant factor in the measurement. At 1kHz, a 100µF capacitor does a reasonable impression of a 1.6Ω resistor!
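That 1.6Ω figure is just the capacitor's reactance, |Z| = 1 / (2πfC). A quick check:

```python
import math

def capacitor_impedance(c_farads, f_hz):
    """Magnitude of a capacitor's reactance: 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

# 100 uF at 1 kHz - low enough to swamp a CC impedance measurement
print(round(capacitor_impedance(100e-6, 1000.0), 2))   # 1.59 (ohms)
```

At DC the reactance is effectively infinite, which is why the CC test must be done there.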

[Diagram: CC impedance test set-up]

Testing at DC makes the test set-up much easier. You choose a value of I based on the resolution of your current meter - let's use 10mA. Then, you can calculate a value for RV, based on the maximum output voltage of the supply minus the 'burden voltage' of your ammeter - 1 volt is normally a safe value. So I'd need a (17V / 10mA) 1.7kΩ potentiometer. The best thing to do is choose the next available value (probably 2.2kΩ) and either add a parallel resistor, or just be careful at the top end of the scale - as long as the CC LED stays on, the result should be valid. (Obviously, the voltage controls must be set to maximum - the supply must stay in CC mode!)

Adjust RV over the range and observe any changes in reading on the external ammeter. Hopefully they will be below the resolution of the internal ammeter, but if not, it's worth checking that the internal meter shows the same changes. Some supplies use a differential amplifier to monitor output current, and a lack of common-mode rejection in this stage can degrade this result. In this case, the built-in meters are likely to miss the error if they are fed via the same amplifier. (An early prototype of this project did just that!)

The maths is slightly easier. As before:

ZOUT = ΔV / ΔI
However, these quantities are directly measured. So, suppose we sweep the voltage over a 17 volt range, and measure a 0.5mA maximum change:

ZOUT = 17 / 0.0005
Hence, ZOUT = 34kΩ

So, working it back, if the Thurlby supply is specified at 50kΩ, and measured over a 30V range, that implies that they observed a current change of 0.6mA. So, if I wish to meet their spec, I need to observe a change of 0.34mA or less.
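The same figures, sketched in Python as a sanity check (17V sweep and 0.5mA change from the example; 30V and 50kΩ from the Thurlby spec):

```python
def cc_output_impedance(delta_v, delta_i):
    """Z_out = dV / dI for a constant-current source."""
    return delta_v / delta_i

# 17 V sweep, 0.5 mA observed change
print(round(cc_output_impedance(17.0, 0.0005) / 1000, 1))   # 34.0 (kilohms)

# Current change implied by the Thurlby spec: 50 k over a 30 V sweep
print(round(30.0 / 50e3 * 1000, 2))   # 0.6 (mA)
# ...and the change I must stay under on my 17 V sweep to match it
print(round(17.0 / 50e3 * 1000, 2))   # 0.34 (mA)
```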

Transient response

[Diagram: transient response test set-up]

This requires the same test set-up as the output impedance test. The only thing I'm not sure about is the minimum load. With my supply at the moment, I get a text-book result if I add a parallel resistor that takes 100mA. Without that, I get a slightly odd result - see section 4 for more detail...

The result only tells you how quickly the output stabilises after a step-change in load current. It doesn't give any indication of the amplitude of the spikes that occur, although I found you can get almost any result you want by adjusting the rise-time of the function generator... For most applications of a low-power bench PSU - small-signal stuff - this result tends not to be too important. Indeed, the inductance of the connecting leads, coupled with local decoupling capacitors on your test circuit, will probably be the dominant factor...

Based on the specifications of supplies that I've investigated, most linear supplies take <20µs to return to within 50mV of the set voltage. 50mV seems to be a standard value - I guess it's chosen to ensure the output impedance (which will be responsible for the on-load drop, as we saw earlier) doesn't interfere with the measurement.

In practice, you won't get a waveform as clean as that. A degree of ringing and overshoot is present in most supplies. The power transistors used in most supplies are big, slow devices that make compensation difficult, often at the expense of other parameters, such as output impedance. That's what I've found, anyway...

Temperature Coefficient

This specifies how much voltage drift will occur as a result of temperature variations - more noticeable with digital metering, of course. Indeed, the tempco of the meters is an equally important parameter that is rarely quoted explicitly. The two main factors in the control circuit are the reference and the error amplifier. The voltage reference used in the meter circuit is also very important - the driver ICs are normally optimised by the manufacturer to have <1-5ppm/°C. It's a very difficult parameter to optimise empirically, and requires thought at the beginning of the design to ensure good performance.

The Thurlby Thandar supply claims a performance of <100ppm/°C, which translates to 0.01%/°C, or 3mV/°C at full output (again!). This refers to the ambient temperature, and doesn't account for the fact that the internal temperature can rise quite considerably when the unit is under load, so in practice you might expect the actual results to be slightly worse.

As an example, if I owned one of these units, and set it to exactly 10V at the end of a day, the next morning (when the temperature in my workshop might be 15°C lower) I'd expect to see about 9.985V indicated initially (assuming a positive tempco, and ignoring meter tempco and internal temperature rise, which might worsen the result considerably).
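The overnight-drift arithmetic is just setting × tempco × temperature swing. A sketch, using the 10V / 100ppm/°C / 15°C figures from the example:

```python
def drift_volts(v_set, tempco_ppm, delta_t):
    """Output change for a given tempco (ppm/degC) and temperature swing."""
    return v_set * tempco_ppm * 1e-6 * delta_t

# 10 V setting, 100 ppm/degC tempco, 15 degC overnight fall
dv = drift_volts(10.0, 100.0, 15.0)
print(round(dv * 1000, 1))   # 15.0 (mV) -> reads about 9.985 V
```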

Meter Resolution and Accuracy

This is a function of the ICs or modules that you choose to include. Complete LCD modules are popular and cheap, but tend to have a tempco of around 100ppm/°C. If you want to improve on that, choose a module that lets you use an external reference.

The most popular devices offer a maximum count of 1,999 and are called 3½ digit. These tend to be based around the popular Intersil ICL7106 IC. This is a complete A-D converter, voltage reference and display driver, and it needs only a handful of external passive components to operate. The ICL7107 is identical, but drives LED displays instead. Don't use the internal reference of these, as the LED drivers dissipate a reasonable amount of heat...

Using these helped me choose my output ranges - 0 to 2 amps and 0 to 18 volts. This gives a resolution of 1mA and 10mV - the same as the Thurlby Thandar unit. These make for good, useful meters - more resolution just gives the perception of worse drift! Less resolution feels 'cheap' - if you want higher output ranges then try to include better meters rather than range switching. The Thurlby Thandar meters are 3¾ digit meters - highest count 4,095. Modules capable of this are at least double the price of 3½ digit devices, and make up a significant proportion of the total cost. Alternatively, there is another IC (ICL7135) that has a count of 19,999 (4½ digit) and is available for less than 5 UKP, but you have to provide a display driver and voltage reference. Also, I doubt you could exploit its full accuracy on Veroboard!

An accuracy of 0.1% means 30mV at 30V. All digital meters feature the ±1 digit uncertainty. That means that the largest error will be 40mV at 30V, or a reading anywhere between 29.96 and 30.04. I guess that this includes the tempco of the voltage reference and scaling resistors. I also guess that the tempco of the current shunt resistor is largely responsible for the worse accuracy of the current meter...
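The "percent of reading + 1 digit" spec is worth sketching, since the digit term depends on the range in use (here, one count is 10mV on a 30V reading):

```python
def meter_error_v(reading, percent, digit_step):
    """Worst-case error: percent-of-reading plus one count."""
    return reading * percent / 100.0 + digit_step

# 30.00 V on a meter spec'd at 0.1% of reading + 1 digit (10 mV/count)
e = meter_error_v(30.0, 0.1, 0.010)
print(round(e, 3))                              # 0.04 (volts)
print(round(30.0 - e, 2), round(30.0 + e, 2))   # 29.96 30.04
```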