When it comes to undervolting, or generally chasing the best efficiency for a given setup, there are a few extra steps to take before calling it a success. Stability and performance testing works much like a normal overclock, running our test suite to get the best synthetic benchmark results while avoiding crashes during the sessions; vendor-provided voltage and power readings, however, cannot really be trusted, so the final word goes to the Kill A Watt at the wall.

Let’s start with some basics about performance and stability: the aim is to find settings that will be stable in every workload the final client will run, under all possible conditions. This means hammering the system with the heaviest workload at the worst possible temperatures, while also making sure that the intermediate steps in the voltage/frequency curve are stable. I found that Photoshop is one program that usually gets through my testing suite and then crashes the computer after it has been delivered to the customer; it is probably hitting only 1 to 4 cores very hard while ignoring the rest, something neither my full-load nor my single-core benchmarks can do.

I found some setups to be tripped up by Magic: The Gathering Arena, a program that does not load the computer very hard even while it is visibly stuttering.

The opposite can also be true: a computer that crashes under testing tools might be perfectly stable in everyday usage, like my daily machine that I keep at -70 mV with VoltageShift (it has never crashed on me at that value) even though Sandra can kill it at -45.

As I plan to do more undervolting-focused hardware reviews in the future, I will keep this guide updated with my testing methodology.

The basics

The first step is always to record baseline performance. The fastest way to do this for the CPU is running Cinebench R15 and R23 at high priority. These tests are fast, hard-hitting, and the expected values for any CPU can easily be found on many websites to compare against. Using high priority ensures that the result is not skewed by some random Windows operation scheduled in the background; ideally, a fresh Windows install that has gone through all its updates will be used during testing, but some variance is still present. I also found that running this way makes the system less forgiving of an undervolt/overclock, which is exactly what we want when hunting for instability.
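If you would rather script this than click through the GUI and Task Manager every time, launching the benchmark at high priority is a one-liner. A minimal sketch, assuming a Windows machine; the executable path is a placeholder for wherever your copy lives:

```python
# Minimal sketch: launch a benchmark at high priority on Windows.
# The path below is a placeholder, not a real install location.
import subprocess

BENCH = r"C:\Benchmarks\Cinebench\Cinebench.exe"  # hypothetical path

# HIGH_PRIORITY_CLASS is a Windows-only creation flag; on other
# systems you would reach for nice/renice instead.
proc = subprocess.Popen([BENCH], creationflags=subprocess.HIGH_PRIORITY_CLASS)
proc.wait()
```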

Keep in mind the CINEBENCH BUG: some notebooks, like 99% of the Asus units (and I would say 0% of the HP ones), will crash at any software-applied undervolt setting when running Cinebench at high priority.

I will then run and record the scores of 3 runs of the AIDA64 memory latency test, plus a complete Sandra overall score benchmark, keeping notes of the different sub-scores.
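To keep those baselines comparable between sessions and machines, it helps to log them somewhere structured rather than on scrap paper. A minimal sketch of the kind of logging I mean; the field names, file name, and latency numbers are all placeholders:

```python
# Minimal sketch: append baseline scores to a CSV so runs from
# different sessions stay comparable. All values are placeholders.
import csv
import statistics
from datetime import date

latency_runs_ns = [68.1, 67.9, 68.4]  # three AIDA64 latency readings (examples)

with open("baseline_scores.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        date.today().isoformat(),
        "stock",                                        # offset label; baseline here
        round(statistics.median(latency_runs_ns), 1),   # median of the 3 runs
        round(max(latency_runs_ns) - min(latency_runs_ns), 1),  # run-to-run spread
    ])
```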

At this point, it is usually pointless to run stability testing at default settings; it can still be needed, though, if the machine crashes at lower-than-expected undervolt offsets, to rule out a unit that is unstable out of the box.

Starting the undervolt (or overclock)

The fastest way to find the right settings is to go by halving, using reference values that you know to be the extremes for a given product. As an example, I wrote down a quick reference table of my best undervolt offsets (in mV) for some recent CPUs we dealt with in good sample sizes:

i5 4300u : -80
i5 6300u : -120
i7 6700hq (notebook) : -120
i7 7700k (desktop) : -80
i5 8250u : -110
i9 10900es QTB1 (desktop engineering sample) : -120
i9 9750h : -80

I would run a Cinebench R15 pass at that value, with high priority; if it fails, I try again at a -5 offset to make sure we are not hitting the Cinebench Bug, then cut the offset in half, for example from -120 to -60; if that passes, I split the difference and run at -90, and so on.
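This halving procedure is just a binary search over the offset range. A minimal sketch of the bookkeeping, assuming a hypothetical run_benchmark(offset_mv) helper that applies the offset through your undervolting tool of choice and returns True on a clean pass:

```python
# Minimal sketch: binary search for the deepest stable undervolt offset.
# run_benchmark(offset_mv) is a hypothetical helper: it should apply the
# offset via your undervolting tool and return True on a clean pass.

def find_stable_offset(reference_mv: int, run_benchmark) -> int:
    # Rule out the Cinebench Bug first: -5 mV should always pass.
    if not run_benchmark(-5):
        raise RuntimeError("crashes even at -5 mV: likely the Cinebench Bug")
    if run_benchmark(reference_mv):
        return reference_mv                      # the known extreme already passes
    stable, unstable = 0, reference_mv
    while abs(unstable - stable) > 5:            # stop at 5 mV granularity
        midpoint = (stable + unstable) // 2      # e.g. -120 -> -60 -> -90
        if run_benchmark(midpoint):
            stable = midpoint                    # passed: push deeper
        else:
            unstable = midpoint                  # failed: back off
    return stable
```

In practice I run the actual benchmark by hand at each midpoint; the point of the sketch is just tracking the gap between the last passing and the last failing value.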

When a stable Cinebench R15 pass is found, I usually run a full Sandra suite at a slightly milder offset, as I find Sandra usually hits harder. When a stable Sandra value has been found, I proceed with a 30-minute R23 run, a full 1-hour Prime95 session, and then some lighter real-world tests like MTG Arena. I then let it run overnight with Prime95 before calling the setting stable.
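The escalation order is easy to lose track of across machines, so it can help to write it down as a simple checklist. A minimal sketch of that ladder; the durations not stated above are placeholders, and the test runner is left hypothetical:

```python
# Minimal sketch: the stability ladder as data, fastest test first.
# Durations for the Cinebench R15 pass, the Sandra suite, and the game
# session are placeholder guesses; the others mirror the text above.

STABILITY_LADDER = [
    ("Cinebench R15 pass (high priority)", 5),
    ("Full Sandra suite", 30),
    ("Cinebench R23 loop", 30),
    ("Prime95", 60),
    ("Light real-world test (MTG Arena)", 30),
    ("Prime95 overnight", 8 * 60),
]

def run_ladder(offset_mv: int, run_test) -> bool:
    """Stop at the first failure; only a full ladder counts as stable.

    run_test(name, minutes, offset_mv) is a hypothetical callable that
    launches the named test and returns True on a clean pass.
    """
    for name, minutes in STABILITY_LADDER:
        if not run_test(name, minutes, offset_mv):
            print(f"{name} failed at {offset_mv} mV within {minutes} min")
            return False
    return True
```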