In this article, I will try to explain some basic principles of DAW performance. It will be updated at will to track new information and component releases.
Please note that all of this information is largely theoretical speculation based on real-world observation, and different audio engines may actually find ways around these limitations: we all wonder how REAPER does its thing, or how AudioGridder can be used to artificially increase buffer size.
CPU PERFORMANCE: LATENCY VS THROUGHPUT
You can also watch this video on YouTube as an introduction to the issue. It illustrates perfectly our findings, where our 6700K ran better than a 7940X both as master (Hackintosh / Logic Pro) and as slave (Windows running an EastWest Play session through Vienna Ensemble Pro).
The basic principle of audio production performance could be the old adage that "you cannot make a baby in 1 month with 9 moms". Some aspects of audio are simply inefficient to split between different CPUs (the overhead would trump the gain), and if you have a background in real hardware routing and signal processing, some notions about busses, sends and returns will be counter-intuitive to you.
With audio processing, the data for each refresh is based upon the results of the previous one, with usually 44,100 refreshes per second (at a 44.1 kHz sample rate). For each refresh, everything must be executed in the right order: effects that run on more than one track, or depend on the input of another track (think sidechain), must be processed in the right position, which creates a huge bottleneck.
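As a sketch of that ordering constraint, here is a tiny dependency graph with hypothetical track names (the sidechain and bus routing are invented for illustration, not taken from any real session): a track can only be processed once every track it listens to has finished its part of the buffer.

```python
from collections import deque

# Hypothetical mini-session: each track lists the tracks whose output
# it needs within the same buffer (sidechain inputs, bus sources).
deps = {
    "kick": [],
    "bass": ["kick"],                       # bass compressor sidechained to the kick
    "vocals": [],
    "mix_bus": ["bass", "vocals", "kick"],  # the bus sums the other tracks
}

def processing_order(deps):
    """Kahn's algorithm: return an order where every track is processed
    only after all the tracks it depends on are done."""
    pending = {t: len(d) for t, d in deps.items()}  # unmet dependencies per track
    dependents = {t: [] for t in deps}
    for track, sources in deps.items():
        for s in sources:
            dependents[s].append(track)
    ready = deque(t for t, n in pending.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for d in dependents[t]:
            pending[d] -= 1
            if pending[d] == 0:
                ready.append(d)
    return order

print(processing_order(deps))  # -> ['kick', 'vocals', 'bass', 'mix_bus']
```

Notice that "kick" and "vocals" could in principle run in parallel on two cores, but "bass" and then "mix_bus" form a serial chain no amount of extra cores can shorten.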
BUFFER SIZE
Let's try to understand how the audio buffer relates to processing power: the higher the buffer size, the more "planning ahead" we give to the CPU. We will have to wait longer to hear the result, but it will be able to do a lot more. Think about building a skyscraper: if you have to build it one floor at a time, you will waste a long time changing tools and managing tasks every time you start a new one; the more you can plan and build together – doing all the doors at the same time after all the walls are done – the less time is spent swapping tools and planning, and more on actually doing the job. If someone finishes their part first, they will have time to swap tools and help someone who is behind, further increasing efficiency.
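The trade-off above is easy to put in numbers: one buffer of N samples gives the engine N / sample_rate seconds to compute before the audio drops out, and that same amount is what you wait to hear the result. A minimal sketch, assuming a 44.1 kHz sample rate:

```python
SAMPLE_RATE = 44_100  # Hz, the common 44.1 kHz rate mentioned above

def buffer_latency_ms(buffer_size, sample_rate=SAMPLE_RATE):
    """Time budget (and monitoring delay) for one buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000

for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.2f} ms")
```

At 64 samples the CPU has roughly 1.45 ms to finish everything; at 1024 it has about 23 ms, which is why large buffers are so much easier on the machine and small ones so punishing.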
And here comes the second big issue: while your CPU cores are all made the same and run at around the same speed (Turbo Boost 3.0 and core affinity are a bit more complicated, but let's keep this simple), in a real project every track will have a different compute cost, and some effects will run on more than one track or depend on the input of another track (think sidechain). This makes audio extremely sensitive to the CPU's ability to pass information between cores, with the work spread unevenly across them.
This all scales with the CPU's core-to-core latency and single-core performance.
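To see why uneven track loads blunt the benefit of extra cores, here is a toy scheduling sketch (the per-track costs are invented for illustration): tracks are assigned greedily to the least-loaded core, and the buffer is only done when the busiest core finishes.

```python
import heapq

# Hypothetical per-buffer cost of each track in microseconds.
# One heavy track (e.g. a big sampler instance) dominates the rest.
track_costs = [900, 120, 80, 60, 40, 30, 20, 10]

def makespan(costs, n_cores):
    """Longest-processing-time-first: give each track to the currently
    least-loaded core; return the busiest core's total (the buffer time)."""
    cores = [0] * n_cores  # min-heap of per-core accumulated load
    for c in sorted(costs, reverse=True):
        load = heapq.heappop(cores)
        heapq.heappush(cores, load + c)
    return max(cores)

print(makespan(track_costs, 1))  # 1260: one core does everything
print(makespan(track_costs, 2))  # 900: the heavy track sets the pace
print(makespan(track_costs, 8))  # 900: six extra cores change nothing
```

Going from 2 to 8 cores does not speed up this buffer at all: the single heaviest track is the floor, which is why single-core performance (and low core-to-core latency when tracks must exchange data) matters so much for real-time audio.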
—
https://gearspace.com/board/showpost.php?p=17117060&postcount=1415