Check out the new mini-series from NAG®
At NAG, we offer integrated Automatic Differentiation (AD) solutions designed to save energy and money, and to compute sensitivities accurately in almost any application, 10x-6000x faster than alternative methods. However, when talking to businesses we still hear the same ‘untruths’ or ‘myths’ about AD. To put paid to these myths, over the next few weeks we will publish a series of short articles that debunk each one and explain the truth!
This week we look at our first Myth!
Myth 1: I’ll have to re-write my libraries
You’re curious about AD and its benefits, but you’ve heard that you may need to re-write your libraries in order to integrate it. That’s far too much work, so you don’t pursue the idea much further.
We've encountered this scenario many times over the years. When we ask people why the integration cost would be so large, the response is typically “well, we have a large legacy code”, and they usually go on to explain just how poorly built the legacy code is.
So we ask which aspects of this legacy code would make it difficult to integrate AD. That question never gets a concrete answer.
How hard is it to integrate AD into C++ code? What makes it difficult? Why do some people say that it’s a huge job? Is this really a myth? NAG has helped dozens of clients to integrate our AD solution, dco/c++, into very large and complex quant libraries, and never once has this led to a re-write of the code – not even close.
The ease of integration depends on the sophistication of your AD tool. A poorly designed tool will make the job most unpleasant indeed! An enterprise-class product like dco/c++ avoids many of the pitfalls that plague such tools (for example, dco/c++ doesn’t assume that your code has any particular shape or form).
When we talk about production C++ libraries, the only AD approach which consistently proves viable is the operator overloading approach. At a high level, the AD tool presents itself as a “numeric datatype” and the idea is that instead of computing with doubles or floats, you compute with this datatype and the tool takes care of the rest.
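To make the “numeric datatype” idea concrete, here is a minimal, hand-rolled sketch of forward (tangent) mode AD via operator overloading. It is emphatically not dco/c++ (whose active types are far more capable and also provide adjoint mode); the Dual type and the function f are illustrative names of our own.

```cpp
// Minimal sketch of an operator-overloading AD type (forward/tangent mode).
// This is NOT dco/c++; names and design are illustrative only.
#include <cmath>
#include <iostream>

struct Dual {
    double v;  // value
    double d;  // derivative (tangent) carried alongside the value
};

Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }  // product rule
Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }              // chain rule

// Library code is written once against a generic arithmetic type.
template <typename T>
T f(T x) {
    using std::sin;          // the same source works for double and for Dual
    return sin(x * x) + x;
}

int main() {
    Dual x{2.0, 1.0};        // seed dx/dx = 1
    Dual y = f(x);           // same code path as f<double>
    std::cout << "f(2)  = " << y.v << "\n";
    std::cout << "f'(2) = " << y.d << "\n";  // exact derivative, no bump size to tune
}
```

An adjoint (reverse-mode) tool works through the same overloading mechanism but additionally records the computation, so that all sensitivities fall out of a single reverse sweep rather than one tangent pass per input.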
A very efficient AD tool (like dco/c++) will make heavy use of expression templates to get the compiler to do as much of the heavy lifting as possible. Those of us who bear the scars of working with expression template libraries know that mixing such libraries is a recipe for pain. So, if your library uses Boost or Eigen (or some home-grown expression library), then you must proceed with caution. Luckily, dco/c++ supports both Boost and Eigen, and is robust in the face of other expression template engines.
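For readers who haven’t met expression templates: the operators build a lightweight compile-time description of the whole expression instead of evaluating it piecewise, and the work is fused only when the result is assigned. A toy sketch of the pattern (ours, not dco/c++ internals) for lazy vector addition:

```cpp
// Toy expression-template sketch (illustrative only).
#include <cstddef>
#include <iostream>
#include <vector>

// A node representing "lhs + rhs", evaluated lazily, element by element.
template <typename L, typename R>
struct AddExpr {
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

struct Vec {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning an expression evaluates the whole tree in one pass: no temporaries.
    template <typename E>
    Vec& operator=(const E& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

template <typename L, typename R>
AddExpr<L, R> operator+(const L& l, const R& r) { return {l, r}; }

int main() {
    Vec a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, out;
    out = a + b + c;   // builds AddExpr<AddExpr<Vec,Vec>,Vec>, evaluated once on assignment
    std::cout << out[0] << " " << out[1] << " " << out[2] << "\n";  // 12 15 18
}
```

The pain of mixing comes from each library wanting its operators to own the expression: intermediate types multiply and the point of evaluation becomes hard to reason about, which is why explicit, tested support for Boost and Eigen in the AD tool matters.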
But let’s talk through a real example to put some numbers on the page. We recently (2021) applied dco/c++ to the master branch of QuantLib. QuantLib is a large code with all the standard C++-isms one would expect: templates (although it is not templated on the arithmetic type), typedefs, polymorphism and virtual inheritance, design patterns, logging (via streams), and it also uses Boost and a good portion of the C++11 classes and algorithms. Code statistics of the project tell the same story.
One of the engineers on NAG’s AD team integrated AD into the whole of QuantLib in slightly less than a day. In the process, 200 files were changed, with a total of 625 insertions and 462 deletions (that's characters), and around 90% of the changes could safely have been made by a tool (indeed, NAG has prototypes of such tools).
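To see why the diff stays so small: with an operator-overloading tool, most of the work is redirecting the library’s scalar type to the tool’s active type, plus a handful of manual fixes where a raw double leaks through (those are the insertions and deletions counted above). Below is a minimal, hedged sketch of that pattern; ActiveType, Real and discountFactor are our own placeholder names, standing in for whatever active type your AD tool provides (dco/c++ ships its own; consult its documentation).

```cpp
// Hedged sketch of why the diff is so small: the library computes through one
// scalar alias, and integration largely amounts to changing what that alias names.
// ActiveType is a stand-in for a real AD tool's active type, faked here so the
// example compiles on its own.
#include <cmath>
#include <iostream>

struct ActiveType {
    double value;
    ActiveType(double v = 0.0) : value(v) {}
};
ActiveType operator*(ActiveType a, ActiveType b) { return ActiveType(a.value * b.value); }
ActiveType operator-(ActiveType a) { return ActiveType(-a.value); }
ActiveType exp(ActiveType a) { return ActiveType(std::exp(a.value)); }  // tools supply such overloads

// --- library code: written once against the alias, untouched by the AD switch ---
// typedef double Real;       // before integration
typedef ActiveType Real;      // after integration: a one-line change

Real discountFactor(Real rate, Real time) {
    return exp(-rate * time); // unqualified call resolves to the active type's overload
}

int main() {
    std::cout << discountFactor(Real(0.05), Real(2.0)).value << "\n";  // ~0.9048
}
```

In a real library the alias usually already exists (QuantLib, for example, routes most of its arithmetic through its Real typedef), which is why the change count stays in the hundreds of characters rather than thousands of lines.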
This simple integration of dco/c++ also achieved some impressive results.
The performance of an AD tool is measured by its “adjoint factor”: the ratio (adjoint runtime) / (original runtime). Since finite differences need roughly one extra valuation per sensitivity, while a single adjoint run delivers all sensitivities at once, the speedup over finite differences is roughly (# sensitivities) / (adjoint factor).
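To put hypothetical numbers on that formula (the figures below are illustrative arithmetic, not measured dco/c++ results):

```cpp
// Back-of-the-envelope illustration of the speedup formula above.
// The inputs are hypothetical, not measured results.
#include <iostream>

int main() {
    const double adjoint_factor    = 10.0;   // (adjoint runtime) / (original runtime), assumed
    const double num_sensitivities = 1000.0; // number of gradients wanted, assumed
    // Bump-and-revalue costs roughly one extra run per sensitivity;
    // adjoint AD delivers all of them for roughly adjoint_factor runs.
    const double speedup = num_sensitivities / adjoint_factor;
    std::cout << "Approximate speedup over finite differences: " << speedup << "x\n";  // 100x
}
```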
Can these results be improved? Of course. And yes, we’ve not yet addressed memory efficiency for reverse-mode (adjoint) AD; that’s the subject of our next Myth!
NAG’s AD toolset has been developed over the last 12 years and it builds upon a further 10 years of AD R&D experience in C++. We know that details matter. Legacy codes are our business and we've not come across a single one which we couldn't handle.
Myths are narratives that might sound like truths; by talking through them in some detail and sharing our experiences, we hope to help businesses navigate these issues. Results matter; myths should not.
Myth 2: I’ll run out of memory
Myth 3: It won’t work on my code
Myth 4: AD will infect all my other libraries
Myth 5: It’s hard to maintain code with AD embedded
Myth 6: AAD will destroy parallelism