
A combined automatic differentiation and array library for C++

Array features
A full description of the array capabilities and interface may be
found in Chapter 3 of the User
Guide. Here's a summary:
- Multidimensional arrays. Dynamic arrays are all of the
templated Array type, can have up to 7 dimensions, and
may refer to non-contiguous areas of memory. Fixed-size arrays (those
whose dimensions are known at compile time) have virtually the same
functionality and are all of the templated FixedArray type.
- Mathematical operators and functions. The full range of
element-wise operators (+, * ...) is supported,
including their assignment versions (+=, *= ...),
and operators that return boolean array expressions
(==, != ...). All the mathematical functions you
would expect are present, both unary
(sqrt, exp, log, log10, sin,
cos, tan, asin, acos, atan,
sinh, cosh, tanh, abs,
asinh, acosh, atanh, expm1,
log1p, cbrt, erf, erfc, exp2,
log2, round, trunc, rint
and nearbyint) and binary (pow, atan2,
min, max, fmin and fmax).
- Array slicing. There are a large number of ways of slicing
an array, and the slice is itself an Array that points to
part of the original array, so it can participate fully on both sides of
a statement. A concise Matlab-like syntax is provided to index
dimensions; for example, M(end-1,__) refers to the
penultimate row of matrix M.
- Array reduction operations. The
functions sum, mean, product, norm2,
minval, maxval, any, all and count
return a scalar, or can operate along just one dimension and so
return an array of lower rank.
- Conditional operations. Two ways are provided to perform an
operation on an array depending on the result of a boolean
expression, one similar to Fortran 90's where and the
other similar to Matlab's find.
- Matrix multiplication. Matrix multiplication may be applied
to 1D and 2D array expressions via the ** pseudo-operator,
implemented via whichever BLAS
implementation you compiled Adept against. The
functions dot_product and outer_product are
available for the corresponding products of two vectors.
- Linear algebra. Matrix inversion and the solution of linear
systems of equations
use LAPACK under
the hood.
- Special square matrices. Specific classes are provided for
symmetric, triangular and band-diagonal matrices, the latter of
which use compressed storage. Via the underlying BLAS and LAPACK
libraries, matrix operations are optimized for these types.
- Passing arrays to and from functions. Adept uses a
reference-counting approach to implement the storage of array data,
enabling multiple array objects to point to the same data, or to parts
of it in the case of array slices. This makes it straightforward to
pass arrays to and from functions without having to perform a deep
copy.
- Vectorization. Passive array operations will be
automatically vectorized on Intel hardware using SSE2 or AVX
intrinsics if they satisfy certain conditions on memory alignment
and the mathematical operations present.
- Array bounds and alias checking. Adept checks at run
time that array expressions agree in dimension. It also checks for
aliasing between data on the left- and right-hand sides of a
statement, making a temporary copy if any is found. Full bounds checking
can be switched on at compile time.
Where possible, Adept's array features have followed capabilities in
Fortran 90 and Matlab; see a Comparison
of array features between Adept, Eigen, Fortran and Matlab
(PDF).
Missing features
Writing a fully functioning array library is a mammoth task, so I
have had to concentrate on the features I need for my own
applications. The following is a wish list that I hope to tackle
over the next decade, probably in the following order:
- Vectorize active expressions, which would speed up automatic
differentiation considerably.
- Redesign the stack structure as a linked list of blocks for
storing different types of differential statement. This would then
facilitate the remaining additional features.
- Optimize the automatic differentiation of matrix multiplication
(currently slow) and add the automatic differentiation of matrix solve
and inverse (currently missing). This requires the ability to store
matrices on the stack (see Mike Giles' list of matrix derivative
results).
- Add additional matrix operations, and their derivatives, such as
determinant, trace, matrix decomposition, matrix square-root, matrix
exponential...
- Add features to facilitate parallelization of the forward pass of
an algorithm using both OpenMP and MPI.
- Add the capability to differentiate operations involving complex
numbers.
See also
