The scipy.optimize.leastsq function performs iterative least squares fitting based on estimates of the Jacobian. Via the Dfun argument, the Jacobian can be supplied manually to improve performance (which in my opinion is best practice, because almost everything is differentiable). However, computing the Jacobian almost always involves calculations common to those already done when evaluating the residuals in the first place (exponential functions especially), typically just before multiplying by some coefficient to obtain the derivative.
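To make the duplication concrete, here is a minimal sketch with a made-up exponential-decay model (all names and data are hypothetical): np.exp(-b * x) is evaluated once in the residual function and then recomputed from scratch in the Jacobian.

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical model: r_i = y_i - a * exp(-b * x_i)
x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * x)

def residuals(p):
    a, b = p
    return y - a * np.exp(-b * x)            # exp(-b*x) computed here...

def jacobian(p):
    a, b = p
    e = np.exp(-b * x)                       # ...and recomputed here
    # dr/da = -e,  dr/db = a * x * e
    return np.column_stack((-e, a * x * e))

p_opt, ier = leastsq(residuals, (1.0, 1.0), Dfun=jacobian)
```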
Consequently it strikes me as ridiculous to calculate everything twice, especially in the interest of improving 'performance'. In MATLAB, the Jacobian can be returned as a second output argument from the objective function itself, which avoids this inefficiency. I was wondering whether anyone knew of a similarly efficient solution in SciPy.
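The only workaround I can see is a hand-rolled cache along the following lines (hypothetical code, same toy model as above), which relies on leastsq typically requesting the Jacobian at the same parameter vector where the residuals were just evaluated:

```python
import numpy as np
from scipy.optimize import leastsq

x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * x)

class CachedModel:
    """Compute residuals and Jacobian together, caching on the parameters."""
    def __init__(self):
        self._p = None

    def _evaluate(self, p):
        # Recompute only when the parameter vector actually changes.
        if self._p is None or not np.array_equal(p, self._p):
            a, b = p
            e = np.exp(-b * x)               # shared term, computed once
            self._res = y - a * e
            self._jac = np.column_stack((-e, a * x * e))
            self._p = np.array(p, copy=True)

    def residuals(self, p):
        self._evaluate(p)
        return self._res

    def jacobian(self, p):
        self._evaluate(p)
        return self._jac

model = CachedModel()
p_opt, ier = leastsq(model.residuals, (1.0, 1.0), Dfun=model.jacobian)
```

But this feels like boilerplate the library could handle for me, the way MATLAB's two-output convention does.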