Optimizers

class mumott.optimization.optimizers.LBFGS(loss_function, **kwargs)[source]

This Optimizer makes the LBFGS algorithm from scipy.optimize available for use with a LossFunction.

Parameters
  • loss_function (LossFunction) – The loss function to be minimized using this algorithm.

  • kwargs (Dict[str, Any]) – Miscellaneous options. See notes for valid entries.

Notes

Valid entries in kwargs are

x0

Initial guess for solution vector. Must be the same size as residual_calculator.coefficients. Defaults to loss_function.initial_values.

bounds

Used to set the bounds of the optimization method, see scipy.optimize.minimize documentation for details. Defaults to None.

maxiter

Maximum number of iterations. Defaults to 10.

disp

Whether to display output from the optimizer. Defaults to False.

maxcor

Maximum number of Hessian corrections to the Jacobian. Defaults to 10.

iprint

If disp is True, controls the verbosity of the output: no output if iprint < 0, convergence messages only if iprint == 0, iteration-wise output if 0 < iprint <= 99, and sub-iteration output if iprint > 99.

maxfun

Maximum number of function evaluations, including line search evaluations. Defaults to 20.

ftol

Relative tolerance for the change in the objective function. This effectively becomes an absolute tolerance if the objective function is smaller than 1 (see the scipy.optimize.minimize documentation), which may lead to excessively fast convergence. Defaults to 1e-3.

gtol

Convergence tolerance for gradient. Defaults to 1e-5.
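
As a minimal usage sketch (assuming a LossFunction instance named loss_function has already been set up), the options above are passed as keyword arguments when constructing the optimizer:

    from mumott.optimization.optimizers import LBFGS

    # Sketch only; `loss_function` is assumed to be an existing LossFunction instance.
    optimizer = LBFGS(loss_function,
                      maxiter=50,   # raise the iteration limit from the default of 10
                      ftol=1e-4,    # tolerance for the objective function
                      gtol=1e-6)    # tolerance for the gradient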

property no_tqdm

Whether to avoid making a tqdm progress bar.

optimize()[source]

Executes the optimization using the options stored in this class instance. The optimization will continue until convergence, or until the maximum number of iterations (maxiter) is exceeded.

Return type

Dict

Returns

A dict of optimization results. See scipy.optimize.OptimizeResult for details. The entry 'x', which contains the result, will be reshaped using the shape of the gradient from loss_function.
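
Continuing the sketch above, the optimization is launched with optimize() and the coefficients are read from the 'x' entry of the returned dict:

    # Sketch only; continues the LBFGS example above.
    result = optimizer.optimize()
    coefficients = result['x']  # already reshaped to match the gradient from loss_function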

class mumott.optimization.optimizers.GradientDescent(loss_function, **kwargs)[source]

This Optimizer is a gradient descent (sometimes called steepest descent) solver, which can be set to terminate based on the loss function and/or the maximum number of iterations.

It also supports Nesterov accelerated momentum, which adds a look-ahead momentum term based on the gradient from previous iterations.

The update sequence may be written

\[\begin{split}x &\leftarrow x - (p + \alpha(\nabla(x - p) + \Lambda)) \\ p &\leftarrow \beta(p + \alpha(\nabla(x - p) + \Lambda))\end{split}\]

where \(x\) are the optimization coefficients, \(p\) is the momentum, and \(\Lambda\) is the regularization term. \(\alpha\) is the step size and \(\beta\) is the Nesterov momentum weight.
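
For illustration (this is a schematic sketch, not the actual implementation), one iteration of the update can be written as follows, where grad is assumed to return the gradient of the loss including the regularization term \(\Lambda\):

    # Schematic sketch of one update step; `grad` is assumed to include the regularization term.
    def nesterov_step(x, p, grad, alpha=1.0, beta=0.0):
        update = p + alpha * grad(x - p)  # p + alpha * (grad(x - p) + Lambda)
        x_new = x - update                # x <- x - (p + alpha * (grad(x - p) + Lambda))
        p_new = beta * update             # p <- beta * (p + alpha * (grad(x - p) + Lambda))
        return x_new, p_new

    # Toy example: loss 0.5 * x**2 with gradient x; the iterates shrink towards zero.
    x, p = 4.0, 0.0
    for _ in range(10):
        x, p = nesterov_step(x, p, grad=lambda z: z, alpha=0.5, beta=0.5)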

Parameters
  • loss_function (LossFunction) – The loss function to be minimized using this algorithm.

  • kwargs (Dict[str, Any]) – Miscellaneous options. See notes for valid entries.

Notes

Valid entries in kwargs are

x0

Initial guess for solution vector. Must be the same size as residual_calculator.coefficients. Defaults to loss_function.initial_values.

step_size (float)

Step size for the gradient, labelled \(\alpha\) above. Default value is 1. Must be strictly positive.

nestorov_weight (float)

The size of the look-ahead term in each iteration, labelled \(\beta\) above. Must be in the range [0, 1], including the endpoints. The default value is 0, which implies that the momentum term is not active.

maxiter (int)

Maximum number of iterations. Default value is 5.

ftol (float)

The tolerance for relative change in the loss function before termination. A termination can only be induced after at least 5 iterations. If None, termination only occurs once maxiter iterations have been performed. Default value is None.

display_loss (bool)

If True, displays the change in loss at every iteration. Default is False.

enforce_non_negativity (bool)

Enforces strict positivity on all the coefficients. Should only be used with local or scalar representations. Default value is False.
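
A minimal usage sketch (assuming an existing LossFunction instance named loss_function); the keyword names match the entries listed above:

    from mumott.optimization.optimizers import GradientDescent

    # Sketch only; `loss_function` is assumed to be an existing LossFunction instance.
    optimizer = GradientDescent(loss_function,
                                step_size=0.1,        # alpha in the update above
                                nestorov_weight=0.9,  # beta; 0 disables the momentum term
                                maxiter=100,
                                ftol=1e-5)            # stop on small relative change in the loss
    result = optimizer.optimize()
    coefficients = result['x']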

property no_tqdm

Whether to avoid making a tqdm progress bar.

optimize()[source]

Executes the optimization using the options stored in this class instance. The optimization will continue until convergence, or until the maximum number of iterations (maxiter) is exceeded.

Return type

Dict

Returns

A dict of optimization results. See scipy.optimize.OptimizeResult for details. The entry 'x', which contains the result, will be reshaped using the shape of the gradient from loss_function.

class mumott.optimization.optimizers.ZHTTOptimizer(loss_function, x0, step_size_parameter=None, **kwargs)[source]

Simple optimizer meant to be used in conjunction with ZHTTResidualCalculator. The cost function defined in this way is non-convex, so this optimizer depends on being given a good starting guess. Such a guess can be generated by a different model and then fitted to an axially symmetric model.

For more details, see the documentation of ZHTTResidualCalculator.

Parameters
  • loss_function (LossFunction) – The loss function to be minimized using this algorithm.

  • x0 (ndarray[Any, dtype[float]]) – Initial guess for solution vector. Must be the same size as residual_calculator.coefficients.

  • step_size_parameter (float) – Step size for the gradient. If no value is given, the largest safe step size will be estimated.

  • kwargs (Dict[str, Any]) – Miscellaneous options. See notes for valid entries.

Notes

Valid entries in kwargs are

maxiter (int)

Maximum number of iterations. Default value is 20.

property no_tqdm

Whether to avoid making a tqdm progress bar.

optimize()[source]

Executes the optimization using the options stored in this class instance and returns a dict of the optimization results.
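
A minimal usage sketch (assuming loss_function wraps a ZHTTResidualCalculator and x0 is a suitable starting guess, e.g. obtained from a reconstruction with a different model):

    from mumott.optimization.optimizers import ZHTTOptimizer

    # Sketch only; `loss_function` and `x0` are assumed to exist as described above.
    optimizer = ZHTTOptimizer(loss_function, x0,
                              step_size_parameter=None,  # None -> estimate the largest safe step size
                              maxiter=50)
    results = optimizer.optimize()  # dict of optimization results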