Workflow
The following figure illustrates the mumott workflow. Here, classes are shown in blue, input parameters and data in orange, and output data in green.

A typical workflow involves the following steps:

1. First, the measured data along with its metadata are loaded into a `DataContainer` object. The latter allows one to access, inspect, and modify the data in various ways, as shown in the tutorial on loading and inspecting data. Note that it is possible to skip loading the full data when instantiating a `DataContainer` object. In that case only geometry and diode data are read, which is much faster and sufficient for alignment.
2. The `DataContainer` object holds the information pertaining to the geometry of the data. The latter is stored in the `geometry` property of the `DataContainer` object in the form of a `Geometry` object.
3. The geometry information is then used to set up a projector object, e.g., `SAXSProjectorBilinear`. Projector objects allow one to transform tensor fields from three-dimensional space to projection space.
4. Next, a basis set object, e.g., `SphericalHarmonics`, is set up.
5. One can then combine the projector object, the basis set object, and the data from the `DataContainer` object to set up a residual calculator object. Residual calculator objects hold the coefficients that need to be optimized and allow one to compute the residuals of the current representation.
6. To find the optimal coefficients, a loss function object is set up using, e.g., the `SquaredLoss` or `HuberLoss` class. The loss function can include one or several regularization terms, which are defined by regularizer objects such as `L1Norm`, `L2Norm`, or `TotalVariation`.
7. The loss function object is then handed over to an optimizer object, such as `LBFGS` or `GradientDescent`, which updates the coefficients of the residual calculator object.
8. The optimized coefficients can then be processed via the basis set object to generate tensor field properties such as the anisotropy or the orientation distribution, which are returned as a `dict`.
9. The function `dict_to_h5` can be used to write this dictionary of properties to an HDF5 (`h5`) file for further processing or visualization.
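To make the projection step (step 3) concrete, the following sketch shows what a projector does in the simplest possible setting: a parallel-beam projection of a scalar field, implemented as a sum along one axis. This is only an illustration of the concept, not mumott's `SAXSProjectorBilinear`, which additionally handles rotations, bilinear interpolation, and tensor-valued fields.

```python
import numpy as np

def project(field: np.ndarray, axis: int = 0) -> np.ndarray:
    """Toy parallel-beam projector: integrate a 3D field along one axis.

    Schematic stand-in for a projector object; real projectors also
    account for the measurement geometry stored in the Geometry object.
    """
    return field.sum(axis=axis)

# A 4x4x4 volume with a single non-zero voxel.
volume = np.zeros((4, 4, 4))
volume[1, 2, 3] = 5.0

# Projecting along the first axis yields a 4x4 "detector image".
projection = project(volume, axis=0)
```

The single voxel ends up at position `(2, 3)` of the projection, mirroring how a real projector maps voxel contributions into projection space.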
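The difference between the two loss functions named in step 6 can be sketched with plain NumPy. These are the generic textbook definitions of the squared and Huber losses, not mumott's `SquaredLoss` and `HuberLoss` classes, whose exact scaling conventions and interfaces may differ.

```python
import numpy as np

def squared_loss(residuals: np.ndarray) -> float:
    # 0.5 * sum of squared residuals; sensitive to outliers.
    return 0.5 * float(np.sum(residuals ** 2))

def huber_loss(residuals: np.ndarray, delta: float = 1.0) -> float:
    # Quadratic for |r| <= delta, linear beyond; robust to outliers.
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return float(np.sum(np.where(r <= delta, quadratic, linear)))

# One large residual dominates the squared loss but not the Huber loss.
residuals = np.array([0.5, -0.5, 10.0])
```

With this data the squared loss evaluates to 50.25 while the Huber loss evaluates to 9.75, which is why a Huber-type loss is often preferred when the measured data contain outliers.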
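The regularization terms mentioned in step 6 penalize undesirable structure in the coefficients. The sketch below uses one common convention for each penalty on a one-dimensional coefficient array; mumott's `L1Norm`, `L2Norm`, and `TotalVariation` regularizers act on full tensor fields and may use different scaling.

```python
import numpy as np

def l1_norm(coefficients: np.ndarray) -> float:
    # Sum of absolute values; promotes sparsity.
    return float(np.sum(np.abs(coefficients)))

def l2_norm(coefficients: np.ndarray) -> float:
    # Sum of squares; penalizes large coefficients smoothly.
    return float(np.sum(coefficients ** 2))

def total_variation_1d(coefficients: np.ndarray) -> float:
    # Sum of absolute differences between neighbors; promotes piecewise-
    # constant solutions (here in 1D for simplicity).
    return float(np.sum(np.abs(np.diff(coefficients))))

coefficients = np.array([0.0, 3.0, -4.0])
```

A regularized loss is then the data term plus a weighted sum of such penalties, with the weight controlling the trade-off between data fidelity and regularity.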
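Step 7 can likewise be illustrated with a minimal optimizer. The fixed-step gradient descent below is a schematic stand-in for mumott's optimizer objects (its `GradientDescent` and `LBFGS` classes are more sophisticated); it shows the essential loop of repeatedly updating coefficients against the gradient of a loss.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, iterations=100):
    """Minimal fixed-step gradient descent on a differentiable loss."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        # Move against the gradient to decrease the loss.
        x = x - step * grad(x)
    return x

# Minimize f(x) = 0.5 * ||x - target||^2, whose gradient is (x - target).
target = np.array([1.0, -2.0])
solution = gradient_descent(lambda x: x - target, x0=np.zeros(2))
```

For this convex toy loss the iterates converge to `target`; in the actual workflow the gradient comes from the residual calculator and loss function, and the variables are the basis-set coefficients.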
Pipelines
Reconstruction workflows can be greatly abstracted via reconstruction pipelines. A pipeline contains a typical series of objects linked together, and it is possible to replace some of the components in the pipeline with others preferred by the user.

The user interaction with a pipeline can be understood as follows:

1. A `DataContainer` instance is created from input, as in a standard workflow.
2. The `DataContainer` is passed to a pipeline function, e.g., the SIGTT pipeline function, along with user-specified parameters as keyword arguments. For example, one might want to set the regularization weight of the `Laplacian` regularizer (using the `regularization_weight` keyword argument), or replace the default `SAXSProjectorBilinear` with the GPU-based `SAXSProjectorCUDA` (using the `Projector` keyword argument).
3. The SIGTT pipeline executes and returns a `dict`, which contains the entry `'result'` with the optimized coefficients. In addition, it contains the entries `optimizer`, `loss_function`, `residual_calculator`, `basis_set`, and `projector`, each holding the instance of the respective object used in the pipeline.
4. The `get_output` method of the basis set can then be used to generate tensor field properties, as in the standard workflow.
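The keyword-override pattern described above can be sketched in plain Python. All names below (`toy_pipeline`, `IdentityProjector`, `FancyProjector`) are hypothetical illustrations of the design, not mumott's API; the point is that a pipeline builds a default chain of components while keyword arguments let the user swap components or tune parameters, and the returned `dict` exposes both the result and the instances used.

```python
class IdentityProjector:
    """Default projector stand-in (hypothetical)."""
    def __call__(self, data):
        return data

class FancyProjector(IdentityProjector):
    """Drop-in replacement, e.g., a GPU-backed variant (hypothetical)."""

def toy_pipeline(data, Projector=IdentityProjector, regularization_weight=1e-2):
    # Build the default chain of components; keyword arguments override
    # individual components or parameters without rebuilding the chain.
    projector = Projector()
    result = projector(data)  # placeholder for the actual reconstruction
    return {
        'result': result,
        'projector': projector,
        'regularization_weight': regularization_weight,
    }

# Swap in the alternative projector via the keyword argument.
out = toy_pipeline([1, 2, 3], Projector=FancyProjector)
```

Returning the component instances alongside `'result'` lets the user inspect or reuse them afterwards, e.g., to call methods on the basis set as in the standard workflow.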