sasmodels package¶
Subpackages¶
Submodules¶
sasmodels.alignment module¶
GPU data alignment.
Some web sites say that maximizing performance for OpenCL code requires aligning data on certain memory boundaries. The following functions provide this service:
align_data()
aligns an existing array, returning a new array of the
correct alignment.
align_empty()
to create an empty array of the correct alignment.
The alignment is set by the *boundary* attribute of the GPU environment.
Note: This code is unused. So far, tests have not demonstrated any improvement from forcing correct alignment. The tests should be repeated with arrays forced away from the target boundaries to decide whether it is really required.
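As an illustration of the technique, here is a minimal numpy sketch of boundary-aligned allocation (hypothetical, not the actual align_empty implementation): over-allocate a byte buffer, then take a view starting at the first aligned offset.

```python
import numpy as np

def align_empty(shape, dtype, boundary=128):
    """Return an empty array whose data buffer starts on a *boundary*-byte
    address.  Hypothetical sketch: over-allocate, then slice to alignment."""
    dtype = np.dtype(dtype)
    size = int(np.prod(shape))
    # Over-allocate by one boundary's worth of bytes so an aligned start exists.
    buf = np.empty(size * dtype.itemsize + boundary, dtype=np.uint8)
    offset = -buf.ctypes.data % boundary   # bytes to the next aligned address
    view = buf[offset:offset + size * dtype.itemsize].view(dtype)
    return view.reshape(shape)

a = align_empty((100,), np.float32, boundary=128)
assert a.ctypes.data % 128 == 0
```

The aligned array is a view into the over-allocated buffer, so no data is copied after creation.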
sasmodels.bumps_model module¶
Wrap sasmodels for direct use by bumps.
Model is a wrapper for the sasmodels kernel which defines a bumps Parameter box for each kernel parameter. Model accepts keyword arguments to set the initial value for each parameter.
Experiment combines the Model function with a data file loaded by the sasview data loader. Experiment takes a cutoff parameter controlling how far the polydispersity integral extends.
- sasmodels.bumps_model.BumpsParameter¶
alias of
Parameter
- class sasmodels.bumps_model.Data1D(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None)[source]¶
Bases:
object
1D data object.
Note that this definition matches the attributes from sasview, with some generic 1D data vectors and some SAS specific definitions. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
x, dx: \(q\) vector and gaussian resolution
y, dy: \(I(q)\) vector and measurement uncertainty
mask: values to include in plotting/analysis
dxl: slit widths for slit smeared data, with dx ignored
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
- __init__(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None) None [source]¶
- __module__ = 'sasmodels.data'¶
- class sasmodels.bumps_model.Data2D(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None)[source]¶
Bases:
object
2D data object.
Note that this definition matches the attributes from sasview. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
qx_data, dqx_data: \(q_x\) matrix and gaussian resolution
qy_data, dqy_data: \(q_y\) matrix and gaussian resolution
data, err_data: \(I(q)\) matrix and measurement uncertainty
mask: values to exclude from plotting/analysis
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
_zaxis, _zunit: label and units for the z axis
Q_unit, I_unit: units for Q and intensity
x_bins, y_bins: grid steps in x and y directions
- __init__(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None) None [source]¶
- __module__ = 'sasmodels.data'¶
- class sasmodels.bumps_model.DataMixin[source]¶
Bases:
object
DataMixin captures the common aspects of evaluating a SAS model for a particular data set, including calculating Iq and evaluating the resolution function. It is used in particular by DirectModel, which accepts SAS model parameters as keyword arguments to the calculator method, and by bumps_model.Experiment, which wraps the model and data for use with the Bumps fitting engine. It is not currently used by sasview_model.SasviewModel since that would require a number of changes to SasView.
_interpret_data initializes the data structures necessary to manage the calculations. This sets attributes in the child class such as data_type and resolution.
_calc_theory evaluates the model at the given control values.
_set_data sets the intensity data in the data object, possibly with random noise added. This is useful for simulating a dataset with the results from _calc_theory.
- __module__ = 'sasmodels.direct_model'¶
- _interpret_data(data: Data1D | Data2D | SesansData, model: KernelModel) None [source]¶
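The three-method division of labour described above can be sketched with a self-contained toy (all names here are hypothetical stand-ins, not the sasmodels classes): _interpret_data stores the data and model, _calc_theory evaluates, and _set_data records the result.

```python
import numpy as np

class ToyDataMixin:
    """Hypothetical sketch of the DataMixin pattern: setup, evaluate, store."""
    def _interpret_data(self, data, model):
        self._data = data           # e.g. the q values to evaluate at
        self._model = model         # callable theory function
    def _calc_theory(self, pars):
        # Evaluate the model at the given control values.
        return self._model(self._data, **pars)
    def _set_data(self, theory, noise=None):
        # Store the theory, possibly with noise added for simulation.
        self._theory = theory if noise is None else theory + noise

class ToyCalculator(ToyDataMixin):
    """Child class exposing par=value keyword evaluation, DirectModel-style."""
    def __call__(self, **pars):
        theory = self._calc_theory(pars)
        self._set_data(theory)
        return self._theory

calc = ToyCalculator()
calc._interpret_data(np.linspace(0.1, 1.0, 5), lambda q, scale=1.0: scale / q)
result = calc(scale=2.0)
```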
- class sasmodels.bumps_model.Experiment(data: Data1D | Data2D, model: Model, cutoff: float = 1e-05, name: str | None = None, extra_pars: Dict[str, Parameter] | None = None)[source]¶
Bases:
DataMixin
Bumps wrapper for a SAS experiment.
data is a data.Data1D, data.Data2D or data.SesansData object. Use data.empty_data1D() or data.empty_data2D() to define the \(q, \Delta q\) calculation points for displaying the SANS curve when there is no measured data.
model is a Model object.
cutoff is the integration cutoff, which avoids computing the SAS model where the polydispersity weight is low.
The resulting model can be used directly in a Bumps FitProblem call.
- __init__(data: Data1D | Data2D, model: Model, cutoff: float = 1e-05, name: str | None = None, extra_pars: Dict[str, Parameter] | None = None) None [source]¶
- __module__ = 'sasmodels.bumps_model'¶
- _cache: Dict[str, ndarray] = None¶
- nllf() float [source]¶
Return the negative log likelihood of seeing data given the model parameters, up to a normalizing constant which depends on the data uncertainty.
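A minimal sketch of such a negative log likelihood, assuming independent Gaussian uncertainties (the normalization constant involving \(dy\) is dropped, consistent with the description above):

```python
import numpy as np

def gaussian_nllf(y, dy, theory):
    # -log P(data | theory) up to the constant normalization term:
    # 0.5 * sum(((y - theory) / dy)**2)
    resid = (y - theory) / dy
    return 0.5 * np.sum(resid ** 2)
```

A perfect fit gives zero; each one-sigma residual contributes 0.5.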
- property resolution: None | Resolution¶
resolution.Resolution applied to the data, if any.
- save(basename: str) None [source]¶
Save the model parameters and data into a file.
Not implemented except for SESANS fits.
- class sasmodels.bumps_model.KernelModel[source]¶
Bases:
object
Model definition for the compute engine.
- __module__ = 'sasmodels.kernel'¶
- dtype: dtype = None¶
- class sasmodels.bumps_model.Model(model: KernelModel, **kwargs: Dict[str, float | Parameter])[source]¶
Bases:
object
Bumps wrapper for a SAS model.
model is a runnable module as returned from core.load_model().
cutoff is the polydispersity weight cutoff.
Any additional key=value pairs are model dependent parameters.
- __init__(model: KernelModel, **kwargs: Dict[str, float | Parameter]) None [source]¶
- __module__ = 'sasmodels.bumps_model'¶
- class sasmodels.bumps_model.ModelInfo[source]¶
Bases:
object
Interpret the model definition file, categorizing the parameters.
The module can be loaded with a normal python import statement if you know which module you need, or with __import__('sasmodels.model.' + name) if the name is in a string.
The structure should be mostly static, other than the delayed definition of Iq, Iqac and Iqabc if they need to be defined.
- Imagnetic: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows Iq.
- Iq: None | str | Callable[[...], np.ndarray] = None¶
Returns I(q, a, b, …) for parameters a, b, etc. defined by the parameter table. Iq can be defined as a python function or as a C function. If it is defined in C, then set Iq to the body of the C function, including the return statement. This function takes values for q and each of the parameters as separate double values (which may be converted to float or long double by sasmodels). All source code files listed in source will be loaded before the Iq function is defined. If Iq is not present, then the sources should define static double Iq(double q, double a, double b, …) which will return I(q, a, b, …). Multiplicity parameters are sent as pointers to doubles. Constants in floating point expressions should include the decimal point. See generate for more details. If have_Fq is True, then Iq should return an interleaved array of \([\sum F(q_1), \sum F^2(q_1), \ldots, \sum F(q_n), \sum F^2(q_n)]\).
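A hypothetical pure-python model file in this style might look as follows (the model name and parameters here are illustrative only; the column layout of each parameter row follows the conventions listed under parameters):

```python
import numpy as np

# Hypothetical model file: Iq receives the q vector plus one value per
# parameter from the parameter table, and returns I(q).
name = "line"
title = "Linear function I(q) = m*q + b"
description = "Illustrative model; not a physical scattering function."
parameters = [
    # [name, units, default, [min, max], type, description]
    ["m", "1/Ang", 1.0, [-np.inf, np.inf], "", "slope"],
    ["b", "1/cm",  0.0, [-np.inf, np.inf], "", "intercept"],
]

def Iq(q, m, b):
    return m * q + b
Iq.vectorized = True   # Iq accepts the whole q vector at once
```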
- Iqabc: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qa, qb, qc, a, b, …). The interface follows Iq.
- Iqac: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qab, qc, a, b, …). The interface follows Iq.
- Iqxy: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows Iq.
- __module__ = 'sasmodels.modelinfo'¶
- base: ParameterTable = None¶
For reparameterized systems, base is the base parameter table. For normal systems it is simply a copy of parameters.
- basefile: str | None = None¶
Base file is usually filename, but not when a model has been reparameterized, in which case it is the file containing the original model definition. This is needed to signal an additional dependency for the model time stamp, and so that the compiler reports correct file for syntax errors.
- c_code: str | None = None¶
inline source code, added after all elements of source
- category: str | None = None¶
Location of the model description in the documentation. This takes the form of “section” or “section:subsection”. So for example, porod uses category=”shape-independent” so it is in the Shape-Independent Functions section whereas capped_cylinder uses: category=”shape:cylinder”, which puts it in the Cylinder Functions section.
- composition: Tuple[str, List[ModelInfo]] | None = None¶
Composition is None if this is an independent model, or a tuple with the composition type ('product' or 'mixture') and a list of ModelInfo blocks for the composed objects. This allows us to rebuild a complete mixture or product model from the info block. composition is not given in the model definition file, but instead arises when the model is constructed using names such as sphere*hardsphere or cylinder+sphere.
- description: str = None¶
Long description of the model.
- docs: str = None¶
Doc string from the top of the model file. This should be formatted using reStructuredText, with latex markup in ".. math" environments or in dollar signs. This will be automatically extracted to a .rst file by generate.make_doc(), then converted to HTML or PDF by Sphinx.
- filename: str | None = None¶
Full path to the file defining the kernel, if any.
- form_volume: None | str | Callable[[np.ndarray], float] = None¶
Returns the form volume for python-based models. Form volume is needed for volume normalization in the polydispersity integral. If no parameters are volume parameters, then form volume is not needed. For C-based models (with source defined, or with Iq defined using a string containing C code), form_volume must also be C code, either defined as a string or in the sources.
- get_hidden_parameters(control)[source]¶
Returns the set of hidden parameters for the model. control is the value of the control parameter. Note that multiplicity models have an implicit control parameter, which is the parameter that controls the multiplicity.
- have_Fq = False¶
True if the model defines an Fq function with signature
void Fq(double q, double *F1, double *F2, ...)
- hidden: Callable[[int], Set[str]] | None = None¶
Different variants require different parameters. In order to show just the parameters needed for the variant selected, you should provide a function hidden(control) -> set(['a', 'b', …]) indicating which parameters need to be hidden. For multiplicity models, you need to use the complete name of the parameter, including its number. So for example, if variant "a" uses only sld1 and sld2, then sld3, sld4 and sld5 of multiplicity parameter sld[5] should be in the hidden set.
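Continuing the sld[5] example above, a hidden function might look like this (the control values here are hypothetical):

```python
# Hypothetical hidden() callback: when the control value selects a variant
# that uses only sld1 and sld2, the remaining shells are hidden in the GUI.
def hidden(control):
    if control == 0:
        return {"sld3", "sld4", "sld5"}
    return set()
```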
- id: str = None¶
Id of the kernel used to load it from the filesystem.
- lineno: Dict[str, int] = None¶
Line numbers for symbols defining C code
- name: str = None¶
Display name of the model, which defaults to the model id but with capitalization of the parts, so for example core_shell defaults to "Core Shell".
- opencl: bool = None¶
True if the model can be run as an opencl model. If for some reason the model cannot be run in opencl (e.g., because the model passes functions by reference), then set this to false.
- parameters: ParameterTable = None¶
Model parameter table. Parameters are defined using a list of parameter definitions, each of which contains the parameter name, units, default value, limits, type and description. See Parameter for details on the individual parameters. The parameters are gathered into a ParameterTable, which provides various views into the parameter list.
- profile: Callable[[np.ndarray], None] | None = None¶
Returns a model profile curve x, y. If profile is defined, this curve will appear in response to the Show button in SasView. Use profile_axes to set the axis labels. Note that y values will be scaled by 1e6 before plotting.
- profile_axes: Tuple[str, str] = None¶
Axis labels for the profile plot. The default is ['x', 'y']. Only the x component is used for now.
- radius_effective: None | Callable[[int, np.ndarray], float] = None¶
Computes the effective radius of the shape given the volume parameters. Only needed for models defined in python that can be used in the monodisperse approximation for non-dilute solutions, P@S. The first argument is the integer effective radius mode, with default 0.
- radius_effective_modes: List[str] = None¶
List of options for computing the effective radius of the shape, or None if the model is not usable as a form factor model.
- random: Callable[[], Dict[str, float]] | None = None¶
Returns a random parameter set for the model
- sesans: Callable[[np.ndarray], np.ndarray] | None = None¶
Returns sesans(z, a, b, …) for models which can directly compute the SESANS correlation function. Note: not currently implemented.
- shell_volume: None | str | Callable[[np.ndarray], float] = None¶
Returns the shell volume for python-based models. Form volume and shell volume are needed for volume normalization in the polydispersity integral and structure interactions for hollow shapes. If no parameters are volume parameters, then shell volume is not needed. For C-based models (with source defined, or with Iq defined using a string containing C code), shell_volume must also be C code, either defined as a string or in the sources.
- single: bool = None¶
True if the model can be computed accurately with single precision. This is True by default, but models such as bcc_paracrystal set it to False because they require double precision calculations.
- source: List[str] = None¶
List of C source files used to define the model. The source files should define the Iq function, and possibly Iqac or Iqabc if the model defines orientation parameters. Files containing the most basic functions must appear first in the list, followed by the files that use those functions.
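For example, a source list might be ordered like this (the file names below are illustrative; check the sasmodels source tree for the actual library files a given model needs):

```python
# Hypothetical source list: library files defining basic functions come
# first, followed by the model file that uses them.
source = ["lib/polevl.c", "lib/sas_J1.c", "cylinder.c"]
```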
- structure_factor: bool = None¶
True if the model is a structure factor used to model the interaction between form factor models. This will default to False if it is not provided in the file.
- tests: List[TestCondition] = None¶
The set of tests that must pass. The format of the tests is described in model_test.
- title: str = None¶
Short description of the model.
- translation: str | None = None¶
Parameter translation code to convert the parameter table from the caller to the base table used to evaluate the model.
- valid: str = None¶
Expression which evaluates to True if the input parameters are valid and the model can be computed, or False otherwise. Invalid parameter sets will not be included in the weighted \(I(Q)\) calculation or its volume normalization. Use C syntax for the expressions, with || for or, && for and, and ! for not. Any non-magnetic parameter can be used.
- class sasmodels.bumps_model.Reference(obj, attr, **kw)[source]¶
Bases:
Parameter
Create an adaptor so that a model attribute can be treated as if it were a parameter. This allows only direct access, wherein the storage for the parameter value is provided by the underlying model.
Indirect access, wherein the storage is provided by the parameter, cannot be supported since the parameter has no way to detect that the model is asking for the value of the attribute. This means that model attributes cannot be assigned to parameter expressions without some trigger to update the values of the attributes in the model.
- __module__ = 'bumps.parameter'¶
- property value¶
- class sasmodels.bumps_model.Resolution[source]¶
Bases:
object
Abstract base class defining a 1D resolution function.
q is the set of q values at which the data is measured.
q_calc is the set of q values at which the theory needs to be evaluated. This may extend and interpolate the q values.
apply is the method to call with I(q_calc) to compute the resolution smeared theory I(q).
- __module__ = 'sasmodels.resolution'¶
- q: ndarray = None¶
- q_calc: ndarray = None¶
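A minimal sketch of the interface described above, using a no-op resolution in which the theory is evaluated exactly at the measurement points (the real implementations live in the sasmodels resolution module; this class is only illustrative):

```python
import numpy as np

class NoSmearing:
    """Hypothetical minimal Resolution: no smearing, so q_calc == q and
    apply() returns the theory unchanged."""
    def __init__(self, q):
        self.q = np.asarray(q, dtype=float)
        self.q_calc = self.q          # no extension or interpolation needed
    def apply(self, theory):
        return theory                 # identity "smearing"
```

A real resolution function would instead build an extended q_calc grid and have apply() weight I(q_calc) by the resolution kernel to produce I(q).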
- sasmodels.bumps_model.create_parameters(model_info: ModelInfo, **kwargs: float | str | Parameter) Tuple[Dict[str, Parameter], Dict[str, str]] [source]¶
Generate Bumps parameters from the model info.
model_info is returned from generate.model_info() on the model definition module.
Any additional key=value pairs are initial values for the parameters to the models. Uninitialized parameters will use the model default value. The value can be a float, a bumps parameter, or in the case of the distribution type parameter, a string.
Returns a dictionary of {name: BumpsParameter} containing the bumps parameters for each model parameter, and a dictionary of {name: str} containing the polydispersity distribution types.
- sasmodels.bumps_model.plot_theory(data: Data1D | Data2D | SesansData, theory: ndarray | None, resid: ndarray | None = None, view: str | None = None, use_data: bool = True, limits: Tuple[float, float] | None = None, Iq_calc: ndarray | None = None) None [source]¶
Plot theory calculation.
data is needed to define the graph properties such as labels and units, and to define the data mask.
theory is a matrix of the same shape as the data.
view is log or linear
use_data is True if the data should be plotted as well as the theory.
limits sets the intensity limits on the plot; if None then the limits are inferred from the data.
Iq_calc is the raw theory values without resolution smearing
sasmodels.compare module¶
Program to compare models using different compute engines.
This program lets you compare results between the OpenCL and DLL versions of the code and between precisions (half, fast, single, double, quad), where fast precision is single precision using native functions for trig, etc., and may not be completely IEEE 754 compliant. This lets you make sure that the model calculations are stable, or decide whether you need to tag the model as double precision only.
Run using "./sascomp -h" in the sasmodels root to see the command line options. To run from an installed version of sasmodels, use "python -m sasmodels.compare -h".
Note that there is no way within sasmodels to select between an OpenCL CPU device and a GPU device, but you can do so by setting the SAS_OPENCL environment variable. Start a python interpreter and enter:
import pyopencl as cl
cl.create_some_context()
This will prompt you to select from the available OpenCL devices and tell you which string to use for the SAS_OPENCL variable. On Windows you will need to remove the quotes.
- class sasmodels.compare.Calculator(*args, **kwargs)[source]¶
Bases:
Protocol
Kernel calculator takes par=value keyword arguments.
- __abstractmethods__ = frozenset({})¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.compare', '__doc__': 'Kernel calculator takes *par=value* keyword arguments.', '__call__': <function Calculator.__call__>, '__dict__': <attribute '__dict__' of 'Calculator' objects>, '__weakref__': <attribute '__weakref__' of 'Calculator' objects>, '__parameters__': (), '_is_protocol': True, '__subclasshook__': <function Protocol.__init_subclass__.<locals>._proto_hook>, '__init__': <function _no_init>, '__abstractmethods__': frozenset(), '_abc_impl': <_abc_data object>, '__annotations__': {}})¶
- __doc__ = 'Kernel calculator takes *par=value* keyword arguments.'¶
- __init__(*args, **kwargs)¶
- __module__ = 'sasmodels.compare'¶
- __parameters__ = ()¶
- __subclasshook__()¶
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).
- __weakref__¶
list of weak references to the object (if defined)
- _abc_impl = <_abc_data object>¶
- _is_protocol = True¶
- class sasmodels.compare.Explore(opts: Dict[str, Any])[source]¶
Bases:
object
Bumps wrapper for a SAS model comparison.
The resulting object can be used as a Bumps fit problem so that parameters can be adjusted in the GUI, with plots updated on the fly.
- __dict__ = mappingproxy({'__module__': 'sasmodels.compare', '__doc__': '\n Bumps wrapper for a SAS model comparison.\n\n The resulting object can be used as a Bumps fit problem so that\n parameters can be adjusted in the GUI, with plots updated on the fly.\n ', '__init__': <function Explore.__init__>, 'revert_values': <function Explore.revert_values>, 'model_update': <function Explore.model_update>, 'numpoints': <function Explore.numpoints>, 'parameters': <function Explore.parameters>, 'nllf': <function Explore.nllf>, 'plot': <function Explore.plot>, '__dict__': <attribute '__dict__' of 'Explore' objects>, '__weakref__': <attribute '__weakref__' of 'Explore' objects>, '__annotations__': {}})¶
- __doc__ = '\n Bumps wrapper for a SAS model comparison.\n\n The resulting object can be used as a Bumps fit problem so that\n parameters can be adjusted in the GUI, with plots updated on the fly.\n '¶
- __module__ = 'sasmodels.compare'¶
- __weakref__¶
list of weak references to the object (if defined)
- sasmodels.compare.MATH = {'acos': <built-in function acos>, 'acosh': <built-in function acosh>, 'asin': <built-in function asin>, 'asinh': <built-in function asinh>, 'atan': <built-in function atan>, 'atan2': <built-in function atan2>, 'atanh': <built-in function atanh>, 'ceil': <built-in function ceil>, 'comb': <built-in function comb>, 'copysign': <built-in function copysign>, 'cos': <built-in function cos>, 'cosh': <built-in function cosh>, 'degrees': <built-in function degrees>, 'dist': <built-in function dist>, 'e': 2.718281828459045, 'erf': <built-in function erf>, 'erfc': <built-in function erfc>, 'exp': <built-in function exp>, 'expm1': <built-in function expm1>, 'fabs': <built-in function fabs>, 'factorial': <built-in function factorial>, 'floor': <built-in function floor>, 'fmod': <built-in function fmod>, 'frexp': <built-in function frexp>, 'fsum': <built-in function fsum>, 'gamma': <built-in function gamma>, 'gcd': <built-in function gcd>, 'hypot': <built-in function hypot>, 'inf': inf, 'isclose': <built-in function isclose>, 'isfinite': <built-in function isfinite>, 'isinf': <built-in function isinf>, 'isnan': <built-in function isnan>, 'isqrt': <built-in function isqrt>, 'ldexp': <built-in function ldexp>, 'lgamma': <built-in function lgamma>, 'log': <built-in function log>, 'log10': <built-in function log10>, 'log1p': <built-in function log1p>, 'log2': <built-in function log2>, 'modf': <built-in function modf>, 'nan': nan, 'perm': <built-in function perm>, 'pi': 3.141592653589793, 'pow': <built-in function pow>, 'prod': <built-in function prod>, 'radians': <built-in function radians>, 'remainder': <built-in function remainder>, 'sin': <built-in function sin>, 'sinh': <built-in function sinh>, 'sqrt': <built-in function sqrt>, 'tan': <built-in function tan>, 'tanh': <built-in function tanh>, 'tau': 6.283185307179586, 'trunc': <built-in function trunc>}¶
list of math functions for use in evaluating parameters
- sasmodels.compare._format_par(name: str, value: float = 0.0, pd: float = 0.0, n: int = 0, nsigma: float = 3.0, pdtype: str = 'gaussian', relative_pd: bool = False, M0: float = 0.0, mphi: float = 0.0, mtheta: float = 0.0) str [source]¶
- sasmodels.compare._random_pd(model_info: ModelInfo, pars: Dict[str, float], is2d: bool) None [source]¶
Generate a random dispersity distribution for the model.
1% no shape dispersity; 85% a single shape parameter; 13% two shape parameters; 1% three shape parameters.
If oriented, then put dispersity in theta, add phi and psi dispersity with 10% probability for each.
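The stated weights can be sketched as a categorical draw (draw_n_dispersity is a hypothetical helper for illustration, not the actual _random_pd code):

```python
import numpy as np

def draw_n_dispersity(rng):
    """Pick how many shape parameters receive dispersity, using the
    stated weights: 1% none, 85% one, 13% two, 1% three."""
    u = rng.uniform()
    if u < 0.01:
        return 0
    if u < 0.86:   # 0.01 + 0.85
        return 1
    if u < 0.99:   # 0.86 + 0.13
        return 2
    return 3

rng = np.random.RandomState(1)
counts = [draw_n_dispersity(rng) for _ in range(10000)]
```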
- sasmodels.compare._randomize_one(model_info: ModelInfo, name: str, value: float) float [source]¶
Randomize a single parameter.
- sasmodels.compare._show_invalid(data: Data, theory: np.ma.ndarray) None [source]¶
Display a list of the non-finite values in theory.
- sasmodels.compare._swap_pars(pars: ModelInfo, a: str, b: str) None [source]¶
Swap polydispersity and magnetism when swapping parameters.
Assume the parameters are of the same basic type (volume, sld, or other), so that if, for example, radius_pd is in pars but radius_bell_pd is not, then after the swap radius_bell_pd will be the old radius_pd and radius_pd will be removed.
- sasmodels.compare.build_math_context() Dict[str, Callable] [source]¶
Build a dictionary of functions from the math module.
- sasmodels.compare.columnize(items: List[str], indent: str = '', width: int | None = None) str [source]¶
Format a list of strings into columns.
Returns a newline-separated string ready for printing.
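For illustration, a minimal function with these semantics might look like the following (an assumed implementation, not the actual sasmodels code):

```python
import math

def columnize(items, indent="", width=79):
    """Lay out items in as many equal-width columns as fit in width,
    filling columns top to bottom."""
    if not items:
        return ""
    colwidth = max(len(s) for s in items) + 1
    ncol = max(1, (width - len(indent)) // colwidth)
    nrow = math.ceil(len(items) / ncol)
    lines = []
    for row in range(nrow):
        cells = items[row::nrow]  # one cell from each column
        lines.append(indent + "".join(s.ljust(colwidth) for s in cells))
    return "\n".join(line.rstrip() for line in lines) + "\n"
```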
- sasmodels.compare.compare(opts: Dict[str, Any], limits: Tuple[float, float] | None = None, maxdim: float | None = None) Tuple[float, float] [source]¶
Perform a comparison using options from the command line.
limits are the display limits on the graph, either to set the y-axis for 1D or to set the colormap scale for 2D. If None, then they are inferred from the data and returned. When exploring using Bumps, the limits are set when the model is initially called, and maintained as the values are adjusted, making it easier to see the effects of the parameters.
maxdim is DEPRECATED; use opts[‘maxdim’] instead.
- sasmodels.compare.constrain_pars(model_info: ModelInfo, pars: Mapping[str, float]) None [source]¶
Restrict parameters to valid values.
This includes model specific code for models such as capped_cylinder which need to support within model constraints (cap radius more than cylinder radius in this case).
Warning: this updates the pars dictionary in place.
- sasmodels.compare.explore(opts: Dict[str, Any]) None [source]¶
Explore the model using the Bumps GUI.
- sasmodels.compare.get_pars(model_info: ModelInfo) Mapping[str, float] [source]¶
Extract default parameters from the model definition.
- sasmodels.compare.limit_dimensions(model_info: ModelInfo, pars: Mapping[str, float], maxdim: float) None [source]¶
Limit parameters with units of Ang to maxdim.
- sasmodels.compare.make_data(opts: Dict[str, Any]) Tuple[Data1D | Data2D | SesansData, ndarray] [source]¶
Generate an empty dataset, used with the model to set Q points and resolution.
opts contains the options, with ‘qmax’, ‘nq’, ‘res’, ‘accuracy’, ‘is2d’ and ‘view’ parsed from the command line.
- sasmodels.compare.make_engine(model_info: ModelInfo, data: Data1D | Data2D | SesansData, dtype: str, cutoff: float, ngauss: int = 0) Calculator [source]¶
Generate the appropriate calculation engine for the given datatype.
Datatypes with ‘!’ appended are evaluated using external C DLLs rather than OpenCL.
- sasmodels.compare.parameter_range(p: str, v: float) Tuple[float, float] [source]¶
Choose a parameter range based on parameter name and initial value.
- sasmodels.compare.parlist(model_info: ModelInfo, pars: Mapping[str, float], is2d: bool) str [source]¶
Format the parameter list for printing.
- sasmodels.compare.parse_pars(opts: Dict[str, Any], maxdim: float | None = None) Tuple[Dict[str, float], Dict[str, float]] [source]¶
Generate parameter sets for base and comparison models.
Returns a pair of parameter dictionaries.
The default parameter values come from the model, or a randomized model if a seed value is given. Next, evaluate any parameter expressions, constraining the value of the parameter within and between models.
Note: When generating random parameters, the seed must already be set with a call to np.random.seed(opts[‘seed’]).
opts controls the parameter generation:
opts = {
    'info': (model_info 1, model_info 2),
    'seed': -1,         # if seed>=0 then randomize parameters
    'mono': False,      # force monodisperse random parameters
    'magnetic': False,  # force nonmagnetic random parameters
    'maxdim': np.inf,   # limit particle size to maxdim for random pars
    'values': ['par=expr', ...],  # override parameter values in model
    'show_pars': False, # show parameter values
    'is2d': False,      # show values for orientation parameters
}
The values of par=expr are evaluated approximately as:
import numpy as np
from math import *
from parameter_set import *
parameter_set.par = eval(expr)
That is, you can use arbitrary python math expressions including the functions defined in the math library and the numpy library. You can also use the existing parameter values, which will either be the model defaults or the randomly generated values if seed is non-negative.
To compare different values of the same parameter, use par=expr,expr. The first parameter set will have the values from the first expression and the second parameter set will have the values from the second expression. Note that the second expression is evaluated using the values from the first expression, which allows things like:
length=2*radius,length+3
which will compare length to length+3 when length is set to 2*radius.
maxdim is DEPRECATED; use opts[‘maxdim’] instead.
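The evaluation semantics can be sketched roughly as follows; eval_pars is a hypothetical helper approximating what parse_pars does internally, not the actual code:

```python
import math
import numpy as np

def eval_pars(assignments, pars):
    """Evaluate a list of 'par=expr' strings against the current
    parameter set, in order, so later expressions see earlier results."""
    context = {name: getattr(math, name) for name in dir(math)
               if not name.startswith('_')}
    context['np'] = np
    for item in assignments:
        name, expr = item.split('=', 1)
        context.update(pars)  # expressions may use existing parameter values
        pars[name.strip()] = float(eval(expr, {"__builtins__": {}}, context))
    return pars

pars = eval_pars(["length=2*radius", "sld=sqrt(16)"], {"radius": 10.0})
```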
- sasmodels.compare.plot_models(opts: Dict[str, Any], result: Dict[str, Any], limits: Tuple[float, float] | None = None, setnum: int = 0) Tuple[float, float] [source]¶
Plot the results from
run_models()
.
- sasmodels.compare.plot_profile(model_info: ModelInfo, label: List[Tuple[float, ndarray, ndarray]] = 'base', **args: float) None [source]¶
Plot the profile returned by the model profile method.
model_info defines model parameters, etc.
label is the legend label for the plotted line.
args are parameter=value pairs for the model profile function.
- class sasmodels.compare.push_seed(seed: int | None = None)[source]¶
Bases:
object
Set the seed value for the random number generator.
When used in a with statement, the random number generator state is restored after the with statement is complete.
- Parameters:
- seedint or array_like, optional
Seed for RandomState
- Example:
Seed can be used directly to set the seed:
>>> from numpy.random import randint
>>> push_seed(24)
<...push_seed object at...>
>>> print(randint(0,1000000,3))
[242082 899 211136]
Seed can also be used in a with statement, which sets the random number generator state for the enclosed computations and restores it to the previous state on completion:
>>> with push_seed(24):
...     print(randint(0,1000000,3))
[242082 899 211136]
Using nested contexts, we can demonstrate that state is indeed restored after the block completes:
>>> with push_seed(24):
...     print(randint(0,1000000))
...     with push_seed(24):
...         print(randint(0,1000000,3))
...     print(randint(0,1000000))
242082
[242082 899 211136]
899
The restore step is protected against exceptions in the block:
>>> with push_seed(24):
...     print(randint(0,1000000))
...     try:
...         with push_seed(24):
...             print(randint(0,1000000,3))
...         raise Exception()
...     except Exception:
...         print("Exception raised")
...     print(randint(0,1000000))
242082
[242082 899 211136]
Exception raised
899
- __dict__ = mappingproxy({'__module__': 'sasmodels.compare', '__doc__': '\n Set the seed value for the random number generator.\n\n When used in a with statement, the random number generator state is\n restored after the with statement is complete.\n\n :Parameters:\n\n *seed* : int or array_like, optional\n Seed for RandomState\n\n :Example:\n\n Seed can be used directly to set the seed::\n\n >>> from numpy.random import randint\n >>> push_seed(24)\n <...push_seed object at...>\n >>> print(randint(0,1000000,3))\n [242082 899 211136]\n\n Seed can also be used in a with statement, which sets the random\n number generator state for the enclosed computations and restores\n it to the previous state on completion::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000,3))\n [242082 899 211136]\n\n Using nested contexts, we can demonstrate that state is indeed\n restored after the block completes::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n 899\n\n The restore step is protected against exceptions in the block::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... try:\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... raise Exception()\n ... except Exception:\n ... print("Exception raised")\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n Exception raised\n 899\n ', '__init__': <function push_seed.__init__>, '__enter__': <function push_seed.__enter__>, '__exit__': <function push_seed.__exit__>, '__dict__': <attribute '__dict__' of 'push_seed' objects>, '__weakref__': <attribute '__weakref__' of 'push_seed' objects>, '__annotations__': {}})¶
- __doc__ = '\n Set the seed value for the random number generator.\n\n When used in a with statement, the random number generator state is\n restored after the with statement is complete.\n\n :Parameters:\n\n *seed* : int or array_like, optional\n Seed for RandomState\n\n :Example:\n\n Seed can be used directly to set the seed::\n\n >>> from numpy.random import randint\n >>> push_seed(24)\n <...push_seed object at...>\n >>> print(randint(0,1000000,3))\n [242082 899 211136]\n\n Seed can also be used in a with statement, which sets the random\n number generator state for the enclosed computations and restores\n it to the previous state on completion::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000,3))\n [242082 899 211136]\n\n Using nested contexts, we can demonstrate that state is indeed\n restored after the block completes::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n 899\n\n The restore step is protected against exceptions in the block::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... try:\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... raise Exception()\n ... except Exception:\n ... print("Exception raised")\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n Exception raised\n 899\n '¶
- __module__ = 'sasmodels.compare'¶
- __weakref__¶
list of weak references to the object (if defined)
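A minimal sketch of a context manager with this save/seed/restore behaviour, using numpy's legacy global RNG state (illustrative, not the sasmodels implementation):

```python
import numpy as np

class push_seed:
    """Seed the numpy RNG, restoring the prior generator state on
    exit, even if an exception is raised inside the block."""
    def __init__(self, seed=None):
        self._state = np.random.get_state()  # save before reseeding
        np.random.seed(seed)
    def __enter__(self):
        return None
    def __exit__(self, exc_type, exc_value, traceback):
        np.random.set_state(self._state)
        return False  # do not suppress exceptions

with push_seed(24):
    first = np.random.randint(0, 1000000)
with push_seed(24):
    second = np.random.randint(0, 1000000)
```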
- sasmodels.compare.randomize_pars(model_info: ModelInfo, pars: Mapping[str, float], maxdim: float = inf, is2d: bool = False) Mapping[str, float] [source]¶
Generate random values for all of the parameters.
Valid ranges for the random number generator are guessed from the name of the parameter; this will not account for constraints such as cap radius greater than cylinder radius in the capped_cylinder model, so
constrain_pars()
needs to be called afterward.
- sasmodels.compare.run_models(opts: Dict[str, Any], verbose: bool = False) Dict[str, Any] [source]¶
Process a parameter set, return calculation results and times.
- sasmodels.compare.set_beam_stop(data: Data1D | Data2D | SesansData, radius: float, outer: float | None = None) None [source]¶
Add a beam stop of the given radius. If outer, make an annulus.
- sasmodels.compare.set_spherical_integration_parameters(opts: Dict[str, Any], steps: int) None [source]¶
Set integration parameters for spherical integration over the entire surface in theta-phi coordinates.
- sasmodels.compare.suppress_magnetism(pars: Mapping[str, float]) Mapping[str, float] [source]¶
Completely eliminate magnetism from the model so that models can be tested more quickly.
- sasmodels.compare.suppress_pd(pars: Mapping[str, float]) Mapping[str, float] [source]¶
Completely eliminate polydispersity from the model so that models can be tested more quickly.
- sasmodels.compare.tic() Callable[[], float] [source]¶
Timer function.
Use “toc=tic()” to start the clock and “toc()” to measure a time interval.
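A plausible implementation of this closure-based timer, assuming time.perf_counter as the clock:

```python
import time

def tic():
    """Start a timer; the returned function reports elapsed seconds."""
    t0 = time.perf_counter()
    return lambda: time.perf_counter() - t0

toc = tic()
sum(range(100000))  # some work to time
elapsed = toc()
```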
- sasmodels.compare.time_calculation(calculator: Calculator, pars: Mapping[str, float], evals: int = 1)[source]¶
Compute the average calculation time over N evaluations.
An additional call is generated without polydispersity in order to initialize the calculation engine, and make the average more stable.
sasmodels.compare_many module¶
Program to compare results from many random parameter sets for a given model.
The result is a comma separated value (CSV) table that can be redirected from standard output into a file and loaded into a spreadsheet.
The models are compared for each parameter set and if the difference is
greater than expected for that precision, the parameter set is labeled
as bad and written to the output, along with the random seed used to
generate that parameter value. This seed can be used with compare
to reload and display the details of the model.
- sasmodels.compare_many.calc_stats(target: ndarray, value: ndarray, index: Any) Tuple[float, float, float, float] [source]¶
Calculate statistics between the target value and the computed value.
target and value are the vectors being compared, with the difference normalized by target to get relative error. Only the elements listed in index are used, though index may be an empty slice defined by slice(None, None).
Returns:
maxrel: the maximum relative difference
rel95: the relative difference with the 5% biggest differences ignored
maxabs: the maximum absolute difference for the 5% biggest differences
maxval: the maximum value for the 5% biggest differences
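The trimmed-relative-error idea can be sketched for the first two statistics (a simplified stand-in for the real calc_stats, which also reports maxabs and maxval):

```python
import numpy as np

def calc_stats(target, value):
    """Relative error statistics: the raw maximum, and the maximum
    after dropping the 5% largest relative differences."""
    relerr = np.abs(value - target) / np.abs(target)
    order = np.argsort(relerr)
    maxrel = relerr[order[-1]]
    keep = order[:int(np.ceil(len(order) * 0.95))]  # trim worst 5%
    rel95 = relerr[keep].max()
    return maxrel, rel95

target = np.array([1.0, 2.0, 4.0, 8.0, 100.0] * 4)
value = target * 1.01          # uniform 1% error everywhere...
value[0] = target[0] * 1.5     # ...plus one 50% outlier
maxrel, rel95 = calc_stats(target, value)
```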
- sasmodels.compare_many.compare_instance(name: str, data: Data1D | Data2D | SesansData, index: Any, N: int = 1, mono: bool = True, cutoff: float = 1e-05, base: str = 'single', comp: str = 'double') None [source]¶
Compare the model under different calculation engines.
name is the name of the model.
data is the data object giving \(q, \Delta q\) calculation points.
index is the active set of points.
N is the number of comparisons to make.
cutoff is the polydispersity weight cutoff to make the calculation a little bit faster.
base and comp are the names of the calculation engines to compare.
- sasmodels.compare_many.print_column_headers(pars: Dict[str, float], parts: List[str]) None [source]¶
Generate column headers for the differences and for the parameters, and print them to standard output.
sasmodels.conversion_table module¶
Parameter conversion table
CONVERSION_TABLE gives the old model name and a dictionary of old parameter
names for each parameter in sasmodels. This is used by convert
to
determine the equivalent parameter set when comparing a sasmodels model to
the models defined in previous versions of SasView and sasmodels. This is now
versioned based on the version number of SasView.
When any sasmodels parameter or model name is changed, this must be modified to account for that.
Usage:
<old_Sasview_version> : {
<new_model_name> : [
<old_model_name> ,
{
<new_param_name_1> : <old_param_name_1>,
...
<new_param_name_n> : <old_param_name_n>
}
]
}
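As a hypothetical fragment in that shape (the model and parameter names here are illustrative only; the real table lives in sasmodels.conversion_table as CONVERSION_TABLE):

```python
# Hypothetical entries in the shape described above.
CONVERSION_TABLE = {
    (3, 1, 2): {
        "core_shell_sphere": [
            "CoreShellModel",
            {"sld_core": "core_sld", "sld_shell": "shell_sld"},
        ],
    },
}

def old_name_and_pars(version, model):
    """Look up the pre-conversion model name and parameter mapping."""
    old_model, par_map = CONVERSION_TABLE[version][model]
    return old_model, par_map

old_model, par_map = old_name_and_pars((3, 1, 2), "core_shell_sphere")
```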
Any future parameter and model name changes can and should be given in this table for future compatibility.
sasmodels.convert module¶
Convert models to and from sasview.
- sasmodels.convert._check_one(name, seed=None)[source]¶
Generate a random set of parameters for name, and check that they can be converted back to SasView 3.x and forward again to sasmodels. Raises an error if the parameters are changed.
- sasmodels.convert._conversion_target(model_name, version=(3, 1, 2))[source]¶
Find the sasmodel name which translates into the sasview name.
Note: CoreShellEllipsoidModel translates into core_shell_ellipsoid:1. This is necessary since there is only one variant in sasmodels for the two variants in sasview.
- sasmodels.convert._convert_pars(pars, mapping)[source]¶
Rename the parameters and any associated polydispersity attributes.
- sasmodels.convert._is_sld(model_info, par)[source]¶
Return True if parameter is a magnetic magnitude or SLD parameter.
- sasmodels.convert._remove_pd(pars, key, name)[source]¶
Remove polydispersity from the parameter list.
Note: operates in place
- sasmodels.convert._rescale_sld(model_info, pars, scale)[source]¶
Rescale all sld parameters in the new model definition by scale so the numbers are nicer. Relies on the fact that all sld parameters in the new model definition end with sld. For backward conversion use scale=1e-6. For forward conversion use scale=1e6.
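A sketch of the rescaling based on that naming convention (rescale_sld is a hypothetical helper, not the actual _rescale_sld):

```python
def rescale_sld(pars, scale):
    """Multiply every parameter whose name ends in 'sld' by scale,
    leaving all other parameters untouched."""
    return {name: (value * scale if name.endswith("sld") else value)
            for name, value in pars.items()}

pars = {"radius": 50.0, "sld": 4e-6, "solvent_sld": 1e-6}
scaled = rescale_sld(pars, 1e6)  # forward conversion
```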
- sasmodels.convert._revert_pars(pars, mapping)[source]¶
Rename the parameters and any associated polydispersity attributes.
- sasmodels.convert.constrain_new_to_old(model_info, pars)[source]¶
Restrict parameter values to those that will match sasview.
- sasmodels.convert.convert_model(name, pars, use_underscore=False, model_version=(3, 1, 2))[source]¶
Convert model from old style parameter names to new style.
- sasmodels.convert.revert_name(model_info)[source]¶
Translate model name back to the name used in SasView 3.x
sasmodels.core module¶
Core model handling routines.
- class sasmodels.core.KernelModel[source]¶
Bases:
object
Model definition for the compute engine.
- __annotations__ = {'dtype': 'np.dtype', 'info': 'ModelInfo'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernel', '__doc__': '\n Model definition for the compute engine.\n ', 'info': None, 'dtype': None, 'make_kernel': <function KernelModel.make_kernel>, 'release': <function KernelModel.release>, '__dict__': <attribute '__dict__' of 'KernelModel' objects>, '__weakref__': <attribute '__weakref__' of 'KernelModel' objects>, '__annotations__': {'info': 'ModelInfo', 'dtype': 'np.dtype'}})¶
- __doc__ = '\n Model definition for the compute engine.\n '¶
- __module__ = 'sasmodels.kernel'¶
- __weakref__¶
list of weak references to the object (if defined)
- dtype: dtype = None¶
- class sasmodels.core.ModelInfo[source]¶
Bases:
object
Interpret the model definition file, categorizing the parameters.
The module can be loaded with a normal python import statement if you know which module you need, or with __import__(‘sasmodels.model.’+name) if the name is in a string.
The structure should be mostly static, other than the delayed definition of Iq, Iqac and Iqabc if they need to be defined.
- Imagnetic: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- Iq: None | str | Callable[[...], np.ndarray] = None¶
Returns I(q, a, b, …) for parameters a, b, etc. defined by the parameter table. Iq can be defined as a python function, or as a C function. If it is defined in C, then set Iq to the body of the C function, including the return statement. This function takes values for q and each of the parameters as separate double values (which may be converted to float or long double by sasmodels). All source code files listed in
source
will be loaded before the Iq function is defined. If Iq is not present, then sources should define static double Iq(double q, double a, double b, …) which will return I(q, a, b, …). Multiplicity parameters are sent as pointers to doubles. Constants in floating point expressions should include the decimal point. See generate
for more details. If have_Fq is True, then Iq should return an interleaved array of \([\sum F(q_1), \sum F^2(q_1), \ldots, \sum F(q_n), \sum F^2(q_n)]\).
- Iqabc: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qa, qb, qc, a, b, …). The interface follows
Iq
.
- Iqac: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qab, qc, a, b, …). The interface follows
Iq
.
- Iqxy: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- __annotations__ = {'Imagnetic': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iq': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqabc': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqac': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqxy': 'Union[None, str, Callable[[...], np.ndarray]]', 'base': 'ParameterTable', 'basefile': 'Optional[str]', 'c_code': 'Optional[str]', 'category': 'Optional[str]', 'composition': 'Optional[Tuple[str, List[ModelInfo]]]', 'description': 'str', 'docs': 'str', 'filename': 'Optional[str]', 'form_volume': 'Union[None, str, Callable[[np.ndarray], float]]', 'hidden': 'Optional[Callable[[int], Set[str]]]', 'id': 'str', 'lineno': 'Dict[str, int]', 'name': 'str', 'opencl': 'bool', 'parameters': 'ParameterTable', 'profile': 'Optional[Callable[[np.ndarray], None]]', 'profile_axes': 'Tuple[str, str]', 'radius_effective': 'Union[None, Callable[[int, np.ndarray], float]]', 'radius_effective_modes': 'List[str]', 'random': 'Optional[Callable[[], Dict[str, float]]]', 'sesans': 'Optional[Callable[[np.ndarray], np.ndarray]]', 'shell_volume': 'Union[None, str, Callable[[np.ndarray], float]]', 'single': 'bool', 'source': 'List[str]', 'structure_factor': 'bool', 'tests': 'List[TestCondition]', 'title': 'str', 'translation': 'Optional[str]', 'valid': 'str'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.modelinfo', '__doc__': "\n Interpret the model definition file, categorizing the parameters.\n\n The module can be loaded with a normal python import statement if you\n know which module you need, or with __import__('sasmodels.model.'+name)\n if the name is in a string.\n\n The structure should be mostly static, other than the delayed definition\n of *Iq*, *Iqac* and *Iqabc* if they need to be defined.\n ", 'filename': None, 'basefile': None, 'id': None, 'name': None, 'title': None, 'description': None, 'parameters': None, 'base': None, 'translation': None, 'composition': None, 'hidden': None, 'docs': None, 'category': None, 'single': None, 'opencl': None, 'structure_factor': None, 'have_Fq': False, 'radius_effective_modes': None, 'source': None, 'c_code': None, 'valid': None, 'form_volume': None, 'shell_volume': None, 'radius_effective': None, 'Iq': None, 'Iqxy': None, 'Iqac': None, 'Iqabc': None, 'Imagnetic': None, 'profile': None, 'profile_axes': None, 'sesans': None, 'random': None, 'lineno': None, 'tests': None, '__init__': <function ModelInfo.__init__>, 'get_hidden_parameters': <function ModelInfo.get_hidden_parameters>, '__dict__': <attribute '__dict__' of 'ModelInfo' objects>, '__weakref__': <attribute '__weakref__' of 'ModelInfo' objects>, '__annotations__': {'filename': 'Optional[str]', 'basefile': 'Optional[str]', 'id': 'str', 'name': 'str', 'title': 'str', 'description': 'str', 'parameters': 'ParameterTable', 'base': 'ParameterTable', 'translation': 'Optional[str]', 'composition': 'Optional[Tuple[str, List[ModelInfo]]]', 'hidden': 'Optional[Callable[[int], Set[str]]]', 'docs': 'str', 'category': 'Optional[str]', 'single': 'bool', 'opencl': 'bool', 'structure_factor': 'bool', 'radius_effective_modes': 'List[str]', 'source': 'List[str]', 'c_code': 'Optional[str]', 'valid': 'str', 'form_volume': 'Union[None, str, Callable[[np.ndarray], float]]', 'shell_volume': 'Union[None, str, Callable[[np.ndarray], 
float]]', 'radius_effective': 'Union[None, Callable[[int, np.ndarray], float]]', 'Iq': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqxy': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqac': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqabc': 'Union[None, str, Callable[[...], np.ndarray]]', 'Imagnetic': 'Union[None, str, Callable[[...], np.ndarray]]', 'profile': 'Optional[Callable[[np.ndarray], None]]', 'profile_axes': 'Tuple[str, str]', 'sesans': 'Optional[Callable[[np.ndarray], np.ndarray]]', 'random': 'Optional[Callable[[], Dict[str, float]]]', 'lineno': 'Dict[str, int]', 'tests': 'List[TestCondition]'}})¶
- __doc__ = "\n Interpret the model definition file, categorizing the parameters.\n\n The module can be loaded with a normal python import statement if you\n know which module you need, or with __import__('sasmodels.model.'+name)\n if the name is in a string.\n\n The structure should be mostly static, other than the delayed definition\n of *Iq*, *Iqac* and *Iqabc* if they need to be defined.\n "¶
- __module__ = 'sasmodels.modelinfo'¶
- __weakref__¶
list of weak references to the object (if defined)
- base: ParameterTable = None¶
For reparameterized systems, base is the base parameter table. For normal systems it is simply a copy of parameters.
- basefile: str | None = None¶
Base file is usually filename, but not when a model has been reparameterized, in which case it is the file containing the original model definition. This is needed to signal an additional dependency for the model time stamp, and so that the compiler reports correct file for syntax errors.
- c_code: str | None = None¶
inline source code, added after all elements of source
- category: str | None = None¶
Location of the model description in the documentation. This takes the form of “section” or “section:subsection”. So for example, porod uses category=”shape-independent” so it is in the Shape-Independent Functions section whereas capped_cylinder uses: category=”shape:cylinder”, which puts it in the Cylinder Functions section.
- composition: Tuple[str, List[ModelInfo]] | None = None¶
Composition is None if this is an independent model, or it is a tuple with composition type (‘product’ or ‘mixture’) and a list of
ModelInfo
blocks for the composed objects. This allows us to rebuild a complete mixture or product model from the info block. composition is not given in the model definition file, but instead arises when the model is constructed using names such as sphere*hardsphere or cylinder+sphere.
- description: str = None¶
Long description of the model.
- docs: str = None¶
Doc string from the top of the model file. This should be formatted using ReStructuredText format, with latex markup in “.. math” environments, or in dollar signs. This will be automatically extracted to a .rst file by
generate.make_doc()
, then converted to HTML or PDF by Sphinx.
- filename: str | None = None¶
Full path to the file defining the kernel, if any.
- form_volume: None | str | Callable[[np.ndarray], float] = None¶
Returns the form volume for python-based models. Form volume is needed for volume normalization in the polydispersity integral. If no parameters are volume parameters, then form volume is not needed. For C-based models, (with
source
defined, or withIq
defined using a string containing C code), form_volume must also be C code, either defined as a string, or in the sources.
Returns the set of hidden parameters for the model. control is the value of the control parameter. Note that multiplicity models have an implicit control parameter, which is the parameter that controls the multiplicity.
- have_Fq = False¶
True if the model defines an Fq function with signature
void Fq(double q, double *F1, double *F2, ...)
Different variants require different parameters. In order to show just the parameters needed for the variant selected, you should provide a function hidden(control) -> set([‘a’, ‘b’, …]) indicating which parameters need to be hidden. For multiplicity models, you need to use the complete name of the parameter, including its number. So for example, if variant “a” uses only sld1 and sld2, then sld3, sld4 and sld5 of multiplicity parameter sld[5] should be in the hidden set.
- id: str = None¶
Id of the kernel used to load it from the filesystem.
- lineno: Dict[str, int] = None¶
Line numbers for symbols defining C code
- name: str = None¶
Display name of the model, which defaults to the model id but with capitalization of the parts so for example core_shell defaults to “Core Shell”.
- opencl: bool = None¶
True if the model can be run as an opencl model. If for some reason the model cannot be run in opencl (e.g., because the model passes functions by reference), then set this to false.
- parameters: ParameterTable = None¶
Model parameter table. Parameters are defined using a list of parameter definitions, each of which contains the parameter name, units, default value, limits, type and description. See
Parameter
for details on the individual parameters. The parameters are gathered into a
ParameterTable
, which provides various views into the parameter list.
- profile: Callable[[np.ndarray], None] | None = None¶
Returns a model profile curve x, y. If profile is defined, this curve will appear in response to the Show button in SasView. Use
profile_axes
to set the axis labels. Note that y values will be scaled by 1e6 before plotting.
- profile_axes: Tuple[str, str] = None¶
Axis labels for the
profile
plot. The default is [‘x’, ‘y’]. Only the x component is used for now.
- radius_effective: None | Callable[[int, np.ndarray], float] = None¶
Computes the effective radius of the shape given the volume parameters. Only needed for models defined in python that can be used in the monodisperse approximation for non-dilute solutions, P@S. The first argument is the integer effective radius mode, with default 0.
- radius_effective_modes: List[str] = None¶
List of options for computing the effective radius of the shape, or None if the model is not usable as a form factor model.
- random: Callable[[], Dict[str, float]] | None = None¶
Returns a random parameter set for the model
- sesans: Callable[[np.ndarray], np.ndarray] | None = None¶
Returns sesans(z, a, b, …) for models which can directly compute the SESANS correlation function. Note: not currently implemented.
- shell_volume: None | str | Callable[[np.ndarray], float] = None¶
Returns the shell volume for python-based models. Form volume and shell volume are needed for volume normalization in the polydispersity integral and for structure interactions for hollow shapes. If no parameters are volume parameters, then shell volume is not needed. For C-based models (with
source
defined, or with Iq
defined using a string containing C code), shell_volume must also be C code, either defined as a string or in the sources.
- single: bool = None¶
True if the model can be computed accurately with single precision. This is True by default, but models such as bcc_paracrystal set it to False because they require double precision calculations.
- source: List[str] = None¶
List of C source files used to define the model. The source files should define the Iq function, and possibly Iqac or Iqabc if the model defines orientation parameters. Files containing the most basic functions must appear first in the list, followed by the files that use those functions.
- structure_factor: bool = None¶
True if the model is a structure factor used to model the interaction between form factor models. This will default to False if it is not provided in the file.
- tests: List[TestCondition] = None¶
The set of tests that must pass. The format of the tests is described in
model_test
.
- title: str = None¶
Short description of the model.
- translation: str | None = None¶
Parameter translation code to convert the parameter table from the caller to the base table used to evaluate the model.
- valid: str = None¶
Expression which evaluates to True if the input parameters are valid and the model can be computed, or False otherwise. Invalid parameter sets will not be included in the weighted \(I(Q)\) calculation or its volume normalization. Use C syntax for the expressions, with || for or, && for and, and ! for not. Any non-magnetic parameter can be used.
- sasmodels.core.build_model(model_info: ModelInfo, dtype: str | None = None, platform: str = 'ocl') KernelModel [source]¶
Prepare the model for the default execution platform.
This will return an OpenCL model, a DLL model or a python model depending on the model and the computing platform.
model_info is the model definition structure returned from
load_model_info()
. dtype indicates whether the model should use single or double precision for the calculation. Choices are ‘single’, ‘double’, ‘quad’, ‘half’, or ‘fast’. If dtype ends with ‘!’, then force the use of the DLL rather than OpenCL for the calculation.
platform should be “dll” to force the dll to be used for C models, otherwise it uses the default “ocl”.
- sasmodels.core.glob(pathname, *, recursive=False)[source]¶
Return a list of paths matching a pathname pattern.
The pattern may contain simple shell-style wildcards a la fnmatch. However, unlike fnmatch, filenames starting with a dot are special cases that are not matched by ‘*’ and ‘?’ patterns.
If recursive is true, the pattern ‘**’ will match any files and zero or more directories and subdirectories.
- sasmodels.core.joinpath(path, *paths)¶
- sasmodels.core.list_models(kind: str | None = None) List[str] [source]¶
Return the list of available models on the model path.
kind can be one of the following:
all: all models
py: python models only
c: c models only
single: c models which support single precision
double: c models which require double precision
opencl: c models which run in opencl
dll: c models which do not run in opencl
1d: models without orientation
2d: models with orientation
magnetic: models supporting magnetic sld
nonmagnetic: models without magnetic parameters
For multiple conditions, combine with plus. For example, c+single+2d would return all oriented models implemented in C which can be computed accurately with single precision arithmetic.
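The plus-combination rule can be read as an intersection of filters. A minimal sketch (the model records and tag sets below are hypothetical stand-ins, not the real registry):

```python
def list_models(models, kind=None):
    # models: dict mapping model name -> set of attribute tags,
    # e.g. {"cylinder": {"c", "single", "2d"}, ...}  (hypothetical)
    if kind in (None, "all"):
        return sorted(models)
    wanted = set(kind.split("+"))   # "c+single+2d" -> {"c","single","2d"}
    return sorted(name for name, tags in models.items() if wanted <= tags)

models = {
    "cylinder": {"c", "single", "2d"},
    "sphere": {"c", "single", "1d"},
    "pearl_necklace": {"py", "1d"},
}
```

Here `wanted <= tags` is a subset test, so every condition in the plus-separated list must hold for a model to be included.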
- sasmodels.core.list_models_main() int [source]¶
Run list_models as a main program. See
list_models()
for the kinds of models that can be requested on the command line.
- sasmodels.core.load_model(model_name: str, dtype: str | None = None, platform: str = 'ocl') KernelModel [source]¶
Load model info and build model.
model_name is the name of the model, or perhaps a model expression such as sphere*hardsphere or sphere+cylinder.
dtype and platform are given by
build_model()
.
- sasmodels.core.load_model_info(model_string: str) ModelInfo [source]¶
Load a model definition given the model name.
model_string is the name of the model, or perhaps a model expression such as sphere*cylinder or sphere+cylinder. Use ‘@’ for a structure factor product, e.g. sphere@hardsphere. Custom models can be specified by prefixing the model name with ‘custom.’, e.g. ‘custom.MyModel+sphere’.
This returns a handle to the module defining the model. This can be used with functions in generate to build the docs or extract model info.
- sasmodels.core.merge_deps(old, new)[source]¶
Merge two dependency lists. The lists are partially ordered, with all dependents coming after the items they depend on, but otherwise order doesn’t matter. The merged list preserves the partial ordering. So if old and new both include the item “c”, then all items that come before “c” in old and new will come before “c” in the result, and all items that come after “c” in old and new will come after “c” in the result.
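One way to realize this merge (a sketch, not necessarily the actual sasmodels implementation): walk new, and whenever a shared item is reached, first flush everything in old up to and including that item.

```python
def merge_deps(old, new):
    # Merge two partially ordered dependency lists.  Shared items act
    # as anchors: items before a shared item in either input stay
    # before it in the output.
    result, i = [], 0            # i scans the unemitted tail of old
    for item in new:
        if item in old[i:]:      # shared anchor: flush old up to it
            j = old.index(item, i)
            result.extend(old[i:j + 1])
            i = j + 1
        else:
            result.append(item)
    result.extend(old[i:])       # whatever remains of old
    return result
```

For example, merging ['a', 'c', 'd'] with ['b', 'c', 'e'] yields ['b', 'a', 'c', 'e', 'd']: everything before the shared 'c' in either list stays before 'c' in the result.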
- sasmodels.core.parse_dtype(model_info: ModelInfo, dtype: str | None = None, platform: str | None = None) Tuple[dtype, bool, str] [source]¶
Interpret dtype string, returning np.dtype, fast flag and platform.
Possible types include ‘half’, ‘single’, ‘double’ and ‘quad’. If the type is ‘fast’, then this is equivalent to dtype ‘single’ but using fast native functions rather than those with the precision level guaranteed by the OpenCL standard. ‘default’ will choose the appropriate default for the model and platform.
Platform preference can be specified (“ocl”, “cuda”, “dll”), with the default being OpenCL or CUDA if available, otherwise DLL. If the dtype name ends with ‘!’ then platform is forced to be DLL rather than GPU. The default platform is set by the environment variable SAS_OPENCL, SAS_OPENCL=driver:device for OpenCL, SAS_OPENCL=cuda:device for CUDA or SAS_OPENCL=none for DLL.
This routine ignores the preferences within the model definition. This is by design. It allows us to test models in single precision even when we have flagged them as requiring double precision so we can easily check the performance on different platforms without having to change the model definition.
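The dtype-string conventions can be sketched as follows (a simplified, hypothetical stand-in for parse_dtype that ignores model preferences and platform detection):

```python
def parse_dtype_string(dtype):
    # Returns (numpy dtype name, fast flag, force-dll flag) for the
    # dtype strings described above.  Simplified sketch only.
    force_dll = dtype.endswith("!")
    dtype = dtype.rstrip("!")
    fast = dtype == "fast"
    names = {"half": "float16", "single": "float32", "fast": "float32",
             "double": "float64", "quad": "float128"}
    return names[dtype], fast, force_dll
```

So ‘fast’ maps to single precision with the fast flag set, and a trailing ‘!’ only affects platform selection, not precision.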
- sasmodels.core.precompile_dlls(path: str, dtype: str = 'double') List[str] [source]¶
Precompile the dlls for all builtin models, returning a list of dll paths.
path is the directory in which to save the dlls. It will be created if it does not already exist.
This can be used when building the Windows distribution of sasmodels, which may be missing the OpenCL driver and the dll compiler.
- sasmodels.core.reparameterize(base, parameters, translation, filename=None, title=None, insert_after=None, docs=None, name=None, source=None)[source]¶
Reparameterize an existing model.
base is the original modelinfo. This cannot be a reparameterized model; only one level of reparameterization is supported.
parameters are the new parameter definitions that will be included in the model info.
translation is a string with each line containing var = expr. The variable var can be a new intermediate value, or it can be a parameter from the base model that will be replaced by the expression. The expression expr can be any C99 expression, including C-style if-expressions condition ? value1 : value2. Expressions can use any new or existing parameter that is not being replaced, including intermediate values defined on earlier lines. Parameters can only be assigned once, never updated. C99 math functions are available, as well as any functions defined in the base model or included in source (see below).
filename is the filename for the replacement model. This is usually __file__, giving the path to the model file, but it could also be a nominal filename for translations defined on-the-fly.
title is the model title, which defaults to base.title plus “ (reparameterized)”.
insert_after controls parameter placement. By default, the new parameters replace the old parameters in their original position. Instead, you can provide a dictionary {‘par’: ‘newpar1,newpar2’} indicating that new parameters named newpar1 and newpar2 should be included in the table after the existing parameter par, or at the beginning if par is the empty string.
docs contains the doc string for the translated model, which by default references the base model and gives the translation text.
name is the model name (default =
"constrained_" + base.name
).
source is a list of any additional C source files that should be included to define functions and constants used in the translation expressions. These will be included after all sources for the base model. Sources are only included once, even if they are listed in both places, so feel free to list all dependencies for the helper functions, such as “lib/polevl.c”.
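As an illustration of the translation syntax, a hypothetical reparameterization might drive a sphere by volume instead of radius (the parameter names here are assumptions, not taken from any actual model):

```python
import math

# Hypothetical translation block: replace the base model's radius
# parameter with an expression in a new volume parameter.
translation = """
    radius = cbrt(volume / (4.0/3.0 * M_PI))
"""

# The same expression evaluated in python for volume = 10000:
volume = 10000.0
radius = (volume / (4.0 / 3.0 * math.pi)) ** (1.0 / 3.0)
```

Here cbrt and M_PI are the C99 math function and constant available inside translation expressions.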
sasmodels.data module¶
SAS data representations.
Plotting functions for data sets:
plot_data()
plots the data file.
plot_theory()
plots a calculated result from the model.
Wrappers for the sasview data loader and data manipulations:
load_data()
loads a sasview data file.
set_beam_stop()
masks the beam stop from the data.
set_half()
selects the right or left half of the data, which can be useful for shear measurements which have not been properly corrected for path length and reflections.
set_top()
cuts the top part off the data.
Empty data sets for evaluating models without data:
empty_data1D()
creates an empty dataset, which is useful for plotting a theory function before the data is measured.
empty_data2D()
creates an empty 2D dataset.
Note that the empty datasets use a minimal representation of the SasView objects so that models can be run without SasView on the path. You could also use these for your own data loader.
- class sasmodels.data.Data1D(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None)[source]¶
Bases:
object
1D data object.
Note that this definition matches the attributes from sasview, with some generic 1D data vectors and some SAS specific definitions. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
x, dx: \(q\) vector and gaussian resolution
y, dy: \(I(q)\) vector and measurement uncertainty
mask: values to include in plotting/analysis
dxl: slit widths for slit smeared data, with dx ignored
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
- __annotations__ = {}¶
- __init__(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None) None [source]¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.data.Data2D(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None)[source]¶
Bases:
object
2D data object.
Note that this definition matches the attributes from sasview. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
qx_data, dqx_data: \(q_x\) matrix and gaussian resolution
qy_data, dqy_data: \(q_y\) matrix and gaussian resolution
data, err_data: \(I(q)\) matrix and measurement uncertainty
mask: values to exclude from plotting/analysis
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
_zaxis, _zunit: label and units for the y axis
Q_unit, I_unit: units for Q and intensity
x_bins, y_bins: grid steps in x and y directions
- __annotations__ = {}¶
- __init__(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None) None [source]¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.data.Detector(pixel_size: Tuple[float, float] = (None, None), distance: float | None = None)[source]¶
Bases:
object
Detector attributes.
- __init__(pixel_size: Tuple[float, float] = (None, None), distance: float | None = None) None [source]¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.data.Sample[source]¶
Bases:
object
Sample attributes.
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.data.SesansData(**kw)[source]¶
Bases:
Data1D
SESANS data object.
This is just
Data1D
with a wavelength parameter. x is spin echo length and y is polarization (P/P0).
- __module__ = 'sasmodels.data'¶
- isSesans = True¶
- class sasmodels.data.Source[source]¶
Bases:
object
Beam attributes.
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.data.Vector(x: float | None = None, y: float | None = None, z: float | None = None)[source]¶
Bases:
object
3-space vector of x, y, z
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object (if defined)
- sasmodels.data._build_matrix(self, plottable)[source]¶
Build a matrix for a 2d plot from a vector. Returns a matrix (image) with approximately square binning. Requirement: needs 1d array formats of self.data, self.qx_data, and self.qy_data, where each corresponds to z, x, or y axis values.
- sasmodels.data._fillup_pixels(image=None, weights=None)[source]¶
Fill z values of the empty cells of the 2d image matrix with the average over up to next-nearest-neighbor points.
- Parameters:
image – (2d matrix with some zi = None)
- Returns:
image (2d array )
- TODO:
Find better way to do for-loop below
- sasmodels.data._get_bins(self)[source]¶
Get bins: set x_bins and y_bins into self, 1d arrays of the index with approximately square binning. Requirement: needs 1d array formats of self.qx_data and self.qy_data, where each corresponds to x or y axis values.
- sasmodels.data.empty_data1D(q: ndarray, resolution: float = 0.0, L: float = 0.0, dL: float = 0.0) Data1D [source]¶
Create empty 1D data using the given q as the x value.
rms resolution \(\Delta q/q\) defaults to 0%. If wavelength L and rms wavelength divergence dL are defined, then resolution defines rms \(\Delta \theta/\theta\) for the lowest q, with \(\theta\) derived from \(q = 4\pi/\lambda \sin(\theta)\).
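In the simple case (no wavelength spread), the resolution is a constant fractional width, so the gaussian resolution vector can be sketched as:

```python
import numpy as np

# Illustrative q range (values in 1/Ang); the range is an assumption.
q = np.logspace(-3, -1, 200)
resolution = 0.05            # 5% rms dq/q
dq = resolution * q          # per-point gaussian resolution width
```

This is the dq that would accompany q in the resulting Data1D object when only a fractional resolution is given.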
- sasmodels.data.empty_data2D(qx: ndarray, qy: ndarray | None = None, resolution: float = 0.0) Data2D [source]¶
Create empty 2D data using the given mesh.
If qy is missing, create a square mesh with qy=qx.
resolution dq/q defaults to 5%.
- sasmodels.data.load_data(filename: str, index: int = 0) Data1D | Data2D | SesansData [source]¶
Load data using a sasview loader.
- sasmodels.data.plot_data(data: Data1D | Data2D | SesansData, view: str | None = None, limits: Tuple[float, float] | None = None) None [source]¶
Plot data loaded by the sasview loader.
data is a sasview data object, either 1D, 2D or SESANS.
view is log or linear.
limits sets the intensity limits on the plot; if None then the limits are inferred from the data.
- sasmodels.data.plot_theory(data: Data1D | Data2D | SesansData, theory: ndarray | None, resid: ndarray | None = None, view: str | None = None, use_data: bool = True, limits: Tuple[float, float] | None = None, Iq_calc: ndarray | None = None) None [source]¶
Plot theory calculation.
data is needed to define the graph properties such as labels and units, and to define the data mask.
theory is a matrix of the same shape as the data.
view is log or linear
use_data is True if the data should be plotted as well as the theory.
limits sets the intensity limits on the plot; if None then the limits are inferred from the data.
Iq_calc is the raw theory values without resolution smearing
- sasmodels.data.protect(func: Callable) Callable [source]¶
Decorator to wrap calls in an exception trapper which prints the exception and continues. Keyboard interrupts are ignored.
- sasmodels.data.set_beam_stop(data: Data1D | Data2D | SesansData, radius: float, outer: float | None = None) None [source]¶
Add a beam stop of the given radius. If outer, make an annulus.
- sasmodels.data.set_half(data: Data1D | Data2D | SesansData, half: str) None [source]¶
Select half of the data, either “right” or “left”.
- sasmodels.data.set_top(data: Data1D | Data2D | SesansData, cutoff: float) None [source]¶
Chop the top off the data, above cutoff.
sasmodels.details module¶
Kernel Call Details¶
When calling sas computational kernels with polydispersity there are a
number of details that need to be sent to the caller. This includes the
list of polydisperse parameters, the number of points in the polydispersity
weight distribution, and which parameter is the “theta” parameter for
polar coordinate integration. The
CallDetails
object maintains this data. Use
make_details()
to build a details object which can be passed to one of the computational kernels.
- class sasmodels.details.CallDetails(model_info: ModelInfo)[source]¶
Bases:
object
Manage the polydispersity information for the kernel call.
Conceptually, a polydispersity calculation is an integral over a mesh in n-D space where n is the number of polydisperse parameters. In order to keep the program responsive, and not crash the GPU, only a portion of the mesh is computed at a time. Meshes with a large number of points will therefore require many calls to the polydispersity loop. Restarting a nested loop in the middle requires that the indices of the individual mesh dimensions can be computed for the current loop location. This is handled by the pd_stride vector, with n//stride giving the loop index and n%stride giving the position in the sub loops.
One of the parameters may be the latitude. When integrating in polar coordinates, the total circumference decreases as latitude varies, from \(2\pi r\) at the equator to 0 at the pole, and the weight associated with a range of latitude values needs to be scaled by this circumference. This scale factor needs to be updated each time the theta value changes. theta_par indicates which of the values in the parameter vector is the latitude parameter, or -1 if there is no latitude parameter in the model. In practice, the normalization term cancels if the latitude is not a polydisperse parameter.
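The stride arithmetic can be checked with a small numpy sketch (using a hypothetical 3x4x5 polydispersity mesh):

```python
import numpy as np

pd_length = np.array([3, 4, 5])   # points per polydisperse parameter
# stride[k] is the product of the lengths of the faster-varying dims
pd_stride = np.cumprod(np.hstack(([1], pd_length[:-1])))   # [1, 3, 12]

def mesh_indices(n):
    # Recover the per-dimension loop indices for flat position n, as
    # described above: n // stride gives the loop index, reduced
    # modulo the dimension length.
    return (n // pd_stride) % pd_length

idx = mesh_indices(37)
# the recovered indices reconstruct the flat position
assert int(np.dot(idx, pd_stride)) == 37
```

This is what lets a nested polydispersity loop restart in the middle of the mesh after a partial kernel call.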
- __module__ = 'sasmodels.details'¶
- __weakref__¶
list of weak references to the object (if defined)
- property num_active¶
Number of active polydispersity loops
- property num_eval¶
Total size of the pd mesh
- property num_weights¶
Total length of all the weight vectors
- parts: List[CallDetails] = None¶
- property pd_length¶
Number of weights for each polydisperse parameter
- property pd_offset¶
Offsets for the individual weight vectors in the set of weights
- property pd_par¶
List of polydisperse parameters
- property pd_stride¶
Stride in the pd mesh for each pd dimension
- property theta_par¶
Location of the theta parameter in the parameter vector
- sasmodels.details.convert_magnetism(parameters: ParameterTable, values: Sequence[ndarray]) bool [source]¶
Convert magnetism values from polar to rectangular coordinates.
Returns True if any magnetism is present.
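The polar-to-rectangular conversion can be sketched as follows. Note this uses the standard spherical convention with theta measured from the z axis; the exact angle convention used by sasmodels is defined in its source, so treat this as illustrative only.

```python
import numpy as np

def polar_to_rect(M0, theta_deg, phi_deg):
    # Convert a polar magnetic vector (magnitude plus two angles, in
    # degrees) to rectangular components mx, my, mz.  Standard
    # spherical convention; sasmodels' own convention may differ.
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    mx = M0 * np.sin(theta) * np.cos(phi)
    my = M0 * np.sin(theta) * np.sin(phi)
    mz = M0 * np.cos(theta)
    return mx, my, mz
```

The magnitude is preserved by the conversion, which is what allows "any magnetism present" to be tested on M0 before converting.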
- sasmodels.details.correct_theta_weights(parameters: ParameterTable, dispersity: Sequence[ndarray], weights: Sequence[ndarray]) Sequence[ndarray] [source]¶
Deprecated: theta weights will be computed in the kernel wrapper if they are needed.
If there is a theta parameter, update the weights of that parameter so that the cosine weighting required for polar integration is preserved.
Avoid evaluation strictly at the pole, which would otherwise send the weight to zero. This is probably not a problem in practice (if dispersity is +/- 90, then you probably should be using a 1-D model of the circular average).
Note: scale and background parameters are not included in the tuples for dispersity and weights, so the index is parameters.theta_offset, not parameters.theta_offset+2.
Returns updated weights vectors
- sasmodels.details.dispersion_mesh(model_info: ModelInfo, mesh: List[Tuple[float, ndarray, ndarray]]) Tuple[List[ndarray], List[ndarray]] [source]¶
Create a mesh grid of dispersion parameters and weights.
mesh is a list of (value, dispersity, weights), where the values are the individual parameter values, and (dispersity, weights) is the distribution of parameter values.
Only the volume parameters should be included in this list. Orientation parameters do not affect the calculation of effective radius or volume ratio. This is convenient since it avoids the distinction between value and dispersity that is present in orientation parameters but not shape parameters.
Returns [p1, p2, ...], w where pj is a vector of values for parameter j and w is a vector containing the products of the weights for each parameter set in the vector.
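The grid expansion can be sketched with numpy's meshgrid (a simplified, hypothetical stand-in for dispersion_mesh that ignores the model_info handling):

```python
import numpy as np

def expand_mesh(mesh):
    # mesh: list of (value, dispersity, weights) per volume parameter.
    # Returns flattened parameter vectors and the per-point weight
    # products, as described above.  Simplified sketch only.
    dispersity = [d for _, d, _ in mesh]
    weights = [w for _, _, w in mesh]
    pars = [g.flatten() for g in np.meshgrid(*dispersity, indexing="ij")]
    wgrids = [g.flatten() for g in np.meshgrid(*weights, indexing="ij")]
    w = np.prod(np.vstack(wgrids), axis=0)
    return pars, w

# Two volume parameters: one with two points, one monodisperse.
mesh = [(1.5, np.array([1.0, 2.0]), np.array([0.5, 0.5])),
        (3.0, np.array([3.0]), np.array([1.0]))]
pars, w = expand_mesh(mesh)
```

Each pars[j] has one entry per point in the full outer-product grid, and w holds the matching weight product for that grid point.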
- sasmodels.details.make_details(model_info: ModelInfo, length: ndarray, offset: ndarray, num_weights: int) CallDetails [source]¶
Return a
CallDetails
object for a polydisperse calculation of the model defined by model_info. Polydispersity is defined by the length of the polydispersity distribution for each parameter and the offset of the distribution in the polydispersity array. Monodisperse parameters should use a polydispersity length of one with weight 1.0. num_weights is the total length of the polydispersity array.
- sasmodels.details.make_kernel_args(kernel: Kernel, mesh: Tuple[List[np.ndarray], List[np.ndarray]]) Tuple[CallDetails, np.ndarray, bool] [source]¶
Converts (value, dispersity, weight) for each parameter into kernel pars.
Returns a CallDetails object indicating the polydispersity, a data object containing the different values, and the magnetic flag indicating whether any magnetic magnitudes are non-zero. Magnetic vectors (M0, phi, theta) are converted to rectangular coordinates (mx, my, mz).
sasmodels.direct_model module¶
Class interface to the model calculator.
Calling a model is somewhat non-trivial since the functions called depend on the data type. For 1D data the Iq kernel needs to be called, for 2D data the Iqxy kernel needs to be called, and for SESANS data the Iq kernel needs to be called followed by a Hankel transform. Before the kernel is called an appropriate q calculation vector needs to be constructed. This is not the simple q vector where you have measured the data since the resolution calculation will require values beyond the range of the measured data. After the calculation the resolution calculator must be called to return the predicted value for each measured data point.
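The data-type dispatch can be sketched as follows. The attribute names (lam for SESANS, qx_data for 2D) mirror the sasview data objects but are assumptions here; this is an illustration of the dispatch logic, not the actual implementation.

```python
from types import SimpleNamespace

def choose_kernel(data):
    # hedged sketch of the data-type dispatch described above
    if getattr(data, "lam", None) is not None:
        return "sesans"   # Iq kernel followed by a Hankel transform
    if getattr(data, "qx_data", None) is not None:
        return "Iqxy"     # 2D data
    return "Iq"           # 1D data

# a bare 1D data object falls through to the Iq kernel
kind = choose_kernel(SimpleNamespace(x=[0.001, 0.01]))
```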
DirectModel
is a callable object that takes parameter=value
keyword arguments and returns the appropriate theory values for the data.
DataMixin
does the real work of interpreting the data and calling
the model calculator. This is used by DirectModel
, which uses
direct parameter values and by bumps_model.Experiment
which wraps
the parameter values in boxes so that the user can set fitting ranges, etc.
on the individual parameters and send the model to the Bumps optimizers.
- class sasmodels.direct_model.DataMixin[source]¶
Bases:
object
DataMixin captures the common aspects of evaluating a SAS model for a particular data set, including calculating Iq and evaluating the resolution function. It is used in particular by
DirectModel
, which evaluates a SAS model with parameters given as keyword arguments to the calculator method, and by bumps_model.Experiment
, which wraps the model and data for use with the Bumps fitting engine. It is not currently used by sasview_model.SasviewModel
since this will require a number of changes to SasView before we can do it.
_interpret_data initializes the data structures necessary to manage the calculations. This sets attributes in the child class such as data_type and resolution.
_calc_theory evaluates the model at the given control values.
_set_data sets the intensity data in the data object, possibly with random noise added. This is useful for simulating a dataset with the results from _calc_theory.
- __annotations__ = {}¶
- __module__ = 'sasmodels.direct_model'¶
- __weakref__¶
list of weak references to the object (if defined)
- _interpret_data(data: Data1D | Data2D | SesansData, model: KernelModel) None [source]¶
- class sasmodels.direct_model.DirectModel(data: Data1D | Data2D | SesansData, model: KernelModel, cutoff: float = 1e-05)[source]¶
Bases:
DataMixin
Create a calculator object for a model.
data is 1D SAS, 2D SAS or SESANS data
model is a model calculator returned from
core.load_model()
cutoff is the polydispersity weight cutoff.
- __init__(data: Data1D | Data2D | SesansData, model: KernelModel, cutoff: float = 1e-05) None [source]¶
- __module__ = 'sasmodels.direct_model'¶
- sasmodels.direct_model._get_par_weights(parameter: Parameter, values: Dict[str, float], active: bool = True) Tuple[float, ndarray, ndarray] [source]¶
Generate the distribution for parameter name given the parameter values in pars.
Uses “name”, “name_pd”, “name_pd_type”, “name_pd_n”, “name_pd_sigma” from the pars dictionary for parameter value and parameter dispersion.
- sasmodels.direct_model._vol_pars(model_info: ModelInfo, values: Mapping[str, float]) Tuple[ndarray, ndarray] [source]¶
- sasmodels.direct_model.call_Fq(calculator: Kernel, pars: Mapping[str, float], cutoff: float = 0.0, mono: bool = False) ndarray [source]¶
Like
call_kernel()
, but returning F, F^2, R_eff, V_shell, V_form/V_shell. For solid objects V_shell is equal to V_form and the volume ratio is 1.
Use parameter radius_effective_mode to select the effective radius calculation to use amongst the radius_effective_modes list given in the model.
- sasmodels.direct_model.call_kernel(calculator: Kernel, pars: Mapping[str, float], cutoff: float = 0.0, mono: bool = False) ndarray [source]¶
Call kernel returned from model.make_kernel with parameters pars.
cutoff is the limiting value for the product of dispersion weights used to perform the multidimensional dispersion calculation more quickly at a slight cost to accuracy. The default value of cutoff=0 integrates over the entire dispersion cube. Using cutoff=1e-5 can be 50% faster, but with an error of about 1%, which is usually less than the measurement uncertainty.
mono is True if polydispersity should be disabled for all parameters.
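The cutoff pruning described above can be sketched with numpy for two independent dispersion weight vectors. This illustrates the weight-product test only, not the actual kernel loop:

```python
import numpy as np

# two normalized dispersion weight vectors (hypothetical values)
w1 = np.array([0.05, 0.9, 0.05])
w2 = np.array([0.2, 0.6, 0.2])
W = np.outer(w1, w2)      # products of dispersion weights over the 2D cube
cutoff = 0.05
keep = W > cutoff         # only these points would be sent to the kernel
coverage = W[keep].sum() / W.sum()   # fraction of total weight retained
```

With these numbers, 3 of the 9 points carry 90% of the total weight, which is why a small cutoff can trade a large speedup for a small accuracy loss.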
- sasmodels.direct_model.call_profile(model_info: ModelInfo, pars: Mapping[str, float] | None = None) Tuple[ndarray, ndarray, Tuple[str, str]] [source]¶
Returns the profile x, y, (xlabel, ylabel) representing the model.
- sasmodels.direct_model.get_mesh(model_info: ModelInfo, values: Dict[str, float], dim: str = '1d', mono: bool = False) List[Tuple[float, ndarray, ndarray]] [source]¶
Retrieve the dispersity mesh described by the parameter set.
Returns a list of (value, dispersity, weights) with one tuple for each parameter in the model call parameters. Inactive parameters return the default value with a weight of 1.0.
sasmodels.exception module¶
Utility to add annotations to python exceptions.
- sasmodels.exception.annotate_exception(msg, exc=None)[source]¶
Add an annotation to the current exception, which can then be forwarded to the caller using a bare “raise” statement to raise the annotated exception. If the exception exc is provided, then that exception is the one that is annotated, otherwise sys.exc_info is used.
Example:
>>> D = {}
>>> try:
...     print(D['hello'])
... except:
...     annotate_exception("while accessing 'D'")
...     raise
Traceback (most recent call last):
    ...
KeyError: "hello while accessing 'D'"
sasmodels.generate module¶
SAS model constructor.
Small angle scattering models are defined by a set of kernel functions:
Iq(q, p1, p2, …) returns the scattering at q for a form with particular dimensions averaged over all orientations.
Iqac(qab, qc, p1, p2, …) returns the scattering at qab, qc for a rotationally symmetric form with particular dimensions. qab, qc are determined from shape orientation and scattering angles. This call is used if the shape has orientation parameters theta and phi.
Iqabc(qa, qb, qc, p1, p2, …) returns the scattering at qa, qb, qc for a form with particular dimensions. qa, qb, qc are determined from shape orientation and scattering angles. This call is used if the shape has orientation parameters theta, phi and psi.
Iqxy(qx, qy, p1, p2, …) returns the scattering at qx, qy. Use this to create an arbitrary 2D theory function, needed for q-dependent background functions and for models with non-uniform magnetism.
form_volume(p1, p2, …) returns the volume of the form with particular dimension, or 1.0 if no volume normalization is required.
shell_volume(p1, p2, …) returns the volume of the shell for forms which are hollow.
radius_effective(mode, p1, p2, …) returns the effective radius of the form with particular dimensions. Mode determines the type of effective radius returned, with mode=1 for equivalent volume.
These functions are defined in a kernel module .py script and an associated set of .c files. The model constructor will use them to create models with polydispersity across volume and orientation parameters, and provide scale and background parameters for each model.
C code should be stylized C-99 functions written for OpenCL. All functions need prototype declarations even if they are defined before they are used. Although OpenCL supports #include preprocessor directives, the list of includes should be given as part of the metadata in the kernel module definition. The included files should be listed using a path relative to the kernel module, or using "lib/file.c" if the file is one of the standard includes provided with the sasmodels source. The includes need to be listed in order so that functions are defined before they are used.
Floating point values should be declared as double. For single precision calculations, double will be replaced by float. The single precision conversion will also tag floating point constants with “f” to make them single precision constants. When using integral values in floating point expressions, they should be expressed as floating point values by including a decimal point. This includes 0., 1. and 2.
OpenCL has a sincos function which can improve performance when both the sin and cos values are needed for a particular argument. Since this function does not exist in C99, all use of sincos should be replaced by the macro SINCOS(value, sn, cn) where sn and cn are previously declared double variables. When compiled for systems without OpenCL, SINCOS will be replaced by sin and cos calls. If value is an expression, it will appear twice in this case; whether or not it will be evaluated twice depends on the quality of the compiler.
The kernel module must set variables defining the kernel meta data:
id is an implicit variable formed from the filename. It will be a valid python identifier, and will be used as the reference into the html documentation, with ‘_’ replaced by ‘-‘.
name is the model name as displayed to the user. If it is missing, it will be constructed from the id.
title is a short description of the model, suitable for a tool tip, or a one line model summary in a table of models.
description is an extended description of the model to be displayed while the model parameters are being edited.
parameters is the list of parameters. Parameters in the kernel functions must appear in the same order as they appear in the parameters list. Two additional parameters, scale and background are added to the beginning of the parameter list. They will show up in the documentation as model parameters, but they are never sent to the kernel functions. Note that effect_radius and volfraction must occur first in structure factor calculations.
category is the default category for the model. The category is two level structure, with the form “group:section”, indicating where in the manual the model will be located. Models are alphabetical within their section.
source is the list of C-99 source files that must be joined to create the OpenCL kernel functions. The files defining the functions need to be listed before the files which use the functions.
form_volume, Iq, Iqac, Iqabc are strings containing the C source code for the body of the volume, Iq, and Iqac functions respectively. These can also be defined in the last source file.
Iq, Iqac, Iqabc can instead be python functions defining the kernel. If they are marked as Iq.vectorized = True then the kernel is passed the entire q vector at once, otherwise it is passed one q value at a time. Vectorized kernels give a significant performance improvement.
valid is an expression that evaluates to True if the input parameters are valid (e.g., "bell_radius >= radius" for the barbell or capped cylinder models). The expression can call C functions, including those defined in your model file.
A modelinfo.ModelInfo
structure is constructed from the kernel meta
data and returned to the caller.
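The metadata above can be illustrated with a minimal kernel module. Everything concrete here (model name, C file names, parameter rows) is hypothetical; the parameter rows follow the [name, units, default, [min, max], type, description] layout:

```python
# minimal sketch of a kernel module; the model, parameters and C files
# below are hypothetical examples, not an actual sasmodels model
from numpy import inf

name = "my_sphere"
title = "Uniform sphere (illustrative)"
description = "Scattering from a sphere of uniform scattering length density."
category = "shape:sphere"

#   name, units, default, [min, max], type, description
parameters = [
    ["sld", "1e-6/Ang^2", 1.0, [-inf, inf], "sld", "Sphere scattering length density"],
    ["sld_solvent", "1e-6/Ang^2", 6.0, [-inf, inf], "sld", "Solvent scattering length density"],
    ["radius", "Ang", 50.0, [0, inf], "volume", "Sphere radius"],
]

# C sources listed so functions are defined before use
source = ["lib/sas_3j1x_x.c", "my_sphere.c"]
```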
Valid inputs should be identified by the valid expression. Particularly with polydispersity, there are some sets of shape parameters which lead to nonsensical forms, such as a capped cylinder where the cap radius is smaller than the cylinder radius. The polydispersity calculation will ignore these points, effectively chopping the parameter weight distributions at the boundary of the infeasible region. The resulting scattering will be set to background, even for models with no polydispersity. If the valid expression misses some parameter combinations and they reach the kernel, the kernel should probably return NaN rather than zero. Even if the volume also evaluates to zero for these parameters, the distribution weights are still accumulated and the average volume calculation will be slightly off.
The doc string at the start of the kernel module will be used to construct the model documentation web pages. Embedded figures should appear in the subdirectory “img” beside the model definition, and tagged with the kernel module name to avoid collision with other models. Some file systems are case-sensitive, so only use lower case characters for file names and extensions.
Code follows the C99 standard with the following extensions and conditions:
M_PI_180 = pi/180
M_4PI_3 = 4pi/3
square(x) = x*x
cube(x) = x*x*x
sas_sinx_x(x) = sin(x)/x, with sin(0)/0 -> 1
all double precision constants must include the decimal point
all double declarations may be converted to half, float, or long double
FLOAT_SIZE is the number of bytes in the converted variables
load_kernel_module()
loads the model definition file and
modelinfo.make_model_info()
parses it. make_source()
converts C-based model definitions to C source code, including the
polydispersity integral. model_sources()
returns the list of
source files the model depends on, and ocl_timestamp()
returns
the latest time stamp amongst the source files (so you can check if
the model needs to be rebuilt).
The function make_doc()
extracts the doc string and adds the
parameter table to the top. make_figure in sasmodels/doc/genmodel
creates the default figure for the model. [These two sets of code
should migrate into docs.py so docs can be updated in one place].
- sasmodels.generate._add_source(source, code, path, lineno=1)[source]¶
Add a file to the list of source code chunks, tagged with path and line.
- sasmodels.generate._build_translation(model_info, table_id='_v', var_prefix='_var_')[source]¶
Interpret parameter translation block, if any.
model_info contains the parameter table and any translation definition for converting between model parameters and the base calculation model.
table_id is the internal label used for the parameter call table. It must be of the form “_table” matching whatever table variable name that appears in the macros such as CALL_VOLUME() and CALL_IQ().
var_prefix is a tag to attach to intermediate variables to avoid collision with variables used inside kernel_iq.
Returns:
subs = {name: expr, …} parameter substitution table for calling into the kernel function
translation = “#define TRANSLATION_VARS(_v) _var_name = expr ….”
validity = “#define VALID(_v) …”
The returned subs is used to generate the substitutions for the CALL_VOLUME etc. parameters, via
_call_pars()
. The returned translation and validity macros need to be included inside the generated model files. They are the same for all variants (1D, 2D, magnetic), so they can be defined once alongside the parameter table even though they are expanded independently in each variant.
- sasmodels.generate._build_translation_vars(table_id, variables)[source]¶
Build TRANSLATION_VARS macro for C which builds intermediate values.
E.g.,
#define TRANSLATION_VARS(_v) \
    const double _temporary_Re = cbrt(_v.volume/_v.eccentricity/M_4PI_3)
- sasmodels.generate._build_validity_check(eq, table_id, subs)[source]¶
Substitute parameter expressions into validity test, returning the VALID(_table) macro.
- sasmodels.generate._call_pars(pars: str, subs: List[Parameter]) List[str] [source]¶
Return a list of prefix+parameter from parameter items.
pars is the list of parameters from the base model.
subs contains the translation equations with references to parameters from the new parameter table. If there is no translation, then subs is just a list of references into the base table.
- sasmodels.generate._convert_section_titles_to_boldface(lines: Sequence[str]) Iterator[str] [source]¶
Do the actual work of identifying and converting section headings.
- sasmodels.generate._convert_type(source: str, type_name: str, constant_flag: str) str [source]¶
Replace ‘double’ with type_name in source, tagging floating point constants with constant_flag.
- sasmodels.generate._fix_tgmath_int(source: str) str [source]¶
Replace f(integer) with f(integer.) for sin, cos, pow, etc.
OS X OpenCL complains that it can’t resolve the type generic calls to the standard math functions when they are called with integer constants, but this does not happen with the Windows Intel driver for example. To avoid confusion on the matrix marketplace, automatically promote integers to floats if we recognize them in the source.
The specific functions we look for are:
trigonometric: sin, asin, sinh, asinh, etc., and atan2
exponential: exp, exp2, exp10, expm1, log, log2, log10, logp1
power: pow, pown, powr, sqrt, rsqrt, rootn
special: erf, erfc, tgamma
float: fabs, fmin, fmax
Note that we don’t convert the second argument of dual argument functions: atan2, fmax, fmin, pow, powr. This could potentially be a problem for pow(x, 2), but that case seems to work without change.
- sasmodels.generate._gen_fn(model_info: ModelInfo, name: str, pars: List[Parameter]) str [source]¶
Generate a function given pars and body.
Returns the following string:
double fn(double a, double b, ...); double fn(double a, double b, ...) { .... }
- sasmodels.generate._kernels(kernel: Dict[str, str], call_iq: str, clear_iq: str, call_iqxy: str, clear_iqxy: str, name: str) List[str] [source]¶
- sasmodels.generate._search(search_path: List[str], filename: str) str [source]¶
Find filename in search_path.
Raises ValueError if file does not exist.
- sasmodels.generate._split_translation(translation)[source]¶
Process the translation string, which is a sequence of assignments.
Blanks and comments (c-style and python-style) are stripped.
Conditional expressions should use C syntax (! || && ? :) not python.
- sasmodels.generate.contains_Fq(source: List[str]) bool [source]¶
Return True if C source defines “void Fq(“.
- sasmodels.generate.contains_shell_volume(source: List[str]) bool [source]¶
Return True if C source defines “double shell_volume(“.
- sasmodels.generate.convert_section_titles_to_boldface(s: str) str [source]¶
Use explicit bold-face rather than section headings so that the table of contents is not polluted with section names from the model documentation.
Sections are identified as the title line followed by a line of punctuation at least as long as the title line.
- sasmodels.generate.convert_type(source: str, dtype: dtype) str [source]¶
Convert code from double precision to the desired type.
Floating point constants are tagged with ‘f’ for single precision or ‘L’ for long double precision.
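A simplified sketch of the conversion (not the actual implementation, which handles more cases such as declarations, typedefs and long double):

```python
import re

def convert_type(source, type_name="float", flag="f"):
    # replace double declarations with the target type
    source = re.sub(r"\bdouble\b", type_name, source)
    # tag untagged floating-point constants: 2.0 -> 2.0f; the negative
    # lookahead skips constants that already carry a suffix
    return re.sub(r"(\d+\.\d*(?:[eE][+-]?\d+)?)(?![\w.])", r"\1" + flag, source)
```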
- sasmodels.generate.dll_timestamp(model_info: ModelInfo) int [source]¶
Return a timestamp for the model corresponding to the most recently changed file or dependency.
- sasmodels.generate.find_xy_mode(source: List[str]) bool [source]¶
Return the xy mode as qa, qac, qabc or qxy.
Note this is not a C parser, and so can be easily confused by non-standard syntax. Also, it will incorrectly identify the following as having 2D models:
/* double Iqac(qab, qc, ...) { ... fill this in later ... } */
If you want to comment out the function, use // on the front of the line:
/* // double Iqac(qab, qc, ...) { ... fill this in later ... } */
- sasmodels.generate.format_units(units: str) str [source]¶
Convert units into ReStructured Text format.
- sasmodels.generate.get_data_path(external_dir, target_file)[source]¶
Search for the target file relative to the installed application.
Search first in the location of the generate module in case we are running directly from the distribution. Search next to the python executable for windows installs. Search in the ../Resources directory next to the executable for Mac OS/X installs.
- sasmodels.generate.indent(s: str, depth: int) str [source]¶
Indent a string of text with depth additional spaces on each line.
- sasmodels.generate.kernel_name(model_info: ModelInfo, variant: str) str [source]¶
Name of the exported kernel symbol.
variant is “Iq”, “Iqxy” or “Imagnetic”.
- sasmodels.generate.load_kernel_module(model_name: str) module [source]¶
Return the kernel module named in model_name.
If the name ends in .py then load it as a custom model using
custom.__init__.load_custom_kernel_module()
, otherwise load it as a builtin from sasmodels.models.
- sasmodels.generate.load_template(filename: str) str [source]¶
Load template file from sasmodels resource directory.
- sasmodels.generate.make_doc(model_info: ModelInfo) str [source]¶
Return the documentation for the model.
- sasmodels.generate.make_html(model_info: ModelInfo) str [source]¶
Convert model docs directly to html.
- sasmodels.generate.make_partable(pars: List[Parameter]) str [source]¶
Generate the parameter table to include in the sphinx documentation.
- sasmodels.generate.make_source(model_info: ModelInfo) Dict[str, str] [source]¶
Generate the OpenCL/ctypes kernel from the module info.
Uses source files found in the given search path. Returns None if this is a pure python model, with no C source components.
- sasmodels.generate.model_sources(model_info: ModelInfo) List[str] [source]¶
Return a list of the sources file paths for the module.
- sasmodels.generate.ocl_timestamp(model_info: ModelInfo) int [source]¶
Return a timestamp for the model corresponding to the most recently changed file or dependency.
Note that this does not look at the time stamps for the OpenCL header information since that need not trigger a recompile of the DLL.
- sasmodels.generate.set_integration_size(info: ModelInfo, n: int) None [source]¶
Update the model definition, replacing the gaussian integration with a gaussian integration of a different size.
Note: this really ought to be a method in modelinfo, but that leads to import loops.
- sasmodels.generate.test_tag_float()[source]¶
Check that floating point constants are identified and tagged with ‘f’
sasmodels.gengauss module¶
Generate the Gauss-Legendre integration points and save them as a C file.
sasmodels.guyou module¶
Convert between latitude-longitude and Guyou map coordinates.
- sasmodels.guyou.ellipticFi(phi, psi, m)[source]¶
Returns F(phi+ipsi|m). See Abramowitz and Stegun, 17.4.11.
sasmodels.jitter module¶
Jitter Explorer¶
Application to explore orientation angle and angular dispersity.
From the command line:
# Show docs
python -m sasmodels.jitter --help
# Guyou projection jitter, uniform over 20 degree theta and 10 in phi
python -m sasmodels.jitter --projection=guyou --dist=uniform --jitter=20,10,0
From a jupyter cell:
import ipyvolume as ipv
from sasmodels import jitter
import importlib; importlib.reload(jitter)
jitter.set_plotter("ipv")
size = (10, 40, 100)
view = (20, 0, 0)
#size = (15, 15, 100)
#view = (60, 60, 0)
dview = (0, 0, 0)
#dview = (5, 5, 0)
#dview = (15, 180, 0)
#dview = (180, 15, 0)
projection = 'equirectangular'
#projection = 'azimuthal_equidistance'
#projection = 'guyou'
#projection = 'sinusoidal'
#projection = 'azimuthal_equal_area'
dist = 'uniform'
#dist = 'gaussian'
jitter.run(size=size, view=view, jitter=dview, dist=dist, projection=projection)
#filename = projection+('_theta' if dview[0] == 180 else '_phi' if dview[1] == 180 else '')
#ipv.savefig(filename+'.png')
- sasmodels.jitter.PLOT_ENGINE(calculator, draw_shape, size, view, jitter, dist, mesh, projection)¶
- class sasmodels.jitter.Quaternion(w, r)[source]¶
Bases:
object
Quaternion(w, r) = w + ir[0] + jr[1] + kr[2]
Quaternion.from_angle_axis(theta, r) for a rotation of angle theta about an axis oriented toward the direction r. This defines a unit quaternion, normalizing \(r\) to the unit vector \(\hat r\), and setting quaternion \(Q = \cos \theta + \sin \theta \hat r\)
Quaternion objects can be multiplied, which applies a rotation about the given axis, allowing composition of rotations without risk of gimbal lock. The resulting quaternion is applied to a set of points using Q.rot(v).
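The rotation machinery can be sketched standalone with numpy, using the standard half-angle convention q = (cos(theta/2), sin(theta/2) r̂); the Quaternion class above may fold the half angle into its own convention. These helper names are illustrative, not the class API.

```python
import numpy as np

def from_angle_axis(theta, r):
    # unit quaternion for rotation by theta (radians) about axis r
    r = np.asarray(r, dtype=float)
    r = r / np.linalg.norm(r)
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * r))

def rotate(q, v):
    # apply v' = q v q* via the equivalent rotation matrix
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ np.asarray(v, dtype=float)
```

For example, a 90 degree rotation about z carries the x axis onto the y axis.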
- __module__ = 'sasmodels.jitter'¶
- __weakref__¶
list of weak references to the object (if defined)
- sasmodels.jitter.R_to_xyz(R)[source]¶
Return phi, theta, psi Tait-Bryan angles corresponding to the given rotation matrix.
Extracting Euler Angles from a Rotation Matrix, Mike Day, Insomniac Games. https://d3cw3dd2w32x2b.cloudfront.net/wp-content/uploads/2012/07/euler-angles1.pdf Based on Shoemake's "Euler Angle Conversion", Graphics Gems IV, pp. 222-229.
- sasmodels.jitter._ipv_plot(calculator, draw_shape, size, view, jitter, dist, mesh, projection)[source]¶
- sasmodels.jitter._mpl_plot(calculator, draw_shape, size, view, jitter, dist, mesh, projection)[source]¶
- sasmodels.jitter.apply_jitter(jitter, points)[source]¶
Apply the jitter transform to a set of points.
Points are stored in a 3 x n numpy matrix, not a numpy array or tuple.
- sasmodels.jitter.build_model(model_name, n=150, qmax=0.5, **pars)[source]¶
Build a calculator for the given shape.
model_name is any sasmodels model. n and qmax define an n x n mesh on which to evaluate the model. The remaining parameters are stored in the returned calculator as calculator.pars. They are used by
draw_scattering()
to set the non-orientation parameters in the calculation. Returns a calculator function which takes a dictionary of parameters and produces Iqxy. The Iqxy value needs to be reshaped to an n x n matrix for plotting. See the
direct_model.DirectModel
class for details.
- sasmodels.jitter.clipped_range(data, portion=1.0, mode='central')[source]¶
Determine range from data.
If portion is 1, use full range, otherwise use the center of the range or the top of the range, depending on whether mode is ‘central’ or ‘top’.
- sasmodels.jitter.draw_axes(axes, origin=(-1, -1, -1), length=(2, 2, 2))[source]¶
Draw wireframe axes lines, with given origin and length
- sasmodels.jitter.draw_bcc(axes, size, view, jitter, steps=None, alpha=1)[source]¶
Draw points for body-centered cubic paracrystal
- sasmodels.jitter.draw_beam(axes, view=(0, 0), alpha=0.5, steps=25)[source]¶
Draw the beam going from source at (0, 0, 1) to detector at (0, 0, -1)
- sasmodels.jitter.draw_ellipsoid(axes, size, view, jitter, steps=25, alpha=1)[source]¶
Draw an ellipsoid.
- sasmodels.jitter.draw_fcc(axes, size, view, jitter, steps=None, alpha=1)[source]¶
Draw points for face-centered cubic paracrystal
- sasmodels.jitter.draw_jitter(axes, view, jitter, dist='gaussian', size=(0.1, 0.4, 1.0), draw_shape=<function draw_parallelepiped>, projection='equirectangular', alpha=0.8, views=None)[source]¶
Represent jitter as a set of shapes at different orientations.
- sasmodels.jitter.draw_mesh(axes, view, jitter, radius=1.2, n=11, dist='gaussian', projection='equirectangular')[source]¶
Draw the dispersion mesh showing the theta-phi orientations at which the model will be evaluated.
- sasmodels.jitter.draw_parallelepiped(axes, size, view, jitter, steps=None, color=(0.6, 1.0, 0.6), alpha=1)[source]¶
Draw a parallelepiped surface, with view and jitter.
- sasmodels.jitter.draw_person_on_sphere(axes, view, height=0.5, radius=1.0)[source]¶
Draw a person on the surface of a sphere.
view indicates (latitude, longitude, orientation)
- sasmodels.jitter.draw_sc(axes, size, view, jitter, steps=None, alpha=1)[source]¶
Draw points for simple cubic paracrystal
- sasmodels.jitter.draw_scattering(calculator, axes, view, jitter, dist='gaussian')[source]¶
Plot the scattering for the particular view.
calculator is returned from
build_model()
. axes are the 3D axes on which the data will be plotted. view and jitter are the current orientation and orientation dispersity. dist is one of the sasmodels weight distributions.
- sasmodels.jitter.draw_sphere(axes, radius=1.0, steps=25, center=(0, 0, 0), color='w', alpha=1.0)[source]¶
Draw a sphere
- sasmodels.jitter.get_projection(projection)[source]¶
jitter projections <https://en.wikipedia.org/wiki/List_of_map_projections>
- equirectangular (standard latitude-longitude mesh)
<https://en.wikipedia.org/wiki/Equirectangular_projection> Allows free movement in phi (around the equator), but theta is limited to +/- 90, and points are cos-weighted. Jitter in phi is uniform in weight along a line of latitude. With small theta and phi ranging over +/- 180 this forms a wobbling disk. With small phi and theta ranging over +/- 90 this forms a wedge like a slice of an orange.
- azimuthal_equidistance (Postel)
<https://en.wikipedia.org/wiki/Azimuthal_equidistant_projection> Preserves distance from center, and so is an excellent map for representing a bivariate gaussian on the surface. Theta and phi operate identically, cutting wedges from the antipode of the viewing angle. This unfortunately does not allow free movement in either theta or phi since the orthogonal wobble decreases to 0 as the body rotates through 180 degrees.
- sinusoidal (Sanson-Flamsteed, Mercator equal-area)
<https://en.wikipedia.org/wiki/Sinusoidal_projection> Preserves arc length with latitude, giving bad behaviour at theta near +/- 90. Theta and phi operate somewhat differently, so a system with a-b-c dtheta-dphi-dpsi will not give the same value as one with b-a-c dphi-dtheta-dpsi, as would be the case for azimuthal equidistance. Free movement using theta or phi uniform over +/- 180 will work, but not as well as equirectangular phi, with theta being slightly worse. Computationally it is much cheaper for wide theta-phi meshes since it excludes points which lie outside the sinusoid near theta +/- 90 rather than packing them close together as in equirectangular. Note that the poles will be slightly overweighted for theta > 90 with the circle from theta at 90+dt winding backwards around the pole, overlapping the circle from theta at 90-dt.
- Guyou (hemisphere-in-a-square) not weighted
<https://en.wikipedia.org/wiki/Guyou_hemisphere-in-a-square_projection> With tiling, allows rotation in phi or theta through +/- 180, with uniform spacing. Both theta and phi allow free rotation, with wobble in the orthogonal direction reasonably well behaved (though not as good as equirectangular phi). The forward/reverse transformations rely on elliptic integrals that are somewhat expensive, so the behaviour has to be very good to justify the cost and complexity. The weighting function for each point has not yet been computed. Note: run the module guyou.py directly and it will show the forward and reverse mappings.
- azimuthal_equal_area incomplete
<https://en.wikipedia.org/wiki/Lambert_azimuthal_equal-area_projection> Preserves the relative density of the surface patches. Not that useful and not completely implemented.
- Gauss-Krüger not implemented
<https://en.wikipedia.org/wiki/Transverse_Mercator_projection#Ellipsoidal_transverse_Mercator> Should allow free movement in theta, but phi is distorted.
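As an illustration of the point exclusion in the sinusoidal projection described above, a point (theta, phi) in degrees is kept only if it lies inside the sinusoid (a sketch of the criterion; the exact mesh weighting in sasmodels may differ):

```python
import math

def inside_sinusoid(theta, phi):
    """True if the point (theta, phi), in degrees, lies inside the
    sinusoid and is therefore kept rather than excluded. A sketch of
    the criterion described above."""
    return abs(phi) <= 180.0 * math.cos(math.radians(theta))

# Near the equator almost all phi survive; near the poles almost none do.
assert inside_sinusoid(0, 170)
assert not inside_sinusoid(89, 170)
```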
- sasmodels.jitter.map_colors(z, kw)[source]¶
Process matplotlib-style colour arguments.
Pulls ‘cmap’, ‘alpha’, ‘vmin’, and ‘vmax’ from the kw dictionary, setting kw[‘color’] to an RGB array. These are ignored if ‘c’ or ‘color’ are set inside kw.
- sasmodels.jitter.orient_relative_to_beam(view, points)[source]¶
Apply the view transform to a set of points.
Points are stored in a 3 x n numpy matrix, not a numpy array or tuple.
- sasmodels.jitter.orient_relative_to_beam_quaternion(view, points)[source]¶
Apply the view transform to a set of points.
Points are stored in a 3 x n numpy matrix, not a numpy array or tuple.
This variant uses quaternions rather than rotation matrices for the computation. It works but it is not used because it doesn’t solve any problems. The challenge of mapping theta/phi/psi to SO(3) does not disappear by calculating the transform differently.
- sasmodels.jitter.run(model_name='parallelepiped', size=(10, 40, 100), view=(0, 0, 0), jitter=(0, 0, 0), dist='gaussian', mesh=30, projection='equirectangular')[source]¶
Show an interactive orientation and jitter demo.
model_name is one of: sphere, ellipsoid, triaxial_ellipsoid, parallelepiped, cylinder, or sc/fcc/bcc_paracrystal
size gives the dimensions (a, b, c) of the shape.
view gives the initial view (theta, phi, psi) of the shape.
jitter gives the initial jitter (dtheta, dphi, dpsi) of the shape.
dist is the type of dispersion: gaussian, rectangle, or uniform.
mesh is the number of points in the dispersion mesh.
projection is the map projection to use for the mesh: equirectangular, sinusoidal, guyou, azimuthal_equidistance, or azimuthal_equal_area.
- sasmodels.jitter.select_calculator(model_name, n=150, size=(10, 40, 100))[source]¶
Create a model calculator for the given shape.
model_name is one of sphere, cylinder, ellipsoid, triaxial_ellipsoid, parallelepiped or bcc_paracrystal. n is the number of points to use in the q range. qmax is chosen based on model parameters for the given model to show something interesting.
Returns calculator and tuple size (a,b,c) giving minor and major equatorial axes and polar axis respectively. See
build_model()
for details on the returned calculator.
sasmodels.kernel module¶
Execution kernel interface¶
KernelModel
defines the interface to all kernel models.
In particular, each model should provide a KernelModel.make_kernel()
call which returns an executable kernel, Kernel
, that operates
on the given set of q_vector inputs. On completion of the computation,
the kernel should be released, which also releases the inputs.
- class sasmodels.kernel.Kernel[source]¶
Bases:
object
Instantiated model for the compute engine, applied to a particular q.
Subclasses should define __init__() to set up the kernel inputs, and _call_kernel() to evaluate the kernel:
def __init__(self, ...):
    ...
    self.q_input = <q-value class with nq attribute>
    self.info = <ModelInfo object>
    self.dim = <'1d' or '2d'>
    self.dtype = <kernel.dtype>
    size = 2*self.q_input.nq+4 if self.info.have_Fq else self.q_input.nq+4
    size = size + <extra padding if needed for kernel>
    self.result = np.empty(size, dtype=self.dtype)

def _call_kernel(self, call_details, values, cutoff, magnetic,
                 radius_effective_mode):
    # type: (CallDetails, np.ndarray, np.ndarray, float, bool, int) -> None
    ...  # call <kernel>
    nq = self.q_input.nq
    if self.info.have_Fq:  # models that compute both F and F^2
        end = 2*nq if have_Fq else nq
        self.result[0:end:2] = F**2
        self.result[1:end:2] = F
    else:
        end = nq
        self.result[0:end] = Fsq
    self.result[end + 0] = total_weight
    self.result[end + 1] = form_volume
    self.result[end + 2] = shell_volume
    self.result[end + 3] = radius_effective
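The result buffer layout above can be unpacked as follows (a sketch using plain Python lists; in the kernel itself result is a numpy array, and the values below are made up for illustration):

```python
# Sketch: unpacking the result buffer laid out as in the docstring
# above. Models that compute both F and F^2 interleave them, followed
# by four trailing scalars.
nq = 2
have_Fq = True

# [F^2(q0), F(q0), F^2(q1), F(q1),
#  total_weight, form_volume, shell_volume, radius_effective]
result = [4.0, 2.0, 9.0, 3.0, 1.0, 10.0, 10.0, 1.5]

end = 2 * nq if have_Fq else nq
Fsq = result[0:end:2] if have_Fq else result[0:end]
F = result[1:end:2] if have_Fq else None
total_weight, form_volume, shell_volume, radius_effective = result[end:end + 4]

assert Fsq == [4.0, 9.0] and F == [2.0, 3.0]
```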
- Fq(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int = 0) ndarray [source]¶
Returns <F(q)>, <F(q)^2>, effective radius, shell volume and form:shell volume ratio. The <F(q)> term may be None if the form factor does not support direct computation of \(F(q)\).
\(P(q) = <F^2(q)>/<V>\) is used for structure factor calculations,
\[I(q) = \text{scale} \cdot P(q) \cdot S(q) + \text{background}\]For the beta approximation, this becomes
\[I(q) = \text{scale} P (1 + <F>^2/<F^2> (S - 1)) + \text{background} = \text{scale}/<V> (<F^2> + <F>^2 (S - 1)) + \text{background}\]\(<F(q)>\) and \(<F^2(q)>\) are averaged by polydispersity in shape and orientation, with each configuration \(x_k\) having form factor \(F(q, x_k)\), weight \(w_k\) and volume \(V_k\). The result is:
\[P(q)=\frac{\sum w_k F^2(q, x_k) / \sum w_k}{\sum w_k V_k / \sum w_k}\]The form factor itself is scaled by volume and contrast to compute the total scattering. This is then squared, and the weighted average of \(F^2\) is normalized by the weighted average volume. For a given density, the number of scattering centers is assumed to scale linearly with volume. Later scaling the resulting \(P(q)\) by the volume fraction of particles gives the total scattering on an absolute scale. Most models incorporate the volume fraction into the overall scale parameter. An exception is vesicle, which includes the volume fraction parameter in the model itself, scaling \(F\) by \(\surd V_f\) so that the math for the beta approximation works out.
By scaling \(P(q)\) by total weight \(\sum w_k\), there is no need to make sure that the polydispersity distributions normalize to one. In particular, any distribution values \(x_k\) outside the valid domain of \(F\) will not be included, and the distribution will be implicitly truncated. This is controlled by the parameter limits defined in the model (which truncate the distribution before calling the kernel) as well as any region excluded using the INVALID macro defined within the model itself.
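The implicit truncation can be sketched as follows: points outside the valid domain of F are simply skipped, and the weighted sum is divided by the weight of the points actually included rather than by one (the toy F and its domain limit are made up for illustration):

```python
def F(x):
    return 1.0 / x  # a toy form factor, valid only for x > 0

# (x_k, w_k) pairs; the first point lies outside the valid domain.
points = [(-0.5, 0.2), (0.5, 0.5), (1.0, 0.5), (2.0, 0.2)]

total = weight = 0.0
for x, w in points:
    if x <= 0:          # the INVALID region for this toy model
        continue        # implicit truncation: point simply dropped
    total += w * F(x)**2
    weight += w

P = total / weight      # normalized by included weight, not by 1.0
assert abs(P - 2.125) < 1e-12
```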
The volume used in the polydispersity calculation is the form volume for solid objects or the shell volume for hollow objects. Shell volume should be used within \(F\) so that the normalizing scale represents the volume fraction of the shell rather than the entire form. This corresponds to the volume fraction of shell-forming material added to the solvent.
The calculation of \(S\) requires the effective radius and the volume fraction of the particles. The model can have several different ways to compute effective radius, with the radius_effective_mode parameter used to select amongst them. The volume fraction of particles should be determined from the total volume fraction of the form, not just the shell volume fraction. This makes a difference for hollow shapes, which need to scale the volume fraction by the returned volume ratio when computing \(S\). For solid objects, the shell volume is set to the form volume so this scale factor evaluates to one and so can be used for both hollow and solid shapes.
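The two forms of the beta approximation given above can be checked numerically (a sketch with made-up values for <F>, <F^2>, <V> and S at a single q point):

```python
# <F>, <F^2>, <V> and S(q) at a single q point; values are made up.
Fbar, Fsq, V, S = 3.0, 10.0, 2.0, 1.5
scale, background = 0.01, 0.001

P = Fsq / V  # P(q) = <F^2(q)> / <V>

form1 = scale * P * (1 + Fbar**2 / Fsq * (S - 1)) + background
form2 = scale / V * (Fsq + Fbar**2 * (S - 1)) + background
assert abs(form1 - form2) < 1e-12
```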
- Iq(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool) ndarray [source]¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\]With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __call__(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool) ndarray ¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\]With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernel', '__doc__': "\n Instantiated model for the compute engine, applied to a particular *q*.\n\n Subclasses should define *__init__()* to set up the kernel inputs, and\n *_call_kernel()* to evaluate the kernel::\n\n def __init__(self, ...):\n ...\n self.q_input = <q-value class with nq attribute>\n self.info = <ModelInfo object>\n self.dim = <'1d' or '2d'>\n self.dtype = <kernel.dtype>\n size = 2*self.q_input.nq+4 if self.info.have_Fq else self.q_input.nq+4\n size = size + <extra padding if needed for kernel>\n self.result = np.empty(size, dtype=self.dtype)\n\n def _call_kernel(self, call_details, values, cutoff, magnetic,\n radius_effective_mode):\n # type: (CallDetails, np.ndarray, np.ndarray, float, bool, int) -> None\n ... # call <kernel>\n nq = self.q_input.nq\n if self.info.have_Fq: # models that compute both F and F^2\n end = 2*nq if have_Fq else nq\n self.result[0:end:2] = F**2\n self.result[1:end:2] = F\n else:\n end = nq\n self.result[0:end] = Fsq\n self.result[end + 0] = total_weight\n self.result[end + 1] = form_volume\n self.result[end + 2] = shell_volume\n self.result[end + 3] = radius_effective\n ", 'dim': None, 'info': None, 'dtype': None, 'q_input': None, 'result': None, 'Iq': <function Kernel.Iq>, '__call__': <function Kernel.Iq>, 'Fq': <function Kernel.Fq>, 'release': <function Kernel.release>, '_call_kernel': <function Kernel._call_kernel>, '__dict__': <attribute '__dict__' of 'Kernel' objects>, '__weakref__': <attribute '__weakref__' of 'Kernel' objects>, '__annotations__': {'dim': 'str', 'info': 'ModelInfo', 'dtype': 'np.dtype', 'q_input': 'Any', 'result': 'np.ndarray'}})¶
- __doc__ = "\n Instantiated model for the compute engine, applied to a particular *q*.\n\n Subclasses should define *__init__()* to set up the kernel inputs, and\n *_call_kernel()* to evaluate the kernel::\n\n def __init__(self, ...):\n ...\n self.q_input = <q-value class with nq attribute>\n self.info = <ModelInfo object>\n self.dim = <'1d' or '2d'>\n self.dtype = <kernel.dtype>\n size = 2*self.q_input.nq+4 if self.info.have_Fq else self.q_input.nq+4\n size = size + <extra padding if needed for kernel>\n self.result = np.empty(size, dtype=self.dtype)\n\n def _call_kernel(self, call_details, values, cutoff, magnetic,\n radius_effective_mode):\n # type: (CallDetails, np.ndarray, np.ndarray, float, bool, int) -> None\n ... # call <kernel>\n nq = self.q_input.nq\n if self.info.have_Fq: # models that compute both F and F^2\n end = 2*nq if have_Fq else nq\n self.result[0:end:2] = F**2\n self.result[1:end:2] = F\n else:\n end = nq\n self.result[0:end] = Fsq\n self.result[end + 0] = total_weight\n self.result[end + 1] = form_volume\n self.result[end + 2] = shell_volume\n self.result[end + 3] = radius_effective\n "¶
- __module__ = 'sasmodels.kernel'¶
- __weakref__¶
list of weak references to the object (if defined)
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int) None [source]¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- dim: str = None¶
Kernel dimension, either “1d” or “2d”.
- dtype: dtype = None¶
Numerical precision for the computation.
- q_input: Any = None¶
Q values at which the kernel is to be evaluated.
- result: ndarray = None¶
Place to hold result of _call_kernel() for subclass.
- class sasmodels.kernel.KernelModel[source]¶
Bases:
object
Model definition for the compute engine.
- __annotations__ = {'dtype': 'np.dtype', 'info': 'ModelInfo'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernel', '__doc__': '\n Model definition for the compute engine.\n ', 'info': None, 'dtype': None, 'make_kernel': <function KernelModel.make_kernel>, 'release': <function KernelModel.release>, '__dict__': <attribute '__dict__' of 'KernelModel' objects>, '__weakref__': <attribute '__weakref__' of 'KernelModel' objects>, '__annotations__': {'info': 'ModelInfo', 'dtype': 'np.dtype'}})¶
- __doc__ = '\n Model definition for the compute engine.\n '¶
- __module__ = 'sasmodels.kernel'¶
- __weakref__¶
list of weak references to the object (if defined)
- dtype: dtype = None¶
sasmodels.kernelcl module¶
GPU driver for C kernels
TODO: docs are out of date
There should be a single GPU environment running on the system. This
environment is constructed on the first call to environment()
, and the
same environment is returned on each call.
After retrieving the environment, the next step is to create the kernel.
This is done with a call to GpuEnvironment.compile_program()
, which
returns the type of data used by the kernel.
Next a GpuInput
object should be created with the correct kind
of data. This data object can be used by multiple kernels, for example,
if the target model is a weighted sum of multiple kernels. The data
should include any extra evaluation points required to compute the proper
data smearing. This need not match the square grid for 2D data if there
is an index saying which q points are active.
Together the GpuInput, the program, and a device form a GpuKernel
.
This kernel is used during fitting, receiving new sets of parameters and
evaluating them. The output value is stored in an output buffer on the
devices, where it can be combined with other structure factors and form
factors and have instrumental resolution effects applied.
In order to use OpenCL for your models, you will need OpenCL drivers for your machine. These should be available from your graphics card vendor. Intel provides OpenCL drivers for CPUs as well as their integrated HD graphics chipsets. AMD also provides drivers for Intel CPUs, but as of this writing the performance is lacking compared to the Intel drivers. NVidia combines drivers for CUDA and OpenCL in one package. The result is a bit messy if you have multiple drivers installed. You can see which drivers are available by starting python and running:
import pyopencl as cl
cl.create_some_context(interactive=True)
Once you have done that, it will show the available drivers which you can select. It will then tell you that you can use these drivers automatically by setting the SAS_OPENCL environment variable, which is equivalent to PYOPENCL_CTX but does not conflict with other pyopencl programs.
Some graphics cards have multiple devices on the same card. You cannot yet use both of them concurrently to evaluate models, but you can run the program twice using a different device for each session.
OpenCL kernels are compiled when needed by the device driver. Some drivers produce compiler output even when there is no error. You can see the output by setting PYOPENCL_COMPILER_OUTPUT=1. It should be harmless, albeit annoying.
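Driver selection and compiler output can both be controlled from Python before pyopencl is first used (the "platform:device" value below is hypothetical; use the indices that the interactive prompt reported for your machine):

```python
import os

# Select a driver non-interactively by setting SAS_OPENCL before
# sasmodels (or pyopencl) is first imported. The value format follows
# PYOPENCL_CTX; "0:1" (platform 0, device 1) is hypothetical.
os.environ["SAS_OPENCL"] = "0:1"

# Show device compiler output, as described above.
os.environ["PYOPENCL_COMPILER_OUTPUT"] = "1"
```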
- class sasmodels.kernelcl.GpuEnvironment[source]¶
Bases:
object
GPU context for OpenCL, with possibly many devices and one queue per device.
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcl', '__doc__': '\n GPU context for OpenCL, with possibly many devices and one queue per device.\n ', '__init__': <function GpuEnvironment.__init__>, 'has_type': <function GpuEnvironment.has_type>, 'compile_program': <function GpuEnvironment.compile_program>, '__dict__': <attribute '__dict__' of 'GpuEnvironment' objects>, '__weakref__': <attribute '__weakref__' of 'GpuEnvironment' objects>, '__annotations__': {}})¶
- __doc__ = '\n GPU context for OpenCL, with possibly many devices and one queue per device.\n '¶
- __module__ = 'sasmodels.kernelcl'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.kernelcl.GpuInput(q_vectors: List[ndarray], dtype: dtype = dtype('float32'))[source]¶
Bases:
object
Make q data available to the gpu.
q_vectors is a list of q vectors, which will be [q] for 1-D data, and [qx, qy] for 2-D data. Internally, the vectors will be reallocated to get the best performance on OpenCL, which may involve shifting and stretching the array to better match the memory architecture. Additional points will be evaluated with q=1e-3.
dtype is the data type for the q vectors. The data type should be set to match that of the kernel, which is an attribute of
GpuModel
. Note that not all kernels support double precision, so even if the program was created for double precision, the GpuModel.dtype may be single precision. Call
release()
when complete. Even if not called directly, the buffer will be released when the data object is freed.- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcl', '__doc__': '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n ', 'nq': 0, 'dtype': dtype('float32'), 'is_2d': False, 'q': None, 'q_b': None, '__init__': <function GpuInput.__init__>, 'release': <function GpuInput.release>, '__del__': <function GpuInput.__del__>, '__dict__': <attribute '__dict__' of 'GpuInput' objects>, '__weakref__': <attribute '__weakref__' of 'GpuInput' objects>, '__annotations__': {}})¶
- __doc__ = '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n '¶
- __module__ = 'sasmodels.kernelcl'¶
- __weakref__¶
list of weak references to the object (if defined)
- dtype = dtype('float32')¶
- is_2d = False¶
- nq = 0¶
- q = None¶
- q_b = None¶
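The q_vectors input described above can be sketched with plain Python lists (the real inputs are numpy arrays; nq, qmin and qmax here are made up for illustration):

```python
# Sketch: building q_vectors as [q] for 1-D data and [qx, qy] for
# 2-D data, as described above.
nq, qmin, qmax = 5, 1e-3, 1.0

# 1-D data: a single logarithmically spaced q vector.
q = [qmin * (qmax / qmin) ** (i / (nq - 1)) for i in range(nq)]
q_vectors_1d = [q]

# 2-D data: flattened qx, qy grids of equal length (nq*nq points).
qgrid = [qmin + (qmax - qmin) * i / (nq - 1) for i in range(nq)]
qx = [x for x in qgrid for _ in qgrid]
qy = [y for _ in qgrid for y in qgrid]
q_vectors_2d = [qx, qy]

assert len(qx) == len(qy) == nq * nq
```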
- class sasmodels.kernelcl.GpuKernel(model: GpuModel, q_vectors: List[ndarray])[source]¶
Bases:
Kernel
Callable SAS kernel.
model is the GpuModel object to call
The kernel is derived from
kernel.Kernel
, providing the _call_kernel() method to evaluate the kernel for a given set of parameters. Because of the need to move the q values to the GPU before evaluation, the kernel is instantiated for a particular set of q vectors, and can be called many times without transferring q each time. Call
release()
when done with the kernel instance.- __doc__ = '\n Callable SAS kernel.\n\n *model* is the GpuModel object to call\n\n The kernel is derived from :class:`.kernel.Kernel`, providing the\n *_call_kernel()* method to evaluate the kernel for a given set of\n parameters. Because of the need to move the q values to the GPU before\n evaluation, the kernel is instantiated for a particular set of q vectors,\n and can be called many times without transfering q each time.\n\n Call :meth:`release` when done with the kernel instance.\n '¶
- __module__ = 'sasmodels.kernelcl'¶
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int) None [source]¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- _result_b: Buffer = None¶
- dim: str = ''¶
Kernel dimensions (1d or 2d).
- dtype: dtype = None¶
Kernel precision.
- q_input: Any = None¶
Q values at which the kernel is to be evaluated.
- result: ndarray = None¶
Calculation results, updated after each call to _call_kernel().
- class sasmodels.kernelcl.GpuModel(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False)[source]¶
Bases:
KernelModel
GPU wrapper for a single model.
source and model_info are the model source and interface as returned from
generate.make_source()
and modelinfo.make_model_info()
. dtype is the desired model precision. Any numpy dtype for single or double precision floats will do, such as ‘f’, ‘float32’ or ‘single’ for single and ‘d’, ‘float64’ or ‘double’ for double. Double precision is an optional extension which may not be available on all devices. Half precision (‘float16’, ‘half’) may be available on some devices. Fast precision (‘fast’) is a loose version of single precision, indicating that the compiler is allowed to take shortcuts.
- __doc__ = "\n GPU wrapper for a single model.\n\n *source* and *model_info* are the model source and interface as returned\n from :func:`.generate.make_source` and :func:`.modelinfo.make_model_info`.\n\n *dtype* is the desired model precision. Any numpy dtype for single\n or double precision floats will do, such as 'f', 'float32' or 'single'\n for single and 'd', 'float64' or 'double' for double. Double precision\n is an optional extension which may not be available on all devices.\n Half precision ('float16','half') may be available on some devices.\n Fast precision ('fast') is a loose version of single precision, indicating\n that the compiler is allowed to take shortcuts.\n "¶
- __init__(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False) None [source]¶
- __module__ = 'sasmodels.kernelcl'¶
- _kernels: Dict[str, Kernel] = None¶
- _program: Program = None¶
- dtype: dtype = None¶
- fast: bool = False¶
- get_function(name: str) Kernel [source]¶
Fetch the kernel from the environment by name, compiling it if it does not already exist.
- make_kernel(q_vectors: List[ndarray]) GpuKernel [source]¶
Instantiate a kernel for evaluating the model at q_vectors.
- source: str = ''¶
- sasmodels.kernelcl._create_some_context() Context [source]¶
Protected call to cl.create_some_context without interactivity.
Uses SAS_OPENCL or PYOPENCL_CTX if they are set in the environment, otherwise scans for the most appropriate device using
_get_default_context()
. Ignores SAS_OPENCL=OpenCL, which indicates that an OpenCL device should be used without specifying which one (and not a CUDA device, or no GPU).
- sasmodels.kernelcl._get_default_context() List[Context] [source]¶
Get an OpenCL context, preferring GPU over CPU, and preferring Intel drivers over AMD drivers.
- sasmodels.kernelcl.compile_model(context: Context, source: str, dtype: dtype, fast: bool = False) Program [source]¶
Build a model to run on the gpu.
Returns the compiled program and its type.
Raises an error if the desired precision is not available.
- sasmodels.kernelcl.environment() GpuEnvironment [source]¶
Returns a singleton
GpuEnvironment
.This provides an OpenCL context and one queue per device.
- sasmodels.kernelcl.fix_pyopencl_include() None [source]¶
Monkey patch pyopencl to allow spaces in include file path.
- sasmodels.kernelcl.get_warp(kernel: Kernel, queue: CommandQueue) int [source]¶
Return the size of an execution batch for kernel running on queue.
- sasmodels.kernelcl.has_type(device: Device, dtype: dtype) bool [source]¶
Return true if device supports the requested precision.
- sasmodels.kernelcl.quote_path(v: str) str [source]¶
Quote the path if it is not already quoted.
If v starts with ‘-’, then assume that it is a -I option or similar and do not quote it. This is fragile: -Ipath with space needs to be quoted.
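The quoting rule can be sketched as follows (a plain-Python illustration, not the sasmodels implementation; as noted above, a -I option containing a space would still break):

```python
def quote_path(v):
    """Quote the path if it is not already quoted, leaving -I style
    options alone. A sketch of the rule described above."""
    if v.startswith('-') or v.startswith('"'):
        return v
    return '"%s"' % v

assert quote_path('-Ifoo') == '-Ifoo'
assert quote_path('C:/Program Files/include') == '"C:/Program Files/include"'
```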
- sasmodels.kernelcl.reset_environment() GpuEnvironment [source]¶
Return a new OpenCL context, such as after a change to SAS_OPENCL.
sasmodels.kernelcuda module¶
GPU driver for C kernels (with CUDA)
To select cuda, use SAS_OPENCL=cuda, or SAS_OPENCL=cuda:n for a particular device number. If no device number is specified, then look for CUDA_DEVICE=n or a file ~/.cuda-device containing n for the device number. Otherwise, try all available device numbers.
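The device-selection order described above can be sketched as follows (a plain-Python illustration, not the sasmodels implementation; the file contents of ~/.cuda-device are passed in as a string here):

```python
def cuda_device_number(sas_opencl, environ, cuda_device_file=None):
    """Return the CUDA device number following the order described
    above: SAS_OPENCL=cuda:n first, then CUDA_DEVICE=n, then the
    contents of ~/.cuda-device, else None to try all available
    devices. A sketch of the described behaviour."""
    if sas_opencl.startswith("cuda:"):
        return int(sas_opencl[len("cuda:"):])
    if "CUDA_DEVICE" in environ:
        return int(environ["CUDA_DEVICE"])
    if cuda_device_file is not None:
        return int(cuda_device_file.strip())
    return None

assert cuda_device_number("cuda:2", {}) == 2
assert cuda_device_number("cuda", {"CUDA_DEVICE": "1"}) == 1
assert cuda_device_number("cuda", {}, "0\n") == 0
assert cuda_device_number("cuda", {}) is None
```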
TODO: docs are out of date
There should be a single GPU environment running on the system. This
environment is constructed on the first call to environment()
, and the
same environment is returned on each call.
After retrieving the environment, the next step is to create the kernel.
This is done with a call to GpuEnvironment.compile_program()
, which
returns the type of data used by the kernel.
Next a GpuInput
object should be created with the correct kind
of data. This data object can be used by multiple kernels, for example,
if the target model is a weighted sum of multiple kernels. The data
should include any extra evaluation points required to compute the proper
data smearing. This need not match the square grid for 2D data if there
is an index saying which q points are active.
Together the GpuInput, the program, and a device form a GpuKernel
.
This kernel is used during fitting, receiving new sets of parameters and
evaluating them. The output value is stored in an output buffer on the
devices, where it can be combined with other structure factors and form
factors and have instrumental resolution effects applied.
In order to use OpenCL for your models, you will need OpenCL drivers for your machine. These should be available from your graphics card vendor. Intel provides OpenCL drivers for CPUs as well as their integrated HD graphics chipsets. AMD also provides drivers for Intel CPUs, but as of this writing the performance is lacking compared to the Intel drivers. NVidia combines drivers for CUDA and OpenCL in one package. The result is a bit messy if you have multiple drivers installed. You can see which drivers are available by starting python and running:
import pyopencl as cl
cl.create_some_context(interactive=True)
Once you have done that, it will show the available drivers which you can select. It will then tell you that you can use these drivers automatically by setting the SAS_OPENCL environment variable, which is equivalent to PYOPENCL_CTX but does not conflict with other pyopencl programs.
Some graphics cards have multiple devices on the same card. You cannot yet use both of them concurrently to evaluate models, but you can run the program twice using a different device for each session.
OpenCL kernels are compiled when needed by the device driver. Some drivers produce compiler output even when there is no error. You can see the output by setting PYOPENCL_COMPILER_OUTPUT=1. It should be harmless, albeit annoying.
- class sasmodels.kernelcuda.GpuEnvironment(devnum: int | None = None)[source]¶
Bases:
object
GPU context for CUDA.
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcuda', '__doc__': '\n GPU context for CUDA.\n ', 'context': None, '__init__': <function GpuEnvironment.__init__>, 'release': <function GpuEnvironment.release>, '__del__': <function GpuEnvironment.__del__>, 'has_type': <function GpuEnvironment.has_type>, 'compile_program': <function GpuEnvironment.compile_program>, '__dict__': <attribute '__dict__' of 'GpuEnvironment' objects>, '__weakref__': <attribute '__weakref__' of 'GpuEnvironment' objects>, '__annotations__': {'context': 'cuda.Context'}})¶
- __doc__ = '\n GPU context for CUDA.\n '¶
- __module__ = 'sasmodels.kernelcuda'¶
- __weakref__¶
list of weak references to the object (if defined)
- compile_program(name: str, source: str, dtype: np.dtype, fast: bool, timestamp: float) SourceModule [source]¶
Compile the program for the device in the given context.
- context: cuda.Context = None¶
- class sasmodels.kernelcuda.GpuInput(q_vectors: List[ndarray], dtype: dtype = dtype('float32'))[source]¶
Bases:
object
Make q data available to the gpu.
q_vectors is a list of q vectors, which will be [q] for 1-D data, and [qx, qy] for 2-D data. Internally, the vectors will be reallocated to get the best performance on OpenCL, which may involve shifting and stretching the array to better match the memory architecture. Additional points will be evaluated with q=1e-3.
dtype is the data type for the q vectors. The data type should be set to match that of the kernel, which is an attribute of
GpuModel
. Note that not all kernels support double precision, so even if the program was created for double precision, the GpuModel.dtype may be single precision. Call
release()
when complete. Even if not called directly, the buffer will be released when the data object is freed.- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcuda', '__doc__': '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n ', '__init__': <function GpuInput.__init__>, 'release': <function GpuInput.release>, '__del__': <function GpuInput.__del__>, '__dict__': <attribute '__dict__' of 'GpuInput' objects>, '__weakref__': <attribute '__weakref__' of 'GpuInput' objects>, '__annotations__': {}})¶
- __module__ = 'sasmodels.kernelcuda'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.kernelcuda.GpuKernel(model: GpuModel, q_vectors: List[ndarray])[source]¶
Bases:
Kernel
Callable SAS kernel.
model is the GpuModel object to call
The kernel is derived from
kernel.Kernel
, providing the _call_kernel() method to evaluate the kernel for a given set of parameters. Because of the need to move the q values to the GPU before evaluation, the kernel is instantiated for a particular set of q vectors, and can be called many times without transferring q each time. Call
release()
when done with the kernel instance.
- __module__ = 'sasmodels.kernelcuda'¶
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int) None [source]¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- dim: str = ''¶
Kernel dimensions (1d or 2d).
- dtype: dtype = None¶
Kernel precision.
- result: ndarray = None¶
Calculation results, updated after each call to _call_kernel().
- class sasmodels.kernelcuda.GpuModel(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False)[source]¶
Bases:
KernelModel
GPU wrapper for a single model.
source and model_info are the model source and interface as returned from
generate.make_source()
and modelinfo.make_model_info().
dtype is the desired model precision. Any numpy dtype for single or double precision floats will do, such as ‘f’, ‘float32’ or ‘single’ for single and ‘d’, ‘float64’ or ‘double’ for double. Double precision is an optional extension which may not be available on all devices. Half precision (‘float16’, ‘half’) may be available on some devices. Fast precision (‘fast’) is a loose version of single precision, indicating that the compiler is allowed to take shortcuts.
- __init__(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False) None [source]¶
- __module__ = 'sasmodels.kernelcuda'¶
- _kernels: Dict[str, cuda.Function] = None¶
- _program: SourceModule = None¶
- dtype: np.dtype = None¶
- fast: bool = False¶
- get_function(name: str) cuda.Function [source]¶
Fetch the kernel from the environment by name, compiling it if it does not already exist.
- make_kernel(q_vectors: List[ndarray]) GpuKernel [source]¶
Instantiate a kernel for evaluating the model at q_vectors.
- source: str = ''¶
- sasmodels.kernelcuda._add_device_tag(match: None) str [source]¶
Replace qualifiers with __device__ qualifiers if needed.
- sasmodels.kernelcuda.compile_model(source: str, dtype: np.dtype, fast: bool = False) SourceModule [source]¶
Build a model to run on the gpu.
Returns the compiled program and its type. The returned type will be float32 even if the desired type is float64 if any of the devices in the context do not support the cl_khr_fp64 extension.
- sasmodels.kernelcuda.environment() GpuEnvironment [source]¶
Returns a singleton
GpuEnvironment
. This provides a CUDA context and one queue per device.
- sasmodels.kernelcuda.has_type(dtype: dtype) bool [source]¶
Return true if device supports the requested precision.
- sasmodels.kernelcuda.mark_device_functions(source: str) str [source]¶
Mark all function declarations as __device__ functions (except kernel).
- sasmodels.kernelcuda.reset_environment() None [source]¶
Call to create a new CUDA context, such as after a change to SAS_OPENCL.
- sasmodels.kernelcuda.show_device_functions(source: str) str [source]¶
Show all discovered function declarations, but don’t change any.
sasmodels.kerneldll module¶
DLL driver for C kernels
If the environment variable SAS_OPENMP is set, then sasmodels will attempt to compile with OpenMP flags so that the model can use all available cores. This may or may not be supported by your compiler toolchain, depending on operating system and environment.
Windows does not provide a compiler with the operating system. Instead, we assume that TinyCC is installed and available. This can be done with a simple pip command if it is not already available:
pip install tinycc
If Microsoft Visual C++ is available (because VCINSTALLDIR is defined in the environment), then that will be used instead. Microsoft Visual C++ for Python is available from Microsoft.
If neither compiler is available, sasmodels will check for MinGW, the GNU compiler toolchain. This is available in packages such as Anaconda and PythonXY, or as a stand-alone install. This toolchain has had difficulties on some systems, and may or may not work for you.
You can control which compiler to use by setting SAS_COMPILER in the environment:
tinycc (Windows): use the TinyCC compiler shipped with SasView
msvc (Windows): use the Microsoft Visual C++ compiler
mingw (Windows): use the MinGW GNU cc compiler
unix (Linux): use the system cc compiler.
unix (Mac): use the clang compiler. You will need XCode installed, and the XCode command line tools. Mac comes with OpenCL drivers, so generally this will not be needed.
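The compiler choice above can be set from Python before sasmodels builds any model. A minimal sketch, assuming the environment variable names from the text (the values shown are illustrative):

```python
import os

# Select the toolchain before any model is compiled; sasmodels reads
# these settings from the environment at build time.
os.environ["SAS_COMPILER"] = "tinycc"   # or "msvc", "mingw", "unix"
os.environ["SAS_OPENMP"] = "1"          # request OpenMP if the toolchain supports it
```

Set these before the first model compile; changing them afterwards has no effect on already-built DLLs.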
Both msvc and mingw require that the compiler is available on your path. For msvc, this can be done by running vcvarsall.bat in a windows terminal. Install locations are system dependent, such as:
C:\Program Files (x86)\Common Files\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
or maybe
C:\Users\yourname\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
OpenMP for msvc requires the Microsoft vcomp90.dll library, which doesn’t seem to be included with the compiler, nor does there appear to be a public download location. There may be one on your machine already in a location such as:
C:\Windows\winsxs\x86_microsoft.vc90.openmp*\vcomp90.dll
If you copy this to somewhere on your path, such as the python directory or the install directory for this application, then OpenMP should be supported.
For full control of the compiler, define a function compile_command(source,output) which takes the name of the source file and the name of the output file and returns a compile command that can be evaluated in the shell. For even more control, replace the entire compile(source,output) function.
The global attribute ALLOW_SINGLE_PRECISION_DLLS should be set to False if you wish to prevent single precision floating point evaluation for the compiled models; otherwise it defaults to True.
- class sasmodels.kerneldll.DllKernel(kernel: Callable[[], ndarray], model_info: ModelInfo, q_input: PyInput)[source]¶
Bases:
Kernel
Callable SAS kernel.
kernel is the c function to call.
model_info is the module information
q_input is the DllInput q vectors at which the kernel should be evaluated.
The resulting call method takes the pars, a list of values for the fixed parameters to the kernel, and pd_pars, a list of (value, weight) vectors for the polydisperse parameters. cutoff determines the integration limits: any points with combined weight less than cutoff will not be calculated.
Call
release()
when done with the kernel instance.
- __module__ = 'sasmodels.kerneldll'¶
- class sasmodels.kerneldll.DllModel(dllpath: str, model_info: ModelInfo, dtype: dtype = dtype('float32'))[source]¶
Bases:
KernelModel
ctypes wrapper for a single model.
dllpath is the stored path to the dll.
model_info is the model definition returned from
modelinfo.make_model_info().
dtype is the desired model precision. Any numpy dtype for single or double precision floats will do, such as ‘f’, ‘float32’ or ‘single’ for single and ‘d’, ‘float64’ or ‘double’ for double. Double precision is an optional extension which may not be available on all devices.
Call
release()
when done with the kernel.
- __module__ = 'sasmodels.kerneldll'¶
- sasmodels.kerneldll.compile_model(source: str, output: str) None [source]¶
Compile source producing output.
Raises RuntimeError if the compile failed or the output wasn’t produced.
- sasmodels.kerneldll.decode(s)¶
- sasmodels.kerneldll.dll_name(model_file: str, dtype: dtype) str [source]¶
Name of the dll containing the model. This is the base file name without any path or extension, with a form such as ‘sas_sphere32’.
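The naming convention can be sketched with a hypothetical helper (illustrative only, not the actual dll_name() implementation, which derives the precision suffix from the numpy dtype):

```python
def dll_basename(model_name, bits):
    # Illustrates the 'sas_<model><bits>' convention described above,
    # where <bits> reflects the floating point precision of the build.
    return "sas_%s%d" % (model_name, bits)

# e.g. a single precision sphere model maps to 'sas_sphere32'
assert dll_basename("sphere", 32) == "sas_sphere32"
```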
- sasmodels.kerneldll.dll_path(model_file: str, dtype: dtype) str [source]¶
Complete path to the dll for the model. Note that the dll may not exist yet if it hasn’t been compiled.
- sasmodels.kerneldll.load_dll(source: str, model_info: ModelInfo, dtype: dtype = dtype('float64')) DllModel [source]¶
Create and load a dll corresponding to the source.
model_info is the info object returned from
modelinfo.make_model_info()
.source is returned from
generate.make_source()
, as make_source(model_info)[‘dll’]. See
make_dll()
for details on controlling the dll path and the allowed floating point precision.
- sasmodels.kerneldll.make_dll(source: str, model_info: ModelInfo, dtype: dtype = dtype('float64'), system: bool = False) str [source]¶
Returns the path to the compiled model defined by kernel_module.
If the model has not been compiled, or if the source file(s) are newer than the dll, then make_dll will compile the model before returning. This routine does not load the resulting dll.
dtype is a numpy floating point precision specifier indicating whether the model should be single, double or long double precision. The default is double precision, np.dtype(‘d’).
Set sasmodels.ALLOW_SINGLE_PRECISION_DLLS to False if single precision models are not allowed as DLLs.
Set sasmodels.kerneldll.SAS_DLL_PATH to the compiled dll output path. Alternatively, set the environment variable SAS_DLL_PATH. The default is in ~/.sasmodels/compiled_models.
system is a bool that controls whether these are the precompiled DLLs that would be shipped with a binary distribution.
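The output path can be redirected through the environment, as described above. A minimal sketch (the cache directory shown is a hypothetical choice; the documented default is ~/.sasmodels/compiled_models):

```python
import os

# Redirect compiled dll output to a custom cache directory before any
# model is built; sasmodels reads SAS_DLL_PATH from the environment.
os.environ["SAS_DLL_PATH"] = os.path.expanduser("~/sasmodels_cache")
```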
sasmodels.kernelpy module¶
Python driver for python kernels
Calls the kernel with a vector of \(q\) values for a single parameter set.
Polydispersity is supported by looping over different parameter sets and
summing the results. The interface to PyModel matches that of kernelcl.GpuModel and kerneldll.DllModel.
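The loop-and-sum scheme described above can be sketched with stand-in callables (hypothetical helpers, not the sasmodels internals; the real machinery in _loops() also normalizes by the form volume):

```python
def polydisperse(Iq, q, pairs):
    # Weighted average over (value, weight) pairs for one polydisperse
    # parameter: evaluate the kernel at each value and accumulate.
    total = sum(w * Iq(q, v) for v, w in pairs)
    norm = sum(w for _, w in pairs)
    return total / norm

Iq = lambda q, radius: radius * q      # toy kernel, not a real model
assert polydisperse(Iq, 2.0, [(1.0, 0.5), (3.0, 0.5)]) == 4.0
```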
- class sasmodels.kernelpy.PyInput(q_vectors, dtype)[source]¶
Bases:
object
Make q data available to the gpu.
q_vectors is a list of q vectors, which will be [q] for 1-D data, and [qx, qy] for 2-D data. Internally, the vectors will be reallocated to get the best performance on OpenCL, which may involve shifting and stretching the array to better match the memory architecture. Additional points will be evaluated with q=1e-3.
dtype is the data type for the q vectors. The data type should be set to match that of the kernel, which is an attribute of
PyModel
. Note that not all kernels support double precision, so even if the program was created for double precision, the GpuProgram.dtype may be single precision. Call
release()
when complete. Even if not called directly, the buffer will be released when the data object is freed.
- __module__ = 'sasmodels.kernelpy'¶
- __weakref__¶
list of weak references to the object (if defined)
- class sasmodels.kernelpy.PyKernel(model_info: ModelInfo, q_input: List[ndarray])[source]¶
Bases:
Kernel
Callable SAS kernel.
kernel is the kernel object to call.
model_info is the module information
q_input is the PyInput q vectors at which the kernel should be evaluated.
The resulting call method takes the pars, a list of values for the fixed parameters to the kernel, and pd_pars, a list of (value,weight) vectors for the polydisperse parameters. cutoff determines the integration limits: any points with combined weight less than cutoff will not be calculated.
Call
release()
when done with the kernel instance.
- __module__ = 'sasmodels.kernelpy'¶
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int) None [source]¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- class sasmodels.kernelpy.PyModel(model_info)[source]¶
Bases:
KernelModel
Wrapper for pure python models.
- __module__ = 'sasmodels.kernelpy'¶
- sasmodels.kernelpy._create_default_functions(model_info)[source]¶
Autogenerate missing functions, such as Iqxy from Iq.
This only works for Iqxy when Iq is written in python.
make_source()
performs a similar role for Iq written in C. This also vectorizes any functions that are not already marked as vectorized.
- sasmodels.kernelpy._create_vector_Iq(model_info)[source]¶
Define Iq as a vector function if it exists.
- sasmodels.kernelpy._create_vector_Iqxy(model_info)[source]¶
Define Iqxy as a vector function if it exists, or default it from Iq().
- sasmodels.kernelpy._loops(parameters: ndarray, form: Callable[[], ndarray], form_volume: Callable[[], float], form_radius: Callable[[], float], nq: int, call_details: CallDetails, values: ndarray, cutoff: float) None [source]¶
sasmodels.list_pars module¶
List all parameters used along with the models which use them.
Usage:
python -m sasmodels.list_pars [-v]
If ‘-v’ is given, then list the models containing the parameter in addition to just the parameter name.
- sasmodels.list_pars.find_pars(kind=None)[source]¶
Find all parameters in all models.
Returns the reference table {parameter: [model, model, …]}
sasmodels.mixture module¶
Mixture model¶
The product model multiplies the structure factor by the form factor, modulated by the effective radius of the form. The resulting model has the attributes of both the model description (with parameters, etc.) and the model evaluator (with call, release, etc.).
To use it, first load form factor P and structure factor S, then create ProductModel(P, S).
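A mixture combines component models by summing weighted intensities. A minimal sketch with stand-in callables (hypothetical, not the sasmodels kernels; MixtureKernel additionally manages per-part scale and background parameters):

```python
def mixture(parts, weights):
    # I_mix(q) = sum_i w_i * I_i(q), with each part a callable I(q).
    def I(q):
        return sum(w * part(q) for part, w in zip(parts, weights))
    return I

sphere_like = lambda q: 1.0 / (1.0 + q * q)   # stand-in form factor
rod_like = lambda q: 1.0 / (1.0 + q)          # stand-in form factor
Imix = mixture([sphere_like, rod_like], [0.3, 0.7])
assert abs(Imix(0.0) - 1.0) < 1e-12           # weights sum to 1 at q=0
```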
- class sasmodels.mixture.MixtureKernel(model_info: ModelInfo, kernels: List[Kernel], q: Tuple[ndarray])[source]¶
Bases:
Kernel
Instantiated kernel for mixture of models.
- Iq(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool) ndarray [source]¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\] With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __call__(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool) ndarray ¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\] With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __module__ = 'sasmodels.mixture'¶
- class sasmodels.mixture.MixtureModel(model_info: ModelInfo, parts: List[KernelModel])[source]¶
Bases:
KernelModel
Model definition for mixture of models.
- __init__(model_info: ModelInfo, parts: List[KernelModel]) None [source]¶
- __module__ = 'sasmodels.mixture'¶
- make_kernel(q_vectors: List[ndarray]) MixtureKernel [source]¶
Instantiate a kernel for evaluating the model at q_vectors.
- class sasmodels.mixture._MixtureParts(model_info: ModelInfo, kernels: List[Kernel], call_details: CallDetails, values: ndarray)[source]¶
Bases:
object
Mixture component iterator.
- __init__(model_info: ModelInfo, kernels: List[Kernel], call_details: CallDetails, values: ndarray) None [source]¶
- __iter__() _MixtureParts [source]¶
- __module__ = 'sasmodels.mixture'¶
- __next__() Tuple[List[Callable], CallDetails, ndarray] [source]¶
- __weakref__¶
list of weak references to the object (if defined)
- _part_details(info: ModelInfo, par_index: int) CallDetails [source]¶
- next() Tuple[List[Callable], CallDetails, ndarray] ¶
- sasmodels.mixture._intermediates(q: np.ndarray, results: List[Tuple[Kernel, np.ndarray, Callable | None]]) OrderedDict[str, Any] [source]¶
Returns intermediate results for mixture model.
sasmodels.model_test module¶
Run model unit tests.
Usage:
python -m sasmodels.model_test [opencl|cuda|dll|all] model1 model2 ...
If model1 is ‘all’, then all models except those remaining on the command line will be tested.
Subgroups are also possible, such as ‘py’, ‘single’ or ‘1d’. See
core.list_models()
for details.
Each model is tested using the default parameters at q=0.1, (qx, qy)=(0.1, 0.1), and Fq is called to make sure R_eff, volume and volume ratio are computed. The return values at these points are not considered. The test is only to verify that the models run to completion, and do not produce inf or NaN.
Tests are defined with the tests attribute in the model.py file. tests is a list of individual tests to run, where each test consists of the parameter values for the test, the q-values and the expected results. For the effective radius test and volume ratio tests, use the extended output form, which checks each output of kernel.Fq. For 1-D tests, either specify the q value or a list of q-values, and the corresponding I(q) value, or list of I(q) values.
That is:
tests = [
[ {parameters}, q, I(q)],
[ {parameters}, [q], [I(q)] ],
[ {parameters}, [q1, q2, ...], [I(q1), I(q2), ...]],
[ {parameters}, (qx, qy), I(qx, qy)],
[ {parameters}, [(qx1, qy1), (qx2, qy2), ...],
[I(qx1, qy1), I(qx2, qy2), ...]],
[ {parameters}, q, F(q), F^2(q), R_eff, V, V_r ],
...
]
Parameters are key:value pairs, where key is one of the parameters of the model and value is the value to use for the test. Any parameters not given in the parameter list will take on the default parameter value.
Precision defaults to 5 digits (relative).
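A concrete tests attribute for a model file might look like the following; the parameter names and expected intensities here are hypothetical placeholders for illustration, not validated results for any real model:

```python
# Hypothetical tests list following the forms described above.
tests = [
    [{"radius": 30.0}, 0.1, 0.02184],           # single q -> I(q)
    [{}, [0.01, 0.1], [257.45, 0.4371]],        # q list -> I(q) list
    [{"radius": 30.0}, (0.1, 0.1), 0.00563],    # 2-D point (qx, qy)
]
```

Parameters not listed in the dictionary take their default values, as noted above.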
- sasmodels.model_test.check_model(model_info: ModelInfo) Tuple[bool, str] [source]¶
Run the tests for a single model, capturing the output.
Returns success status and the output string.
- sasmodels.model_test.invalid_pars(partable: ParameterTable, pars: Dict[str, float]) List[str] [source]¶
Return a list of parameter names that are not part of the model.
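The check can be sketched as a simple membership filter (a hypothetical helper; the real function works against the model's ParameterTable):

```python
def invalid_pars(model_par_names, pars):
    # Return test-parameter names that the model does not define.
    return [name for name in pars if name not in model_par_names]

assert invalid_pars({"radius", "length"}, {"radius": 1.0, "colour": 2.0}) == ["colour"]
```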
- sasmodels.model_test.is_near(target: float, actual: float, digits: int = 5) bool [source]¶
Returns true if actual is within digits significant digits of target.
Target zero and inf should match actual zero and inf. If you want to accept eps for zero, choose a value such as 1e-10, which must match up to +/- 1e-15 when digits is the default value of 5.
If target is None, then just make sure that actual is not NaN.
If target is NaN, make sure actual is NaN.
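The comparison described above can be sketched as follows (an approximation of the stated semantics, not the actual implementation):

```python
import math

def is_near(target, actual, digits=5):
    # None: only require that actual is a number; NaN must match NaN;
    # zero and inf must match exactly; otherwise compare relatively.
    if target is None:
        return not math.isnan(actual)
    if math.isnan(target):
        return math.isnan(actual)
    if target == 0 or math.isinf(target):
        return actual == target
    return abs(actual - target) < abs(target) * 10.0 ** (-digits)

assert is_near(1.0, 1.000001)       # within 5 significant digits
assert not is_near(1.0, 1.001)      # off in the 3rd digit
```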
- sasmodels.model_test.main() int [source]¶
Run the tests for the given models.
Returns 0 if success or 1 if any tests fail.
- sasmodels.model_test.make_suite(loaders: List[str], models: List[str]) TestSuite [source]¶
Construct the pyunit test suite.
loaders is the list of kernel drivers to use (dll, opencl or cuda). For python models the python driver is always used.
models is the list of models to test, or [“all”] to test all models.
sasmodels.modelinfo module¶
Model Info and Parameter Tables¶
Defines ModelInfo
and ParameterTable
and the routines for
manipulating them. In particular, make_model_info()
converts a kernel
module into the model info block as seen by the rest of the sasmodels library.
- sasmodels.modelinfo.C_SYMBOLS = ['Imagnetic', 'Iq', 'Iqxy', 'Iqac', 'Iqabc', 'form_volume', 'shell_volume', 'c_code', 'valid']¶
Set of variables defined in the model that might contain C code
- sasmodels.modelinfo.MAX_PD = 5¶
Maximum number of simultaneously polydisperse parameters
- class sasmodels.modelinfo.ModelInfo[source]¶
Bases:
object
Interpret the model definition file, categorizing the parameters.
The module can be loaded with a normal python import statement if you know which module you need, or with __import__(‘sasmodels.model.’+name) if the name is in a string.
The structure should be mostly static, other than the delayed definition of Iq, Iqac and Iqabc if they need to be defined.
- Imagnetic: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- Iq: None | str | Callable[[...], np.ndarray] = None¶
Returns I(q, a, b, …) for parameters a, b, etc. defined by the parameter table. Iq can be defined as a python function, or as a C function. If it is defined in C, then set Iq to the body of the C function, including the return statement. This function takes values for q and each of the parameters as separate double values (which may be converted to float or long double by sasmodels). All source code files listed in
source
will be loaded before the Iq function is defined. If Iq is not present, then sources should define static double Iq(double q, double a, double b, …) which will return I(q, a, b, …). Multiplicity parameters are sent as pointers to doubles. Constants in floating point expressions should include the decimal point. See generate
for more details. If have_Fq is True, then Iq should return an interleaved array of \([\sum F(q_1), \sum F^2(q_1), \ldots, \sum F(q_n), \sum F^2(q_n)]\).
- Iqabc: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qa, qb, qc, a, b, …). The interface follows
Iq
.
- Iqac: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qab, qc, a, b, …). The interface follows
Iq
.
- Iqxy: None | str | Callable[[...], np.ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- __module__ = 'sasmodels.modelinfo'¶
- __weakref__¶
list of weak references to the object (if defined)
- base: ParameterTable = None¶
For reparameterized systems, base is the base parameter table. For normal systems it is simply a copy of parameters.
- basefile: str | None = None¶
Base file is usually filename, but not when a model has been reparameterized, in which case it is the file containing the original model definition. This is needed to signal an additional dependency for the model time stamp, and so that the compiler reports the correct file for syntax errors.
- c_code: str | None = None¶
inline source code, added after all elements of source
- category: str | None = None¶
Location of the model description in the documentation. This takes the form of “section” or “section:subsection”. So for example, porod uses category=”shape-independent” so it is in the Shape-Independent Functions section whereas capped_cylinder uses: category=”shape:cylinder”, which puts it in the Cylinder Functions section.
- composition: Tuple[str, List[ModelInfo]] | None = None¶
Composition is None if this is an independent model, or a tuple with the composition type (‘product’ or ‘mixture’) and a list of ModelInfo blocks for the composed objects. This allows us to rebuild a complete mixture or product model from the info block. composition is not given in the model definition file, but instead arises when the model is constructed using names such as sphere*hardsphere or cylinder+sphere.
- description: str = None¶
Long description of the model.
- docs: str = None¶
Doc string from the top of the model file. This should be formatted using ReStructuredText format, with latex markup in “.. math” environments, or in dollar signs. This will be automatically extracted to a .rst file by generate.make_doc(), then converted to HTML or PDF by Sphinx.
- filename: str | None = None¶
Full path to the file defining the kernel, if any.
- form_volume: None | str | Callable[[np.ndarray], float] = None¶
Returns the form volume for python-based models. Form volume is needed for volume normalization in the polydispersity integral. If no parameters are volume parameters, then form volume is not needed. For C-based models (with source defined, or with Iq defined using a string containing C code), form_volume must also be C code, either defined as a string or in the sources.
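For example, a python-based sphere model might define form_volume as follows (a minimal sketch, not taken from the sasmodels source):

```python
from math import pi

def form_volume(radius):
    # Volume of a sphere with the given radius; used to normalize I(q)
    # in the polydispersity integral.
    return 4.0 * pi / 3.0 * radius ** 3

# Unit sphere has volume 4*pi/3 ~ 4.18879
assert abs(form_volume(1.0) - 4.18879) < 1e-4
```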
- get_hidden_parameters(control)[source]¶
Returns the set of hidden parameters for the model. control is the value of the control parameter. Note that multiplicity models have an implicit control parameter, which is the parameter that controls the multiplicity.
- have_Fq = False¶
True if the model defines an Fq function with signature
void Fq(double q, double *F1, double *F2, ...)
- hidden: Callable[[int], Set[str]] | None = None¶
Different variants require different parameters. In order to show just the parameters needed for the variant selected, you should provide a function hidden(control) -> set([‘a’, ‘b’, …]) indicating which parameters need to be hidden. For multiplicity models, you need to use the complete name of the parameter, including its number. So for example, if variant “a” uses only sld1 and sld2, then sld3, sld4 and sld5 of multiplicity parameter sld[5] should be in the hidden set.
- id: str = None¶
Id of the kernel used to load it from the filesystem.
- lineno: Dict[str, int] = None¶
Line numbers for symbols defining C code
- name: str = None¶
Display name of the model, which defaults to the model id but with capitalization of the parts so for example core_shell defaults to “Core Shell”.
- opencl: bool = None¶
True if the model can be run as an opencl model. If for some reason the model cannot be run in opencl (e.g., because the model passes functions by reference), then set this to false.
- parameters: ParameterTable = None¶
Model parameter table. Parameters are defined using a list of parameter definitions, each of which contains the parameter name, units, default value, limits, type and description. See Parameter for details on the individual parameters. The parameters are gathered into a ParameterTable, which provides various views into the parameter list.
- profile: Callable[[np.ndarray], None] | None = None¶
Returns a model profile curve x, y. If profile is defined, this curve will appear in response to the Show button in SasView. Use profile_axes to set the axis labels. Note that y values will be scaled by 1e6 before plotting.
- profile_axes: Tuple[str, str] = None¶
Axis labels for the profile plot. The default is [‘x’, ‘y’]. Only the x component is used for now.
- radius_effective: None | Callable[[int, np.ndarray], float] = None¶
Computes the effective radius of the shape given the volume parameters. Only needed for models defined in python that can be used for monodisperse approximation for non-dilute solutions, P@S. The first argument is the integer effective radius mode, with default 0.
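A sketch of what such a function might look like for a cylinder, using a hypothetical mode numbering in which mode 1 requests the equivalent-volume sphere radius (illustrative only; actual mode meanings are defined per model):

```python
from math import pi

def radius_effective(mode, radius, length):
    # Hypothetical mode numbering for illustration:
    # mode 1 -> radius of the sphere with the same volume as the cylinder.
    if mode == 1:
        volume = pi * radius ** 2 * length
        return (3.0 * volume / (4.0 * pi)) ** (1.0 / 3.0)
    return 0.0

# A cylinder with r=1, L=4/3 has the same volume as a unit sphere.
assert abs(radius_effective(1, 1.0, 4.0 / 3.0) - 1.0) < 1e-9
```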
- radius_effective_modes: List[str] = None¶
List of options for computing the effective radius of the shape, or None if the model is not usable as a form factor model.
- random: Callable[[], Dict[str, float]] | None = None¶
Returns a random parameter set for the model
- sesans: Callable[[np.ndarray], np.ndarray] | None = None¶
Returns sesans(z, a, b, …) for models which can directly compute the SESANS correlation function. Note: not currently implemented.
- shell_volume: None | str | Callable[[np.ndarray], float] = None¶
Returns the shell volume for python-based models. Form volume and shell volume are needed for volume normalization in the polydispersity integral and structure interactions for hollow shapes. If no parameters are volume parameters, then shell volume is not needed. For C-based models (with source defined, or with Iq defined using a string containing C code), shell_volume must also be C code, either defined as a string or in the sources.
- single: bool = None¶
True if the model can be computed accurately with single precision. This is True by default, but models such as bcc_paracrystal set it to False because they require double precision calculations.
- source: List[str] = None¶
List of C source files used to define the model. The source files should define the Iq function, and possibly Iqac or Iqabc if the model defines orientation parameters. Files containing the most basic functions must appear first in the list, followed by the files that use those functions.
- structure_factor: bool = None¶
True if the model is a structure factor used to model the interaction between form factor models. This will default to False if it is not provided in the file.
- tests: List[TestCondition] = None¶
The set of tests that must pass. The format of the tests is described in model_test.
- title: str = None¶
Short description of the model.
- translation: str | None = None¶
Parameter translation code to convert the parameter table from the caller to the base table used to evaluate the model.
- valid: str = None¶
Expression which evaluates to True if the input parameters are valid and the model can be computed, or False otherwise. Invalid parameter sets will not be included in the weighted \(I(Q)\) calculation or its volume normalization. Use C syntax for the expression, with || for or, && for and, and ! for not. Any non-magnetic parameter can be used.
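A minimal sketch (not the sasmodels mechanism, which evaluates the expression inside the kernel) of checking such a C-syntax expression from python by translating the operators first:

```python
import re

def is_valid(expr, pars):
    # Translate the C boolean operators described above into python,
    # then evaluate against the parameter values.  Illustration only.
    py = expr.replace("&&", " and ").replace("||", " or ")
    py = re.sub(r"!(?!=)", " not ", py)   # ! -> not, but leave != intact
    return bool(eval(py, {"__builtins__": {}}, dict(pars)))

assert is_valid("radius > thickness && !(radius > 100)",
                {"radius": 30.0, "thickness": 10.0})
assert not is_valid("radius > thickness",
                    {"radius": 5.0, "thickness": 10.0})
```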
- class sasmodels.modelinfo.Parameter(name: str, units: str = '', default: float = nan, limits: Tuple[float, float] = (-inf, inf), ptype: str = '', description: str = '')[source]¶
Bases:
object
The available kernel parameters are defined as a list, with each parameter defined as a sublist with the following elements:
name is the name that will be displayed to the user. Names should be lower case, with words separated by underscore. If acronyms are used, the whole acronym should be upper case. For vector parameters, the name will be followed by [len] where len is an integer length of the vector, or the name of the parameter which controls the length. The attribute id will be created from name without the length.
units should be one of degrees for angles, Ang for lengths, 1e-6/Ang^2 for SLDs.
default will be the initial value for the model when it is selected, or when an initial value is not otherwise specified.
limits ([lb, ub]) are the hard limits on the parameter value, used to limit the polydispersity density function. In the fit, the parameter limits given to the fit are the limits on the central value of the parameter. If there is polydispersity, it will evaluate parameter values outside the fit limits, but not outside the hard limits specified in the model. If there are no limits, use +/-inf imported from numpy.
type indicates how the parameter will be used. “volume” parameters will be used in all functions. “orientation” parameters are not passed, but will be used to convert from qx, qy to qa, qb, qc in calls to Iqxy and Imagnetic. If type is the empty string, the parameter will be used in all of Iq, Iqxy and Imagnetic. “sld” parameters can automatically be promoted to magnetic parameters, each of which will have a magnitude and a direction, which may be different from other sld parameters. The volume parameters are used for calls to form_volume within the kernel (required for volume normalization), to shell_volume (for hollow shapes), and to radius_effective (for structure factor interactions).
description is a short description of the parameter. This will be displayed in the parameter table and used as a tool tip for the parameter value in the user interface.
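A hypothetical parameter block in this sublist form (names, units and values are illustrative only, not from any particular model):

```python
inf = float("inf")

# [name, units, default, limits, type, description] for each parameter
parameters = [
    ["radius", "Ang", 50.0, [0, inf], "volume", "Sphere radius"],
    ["sld", "1e-6/Ang^2", 1.0, [-inf, inf], "sld", "Scattering length density"],
]

name, units, default, limits, ptype, description = parameters[0]
assert ptype == "volume" and limits == [0, inf]
```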
Additional values can be set after the parameter is created:
length is the length of the field if it is a vector field
length_control is the parameter which sets the vector length
is_control is True if the parameter is a control parameter for a vector
polydisperse is true if the parameter accepts a polydispersity
relative_pd is true if that polydispersity is a portion of the value (so a 10% length dispersity would use a polydispersity value of 0.1) rather than an absolute dispersity (such as an angle plus or minus 15 degrees).
choices is the option names for a drop down list of options, as for example, might be used to set the value of a shape parameter.
Control parameters are used for variant models such as rpa which have different cases with different parameters, as well as models like spherical_sld with its user defined number of shells. The control parameter should appear in the parameter table along with the parameters it is controlling. For variant models, use [CASES] in place of the parameter limits within the parameter definition table, with case names such as:
CASES = ["diblock copolymer", "triblock copolymer", ...]
This should give limits=[[case1, case2, …]], but the model loader translates it to limits=[0, len(CASES)-1] and adds choices=CASES to the Parameter definition. Note that models can use a list of cases as a parameter without it being a control parameter. Either way, the parameter is sent to the model evaluator as float(choice_num), where choices are numbered from 0. ModelInfo.get_hidden_parameters() will determine which parameters to display.
The class constructor should not be called directly; instead the parameter table is built using make_parameter_table() and parse_parameter() therein.
- __init__(name: str, units: str = '', default: float = nan, limits: Tuple[float, float] = (-inf, inf), ptype: str = '', description: str = '') None [source]¶
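The [CASES] translation described above can be sketched in plain python (an illustration only, not the sasmodels loader):

```python
CASES = ["diblock copolymer", "triblock copolymer"]

def translate_limits(limits):
    # limits=[[case1, case2, ...]] marks a control parameter; translate it
    # to numeric limits plus a choices list, as the loader is described to do.
    if isinstance(limits[0], list):
        choices = limits[0]
        return [0, len(choices) - 1], choices
    return limits, []

limits, choices = translate_limits([CASES])
assert limits == [0, 1]
assert choices == CASES
```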
- class sasmodels.modelinfo.ParameterTable(parameters: List[Parameter])[source]¶
Bases:
object
ParameterTable manages the list of available parameters.
There are a couple of complications which mean that the list of parameters for the kernel differs from the list of parameters that the user sees.
(1) Common parameters. Scale and background are implicit to every model, but are not passed to the kernel.
(2) Vector parameters. Vector parameters are passed to the kernel as a pointer to an array, e.g., thick[], but they are seen by the user as n separate parameters thick1, thick2, …
Therefore, the parameter table is organized by how it is expected to be used. The following information is needed to set up the kernel functions:
kernel_parameters is the list of parameters in the kernel parameter table, with vector parameter p declared as p[].
iq_parameters is the list of parameters to the Iq(q, …) function, with vector parameter p sent as p[].
form_volume_parameters is the list of parameters to the form_volume(…) function, with vector parameter p sent as p[].
Problem details, which sets up the polydispersity loops, requires the following:
theta_offset is the offset of the theta parameter in the kernel parameter table, with vector parameters counted as n individual parameters p1, p2, …, or offset is -1 if there is no theta parameter.
max_pd is the maximum number of polydisperse parameters, with vector parameters counted as n individual parameters p1, p2, … Note that this number is limited to sasmodels.modelinfo.MAX_PD.
npars is the total number of parameters to the kernel, with vector parameters counted as n individual parameters p1, p2, …
common_parameters is the list of common parameters, with a unique copy for each model so that structure factors can have a default background of 0.0.
call_parameters is the complete list of parameters to the kernel, including scale and background, with vector parameters recorded as individual parameters p1, p2, …
active_1d is the set of names that may be polydisperse for 1d data
active_2d is the set of names that may be polydisperse for 2d data
User parameters are the set of parameters visible to the user, including the scale and background parameters that the kernel does not see. User parameters don’t use vector notation, and instead use p1, p2, …
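The vector expansion described above can be sketched with a hypothetical helper (not part of the ParameterTable API):

```python
def expand_vector(name, length):
    # Expand a vector parameter p of length n into the user-visible
    # names p1, p2, ...; scalar parameters (length 1) keep their name.
    if length == 1:
        return [name]
    return ["%s%d" % (name, i + 1) for i in range(length)]

assert expand_vector("thick", 3) == ["thick1", "thick2", "thick3"]
assert expand_vector("radius", 1) == ["radius"]
```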
- _get_defaults() Mapping[str, float] [source]¶
Get a list of parameter defaults from the parameters.
Expands vector parameters into parameter id+number.
- _set_vector_lengths() None [source]¶
Walk the list of kernel parameters, setting the length field of the vector parameters from the upper limit of the reference parameter.
This needs to be done once the entire parameter table is available since the reference may still be undefined when the parameter is initially created.
Returns the list of control parameter names.
Note: This modifies the underlying parameter object.
- check_angles(strict=False)[source]¶
Check that orientation angles are theta, phi and possibly psi.
strict should be True when checking a parameter table defined in a model file, but False when checking from mixture models, etc., where the parameters aren’t being passed to a calculator directly.
- set_zero_background() None [source]¶
Set the default background to zero for this model. This is done for structure factor models.
- user_parameters(pars: Dict[str, float], is2d: bool = True) List[Parameter] [source]¶
Return the list of parameters for the given data type.
Vector parameters are expanded in place. If multiple parameters share the same vector length, then the parameters will be interleaved in the result. The control parameters come first. For example, if the parameter table is ordered as:
sld_core, sld_shell[num_shells], sld_solvent, thickness[num_shells], num_shells
and pars[num_shells]=2 then the returned list will be:
num_shells, scale, background, sld_core, sld_shell1, thickness1, sld_shell2, thickness2, sld_solvent
Note that shell/thickness pairs are grouped together in the result even though they were not grouped in the incoming table. The control parameter is always returned first since the GUI will want to set it early, and rerender the table when it is changed.
Parameters marked as sld will automatically have a set of associated magnetic parameters (p_M0, p_mtheta, p_mphi), as well as polarization information (up_theta, up_phi, up_frac_i, up_frac_f).
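The interleaving of same-length vector parameters described above can be sketched as follows (hypothetical helper, not the user_parameters implementation):

```python
def interleave(vectors, n):
    # Vector parameters sharing the same control value n are zipped
    # together: v1_1, v2_1, v1_2, v2_2, ...
    out = []
    for k in range(1, n + 1):
        out.extend("%s%d" % (name, k) for name in vectors)
    return out

assert interleave(["sld_shell", "thickness"], 2) == [
    "sld_shell1", "thickness1", "sld_shell2", "thickness2"]
```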
- sasmodels.modelinfo._find_source_lines(model_info: ModelInfo, kernel_module: module) None [source]¶
Identify the location of the C source inside the model definition file.
This code runs through the source of the kernel module looking for lines that contain C code (because it is a C function definition). Clearly there are all sorts of reasons why this might not work (e.g., code commented out in a triple-quoted line block, code built using string concatenation, code defined in the branch of an ‘if’ block, code imported from another file), but it should work properly in the 95% case, and for the remainder, getting the incorrect line number will merely be inconvenient.
- sasmodels.modelinfo._insert_after(parameters, insert, remove, insert_after)[source]¶
Build new parameters from old with insertion locations specified.
parameters is the existing parameter list.
insert is list of parameters to insert.
remove is list of parameter names to remove.
insert_after specifies where to insert names, as {“old”: “new,new,…”}
- sasmodels.modelinfo._simple_insert(parameters, insert, remove)[source]¶
Build new parameters from old with insertion locations specified.
parameters is the existing parameter list.
insert is list of parameters to insert.
remove is list of parameter names to remove.
The new parameters are inserted as a block replacing the first removed parameter.
- sasmodels.modelinfo.derive_table(table: ParameterTable, insert: List[str], remove: List[Parameter], insert_after: Dict[str, str] | None = None) ParameterTable [source]¶
Create a derived parameter table.
Parameters given in insert are added to the table and parameters named in remove are deleted from the table. If insert_after is provided, then it indicates where in the new parameter table the parameters are inserted.
- sasmodels.modelinfo.expand_pars(partable: ParameterTable, pars: Mapping[str, float | List[float]] | None = None) Mapping[str, float] [source]¶
Create a parameter set from key-value pairs.
pars are the key-value pairs to use for the parameters. Any parameters not specified in pars are set from the partable defaults.
If pars references vector fields, such as thickness[n], then support different ways of assigning the parameter values, including assigning a specific value (e.g., thickness3=50.0), assigning a new value to all (e.g., thickness=50.0) or assigning values using list notation.
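The three assignment styles can be sketched independently of the sasmodels API. The helper below is hypothetical (the real expand_pars works from a ParameterTable and its defaults); it only illustrates the element, broadcast, and list notations described above:

```python
def expand_vector_pars(defaults, pars, n=3):
    """Expand key-value pairs over a hypothetical vector parameter thickness[n].

    *defaults* supplies values for unspecified parameters; *pars* may address
    one element (thickness3=50.0), all elements (thickness=50.0), or supply a
    list for the whole vector (thickness=[...]).
    """
    result = dict(defaults)
    for key, value in pars.items():
        if key == "thickness" and isinstance(value, (list, tuple)):
            # List notation: assign element by element.
            for k, v in enumerate(value, start=1):
                result["thickness%d" % k] = v
        elif key == "thickness":
            # Bare vector name: broadcast one value to every element.
            for k in range(1, n + 1):
                result["thickness%d" % k] = value
        else:
            # Specific element (or any scalar parameter): direct assignment.
            result[key] = value
    return result

defaults = {"thickness1": 10.0, "thickness2": 10.0, "thickness3": 10.0}
print(expand_vector_pars(defaults, {"thickness": 50.0}))
print(expand_vector_pars(defaults, {"thickness3": 25.0}))
```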
- sasmodels.modelinfo.make_model_info(kernel_module: module) ModelInfo [source]¶
Extract the model definition from the loaded kernel module.
Fill in default values for parts of the module that are not provided.
Note: vectorized Iq and Iqac/Iqabc functions will be created for python models when the model is first called, not when the model is loaded.
- sasmodels.modelinfo.make_parameter_table(pars: List[Tuple[str, str, float, Tuple[float, float], str, str]]) ParameterTable [source]¶
Construct a parameter table from a list of parameter definitions.
This is used by the module processor to convert the parameter block into the parameter table seen in the
ModelInfo
for the module.
- sasmodels.modelinfo.parse_parameter(name: str, units: str = '', default: float = nan, user_limits: Sequence[Any] | None = None, ptype: str = '', description: str = '') Parameter [source]¶
Parse an individual parameter from the parameter definition block.
This does type and value checking on the definition, leading to early failure in the model loading process and easier debugging.
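For reference, each entry in the parameter definition block is a [name, units, default, [min, max], type, description] list, matching the signature of parse_parameter above. A minimal illustrative block (the model and values here are made up, but the layout follows the documented convention):

```python
inf = float("inf")

# [name, units, default, [min, max], type, description]
parameters = [
    ["radius", "Ang", 50.0, [0, inf], "volume", "Sphere radius"],
    ["sld", "1e-6/Ang^2", 1.0, [-inf, inf], "sld", "Particle scattering length density"],
]

# The kind of early checking parse_parameter performs: defaults must be
# numbers lying inside the stated limits.
for name, units, default, limits, ptype, desc in parameters:
    assert isinstance(default, float), name
    assert limits[0] <= default <= limits[1], name
```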
sasmodels.multiscat module¶
Multiple scattering calculator
Calculate multiple scattering using 2D FFT convolution.
Usage:
python -m sasmodels.multiscat [options] model_name model_par=value ...
Options include:
-h, --help: show help and exit
-n, --nq: the number of mesh points (dq = qmax*window/nq)
-o, --outfile: save results to outfile.txt and outfile_powers.txt
-p, --probability: the scattering probability (0.1)
-q, --qmax: the max q that you care about (0.5)
-r, --random: generate a random parameter set
-s, --seed: generate a random parameter set with a given seed
-w, --window: the extension window (q is calculated for qmax*window)
-2, --2d: perform the calculation for an oriented pattern
Assume the probability of scattering is \(p\). After each scattering event, \(1-p\) neutrons will leave the system and go to the detector, and the remaining \(p\) will scatter again.
Let the scattering probability for \(n\) scattering events at \(q\) be \(f_n(q)\), with
\[f_1(q) = \frac{1}{\int I_1(q)\,dq} I_1(q)\]for \(I_1(q)\), the single scattering from the system. After two scattering events, the scattering probability will be the convolution of the first scattering and itself, or \(f_2(q) = (f_1*f_1)(q)\). After \(n\) events it will be \(f_n(q) = (f_1 * \cdots * f_1)(q)\). The total scattering is calculated as the weighted sum of \(f_k\), with weights following the Poisson distribution
\[P(k; \lambda) = \frac{\lambda^k e^{-\lambda}}{k!}\]for \(\lambda\) determined by the total thickness divided by the mean free path between scattering, giving
\[I(q) = \sum_{k=0}^\infty P(k; \lambda) f_k(q)\]The \(k=0\) term is ignored since it involves no scattering. We cut the series when the cumulative probability is less than the cutoff \(C=99\%\), which is \(\max n\) such that
\[\frac{1}{1 - e^{-\lambda}} \sum_{k=1}^n \frac{\lambda^k e^{-\lambda}}{k!} < C\]Using the convolution theorem, where \(F = \mathcal{F}(f)\) is the Fourier transform,
\[f * g = \mathcal{F}^{-1}\left\{ \mathcal{F}\{f\} \cdot \mathcal{F}\{g\} \right\}\]so
\[f * \cdots * f = \mathcal{F}^{-1}\left\{ F^n \right\}\]Since the Fourier transform is a linear operator, we can move the polynomial expression for the convolution into the transform, giving
\[I(q) = \mathcal{F}^{-1}\left\{ \sum_{k=1}^n P(k; \lambda) F^k \right\}\]In the dilute limit \(\lambda \rightarrow 0\) only the \(k=1\) term is active, and so
\[I(q) \approx P(1; \lambda) f_1(q)\]therefore we compute
\[I(q) = \frac{1}{P(1; \lambda)} \mathcal{F}^{-1}\left\{ \sum_{k=1}^n P(k; \lambda) F^k \right\}\]
For speed we may use the fast Fourier transform with a power of two. The resulting \(I(q)\) will be linearly spaced and likely heavily oversampled. The usual pinhole or slit resolution calculation can be performed from these calculated values.
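The transform-space sum above can be sketched with numpy's FFT. This is a simplified 1D illustration, not the sasmodels implementation; it assumes Iq is sampled on a uniform grid with adequate zero-padding already applied, so circular-convolution wraparound is ignored:

```python
import numpy as np

def multiple_scattering_1d(Iq, p, coverage=0.99):
    """Poisson-weighted sum of self-convolutions of the normalized
    single-scattering pattern, evaluated in Fourier space."""
    weight = -np.log(1.0 - p)  # lambda: expected number of scattering events
    # Poisson coefficients P(k; lambda) for k = 1..n covering *coverage*
    # of the k >= 1 probability mass (which totals 1 - exp(-lambda)).
    k, coeffs, total = 1, [], 0.0
    poisson = np.exp(-weight)  # P(0; lambda)
    while total < coverage * (1.0 - np.exp(-weight)):
        poisson *= weight / k  # advance P(k-1) -> P(k)
        coeffs.append(poisson)
        total += poisson
        k += 1
    # f_1 is the normalized single-scattering pattern.
    f1 = Iq / np.sum(Iq)
    F = np.fft.fft(f1)
    # Sum P(k; lambda) F^k, invert, and rescale by 1/P(1; lambda).
    Fsum = sum(c * F**(i + 1) for i, c in enumerate(coeffs))
    return np.fft.ifft(Fsum).real / coeffs[0]
```

In the dilute limit the function returns the normalized single-scattering pattern unchanged, which is a useful sanity check.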
By default single precision OpenCL is used for the calculation. Set the environment variable SAS_OPENCL=none to use double precision numpy FFT instead. The OpenCL version is about 10x faster on an elderly Mac with Intel HD 4000 graphics. The single precision numerical artifacts don’t seem to seriously impact overall accuracy, though they look pretty bad.
- sasmodels.multiscat.Calculator¶
alias of
NumpyCalculator
- class sasmodels.multiscat.ICalculator[source]¶
Bases:
object
Multiple scattering calculator
- __dict__ = mappingproxy({'__module__': 'sasmodels.multiscat', '__doc__': '\n Multiple scattering calculator\n ', 'fft': <function ICalculator.fft>, 'ifft': <function ICalculator.ifft>, 'multiple_scattering': <function ICalculator.multiple_scattering>, '__dict__': <attribute '__dict__' of 'ICalculator' objects>, '__weakref__': <attribute '__weakref__' of 'ICalculator' objects>, '__annotations__': {}})¶
- __doc__ = '\n Multiple scattering calculator\n '¶
- __module__ = 'sasmodels.multiscat'¶
- __weakref__¶
list of weak references to the object (if defined)
- multiple_scattering(Iq, p, coverage=0.99)[source]¶
Compute multiple scattering for I(q) given scattering probability p.
Given a probability p of scattering within the sample thickness, the expected number of scattering events is \(\lambda = -\log(1 - p)\), giving a Poisson-weighted sum of single, double, triple, etc. scattering patterns. The number of patterns used is based on coverage (default 99%).
- class sasmodels.multiscat.MultipleScattering(qmin=None, qmax=None, nq=None, window=2, probability=None, coverage=0.99, is2d=False, resolution=None, dtype=dtype('float64'))[source]¶
Bases:
Resolution
Compute multiple scattering using Fourier convolution.
The fourier steps are determined by qmax, the maximum \(q\) value desired, nq the number of \(q\) steps and window, the amount of padding around the circular convolution. The \(q\) spacing will be \(\Delta q = 2 q_\mathrm{max} w / n_q\). If nq is not given it will use \(n_q = 2^k\) such that \(\Delta q < q_\mathrm{min}\).
probability is related to the expected number of scattering events in the sample \(\lambda\) as \(p = 1 - e^{-\lambda}\).
coverage determines how many scattering steps to consider. The default is 0.99, which sets \(n\) such that \(1 \ldots n\) covers 99% of the Poisson probability mass function.
If is2d is True, then 2D scattering is used; otherwise it accepts and returns 1D scattering.
resolution is the resolution function to apply after multiple scattering. If present, then the resolution \(q\) vectors will provide default values for qmin, qmax and nq.
- __doc__ = '\n Compute multiple scattering using Fourier convolution.\n\n The fourier steps are determined by *qmax*, the maximum $q$ value\n desired, *nq* the number of $q$ steps and *window*, the amount\n of padding around the circular convolution. The $q$ spacing\n will be $\\Delta q = 2 q_\\mathrm{max} w / n_q$. If *nq* is not\n given it will use $n_q = 2^k$ such that $\\Delta q < q_\\mathrm{min}$.\n\n *probability* is related to the expected number of scattering\n events in the sample $\\lambda$ as $p = 1 - e^{-\\lambda}$.\n\n *coverage* determines how many scattering steps to consider. The\n default is 0.99, which sets $n$ such that $1 \\ldots n$ covers 99%\n of the Poisson probability mass function.\n\n *is2d* is True then 2D scattering is used, otherwise it accepts\n and returns 1D scattering.\n\n *resolution* is the resolution function to apply after multiple\n scattering. If present, then the resolution $q$ vectors will provide\n default values for *qmin*, *qmax* and *nq*.\n '¶
- __init__(qmin=None, qmax=None, nq=None, window=2, probability=None, coverage=0.99, is2d=False, resolution=None, dtype=dtype('float64'))[source]¶
- __module__ = 'sasmodels.multiscat'¶
- class sasmodels.multiscat.NumpyCalculator(dims=None, dtype=dtype('float64'))[source]¶
Bases:
ICalculator
Multiple scattering calculator using numpy fft.
- __doc__ = '\n Multiple scattering calculator using numpy fft.\n '¶
- __module__ = 'sasmodels.multiscat'¶
- multiple_scattering(Iq, p, coverage=0.99)[source]¶
Compute multiple scattering for I(q) given scattering probability p.
Given a probability p of scattering within the sample thickness, the expected number of scattering events is \(\lambda = -\log(1 - p)\), giving a Poisson-weighted sum of single, double, triple, etc. scattering patterns. The number of patterns used is based on coverage (default 99%).
- class sasmodels.multiscat.OpenclCalculator(dims, dtype=dtype('float64'))[source]¶
Bases:
ICalculator
Multiple scattering calculator using OpenCL via pyfft.
- __doc__ = '\n Multiple scattering calculator using OpenCL via pyfft.\n '¶
- __module__ = 'sasmodels.multiscat'¶
- multiple_scattering(Iq, p, coverage=0.99)[source]¶
Compute multiple scattering for I(q) given scattering probability p.
Given a probability p of scattering within the sample thickness, the expected number of scattering events is \(\lambda = -\log(1 - p)\), giving a Poisson-weighted sum of single, double, triple, etc. scattering patterns. The number of patterns used is based on coverage (default 99%).
- polyval1d = None¶
- polyval1f = None¶
- sasmodels.multiscat.annular_average(qxy, Iqxy, qbins)[source]¶
Compute annular average of points in Iqxy at qbins. The \(q_x\), \(q_y\) coordinates for Iqxy are given in qxy.
- sasmodels.multiscat.parse_pars(model: ModelInfo, opts: Namespace) Dict[str, float] [source]¶
Parse par=val arguments from the command line.
- sasmodels.multiscat.rebin(x, I, xo)[source]¶
Rebin from edges x, bins I into edges xo.
x and xo should be monotonically increasing.
If x has duplicate values, then all corresponding values at I(x) will be effectively summed into the same bin. If xo has duplicate values, the first bin will contain the entire contents and subsequent bins will contain zeros.
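A conservative rebinning with this flavor of behavior can be sketched using cumulative sums. This is a sketch, not the sasmodels implementation: total content is conserved and input bins are split pro rata across overlapping output bins:

```python
import numpy as np

def rebin_conserve(x, I, xo):
    """Redistribute bin contents *I* on edges *x* onto edges *xo*,
    conserving the total and splitting bins by fractional overlap."""
    x, xo = np.asarray(x, 'd'), np.asarray(xo, 'd')
    # Cumulative content up to each input edge; linear within a bin.
    cum = np.concatenate(([0.0], np.cumsum(I)))
    # Content below each output edge, then difference to recover bins.
    cum_at_xo = np.interp(xo, x, cum)
    return np.diff(cum_at_xo)

x = [0.0, 1.0, 2.0, 3.0]
I = [4.0, 2.0, 6.0]
print(rebin_conserve(x, I, [0.0, 1.5, 3.0]))  # [5. 7.]
```

Here the bin [1, 2) contributes half its content (1.0) to each output bin, so [4+1, 1+6] = [5, 7] and the total of 12 is conserved.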
- sasmodels.multiscat.scattering_coeffs(p, coverage=0.99)[source]¶
Return the coefficients of the scattering powers for transmission probability p. This is just the corresponding values for the Poisson distribution for \(\lambda = -\ln(1-p)\) such that \(\sum_{k = 0 \ldots n} P(k; \lambda)\) is larger than coverage.
sasmodels.product module¶
Product model¶
The product model multiplies the structure factor by the form factor, modulated by the effective radius of the form. The resulting model has the attributes of both the model description (with parameters, etc.) and the module evaluator (with call, release, etc.).
To use it, first load form factor P and structure factor S, then call make_product_info(P, S).
The P@S model is somewhat complicated because there are many special parameters that need to be handled in particular ways. Much of the code is used to figure out what special parameters we have, where to find them in the P@S model inputs and how to distribute them to the underlying P and S model calculators.
The parameter packet received by the P@S is a details.CallDetails
structure along with a data vector. The CallDetails structure indicates which
parameters are polydisperse, the length of the distribution, and where to
find it in the data vector. The distributions are ordered from longest to
shortest, with length 1 distributions filling out the distribution set. That
way the kernel calculator doesn’t have to check if it needs another nesting
level since it is always there. The data vector consists of a list of target
values for the parameters, followed by a concatenation of the distribution
values, and then followed by a concatenation of the distribution weights.
Given the combined details and data for P@S, we must decompose them into details for P and details for S separately, which unfortunately requires intimate knowledge of the data structures and tricky code.
The special parameters are:
- scale and background:
First two parameters of the value list in each of P, S and P@S. When decomposing P@S parameters, ignore scale and background, instead using 1 and 0 for the first two slots of both P and S. After calling P and S individually, the results are combined as volfraction*scale*P*S + background. The scale and background do not show up in the polydispersity structure so they are easy to handle.
- volfraction:
Always the first parameter of S, but it may also be in P. If it is in P, then P.volfraction is used in the combined P@S list, and S.volfraction is elided, otherwise S.volfraction is used. If we are using volfraction from P we can treat it like all the other P parameters when calling P, but when calling S we need to insert the P.volfraction into the data vector for S and assign a slot of length 1 in the distribution. Because we are using the original layout of the distribution vectors from P@S, but copying it into private data vectors for S and P, we are free to “borrow” a P slot to store the missing S.volfraction distribution. We use the P.volfraction slot itself but any slot will work.
For hollow shapes, volfraction represents the volume fraction of material but S needs the volume fraction enclosed by the shape. The answer is to scale the user specified volume fraction by the form:shell ratio computed from the average form volume and average shell volume returned from P. Use the original volfraction divided by shell_volume to compute the number density, and scale P@S by that to get absolute scaling on the final I(q). The scale for P@S should therefore usually be one.
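The hollow-shape scaling can be sketched numerically. The values below are purely illustrative; form_volume and shell_volume stand in for the averages returned by the P calculator:

```python
# Hypothetical averages returned by the form factor P for a hollow shape.
form_volume = 4000.0    # average volume enclosed by the shape (Ang^3)
shell_volume = 1000.0   # average volume of material in the shell (Ang^3)
volfraction = 0.05      # user value: volume fraction of *material*

# S needs the volume fraction enclosed by the shape, so scale the user
# value by the form:shell ratio.
volfraction_S = volfraction * form_volume / shell_volume

# Number density used for absolute scaling of I(q); scale for P@S
# should then usually remain one.
number_density = volfraction / shell_volume
```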
- radius_effective:
Always the second parameter of S and always part of P@S, but never in P. The value may be calculated using P.radius_effective() or it may be set to the radius_effective value in P@S, depending on radius_effective_mode. If part of S, the value may be polydisperse. If calculated by P, then it will be the weighted average of effective radii computed for the polydisperse shape parameters.
- structure_factor_mode
If P@S supports beta approximation (i.e., if it has the Fq function that returns <FF*> and <F><F*>), then structure_factor_mode will be added to the P@S parameters right after the S parameters. This mode may be 0 for the monodisperse approximation or 1 for the beta approximation. We will add more values here as we implement more complicated operations, but for now P and S must be computed separately. If beta, then we return I = scale volfrac/volume ( <FF> + <F>^2 (S-1) ) + background. If not beta then return I = scale/volume P S + background. In both cases, return the appropriate immediate values.
- radius_effective_mode
If P defines the radius_effective function (and therefore P.info.radius_effective_modes is a list of effective radius modes), then radius_effective_mode will be the final parameter in P@S. Mode will be zero if radius_effective is defined by the user using the S parameter; any other value and the radius_effective parameter will be filled in from the value computed in P. In the latter case, the polydispersity information for S.radius_effective will need to be suppressed, with pd length set to 1, the first value set to the effective radius and the first weight set to 1. Do this after composing the S data vector so the inputs are left untouched.
- regular parameters
The regular P parameters form a block of length P.info.npars at the start of the data vector (after scale and background). These will be followed by S.effective_radius, and S.volfraction (if P.volfraction is absent), and then the regular S parameters. The P and S blocks can be copied as a group into the respective P and S data vectors. We can copy the distribution value and weight vectors untouched to both the P and S data vectors since they are referenced by offset and length. We can update the radius_effective slots in the P data vector with P.radius_effective() if needed.
- magnetic parameters
For each P parameter that is an SLD there will be a set of three magnetic parameters tacked on to P@S after the regular P and S and after the special structure_factor_mode and radius_effective_mode. These can be copied as a group after the regular P parameters. There won’t be any magnetic S parameters.
- class sasmodels.product.ProductKernel(model_info: ModelInfo, p_kernel: Kernel, s_kernel: Kernel, q: Tuple[ndarray])[source]¶
Bases:
Kernel
Instantiated kernel for product model.
- Iq(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool) ndarray [source]¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\]With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __call__(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool) ndarray ¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\]With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __doc__ = '\n Instantiated kernel for product model.\n '¶
- __init__(model_info: ModelInfo, p_kernel: Kernel, s_kernel: Kernel, q: Tuple[ndarray]) None [source]¶
- __module__ = 'sasmodels.product'¶
- class sasmodels.product.ProductModel(model_info: ModelInfo, P: KernelModel, S: KernelModel)[source]¶
Bases:
KernelModel
Model definition for product model.
- P: KernelModel = None¶
Form factor modelling individual particles.
- S: KernelModel = None¶
Structure factor modelling interaction between particles.
- __doc__ = '\n Model definition for product model.\n '¶
- __init__(model_info: ModelInfo, P: KernelModel, S: KernelModel) None [source]¶
- __module__ = 'sasmodels.product'¶
- dtype: dtype = None¶
Model precision. This is not really relevant, since it is the individual P and S models that control the effective dtype, converting the q-vectors to the correct type when the kernels for each are created. Ideally this should be set to the more precise type to avoid loss of precision, but precision in q is not critical (single is good enough for our purposes), so it just uses the precision of the form factor.
- sasmodels.product._intermediates(Q: np.ndarray, F: np.ndarray, Fsq: np.ndarray, S: np.ndarray, scale: float, volume: float, volume_ratio: float, radius_effective: float, beta_mode: bool, P_intermediate: Optional[Callable[[], Parts]]) Parts [source]¶
Returns intermediate results for beta approximation-enabled product. The result may be an array or a float.
- sasmodels.product._tag_parameter(par)[source]¶
Tag the parameter name with _S to indicate that the parameter comes from the structure factor parameter set. This is only necessary if the form factor model includes a parameter of the same name as a parameter in the structure factor.
sasmodels.resolution module¶
Define the resolution functions for the data.
This defines classes for 1D and 2D resolution calculations.
- class sasmodels.resolution.IgorComparisonTest(methodName='runTest')[source]¶
Bases:
TestCase
Test resolution calculations against those returned by Igor.
- __doc__ = '\n Test resolution calculations against those returned by Igor.\n '¶
- __module__ = 'sasmodels.resolution'¶
- test_pinhole_romberg()[source]¶
Compare pinhole resolution smearing with romberg integration result.
- class sasmodels.resolution.Perfect1D(q)[source]¶
Bases:
Resolution
Resolution function to use when there is no actual resolution smearing to be applied. It has the same interface as the other resolution functions, but returns the identity function.
- __doc__ = '\n Resolution function to use when there is no actual resolution smearing\n to be applied. It has the same interface as the other resolution\n functions, but returns the identity function.\n '¶
- __module__ = 'sasmodels.resolution'¶
- class sasmodels.resolution.Pinhole1D(q, q_width, q_calc=None, nsigma=(2.5, 3.0))[source]¶
Bases:
Resolution
Pinhole aperture with q-dependent gaussian resolution.
q points at which the data is measured.
q_width gaussian 1-sigma resolution at each data point.
q_calc is the list of points to calculate, or None if this should be estimated from the q and q_width.
nsigma is the width of the resolution function. Should be 2.5. See
pinhole_resolution()
for details.
- __doc__ = '\n Pinhole aperture with q-dependent gaussian resolution.\n\n *q* points at which the data is measured.\n\n *q_width* gaussian 1-sigma resolution at each data point.\n\n *q_calc* is the list of points to calculate, or None if this should\n be estimated from the *q* and *q_width*.\n\n *nsigma* is the width of the resolution function. Should be 2.5.\n See :func:`pinhole_resolution` for details.\n '¶
- __module__ = 'sasmodels.resolution'¶
- class sasmodels.resolution.Resolution[source]¶
Bases:
object
Abstract base class defining a 1D resolution function.
q is the set of q values at which the data is measured.
q_calc is the set of q values at which the theory needs to be evaluated. This may extend and interpolate the q values.
apply is the method to call with I(q_calc) to compute the resolution smeared theory I(q).
- __annotations__ = {'q': 'np.ndarray', 'q_calc': 'np.ndarray'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.resolution', '__doc__': '\n Abstract base class defining a 1D resolution function.\n\n *q* is the set of q values at which the data is measured.\n\n *q_calc* is the set of q values at which the theory needs to be evaluated.\n This may extend and interpolate the q values.\n\n *apply* is the method to call with I(q_calc) to compute the resolution\n smeared theory I(q).\n ', 'q': None, 'q_calc': None, 'apply': <function Resolution.apply>, '__dict__': <attribute '__dict__' of 'Resolution' objects>, '__weakref__': <attribute '__weakref__' of 'Resolution' objects>, '__annotations__': {'q': 'np.ndarray', 'q_calc': 'np.ndarray'}})¶
- __doc__ = '\n Abstract base class defining a 1D resolution function.\n\n *q* is the set of q values at which the data is measured.\n\n *q_calc* is the set of q values at which the theory needs to be evaluated.\n This may extend and interpolate the q values.\n\n *apply* is the method to call with I(q_calc) to compute the resolution\n smeared theory I(q).\n '¶
- __module__ = 'sasmodels.resolution'¶
- __weakref__¶
list of weak references to the object (if defined)
- q: ndarray = None¶
- q_calc: ndarray = None¶
- class sasmodels.resolution.ResolutionTest(methodName='runTest')[source]¶
Bases:
TestCase
Test the resolution calculations.
- __doc__ = '\n Test the resolution calculations.\n '¶
- __module__ = 'sasmodels.resolution'¶
- class sasmodels.resolution.Slit1D(q, qx_width, qy_width=0.0, q_calc=None)[source]¶
Bases:
Resolution
Slit aperture with resolution function.
q points at which the data is measured.
qx_width slit width in qx
qy_width slit height in qy
q_calc is the list of points to calculate, or None if this should be estimated from the q and q_width.
The weight_matrix is computed by
slit_resolution()
- __doc__ = '\n Slit aperture with resolution function.\n\n *q* points at which the data is measured.\n\n *qx_width* slit width in qx\n\n *qy_width* slit height in qy\n\n *q_calc* is the list of points to calculate, or None if this should\n be estimated from the *q* and *q_width*.\n\n The *weight_matrix* is computed by :func:`slit_resolution`\n '¶
- __module__ = 'sasmodels.resolution'¶
- sasmodels.resolution.apply_resolution_matrix(weight_matrix, theory)[source]¶
Apply the resolution weight matrix to the computed theory function.
- sasmodels.resolution.bin_edges(x)[source]¶
Determine bin edges from bin centers, assuming that edges are centered between the bins.
Note: this uses the arithmetic mean, which may not be appropriate for log-scaled data.
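The construction can be sketched in a few lines (a simplified stand-in for the sasmodels function): interior edges are the pairwise averages of the centers, and the two end edges mirror the adjacent half-widths.

```python
import numpy as np

def bin_edges_from_centers(x):
    """Edges midway between centers, extrapolating the two end edges.

    Uses the arithmetic mean, as noted above; log-scaled data would
    want geometric means instead.
    """
    x = np.asarray(x, 'd')
    mid = (x[:-1] + x[1:]) / 2.0
    first = x[0] - (mid[0] - x[0])    # mirror the first half-width
    last = x[-1] + (x[-1] - mid[-1])  # mirror the last half-width
    return np.concatenate(([first], mid, [last]))

print(bin_edges_from_centers([1.0, 2.0, 4.0]))  # [0.5 1.5 3.  5. ]
```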
- sasmodels.resolution.eval_form(q, form, pars)[source]¶
Return the SAS model evaluated at q.
form is the SAS model returned from core.load_model().
pars are the parameter values to use when evaluating.
- sasmodels.resolution.gaussian(q, q0, dq, nsigma=2.5)[source]¶
Return the truncated Gaussian resolution function.
q0 is the center, dq is the width and q are the points to evaluate.
- sasmodels.resolution.geometric_extrapolation(q, q_min, q_max, points_per_decade=None)[source]¶
Extrapolate q to [q_min, q_max] using geometric steps, with the average geometric step size in q as the step size.
If q_min is zero or less, then q[0]/10 is used instead.
points_per_decade sets the ratio between consecutive steps such that there will be \(n\) points used for every factor of 10 increase in q.
If points_per_decade is not given, it will be estimated as follows. Starting at \(q_1\) and stepping geometrically by \(\Delta q\) to \(q_n\) in \(n\) points gives a geometric average of:
\[\log \Delta q = (\log q_n - \log q_1) / (n - 1)\]From this we can compute the number of steps required to extend \(q\) from \(q_n\) to \(q_\text{max}\) by \(\Delta q\) as:
\[n_\text{extend} = (\log q_\text{max} - \log q_n) / \log \Delta q\]Substituting:
\[n_\text{extend} = (n-1) (\log q_\text{max} - \log q_n) / (\log q_n - \log q_1)\]
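The high-q half of this calculation can be sketched directly from the formulas above (a simplified illustration; the real function also handles the low-q extension and the points_per_decade override):

```python
import numpy as np

def geometric_extend_above(q, q_max):
    """Extend *q* above its range to *q_max* using the average
    geometric step of *q*."""
    q = np.asarray(q, 'd')
    n = len(q)
    # log(Delta q) = (log q_n - log q_1) / (n - 1)
    log_delta = (np.log(q[-1]) - np.log(q[0])) / (n - 1)
    # n_extend = (log q_max - log q_n) / log(Delta q), rounded up
    n_extend = int(np.ceil((np.log(q_max) - np.log(q[-1])) / log_delta))
    extra = q[-1] * np.exp(log_delta * np.arange(1, n_extend + 1))
    return np.concatenate((q, extra))

q = np.logspace(-2, -1, 11)  # 10 points per decade from 0.01 to 0.1
print(geometric_extend_above(q, 0.5))
```

With 10 points per decade, extending from 0.1 to 0.5 requires ceil(10·log10(5)) = 7 additional points, the last landing just past q_max.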
- sasmodels.resolution.interpolate(q, max_step)[source]¶
Returns q_calc with points spaced at most max_step apart.
- sasmodels.resolution.linear_extrapolation(q, q_min, q_max)[source]¶
Extrapolate q out to [q_min, q_max] using the step size in q as a guide. Extrapolation below uses about the same size as the first interval. Extrapolation above uses about the same size as the final interval.
Note that extrapolated values may be negative.
- sasmodels.resolution.main()[source]¶
Run tests given in sys.argv.
Returns 0 if success or 1 if any tests fail.
- sasmodels.resolution.pinhole_extend_q(q, q_width, nsigma=(2.5, 3.0))[source]¶
Given q and q_width, find a set of sampling points q_calc so that each point \(I(q)\) has sufficient support from the underlying function.
- sasmodels.resolution.pinhole_resolution(q_calc, q, q_width, nsigma=(2.5, 3.0))[source]¶
Compute the convolution matrix W for pinhole resolution 1-D data.
Each row W[i] determines the normalized weight that the corresponding points q_calc contribute to the resolution smeared point q[i]. Given W, the resolution smearing can be computed using dot(W,q).
Note that resolution is limited to \(\pm 2.5 \sigma\).[1] The true resolution function is a broadened triangle, and does not extend over the entire range \((-\infty, +\infty)\). It is important to impose this limitation since some models fall so steeply that the weighted value in gaussian tails would otherwise dominate the integral.
q_calc must be increasing. q_width must be greater than zero.
[1] Barker, J. G., and J. S. Pedersen. 1995. Instrumental Smearing Effects in Radially Symmetric Small-Angle Neutron Scattering by Numerical and Analytical Methods. Journal of Applied Crystallography 28 (2): 105–14. https://doi.org/10.1107/S0021889894010095.
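The shape of W can be sketched with truncated Gaussians (a simplified illustration: the sasmodels version integrates the Gaussian over bin edges, whereas point weights are used here for brevity):

```python
import numpy as np

def pinhole_weights(q_calc, q, q_width, nsigma=2.5):
    """Truncated-Gaussian weight matrix W so that I(q) ~= W @ I(q_calc).

    Each row is a Gaussian centered on q[i] with 1-sigma width
    q_width[i], cut at +/- nsigma sigma and renormalized to sum to 1.
    """
    q_calc = np.asarray(q_calc, 'd')
    W = np.zeros((len(q), len(q_calc)))
    for i, (qi, dqi) in enumerate(zip(q, q_width)):
        z = (q_calc - qi) / dqi
        row = np.exp(-0.5 * z**2) * (np.abs(z) <= nsigma)
        W[i] = row / row.sum()  # normalized weights for this data point
    return W

q_calc = np.linspace(0.001, 0.2, 400)
W = pinhole_weights(q_calc, q=[0.05, 0.1], q_width=[0.005, 0.01])
assert np.allclose(W.sum(axis=1), 1.0)
```

Because rows are normalized, smearing a constant theory function returns that constant, which is a quick correctness check for any resolution matrix.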
- sasmodels.resolution.romberg_pinhole_1d(q, q_width, form, pars, nsigma=2.5)[source]¶
Romberg integration for pinhole resolution.
This is an adaptive integration technique. It is called with settings that make it slow to evaluate but give it good accuracy.
- sasmodels.resolution.romberg_slit_1d(q, width, height, form, pars)[source]¶
Romberg integration for slit resolution.
This is an adaptive integration technique. It is called with settings that make it slow to evaluate but give it good accuracy.
- sasmodels.resolution.slit_extend_q(q, width, height)[source]¶
Given q, width and height, find a set of sampling points q_calc so that each point I(q) has sufficient support from the underlying function.
- sasmodels.resolution.slit_resolution(q_calc, q, width, height, n_height=30)[source]¶
Build a weight matrix to compute I_s(q) from I(q_calc), given \(q_\perp\) = width and \(q_\parallel\) = height. n_height is the number of steps to use in the integration over \(q_\parallel\) when both \(q_\perp\) and \(q_\parallel\) are non-zero.
Each \(q\) can have an independent width and height value even though current instruments use the same slit setting for all measured points.
If slit height is large relative to width, use:
\[I_s(q_i) = \frac{1}{\Delta q_\perp} \int_0^{\Delta q_\perp} I\left(\sqrt{q_i^2 + q_\perp^2}\right) \,dq_\perp\]If slit width is large relative to height, use:
\[I_s(q_i) = \frac{1}{2 \Delta q_\parallel} \int_{-\Delta q_\parallel}^{\Delta q_\parallel} I\left(|q_i + q_\parallel|\right) \,dq_\parallel\]For a mixture of slit width and height use:
\[I_s(q_i) = \frac{1}{2 \Delta q_\parallel \Delta q_\perp} \int_{-\Delta q_\parallel}^{\Delta q_\parallel} \int_0^{\Delta q_\perp} I\left(\sqrt{(q_i + q_\parallel)^2 + q_\perp^2}\right) \,dq_\perp dq_\parallel\]Definition
We are using the mid-point integration rule to assign weights to each element of a weight matrix \(W\) so that
\[I_s(q) = W\,I(q_\text{calc})\]If q_calc is at the mid-point, we can infer the bin edges from the pairwise averages of q_calc, adding the missing edges before q_calc[0] and after q_calc[-1].
For \(q_\parallel = 0\), the smeared value can be computed numerically using the \(u\) substitution
\[u_j = \sqrt{q_j^2 - q^2}\]This gives
\[I_s(q) \approx \sum_j I(u_j) \Delta u_j\]where \(I(u_j)\) is the value at the mid-point, and \(\Delta u_j\) is the difference between consecutive edges which have been first converted to \(u\). Only \(u_j \in [0, \Delta q_\perp]\) are used, which corresponds to \(q_j \in \left[q, \sqrt{q^2 + \Delta q_\perp^2}\right]\), so
\[W_{ij} = \frac{1}{\Delta q_\perp} \Delta u_j = \frac{1}{\Delta q_\perp} \left( \sqrt{q_{j+1}^2 - q_i^2} - \sqrt{q_j^2 - q_i^2} \right) \ \text{if}\ q_j \in \left[q_i, \sqrt{q_i^2 + q_\perp^2}\right]\]where \(I_s(q_i)\) is the theory function being computed and \(q_j\) are the mid-points between the calculated values in q_calc. We tweak the edges of the initial and final intervals so that they lie on integration limits.
(To be precise, the transformed midpoint \(u(q_j)\) is not necessarily the midpoint of the edges \(u((q_{j-1}+q_j)/2)\) and \(u((q_j + q_{j+1})/2)\), but it is at least in the interval, so the approximation is going to be a little better than the left or right Riemann sum, and should be good enough for our purposes.)
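The \(q_\parallel = 0\) weights can be sketched directly from this substitution (a simplified illustration of one row of W; the edge handling in sasmodels is more careful):

```python
import numpy as np

def slit_perp_weights(q_edges, qi, dq_perp):
    """Row of the slit weight matrix for q_parallel = 0, using
    W_j = (sqrt(q_{j+1}^2 - q_i^2) - sqrt(q_j^2 - q_i^2)) / dq_perp
    for bins inside [q_i, sqrt(q_i^2 + dq_perp^2)].  Bins outside the
    window are clipped to the integration limits, so they get weight 0.
    """
    hi = np.sqrt(qi**2 + dq_perp**2)
    q_edges = np.clip(np.asarray(q_edges, 'd'), qi, hi)
    u = np.sqrt(q_edges**2 - qi**2)  # u substitution on the bin edges
    return np.diff(u) / dq_perp

edges = np.linspace(0.0, 0.2, 101)
w = slit_perp_weights(edges, qi=0.05, dq_perp=0.1)
assert np.isclose(w.sum(), 1.0)  # weights cover the full window
```

Since u runs from 0 to \(\Delta q_\perp\) across the window, the weights in a row always sum to one when the calculation grid spans the window.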
For \(q_\perp = 0\), the \(u\) substitution is simpler:
\[u_j = \left|q_j - q\right|\]so
\[W_{ij} = \frac{1}{2 \Delta q_\parallel} \Delta u_j = \frac{1}{2 \Delta q_\parallel} (q_{j+1} - q_j) \ \text{if}\ q_j \in \left[q-\Delta q_\parallel, q+\Delta q_\parallel\right]\]However, we need to support cases where \(u_j < 0\), which means using \(2 (q_{j+1} - q_j)\) when \(q_j \in \left[0, q_\parallel-q_i\right]\). This is not an issue for \(q_i > q_\parallel\).
For both \(q_\perp > 0\) and \(q_\parallel > 0\) we perform a 2 dimensional integration with
\[u_{jk} = \sqrt{q_j^2 - (q + (k\Delta q_\parallel/L))^2} \ \text{for}\ k = -L \ldots L\]for \(L\) = n_height. This gives
\[W_{ij} = \frac{1}{2 \Delta q_\perp q_\parallel} \sum_{k=-L}^L \Delta u_{jk} \left(\frac{\Delta q_\parallel}{2 L + 1}\right)\]
sasmodels.resolution2d module¶
This software was developed by the University of Tennessee as part of the Distributed Data Analysis of Neutron Scattering Experiments (DANSE) project funded by the US National Science Foundation. See the license text in license.txt.
- class sasmodels.resolution2d.Pinhole2D(data=None, index=None, nsigma=3.0, accuracy='Low', coords='polar')[source]¶
Bases:
Resolution
Gaussian Q smearing class for SAS 2d data
- __doc__ = '\n Gaussian Q smearing class for SAS 2d data\n '¶
- __init__(data=None, index=None, nsigma=3.0, accuracy='Low', coords='polar')[source]¶
Assumption: equally spaced bins in dq_r, dq_phi space.
- Parameters:
data – 2d data used to set the smearing parameters
index – 1d array with len(data) to define the range of the calculation: elements are given as True or False
nr – number of bins in dq_r-axis
nphi – number of bins in dq_phi-axis
coord – coordinates [string], ‘polar’ or ‘cartesian’
- __module__ = 'sasmodels.resolution2d'¶
- _calc_res()[source]¶
Oversample by r_nbins times phi_nbins, calculate Gaussian weights, then find the smeared intensity.
- class sasmodels.resolution2d.Slit2D(q, qx_width, qy_width=0.0, q_calc=None, accuracy='low')[source]¶
Bases:
Resolution
Slit aperture with resolution function on an oriented sample.
q points at which the data is measured.
qx_width slit width in qx
qy_width slit height in qy; current implementation requires a fixed qy_width for all q points.
q_calc is the list of q points to calculate, or None if this should be estimated from the q and qx_width.
accuracy determines the number of qy points to compute for each q. The values are stored in sasmodels.resolution2d.N_SLIT_PERP. The default values are: low=101, med=401, high=1001, xhigh=2001
- __module__ = 'sasmodels.resolution2d'¶
sasmodels.rst2html module¶
Convert a restructured text document to html.
Inline math markup can use the math directive, or it can use latex style $expression$. Math can be rendered using simple html and unicode, or with mathjax.
- sasmodels.rst2html.can_use_qt() bool [source]¶
Return True if QWebView exists.
Checks PyQt5 first, then PyQt4.
- sasmodels.rst2html.load_rst_as_html(filename: str) str [source]¶
Load rst from file and convert to html
- sasmodels.rst2html.replace_compact_fraction(content)[source]¶
Convert \frac12 to \frac{1}{2} for broken latex parsers
- sasmodels.rst2html.replace_dollar(content)[source]¶
Convert dollar signs to inline math markup in rst.
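A minimal sketch of the substitution this performs (an illustrative reimplementation, not the sasmodels code, which also handles escaped dollars and other edge cases):

```python
import re

def dollar_to_math(content):
    # Replace $expr$ with the rst inline :math:`expr` role.
    # Illustrative only: ignores \$ escapes and multi-line expressions.
    return re.sub(r"\$([^$\n]+)\$", r":math:`\1`", content)

print(dollar_to_math("the value of $x^2$ grows"))
# the value of :math:`x^2` grows
```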
- sasmodels.rst2html.rst2html(rst, part='whole', math_output='mathjax')[source]¶
Convert restructured text into simple html.
Valid math_output formats for formulas include html, mathml and mathjax. See http://docutils.sourceforge.net/docs/user/config.html#math-output for details.
The following part choices are available: whole (the entire html document), html_body (document division with title, contents and footer), and body (contents only).
There are other parts, but they don’t make sense alone:
subtitle, version, encoding, html_prolog, header, meta, html_title, title, stylesheet, html_subtitle, html_body, body, head, body_suffix, fragment, docinfo, html_head, head_prefix, body_prefix, footer, body_pre_docinfo, whole
- sasmodels.rst2html.suppress_html_errors()[source]¶
Context manager for keeping error reports out of the generated HTML.
Within the context, system message nodes in the docutils parse tree will be ignored. After the context, the usual behaviour will be restored.
- sasmodels.rst2html.test_dollar()[source]¶
Test substitution of dollar signs with equivalent RST math markup
- sasmodels.rst2html.view_help(filename: str, qt: bool = False) None [source]¶
View rst or html file. If qt is true, use the Qt viewer, otherwise use wx.
- sasmodels.rst2html.view_html(html: str, url: str = '') None ¶
HTML viewer app in Qt
sasmodels.sasview_model module¶
Sasview model constructor.
Given a module defining an OpenCL kernel such as sasmodels.models.cylinder, create a sasview model class to run that kernel as follows:
from sasmodels.sasview_model import load_custom_model
CylinderModel = load_custom_model('sasmodels/models/cylinder.py')
- sasmodels.sasview_model.MODELS: Dict[str, Callable[[int], SasviewModel]] = {}¶
set of defined models (standard and custom)
- sasmodels.sasview_model.MODEL_BY_PATH: Dict[str, Callable[[int], SasviewModel]] = {}¶
custom model {path: model} mapping so we can check timestamps
- sasmodels.sasview_model.MultiplicationModel(form_factor: SasviewModel, structure_factor: SasviewModel) SasviewModel [source]¶
Returns a constructed product model from form_factor and structure_factor.
- class sasmodels.sasview_model.MultiplicityInfo(number, control, choices, x_axis_label)¶
Bases:
tuple
- __getnewargs__()¶
Return self as a plain tuple. Used by copy and pickle.
- __module__ = 'sasmodels.sasview_model'¶
- static __new__(_cls, number, control, choices, x_axis_label)¶
Create new instance of MultiplicityInfo(number, control, choices, x_axis_label)
- __repr__()¶
Return a nicely formatted representation string
- __slots__ = ()¶
- _asdict()¶
Return a new dict which maps field names to their values.
- _field_defaults = {}¶
- _fields = ('number', 'control', 'choices', 'x_axis_label')¶
- _fields_defaults = {}¶
- classmethod _make(iterable)¶
Make a new MultiplicityInfo object from a sequence or iterable
- _replace(**kwds)¶
Return a new MultiplicityInfo object replacing specified fields with new values
- choices¶
Alias for field number 2
- control¶
Alias for field number 1
- number¶
Alias for field number 0
- x_axis_label¶
Alias for field number 3
- sasmodels.sasview_model.MultiplicityInfoType¶
alias of
MultiplicityInfo
- sasmodels.sasview_model.SUPPORT_OLD_STYLE_PLUGINS = True¶
True if pre-existing plugins, with the old names and parameters, should continue to be supported.
- class sasmodels.sasview_model.SasviewModel(multiplicity: int | None = None)[source]¶
Bases:
object
Sasview wrapper for opencl/ctypes model.
- __module__ = 'sasmodels.sasview_model'¶
- __weakref__¶
list of weak references to the object (if defined)
- _dispersion_mesh() List[Tuple[ndarray, ndarray]] [source]¶
Create a mesh grid of dispersion parameters and weights.
Returns [p1, p2, …], w where pj is a vector of values for parameter j and w is a vector containing the products of the weights for each parameter set in the vector.
- _get_weights(par: Parameter) Tuple[ndarray, ndarray] [source]¶
Return dispersion weights for parameter
- _model: KernelModel = None¶
- _persistency_dict: Dict[str, Tuple[ndarray, ndarray]] = None¶
memory for polydispersity array if using ArrayDispersion (used by sasview).
- calc_composition_models(qx)[source]¶
returns parts of the composition model or None if not a composition model.
- calculate_ER(mode: int = 1) float [source]¶
Calculate the effective radius for P(q)*S(q)
mode is the R_eff type, which defaults to 1 to match the ER calculation for sasview models from version 3.x.
- Returns:
the value of the effective radius
- calculate_Iq(qx: Sequence[float], qy: Sequence[float] | None = None) Tuple[np.ndarray, Callable[[], collections.OrderedDict[str, np.ndarray]]] [source]¶
Calculate Iq for one set of q with the current parameters.
If the model is 1D, use q. If 2D, use qx, qy.
This should NOT be used for fitting since it copies the q vectors to the card for each evaluation.
The returned tuple contains the scattering intensity followed by a callable which returns a dictionary of intermediate data from ProductKernel.
- calculate_VR() float [source]¶
Calculate the volume ratio for P(q)*S(q)
- Returns:
the value of the form:shell volume ratio
- category: str = None¶
default model category
- clone() SasviewModel [source]¶
Return an identical copy of self
- cutoff = 1e-05¶
default cutoff for polydispersity
- description: str = None¶
short model description
- details: Dict[str, Sequence[Any]] = None¶
units and limits for each parameter
- dispersion: Dict[str, Any] = None¶
values for dispersion width, npts, nsigmas and type
- evalDistribution(qdist: ndarray | Tuple[ndarray, ndarray] | List[ndarray]) ndarray [source]¶
Evaluate a distribution of q-values.
- Parameters:
qdist – array of q or a list of arrays [qx,qy]
For 1D, a numpy array is expected as input
evalDistribution(q) where *q* is a numpy array.
For 2D, a list of [qx,qy] is expected with 1D arrays as input
qx = [ qx[0], qx[1], qx[2], ....] qy = [ qy[0], qy[1], qy[2], ....]
If the model is 1D only, then
\[q = \sqrt{q_x^2+q_y^2}\]
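For example, evaluating a 1D-only model on 2D points reduces to the radial computation above (illustrative):

```python
import numpy as np

# qx, qy points from a 2D detector grid, flattened to 1D arrays
qx = np.array([0.0, 0.1, 0.2])
qy = np.array([0.1, 0.0, 0.2])
q = np.sqrt(qx ** 2 + qy ** 2)   # radial q for a 1D-only model
```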
- fixed: List[str] = None¶
names of the fittable parameters
- getParam(name: str) float [source]¶
Get the value of a model parameter
- Parameters:
name – name of the parameter
- getProfile() Tuple[ndarray, ndarray] [source]¶
Get SLD profile
- Returns:
(z, beta) where z is a list of the depths of the transition points and beta is a list of the corresponding SLD values
- id: str = None¶
load/save name for the model
- input_name = 'Q'¶
- input_unit = 'A^{-1}'¶
- is_fittable(par_name: str) bool [source]¶
Check if a given parameter is fittable or not
- Parameters:
par_name – the parameter name to check
- is_form_factor = False¶
True if model should appear as a form factor
- is_multiplicity_model = False¶
True if model has multiplicity
- is_structure_factor = False¶
True if model should appear as a structure factor
- magnetic_params: List[str] = None¶
names of the magnetic parameters in the order they appear
- multiplicity: int | None = None¶
multiplicity value, or None if no multiplicity on the model
- multiplicity_info: MultiplicityInfo = None¶
Multiplicity information
- name: str = None¶
display name for the model
- non_fittable: Sequence[str] = ()¶
parameters that are not fitted
- orientation_params: List[str] = None¶
names of the orientation parameters in the order they appear
- output_name = 'Intensity'¶
- output_unit = 'cm^{-1}'¶
- params: Dict[str, float] = None¶
parameter {name: value} mapping
- run(x: float | Tuple[float, float] | Sequence[float] = 0.0) float [source]¶
Evaluate the model
- Parameters:
x – input q, or [q,phi]
- Returns:
scattering function P(q)
DEPRECATED: use calculate_Iq instead
- classmethod runTests()[source]¶
Run any tests built into the model and capture the test output.
Returns success flag and output
- runXY(x: float | Tuple[float, float] | Sequence[float] = 0.0) float [source]¶
Evaluate the model in cartesian coordinates
- Parameters:
x – input q, or [qx, qy]
- Returns:
scattering function P(q)
DEPRECATED: use calculate_Iq instead
- setParam(name: str, value: float) None [source]¶
Set the value of a model parameter
- Parameters:
name – name of the parameter
value – value of the parameter
- set_dispersion(parameter: str, dispersion: Dispersion) None [source]¶
Set the dispersion object for a model parameter
- Parameters:
parameter – name of the parameter [string]
dispersion – dispersion object of type Dispersion
- sasmodels.sasview_model._CACHED_MODULE: Dict[str, module] = {}¶
Track modules that we have loaded so we can determine whether the model has changed since we last reloaded.
- sasmodels.sasview_model._generate_model_attributes(model_info: ModelInfo) Dict[str, Any] [source]¶
Generate the class attributes for the model.
This should include all the information necessary to query the model details so that you do not need to instantiate a model to query it.
All the attributes should be immutable to avoid accidents.
- sasmodels.sasview_model._make_standard_model(name: str) Callable[[int], SasviewModel] [source]¶
Load the sasview model defined by name.
name can be a standard model name or a path to a custom model.
Returns a class that can be used directly as a sasview model.
- sasmodels.sasview_model._register_old_models() None [source]¶
Place the new models into sasview under the old names.
Monkey patch sas.sascalc.fit as sas.models so that sas.models.pluginmodel is available to the plugin modules.
- sasmodels.sasview_model.find_model(modelname: str) Callable[[int], SasviewModel] [source]¶
Find a model by name. If the model name ends in .py, try loading it from custom models, otherwise look for it in the list of builtin models.
- sasmodels.sasview_model.load_custom_model(path: str) Callable[[int], SasviewModel] [source]¶
Load a custom model given the model path.
- sasmodels.sasview_model.load_standard_models() List[Callable[[int], SasviewModel]] [source]¶
Load and return the list of predefined models.
If there is an error loading a model, then a traceback is logged and the model is not returned.
- sasmodels.sasview_model.make_model_from_info(model_info: ModelInfo) Callable[[int], SasviewModel] [source]¶
Convert model_info into a SasView model wrapper.
- sasmodels.sasview_model.reset_environment() None [source]¶
Clear the compute engine context so that the GUI can change devices.
This removes all compiled kernels, even those that are active on fit pages, but they will be restored the next time they are needed.
- sasmodels.sasview_model.test_cylinder() float [source]¶
Test that the cylinder model runs, returning the value at [0.1,0.1].
- sasmodels.sasview_model.test_empty_distribution() None [source]¶
Make sure that sasmodels returns NaN when there are no polydispersity points
- sasmodels.sasview_model.test_model_list() None [source]¶
Make sure that all models build as sasview models
- sasmodels.sasview_model.test_old_name() None [source]¶
Load and run cylinder model as sas.models.CylinderModel
- sasmodels.sasview_model.test_product() float [source]¶
Test that 2-D hardsphere model runs and doesn’t produce NaN.
sasmodels.sesans module¶
Conversion of the scattering cross section from SANS (\(I(q)\), or rather, \(d\Sigma/d\Omega\)) in absolute units (cm\(^{-1}\)) into the SESANS correlation function \(G\) using a Hankel transformation, then conversion of the SESANS correlation function into the polarisation measured in a SESANS experiment.
Everything is in units of metres unless specified otherwise (NOT TRUE!!!). Everything is in conventional units (nm for spin echo length).
Wim Bouwman (w.g.bouwman@tudelft.nl), June 2013
- class sasmodels.sesans.SesansTransform(z: ndarray, SElength: float, lam: float, zaccept: float, Rmax: float, log_spacing: float = 1.0003)[source]¶
Bases:
object
Spin-Echo SANS transform calculator. Similar to a resolution function, the SesansTransform object takes I(q) for the set of q_calc values and produces a transformed dataset
SElength (A) is the set of spin-echo lengths in the measured data.
zaccept (1/A) is the maximum acceptance of scattering vector in the spin echo encoding dimension (for ToF: Q of min(R) and max(lam)).
Rmax (A) is the maximum size sensitivity; larger radius requires more computation time.
- _H: ndarray = None¶
- _H0: ndarray = None¶
- __init__(z: ndarray, SElength: float, lam: float, zaccept: float, Rmax: float, log_spacing: float = 1.0003) None [source]¶
- __module__ = 'sasmodels.sesans'¶
- __weakref__¶
list of weak references to the object (if defined)
- q: ndarray = None¶
SElength from the data in the original data units; not used by transform but the GUI uses it, so make sure that it is present.
- q_calc: ndarray = None¶
q values to calculate when computing transform
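As a rough numerical sketch of the transform (the common SESANS form \(G(\delta) = \frac{1}{2\pi}\int_0^\infty J_0(q\delta)\,I(q)\,q\,dq\) is assumed here; the actual class precomputes a Hankel matrix in _H rather than integrating per point):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule along the last axis
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def j0(x):
    # Bessel J0 from its integral definition:
    # J0(x) = (1/pi) integral_0^pi cos(x sin(tau)) dtau
    tau = np.linspace(0.0, np.pi, 501)
    return trapezoid(np.cos(np.multiply.outer(x, np.sin(tau))), tau) / np.pi

def hankel(delta, q, Iq):
    # G(delta) = (1/2 pi) integral J0(q delta) I(q) q dq  (assumed form)
    return np.array([trapezoid(j0(d * q) * Iq * q, q)
                     for d in delta]) / (2.0 * np.pi)
```

At \(\delta = 0\) the kernel \(J_0\) is identically 1, so \(G(0)\) reduces to the integral of \(I(q)\,q\), a convenient sanity check for any implementation.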
sasmodels.special module¶
Special Functions¶
The following standard C99 math functions are available:
- M_PI, M_PI_2, M_PI_4, M_SQRT1_2, M_E:
\(\pi\), \(\pi/2\), \(\pi/4\), \(1/\sqrt{2}\) and Euler's number \(e\)
- exp, log, pow(x,y), expm1, log1p, sqrt, cbrt:
Power functions \(e^x\), \(\ln x\), \(x^y\), \(e^x - 1\), \(\ln(1 + x)\), \(\sqrt{x}\), \(\sqrt[3]{x}\). The functions expm1(x) and log1p(x) are accurate across all \(x\), including \(x\) very close to zero.
- sin, cos, tan, asin, acos, atan:
Trigonometry functions and inverses, operating on radians.
- sinh, cosh, tanh, asinh, acosh, atanh:
Hyperbolic trigonometry functions.
- atan2(y,x):
Angle from the \(x\)-axis to the point \((x,y)\), which is equal to \(\tan^{-1}(y/x)\) corrected for quadrant. That is, if \(x\) and \(y\) are both negative, then atan2(y,x) returns a value in quadrant III where atan(y/x) would return a value in quadrant I. Similarly for quadrants II and IV when \(x\) and \(y\) have opposite sign.
- fabs(x), fmin(x,y), fmax(x,y), trunc, rint:
Floating point functions. rint(x) returns the nearest integer.
- NAN:
NaN, Not a Number, \(0/0\). Use isnan(x) to test for NaN. Note that you cannot use x == NAN to test for NaN values since that will always return false: NAN does not equal NAN! The alternative, x != x, may fail if the compiler optimizes the test away.
- INFINITY:
\(\infty, 1/0\). Use isinf(x) to test for infinity, or isfinite(x) to test for finite and not NaN.
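The same IEEE-754 comparison rules apply in Python, which makes the pitfall easy to demonstrate:

```python
from math import inf, isinf, isnan, nan

# NaN compares unequal to everything, including itself.
assert nan != nan
assert not (nan == nan)
assert isnan(nan) and isnan(0.0 * inf)   # 0 * infinity is NaN
assert isinf(inf) and not isnan(inf)
```

Unlike C under aggressive optimization flags, Python will not optimize the x != x test away, but isnan remains the clearer idiom.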
- erf, erfc, tgamma, lgamma: do not use
Special functions that should be part of the standard, but are missing or inaccurate on some platforms. Use sas_erf, sas_erfc and sas_gamma instead (see below). Note: lgamma(x) has not yet been tested.
Some non-standard constants and functions are also provided:
- M_PI_180, M_4PI_3:
\(\frac{\pi}{180}\), \(\frac{4\pi}{3}\)
- SINCOS(x, s, c):
Macro which sets s=sin(x) and c=cos(x). The variables c and s must be declared first.
- square(x):
\(x^2\)
- cube(x):
\(x^3\)
- sas_sinx_x(x):
\(\sin(x)/x\), with limit \(\sin(0)/0 = 1\).
- powr(x, y):
\(x^y\) for \(x \ge 0\); this is faster than general \(x^y\) on some GPUs.
- pown(x, n):
\(x^n\) for \(n\) integer; this is faster than general \(x^n\) on some GPUs.
- FLOAT_SIZE:
The number of bytes in a floating point value. Even though all variables are declared double, they may be converted to single precision float before running. If your algorithm depends on precision (which is not uncommon for numerical algorithms), use the following:
#if FLOAT_SIZE > 4
    ... code for double precision ...
#else
    ... code for single precision ...
#endif
- SAS_DOUBLE:
A replacement for double so that the declared variable will stay double precision; this should generally not be used since some graphics cards do not support double precision. There is no provision for forcing a constant to stay double precision.
The following special functions and scattering calculations are defined.
These functions have been tuned to be fast and numerically stable down
to \(q=0\) even in single precision. In some cases they work around bugs
which appear on some platforms but not others, so use them where needed.
Add the files listed in source = ["lib/file.c", ...] to your model.py file in the order given, otherwise these functions will not be available.
- polevl(x, c, n):
Polynomial evaluation \(p(x) = \sum_{i=0}^n c_i x^i\) using Horner’s method so it is faster and more accurate.
\(c = \{c_n, c_{n-1}, \ldots, c_0 \}\) is the table of coefficients, sorted from highest to lowest.
- p1evl(x, c, n):
Evaluate normalized polynomial \(p(x) = x^n + \sum_{i=0}^{n-1} c_i x^i\) using Horner’s method so it is faster and more accurate.
\(c = \{c_{n-1}, c_{n-2} \ldots, c_0 \}\) is the table of coefficients, sorted from highest to lowest.
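A Python sketch of the same Horner scheme (illustrative; the C versions take an explicit coefficient count n, whereas here the full coefficient list is used, highest order first):

```python
def polevl(x, c):
    # Horner evaluation of p(x) = c[0]*x^(len(c)-1) + ... + c[-1],
    # with coefficients sorted from highest to lowest order.
    p = 0.0
    for ci in c:
        p = p * x + ci
    return p

def p1evl(x, c):
    # Normalized polynomial x^len(c) + c[0]*x^(len(c)-1) + ... + c[-1];
    # the leading coefficient 1 is implicit, saving one multiply.
    p = 1.0
    for ci in c:
        p = p * x + ci
    return p

print(polevl(2.0, [1.0, 2.0, 3.0]))   # x^2 + 2x + 3 at x=2 -> 11.0
```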
- sas_gamma(x):
Gamma function sas_gamma\((x) = \Gamma(x)\).
The standard math function, tgamma(x) is unstable for \(x < 1\) on some platforms.
- sas_gammaln(x):
log gamma function sas_gammaln\((x) = \log \Gamma(|x|)\).
The standard math function, lgamma(x), is incorrect for single precision on some platforms.
- sas_gammainc(a, x), sas_gammaincc(a, x):
Incomplete gamma function sas_gammainc\((a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)\) and complementary incomplete gamma function sas_gammaincc\((a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)\)
- sas_erf(x), sas_erfc(x):
Error function sas_erf\((x) = \frac{2}{\sqrt\pi}\int_0^x e^{-t^2}\,dt\) and complementary error function sas_erfc\((x) = \frac{2}{\sqrt\pi}\int_x^{\infty} e^{-t^2}\,dt\).
The standard math functions erf(x) and erfc(x) are slower and broken on some platforms.
- sas_J0(x):
Bessel function of the first kind sas_J0\((x)=J_0(x)\) where \(J_0(x) = \frac{1}{\pi}\int_0^\pi \cos(x\sin(\tau))\,d\tau\).
The standard math function j0(x) is not available on all platforms.
- sas_J1(x):
Bessel function of the first kind sas_J1\((x)=J_1(x)\) where \(J_1(x) = \frac{1}{\pi}\int_0^\pi \cos(\tau - x\sin(\tau))\,d\tau\).
The standard math function j1(x) is not available on all platforms.
- sas_JN(n, x):
Bessel function of the first kind and integer order \(n\): sas_JN\((n, x)=J_n(x)\) where \(J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\tau - x\sin(\tau))\,d\tau\). If \(n\) = 0 or 1, it uses sas_J0(x) or sas_J1(x), respectively.
The standard math function jn(n, x) is not available on all platforms.
- sas_Si(x):
Sine integral sas_Si\((x) = \int_0^x \tfrac{\sin t}{t}\,dt\).
This function uses Taylor series for small and large arguments:
For large arguments,
\[\text{Si}(x) \sim \frac{\pi}{2} - \frac{\cos(x)}{x} \left(1 - \frac{2!}{x^2} + \frac{4!}{x^4} - \frac{6!}{x^6} \right) - \frac{\sin(x)}{x} \left(\frac{1}{x} - \frac{3!}{x^3} + \frac{5!}{x^5} - \frac{7!}{x^7}\right)\]For small arguments,
\[\text{Si}(x) \sim x - \frac{x^3}{3\times 3!} + \frac{x^5}{5 \times 5!} - \frac{x^7}{7 \times 7!} + \frac{x^9}{9\times 9!} - \frac{x^{11}}{11\times 11!}\]
- sas_3j1x_x(x):
Spherical Bessel form sas_3j1x_x\((x) = 3 j_1(x)/x = 3 (\sin(x) - x \cos(x))/x^3\), with a limiting value of 1 at \(x=0\), where \(j_1(x)\) is the spherical Bessel function of the first kind and first order.
This function uses a Taylor series for small \(x\) for numerical accuracy.
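A sketch of such a stable evaluation in Python (the series cutoff and order here are illustrative choices, not necessarily those used in the sasmodels C code):

```python
from math import sin, cos

def sas_3j1x_x(x):
    # 3 j1(x)/x = 3 (sin x - x cos x) / x^3, with limit 1 at x = 0.
    if abs(x) < 1e-2:
        # Taylor series 1 - x^2/10 + x^4/280 - ... avoids the
        # catastrophic cancellation of sin x - x cos x near zero.
        x2 = x * x
        return 1.0 - x2 / 10.0 * (1.0 - x2 / 28.0)
    return 3.0 * (sin(x) - x * cos(x)) / (x * x * x)

print(sas_3j1x_x(0.0))   # 1.0
```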
- sas_2J1x_x(x):
Bessel form sas_2J1x_x\((x) = 2 J_1(x)/x\), with a limiting value of 1 at \(x=0\), where \(J_1(x)\) is the Bessel function of first kind and first order.
- gauss76.n, gauss76.z[i], gauss76.w[i]:
Points \(z_i\) and weights \(w_i\) for \(n=76\) point Gaussian quadrature, computing \(\int_{-1}^1 f(z)\,dz \approx \sum_{i=1}^{76} w_i\,f(z_i)\). When translating the model to C, include 'lib/gauss76.c' in the source and use GAUSS_N, GAUSS_Z, and GAUSS_W.
Similar arrays are available in gauss20 for 20-point quadrature and gauss150 for 150-point quadrature. By using import gauss76 as gauss it is easy to change the number of points in the integration.
- class sasmodels.special.Gauss(w, z)[source]¶
Bases:
object
Gauss-Legendre integration weights
- __module__ = 'sasmodels.special'¶
- __weakref__¶
list of weak references to the object (if defined)
- sasmodels.special.p1evl(x, c, n)[source]¶
return x^n + p(x) for polynomial p of degree n-1 with coefficients c
- sasmodels.special.polevl(x, c, n)[source]¶
return p(x) for polynomial p of degree n-1 with coefficients c
- sasmodels.special.sincos(x)¶
return sin(x), cos(x)
sasmodels.weights module¶
SAS distributions for polydispersity.
- class sasmodels.weights.ArrayDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
Empirical dispersion curve.
Use set_weights() to set \(w = f(x)\).
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 35, 'nsigmas': 1, 'width': 0}¶
- type = 'array'¶
- class sasmodels.weights.BoltzmannDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
Boltzmann dispersion, with \(\sigma=k T/E\).
\[w = \exp\left( -|x - c|/\sigma\right)\]
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 35, 'nsigmas': 3, 'width': 0}¶
- type = 'boltzmann'¶
- class sasmodels.weights.Dispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
object
Base dispersion object.
Subclasses should define _weights(center, sigma, lb, ub) which returns the x points and their corresponding weights.
- __module__ = 'sasmodels.weights'¶
- __weakref__¶
list of weak references to the object (if defined)
- _linspace(center, sigma, lb, ub)[source]¶
Helper function to provide linearly spaced weight points within the range
- default = {'npts': 35, 'nsigmas': 3, 'width': 0}¶
- get_weights(center, lb, ub, relative)[source]¶
Return the weights for the distribution.
center is the center of the distribution
lb, ub are the min and max allowed values
relative is True if the distribution width is proportional to the center value instead of absolute. For polydispersity use relative. For orientation parameters use absolute.
- set_weights(values, weights)[source]¶
Set the weights on the disperser if it is ArrayDispersion.
- type = 'base disperser'¶
- class sasmodels.weights.GaussianDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
Gaussian dispersion, with 1-\(\sigma\) width.
\[w = \exp\left(-\tfrac12 (x - c)^2/\sigma^2\right)\]
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 35, 'nsigmas': 3, 'width': 0}¶
- type = 'gaussian'¶
- class sasmodels.weights.LogNormalDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
log Gaussian dispersion, with 1-\(\sigma\) width.
\[w = \frac{\exp\left(-\tfrac12 (\ln x - c)^2/\sigma^2\right)}{x\sigma}\]
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 80, 'nsigmas': 8, 'width': 0}¶
- type = 'lognormal'¶
- class sasmodels.weights.RectangleDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
Uniform dispersion, with width \(\sqrt{3}\sigma\).
\[w = 1\]
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 35, 'nsigmas': 1.73205, 'width': 0}¶
- type = 'rectangle'¶
- class sasmodels.weights.SchulzDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
Schultz dispersion, with 1-\(\sigma\) width.
\[w = \frac{z^z\,R^{z-1}}{e^{Rz}\,c \Gamma(z)}\]where \(c\) is the center of the distribution, \(R = x/c\) and \(z=(c/\sigma)^2\).
This is evaluated using logarithms as
\[w = \exp\left(z \ln z + (z-1)\ln R - Rz - \ln c - \ln \Gamma(z) \right)\]
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 80, 'nsigmas': 8, 'width': 0}¶
- type = 'schulz'¶
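The log-space evaluation above can be sketched directly with math.lgamma (illustrative; the npts/nsigmas grid generation of the real disperser is omitted):

```python
from math import exp, log, lgamma

def schulz_weight(x, center, sigma):
    # w = exp(z ln z + (z-1) ln R - R z - ln c - lgamma(z))
    # with R = x/c and z = (c/sigma)^2; working in logs keeps the
    # evaluation stable even when z is large and Gamma(z) overflows.
    R = x / center
    z = (center / sigma) ** 2
    return exp(z * log(z) + (z - 1.0) * log(R) - R * z
               - log(center) - lgamma(z))
```

Written this way the density integrates to one over \(x \in (0, \infty)\), which makes a convenient numerical check.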
- class sasmodels.weights.UniformDispersion(npts=None, width=None, nsigmas=None)[source]¶
Bases:
Dispersion
Uniform dispersion, with width \(\sigma\).
\[w = 1\]
- __module__ = 'sasmodels.weights'¶
- default = {'npts': 35, 'nsigmas': None, 'width': 0}¶
- type = 'uniform'¶
- sasmodels.weights.get_weights(disperser, n, width, nsigmas, value, limits, relative)[source]¶
Return the set of values and weights for a polydisperse parameter.
disperser is the name of the disperser.
n is the number of points in the weight vector.
width is the width of the disperser distribution.
nsigmas is the number of sigmas to span for the dispersion convolution.
value is the value of the parameter in the model.
limits is [lb, ub], the lower and upper bound on the possible values.
relative is true if width is defined in proportion to the value of the parameter, and false if it is an absolute width.
Returns (value, weight), where value and weight are vectors.
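A numpy sketch of what such a call produces for a Gaussian disperser (illustrative; the actual sasmodels code also clips the points to the parameter limits, and normalization happens elsewhere):

```python
import numpy as np

def gaussian_weights(value, width, n=35, nsigmas=3, relative=True):
    # For relative widths, sigma = width * value (polydispersity);
    # for absolute widths (e.g. orientation angles), sigma = width.
    sigma = width * value if relative else width
    x = np.linspace(value - nsigmas * sigma, value + nsigmas * sigma, n)
    w = np.exp(-0.5 * ((x - value) / sigma) ** 2)
    return x, w / w.sum()     # normalized here for illustration
```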
- sasmodels.weights.load_weights(pattern: str | None = None) None [source]¶
Load dispersion distributions matching the given glob pattern
- sasmodels.weights.plot_weights(model_info: ModelInfo, mesh: List[Tuple[float, ndarray, ndarray]]) None [source]¶
Plot the weights returned by get_weights().
model_info defines model parameters, etc.
mesh is a list of tuples containing (value, dispersity, weights) for each parameter, where (dispersity, weights) pairs are the distributions to be plotted.
Module contents¶
sasmodels¶
sasmodels is a package containing models for small angle neutron and X-ray scattering. Models supported are the one dimensional circular average and two dimensional oriented patterns. As well as the form factor calculations for the individual shapes, sasmodels also provides automatic shape polydispersity, angular dispersion and resolution convolution. SESANS patterns can be computed for any model.
Models can be written in python or in C. C models can run on the GPU if OpenCL drivers are available. See generate for details on defining new models.