Module: nn

bench([label, verbose, extra_argv])

Run benchmarks for module using nose.

test([label, verbose, extra_argv, doctests, ...])

Run tests for module using nose.

Module: nn.histo_resdnn

Class and helper functions for fitting the Histological ResDNN model.

Add(*args, **kwargs)

Layer that adds a list of inputs.

Dense(*args, **kwargs)

Just your regular densely-connected NN layer.

HemiSphere([x, y, z, theta, phi, xyz, ...])

Points on the unit sphere.

HistoResDNN([sh_order, basis_type, verbose])

This class is intended for the ResDNN Histology Network model.

Model(*args, **kwargs)

Model groups layers into an object with training and inference features.

Version(version)

Attributes

Input([shape, batch_size, name, dtype, ...])

Input() is used to instantiate a Keras tensor.

doctest_skip_parser(func)

Decorator replaces custom skip test markup in doctests.

get_bval_indices(bvals, bval[, tol])

Get indices where the b-value is bval

get_fnames([name])

Provide full paths to example or test datasets.

get_sphere([name])

Provide triangulated spheres.

optional_package(name[, trip_msg])

Return package-like thing and module setup for package name

set_logger_level(log_level)

Change the logger of the HistoResDNN to one of the following: DEBUG, INFO, WARNING, CRITICAL, ERROR

sf_to_sh(sf, sphere[, sh_order, basis_type, ...])

Spherical function to spherical harmonics (SH).

sh_to_sf(sh, sphere[, sh_order, basis_type, ...])

Spherical harmonics (SH) to spherical function (SF).

sph_harm_ind_list(sh_order[, full_basis])

Returns the degree (m) and order (n) of all the symmetric spherical harmonics of degree less than or equal to sh_order.

unique_bvals_magnitude(bvals[, bmag, rbvals])

This function gives the unique rounded b-values of the data

Module: nn.model

MultipleLayerPercepton([input_shape, ...])

Methods

SingleLayerPerceptron([input_shape, ...])

Methods

Version(version)

Attributes

optional_package(name[, trip_msg])

Return package-like thing and module setup for package name

bench

dipy.nn.bench(label='fast', verbose=1, extra_argv=None)

Run benchmarks for module using nose.

Parameters
label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional

Identifies the benchmarks to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:

  • ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.

  • ‘full’ - fast (as above) and slow benchmarks as in the ‘no -A’ option to nosetests - this is the same as ‘’.

  • None or ‘’ - run all tests.

  • attribute_identifier - string passed directly to nosetests as ‘-A’.

verbose : int, optional

Verbosity value for benchmark outputs, in the range 1-10. Default is 1.

extra_argv : list, optional

List with any extra arguments to pass to nosetests.

Returns
success : bool

Returns True if running the benchmarks works, False if an error occurred.

Notes

Benchmarks are like tests, but have names starting with “bench” instead of “test”, and can be found under the “benchmarks” sub-directory of the module.

Each NumPy module exposes bench in its namespace to run all benchmarks for it.

Examples

>>> success = np.lib.bench() 
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success 
True

test

dipy.nn.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)

Run tests for module using nose.

Parameters
label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional

Identifies the tests to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:

  • ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.

  • ‘full’ - fast (as above) and slow tests as in the ‘no -A’ option to nosetests - this is the same as ‘’.

  • None or ‘’ - run all tests.

  • attribute_identifier - string passed directly to nosetests as ‘-A’.

verbose : int, optional

Verbosity value for test outputs, in the range 1-10. Default is 1.

extra_argv : list, optional

List with any extra arguments to pass to nosetests.

doctests : bool, optional

If True, run doctests in module. Default is False.

coverage : bool, optional

If True, report coverage of NumPy code. Default is False. (This requires the coverage module).

raise_warnings : None, str or sequence of warnings, optional

This specifies which warnings to configure as ‘raise’ instead of being shown once during the test execution. Valid strings are:

  • “develop” : equals (Warning,)

  • “release” : equals (), do not raise on any warnings.

timer : bool or int, optional

Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for the N slowest tests.

Returns
result : object

Returns the result of running the tests as a nose.result.TextTestResult object.

Notes

Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:

>>> np.lib.test() 

Examples

>>> result = np.lib.test() 
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s

OK

>>> result.errors 
[]
>>> result.knownfail 
[]

Add

class dipy.nn.histo_resdnn.Add(*args, **kwargs)

Bases: keras.layers.merge._Merge

Layer that adds a list of inputs.

It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).

Examples:

>>> input_shape = (2, 3, 4)
>>> x1 = tf.random.normal(input_shape)
>>> x2 = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Add()([x1, x2])
>>> print(y.shape)
(2, 3, 4)

Used in a functional model:

>>> input1 = tf.keras.layers.Input(shape=(16,))
>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)
>>> input2 = tf.keras.layers.Input(shape=(32,))
>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)
>>> # equivalent to `added = tf.keras.layers.add([x1, x2])`
>>> added = tf.keras.layers.Add()([x1, x2])
>>> out = tf.keras.layers.Dense(4)(added)
>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
Attributes
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Deprecated, do NOT use! Only for compatibility with external Keras.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

losses

List of losses added using the add_loss() API.

metrics

List of metrics added using the add_metric() API.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Deprecated, do NOT use! Only for compatibility with external Keras.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

__call__(*args, **kwargs)

Wraps call, applying pre- and post-processing steps.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates[, inputs])

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

apply(inputs, *args, **kwargs)

Deprecated, do NOT use!

call(inputs)

This is where the layer's logic lives.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

finalize_state()

Finalizes the layer's state after updating layer weights.

from_config(config)

Creates a layer from its config.

get_config()

Returns the config of the layer.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_losses_for(inputs)

Deprecated, do NOT use!

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_updates_for(inputs)

Deprecated, do NOT use!

get_weights()

Returns the current weights of the layer, as NumPy arrays.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

with_name_scope(method)

Decorator to automatically enter the module name scope.

build

compute_output_shape

__init__(**kwargs)

Initializes a Merge layer.

Args:

**kwargs: standard layer keyword arguments.

Dense

class dipy.nn.histo_resdnn.Dense(*args, **kwargs)

Bases: keras.engine.base_layer.Layer

Just your regular densely-connected NN layer.

Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). These are all attributes of Dense.

Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 0 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).

Also, layer attributes cannot be modified after the layer has been called once (except the trainable attribute). When the popular kwarg input_shape is passed, Keras will create an input layer to insert before the current layer. This can be treated as equivalent to explicitly defining an InputLayer.

Example:

>>> # Create a `Sequential` model and add a Dense layer as the first layer.
>>> model = tf.keras.models.Sequential()
>>> model.add(tf.keras.Input(shape=(16,)))
>>> model.add(tf.keras.layers.Dense(32, activation='relu'))
>>> # Now the model will take as input arrays of shape (None, 16)
>>> # and output arrays of shape (None, 32).
>>> # Note that after the first layer, you don't need to specify
>>> # the size of the input anymore:
>>> model.add(tf.keras.layers.Dense(32))
>>> model.output_shape
(None, 32)
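The rank > 2 behavior described in the note above can be illustrated with a short sketch (assuming TensorFlow is available); the kernel acts along the last axis only:

```python
import tensorflow as tf

# Rank-3 input: (batch_size, d0, d1) = (4, 10, 16).
x = tf.random.normal((4, 10, 16))
layer = tf.keras.layers.Dense(32)

# The kernel has shape (d1, units) = (16, 32) and is applied along the
# last axis, so the output shape is (batch_size, d0, units).
y = layer(x)
print(y.shape)              # (4, 10, 32)
print(layer.kernel.shape)   # (16, 32)
```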
Args:

units: Positive integer, dimensionality of the output space.

activation: Activation function to use. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).

use_bias: Boolean, whether the layer uses a bias vector.

kernel_initializer: Initializer for the kernel weights matrix.

bias_initializer: Initializer for the bias vector.

kernel_regularizer: Regularizer function applied to the kernel weights matrix.

bias_regularizer: Regularizer function applied to the bias vector.

activity_regularizer: Regularizer function applied to the output of the layer (its “activation”).

kernel_constraint: Constraint function applied to the kernel weights matrix.

bias_constraint: Constraint function applied to the bias vector.

Input shape:

N-D tensor with shape: (batch_size, …, input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).

Output shape:

N-D tensor with shape: (batch_size, …, units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).

Attributes
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Deprecated, do NOT use! Only for compatibility with external Keras.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

losses

List of losses added using the add_loss() API.

metrics

List of metrics added using the add_metric() API.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Deprecated, do NOT use! Only for compatibility with external Keras.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

__call__(*args, **kwargs)

Wraps call, applying pre- and post-processing steps.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates[, inputs])

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

apply(inputs, *args, **kwargs)

Deprecated, do NOT use!

build(input_shape)

Creates the variables of the layer (optional, for subclass implementers).

call(inputs)

This is where the layer's logic lives.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

finalize_state()

Finalizes the layer's state after updating layer weights.

from_config(config)

Creates a layer from its config.

get_config()

Returns the config of the layer.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_losses_for(inputs)

Deprecated, do NOT use!

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_updates_for(inputs)

Deprecated, do NOT use!

get_weights()

Returns the current weights of the layer, as NumPy arrays.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

with_name_scope(method)

Decorator to automatically enter the module name scope.

__init__(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
build(input_shape)

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Args:

input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(inputs)

This is where the layer’s logic lives.

The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state in __init__(), or the build() method that is called automatically before call() executes the first time.

Args:
inputs: Input tensor, or dict/list/tuple of input tensors.

The first positional inputs argument is subject to special rules:

  • inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.

  • NumPy array or Python scalar values in inputs get cast as tensors.

  • Keras mask metadata is only collected from inputs.

  • Layers are built (build(input_shape) method) using shape info from inputs only.

  • input_spec compatibility is only checked against inputs.

  • Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.

  • The SavedModel input specification is generated using inputs only.

  • Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc is only supported for inputs and not for tensors in positional and keyword arguments.

*args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.

**kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:

  • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.

  • mask: Boolean input mask. If the layer’s call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

Returns:

A tensor or list/tuple of tensors.
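As a hedged illustration of the call() contract just described (not part of DIPY or Keras), here is a minimal custom layer that creates its state in build() and keeps call() purely computational:

```python
import tensorflow as tf

class ScaleShift(tf.keras.layers.Layer):
    """Hypothetical layer computing y = x * scale + shift per feature."""

    def build(self, input_shape):
        # State is created here, before call() runs for the first time.
        dim = input_shape[-1]
        self.scale = self.add_weight(name="scale", shape=(dim,),
                                     initializer="ones")
        self.shift = self.add_weight(name="shift", shape=(dim,),
                                     initializer="zeros")

    def call(self, inputs):
        # call() only computes; it does not create variables.
        return inputs * self.scale + self.shift

x = tf.random.normal((2, 8))
print(ScaleShift()(x).shape)  # (2, 8)
```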

compute_output_shape(input_shape)

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:

input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

An output shape tuple.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.
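A brief sketch of the config round trip described above (assuming TensorFlow/Keras); the clone shares the configuration but not the trained weights:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(32, activation='relu', name='dense_a')
config = layer.get_config()                   # plain, serializable dict
clone = tf.keras.layers.Dense.from_config(config)

print(config['units'], config['activation'])  # 32 relu
print(clone.name == layer.name)               # True: same config, fresh weights
```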

HemiSphere

class dipy.nn.histo_resdnn.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Bases: dipy.core.sphere.Sphere

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Parameters
x, y, z : 1-D array_like

Vertices as x-y-z coordinates.

theta, phi : 1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz : (N, 3) ndarray

Vertices as x-y-z coordinates.

faces : (N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges : (N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tol : float

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
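A short usage sketch of the three construction conventions listed above, assuming HemiSphere is imported from dipy.core.sphere and using illustrative vertex values:

```python
import numpy as np
from dipy.core.sphere import HemiSphere

# Antipodal pairs collapse to a single vertex on the HemiSphere.
xyz = np.array([[1.0, 0.0, 0.0],
                [-1.0, 0.0, 0.0],   # antipode of the first point
                [0.0, 1.0, 0.0]])
hemi = HemiSphere(xyz=xyz)
print(hemi.vertices.shape)           # (2, 3): duplicates discarded

# Equivalent construction from spherical coordinates.
hemi2 = HemiSphere(theta=hemi.theta, phi=hemi.phi)
```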

See also

Sphere
Attributes
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

from_sphere(sphere[, tol])

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide([n])

Create a more subdivided HemiSphere

edges

faces

vertices

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Create a HemiSphere from points

faces()
find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

Parameters
xyz : array-like, 3 elements

A unit vector

Returns
idx : int

The index into the Sphere.vertices array that gives the closest vertex (in angle).

classmethod from_sphere(sphere, tol=1e-05)

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide(n=1)

Create a more subdivided HemiSphere

See Sphere.subdivide for full documentation.

HistoResDNN

class dipy.nn.histo_resdnn.HistoResDNN(sh_order=8, basis_type='tournier07', verbose=False)

Bases: object

This class is intended for the ResDNN Histology Network model.

Methods

fetch_default_weights()

Load the model pre-training weights to use for the fitting.

load_model_weights(weights_path)

Load the custom pre-training weights to use for the fitting.

predict(data, gtab[, mask, chunk_size])

Wrapper function to facilitate prediction of larger datasets.

__init__(sh_order=8, basis_type='tournier07', verbose=False)

The model was re-trained for usage with a different basis function (‘tournier07’) like the proposed model in [1, 2].

To obtain the pre-trained model, use:

>>> resdnn_model = HistoResDNN()
>>> fetch_model_weights_path = get_fnames('histo_resdnn_weights')
>>> resdnn_model.load_model_weights(fetch_model_weights_path)

This model is designed to take as input raw DWI signal on a sphere (ODF) represented as SH of order 8 in the tournier basis and predict fODF of order 8 in the tournier basis. Effectively, this model is mimicking a CSD fit.

Parameters
sh_order : int, optional

Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric basis. Default: 8

basis_type : {‘tournier07’, ‘descoteaux07’}, optional

tournier07 (default) or descoteaux07.

verbose : bool, optional

Whether to show information about the processing. Default: False

References

[1] Nath, V., Schilling, K. G., Parvathaneni, P., Hansen, C. B., Hainline, A. E., Huo, Y., … & Stepniewska, I. (2019). Deep learning reveals untapped information for local white-matter fiber reconstruction in diffusion-weighted MRI. Magnetic Resonance Imaging, 62, 220-227.

[2] Nath, V., Schilling, K. G., Hansen, C. B., Parvathaneni, P., Hainline, A. E., Bermudez, C., … & Stępniewska, I. (2019). Deep learning captures more accurate diffusion fiber orientations distributions than constrained spherical deconvolution. arXiv preprint arXiv:1911.07927.

fetch_default_weights()

Load the model pre-training weights to use for the fitting. This will not work if the declared SH_ORDER does not match the weights' expected input.

load_model_weights(weights_path)

Load the custom pre-training weights to use for the fitting. This will not work if the declared SH_ORDER does not match the weights' expected input.

The weights for a sh_order of 8 can be obtained via the function get_fnames('histo_resdnn_weights').

Parameters
weights_path : str

Path to the file containing the weights (hdf5, saved by tensorflow)

predict(data, gtab, mask=None, chunk_size=1000)

Wrapper function to facilitate prediction of larger datasets. The function will mask, normalize, split, predict and 're-assemble' the data as a volume.

Parameters
data : np.ndarray

DWI signal in a 4D array

gtab : GradientTable class instance

The acquisition scheme matching the data (must contain at least one b0)

mask : np.ndarray, optional

Binary mask of the brain to avoid unnecessary computation and unreliable prediction outside the brain. Default: Compute prediction only for nonzero voxels (with at least one nonzero DWI value).

Returns
pred_sh_coef : np.ndarray (x, y, z, M)

Predicted fODF (as SH coefficients). The volume has the same shape as the input data, but with (sh_order + 1) * (sh_order + 2) / 2 as the last dimension.
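A hedged end-to-end sketch of the workflow described above; the random data, b-values and b-vectors are placeholders standing in for a real acquisition:

```python
import numpy as np
from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.nn.histo_resdnn import HistoResDNN

# Placeholder 4D DWI volume and acquisition scheme (5 b0s + 60 directions).
data = np.random.rand(10, 10, 10, 65)
bvals = np.r_[np.zeros(5), np.full(60, 1000.0)]
bvecs = np.random.rand(65, 3)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvecs[:5] = 0                                  # b0 volumes have zero vectors
gtab = gradient_table(bvals, bvecs)

model = HistoResDNN()                          # sh_order=8, tournier07 basis
model.load_model_weights(get_fnames('histo_resdnn_weights'))

# Predicted fODF SH coefficients: (10, 10, 10, 45) for sh_order=8.
pred_sh = model.predict(data, gtab)
print(pred_sh.shape)
```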

Model

class dipy.nn.histo_resdnn.Model(*args, **kwargs)

Bases: keras.engine.base_layer.Layer, keras.utils.version_utils.ModelVersionSelector

Model groups layers into an object with training and inference features.

Args:

inputs: The input(s) of the model: a keras.Input object or list of keras.Input objects.

outputs: The output(s) of the model. See Functional API example below.

name: String, the name of the model.

There are two ways to instantiate a Model:

1 - With the “Functional API”, where you start from Input, you chain layer calls to specify the model’s forward pass, and finally you create your model from inputs and outputs:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of list or dicts of dict).

A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract sub-components of the model.

Example:

```python
inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)

full_model = keras.Model(inputs, feature)
backbone = keras.Model(processed, conv)
activations = keras.Model(conv, feature)
```

Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights are shared across these models, so that the user can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.

2 - By subclassing the Model class: in that case, you should define your layers in __init__() and you should implement the model’s forward pass in call().

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
```

If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)

model = MyModel()
```

Once the model is created, you can configure it with losses and metrics via model.compile(), train it with model.fit(), or use it for prediction with model.predict().

Attributes
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

distribute_strategy

The tf.distribute.Strategy this model was created under.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Deprecated, do NOT use! Only for compatibility with external Keras.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

layers
losses

List of losses added using the add_loss() API.

metrics

Returns the model’s metrics added using compile(), add_metric() APIs.

metrics_names

Returns the model’s display labels for all outputs.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Deprecated, do NOT use! Only for compatibility with external Keras.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

run_eagerly

Settable attribute indicating whether the model should run eagerly.

state_updates

Deprecated, do NOT use!

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

__call__(*args, **kwargs)

Wraps call, applying pre- and post-processing steps.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates[, inputs])

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

apply(inputs, *args, **kwargs)

Deprecated, do NOT use!

build(input_shape)

Builds the model based on input shapes received.

call(inputs[, training, mask])

Calls the model on new inputs and returns the outputs as tensors.

compile([optimizer, loss, metrics, ...])

Configures the model for training.

compute_loss([x, y, y_pred, sample_weight])

Compute the total loss, validate it, and return it.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

evaluate([x, y, batch_size, verbose, ...])

Returns the loss value & metrics values for the model in test mode.

evaluate_generator(generator[, steps, ...])

Evaluates the model on a data generator.

finalize_state()

Finalizes the layer's state after updating layer weights.

fit([x, y, batch_size, epochs, verbose, ...])

Trains the model for a fixed number of epochs (iterations on a dataset).

fit_generator(generator[, steps_per_epoch, ...])

Fits the model on data yielded batch-by-batch by a Python generator.

from_config(config[, custom_objects])

Creates a layer from its config.

get_config()

Returns the config of the layer.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_layer([name, index])

Retrieves a layer based on either its name (unique) or index.

get_losses_for(inputs)

Deprecated, do NOT use!

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_updates_for(inputs)

Deprecated, do NOT use!

get_weights()

Retrieves the weights of the model.

load_weights(filepath[, by_name, ...])

Loads all layer weights, either from a TensorFlow or an HDF5 weight file.

make_predict_function([force])

Creates a function that executes one step of inference.

make_test_function([force])

Creates a function that executes one step of evaluation.

make_train_function([force])

Creates a function that executes one step of training.

predict(x[, batch_size, verbose, steps, ...])

Generates output predictions for the input samples.

predict_generator(generator[, steps, ...])

Generates predictions for the input samples from a data generator.

predict_on_batch(x)

Returns predictions for a single batch of samples.

predict_step(data)

The logic for one inference step.

reset_metrics()

Resets the state of all the metrics in the model.

save(filepath[, overwrite, ...])

Saves the model to Tensorflow SavedModel or a single HDF5 file.

save_spec([dynamic_batch])

Returns the tf.TensorSpec of call inputs as a tuple (args, kwargs).

save_weights(filepath[, overwrite, ...])

Saves all layer weights.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

summary([line_length, positions, print_fn, ...])

Prints a string summary of the network.

test_on_batch(x[, y, sample_weight, ...])

Test the model on a single batch of samples.

test_step(data)

The logic for one evaluation step.

to_json(**kwargs)

Returns a JSON string containing the network configuration.

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

train_on_batch(x[, y, sample_weight, ...])

Runs a single gradient update on a single batch of data.

train_step(data)

The logic for one training step.

with_name_scope(method)

Decorator to automatically enter the module name scope.

reset_states

__init__(*args, **kwargs)
build(input_shape)

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:

input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
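A small sketch (assuming TensorFlow) of the standalone build() use described above for a subclassed model:

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(3)

    def call(self, inputs):
        return self.dense(inputs)

model = TinyModel()
# Create the variables without calling the model on real data.
model.build(input_shape=(None, 8))
model.summary()                    # now possible: shapes and weights exist
print(model.dense.kernel.shape)    # (8, 3)
```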

call(inputs, training=None, mask=None)

Calls the model on new inputs and returns the outputs as tensors.

In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

Args:

inputs: Input tensor, or dict/list/tuple of input tensors.

training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, **kwargs)

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:

optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training

and testing. Each of these can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’: ‘accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well.

loss_weights: Optional list or dictionary specifying scalar coefficients

(Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients.

If a list, it is expected to have a 1:1 mapping to the model’s

outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. Defaults to False. If True, this Model’s

logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy.

steps_per_execution: Int. Defaults to 1. The number of batches to run

during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution).

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. This option cannot be enabled with run_eagerly=True. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

**kwargs: Arguments supported for backwards compatibility only.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns results
        # for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)
        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

property distribute_strategy

The tf.distribute.Strategy this model was created under.

evaluate(x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence

input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a dict,

with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
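A compact sketch (with made-up data) showing the two return modes of evaluate() described above:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 1).astype('float32')

loss, mae = model.evaluate(x, y, batch_size=8, verbose=0)      # list of scalars
results = model.evaluate(x, y, verbose=0, return_dict=True)    # dict of results
print(results)   # e.g. {'loss': ..., 'mae': ...}
```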

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)

Trains the model for a fixed number of epochs (iterations on a dataset).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or

keras.utils.Sequence instance.

validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or

keras.utils.Sequence instance, instead provide the sample_weights

as the third element of x.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided. Integer

or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or keras.utils.Sequence

input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that

it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)
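Below is a minimal sketch of the tuple-unpacking rules described above, showing a tf.data.Dataset whose elements are a (dict-of-inputs, targets) pair. The feature names "x0"/"x1", the shapes, and the batch size are illustrative assumptions, not part of the dipy API.

```python
import numpy as np
import tensorflow as tf

x0 = np.random.random((8, 3)).astype("float32")
x1 = np.random.random((8, 5)).astype("float32")
y = np.random.random((8, 1)).astype("float32")

# Each element is a length-2 tuple: a dict of named inputs, then the targets.
ds = tf.data.Dataset.from_tensor_slices(({"x0": x0, "x1": x1}, y)).batch(4)
# model.fit(ds) would treat the dict as x and the second element as y.
```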

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
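The following is a minimal usage sketch tying together a few of the fit arguments documented above (validation_split, class_weight, callbacks-free training with verbose=0). The toy data, architecture, and class weights are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

x = np.random.random((100, 8)).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))

inputs = tf.keras.layers.Input(shape=(8,))
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(inputs)
model = tf.keras.models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(
    x, y,
    batch_size=16,
    epochs=3,
    validation_split=0.2,           # last 20% of x/y held out for validation
    class_weight={0: 1.0, 1: 2.0},  # up-weight the under-represented class
    verbose=0,
)
# history.history holds per-epoch loss and metric values.
```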

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.
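A minimal sketch of the get_config()/from_config() round trip described in the two entries above; the small functional architecture is an illustrative assumption.

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs, outputs)

config = dict(model.get_config())                   # shallow copy, per the note above
clone = tf.keras.models.Model.from_config(config)   # same architecture, fresh weights
```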

get_layer(name=None, index=None)

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer. index: Integer, index of layer.

Returns:

A layer instance.
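A short sketch of get_layer() lookups by name and by index; the layer name "hidden" and the architecture are illustrative assumptions.

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(4,))
h = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(2)(h)
model = tf.keras.models.Model(inputs, outputs)

dense_by_name = model.get_layer(name="hidden")
first_layer = model.get_layer(index=0)   # the InputLayer in traversal order
```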

get_weights()

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property layers
load_weights(filepath, by_name=False, skip_mismatch=False, options=None)

Loads all layer weights, either from a TensorFlow or an HDF5 weight file.

If by_name is False weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model’s constructor.

Args:
filepath: String, path to the weights file to load. For weight files in

TensorFlow format, this is the file prefix (the same as was passed to save_weights). This can also be a path to a SavedModel saved from model.save.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in TensorFlow format.

skip_mismatch: Boolean, whether to skip loading of layers where there is

a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True).

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights.

Returns:

When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built).

When loading weights in HDF5 format, returns None.

Raises:
ImportError: If h5py is not available and the weight file is in HDF5

format.

ValueError: If skip_mismatch is set to True when by_name is

False.
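A sketch of saving weights to HDF5 and reloading them by name for transfer learning, as described above. The layer names, file name, and architectures are illustrative assumptions (HDF5 saving requires h5py).

```python
import tensorflow as tf

def make_backbone():
    inputs = tf.keras.layers.Input(shape=(8,))
    x = tf.keras.layers.Dense(16, name="shared_dense")(inputs)
    return inputs, x

inputs, x = make_backbone()
src = tf.keras.models.Model(inputs, tf.keras.layers.Dense(2, name="head_a")(x))
src.save_weights("src_weights.h5")        # '.h5' suffix selects the HDF5 format

inputs, x = make_backbone()
dst = tf.keras.models.Model(inputs, tf.keras.layers.Dense(3, name="head_b")(x))
# Only layers with matching names ("shared_dense") receive weights; the
# differently named head is left untouched.
dst.load_weights("src_weights.h5", by_name=True, skip_mismatch=True)
```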

make_predict_function(force=False)

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics

Returns the model’s metrics added using compile(), add_metric() APIs.

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property non_trainable_weights

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

predict(x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: Verbosity mode, 0 or 1.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence

input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided

input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
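The sketch below contrasts Model.predict (batched, returns NumPy arrays) with a direct __call__ on a small batch, as recommended above. The shapes and architecture are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs, outputs)

big_x = np.random.random((1000, 3)).astype("float32")
preds = model.predict(big_x, batch_size=128)   # NumPy array of shape (1000, 2)

small_x = np.random.random((4, 3)).astype("float32")
out = model(small_x, training=False)           # eager tensor, cheaper per call
out_np = out.numpy()                           # convert when a NumPy array is needed
```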

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.

reset_metrics()

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()
property run_eagerly

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
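A tiny sketch of toggling this attribute for step-by-step debugging; the model is an illustrative assumption.

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(3,))
model = tf.keras.models.Model(inputs, tf.keras.layers.Dense(1)(inputs))
model.compile(optimizer="adam", loss="mse")
model.run_eagerly = True   # subsequent fit/evaluate/predict run layer calls eagerly
```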

save(filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None, save_traces=True)

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:
filepath: String, PathLike, path to SavedModel or H5 file to save the

model.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

include_optimizer: If True, save optimizer’s state together.

save_format: Either ‘tf’ or ‘h5’, indicating whether to save the

model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

signatures: Signatures to save with the SavedModel. Applicable to the

‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

options: (only applies to SavedModel format)

tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

save_traces: (only applies to SavedModel format) When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```

save_spec(dynamic_batch=True)

Returns the tf.TensorSpec of call inputs as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this example, is
# an empty dict since functional models do not use keyword arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs, **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints](https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights to.

When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise None defaults to ‘tf’.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in HDF5

format.

property state_updates

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False)

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, defaults to [.33, .55, .67, 1.].

print_fn: Print function to use. Defaults to print.

It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

If not provided, defaults to False.

show_trainable: Whether to show if a layer is trainable.

If not provided, defaults to False.

Raises:

ValueError: if summary() is called before the model is built.
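A short sketch of capturing the textual summary with a custom print_fn, per the argument description above; the model is an illustrative assumption.

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(3,))
model = tf.keras.models.Model(inputs, tf.keras.layers.Dense(2)(inputs))

lines = []
model.summary(print_fn=lines.append)   # each summary line is appended to the list
summary_text = "\n".join(lines)
```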

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if

    the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict,

with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments

to be passed to json.dumps().

Returns:

A JSON string.
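A minimal sketch of the to_json()/model_from_json() round trip mentioned above; the architecture is an illustrative assumption.

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(4,))
model = tf.keras.models.Model(inputs, tf.keras.layers.Dense(2)(inputs))

json_string = model.to_json()
rebuilt = tf.keras.models.model_from_json(json_string)  # architecture only, no weights
```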

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays

    (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors

    (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a

weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict,

with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
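The sketch below shows a manual training loop built on train_on_batch, with metrics accumulated across batches via reset_metrics=False. The data, model, and batch size are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(8,))
model = tf.keras.models.Model(inputs, tf.keras.layers.Dense(1)(inputs))
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((64, 8)).astype("float32")
y = np.random.random((64, 1)).astype("float32")

for start in range(0, len(x), 16):
    batch_x, batch_y = x[start:start + 16], y[start:start + 16]
    logs = model.train_on_batch(batch_x, batch_y,
                                reset_metrics=False, return_dict=True)
# logs is a dict such as {'loss': ..., 'mae': ...}, accumulated over the loop.
model.reset_metrics()
```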

train_step(data)

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
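A sketch of overriding train_step for custom training logic, following the pattern described above. The subclass and loss handling are illustrative assumptions, not part of dipy.

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data                              # assumes (inputs, targets) tuples
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)      # forward pass
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        # Returned dict is forwarded to the on_train_batch_end callbacks.
        return {m.name: m.result() for m in self.metrics}
```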

property trainable_weights

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property weights

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

Version

class dipy.nn.histo_resdnn.Version(version: str)

Bases: packaging.version._BaseVersion

Attributes
base_version
dev
epoch
is_devrelease
is_postrelease
is_prerelease
local
major
micro
minor
post
pre
public
release
__init__(version: str) None
property base_version: str
property dev: Optional[int]
property epoch: int
property is_devrelease: bool
property is_postrelease: bool
property is_prerelease: bool
property local: Optional[str]
property major: int
property micro: int
property minor: int
property post: Optional[int]
property pre: Optional[Tuple[str, int]]
property public: str
property release: Tuple[int, ...]

Input

dipy.nn.histo_resdnn.Input(shape=None, batch_size=None, name=None, dtype=None, sparse=None, tensor=None, ragged=None, type_spec=None, **kwargs)

Input() is used to instantiate a Keras tensor.

A Keras tensor is a symbolic tensor-like object, which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.

For instance, if a, b and c are Keras tensors, it becomes possible to do: model = Model(input=[a, b], output=c)

Args:
shape: A shape tuple (integers), not including the batch size.

For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; ‘None’ elements represent dimensions where the shape is not known.

batch_size: Optional static batch size (integer).

name: An optional name string for the layer.

Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn’t provided.

dtype: The data type expected by the input, as a string

(float32, float64, int32…)

sparse: A boolean specifying whether the placeholder to be created is

sparse. Only one of ‘ragged’ and ‘sparse’ can be True. Note that, if sparse is False, sparse tensors can still be passed into the input - they will be densified with a default value of 0.

tensor: Optional existing tensor to wrap into the Input layer.

If set, the layer will use the tf.TypeSpec of this tensor rather than creating a new placeholder tensor.

ragged: A boolean specifying whether the placeholder to be created is

ragged. Only one of ‘ragged’ and ‘sparse’ can be True. In this case, values of ‘None’ in the ‘shape’ argument represent ragged dimensions. For more information about RaggedTensors, see [this guide](https://www.tensorflow.org/guide/ragged_tensors).

type_spec: A tf.TypeSpec object to create the input placeholder from.

When provided, all other args except name must be None.

**kwargs: deprecated arguments support. Supports batch_shape and

batch_input_shape.

Returns:

A tensor.

Example:

```python
# this is a logistic regression in Keras
x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
```

Note that even if eager execution is enabled, Input produces a symbolic tensor-like object (i.e. a placeholder). This symbolic tensor-like object can be used with lower-level TensorFlow ops that take tensors as inputs, as such:

```python
x = Input(shape=(32,))
y = tf.square(x)  # This op will be treated like a layer
model = Model(x, y)
```

(This behavior does not work for higher-order TensorFlow APIs such as control flow and being directly watched by a tf.GradientTape).

However, the resulting model will not track any variables that were used as inputs to TensorFlow ops. All variable usages must happen within Keras layers to make sure they will be tracked by the model’s weights.

The Keras Input can also create a placeholder from an arbitrary tf.TypeSpec, e.g:

```python
x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],
                                        dtype=tf.float32, ragged_rank=1))
y = x.values
model = Model(x, y)
```

When passing an arbitrary tf.TypeSpec, it must represent the signature of an entire batch instead of just one example.

Raises:

ValueError: If both sparse and ragged are provided.

ValueError: If both shape and (batch_input_shape or batch_shape) are provided.

ValueError: If shape, tensor and type_spec are None.

ValueError: If arguments besides type_spec are non-None while type_spec is passed.

ValueError: If any unrecognized parameters are provided.

doctest_skip_parser

dipy.nn.histo_resdnn.doctest_skip_parser(func)

Decorator replaces custom skip test markup in doctests.

Say a function has a docstring:

>>> something # skip if not HAVE_AMODULE
>>> something + else
>>> something # skip if HAVE_BMODULE

This decorator will evaluate the expression after skip if. If this evaluates to True, then the comment is replaced by # doctest: +SKIP. If False, then the comment is just removed. The expression is evaluated in the globals scope of func.

For example, if the module global HAVE_AMODULE is False, and module global HAVE_BMODULE is False, the returned function will have docstring:

>>> something 
>>> something + else
>>> something

get_bval_indices

dipy.nn.histo_resdnn.get_bval_indices(bvals, bval, tol=20)

Get indices where the b-value is bval

Parameters
bvals: ndarray

Array containing the b-values

bval: float or int

b-value to extract indices

tol: int

The tolerated gap between the b-values to extract and the actual b-values.

Returns
Array of indices where the b-value is bval
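A short sketch of get_bval_indices on a small b-value array; the values and tolerance are illustrative assumptions.

```python
import numpy as np
from dipy.nn.histo_resdnn import get_bval_indices

bvals = np.array([0, 5, 995, 1000, 1005, 2000])
idx = get_bval_indices(bvals, bval=1000, tol=20)   # indices of the b=1000 shell
```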

get_fnames

dipy.nn.histo_resdnn.get_fnames(name='small_64D')

Provide full paths to example or test datasets.

Parameters
namestr

the filename/s of which dataset to return, one of:

  • ‘small_64D’ small region of interest nifti,bvecs,bvals 64 directions

  • ‘small_101D’ small region of interest nifti, bvecs, bvals 101 directions

  • ‘aniso_vox’ volume with anisotropic voxel size as Nifti

  • ‘fornix’ 300 tracks in Trackvis format (from Pittsburgh Brain Competition)

  • ‘gqi_vectors’ the scanner wave vectors needed for a GQI acquisitions of 101 directions tested on Siemens 3T Trio

  • ‘small_25’ small ROI (10x8x2) DTI data (b value 2000, 25 directions)

  • ‘test_piesno’ slice of N=8, K=14 diffusion data

  • ‘reg_c’ small 2D image used for validating registration

  • ‘reg_o’ small 2D image used for validating registration

  • ‘cb_2’ two vectorized cingulum bundles

Returns
fnamestuple

filenames for dataset

Examples

>>> import numpy as np
>>> from dipy.io.image import load_nifti
>>> from dipy.data import get_fnames
>>> fimg, fbvals, fbvecs = get_fnames('small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> data, affine = load_nifti(fimg)
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True

get_sphere

dipy.nn.histo_resdnn.get_sphere(name='symmetric362')

provide triangulated spheres

Parameters
namestr

which sphere - one of:

  • ‘symmetric362’

  • ‘symmetric642’

  • ‘symmetric724’

  • ‘repulsion724’

  • ‘repulsion100’

  • ‘repulsion200’

Returns
spherea dipy.core.sphere.Sphere class instance

Examples

>>> import numpy as np
>>> from dipy.data import get_sphere
>>> sphere = get_sphere('symmetric362')
>>> verts, faces = sphere.vertices, sphere.faces
>>> verts.shape == (362, 3)
True
>>> faces.shape == (720, 3)
True
>>> verts, faces = get_sphere('not a sphere name') 
Traceback (most recent call last):
    ...
DataError: No sphere called "not a sphere name"

optional_package

dipy.nn.histo_resdnn.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters
namestr

package name

trip_msgNone or str

message to give when someone tries to use the return package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns
pkg_likemodule or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed

have_pkgbool

True if import for package was successful, false otherwise

module_setupfunction

callable usually set as setup_module in calling namespace, to allow skipping tests.

Examples

Typical use would be something like this at the top of a module using an optional package:

>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')

Of course in this case the package doesn’t exist, and so, in the module:

>>> have_pkg
False

and

>>> pkg.some_function() 
Traceback (most recent call last):
    ...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError

If the module does exist - we get the module

>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True

Or a submodule if that’s what we asked for

>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True

set_logger_level

dipy.nn.histo_resdnn.set_logger_level(log_level)

Change the logger of the HistoResDNN to one of the following: DEBUG, INFO, WARNING, CRITICAL, ERROR

Parameters
log_levelstr

Log level for the HistoResDNN only
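A minimal usage sketch, raising the HistoResDNN logger to DEBUG while experimenting.

```python
from dipy.nn.histo_resdnn import set_logger_level

set_logger_level('DEBUG')   # one of DEBUG, INFO, WARNING, CRITICAL, ERROR
```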

sf_to_sh

dipy.nn.histo_resdnn.sf_to_sh(sf, sphere, sh_order=4, basis_type=None, full_basis=False, legacy=True, smooth=0.0)

Spherical function to spherical harmonics (SH).

Parameters
sfndarray

Values of a function on the given sphere.

sphereSphere

The points on which the sf is defined.

sh_orderint, optional

Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric basis and (sh_order + 1) * (sh_order + 1) coefficients for a full SH basis.

basis_type{None, ‘tournier07’, ‘descoteaux07’}, optional

None for the default DIPY basis, tournier07 for the Tournier 2007 [2] [3] basis, descoteaux07 for the Descoteaux 2007 [1] basis (None defaults to descoteaux07).

full_basis: bool, optional

True for using a SH basis containing even and odd order SH functions. False for using a SH basis consisting only of even order SH functions.

legacy: bool, optional

True to use a legacy basis definition for backward compatibility with previous tournier07 and descoteaux07 implementations.

smoothfloat, optional

Lambda-regularization in the SH fit.

Returns
shndarray

SH coefficients representing the input function.

References

1

Descoteaux, M., Angelino, E., Fitzgibbons, S. and Deriche, R. Regularized, Fast, and Robust Analytical Q-ball Imaging. Magn. Reson. Med. 2007;58:497-510.

2

Tournier J.D., Calamante F. and Connelly A. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage. 2007;35(4):1459-1472.

3

Tournier J-D, Smith R, Raffelt D, Tabbara R, Dhollander T, Pietsch M, et al. MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage. 2019 Nov 15;202:116-137.

sh_to_sf

dipy.nn.histo_resdnn.sh_to_sf(sh, sphere, sh_order=4, basis_type=None, full_basis=False, legacy=True)

Spherical harmonics (SH) to spherical function (SF).

Parameters
shndarray

SH coefficients representing a spherical function.

sphereSphere

The points on which to sample the spherical function.

sh_orderint, optional

Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric basis and (sh_order + 1) * (sh_order + 1) coefficients for a full SH basis.

basis_type{None, ‘tournier07’, ‘descoteaux07’}, optional

None for the default DIPY basis, tournier07 for the Tournier 2007 [2] [3] basis, descoteaux07 for the Descoteaux 2007 [1] basis (None defaults to descoteaux07).

full_basis: bool, optional

True to use a SH basis containing even and odd order SH functions. Else, use a SH basis consisting only of even order SH functions.

legacy: bool, optional

True to use a legacy basis definition for backward compatibility with previous tournier07 and descoteaux07 implementations.

Returns
sfndarray

Spherical function values on the sphere.

References

1

Descoteaux, M., Angelino, E., Fitzgibbons, S. and Deriche, R. Regularized, Fast, and Robust Analytical Q-ball Imaging. Magn. Reson. Med. 2007;58:497-510.

2

Tournier J.D., Calamante F. and Connelly A. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage. 2007;35(4):1459-1472.

3

Tournier J-D, Smith R, Raffelt D, Tabbara R, Dhollander T, Pietsch M, et al. MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage. 2019 Nov 15;202:116-137.
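The sketch below round-trips a spherical function through sf_to_sh and sh_to_sf on a DIPY sphere; the synthetic (antipodally symmetric) function and the smoothing value are illustrative assumptions.

```python
import numpy as np
from dipy.data import get_sphere
from dipy.nn.histo_resdnn import sf_to_sh, sh_to_sf

sphere = get_sphere('repulsion100')
sf = np.abs(sphere.vertices[:, 2])     # simple symmetric function on the sphere

sh = sf_to_sh(sf, sphere, sh_order=4, basis_type='descoteaux07', smooth=0.006)
sf_back = sh_to_sf(sh, sphere, sh_order=4, basis_type='descoteaux07')
```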

sph_harm_ind_list

dipy.nn.histo_resdnn.sph_harm_ind_list(sh_order, full_basis=False)

Returns the degree (m) and order (n) of all the symmetric spherical harmonics of degree less than or equal to sh_order. The results, m_list and n_list, are kx1 arrays, where k depends on sh_order. They can be passed to real_sh_descoteaux_from_index() and real_sh_tournier_from_index().

Parameters
sh_orderint

even int > 0, max order to return

full_basis: bool, optional

True for SH basis with even and odd order terms

Returns
m_listarray

degrees of even spherical harmonics

n_listarray

orders of even spherical harmonics

See also

shm.real_sh_descoteaux_from_index, shm.real_sh_tournier_from_index
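A short sketch listing the (m, n) index pairs for a symmetric basis of order 4.

```python
from dipy.nn.histo_resdnn import sph_harm_ind_list

m_list, n_list = sph_harm_ind_list(sh_order=4)
# A symmetric basis of order 4 has (4 + 1) * (4 + 2) / 2 = 15 coefficients.
assert len(m_list) == len(n_list) == 15
```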

unique_bvals_magnitude

dipy.nn.histo_resdnn.unique_bvals_magnitude(bvals, bmag=None, rbvals=False)

This function gives the unique rounded b-values of the data

Parameters
bvalsndarray

Array containing the b-values

bmagint

The order of magnitude by which the b-values have to differ to be considered a unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

rbvalsbool, optional

If True function also returns all individual rounded b-values. Default: False

Returns
ubvalsndarray

Array containing the rounded unique b-values
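A minimal sketch of unique_bvals_magnitude on noisy shell values; the b-values are illustrative assumptions.

```python
import numpy as np
from dipy.nn.histo_resdnn import unique_bvals_magnitude

bvals = np.array([0, 0, 995, 1000, 1005, 2010, 1995])
ubvals = unique_bvals_magnitude(bvals)   # rounded unique shells, e.g. [0, 1000, 2000]
```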

MultipleLayerPercepton

class dipy.nn.model.MultipleLayerPercepton(input_shape=(28, 28), num_hidden=[128], act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')

Bases: object

Methods

evaluate(x_test, y_test[, verbose])

Evaluate the model on test dataset.

fit(x_train, y_train[, epochs])

Train the model on train dataset.

predict(x_test)

Predict the output from input samples.

summary()

Get the summary of the model.

__init__(input_shape=(28, 28), num_hidden=[128], act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')

Multiple Layer Perceptron with Dropout.

Parameters
input_shapetuple

Shape of data to be trained

num_hiddenlist

List of number of nodes in hidden layers

act_hiddenstring

Activation function used in hidden layer

dropoutfloat

Dropout ratio

num_out10

Number of nodes in output layer

act_outstring

Activation function used in output layer

optimizerstring

Select optimizer. Default adam.

lossstring

Select loss function for measuring accuracy. Default sparse_categorical_crossentropy.

evaluate(x_test, y_test, verbose=2)

Evaluate the model on test dataset.

The evaluate method will evaluate the model on a test dataset.

Parameters
x_testndarray

the x_test is the test dataset

y_testndarray shape=(BatchSize,)

the y_test is the labels of the test dataset

verboseint (Default = 2)

Set verbose to 0, 1 or 2 to choose how the progress for each epoch is displayed.

Returns
evaluateList

return list of loss value and accuracy value on test dataset

fit(x_train, y_train, epochs=5)

Train the model on train dataset.

The fit method will train the model for a fixed number of epochs (iterations) on a dataset.

Parameters
x_trainndarray

the x_train is the train dataset

y_trainndarray shape=(BatchSize,)

the y_train is the labels of the train dataset

epochsint (Default = 5)

the number of epochs

Returns
histobject

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs

predict(x_test)

Predict the output from input samples.

The predict method generates output predictions for the input samples.

Parameters
x_trainndarray

the x_test is the test dataset or input samples

Returns
predictndarray shape(TestSize,OutputSize)

Numpy array(s) of predictions.

summary()

Get the summary of the model.

The summary is textual and includes information about: The layers and their order in the model. The output shape of each layer.

Returns
summaryNoneType

the summary of the model
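The following is a usage sketch for MultipleLayerPercepton on random MNIST-shaped data; the array shapes, hidden sizes, and epoch count are illustrative assumptions.

```python
import numpy as np
from dipy.nn.model import MultipleLayerPercepton

x_train = np.random.random((64, 28, 28)).astype("float32")
y_train = np.random.randint(0, 10, size=(64,))

mlp = MultipleLayerPercepton(input_shape=(28, 28), num_hidden=[128, 64])
hist = mlp.fit(x_train, y_train, epochs=2)
scores = mlp.evaluate(x_train, y_train, verbose=0)   # [loss, accuracy]
probs = mlp.predict(x_train)                         # shape (64, 10)
```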

SingleLayerPerceptron

class dipy.nn.model.SingleLayerPerceptron(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')

Bases: object

Methods

evaluate(x_test, y_test[, verbose])

Evaluate the model on test dataset.

fit(x_train, y_train[, epochs])

Train the model on train dataset.

predict(x_test)

Predict the output from input samples.

summary()

Get the summary of the model.

__init__(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')

Single Layer Perceptron with Dropout.

Parameters
input_shapetuple

Shape of data to be trained

num_hiddenint

Number of nodes in hidden layer

act_hiddenstring

Activation function used in hidden layer

dropoutfloat

Dropout ratio

num_out10

Number of nodes in output layer

act_outstring

Activation function used in output layer

optimizerstring

Select optimizer. Default adam.

lossstring

Select loss function for measuring accuracy. Default sparse_categorical_crossentropy.

evaluate(x_test, y_test, verbose=2)

Evaluate the model on test dataset.

The evaluate method will evaluate the model on a test dataset.

Parameters
x_testndarray

the x_test is the test dataset

y_testndarray shape=(BatchSize,)

the y_test is the labels of the test dataset

verboseint (Default = 2)

Set verbose to 0, 1 or 2 to choose how the progress for each epoch is displayed.

Returns
evaluateList

return list of loss value and accuracy value on test dataset

fit(x_train, y_train, epochs=5)

Train the model on train dataset.

The fit method will train the model for a fixed number of epochs (iterations) on a dataset.

Parameters
x_trainndarray

the x_train is the train dataset

y_trainndarray shape=(BatchSize,)

the y_train is the labels of the train dataset

epochsint (Default = 5)

the number of epochs

Returns
histobject

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs

predict(x_test)

Predict the output from input samples.

The predict method generates output predictions for the input samples.

Parameters
x_trainndarray

the x_test is the test dataset or input samples

Returns
predictndarray shape(TestSize,OutputSize)

Numpy array(s) of predictions.

summary()

Get the summary of the model.

The summary is textual and includes information about: The layers and their order in the model. The output shape of each layer.

Returns
summaryNoneType

the summary of the model
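The SingleLayerPerceptron follows the same workflow, with a single hidden layer; the sketch below uses the same illustrative MNIST-shaped assumptions as above.

```python
import numpy as np
from dipy.nn.model import SingleLayerPerceptron

x_train = np.random.random((64, 28, 28)).astype("float32")
y_train = np.random.randint(0, 10, size=(64,))

slp = SingleLayerPerceptron(input_shape=(28, 28), num_hidden=128)
slp.fit(x_train, y_train, epochs=2)
slp.summary()   # prints the layer-by-layer summary
```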

Version

class dipy.nn.model.Version(version: str)

Bases: packaging.version._BaseVersion

Attributes
base_version
dev
epoch
is_devrelease
is_postrelease
is_prerelease
local
major
micro
minor
post
pre
public
release
__init__(version: str) None
property base_version: str
property dev: Optional[int]
property epoch: int
property is_devrelease: bool
property is_postrelease: bool
property is_prerelease: bool
property local: Optional[str]
property major: int
property micro: int
property minor: int
property post: Optional[int]
property pre: Optional[Tuple[str, int]]
property public: str
property release: Tuple[int, ...]

optional_package

dipy.nn.model.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters
namestr

package name

trip_msgNone or str

message to give when someone tries to use the return package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns
pkg_likemodule or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed

have_pkgbool

True if import for package was successful, false otherwise

module_setupfunction

callable usually set as setup_module in calling namespace, to allow skipping tests.

Examples

Typical use would be something like this at the top of a module using an optional package:

>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')

Of course in this case the package doesn’t exist, and so, in the module:

>>> have_pkg
False

and

>>> pkg.some_function() 
Traceback (most recent call last):
    ...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError

If the module does exist - we get the module

>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True

Or a submodule if that’s what we asked for

>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True