braindecode.torch_ext package

Torch extensions, for example new functions or modules.

Submodules

braindecode.torch_ext.constraints module

class braindecode.torch_ext.constraints.MaxNormDefaultConstraint[source]

Bases: object

Applies a max L2 norm of 2 to the weights of all layers before the final layer, and a max L2 norm of 0.5 to the weights of the final layer, as done in [1].

References

[1] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730
apply(model)[source]
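
A minimal training-loop sketch; the two-layer model, random input, and SGD optimizer below are illustrative stand-ins, not part of the API:

>>> import torch
>>> from torch import nn
>>> from braindecode.torch_ext.constraints import MaxNormDefaultConstraint
>>> model = nn.Sequential(nn.Conv2d(1, 4, (3, 3)), nn.Conv2d(4, 2, (1, 1)))
>>> constraint = MaxNormDefaultConstraint()
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> loss = model(torch.randn(8, 1, 10, 10)).sum()
>>> loss.backward()
>>> optimizer.step()
>>> constraint.apply(model)  # re-project weights after the parameter update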

braindecode.torch_ext.functions module

braindecode.torch_ext.functions.square(x)[source]
braindecode.torch_ext.functions.safe_log(x, eps=1e-06)[source]

Prevents \(\log(0)\) by computing \(\log(\max(x, \mathrm{eps}))\).

braindecode.torch_ext.functions.identity(x)[source]
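
These are simple elementwise functions meant to be used inside models (e.g. via Expression below). A short sketch of their behavior; the input tensor is illustrative:

>>> import torch
>>> from braindecode.torch_ext.functions import square, safe_log, identity
>>> x = torch.tensor([0.0, 2.0])
>>> squared = square(x)   # elementwise x ** 2
>>> logged = safe_log(x)  # log(max(x, eps)); finite even at x = 0
>>> same = identity(x)    # returns its input unchanged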

braindecode.torch_ext.init module

braindecode.torch_ext.init.glorot_weight_zero_bias(model)[source]

Initialize parameters of all modules by initializing weights with Glorot/Xavier uniform initialization and setting biases to zero. Weights of batch norm layers are set to 1.

Parameters: model (Module)
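
A short sketch on a toy model (the model itself is an illustrative stand-in):

>>> from torch import nn
>>> from braindecode.torch_ext.init import glorot_weight_zero_bias
>>> model = nn.Sequential(nn.Conv2d(1, 4, (3, 3)), nn.BatchNorm2d(4))
>>> glorot_weight_zero_bias(model)  # conv weights Xavier-uniform, biases 0, batch norm weights 1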

braindecode.torch_ext.losses module

braindecode.torch_ext.losses.log_categorical_crossentropy_1_hot(logpreds, targets, dims=None)[source]

Returns the log categorical crossentropy for the given log-predictions and targets; targets should be one-hot encoded.

Computes \(-\mathrm{logpreds} \cdot \mathrm{targets}\)

Parameters:
  • logpreds (torch.autograd.Variable) – Logarithm of softmax output.
  • targets (torch.autograd.Variable) – One-hot encoded targets
  • dims (int or iterable of int, optional) – Compute the sum across these dims
Returns:

loss – \(-\mathrm{logpreds} \cdot \mathrm{targets}\)

Return type:

torch.autograd.Variable
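
A sketch with hand-built tensors (shapes and class labels are illustrative); the one-hot targets are constructed manually rather than with a version-specific helper:

>>> import torch
>>> import torch.nn.functional as F
>>> from braindecode.torch_ext.losses import log_categorical_crossentropy_1_hot
>>> log_preds = F.log_softmax(torch.randn(4, 3), dim=1)        # 4 examples, 3 classes
>>> targets = torch.zeros(4, 3)
>>> targets[torch.arange(4), torch.tensor([0, 2, 1, 0])] = 1.0  # one-hot encode labels
>>> loss = log_categorical_crossentropy_1_hot(log_preds, targets, dims=(1,))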

braindecode.torch_ext.losses.log_categorical_crossentropy(log_preds, targets, class_weights=None)[source]

Returns the log categorical crossentropy for the given log-predictions and targets.

Computes \(-\mathrm{logpreds} \cdot \mathrm{targets}\) if targets are assumed to be one-hot encoded. Also works for targets that are not one-hot encoded; in this case, only targets within the range of the expected class labels, i.e. [0, log_preds.size()[1] - 1], are used.

Parameters:
  • log_preds (torch.autograd.Variable) – Logarithm of softmax output.
  • targets (torch.autograd.Variable) – Targets, either one-hot encoded or as class labels.
  • class_weights (list of int, optional) – Weights given to the losses of the different classes
Returns:

loss

Return type:

torch.autograd.Variable
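
A sketch with plain class labels (all values illustrative):

>>> import torch
>>> import torch.nn.functional as F
>>> from braindecode.torch_ext.losses import log_categorical_crossentropy
>>> log_preds = F.log_softmax(torch.randn(4, 3), dim=1)
>>> targets = torch.tensor([0, 2, 1, 0])  # class labels, not one-hot
>>> loss = log_categorical_crossentropy(log_preds, targets)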

braindecode.torch_ext.losses.l2_loss(model)[source]
braindecode.torch_ext.losses.l1_loss(model)[source]
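
Both take the model itself and return a scalar penalty over its parameters, typically added to the task loss. A sketch; the model, data, and weighting factor are illustrative:

>>> import torch
>>> from torch import nn
>>> import torch.nn.functional as F
>>> from braindecode.torch_ext.losses import l2_loss, l1_loss
>>> model = nn.Linear(10, 2)
>>> x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
>>> task_loss = F.cross_entropy(model(x), y)
>>> total_loss = task_loss + 1e-4 * l2_loss(model)  # add an L2 penalty on the parameters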

braindecode.torch_ext.modules module

class braindecode.torch_ext.modules.Expression(expression_fn)[source]

Bases: torch.nn.modules.module.Module

Computes the given expression on the forward pass.

Parameters: expression_fn (function) – Should accept a variable number of objects of type torch.autograd.Variable and compute its output from them.
forward(*x)[source]
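
Expression is typically used to drop a stateless function into a network definition. A sketch combining it with the functions module; the square → pool → log pattern is similar to the one in braindecode's shallow network, and the pooling size is illustrative:

>>> from torch import nn
>>> from braindecode.torch_ext.modules import Expression
>>> from braindecode.torch_ext.functions import square, safe_log
>>> block = nn.Sequential(
...     Expression(square),    # elementwise square
...     nn.AvgPool2d((4, 1)),  # average over time
...     Expression(safe_log),  # numerically safe log
... )
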
class braindecode.torch_ext.modules.AvgPool2dWithConv(kernel_size, stride, dilation=1, padding=0)[source]

Bases: torch.nn.modules.module.Module

Computes average pooling using a convolution, so that a dilation parameter is available (the standard average pooling module does not support dilation).

Parameters:
  • kernel_size ((int,int)) – Size of the pooling region.
  • stride ((int,int)) – Stride of the pooling operation.
  • dilation (int or (int,int)) – Dilation applied to the pooling filter.
  • padding (int or (int,int)) – Padding applied before the pooling operation.
forward(x)[source]
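
A sketch showing it as a drop-in replacement for average pooling when dilation is needed (all sizes are illustrative):

>>> import torch
>>> from braindecode.torch_ext.modules import AvgPool2dWithConv
>>> pool = AvgPool2dWithConv(kernel_size=(3, 1), stride=(1, 1), dilation=(2, 1))
>>> out = pool(torch.randn(2, 4, 20, 5))  # batch x channels x height x width
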
class braindecode.torch_ext.modules.IntermediateOutputWrapper(to_select, model)[source]

Bases: torch.nn.modules.module.Module

Wraps a network model such that the outputs of intermediate layers can be returned; forward() returns a list of the intermediate activations of the network during the forward pass.

Parameters:
  • to_select (list) – List of module names for which activations should be returned
  • model (model object) – Network model

Examples

>>> model = Deep4Net()
>>> select_modules = ['conv_spat', 'conv_2', 'conv_3', 'conv_4']  # specify intermediate outputs
>>> model_pert = IntermediateOutputWrapper(select_modules, model)  # wrap model
forward(x)[source]

braindecode.torch_ext.optimizers module

class braindecode.torch_ext.optimizers.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]

Bases: torch.optim.optimizer.Optimizer

Implements the Adam algorithm with weight decay fixed as described in [AdamW].

Parameters:
  • params (iterable) – Iterable of parameters to optimize or dicts defining parameter groups
  • lr (float, optional) – Learning rate.
  • betas (Tuple[float, float], optional) – Coefficients used for computing running averages of gradient and its square
  • eps (float, optional) – Term added to the denominator to improve numerical stability
  • weight_decay (float, optional) – The “fixed” weight decay.

References

[AdamW] Loshchilov, I. & Hutter, F. (2017). Fixing Weight Decay Regularization in Adam. arXiv preprint arXiv:1711.05101. Online: https://arxiv.org/abs/1711.05101
step(closure=None)[source]

Performs a single optimization step.

Parameters: closure (callable, optional) – A closure that reevaluates the model and returns the loss.
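
Used like any torch optimizer; a minimal sketch (learning rate, decay values, model, and data are illustrative):

>>> import torch
>>> from torch import nn
>>> from braindecode.torch_ext.optimizers import AdamW
>>> model = nn.Linear(10, 2)
>>> optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
>>> loss = model(torch.randn(8, 10)).sum()
>>> optimizer.zero_grad()
>>> loss.backward()
>>> optimizer.step()  # decay is applied directly to the weights, not via the gradient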

braindecode.torch_ext.schedulers module

class braindecode.torch_ext.schedulers.ScheduledOptimizer(scheduler, optimizer, schedule_weight_decay)[source]

Bases: object

step()[source]
state_dict()[source]
load_state_dict(state_dict)[source]
zero_grad()[source]
class braindecode.torch_ext.schedulers.CosineAnnealing(n_updates_per_period)[source]

Bases: object

get_lr(initial_val, i_update)[source]
get_weight_decay(initial_val, i_update)[source]
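
A sketch wiring CosineAnnealing into ScheduledOptimizer; n_updates_per_period would typically be the total number of batch updates over which the learning rate is annealed (all counts here are illustrative):

>>> from torch import nn
>>> from braindecode.torch_ext.optimizers import AdamW
>>> from braindecode.torch_ext.schedulers import ScheduledOptimizer, CosineAnnealing
>>> model = nn.Linear(10, 2)
>>> base_optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
>>> n_updates = 20 * 50  # e.g. 20 epochs x 50 batches per epoch
>>> optimizer = ScheduledOptimizer(CosineAnnealing(n_updates), base_optimizer,
...                                schedule_weight_decay=True)
>>> # then call optimizer.zero_grad() / optimizer.step() as usual in the training loop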

braindecode.torch_ext.util module

braindecode.torch_ext.util.np_to_var(X, requires_grad=False, dtype=None, pin_memory=False, **tensor_kwargs)[source]

Convenience function to transform a numpy array into a torch.Tensor.

Converts X to an ndarray using numpy.asarray if necessary.

Parameters:
  • X (ndarray or list or number) – Input array
  • requires_grad (bool) – Passed on to the tensor constructor
  • dtype (numpy dtype, optional)
  • tensor_kwargs – Passed on to the tensor constructor
Returns:

var

Return type:

torch.Tensor

braindecode.torch_ext.util.var_to_np(var)[source]

Convenience function to transform a torch.Tensor into a numpy array.

Should work both for CPU and GPU.
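
The two functions round-trip in the common case; a short sketch:

>>> import numpy as np
>>> from braindecode.torch_ext.util import np_to_var, var_to_np
>>> X = np.random.rand(2, 3).astype(np.float32)
>>> tensor = np_to_var(X)
>>> np.allclose(var_to_np(tensor), X)
True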

braindecode.torch_ext.util.set_random_seeds(seed, cuda)[source]

Set seeds for the Python random module, numpy.random, and torch.

Parameters:
  • seed (int) – Random seed.
  • cuda (bool) – Whether to set cuda seed with torch.
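
Typically called once at the start of an experiment, before the model is built (the seed value is illustrative):

>>> import torch
>>> from braindecode.torch_ext.util import set_random_seeds
>>> set_random_seeds(seed=20170629, cuda=torch.cuda.is_available())
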
braindecode.torch_ext.util.confirm_gpu_availability()[source]

Attempts to create a FloatTensor on the GPU; should crash if no GPU is available.

Returns:

success – Always returns True; crashes if no GPU is available.

Return type:

bool