braindecode.torch_ext package¶
Torch extensions, for example new functions or modules.
Submodules¶
braindecode.torch_ext.constraints module¶
class braindecode.torch_ext.constraints.MaxNormDefaultConstraint[source]¶
Bases: object
Applies a max-norm constraint with L2 norm 2 to the weights of all layers up to the final layer, and with L2 norm 0.5 to the weights of the final layer, as done in [1].
References
[1] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730
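A minimal usage sketch, assuming the constraint exposes an apply(model) method as used in braindecode training loops; the stand-in model here is illustrative only:
>>> import torch
>>> from torch import nn
>>> from braindecode.torch_ext.constraints import MaxNormDefaultConstraint
>>> model = nn.Sequential(nn.Linear(10, 4), nn.Linear(4, 2))  # stand-in for an EEG decoding model
>>> constraint = MaxNormDefaultConstraint()
>>> # typically called right after optimizer.step(), re-projecting the weights onto the max-norm balls
>>> constraint.apply(model)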
braindecode.torch_ext.functions module¶
braindecode.torch_ext.init module¶
braindecode.torch_ext.losses module¶
braindecode.torch_ext.losses.log_categorical_crossentropy_1_hot(logpreds, targets, dims=None)[source]¶
Returns the log categorical cross-entropy for given log-predictions and targets; targets should be one-hot encoded.
Computes \(-\mathrm{logpreds} \cdot \mathrm{targets}\)
Parameters: - logpreds (torch.autograd.Variable) – Logarithm of softmax output.
- targets (torch.autograd.Variable) – One-hot encoded targets
- dims (int or iterable of int, optional) – Sum is computed across these dimensions.
Returns: loss – \(-\mathrm{logpreds} \cdot \mathrm{targets}\)
Return type: torch.autograd.Variable
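A short call-pattern sketch; the shapes, the one-hot targets, and summing over dimension 1 are illustrative assumptions:
>>> import torch
>>> import torch.nn.functional as F
>>> from braindecode.torch_ext.losses import log_categorical_crossentropy_1_hot
>>> log_preds = F.log_softmax(torch.randn(4, 3), dim=1)  # logarithm of softmax output
>>> targets = torch.eye(3)[[0, 2, 1, 0]]                 # one-hot encoded targets
>>> loss = log_categorical_crossentropy_1_hot(log_preds, targets, dims=1)  # per-example loss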
braindecode.torch_ext.losses.log_categorical_crossentropy(log_preds, targets, class_weights=None)[source]¶
Returns the log categorical cross-entropy for given log-predictions and targets.
Computes \(-\mathrm{logpreds} \cdot \mathrm{targets}\) if targets are assumed to be one-hot encoded. Also works for targets that are not one-hot encoded; in that case only targets within the range of the expected class labels, i.e., [0, log_preds.size()[1] - 1], are used.
Parameters: - log_preds (torch.autograd.Variable) – Logarithm of softmax output.
- targets (torch.autograd.Variable)
- class_weights (list of int, optional) – Weights given to loss of different classes
Returns: loss
Return type: torch.autograd.Variable
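A short sketch with integer class labels; the shapes and label values are illustrative assumptions:
>>> import torch
>>> import torch.nn.functional as F
>>> from braindecode.torch_ext.losses import log_categorical_crossentropy
>>> log_preds = F.log_softmax(torch.randn(4, 3), dim=1)  # logarithm of softmax output
>>> targets = torch.tensor([0, 2, 1, 0])                 # class labels, not one-hot encoded
>>> loss = log_categorical_crossentropy(log_preds, targets)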
braindecode.torch_ext.modules module¶
class braindecode.torch_ext.modules.Expression(expression_fn)[source]¶
Bases: torch.nn.modules.module.Module
Compute the given expression on the forward pass.
Parameters: expression_fn (function) – Should accept a variable number of objects of type torch.autograd.Variable to compute its output.
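A small sketch that wraps an elementwise function as a layer inside nn.Sequential; the squaring function and the surrounding layers are illustrative choices:
>>> import torch
>>> from torch import nn
>>> from braindecode.torch_ext.modules import Expression
>>> square = Expression(lambda x: x * x)  # compute x*x on the forward pass
>>> net = nn.Sequential(nn.Conv2d(1, 4, (3, 3)), square)
>>> out = net(torch.randn(2, 1, 10, 10))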
class braindecode.torch_ext.modules.AvgPool2dWithConv(kernel_size, stride, dilation=1, padding=0)[source]¶
Bases: torch.nn.modules.module.Module
Compute average pooling using a convolution, in order to have a dilation parameter.
Parameters: - kernel_size ((int,int)) – Size of the pooling region.
- stride ((int,int)) – Stride of the pooling operation.
- dilation (int or (int,int)) – Dilation applied to the pooling filter.
- padding (int or (int,int)) – Padding applied before the pooling operation.
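A short sketch on a 4-dimensional (batch, channels, height, width) input; the kernel, stride, and dilation values are illustrative assumptions:
>>> import torch
>>> from braindecode.torch_ext.modules import AvgPool2dWithConv
>>> pool = AvgPool2dWithConv(kernel_size=(3, 1), stride=(1, 1), dilation=(2, 1))
>>> out = pool(torch.randn(2, 4, 20, 1))  # average pooling implemented via a dilated convolution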
class braindecode.torch_ext.modules.IntermediateOutputWrapper(to_select, model)[source]¶
Bases: torch.nn.modules.module.Module
Wraps a network model such that outputs of intermediate layers can be returned. forward() returns a list of intermediate activations of the network during the forward pass.
Parameters: - to_select (list) – list of module names for which activation should be returned
- model (model object) – network model
Examples
>>> model = Deep4Net()
>>> select_modules = ['conv_spat', 'conv_2', 'conv_3', 'conv_4']  # Specify intermediate outputs
>>> model_pert = IntermediateOutputWrapper(select_modules, model)  # Wrap model
braindecode.torch_ext.optimizers module¶
class braindecode.torch_ext.optimizers.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]¶
Bases: torch.optim.optimizer.Optimizer
Implements the Adam algorithm with the weight decay fix as in [AdamW].
Parameters: - params (iterable) – Iterable of parameters to optimize or dicts defining parameter groups
- lr (float, optional) – Learning rate.
- betas (Tuple[float, float], optional) – Coefficients used for computing running averages of gradient and its square
- eps (float, optional) – Term added to the denominator to improve numerical stability
- weight_decay (float, optional) – The “fixed” weight decay.
References
[AdamW] Loshchilov, I. & Hutter, F. (2017). Fixing Weight Decay Regularization in Adam. arXiv preprint arXiv:1711.05101. Online: https://arxiv.org/abs/1711.05101
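A minimal sketch of the usual optimizer loop; the model, learning rate, and weight-decay value are illustrative assumptions:
>>> import torch
>>> from torch import nn
>>> from braindecode.torch_ext.optimizers import AdamW
>>> model = nn.Linear(10, 2)
>>> optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
>>> loss = model(torch.randn(8, 10)).sum()
>>> optimizer.zero_grad()
>>> loss.backward()
>>> optimizer.step()  # applies the decoupled ("fixed") weight decay during the parameter update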
braindecode.torch_ext.schedulers module¶
braindecode.torch_ext.util module¶
braindecode.torch_ext.util.np_to_var(X, requires_grad=False, dtype=None, pin_memory=False, **tensor_kwargs)[source]¶
Convenience function to transform a numpy array to a torch.Tensor.
Converts X to an ndarray using asarray if necessary.
Parameters: - X (ndarray or list or number) – Input arrays
- requires_grad (bool) – passed on to Variable constructor
- dtype (numpy dtype, optional)
- tensor_kwargs – passed on to the tensor constructor
Returns: var
Return type: torch.Tensor
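A short sketch; the array shape and dtype are illustrative:
>>> import numpy as np
>>> from braindecode.torch_ext.util import np_to_var
>>> X = np.arange(6).reshape(2, 3)
>>> t = np_to_var(X, dtype=np.float32)  # torch.Tensor of shape (2, 3)
>>> t.shape
torch.Size([2, 3])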
braindecode.torch_ext.util.var_to_np(var)[source]¶
Convenience function to transform a torch.Tensor to a numpy array.
Should work for both CPU and GPU tensors.
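A short round-trip sketch together with np_to_var; the array values are illustrative:
>>> import numpy as np
>>> from braindecode.torch_ext.util import np_to_var, var_to_np
>>> t = np_to_var(np.arange(6).reshape(2, 3), dtype=np.float32)
>>> arr = var_to_np(t)  # back to a numpy array
>>> arr.shape
(2, 3)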