qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode module
ATIF-u: spiking neuron with learnable quantization steps.
Author: Andrea Castagnetti <Andrea.CASTAGNETTI@univ-cotedazur.fr>
- qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.heaviside(x: Tensor) → Tensor[source]
Heaviside function.
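A minimal sketch of what a Heaviside step function looks like in PyTorch, assuming the common SNN convention that a value exactly at zero counts as a spike (the module's exact convention is not shown here):

```python
import torch

def heaviside(x: torch.Tensor) -> torch.Tensor:
    """Heaviside step: 1.0 where x >= 0, else 0.0 (sketch; H(0)=1 is an assumption)."""
    return (x >= 0).to(x)

out = heaviside(torch.tensor([-1.0, 0.0, 2.0]))  # → [0.0, 1.0, 1.0]
```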
- class qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.SpikeFunctionSigmoid[source]
  Bases: Function
  Spike function with surrogate backward gradient.
- static forward(ctx: FunctionCtx, *args: Tensor, **_: Any) → Tensor[source]
  Forward of heaviside() function.
  - Parameters:
    - ctx (FunctionCtx) – A context object used to save tensors for backward()
    - args (Tensor) – Tuple of 2 tensors for x and alpha, respectively, saved in ctx for the backward pass
    - _ (Any) – Unused
  - Returns:
    Tensor of heaviside() applied over x.
  - Return type:
    Tensor
- static backward(ctx: Function, *grad_outputs: Tensor) → tuple[Tensor | None, None][source]
  Backward pass of surrogate gradient using torch.Tensor.sigmoid_() function.
  - Parameters:
    - ctx (Function) – A context object holding the tensors saved during forward()
    - grad_outputs (Tensor) – Gradients of the loss with respect to the outputs of forward()
  - Returns:
    A tuple of Tensor and None, the first element being the computed gradient for x (or None if there is no gradient to compute) and the second element a placeholder for the gradient of alpha.
  - Return type:
    tuple[Tensor | None, None]
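The forward/backward pair above can be sketched as a `torch.autograd.Function`: the forward emits a Heaviside spike, and the backward substitutes the derivative of a scaled sigmoid for the undefined step derivative. Class name, surrogate shape, and the H(0)=1 convention are assumptions, not the plugin's verbatim implementation:

```python
import torch

class SpikeSigmoidSketch(torch.autograd.Function):
    """Sketch of a spike function with a sigmoid surrogate gradient."""

    @staticmethod
    def forward(ctx, x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(x, alpha)  # saved for the backward pass
        return (x >= 0).to(x)           # Heaviside spike

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        x, alpha = ctx.saved_tensors
        sig = torch.sigmoid(alpha * x)
        # d/dx sigmoid(alpha * x) = alpha * sig * (1 - sig): smooth stand-in
        # for the step's derivative
        grad_x = grad_output * alpha * sig * (1.0 - sig)
        return grad_x, None  # None: no gradient computed for alpha

x = torch.tensor([0.5, -0.5], requires_grad=True)
y = SpikeSigmoidSketch.apply(x, torch.tensor(2.0))
y.sum().backward()  # x.grad now holds the surrogate gradient
```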
- class qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.IFSRL[source]
  Bases: Module
  IFSRL: Integrate and Fire soft-reset with learnable Vth and activation scaling.
- __init__(v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') → None[source]
  Construct IFSRL.
  - Parameters:
    - v_threshold (float) – Factor to apply to the uniform initialization bounds
    - vth_init_l (float) – Lower bound for uniform initialization of the threshold Tensor
    - vth_init_h (float) – Higher bound for uniform initialization of the threshold Tensor
    - alpha (float) – Sigmoid surrogate scale factor
    - device (str) – Device to run the computation on
  - Return type:
    None
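A minimal sketch of how such a constructor could initialize a learnable threshold, assuming `v_threshold` scales the uniform bounds as documented (class name and parameter shape are placeholders, not the plugin's actual code):

```python
import torch
from torch import nn

class IFSRLSketch(nn.Module):
    """Sketch: IF neuron with a learnable threshold, uniformly initialized."""

    def __init__(self, v_threshold: float = 1.0, vth_init_l: float = 0.8,
                 vth_init_h: float = 1.0, alpha: float = 1.0,
                 device: str = 'cpu') -> None:
        super().__init__()
        # v_threshold acts as a factor on the uniform init bounds
        low = v_threshold * vth_init_l
        high = v_threshold * vth_init_h
        self.vp_th = nn.Parameter(
            torch.empty(1, device=device).uniform_(low, high))
        self.alpha = alpha  # surrogate-gradient scale factor
```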
- get_coeffs() → Tensor[source]
  Return the Tensor of threshold vp_th.
  - Returns:
    Tensor of threshold vp_th
  - Return type:
    Tensor
- set_coeffs(vp_th: Tensor) → None[source]
  Replace the Tensor of threshold vp_th.
  - Parameters:
    - vp_th (Tensor) – New Tensor of threshold to replace vp_th
  - Return type:
    None
- forward(input: Tensor) → Tensor[source]
  Forward of ifsrl_fn().
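One step of integrate-and-fire dynamics with soft reset can be sketched as follows; the function name and single-step interface are illustrative assumptions, not the signature of `ifsrl_fn()`:

```python
import torch

def if_soft_reset_step(v: torch.Tensor, x: torch.Tensor,
                       vth: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """One IF step with soft reset (sketch).

    Integrates the input into the membrane potential, emits a spike where
    the threshold is crossed, then subtracts vth (soft reset) instead of
    zeroing the potential, so the residual charge is preserved.
    """
    v = v + x                 # integrate input into membrane potential
    spike = (v >= vth).to(v)  # fire where the threshold is reached
    v = v - spike * vth       # soft reset: subtract the threshold
    return spike, v
```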
- class qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.ATIF[source]
  Bases: BaseNode
  IFSRLSJ: Integrate and Fire soft-reset with learnable Vth and activation scaling, based on spikingjelly.
- __init__(v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') → None[source]
  Construct ATIF.
  - Parameters:
    - v_threshold (float) – Factor to apply to the uniform initialization bounds
    - vth_init_l (float) – Lower bound for uniform initialization of the threshold Tensor
    - vth_init_h (float) – Higher bound for uniform initialization of the threshold Tensor
    - alpha (float) – Sigmoid surrogate scale factor
    - device (str) – Device to run the computation on
  - Return type:
    None
- property supported_backends: tuple[Literal['torch']]
Supported step_mode and backend.
Only single-step mode with torch backend is supported.
- Returns:
  Tuple of 'torch' if step_mode is 's'
- Raises:
  ValueError – When step_mode is not 's'
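The guard described above can be sketched as a property that returns the backend tuple in single-step mode and raises otherwise; the class is a stand-in, not SpikingJelly's `BaseNode`:

```python
class BackendsSketch:
    """Sketch of a supported_backends property guarded by step_mode."""

    def __init__(self, step_mode: str = 's') -> None:
        self.step_mode = step_mode

    @property
    def supported_backends(self) -> tuple:
        # Only single-step mode ('s') with the torch backend is supported
        if self.step_mode != 's':
            raise ValueError(f"step_mode must be 's', got {self.step_mode!r}")
        return ('torch',)
```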
- get_coeffs() → Tensor[source]
  Return the Tensor of threshold v_threshold.
  - Returns:
    Tensor of threshold v_threshold
  - Return type:
    Tensor
- set_coeffs(v_threshold: Tensor) → None[source]
  Replace the Tensor of threshold v_threshold.
  - Parameters:
    - v_threshold (Tensor) – New Tensor of threshold to replace v_threshold
  - Return type:
    None
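A hypothetical round-trip through the documented getter/setter pair, e.g. to quantize the learned thresholds after training; the stand-in class below only mimics the `get_coeffs`/`set_coeffs` interface and is not the ATIF implementation:

```python
import torch
from torch import nn

class ATIFSketch(nn.Module):
    """Stand-in exposing the documented get_coeffs/set_coeffs interface."""

    def __init__(self) -> None:
        super().__init__()
        self.v_threshold = nn.Parameter(torch.tensor([0.9]))

    def get_coeffs(self) -> torch.Tensor:
        return self.v_threshold.data

    def set_coeffs(self, v_threshold: torch.Tensor) -> None:
        self.v_threshold.data = v_threshold

node = ATIFSketch()
coeffs = node.get_coeffs()
# Example: snap the threshold to a 1/8 grid before replacing it
node.set_coeffs(torch.round(coeffs * 8) / 8)
```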