qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode module

ATIF-u: spiking neuron with learnable quantization steps.

Author: Andrea Castagnetti <Andrea.CASTAGNETTI@univ-cotedazur.fr>

qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.heaviside(x: Tensor) Tensor[source]

Heaviside function.

Parameters:

x (Tensor) – Input tensor

Returns:

Boolean tensor of the same dimension as x where each element is True if the element in x is greater than or equal to 0, False otherwise

Return type:

Tensor
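As an illustrative sketch (not necessarily the module's exact source), the function reduces to an element-wise comparison; note the result is a boolean tensor, so callers typically cast it back to the input dtype:

```python
import torch
from torch import Tensor

def heaviside(x: Tensor) -> Tensor:
    """Element-wise Heaviside step: True where x >= 0, False otherwise."""
    return x >= 0.0

x = torch.tensor([-1.0, 0.0, 2.5])
print(heaviside(x))  # tensor([False,  True,  True])
```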

class qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.SpikeFunctionSigmoid[source]

Bases: Function

Spike function with surrogate gradient for the backward pass.

static forward(ctx: FunctionCtx, *args: Tensor, **_: Any) Tensor[source]

Forward of heaviside() function.

Parameters:
  • ctx (FunctionCtx) – A context object used to save tensors for backward()

  • args (Tensor) – Tuple of 2 tensors for x and alpha, respectively, saved in ctx for backward pass

  • _ (Any) – Unused

Returns:

Tensor of heaviside() applied over x.

Return type:

Tensor

static backward(ctx: Function, *grad_outputs: Tensor) tuple[Tensor | None, None][source]

Backward pass of the surrogate gradient, computed using the torch.Tensor.sigmoid_() function.

Parameters:
  • ctx (Function) – Context to restore the x and alpha tensors from

  • grad_outputs (Tensor) – Output tensor from the computation of the forward() pass

Returns:

A tuple of Tensor and None with the first element being the computed gradient for x or None if there is no gradient to compute and the second element a placeholder for the gradient of alpha.

Return type:

tuple[Tensor | None, None]

classmethod apply(*args: Tensor, **kwargs: Any) Tensor[source]

Apply heaviside activation with sigmoid surrogate gradient.

Parameters:
  • args (Tensor) – Input tensor

  • kwargs (Any) – Unused

Returns:

Output tensor

Return type:

Tensor
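A minimal re-implementation sketch of such an autograd Function (the tensors actually saved and the exact scaling in CustomNode may differ): the forward applies the step function, while the backward replaces its almost-everywhere-zero derivative with the derivative of a scaled sigmoid, alpha * s * (1 - s) where s = sigmoid(alpha * x):

```python
import torch
from torch import Tensor
from torch.autograd import Function

class SpikeFunctionSigmoidSketch(Function):
    """Heaviside forward with sigmoid surrogate gradient (illustrative sketch)."""

    @staticmethod
    def forward(ctx, x: Tensor, alpha: Tensor) -> Tensor:
        ctx.save_for_backward(x, alpha)   # needed to evaluate the surrogate in backward
        return (x >= 0.0).to(x)           # cast boolean spikes back to x's dtype

    @staticmethod
    def backward(ctx, grad_output: Tensor):
        x, alpha = ctx.saved_tensors
        s = torch.sigmoid(alpha * x)
        # Surrogate derivative: d/dx sigmoid(alpha * x) = alpha * s * (1 - s)
        return grad_output * alpha * s * (1.0 - s), None  # None: no gradient for alpha

x = torch.tensor([-0.5, 0.0, 1.0], requires_grad=True)
alpha = torch.tensor(4.0)
y = SpikeFunctionSigmoidSketch.apply(x, alpha)
y.sum().backward()
```

Larger alpha sharpens the surrogate around the threshold, trading gradient smoothness for fidelity to the true step.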

class qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.IFSRL[source]

Bases: Module

IFSRL: Integrate-and-Fire soft-reset with learnable Vth and activation scaling.

__init__(v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') None[source]

Construct IFSRL.

Parameters:
  • v_threshold (float) – Factor to apply to the uniform initialization bounds

  • vth_init_l (float) – Lower bound for uniform initialization of threshold Tensor

  • vth_init_h (float) – Higher bound for uniform initialization of threshold Tensor

  • alpha (float) – Sigmoid surrogate scale factor

  • device (str) – Device to run the computation on

Return type:

None

v: torch.Tensor

get_coeffs() Tensor[source]

Return the Tensor of threshold vp_th.

Returns:

Tensor of threshold vp_th

Return type:

Tensor

set_coeffs(vp_th: Tensor) None[source]

Replace the Tensor of threshold vp_th.

Parameters:

vp_th (Tensor) – New Tensor of threshold to replace vp_th

Return type:

None

reset() None[source]

Reset potential to 0.

Return type:

None

ifsrl_fn(x: Tensor) Tensor[source]

Integrate-and-Fire soft-reset neuron with learnable threshold.

Parameters:

x (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor
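The soft-reset dynamics behind this method can be sketched as a per-timestep update; this is an assumption-laden simplification (the real implementation also routes the spike through the surrogate gradient, and "activation scaling" is read here as scaling emitted spikes by the threshold, which may differ from the actual code):

```python
import torch
from torch import Tensor

def ifsrl_step(v: Tensor, x: Tensor, vp_th: Tensor) -> tuple[Tensor, Tensor]:
    """One soft-reset integrate-and-fire step (illustrative sketch).

    v: membrane potential, x: input, vp_th: learnable threshold.
    """
    v = v + x                       # integrate the input into the potential
    spike = (v >= vp_th).to(x)      # fire where the threshold is crossed
    v = v - spike * vp_th           # soft reset: subtract the threshold, don't zero v
    return spike * vp_th, v         # assumed scaling of activations by the threshold

v = torch.zeros(3)
x = torch.tensor([0.3, 0.9, 1.5])
out, v = ifsrl_step(v, x, torch.tensor(1.0))
```

The soft reset keeps the residual potential (e.g. 1.5 - 1.0 = 0.5 above), so charge above threshold is carried into the next timestep instead of being discarded.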

forward(input: Tensor) Tensor[source]

Forward of ifsrl_fn().

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor
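Pulling the members above together, a self-contained miniature of such a stateful layer might look as follows; this is a sketch, not the real IFSRL (the threshold initialization mirrors the constructor parameters documented above, and the real layer additionally scales activations and uses the sigmoid surrogate so gradients can reach vp_th):

```python
import torch
from torch import nn, Tensor

class MiniIFSRL(nn.Module):
    """Stateful soft-reset IF sketch with a learnable threshold (illustrative)."""

    def __init__(self, n: int, v_threshold: float = 1.0,
                 vth_init_l: float = 0.8, vth_init_h: float = 1.0) -> None:
        super().__init__()
        # Uniform init in [v_threshold * vth_init_l, v_threshold * vth_init_h)
        init = torch.empty(n).uniform_(v_threshold * vth_init_l,
                                       v_threshold * vth_init_h)
        self.vp_th = nn.Parameter(init)   # learnable per-neuron threshold
        self.v = torch.zeros(n)           # membrane potential (persistent state)

    def reset(self) -> None:
        self.v = torch.zeros_like(self.v)

    def forward(self, x: Tensor) -> Tensor:
        self.v = self.v + x
        spike = (self.v >= self.vp_th).to(x)
        self.v = self.v - spike * self.vp_th  # soft reset
        return spike

node = MiniIFSRL(4)
node.reset()                  # clear state between sequences
out = node(torch.ones(4))     # one timestep
```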

training: bool

class qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.ATIF[source]

Bases: BaseNode

IFSRLSJ: Integrate-and-Fire soft-reset with learnable Vth and activation scaling, based on spikingjelly.

__init__(v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') None[source]

Construct ATIF.

Parameters:
  • v_threshold (float) – Factor to apply to the uniform initialization bounds

  • vth_init_l (float) – Lower bound for uniform initialization of threshold Tensor

  • vth_init_h (float) – Higher bound for uniform initialization of threshold Tensor

  • alpha (float) – Sigmoid surrogate scale factor

  • device (str) – Device to run the computation on

Return type:

None

v: torch.Tensor

property supported_backends: tuple[Literal['torch']]

Supported step_mode and backend.

Only single-step mode with torch backend is supported.

Returns:

Tuple of 'torch' if step_mode is 's'

Raises:

ValueError – When step_mode is not 's'
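Because only single-step mode is supported, a time dimension has to be iterated manually by the caller; a generic sketch, assuming a stateful node that is called once per timestep (the lambda below is a hypothetical stand-in, not the real node):

```python
import torch
from torch import Tensor

def multi_step(node, x_seq: Tensor) -> Tensor:
    """Run a single-step node over a [T, ...] input sequence (sketch)."""
    return torch.stack([node(x_t) for x_t in x_seq])

# Hypothetical stand-in node: thresholding at 0.5, stateless for simplicity
out = multi_step(lambda x: (x >= 0.5).to(x), torch.tensor([[0.2], [0.9]]))
```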

get_coeffs() Tensor[source]

Return the Tensor of threshold v_threshold.

Returns:

Tensor of threshold v_threshold

Return type:

Tensor

set_coeffs(v_threshold: Tensor) None[source]

Replace the Tensor of threshold v_threshold.

Parameters:

v_threshold (Tensor) – New Tensor of threshold to replace v_threshold

Return type:

None
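One use such a get/set round-trip enables is transforming the learned thresholds externally (e.g. quantizing them); a sketch with a hypothetical stand-in for the getter/setter pair, assuming they exchange a plain tensor as documented above:

```python
import torch
from torch import nn

# Hypothetical stand-in for a node exposing get_coeffs / set_coeffs
v_threshold = nn.Parameter(torch.tensor([0.83, 0.97]))

def get_coeffs() -> torch.Tensor:
    return v_threshold.detach().clone()

def set_coeffs(new: torch.Tensor) -> None:
    with torch.no_grad():
        v_threshold.copy_(new)  # replace values in place, keeping the Parameter

# Example transform: snap thresholds to a 1/8 grid before deployment
set_coeffs(torch.round(get_coeffs() * 8) / 8)
```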

training: bool

ifsrl_fn(x: Tensor) Tensor[source]

Integrate-and-Fire soft-reset neuron with learnable threshold.

Parameters:

x (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor

single_step_forward(x: Tensor) Tensor[source]

Single-step mode forward of ATIF.

Calls ifsrl_fn().

Parameters:

x (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor