gossipy.model.nn module#

Module contents#

class gossipy.model.nn.AdaLine(dim)#

Bases: gossipy.model.TorchModel

The AdaLine perceptron model.

Implementation of the AdaLine (adaptive linear) perceptron model of Widrow and Hoff. The model is a simple perceptron with a linear activation function.

Parameters

dim (int) – The number of input features.
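
A minimal usage sketch based on the constructor and methods documented here; the feature dimension, batch size, and output shape are illustrative assumptions:

import torch
from gossipy.model.nn import AdaLine

# Illustrative sketch: a 4-feature AdaLine model.
model = AdaLine(dim=4)
model.init_weights()
x = torch.randn(8, 4)        # batch of 8 samples, 4 features each
scores = model(x)            # linear output, one score per sample (exact shape depends on the implementation)
n_params = model.get_size()  # number of model parameters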

forward(x)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters

x (torch.Tensor) – The input tensor.

Return type

Tensor

get_size()#

Returns the number of parameters of the model.

Returns

The number of parameters of the model.

Return type

int

init_weights()#

Initialize the weights of the model.

Return type

None

class gossipy.model.nn.LogisticRegression(input_dim, output_dim)#

Bases: gossipy.model.TorchModel

Logistic regression model.

Implementation of the logistic regression model.

Parameters
  • input_dim (int) – The number of input features.

  • output_dim (int) – The number of output neurons.
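
A minimal usage sketch; the dimensions and batch size are illustrative assumptions:

import torch
from gossipy.model.nn import LogisticRegression

# Illustrative sketch: 10 input features mapped to 2 output neurons.
model = LogisticRegression(input_dim=10, output_dim=2)
model.init_weights()
x = torch.randn(32, 10)      # batch of 32 samples, 10 features each
out = model(x)               # one value per output neuron and per sample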

forward(x)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters

x (torch.Tensor) – The input tensor.

Return type

Tensor

init_weights()#

Initialize the weights of the model.

Return type

None

class gossipy.model.nn.TorchMLP(input_dim, output_dim, hidden_dims=(100,), activation=torch.nn.ReLU)#

Bases: gossipy.model.TorchModel

Multi-layer perceptron model.

Implementation of the multi-layer perceptron model. The model is composed of a sequence of linear layers with the specified activation function (same activation for all layers but the last one).

Parameters
  • input_dim (int) – The number of input features.

  • output_dim (int) – The number of output neurons.

  • hidden_dims (Tuple[int], default=(100,)) – The number of hidden neurons in each hidden layer.

  • activation (torch.nn.modules.activation, default=ReLU) – The activation function of the hidden layers.
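
A minimal usage sketch; the layer sizes and the choice of Tanh are illustrative assumptions:

import torch
from gossipy.model.nn import TorchMLP

# Illustrative sketch: two hidden layers (64 and 32 units) with Tanh activations.
model = TorchMLP(input_dim=20, output_dim=3, hidden_dims=(64, 32), activation=torch.nn.Tanh)
model.init_weights()
out = model(torch.randn(16, 20))   # expected output shape: (16, 3)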

forward(x)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters

x (torch.Tensor) – The input tensor.

Return type

Tensor

init_weights()#

Initialize the weights of the model.

Return type

None

class gossipy.model.nn.TorchPerceptron(dim, activation=torch.nn.Sigmoid, bias=True)#

Bases: gossipy.model.TorchModel

Perceptron model.

Implementation of the perceptron model by Rosenblatt.

Parameters
  • dim (int) – The number of input features.

  • activation (torch.nn.modules.activation, default=Sigmoid) – The activation function of the output neuron.

  • bias (bool, default=True) – Whether to add a bias term to the output neuron.
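
A minimal usage sketch; the input dimension and batch size are illustrative assumptions:

import torch
from gossipy.model.nn import TorchPerceptron

# Illustrative sketch: 5 input features, default Sigmoid activation, with a bias term.
model = TorchPerceptron(dim=5, activation=torch.nn.Sigmoid, bias=True)
model.init_weights()
y = model(torch.randn(4, 5))       # sigmoid-activated score per sample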

forward(x)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters

x (torch.Tensor) – The input tensor.

Return type

Tensor

init_weights()#

Initialize the weights of the model.

Return type

None