rectorch.nets

Class list

rectorch.nets.AE_net(dec_dims[, enc_dims])

Abstract Autoencoder network.

rectorch.nets.MultiDAE_net(dec_dims[, …])

Denoising Autoencoder network for collaborative filtering.

rectorch.nets.VAE_net(dec_dims[, enc_dims])

Variational Autoencoder network.

rectorch.nets.MultiVAE_net(dec_dims[, …])

Variational Autoencoder network for collaborative filtering.

rectorch.nets.CMultiVAE_net(cond_dim, dec_dims)

Conditioned Variational Autoencoder network for collaborative filtering.

rectorch.nets.CFGAN_G_net(layers_dim)

Generator network of the CFGAN model.

rectorch.nets.CFGAN_D_net(layers_dim)

Discriminator network of the CFGAN model.

rectorch.nets.SVAE_net(n_items, embed_size, …)

Sequential Variational Autoencoders for Collaborative Filtering.

This module contains definitions of the neural network architectures used by the rectorch models.

See also

Modules: models

class rectorch.nets.AE_net(dec_dims, enc_dims=None)[source]

Bases: torch.nn.modules.module.Module

Abstract Autoencoder network.

This abstract class must be inherited anytime a new autoencoder network is defined. Sub-classes must implement the methods listed below (a minimal sub-class sketch is given after the method descriptions).

Parameters
  • dec_dims : list or array_like
    • Dimensions of the decoder network. dec_dims[0] indicates the dimension of the latent space, and dec_dims[-1] indicates the dimension of the input space.

  • enc_dims : list, array_like or None [optional]
    • Dimensions of the encoder network. enc_dims[0] indicates the dimension of the input space, and enc_dims[-1] indicates the dimension of the latent space. If it evaluates to False, then enc_dims = dec_dims[::-1]. By default None.

    Attributes
  • dec_dims : list or array_like of int
    • See the dec_dims parameter.

  • enc_dims : list or array_like of int
    • See the enc_dims parameter.

    decode(self, z)[source]

    Forward propagate the latent representation in the decoder network.

    Parameters
  • z : torch.Tensor
    • The latent tensor.

    encode(self, x)[source]

    Forward propagate the input in the encoder network.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    forward(self, x)[source]

    Forward propagate the input in the network.

    Parameters
  • x : torch.Tensor
    • The input tensor to feed to the network.

    init_weights(self)[source]

    Initialize the weights of the network.
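
    Examples

    A minimal sketch of a concrete sub-class (illustrative, not part of rectorch), assuming AE_net stores dec_dims and enc_dims as documented above. The tanh hidden layers and linearly activated output layers mirror the conventions of the concrete networks in this module; the actual rectorch implementations may differ.

        import torch
        import torch.nn as nn
        from rectorch.nets import AE_net

        class TinyAE_net(AE_net):
            # Illustrative sub-class: fully connected, tanh-activated hidden layers.
            def __init__(self, dec_dims, enc_dims=None):
                super(TinyAE_net, self).__init__(dec_dims, enc_dims)
                self.enc_layers = nn.ModuleList(
                    nn.Linear(d0, d1) for d0, d1 in zip(self.enc_dims[:-1], self.enc_dims[1:]))
                self.dec_layers = nn.ModuleList(
                    nn.Linear(d0, d1) for d0, d1 in zip(self.dec_dims[:-1], self.dec_dims[1:]))
                self.init_weights()

            def encode(self, x):
                for layer in self.enc_layers:
                    x = torch.tanh(layer(x))
                return x

            def decode(self, z):
                for i, layer in enumerate(self.dec_layers):
                    z = layer(z)
                    if i < len(self.dec_layers) - 1:  # output layer stays linear
                        z = torch.tanh(z)
                return z

            def init_weights(self):
                for layer in list(self.enc_layers) + list(self.dec_layers):
                    nn.init.xavier_uniform_(layer.weight)
                    nn.init.normal_(layer.bias, std=0.01)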

    class rectorch.nets.MultiDAE_net(dec_dims, enc_dims=None, dropout=0.5)[source]

    Bases: rectorch.nets.AE_net

    Denoising Autoencoder network for collaborative filtering.

    The network structure follows the definition in [R2dd2e16d1ef4-VAE]. Hidden layers are fully connected and tanh activated. The output layers of both the encoder and the decoder are linearly activated.

    Parameters
  • dec_dims : list or array_like of int
  • enc_dims : list, array_like of int or None [optional]
  • dropout : float [optional]
    • The dropout probability (in the range [0,1]), by default 0.5.

    References

    R2dd2e16d1ef4-VAE

    Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 689–698. DOI: https://doi.org/10.1145/3178876.3186150

    Attributes
  • dec_dims : list or array_like of int
    • See the dec_dims parameter.

  • enc_dims : list or array_like of int
    • See the enc_dims parameter.

  • dropout : float
    • See the dropout parameter.

    decode(self, z)[source]

    Forward propagate the latent representation in the decoder network.

    Parameters
  • z : torch.Tensor
    • The latent tensor.

    encode(self, x)[source]

    Forward propagate the input in the encoder network.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    init_weights(self)[source]

    Initialize the weights of the network.

    Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
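
    Examples

    A hedged usage sketch (the dimensions are illustrative and the training loop is omitted):

        import torch
        from rectorch.nets import MultiDAE_net

        # 1000 items, one 600-unit hidden layer, 200-dimensional latent space.
        net = MultiDAE_net(dec_dims=[200, 600, 1000], dropout=0.5)  # enc_dims defaults to dec_dims[::-1]
        x = torch.rand(64, 1000)   # batch of 64 user-item interaction rows
        scores = net(x)            # reconstructed scores, shape (64, 1000)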

    class rectorch.nets.VAE_net(dec_dims, enc_dims=None)[source]

    Bases: rectorch.nets.AE_net

    Variational Autoencoder network.

    Layers are fully connected and ReLU activated, with the exception of the output layers of both the encoder and the decoder, which are linearly activated.

    Note

    See AE_net for parameters and attributes.

    decode(self, z)[source]

    Apply the decoder network to the sampled latent representation.

    Parameters
  • z : torch.Tensor
    • The latent tensor sampled through the reparameterization trick.

    Returns
  • torch.Tensor
    • The output tensor of the decoder network.

    encode(self, x)[source]

    Apply the encoder network of the Variational Autoencoder.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    Returns
  • mu, logvar : tuple of torch.Tensor
    • The tensors in the latent space representing the mean and the logarithm of the variance of the probability distributions over the latent variables.

    forward(self, x)[source]

    Apply the full Variational Autoencoder network to the input.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    Returns
  • x', mu, logvar : tuple of torch.Tensor
    • The reconstructed input (x') along with the intermediate latent tensors representing the mean and the logarithm of the variance of the probability distributions over the latent variables.

    init_weights(self)[source]

    Initialize the weights of the network.

    Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
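
    Examples

    A hedged sketch of the forward pass (sizes are illustrative): the network returns the reconstruction together with the mean and log-variance of the latent distributions.

        import torch
        from rectorch.nets import VAE_net

        net = VAE_net(dec_dims=[100, 500])   # latent size 100, input size 500
        x = torch.rand(32, 500)
        x_rec, mu, logvar = net(x)           # x_rec: (32, 500); mu, logvar: (32, 100)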

    class rectorch.nets.MultiVAE_net(dec_dims, enc_dims=None, dropout=0.5)[source]

    Bases: rectorch.nets.VAE_net

    Variational Autoencoder network for collaborative filtering.

    The network structure follows the definition in [Rb6211bc148e3-VAE]. Hidden layers are fully connected and tanh activated. The output layers of both the encoder and the decoder are linearly activated.

    Parameters
  • dec_dims : list or array_like of int
  • enc_dims : list, array_like of int or None [optional]
  • dropout : float [optional]

    References

    Rb6211bc148e3-VAE

    Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 689–698. DOI: https://doi.org/10.1145/3178876.3186150

    Attributes
  • dec_dims : list or array_like of int
    • See the dec_dims parameter.

  • enc_dims : list or array_like of int
    • See the enc_dims parameter.

  • dropout : float
    • See the dropout parameter.

    decode(self, z)[source]

    Apply the decoder network to the sampled latent representation.

    Parameters
  • z : torch.Tensor
    • The latent tensor sampled through the reparameterization trick.

    Returns
  • torch.Tensor
    • The output tensor of the decoder network.

    encode(self, x)[source]

    Apply the encoder network of the Variational Autoencoder.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    Returns
  • mu, logvar : tuple of torch.Tensor
    • The tensors in the latent space representing the mean and the logarithm of the variance of the probability distributions over the latent variables.
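
    Examples

    A hedged usage sketch (dimensions are illustrative); note the train/eval switch, since the input dropout is only active in training mode:

        import torch
        from rectorch.nets import MultiVAE_net

        net = MultiVAE_net(dec_dims=[200, 600, 1000], dropout=0.5)
        x = torch.rand(16, 1000)
        net.train()                      # dropout active during training
        x_rec, mu, logvar = net(x)
        net.eval()                       # dropout disabled at inference time
        with torch.no_grad():
            scores, _, _ = net(x)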

    class rectorch.nets.CMultiVAE_net(cond_dim, dec_dims, enc_dims=None, dropout=0.5)[source]

    Bases: rectorch.nets.MultiVAE_net

    Conditioned Variational Autoencoder network for collaborative filtering.

    The network structure follows the definition in [R9611bc073931-CVAE]. Hidden layers are fully connected and tanh activated. The output layers of both the encoder and the decoder are linearly activated.

    Parameters
  • cond_dim : int
    • The size of the condition vector.

  • dec_dims : list or array_like of int
  • enc_dims : list, array_like of int or None [optional]
  • dropout : float [optional]

    References

    R9611bc073931-CVAE

    Tommaso Carraro, Mirko Polato and Fabio Aiolli. Conditioned Variational Autoencoder for top-N item recommendation, 2020. arXiv pre-print: https://arxiv.org/abs/2004.11141

    Attributes
  • cond_dim : int
    • See the cond_dim parameter.

  • dec_dims : list or array_like of int
    • See the dec_dims parameter.

  • enc_dims : list or array_like
    • See the enc_dims parameter.

  • dropout : float
    • See the dropout parameter.

    encode(self, x)[source]

    Apply the encoder network of the Variational Autoencoder.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    Returns
  • mu, logvar : tuple of torch.Tensor
    • The tensors in the latent space representing the mean and the logarithm of the variance of the probability distributions over the latent variables.
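
    Examples

    A hedged sketch, assuming the condition vector is concatenated to the user interaction vector before the forward call and that the constructor accounts for the condition size internally, as in the CVAE paper (all sizes are illustrative; check the rectorch source and the models module for the exact input contract):

        import torch
        from rectorch.nets import CMultiVAE_net

        n_items, cond_dim = 1000, 5
        net = CMultiVAE_net(cond_dim=cond_dim, dec_dims=[100, n_items])
        x = torch.rand(8, n_items)           # user-item interaction rows
        c = torch.zeros(8, cond_dim)         # one-hot condition vectors (illustrative)
        c[:, 2] = 1.
        x_rec, mu, logvar = net(torch.cat([x, c], dim=1))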

    class rectorch.nets.CFGAN_G_net(layers_dim)[source]

    Bases: torch.nn.modules.module.Module

    Generator network of the CFGAN model.

    The generator network of CFGAN is a simple multi-layer perceptron. Each internal layer is fully connected and ReLU activated. The output layer instead has a sigmoid activation function. See [R6425608c9e5d-CFGAN] for a full description.

    Parameters
  • layers_dim : list of int
    • The dimensions of the layers of the network, ordered from the input to the output.

    References

    R6425608c9e5d-CFGAN

    Dong-Kyu Chae, Jin-Soo Kang, Sang-Wook Kim, and Jung-Tae Lee. 2018. CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). Association for Computing Machinery, New York, NY, USA, 137–146. DOI: https://doi.org/10.1145/3269206.3271743

    Attributes
  • layers_dim : list of int
    • See the layers_dim parameter.

  • input_dim : int
    • The dimension of the output of the generator, i.e., the input of the discriminator.

  • latent_dim : int
    • The dimension of the latent space, i.e., the dimension of the input of the generator.

    forward(self, z)[source]

    Apply the generator network to the input.

    Parameters
  • z : torch.Tensor
    • The input tensor to be forwarded.

    Returns
  • torch.Tensor
    • The output tensor resulting from the application of the generator network.

    init_weights(self, layer)[source]

    Initialize the weights of the network.

    Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
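
    Examples

    A hedged usage sketch (dimensions are illustrative): the generator maps a vector of size layers_dim[0] to an item-score vector of size layers_dim[-1].

        import torch
        from rectorch.nets import CFGAN_G_net

        gen = CFGAN_G_net(layers_dim=[100, 250, 1000])
        z = torch.rand(32, 100)   # batch of input/latent vectors
        fake = gen(z)             # sigmoid-activated scores in (0, 1), shape (32, 1000)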

    class rectorch.nets.CFGAN_D_net(layers_dim)[source]

    Bases: torch.nn.modules.module.Module

    Discriminator network of the CFGAN model.

    The discriminator network of CFGAN is a simple multi-layer perceptron. Each internal layer is fully connected and ReLU activated. The output layer instead has a sigmoid activation function. See [R59de8c9a4ef6-CFGAN] for a full description.

    Parameters
  • layers_dim : list of int
    • The dimensions of the layers of the network, ordered from the input to the output.

    References

    R59de8c9a4ef6-CFGAN

    Dong-Kyu Chae, Jin-Soo Kang, Sang-Wook Kim, and Jung-Tae Lee. 2018. CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). Association for Computing Machinery, New York, NY, USA, 137–146. DOI: https://doi.org/10.1145/3269206.3271743

    Attributes
  • layers_dim : list of int
    • See the layers_dim parameter.

  • input_dim : int
    • The dimension of the input of the discriminator.

    forward(self, x, cond)[source]

    Apply the discriminator network to the input.

    Parameters
  • x : torch.Tensor
    • The input tensor to be forwarded.

  • cond : torch.Tensor
    • The condition tensor. Note that it must hold that x.shape[0] == cond.shape[0].

    Returns
  • torch.Tensor
    • The output tensor resulting from the application of the discriminator to the input concatenated with the condition.

    init_weights(self, layer)[source]

    Initialize the weights of the network.

    Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
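
    Examples

    A hedged usage sketch (dimensions are illustrative): since the discriminator is applied to the input concatenated with the condition, layers_dim[0] is assumed to equal the sum of their dimensions.

        import torch
        from rectorch.nets import CFGAN_D_net

        disc = CFGAN_D_net(layers_dim=[1100, 250, 1])
        x = torch.rand(32, 1000)     # real or generated item-score vectors
        cond = torch.rand(32, 100)   # condition vectors; x.shape[0] == cond.shape[0]
        p_real = disc(x, cond)       # sigmoid outputs in (0, 1), shape (32, 1)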

    class rectorch.nets.SVAE_net(n_items, embed_size, rnn_size, dec_dims, enc_dims)[source]

    Bases: rectorch.nets.VAE_net

    Sequential Variational Autoencoders for Collaborative Filtering.

    See [R894e5278d4e4-SVAE] for a full description of the network.

    Parameters
  • n_items : int
    • Number of items.

  • embed_size : int
    • Size of the embedding for the items.

  • rnn_size : int
    • Size of the recurrent layer of the GRU part of the network.

  • dec_dims : list or array_like of int
  • enc_dims : list, array_like of int or None [optional]

    References

    R894e5278d4e4-SVAE

    Noveen Sachdeva, Giuseppe Manco, Ettore Ritacco, and Vikram Pudi. 2019. Sequential Variational Autoencoders for Collaborative Filtering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM ‘19). Association for Computing Machinery, New York, NY, USA, 600–608. DOI: https://doi.org/10.1145/3289600.3291007

    Attributes
  • See the Parameters section.

    decode(self, z)[source]

    Apply the decoder network to the sampled latent representation.

    Parameters
  • z : torch.Tensor
    • The latent tensor sampled through the reparameterization trick.

    Returns
  • torch.Tensor
    • The output tensor of the decoder network.

    forward(self, x)[source]

    Apply the full Variational Autoencoder network to the input.

    Parameters
  • x : torch.Tensor
    • The input tensor.

    Returns
  • x', mu, logvar : tuple of torch.Tensor
    • The reconstructed input (x') along with the intermediate latent tensors representing the mean and the logarithm of the variance of the probability distributions over the latent variables.

    init_weights(self)[source]

    Initialize the weights of the network.

    Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
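
    Examples

    A hedged sketch, assuming the input is a sequence of item indices for a single user, as in the SVAE paper (all sizes are illustrative; check the rectorch source and the models module for the exact input contract):

        import torch
        from rectorch.nets import SVAE_net

        n_items = 1000
        net = SVAE_net(n_items=n_items, embed_size=256, rnn_size=200,
                       dec_dims=[64, n_items], enc_dims=[200, 64])
        seq = torch.randint(0, n_items, (1, 20))   # one user, 20 time steps
        x_rec, mu, logvar = net(seq)               # per-step scores over the items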