rectorch.nets
Class list
- AE_net: Abstract Autoencoder network.
- MultiDAE_net: Denoising Autoencoder network for collaborative filtering.
- VAE_net: Variational Autoencoder network.
- MultiVAE_net: Variational Autoencoder network for collaborative filtering.
- CMultiVAE_net: Conditioned Variational Autoencoder network for collaborative filtering.
- CFGAN_G_net: Generator network of the CFGAN model.
- CFGAN_D_net: Discriminator network of the CFGAN model.
- SVAE_net: Sequential Variational Autoencoder for collaborative filtering.
This module contains the definitions of the neural network architectures used by the rectorch models.
See also
Modules: models
class rectorch.nets.AE_net(dec_dims, enc_dims=None)[source]
Bases: torch.nn.modules.module.Module

Abstract Autoencoder network.

This abstract class must be inherited anytime a new autoencoder network is defined. Sub-classes must implement the following methods: encode(), decode(), and forward().

Parameters
- dec_dims : list or array_like
  Dimensions of the decoder network. dec_dims[0] indicates the dimension of the latent space, and dec_dims[-1] indicates the dimension of the input space.
- enc_dims : list, array_like or None [optional]
  Dimensions of the encoder network. enc_dims[0] indicates the dimension of the input space, and enc_dims[-1] indicates the dimension of the latent space. If it evaluates to False, then enc_dims = dec_dims[::-1]. By default None.

Attributes
- dec_dims : list or array_like of int
  See the dec_dims parameter.
- enc_dims : list or array_like of int
  See the enc_dims parameter.
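The mirroring default for enc_dims can be sketched in plain Python. This is an illustrative helper (resolve_dims is not part of rectorch), showing only the defaulting rule described above:

```python
# Illustrative sketch of the enc_dims defaulting rule: when enc_dims
# evaluates to False (e.g. None or an empty list), the encoder simply
# mirrors the decoder dimensions.
def resolve_dims(dec_dims, enc_dims=None):
    """Return (enc_dims, dec_dims) with the mirroring default applied."""
    if not enc_dims:
        enc_dims = dec_dims[::-1]
    return enc_dims, dec_dims

# A decoder going 100 (latent) -> 600 -> 20000 (input) implies an
# encoder going 20000 (input) -> 600 -> 100 (latent).
enc, dec = resolve_dims([100, 600, 20000])
print(enc)  # [20000, 600, 100]
```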
-
decode(self, z)[source]
Forward propagate the latent representation in the decoder network.

Parameters
- z : torch.Tensor
  The latent tensor.
-
encode(self, x)[source]
Forward propagate the input in the encoder network.

Parameters
- x : torch.Tensor
  The input tensor.
-
forward(self, x)[source]
Forward propagate the input in the network.

Parameters
- x : torch.Tensor
  The input tensor to feed to the network.
-
class
rectorch.nets.MultiDAE_net(dec_dims, enc_dims=None, dropout=0.5)[source]¶ Bases:
rectorch.nets.AE_netDenoising Autoencoder network for collaborative filtering.
The network structure follows the definition as in [R2dd2e16d1ef4-VAE]. Hidden layers are fully connected and tanh activated. The output layer of both the encoder and the decoder are linearly activated.
References
- R2dd2e16d1ef4-VAE
Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 689–698. DOI: https://doi.org/10.1145/3178876.3186150
- Attributes
- dec_dims
listor array_like ofintSee
dec_dimsparameter.- enc_dims
listor array_like ofintSee
end_dimsparameter.- dropout
floatThe dropout layer that is applied to the input during the
AE_net.forward().- dec_dims
-
decode(self, z)[source]
Forward propagate the latent representation in the decoder network.

Parameters
- z : torch.Tensor
  The latent tensor.
-
encode(self, x)[source]
Forward propagate the input in the encoder network.

Parameters
- x : torch.Tensor
  The input tensor.
-
init_weights(self)[source]
Initialize the weights of the network.

Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
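For intuition, the bound used by torch.nn.init.xavier_uniform_() can be computed by hand: weights are drawn from U(-a, a) with a = gain * sqrt(6 / (fan_in + fan_out)). This is a plain-Python sketch of that formula, not rectorch code:

```python
import math

# Xavier/Glorot uniform initialization draws weights from U(-a, a),
# with the bound a chosen to keep activation variance roughly stable
# across layers.
def xavier_uniform_bound(fan_in, fan_out, gain=1.0):
    return gain * math.sqrt(6.0 / (fan_in + fan_out))

# Bound for a fully connected 600 -> 200 layer:
print(round(xavier_uniform_bound(600, 200), 4))  # 0.0866
```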
-
class rectorch.nets.VAE_net(dec_dims, enc_dims=None)[source]
Bases: rectorch.nets.AE_net

Variational Autoencoder network.

Layers are fully connected and ReLU activated, with the exception of the output layers of both the encoder and the decoder, which are linearly activated.

Note
See AE_net for parameters and attributes.
decode(self, z)[source]
Apply the decoder network to the sampled latent representation.

Parameters
- z : torch.Tensor
  The latent tensor sampled through the reparameterization trick.

Returns
- torch.Tensor
  The output tensor of the decoder network.
-
encode(self, x)[source]
Apply the encoder network of the Variational Autoencoder.

Parameters
- x : torch.Tensor
  The input tensor.

Returns
- mu, logvar : tuple of torch.Tensor
  The tensors in the latent space representing the mean and the logarithm of the variance of the probability distributions over the latent variables.
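Since decode() expects a latent tensor sampled through the reparameterization trick, the sampling step that connects encode() and decode() can be sketched in plain Python: z = mu + exp(0.5 * logvar) * eps with eps drawn from N(0, 1). rectorch performs the equivalent operation on torch tensors; this sketch is illustration only:

```python
import math
import random

def reparameterize(mu, logvar, eps=None):
    # eps defaults to standard Gaussian noise; it can be fixed for a
    # deterministic illustration.
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + math.exp(0.5 * lv) * e for m, lv, e in zip(mu, logvar, eps)]

# With eps fixed to zeros the sample collapses to the mean:
print(reparameterize([0.5, -1.0], [0.0, 0.0], eps=[0.0, 0.0]))  # [0.5, -1.0]
```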
-
forward(self, x)[source]
Apply the full Variational Autoencoder network to the input.

Parameters
- x : torch.Tensor
  The input tensor.

Returns
- x’, mu, logvar : tuple of torch.Tensor
  The reconstructed input (x’) along with the intermediate tensors in the latent space representing the mean (mu) and the logarithm of the variance (logvar) of the probability distributions over the latent variables.
-
init_weights(self)[source]
Initialize the weights of the network.

Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
-
class rectorch.nets.MultiVAE_net(dec_dims, enc_dims=None, dropout=0.5)[source]
Bases: rectorch.nets.VAE_net

Variational Autoencoder network for collaborative filtering.

The network structure follows the definition in [VAE]. Hidden layers are fully connected and tanh activated. The output layers of both the encoder and the decoder are linearly activated.

References
- [VAE]
  Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 689–698. DOI: https://doi.org/10.1145/3178876.3186150

Attributes
- dec_dims : list or array_like of int
  See the dec_dims parameter.
- enc_dims : list or array_like of int
  See the enc_dims parameter.
- dropout : float
  The dropout rate applied to the input during VAE_net.forward().
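The input dropout mentioned above behaves like standard inverted dropout (the scheme used by torch.nn.Dropout): during training each input entry is zeroed with probability p and the survivors are rescaled by 1/(1-p). A plain-Python sketch; input_dropout is illustrative, not a rectorch function:

```python
import random

def input_dropout(x, p=0.5, rng=None):
    # Zero each entry with probability p; rescale survivors by 1/(1-p)
    # so the expected value of the input is unchanged.
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in x]

# Every entry ends up either dropped (0.0) or rescaled to 2.0.
print(input_dropout([1.0, 1.0, 1.0, 1.0], p=0.5))
```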
-
decode(self, z)[source]
Apply the decoder network to the sampled latent representation.

Parameters
- z : torch.Tensor
  The latent tensor sampled through the reparameterization trick.

Returns
- torch.Tensor
  The output tensor of the decoder network.
-
encode(self, x)[source]
Apply the encoder network of the Variational Autoencoder.

Parameters
- x : torch.Tensor
  The input tensor.

Returns
- mu, logvar : tuple of torch.Tensor
  The tensors in the latent space representing the mean and the logarithm of the variance of the probability distributions over the latent variables.
-
class rectorch.nets.CMultiVAE_net(cond_dim, dec_dims, enc_dims=None, dropout=0.5)[source]
Bases: rectorch.nets.MultiVAE_net

Conditioned Variational Autoencoder network for collaborative filtering.

The network structure follows the definition in [CVAE]. Hidden layers are fully connected and tanh activated. The output layers of both the encoder and the decoder are linearly activated.

References
- [CVAE]
  Tommaso Carraro, Mirko Polato and Fabio Aiolli. Conditioned Variational Autoencoder for top-N item recommendation, 2020. arXiv pre-print: https://arxiv.org/abs/2004.11141

Attributes
- cond_dim : int
  See the cond_dim parameter.
- dec_dims : list or array_like of int
  See the dec_dims parameter.
- enc_dims : list or array_like
  See the enc_dims parameter.
- dropout : float
  The dropout rate applied to the input during VAE_net.forward().
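In a conditioned VAE of this kind, the condition vector is concatenated to the user's input vector before encoding, so the first encoder layer sees input_dim + cond_dim features. A minimal plain-Python sketch of that concatenation, with list concatenation standing in for torch.cat (conditioned_input is an illustrative name, not a rectorch function):

```python
def conditioned_input(x, cond):
    # List concatenation stands in for torch.cat along the feature axis.
    return x + cond

x = [1.0, 0.0, 1.0]   # toy item-interaction vector (input_dim = 3)
cond = [0.0, 1.0]     # toy condition, e.g. a genre indicator (cond_dim = 2)
print(conditioned_input(x, cond))  # [1.0, 0.0, 1.0, 0.0, 1.0]
```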
-
encode(self, x)[source]
Apply the encoder network of the Variational Autoencoder.

Parameters
- x : torch.Tensor
  The input tensor.

Returns
- mu, logvar : tuple of torch.Tensor
  The tensors in the latent space representing the mean and the logarithm of the variance of the probability distributions over the latent variables.
-
class rectorch.nets.CFGAN_G_net(layers_dim)[source]
Bases: torch.nn.modules.module.Module

Generator network of the CFGAN model.

The generator network of CFGAN is a simple multi-layer perceptron. Each internal layer is fully connected and ReLU activated. The output layer, instead, has a sigmoid activation function. See [CFGAN] for a full description.

References
- [CFGAN]
  Dong-Kyu Chae, Jin-Soo Kang, Sang-Wook Kim, and Jung-Tae Lee. 2018. CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). Association for Computing Machinery, New York, NY, USA, 137–146. DOI: https://doi.org/10.1145/3269206.3271743

Attributes
- layers_dim : list of int
  See the layers_dim parameter.
- input_dim : int
  The dimension of the output of the generator, i.e., the input of the discriminator.
- latent_dim : int
  The dimension of the latent space, i.e., the dimension of the input of the generator.
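The two activations mentioned above are easy to sketch in plain Python: ReLU for the internal layers, and a sigmoid on the output layer, which squashes the generator's scores into (0, 1) so they can be read as interaction probabilities. This illustrates the activation functions only, not rectorch's layer code:

```python
import math

def relu(v):
    # Element-wise max(0, x), used by the internal layers.
    return [max(0.0, x) for x in v]

def sigmoid(v):
    # Element-wise 1 / (1 + e^-x), used by the output layer.
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

print(relu([-1.0, 2.0]))                           # [0.0, 2.0]
print([round(s, 3) for s in sigmoid([0.0, 2.0])])  # [0.5, 0.881]
```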
-
forward(self, z)[source]
Apply the generator network to the input.

Parameters
- z : torch.Tensor
  The input tensor to be forwarded.

Returns
- torch.Tensor
  The output tensor resulting from the application of the generator network.
-
init_weights(self, layer)[source]
Initialize the weights of the network.

Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
-
class rectorch.nets.CFGAN_D_net(layers_dim)[source]
Bases: torch.nn.modules.module.Module

Discriminator network of the CFGAN model.

The discriminator network of CFGAN is a simple multi-layer perceptron. Each internal layer is fully connected and ReLU activated. The output layer, instead, has a sigmoid activation function. See [CFGAN] for a full description.

References
- [CFGAN]
  Dong-Kyu Chae, Jin-Soo Kang, Sang-Wook Kim, and Jung-Tae Lee. 2018. CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). Association for Computing Machinery, New York, NY, USA, 137–146. DOI: https://doi.org/10.1145/3269206.3271743

Attributes
- layers_dim : list of int
  See the layers_dim parameter.
- input_dim : int
  The dimension of the input of the discriminator.
-
forward(self, x, cond)[source]
Apply the discriminator network to the input.

Parameters
- x : torch.Tensor
  The input tensor to be forwarded.
- cond : torch.Tensor
  The condition tensor. Note that it must hold that x.shape[0] == cond.shape[0].

Returns
- torch.Tensor
  The output tensor resulting from applying the discriminator to the input concatenated with the condition.
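The constraint x.shape[0] == cond.shape[0] exists because the condition is concatenated to the input row by row before being fed to the discriminator. A plain-Python sketch with nested lists standing in for torch tensors (concat_with_condition is illustrative, not a rectorch function):

```python
def concat_with_condition(x, cond):
    # Both batches must have the same number of rows, mirroring the
    # x.shape[0] == cond.shape[0] requirement stated above.
    if len(x) != len(cond):
        raise ValueError("x and cond must have the same number of rows")
    return [row + c for row, c in zip(x, cond)]

batch = concat_with_condition([[1.0, 0.0], [0.0, 1.0]], [[0.5], [0.25]])
print(batch)  # [[1.0, 0.0, 0.5], [0.0, 1.0, 0.25]]
```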
-
init_weights(self, layer)[source]
Initialize the weights of the network.

Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.
-
class rectorch.nets.SVAE_net(n_items, embed_size, rnn_size, dec_dims, enc_dims)[source]
Bases: rectorch.nets.VAE_net

Sequential Variational Autoencoder for collaborative filtering.

UNDOCUMENTED [SVAE]

References
- [SVAE]
  Noveen Sachdeva, Giuseppe Manco, Ettore Ritacco, and Vikram Pudi. 2019. Sequential Variational Autoencoders for Collaborative Filtering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM ’19). Association for Computing Machinery, New York, NY, USA, 600–608. DOI: https://doi.org/10.1145/3289600.3291007

Attributes
- See the Parameters section.
-
decode(self, z)[source]
Apply the decoder network to the sampled latent representation.

Parameters
- z : torch.Tensor
  The latent tensor sampled through the reparameterization trick.

Returns
- torch.Tensor
  The output tensor of the decoder network.
-
forward(self, x)[source]
Apply the full Variational Autoencoder network to the input.

Parameters
- x : torch.Tensor
  The input tensor.

Returns
- x’, mu, logvar : tuple of torch.Tensor
  The reconstructed input (x’) along with the intermediate tensors in the latent space representing the mean (mu) and the logarithm of the variance (logvar) of the probability distributions over the latent variables.
-
init_weights(self)[source]
Initialize the weights of the network.

Weights are initialized with the torch.nn.init.xavier_uniform_() initializer, while biases are initialized with the torch.nn.init.normal_() initializer.