API overview¶
This page lists the modules and submodules defined in fluke, together with their classes and functions.
Modules¶
fluke is organized in the following modules:

- fluke: contains the core classes and utilities;
- fluke.data: contains classes for data handling;
- fluke.data.datasets: contains classes for dataset loading;
- fluke.client: contains classes for client-side functionalities;
- fluke.server: contains classes for server-side functionalities;
- fluke.comm: contains classes for communication;
- fluke.nets: contains classes for neural networks;
- fluke.utils: contains utility classes and functions;
- fluke.utils.log: contains classes for logging;
- fluke.utils.model: contains classes for model manipulation;
- fluke.evaluation: contains classes for evaluation;
- fluke.algorithms: contains classes for federated learning algorithms.
fluke¶
Classes

- A dictionary that can be accessed with dot notation, recursively.
- A cache class that can store data on disk.
- A reference to an object in the cache.
- A reference counter for an object in the cache.
- Environment class for the fluke framework.
- Subject class for the observer pattern.
- A metaclass used to create singleton classes.
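As a hedged illustration of the singleton-metaclass idea listed above (a generic sketch, not fluke's actual implementation; the `Settings` class is hypothetical):

```python
class Singleton(type):
    """Metaclass that ensures at most one instance per class."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Create the instance only on the first call; reuse it afterwards.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Settings(metaclass=Singleton):
    def __init__(self, seed: int = 42):
        self.seed = seed


a = Settings(seed=1)
b = Settings(seed=2)   # constructor arguments ignored: first instance is reused
print(a is b)          # True
print(b.seed)          # 1
```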
fluke.data¶
Classes

- Container for train and test data.
- A DataContainer designed for datasets with a fixed data assignment, e.g., FEMNIST, Shakespeare, and FCUBE.
- A DataLoader-like object for a set of tensors that can be much faster than TensorDataset + DataLoader, because the standard DataLoader fetches individual indices of the dataset and concatenates them one by one (which is slow).
- Utility class for splitting the data across clients.
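The speed-up idea behind the tensor-based loader above can be sketched in plain Python, with list slices standing in for tensor slices (hypothetical names; this is not fluke's code):

```python
def fast_batches(data, batch_size):
    """Yield batches by slicing the whole container at once,
    instead of fetching one index at a time and concatenating."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# With a torch.Tensor, data[start:stop] is a single cheap slice operation,
# whereas TensorDataset + DataLoader gathers items index by index and then
# concatenates them, which is slow for many small items.
batches = list(fast_batches(list(range(10)), batch_size=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```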
fluke.data.datasets¶
Classes

- Static class for loading datasets.
fluke.client¶
Classes
fluke.server¶
Classes

- Basic Server for Federated Learning.
fluke.comm¶
Classes

- This class represents a message that can be exchanged between clients and the server.
- A bi-directional communication channel.
- Channel observer interface for the Observer pattern.
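A minimal sketch of how a message, a channel, and a channel observer can fit together as described in this module (all names are hypothetical, not fluke's actual API):

```python
class Message:
    """A payload exchanged between a sender and a receiver."""
    def __init__(self, payload, sender, receiver):
        self.payload, self.sender, self.receiver = payload, sender, receiver


class Channel:
    """Bi-directional mailbox: parties exchange Messages, and
    registered observers are notified of every send."""
    def __init__(self):
        self._queues = {}      # receiver -> list of pending messages
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def send(self, message):
        self._queues.setdefault(message.receiver, []).append(message)
        for obs in self._observers:
            obs.message_sent(message)

    def receive(self, receiver):
        return self._queues[receiver].pop(0)


class CountingObserver:
    """Observer that simply counts the messages it sees."""
    def __init__(self):
        self.count = 0

    def message_sent(self, message):
        self.count += 1


chan, obs = Channel(), CountingObserver()
chan.attach(obs)
chan.send(Message("model", sender="server", receiver="client-0"))
msg = chan.receive("client-0")
print(msg.payload, obs.count)  # model 1
```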
fluke.nets¶
Classes

- Encoder (aka backbone) + Head Network [Base Class]. Networks of this type are defined as two subnetworks: an encoder/backbone that learns a latent representation of the input, and a head that acts as the classifier part of the model.
- Global-Local Network (Abstract Class).
- This implementation of the Global-Local Network (…).
- This implementation of the Global-Local Network (…).
- Multi-layer Perceptron for MNIST.
- Convolutional Neural Network for MNIST.
- Convolutional Neural Network with Batch Normalization for CIFAR-10.
- Logistic Regression for MNIST.
- Convolutional Neural Network for CIFAR-10.
- ResNet-9 network for CIFAR-100 classification.
- Convolutional Neural Network for FEMNIST.
- VGG-9 network for FEMNIST classification.
- Convolutional Neural Network for CIFAR-10.
- ResNet-18 network as defined in the torchvision library.
- ResNet-34 network as defined in the torchvision library.
- ResNet-50 network as defined in the torchvision library.
- ResNet-18 network as defined in the torchvision library, but with Group Normalization layers instead of Batch Normalization.
- Convolutional Neural Network for CIFAR-10.
- LeNet-5 for CIFAR.
- LSTM for Shakespeare.
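The encoder + head split used by the base network class above can be sketched independently of any deep-learning framework (plain callables stand in for the nn.Module subnetworks; all names here are hypothetical):

```python
class EncoderHeadNet:
    """Composes an encoder (input -> latent) with a head (latent -> output)."""
    def __init__(self, encoder, head):
        self.encoder = encoder
        self.head = head

    def forward(self, x):
        z = self.encoder(x)   # latent representation of the input
        return self.head(z)   # classifier output on top of the latent


# Toy subnetworks: the encoder doubles the input, the head thresholds it.
net = EncoderHeadNet(encoder=lambda x: 2 * x,
                     head=lambda z: 1 if z > 5 else 0)
print(net.forward(4))  # 1  (2*4 = 8 > 5)
print(net.forward(2))  # 0  (2*2 = 4 <= 5)
```

Keeping the two parts separate is what lets global-local algorithms share one subnetwork across clients while personalizing the other.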
fluke.utils¶
Classes

- This class is used to configure the optimizer and the learning rate scheduler.
- Fluke configuration class.
- Client observer interface.
- Server observer interface.
Functions

- Convert bytes to a human-readable format.
- Move an object from RAM to the disk cache to free up memory.
- Clear the CUDA cache.
- Get a class from its name.
- Get a class from its fully qualified name.
- Get the fully qualified name of a class.
- Get a loss function from its name.
- Get a model from its name.
- Get an optimizer from its name.
- Get a learning rate scheduler from its name.
- Flatten a nested dictionary.
- Import a module from its name.
- Get the memory usage of the current process.
- Plot the distribution of classes for each client.
- Load an object from the disk cache.
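Among the helpers above, flattening a nested dictionary is simple enough to sketch here (a generic version under assumed behavior, not necessarily fluke's exact signature):

```python
def flatten_dict(d, parent_key="", sep="."):
    """Flatten a nested dict into a single-level dict with dotted keys."""
    items = {}
    for key, value in d.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into sub-dicts, prefixing keys with the parent path.
            items.update(flatten_dict(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items


cfg = {"optimizer": {"name": "sgd", "lr": 0.1}, "epochs": 5}
print(flatten_dict(cfg))
# {'optimizer.name': 'sgd', 'optimizer.lr': 0.1, 'epochs': 5}
```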
fluke.utils.log¶
Classes

- Basic logger.
- TensorBoard logger.
- Weights and Biases logger.
- ClearML logger.

Functions

- Get a logger from its name.
fluke.utils.model¶
Classes

- Wrapper class to get the output of all layers in a model.
- Dataclass to store the model, its associated optimizer, and scheduler.
- Mixin class for model interpolation.
- Linear layer with global and local weights.
- Conv2d layer with global and local weights.
- LSTM layer with global and local weights.
- Embedding layer with global and local weights.
- BatchNorm2d layer with global and local weights.

Functions

- Aggregate the models using a weighted average.
- Iterate over a whole model (or a layer of a model) and replace every BatchNorm2d with a GroupNorm.
- Check if the models fit in the memory of the device.
- Compute the difference between two model state dictionaries.
- Return the model parameters as a contiguous tensor.
- Get the size of the activations of the model.
- Get the global model state dictionary.
- Get the local model state dictionary.
- Get the output shape of a model given the shape of the input.
- Get the keys of the model parameters that are trainable (i.e., require gradients).
- Merge two models using linear interpolation.
- Mix two networks using linear interpolation.
- Set the model interpolation constant.
- Load a state dictionary into a model.
- Create a state dictionary with the same keys as the input state dictionary but with zero tensors.
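The weighted-average aggregation listed first among these functions is the heart of FedAvg-style merging. A framework-free sketch over state dictionaries of plain floats (fluke's actual function operates on PyTorch modules; names here are hypothetical):

```python
def aggregate_state_dicts(state_dicts, weights):
    """Weighted average of per-client state dicts, e.g., with weights
    proportional to each client's number of training examples."""
    total = sum(weights)
    keys = state_dicts[0].keys()
    return {
        k: sum(w * sd[k] for sd, w in zip(state_dicts, weights)) / total
        for k in keys
    }


clients = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 1.0}]
agg = aggregate_state_dicts(clients, weights=[1, 3])
print(agg)  # {'w': 2.5, 'b': 0.75}
```

With tensors, the same formula applies parameter-wise to each entry of the state dictionary.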
fluke.evaluation¶
Classes

- This class is the base class for all evaluators in fluke.
- Evaluate a PyTorch model for classification.
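The classification evaluator can be illustrated with a framework-free accuracy computation (a minimal sketch; fluke's evaluator works on PyTorch models and data loaders and reports richer metrics):

```python
def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


preds = [0, 1, 1, 2, 0]
labels = [0, 1, 2, 2, 0]
print(accuracy(preds, labels))  # 0.8
```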
fluke.algorithms¶
Classes

- Centralized Federated Learning algorithm.
- Personalized Federated Learning algorithm.

Submodules

- Implementation of the APFL [APFL20] algorithm.
- Implementation of the CCVR [CCVR21] algorithm.
- Implementation of the DITTO [Ditto21] algorithm.
- Implementation of the DPFedAVG: Differential Privacy Federated Averaging [DPFedAVG2017] algorithm.
- Implementation of the FAT [FAT20] algorithm.
- Implementation of the FedALA [FedALA23] algorithm.
- Implementation of the FedAMP [FedAMP21] algorithm.
- Implementation of the Federated Averaging [FedAVG17] algorithm.
- Implementation of the Federated Averaging with momentum [FedAVGM19] algorithm.
- Implementation of the Federated Averaging with Spreadout [FedAwS20] algorithm.
- Implementation of the Federated Averaging with Body Aggregation and Body Update [FedBABU22] algorithm.
- Implementation of the FedBN [FedBN21] algorithm.
- Implementation of the FedDyn [FedDyn21] algorithm.
- Implementation of the FedExP [FedExP23] algorithm.
- Implementation of the FedHP: Federated Learning with Hyperspherical Prototypical Regularization [FedHP24] algorithm.
- Implementation of the FedLC [FedLC22] algorithm.
- Implementation of the FedLD [FedLD24] algorithm.
- Implementation of the FedNH [FedNH23] algorithm.
- Implementation of the FedNova [FedNova21] algorithm.
- Implementation of the FedOpt [FedOpt21] algorithm.
- Implementation of the FedPer [FedPer19] algorithm.
- Implementation of the FedProto [FedProto22] algorithm.
- Implementation of the FedProx [FedProx18] algorithm.
- Implementation of the FedRep [FedRep21] algorithm.
- Implementation of the FedROD [FedROD22] algorithm.
- Implementation of the FedRS [FedRS21] algorithm.
- Implementation of the FedSAM [FedSAM22] algorithm.
- Implementation of the FedSGD [FedSGD17] algorithm.
- Implementation of the GEAR [GEAR22] algorithm.
- Implementation of the LG-FedAVG [LG-FedAVG20] algorithm.
- Implementation of the MOON [Moon21] algorithm.
- Implementation of the Per-FedAVG [Per-FedAVG20] algorithm.
- Implementation of the pFedMe [pFedMe20] algorithm.
- Implementation of the SCAFFOLD [SCAFFOLD20] algorithm.
- Implementation of the SuPerFed [SuPerFed22] algorithm.