fluke.algorithms

Warning

This section of the documentation is under construction!

This module contains (as submodules) the implementations of several federated learning algorithms.

Classes included in fluke.algorithms

CentralizedFL

Centralized Federated Learning algorithm.

PersonalizedFL

Personalized Federated Learning algorithm.

class fluke.algorithms.CentralizedFL

class fluke.algorithms.CentralizedFL(n_clients: int, data_splitter: DataSplitter, hyper_params: DDict)[source]

Centralized Federated Learning algorithm. This class is a generic implementation of a centralized federated learning algorithm that follows the Federated Averaging workflow. This class represents the entry point to the federated learning algorithm. Each new algorithm should inherit from this class and implement the specific logic of the algorithm. The main components of the algorithm are:

  • Clients: Each client should implement the fluke.client.Client class, and the specific specialization must be defined in the get_client_class() method. The initialization of the clients is done in the init_clients() method.

  • Server: The server is the entity that coordinates the training process. It should implement the fluke.server.Server class and the specific specialization must be defined in the get_server_class() method. The initialization of the server is done in the init_server() method.

  • Optimizer: The optimizer used by the clients. The default optimizer class is defined in the get_optimizer_class() method.

To run the algorithm, the run() method should be called with the number of rounds and the percentage of eligible clients. This method will call the Server.fit() method which will orchestrate the training process.
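As a rough illustration of this workflow, the following sketch uses toy stand-in classes (ToyServer, ToyClient, and ToyCentralizedFL are illustrative names, not fluke's actual implementation):

```python
# Illustrative sketch of the FedAvg-style workflow described above.
# These stand-in classes only mirror the structure of CentralizedFL;
# they are not the real fluke implementation.

class ToyClient:
    def __init__(self):
        self.updates = 0

    def local_update(self):
        # A real client would train its local model here.
        self.updates += 1

class ToyServer:
    def __init__(self, clients):
        self.clients = clients
        self.rounds_run = 0

    def fit(self, n_rounds, eligible_perc):
        # The server orchestrates training: each round it selects a
        # fraction of clients and (in a real algorithm) aggregates
        # their updates into the global model.
        n_eligible = max(1, int(len(self.clients) * eligible_perc))
        for _ in range(n_rounds):
            for client in self.clients[:n_eligible]:
                client.local_update()
            self.rounds_run += 1

class ToyCentralizedFL:
    def __init__(self, n_clients):
        self.clients = [ToyClient() for _ in range(n_clients)]
        self.server = ToyServer(self.clients)

    def run(self, n_rounds, eligible_perc):
        # As in fluke, run() delegates orchestration to Server.fit().
        self.server.fit(n_rounds, eligible_perc)

algo = ToyCentralizedFL(n_clients=4)
algo.run(n_rounds=3, eligible_perc=0.5)
print(algo.server.rounds_run)  # 3
```

The key design point mirrored here is that the algorithm object itself holds no training loop: it wires clients and server together and hands control to the server.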

Parameters:
  • n_clients (int) – Number of clients.

  • data_splitter (DataSplitter) – Data splitter object.

  • hyper_params (DDict) – Hyperparameters of the algorithm. These hyperparameters should be divided into two parts: the client hyperparameters and the server hyperparameters.

can_override_optimizer() bool[source]

Return whether the optimizer can be changed user-side. Generally, the optimizer can be configured by the user. However, in some cases, the algorithm may require a specific optimizer and the user should not be able to change it.

Returns:

Whether the optimizer can be changed user-side.

Return type:

bool
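A hypothetical subclass that pins its optimizer could signal this by overriding the method (PinnedOptimizerAlgo is an illustrative name, not part of fluke):

```python
# Hypothetical sketch of the can_override_optimizer() contract.

class BaseAlgo:
    def can_override_optimizer(self):
        # By default, the user may configure the optimizer.
        return True

class PinnedOptimizerAlgo(BaseAlgo):
    def can_override_optimizer(self):
        # This algorithm requires a specific optimizer, so any
        # user-provided optimizer configuration is ignored.
        return False

print(BaseAlgo().can_override_optimizer())           # True
print(PinnedOptimizerAlgo().can_override_optimizer())  # False
```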

get_optimizer_class() Optimizer[source]

Get the optimizer class.

Returns:

Optimizer class.

Return type:

torch.optim.Optimizer

get_client_class() Client[source]

Get the client class. This method should be overridden by subclasses when a different client class is defined. This allows reusing all the logic of the algorithm while only changing the client class.

Returns:

Client class.

Return type:

Client
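The override pattern can be sketched with stand-in classes (ProxAlgo and ProxClient below are illustrative, not fluke's classes):

```python
# Illustrative pattern: a new algorithm reuses the base initialization
# logic unchanged and only swaps the client class.

class Client:
    def local_update(self):
        return "plain update"

class ProxClient(Client):
    def local_update(self):
        # e.g. a FedProx-style client would add a proximal term here
        return "proximal update"

class BaseAlgo:
    def get_client_class(self):
        return Client

    def init_clients(self, n_clients):
        # The base initialization instantiates whatever class
        # get_client_class() returns, so it never needs changing.
        cls = self.get_client_class()
        self.clients = [cls() for _ in range(n_clients)]

class ProxAlgo(BaseAlgo):
    def get_client_class(self):
        return ProxClient

prox = ProxAlgo()
prox.init_clients(3)
print(prox.clients[0].local_update())  # proximal update
```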

get_server_class() Server[source]

Get the server class. This method should be overridden by subclasses when a different server class is defined. This allows reusing all the logic of the algorithm while only changing the server class.

Returns:

Server class.

Return type:

Server

init_clients(clients_tr_data: list[FastDataLoader], clients_te_data: list[FastDataLoader], config: DDict) None[source]

Initialize the clients.

Parameters:
  • clients_tr_data (list[FastDataLoader]) – List of training data loaders, one for each client.

  • clients_te_data (list[FastDataLoader]) – List of test data loaders, one for each client. The test data loaders can be None.

  • config (DDict) – Configuration of the clients.

Important

For more details about the configuration of the clients, see the configuration page.

init_server(model: Any, data: FastDataLoader, config: DDict)[source]

Initialize the server.

Parameters:
  • model (Any) – The global model.

  • data (FastDataLoader) – The server-side test set.

  • config (DDict) – Configuration of the server.

set_callbacks(callbacks: callable | Iterable[callable])[source]

Set the callbacks.

Parameters:

callbacks (Union[callable, Iterable[callable]]) – Callbacks to attach to the algorithm.
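The signature accepts either a single callable or an iterable of callables; one way such an argument is typically normalized is sketched below (normalize_callbacks is an illustrative helper, not fluke's implementation):

```python
# Sketch of normalizing a callable-or-iterable argument, as implied
# by the set_callbacks() signature above.
from typing import Callable, Iterable, Union

def normalize_callbacks(callbacks: Union[Callable, Iterable[Callable]]) -> list:
    # A single callable is wrapped in a list; an iterable is materialized.
    if callable(callbacks):
        return [callbacks]
    return list(callbacks)

def on_round_end():
    pass

print(normalize_callbacks(on_round_end)[0] is on_round_end)  # True
print(len(normalize_callbacks([on_round_end, on_round_end])))  # 2
```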

run(n_rounds: int, eligible_perc: float, finalize: bool = True, **kwargs: dict[str, Any])[source]

Run the federated algorithm. This method will call the Server.fit() method which will orchestrate the training process.

Parameters:
  • n_rounds (int) – Number of rounds.

  • eligible_perc (float) – Percentage of eligible clients.

  • finalize (bool, optional) – Whether to finalize the training process. Defaults to True.

  • **kwargs (dict[str, Any]) – Additional keyword arguments.

save(path: str) None[source]

Save the algorithm state into files in the specified directory.

Parameters:

path (str) – Path to the folder where the algorithm state will be saved.

load(path: str) None[source]

Load the algorithm state from the specified folder.

Parameters:

path (str) – Path to the folder where the algorithm state is saved.
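The save/load contract described above (state stored as files inside a directory) can be illustrated with a toy round trip; ToyAlgo and its JSON format are purely illustrative, not fluke's on-disk format:

```python
# Illustrative save/load round trip: state goes into a file inside a
# directory, and load() restores it from the same directory.
import json
import os
import tempfile

class ToyAlgo:
    def __init__(self):
        self.state = {"round": 0}

    def save(self, path: str) -> None:
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "state.json"), "w") as f:
            json.dump(self.state, f)

    def load(self, path: str) -> None:
        with open(os.path.join(path, "state.json")) as f:
            self.state = json.load(f)

algo = ToyAlgo()
algo.state["round"] = 7
with tempfile.TemporaryDirectory() as tmp:
    algo.save(tmp)
    restored = ToyAlgo()
    restored.load(tmp)
print(restored.state["round"])  # 7
```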

abstract class fluke.algorithms.PersonalizedFL

class fluke.algorithms.PersonalizedFL(n_clients: int, data_splitter: DataSplitter, hyper_params: DDict)[source]

Personalized Federated Learning algorithm. This class is a simple extension of the CentralizedFL class where the clients are expected to implement the fluke.client.PFLClient class (see get_client_class()). The main difference with respect to the CentralizedFL class is that the initialization of the clients requires a model, which is the personalized model of the client.

Important

Differently from CentralizedFL, which is actually the FedAvg algorithm, the PersonalizedFL class must not be used as is because it is a generic implementation of a personalized federated learning algorithm. The subclasses of this class should implement the specific logic of the personalized federated learning algorithm.
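The "must be subclassed" contract can be sketched with Python's abc machinery; the classes below (ToyPersonalizedFL, MyPFLAlgo, and the stand-in PFLClient) are illustrative, not fluke's code:

```python
# Sketch of a generic personalized-FL base that cannot be used as is,
# while concrete subclasses supply the algorithm-specific logic.
from abc import ABC, abstractmethod

class PFLClient:
    """Stand-in for fluke.client.PFLClient: holds a personal model."""
    def __init__(self, model):
        # A personalized client is initialized with its own model.
        self.personal_model = model

class ToyPersonalizedFL(ABC):
    def get_client_class(self):
        return PFLClient

    @abstractmethod
    def algorithm_logic(self):
        # Subclasses must provide the specific personalized-FL logic.
        ...

class MyPFLAlgo(ToyPersonalizedFL):
    def algorithm_logic(self):
        return "specific personalized logic"

try:
    ToyPersonalizedFL()  # the generic base cannot be instantiated
except TypeError:
    print("cannot instantiate the abstract base")

algo = MyPFLAlgo()
client = algo.get_client_class()(model="my_personal_model")
print(client.personal_model)  # my_personal_model
```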