fluke.client

The module fluke.client provides the base classes for the clients in fluke.

Classes included in fluke.client

Client

Base Client class.

PFLClient

Personalized Federated Learning client.

Classes

class fluke.client.Client

channel

The communication channel.

evaluate

Evaluate the local model on the client's test_set.

finalize

Finalize the client.

fit

Client's local training procedure.

index

The client identifier.

load

Load the client state from a file.

local_model

The client's local model.

local_update

Client's local update procedure.

model

The client's local model.

optimizer

The optimizer of the client.

n_examples

The number of examples in the local training set.

receive_model

Receive the global model from the server.

save

Save the client state to a file.

scheduler

The learning rate scheduler of the client.

send_model

Send the current model to the server.

server

The server to which the client is connected.

set_server

Set the reference to the server.

state_dict

Get the client state as a dictionary.

_load_from_cache

Load the model, optimizer, and scheduler from the cache.

_save_to_cache

Save the model, optimizer, and scheduler to the cache.

class fluke.client.Client(index: int, train_set: FastDataLoader, test_set: FastDataLoader, optimizer_cfg: OptimizerConfigurator, loss_fn: Module, local_epochs: int = 3, fine_tuning_epochs: int = 0, clipping: float = 0, **kwargs: dict[str, Any])[source]

Bases: ObserverSubject

Base Client class. This is the base class for all clients in fluke. The standard client behavior follows the Federated Averaging algorithm and includes:

  • Receiving the global model from the server;

  • Training the model locally for a number of epochs using the local training set;

  • Sending the updated local model back to the server;

  • (Optional) Evaluating the model on the local test set before and after the training.
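The round-level flow above can be pictured with a small, self-contained sketch. The classes and the one-float "model" below are stand-ins invented for illustration, not fluke's actual API:

```python
# Illustrative stand-ins for the FedAvg-style client round described above:
# receive the global model, train locally, send the update back.

class ToyChannel:
    """Minimal message box standing in for fluke's communication channel."""
    def __init__(self):
        self.inbox = []

    def send(self, payload):
        self.inbox.append(payload)

    def receive(self):
        return self.inbox.pop(0)


class ToyClient:
    def __init__(self, channel, local_epochs=3):
        self.channel = channel
        self.local_epochs = local_epochs
        self.model = None  # the "weights" are a single float here

    def receive_model(self):
        self.model = self.channel.receive()

    def fit(self):
        # "Training" nudges the weight toward a local target each epoch.
        target = 1.0
        for _ in range(self.local_epochs):
            self.model += 0.5 * (target - self.model)

    def send_model(self):
        self.channel.send(self.model)

    def local_update(self):
        self.receive_model()   # 1. get the global model
        self.fit()             # 2. train locally
        self.send_model()      # 3. send the update back


channel = ToyChannel()
channel.send(0.0)              # the "server" publishes the global model
client = ToyClient(channel)
client.local_update()
updated = channel.receive()    # the "server" collects the local model
```

With 3 local epochs, the weight moves from 0.0 to 0.875, halving its distance to the target each epoch.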

hyper_params

The hyper-parameters of the client. The default hyper-parameters are:

  • loss_fn: The loss function.

  • local_epochs: The number of local epochs.

  • fine_tuning_epochs: The number of fine-tuning epochs, i.e., the number of epochs to train the model after the federated learning process.

  • clipping: The clipping value for the gradients.

When a new client class inherits from this class, it must add all its hyper-parameters to this dictionary.

Type:

DDict

train_set

The local training set.

Type:

FastDataLoader

test_set

The local test set.

Type:

FastDataLoader

optimizer_cfg

The optimizer configurator. This is used to create the optimizer and the learning rate scheduler.

Type:

OptimizerConfigurator

device

The device where the client trains the model. By default, it is the device defined in fluke.FlukeENV.

Type:

torch.device

Attention

The client should not directly call methods of the server. The communication between the client and the server must be done through the channel.

Important

When inheriting from this class, make sure to handle the caching of the model, optimizer, scheduler, and any other attribute that should be cached. Caching is done automatically by the methods _load_from_cache() and _save_to_cache(): _load_from_cache() is called before the local update, and _save_to_cache() is called after it. If the client has additional attributes that should be cached, add their names to the _attr_to_cache list. If the client has an additional model, optimizer, and scheduler, these objects should be handled via a private fluke.utils.model.ModOpt object, which stores the model, optimizer, and scheduler together. To access them, define the corresponding properties (e.g., model, optimizer, scheduler).
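The caching contract can be illustrated with a toy stand-in. This is not fluke's implementation; the class, the string "weights", and the momentum_buffer attribute are made up to show how names listed in _attr_to_cache follow the model into and out of the cache:

```python
# Toy illustration of the caching contract described above.
# `_attr_to_cache` lists extra attribute names that should be
# parked in the cache alongside the model between local updates.

class CachingClient:
    def __init__(self):
        self.model = "weights"
        self.momentum_buffer = [0.1, 0.2]   # hypothetical extra attribute
        self._attr_to_cache = ["momentum_buffer"]
        self._cache = {}

    def _save_to_cache(self):
        # Called after the local update: move cached attributes out of RAM.
        self._cache["model"] = self.model
        self.model = None
        for name in self._attr_to_cache:
            self._cache[name] = getattr(self, name)
            setattr(self, name, None)

    def _load_from_cache(self):
        # Called before the local update: restore everything.
        self.model = self._cache.pop("model")
        for name in self._attr_to_cache:
            setattr(self, name, self._cache.pop(name))


client = CachingClient()
client._save_to_cache()
assert client.model is None        # parked in the cache
client._load_from_cache()
```

After the load, the model and every attribute named in _attr_to_cache are back in place.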

Caution

When inheriting from this class, make sure to put all the specific hyper-parameters in the hyper_params attribute. In this way fluke can properly handle the hyper-parameters of the client in the federated learning process.

For example:

class MyClient(Client):
    # We omit the type hints for brevity
    def __init__(self, index, ..., my_param):
        super().__init__(index, ...)
        self.hyper_params.update(my_param=my_param)  # This is important
property model: Module

The client’s local model.

Warning

If the model is stored in the cache, the method retrieves it from the cache but does not remove it. Thus, the performance may be affected if this property is used to get the model multiple times while the model is in the cache.

Returns:

The local model.

Return type:

torch.nn.Module

property optimizer: Optimizer

The optimizer of the client.

Warning

If the optimizer is stored in the cache, the method retrieves it from the cache but does not remove it. Thus, the performance may be affected if this property is used to get the optimizer multiple times while the optimizer is in the cache.

Returns:

The optimizer.

Return type:

torch.optim.Optimizer

property scheduler: LRScheduler

The learning rate scheduler of the client.

Warning

If the scheduler is stored in the cache, the method retrieves it from the cache but does not remove it. Thus, the performance may be affected if this property is used to get the scheduler multiple times while the scheduler is in the cache.

Returns:

The learning rate scheduler.

Return type:

torch.optim.lr_scheduler.LRScheduler

property index: int

The client identifier. This might be useful to identify the client in the federated learning process for logging or debugging purposes.

Returns:

The client identifier.

Return type:

int

property local_model: Module

The client’s local model. This is an alias for model.

Returns:

The local model.

Return type:

torch.nn.Module

property n_examples: int

The number of examples in the local training set.

Returns:

The number of examples in the local training set.

Return type:

int

property channel: Channel

The communication channel.

Attention

Use this channel to exchange data/information with the server.

Returns:

The communication channel.

Return type:

Channel

property server: Server

The server to which the client is connected. This reference must only be used to send messages through the channel.

Returns:

The server.

Return type:

Server

set_server(server: Server) None[source]

Set the reference to the server. Setting the server also sets the communication channel, and the client must use this channel to communicate with the server.

Parameters:

server (Server) – The server that orchestrates the federated learning process.

receive_model() None[source]

Receive the global model from the server. This method is responsible for receiving the global model from the server and updating the local model accordingly. The model is received as a payload of a fluke.comm.Message with msg_type="model" from the inbox of the client itself. The method uses the channel to receive the message.

send_model() None[source]

Send the current model to the server. The model is sent as a fluke.comm.Message with msg_type="model" to the server. The method uses the channel to send the message.

local_update(current_round: int) None[source]

Client’s local update procedure. Before starting the local training, the client receives the global model from the server. Then, training runs for hyper_params.local_epochs epochs on the local training set, using the loss function defined in hyper_params.loss_fn. Training happens on the device defined in the client. After the training, the client sends the model to the server.

Parameters:

current_round (int) – The current round of the federated learning process.

fit(override_local_epochs: int = 0) float[source]

Client’s local training procedure.

Parameters:

override_local_epochs (int, optional) – Overrides the number of local epochs, by default 0 (use the default number of local epochs as in hyper_params.local_epochs).

Returns:

The average loss of the model during the training.

Return type:

float

evaluate(evaluator: Evaluator, test_set: FastDataLoader) dict[str, float][source]

Evaluate the local model on the client’s test_set. If the test set is not set or the client has not received the global model from the server, the method returns an empty dictionary.

Parameters:
  • evaluator (Evaluator) – The evaluator to use for the evaluation.

  • test_set (FastDataLoader) – The test set to use for the evaluation.

Returns:

The evaluation results. The keys are the metrics and the values are the results.

Return type:

dict[str, float]
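The contract just described (an empty dictionary when there is nothing to evaluate, otherwise a metric-to-value mapping) can be sketched with plain-Python stand-ins. The evaluator, the metric name, and the lambda "model" below are invented for illustration and are not part of fluke:

```python
# Stand-in mirroring the documented evaluate() contract:
# return {} when there is no model or test set, otherwise
# a metric-name -> value mapping produced by the evaluator.

def evaluate(model, evaluator, test_set):
    if model is None or test_set is None:
        return {}
    return evaluator(model, test_set)

def toy_evaluator(model, test_set):
    correct = sum(1 for x, y in test_set if model(x) == y)
    return {"accuracy": correct / len(test_set)}

model = lambda x: x % 2                  # stand-in "model"
test_set = [(1, 1), (2, 0), (3, 1), (4, 1)]
results = evaluate(model, toy_evaluator, test_set)
empty = evaluate(None, toy_evaluator, test_set)
```

Here the stand-in model gets 3 of 4 examples right, so results is {"accuracy": 0.75}, while a missing model yields {}.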

finalize() None[source]

Finalize the client. This method is called at the end of the federated learning process. The default behavior is to receive the global model from the server, which may then be used to evaluate the model on the local test set.

Attention

When inheriting from this class, make sure to override this method if this behavior is not desired.

state_dict() dict[str, Any][source]

Get the client state as a dictionary.

Returns:

The client state.

Return type:

dict

save(path: str) None[source]

Save the client state to a file.

Parameters:

path (str) – The path to the file where the client state will be saved.

load(path: str, model: Module) dict[str, Any][source]

Load the client state from a file.

Parameters:
  • path (str) – The path to the file where the client state is saved.

  • model (torch.nn.Module) – The model to use for the client.

Returns:

The loaded client state.

Return type:

dict
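The state_dict()/save()/load() round trip can be sketched with a toy class. The JSON format, the class, and its attributes below are illustrative assumptions, not fluke's actual on-disk format or state contents:

```python
import json
import os
import tempfile

# Toy round trip mirroring the documented contract:
# state_dict() -> dict, save(path) writes it, load(path) restores it.

class ToyClientState:
    def __init__(self, index):
        self.index = index
        self.rounds_done = 0

    def state_dict(self):
        return {"index": self.index, "rounds_done": self.rounds_done}

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.state_dict(), f)

    def load(self, path):
        with open(path) as f:
            state = json.load(f)
        self.index = state["index"]
        self.rounds_done = state["rounds_done"]
        return state


c = ToyClientState(index=7)
c.rounds_done = 5
path = os.path.join(tempfile.mkdtemp(), "client7.json")
c.save(path)

restored = ToyClientState(index=0)
state = restored.load(path)
```

After load(), the restored object carries the saved state, and the loaded dictionary is returned to the caller, matching the signature above.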

_load_from_cache() None[source]

Load the model, optimizer, and scheduler from the cache. The method retrieves the model, optimizer, and scheduler from the cache and sets them as the client’s model, optimizer, and scheduler. The method should be called before the local update.

Potential additional attributes that should be loaded from the cache should be added to the _attr_to_cache list.

_save_to_cache() None[source]

Save the model, optimizer, and scheduler to the cache. The method should be called after the local update.

Potential additional attributes that should be saved to the cache should be added to the _attr_to_cache list.

class fluke.client.PFLClient

evaluate

Evaluate the personalized model on the test_set.

local_model

The client's local model.

personalized_model

The personalized model.

pers_scheduler

The learning rate scheduler of the personalized model.

pers_optimizer

The optimizer of the personalized model.

class fluke.client.PFLClient(index: int, model: Module, train_set: FastDataLoader, test_set: FastDataLoader, optimizer_cfg: OptimizerConfigurator, loss_fn: Module, local_epochs: int = 3, fine_tuning_epochs: int = 0, clipping: float = 0, **kwargs: dict[str, Any])[source]

Bases: Client

Personalized Federated Learning client. This class is a personalized version of the Client class. It is used to implement personalized federated learning algorithms. The main difference is that the client has a personalized model (i.e., the attribute personalized_model).

Note

The client evaluation is performed using personalized_model instead of the global model (i.e., fluke.client.Client.model).
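The evaluation difference described in the note can be shown with toy stand-ins. These classes, the lambda "models", and the accuracy metric are invented for illustration; they are not fluke's real Client/PFLClient:

```python
# Toy stand-ins: a personalized client evaluates its personalized
# model, while the base client evaluates the shared global model.

class ToyClient:
    def __init__(self):
        self.model = lambda x: 0            # "global" model
        self.personalized_model = None

    def evaluate(self, test_set):
        return self._accuracy(self.model, test_set)

    @staticmethod
    def _accuracy(model, test_set):
        hits = sum(1 for x, y in test_set if model(x) == y)
        return {"accuracy": hits / len(test_set)}


class ToyPFLClient(ToyClient):
    def __init__(self):
        super().__init__()
        self.personalized_model = lambda x: x   # fits this client's data

    def evaluate(self, test_set):
        # Mirrors the documented contract: empty dict without a
        # personalized model, otherwise evaluate the personalized one.
        if self.personalized_model is None:
            return {}
        return self._accuracy(self.personalized_model, test_set)


test_set = [(0, 0), (1, 1), (2, 2), (3, 3)]
global_acc = ToyClient().evaluate(test_set)["accuracy"]        # 0.25
personal_acc = ToyPFLClient().evaluate(test_set)["accuracy"]   # 1.0
```

On this client's data, the personalized model scores 1.0 while the shared model scores 0.25, which is exactly why PFLClient.evaluate() targets personalized_model.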

evaluate(evaluator: Evaluator, test_set: FastDataLoader) dict[str, float][source]

Evaluate the personalized model on the test_set. If the test set is not set or the client has no personalized model, the method returns an empty dictionary.

Parameters:
  • evaluator (Evaluator) – The evaluator to use for the evaluation.

  • test_set (FastDataLoader) – The test set to use for the evaluation.

Returns:

The evaluation results. The keys are the metrics and the values are the results.

Return type:

dict[str, float]

property local_model: Module

The client’s local model. This is an alias for personalized_model.

Returns:

The local model.

Return type:

torch.nn.Module

property pers_optimizer: Optimizer

The optimizer of the personalized model.

Warning

If the optimizer is stored in the cache, the method retrieves it from the cache but does not remove it. Thus, the performance may be affected if this property is used to get the optimizer multiple times while the optimizer is in the cache.

Returns:

The optimizer.

Return type:

torch.optim.Optimizer

property pers_scheduler: LRScheduler

The learning rate scheduler of the personalized model.

Warning

If the scheduler is stored in the cache, the method retrieves it from the cache but does not remove it. Thus, the performance may be affected if this property is used to get the scheduler multiple times while the scheduler is in the cache.

Returns:

The learning rate scheduler.

Return type:

torch.optim.lr_scheduler.LRScheduler

property personalized_model: Module

The personalized model.

Warning

If the model is stored in the cache, the method retrieves it from the cache but does not remove it. Thus, the performance may be affected if this property is used to get the model multiple times while the model is in the cache.

Returns:

The personalized model.

Return type:

torch.nn.Module