rectorch.evaluation

Module containing utility functions to evaluate recommendation engines.

rectorch.evaluation.evaluate(model, test_loader, metric_list)

Evaluate the given model.

The model evaluation is performed with all the provided metrics in metric_list. The test set is loaded through the provided rectorch.samplers.Sampler (i.e., test_loader).

Parameters
  • model : rectorch.models.RecSysModel
    • The model to evaluate.

  • test_loader : rectorch.samplers.Sampler
    • The test set loader.

  • metric_list : list of str
    • The list of metrics to compute. Metrics are indicated by strings of the form:

      metric_name@k

      where metric_name must correspond to one of the module's method names without the suffix ‘_at_k’, and k is the integer value of the method's corresponding parameter. For example, ndcg@10 is a valid metric string and corresponds to the method ndcg_at_k with k=10.

Returns
  • dict of numpy.array
    • Dictionary with the results for each metric in metric_list. Keys are strings representing the metrics, while each value is an array containing the metric values computed per user.
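A minimal usage sketch (not part of rectorch itself): the helper below assumes a trained rectorch.models.RecSysModel and a rectorch.samplers.Sampler over the test set are built elsewhere and passed in by the caller; only the evaluate call and the metric-string convention follow the documentation above, while parse_metric and evaluation_report are hypothetical names introduced for illustration.

    import numpy as np
    from rectorch.evaluation import evaluate

    def parse_metric(metric_str):
        # Illustrates the naming convention described above ("ndcg@10" ->
        # method ndcg_at_k with k=10); a sketch of the convention, not
        # rectorch's internal parser.
        name, k = metric_str.split("@")
        return name + "_at_k", int(k)

    def evaluation_report(model, test_loader, metric_list):
        # evaluate() returns a dict mapping each metric string to a numpy
        # array with one value per evaluated user; reduce each array to its
        # mean for a single per-metric score.
        results = evaluate(model, test_loader, metric_list)
        return {metric: float(np.mean(per_user))
                for metric, per_user in results.items()}

    # The convention helper can be checked without a model:
    assert parse_metric("ndcg@10") == ("ndcg_at_k", 10)

Averaging the per-user arrays, as in evaluation_report, is the usual way to report a single score per metric, but the raw arrays returned by evaluate also support per-user analyses such as standard deviations or confidence intervals.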