rectorch.evaluation
Module containing utility functions to evaluate recommendation engines.
rectorch.evaluation.evaluate(model, test_loader, metric_list)

Evaluate the given model.

The evaluation of model is performed with all the metrics provided in metric_list. The test set is loaded through the provided rectorch.samplers.Sampler (i.e., test_loader).

Parameters
- model : rectorch.models.RecSysModel
  The model to evaluate.
- test_loader : rectorch.samplers.Sampler
  The test set loader.
- metric_list : list of str
  The list of metrics to compute. Metrics are indicated by strings of the form metric_name@k, where metric_name must correspond to one of the metric method names without the suffix '_at_k', and k is the corresponding parameter of the method and must be an integer value. For example, ndcg@10 is a valid metric name and corresponds to the method ndcg_at_k with k=10.
Returns

- dict of numpy.array
  Dictionary with the results for each metric in metric_list. Keys are strings representing the metrics, while each value is an array with the value of the metric computed on the users.
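
A minimal usage sketch follows. It assumes that model (a trained rectorch.models.RecSysModel) and test_loader (a rectorch.samplers.Sampler over the test set) have already been built elsewhere in the rectorch pipeline; ndcg@10 is the example metric given above, while recall@20 assumes a recall_at_k metric method is available:

    import numpy as np
    from rectorch.evaluation import evaluate

    # `model` and `test_loader` are assumed to exist already: a trained
    # rectorch.models.RecSysModel and a rectorch.samplers.Sampler over the
    # test set, respectively.
    results = evaluate(model, test_loader, ["ndcg@10", "recall@20"])

    # The returned dict maps each metric string to a numpy array holding the
    # metric value computed for each test user, so summary statistics can be
    # taken over users.
    for metric, per_user in results.items():
        print("%s: mean=%.4f, std=%.4f" % (metric, np.mean(per_user), np.std(per_user)))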