rectorch.evaluation
Module containing utility functions to evaluate recommendation engines.
rectorch.evaluation.evaluate(model, test_loader, metric_list)

Evaluate the given model.

The evaluation of model is performed with all the metrics provided in metric_list. The test set is loaded through the provided rectorch.samplers.Sampler (i.e., test_loader).

- Parameters
  - model : rectorch.models.RecSysModel
    The model to evaluate.
  - test_loader : rectorch.samplers.Sampler
    The test set loader.
  - metric_list : list of str
    The list of metrics to compute. Metrics are indicated by strings of the form metric_name@k, where metric_name must correspond to one of the metric method names without the suffix '_at_k', and k is the corresponding parameter of the method; it must be an integer value. For example, ndcg@10 is a valid metric name and corresponds to the method ndcg_at_k with k=10 (see the sketch after this list).
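As a purely illustrative sketch of the metric_name@k convention described above, this is how such a string decomposes; the split and name mapping below mirror the documented format and are not rectorch internals:

    # Illustrative parsing of a metric string such as "ndcg@10".
    metric = "ndcg@10"
    name, k = metric.split("@")    # name = "ndcg", k = "10"
    method_name = name + "_at_k"   # -> "ndcg_at_k"
    k = int(k)                     # k must be an integer, here 10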
 
- Returns
  - dict of numpy.array
    Dictionary with the results for each metric in metric_list. Keys are strings representing the metrics, while each value is an array holding the value of the metric computed for each user.
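A minimal usage sketch follows. It assumes a trained rectorch.models.RecSysModel (model) and a rectorch.samplers.Sampler built on the test set (test_loader) are already available; both names are placeholders, not objects constructed here, and "recall@20" assumes a recall_at_k metric method exists:

    from rectorch.evaluation import evaluate

    # Compute per-user NDCG@10 and Recall@20 on the test set.
    results = evaluate(model, test_loader, ["ndcg@10", "recall@20"])

    # Each value is a numpy array with one entry per user, so averaging
    # over it gives the usual aggregate score.
    print(results["ndcg@10"].mean())

Because the returned arrays are per-user, any aggregate (mean, median, percentiles) can be computed directly on them without re-running the evaluation.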