In recent years, immense progress has been made in the development of recommendation, retrieval, and personalization techniques. The evaluation of these systems, however, is still based on traditional metrics, e.g. precision, recall, and/or RMSE, often without taking the use case and situation of the system into consideration. The rapid evolution of novel IR and recommender systems fosters the need for new evaluation paradigms.

This workshop serves as a venue for work on novel, personalization-centric benchmarking approaches to evaluate adaptive retrieval and recommender systems.

New approaches to evaluating such systems should assess both functional and non-functional requirements. Functional requirements go beyond traditional relevance metrics and focus on user-centered utility metrics, such as novelty, diversity, and serendipity. Non-functional requirements focus on performance and technical aspects, e.g. scalability and reactivity.
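
As a minimal sketch of what two such user-centered metrics can look like in practice, consider intra-list diversity (mean pairwise cosine distance between recommended items) and popularity-based novelty (mean self-information of the recommended items). The function names, the cosine-distance formulation, and the toy data below are our own assumptions for illustration, not metrics prescribed by the workshop.

```python
import itertools

import numpy as np


def intra_list_diversity(item_vectors):
    """Mean pairwise cosine distance between the items in one
    recommendation list; higher values indicate a more diverse list."""
    pairs = list(itertools.combinations(item_vectors, 2))
    if not pairs:
        return 0.0
    dists = []
    for a, b in pairs:
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        dists.append(1.0 - cos)
    return float(np.mean(dists))


def novelty(recommended_ids, popularity):
    """Mean self-information -log2(p(i)) over the recommended items,
    where p(i) is the fraction of users who interacted with item i;
    a list of blockbusters scores low, long-tail items score high."""
    return float(np.mean([-np.log2(popularity[i]) for i in recommended_ids]))


# Toy example: three recommended items with (hypothetical) content
# feature vectors and catalogue-wide interaction probabilities.
vectors = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
pop = {"a": 0.5, "b": 0.01, "c": 0.1}

print(intra_list_diversity(vectors))   # ~0.53
print(novelty(["a", "b", "c"], pop))   # ~3.66
```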

The aim of this workshop is to foster research on the evaluation of adaptive retrieval and recommender systems, thus joining benchmarking efforts from the information retrieval and recommender systems communities.

Keynote Speaker and Panelists confirmed

395 days ago · Frank Hopfgartner
We are happy to announce that the keynote speaker and panelists for the BARS workshop have been confirmed. Torben Brodt from plista GmbH will give a keynote on "The Search for the Best Live Recomm…

CFP: Benchmarking Recommender Systems

500 days ago · Alan Said
This is a call for papers for the ACM TIST special issue on Benchmarking Recommender Systems. Cal…

Accepted

500 days ago · Frank Hopfgartner
We are pleased to announce that the Workshop on Benchmarking Adaptive Retrieval and Recommender Syst…