CFP: Benchmarking Recommender Systems

This is a call for papers for the ACM TIST special issue on Recommender System Benchmarking.

Call for Papers
ACM Transactions on Intelligent Systems and Technology
Special Issue on Recommender System Benchmarking

Overview

Recommender systems add value to vast content resources by matching users with items of interest. In recent years, immense progress has been made in recommendation techniques, yet the evaluation of these systems is still based on traditional information retrieval and statistical metrics (e.g., precision, recall, RMSE) that often do not take the use case and context of the system into account. The rapid evolution of recommender systems, in both their goals and their application domains, fosters the need for new evaluation methodologies and environments.

This special issue serves as a venue for work on novel, recommendation-centric benchmarking approaches that take user utility, business value, and technical constraints into consideration.
New approaches should assess both functional and non-functional requirements. Functional requirements go beyond traditional relevance and focus on user-centered utility metrics such as novelty, diversity, and serendipity.
Non-functional requirements focus on performance (e.g., scalability of both the model-building and online recommendation phases) and reliability (e.g., consistency of recommendations over time, robustness to incomplete, erroneous, or malicious input data).

Topics of Interest

We invite the submission of high-quality manuscripts reporting relevant research in the area of benchmarking and evaluating recommender systems. The special issue welcomes submissions presenting technical, experimental, methodological, and/or applied contributions in this scope, addressing (though not limited to) the following topics:

  • New metrics and methods for the quality estimation of recommender systems
  • Mapping metrics to business goals and values
  • Novel frameworks for the user-centric evaluation of recommender systems
  • Validation of offline methods with online studies
  • Comparison of evaluation metrics and methods
  • Comparison of recommender algorithms across multiple systems and domains
  • Trade-offs between technical constraints and accuracy
  • Robustness of recommender systems to missing, erroneous or malicious data
  • Evaluation methods in new application scenarios (cross-domain, live/streaming recommendation)
  • New datasets for the evaluation of recommender systems
  • Benchmarking frameworks
  • Multiple-objective benchmarking
  • Real benchmarking experiences (from benchmarking event organizers)

Submissions

Manuscripts should be submitted through the ACM TIST electronic submission system at http://mc.manuscriptcentral.com/tist (please select “Special Issue: Recommender System Benchmarking” as the manuscript type). Submissions must adhere to the ACM TIST instructions and guidelines for authors, available at the journal website: http://tist.acm.org.

Papers will be evaluated for their originality, significance of contribution, soundness, clarity, and overall quality. The interest of contributions will be assessed in terms of technical and scientific findings, contribution to the knowledge and understanding of the problem, methodological advances, and/or practical value.

Important Dates

  • Paper submission due:     December 15th, 2013
  • First round of reviews:   February 15th, 2014
  • First round of revisions: March 15th, 2014
  • Second round of reviews:  April 15th, 2014
  • Final round of revisions: May 15th, 2014
  • Final paper notification: June 15th, 2014
  • Camera-ready due:         July 2014

Guest Editors

Paolo Cremonesi – Politecnico di Milano
paolo.cremonesi[at]polimi.it
http://home.dei.polimi.it/cremones/

Alan Said – CWI
alan[at]cwi.nl
http://www.alansaid.com

Domonkos Tikk – Gravity R&D
domonkos.tikk[at]gravityrd.com
http://www.tmit.bme.hu/tikk.domonkos

Michelle X. Zhou – IBM Research
mzhou[at]us.ibm.com
http://researcher.watson.ibm.com/researcher/view.php?person=us-mzhou