BARS 2013 – Workshop on Benchmarking Adaptive Retrieval and Recommender Systems
http://www.bars-workshop.org
In conjunction with ACM SIGIR. Dublin, August 1, 2013.

Keynote Speaker and Panelists confirmed
http://www.bars-workshop.org/2013/07/keynote-speaker-and-panelists-confirmed/
Tue, 30 Jul 2013 13:03:18 +0000

We are happy to announce that the keynote speaker and panelists for the BARS workshop have been confirmed.

Torben Brodt from plista GmbH will give a keynote on “The Search for the Best Live Recommender System”. plista is a data-driven content and ad distribution network, mainly active in the German-speaking world. Its technology is used on thousands of premium websites to recommend millions of news articles and ads to even more users, currently processing over 5,000 requests per second. In his talk, he will present the challenges and opportunities that arise from handling such vast amounts of data in real time. In particular, he will illustrate why users’ context (e.g., category, geo-location, day of the week) plays a key role in building successful recommender algorithms, and will outline the importance of thoroughly evaluating these techniques. He will also discuss plista’s experience in organizing a real-time news recommendation contest, which gives researchers the chance to run their algorithms in plista’s production system and thus benefit from real feedback from real users.
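
(For readers unfamiliar with context-aware recommendation, the following minimal Python sketch illustrates the general idea of re-weighting a base recommender's scores by context-conditional popularity. All names here are hypothetical; this illustrates the technique in general, not plista's actual system.)

    # Hypothetical sketch of contextual post-filtering: scores from any base
    # recommender are blended with the click popularity of each article's
    # category in the current context (e.g., site category, weekday).
    def rerank(base_scores, article_category, context_clicks, context, alpha=0.5):
        """Return article ids ranked by a context-aware blended score.

        base_scores:      {article_id: score} from any base recommender
        article_category: {article_id: category}
        context_clicks:   {(context, category): observed click count}
        context:          current context key, e.g. ("sports_site", "Mon")
        """
        total = sum(context_clicks.get((context, c), 0)
                    for c in set(article_category.values())) or 1
        blended = {}
        for article, score in base_scores.items():
            prior = context_clicks.get((context, article_category[article]), 0) / total
            blended[article] = (1 - alpha) * score + alpha * prior
        return sorted(blended, key=blended.get, reverse=True)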

The workshop will end with a panel session in which experts from academia and industry discuss emerging challenges in the field. The panel members are Paul N. Bennett (Microsoft Research), Jeremy Pickens (Catalyst Repository Systems), Jimmy Lin (University of Maryland), and Neil Hurley (UCD). We are looking forward to some lively discussions.

CFP: Benchmarking Recommender Systems
http://www.bars-workshop.org/2013/04/cfp-benchmarking-recommender-systems/
Tue, 16 Apr 2013 16:16:03 +0000

This is a call for papers for the ACM TIST special issue on Benchmarking Recommender Systems.

Call for Papers
ACM Transactions on Intelligent Systems and Technology
Special Issue on Recommender System Benchmarking

Overview

Recommender systems add value to vast content resources by matching users with items of interest. In recent years, immense progress has been made in recommendation techniques. The evaluation of these systems, however, is still based on traditional information retrieval and statistics metrics (e.g., precision, recall, RMSE), which often do not take the use case and situation of the system into consideration.
The rapid evolution of recommender systems, in both their goals and their application domains, fosters the need for new evaluation methodologies and environments.
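
(For reference, the traditional metrics mentioned above in their standard textbook form; a minimal Python sketch. Real evaluations average these scores over many users and cross-validation folds.)

    import math

    def precision_at_k(recommended, relevant, k):
        """Fraction of the top-k recommended items that are relevant."""
        return sum(1 for item in recommended[:k] if item in relevant) / k

    def recall_at_k(recommended, relevant, k):
        """Fraction of all relevant items that appear in the top-k."""
        return sum(1 for item in recommended[:k] if item in relevant) / len(relevant)

    def rmse(predicted, actual):
        """Root-mean-square error between predicted and observed ratings."""
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                         / len(predicted))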


This special issue serves as a venue for work on novel, recommendation-centric benchmarking approaches that take users’ utility, business value, and technical constraints into consideration.
New evaluation approaches should cover both functional and non-functional requirements. Functional requirements go beyond traditional relevance metrics and focus on user-centered utility metrics such as novelty, diversity, and serendipity.
Non-functional requirements focus on performance (e.g., scalability of both the model-building and online recommendation phases) and reliability (e.g., consistency of recommendations over time, and robustness to incomplete, erroneous, or malicious input data).
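
(As an illustration, two of the user-centered utility metrics mentioned above in one common formulation; a minimal Python sketch. The literature contains several variants of both.)

    import math

    def intra_list_diversity(recommended, distance):
        """Average pairwise distance between recommended items.

        distance(i, j) can be any item dissimilarity, e.g. one minus the
        cosine similarity of content features. Assumes >= 2 items.
        """
        pairs = [(i, j) for idx, i in enumerate(recommended)
                 for j in recommended[idx + 1:]]
        return sum(distance(i, j) for i, j in pairs) / len(pairs)

    def novelty(recommended, popularity, n_users):
        """Mean self-information of the recommended items.

        popularity[item] = number of users who have interacted with the
        item; rarer items contribute higher (more novel) scores.
        """
        return sum(-math.log2(popularity[item] / n_users)
                   for item in recommended) / len(recommended)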

Topics of Interest

We invite the submission of high-quality manuscripts reporting relevant research in the area of benchmarking and evaluation of recommender systems. The special issue welcomes submissions presenting technical, experimental, methodological, and/or applicative contributions in this scope, addressing, though not limited to, the following topics:

  • New metrics and methods for the quality estimation of recommender systems
  • Mapping metrics to business goals and values
  • Novel frameworks for the user-centric evaluation of recommender systems
  • Validation of off-line methods with online studies
  • Comparison of evaluation metrics and methods
  • Comparison of recommender algorithms across multiple systems and domains
  • Measuring technical constraints vs. accuracy
  • Robustness of recommender systems to missing, erroneous or malicious data
  • Evaluation methods in new application scenarios (cross domain, live/stream recommendation)
  • New datasets for the evaluation of recommender systems
  • Benchmarking frameworks
  • Multiple-objective benchmarking
  • Real benchmarking experiences (from benchmarking event organizers)

Submissions

Manuscripts shall be sent through the ACM TIST electronic submission system at http://mc.manuscriptcentral.com/tist (please select “Special Issue: Recommender System Benchmarking” as the manuscript type). Submissions shall adhere to the ACM TIST instructions and guidelines for authors available at the journal website: http://tist.acm.org.

Papers will be evaluated for originality, significance of contribution, soundness, clarity, and overall quality. The interest of each contribution will be assessed in terms of its technical and scientific findings, its contribution to the knowledge and understanding of the problem, its methodological advancements, and/or its applicative value.

Important Dates

  • Paper submission due:     December 15th, 2013
  • First round of reviews:   February 15th, 2014
  • First round of revisions: March 15th, 2014
  • Second round of reviews:  April 15th, 2014
  • Final round of revisions: May 15th, 2014
  • Final paper notification: June 15th, 2014
  • Camera-ready due:         July 2014

Guest Editors

Paolo Cremonesi – Politecnico di Milano
paolo.cremonesi[at]polimi.it
http://home.dei.polimi.it/cremones/

Alan Said – CWI
alan[at]cwi.nl
http://www.alansaid.com

Domonkos Tikk – Gravity R&D
domonkos.tikk[at]gravityrd.com
http://www.tmit.bme.hu/tikk.domonkos

Michelle X. Zhou – IBM Research
mzhou[at]us.ibm.com
http://researcher.watson.ibm.com/researcher/view.php?person=us-mzhou

Accepted
http://www.bars-workshop.org/2013/04/accepted/
Tue, 16 Apr 2013 12:32:05 +0000

We are pleased to announce that the Workshop on Benchmarking Adaptive Retrieval and Recommender Systems has been accepted to ACM SIGIR in Dublin, Ireland.

This website will be continuously updated with information regarding the workshop.
