Evaluating retrieval over sessions: the TREC session track 2011–2014

Ben Carterette, Paul Clough, Mark Hall, Evangelos Kanoulas, Mark Sanderson

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding (ISBN)

13 Citations (Scopus)
4 Downloads (Pure)

Abstract

Information Retrieval (IR) research has traditionally focused on serving the best results for a single query, so-called ad hoc retrieval. However, users typically search iteratively, refining and reformulating their queries during a session. A key challenge in the study of this interaction is the creation of suitable evaluation resources to assess the effectiveness of IR systems over sessions. This paper describes the TREC Session Track, which ran from 2010 through to 2014 and focused on forming test collections that included various forms of implicit feedback. We describe the test collections, give a brief analysis of the differences between the datasets over the years, and present evaluation results that demonstrate that the use of user session data significantly improved effectiveness.
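The abstract's central claim is that folding a user's earlier queries in a session (implicit feedback) into retrieval for the current query improves effectiveness. As a rough, self-contained illustration only, and not the Session Track's evaluation protocol or any participant's system, the Python sketch below shows one common way this can be done: terms from earlier queries in the session are added to the current query with a reduced weight, and the resulting ranking is compared with a current-query-only ranking using nDCG. The documents, queries, term weight, and relevance judgements are all invented for the example.

# Minimal sketch, not the track's or the authors' implementation: it illustrates the idea
# from the abstract of using earlier queries in a session as implicit feedback when ranking
# for the current query, and scores the two runs with nDCG. All documents, queries, the
# 0.5 term weight, and the relevance judgements are invented for this example.

from collections import Counter
from math import log2

# Toy collection: doc id -> text, plus hypothetical graded relevance judgements for the topic.
DOCS = {
    "d1": "low cost air travel to pisa italy",
    "d2": "pisa tower opening hours and tickets",
    "d3": "cheap flights and budget airlines in italy",
    "d4": "history of the leaning tower of pisa",
}
QRELS = {"d1": 2, "d3": 2, "d2": 1, "d4": 0}

# One session: the user reformulates towards the final (current) query.
PAST_QUERIES = ["cheap flights italy", "budget airlines pisa"]
CURRENT_QUERY = "low cost travel pisa"


def score(doc_text: str, weighted_terms: Counter) -> float:
    """Weighted term-overlap score between a document and a query representation."""
    doc_terms = Counter(doc_text.split())
    return sum(weight * doc_terms[term] for term, weight in weighted_terms.items())


def rank(weighted_terms: Counter) -> list:
    """Rank all documents by descending score."""
    return sorted(DOCS, key=lambda d: score(DOCS[d], weighted_terms), reverse=True)


def ndcg(ranking: list, qrels: dict, k: int = 4) -> float:
    """nDCG@k with the usual log2 rank discount."""
    dcg = sum(qrels.get(d, 0) / log2(i + 2) for i, d in enumerate(ranking[:k]))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(rel / log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg else 0.0


# Baseline run: the current query on its own.
current_only = Counter(CURRENT_QUERY.split())

# Session-aware run: terms from earlier queries are added with a reduced weight.
session_aware = Counter(current_only)
for past_query in PAST_QUERIES:
    for term in past_query.split():
        session_aware[term] += 0.5

print("current query only :", round(ndcg(rank(current_only), QRELS), 3))
print("with session terms :", round(ndcg(rank(session_aware), QRELS), 3))

On this toy data the session-aware run ranks the relevant documents higher and so obtains the higher nDCG; it mirrors, in miniature, the kind of with- and without-session-data comparison the abstract describes.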
Original language: English
Title of host publication: Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval
Pages: 685-688
DOIs: https://doi.org/10.1145/2911451.2914675
Publication status: Published - 18 Jul 2016
Event: Special Interest Group on Information Retrieval (SIGIR) - Pisa, Italy
Duration: 17 Jul 2016 – 21 Jul 2016

Conference

Conference: Special Interest Group on Information Retrieval (SIGIR)
Country: Italy
City: Pisa
Period: 17/07/16 – 21/07/16

Fingerprint

Information retrieval
Feedback

Cite this

@inproceedings{3b28d183778a434884a47b4f310208c7,
title = "Evaluating retrieval over sessions: the trec session track 2011–2014. Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval",
abstract = "Information Retrieval (IR) research has traditionally focused on serving the best results for a single query| so-called ad hoc retrieval. However, users typically search iteratively, re ning and reformulating their queries during a session. A key challenge in the study of this interaction is the creation of suitable evaluation resources to assess the e ectiveness of IR systems over sessions. This paper describes the TREC Session Track, which ran from 2010 through to 2014, which focussed on forming test collections that included various forms of implicit feedback. We describe the test collections; a brief analysis of the di erences between datasets over the years; and the evaluation results that demonstrate that the use of user session data signi cantly improved e ectiveness.",
author = "Ben Carterette and Paul Clough and Mark Hall and Evangelos Kanoulas and Mark Sanderson",
year = "2016",
month = "7",
day = "18",
doi = "10.1145/2911451.2914675",
language = "English",
isbn = "978-1-4503-4069-4",
pages = "685--688",
booktitle = "Not Known",

}

Carterette, B, Clough, P, Hall, M, Kanoulas, E & Sanderson, M 2016, Evaluating retrieval over sessions: the TREC session track 2011–2014. in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 685-688, Special Interest Group on Information Retrieval (SIGIR), Pisa, Italy, 17/07/16. https://doi.org/10.1145/2911451.2914675

Evaluating retrieval over sessions: the TREC session track 2011–2014. / Carterette, Ben; Clough, Paul; Hall, Mark; Kanoulas, Evangelos; Sanderson, Mark.

Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2016. p. 685-688.

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding (ISBN)

TY - GEN

T1 - Evaluating retrieval over sessions: the TREC session track 2011–2014

AU - Carterette, Ben

AU - Clough, Paul

AU - Hall, Mark

AU - Kanoulas, Evangelos

AU - Sanderson, Mark

PY - 2016/7/18

Y1 - 2016/7/18

N2 - Information Retrieval (IR) research has traditionally focused on serving the best results for a single query, so-called ad hoc retrieval. However, users typically search iteratively, refining and reformulating their queries during a session. A key challenge in the study of this interaction is the creation of suitable evaluation resources to assess the effectiveness of IR systems over sessions. This paper describes the TREC Session Track, which ran from 2010 through to 2014 and focused on forming test collections that included various forms of implicit feedback. We describe the test collections, give a brief analysis of the differences between the datasets over the years, and present evaluation results that demonstrate that the use of user session data significantly improved effectiveness.

AB - Information Retrieval (IR) research has traditionally focused on serving the best results for a single query, so-called ad hoc retrieval. However, users typically search iteratively, refining and reformulating their queries during a session. A key challenge in the study of this interaction is the creation of suitable evaluation resources to assess the effectiveness of IR systems over sessions. This paper describes the TREC Session Track, which ran from 2010 through to 2014 and focused on forming test collections that included various forms of implicit feedback. We describe the test collections, give a brief analysis of the differences between the datasets over the years, and present evaluation results that demonstrate that the use of user session data significantly improved effectiveness.

U2 - 10.1145/2911451.2914675

DO - 10.1145/2911451.2914675

M3 - Conference proceeding (ISBN)

SN - 978-1-4503-4069-4

SP - 685

EP - 688

BT - Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval

ER -