The TREC Session track ran for the second time in 2011. The track's primary goal is to provide test collections and evaluation measures for studying information retrieval over user sessions rather than one-time queries. These test collections are meant to be portable, reusable, statistically powerful, and open to anyone who wishes to work on the problem of retrieval over sessions. The second year saw a near-complete overhaul of the track in terms of topic design, session data, and experimental evaluation. The changes are: 1. topics were formed from real user sessions with a search engine, and include queries, retrieved results, clicks, and dwell times; 2. retrieval tasks were designed to study the effect of using increasing amounts of user data on retrieval effectiveness for the mth query in a session; 3. subtopic relevance judgments were made, similar to the Web track diversity task. We believe the resulting test collection better models the interaction between system and user, though there is certainly still room for improvement. This overview is organized as follows: in Section 2 we describe the tasks participants were to perform. In Section 3 we describe the corpus, topics, and sessions that comprise the test collection. Section 4 gives some information about submitted runs. In Section 5 we describe relevance judging and evaluation measures, and Sections 6 and 7 present evaluation results and analysis. We conclude in Section 8 with some directions for the 2012 Session track.
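The experimental design described above (retrieval for the mth query given increasing amounts of prior user data) can be sketched as follows. This is a minimal illustrative model, not the official track data format: the class names, fields, and condition labels are all assumptions, chosen only to mirror the session data the abstract lists (queries, retrieved results, clicks, and dwell times).

```python
from dataclasses import dataclass, field

# Hypothetical minimal model of a session: a sketch, not the track's schema.
@dataclass
class Interaction:
    query: str
    results: list[str] = field(default_factory=list)  # ranked document ids
    clicks: list[tuple[str, float]] = field(default_factory=list)  # (doc id, dwell time in seconds)

@dataclass
class Session:
    prior: list[Interaction]   # interactions before the current query
    current_query: str         # the mth query, for which systems retrieve

def conditions(session: Session) -> dict[str, dict]:
    """Build experimental conditions that expose strictly increasing
    amounts of user data for retrieval on the current (mth) query."""
    prior_queries = [i.query for i in session.prior]
    prior_results = [i.results for i in session.prior]
    prior_clicks = [i.clicks for i in session.prior]
    return {
        "current_query_only": {
            "query": session.current_query},
        "plus_prior_queries": {
            "query": session.current_query,
            "prior_queries": prior_queries},
        "plus_ranked_results": {
            "query": session.current_query,
            "prior_queries": prior_queries,
            "results": prior_results},
        "plus_clicks_and_dwells": {
            "query": session.current_query,
            "prior_queries": prior_queries,
            "results": prior_results,
            "clicks": prior_clicks},
    }

# Example: one prior interaction, then a reformulated query.
s = Session(
    prior=[Interaction("trec session track",
                       results=["d1", "d2", "d3"],
                       clicks=[("d2", 12.5)])],
    current_query="session retrieval evaluation",
)
for name, data in conditions(s).items():
    print(name, sorted(data))
```

Comparing a system's effectiveness across these conditions isolates how much each additional layer of session history helps for the current query.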
Publication status: Published - 2011
Event: The Twentieth Text Retrieval Conference (TREC 2011) - Gaithersburg, United States
Duration: 15 Nov 2011 → 18 Nov 2011