
A Decade of Automatic Content Evaluation of News Summaries: Reassessing the State of the Art

Peter A. Rankel, John M. Conroy, Hoa Trang Dang and Ani Nenkova

The 51st Annual Meeting of the Association for Computational Linguistics - Short Papers (ACL Short Papers 2013)
Sofia, Bulgaria, August 4-9, 2013


Abstract

How good are automatic content metrics for news summary evaluation? Here we provide a detailed answer to this question, with a particular focus on assessing the ability of automatic evaluations to identify statistically significant differences present in manual evaluation of content. Using four years of TAC data, we analyze the performance of eight ROUGE variants in terms of accuracy, precision and recall in finding significantly different systems. Our experiments show that some of the neglected variants of ROUGE, based on higher order n-grams and syntactic dependencies, are most accurate across the years; the commonly used R-1 scores find too many significant differences that manual evaluation does not support. We also test combinations of ROUGE variants and find that they considerably improve the accuracy of automatic prediction.
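To make the evaluation protocol described above concrete, the sketch below (not the authors' code) shows one way to score an automatic metric's significance decisions against manual evaluation: for every pair of systems, test whether their per-topic scores differ significantly under both metrics, then compute accuracy, precision and recall of the ROUGE decisions with the manual decisions as gold. The data layout and the choice of the Wilcoxon signed-rank test are assumptions for illustration.

    # Minimal sketch, assuming per-topic scores for each system under both
    # a manual metric and a ROUGE variant.
    from itertools import combinations
    from scipy.stats import wilcoxon

    def significant_pairs(scores, alpha=0.05):
        """scores: dict mapping system name -> list of per-topic scores.
        Returns the set of system pairs whose difference is significant."""
        sig = set()
        for a, b in combinations(sorted(scores), 2):
            # Paired test over the topics shared by both systems.
            _, p = wilcoxon(scores[a], scores[b])
            if p < alpha:
                sig.add((a, b))
        return sig

    def agreement(manual_scores, rouge_scores):
        """Accuracy, precision and recall of the ROUGE significance
        decisions, taking the manual decisions as the gold standard."""
        all_pairs = set(combinations(sorted(manual_scores), 2))
        gold = significant_pairs(manual_scores)
        pred = significant_pairs(rouge_scores)
        tp = len(gold & pred)
        accuracy = (tp + len(all_pairs - gold - pred)) / len(all_pairs)
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        return accuracy, precision, recall

Under this framing, a metric that declares too many pairs significant (as the abstract reports for R-1) will show low precision even if its recall is high.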

