Description

In this project, my team and I have explored how different levels of linguistic representation can improve machine translation evaluation. We have combined discourse trees, semantics, and syntax to improve on the state of the art, using both structured (tree-based) and distributed (vector-based) representations. Currently, we are studying how humans evaluate translations, using eye-tracking.

Achievements

  • August 2014: Best metric at the WMT2014 metrics task.

Related Publications

How do Humans Evaluate Machine Translation
Francisco Guzmán, Ahmed Abdelali, Irina Temnikova, Hassan Sajjad, and Stephan Vogel. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 457-466, 2015.
Pairwise Neural Machine Translation Evaluation
Francisco Guzmán, Shafiq Joty, Lluís Màrquez, and Preslav Nakov. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP'15), pages 805-814, 2015.
DiscoTK: Using Discourse Structure for Machine Translation Evaluation
Shafiq Joty, Francisco Guzmán, Lluís Màrquez, and Preslav Nakov. In Proceedings of the Ninth Workshop on Statistical Machine Translation (WMT'14), pages 402-408, 2014.
Learning to Differentiate Better from Worse Translations
Francisco Guzmán, Shafiq Joty, Lluís Màrquez, Alessandro Moschitti, Preslav Nakov, and Massimo Nicosia. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 214-220, 2014.
Using Discourse Structure Improves Machine Translation Evaluation
Francisco Guzmán, Shafiq Joty, Lluís Màrquez, and Preslav Nakov. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14), pages 687-698, 2014.