Towards an evaluation methodology for machine translation output

Publisher

University of Ottawa (Canada)

Abstract

The exponential increase in communications in many different languages brought about by globalization has resulted in a corresponding demand for translation to be done more quickly, but without sacrificing quality. However, there are currently not enough qualified human translators to keep up with the demand. One way in which translators are trying to cope is by turning to technology, including machine translation (MT) systems. MT has the advantage of being able to produce a large volume of translation in a very short time, but it does not always produce high-quality translations. For this reason, MT cannot replace human translation, but it can make translators' work easier by producing rough drafts. Since there are currently many MT systems on the market, there is a real need for an evaluation methodology for translators to help them choose a system that will best meet their needs. As yet, no universally accepted evaluation methodology exists. The objective of this thesis is to develop and test an evaluation methodology that average translators can use to compare off-the-shelf MT systems and select the appropriate one. (Abstract shortened by UMI.)

Citation

Source: Masters Abstracts International, Volume: 43-06, page: 1925.
