Author: name withheld by the user; 31 May 2013, 07:22; term paper.
I wish to thank the MIFL Chancellor - Academician Emma F. Volodarskaya, President of the Russian Academy of Linguistic Sciences, for the encouragement I received in the course of my research into the subject of the present term paper. My enormous gratitude also goes to the Dean of the Translation and Interpreting Faculty B.S. Ivanov for his advice and understanding as well as to the entire staff of the MIFL Department of Foreign Languages Theory and Practice for their disinterested assistance and pertinent advice at every stage of my work.
The transfer strategy focuses on the concept of level of representation and involves three stages. The analysis stage describes the source document linguistically and uses a source language dictionary. The transfer stage transforms the results of the analysis stage and establishes the linguistic and structural equivalents between the two languages. It uses a bilingual dictionary from source language to target language. The generation stage produces a document in the target language on the basis of the linguistic data of the source language by means of a target language dictionary.
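As a rough illustration, the three stages can be sketched in a few lines of Python; the tiny dictionaries, the part-of-speech tags and the word-for-word rules below are invented for the example and bear no relation to a real transfer system:

```python
# Toy sketch of the three-stage transfer strategy (analysis -> transfer -> generation).
# All dictionaries and rules here are illustrative assumptions, not a real system.

ANALYSIS_DICT = {"the": "DET", "cat": "NOUN", "sleeps": "VERB"}   # source-language dictionary
BILINGUAL_DICT = {"the": "le", "cat": "chat", "sleeps": "dort"}   # source -> target dictionary

def analyze(sentence):
    """Analysis: describe each source word linguistically."""
    return [(word, ANALYSIS_DICT.get(word, "UNK")) for word in sentence.lower().split()]

def transfer(analyzed):
    """Transfer: map source lexemes to target equivalents, keeping the analysis."""
    return [(BILINGUAL_DICT.get(word, word), tag) for word, tag in analyzed]

def generate(transferred):
    """Generation: produce a target-language string from the transferred data."""
    return " ".join(word for word, _tag in transferred)

print(generate(transfer(analyze("The cat sleeps"))))  # -> le chat dort
```

Real systems replace each of these one-line functions with full grammatical analysis, structural transfer rules and morphological generation, but the division of labour is the same.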
The transfer strategy, developed by GETA (Groupe d'Etude pour la Traduction Automatique / Machine Translation Study Group) in Grenoble, France, led by B. Vauquois, has stimulated other research projects. Some, such as the Canadian TAUM-MÉTÉO and the American METAL, are already functioning. Others are still at the experimental stage, for example SUSY in Germany and EUROTRA, a joint European project. TAUM, an acronym for Traduction Automatique de l'Université de Montréal (University of Montreal Machine Translation), was created by the Canadian Government in 1965. It has been translating weather forecasts from English to French since 1977 and from French to English since 1989. One of the oldest effective systems in existence, TAUM-MÉTÉO carries out both a syntactic and a semantic analysis and is 80% effective because weather forecasts are linguistically restricted and clearly defined. It works with only 1,500 lexical entries, many of which are proper nouns. In short, it carries out limited repetitive tasks, translating highly specific texts with a limited vocabulary (although it uses an exhaustive dictionary) and stereotyped syntax, where there is perfect correspondence from structure to structure.
The pivot language strategy is based on the idea of creating a representation of the text independent of any particular language. This representation functions as a neutral, universal central axis that is distinct from both the source language and the target language. In theory this method reduces the machine translation process to only two stages: analysis and generation. The analysis of the source text leads to a conceptual representation, the diverse components of which are matched by the generation module to their equivalents in the target language. The research on this strategy is related to artificial intelligence and the representation of knowledge. The systems based on the idea of a pivot language do not aim at direct translation, but rather reformulate the source text from the essential information. At the present time the transfer and pivot language strategies are generating the most research in the field of machine translation. With regard to the pivot language strategy, it is worth mentioning the Dutch DLT (Distributed Language Translation) project which ran from 1985 to 1990 and which used Esperanto as a pivot language in the translation of 12 European languages.
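The two-stage idea can likewise be sketched in miniature; the concept labels and dictionaries below are invented assumptions, and a real interlingua is of course far richer than word-level concept tags:

```python
# Toy sketch of the pivot-language strategy: analysis maps source words to
# language-independent concepts; generation realizes those concepts in any
# target language. The concept inventory and dictionaries are invented.

TO_CONCEPT = {"hello": "GREETING", "world": "WORLD"}          # analysis (English -> concepts)
FROM_CONCEPT = {
    "fr": {"GREETING": "bonjour", "WORLD": "monde"},          # generation (concepts -> French)
    "es": {"GREETING": "hola", "WORLD": "mundo"},             # generation (concepts -> Spanish)
}

def analyze(text):
    """Analysis: reduce the source text to a neutral conceptual representation."""
    return [TO_CONCEPT[w] for w in text.lower().split()]

def generate(concepts, target):
    """Generation: realize the concepts in the chosen target language."""
    return " ".join(FROM_CONCEPT[target][c] for c in concepts)

concepts = analyze("Hello world")
print(generate(concepts, "fr"))  # -> bonjour monde
print(generate(concepts, "es"))  # -> hola mundo
```

Note that one analysis serves every target language, which is exactly the economy the pivot strategy promises over pairwise transfer.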
It should be repeated that unless the systems function within a rigidly defined sphere, as is the case with TAUM-MÉTÉO, machine translation in no way offers a finished product. As Christian Boitet, director of GETA (Grenoble), says in an interview given to the journal Le français dans le monde (No. 314), in which he summarizes the most important aspects of MT, machine translation allows translators to concentrate on producing a high-quality target text. Perhaps, then, "machine translation" is not an appropriate term, since the machine only completes the first stage of the process. It would be more accurate to speak of a tool that aids the translation process rather than of an independent translation system.
Translation Memories
Translation memories represent one of the most important applications of on-line bilingual texts, going back to the beginning of the 1980s with the pioneering TSS system of ALPS, later Alpnet. It was succeeded at the beginning of the 1990s by programs such as Translator Manager, Translator's Workbench, Optimizer, Déjà Vu, Trados and Eurolang, among others. In its simplest form, a translation memory is a database in which a translator stores translations for future re-use, either in the same text or in other texts. Basically the program records bilingual pairs: a source-language segment (usually a sentence) combined with a target-language segment. If an identical or similar source-language segment comes up later, the translation memory program finds the previously translated segment and automatically suggests it for the new translation. The translator is free to accept it without change, edit it to fit the current context, or reject it altogether. Most programs find not only perfect matches but also partially matching segments. This computer-assisted translation tool is most useful with repetitive, specialized texts that use non-figurative language and fixed grammatical constructions.
A translation memory can be used in two ways:
1. In interactive mode: The text to be translated is on the computer screen and the translator selects the segments one by one to translate them. After each selection the program searches its memory for identical or similar segments and produces possible translations in a separate window. The translator accepts, modifies or rejects the suggestions.
2. In automatic mode: The program automatically processes the whole source-language text and inserts into the target-language text the translations it finds in the memory. This is a more useful mode if there is a lot of repetition because it avoids treating each segment in a separate operation.
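A minimal sketch of both modes, assuming a toy memory of invented sentence pairs, might look like this:

```python
# Minimal translation-memory sketch covering the two modes described above.
# The stored sentence pairs are invented examples.

memory = {
    "Close the file.": "Fermez le fichier.",
    "Save the document.": "Enregistrez le document.",
}

def suggest(segment):
    """Interactive mode: propose a stored translation for one selected segment."""
    return memory.get(segment)  # None when the memory holds no match

def pretranslate(segments):
    """Automatic mode: insert every available translation in one pass."""
    return [memory.get(s, s) for s in segments]  # unmatched segments pass through

print(suggest("Close the file."))                        # -> Fermez le fichier.
print(pretranslate(["Save the document.", "New text."]))
```

In the interactive case the translator reviews each suggestion one by one; in the automatic case the whole document is pre-translated and only the unmatched or unsuitable segments need attention afterwards.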
A translation memory program is normally made up of the following elements:
a. A translation editor, which protects the target text format.
b. A text segment localizer.
c. A terminological tool for dictionary management.
d. An automatic system of analysis for new texts.
e. A statistical tool that indicates the number of words translated and to be translated, the language, etc.
Thus translation memory programs are based on the accumulation and storing of knowledge that is recycled according to need, automating the use of terminology and access to dictionaries. When translation tasks are repeated, memories save the translator valuable time and even physical effort: for example, keyboard use can be reduced by as much as 70% with some texts. Memories also simplify project management and team translation by ensuring consistency. However, translation memories can only deal with a text simplistically in terms of linguistic segments; they cannot, unlike the human translator, have a vision of the text as a whole with regard to ideas and concepts or overall message. A human translator may choose to rearrange or redistribute the information in the source text because the target language and culture demand a different content relationship to create coherence or facilitate comprehension. Another disadvantage of memories is that training time is essential for efficient use, and even then it takes time to build up an extensive database, i.e. they are not immediate time-savers straight out of the box. Finally, it should be stressed that translation memory programs are designed to increase the quality and efficiency of the translation process, particularly with regard to specialized texts with non-figurative language and fixed grammatical constructions, but they are not designed to replace the human translator.
2.2.4 Information search and retrieval
The key processes of a translation memory tool include text segmentation, text alignment, indexing, search and match retrieval. Generally, the segmentation and alignment processes are carried out before the text is indexed. In the segmentation process, source and target sentences are divided into translation units. Usually the basic unit is a sentence, although it may also be a paragraph, a phrase, a sentence fragment or a word, and in some systems the user can define what the unit will be. Once segmentation is completed, the text segments (pairs of translation units) are aligned: each source segment is paired with its corresponding target segment. The translation units are then indexed and stored in the database along with other useful attributes such as document creation date, author, client, project ID, domain, etc. This approach is used in popular commercial systems including TRADOS, DLX, STAR Transit and Déjà Vu, among others.
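Under simplifying assumptions (naive sentence splitting on full stops, strictly one-to-one alignment, invented metadata fields), the three preparatory steps can be sketched as:

```python
# Sketch of segmentation, alignment and indexing as described above.
# Splitting on '.' and the metadata fields are simplifying assumptions;
# real tools use configurable segmentation rules and robust aligners.

import datetime

def segment(text):
    """Split a text into sentence-level translation units."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def align(source_text, target_text):
    """Pair source and target units one-to-one."""
    return list(zip(segment(source_text), segment(target_text)))

def index(pairs, author="unknown", domain="general"):
    """Store aligned units with useful attributes in a simple in-memory database."""
    created = datetime.date.today().isoformat()
    return [
        {"source": src, "target": tgt, "author": author,
         "domain": domain, "created": created}
        for src, tgt in pairs
    ]

pairs = align("Open the file. Close it.", "Ouvrez le fichier. Fermez-le.")
db = index(pairs, author="A. Translator", domain="software")
print(db[0]["source"], "->", db[0]["target"])  # -> Open the file. -> Ouvrez le fichier.
```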
The core element of every translation memory system is the match retrieval process. The main task of a translation memory is to match text fragments in the current document to those stored in the system's database and retrieve the suitable translated text fragments (see Figure 1) from it. In the search process there may be situations where exact matches are very rare; in such situations the system must also find text fragments (phrases, sentences, words) similar to the current one. The search process can therefore return either an exact or a fuzzy (proximity) match. In exact matching, the system pairs text segments in the source document that match the original source text exactly; any text in the document that does not exactly match the original is left untranslated. Fuzzy matching, however, finds segments that are very similar to the original text segment and suggests the original translation.
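Exact and fuzzy retrieval can be illustrated with the similarity ratio from Python's standard difflib module; the stored pairs and the 0.75 threshold are invented for the example, and commercial systems use their own matching algorithms:

```python
# Sketch of exact vs. fuzzy match retrieval. The memory contents and the
# 0.75 similarity threshold are illustrative assumptions.

from difflib import SequenceMatcher

memory = {
    "Press the red button.": "Appuyez sur le bouton rouge.",
    "Restart the computer.": "Redémarrez l'ordinateur.",
}

def retrieve(segment, threshold=0.75):
    """Return (match_type, stored_source, stored_translation) for a segment."""
    if segment in memory:                           # exact match
        return ("exact", segment, memory[segment])
    best, score = None, 0.0
    for stored in memory:                           # fuzzy (proximity) match
        s = SequenceMatcher(None, segment, stored).ratio()
        if s > score:
            best, score = stored, s
    if best is not None and score >= threshold:
        return ("fuzzy", best, memory[best])
    return ("none", None, None)

print(retrieve("Press the red button."))    # exact match
print(retrieve("Press the green button."))  # fuzzy: a similar stored segment is suggested
```

The fuzzy suggestion still needs the translator's attention, which is exactly the post-editing point made below.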
Though fuzzy matching is useful, in certain situations the translations it retrieves can be inappropriate in context. Manual post-editing by the translator therefore becomes essential. Once the search process is over for each text fragment, the system provides the corresponding translation stored in the database. In order to optimize this search and retrieval process, translation memory systems usually store and index previously translated content in an organized way. This enables the best-matching translation units to be retrieved within a very short time.
The efficiency of a system depends on retrieving all available exact or fuzzy matches for a source segment (match recall) and on the accuracy in context of the matches retrieved (match precision).
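Treated as the usual retrieval measures, the two figures can be computed directly; the segment sets below are invented:

```python
# Match recall and match precision computed as standard retrieval measures
# over invented sets of segment identifiers.

def recall(retrieved, relevant):
    """Share of all suitable matches in the memory that were actually retrieved."""
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved, relevant):
    """Share of retrieved matches that are actually suitable in context."""
    return len(retrieved & relevant) / len(retrieved)

retrieved = {"seg1", "seg2", "seg3"}   # matches the system returned
relevant = {"seg1", "seg2", "seg4"}    # matches that were actually suitable
print(recall(retrieved, relevant), precision(retrieved, relevant))  # both equal 2/3 here
```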
Two main matching techniques have been reported in translation memory systems. The first, the "character-string based method", tries to recognize matches not only at segment level but also in sub-parts of segments. The second, known as "linguistically enhanced matching", makes use of natural language processing techniques and resources to analyze sentences and to identify and separate different syntactic and semantic text chunks.
Furthermore, some systems employ part-of-speech tagging and other grammatical annotations along with advanced algorithms, such as statistical or neural network based approaches, to pair the best translation units. The latter technique has been employed more recently and is reported to produce improved results in terms of both precision and recall. However, a crucial disadvantage of systems that use the linguistically enhanced matching technique is that they are language-dependent.
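The character-string based method is often grounded in an edit-distance measure such as Levenshtein distance; a classic dynamic-programming sketch:

```python
# Levenshtein distance: the minimum number of single-character insertions,
# deletions and substitutions turning string a into string b. Character-string
# based matchers can rank candidate segments by such a distance.

def levenshtein(a, b):
    """Compute the edit distance between a and b with a rolling DP row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

print(levenshtein("translation", "translator"))  # -> 2
```

Because the comparison operates on raw character strings, this method needs no linguistic resources at all, which is precisely why it avoids the language-dependence problem noted above.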
After the search and retrieval process, the translator need only edit the remaining text segments to reflect the differences between the source and target texts, without retranslating the entire document.
Conclusion
It has long been a subject of discussion whether machine translation and computer-assisted translation could convert translators into mere editors, making them less important than the computer programs. The fear of this happening has led to a certain rejection of the new technologies on the part of translators, not only because of a possible loss of work and professional prestige, but also because of concern about a decline in the quality of production.
The computer provides access to a huge stock of both linguistic and encyclopedic information and allows this information to be processed much more quickly, helping to produce a higher-quality product than before. Translation has always been considered a mainly creative activity. Previously, however, a translator had to spend far more time and effort on purely mechanical operations: obtaining the original, looking through a great number of books, printing the target text, searching for and correcting mistakes and typographical errors, and then reprinting. Thanks to new communication and information technologies this work is now minimized. As a result, the translator can pay more attention to the process of translation itself, which makes the final product better.
There is some debate as to whether the new technologies have created a new profession. Some translators totally reject machine translation because they associate it with the view that computer-assisted translation is merely one more marketable product based on a calculation of investment versus profit. They define translation as an art that possesses its own aesthetic criteria, which have nothing to do with profit and loss but are related to creativity and the power of the imagination. It should be noted that the application of electronic technologies is not in itself a panacea for all the troubles and problems of translation. Highly efficient software and the latest electronic tools are not capable of completely replacing the translator, as they still cannot guarantee a high-quality translation. Their purpose is to accelerate and facilitate the translation process, helping with the doubts, questions and unclear points that arise in the course of the work, and to minimize the effort and time spent translating. The translator remains indispensable above all in specific kinds of translation, such as literary texts, where polysemy, connotation and style play a crucial role. Even with other kinds of texts, our analysis of the roles and capabilities of both MT and CAT shows that neither is efficient and accurate enough to eliminate the need for human translators. In fact, so-called machine translation would be more accurately described as computer-assisted translation too.
The most powerful electronic resource is a specialized corpus. This virtual "native speaker" answers the translator's questions about the choice of the correct syntactic construction, of phrases with prepositional government, of prepositive attributes and so forth.
The computer offers the translator a more or less wide range of translation variants, but the final choice always remains with the person.
The improvement of existing commercial programs and the emergence of new technologies allow the translator gradually to hand over more and more mechanical functions to the computer software.
We share the conviction that translation has not become a new profession, but the changes are here to stay and will continue to evolve. Translators need to accept the new technologies and learn how to use them to their maximum potential as a means to increased productivity and quality improvement.
In conclusion, we would like to note that we did not intend to consider all the CAT programs available for solving linguistic and other problems. Some resources and tools demand separate consideration, for example such difficult and ambiguous questions as the use of translation memory programs, and modern approaches to marketing, work procurement and business management in translation. These are the subject of our future research in this area.
References
[1] Шевчук В.Н. Электронные ресурсы переводчика: Справочные материалы для начинающего переводчика. — М.: Либрайт, 2010. — 136 с. ISBN 978-5-91870-002-0. [Shevchuk, V.N. The Translator's Electronic Resources: Reference Materials for the Beginning Translator. Moscow: Librait, 2010. 136 pp.]
[2] Encyclopedia of Translation Studies / Ed. M. Baker. — London: Routledge, 2004. — 654 p.
[3] Bowker, Lynne. Computer-Aided Translation Technology: A Practical Introduction. - University of Ottawa Press, 2002. - 185 p.
[4] Kenny, D. Teaching Machine Translation and Translation Technology: a Contrastive Study / D. Kenny, A. Way. Dublin, 2001.
[5] An Introduction to CAT Tools (Translation Memory). - Keypot corporation.
[6] Sparano, M. de. Computer Assisted Translation: Course Syllabus. — Monterey Institute of International Studies, 2006.
[7] Bowker, L. and Pearson, J. (2002). Working with Specialized Language: A Practical Guide to Using Corpora. — London: Routledge.
[8] Brogueira, J. (2004). Beyond translation competence: merging technological skills with translation skills in translation practice classes.
[9] Fulford, H. and Granell-Zafra, J. (2005). Translation and Technology: a Study of UK Freelance Translators. — The Journal of Specialized Translation. Issue 4.
[10] Menagarishvili, O. & Coggin, W.O. Electronic resources and aids for writing, editing and translation. — STC's 50th Conference Proceedings.
[11] Newmark, P. (1981). Approaches to Translation. London: Pergamon.
[12] Olohan, M. (2004). Introducing Corpora in Translation Studies. London: Routledge.
[13] Pym, A. et al. (eds). (2003). Innovation and E-Learning in Translator Education. Tarragona: Intercultural Studies Group of Universitat Rovira i Virgili.
[14] Somers, H. (ed). (2003). Computers and Translation. — Philadelphia: John Benjamins.
[15] Varantola, K. (2003). Translators and Disposable Corpora. — In: Federico Zanettin, Silvia Bernardini and Dominic Stewart (eds.) Corpora in Translator Education. — Manchester: St. Jerome.
[16] Wilkinson, M. (2005). Using a Specialized Corpus to Improve Translation Quality. — In: Translation Journal, Vol. 9, No. 3.
[17] Wilkinson, M. (2006). Compiling Corpora for Use as Translation Resources. — In: Translation Journal, Vol. 10, No. 1.
[18] Zanettin, F. (1998). Bilingual Comparable Corpora and the Training of Translators. — In: Meta 43(4).
[19] John Hadfield's articles http://www.bues.ru/articles/
[20] Wikipedia. Competence (human resources). http://en.wikipedia.org/wiki/
[21] Language Outreach by the United Nations. http://www.unlanguage.org/
[22] Göpferich, S. and Jääskeläinen, R. (2009). 'Process research into the development of translation competence: where are we, and where do we need to go?' — Across Languages and Cultures 10 (2), pp. 169–191. DOI: 10.1556/Acr.10.2009.2.1
[23] Witalisz, A. (Pedagogical University of Cracow). 'Linguistic Globalization as a Reflection of Cultural Changes'. — Proceedings of the 19th Annual Conference of the Global Awareness Society International, May 2011, Jagiellonian University, Krakow, Poland.
[24] Oxford Dictionary of English (3rd Edition © Oxford University Press 2010)
About this work: Using computer software for enhancing professional competence of a translator.