This week we’ve been getting a better idea of what is expected of us over the coming semester, with various assignments big and small mounting up on our schedules. Clearly it’s very important to keep track of it all and keep individual files organized for each module. I’ve been trying to review my notes immediately after each lecture, but as the bulk of my lectures now fall on a Thursday, this is proving tricky. During dead time on the bus to and from campus, I try to read one of the recommended books. We have a lot of texts we are expected to read, particularly for Computerised Terminology. This week I got stuck into Computer-Aided Translation Technology by Lynne Bowker. A lot of key terms for more than one module on this course are popping up in that book.
Translation Technology – Dr Carlos has a bad day.
On Monday morning we were meant to have our first hands-on practice with a Translation Memory tool, specifically SDL Trados. However, to our teacher Dr Carlos Teixeira’s dismay, we were unable to open Trados on the students’ computers. In the end he simply showed us how it worked by hooking his laptop up to the projector (though he had some difficulty getting the projector to work too). I could follow his demonstration well enough (despite the fire drill that went off midway through), but obviously this kind of practical how-to knowledge is retained more easily if you do it yourself. Fortunately, over the next couple of days the problem was resolved and we were able to go back in our own time and complete the task we were meant to do in the lesson. Despite the mishaps, Dr Carlos kept his cool. I would characterize him as a serious yet very mellow dude.
memoQ Looks Cool
In Thursday’s lecture, again with Dr Carlos, we were shown a variety of available Translation Memory tools, including some that are free (the free ones didn’t look so good though). These were…
Wordfast Anywhere (free)
memoQ looked pretty cool to me, but I think we will mostly be using Trados during this course. Two features of these tools that we looked at were analysis and concordance.
The analysis feature is used to create a report of
* how many words are contained in a project
* how many words are repeated
* how much text can be leveraged (reused) from existing translation memories.
This helps give you an idea of how much the translation will cost and how long it will take to do.
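Out of curiosity, here’s roughly how I imagine that report being computed. This is just a toy Python sketch of my own – the function name, sample segments, and memory entries are all invented, and real tools like Trados also count fuzzy leverage, not just exact reuse:

```python
from collections import Counter

def analyse(segments, memory):
    """Toy version of a TM tool's analysis report (illustrative only)."""
    counts = Counter(segments)
    total_words = sum(len(s.split()) for s in segments)
    # Words in segments that occur more than once in the project
    repeated_words = sum(len(s.split()) * (n - 1) for s, n in counts.items() if n > 1)
    # Words that could be leveraged from exact matches in the memory
    leveraged_words = sum(len(s.split()) for s in counts if s in memory)
    return {"total": total_words, "repeated": repeated_words, "leveraged": leveraged_words}

memory = {"Press the power button.": "電源ボタンを押します。"}
segments = ["Press the power button.", "Wait ten seconds.", "Wait ten seconds."]
print(analyse(segments, memory))  # → {'total': 10, 'repeated': 3, 'leveraged': 4}
```

Even this crude version shows why the report matters for quoting a job: repeated and leveraged words cost far less than new ones.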
The concordance feature is used to find sub-segment matches: it searches the memory for just the highlighted words (unlike the regular TM retrieval function, which looks for entire segments). This is useful when you want to see whether a particular phrase has been translated before.
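In code terms, a concordance search is basically a substring lookup over the source side of the memory. A minimal sketch, assuming a TM stored as a plain dict (my own invented example data, not how any real tool stores its memory):

```python
def concordance(query, memory):
    """Return TM entries whose source segment contains the highlighted phrase."""
    return [(src, tgt) for src, tgt in memory.items() if query in src]

memory = {
    "Press the power button.": "電源ボタンを押します。",
    "The power supply is external.": "電源は外付けです。",
    "Close the lid.": "ふたを閉めます。",
}
# Searching for "power" returns the first two pairs, even though neither
# segment as a whole matches the query.
print(concordance("power", memory))
```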
For my own record here is some terminology I need to remember from this lesson (feel free to skip):
* Fuzzy match – The new source text and the source text in memory are similar but are not identical.
* Exact match – The new source text and the source text in memory are 100% identical.
* Full match – The text of a new source segment is the same as the source text in memory except for variable elements (e.g., numbers, dates, times, currencies).
* Context match – The new source text and the source text in memory are 100% identical, and the previous source segment also matches the one in memory.
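To get these definitions straight in my head, I sketched them as a little Python function. The fuzzy threshold, the number-masking trick, and the example sentences are my own assumptions – every tool sets its own threshold and handles variables differently:

```python
import difflib
import re

def classify(new_src, tm_src, prev_matches=False, fuzzy_threshold=0.75):
    """Rough classification of a TM hit (simplified; real tools differ)."""
    if new_src == tm_src:
        # Context match = exact match whose preceding segment also matched
        return "context match" if prev_matches else "exact match"
    # Full match: identical once variables like numbers are masked out
    mask = lambda s: re.sub(r"\d+", "#", s)
    if mask(new_src) == mask(tm_src):
        return "full match"
    ratio = difflib.SequenceMatcher(None, new_src, tm_src).ratio()
    return "fuzzy match" if ratio >= fuzzy_threshold else "no match"

print(classify("Close the lid.", "Close the lid."))                     # exact match
print(classify("Close the lid.", "Close the lid.", prev_matches=True))  # context match
print(classify("Delivery takes 5 days.", "Delivery takes 10 days."))    # full match
print(classify("Delivery takes five days.", "Delivery takes ten days."))
```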
The tasks and assignments I need to complete in this module are all quite useful, as my weak areas are speaking and writing and that is precisely what this course covers. This week we practiced writing お礼状 – a letter of thanks – and in a “peer reading” class two students prepared an article with a glossary for us and then introduced it to the class. The piece was all about a phenomenal and very determined teenage rock climber named Shiraishi Ashima. If she falls, she gets back up and has another go. “Climbing is like life.” I was impressed by the work the students put into preparing this piece, especially as they managed to prepare a video news report to supplement our reading material.
Japanese Economic Translation
Another great session with Dr Pat Cadwell – this time introducing the topic of Translation Theory and Practice.
Key points this week were:
* Translation theory is useful because it prompts self-reflection on how you yourself work.
* Functionalist theories inform much economic translation.
* Functionalist theory focuses on the text: the type of text, and how the text works in that language.
* For economics translation you should produce a text that looks at home in the target language. The reader shouldn’t realize it is a translation.
Our task during the class was to back-translate a Japanese translation of an original English article back into English and then compare our translation with the original. This was a really eye-opening exercise. Of course our translations were different from the original – but what was surprising was just how different they were. Quite a lot of content from the original English piece wasn’t even in the Japanese translation, and the headlines were entirely different. I had laboriously tried to back-translate this Japanese headline: 「中国失速でドイツの退潮鮮明、対中輸出の強さ裏目に」 into “German Economy Wanes in Reaction to Chinese Slump as Side Effect of Strong Export Trade” – which isn’t very snappy, is it? Actually the headline on the original article was “Once a source of envy, Germany’s China exports turn into a risk”. Clearly Japanese and English headline styles are so radically different that it may sometimes be better to discard the original headline and simply write a new one based on the article’s content.
Lesson learned: substituting, omitting, or adding semantic chunks in a translation is perfectly valid in certain circumstances.
A couple of useful language points that came up in this exercise:
* The つつある structure (in the process of doing) is often used in economics texts and can often be simply translated as “-ing”.
* Where a Japanese text might include phrases like という、と指摘、and との見方を示した an English text would simply read “said”.
In our Friday morning class, Pat Cadwell introduced us to the “Theoretical Basis of Terminology”. Lots of big ideas to play with in this lecture. First off we learnt about the semasiological approach to language, which basically means you start with a word and ask what it means. This can be problematic, as words can be misleading: homonyms can have multiple meanings (stalk, for example), contronyms can hold contradictory meanings (peruse, for example, can mean both to read carefully and to skim-read something), and synonyms (like earth and ground) show us that many words can be used to describe the same concept. The semasiological approach to language therefore requires context in order to properly understand the words.
Terminology tries to avoid the problems associated with the semasiological approach by means of specificity. It employs the onomasiological approach to language, which proceeds from concepts to labels. In specialized communication the labels for concepts are usually terms, and so terminology can be considered a set of concepts and their associated vocabulary (terms) within a specialized field. Labels aren’t always terms, though: they can also be gestures, symbols, and sounds. At this point the good doctor introduced the semiotic triangle of reality, thought, and language (which is used to model the relationship between objects in the real world, our conceptual apprehension of those objects, and the words we use to label those concepts) – and our brains collectively exploded.
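The onomasiological direction maps quite naturally onto how a term base is structured: one entry per concept, with each language’s labels attached to it. A toy sketch of my own – the concept ID, definition, and terms below are invented examples, not real WIPO data:

```python
# A term base is concept-oriented: you start from the concept (onomasiological)
# and ask what labels each language attaches to it, rather than starting from
# a word and asking what it means (semasiological).
termbase = {
    "C001": {  # hypothetical concept ID
        "definition": "rotating machine that converts wind energy to electricity",
        "en": ["wind turbine"],
        "ja": ["風力タービン", "風車"],
    },
}

def labels_for(concept_id, lang):
    """Onomasiological lookup: from a concept to its labels in one language."""
    return termbase[concept_id][lang]

print(labels_for("C001", "en"))  # → ['wind turbine']
print(labels_for("C001", "ja"))  # → ['風力タービン', '風車']
```

One concept can carry several synonymous terms per language, which is exactly the many-labels-per-concept situation the lecture described.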
That’s just a very brief summary of an extremely stimulating lesson, at the end of which we got our first whiff of the main assignment we will be working on this semester. The assignment is both exciting and daunting as our university is collaborating with WIPO (the World Intellectual Property Organization) on building a multilingual terminology database of specialized terms extracted from patents – and we are going to be adding to it. So over the next few days I am going to be examining patents in Japanese and looking for concepts that have not yet been registered as terms on the WIPO database. So that’ll be fun…
I guess I’d better get started.