Translation Journal
Volume 5, No. 1
January 2001

Computers


From Novelty to Ubiquity:
Computers and Translation at the Close of the Industrial Age

by Celia Rico Pérez

When the first computer was created, we were in search of that age-old dream of mankind, the one Aristotle had more than 2,000 years ago when he foresaw our complete liberation from unworthy and abject tasks once man-made tools could perform them on command or by anticipation, when looms could weave by themselves without the help of human hands. Then we would be free to devote our time to the true activities of citizens and to the search for knowledge and the wisdom it brings (Ifrah 1997: 1474).

It may certainly seem that for centuries our aspiration has been to free ourselves from the burden and routine of our jobs with a view to dedicating our time to responsibilities grand or small, while somebody or something else accomplishes our daily work. In fact, if we briefly examine the history of computers we will realize how, in a rather short period of time, mankind has achieved considerable technological progress, which has contributed to some extent toward making this dream come true.


1. A brief history of computers

The advent of computers is best viewed as the convergence of a number of separate steps: the combined effort of certain exceptional minds and the ongoing endeavor of a long list of scholars, philosophers, visionaries, inventors, engineers, mathematicians, physicists, and technicians from all over the world throughout history.

Computers came into being around 1945, when two physicists and a mathematician from the University of Pennsylvania created ENIAC (Electronic Numerical Integrator and Computer), "the first fully electronic and operational calculator of history" (Ifrah: 1570). It was big, weighed 30 tons and had innumerable switches, lamps, pipes, and resistors. It used punch cards, needed fastidious manipulations to operate, and a considerable amount of time was used up feeding it with data and working the switches. The result was frequent errors, failures and miscalculations. Predictably, the life of the first generation of computers would be short. Not only were they unwieldy and unreliable, but they had limited storage capacity and consumed a tremendous amount of energy, even in performing minor tasks. Around 1960, the transistor and the integrated circuit definitively sent the first computers to the museums. We then entered the Age of the Dinosaurs (Rochlin, 1997: 18-20), the age of mainframe computers, faster and more reliable than their predecessors but still too big, expensive and complex to make any difference in our daily lives. That was the time when punch cards and magnetic tape were used to store information, when computers operated in very restricted environments.

The computer revolution definitely started with the microprocessor, "an integrated circuit that put the essential of a computer on a single chip, fully programmable despite its small size" (21). The microcomputer was born and would soon take the form of the PC by IBM and the Macintosh by Apple. "Novelty" became "ubiquity" with new designs for video displays, smaller disks with greater storage capacity, new programming languages, faster processors, increasingly sophisticated software, and more user-friendly operating systems.

The ultimate step in this brief history so far is the Internet. Conceived in the late 1950s by the Defense Advanced Research Projects Agency (DARPA) to provide both interconnectivity to relatively independent computing centers and a communication system that could survive a nuclear war, the Internet was soon to become the means and medium for a worldwide virtual community. New standards and protocols for transmitting information, together with hypertext, browsers and interfaces, have eventually taken computers from a multitude of isolated machines spread all over the world to a single international community of interconnected computers.


2. Translation: the first steps toward automation

The advent of the first computers gave rise to the idea of using them to translate. In 1949, Warren Weaver, an official of the Rockefeller Foundation, suggested using cryptographic techniques, statistics and universals of language to mechanize the translation process. Punch cards had already been used to perform word-for-word translation of abstracts of scientific articles; in 1951 the first research group dedicated to machine translation was set up in the USA, and in 1954 the first public demonstration of a machine translation system was given. The system translated 49 pre-selected sentences from Russian into English, using a vocabulary of 250 words and only six grammatical rules (Hutchins, 1992: 6). At that time, the goal of machine translation (MT) was "the creation of fully automatic high-quality translation systems producing results indistinguishable from those of human translators" (7). The basic idea behind the research was that translation by machines was viable as long as it was conceived as a sort of message decoding, which meant translating word for word, a strategy we now know seldom leads to success.

As a result, disillusion spread among researchers as they realized the complexity of language, the problems of language automation and the semantic and idiomatic barriers of language, not to mention ambiguity. In 1966 the ALPAC (Automatic Language Processing Advisory Committee) published its now famous report, concluding that machine translation was slower, less precise and more expensive than human translation, and stating that there were no immediate or predictable prospects of making machine translation useful. As a result, funding for this type of project was cut, and it was recommended that useful tools to assist human translation be developed.

Nevertheless, the dream of using computers in translation was to remain latent among the scientific community—only natural if we remember that in the 17th century the idea of using mechanical dictionaries to overcome the language barrier had already been suggested by scientists who speculated that such dictionaries could be created from universal numeric codes. In this context, it comes as no surprise that, despite the ALPAC report, research continued, albeit slowly and outside the USA. This research would give birth to systems such as Météo (used in Canada to translate weather reports); Systran, currently used by the European Commission to translate hundreds of thousands of pages a year; and the Eurotra project, which during the 1980s consolidated the efforts of many research teams throughout Europe for the purpose of creating a machine translation system that could translate all languages of the European Community into each other.

It is important to mention that, although in the beginning MT may not have taken the right course, it certainly led to truly important advances that had a major impact on the development of automatic dictionaries, parsing techniques, and linguistic theories. In any case, even after MT moved from academic labs to competitive commercial fields, it continued to cause controversy. Some view it with skepticism and refuse to consider its final product a real translation, while for others, MT is a useful tool, generating appropriate translation in certain contexts and addressing specific needs.

In my opinion, and as we shall see later, a more practical and unsentimental view of translation and technology is to conceive of MT as an effective instrument already in use in many companies and international organizations such as Caterpillar, CompuServe, Ericsson Language Services, Rank Xerox or the European Commission translation service (Lockwood et al., 1995: 191-240). We should bear in mind that MT is an attempt to use computers to simulate a non-numerical human activity; that it was never intended to eliminate human participation; and that chance and accident are not among the aspects to be considered when choosing an MT system (Rico Pérez, 1998a).


3. Can computers think?

In the early days, when science still had that aura of romanticism, the computer was conceived as an 'electronic mind,' a 'thinking machine' (Ifrah: 1679). This idea sprang from a mechanical notion of the human being, resulting in an oversimplified misconception of computers: those machines with an inductive and creative mind, capable of taking initiative, to which human beings could confide all their problems and obtain instant solutions in return. The direct outcome of this idea is a number of inaccurate beliefs about computers that persist today. In this sense, it would be appropriate, before analyzing the role technology plays in the translation profession, to examine in detail the possibilities and limitations of computers when dealing with language.

The skills activated by a human translator during the translation process include not only the necessary expertise in the source and target languages with their particular worlds and cultures, but also a certain sensibility, a curiosity toward what the use of language encompasses, a logical capacity and the ability to make decisions when confronted with one of the many traps language presents. A translator pursues a discourse adequate to the communicative situation, addressing the expectations of all players in that situation. These are but a small part of the whole range of capabilities a professional translator learns to exercise in each and every piece of translation, capabilities that are developed through constant work and personal interest.

If we now consider the idea of applying these capabilities to the mechanization of translation, we realize that we are asking the computer to decide when something 'makes sense,' to analyze a person's intentions and objectives hidden in a piece of writing, and to produce intelligent thoughts. We are trying to automate purely human processes and simulate them in a computer; how is a computer going to tackle the infinite complexity of language?

Computers are nothing more than machines good at tackling problems of an algorithmic nature, i.e., problems that can be formalized as a finite series of elementary steps, controlled by a uniform and precise description, which allows them to complete certain operations in a strict sequence (Ifrah: 1616). Computers, despite the picture some books or films insist on drawing, do not have a sense of right and wrong and cannot think, feel or judge. Obviously, we cannot deny some type of intelligence to computers, a limited one when compared to the infinite complexity of the human brain, but we must recognize their superiority when it comes to the technical capacity to process data. In fact, it is thanks to this processing ability that computers are useful for translators, with tools such as electronic dictionaries, translation memories, the Internet or MT in restricted domains. However, computers do not think rationally, do not invent, have no imagination, do not produce abstract thoughts, do not feel, do not have the ability to create or manage abstract ideas expressed in words, do not judge, do not adapt themselves to different situations, and do not know how to solve new problems. Computers are not aware of their own existence or of the outside world, and are unable to perform voluntary or creative activities. Computers are machines that are good at performing routine tasks, and translation is anything but routine; it is not a problem of an algorithmic nature.


4. The new world of translation

At this point, we should pause for a moment and examine the role translators and translations are being assigned in the present wave of digital fascination. As people get ready to dive into the new global world of technology, will translators be swept up by a revolution which is already inventing so-called "real-time translation"? Or, on the contrary, will it be their task to contribute to this globalization process, with the help of new tools, in search of the speed imposed by the new social order? If that is the case, how are they going to accomplish this task?

It is now widely accepted that technology is having a strong impact on all areas of society, with consequences for some similar to those of the Industrial Revolution. Indeed, entire areas of economic activity, finance, trade, research, education and leisure are being profoundly transformed by the explosion of electronic networks, digital technology and multimedia; even our private lives are being transformed. The combination of technologies has given rise to new products and services, to new modes of work, in an ongoing process of transformation affecting society worldwide. We are witnessing the era of information superhighways, of data transmission at the speed of light, of communication satellites, of the general use of computers in all sectors of manufacturing and services, of the miniaturization of computers and their linkage to worldwide networks.

All industrial and professional sectors are experiencing this transformation in the way they interact with each other and with clients and suppliers, and in the way they do business. It affects work organization, industrial processes, and the way markets are conceived. The idea of the world as a global entity is emerging as a result of these transformations, and globalization has become the buzzword of this turn of the century. It influences industry, economics, society and politics alike, in a wave of decentralization which tends to eliminate barriers in both time and space, creating a new labor force distributed over the world, working from remote locations with their computers, printers, faxes, phones and modems in the interconnected global office. The sense of time and place has given way to the concept of here and now. The immediate consequence is that modern companies are focusing on innovation and creativity, looking for new products and services, new models and new strategies with one question in mind: can we respond quickly to the new requirements of the market? According to Dyson (1997), it is a process of continuous innovation in the pursuit of maximum competitive advantage through speed, teamwork and real-time response, sometimes even neglecting labor-intensive attention to detail.

Two key factors directly related to technology thus influence translation: the growth of the market and the need for a quick response. Accordingly, translation, as both an industrial process and product, a service business some might say (Nogueira, 1998), should respond to the new requirements and revise its modes of operation. In this sense, and as Sager points out (1998: 298-299), a series of indicators show that a change is already taking place; these indicators include an increasing volume of translation, the creation of big translation companies to manage these huge volumes of information, and a need to hire huge teams of translators to produce and translate the information. Together with these, the translation market is opening to software localization, multilingual publishing, electronic composition, post-editing and desktop publishing (DTP), among others. The direct consequences in this scenario are, first, that an individual translator cannot carry out the task of managing an entire project alone in a reasonable amount of time unless he or she works in a team; second, that this team needs to automate as many parts of the process as possible if it is to provide a quick response to the client; and, finally, that translators need to adapt themselves to this new environment and learn new skills.

In order to meet the new challenge, many translation companies have set up an interconnected office, where a team of worldwide professionals is connected using the latest technologies. This new arrangement represents a unique concept of translation whereby the market's growing demand prompts us to take a particular direction and adopt certain skills in new fields so that we can follow the trend toward eliminating the barriers imposed by time and space, toward translation as an industrial process, toward automation and toward the global team. Naturally, there is always a danger that the translator's work may remain hidden among the computers and the global team. However, even when machines seem to be intruding on our privacy, we cannot deny the urgent need to adapt translation to the new modes of communication and work.


5. Three cases in point

When time is of the essence, when commissioning a translation is based on trade-offs between delivery time, quality and price, and when a translation job "goes to the lowest bidder among those who offer the fastest turnaround" (Nogueira, 1998), computerization and team work may constitute a useful solution. The new market has become so demanding that traditional translation skills are no longer sufficient.

Computers allow for fast, efficient communication with our colleagues in any part of the world, help us to cut the time we spend doing research and consulting dictionaries, and even translating, in an effort to adapt our work to our new clients' needs. The key is the cost-effective use of computerized translation tools, since translation, as an industrial process, has its own tools, some more sophisticated than others, ranging from word processors and electronic dictionaries to translation memory and machine translation. Ideally, a translator workstation would consist, as Melby (1994) suggests, of the latest hardware, operating system and user environment, as well as application software, which should include, at a minimum, a word processor, a telecommunications package and an accounting system. Workstation sophistication starts at level one, with a translation job being handed in on paper and software consisting of a word processor, spell-checker, thesaurus and a terminology database, and goes up to level three, where machine translation is introduced. Level two, according to Melby's classification, is an intermediate stage where the source text is received in electronic form and software is used for text analysis (formatting codes and structure), indexing (automatic search of word occurrences), automatic dictionary look-up and bilingual text retrieval.

In any case, no matter how we organize the different steps toward translation automation, translation can definitely benefit from technology both as a process and as a product. From the very moment the translation is commissioned and work is planned, computers come into play with such now widely used tools as fax machines and e-mail. As work progresses from the first approach to the text and background research to the actual translation, editing, proofreading and billing, computers are involved at every step of the way in the form of electronic dictionaries (CD-ROM or on-line), terminological databases, the Internet, word-processing packages and DTP tools.

To illustrate how the translation world is changing, I will now review how the use of the new technologies has led to the emergence of new markets, new tools and new paths into research models.


5.1. Case One. New markets: software localization

In recent years, localization has emerged as a new, revolutionary market for translation. With the arrival of computers in all sectors of society, the need for translation of software, including user's manuals, menus, graphics and online help, has prompted the development of a new market. This new field involves not only translation in the traditional sense of dealing with written text, but also adaptation to local markets—hence the term localization—which means readjusting graphics or text labels in menus and boxes, adapting software to language-specific sets of characters, interface design, or using appropriate colors and symbols, to name only a few of the aspects involved.

Localization, defined as the process of adapting a product to the peculiarities of a language community—linguistic, terminological, cultural, or legal—is only the final step in a more far-reaching trend: globalization. This worldwide phenomenon mentioned earlier, in which the world is viewed as a single entity in economics, politics and society, is affecting translation as well. The industry conceives its products from a broader, international perspective which eventually entails reaching the final clients before any competitors do. Globalization thus involves launching a product simultaneously worldwide, with a specific product for each and every language community. In the best of worlds, the entire process of globalization, including product localization, would be carried out by the company manufacturing this product. However, since translation is still seen as the last step in the production chain (with the resulting tradeoff between time and quality), industry tends to outsource this part of the process. External translation companies are in charge of preparing the appropriate documentation—which may involve translating technical documentation or correspondence, editing and publishing it, or even preparing other materials for product packing and distribution. These companies may also be responsible for adapting the product to its final market, for making sure legal standards are met and quality conforms to local customers' requirements, for evaluating the potential success of the product in a particular market, or for compiling terminological glossaries to adapt the product to local consumers.

This trend complicates the work of individual translators, who now form part of a team. Teams are usually distributed across the globe and consist of a variety of members ranging from the project manager, translator, localization specialist, proofreader, QA specialist, localization engineer, testing engineer and desktop publisher to the software developer and web site designer (Esselink, 1998: 6). Naturally, the localization process involves knowledge and skills previously unheard of in the translation profession, and has raised new issues that need to be addressed, new procedures and new tools. Translators need to be familiar with double-byte character sets, multimedia localization, operating systems, and different platforms, such as Microsoft Windows or Apple Mac OS. They must know the meaning of every option or command they translate in the form of dialog boxes, menus, strings containing error messages, status messages or user prompts. Skilled professionals will translate documentation files, compiled software files and software resource files containing all user interface information for an application, such as menu items, dialog box titles and options, error messages and strings; these they will not only translate but also compile to check whether they still run as intended.

It goes without saying that translators are not without support in this industrial process, for tools are constantly being developed to help them maintain consistency of use in the target operating environment, respect length restrictions in menus or reuse (leverage) previously localized material. These tools offer a solution to the problem of coping with language embedded among lines of code, interfaces or graphics and assist translators in their job—translating—regardless of format or platform. They also give support for linguistic and functional software testing, for customizing features in the application of country-specific standards (date, time, number, and currency formats), communication protocols or default paper sizes (Esselink: 2). There is only one drawback: the need for some expertise in operating the tools.
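
To make these supports a little more concrete, the following minimal sketch, written in Python with invented resource identifiers, an invented glossary and arbitrary length limits, illustrates two of the services such tools typically offer: reusing ("leveraging") strings localized for an earlier version of the product and flagging translations that no longer fit the space reserved for them in the interface. It is an illustration of the idea only, not a description of any particular localization product.

    # Toy sketch: leverage previously localized strings and check length limits.
    # All identifiers, strings and limits below are invented for illustration.
    previously_localized = {            # material from an earlier product version
        "IDS_FILE_OPEN": "Abrir archivo",
        "IDS_FILE_SAVE": "Guardar archivo",
    }
    new_resources = {                   # strings extracted from the new build
        "IDS_FILE_OPEN": "Open file",
        "IDS_FILE_SAVE": "Save file",
        "IDS_FILE_EXPORT": "Export file",
    }
    max_lengths = {"IDS_FILE_SAVE": 12}   # e.g. a narrow menu entry

    for key, english in new_resources.items():
        translation = previously_localized.get(key)   # reuse where possible
        status = "leveraged" if translation else "new; needs translation"
        limit = max_lengths.get(key)
        if translation and limit and len(translation) > limit:
            status = "leveraged but too long for the interface"
        print(f"{key}: {english!r} -> {translation!r} [{status}]")

In a real localization tool the source strings would, of course, be extracted automatically from the software and resource files mentioned above, and the reused material would come from a translation memory rather than from a hand-written table.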


5.2. Case Two. New tools: translation memories

Imagine the following scenario: a translator sits at her desk with a translation assignment from a client for whom she's been translating for some time. Presumably, much of the text will contain terms, names, and expressions already translated. Furthermore, her client would like the target version to be consistent with previously produced texts. Our translator, who is computer literate and works with a word processor, may have a file containing the appropriate client-specific glossaries, and may even keep a record of all former translations with particular problems properly identified. However, our translator may not have a tool that helps her manage this information, a tool that even allows her "to store translated sentences in a special database for re-use or shared use on a network," a tool that looks for matches (words, expressions or even sentences) between the text to be translated and those stored in the database and, whenever a match is found, "proposes the ready-made translation in the target language" (Esselink: 134). This tool is translation memory (TM).

TM is not a machine translation system (MT), although some experiments are currently being conducted to combine the two so that, while the TM stores pairs of translated sentences for later use by the human translator, the machine translation system may offer some automatically generated translations in restricted domains and contexts. In neither case are they to be conceived as replacements for a human translator. Quite the contrary: they should be viewed as valuable help. TM in particular is of significant use for improving efficiency in repetitive tasks, maintaining consistency, efficiently updating previous translations, ensuring easy terminology management and fast, efficient project management, speeding up translation and protecting codes in tables, graphs, figures, notes and images. In short, translation memories are good at improving productivity and reducing translation costs.

There are different TM programs currently available on the market, but they share similar features, albeit with some differences in speed and data management. Normally, the core of a TM is the memory itself, a complex database where source text sentences are aligned side by side with the corresponding target text sentences. The ways in which the memory can be accessed and managed vary from one TM program to another, but the philosophy behind the tool is basically the same: reusing previous work. Together with the memory, these systems usually include a module for terminology and glossary management and a tool for automatic alignment of translated sentences.
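
As an illustration of this philosophy of reuse, here is a minimal sketch, written in Python with a single invented sentence pair and an arbitrary similarity threshold, of the basic mechanism these systems share: aligned sentence pairs are stored, a new sentence is compared against the stored source sentences, and the closest match, if it is close enough, is proposed to the translator. Commercial TM programs naturally rely on far more refined matching, indexing and database techniques.

    # Toy translation memory: store aligned sentence pairs and propose fuzzy matches.
    from difflib import SequenceMatcher

    class TranslationMemory:
        def __init__(self):
            self.pairs = []                      # list of (source, target) sentences

        def add(self, source, target):
            """Store an aligned sentence pair for later reuse."""
            self.pairs.append((source, target))

        def lookup(self, sentence, threshold=0.75):
            """Return the stored translation whose source best resembles 'sentence'."""
            best_score, best_target = 0.0, None
            for source, target in self.pairs:
                score = SequenceMatcher(None, sentence.lower(), source.lower()).ratio()
                if score > best_score:
                    best_score, best_target = score, target
            return (best_target, best_score) if best_score >= threshold else (None, best_score)

    tm = TranslationMemory()
    tm.add("Press the red button to stop the machine.",
           "Pulse el botón rojo para detener la máquina.")
    match, score = tm.lookup("Press the red button to start the machine.")
    # The fuzzy match is proposed to the translator, who accepts or edits it.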

I would not like to end this brief section on TM without at least hinting at an issue which is becoming increasingly controversial and which may soon need to be given serious consideration: once a translator has created her own memory containing the translation pairs resulting from work commissioned by her client, who owns the memory thus generated? The translator, since it is the outcome of her work? The client, as the originator of the document to be translated? Or the end-user? What is more, if the same memory is used for different clients and different translations, would this raise confidentiality issues?


5.3. Case Three. New paths into research models: machine translation

Much has been written about MT and much still remains to be said. For fifty years now, research teams around the world have been developing different MT systems for different purposes in private labs and research centers, in public institutions, universities and private companies. In an attempt to mechanize a purely human activity such as translation, different approaches have been adopted: linguistic, mathematical, logical, statistical; different algorithms have been designed and implemented; and thousands of lines of code have been written. Nevertheless, the issue is still far from settled, at least in terms of the expectations of the general public: an MT system as the ultimate translation tool which saves time and money by offering an instant and perfect translation. Just remember the comments I made earlier about a computer's ability to understand, and bear in mind that the toy system we can buy off the shelf is nothing more than a toy, an instrument which badly hurts the efforts of true research to develop solid, effective MT systems.

Misconceptions have always been at the heart of MT and somehow explain why it is still shunned in some environments. The ALPAC report already proved this back in 1966 and, as Melby (1995) states, some erroneous ideas from the early days of MT, such as the search for universals of language and an interlingua (a universal grammar and linguistic system which could work for all languages in all contexts), could only lead to the failure of projects. Still, some systems could work, which implied, as Melby points out, a change in design requirements: "the crucial difference [...] was that we were not able to restrict the source text to the sublanguage of a single, well-defined domain. There was some kind of wall between sublanguage machine translation and general-language machine translation" (49), some kind of wall that prevented the system from producing "high-quality translations of a variety of general-language texts." This wall may take the form of an incorrect linguistic theory, unsuitable design criteria, overambitious project goals, or a mistaken assumption of what computers can do; yet the barrier is there, and the sooner we admit its existence, the better, since we would then be prepared to understand MT's real potential.

Another source of dissent facing MT is translators' hesitations, which "raise deep doubts, aesthetic and even moral" (Vasconcellos, 1994: 109), whereby MT is seen as "an offence to the genius of language and to the translation profession in particular." The key is to approach MT with a practical, realistic attitude, because the market is shifting toward more specialized jobs, such as software localization, and toward more specific tasks for which machine output can be used, such as reusing previously translated materials, and because a translator's workstation is the only means the translator has to meet the need for increasing speed while maintaining quality.

Thus, different approaches to MT should be used for different problems, environments and translation needs. Accordingly, current research is being conducted in two main directions, both based on corpus exploitation: example-based MT and statistics-based MT (Rico Pérez, 1998b). The first "encodes knowledge from corpora as translation patterns and then looks for the best translation match for the source language text, whereas the second assigns a probability of translation to every pair of sentences."
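
By way of illustration only, the toy sketch below, written in Python with an invented word-translation table, conveys the spirit of the statistics-based direction: every pair of sentences receives a translation probability, and the candidate with the highest score is proposed. Real statistical systems estimate such probabilities from large aligned corpora rather than from a hand-made table, and use far more sophisticated models.

    # Toy word-based scoring in the spirit of early statistical MT models.
    # The probability table is invented for illustration.
    word_prob = {                       # P(target word | source word)
        ("the", "el"): 0.6, ("the", "la"): 0.4,
        ("house", "casa"): 0.9,
        ("white", "blanca"): 0.8, ("white", "blanco"): 0.2,
    }

    def pair_probability(source_words, target_words):
        """Crude score: each target word is explained by its best source word."""
        score = 1.0
        for t in target_words:
            score *= max(word_prob.get((s, t), 1e-6) for s in source_words)
        return score

    source = ["the", "white", "house"]
    candidates = [["la", "casa", "blanca"], ["el", "casa", "blanco"]]
    best = max(candidates, key=lambda c: pair_probability(source, c))
    # 'la casa blanca' wins: it is the pair with the highest probability.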

As for practical systems presently in use, we shall mention only two: one widely known in the translation community, Systran, successfully used in the European Commission Translation Service; the other, a very interesting experience at the newspaper Periódico de Catalunya. Systran has been used in the Commission since 1976, is currently available in 17 language pairs (of which 8 give satisfactory results), and is freely available, via the electronic mail system, to all Commission staff.1 The case of Periódico de Catalunya is intriguing since, to our knowledge, it is the first time a fully operational MT system has been used to translate unrestricted text, producing almost one hundred percent satisfactory results from Spanish into Catalan.2 The translation is generated every day and is considered a "clone" (absolutely identical to the original), giving rise to a daily edition of 60,000 copies. Surprisingly enough, the MT system that performs the translation is not based on any of the current principles of computational linguistics; it does not analyze the sentences in any way; it simply works "by replacing words the way a spell-checker would do."

The system has a huge dictionary which, according to Montoliu (1998), replaces all linguistic analysis. He reports extraordinary quality of the output, with a high level of linguistic accuracy, while processing of the text responds to the speed needed in this particular context: two seconds for translating a page of about 15 K. This is possible because a team of linguists constantly updates the dictionary by hand with new terms, verbs in their different forms, and sequences of up to six words. In a sense, this is a practical implementation of an unsophisticated pattern-matching system which benefits from the similarities of the two languages involved.
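
A minimal sketch of this strategy, written in Python with a handful of invented Spanish-Catalan entries, may help to show how far plain substitution can go when two languages are as close as Spanish and Catalan: the dictionary is consulted on sequences of up to six words, longest match first, and whatever is not found is copied unchanged.

    # Toy longest-match substitution over word sequences, with an invented dictionary.
    substitutions = {
        ("buenos", "días"): ("bon", "dia"),
        ("el", "próximo", "año"): ("l'any", "vinent"),
        ("año",): ("any",),
    }

    def translate(words, max_len=6):
        out, i = [], 0
        while i < len(words):
            for length in range(min(max_len, len(words) - i), 0, -1):
                chunk = tuple(words[i:i + length])
                if chunk in substitutions:
                    out.extend(substitutions[chunk])
                    i += length
                    break
            else:                       # no dictionary entry: copy the word as it is
                out.append(words[i])
                i += 1
        return " ".join(out)

    print(translate("buenos días el próximo año".split()))   # -> bon dia l'any vinent

The real system obviously relies on an enormous, carefully maintained dictionary and on the closeness of the two languages; the point of the sketch is merely that no syntactic analysis is involved.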


 

6. Conclusion

In an attempt to clarify the role technology plays in the translation profession, an issue that still generates a range of practical questions and doubts, we have reviewed the history of computers, albeit briefly, and directed our attention to their current ubiquity in our lives. We have witnessed the efforts at mechanizing translation—in pursuit of Aristotle's dream—and discussed the potential intelligence of computers. The new world of translation was then presented, together with new tools and working methods, and we have seen how translators have become members of a global team and connected through new information technologies.

We are captivated by the digital era, conscious of the relevance of computers in the translation profession, which increasingly involves working with the latest tools, reducing time and space and gaining immediate access to information and colleagues. The new technologies, in conjunction with the new market requirements, mean a profound change in both the translation process and the product. Hence the need for new knowledge and skills: translators as expert users of the new advances, capable of responding to the ongoing transformations. The new world requires experts capable of building bridges between new types of translations and their new requirements by using the latest tools. However, we still remain skeptical about how a machine might ever supersede a human translator.



References

Dyson, E. (1997): RELEASE 2.0: A Design for Living in the Digital Age. Broadway Books.

Esselink, B. (1998): A Practical Guide to Software Localization. Amsterdam/Philadelphia: John Benjamins.

Hutchins, J. and H. Somers (1992): An Introduction to Machine Translation. London: Academic Press.

Ifrah, G. (1997): Historia universal de las cifras. Madrid: Espasa Calpe.

Lockwood, R., J. Leston and L. Lachal (1995): Globalisation. Creating New Markets with Translation Technology. Ovum Reports. London: Ovum Limited.

Melby, A.K. (1994): "The Translator Workstation," Professional Issues for Translators and Interpreters. vol.VII. American Translators Association. Amsterdam/Philadelphia: John Benjamins.

Melby, A.K. and C. T. Warner (1995): The Possibility of Language. A Discussion of the Nature of Language, with Implications for Human and Machine Translation. Amsterdam/Philadelphia: John Benjamins.

Montoliu, C. (1998): Informe sobre el sistema de traducción automática del Periódico de Catalunya. Available at: http://europa.eu.int/comm/sdt/bulletins/puntoycoma/51/periodico.htm. 12 November 1999.

Nogueira, D. (1998): "The Business of Translation" in Translation Journal, vol. 2, No. 4, October 1998. Available at: http://www.accurapid.com/journal/06xlat1.htm. 12 November 1999.

Rico Pérez, C. (1998a): "Nuevas tecnologías, nuevos mercados, ¿nuevos modos de traducción?" in Actas de las II Jornadas Internacionales de Traducción e Interpretación, Málaga: 137-144.

Rico Pérez, C. and A. Martín de Santa Olalla (1998b): "New Trends in Machine Translation," META, 42, 4, 605-616.

Rochlin, G. I. (1997): Trapped in the Net. The Unanticipated Consequences of Computerization. Princeton: Princeton University Press.

Sager, J.C. (1993): Language Engineering and Translation. Consequences of Automation. Amsterdam/Philadelphia: John Benjamins.

Vasconcellos, M. (1994): "The Issues of Machine Translation," Professional Issues for Translators and Interpreters. vol. VII. American Translators Association. Amsterdam/Philadelphia: John Benjamins.