A personal view – Teodora Ghiviriga, Romania

The Faculty of Maritime Studies at the University of Rijeka, Croatia, hosted this year’s LEXICOM workshop in lexicography and lexical computing. The venue in this lovely seaside resort – Hotel Opatija – provided Internet access for the computers; it also accommodated many of the 32 participants (from 11 countries), an arrangement that was not only time-efficient but also fostered a really congenial atmosphere. The constant presence of the three tutors, Sue Atkins, Adam Kilgarriff and Michael Rundell, outside the actual workshop proceedings added to the social success of the event and also turned out to be a professional gain. I, for instance, was the passive yet eager beneficiary of a brief but in-depth account of the present state of American lexicography, casually delivered over lunch by two of the tutors.

The lexicographic strand ranged from an insider’s view of the practical aspects of dictionary-making (the use of templates, the choice of metalanguage, definition writing, style-guide policy) to theoretical matters such as the history of and rationale for the corpus-based approach to word sense, and a condensed introduction to frame semantics. This – together with the mini-tutorial I was given on the use of FrameNet – I found particularly relevant for my work in translation and in teaching translation. Apart from its direct applicability to lexicography, it can, to my mind, prove highly valuable in providing the ground for discussing valency patterns (in morpho-semantics, for instance) and possibly for project work on establishing equivalences in translating from and into Romanian (mainly, but not only, literary texts), since existing bilingual dictionaries offer limited (and often confusing) information on two linguistic systems that operate on highly different bases, English and Romanian. The input on the practical aspects outlined solid and consistent criteria for selecting and assessing dictionaries and dictionary entries (useful in translation practice), and the anecdotal exemplification made it all the more memorable, while also warning against facile solutions and rigid standardization.

The NLP strand interwove information on document types, tokenization and tagging with technical developments such as corpus querying and refining searches with the Sketch Engine. A point of special interest, for its immediate applicability, was the creation of corpora from the web with the BootCaT tool (preceded by the necessary discussion of the representativeness of linguistic material extracted from the web, the legitimacy of using the web as a corpus, and problems of selection and filtering, among others). The resulting sample of a specialised corpus allowed me, in my case, to envisage some of the ways such an instrument can be put to work – for example, tracing instances of as yet unadapted (or partly adapted) English items in the Romanian terminology of economics and business (and the speed of the Internet connection made our patient wait all the more rewarding). Since to me BootCaT made sense in association with the Sketch Engine, it is appropriate to mention the latter here: being introduced to it, and to its Word Sketch function, will prove one of the major benefits – for teaching translation (on the side), but mainly for analysing and interpreting the specialised corpora already created. Working on the motivating lexicographical tasks that involved the use of Word Sketch, and learning by trial and error, was definitely a rewarding experience.
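The bootstrapping idea behind BootCaT – combining seed terms into random tuples that serve as web-search queries, whose result pages are then downloaded and cleaned into a corpus – can be sketched roughly as follows. This is a minimal illustrative re-implementation in Python, not BootCaT’s own code, and the seed terms (English loanwords in Romanian business language) are invented for the example:

```python
import itertools
import random

def seed_tuples(seeds, tuple_size=3, n_tuples=10, rng=None):
    """Generate random tuples of seed terms to use as search queries,
    in the spirit of BootCaT's bootstrapping step (illustrative only)."""
    rng = rng or random.Random()
    combos = list(itertools.combinations(seeds, tuple_size))
    rng.shuffle(combos)
    return combos[:n_tuples]

# Hypothetical seed terms for a Romanian economics/business corpus
seeds = ["leasing", "marketing", "brand", "cash-flow",
         "stakeholder", "audit", "dumping", "retail"]

for t in seed_tuples(seeds, n_tuples=5, rng=random.Random(0)):
    query = " ".join(t)
    # In the real workflow each query is sent to a search engine;
    # the returned pages are downloaded, stripped of boilerplate,
    # and collected into the growing specialised corpus.
    print(query)
```

The query-and-download loop, de-duplication and cleaning are where the real work (and the filtering discussion mentioned above) comes in; this sketch covers only the tuple-generation step.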

The carefully balanced schedule alternated theoretical input with practical work intended to give everyone the opportunity to write entries using the NLP tools. Relying on well-ordered, step-by-step instructions, shared task practice proved an effective strategy, as it involved fruitful exchanges of ideas among team members. At the round-up session, the discussions identified difficulties and weighed up solutions.

The nearly 500 pages of the course pack compressed the content of the seminars and lectures and also conveniently left room for note-taking during the sessions. The written form obviously couldn’t retain the vividness and savour of the presentations, but it definitely makes a helpful resource for further reference and study (back at home). A judicious selection of bibliography and useful websites was also appended to it – an offer that any professional would revel in.

To the optimists, the presentation in the closing session came as a promise. To the more skeptical minds (such as mine) it came both as a promise and a surprise: ‘what the future holds’ for the lexicographer, as well as for the lay dictionary user, proposed the cogent image of a complex yet user-friendly, reliable (in terms of representativeness) and flexible dictionary that brings together the advantages of the electronic format (easy but selective access to entries, links to further and related information) with the achievements of traditional lexicography. An altogether inspiring close to the workshop. So this year’s Lexicom ended just when one wished it had actually started, or at least that it could last just a bit longer – a wish which in itself makes, I think, a suitable conclusion, since it holds the promise of future growth.