Michel Goossens

[completed 2013-06-14]

Michel Goossens is well known for the LaTeX Companion books and was president of TUG from 1995 to 1997.

Dave Walden, interviewer:     Please tell me a bit about yourself.

Michel Goossens, interviewee:     I was born in 1951 in Belgium, in a village about 15 km to the south-east of Brussels. After secondary school in Brussels I went on, from 1968 to 1972, to study physics at the Free University of Brussels (Flemish part). Then I spent six years at the same university working on a PhD on the physics of low-energy kaons using photos from the 1.5 meter bubble chamber of the Rutherford High Energy Laboratory (UK).

After finishing my PhD, I joined CERN (the European Centre for Particle Physics, see http://www.cern.ch) in February 1979. I worked for two years as a Research Fellow on a muon experiment, and then moved to software support, first in the Physics Department, and subsequently in the Informatics Department, working on a data management system. However, I soon came to realize the importance of good, up-to-date documentation. Therefore I decided to spend the larger part of my career developing tools and providing support in the area of scientific document handling.

DW:     How did you first get involved with TeX?

MG:     Until the late 1970s text processing at CERN was performed using home-made programs which tried to make optimal use of the quality and functionality of the output devices attached to a given computer system. Then, with the arrival of an IBM mainframe, it was decided to transfer all computer documentation to the Waterloo SCRIPT processor running on the IBM, which remained the workhorse of most text processing work until the advent of personal workstations in the late 1980s.

The possibilities of SCRIPT were heavily dependent on the connected output device, and the arrival in 1985 of the IBM APA6670 (All Points Addressable) high-volume single-sheet printer meant a quantum leap forward in quality from the mono-spaced fonts used previously. For the first time it allowed the use of easy-to-remember shorthands for common entities to generate typesetter-quality output, featuring accented text and, more importantly in a scientific organisation, simple mathematical formulae. In fact, these shorthands can be considered a precursor notation of entity references in GML (Generalized Markup Language) and SGML, which was being introduced at CERN around that time. This is important since it guided Tim Berners-Lee, then working at CERN, in the development of the markup language of the Web, HTML.

Physicists and engineers who had worked with TeX in the USA wanted it to be installed at CERN. Also, in the late 1980s various Unixes and VAX/VMS had become popular at CERN, and the need for a text processing system that ran on all systems became ever more important. Therefore, it was decided to make TeX available at CERN, initially on the central VAX Service, and only later, as an alternative to the official “SGML” system, on IBM's VM/CMS. Finally, from January 1990, LaTeX became officially supported on all platforms (IBM and VAX mainframes, PC, Mac, and Unix workstations). In the early 1990s I became responsible for scientific text processing in the Informatics Department. Since LaTeX was now available on all computer platforms, I decided to translate all documentation of the software packages supported by the User Support Group (mostly written in Waterloo SCRIPT/SGML or IBM DCF/SGML) into LaTeX. The translation of the corresponding several thousand pages, which sometimes contained quite complex math, took considerable time and effort; but it allowed us to gain a quite detailed knowledge of LaTeX. Initially, the preferred output format was PostScript, but later, when PDF became better known and more generally available, the documentation was made available as PDF, for printing and for viewing on the Web.

In parallel with my LaTeX activities, I also followed in detail the developments to use XML as an archive format for scientific documents. For instance, I authored some documents using DocBook as the markup language (with MathML for mathematics and SVG for graphics), and I studied how to transform this into various output formats: XHTML or ePub directly via XSLT transformations, and PDF either via XSL-FO or via LaTeX (again with XSLT). This works relatively well for software documentation, with simple math and graphics; but for scientific publications, with lots of complex math and fancier layouts, and given that XML markup can be quite verbose compared with LaTeX, this XML approach has not really caught on. Since 2002 I have been heavily involved in the CERN Staff Association (I am now the Staff Union President, a full-time job), so for the last ten years or so I have only followed developments in that area from a distance.

DW:     Please tell me about your becoming involved with TUG.

MG:     Although the official text processing policy at CERN in the late 1980s was still SGML, an ever-increasing number of physicists started using LaTeX “unofficially”, and I got permission to attend TeX'88 in Exeter (UK), my first TeX conference, where I met a few people who were already very active in the TeX world or were to play an important role in TeX-related matters: Peter Abbott, Barbara Beeton, Malcolm Clark, Alan Hoenig, Bogusław Jackowski, Joachim Schrod. But, above all, it showed me how international the use of LaTeX had become, and how the English-centric 7-bit TeX had been adapted to the needs of languages with accents, and even different alphabets. This was an important point for an international organisation such as CERN, where scientists from most European countries work together: TeX allowed typesetting in their native languages. Also, TeX was a freely-available system with great mathematics support. From then on I could devote a larger part of my time to LaTeX, and was able to attend TeX conferences more regularly.

Hence I was in Cork (Ireland) in September 1990 for TeX'90, TUG's first conference in Europe. During that meeting I discovered the existence of a TUG Board in the form of a number of important-looking people who withdrew on several occasions into a room to discuss what seemed to be essential TeX-related issues. As Nelson Beebe, then President of TUG, wrote, that “European summit meeting” was an opportunity for the heads of TUG and the European groups (five in western Europe, with five more in the early stages of formation) to meet and talk about common issues. Indeed, the international role of TUG became more and more evident. It was at that meeting that a working group introduced the first of a series of so-called standard 8-bit LaTeX font encodings, the “Cork encoding” (also known as “T1”), which defined a code-point position for most of the letters with diacritics used in the Latin alphabets of the European countries (later several other font encodings for other languages and alphabets were defined). This common effort to make TeX more suitable for non-English languages continued at the EuroTeX conference in Paris in September 1991, where a working group with Barbara Beeton as chairperson also started work on defining an 8-bit extended TeX font encoding scheme for math fonts (see the Math Font Group homepage http://www.tug.org/twg/mfg/ for a complete history).
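
To give a concrete flavour of what the Cork encoding means for an author today, here is a minimal sketch (purely illustrative, not tied to any particular CERN document) of how the T1 encoding is selected in a LaTeX preamble so that accented letters become single glyphs that hyphenate correctly:

    \documentclass{article}
    \usepackage[T1]{fontenc}    % the 8-bit Cork (T1) encoding instead of 7-bit OT1
    \usepackage[utf8]{inputenc} % accept UTF-8 input for accented characters
    \usepackage[french]{babel}  % language-specific hyphenation patterns
    \begin{document}
    Les lettres accentuées (é, è, à, ç, ü, ø, ł) sont des glyphes uniques en T1.
    \end{document}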

In Prague at EuroTeX'92 (September 1992) and at Aston University (Birmingham, UK, in July 1993) I met executives of several European TeX user groups, in particular of GUTenberg, the French-speaking TeX users' group. As a result, I was invited to GUTenberg's Committee meetings in November 1993 and elected their President in June 1994. At the beginning of 1994 Christina Thiele, TUG's President, invited me to join the TUG Board as Vice-president. From that point onwards the other members of the Board and I tried to find ways of making TUG truly representative of the world-wide TeX community. In that respect there were some important contacts with local user group representatives at the TUG'94 conference in Santa Barbara (July 1994) and EuroTeX'94 in Gdańsk, Poland (September 1994).

After I was elected TUG President in Spring 1995, it was decided at TUG'95 in Saint Petersburg Beach, Florida (August 1995) to abolish the function of Special Director in the TUG Board. That function had been created in 1989 to increase the awareness of problems non-North American users face when using TeX. However, having only five “international” representatives (DANTE, GUTenberg, NTG, Nordic TUG, UK-TUG), although historically correct in 1989, no longer reflected the real situation. Therefore, the five Special Directors agreed to resign, thus allowing TUG to provide ways to optimally take into account the interests of all TeX User Groups. Already the next year TUG made a first step in that direction by organising the TUG'96 conference in Dubna (Russia). It is interesting to note that at that conference we were able to announce the availability of the first TeX Live CD-ROM.

Since then the number of TUG conferences organised in different parts of the world has become an example of the collaborative spirit of the TeX community, and of TUG in particular. Other examples of the collaborative efforts between the various TeX user groups are the TeX Live DVD, the presence of CTAN nodes in various countries, joint-memberships between TUG and some Local Groups, as well as the financial support that Local Groups offer to TeX-related development programs (TeX Development Fund, Libre Font Fund, TeX Gyre fonts, LaTeX 3, LuaTeX, MacTeX) or for bursaries to attend each other's conferences. I can only congratulate TUG for its continued efforts to fully assume its international role.

But let us come back to my direct involvement with TUG. The early 1990s had seen a lack of volunteers who could contribute their time to the production of TUG's flagship publication, TUGboat, so that large delays had accumulated by 1994. A direct result was a substantial drop in membership (generating a deficit in TUG's finances). Therefore, as incoming president, I took it upon myself at TUG'95 to get TUGboat back onto schedule by the end of 1995. To achieve this goal TUG's Publications Committee set up a core team together with a production environment at SCRI (the Supercomputer Computations Research Institute at Florida State University). Barbara Beeton was in charge of the overall process, assisted for the practical production by Mimi Burbank, Robin Fairbairns, Sebastian Rahtz, Christina Thiele and myself. We achieved our goal thanks to the hard work of all these people.

More generally, after a build-up in the mid-eighties, the membership of TUG had stabilized in the range of 3000–4000 members, but since 1991 it had fallen by about 10% per year. This decrease, as the nineties progressed, coincided with ever easier access to local and wide-area networks, so that most of what a TeX user needed could henceforth be conveniently downloaded from CTAN or copied from one of the TeX CD-ROMs.

As for TUG, notwithstanding the delivery of the outstanding issues of TUGboat before the end of 1995 and a re-subscription campaign by email among members of the three previous years, mid-1996 membership numbers stood at about 1500, down again from the previous year's figures by about 15%, which would have led to a deficit of about $20,000 for 1996. Already at mid-year 1995 the precarious financial situation of TUG (which was eating into its reserves quickly) had led to the TUG office being moved from Santa Barbara to San Francisco, which offered vastly improved connectivity to the Internet; cheaper rates for telephone, rent, and personnel; and, with all the universities in the vicinity, increased opportunities to find teachers and rooms for organizing courses, or volunteer effort for other TUG activities. For several years the TUG office had been run by an Executive Director (ED) plus two clerical staff, but since the end of 1995 this had been reduced to the ED alone. Nevertheless, given the cost of renting office space, telephone, computer equipment, printing costs for TUGboat and, above all, an increase in postage charges by almost 30%, the payroll of the one remaining staff member was too large compared to income. The Board therefore decided to end the contract of the ED, and thanked Ms. Monohon for her work during the five years she had run the TUG office. As of January 1st, 1997, the TUG office was staffed only half time, and more reliance was put on email and WWW services.

My direct involvement with TUG ended during the TUG'97 meeting in San Francisco (July 1997). Together with my colleagues on the TUG Board, I had made sure during our years in office that we took the actions needed to put TUG's finances on a much sounder basis. We were happy to see that, under the leadership of the incoming President, Mimi Jett, the restructuring of the office and support team was progressing further, with the San Francisco TUG office being vacated and its contents packed to be shipped to Portland, Oregon. Furthermore, in an effort to reduce costs and improve service to members, three contractors were given service contracts, each one limited to a specific responsibility: membership support and office administration; bookkeeping; and email.

I remain convinced that the sometimes painful measures we had to take have borne fruit and have resulted in a stronger TUG that is able to fully support the TeX community at large today, in 2013 and beyond.

DW:     In your various letters about the state of TeX and TUG published in TUGboat (http://tug.org/TUGboat/Contents/listauthor.html#Goossens), you mention various changes in the computing world, with TeX, and with TUG. Please give me a sketch of the evolutions you saw happening in that era.

MG:     Computing and text processing have evolved hand in hand since I first came into contact with computers in 1969, in my second year at university, where I was submitting Fortran II programs punched on cards and fed into a mainframe, with a turnaround of once or twice a day. I was very happy when in 1971 I joined the Elementary Particle Service at Brussels University for my Bachelor Diploma, since they had a PDP-10 computer where I could work at a teletype with an editor to input my programs and get the printed output on large white sheets almost immediately. In 1978 my PhD thesis was still typed by a secretary from my manuscripts, but plotters already allowed me to produce graphical representations of my results directly.

When I came to CERN in February 1979 I was introduced to electron-beam display devices (some kind of high-definition greenish television screens) used to generate graphical (or text) output on screen, but the mainframe remained the master of the computing environment. High-quality text processing was done on a series of incompatible word-processing systems running on dedicated machines (e.g., Norsk Data, Wang, AES, Philips, IBM, Olivetti, and Nixdorf), all of which were used in various CERN services in the 1970s and 1980s, generating high-quality output for use on a professional photo-composition machine.

For non-professional use, commonly-available printers had very limited capabilities in the 1970s, with uppercase-only output being the norm. It is thus no surprise that text processing systems available on general-purpose computers only began to appear when printers got more flexible. Several home-grown systems, allowing lowercase text strings interspersed with format control for generating titles, subtitles, appendices, headings, justification (left, right, centered), and boldface, were developed, first for use on punched cards and later, in the late 1970s, in interactive mode, featuring a superset of ASCII plus the Greek alphabet and some mathematical symbols.

In 1978 CERN acquired an IBM mainframe and decided to transcribe all user documentation for that machine using a simplified set of SCRIPT macros, SYSPUB, interpreted by the Waterloo SCRIPT processor. This marked the beginning of the SCRIPT era at CERN: that text formatter would remain the basis of most text processing work at CERN until the advent of personal workstations in the late 1980s.

As the quality and functionality of general text processing systems are closely linked to the available output devices, it should come as no surprise that the arrival at CERN in April 1979 of the first laser printer, an IBM 3800, opened up a new realm of possibilities for higher-quality typesetting. These laser devices offered several character sets, including many scientific symbols, allowing simple one-line equations or block diagrams to be typeset, as well as a choice between various type sizes (10, 12, and 15 characters per inch). The availability of this system was the basis of a mini-revolution, since for the first time scientists could prepare their scientific papers themselves in a reliable way. Some time later CERN received its first loose-sheet laser printer, the IBM 6670, which came with an extended set of Greek and mathematical symbols, including accented characters. This printer proved a huge improvement over what was available with the IBM 3800 since it offered for the first time proportionally-spaced fonts (previously only available on photo-typesetters), although inputting accented letters or math symbols still had to be done via awkward shorthands. CERN developed its own macro package, CERNPAPER, to ease this input process; it also took advantage of the latest versions of Waterloo SCRIPT as they became available, which included support for the newest photo-composition and laser printing devices, better font handling, negative skips and overlaps, spell checking via the inclusion of dictionaries, improved hyphenation, more flexible super- and subscript handling, and better error reporting. Finally, the arrival at the beginning of 1985 of the APA6670 (All Points Addressable) high-volume single-sheet printer in the Computing Center was another quantum leap forward in quality.

SCRIPT version 84.1, installed at the beginning of 1985, introduced support for GML (Generalized Markup Language), for which we used the reference concrete syntax of SGML (Standard Generalized Markup Language) that we now know from HTML and XML, i.e., < and > for starting and ending element tags (GML itself uses : and . respectively, so instead of GML's “:p.” we already wrote “<p>”). SGML is not a “markup language” in the sense of the SCRIPT or TeX languages; it is a meta-language which defines the syntax for creating an infinite variety of markup languages and is hence completely independent of the text formatter. The explicit structure of a given SGML language instance is described by its document type definition (DTD). Several such definitions were developed, and at CERN the CERN SGML User's Guide, published in October 1986, described a set of document types with a rich tag set for preparing foils, memos, letters, scientific papers, manuals, etc.

In the late 1980s various Unixes and VAX/VMS became popular at CERN, and the need for a text processing system that could run the same input source on all these systems became ever more important. Physicists and engineers who visited the United States of America, especially SLAC (the Stanford Linear Accelerator Center), told us with great enthusiasm about TeX, a publicly-available text processing system that a certain D.E. Knuth of Stanford University had been working on with his students since 1977. TeX's popularity with thousands of scientists was especially due to the ease with which any kind of writing could be turned into various document classes, such as articles, reports, proposals, and books, in a way that is completely under the control of the writer through a rich set of formatting commands. But even more important than the formatting are TeX's almost unlimited possibilities for inputting and rendering mathematical formulae with high typographic precision. Moreover, the program was available in the C language, which could be compiled on almost any operating system in the world, so that it ran on a wide range of computer platforms, from micros to mainframes, and behaved 100% identically on all machines, a fact extremely important in the scientific and technical communities. Related to this portability is TeX's printing device independence, so that a document can be printed on anything from a CRT screen or a medium-resolution dot or laser printer to a professional high-resolution photo-typesetter.

Because of these qualities, and since it was available in the public domain, TeX had become the de facto standard text processing system in many academic departments and research laboratories. With Leslie Lamport's LaTeX, introduced in the early 1980s, authors could concentrate on the structure of the document rather than on formatting details, which were left to the document designer via class files, although fine-tuning of the output page was still possible when needed.
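
As a small illustration of that division of labour (a toy example, not one of our CERN documents), the author marks up only the logical structure and the mathematics, while the class file supplies the layout:

    \documentclass{article}
    \begin{document}
    \section{Kaon decay}  % structural markup; the class decides the formatting
    The dominant decay mode is
    \begin{equation}
      K^+ \rightarrow \mu^+ \nu_\mu .
    \end{equation}
    \end{document}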

TeX was first officially introduced at CERN in September 1987, running on the central VAX Service, then on the IBM mainframe running VM/CMS about a year later. By 1990 TeX had “invaded” all platforms, including six Unix variants running on all kinds of workstation hardware. SGML running on top of BookMaster (a macro interface based on IBM's DCF SCRIPT interpreter) remained the officially preferred text processor on the VM/CMS operating system, while Microsoft Word made its entrance on PC and Mac personal computers.

With mainframes being abandoned and most development, administrative, and production work moving to Unix and Microsoft Windows workstations, TeX/LaTeX became the most frequently used text processor on workstations and IBM and Mac personal computers at CERN (with Word on personal computers mainly limited to administrative documents and to technical work in the engineering sector not needing complex math). Under my supervision all user documentation for the software packages supported by the Application Software Group was translated into LaTeX in a huge one-off effort.

With the help of a consultant, Sebastian Rahtz, in 1992/93 we installed at CERN a reference system containing all the latest LaTeX developments as well as compiled TeX binaries for all Unix systems available at CERN. This work became the basis of the TeX Live CD-ROM, which is now a reference for TeX distributions, the first TeX Live CD-ROM having been produced around the time of the TUG'96 conference in Dubna (Russia). It can also be said that the work with Sebastian, in particular the cataloguing of LaTeX packages, was the basis of the three “Companions” we co-authored.

In the late 1980s and the early 1990s a major event happened at CERN: Tim Berners-Lee and collaborators developed the basics of what was to become the Web. In those days Tim was sitting just a few offices down the corridor from where Sebastian and I were working, and already at the beginning of 1993 we had translated, with Tim's active help, some rather complex LaTeX documents into HTML, using our own ad hoc set of LaTeX macros which translated LaTeX high-level commands into HTML equivalents (with formulae left as TeX inside the HTML). CERN was thus using HTML well before the rest of the world, which became aware of the Web mostly after the “Woodstock of the Web”, the First World Wide Web Conference organised at CERN on May 25–27, 1994.
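
To show the kind of trick involved in those early LaTeX-to-HTML translations, here is a hypothetical sketch (not the actual macros we wrote, and with purely illustrative names) of the general idea: shadow high-level LaTeX commands so that a TeX run writes HTML tags to a separate file instead of typesetting them, with formulae passed through untouched:

    \documentclass{article}
    \newwrite\htmlout
    \immediate\openout\htmlout=page.html
    % Illustrative macros: each one writes an HTML tag to page.html.
    \newcommand{\HTMLsection}[1]{\immediate\write\htmlout{<h2>#1</h2>}}
    \newcommand{\HTMLpar}[1]{\immediate\write\htmlout{<p>#1</p>}}
    \begin{document}
    \HTMLsection{Introduction}
    \HTMLpar{Formulae such as $E = mc^2$ were left as TeX inside the HTML.}
    This document itself still typesets normally.
    \immediate\closeout\htmlout
    \end{document}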

The Web became so popular that several browser vendors started competing by offering specific extensions to the HTML language to attract users wanting to publicise their products, and various mutually-incompatible dialects of HTML appeared. Moreover, to really benefit from the Web and the various applications that were being developed, the XML initiative was launched in late 1997. Jon Bosak, who published his seminal article “XML, Java, and the future of the Web” around that time, was one of its main promoters. All this culminated in the publication of the XML W3C Recommendation, which defines the XML language in a formal way. XML is truly and explicitly international in that it espouses Unicode as its basic character set. Moreover, XML-based companion specifications, such as XPath, XSLT, XQuery, XML Schema, XSL-FO, CSS, SVG, MathML, XLink, XHTML, DocBook, TEI, and hundreds more, have made XML the real lingua franca of the Web, allowing XML applications to provide Web services to everybody in a simple and portable way.

For scientific and computer documentation, DocBook (http://www.docbook.org) markup has been in use for many years. In the late 1990s I ran some pilot projects in the use of XML and DocBook at CERN, but the effort did not catch on, and today most documents are still produced with LaTeX and Microsoft Word.

DW:     Please tell me about how you got involved in each of the three “Companion” books you were involved in co-authoring and what was your role in each.

MG:     The content of the three Companion books grew out of the work I did together with Sebastian Rahtz at CERN in the early 1990s. More precisely, The LaTeX Companion, the first book in the series, was a direct result of a visit by Frank Mittelbach and Chris Rowley, whom I had invited to CERN in April 1992 to give a talk about LaTeX3 (yes, they were already working on that system twenty years ago!). On that occasion I talked to Frank about an idea that Alexander Samarin, a Russian colleague who was helping with TeX-related work at CERN, and I had: writing a book describing the many LaTeX packages we had installed at CERN in those days, to teach LaTeX users worldwide about their existence and their capabilities. Frank found it a good idea and contacted Peter Gordon of Addison-Wesley. The publisher was interested, and Alexander, Frank, and I decided to collaborate.

It took almost 18 months of hard work (Frank was developing LaTeX2e in parallel as we were writing the book, which consisted of a single large LaTeX source file) before the first edition was published at the beginning of 1994. As it was the first book for both Frank and me (Samarin went back to Russia in late 1992, so he was less directly involved in the later stages of the project), mastering all the technicalities, and keeping the latest versions of the LaTeX kernel and other macro packages, which Frank prepared almost daily in Mainz, where he lived, in sync with what I was running at CERN (Geneva), was not a straightforward or easy task. Remember, there was no Web; file transfers were via ftp, and the Internet was a lot slower than today (a more detailed story is available at http://tug.org/TUGboat/tb15-3/tb44goossens.pdf).

The LaTeX Graphics Companion was a natural successor to The LaTeX Companion. In fact, Frank and I had had to leave out quite a lot of material from the first book, especially in the field of graphics applications. Hence we thought it interesting to use that material and complement it with recent developments. With the cost of PostScript printers coming down quickly in the mid-1990s, PostScript had become one of the favourite output formats of all commonly used graphics applications; LaTeX-based interfaces to that language had naturally been developed (e.g., PSTricks), and PostScript fonts had become available on most printers. Therefore, The LaTeX Graphics Companion, which came out some three years after the first Companion, can be considered its genuine complement, since it describes numerous packages that extend or modify LaTeX's basic illustration features. It covers general graphics based on METAFONT and MetaPost, PSTricks, and XYpic, special applications in mathematics, physics, chemistry, engineering, games, and music, as well as the use of PostScript fonts, drivers, and tools.
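
To give a flavour of the PSTricks material, here is a toy example of my own (compiled via the classic latex-plus-dvips route, since PSTricks talks directly to PostScript):

    \documentclass{article}
    \usepackage{pstricks}
    \begin{document}
    \begin{pspicture}(0,0)(4,3)
      \psline[linewidth=1pt]{->}(0,0)(4,0)  % x axis
      \psline[linewidth=1pt]{->}(0,0)(0,3)  % y axis
      \pscircle(2,1.5){1}                   % a circle of radius 1
    \end{pspicture}
    \end{document}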

The third book in the series, The LaTeX Web Companion, was started almost immediately after we finished The LaTeX Graphics Companion. Sebastian at Elsevier and I at CERN had been experimenting with publishing complex scientific material on the Web. Developments to translate LaTeX sources into HTML were undertaken in various places, and to counter the spread of the many incompatible dialects of HTML the W3C published the first version of the XML language recommendation in February 1998. XML is based on the ISO standard SGML, but eliminates the latter's overly complex and rarely used features. All this was happening at a very fast pace, and we decided to try and write a book capturing some of the newest developments so that they could become helpful to the whole LaTeX community. A large part of the book is devoted to translators which transform LaTeX sources into HTML with images or MathML, using LaTeX2HTML (maintained by Ross Moore) and TeX4ht (written by the late Eitan Gurari). We also looked at an approach which was still in its infancy: displaying LaTeX directly on the Web, via browser plugins. These three chapters were based on contributions by Ross, Eitan, and IBM's Robert Sutor, who was the leader of the techexplorer project. We featured PDF as a successor to PostScript, and introduced pdfTeX as a useful extension of TeX highly integrated with that language. The latter part of the book talked about XML as a language, and XSLT as a tool for transforming XML into various output formats. The aim of Sebastian and myself was to try and provide an overview of a quickly changing field.
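
As a hint of what the pdfTeX route brought to authors (again only a minimal sketch of mine), the hyperref package turns cross-references and URLs into live links in the resulting PDF:

    \documentclass{article}
    \usepackage{hyperref}  % \ref and \url become clickable links under pdfTeX
    \begin{document}
    \section{Links}\label{sec:links}
    See Section~\ref{sec:links}, or visit \url{http://www.tug.org/}.
    \end{document}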

Soon after the year 2000, Sebastian and I became involved for the larger part of our time in non-LaTeX-related activities. Therefore, my direct contribution to the Second Edition of The LaTeX Companion, which was published in 2004, was at a lower level than for the first edition, and the collaboration of Johannes Braams, David Carlisle, and Chris Rowley of the LaTeX3 Team was thus much appreciated. The same is true for both Sebastian and me for the Second Edition of The LaTeX Graphics Companion, which appeared in 2008, and where Denis Roegel (for MetaPost) and Herbert Voss (for PSTricks) made substantial contributions.

As I am still very much interested in scientific document handling (and in particular using XML as central archive format for storing the sources of scientific material), I would very much like to find some time (surely not before I retire in early 2016) to update The LaTeX Web Companion, augmenting it with tools which have been developed over the last decade or so. In a quick search on the Web I found latexml, eLML, Tralics, Hermes, TeX4ht, LXir, TeXML, LyX. And there is also LuaLaTeX, which opens up the possibility of using the embedded Lua scripting language to transform XML sources into LaTeX and the reverse. A whole program....
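
As a tiny taste of that embedded scripting (my own illustration, to be compiled with lualatex), Lua code can compute or transform something and hand the result back to the typesetter:

    \documentclass{article}
    \begin{document}
    % \directlua executes Lua; tex.print feeds its output back to TeX.
    The answer is \directlua{tex.print(tostring(6 * 7))}.
    \end{document}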

DW:     You mentioned Berners-Lee and originally TeX macros and later LaTeX2HTML. Please say another word about that.

MG:     Tim Berners-Lee, who developed the Web at CERN, wrote a text-based HTML editor and browser. Mosaic, the first WYSIWYG browser, was developed elsewhere. Similarly, LaTeX2HTML was developed originally by Nikos Drakos in 1995 while working at Leeds University in the UK. It was next put on a distribution server in Darmstadt (Germany) in 1996, and a few years later the maintenance was taken over by Ross Moore. The chapter on LaTeX2HTML in The LaTeX Web Companion was contributed by Ross (although heavily edited by Sebastian and me to conform to the look and feel of the printed book).

DW:     How has the use of TeX et al. changed over the years at CERN?

MG:     Today, TeX/LaTeX is still by far the most frequently used document preparation system at CERN, especially for marking up scientific documents to be submitted to the international journals which provide the relevant class files. Sometimes LaTeX2HTML is used for providing HTML output.

CERN was one of the pioneers of the Open Access initiative, which means that all results of its experimental and theoretical work shall be published or otherwise made generally available. In particular, all scientific papers are uploaded to the arXiv.org server (currently hosted at Cornell University), which CERN actively supports. This server provides open access to over 850,000 e-prints in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics, the vast majority of which are available as LaTeX sources.

Larger documents, mostly of a more technical nature, or administrative documents are often prepared in Word on Macs or Windows PCs. Sometimes OpenOffice is used on Linux.

Some tests have been made with marking up manuals in XML, in particular using DocBook. However, since complex mathematics cannot yet be fully represented in MathML, and general MathML markup is often not rendered correctly in browsers, most documents are available on the Web only as PDF files (generated from LaTeX or MS Word sources).

DW:     A follow-up question: you mentioned that you are “now the Staff Union President, a full-time job.” Will you give me a word or two of description of this job?

MG:     Already in secondary school and at university I always played an active role in representing first the students, and later the academic staff, in the relevant educational and social bodies. So when I joined CERN in 1979, I almost immediately became a member of the Staff Association (http://staff-association.web.cern.ch/), which is the only body recognized by the Management to represent, on a collective basis, the members of the personnel in matters of a general nature regarding their employment conditions.

CERN (http://home.web.cern.ch/about), an international scientific organization, currently employs some 2,500 staff and 500 fellows. Just over 60% of the staff are members of the Staff Association, and these elect the 60 members of the Staff Council for a two-year mandate. The delegates of the Staff Council subsequently elect a President and his Executive Committee. Delegates to the Staff Council continue to be paid by CERN during their representative duties. This applies in particular to the function of President, which is a full-time job. Those not working full time for the Staff Association (everybody but the President) continue to work in their technical job and function. Similarly, when a president's term ends, he (we have not yet had a woman president since 1955, when the Staff Association was created) returns to his former unit and technical job.

As far as I am concerned, I have been a member of the Staff Council since the mid-1980s. I was first vice-president in 2002 (a half-time job), then President in 2003 and 2004, then vice-president twice more in 2007 and 2008, and I have been President again since 2011. My current term ends in December 2013, but I plan to run for a last two-year mandate in the Staff Council at the end of this year, and, if elected, will thus finish my career at CERN representing its staff until I retire at the beginning of 2016.

DW:     Thank you very much for taking time from this busy role to participate in this interview, and for your very valuable role during a time of major transition for TUG. I particularly appreciate the effort you put into the books on LaTeX that I use daily.

