Four thousand two hundred letters of different alphabets are engraved on the panelled and spectacular vault, 32 metres high, of the ‘new’ library in Alexandria. A library which, alas, is interested in only one language today.
It was in 2002 that this colossal project saw the light of day. The objective: recreate the celebrated library of Alexandria, one of the great wonders of antiquity, which was burned, rebuilt, then destroyed again and again, mainly at the hands of the Romans, but also of Orthodox Copts, Christians and Muslims, according to historians’ divergent accounts. Unless, of course, it was an earthquake.
Today, the new library of Alexandria occupies a choice spot on the shores of the Mediterranean, in the second biggest city in Egypt. The original building is thought to have stood some 100 metres away, but no excavation has ever established its exact location. A statue of the Ptolemaic dynasty, to which we owe the founding of the library, was discovered in the port of Alexandria in 1995: 13 metres high and weighing 63 tonnes, it stands today at the entrance of the library (though secularists were surprised to see that the pubic area of the statue had been covered up, as if this ancient display of nudity were indefensible in the eyes of the followers of contemporary radical Islam). ‘No! It’s a man! He was a pharaoh. It is certainly bare-chested, but it is a male statue,’ cuts in Ismail Serageldin, the director of the library.
One gains access to the building over a glass footbridge. Inside, there are several museums, a conference hall, a planetarium and especially ‘a reading room that can accommodate 2,000 people at a time, built on eleven levels,’ says Sherine Gaafar, the spokesperson for the project. The library can hold eight million books (it has only 1.5 million at the moment, in twenty-four languages). For the project has been faltering. At first, it lacked money and leadership. Egypt had neither the means nor the political will to fully realize the beautiful idea it had so ardently defended. Deeply associated with the Mubarak era (the dictator’s wife presided over its board of directors), threatened with a new round of destruction during the 2011 revolution, criticized for various forms of political or moral censorship, the library was not equal to its role, let alone its ambitions. With the rise of digital technology in particular, the project lost its meaning: what good is gathering in one spot all the books ever printed, when one can have them everywhere in digital form? What good is creating a universal encyclopaedia, when Wikipedia exists in all languages?
‘The idea of collecting the maximum number of books possible, which was the original project, does not make sense in the digital age,’ admits Ismail Serageldin, in impeccable French. ‘Today,’ he adds, ‘we don’t really need physical books any more. The important thing is to make a choice, be “curators”.’
A unique and original initiative, the new library of Alexandria soon found itself in competition with better-financed and technologically more solid projects. It was therefore forced to reach out to American and European partners, with the help of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and Google, to participate in the creation of a World Digital Library (wdl.org), a project in which it plays only a modest role. In the same way, it tried to reinvent itself as an internet memory bank, permanently archiving everything that appears online, but the Internet Archive project (archive.org), launched in 1996 in San Francisco, marginalized it. It then became a mere annex handling Arabic content, and finally stopped archiving the internet in 2007. Today, compared with the large-scale digital projects led by Harvard University, the American Library of Congress, Europeana or Google Books, the library of Alexandria has inexorably taken a back seat. And when I visit the reading rooms, meant to accommodate 2,000 visitors a day, I see barely a dozen readers.
In the fourth basement, people are busy in the Digital Lab. Its deputy director, Rasha Shabandar, who like most of the women employed here does not wear the veil, describes to me the process of digitizing ancient texts. An original manuscript is first scanned and saved in image form, then converted to text form so that it can be corrected. ‘Everything is automatic, but when the image is converted to text, many mistakes appear, and the workers correct them word by word,’ explains Shabandar. Once the operation is done, the scanned and corrected manuscript is transformed back into image form in a copyright-free format, DjVu, a rival of PDF, which compresses images further while still offering copy and search functions. Shabandar wishes to show me the result on a text but, on the day I am in Alexandria, the IT system is down and the demonstration planned for me fails for lack of a working computer. Shabandar is apologetic and her colleagues are extremely kind; but one of them will later tell me, off the record, that the computers are constantly down. On the wall: framed verses of the Koran.
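The Lab’s own OCR chain is proprietary, but the workflow Shabandar describes can be sketched with open-source stand-ins. A minimal sketch, assuming the Tesseract engine and its Arabic model as substitutes for the library’s actual software, and a hypothetical file name:

```python
# A minimal sketch of the scan -> OCR -> correction workflow, using
# open-source stand-ins (Pillow + pytesseract, i.e. the Tesseract engine
# with its Arabic model installed), not the library's actual software.
from PIL import Image
import pytesseract

page = Image.open("manuscript_page.png")                  # 1. the scanned image
raw_text = pytesseract.image_to_string(page, lang="ara")  # 2. automatic OCR

# 3. The word-by-word human correction pass happens here, before the
#    corrected text is embedded back into a compressed, searchable image
#    container (DjVu, in the library's case).
print(raw_text[:200])
```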
‘Our mission changed gradually. Instead of aiming to collect all the books, in all languages, and archiving everything online, we confined ourselves to Arabic alone. That is our objective now. Our aim is to improve the presence of Arabic online and keep the Arab internet alive. We want to be the sponsor of Arabic content online,’ says Mariam Nagui Edward, one of the digital managers of the Library of Alexandria. From a nationalist, or at any rate pan-Arab, point of view, a multicultural project has thus been reduced to a single language.
Though the ambitions have shrunk, obstacles endure. Digitizing Arabic is not as easy as one might believe. Firstly because there are many kinds of Arabic, even though there is only one classical Arabic of the Koran. Dialectal forms of Arabic are dominant in many Muslim countries, not to mention Berber and Kurdish. ‘Egyptian only represents 13 per cent of the Arabic language that we are backing up,’ admits Serageldin, who is eloquent on the varieties of Arabic. Moreover, with videos becoming ever more important on the internet, it is difficult to preserve the Arab internet without taking all these linguistic differences into account. Even in its classical, standard written form, Arabic poses innumerable problems. ‘When one does a digital search for a word in Arabic characters, one can see that this is a lot more difficult than with Latin words,’ continues Mariam Nagui. According to her, the letters are written in varied ways, the style of linking them to form a word differs a lot from one person to another, and the points and other signs that appear above or below letters are not the same everywhere. Thus, the software that transcribes texts, or that searches for a word or sentence, can confuse everything when it does not simply treat these signs as ‘irrelevant results’ and ignore them (during digitization, the stains and dirt on manuscripts are given the same name, as something to be left out). ‘We have been working on this software for years and we are gradually improving it thanks to the large number of books we are digitizing. Today, we have achieved a relevance of 99.8 per cent for searches in Arabic,’ Mariam Nagui assures me. She adds, ‘We are unable to reach 100 per cent.’ (The technique used is called Optical Character Recognition, or OCR, and it relies on software adapted to Arabic such as Sakhr or NovoVerus.)
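One of the difficulties Mariam Nagui describes can be made concrete: the optional signs written above and below Arabic letters mean that a vocalized spelling and a bare spelling of the same word do not match character for character. A minimal sketch of the normalization step such a search tool needs, in standard-library Python (my own illustration, not the library’s software):

```python
import unicodedata

def strip_marks(text: str) -> str:
    """Drop combining marks (fatha, damma, kasra, shadda...) so that
    vocalized and bare spellings of a word compare equal."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

print(strip_marks("كَتَبَ") == "كتب")  # True: the vocalized form now matches
```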
Another evolution: the new library of Alexandria now specializes in the history of Egypt. It holds the important Nasser and Sadat archives and a unique collection of documents on the Suez Canal, and it is digitizing the National Archives of Cairo. It also chose to preserve all the documents related to the fall of Mubarak. I look at the results and I am impressed. At the click of a mouse, all the articles, tweets, Facebook pages and photos posted on Flickr, Picasa or Instagram depicting the 2011 revolution are now archived. ‘In total, more than thirty million documents on the revolution,’ says Mariam Nagui, delightedly. At the end of the day, the Library of Alexandria may just have found its mission.
The battle for languages on the internet has only begun. In the countries where I undertook this survey, I could see that language often becomes a battleground as soon as one raises the digital question. In Mexico, in Quebec, in China, in India, in Brazil, in Russia, tensions over the national language or dialects, and controversies in the name of diversity or nationalism, are innumerable. For everywhere, only one enemy is pointed at: American ‘Globish’.
The word ‘Globish’ was coined by a Frenchman working for International Business Machines Corporation (IBM). A derogatory term, it refers to a globalized, minimal, basic ‘American’ language that everybody uses more or less, and which appears very poor compared to traditional English. It is not a language, only a utilitarian American English. The English are the first to make fun of it; they proudly like to quote the writer Oscar Wilde: ‘Nowadays, we have everything in common with America except, of course, the language.’
To make things more complicated, there is not only one ‘Globish’. It has itself developed into a variety of styles of broken English. In Singapore, it is ‘Singlish’; the Hispanics of the United States use ‘Spanglish’; in China, ‘Chinglish’; and among the Tamils of India, one calls it ‘Tanglish’.
It is however difficult to deny the rapid spread of English under the twin effects of globalization and the generalization of the internet. According to the available studies, including those published by the International Organization of Francophonie, which can hardly be accused of Americanophilia, English has indeed become the vehicular language of the internet. It is, I would say, the default language of the web, as one says ‘default search engine’. It is in a way the mainstream language of the internet, a utilitarian, user-friendly language, the only one to serve as a sort of lingua franca allowing communication between different speakers, including, for example, those within the European Union. The very low rate of multilingualism on the internet is another important detail.
All this, however, does not mean that the internet goes hand in hand with linguistic standardization. Quite the contrary. Though English often serves as the sole language of exchange between people who don’t speak the same language (standard Arabic, Mandarin, Russian or French can also play this role), most netizens browse sites and hold conversations on social media in their own language. Globalized American sites exist, and their audiences are large, but they account for only a fraction of individual visits around the world. Facebook may well be American, but its users mostly speak their own language, which is not English. The same goes for Wikipedia. And one may expect this trend to grow in the years to come, as millions of non-English speakers go online.
It is a discreet building, with no sign outside, on New Montgomery Street in downtown San Francisco. Arriving a little early for my meeting, I meet a security guard with a wrinkled face, who warmly grabs me by the shoulder: ‘I leave at 6 p.m. and if you come late, you may find the gate closed. But you can wait here.’ He seems happy to have my company for a few minutes before I go up to the sixth floor: the world headquarters of Wikipedia.
One of the ten most visited websites in the world occupies a modest space. No outward sign of wealth. No signs inside the lift. No ID check at the entrance. In spite of its nineteen billion page views a month, there is no abundance of engineers, as at Facebook, nor a big neon sign on the street, like the one at Twitter, whose headquarters on Market Street is only a few blocks down. Besides, the online encyclopaedia is not officially run by anyone, except its members. The Wikimedia Foundation, which manages it and where I am now, just deals with legal problems and arbitrates technical questions, leaving the netizens free to debate content democratically, through complex procedures. In each country, the local Wikipedia site is independent. ‘We don’t actually work per country, we work per language. Initially there was a very lively debate on the subject and the netizens chose to privilege language over geography,’ says Matthew Roth, the spokesperson for Wikimedia. Friendly, he answers all my questions very professionally and provides me with all the numbers I ask for, without the culture of secrecy that is omnipresent in Silicon Valley. Yet he seems overwhelmed by the magnitude of the success he is describing. And just as stunned by the audacity, if not the madness, of Jimmy Wales, its volunteer founder, who lives today in London, the man who wagered that such a collective encyclopaedia, managed somewhat by the masses and a lot by peer groups, would not descend into chaos.
The bet paid off. Today, Wikipedia counts twenty-seven million articles, freely accessible (and under Creative Commons [CC]). Never has a man-made encyclopaedia attained such a scale, nor had such a resonance. Its disruptive effect has been such that, in a few years, Wikipedia has eliminated almost all existing encyclopaedias, going so far as to create a wiki dedicated to the ‘Errors in Encyclopaedia Britannica that have been corrected in Wikipedia’.
Above all, Wikipedia now exists in 287 languages. ‘English is our first language, but a minority one,’ adds Roth, who speaks the way one writes on Wikipedia, from a ‘Neutral Point of View’ (NPOV in Wikipedia jargon). The number of articles in English, for all English-speaking countries combined, is around 4.4 million, far ahead of German and Dutch with 1.6 million each, French at 1.4 million, and Spanish, Italian and Russian with 1 million each.
The number of languages used is utterly stupefying. Regional languages like Breton, Occitan and Corsican; Amerindian languages such as Cherokee and Cheyenne; languages of ethnic minorities such as Kabyle or Berber; innumerable dialects; and even Esperanto have a real existence on the encyclopaedia, with millions of entries and readers. ‘Wikipedia is used by regional groups all the more as their languages are rare,’ says Roth, taking care not to pass value judgements on these minority phenomena. He also knows that the 287 languages available on the encyclopaedia are a small number compared to the 6,000 languages spoken around the world today.
This assessment is shared by Nishant Shah, head of the Centre for Internet and Society in India, interviewed in Bengaluru: ‘I see that Wikipedia is increasingly translated into regional languages and dialects in India. Yet we are a country with twenty-two official languages and more than 1,600 local languages. It is a very interesting evolution; the internet is becoming local.’ The Portuguese version of Wikipedia, in the primary language of Brazil, is also highly developed, as Régis Andaku, the spokesperson of the Brazilian portal UOL, confirms in São Paulo: ‘Wikipedia is growing here at lightning speed, essentially because of the Brazilians, far less because of the Portuguese.’
Matthew Roth is delighted with this global dynamic: ‘We are only at the beginning of Wikipedia. But we have already helped some countries that had no encyclopaedia to get one, and, with our Wiktionary, a dictionary for the first time in their history.’ At the headquarters of the Wikimedia Foundation in San Francisco, I notice that each meeting room bears the name of a famous encyclopaedist, from all five continents.
At the end of it all, the presence of a language on the internet is most often linked to the number of its speakers. Apart from English, which has its own particularity and which alone is the language of 27 per cent of internet users across all sites, a dozen other languages dominate the Web. In 2011, ranked by number of users, they were: Mandarin (25 per cent), Spanish (8 per cent), Japanese (5 per cent), Portuguese (4 per cent), German (4 per cent), Arabic (3 per cent), French (3 per cent), Russian (3 per cent) and Korean (2 per cent). (Hindi ranks low because the internet in India is most often used in English; Bengali, Urdu and Malay, though widely spoken, are also low owing to the state of development in India, Pakistan and Indonesia; these data will change rapidly as internet penetration on smartphones increases.)
Conversely, when one looks at the languages used by the sites themselves, one sees that 55 per cent of them still use English for their welcome pages, far ahead of all other languages. Then come, in order of importance: Russian (6 per cent), German (5 per cent), Spanish (5 per cent), Chinese (4 per cent), French (4 per cent), Japanese (4 per cent), Arabic (3 per cent), Portuguese (2 per cent), Polish (2 per cent), Italian (1.5 per cent), Turkish (1 per cent), Dutch (1 per cent) and Farsi (1 per cent). (Here again, there are many ways these results can be distorted, starting with the fact that these 2013 statistics cover only the welcome page.)
Ñ
‘The letter “ñ” exists in only one language: Spanish,’ Manuel Gilardi tells me enthusiastically in Mexico. But he adds right away, ‘The problem is that this letter is not available on the internet.’
It very nearly disappeared, too! In the early 1990s, a polemic as impassioned as the debate over banning bullfighting broke out in Spain: the letter ‘ñ’ might not survive into the information age. The reason: computers that did not do justice to this Spanish trait were being sold with keyboards lacking the famous ‘ñ’ key. The polemic intensified when the European Community, never short of liberal provocation, gave the Spanish government official notice, in May 1991, to repeal the new legal provisions prohibiting the sale on Spanish soil of any computer whose keyboard did not carry the letter ‘ñ’. In the eyes of the European Commission, these were disguised protectionist rules, designed to privilege the local computer industry and restrict free competition; but to the Spaniards, it was a burning question of identity. ‘The consequence of this demand was clear: “España” would become “Espanya” on all technological platforms,’ explains Tom C. Avendaño, the young digital columnist for El Pais and celebrated blogger, when I meet him in Madrid. The official response of the Spanish government did not tarry. The very day after the European ukase, Spain made it clear that it rejected the demand. No question of sacrificing the letter ‘ñ’! Spain would rather lose its kingdom than its ‘ñ’!
The argument, subdued for a while, resurfaced a few years later with the internet. Intellectuals, among them the Peruvian Nobel laureate Mario Vargas Llosa, then took to the streets to protect the letter ‘ñ’, whose very existence on the internet was threatened! The Spanish linguist Lazaro Carreter even went so far as to declare, grandly, that Spain should quit the European Union rather than give up its ‘ñ’. Fortunately, negotiations led to a solution and IT systems were soon endowed with the ‘ñ’, as well as other special characters. ‘It is a slightly ridiculous anecdote but it says a lot about the Spaniards’ fear of globalization, more than their fear of the internet. In the end, it is a beautiful story,’ says Tom Avendaño, who nevertheless asks me not to forget the ñ in his surname.
‘On the internet, the letter “ñ” is still lacking,’ insists Manuel Gilardi, vice-president of Televisa, the number one among Hispanic and Spanish-language broadcasters.
I am in Mexico City, at the headquarters of Televisa, and Gilardi, who heads the digital side of the premier Mexican media group, shows me, on PowerPoint, the projected numbers of Spanish speakers over the next ten to twenty years. The numbers are impressive. The map covers almost all of Latin America except Brazil, the Caribbean, Spain of course, tiny Equatorial Guinea, part of Western Sahara and the north of Morocco (around Tangier), but above all the United States, where the progress of Spanish is swift. Indeed, one of Televisa’s competitors, the American channel Cable News Network (CNN), placed a tilde on its ‘N’ for its Latin American version: CÑN. Very symbolic of the times.
The number of Hispanics living in the United States is fifty-three million. They are essentially Mexicans, more than thirty-two million of them, to which one can add five million Puerto Ricans, two million Cubans, 1.7 million Salvadorians, 1.5 million Dominicans, one million Guatemalans and almost as many Colombians. And yet these figures cover only legal Hispanics; if we include the ‘undocumented’, the illegal entrants, it is necessary to add between ten and fifteen million people. (For the most part, the illegal entrants are Mexicans.)
‘Today, thanks to the internet, Televisa can be seen everywhere. But our ambition is neither to limit ourselves to Mexico alone nor to conquer the entire world. We only want to reach those who speak Spanish,’ explains Manuel Gilardi. In fact, the priority of the powerful Mexican media group is what the vice-president of Televisa calls the ‘USH’: the US Hispanic market.
‘Today, television in Spanish is a market undergoing double-digit growth,’ explains, in Miami, Bert Medina, former head of Univision and now the director general of one of the main Hispanic-American television channels in Miami, WPLG-TV, affiliated with the American Broadcasting Company (ABC). It has two digital channels, MeTV and Live Well, as well as many websites and mobile applications. As for Univision, also based in Florida and the leader in television for Latinos, it has an exclusive partnership with Televisa (which holds a minority share of its capital) while remaining independent. ‘The majority of Univision’s programmes, and all of its prime-time ones, are rebroadcast telenovelas from Televisa,’ adds Medina.
So as not to risk missing out on the digital revolution, Univision has just launched UVideos. The platform offers online videos and musical content as well as web series, which is to say original mini-telenovelas. On the financial front, the aim is to meet the demands of advertisers, who are keen on online video advertising (a market estimated at around $3 billion in the United States in 2012, and forecast by some to grow more than 40 per cent, to over $4 billion, by 2014). On the content front, the aim is for Univision to promote its telenovelas while starting to produce web content. According to a Nielsen study, Hispanics in the United States have strongly rising purchasing power, and 72 per cent of them now own a smartphone; they are also great consumers of video games and they watch more videos online, on average one and a half times more than non-Hispanics. Univision cannot stay out of this market.
Had it been enough simply to put the telenovelas online, the transition would have been easy for the leading Latino channel. But in investing online, Univision ran into two problems, which highlighted its contradictions. The first is demographic. By focusing essentially on the production brought to it by Televisa in Mexico, Univision continued to seduce a Mexican-American audience, but began to lose the other Latin American minorities (starting with the Puerto Ricans and Cubans). On the internet, where Hispanic content is prolific, Univision thus lost its monopoly. And Telemundo, its principal competitor on American soil, a channel belonging to the NBC-Universal giant, filled the gap by increasing its digital offerings. A web channel, mun2.tv, was launched, along with a specialized site dedicated to young Spanish-speaking gamers.
The second problem is cultural. Hispanics, in particular the new generations born on American soil, increasingly look for programmes that speak of their situation in their own country, the United States. They want telenovelas that deal with anti-Hispanic racism, diabetes and obesity, exorbitant university registration fees, illegal immigration, or the tensions between immigrant Hispanics and those born in the United States. They want songs and music in Spanish, but by American bands. More and more often, the slang is different; the socio-professional references differ; so do cuisine, lifestyle and culture. Gradually, the young Hispanics of the United States have come to follow television series and talk shows in English, and Univision is obliged to subtitle some of its programmes in American English. ‘As the Hispanics integrate into the United States, the use of English tends to increase in television and cable, as well as in internet practices,’ says Bert Medina.
All these questions have been at the heart of the tensions surrounding the site UVideos, which exists in English as well as in Spanish. They also explain the launch, in late 2013, of a new cable channel called Fusion, a joint venture of Univision and ABC/Disney, which aims to address the new Latino generations in English. Its companion site (fusion.net) likewise speaks to them of their culture, their stars and the news relevant to them. The other American internet giants have also launched sites for Hispanics living in the United States: Netflix has a Spanish version; Hulu launched Hulu Latino; Columbia Broadcasting System (CBS) Interactive created a Spanish version of its news and consumer site CNET (and is working on a Spanish version of its video game site, GameSpot). These examples show that language remains important on the net and that geography continues to play a role. Content industries do not simply become globalized; they remain anchored to a territory and a language. The Latin American example even shows that the two criteria, space and language, instead of reducing the differences, amplify them. The internet does not abolish traditional geographical and linguistic boundaries; it lets them thrive.
However complex the stakes of this battle appear from the United States, they seem simpler from Mexico. There, the stakes are demographic and cultural; here, they are cultural and linguistic. Consuelo Saizar Guerrero, former minister for culture of Mexico, who edited some of the great works of writers like Carlos Fuentes, Octavio Paz and Jorge Luis Borges, does not believe in a common battle for the Spanish language. When I meet her in Mexico, she tells me that she wants to defend the ‘Mexican language’, with its slang and singularities, rather than Spanish in general. She fears the linguistic and cultural contamination of the Hispanic population living in the United States. She also recalls that 10 per cent of Mexicans speak not Spanish but one of the fifty main indigenous languages of the country, such as Mayan, or one of hundreds of secondary languages. Dressed in a black suit and a white shirt, with medium-length curly hair and a round face, Consuelo Saizar wears a hard hat (we are visiting a historical monument being restored in Mexico), never puts down her smartphone, and tweets without respite (she has sent 23,000 tweets to her 50,000 followers). In front of me, she takes a pen and draws two big circles on a scrap of paper: ‘Here is Gutenberg; and here is Apple. These are the two key dates in the history of the book.’ Satisfied with the effect she has created, she adds, after a moment of silence, that she is thinking of the battles to be fought to save books, publishing and the beautiful Spanish language, all equally threatened.
Even more explicit is Mony de Swaan, the director of Cofetel, the body regulating telecommunications and the internet in Mexico: ‘Spanish is not in danger. Quite the opposite. Except maybe because of the Televisa channel, which makes our language so poor! It is not the internet that destroys language, it is mass television.’ (De Swaan has long fought against the Mexican television duopoly, of which Televisa is one of the dominant actors.)
One can see that the tension pits not only English against other languages, but also the other languages among themselves. The internet is redrawing all these linguistic maps.
Cyrillic internet
At the headquarters of ЯНДЕКС (Yandex in the Latin alphabet), a brand-new building on Tolstoy Street in Moscow, I rediscover a deliberately maintained Californian ambience. The colour green is everywhere, Wi-Fi is de rigueur and one can help oneself, here and there, to free fruit from salad bowls, just like at Facebook. Two thousand people (out of a total workforce of 4,000) work here at the Russian search engine, an equivalent of Google that holds 60 per cent of the market. The employees can play sports on campus, play darts, and even freely cover the walls with graffiti, as though they were American gangsta rappers. And indeed, I see famous quotes and countercultural slogans written everywhere, in English or in Cyrillic letters.
‘Language is essential on the internet. The future of the Web is a question of language,’ explains Elena Kolmanovskaya, one of the founders of Yandex, and a woman, which is rare in the digital industry in general and in Russia even more so. She continues, ‘From Russia, we succeeded in becoming popular in most of the former USSR countries thanks to Cyrillic.’
The Cyrillic alphabet is thus one of the driving forces behind the development of Yandex. There is another explanation too, according to Kolmanovskaya: ‘The relevance of the search results is greater with an algorithm made for Cyrillic than with a search engine like Google, which is American.’
Yandex is but one example of ‘Runet’. The term, very frequently used in Russia, stands for the entire Russian-speaking Web, essentially the Cyrillic internet. Most of the sites concerned have a ‘.ru’ suffix, from which the term derives, but some use ‘.com’ (for example VK.com, the equivalent of Facebook). Two other suffixes are also popular in Russia, ‘.da’ and ‘.net’, as plays on local words (da means yes and net means no in Russian).
When one analyses the diffusion of content in Cyrillic from one country to another, however, one discovers that linguistic frontiers make the situation complex. In the former Soviet republics and, beyond them, in Central Asia, Russian influence ebbs and flows in paradoxical ways. Belarus and Kazakhstan follow the post-Soviet but neo-authoritarian model designed by Russia. They are even considering creating a common currency based on the rouble and removing border controls. A filtered internet is already in use there. In Georgia, on the contrary, where Russia is detested, the Web is full of other influences, and people try to resist the invasion of Moscow, its will to impose standard Russian, and its digital censorship. As for Ukraine, its post-revolutionary situation has not yet stabilized and its Web may oscillate between Russia and Europe. In Turkmenistan, Uzbekistan, Tajikistan, Azerbaijan and Armenia, by contrast, Russia is a model to be followed, among others. Here, proximity to Turkey and Iran comes into play, and Islam brings in other influences; except in Turkmenistan, where dictator Berdimuhamedow wields his powers and the model is one of pure and simple censorship, on the Chinese pattern. The ‘Cyrillic world’ is thus far from being united on the internet by its shared alphabet. And in Central Asia one can see a war of influence between Russia, Iran and Turkey; towards Mongolia, the game opens up to China, which also takes part in this regional battle. In Istanbul and in Tehran, in Moscow and in Beijing, the digital players described to me their strategies of conquest: a battle for soft power that covers TV serials, music and news channels, and now focuses particularly on the internet.
This has been very well understood by Yandex.ru, which today is trying to establish itself outside its natural linguistic zone, especially in Turkey. With lukewarm success. I visit the Turkish floor of Yandex in Moscow and am surprised to find a portrait of Ataturk and a Turkish flag, as though I were in Ankara. ‘We chose a country within European borders, a country where Google is the leader but where there is not yet a serious competitor. The language is easier for us to understand than many others. But of course, in Turkey, Yandex chose to be completely Turkish. The technology is global, but the sites are local. We are not claiming to be global, like Google; we want to be local in Russia, local in Turkey and local elsewhere. We want to be trans-local,’ analyses Elena Kolmanovskaya. She says nothing of the evident strategic bet, in terms of soft power, that establishing itself in Turkey, a great Asian market, represents for Russia.
At the very end of our conversation, as Kolmanovskaya rises as if to close it, she abruptly sits down again, because she wants to tell me an anecdote. She recalls a European conference in Paris, ‘a high-level one’, where ‘everybody talked of Facebook applications and promotion tools on Twitter.’ This visibly irritated her. ‘We, in Russia, don’t think of the internet this way. We don’t take American social media as the example and try to position ourselves in relation to them. We want to create our own Facebook and our own Twitter. What is more, we have done it.’
.quebec
French speakers are no different from Russian speakers or Spanish speakers; they too want their sites in their own language, that is, in French. That, at least, is what the writer Jean-François Lisée keeps telling me when I meet him in Montreal. He keeps a daily blog in which he defends the French language and voices concern over the rising power of English on the internet.
We are invited together for a debate, in French, on one of the main programmes of Radio Canada in Montreal, and I point out to him, a sad observation certainly, but an observation nevertheless, that one of the strengths of the English language is to appear ‘cool’ in the eyes of young French people. His blood boils! He launches into a violent tirade against the Americanization of culture and the dominance of English on the internet. In an act of provocation, I add that ‘speaking English with a strong French or Italian or Spanish accent is a certain way of being European,’ and that makes him even angrier! But he understands that I am joking, because he speaks an unaccented English, unlike me.
We review the state of Francophonie, from Morocco to Romania, in Argentina and in Maine in the United States, places which used to be francophone but where the French language is now receding. Off the air, as we leave Radio Canada and he no longer speaks like a spokesperson, I point out to him that in Lebanon, for example, a traditionally French-speaking country, the radio stations and sites targeting a young audience are no longer in Arabic or in French, but 100 per cent in English: NRJ Lebanon and its site nrjlebanon.com no longer even have an Arabic version! As for finding a site in the language of Molière, one must make do with the site of Radio Nostalgie (nostalgie.com.lb), which is entirely in French, neither in Arabic nor in English. Jean-François Lisée feels the blow, and I find him sincerely saddened. Of course, he is aware of these developments at the world level, but he has no intention of giving up. He even shows a resurgence of energy. He wants to fight. And whereas on air he seemed to me to blame the internet for this Americanization of the world, he now tells me that ‘the web offers a second chance to French and it is important to seize it.’ (Since our conversation, Lisée has entered the Quebec government as minister for international relations, Francophonie and foreign trade.)
In Montreal, the linguistic border that surreptitiously divides the city is mirrored by a digital frontier. The internet is loved in Canada, and most cafés and restaurants, as well as many shops, display the names of their websites in their windows. Almost everywhere, there is free Wi-Fi. But whereas shopkeepers’ websites are still ‘.qc.ca’ in the east, I observe that, as one crosses Saint Laurent Boulevard, they tend to become ‘.ca’ in the west. On one side of the famous avenue, the implicit line of separation between francophones and anglophones, one is proud to be a Quebecker; on the other, one identifies as Canadian and speaks English. ‘The “.qc” is identity,’ Jean-François Lisée tells me.
Faced with the domination of the language of Shakespeare and Leonard Cohen on the internet, the francophones have rallied. The point was simultaneously to reinforce the supply of digital content in French and to fight for symbols. Among these, top-level domain names have great importance. The Quebeckers campaigned for the creation of the suffix ‘.qc’, and not only ‘.qc.ca’: they rued the fact that ‘.qc’ could not stand alone without the ‘.ca’!
This is not the opinion of the entity in charge of domain names in Canada, the Canadian Internet Registration Authority (CIRA, or ACEI in French). This non-profit agency made ‘.ca’ its priority and argues on its site, in French no less, that it is ‘the only domain name extension exclusively reserved for Canadians’ and that it ‘acts as the Canadian standard bearer’. The Quebeckers will appreciate that. But the CIRA went a step further: in 2010 it prohibited the double extensions previously authorized for each province. The reasons cited: simplification, and better referencing by search engines. Thus it is no longer possible to create a site under ‘.qc.ca’ (Quebec), for example, or ‘.on.ca’ (Ontario). This detail did not escape the Quebec parliament, which protested against the unilateral decision, without success.
A battle ensued, especially once the announced opening of new generic extensions for proper names and names of cities gave hope to the Quebec government. Until 2013, there were indeed only twenty-two generic top-level extensions (‘.com’, ‘.edu’, ‘.org’), alongside some 240 national extensions (‘.uk’, ‘.de’, ‘.it’). ICANN, the American association that deals with domain names, nevertheless launched a vast consultation around 2005, which led to a call for bids to open a thousand new extensions. As Google was going to have its ‘.google’, music its ‘.music’, the Parisians their ‘.paris’ and hotels their ‘.hotel’, the Quebeckers naturally wanted to register an exclusive ‘.quebec’ extension. A deft, though hardly discreet, way to get around the Canadian decision to allow only ‘.ca’.
‘We are no longer fighting for “.qc”, nor for “.qc.ca”, because now we want the whole “.quebec”!’ says Philippe Cipriani, a journalist at Télé Québec and Radio Canada, whom I meet in Montreal. Officially, the government is not in the front line. The application was made in 2008 by an association called, naturally, PointQuebec (pointquebec.org), which intends to promote ‘a new Quebecker identity on the internet’. But the required funds of $185,000 had to be raised to be able to file an application. The National Assembly of Quebec therefore supported the project, first by passing a resolution in its favour, then by lobbying ICANN, and finally by granting PointQuebec a repayable loan of 2.4 million Canadian dollars. ‘A proper address for Quebec will give it the possibility of displaying its identity on the internet. On a cultural and touristic level, such an address would contribute to the prestige of Quebec on the internet. It will also be easier for Quebecker netizens to distinguish Quebecker companies from French companies in search engine results,’ said a government press release. But the purists were not completely satisfied. Was one going to write this new generic domain name on one’s ‘ordiphone’ (Quebecker for smartphone) as ‘.quebec’ or, with an accented ‘é’, as ‘.québec’? Choosing the former would distort the French language; choosing the latter would marginalize non-francophone Quebeckers! The matter was soon settled: both domain names would be claimed from ICANN! And that is what happened.
In parallel, and to defuse the situation, the CIRA today plays the seduction card to persuade the francophones to use ‘.ca’. It now even allows, having long prevented them, accented letters, so characteristic of French and absent in English, in ‘.ca’ domain names: ‘é’, ‘è’, ‘ù’, ‘à’, ‘ç’, ‘ë’ and even ‘œ’. The CIRA goes so far as to promote this development on its site: ‘Accents change everything. Put an accent on your company with a “.ca” in true French.’ In fact, say the experts, the number of sites registered under ‘.qc.ca’ was very low before its ban, and many Quebeckers averse to ‘.ca’ have long opted for ‘.com’, ‘.net’ or even ‘.fr’ instead. Will they adopt ‘.quebec’? That is the question.
It remains to be seen how search engines and other access providers will behave when faced with accented characters. The CIRA seems to have found the solution: in accordance with international rules, all versions of a domain name registered under ‘.ca’ with French characters are automatically grouped into a single administrative lot. One cannot, therefore, purchase them separately. Thus, the holder of the domain name ‘cira.ca’ will be the only one able to register all the variants, such as ‘cirà.ca’, ‘çira.ca’, ‘cîra.ca’, etc. For international search engines, there will only be one ‘cira’, without accents. But everyone is free to write the address of the site as they wish in the Uniform Resource Locator (URL) and promote it as they please. And as the Belgians say with humour, ‘it will not give us back the Congo’ (the name of a famous Belgian radio and television programme), but it keeps the fight for Francophonie alive.
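The grouping rule can be pictured in a few lines of standard-library Python: decompose each label and drop the combining accents, and every variant collapses to the same canonical form. This is my illustration of the principle, not CIRA’s actual code:

```python
import unicodedata

def canonical(label: str) -> str:
    """NFD-decompose, then drop nonspacing marks: 'cirà' -> 'cira'."""
    decomposed = unicodedata.normalize("NFD", label)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

for variant in ("cira", "cirà", "çira", "cîra"):
    print(variant, "->", canonical(variant))  # all four map to 'cira'
```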
Behind these seemingly anodyne battles, one can see how important the internet is to the identity of a country, a region or a community, and how central language is. At the end of the day, it is good news: languages are alive and well on the internet.
The use of accents in domain names and email addresses is today a combat fought throughout the world. At stake are not only the accented Latin letters that the Anglo-Saxon-dominated internet threatened to strip of digital existence, among them the Spanish ‘ñ’, tildes, diaereses and cedillas, but also the Russian, Arabic and Hebrew alphabets, Chinese characters, and Japanese kanji and hiragana. The battle is ongoing. Wikipedia, Google and Facebook, the social network with a billion members that exists in sixty-six languages, have set an example: they offer their sites in many languages, and one may think this is one of the reasons for their global success.
Why did it take so long for the internet to speak foreign languages? Beyond the goodwill of the international regulatory authorities under American influence (it is essentially ICANN, the Californian agency, that deals with these subjects), the generalization of languages on the internet runs into real technical difficulties. For example, it took a new generation of browsers to be able to read accented characters in domain names (Internet Explorer from version 7.0, Firefox from version 2.0, Safari from version 1.2, etc.). Future evolutions of the internet should gradually remove these limitations. ‘Technically, it is complex, it is a question of international protocol, but we are working on it. For now, we can have accented or special characters on the left side of the domain name, but it is harder to touch a top-level suffix,’ comments Jamie Hedlund, special counsellor to the president of ICANN, whom I interview in Washington (the top-level domain, or TLD, is the part placed to the right of the final dot). On the whole, one will be able to write ‘école.edu’, but not ‘école.édu’. There are, however, some exceptions to this for the TLDs of certain countries.
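Behind the support for accented characters ‘on the left side of the domain name’ lies the Punycode encoding of RFC 3492: browsers convert an accented label into an ASCII form prefixed ‘xn--’ before querying the DNS. Python’s standard library exposes this encoding, so the ‘école.edu’ example can be checked directly; a minimal sketch:

```python
# The accented label is converted to its ASCII-Compatible Encoding (ACE);
# the unaccented TLD needs no conversion.
label = "école"
ace = label.encode("idna")
print(ace)                 # b'xn--cole-9oa' -- what the DNS actually sees
print(ace.decode("idna"))  # 'école' -- the round trip back for display
```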
For the Arabic alphabet, things are more complicated still. One must not only create the letters but also allow them to be read from right to left: Egypt was nevertheless one of the first countries to obtain a national domain name, ‘مصر’ (literally ‘Egypt’, written from right to left). These new domain names strongly localize the internet.
For Chinese, the task is even more difficult, for another reason: the sheer number of characters, not to mention the political problems of choosing between pinyin (the phonetic transcription of Chinese in Latin characters), simplified ideograms (mainland China) and complex traditional ideograms (Taiwan, Hong Kong, Macau, with Cantonese rather than Mandarin spoken in the last two). ‘In the 1990s, when the digital revolution began, we were afraid of having to Latinize Chinese characters to browse the Web. But very soon we were able to solve this problem. We developed an intuitive piece of software that allows users to input Chinese characters, and it is used by 800 million people today,’ explains Wihui Wang, the spokesperson for Sohu, an important Chinese web portal and search engine, in Beijing.
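The principle behind such input software can be shown with a toy example: map each Latin-alphabet pinyin syllable to a frequency-ranked list of candidate characters and let the typist pick. This is a deliberate simplification of mine; real input methods such as Sohu’s also handle whole phrases and adapt to usage:

```python
# Toy pinyin input method: syllable -> candidate characters, most common first.
CANDIDATES = {
    "ni": ["你", "尼", "泥"],
    "hao": ["好", "号", "毫"],
}

def suggest(syllable: str) -> list[str]:
    """Return ranked candidate characters for a typed pinyin syllable."""
    return CANDIDATES.get(syllable, [])

print(suggest("ni"))   # ['你', '尼', '泥'] -- the user picks 你
print(suggest("hao"))  # ['好', '号', '毫'] -- then 好: typing 'nihao' yields 你好
```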
Beyond these difficulties, which are no longer insurmountable, this combat has seemed vital to many an internet player, essential to their identity, their culture and of course their language. In the course of this survey, I was able to observe in the field, in Chinese, Russian, South Korean, African, Iranian and Arab cybercafés, how difficult it was for a person who does not know the Latin alphabet to type in a web address. Small pieces of software have been designed over the years to allow these customers to enter the characters of their own language and transform them automatically into a web address, yet this remained makeshift and called for change on a larger scale. The slogan of Quillpad, an automatic converter to Hindi which I saw in use in New Delhi, says it simply: ‘Because English is not enough.’
Today, the internationalization of the internet is under way, and every year new domain names in non-Latin characters are approved. ‘The question of URLs in non-Latin characters is in the process of being solved,’ confirms, in Geneva, Hamadoun Touré, the secretary general of the International Telecommunication Union (ITU), the United Nations agency in charge of telecommunications. In time, the internet will speak all languages, and URLs will be available in the principal alphabets.
“To Gengo”
In Silicon Valley, this revolution has been anticipated. For a long time, the internet gurus and giants believed they could distribute their Americanized, dematerialized content to the whole world. They now know that digital does not work like that. They must take cultural diversity into account, and first of all linguistic diversity. They must adapt to local contexts.
A few blocks from the headquarters of Google, in Mountain View, at the heart of Silicon Valley, the Khan Academy has its superb open-plan premises. Are these offices rented out by Google? They assure me that it is so. In any case, YouTube (which belongs to Google) can congratulate itself on the success of this not-for-profit initiative, which generates millions of views on its platform every month. ‘Google finances us, their CEO is a member of our board of directors, Google also houses us, and we broadcast our content exclusively on YouTube,’ concedes Minli Virdone, the Khan Academy’s director of content and strategy. On a screen at the entrance of the headquarters of this very singular ‘academy’, I see the number of exercises performed and micro-lectures consulted: around five million. The number increases every second. And the counter is reset to zero every morning.
Salman Khan, an American from New Orleans whose father is from Bangladesh and mother from India, and who holds multiple degrees from MIT and Harvard Business School, launched the Khan Academy to give all the children of the world access to education. He created a programme of short, didactic, free videos on YouTube; more than 4,500 are already freely accessible. They help students solve an algebra problem, understand fractions or learn the Pythagorean theorem. Automatic practical exercises complement each lesson. ‘We are campaigning for free education and have only one objective: solve problems,’ explains Minli Virdone. She uses, as is often done in Silicon Valley, the digital entrepreneurs’ miracle catchphrase: ‘problem solver’. Enumerating the characteristics of the project, she insists: ‘Education is a human right. It should be free. We are a not-for-profit technological start-up, we have no advertising, our videos are under CC so they can be freely shared, and we don’t sell our students’ data.’
Charlotte Koeniger, the spokesperson for the Khan Academy, shows me around the premises. Fifty developers and content managers are busy in a friendly atmosphere. Some of them have made cakes, which are piled on the big table in the common kitchen; today, like every Thursday, a games party is planned in the offices, a Silicon Valley tradition.
‘Our success comes from personalization. Each student is different and our videos must be adapted to this singularity. We must bring the right content, at the right time, to the right student. We are able to do this thanks to our algorithms and because our videos are accessible in more than thirty different languages. Each student reaches the site through a customized welcome page. We make recommendations reserved for him alone,’ says Minli Virdone happily, pausing to take a sweeping look around the large open space before us. She adds, ‘Thanks to technology, we are able to address each student individually. There are as many distinct conversations as there are students.’ In every country where it develops, the Khan Academy signs a partnership with a local institution or association in order to localize and translate its content.
A similar evolution is taking place in the developing sector of MOOCs. These Massive Open Online Courses are genuine university courses delivered online by universities. Born in the United States, the sector is led by four American players: EdX (a non-profit consortium piloted by Harvard and MIT), Open Yale (launched by Yale University), Coursera (a for-profit Californian company backed by a hundred universities) and Udacity (also for-profit, established in San Francisco).
Though MOOCs are still looking for their business model, they could, in time, entirely transform the university system. That, in any case, is what Aron Rodrigue anticipates, a history professor at Stanford who observes the changes in higher education from this privileged spot in Silicon Valley. Casually dressed in a V-necked pullover, Rodrigue long headed Stanford’s history department and then its Humanities Center, and he watches with concern the growing isolation of the humanities in the face of the sciences on this campus, as well as the passion for educational technologies. ‘Today, at Stanford, a professor cannot do without Facebook. He communicates with his students through social media. As for the MOOCs, they have already made the model evolve with the “flipped classroom”: the lecture takes place online, since the students don’t follow the teacher in the classroom anyway, where they spend their time on social networks. So they view the lessons by themselves online. Classes “in real life”, on the contrary, are reserved for discussion and exchange. The only way to make students pay attention to a lesson is dialogue. At Stanford, lecture classes will soon disappear.’
The two main MOOC platforms, Coursera and Udacity, were both founded by Stanford professors. After viral videos, is the internet bringing in viral classes? Maybe, but for the universities the model remains problematic. ‘To whom do the online classes belong? Is it a course or a lecture? Is it the property of the university where it was made? Of the professor who gave it? Or of the platforms that distribute it? This raises an infinite number of questions, especially when the teachers are external contractors,’ worries Bruce Vincent, Stanford’s chief technology officer (CTO). He also confirms that a lot of money is at stake in the current copyright negotiations over the MOOCs. Another problem has not escaped American university administrators: how to continue charging students high tuition fees if all the lessons are online for free? The entire ecosystem of American higher education is in danger. ‘The MOOCs are also a symptom of the problems of American universities and their exorbitant cost for most families,’ underlines Chris Saden, who works for Udacity, one of the main companies specialized in the distribution of MOOCs, and whom I interview in San Francisco.
The risks are real, and so are the opportunities. Some analysts think that MOOCs will prove truly decisive in adult education and continuous training. Others think they will help the United States assert its superiority in higher education internationally by distributing its content everywhere. ‘It is with the MOOCs that the American universities will really establish their global leadership,’ predicts William Miller, professor emeritus at Stanford, interviewed in Silicon Valley. The State Department has made no mistake about it: it has just signed an agreement with Coursera to distribute hundreds of MOOCs in forty countries. On the other hand, an in-depth study of a million MOOC users shows that fewer than half of those who signed up watched even the first course, and only 4 per cent finished the programme.
But in every case, there is the question of language. ‘MOOCs will only have a future if they can adapt to students, their culture and their language. It is language that will make all the difference,’ prognosticates Ken Yamada.
It is a Starbucks café like thousands of others around the world, urban and uniform. The Wi-Fi is free, as it is everywhere in San Francisco; the coffee is mediocre and the pastries excessively calorific. Yet Ken Yamada likes this place. He loves lattes. And he smiles at the customers packed together around the power outlets, recharging their phones. This Japanese-American, who grew up in Los Angeles and Singapore, managed Gap’s website in Tokyo. Today, he is the spokesperson for Gengo.
The start-up was founded in Japan in 2008 but now has its main offices in Silicon Valley, in San Mateo. Ken Yamada, who does not much like losing time in traffic, prefers to hold his meetings in this Starbucks near Market Street in San Francisco. ‘I am the Asian wearing a blue cardigan,’ he wrote in an SMS so that I would recognize him. And indeed, here he is. ‘One must be in San Francisco if one wants to develop on the internet and exist internationally. But Japan is my country. I miss Japan.’ Yamada has just married an American woman from Pittsburgh, with whom he has a child he is eager to bring up in Tokyo. ‘I speak to him in Japanese at home,’ he adds, as if to reassure himself.
Gengo’s speciality is precisely translation. Instead of using an algorithm such as Google Translate, with its merely approximate results, the start-up offers online translations done by an army of semi-professionals. Already, more than ten thousand translators collaborate every month on this rapidly developing site. ‘A good translation cannot be done by an algorithm; it must be done by a human being. So we connect the people who need translations with the translators,’ Ken Yamada explains simply. The head of Google, Eric Schmidt, recently gave me to understand that the problem of translation will gradually be solved by algorithms, but the Gengo team does not share this view. Nor do the venture capitalists of Silicon Valley: the start-up has just successfully raised ten million dollars.
For the moment available only in English, Japanese and Thai, Gengo intends to multiply the number of languages as it grows. ‘In order to grow, we need to translate more languages. We are going to do it.’ The translators are recruited online and no certificate is required: candidates take a brief test and are then rated by clients according to the quality of their work. A basic translation costs 6 cents a word ($0.06), a commercial translation 12 cents, and a translation checked by a professional 15 cents. ‘Our model is to make a complex problem simple,’ adds Yamada. He too uses the catchphrases often employed in Silicon Valley.
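The per-word model is simple enough to state in a few lines of Python (rates as quoted above; the tier names and the naive word count are my own simplifications, not Gengo’s actual API):

```python
RATES_PER_WORD = {"basic": 0.06, "commercial": 0.12, "professional": 0.15}

def quote(text: str, tier: str = "basic") -> float:
    """Price a job: whitespace word count times the per-word rate."""
    return len(text.split()) * RATES_PER_WORD[tier]

print(quote("Because English is not enough", "professional"))  # 5 words -> 0.75
```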
In San Francisco, Gengo is the hot start-up of the moment. Everyone is talking about it. The idea of a network of translators is original, but there is another reason for this persistent buzz: an original business model. Without having anticipated it (another Silicon Valley trend, where one first creates a start-up around a good idea and only then looks for the business model), Gengo fulfils a crucial need of e-commerce sites. Amazon, Expedia, TripAdvisor, the commercial channels of YouTube and the Japanese Rakuten all struggled, before Gengo, to get their listings and their users’ best comments translated. Now they turn to its services. Translating adverts on Facebook and Twitter, to increase their global reach, could be the next step.
Yamada is almost done with his latte. The Starbucks has emptied. He checks his smartphone for the time of his next appointment. His time is limited; he apologizes for it, in the Japanese way. And just as he leaves me, he says with a huge smile, ‘You know what? It is even becoming a verb! They say: “To gengo”. Here, in San Francisco, they are starting to use “gengo” instead of the word “translate”.’
The European Digital Mosaic
It is a nondescript building named ‘Park Station’. It could hardly be more drab. It sits in a business park in Diegem, in the suburb of Machelen, a Flemish town north of Brussels. Between the motorways and the international airport, one can make out the neighbouring buildings of Cisco, Microsoft and Sanofi. To get into Park Station, one needs several codes and badges worn around the neck, and one must not choose the wrong lift: they do not all serve the same floors.
At the entrance to the offices of the EURid association, I see a sign announcing: ‘.EU: Your European Identity’. If the European identity is anything like these soulless buildings, Europe has much to worry about. ‘Here, we manage the “.eu”, that’s what we do,’ says Marc Van Wesemael, the head of this not-for-profit association, which deals with European domain names.
Since its creation in 2005, EURid has been under the administrative supervision of the European Commission. After a call for tenders, it obtained a renewable ten-year concession to allocate ‘.eu’ addresses. ‘At first, the “.eu” was reserved for companies, brands and governments. We simply validated applications. In 2006, access was opened to everyone, including individuals, on condition that they are residents of Europe,’ Van Wesemael points out. Today, there are 3.7 million ‘.eu’ domain names, which is still a small number compared to ‘.com’ (110 million) or to national domain names such as ‘.de’ for Germany or ‘.fr’ for France. More serious still: this number has been stagnant since 2012. ‘We had 10 to 14 per cent growth per year, and now we have fallen to 2 or 3 per cent. We have reached our limit,’ admits Van Wesemael.
Judging by his name and his accent, my host seems to me to be a French-speaking Fleming. ‘No, I am a Dutch-speaking Belgian,’ he corrects me with a smile, in impeccable French. ‘I don’t feel Flemish. Politically, I am not for partition. I am Belgian and proud of it.’ EURid’s staff work in the twenty-three official languages of the European Union, and the site is available in as many languages.
Is the ‘.eu’ a fair summary of the European identity? A symptom of its fragility? Of its liberal economic vision? In any case, the suffix ‘.eu’ is classed as a ‘country code’, which is already a paradox. It is reserved for members of the European Union but, another paradox, open to Norway and Iceland (though not to Switzerland). More significantly, it has been adopted predominantly by companies: 65 per cent, against 35 per cent by individuals. ‘The “.com”, in comparison, has no identity. Someone who wants to escape the anonymity of “.com”, who hopes to stand out from the crowd, can choose “.eu” to cross borders while keeping his European identity. But it is true that Europe is still young,’ concedes Van Wesemael. His arguments are strong and his passion for Europe seems sincere, but the suffix ‘.eu’ has not yet won the Europeans over. It may well help in ‘crossing borders’, but it does not seem to embody any territory. To the point where the Germans generally prefer ‘.de’ (fifteen million registered domain names), the English ‘.uk’ (ten million), the Dutch ‘.nl’ (five million), and the French and the Italians ‘.fr’ and ‘.it’ respectively (2.6 million each). Does this mean that Europeans identify first with the country where they live, before identifying themselves as Europeans? Possibly. And though the Germans are the top users of ‘.eu’, followed by the English, for the other Europeans it is often a complementary address, bought to protect a brand and redirect visitors towards a main address, rarely used in its own right. More worrying still: the fragility of the ‘.eu’ also reflects the rarity of European ‘conversations’.
The relative lack of interest aroused by the suffix ‘.eu’ highlights all the weaknesses of Europe: a beautiful idea that struggles to materialize; a territory united neither by language nor even by a common culture; the major differences between north and south, between western and eastern Europe, between small countries and big ones, between founding members and newer member states; an American culture which tends to become the common culture of Europeans. On top of this, Europe has yet to build an effective digital policy. It has tried to regulate the transfer of data and to protect the privacy of its residents, but all this remains largely a declaration of intent. It has opened investigations into abuses of dominance by Microsoft (successfully) and by Google and Apple (less successfully). Europe now wants to regulate the ‘Cloud’ and put an end to roaming charges for mobile phones on European territory, a measure very late in coming. Even on the unification of telephone chargers, Europe did not manage to impose its views on Apple for the iPhone!
In the field of telecommunications, where there are more sizeable national players than there are countries, and no international giant, Europe is no longer innovative and its growth prospects are poor. In the search-engine sector, Google holds a market share of about 86 per cent, against some twenty-six competitors. As for the big websites, they struggle to remain European: some have merged with American companies (Meetic, Skype, Nokia), others have been partly acquired by Russian companies (Deezer, up to 30 per cent) or by Asian ones (PriceMinister, Alpha Direct Services [ADS], Supercell, Play.com). Will the French Dailymotion and the Swedish Spotify survive? The European digital policy is fragile; web content policies even more so. If one asks the question, ‘How many divisions does the European digital have?’, the answer brooks no appeal.
In March 2014, during a European meeting in Berlin, I was able to put precisely this question to the German Chancellor, Angela Merkel. Here is her response: ‘You are raising a crucial point. It is important to look reality in the face. And the situation is as you have described it, maybe even worse. We no longer produce routers, very few electronic components; we have very few software publishers and only a few small companies remain in the field of security. And the risk is that they may be acquired by the internet giants. We will not take an active part in the definition of standards if we are not at the cutting edge of technology. We will stand no chance. The Germans, for example, are very proud of their automobile industry, but what will become of it without the software that has become so crucial to the manufacturing of cars? If we do not show great determination, we can keep talking about growth and prosperity, but in future the wealth will be reserved for the regions at the forefront. I do not want to be too negative, but we must demonstrate great determination and be very, very concrete.’
The reason: stunted European institutions and a fragile political will. One of the principal defenders of cinema in Brussels, Yvon Thiec, the general delegate of EuroCinema, summarizes the problem: ‘The European Parliament, which isn’t a real parliament, increasingly resembles the UN: it advances slowly, it’s not very technical, very generalist, and majorities are difficult to find. The Commission, which is the only one to have ideas but is not a true government, increasingly resembles Washington: judges, technical prowess, counter-powers, lobbies. As for the Council, like the American Senate, it is the one that decides, and the decision at the end of the day is made by the member states.’ The results are plain to see. And for every Viviane Reding, a voluntarist on data protection, how many cautious, not to say pliant, European commissioners are there, like Joaquín Almunia or Karel De Gucht, weighed down by the pressures of competition and commerce?
Google, at this stage, has hardly been troubled, whatever the European Union’s intentions. Fiscal evasion? Solutions are at a standstill. Distortion of competition and manipulation of search results? Google has bought time while European companies suffered irremediable losses. Violation of privacy? Right to be forgotten? Management of data transfers to the United States? Helplessness reigns. (It is estimated, however, that the personal data of Europeans represented in 2012 a treasure of some $315 billion.)
Early in 2014, the European Parliament finally passed, by a large majority, a law on data protection. But its adoption and implementation will depend on the goodwill of the next Commission and the future Parliament. The lobbies are already busy; opposition is building. Do data belong outright to individuals, or are they collective public assets? Is their use by the internet giants a legitimate counterpart to the free services offered? And although nobody has seriously envisaged the ‘relocation’ of data in each country, some think that it must at least be envisaged in Europe, for its twenty-eight member states. ‘Rather than relocating data to Europe, an effective strategy would be to create a new undersea fibre-optic cable directly linking Brazil to Europe, without transiting through the United States. It is “cleverer” and more “artful”,’ says Ryan Heath, the spirited spokesperson for Neelie Kroes, the commissioner in charge of the digital.
José Manuel Barroso, president of the Commission (whom I had the opportunity to interview repeatedly between 2012 and 2014, in Brussels, Warsaw and Berlin), insists on the ‘voluntarism’ of the European Commission, on the need to ‘strengthen the identity of Europe’ and to imagine ‘a new narrative for Europe’. For Barroso, Europe must be afraid neither of globalization nor of the digital and, he says, ‘it must participate in world exchanges without retreating into itself and without slipping into anti-globalization postures.’ Insistent, President Barroso denounces ‘populism, xenophobia, these old demons which were dormant and have risen again’; he actively advocates ‘an open idea’ of Europe against ‘European chauvinism’ and hopes to ‘bring down walls and build bridges’. ‘In Europe,’ he says, ‘we have the intellectual capacity, the creativity and the knowledge to be a leader in the technological sector. We must therefore ask ourselves, since we have the talented people, many of whom leave for the States: why are we like this? Why aren’t we the leader in the digital sector, when we are in other fields?’ And, Barroso adds, more as a diagnosis than as a proposal, ‘If Europe loses its technological edge and its capacity to innovate, it will have an effect on all sectors of the economy, on all industries, not only the digital sector.’ Duly noted, but how can Europe fight an equal battle with the American internet giants? How can it guarantee fair rules of the game if it does not regulate American abuses of dominance?
The priorities of the European Commission focus on the implementation of a ‘digital agenda’ centred on very high-speed broadband, the creation of a ‘single digital market’ (by 2015), ‘digital literacy’ and ‘e-diplomacy’. On the other hand, it has resisted the idea of helping producers of digital content and of building the indispensable industrial policy that goes with it. The constitution of European ‘champions’ of the internet did not seem to be its priority. Insidiously, the Commission has even demanded a European ‘harmonization’ of copyright, levelling protection downwards, its critics claimed, to the advantage of the American copyright model, without the ‘moral right’ of the artists and with much less protection for creators. ‘If we want to help Spotify, Deezer or Dailymotion, we must do away with the twenty-eight specific copyright licences that have to be negotiated in twenty-eight countries,’ qualifies Ryan Heath, whom I speak to in Brussels. Likewise, the Commission advocates a vast ‘modernization’ of the creative industries, aiming in the medium term at limiting national cultural quotas, eliminating private-copying levies, reducing aid for cinema and other such ‘archaisms’. ‘More harmonization cannot hurt; some cultural exceptions can be updated,’ comments, for her part, Lorena Boix Alonso, who heads the unit for media and content convergence at the European Commission. Listening to President Barroso and his commissioners, I got the impression that, for them, the very word ‘regulation’ had become a dirty word.
A new Commission, presided over by the Luxembourger Jean-Claude Juncker, has been installed for five years (2014–19). Three European commissioners handle the digital dossiers: the Estonian Andrus Ansip, vice-president in charge of the digital single market; the German Günther Oettinger, in charge of the digital economy and society; and the Danish Margrethe Vestager, in charge of competition. The priority, already proclaimed by the previous Commission, is to create a European ‘digital single market’, a project of sixteen initiatives weighted heavily towards supporting the digital economy, modernizing the telecommunications sector and copyright, and protecting the personal data and online privacy of European citizens.
Fiscal harmonization is also part of it, although it is paradoxical that it should be carried out by Jean-Claude Juncker: the former prime minister of Luxembourg, at the head of a state where VAT rates are low and where the internet giants have their headquarters, untiringly promoted what he denounces today. Luxembourg and Ireland have become, in recent years, the entry points in Europe for deregulated culture. Could this change? It is anecdotal, and certainly a coincidence, but the Permanent Representation of Ireland in Brussels occupies the premises adjoining the European headquarters of Google…
Apple, Facebook, Google, Microsoft and Twitter have established their headquarters in Ireland, where corporate tax on profits is only 12.5 per cent, while Amazon and eBay have chosen Luxembourg, where it stands at 21.8 per cent (compared to 34.3 per cent in France, for example). A first step against this fiscal optimization was the harmonization of value-added tax, in force since January 2015, under which VAT applies in the country of purchase and no longer in the country where the e-commerce company has its seat. Will this suffice? Progress on the subject is slow in coming, as it requires a unanimous vote of all twenty-eight members of the European Union.
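In concrete terms, the 2015 rule changes which country’s rate applies to a sale. Here is a minimal sketch in Python, assuming the standard VAT rates then in force (17 per cent in Luxembourg, 20 per cent in France, given purely as examples); the function itself is hypothetical, for illustration only, not a tax reference.

# Destination-based VAT, as applied since January 2015 (illustrative).
# The rates are assumptions for the example, not a tax reference.
VAT_RATES = {'LU': 0.17, 'FR': 0.20, 'DE': 0.19, 'IE': 0.23}

def vat_due(price: float, seller: str, buyer: str,
            destination_based: bool = True) -> float:
    """VAT owed on a digital sale, under the old or the new regime."""
    country = buyer if destination_based else seller
    return round(price * VAT_RATES[country], 2)

# A 10-euro e-book sold from Luxembourg to a French customer:
print(vat_due(10.0, 'LU', 'FR', destination_based=False))  # 1.7: old rule
print(vat_due(10.0, 'LU', 'FR', destination_based=True))   # 2.0: since 2015

Under the old rule, the seller’s low Luxembourg rate applied everywhere; under the new one, the buyer’s national rate applies, which removes the incentive to book every European sale through a low-VAT seat.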
The Commission also seems to want to move forward on the regulation of data transfers (a more stringent regulation is envisaged) and on updating the Safe Harbour agreement with the Americans, the scheme under which companies certify that they will protect personal data. Another key project: the right to be forgotten. An option to be ‘dereferenced’ was won after Google’s conviction in the Spanish case on 13 May 2014, although it applies only to European suffixes and not to ‘.com’ sites, proof that the degree of legal protection depends on a certain territorialization of the internet.
As for copyright, the new Commission again calls for harmonization, reviving the lively debate already described: for its detractors, this would mean insidiously privileging the American notion of copyright over the French one, without the ‘moral right’ of the author and with far less protection for creators; for its advocates, it is the price of scale for services such as Spotify, Deezer or Dailymotion, which today must negotiate licences country by country. Will this lead to the emergence of European champions of the creative and digital industries? That is the intention, but will it suffice?
Above all, the new Commission’s priority is to clamp down on the abuses of dominance of the internet giants. In contrast to her procrastinating predecessor, the Spaniard Joaquín Almunia, who had tried to negotiate an amicable agreement with Google, the new commissioner for competition, Margrethe Vestager, has chosen another approach. The former Danish minister for economic and interior affairs peremptorily reopened the investigation against Google by sending the company a statement of objections, that is, a formal indictment. The point of dispute: the search engine allegedly promotes its own products and partners systematically in its search results. If the objection is upheld, Google could face a penalty of six billion euros… The woman known as the Iron Lady of Denmark, said to fear nothing, is thus engaged in a trial of strength with Google. ‘We have no grudge against, nor dispute with, Google,’ Ms Vestager commented in a press interview, adding that ‘(European) consumers trust us to make sure that competition is fair and equitable.’
This major case could herald others: Google may yet be pursued over its anti-competitive advertising techniques or the dominance of its Android operating system on smartphones. Amazon and Apple could be next in line, targeted for other infractions.
The influence of the American internet giants’ lobbies in Brussels also explains, in part, these procrastinations. Google, Apple, Facebook and Amazon have paid a fortune for the best lobbyists, often former European parliamentarians or senior officials well versed in the Brussels techno-structure. They run very efficient persuasion campaigns, wielding carrot and stick: here inviting people to dinner and indirectly financing players of good quality; there leading political contestation or, if required, a legal battle with an army of lawyers on retainer at the biggest Anglo-Saxon business law firms. Many associations with anodyne names, such as the Industry Coalition for Data Protection (ICDP), are indirectly financed by the internet giants and telecom manufacturers. All know how to play skilfully on the twenty-eight national digital policies, a European fragmentation that allows them to impose their agenda and their views. ‘Our members prefer us not to complicate their business,’ stresses Patrice Chazerand of the lobby DigitalEurope. And he adds, conversationally, ‘No taxation, no regulation.’ The American Chamber of Commerce also takes part in this movement, which can count, when needed, on the ambassador of the United States in Brussels. When I interview him, William Kennard confirms in measured words that he is there to ‘explain the American positions, those of his government and of its economic actors’. The simple fact that Barack Obama named Kennard, a lawyer, a famous expert on the internet and telecommunications and a former chairman of the Federal Communications Commission (FCC), as ambassador to Brussels is a sign in itself.
And yet Europe is not a digital dwarf. Paradoxically, it is even a giant of the internet, with a decisive critical mass: with its twenty-eight members and 500 million people, it constitutes the world’s largest economic zone and a crucial market for the Americans. One can thus hypothesize that, in the coming years, the dialogue on digital regulation will intensify and that a strong link will necessarily be forged between the American and European agencies on these questions. ‘We keep each other informed; we don’t negotiate. But we are already talking to the Americans all the time, even every day,’ says the spokesperson for Joaquín Almunia, the commissioner for competition. Only, the Americans will not be able to impose their norms and their digital ‘terms of use’ on the rest of the world; by negotiating with Europe, their inevitable ally, they can slow down the regulatory demands of the emerging countries, starting with those of China.
It is no doubt too late to create a general European search engine to compete with Google; but nothing is settled when it comes to specialized and niche searches (‘vertical’ search engines and topical sites). Europe is also ahead in the audio-visual sector, streaming music and mobile applications, and it is well prepared for the Cloud. It is above all a key player in content: television, music, publishing, video games and, more broadly, the media. In parallel, Europe must nevertheless build its own digital regulatory authority. Step by step, a European internet will emerge.
The European peoples do not make up a single people, even less so on the internet. ‘United in diversity’, according to the official motto, Europe is disunited in diversity on the web. Yet Europeans feel profoundly European and, though they are disappointed today by the turn that European integration has taken (for better or for worse), it would take very little for them to think of themselves as Europeans.
Is the digital one answer, among others, for the renaissance of Europe? I believe so. The digital sector will have to be built patiently, once again, so as not to miss the digital transition. The construction of a voluntarist digital policy should be one of the priorities of the next European Commission, for the period 2014–19. In any case, I believe there will be no ‘European renaissance’, to use José Manuel Barroso’s beloved expression, without an ‘e-renaissance’.
This European internet, when it finally takes concrete form, will not be a ‘digital Airbus’, a catchphrase in vogue that does not mean very much. It will be situated clearly in the ‘Western’ camp, beyond passing tensions with the Americans, but it will also reveal its fragmentation into many internets. The European Union may speak with one voice, but it must remain an aggregation of many networks: a mosaic of twenty-eight distinct digital territories, linked by a shared destiny and perhaps by a certain idea of Europe, rather than by an improbable ‘.eu’.