Three Swimming Elk: Telling Lessons in Swedish

I’ve been slogging my way through a Swedish course on Duolingo. I don’t know whether the French course, say, uses the same examples. But I suspect not.

The nature questions are heavy on Swedish flora and fauna – pines and spruce, wolves and elk. The supernatural ones are lousy with trolls and gnomes.

  • Vi såg tre simmande älgar (“we saw three swimming elk”).
  • Ett fult troll tittade in genom fönstret (“An ugly troll looked in through the window”).

There are those characteristically inexplicable language-course headscratchers. I’m fairly confident I’ll never need to tell a Swede that “a turtle came swimming” (En sköldpadda kom simmande). But on the whole you’re less likely to stumble across surrealistic whimsy than across the kind of thing you’d expect from Henning Mankell. Antalet mord i staden har ökat: “the number of murders in the city has increased”, indeed.

The most arresting examples, though, sit squarely and morosely in Bergman territory:

  • Hennes moster är döende (“Her aunt is dying”)
  • Din fru kommer att ha tagit alla dina drömmar från dig (“Your wife is going to have taken all your dreams from you”)
  • Gav du henne en rakhyvel? (“Did you give her a razor?”)

And the unbeatable:

  • Det är jag som är Döden (“It is I who is Death”)

For the sake of my emotional equilibrium, I’m not sure I can carry on with this course for much longer…

Google Translate and Plato

[Image: google-translate-ai-2016-11-24-01]

So you’ve seen this news [New Scientist/Wired], right? That the Google Translate AI has supposedly invented a new internal language to help it translate language pairs it hasn’t learnt. Having been taught English <> Japanese and English <> Korean, it can then do the job for Japanese <> Korean.

The headlines position this as the AI inventing its own internal language, or interlingua, to handle those conversions. Every article notes the difficulty of anyone knowing exactly what is happening in there, in the deep learning going on inside the AI. (Engadget had the slightly more nuanced report on this, further from the “invented a language” headline.)

An invented language? That’s one interpretation. But if a language consists of signs, symbols that exist in the world, is that the best description for the process?

So Google Translate learns that English table means both Spanish mesa and Swedish bord. Does it then need to tell itself:

IF table = 1010101011101 = bord
AND table = 1010101011101 = mesa
THEN bord = 1010101011101 = mesa

?
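
If you squint, that reading is just a lookup table with English as the pivot. Here’s a minimal toy sketch of it in Python – the dictionaries are invented for illustration, and nothing here pretends to be Google’s actual internals:

to_english = {
    ("sv", "bord"): "table",
    ("es", "mesa"): "table",
}
from_english = {
    ("sv", "table"): "bord",
    ("es", "table"): "mesa",
}

def translate(word, src, dst):
    # Pivot through English: source word -> English -> target word.
    return from_english[(dst, to_english[(src, word)])]

print(translate("bord", "sv", "es"))  # -> "mesa", via the English pivot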

That’s not how we meat-sacks use language. And it skips over another interpretation, missing from the reporting I’ve seen so far, which is either totally thrilling or utterly chilling, depending on whether or not you’re looking forward to the ascendancy of Skynet.

I (and I’m willing to assume you too) have an idea of a table, based on years of experience:

  • It’s a flat surface atop a number of legs (often 4);
  • It’s usually (not always) around thigh height;
  • Most are made of wood, or metal, or plastic.

All these things contribute to a mental representation of [table]: a confluence of images, physical experiences, language labels, and a heap of Venn diagrams of different properties that coalesce around the label of table. A lot of overlap with something like [chair]; less – but some – overlap with [dog]. A mess of connections and firings in the neural pathways, impossible to pin down, even while it’s possible to see where they cluster.
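
In toy form – with features made up on the spot, and nothing like what actual neurons are doing – that overlap might look something like:

concepts = {
    "table": {"flat top", "legs", "furniture", "thigh height", "wood or metal"},
    "chair": {"flat top", "legs", "furniture", "thigh height", "sat on"},
    "dog":   {"legs", "fur", "barks", "alive"},
}

def overlap(a, b):
    # Jaccard overlap: shared properties divided by all properties.
    return len(concepts[a] & concepts[b]) / len(concepts[a] | concepts[b])

print(overlap("table", "chair"))  # high-ish: plenty of shared properties
print(overlap("table", "dog"))    # lower, but not zero: both have legs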

The Platonic form of a table, if you like. That is what’s triggered when I hear the English or Spanish or Whateverish word for [table]. English is my mother tongue, but if I were to translate a Stockholm restaurant reservation for a Spanish speaker, the mental process wouldn’t be “(Swedish) bord = (English) table = (Spanish) mesa”. It would be “bord <> [idea or Platonic form of the table] <> mesa”.
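
A toy version of that second reading, with invented two-dimensional vectors standing in for whatever the real, unreadable representation looks like: one shared space of meaning, translation as nearest-neighbour search within it, no English pivot required.

import math

shared_space = {
    ("en", "table"): (0.90, 0.10),
    ("sv", "bord"):  (0.88, 0.12),
    ("es", "mesa"):  (0.91, 0.09),
    ("en", "dog"):   (0.10, 0.95),
    ("sv", "hund"):  (0.12, 0.93),
}

def translate(word, src, dst):
    # Find the source word's point in the shared space, then return the
    # nearest word tagged with the target language.
    point = shared_space[(src, word)]
    candidates = [(lang, w) for (lang, w) in shared_space if lang == dst]
    return min(candidates, key=lambda k: math.dist(point, shared_space[k]))[1]

print(translate("bord", "sv", "es"))  # -> "mesa", straight through the shared space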

Look at that map up top again. That string of Japanese words/concepts on one side, the English string on the other, the Korean somewhere in the middle but tending closer to the Japanese. The whole forming a neat oval, a cluster of meaning. OK, so the AI only has language input; there are no sights/feelings/memories of [stratosphere] associated, not yet.

But what if that oval, that pattern of pinpricks of understanding, represents the rough formation of a Platonic form; an AI idea?