Serenity When? Serenity Now?


I started writing this post ten days before the US election. The premise was that it’s massively unproductive to worry about things we have no control over: football results, Brexit, that US election I mentioned.

… worry is a dividend paid to disaster before it is due…

– Ian Fleming, “On Her Majesty’s Secret Service”

The aim was to quit the “read the entire internet” impulse that fretting encourages. Stop skim reading every article. Stop following inflammatory tweets back to the shithive of alt-right white supremacist scumbaggery they dribbled out of.

Reading every viewpoint and counter-viewpoint on a subject doesn’t leave you better prepared to absorb the consequences of an outcome you don’t have any control over. That view/counterview thing? Horribly overdone. Depressing to see news outlets follow up with the counter-opinions, because often it seems so forced. (“OK, who wants to write an opinion piece saying Bowie was shit? Come on, someone has to write this thing…”)

In praise of disengagement? Sort of. This is tacking pretty close to the serenity prayer.

Anyways, history overtook my sluggish blogging (that’s how fast I blog, at a sub-historical pace), and the US somehow elected a candidate who makes more false statements than true ones. As I lazily pondered how Facebook and Twitter had played a disastrous role in the dissemination of bullshit during Brexit and the US election, the media duly decided to get riled about fake news.

What of serene disengagement in 2016, then? Disengagement from the internet, I hasten to add, not from “real life”. The last thing we need now is disengagement from reality. Ever think: if we’d all stayed off Twitter and Facebook in 2016, and instead had talked to our less politically-aligned relatives, we might not now be suffering the spittle-flecked, oddly angry victory shitposting of the Brexiteers and Trumpkopfs? Even without social media we would still have been appalled by the atrocities in Syria, still mourned Bowie and Prince.

I stand by the point that over-reading is unproductive. It’s a one-way street, a dead-end download that will likely go unanalysed and unsynthesised, and will never be shared except as an angry diatribe at a colleague with better things to do.

This isn’t a solution for the awfulnesses we’ve subjected ourselves to this year. It’s a modest contribution to your own mental health not to pore over things, obsessively, until you lose your grip on what they actually mean. Don’t read the comments. Don’t feed the trolls. Twitter sparingly. Facebook for event invites and birthdays only.

As for King Troll, all I can do is trust in the survival instinct of the US people, and have popcorn on hand in case of impeachment. Regarding the disengagement policy, I’m convinced that the most constructive thing people could do is unfollow Trump on Twitter. Can you imagine how much more effectively he could be scrutinised if we stopped wasting time evaluating his tweets by the normal standards of political/civil discourse, and instead dismissed them out of hand as the deliberate misinformation they are? Imagine how that fragile ego would take a plummeting follower count…


PS The one topic I did manage to disengage from in the latter half of the year was football. I’ve avoided the brief, addiction-forming highs of the wins and the days-long toxic fug of defeats. It’s easier living outside the UK, but I’ve seen a few results by accident, or through knee-jerk click-impulse. On the whole I honestly feel that if anything it’s helped my mood. (I reserve the right to revise this opinion if Arsenal win the League.)


Google Translate and Plato

[Image: Google’s map of the Translate AI’s internal representations, with English, Japanese and Korean sentences of the same meaning forming a cluster]

So you’ve seen this news [New Scientist/Wired], right? That the Google Translate AI has supposedly invented a new internal language to help it translate language pairs it hasn’t learnt. Having been taught English <> Japanese and English <> Korean, it can then do the job for Japanese <> Korean.

The headlines position this as the AI inventing its own internal language, or interlingua, to handle those conversions. Every article notes the difficulty of anyone knowing exactly what is happening in there, in the deep learning going on inside the AI. (Engadget had the slightly more nuanced report on this, further from the “invented a language” headline.)

An invented language? That’s one interpretation. But if a language consists of signs, symbols that exist in the world, is that the best description for the process?

So Google Translate learns that English table means both Spanish mesa and Swedish bord. Does it then need to tell itself:

IF table = 1010101011101 = bord
AND table = 1010101011101 = mesa
THEN bord = 1010101011101 = mesa

?
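For what it’s worth, here’s that symbol-shuffling reading as a runnable toy: a little Python sketch of my own devising, pivoting every word through its English label. (Entirely invented; nothing like whatever Google’s network actually does.)

# Toy "transitive lookup": translate by pivoting through the
# shared English label. Illustration only, not Google's method.
sv_to_en = {"bord": "table", "stol": "chair"}
en_to_es = {"table": "mesa", "chair": "silla"}

def swedish_to_spanish(word):
    # IF bord = table AND table = mesa THEN bord = mesa
    return en_to_es[sv_to_en[word]]

print(swedish_to_spanish("bord"))  # -> mesa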

That’s not how we meat-sacks use language. It also skips over another interpretation, missing from the reporting I’ve seen so far, which is either totally thrilling or utterly chilling, depending on whether or not you’re looking forward to the ascendancy of Skynet.

I (and I’m willing to assume you too) have an idea of a table, based on years of experience:

  • It’s a flat surface atop a number of legs (often 4);
  • It’s usually (not always) around thigh height;
  • Most are made of wood, or metal, or plastic.

All these things contribute to a mental representation of [table]: a confluence of images, physical experiences, language labels, and a heap of Venn diagrams of different properties that coalesce around the label of table. A lot of overlap with something like [chair]; less – but some – overlap with [dog]. A mess of connections and firings in the neural pathways, impossible to pin down, even while it’s possible to see where they cluster.
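If you wanted to cartoon those Venn diagrams in code (my own toy, a sketch, not a claim about cognition), you could treat each concept as a bag of properties and measure how much two bags share:

# Concepts as property sets; set overlap (Jaccard similarity) as
# a crude stand-in for the Venn diagrams. Properties invented.
table = {"flat top", "legs", "thigh height", "wood", "furniture"}
chair = {"flat top", "legs", "backrest", "wood", "furniture"}
dog = {"legs", "fur", "barks", "animal"}

def overlap(a, b):
    # shared properties divided by all properties
    return len(a & b) / len(a | b)

print(overlap(table, chair))  # high: much in common
print(overlap(table, dog))    # low, but not zero: both have legs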

The Platonic form of a table, if you like. That is what’s triggered when I hear the English or Spanish or Whateverish word for [table]. English is my mother tongue, but if I were to translate a Stockholm restaurant reservation for a Spanish speaker, the mental process wouldn’t be “(Swedish) bord = (English) table = (Spanish) mesa”. It would be “bord <> [idea or Platonic form of the table] <> mesa”.
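The machine version of that middle step, as far as I understand the research, is a shared vector space: every word or sentence, whatever its language, lands at a point, and nearby points mean nearby meanings. A minimal sketch with invented numbers (a real system learns these; none of this is Google’s actual code):

import math

# Toy shared space: each (language, word) pair gets a vector.
# The numbers are invented; a real system learns them.
embedding = {
    ("sv", "bord"): [0.90, 0.10, 0.00],
    ("en", "table"): [0.88, 0.12, 0.02],
    ("es", "mesa"): [0.91, 0.09, 0.01],
    ("es", "perro"): [0.10, 0.20, 0.95],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def translate(word, src, dst):
    # bord <> [shared representation] <> mesa: pick the nearest
    # neighbour in the target language. No English lookup anywhere.
    query = embedding[(src, word)]
    candidates = [(w, v) for (lang, w), v in embedding.items() if lang == dst]
    return max(candidates, key=lambda wv: cosine(query, wv[1]))[0]

print(translate("bord", "sv", "es"))  # -> mesa

Note that nothing in translate() cares whether an English vector even exists; “bord” and “mesa” are simply near each other in the shared space.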

Look at that map up top again. That string of Japanese words/concepts on one side, the English string on the other, the Korean somewhere in the middle but tending closer to the Japanese. The whole forming a neat oval, a cluster of meaning. OK, so the AI only has language input; there are no sights/feelings/memories of [stratosphere] associated, not yet.

But what if that oval, that pattern of pinpricks of understanding, represents the rough formation of a Platonic form: an AI idea?