• Richard Harvey

AI: will it change your life?

A kind invitation to speak at the Cambridge Society and, as usual, I thought I would pop up a blog covering some of the background material.

The first slide appears to show an original production photograph from R.U.R. (Rossum's Universal Robots) by Karel Čapek. The play is quite well known now and Wikipedia has a good article which will reveal pretty much all you want to know about R.U.R.

I was first alerted to the play by a splendid book by the commentator and critic Jasia Reichardt. Her book "Robots: Fact, Fiction and Prediction" was published in 1978 and is now sadly out of print but, unlike many old technology books which feel outlandish and alien, her curated content is still fresh and with a bit of updating would make good reading today. From recollection, there are illustrations of computer-generated art and text, display robots and, of course, R.U.R.


The third slide covers the ups and downs of AI. This again is well documented, and the AI "hype cycle" has fooled many a commentator and investor. Modern times are not immune to the overselling of AI: IEEE Spectrum, the house magazine of the IEEE, recently levelled a number of accusations at IBM's Watson as applied in healthcare settings. Great care is needed with such accusations, since people tend to slide from "this technology was oversold" to "this technology is useless". That is far from the case, and that exact mistake was made by the great Sir James Lighthill FRS. Lighthill was venerated among mathematicians working in fluid dynamics, but his famous report into AI in the UK knackered British AI research. It is quite commonplace for scientists to be pitted against each other in the war for funding, and it is often said by officers at EPSRC that all grant reviews by physicists are very favourable whereas those by computer scientists are brutal. The reason is that if physicists say nice things about each other's work then it will get funded, and since the pot of money is fixed, it will be funded ahead of more valuable work in other fields whose members are beastly to each other. Even so, Lighthill's efforts to dismiss the AI work as unprincipled now look egregious. My own view is that very, very brainy people often get into trouble when thinking about artificial brains -- their intuition about their own thought processes leads them astray (see, for example, Penrose's books The Emperor's New Mind and Shadows of the Mind).

The lecture spends quite a while explaining modern AI as brought to life through "deep learning". I'm sure I will remember to say this in the lecture but, for the record, deep learning is not the same as AI and AI is not deep learning. Furthermore, many of the impressive systems that make the headlines are not necessarily based on deep learning. But it's quite cool to know a bit about how they work, and I think the deep neural network is a useful tool for thinking about the problems associated with modern AI: it can be inexplicable, highly dependent on training data, and prone to over-fitting.

Notwithstanding the technical problems of modern AI, I would say the key fact that is worth remembering (the "takeaway" as Americans would say) is that it is quite possible for a teenager to download a pre-trained deep neural network and use it to recognise faces, people, objects and so on. But using a classifier effectively requires knowledge and expertise in bias, data science, ethics and risk. Those things really come with experience.

This all comes to a head in the work of Joy Buolamwini, who noted that facial classifiers work differently for white people and black people. Her work was a shock, in several ways. For me, I was astonished to see that large corporations had dared to train their classifiers on a non-representative set of people. How amazingly thoughtless. That said, this was in 2016. To my mind the value in the work of people like Buolamwini is not the results -- there are a number of methodological flaws in the work -- but more in what they represent, which is a loud technological call of "what were you thinking?"
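The mechanism behind that finding can be reproduced with a toy model. The groups, features and numbers below are entirely invented for illustration: a one-parameter threshold classifier is "trained" on data dominated by group A, then its accuracy is measured separately on groups A and B.

```python
import random
random.seed(0)

def sample(group, label, n):
    # Group B's features are shifted relative to group A's -- a stand-in
    # for any systematic difference between groups in the data.
    shift = 0.0 if group == "A" else 1.2
    centre = shift + (2.0 if label == 1 else 0.0)
    return [(random.gauss(centre, 0.4), label) for _ in range(n)]

# Training set skewed towards group A: the "non-representative" data.
train = (sample("A", 0, 500) + sample("A", 1, 500)
         + sample("B", 0, 10) + sample("B", 1, 10))

# "Train" a one-parameter classifier: a threshold halfway between class means.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def accuracy(data):
    return sum((x > threshold) == y for x, y in data) / len(data)

acc_a = accuracy(sample("A", 0, 1000) + sample("A", 1, 1000))
acc_b = accuracy(sample("B", 0, 1000) + sample("B", 1, 1000))
print(f"group A accuracy: {acc_a:.2f}, group B accuracy: {acc_b:.2f}")
```

The classifier is near-perfect on the group it mostly saw during training and much worse on the other, without anyone having written a biased line of code: the bias lives entirely in the training set.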

So now we have researchers studying how to fool AI, how to create unbiased AI and how to avoid data "poisoning".
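Fooling AI can be illustrated without a deep network at all. The weights and inputs below are invented for the sketch: for a linear classifier, nudging every feature slightly against the sign of its weight is enough to flip the decision, which is the intuition behind fast-gradient-style adversarial attacks on neural networks.

```python
# A linear "classifier": score(x) = w . x + b, positive means class 1.
w = [0.5, -1.2, 0.8]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [1.0, -0.5, 1.0]   # an input confidently classified as class 1
eps = 0.9              # perturbation budget per feature

# Adversarial perturbation: move each feature against the sign of its
# weight, the direction that decreases the score fastest per unit change.
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x), score(x_adv))  # positive, then negative: decision flipped
```

No single feature moves by more than `eps`, yet the small nudges all push the score the same way and the classification flips -- the essence of an adversarial example.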

The final part of the lecture was prompted by the questions, some of which were submitted in advance. One set of questions related to the possibility that AI would displace jobs. The question of whether automation replaces or creates jobs seems to be one of the never-ending questions in economics, and it is never answered very satisfactorily because it depends on many factors which economists find difficult to measure. Broadly the consensus seems to be that automation creates jobs -- the specific jobs undertaken by the automation are lost, but new ones are created elsewhere to handle the increased production caused by the increased consumption, caused in turn by the lowering of prices caused by automation. "But could AI be different?", people ask.

And economists cannot agree about that either. One school of thought is rather conventional and argues that people with high levels of experience and education are less at risk; that argument is made, for example, by the McKinsey Global Institute or, in England, by the Office for National Statistics. But a newer school of thought attempts to map the patents associated with AI onto job descriptions, and this leads to some novel outcomes: it implies that highly paid people such as market analysts might be more exposed to AI than, say, cooks or food-serving staff.

Neither approach is very satisfactory, and I think it is safer to think about your job and work out how much of it is to do with pattern matching and the recognition of things. This is presumably what happened when the BCS asked its members which jobs were at risk. If your job is dangerous, then possibly there is more of an incentive to automate it. Where AI struggles at the moment is on tasks that involve metaphor and simile, or tasks where there are no training data. Put that together and there are some surprises: medical doctors, lawyers and stockbrokers, the conventional provinces of the upper-middle classes, will have their work affected; whereas teachers, chefs and researchers are safe for a while yet.

The other questions? Well I'll blog separately about those.


For those who want to follow along, the slides are here:

CamSoc_slides.pdf (12.25 MB)
