Richard Harvey

AI questions

I was asked a whole host of interesting questions following my lecture to the Cambridge Society. I could not deal with them all in the time available, so here is a quick recap of the answers.

1. Which careers will be least affected by AI in the future?

Well I did talk about this somewhat in the lecture. The short answer is that economists disagree on this question even more than they disagree on most things.

2. What skills do I need to develop to prepare me best for an AI world?

Computer science or electronic engineering are the obvious answers. It was fashionable a few years ago to state that pretty much any numerate degree could prepare you for computer science. I don't think that is true anymore, and I see more and more people on UEA's Conversion Masters who are Economists, Physicists or Mathematicians. They tell me that they have realised that premium jobs in computing are no longer open to them.

3. Will AI lead to very high levels of unemployment?

This seems to be a recurring anxiety in the analysis of new technology, and it's a natural worry. I am writing this in Norwich and my local newspaper, the Eastern Daily Press, has a piece on office scenes of yesteryear. The drawing office used to be a hum of creativity and action. Now the function is distributed onto people's desks. Of course those jobs morphed into something else, and the historic reality seems to be that automation actually increases work (via increased demand and hence production). A few experts think AI is different. I'm not so sure. With reference to Question 1, it depends which of the experts you believe - the lecture blog refers to three inconsistent approaches.

4. How should wealth be distributed in a world where people cannot advance through work?

No idea! I suspect strongly this is a question to which the questioner knows the answer better than me. However I would just note that it seems to be predicated on a false assumption since, at least in Western democracies, wealth distribution has nothing to do with how hard you work or how ingenious you are. It is mostly based on how much money your parents had. And, certainly pre-Covid, one of the weightier economic flows was the rentier flow, which has nothing to do with technology or innovation -- it is related to ownership of land and fixed assets.

5. Will AI lead to more or less inequality in society?

Inequality of what? Most people would mean wealth by this question, in which case it is not obvious to me that AI automatically leads to more inequality. IT in general has been very enabling for certain countries. There are plenty of examples of modern AI not understanding the sensitivities around the UK's protected characteristics so, in that sense, there are dangers. But I'm not sure I see an "in principle" argument for inequality.

6. What must governments do to prepare the world better for AI?

Stop recruiting civil servants who know nothing about AI! would be my "finally and briefly" radio answer. If you want society to benefit from AI, and automation in general, then you need to make your society digitally ready. A good example of where we are not ready is the English legal system, which is a bedlam of competing, overlapping and non-granular agencies. Sorting that out, which we call systems engineering or requirements engineering, is quite a task.

7. Is the UK doing enough to create leadership in AI?

No. The UK has done what it has always done when facing a competitive crisis - the Treasury doshed money around on a few projects, notably the Turing Institute. There was an expensive buy-in process which meant that only Cambridge, Edinburgh, Oxford, UCL and Warwick could afford the entry ticket. It's better now that the Treasury is less involved, but I'm really not persuaded that giving even more money to some of the wealthiest universities on the planet was quite the right way to develop leadership. If the questioner meant scientific leadership, then maybe, but the UK's publication record on AI is already very good (especially in Computer Vision, which is my field). But I suspect the questioner meant industrial and societal leadership. Well obviously those require different approaches.

8. Where should legal responsibility rest for decisions taken by AI?

This question comes up a lot, but is it what scientists call a "well-formed" question? To put the question the other way round - when AI makes a mistake, maybe a fatal mistake, who is to blame? I really can't see how this is different from questions we have been resolving, via the courts, for over 100 years. Who was to blame for the Boeing 737 MAX disasters? Well there is a very long Congressional report on apportioning blame. Blame falls on either the designers or the operators. Why should AI differ?

9. Does the recent fiasco over A level grades represent a failure of rudimentary AI, a failure of politics or a failure of human intelligence?

As it happens I am the Academic Director of Admissions at the University of East Anglia, so I have had to consider this question rather closely. As far as I am aware, the various methods trialled by OFQUAL did not use AI, at least not in the sense we have been discussing it here. They were statistical extrapolations. The failure was triggered by a complete lack of appreciation of teenagers sitting exams. Academics and teachers spend hours and hours with this group of the population, and I can assure you they are very unlike the general population. Firstly, they are monomaniacs. At every waking hour for the last N years they have been nagged by their teachers and parents to focus on exams. Any activity outside exams is scrutinised by parents and teachers - "will it make them better or worse at exams?" they ask. Secondly, their friends, enemies and collaborators are all at school doing exams. Thirdly, like all people sitting exams, they are highly, highly focussed on the tactics of the exam and, fourthly, they have a highly developed sense of injustice and fairness.

In AI, it is a fundamental rule of classification that you have to consider not only the accuracy of the classifier but also the cost of making mistakes (misclassifications). Clearly here, what looked like a small number of misclassifications was going to be very damaging, as each misclassification was a human tragedy for someone. I also think there was a very naive error caused by a failure to appreciate that people sit three exams. If p is the probability that someone fails to get their grade in a single exam ... well there are plenty of intelligent people out there ... you can work out for yourselves the probability that a person chosen at random misses at least one of their grades.
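For the curious, here is a minimal sketch of that calculation. It assumes, purely for illustration, that misgradings are independent across a candidate's three exams; the per-exam error rates are my invented figures, not OFQUAL's.

```python
# Probability that a candidate sitting three exams misses at least one grade,
# assuming (illustratively) independent misgradings across exams.
def p_miss_at_least_one(p, n_exams=3):
    return 1 - (1 - p) ** n_exams

for p in (0.01, 0.05, 0.10):
    print(f"per-exam error {p:.0%} -> at least one missed grade: "
          f"{p_miss_at_least_one(p):.1%}")
```

Even a seemingly respectable 5% per-exam error rate leaves roughly one candidate in seven with at least one wrong grade.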


The failure to understand those things led to near calamity in the British exam system. Who is to blame? Should we blame Ministers or civil servants for not understanding teenagers? Well I looked into the whole chain of decision making and I'm pretty sure it is one of those two groups. The Education Select Committee will decide which.

10. Is there any way that AI can be fashioned to benefit the whole of society or is it inevitable that it will enrich those who develop it?

Well the lecture showed that it might not be "the usual suspects" who are advantaged by AI. Furthermore, the inventors of any technology are usually well down the rich list. It is the exploiters of technology who make money. You can see that my answer here, as with my other answers, basically treats AI as just another technology -- I do not think there is anything special about a technology that can "think" for itself.

11. Could an artificial intelligence apparatus assume a human persona, as for the writing of a book, and if it did, what likelihood might there be of it:

  • offering novel insights into the human condition

  • getting ideas above its station

  • experiencing the problems encountered by Frankenstein’s Monster?

Books have already been written by AI. As have many creative objects: paintings, music, poetry and so on. Critics have tended to dismiss the outputs as banal, although that does depend on the critical milieu of the field (art critics being less luddite than music critics, for example). But perhaps this question is about an AI pretending to be something it is not - a human. If you have ever talked to a chatbot then I'm sure you know that the first few interactions, where the chatbot tries to work out if you are having a broadband issue, a billing problem or you just hate Virgin Media, are mediated by AI. So, yes, AI does pretend to be human, but so far only in fairly limited scenarios. The recent Google demo using customised technology to book restaurants or hair appointments was a technical tour de force, but many people were disturbed by innocent hairdressers being duped into conversing with an AI that was pretending to be a human.


So, is that scenario likely? Yes, at the moment it looks likely. The second part of the question is more complicated. Just because an AI is pretending to be a human doesn't mean that it understands being a human. But, yes, if an AI models human behaviour then it can provide insight by modelling situations -- this can be very important -- knowing that a person loitering at the "fast" end of a subway platform might be about to kill themselves is very useful modelling indeed (I'm sure this is a real system but I cannot find the reference for it - please email me the reference if you have it). Within the question is a hint that an AI might provide insight into the human condition. That is a big problem with current deep learning -- how to get insight from decisions. At the moment the decisions from most AI are delphic (although there are some notable exceptions).


The final part of the question is hampered by me not having read Mary Shelley's output. I presume that the Monster has some internal strife as it struggles to reconcile human and machine emotions. Yes, well, authors love that stuff, but it's a bit far from my reality.


12. I have heard of research that has shown that we are very poor at telling whether a person is lying, i.e. not telling the truth, and that in tests even those whom we would expect to be able to do so – e.g. police officers and experienced lawyers – produce poor results. Does Professor Harvey anticipate that artificial intelligence will be able to do much better than us, and better than the lie detectors which in this country we don’t appear to trust?

We are poor judges of lying and, worse, we are poor judges of whether we are poor judges! However there is a developing field of lie detection, and maybe those techniques could be learnt by a machine? There are various training courses that attempt to teach you to spot "micro-expressions" and small movements. I've not seen much evaluation evidence for any of them, so I doubt that is the answer. Possibly blood flow might work, but again that needs evaluation. That said, it is a narrowish task, so a rule of thumb is: if you can define the task narrowly, then AI can generally be trained to beat a human.


13. Do you believe there will be an exponential growth in technological advances as A.I. is utilised more to solve problems that would typically take humans far longer? If so, do you see this as a benefit to the world, or does this worry you, as machine learning means A.I. is advancing at a rate not controlled by humans?

To be exponential there has to be a doubling effect. So although take-up of AI will be rapid, there is no reason to think it will be exponential unless we have AIs creating AIs -- that would lead to rapid expansion. Two areas where that is likely are embedded systems and network devices. A router is a device that decides how and where to send packets using all sorts of algorithms: heuristics and AI. A feature of those algorithms is that they learn (or adapt), and some routers will program other routers using their own AI (AI building other AIs). The question is ... can anyone who has bought a router be confident that they understand the algorithm inside it? It is in this area that I think the march of the machines will be fastest, and possibly quite dangerous as, if we are not careful, we will have a network that no-one understands or controls.
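To make "routers that learn" a little more concrete, here is a minimal toy sketch of the idea: a device that keeps a running estimate of each link's latency and routes via whichever currently looks fastest. It is entirely hypothetical, not any real router's firmware, and the class and parameter names are my own inventions.

```python
# Toy adaptive next-hop selection: keep an exponentially weighted moving
# average (EWMA) of observed latency per link and route via the current best.
# Purely illustrative; real routing protocols are far more involved.
class AdaptiveRouter:
    def __init__(self, links, alpha=0.2):
        self.alpha = alpha                             # adaptation rate
        self.latency = {link: 1.0 for link in links}   # initial estimates

    def observe(self, link, measured_latency):
        # Blend each new measurement into the running estimate.
        old = self.latency[link]
        self.latency[link] = (1 - self.alpha) * old + self.alpha * measured_latency

    def next_hop(self):
        # Route via the link currently believed to be fastest.
        return min(self.latency, key=self.latency.get)

router = AdaptiveRouter(["A", "B", "C"])
router.observe("B", 0.3)   # link B looks fast today...
router.observe("A", 2.5)   # ...link A is congested
print(router.next_hop())   # -> B
```

The point is simply that the routing behaviour emerges from the data the device has seen, not from anything a human wrote down, which is exactly why it becomes hard to say what the algorithm inside the box currently is.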


14. I have just watched a fascinating and deeply worrying programme on Netflix called The Social Dilemma. It argued that AI has become the most sophisticated manipulator of humanity ever invented because of its ability to present different information to different individuals, according to what it knows of their biases. It argues that this is resulting in societies becoming ever more divided because human beings are essentially information processing machines. If there is no reliable process for establishing truth, there is no basis for agreement between them. Do you have any thoughts on this?

This is a complex question because it mixes a number of concepts, and for precision and truth it is important to be clear about the separation of those things.

(i) There are a number of businesses whose business model is based on using and selling your data.

The two most famous of those are Google and Facebook.  If you don’t like those models then don’t use those businesses.  I don’t use them very much, because basically I think Google and Facebook are malign.  Other people disagree and they are welcome to carry on using them. So, one part of that film seems to be saying that consumers are either too dumb or too naive to understand how those firms work. Yup.  Solution: better education.

(ii) There are a number of businesses which are thriving on the spreading of fake news.

The two most infamous of these are Twitter and Facebook. My own view is that the most egregious of those cases would already be dealt with were we to consider Twitter and Facebook to be either newspapers or broadcasters. Probably broadcasters. Why is the government so loath to act? Mystery. There have been some recent attempts by both companies to rein in the fake news problem, but they don't look very scalable to me.

(iii) There are algorithms that present different stuff to different people. Yes. That’s called personalisation. Switch it off if you don’t like it. But people do like it. Racists like talking to other racists. Of course the concept is not new at all – the Ku Klux Klan is a sort of country club for racists, isn’t it? And when those racists are talking to their neighbours they talk differently. So here the technology is merely mirroring human behaviour – “mirroring” being precisely the technical term for it in the psychology literature.

(iv) Fake news spreads quickly. Yes, although not as quickly as Covid. In my Gresham lecture on social media I quote results which show it spreads as a complex contagion (meaning one has to have several exposures to it before one spreads it; a toy simulation below illustrates the distinction). But what can be done about this? I would agree that it is a technical problem looking for a proven solution, but in the absence of developed research I argue:

  1. Don’t retweet news from sources you know to be dodgy – unsubscribe from therealdonaldtrump and avoid talking about the guy.

  2. Slow down the propagation of news from sources which substantial numbers of people distrust.

  3. Read the debunking literature (how many people have read The Debunking Handbook, for example?)

  4. Develop intellectual vaccines.

I’m rather fond of solution 4. Although it is entirely speculative, I think it highly likely that certain ideas are so powerful that they inoculate you against bad thoughts. Are you less likely to be a racist if you understand Hofstede’s dimensions of culture? There has been some work on this and there are some pointers to it in my lecture on Social Media.
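As promised above, here is a minimal sketch of the simple-versus-complex contagion distinction. The chain topology, the seed, and the threshold of two exposures are my illustrative assumptions, not figures from the lecture.

```python
# Toy contagion on a line of people: a node only starts spreading once it
# has been exposed by at least `threshold` distinct spreading neighbours.
def spread(n_people, seeds, threshold, rounds):
    exposed_by = {i: set() for i in range(n_people)}
    spreading = set(seeds)
    for _ in range(rounds):
        for s in list(spreading):
            for nb in (s - 1, s + 1):            # expose both neighbours
                if 0 <= nb < n_people and nb not in spreading:
                    exposed_by[nb].add(s)
                    if len(exposed_by[nb]) >= threshold:
                        spreading.add(nb)
    return len(spreading)

# Simple contagion (threshold 1) races along the chain; complex contagion
# (threshold 2) stalls because no node ever has two spreading neighbours.
print(spread(50, seeds={25}, threshold=1, rounds=10))  # -> 21
print(spread(50, seeds={25}, threshold=2, rounds=10))  # -> 1
```

On richer, more clustered networks a complex contagion does spread, but only where people receive the same message from several contacts, which is part of why slowing propagation (solution 2) could help.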

The final part of the question is predicated on a highly dystopian vision in which “there is no basis for establishing truth”. Well I cannot understand why I should accept that as a premise. The vast majority of truths are established by experiment or expert agreement. The fact that lots of morons think, for example, that vaccination is a bad thing is regrettable, but evolution will take care of them.

It seems to be part of a middle-class panic (or “moral panic” as sociologists call it) that there are lots of people who don’t think like us and they vote for Donald Trump, Brexit and other ghastly things. And that is essentially a political question, so you really need a political scientist to answer it. That said, my own view is that countries need to work harder on what it means to be a voting citizen: voters should have responsibilities, and if you fail to fulfil your responsibilities then I cannot see why you should vote. In this country, for example, roughly half the population pays tax, so it is perfectly possible for the remaining half to vote for things that they do not have to pay for. Obviously that is completely unsustainable from a game-theoretic point of view. A similar argument applies to the spread of fake news. If I retweet Trump’s latest inanity, what penalty do I face? Well perhaps a bit of backlash from my friends, but that’s it. What if my subsequent tweets got delayed? Well Trump’s tweets would be so behind the times that no-one would read them at all.

So the root cause of the problems alluded to in this final question is the lack of negative consequences for negative actions. Change that, and we will change society. None of which is particularly to do with AI, but interesting nevertheless!
