- Richard Harvey

# The weirdness of algorithm research

I've just spent a happy hundred hours preparing my Gresham lecture on __algorithms__. As my friends will know, I did not study computer science for my first degree, so I had to put the effort in to understand a bit more about the topic. No wonder undergraduates complain about theory courses -- the subject is a complete muddle.

How to represent algorithms? Structured English? There is no standard, standard English. Flowcharts? Well, they seem like a good idea, and there is, or was, a standard, but they take up a lot of screen real estate. Probably the most robust way would be an algorithm for a universal Turing machine. Aha! We are on firmer ground here: Turing machines are well defined (although there are quite a few variants), and there is a mental model, devised by __Tibor Radó__, in which the programs are written on "cards." It's nicely explained __here__ by __Professor David Brailsford__, using a notation in which a "card" tells the Turing machine what to do next. Unfortunately, Turing machines are so lacking in sophistication that one could not possibly use them except for the simplest problems, or "toy" problems as they are sometimes called. So, for the time being, it is a free-for-all.
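To see how little machinery is involved, here is a minimal sketch of the card model in Python. Each "card" maps a (state, symbol) pair to an instruction: what to write, which way to move, and which card to consult next. The `run` function and the `bb2` table are my own illustrative names; the machine itself is Radó's two-state "busy beaver", which halts after six steps leaving four 1s on the tape.

```python
def run(cards, max_steps=1000):
    """Simulate a one-tape Turing machine given a table of 'cards'.

    Each card maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "R" or "L". The tape is sparse: unwritten cells read 0.
    Returns (steps_taken, number_of_1s_on_tape).
    """
    tape = {}                       # sparse tape, blank symbol is 0
    head, state, steps = 0, "A", 0
    while state != "HALT" and steps < max_steps:
        symbol = tape.get(head, 0)
        write, move, state = cards[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# Rado's two-state busy beaver: the hardest-working 2-state machine
# that still halts.
bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "HALT"),
}

print(run(bb2))  # (6, 4): halts after 6 steps with four 1s written
```

Even this, the busiest two-state machine there is, does nothing a person would recognise as useful computation -- which is rather the point about "toy" problems.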

And then there is the complexity of algorithms. It's reasonably intuitive that if a problem doubles in size, the number of steps might increase by more than double (i.e. greater than linear complexity), so most students can grasp the idea of polynomial complexity and hence the set **P**. That said, I've noticed a lot of explanations underemphasise the fact that **P** is defined for only one class of problems: decision problems. But the idea that each class of problems requires a new set, and that the **P** versus **NP** problem probably has its own variants in **EXP** and so on -- that is often swept under the carpet. And the carpet of complexity theorists is a large one, covering 419 other complexity classes, many of which we barely understand and which may, or may not, be important. No wonder it is called a "complexity zoo" -- it has all become a bit like biology, and you know what Ernest Rutherford said about biology.
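The "doubling the problem more than doubles the steps" intuition is easy to check for yourself. Here is a toy example of my own (not from the lecture): counting the comparisons made by an all-pairs check, a classic quadratic-time pattern. Doubling the input roughly quadruples the work.

```python
def pair_count(n):
    """Count the steps of an all-pairs comparison over n items.

    There are n*(n-1)/2 pairs, so the step count grows as Theta(n^2):
    doubling n roughly quadruples the number of steps.
    """
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1          # one comparison per unordered pair
    return steps

for n in (100, 200, 400):
    print(n, pair_count(n))
# 100  4950
# 200  19900
# 400  79800
```

Quadratic is still polynomial, of course, so an algorithm like this keeps its problem comfortably inside **P**.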

The one notable exception to this is a video by __hackdashery__. It's about ten minutes long, focussed on just computational complexity, and it is the most accessible, most correct explanation of **P** versus **NP**.