In my next blog, I’ll be writing about one of the less glamorous but more important aspects of artificial intelligence: quantifying confidence in predictions. This comes down to two related questions: how do I optimise my machine learning model on my training data, and how do I quantify confidence in the predictions that model makes on my testing data?
Prior to the start of the World Cup, there was social media interest in a research article that had used a machine learning algorithm to predict the winner, based on a wide range of input data. None of the media discussed how we would verify such a model: how many World Cups would have to be played before we could be confident that the model was more predictive than, say, randomly picking a winner from Brazil, Germany and Italy, the three teams that have won the most previous tournaments? Regardless of whether our model is physics-based or machine-learning-based, it is of no use if we cannot verify it in a statistically meaningful way.
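To put a rough number on that question, here is a minimal sketch of the kind of calculation involved. It assumes the baseline picks uniformly from the three historically most successful teams (so a 1/3 chance per tournament) and assumes a hypothetical model hit rate of 70% — neither figure comes from the article; both are chosen purely for illustration.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance the baseline
    gets at least k tournaments right by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def tournaments_needed(p_model, p_baseline=1/3, alpha=0.05):
    """Smallest number of World Cups after which a model with the
    given hit rate would, in expectation, beat the random baseline
    at significance level alpha."""
    for n in range(1, 1000):
        # expected number of correct predictions, rounded to nearest
        k = int(p_model * n + 0.5)
        if binom_tail(n, k, p_baseline) < alpha:
            return n

print(tournaments_needed(0.7))
```

With these illustrative numbers the answer comes out at five tournaments — roughly two decades of World Cups before we could even begin to separate the model from lucky guessing, which is exactly the verification problem raised above.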
Such questions inevitably lead me to start thinking about the meaning of artificial intelligence. In a recent tweet, the New York Times asked for views on whether AI is a threat or a boon to humans. Encouraging these conversations is important, if for no other reason than that, if we do not engage more widely with such concerns, we may find ourselves on the wrong side of ill-informed legislation. The problem with having such a debate is that we don’t even know what supersmart artificial intelligence means. It is very hard to have a meaningful discussion about the merits of something that is not well defined.
For a thoughtful perspective on this, read Quanta Magazine’s Q&A with Judea Pearl.