Optimising models

In my previous blog, I discussed how to use an intuitive method, Leave One Out Cross Validation (LOOCV), for determining which parameters work best for a given machine learning algorithm. When I wrote that blog, I was surprised to find that it did not seem to work very well for finding an optimum fit. Something I have learnt over many years of working with numerical analysis is that if the results are surprising, it is well worth checking that they are correct! I’ll now revisit the LOOCV technique and present a much more satisfying outcome. Recall that in this method we train on all but one data point and then compare the trained model’s prediction for the stress with the observed stress at the point left out. We repeat this, leaving out every data point in turn, and then optimise by minimising the overall deviation between predictions and observations. If we vary the length and variance hyper-parameters in the GP, we obtain a contour map of the model likelihood, which enables us to identify the best parameters. This approach to optimisation, iterating over a regularly spaced grid of length and variance values, is not very efficient, but it serves to illustrate the process.
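
For anyone who wants to experiment, here is a minimal sketch of that grid search, written in Python with scikit-learn. It assumes the data sit in arrays x (extension) and y (stress), and it scores each (length, variance) pair by the squared deviation at the left-out point rather than the likelihood-based measure used for my contour maps, so treat it as an illustration of the process rather than my exact code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def loocv_score(x, y, length, variance, noise=1e-2):
    """Average squared LOOCV deviation for one (length, variance) pair."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float)
    errors = []
    for i in range(len(y)):
        train = np.arange(len(y)) != i                       # leave point i out
        kernel = ConstantKernel(variance, constant_value_bounds="fixed") \
            * RBF(length, length_scale_bounds="fixed")
        gp = GaussianProcessRegressor(kernel=kernel, alpha=noise, optimizer=None)
        gp.fit(x[train], y[train])
        prediction = gp.predict(x[i].reshape(1, -1))[0]      # predict the left-out point
        errors.append((prediction - y[i]) ** 2)
    return np.mean(errors)

# Brute-force grid over the two hyper-parameters, as in the contour maps:
lengths = np.linspace(0.5, 5.0, 20)
variances = np.linspace(0.5, 5.0, 20)
# score_grid = np.array([[loocv_score(x, y, l, v) for v in variances] for l in lengths])
```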

The dark red patch in the left figure shows where the fit is best and the figure on the right shows the corresponding fit to the data, including the confidence interval, shaded in grey.

An alternative way to determine how good a model is uses the log likelihood, based solely on the probability of the predictions. This gives rise to two competing terms: one measures the model complexity, the other the quality of the data fit. Below is the map of likelihood as I vary the length scale and variance in my Gaussian process for my stress/extension data.
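
For concreteness, here is a sketch of that log likelihood (strictly, the log marginal likelihood) for a GP with a squared-exponential kernel, split into the data-fit and complexity terms. The kernel form and the noise level are assumptions for illustration, not necessarily what was used for the map described here.

```python
import numpy as np

def sq_exp_kernel(x1, x2, length, variance):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = np.asarray(x1)[:, None] - np.asarray(x2)[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def log_marginal_likelihood(x, y, length, variance, noise=1e-2):
    y = np.asarray(y, float)
    n = len(y)
    K = sq_exp_kernel(x, x, length, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)                                # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))      # K^-1 y
    data_fit = -0.5 * y @ alpha                              # rewards matching the data
    complexity = -np.sum(np.log(np.diag(L)))                 # -0.5 log|K|, penalises complexity
    return data_fit + complexity - 0.5 * n * np.log(2 * np.pi)
```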

Although the contour plot looks different, the maximum likelihood is located in a similar, but not identical, place. Since the two methods are statistically different ways of determining the best fit, it is not too surprising that the exact results differ; however, we would find it difficult to distinguish between the predictions of the two validation techniques.

For comparison, the optimised hyper-parameters are:

Hyper-parameter | LOOCV | Log likelihood
length          | 2     | 1.8
variance        | 2.4   | 2.7

Likelihoods and complexity

If we are going to apply machine learning to science, then clearly we need a way of quantifying how good we think our predictions are. Even before we reach that stage, though, we need a way to decide which model is best and, for any given model, which settings, or hyper-parameters, provide the most believable predictions. This was illustrated in my previous blog in the figure showing how the predictions for the stress/extension relation vary as we vary the length scale hyper-parameter. For all length scales the curve passes exactly through the data, so how do we decide which is best?

There are a number of approaches to this; I’ll discuss just two, which are quite different and provide a nice reminder that, as powerful as machine learning is, it has quirks. It is worth noting that these validation methods are applicable regardless of the particular form of machine learning, so they are as valuable for checking neural network predictions as they are for Gaussian processes.

First of all, in this blog I’ll discuss the conceptually simpler option, cross-validation. In essence, we split the data into two sets: one is used to train the algorithm, the other is used to measure how good the predictions are. Since there are many different ways of splitting a data set into a training and a validation set, I’ll discuss what is known as Leave One Out cross validation (LOOCV). This is a dry but very descriptive name! All the data points but one are used to train the algorithm, and the trained algorithm is then asked to predict the output at the point that has been left out; the closeness of the prediction to the observation is a measure of how good the fit is. This can be repeated so that every data point is left out in turn, with the overall measure of how good the fit is simply an average over all of the LOOCV attempts.
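
If you prefer code to words, here is a tiny illustration of the idea using scikit-learn’s LeaveOneOut splitter; the linear regressor and the synthetic data are just stand-ins for whatever model and data set are being validated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20).reshape(-1, 1)            # stand-in inputs
y = 2.0 * x.ravel() + rng.normal(0, 0.5, size=20)    # stand-in noisy observations

# One model fit per left-out point; the score is how well that point is predicted.
scores = cross_val_score(LinearRegression(), x, y, cv=LeaveOneOut(),
                         scoring="neg_mean_squared_error")
print("average LOOCV squared error:", -scores.mean())
```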

We can use this to help guide the choice of hyper-parameters by repeating the process for a range of length scales and variances, introduced in a previous blog. We then look for the pair of values which give the best averaged match …

Since I wrote what comes next, I have discovered a bug in my code. My next blog will correct this, but since this is a blog and not an article for peer-review, I will preserve my mistakes below for posterity!

… , which sounds simple, but gives rise to two challenges.

The first is that searching for the best match means finding a maximum point on a two dimensional surface (or a higher dimensional one as we add more hyper-parameters). Finding a maximum on a surface that is likely to be quite complex is difficult! It is easy to find a local maximum, but much more challenging to find the global one. This is a problem that exists in a multitude of numerical calculations and has been known about for a long time, so although it isn’t easy, there are lots of established approaches to help.
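
One common pragmatic tactic, shown here purely as a sketch rather than what I actually did, is a multi-start search: run a local optimiser from several random starting points inside the hyper-parameter bounds and keep the best result. The score function is assumed to be something like the LOOCV measure above.

```python
import numpy as np
from scipy.optimize import minimize

def best_of_multistart(score, bounds, n_starts=20, seed=0):
    """Maximise score(params) by running a local optimiser from many random starts."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]       # random starting point
        result = minimize(lambda p: -score(p), x0, bounds=bounds)
        if best is None or result.fun < best.fun:             # keep the highest score found
            best = result
    return best.x, -best.fun

# Hypothetical usage: best_of_multistart(some_loocv_score, [(0.1, 10.0), (0.1, 10.0)])
```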

The second challenge is best illustrated in a contour plot of the log likelihood for my stress-extension data as I vary the two hyper-parameters:

[Figure: contour map of the LOO log likelihood as the length scale and variance hyper-parameters are varied]

The colour scale is such that bright yellow represents the highest values whilst dark blue represents the lowest. What you might be able to see is that there is no maximum! Even when I increase the length scale to much higher values, the log likelihood continues to increase. So this LOOCV measure predicts that an infinite length scale is optimal. Apparently this is a common problem with the cross-validation approach, although I have not yet found an explanation as to why. In the next blog, I’ll discuss a different approach to finding the optimum values for the hyper-parameters, which is less intuitive, but appears to be more robust.

A new challenge ahead: Automating Science Discovery

The reason I started this blog was to document my progress as I delved into machine learning. One of the primary motivations for doing that was that I was preparing a proposal to the UK’s Engineering and Physical Sciences Research Council for a call for feasibility studies bringing together physical science and artificial intelligence. After an Expression of Interest, an audition(!) that I did from Los Angeles at 3am local time, writing and submitting a full proposal and then attending an interview, I was unsurprisingly thrilled to learn that our bid was successful.

One of the key requirements for the proposal call was that we develop not just the use of AI in the physical sciences but also new AI. Below is the summary of our project describing the particular area of the physical sciences that we will focus on and the challenges we have set ourselves.

“De-mixing is one of the most ubiquitous examples of self-assembly, occurring frequently in complex fluids and living systems. It has enabled the development of multi-phase polymer alloys and composites for use in sophisticated applications including structural aerospace components, flexible solar cells and filtration membranes. In each case, superior functionality is derived from the microstructure, the prediction of which has failed to maintain pace with synthetic and formulation advances. The interplay of non-equilibrium statistical physics, diffusion and rheology causes multiple processes with overlapping time and length scales, which has stalled the discovery of an overarching theoretical framework. Consequently, we continue to rely heavily on trial and error in the search for new materials.”

“Our aim is to introduce a powerful new approach to modelling non-equilibrium soft matter, combining the observation based empiricism of machine learning with the fundamental based conceptualism of physics. We will develop new methods in machine learning by addressing the broader challenge of incorporating prior knowledge of physical systems into probabilistic learning rules, transforming our capacity to control and tailor microstructure through the use of predictive tools. Our goal is to create empirical learning engines, constrained by the laws of physics, that will be trained using microscopy, tomography and scattering data. In this feasibility study, we will focus on proof-of-concept, exploring the temperature / composition parameter space for a model blend, building the foundations for our ambition of using physics informed machine learning to automate and accelerate experimental materials discovery for next generation applications.”

Machine learning: preparing to go underneath the hood

In my previous blog, I showed machine learning predictions for the stress/extension data. In my next blog, I’ll start exploring what is happening within the algorithm. The particular machine learning algorithm that I’m using is known as a Gaussian process, or GP for short from now on. Rasmussen, in his freely available book, discusses what he claims is a reasonably close relationship between GPs and many of the other approaches, including neural networks. I believe that this is not a universally accepted viewpoint, but I suspect that for those of us who are likely to remain just users of machine learning, the arguments might be rather too technical to follow! Either way, I find GPs to be one of the more accessible routes into machine learning.

Today, though, I just want to cover some background on linear regression, since this lets me introduce some of the language and terms that are just as important in machine learning as they are in linear regression. A useful starting point is to remind ourselves of the Gaussian distribution and how it is used, often without us thinking about it, to determine the line of best fit to data using linear regression, the simplest form of machine learning and one used well before the term became commonplace:

$$ y = mx + c $$

If we believe that this equation describes our data, and our measurements are free from experimental noise, then our data would fit perfectly onto the straight line. Of course, all measurements have some noise, which means that our belief is now that the actual measurement, let’s call it y_obs, if we measure it repeatedly at the same value of x, will have a distribution of values with a mean of mx + c and some spread about that mean. The most common distribution is the Gaussian function, which says that the probability that we observe a particular value, y_obs, is given by,

$$ p(y_{\mathrm{obs}}) \propto \exp\left(-\frac{\left(y_{\mathrm{obs}} - (mx + c)\right)^2}{2\sigma^2}\right) $$

One attraction of the Gaussian distribution is that it is characterised by just one additional parameter, the variance σ².

You might find it helpful to see what a Gaussian looks like:

[Figure: a Gaussian curve, peaked at the mean, with its width set by the variance]

So we see that the probability is highest at the expected value mx + c, as required. The width of the peak is determined by the variance: the greater the variance, the wider the peak and the more likely we are to observe data further away from the mean. Since p(y_obs) is a probability, we also need a prefactor that ensures that the total probability of observing some value is 1. This leads to

$$ p(y_{\mathrm{obs}}) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left(-\frac{\left(y_{\mathrm{obs}} - (mx + c)\right)^2}{2\sigma^2}\right) $$

Since we have a probability of observing a particular y_obs at each x at which we take a measurement, we need to introduce a joint probability, which is just the probability of observing y₁ at x₁, y₂ at x₂ and so on. We usually assume that the noise that affects the measurement at one point is not related, or is uncorrelated, to the noise at another point. The joint probability that two or more independent events occur is the product of the probabilities of each individual event, so that

$$ p(y_1, y_2, \ldots, y_N) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left(-\frac{\left(y_i - (mx_i + c)\right)^2}{2\sigma^2}\right) $$

Now that we have specified a belief about how our data behaves and encoded it within a probability distribution function, we need to find the line of best fit, which means we need a measure of goodness of fit. The most common approach is maximum likelihood estimation (MLE). I’ll briefly discuss this in the context of linear regression, but, helpfully, it is also widely used in machine learning to optimise the model based on training data. The MLE corresponds to the parameters, which are just m and c for linear regression, that maximise what is called the log-likelihood; in other words, we search for the m and c that maximise the logarithm of the joint probability. Why maximise the logarithm of the joint probability and not just the joint probability itself? The simplest answer is that taking the logarithm simplifies the maths enormously: firstly, the logarithm of a product is the sum of the logarithms, and secondly, the logarithm of an exponential is just whatever is inside the exponential. In mathematical terms, the log-likelihood for a joint Gaussian probability distribution function for independent events is given by

$$ \log p(y_1, y_2, \ldots, y_N) = -\frac{N}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - (mx_i + c)\right)^2 $$

So now we just need to maximise this with respect to the parameters m and c, which we can do using calculus. I won’t go into the mathematics any further; there are plenty of resources online that describe the process in detail.
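
If you would rather check it numerically than with calculus, the sketch below, using made-up data, maximises the log-likelihood above and confirms that it gives the same m and c as an ordinary least-squares fit.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 30)
y = 1.5 * x + 0.7 + rng.normal(0, 0.3, size=x.size)   # noisy straight line, made up
sigma2 = 0.3 ** 2                                      # assumed known noise variance

def neg_log_likelihood(params):
    m, c = params
    residuals = y - (m * x + c)
    return 0.5 * len(y) * np.log(2 * np.pi * sigma2) + 0.5 * np.sum(residuals**2) / sigma2

m_mle, c_mle = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x
m_lsq, c_lsq = np.polyfit(x, y, 1)                     # ordinary least squares
print("MLE:", m_mle, c_mle)
print("LSQ:", m_lsq, c_lsq)
```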

Hopefully that has set the scene for my next blog when I’ll explore GPs.

Machine learning vs physics learning. A physicist’s view of the machine learning approach.

Cubic splines.

Still reading? Good. In the second of my two part blog I’ll introduce non-parametric learning. The most important thing to understand about non-parametric learning is that it is not non-parametric. Now that we’ve cleared that up …

Thanks to those of you who voted following my previous blog. You’ll find the results at the end of this blog. So what’s going on in the three figures I posted earlier?

Figure 1 is the simplest and, as you’ve probably already guessed, each data point is joined to its neighbours by straight lines. From experience, we tend to think it unlikely that, if we took measurements between the existing data points, they would fall on these lines. We would expect the data to vary more smoothly.

Figure 3 is generated from the best fit of the relationship between stress and extension that arises from some simple assumptions and application of the idea of entropy, as I mentioned here. In terms of learning about physics, this representation could be considered to provide the most learning, albeit that we are learning that our model is too simplistic. By just comparing the shapes of the data with the curve, we can infer that we need a model with more parameters to describe physics not included in our simple model. This is parametric physics learning.

If we aren’t attempting to fit a physics relationship but believe that our data is representative of an underlying trend, what options are there?

Figure 2 is generated using a “smoothing spline”. This is a neat way of attempting to interpolate data based on the data alone, rather than on any beliefs about what might cause a particular relation between extension and stress. A smoothing spline is an extension of the cubic spline, which is a type of non-parametric learning. The cubic spline describes the curve locally, at each data point, as a cubic equation. In this case, in contrast to the physics approach, we do not impose a global relationship. This means that knowing the value of the data at the first point tells us nothing about the value at, say, the 10th point of measurement. The physics approach would enable us to make this inference, but as we can see from figure 3, in some cases it wouldn’t be a very good prediction!

You may be wondering how we can define a cubic for each individual data point. A cubic equation has four parameters, so we have four unknowns and only one data point. To find the other unknowns, we add assumptions, such as requiring that the slope and the curvature of the curves connecting adjacent points match at each data point, which guarantees the appearance of smoothness. The maths behind this is cumbersome to write down and involves a great deal of repetitive calculation, which is why it only became popular with the advent of computers.
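
In practice nobody writes those conditions out by hand; libraries do it for us. Below is a minimal sketch using scipy’s CubicSpline, with made-up stress/extension values standing in for the real data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

extension = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # illustrative values only
stress = np.array([0.0, 0.8, 1.1, 1.3, 1.6, 2.2, 3.1])      # illustrative values only

spline = CubicSpline(extension, stress)       # continuity of slope and curvature handled internally
fine_extension = np.linspace(extension.min(), extension.max(), 200)
predicted_stress = spline(fine_extension)     # a smooth curve through every data point
```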

A smoothing cubic spline extends the idea of a cubic spline so that it can deal with noisy data, that is, data that varies about the expected mean. Without this, the cubic spline can become quite spiky when the data is noisy, so a smoothing spline relaxes the demand that the curve pass through every data point and instead looks for a compromise curve that is smooth but never too far from the data. This requires the introduction of another (non?) parameter, unsurprisingly called the smoothing parameter. When this is zero, it reproduces a cubic spline fit that passes through all the data points; when it is one, it fits a straight line through the data. The best choice of smoothing parameter requires us to introduce some arbitrary measure of what good looks like, but statisticians have come up with ways of quantifying and measuring this, a topic of a future blog.
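
Again, a library will do the heavy lifting. The sketch below uses scipy’s UnivariateSpline, whose smoothing control is a residual budget s rather than the 0-to-1 smoothing parameter described above, so the numbers are not directly comparable; the data are the same made-up values as before.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

extension = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # illustrative values only
stress = np.array([0.0, 0.8, 1.1, 1.3, 1.6, 2.2, 3.1])

interpolating = UnivariateSpline(extension, stress, s=0)     # passes through every point
smoothing = UnivariateSpline(extension, stress, s=0.5)       # smoother compromise curve
fine_extension = np.linspace(extension.min(), extension.max(), 200)
smoothed_stress = smoothing(fine_extension)
```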

In what way, then, is this approach non-parametric? In parametric learning, the number of parameters is dictated by the model we have chosen to fit our data. For the stress-extension entropy model, we have just one parameter. We believe that more data will either improve the fit to the model or further support our view that the model needs to be more sophisticated, but we do not believe that more data will necessarily require more parameters. In non-parametric learning the number of parameters is determined by the amount of data and how much we wish to smooth the data. The parameters should be viewed as having no meaning outside of their use in describing how the data behaves. In other words, we cannot extract any physical meaning from the parameters.

So which one is preferable? This comes down to asking ourselves two questions: what do we want to learn, and what do we want to do with our new knowledge? If our goal is to determine the physical laws that govern rubber elasticity and how those laws can be most elegantly represented mathematically, figure 3 is an important step. If our goal is to predict what will happen at extension values that we have not measured, but we don’t care why, then figure 2 is preferable. This is the essence of the difference between machine learning and physics. Machine learning works on the basis that the data is everything and that we learn everything we need to know from the data itself. Physics, on the other hand, is continually searching for an underlying description of the universe, based on laws that we discover guided by observations.

As for your votes, the view was unanimous: Figure 2. Is this telling us that machines have learnt to think like humans, or that we have biased their learning outcomes with our own preconceived notions?

Are you discrete?

In this blog, I’ll look at the two different data types that we work with: discrete or continuous. The distinction between them determines the type of machine learning that we will use. If our data can only have discrete values we seek to classify it. Like The Terminator? You’ll probably be classified into the group of people who also like Total Recall. That is a guess, but any AI that didn’t make that prediction probably isn’t very good.

Whilst classification is hugely important in science as well as many other big data problems, my interest is mostly in regression, which is about describing and predicting data that can take a range of values.

Perhaps a nice physical illustration of the difference between classification and regression is the phase behaviour of substances. Whether a substance is solid, liquid or gas at a given temperature and pressure is a classification problem. Take enough measurements at different pairs of temperature and pressure, and an AI algorithm will be able to start constructing the probable boundaries that separate the phases. If, on the other hand, you are interested in the properties of the substance just in the liquid phase, you might, for example, measure the density as you vary the temperature. The density will vary continuously as long as the substance stays in the liquid phase. Describing such continuous variations, such as the density/temperature relation, is an exercise in regression.
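
As a toy illustration of that distinction, the sketch below trains a classifier on made-up (temperature, pressure, phase) points and a regressor on roughly water-like, but still purely illustrative, density/temperature values.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

# Classification: phase (0 = solid, 1 = liquid, 2 = gas) from (temperature K, pressure atm).
TP = np.array([[250, 1.0], [300, 1.0], [400, 1.0], [300, 0.001], [260, 5.0]])
phase = np.array([0, 1, 2, 2, 0])
classifier = KNeighborsClassifier(n_neighbors=1).fit(TP, phase)
print("predicted phase at (310 K, 1 atm):", classifier.predict([[310, 1.0]]))

# Regression: liquid density (kg/m^3) as a continuous function of temperature.
T = np.array([280, 300, 320, 340, 360]).reshape(-1, 1)
density = np.array([999.9, 996.5, 989.4, 979.5, 967.4])      # roughly water-like, illustrative
regressor = LinearRegression().fit(T, density)
print("predicted density at 330 K:", regressor.predict([[330]]))
```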

Whilst in many cases it is obvious, whether a data set is discrete or continuous might also be a choice for you to make. It is worth remembering that no AI based decision making is free from human choices and/or influence. In the above example of the phase behaviour, I have decided that my substance can only be in one of three phases. There are many more possibilities: different types of solid phases, supercritical fluid phases, liquid crystals, and then there is even the question of whether we really know what we mean by the liquid phase.

Why I think AI might help me to do science

I’ll say a little about the type of research that I do, and how the traditional way that I solve research problems is limiting my ability to answer some of the questions I’m interested in.

My greatest research passion is developing and solving models of physical processes. Much of my work is about developing mathematical descriptions of how microstructure in blends emerges and evolves with time. Rather than attempt to describe this in words, perhaps a picture will help (if an interest in my blogs develops, I’ll upgrade and will be able to post the movie!).

[Figure: A snapshot of the phase separation process, according to one of my computational models. Anyone else see the resemblance to the Borg spaceship from Star Trek?]

One frustration is that the processes I try to model are so complex that even the simplest theoretical description is unwieldy. This means that gaining physical insight from the models, which after all is the point of building models in the first place, becomes increasingly difficult. To illustrate, the above picture took two hours of computing time with a high end graphics card. Two hours might not seem like long, but consider a model that has ten different independent variables: exploring how the predictions of the model change as the values of the variables change, in a way that is statistically significant, is just not possible. Without going into the details of statistical significance, let’s say that the minimum number of different values for any given independent variable is 5, which is optimistically low. If we only have one independent variable we need to do 5 runs of our computer model, which takes 10 hours. If we have two independent variables, we need to do 5 different variations of the second variable for each possible value of the first, so 25 runs or 50 hours. For three independent variables, this becomes 125 runs or 250 hours and so on. For 10 variables we are looking at 9765625 runs, or more than 2,000 years!
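
The arithmetic behind that combinatorial explosion is simple enough to check in a few lines:

```python
values_per_variable = 5        # the optimistically low minimum from the text
hours_per_run = 2              # time for one run of the model

for n_variables in (1, 2, 3, 10):
    runs = values_per_variable ** n_variables
    hours = runs * hours_per_run
    print(f"{n_variables} variables: {runs} runs, {hours} hours "
          f"(~{hours / (24 * 365):.0f} years)")
```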

Now let’s look a little more at what is happening inside my model. I divide space into discrete boxes (256 cubed in this case) and then solve my partial differential equations using finite differences. The equations describe how the concentration of one of the polymers evolves in time, based on some well established physics and then some assumptions particular to the circumstances I’m interested in, which might be how a surface affects the evolution. In each box, I’m solving the same equation but with different data. This means that if I have two boxes that look alike in terms of their concentration, and, in my case, the concentration of their neighbours, I’m doing twice the work necessary. To some extent this is overcome by the use of parallel processing inside my graphics card, but even the best graphics cards have limitations on how many computations can be completed at once. For example, for 256 cubed, I have to solve for 16777216 boxes, but my card can only handle 2496 boxes at once, so there is still a lot of processing happening in serial.
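
To give a flavour of what “finite differences on a grid of boxes” means, here is a deliberately over-simplified stand-in: an explicit update of a concentration field under plain diffusion on a small 3-D grid. It is not my actual model, just the same style of box-by-box update.

```python
import numpy as np

n = 32                                    # 32^3 boxes rather than 256^3, to keep it small
phi = np.random.rand(n, n, n)             # a made-up concentration value in every box
D, dt, dx = 1.0, 0.01, 1.0                # diffusion coefficient, time step, box size

def laplacian(f):
    """Discrete Laplacian with periodic boundaries: each box talks to its six neighbours."""
    return sum(np.roll(f, shift, axis) for shift in (-1, 1) for axis in (0, 1, 2)) - 6 * f

for step in range(100):                   # the same update rule is applied to every box
    phi = phi + dt * D * laplacian(phi) / dx**2
```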

What I started to appreciate, through conversations with some AI experts and listening to some fascinating developments at the somewhat cryptically entitled session Uncertainty Quantification in Multiscale Materials Modelling at the 2017 Fall MRS meeting, was that AI might be able to help out by learning the patterns. Rather than just solving the same problem on the same data, which is computationally intensive, my AI can say “I’ve seen that combination of inputs before, here is what is most likely to happen next”. It turns out that this is a lot quicker, but only once I’ve gone through a time consuming step of generating data to feed the AI so it can learn the trends. Well that’s the hope at least …