A first attempt at machine learning

Today, I’ve generated my first results using a machine learning algorithm. I don’t understand the theory behind the model well enough yet to describe it, but will be working on that later this week, so watch this space.

One of the first challenges for anyone wanting to adopt machine learning is the huge choice of algorithms. Probably the most famous are neural networks, but there are numerous others with names such as Support Vector Machines and Reproducing Kernel Hilbert Space methods (a personal favourite, just for the impenetrability of the name). The family of methods that I’ve been trying to understand are known as Gaussian Processes. I’ll talk more about what these are and why I’ve chosen them in a later blog, but for now let’s just look at what they predict, returning to my stress/extension data:

[Figure: stress/extension data with the Matlab Gaussian Process prediction]

You’ll notice that the curve is smoother than the smoothed spline I used previously. Even better is that the output from this model is predictive rather than descriptive, in the sense that it is straightforward to provide the algorithm with new extension values and ask for the most probable stress. That is how I generated the curve above.

Gaussian process models are available in Matlab as part of its Regression Learner app. I’ve found the app helpful as a starting point, but a little too restrictive for understanding what is happening under the hood. I assume that other mathematical software packages also have Gaussian Process capabilities. If you use Matlab, then feel free to play with the code that I’ve written: you can download the zip file containing the .mlx live script that does the training and testing, the two .mat files with the training data and the .m file with the GP model.

I’ve started to use GPy, a Gaussian Process package for Python. It is more powerful than the Matlab implementation, and it is free. Here is the GPy prediction based on the same training data:

[Figure: optimised GPy Gaussian Process prediction for the stress/extension data]

You will notice differences between the predictions of the Matlab and GPy models. The GPy version seems more faithful to the data, but it doesn’t predict a stress of zero at an extension ratio of one. There is no particular reason it should: as far as the machine learning is concerned, the (0,1) data point has no more meaning than any of the other data points, and the model knows nothing about the underlying physics, so I’m actually surprised that the Matlab version does pass through (0,1). I’ll try to figure out why this is the case! One of the reasons I like the GPy version is that it is easier to explore what is happening within the model, which gives me greater confidence. As an example, I can look at what it predicts before I optimise:

[Figure: GPy Gaussian Process prediction before optimisation]

One of the nice features of the GPy package is that the blue shaded region indicates the range of confidence in the predicted stress values. As you would expect, the confidence band is much narrower for the optimised case. Again, feel free to download the .py file and the data.
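If you would like a flavour of the GPy workflow before downloading the files, a minimal sketch looks something like the following. The numbers below are made-up stand-ins for my stress/extension data, and the variable names are purely illustrative, so treat it as a starting point rather than the exact script I used:

```python
import numpy as np
import GPy

# Stand-in data: replace these arrays with real extension-ratio / stress values.
extension = np.linspace(1.0, 3.0, 30)
stress = 2.5 * (extension - 1.0) + 0.1 * np.random.randn(30)

# GPy expects 2-D column arrays.
X = extension.reshape(-1, 1)
Y = stress.reshape(-1, 1)

# A squared-exponential (RBF) kernel is a common default choice.
kernel = GPy.kern.RBF(input_dim=1)
model = GPy.models.GPRegression(X, Y, kernel)

# Inspect the kernel hyperparameters and noise variance before optimisation.
print(model)

# Predict at new extension values before optimising (cf. the unoptimised figure).
X_new = np.linspace(1.0, 3.0, 200).reshape(-1, 1)
mean_before, var_before = model.predict(X_new)

# Optimise the hyperparameters by maximising the marginal likelihood.
model.optimize(messages=False)
print(model)

# Predict again; the predictive variance gives the shaded confidence band.
mean_after, var_after = model.predict(X_new)
upper = mean_after + 1.96 * np.sqrt(var_after)
lower = mean_after - 1.96 * np.sqrt(var_after)
```

The two print(model) calls are the easiest way I know of to see how the lengthscale and noise parameters change during optimisation.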

In both cases, I encourage you to replace the stress and extension data with any x, y dataset of your own and build your own Gaussian Process machine learning model! Perhaps you could upload your results in the comments section of the blog?
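If your own data is sitting in a plain text or CSV file, swapping it in takes only a couple of lines; the file name below is just a placeholder:

```python
import numpy as np

# Hypothetical two-column file (x in the first column, y in the second).
data = np.loadtxt("my_data.csv", delimiter=",")
X = data[:, 0].reshape(-1, 1)
Y = data[:, 1].reshape(-1, 1)
# ...then build and optimise the GPy model exactly as in the sketch above.
```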
