Rage against the machine learning

Jimmy, our Insights Director, explains how, like all good buzzwords, machine learning (ML) has taken over and seemingly become an instinctive label for anything involving computers.


Like all good buzzwords, machine learning (ML) has taken over and seemingly become an instinctive label for anything involving computers. Gartner have been saying machine learning is at ‘peak hype’ for the last year or so. I have no doubt that organisations need to start thinking about ML, artificial intelligence (AI) and how these emerging technologies should be utilised, but simply labelling every automated process and calculation in your organisation as ML devalues the technology.

Automation is not ML

I recently attended a big data conference in London and was speaking to the owner of a marketing platform that promised to ensure all my customers receive the right communication for them. Now, let’s gloss over the fact that I’m not a marketer and have no real use for the tool he’s built (he’s clearly not familiar with innovation processes such as the desirable, feasible, viable framework - but that’s another blog).

I asked how the tool worked and he explained that the ‘algorithm’ measured the open and click-through rates of different emails against a predefined list of customer groups. Over time, the algorithm ‘learnt’ which emails were best to send to which customer groups. BANG - machine learning! No, it’s not. It’s updating click-through and open rates. Good marketers have been doing this for years; the process has just been more manual. Simply automating a process doesn't make it machine learning.
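To make the point concrete, here’s a minimal sketch of what that ‘algorithm’ amounts to (the function and segment names are hypothetical, not taken from the actual product): keep per-segment counters and pick the email with the best historical rate. It’s bookkeeping and a lookup, not a model that generalises.

```python
from collections import defaultdict

# Hypothetical sketch: per-segment open/click bookkeeping.
stats = defaultdict(lambda: {"sent": 0, "clicked": 0})

def record_send(segment, email_id, clicked):
    """Update raw counters after each send."""
    key = (segment, email_id)
    stats[key]["sent"] += 1
    stats[key]["clicked"] += int(clicked)

def best_email(segment, email_ids):
    """Pick the email with the highest observed click-through rate."""
    def ctr(email_id):
        s = stats[(segment, email_id)]
        return s["clicked"] / s["sent"] if s["sent"] else 0.0
    return max(email_ids, key=ctr)

# The 'learning' is just a running average - a marketer with a
# spreadsheet has been doing the same calculation for years.
record_send("young_professionals", "email_a", clicked=True)
record_send("young_professionals", "email_b", clicked=False)
print(best_email("young_professionals", ["email_a", "email_b"]))
```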

Take simple regression analysis: we’ve been doing it for years in Excel, with maths that’s really old. If I paste in new data and recalculate my parameters for the line of best fit, I get new outcomes for any inputted data - I wouldn’t class that as human learning. But if I automate the data refresh, do I suddenly have machine learning?
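For the avoidance of doubt, here is that really old maths - ordinary least squares - as a small sketch in plain Python (the example numbers are invented). Scheduling this function to rerun on refreshed data doesn’t change what it is.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x - Gauss/Legendre-era maths."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# 'Automating the refresh' is just calling the same function again
# on new data - the method hasn't learnt anything.
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])
print(f"y = {a:.2f} + {b:.2f}x")
```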

Really cool stuff is not ML

Last month I was in a demo hosted by a really big cloud platform provider (have a guess, you have a 33% chance of being right). They demonstrated a machine learning algorithm that could draw pictures based on a sentence describing an animal. You could describe a completely fictional animal that the machine would not have seen before and it would come back with some really good drawings that matched the description.

They explained how it worked and it is really, really impressive. The speed and accuracy of the site are fantastic. But I could type descriptions of animals in for one thousand years and I'd still get the same results. It doesn't learn. Yes, it’s really cool, but it’s relying on a scoring algorithm that’s been trained to analyse images in a certain way. The scoring algorithm is fed by a random pixel generator that tweaks the pixels based on the score from the other algorithm.

As I mentioned, I really liked this demo, and I’m not devaluing the tech in any way. However, the way I view this is in line with the infinite monkeys argument. If I had an infinite number of monkeys with typewriters and they had enough time, they would eventually write the entire works of Shakespeare (actually, one would do it first time, but that’s a different argument). I wouldn’t call that learning. In my argument, the random pixel generator is like a typewriting monkey, and the image recognition algorithm is a stats engine looking at word order, having been fed the entire works of Shakespeare.
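A minimal sketch of that generate-and-score loop, under the assumption described above (the `score` function here is a stand-in for the pre-trained image model; the numbers are invented): random tweaks are kept whenever a fixed scorer likes them. The scorer never updates, so nothing is learnt at generation time.

```python
import random

def score(image):
    """Stand-in for the pre-trained scoring model: fixed 'taste',
    never updated while images are being generated."""
    target = [0.2, 0.8, 0.5]  # hypothetical ideal pixel values
    return -sum((p - t) ** 2 for p, t in zip(image, target))

def generate(steps=10_000):
    """Hill-climb: tweak pixels at random, keep changes the scorer likes."""
    image = [random.random() for _ in range(3)]
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.05) for p in image]
        if score(candidate) > score(image):
            image = candidate
    return image

# Run this for a thousand years and the scorer is still the same
# function - the monkey types, the fixed critic judges.
print([round(p, 2) for p in generate()])
```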

Doing things really quickly is not ML

An increase in computational power has undoubtedly led to a dramatic increase in what computers can do. I remember harnessing the power of a computer cluster to do protein folding simulations at university - behold, the power of seven desktops! Compared to that, we’re now light years ahead. Online supermarket sites are analysing baskets and recommending a product you may have forgotten. The analysis involves billions of rows of data and it can be pretty accurate.

This is being classed as machine learning, but it’s just statistics - 200-year-old statistics done really quickly. If you buy pasta, minced meat, tomatoes and red wine... did you forget CHEESE? The maths behind this is the Poisson distribution, but until now we’ve not had the computing power to do the calculations. The percentage of people who buy all five of those items is very small compared to all shopping baskets. However, when three or four are bought together, the probability that the fifth item is also bought is very high. Very clever, but not machine learning.
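As a toy sketch of the basket maths (the mini-dataset below is invented; real systems scan billions of baskets): count how often the fifth item appears in baskets that already contain the other four. It’s a conditional frequency computed over lots of rows - old statistics at new speed.

```python
# Hypothetical mini-dataset standing in for billions of real baskets.
baskets = [
    {"pasta", "mince", "tomatoes", "red wine", "cheese"},
    {"pasta", "mince", "tomatoes", "red wine", "cheese"},
    {"pasta", "mince", "tomatoes", "red wine"},
    {"pasta", "bread"},
    {"cheese", "crackers"},
]

def p_item_given(items, extra):
    """P(extra in basket | items already in basket) - a plain
    conditional frequency; nothing here is being 'learnt'."""
    matching = [b for b in baskets if items <= b]
    if not matching:
        return 0.0
    return sum(extra in b for b in matching) / len(matching)

combo = {"pasta", "mince", "tomatoes", "red wine"}
print(f"P(cheese | {sorted(combo)}) = {p_item_given(combo, 'cheese'):.2f}")
# ...did you forget CHEESE?
```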

So, what is ML?

There is no doubt ML is progressing. However, it’s still not at the stage where we should be handing over the controls to vital systems. Take visual recognition, for example. There are plenty of examples that show the progression of image recognition; however, for every step forward there is also an experiment that finds a simple image that confuses neural nets. Some of these examples seem ridiculous - mistaking a turtle for a rifle, for example. The issue is that the neural nets act in stages and make assumptions. If an early assumption is wrong, the rest of the neural net is set on completely the wrong path and it arrives at an answer that is wildly off the mark.
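A deliberately crude illustration of that cascade (this is a toy analogy for the staged-assumptions point above, not how a real neural net is implemented): each stage commits to its best guess, and later stages trust it.

```python
# Toy sketch: one wrong early assumption sends everything downstream astray.

def stage_one(texture):
    # Misreading a patterned shell as 'metallic' at this early stage...
    return "metallic" if texture == "patterned" else "organic"

def stage_two(material):
    # ...means later stages only ever consider the wrong candidates.
    candidates = {"metallic": ["rifle", "kettle"],
                  "organic": ["turtle", "frog"]}
    return candidates[material][0]

print(stage_two(stage_one("patterned")))  # 'rifle' - wildly off the mark
```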

Ultimately this might be a semantics argument and depend on what you consider the definition of ‘learning’ to be. This could all be machine learning, or none of it might be. So what’s the point? Before you develop your machine learning strategy, or any strategy, take a step back and ask what problem you’re actually trying to solve. Going straight to ‘ML is the solution’ will certainly please the hundreds of companies that can sell you their latest solution, but will it help you? This approach is no different from how Red Badger have been building amazing digital products for years: Build the Right Thing, Build the Thing Right.

Want to find out more about machine learning? Read our Tech Round Table here.
