Looking Behind the Curtain of Machine Learning

In our last article, we attempted to define Machine Learning while also debunking some of the myths that have managed to pop up over time. We also made a promise to take a look at the different methods and techniques available for Machine Learning development. So, as promised, here we present the Zircon guide on approaches to Machine Learning.

Anyone who has so much as dipped a toe into the realm of Machine Learning will know that Supervised and Unsupervised Learning are the two most widely adopted methods. Which method best suits a given problem will depend very much on the data sets available and on the type of problem itself.

Supervised Learning

At its most basic level, Supervised Learning is a method of training in which the input data is well labelled, with each example already tagged with the correct answer. An algorithm can then use this training data to analyse new, unlabelled data sets and deduce the correct outcome.

Consider training an algorithm to recognise fruit, for example. The first step would be to provide the labelled training data: objects that are rounded, with a depression at the top, and coloured red are labelled APPLE, while objects that are long, thin, slightly curved cylinders coloured yellow are labelled BANANA. Based on this training data, if presented with a banana the algorithm should assess both shape and colour, conclude that the object is 97% BANANA and 3% APPLE, and therefore assign it to the banana category.

A helpful analogy that sums up Supervised Learning well is to think of the process as a school classroom. A teacher (the training data) provides students (the algorithm) with information in such a way that, when presented with a question, the students should be able to give the correct answer. In cases where students repeatedly make the same mistakes, the teacher can provide them with more information (additional training data) to correct those mistakes.
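As a minimal sketch of this idea in Python, a classifier can label a new object by comparing it with the labelled training examples. The measurements below are invented for illustration, and a simple nearest-neighbour rule stands in for whatever algorithm would be used in practice:

```python
import math

# Toy labelled training data: (length_cm, width_cm, redness 0-1) -> label.
# All of these numbers are illustrative, not real measurements.
training_data = [
    ((7.0, 7.5, 0.90), "apple"),
    ((7.5, 7.0, 0.80), "apple"),
    ((18.0, 3.5, 0.00), "banana"),
    ((20.0, 4.0, 0.10), "banana"),
]

def classify(features):
    """Label a new, unlabelled sample after its nearest labelled neighbour."""
    _, label = min(training_data,
                   key=lambda item: math.dist(item[0], features))
    return label

print(classify((19.0, 3.8, 0.05)))  # long, thin, not red -> banana
print(classify((7.2, 7.3, 0.85)))   # round, red -> apple
```

The key point is that the algorithm never sees a rule like "bananas are yellow"; it deduces the label of new data purely from the labelled examples it was trained on.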


Several techniques can be adopted by Supervised Learning algorithms, such as Region-Based Convolutional Neural Networks, Decision Trees and Support Vector Machines. However, it is possible to group them into two key categories:

  • Classification
  • Regression

Both categories share the aim of constructing a model capable of predicting an output based on training inputs. The difference between them lies in the outputs: Regression problems produce numerical outputs, while Classification problems produce categorical ones.

Take the previous fruit identification example: regardless of the specific algorithm, this is a Classification problem, as the output will be either APPLE or BANANA. If, however, the output were a person’s age or salary, it would be a Regression problem.
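To make the contrast concrete, here is a minimal Regression sketch: fitting a straight line to a handful of invented experience/salary figures by ordinary least squares, so that the model predicts a number rather than a category:

```python
# Invented training pairs: years of experience -> salary (in £k).
xs = [1, 2, 3, 4, 5]
ys = [30, 35, 40, 45, 50]

# Ordinary least squares for a single input feature:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The output is a number, not a category: predict the salary at 6 years.
print(slope * 6 + intercept)  # 55.0 for this perfectly linear toy data
```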

When selecting a Supervised Learning algorithm, consideration needs to be given to the bias and variance within the algorithm. A degree of flexibility is certainly a selling point for Machine Learning, but certain algorithms can be too flexible for the problem they are trying to address. Caution must also be exercised with regard to the accuracy, redundancy, heterogeneity (diversity of data) and linearity of the available data, as these will affect the selection process.

On behalf of our clients, Zircon has had the opportunity to work with a wide variety of Supervised Learning multi-class Classification algorithms, using frameworks such as Google TensorFlow and Intel OpenVINO, to perform tasks such as detecting and identifying assets, or recognising when members of the public are breaking the law while driving.

Unsupervised Learning

As the name suggests, Unsupervised Learning is the polar opposite of Supervised Learning. Where Supervised Learning utilises labelled data to train its algorithms, the data sets presented to Unsupervised systems are neither classified nor labelled, leaving the algorithm to act on the information without any guidance or training. Essentially, Unsupervised Learning systems are expected to identify hidden structures within the data and group unsorted information according to similarities, patterns or differences.

If we stick with the fruit analogy from earlier, we once again want to create a system that can recognise different types of fruit. Only this time we don’t carefully label the data presented to the system, meaning that the system knows nothing about the items it is being shown. So how can it arrange them into groups? One obvious possibility is to consider the physical appearance of the different fruits, specifically their colour. The system therefore arranges them on the basis of colour, potentially presenting the following results:

  • RED COLOUR GROUP – Apple, Cherries
  • GREEN COLOUR GROUP – Watermelon, Grapes

As there are multiple fruits in each colour group, a further distinction is needed between the options, so the system adds another factor, this time arranging the items based on a different physical characteristic, such as size.


Despite having no labelled data before this process, the system should now be able to distinguish between each of these fruits by grouping them in this manner. However, it is worth noting that human input is still required to identify the focus of each cluster. All the algorithm is doing is highlighting characteristics, it doesn’t know what the items represented by the data actually are.
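The colour-based grouping above can be sketched with a bare-bones, one-dimensional K-means-style clustering in Python. The colour values below are invented, and the fruit names are kept only so we can read the output; the algorithm itself only ever sees the numbers:

```python
import random

# Unlabelled data: the algorithm only sees one colour value per item
# (0 = green, 1 = red); the names exist purely for our benefit.
items = {"apple": 0.90, "cherries": 0.85, "watermelon": 0.15, "grapes": 0.20}

def kmeans_1d(values, k=2, iterations=10, seed=1):
    """Cluster 1-D values around k centroids (a bare-bones K-means)."""
    random.seed(seed)
    centroids = random.sample(values, k)
    for _ in range(iterations):
        # Assignment step: each value joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

centroids = kmeans_1d(list(items.values()))
groups = {}
for name, value in items.items():
    nearest = min(range(len(centroids)), key=lambda i: abs(value - centroids[i]))
    groups.setdefault(nearest, []).append(name)
print(sorted(groups.values()))  # [['apple', 'cherries'], ['watermelon', 'grapes']]
```

Note that the output is simply "these two items belong together"; it is still down to a human to recognise that one cluster is red fruit and the other green.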


As with Supervised Learning, there is a multitude of algorithms and techniques available for use, such as DBSCAN, K-means and Hebbian Learning, and once again there are two key categories into which they can be grouped:

  • Clustering
  • Association

Clustering, unsurprisingly, involves grouping data based on the similarity and dissimilarity between features or data points: the data points in one group, or cluster, will be similar to one another and dissimilar to the data points in other groups. Association, on the other hand, finds hidden relations between features or variables in a data set, e.g. when people purchase item A (baby food) they also purchase item B (nappies). Typically, Association algorithms record a count of the frequency of associations, in order to identify occasions where associations occur more often than would be expected by chance.
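The frequency-counting idea behind Association can be sketched in a few lines of Python. The shopping baskets below are invented; we count how often items appear together, then ask what fraction of the baskets containing one item also contain the other:

```python
from collections import Counter
from itertools import combinations

# Illustrative shopping baskets (invented data).
baskets = [
    {"baby food", "nappies", "milk"},
    {"baby food", "nappies"},
    {"milk", "bread"},
    {"baby food", "nappies", "bread"},
]

# Count how often each item, and each pair of items, appears.
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(pair for b in baskets for pair in combinations(sorted(b), 2))

def confidence(a, b):
    """Of the baskets containing item a, what fraction also contain item b?"""
    pair = tuple(sorted((a, b)))
    return pair_counts[pair] / item_counts[a]

print(confidence("baby food", "nappies"))  # 1.0: every baby-food basket has nappies
print(confidence("milk", "bread"))         # 0.5: half of the milk baskets have bread
```

Real Association algorithms such as Apriori build on exactly this kind of support and confidence counting, just at a much larger scale.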

As part of our internal R&D activities, we have recently started an investigation into several Unsupervised techniques, to test the limits of their capabilities in analysing big data.

Alternative Methodologies

While Supervised and Unsupervised Learning are by far the most popular and recognised approaches to developing a Machine Learning system, there are some alternatives out there. One such example is Reinforcement Learning.

Much like the methods used in training animals, Reinforcement Learning is based on the idea of taking suitable actions to maximise the reward in a given situation, e.g. manoeuvring from one side of a maze to the other in order to escape or to collect an item. As a degree of guidance must be given for an algorithm to determine increases and decreases in the reward rate, Reinforcement Learning is often bundled in with Supervised Learning. However, there are a few key features that set it apart as a unique approach. For example, Supervised algorithms are trained with the expected/correct answer, essentially meaning that a decision is made on the very first input provided. Reinforcement Learning, on the other hand, is all about making decisions sequentially: the end reward or task is a given, whilst the means of reaching that reward is established through repeated experience.

It is important to note that, whilst they operate on a very similar premise, Genetic Algorithms are not the same as Reinforcement Learning. Genetic Algorithms repeatedly make random alterations to a given population, selecting the top specimens in order to progress towards a reward, whereas Reinforcement Learning algorithms use a mathematically defined framework such as SARSA or Q-learning in order to progress.
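To illustrate the sequential, experience-driven nature of Reinforcement Learning, here is a tabular Q-learning sketch on a toy "corridor" environment. The environment, rewards and hyperparameters are all invented for illustration; real problems would, of course, be far larger:

```python
import random

# A toy "corridor": states 0..4, start at state 0, reward 1 for reaching
# state 4. Environment, rewards and hyperparameters are all invented.
N_STATES = 5
ACTIONS = (-1, +1)                        # step left / step right
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q towards reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The greedy policy learned purely from repeated experience: always step right.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Nothing ever tells the agent the "correct" action for any single state, which is exactly what distinguishes this from a Supervised approach; the policy emerges only from many episodes of trial, error and reward.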

There are so many more elements that could be covered under this topic, but if we were to talk any more about the potential algorithms and approaches that are being used in Machine Learning development today, this article would never come to an end. Should you have any questions regarding Machine Learning or the elements covered in today’s article, our team would be more than happy to provide you with answers.

For the third and final entry to this Machine Learning / AI blog series, one of our engineers has taken a look into the human element that still sits at the very core of these technological advances.