What is Machine Learning in Simple Words

Machine learning is often described as the “technology of tomorrow being realized in the present”. From search engines to self-driving cars, it has become indispensable to modern life. A subset of artificial intelligence, machine learning is the ability of machines to perform complex tasks without being explicitly instructed on how to do so. Let us take a deep dive into how machines learn.

Machine learning is a vast discipline, and its practitioners have proposed dozens of finer classifications, each enjoying niche popularity. Broadly, however, researchers and the ML community categorize it into three distinct paradigms: supervised learning, unsupervised learning, and reinforcement learning.


The underlying philosophy is common to all three paradigms: the machine should perform tasks without step-by-step instructions. The real differentiator is the learning technique, that is, how the machine learns. Let us understand each of them with the aid of analogies from everyday life.


Supervised learning


Supervision is a common sight in our lives. Our school classrooms bank upon this mode: an instructor breaks down, for instance, how a linear equation is solved. She demonstrates examples, makes her students solve them, and then asks them to attempt the unsolved exercises from the textbook. This is typically how humans learn.

After sufficient practice, a learner may become impeccable. In the realm of ML, though, there isn’t an explicit instructor. To illustrate the point, consider the task of using a mixer grinder. It is a four-step process: put the ingredients into the wet jar, put on the lid, switch on the power, and let it grind for a while, after which the power is turned off. The last two steps are iterative and involve checking the fluidity or texture of the foodstuff.

As far as machine learning is concerned, we only supply it with the following information: Grind the “substance” using “the device”, with the help of “power”.

A trial-and-error process ensues, in which the machine follows the wrong order in most instances. It may switch on the power without putting the contents in the jar. It might carry out the task without putting on the lid. It may grind suboptimally (too much or too little, so that the desired result isn’t achieved).

It maps its performance against the stipulated target and keeps going. In the end, the machine learns the so-called right way. Owing to the sheer number of trials over the “training data”, it may end up with a wealth of useful insights; it may even devise a new method that humans are not aware of and thus become the most efficient grinder in history!

The process described above highlights the key components of a supervised ML model: supplying the inputs and the targets, and using an optimization algorithm to minimize (or, in specific cases, maximize) the objective function. Try discerning these components in the example above to form a clearer understanding.

To summarize, targets are provided along with the inputs. Through training, the model strives to reach those targets, with assistance from the optimization algorithm, which is guided by the magnitude of the objective function. The lower (more optimized) the objective function, the less optimization remains for the model to do.
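As a rough illustration of these components, here is a minimal Python sketch. The synthetic data, the linear model, and the learning rate are assumptions made purely for demonstration: inputs and targets are supplied, mean squared error plays the role of the objective function, and plain gradient descent acts as the optimization algorithm.

```python
# A minimal sketch of a supervised learning setup: inputs, targets,
# an objective function (mean squared error), and an optimizer (gradient descent).
# The data and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))                  # inputs
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.1, 100)      # targets (a noisy line)

w, b = 0.0, 0.0                                        # parameters to be learned
learning_rate = 0.1                                    # a hyperparameter

for step in range(500):
    predictions = w * X[:, 0] + b
    error = predictions - y
    objective = np.mean(error ** 2)                    # objective function (MSE)
    # Gradients of the objective with respect to the parameters
    grad_w = 2 * np.mean(error * X[:, 0])
    grad_b = 2 * np.mean(error)
    # Optimization step: move the parameters in the direction that lowers the objective
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final objective={objective:.4f}")
```

After enough steps, the learned parameters approach the ones used to generate the data, which is exactly the “reaching the target” described above.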

We can cross-check the progress the model has made by running it on the training data (or, better, a held-out validation set), observing its accuracy, and tinkering with the hyperparameters (the duration of training, the nature of the objective function, the optimization algorithm chosen, and so on). This is iterative and is done before running the model on the test data (which is much like an unseen problem or passage in a typical exam), to mimic how well it would work in a real-world situation.
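To make that workflow concrete, here is a hedged sketch using scikit-learn. The synthetic dataset, the choice of logistic regression, and the hyperparameter C being tuned are illustrative assumptions, not anything prescribed by the example above.

```python
# A sketch of the train/tune/test workflow, assuming a synthetic dataset
# and logistic regression as the model of choice.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic inputs and targets
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# Hold back "unseen" test data, analogous to the unsolved exam questions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Tinker with a hyperparameter (here, the regularization strength C) on the training data
for C in (0.01, 1.0):
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    print(f"C={C}: training accuracy = {model.score(X_train, y_train):.2f}")

# Only after settling on the hyperparameters do we look at the test data
final_model = LogisticRegression(C=1.0, max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```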


Unsupervised learning


Imagine a classroom again, this time without an instructor. While it may bring back fond memories of chaos and disorder, let us concentrate on the aspect of learning, by focusing on how we humans learn in this scenario. It is hard to gauge what one should learn in the absence of other information or instruction. Given the textbook (or the internet), one might end up learning practically anything, from the fall of the Soviet Union to the geology of the Mesozoic era.

As far as a machine is concerned, this is the case where we do not set an explicit target for the model. We give it the inputs and expect the machine to learn by obtaining a greater understanding of the dataset. This generally involves grouping the data by methods like clustering. In everyday life, this situation can be explained with the classic archer example.

We instruct the model “to shoot”. In the case of supervised learning, explicit targets would have been provided in the form of shooting boards, with a bullseye representing the highest achievable accuracy. For unsupervised learning, however, these are not available. By unavailability, one can imagine limited resources (as in limited or unlabelled data), which make the first paradigm inapplicable.

Coming back to the example, the robot archer keeps shooting indiscriminately in a targetless setting. After it has fired thousands, or possibly even millions, of arrows, we, as onlookers, would see groups of arrows.

These include:

  • The short arrows, which have landed relatively close to the shooting position
  • The sturdy, longer ones, which lie grounded at a larger radius
  • The broken, malfunctioning arrows, which are fewer in number and follow no specific pattern
  • Anomalies like crossbow bolts, which haven’t gone the distance since they were shot
    with the wrong kind of bow

In this case, the robot learns more about the data, and we gain invaluable insights, which can now be used in any way we desire. Do appreciate that this can be a key step in a multistep analysis, with supervised learning then applied with much more intuition and resolve once the actual targets are known.
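The archer analogy maps naturally onto clustering. Below is a small sketch, assuming we only record how far each arrow landed; the synthetic distances and the choice of k-means with three clusters are illustrative assumptions.

```python
# A sketch of the archer analogy as an unsupervised (clustering) problem.
# We only observe landing distances; no targets are supplied.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Landing distances (in metres) for three unlabelled kinds of arrows
short_arrows = rng.normal(loc=20, scale=3, size=200)
long_arrows = rng.normal(loc=60, scale=5, size=200)
broken_arrows = rng.uniform(low=0, high=80, size=20)   # no clear pattern

distances = np.concatenate([short_arrows, long_arrows, broken_arrows]).reshape(-1, 1)

# The model only looks for structure in the inputs
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(distances)
print("cluster centres (metres):", kmeans.cluster_centers_.ravel().round(1))
```

The recovered cluster centres correspond to the groups an onlooker would notice, even though the model was never told what the groups were.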


Reinforcement learning


All of us have heard of or seen the iconic Super Mario at one point or another. It’s a game in which the player is rewarded for bringing an ‘enemy’ down or collecting coins, powerups, one-ups, and mushrooms. Similarly, being attacked, falling off a cliff, or running out of time leads to disqualification (styled as ‘death’). All of this is part of the ultimate objective of rescuing Princess Peach from Bowser.

The underlying principle of reinforcement learning matches such games. A reward system is introduced, and the machine learns as follows: the model is rewarded whenever it acts in accordance with the objective. To an extent, it resembles supervised learning, the difference being that instead of minimizing an objective function, a reward is maximized.

In the grinder example, for instance, we may opt to reward the machine whenever the order it follows matches common knowledge at any stage. Similarly, in the archery example, every arrow that lands closer to the bullseye than the previous best would earn a reward.

In everyday terms, a chocolate or a gift is understood to be a prize, realizing positive reinforcement. The idea is well known in behavioral psychology and was carried over into ML owing to the popular aspiration of making machines learn the way humans do.

At this juncture, it ought to be mentioned that we may also opt to bring in punishment (often loosely called negative reinforcement), that is, penalizing the model when it acts against the objective. Reward and punishment can be applied simultaneously, depending on the problem’s needs.
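To see reward maximization, together with reward and punishment, in code, here is a minimal Q-learning sketch on a made-up corridor world. The environment, the reward values, and the learning parameters are all assumptions chosen purely for illustration.

```python
# A tiny reinforcement-learning sketch: a Q-learning agent in a five-cell corridor.
# The rightmost cell gives a positive reward (the "princess"); the leftmost cell
# gives a negative reward (the "cliff"). All values here are illustrative assumptions.
import random

n_states = 5                      # positions 0..4; 0 is the cliff, 4 is the goal
actions = (-1, +1)                # step left or step right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for episode in range(500):
    state = 2                                         # start in the middle
    while state not in (0, n_states - 1):
        # Mostly exploit the best-known action, occasionally explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = state + action
        # Positive reinforcement at the goal, punishment at the cliff
        reward = 1.0 if next_state == n_states - 1 else (-1.0 if next_state == 0 else 0.0)
        best_next = max(q[(next_state, a)] for a in actions)
        # Q-learning update: nudge the estimate towards reward + discounted future value
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print("preferred action from the middle of the corridor:", max(actions, key=lambda a: q[(2, a)]))
```

After training, the agent prefers stepping towards the rewarding end, which is the reward-maximizing behaviour described above.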

Conclusion

The list above is not exhaustive as far as ML categories are concerned. There isn’t unanimous agreement on where cutting-edge techniques like deep learning, natural language processing (NLP), speech recognition, neural networks (NNs), RNNs, etc. fit into the picture. Some treat them as separate applications altogether, even though they may build upon one or more of the three paradigms above.

Also, while techniques like logistic and linear regression, decision trees, and random forests are subsets of supervised learning (and are widely used in predictive analytics), almost all clustering techniques are classified under unsupervised learning. At the same time, these can also be implemented using neural networks.

All in all, as part of your data science journey, take one step at a time. Starting with supervised learning is preferable; building your way up to the more complex concepts comes naturally that way. At Pickl.Ai, learn all of it from scratch in a structured manner, with a judicious mix of theory and application.

Ayush Pareek

I am a programmer, who loves all things code. I have been writing about data science and other allied disciplines like machine learning and artificial intelligence ever since June 2021. You can check out my articles at pickl.ai/blog/author/ayushpareek/

I have been doing my undergrad in engineering at Jadavpur University since 2019. When not debugging issues, I can be found reading articles online that concern history, languages, and economics, among other topics. I can be reached on LinkedIn and via my email.