There are many different parts to any ML model and many ways of building one. Broadly, these fall into supervised and unsupervised learning methods. Today we will look at a supervised method: the Perceptron. The Perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function that decides whether or not an input (represented by a vector of numbers) belongs to a specific class. The Perceptron is a type of linear classifier (a classification algorithm that makes its predictions based on a linear predictor function, combining a set of weights with the feature vector).
You can think of a perceptron as a single node in a bigger network that is meant to mimic a neuron inside a human brain. In a human brain, thoughts are very often a response to sensory input, which can be smell, taste, touch, sight, or hearing. The thought we get is a direct result of the data we have been exposed to. For example, one person eats a snail and learns that they do not like it, while another eats one and loves it. They both got the same input but came to different conclusions. The same is true for perceptrons: if we give the same input data to two different perceptrons, they can come to different conclusions/outputs.
So we can control the thinking pattern of a perceptron with different weights and thresholds. Weights determine how much of an impact a particular input value has on the result as a whole.
If we have the values:
Here we can see that X2 has the largest weight compared to X0 or X1, so its impact on the result as a whole is much more significant than the others, even though its value is lower than X1's. Given this, we know that X2 is much more important than the other two inputs.
In order to get a logical conclusion we need to apply the weights to the inputs. The easiest way of doing this is to multiply each input value by its weight and then add all the results together to get a weighted sum. For the example above the weighted sum is 0.31.
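The weighted sum can be sketched in a few lines of Python. The original input and weight values are not shown above, so the numbers below are assumptions chosen only so that the sum comes out to the 0.31 used in the text:

```python
# Hypothetical inputs and weights; chosen so the weighted sum matches
# the 0.31 from the text. X2 has the largest weight but a value lower than X1.
inputs  = [0.6, 0.8, 0.3]   # x0, x1, x2
weights = [0.1, 0.2, 0.3]   # w0, w1, w2

# Multiply each input by its weight, then add all the results together.
weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(round(weighted_sum, 2))  # 0.31
```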
After getting the weighted sum we need to evaluate it and see if it has surpassed a certain threshold/bias.
In order to see if the threshold was reached, we use something called an activation function. There are many different activation functions, such as step, unit step, linear, and logistic.
The most basic of these is the step function. Using this function, if we have a threshold of 0.5 and our sum is bigger than 0.5, the function returns True/1; if the opposite occurs and our sum is less than 0.5, it returns False/0. You might be asking how the threshold is chosen: we get to choose it, and we also get to choose the initial set of weights. If we were to change the weights in the example above, we could end up with a completely different answer than before. So accuracy depends on how well one can adjust the weights to better fit the input data.
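A step function like the one just described can be written as a one-line check against the threshold. This is only a sketch of the idea; the function name and default threshold are assumptions:

```python
def step(weighted_sum, threshold=0.5):
    """Return 1 (True) if the sum is bigger than the threshold, else 0 (False)."""
    return 1 if weighted_sum > threshold else 0

print(step(0.31))  # 0: the sum falls short of the 0.5 threshold
print(step(0.75))  # 1: the sum passes the threshold
```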
Below is example code and execution of a very basic perceptron.
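The original listing is not reproduced here, so the following is a minimal sketch of a perceptron matching the description: it prints the running weighted sum as each input is multiplied by its weight, then applies a step activation with a 0.5 threshold. The function name, input values, and weights are assumptions, picked so the sum lands on the 0.31 discussed next:

```python
def perceptron(inputs, weights, threshold=0.5):
    """A very basic perceptron: weighted sum followed by a step activation."""
    weighted_sum = 0.0
    for x, w in zip(inputs, weights):
        weighted_sum += x * w
        print(f"input {x} * weight {w} -> running sum {weighted_sum:.2f}")
    # Step activation: fire (1) only if the sum passes the threshold.
    return 1 if weighted_sum > threshold else 0

inputs  = [0.6, 0.8, 0.3]   # hypothetical feature values
weights = [0.1, 0.2, 0.3]   # hypothetical weights
print(perceptron(inputs, weights))  # 0: the 0.31 sum never reaches 0.5
```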
We can see the output showing what is happening: we keep adding input * weight terms to the weighted sum until we reach 0.31. Given that our threshold is 0.5, this perceptron does not make it to the threshold, so it returns 0, or False. A real-world example of this would be classifying a chicken from its features. If we were to give the inputs/features of an ostrich instead of a chicken, we might get close to a chicken, but since it isn't one, the perceptron would return False. Characteristics like the size and weight of the bird would be the inputs, where something like the weight or size would carry a higher input weight than the color of the chicken.