In 1957, Frank Rosenblatt, an American psychologist, introduced the perceptron and later built the Mark I Perceptron, a custom machine for image recognition. This groundbreaking device consisted of three layers, and it adjusted its weights using potentiometers driven by electric motors. In essence, it was a hardware implementation of an artificial neural network.

Rosenblatt used this machine to recognize letters and to classify photos of people as male or female.
In hindsight, this was the origin of supervised learning algorithms, which are broadly divided into two classes: regression and classification.
In my previous blog, we explored how neural networks can predict continuous values through regression; here we will discuss their ability to make discrete choices.
Perceptron
The perceptron is a linear classifier that separates data using a hyperplane.
This is a single-layer perceptron, meaning it has only an input layer and an output layer with no hidden layers (and can therefore only perform linear classification).
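With two inputs, that hyperplane is simply the line where the weighted sum equals zero; points on either side of it receive opposite labels:
$$ w_1x_1 + w_2x_2 + b = 0 $$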
Initialize Weights & Bias: Initially, both are set to zero.
self.weights = np.zeros(2)
self.bias = 0
Computing Output: We take the input, compute its dot product with the weights, add the bias, and pass the result to the activation function.
$$ y = \operatorname{sign}(w_1x_1 + w_2x_2 + b) $$
Here, the sign function is used as the activation:
$$ f(x) = \begin{cases} +1 & \text{if } x \geq 0 \\ -1 & \text{if } x < 0 \end{cases} $$
def activation(self, z):
    # Sign activation: +1 if z >= 0, else -1
    return 1 if z >= 0 else -1

linear_output = np.dot(x_i, self.weights) + self.bias
y_predicted = self.activation(linear_output)
Updating Weights: The weights are updated only when an example is misclassified.
$$ w_i = w_i + \eta\,(y_{true} - y_{pred})\,x_i \\ b = b + \eta\,(y_{true} - y_{pred}) $$
if y_true != y_predicted:
    update = lr * (y_true - y_predicted)
    self.weights += update * x_i
    self.bias += update
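As a quick sanity check of the rule (the numbers here are purely illustrative, not from the original): suppose η = 0.1, y_true = +1, y_pred = −1, and x_i = (2, 1). Then:
$$ w \leftarrow w + 0.1\,(1 - (-1))\,(2, 1) = w + (0.4,\ 0.2), \qquad b \leftarrow b + 0.2 $$
The weights move in the direction of x_i, pushing the next prediction for this point toward +1.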
Iterate Until Convergence: We repeat this process over the training set for multiple epochs, ideally until no example is misclassified.
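
Putting the pieces together, here is a minimal sketch of the whole perceptron. The class layout, the hyperparameter names (lr, n_epochs), and the toy AND-gate dataset are my own choices for illustration, not from the original post:

```python
import numpy as np

class Perceptron:
    def __init__(self, lr=0.1, n_epochs=20):
        self.lr = lr              # learning rate (eta)
        self.n_epochs = n_epochs  # passes over the training data
        self.weights = None
        self.bias = 0

    def activation(self, z):
        # Sign function: +1 if z >= 0, else -1
        return 1 if z >= 0 else -1

    def fit(self, X, y):
        # Initialize weights and bias to zero
        self.weights = np.zeros(X.shape[1])
        self.bias = 0
        for _ in range(self.n_epochs):
            for x_i, y_true in zip(X, y):
                # Forward pass: weighted sum plus bias, then sign
                linear_output = np.dot(x_i, self.weights) + self.bias
                y_predicted = self.activation(linear_output)
                # Update only on misclassification
                if y_true != y_predicted:
                    update = self.lr * (y_true - y_predicted)
                    self.weights += update * x_i
                    self.bias += update

    def predict(self, X):
        return np.array([self.activation(np.dot(x_i, self.weights) + self.bias)
                         for x_i in X])

# Toy dataset: the AND gate, with labels in {-1, +1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

clf = Perceptron(lr=0.1, n_epochs=20)
clf.fit(X, y)
print(clf.predict(X))  # should print [-1 -1 -1  1] once training has converged
```

Because the AND data are linearly separable, the training loop is guaranteed to stop misclassifying after finitely many updates (the perceptron convergence theorem).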

