Saturday, 19 June 2021

updating the weights for a perceptron step by step produces different result, why is that?

A post gives an approach for finding the (almost) best learning rate and initial weights so that a perceptron converges in the fewest iterations.

I modified the data slightly:

nearest_setosa = np.array([[1.9, 0.4],[1.6, 0.6]])

and the best result over 2 iterations I got is

eta = 0.2, initial weights = [0.7, 0.7], trained weights = [-0., 0.5]

which managed to separate the data points.
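The original training code isn't reproduced in the post; here is a minimal sketch of a standard epoch-based perceptron loop that arrives at the same trained weights, assuming the first two training points are the modified `nearest_setosa` points (labelled -1) and the third point is a hypothetical positive-class example invented here for illustration — the real data isn't shown.

```python
import numpy as np

def train_perceptron(x, y, w0, eta, epochs=2):
    # Standard perceptron rule: sweep the data in order each epoch and
    # update immediately on every misclassified example.
    w, b = np.asarray(w0, dtype=float), 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the line)
                w = w + eta * yi * xi
                b = b + eta * yi
    return w, b

# The two setosa points from the post (label -1) plus one hypothetical
# positive-class point; the real positive-class data isn't shown.
x_train = np.array([[1.9, 0.4], [1.6, 0.6], [3.0, 1.5]])
y_train = np.array([-1, -1, 1])

w, b = train_perceptron(x_train, y_train, w0=[0.7, 0.7], eta=0.2)
print(w, b)  # ends near w = [0, 0.5], b = -0.4
```

Note that this loop updates immediately after each misclassified example within a sweep, rather than re-scanning the whole data set from scratch after every single update.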

However, when I tried to reproduce the training step by step, I got a different set of trained weights: [-0.06, 0.54].

Here are the steps.

This initializes the parameters:

model_w = np.asarray([0.7, 0.7])
model_b = 0.0
eta = .2

This code finds the misclassified examples, the same way as the original approach:

for i in range(3):
    print(y_train[i] == predict(x_train[i]))

and then I got

False
False
True

So I updated the weights using the first example, again the same way as the original approach:

update_weights(0, True)

The new bias and weights were

-0.2
[0.32 0.62]
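These values are consistent with a subtractive update on the first example, assuming `x_train[0]` is the first `nearest_setosa` point, [1.9, 0.4]:

```python
import numpy as np

w, b, eta = np.array([0.7, 0.7]), 0.0, 0.2
x0 = np.array([1.9, 0.4])   # assumed value of x_train[0]

w_new = w - eta * x0        # [0.7 - 0.38, 0.7 - 0.08] = [0.32, 0.62]
b_new = b - eta             # 0.0 - 0.2 = -0.2
```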

and then I ran the predictions again

for i in range(3):
    print(y_train[i] == predict(x_train[i]))

and got

False
False
True

and then I updated the weights again

update_weights(0, True)

The new bias and weights were

-0.4
[-0.06  0.54]

which are different from the trained weights produced by the original code.

What am I missing?



from updating the weights for a perceptron step by step produces different result, why is that?
