Logic of a neural network implementation


Hello, good morning.

I implemented a network with the following matrices: f(x) is the input vector (a 1x139 matrix), phi has dimension 1x20 (20 because of the number of signals I used to train it), and w holds the weights, with dimension 20x1.

import math
import numpy as np

phi_matrix_final = []
substract = 0

for k in range(0, 20):
    for item in range(0, 139):
        substract += (s[0, item] - phi[0, k])    # phi = 20,20

    mod = np.linalg.norm(substract)

    if mod > 0:
        substract = (mod * mod) * math.log10(mod)
        phi_matrix_final.append(substract)
    else:
        phi_matrix_final.append(mod)

    mod = 0
    substract = 0

Sn = 20, because of the number of training inputs.

The problem with this network is that it always returns values that are very close to each other, when the answers should range between 0 and 10.

Note: I use the r² log(r) function.
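
In other words, for one sample x each hidden unit k should compute phi(r) = r²·log10(r) on the distance r between x and its center, and the output is the weighted sum with w. This is a minimal sketch of the forward pass I have in mind (the names tps and centers and the random placeholders are just for illustration, not my real data):

import math
import numpy as np

def tps(r):
    # r^2 * log10(r) basis, defined as 0 at r = 0 to avoid log(0)
    return r * r * math.log10(r) if r > 0 else 0.0

centers = np.random.rand(1, 20)  # placeholder for the 20 centers (dimension 1x20)
w = np.random.rand(20, 1)        # placeholder for the weights (dimension 20x1)
x = 0.5                          # one input sample

# hidden layer: tps(|x - c_k|) for each of the 20 centers; output is hidden . w
hidden = np.array([tps(abs(x - centers[0, k])) for k in range(20)])
y = float(hidden @ w[:, 0])      # scalar network output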

    
asked by anonymous 24.06.2018 / 17:09

1 answer


You should have only one input to the network, with a training sample of 139 points, so phi has dimension 1x20 and the weights dimension 20x1.

What is not clear is why you have 139 outputs:

        substract += (s[0,item] - phi[0,k])          # phi = 20,20
                                                     # item ranges from 0 to 139, that is, 139 outputs?

I think the code should be the following, according to your equation:

import math
import numpy as np

x = np.zeros(139)        # 139 samples (fill in with your data)
s = np.zeros((139, 20))  # 139 samples x 20 expected outputs (fill in with your data)

mod = 0.0     # forcing an initial error in phi
modant = 1.0  # forcing an initial error in phi

phi = np.zeros((1, 20))  # 1 input and 20 outputs

ctr = 0

eta = 0.3  # damping

# relative-change stop criterion, written without a division so that
# modant == 0 cannot raise ZeroDivisionError
while abs(mod - modant) > 0.01 * abs(modant) and ctr < 10000:
    ctr += 1
    modant = mod

    for item in range(0, 139):
        for k in range(0, 20):
            phi[0, k] += eta * (x[item] - s[item, k])

        mod = np.linalg.norm(s)
        if mod > 0:
            # scalar times matrix: use *, not @
            phi = (mod * mod * math.log10(mod)) * phi
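
If the iterative update above still gives outputs that are all very close together, another option for this basis is the usual closed-form fit: build the 139x20 design matrix of r²·log10(r) values and solve for the 20x1 weights by least squares. This is only a sketch with assumed data; the evenly spaced centers are an assumption, not your signals:

import math
import numpy as np

def tps(r):
    # r^2 * log10(r), taken as 0 at r = 0
    return r * r * math.log10(r) if r > 0 else 0.0

x = np.linspace(0.0, 1.0, 139)       # placeholder for the 139 training inputs
s = np.random.rand(139) * 10.0       # placeholder targets in the 0..10 range
centers = np.linspace(0.0, 1.0, 20)  # assumed centers; 20 of the samples also work

# design matrix G, dimension 139x20: G[i, k] = tps(|x_i - c_k|)
G = np.array([[tps(abs(xi - ck)) for ck in centers] for xi in x])

# weights w (20x1) by least squares: G @ w ~ s
w, *_ = np.linalg.lstsq(G, s, rcond=None)

y_pred = G @ w                       # outputs for the training inputs

Fitted this way, y_pred spreads over roughly the same range as s instead of collapsing to nearly identical values.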
    
24.06.2018 / 17:45