Thursday, October 30, 2014

Using neurolab in Python to train a multi-layer neural network

Recently, I've been learning machine learning at my university. Our teacher didn't limit us to a specific programming language for solving the problems, so I chose Python, the language I'm most familiar with.
Writing the code for a neural network from scratch is not so easy for me right now (teacher, please forgive me), so I searched the internet for an easy-to-use package.
In the machine learning area, the most famous and fully featured package is sklearn; I did my regression tree homework with it. But sklearn only has a Bernoulli Restricted Boltzmann Machine. I don't know what that is, and apparently that kind of neural network doesn't fit the requirements of my machine learning homework.
So I turned to the neurolab package instead. The problem is that its documentation isn't easy to understand. It looks like the package was developed by a Russian, and his or her English is no better than mine (by the way, I'm Chinese).
Before I could use it, I had to devote a lot of time to reading the documentation and examples. That is not because the package is badly designed, but because of the way its usage is explained. Anyway, I finally figured it out and finished my homework.
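For anyone else starting out with it, the core workflow boils down to three calls: newff builds a feed-forward network, train fits it, and sim runs it. Here is a minimal sketch along the lines of the sine-fitting example in neurolab's documentation (the layer sizes and training parameters are only illustrative):

import numpy as np
import neurolab as nl

# 20 samples of a scaled sine curve, as column vectors of inputs and targets
x = np.linspace(-7, 7, 20)
inp = x.reshape(20, 1)
tar = (np.sin(x) * 0.5).reshape(20, 1)

# feed-forward net: one input ranging over [-7, 7], 5 hidden neurons, 1 output
net = nl.net.newff([[-7, 7]], [5, 1])

# train with the default trainer; train returns the error trajectory
error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

# run the trained network on the training inputs
out = net.sim(inp)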

OK, here is the description of the homework:

A NN has input and output as follows:
  Input    Output
10000000 10000000
01000000 01000000
00100000 00100000
00010000 00010000
00001000 00001000
00000010 00000010
00000001 00000001
Design a one-hidden-layer NN with 2, 3, 4 hidden nodes
respectively. Use a programming language (matlab, R, etc.) to
implement the backpropagation algorithm to update the weights.
1) Show the hidden node values for each design. (10 points)
2) Compare the sum of squared error for each design. (10 points)
3) Plot the trajectory of the sum of squared error for each output unit for each design. (10 points)
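Notice that the table is just the 8×8 identity matrix with the 00000100 row left out, and that the target is identical to the input. Building the training data therefore takes only a couple of numpy lines (this is exactly what the full program below does):

import numpy as np

inputs = np.eye(8)                # the eight one-hot patterns
inputs = np.delete(inputs, 5, 0)  # drop 00000100 to match the 7-row table
targets = inputs                  # identity mapping: output equals input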
Here is the code. I think I've commented it well enough for others to understand it:
import neurolab as nl
import numpy as np  
import pylab as pl

def NN_for_234_hidden_node():
    '''
    Train a neural network with one hidden layer and 2, 3, 4 hidden nodes,
    then plot and compare the sum of squared error for each design.
    '''
 
    inputs = np.eye(8)  # the eight one-hot patterns serve as both input and target
    inp = inputs[2]  # pattern used later to test the trained network
    inp = inp.reshape(1, 8)  # reshape to a 1x8 2-D array (one sample, eight features)
    inputs = np.delete(inputs, 5, 0)  # delete 00000100 to match the homework table
    print(inputs)
    error = []  # one SSE trajectory per design
    # node_number denotes the number of nodes in the hidden layer
    for node_number in range(2, 4 + 1):
        # [[0, 1]] * 8 means there are EIGHT input nodes, each ranging from 0 to 1
        # [node_number, 8] means the hidden layer has node_number neurons and the output layer has 8
        # transf=[nl.trans.LogSig()] * 2 makes both layers use the log-sigmoid
        # transfer function (the default for newff is tan sigmoid)
        net = nl.net.newff([[0, 1]] * 8, [node_number, 8], transf=[nl.trans.LogSig()] * 2)
        # use Resilient Backpropagation (RPROP) to train the network;
        # of all the provided backpropagation trainers it works best in this example
        net.trainf = nl.train.train_rprop
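        # note: neurolab also ships other trainers (e.g. nl.train.train_gd,
        # train_gdm, train_gdx, train_bfgs, train_cg) that can be plugged in
        # the same way; RPROP just converged best for me on this problem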
        
        # train the network and record the error trajectory;
        # net.train reports the Sum of Squared Error by default
        error.append(net.train(inputs, inputs, show=0, epochs=3000))
        print("The input array is: ", inp)
        out = net.sim(inp) # compute output for input "inp"
        print(out)
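        # (added check) the output should round back to the one-hot input pattern
        print("Rounded output:", out.round().astype(int))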
  
        # optional debugging: uncomment to compare the network output
        # with the expected result
        # pl.plot(range(8), inp[0])
        # pl.plot(range(8), out[0])
        # pl.show()
        # print(net.layers[0].np)  # dump the first layer's weights and biases
        sigmoid = nl.trans.LogSig()  # the same transfer function used in the network
         
        # compute the value of each hidden node for every input pattern:
        # net.layers[0].np holds the first layer's weights 'w' and biases 'b'
        for inputs_idx in range(len(inputs)):
            result = []
            for perce_idx in range(node_number):
                result.append(sigmoid((net.layers[0].np['w'][perce_idx] * inputs[inputs_idx]).sum()
                                      + net.layers[0].np['b'][perce_idx]))
            print("For input", inputs_idx, "the hidden node values are:")
            print(result)
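
        # (added) also print the final SSE this design reached, so the three
        # networks can be compared numerically as well as in the plot below
        print(node_number, "hidden nodes: final SSE =", error[-1][-1])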
                
    # plot the error trajectories of the 3 different networks
    for i in range(len(error)):
        label_str = str(i + 2) + " nodes"
        pl.plot(range(len(error[i])), error[i], label=label_str)
    pl.legend()
    pl.xlabel("number of iterations")
    pl.ylabel("sum of squared error")
    pl.title("Sum of squared error for NN with different hidden nodes")
    pl.show()

NN_for_234_hidden_node()