There are several types of neural network models, with various features, developed for a variety of applications. The simplest is a single neuron with a linear activation function, as in Eqn. 1. Its three fundamental components are: the connection links that provide the inputs with associated weights w_i, for i = 1, …, n; an adder that sums all the weighted inputs, together with the bias associated with the neuron, to form the input to the activation function; and an activation function that maps this net input to the output of the neuron. Such a neuron,
designed as a neural network model as shown in Fig. 1, is a linear estimator and classifier. As an estimator, its capabilities are equivalent to those of simple and multiple linear regression models; as a classifier, it is similar to simple and multiple discriminant function analysis in statistics. The perceptron network shown in Fig. 2(a) was proposed by Rosenblatt in 1958. It is a linear classifier and is functionally similar to simple and multiple discriminant function analysis in statistics.
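The weighted sum, bias, and threshold activation described above, combined with Rosenblatt's error-driven weight update, can be sketched as follows. This is a minimal illustration, not the text's own code; the learning rate, epoch count, and AND-gate data are assumptions for demonstration:

```python
def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Rosenblatt perceptron: adder (weighted sum plus bias),
    hard-threshold activation, and error-driven weight updates."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            # adder: sum of weighted inputs plus the bias
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            # hard-threshold activation maps the net input to 0 or 1
            y = 1 if net >= 0 else 0
            # update weights and bias in proportion to the error
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# linearly separable toy data (logical AND), chosen for illustration
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
w, b = train_perceptron(X, T)
```

Because the data are linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating line within a few epochs.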
The multilayer perceptron (MLP) model shown in Fig. 2(b) is the most popular neural network model for nonlinear estimation and classification problems. This, in fact, is an extension of the perceptron network. The competitive networks shown in Fig. 3 are unsupervised networks that can find clusters in the data. The self-organizing map (SOM) competitive network, shown in Fig. 3, not only finds unknown clusters in the data but also preserves the topological structure of the data and clusters. Two well-known neural networks for time-series forecasting are the Jordan and Elman networks shown in Fig. 4. These networks contain feedback links that help to capture temporal effects.
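The unsupervised clustering performed by competitive networks can be sketched with a winner-take-all update: prototype vectors compete for each input, and only the nearest one moves toward it, so the prototypes drift to cluster centres. The prototype count, learning rate, epochs, and toy data below are illustrative assumptions, not from the text:

```python
def competitive_train(data, k=2, epochs=20, lr=0.2):
    """Minimal winner-take-all competitive learning sketch
    (hyperparameters are illustrative)."""
    # seed the k prototypes with the first k data points
    protos = [list(data[i]) for i in range(k)]
    for _ in range(epochs):
        for x in data:
            # competition: find the prototype nearest to x (squared distance)
            win = min(range(k),
                      key=lambda j: sum((p - xi) ** 2
                                        for p, xi in zip(protos[j], x)))
            # update: move only the winning prototype toward x
            protos[win] = [p + lr * (xi - p)
                           for p, xi in zip(protos[win], x)]
    return protos

# two obvious clusters; the prototypes settle near their centres
data = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5)]
centres = competitive_train(data)
```

A SOM extends this scheme by also updating the winner's neighbours on a grid, which is what preserves the topological structure of the data.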
Neural networks can further be classified based on the signal flow from neuron to neuron within the network, the training/learning method used, and the type of activation function used. In feedforward neural networks, the signal always flows in the forward direction, whereas in feedback or recurrent neural networks, the signal can flow in both forward and backward directions. MLP networks, radial basis function networks, support vector machines, the group method of data handling (GMDH) or polynomial nets, the generalized regression neural network, the generalized neural network, Kohonen's self-organizing feature map, the backpropagation neural network, and the Jordan and Elman networks are some examples of neural networks.
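The difference between the two signal-flow regimes can be seen in one time step of an Elman-style recurrent layer: besides the forward path from the inputs, the previous hidden state feeds back as an extra input, which is how the feedback links capture temporal effects. The weight names (Wx, Wh, bh), sizes, and values below are illustrative assumptions:

```python
import math

def elman_step(x, h_prev, Wx, Wh, bh):
    """One time step of an Elman-style recurrent layer (a sketch;
    weight names and sizes are illustrative)."""
    h = []
    for j in range(len(bh)):
        net = bh[j]
        net += sum(Wx[j][i] * xi for i, xi in enumerate(x))       # forward signal
        net += sum(Wh[j][k] * hk for k, hk in enumerate(h_prev))  # feedback signal
        h.append(math.tanh(net))
    return h

# run over a short sequence, carrying the hidden state forward in time
Wx, Wh, bh = [[0.5], [1.0]], [[0.1, 0.0], [0.0, 0.1]], [0.0, 0.0]
h = [0.0, 0.0]
for x_t in ([1.0], [0.0], [1.0]):
    h = elman_step(x_t, h, Wx, Wh, bh)
```

Dropping the Wh term reduces this to a purely feedforward layer, which makes the structural difference between the two classes explicit.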
Neural networks generally have three layers. The single or multiple inputs form the input layer, which is connected through its corresponding weights to the middle layer, called the hidden layer. There can be multiple hidden layers, and each layer can have more than one neuron. The last layer, which produces the output of the neural network, is called the output layer.
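The three-layer structure described above can be sketched as a single forward pass. This is a minimal illustration; the sigmoid hidden activation, linear output, and weight values are assumptions, not from the text:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass through input, one hidden, and output layer
    (sigmoid hidden units, linear output; a sketch only)."""
    # hidden layer: weighted sums of the inputs, passed through a sigmoid
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    # output layer: linear combination of the hidden activations plus bias
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# two inputs -> two hidden neurons -> one output (illustrative weights)
y = mlp_forward([1.0, 2.0],
                W1=[[0.5, -0.5], [0.3, 0.8]], b1=[0.1, -0.1],
                W2=[[1.0, -1.0]], b2=[0.2])
```

Stacking more rows in W1 adds hidden neurons, and inserting further weight matrices adds hidden layers, matching the layer structure described above.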
Keywords: Neuron Model, Neural Network Model, Competitive Networks, Perceptron, MLP, Feedback Neural Network, Classification
Dr. S. Mohan Mahalakshi Naidu
Associate Professor – Electronics & Telecommunication Engineering Department