Application of Adaptive Neural Network in Fault Diagnosis of Generator Sets

Because the relationship between vibration fault symptoms and fault features of a steam turbine generator set is complex and non-linear, fault diagnosis and identification are difficult. In recent years, the artificial neural network (ANN) [8], with its capabilities for association, memory, storage and learning, has attracted extensive attention in unit vibration fault diagnosis. Commonly used ANNs include the BP network and the RBF network; the BP network is the most mature and most widely applied.

To improve the convergence speed of BP network learning and avoid falling into local minima, a momentum term is usually added to the weight-adjustment formula. This paper points out that when the learning rate and the momentum factor are mismatched, adding the momentum term speeds up learning but causes the error curve to oscillate. Two methods to avoid this oscillation are proposed: (1) adaptive adjustment of the learning rate and momentum factor according to the error; (2) an error-approximation asymptotic-shrinkage learning algorithm.

1 BP network topology and its algorithm [3]
Figure 1 is a typical three-layer BP network topology diagram.

In the BP network topology, N, L and M are the numbers of neurons in the input layer, the hidden layer and the output layer respectively; w_ij is the connection weight between the i-th neuron of the input layer and the j-th neuron of the hidden layer, and v_jk is the connection weight between the j-th neuron of the hidden layer and the k-th neuron of the output layer, where i = 1, 2, ..., N, j = 1, 2, ..., L and k = 1, 2, ..., M. For a given training sample set, the batch method is used to construct the error function. Figure 2 shows the network learning convergence curve.
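The three-layer topology described above can be sketched as a forward pass in Python. This is a minimal illustration, not the paper's implementation; the 9-12-9 shape matches the example network used later, bias terms are omitted for brevity, and the random weights are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    """The S-shaped activation function used by the BP network."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W, V):
    """Forward pass of a three-layer BP network.

    x: input vector of length N
    W: (N, L) input-to-hidden weights (w_ij)
    V: (L, M) hidden-to-output weights (v_jk)
    Returns the hidden-layer and output-layer activations.
    """
    h = sigmoid(x @ W)  # hidden-layer outputs
    y = sigmoid(h @ V)  # output-layer outputs
    return h, y

# Illustrative 9-12-9 structure with random weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(9, 12))
V = rng.normal(scale=0.1, size=(12, 9))
h, y = forward(rng.random(9), W, V)
print(y.shape)  # (9,)
```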

The system error is E = (1/2) * sum_p sum_k (d_pk - y_pk)^2, where d_pk is the ideal output and y_pk is the actual output of the k-th neuron of the output layer for the p-th sample.
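The batch error function above is a standard sum-of-squared-errors; as a sketch (the function name `batch_error` is mine, not the paper's):

```python
import numpy as np

def batch_error(d, y):
    """Batch sum-of-squared-errors over all samples and output neurons.

    d, y: arrays of shape (num_samples, num_outputs) holding the ideal
    outputs d_pk and the actual outputs y_pk respectively.
    Implements E = 0.5 * sum_p sum_k (d_pk - y_pk)**2.
    """
    return 0.5 * np.sum((d - y) ** 2)

# Example: two samples, two output neurons.
d = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([[0.8, 0.2], [0.1, 0.9]])
print(batch_error(d, y))  # 0.5 * (0.04 + 0.04 + 0.01 + 0.01) = 0.05
```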

The weights are adjusted with a momentum term, w_jk(t+1) = w_jk(t) + eta * delta_k * y_j + alpha * dw_jk(t), where eta is the learning rate and alpha the momentum factor. To avoid entering the saturation region of the sigmoid curve, when the actual output of the sigmoid function is less than 0.01 or greater than 0.99 during network learning, its output value is taken directly as 0.01 or 0.99.

2 Adaptive BP neural network algorithm and oscillation of the training error

2.1 BP network algorithm with adaptive learning rate
In the BP algorithm, how to choose an appropriate learning rate, which determines the convergence speed of network learning, has always been a topic of discussion. Theoretical formulas for the learning rate exist, but their computation is heavy and their application is limited. This paper uses the following adjustment rule: when the error increment is positive, the learning rate is reduced; otherwise, the learning rate is increased. The example shows that Equation (3) can effectively improve the convergence speed of network learning, but when the momentum factor is mismatched, the error curve oscillates. Using the original data in Table 1, with network structure 9-12-9, initial learning rate eta = 0.75, adjustment parameter 0.15 and momentum factor alpha = 0.85, the figure shows the resulting oscillating convergence curve.

3 Error-approximation asymptotic-shrinkage learning algorithm
For a sample p and a corresponding output neuron, if the difference between the ideal output and the actual output is greater than the error approximation, the corresponding weights are adjusted; if the difference is smaller than the error approximation, the error signal back-propagated to that neuron is set to zero, and no weight adjustment is made for sample p at that output neuron. Since each training pass adjusts only the neurons with large output errors, a larger learning rate and momentum factor can be selected; the neurons with small output errors are not adjusted, over-learning does not occur, and the output error does not fluctuate.
Therefore, this method can greatly improve the convergence speed, and the error convergence curve is stable and free of oscillation.
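The two ingredients just described, the error-driven learning-rate adjustment and the momentum-term weight update, can be sketched as follows. The scaling factors `up` and `down` are illustrative assumptions; the paper's Equation (3) is not reproduced verbatim here.

```python
def adapt_lr(lr, prev_error, curr_error, up=1.05, down=0.7):
    """Adjust the learning rate from the sign of the error increment:
    shrink it when the error grew, grow it when the error fell.
    The factors `up` and `down` are illustrative choices."""
    return lr * down if curr_error > prev_error else lr * up

def momentum_step(w, grad, prev_dw, lr, alpha):
    """One weight update with a momentum term:
    dw(t) = -lr * grad + alpha * dw(t-1), where -lr * grad corresponds
    to the eta * delta_k * y_j term of the update formula."""
    dw = -lr * grad + alpha * prev_dw
    return w + dw, dw

# Illustrative step with the example's eta = 0.75 and alpha = 0.85.
w, dw = momentum_step(w=1.0, grad=0.5, prev_dw=0.1, lr=0.75, alpha=0.85)
print(w, dw)  # 0.71 -0.29
```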

The choice of the error-approximation value is also extremely important. If it is too small, too many neurons must be adjusted on each pass, which hurts training speed and the stability of the error curve. If it is too large, too few neurons are adjusted on each iteration of network training, so the network cannot truly learn the underlying mapping between input and output modes; in general, a value less than 0.5 is taken. For simple fault diagnosis problems with few output neurons, the error approximation can be chosen smaller. Conversely, for complex fault diagnosis problems with more output neurons, the error approximation should start larger and then shrink gradually as the error decreases; this is the asymptotic-shrinkage learning algorithm for the error approximation.
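The thresholding and shrinkage described above can be sketched as follows. The shrinkage factor and floor value are illustrative assumptions, not values from the paper.

```python
import numpy as np

def masked_output_delta(d, y, eps):
    """Back-propagated output error with an error-approximation
    threshold eps: output neurons whose |d - y| <= eps get a zero
    error signal and therefore receive no weight adjustment."""
    err = d - y
    return np.where(np.abs(err) > eps, err, 0.0)

def shrink_eps(eps, factor=0.8, floor=1e-3):
    """Asymptotically shrink the threshold as training error falls;
    `factor` and `floor` are illustrative choices."""
    return max(eps * factor, floor)

# Only the two neurons with large errors produce a nonzero signal.
delta = masked_output_delta(np.array([1.0, 0.0, 1.0]),
                            np.array([0.98, 0.3, 0.5]), eps=0.1)
print(delta)  # [ 0.  -0.3  0.5]
```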

Using the original data in Table 1, with momentum factor alpha = 0.9 and error approximation 0.022, and all other conditions unchanged, the figure shows the network learning convergence curve. As can be seen from the figure, the network converges faster and there is no oscillation.

4 Fault diagnosis and identification of the turbine generator set
The vibration signal of a unit's No. 3 bearing is transformed by FFT, the amplitude of each frequency band is normalized to obtain the fault symptom vector, and this vector is input to the trained network. The network output is the membership degree of the symptom relative to each fault, as shown in Table 2. Table 2 compares the error-approximation asymptotic-shrinkage learning algorithm with the adaptive learning-rate algorithm over the fault categories: unbalance, rubbing, misalignment, bearing and journal eccentricity, rotor crack, coupling fault, subharmonic resonance, oil-film oscillation and looseness. According to the maximum-membership principle, the fault of the unit can be diagnosed as "misalignment" and "bearing and journal eccentricity". This conclusion was confirmed by actual maintenance.
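The maximum-membership decision step can be sketched as below. The fault names follow the categories above, but the membership values and the 0.5 reporting threshold are illustrative assumptions, not figures from the paper's Table 2; reporting every class above the threshold lets a compound fault surface as multiple diagnoses, as happened here.

```python
def diagnose(memberships, threshold=0.5):
    """Select fault classes by membership degree: rank the classes
    and report every one whose membership reaches the threshold,
    so a compound fault can yield more than one diagnosis."""
    ranked = sorted(memberships.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, m in ranked if m >= threshold]

# Illustrative memberships (not the paper's actual Table 2 values).
example = {
    "misalignment": 0.91,
    "bearing and journal eccentricity": 0.85,
    "unbalance": 0.12,
    "oil-film oscillation": 0.04,
}
print(diagnose(example))  # ['misalignment', 'bearing and journal eccentricity']
```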

It can also be seen from Table 2 that the improved network's classification and recognition performance is good, and its recognition of compound faults is superior to that of the traditional BP network diagnosis method.

5 Conclusions
A BP neural network with adaptive learning rate and momentum factor is proposed. The example shows that, to a certain extent, it improves the learning convergence speed without causing the error convergence curve to oscillate.

An error-approximation asymptotic-shrinkage learning algorithm for BP neural networks is proposed. By setting an error-approximation threshold, each training pass adjusts only the weights of the output neurons with large output errors. The example shows that this algorithm converges quickly, its convergence curve does not oscillate, and its classification and recognition performance is good; its recognition of compound faults is better than that of the traditional BP network diagnosis method.
