
A tabu search algorithm for the training of neural networks

  • Theoretical Paper
Journal of the Operational Research Society

Abstract

The most widely used training algorithm for neural networks (NNs) is back propagation (BP), a gradient-based technique that requires significant computational effort. Metaheuristic search techniques such as genetic algorithms, tabu search (TS) and simulated annealing have recently been used to cope with the major shortcomings of BP, such as its tendency to converge to a local optimum and its slow convergence rate. In this paper, an efficient TS algorithm that employs different strategies to balance intensification and diversification is proposed for the training of NNs. The proposed algorithm is compared with other metaheuristic techniques from the literature on published test problems and is found to outperform them in the majority of the test cases.
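The abstract does not describe the algorithm's mechanics, so the following is only a minimal, generic sketch of tabu search applied to NN weight training, not the authors' method. The network architecture (one hidden layer with tanh units), the move structure (perturbing a single weight), and the parameter values (neighbourhood size, tabu tenure, step size) are all assumptions made for illustration.

```python
# Minimal, illustrative tabu search for training a one-hidden-layer network.
# This is a generic sketch, NOT the paper's algorithm; names and parameter
# values (step size, tabu tenure, neighbourhood size) are assumptions.
import numpy as np

def mse(weights, X, y, n_hidden):
    """Mean squared error of a one-hidden-layer net with tanh units."""
    n_in = X.shape[1]
    # Unpack the flat weight vector into the two layers.
    w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = weights[n_in * n_hidden:].reshape(n_hidden, 1)
    pred = np.tanh(X @ w1) @ w2
    return float(np.mean((pred - y) ** 2))

def tabu_search_train(X, y, n_hidden=5, iters=2000, n_neighbours=20,
                      tenure=15, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_w = X.shape[1] * n_hidden + n_hidden          # total number of weights
    current = rng.normal(scale=0.5, size=n_w)
    best, best_err = current.copy(), mse(current, X, y, n_hidden)
    tabu = {}                                        # weight index -> iteration at which it is released
    for it in range(iters):
        candidates = []
        for _ in range(n_neighbours):
            i = rng.integers(n_w)                    # a move = perturb one weight
            neighbour = current.copy()
            neighbour[i] += rng.normal(scale=step)
            err = mse(neighbour, X, y, n_hidden)
            # Aspiration: accept a tabu move if it beats the best-so-far.
            if tabu.get(i, 0) <= it or err < best_err:
                candidates.append((err, i, neighbour))
        if not candidates:
            continue
        err, i, current = min(candidates, key=lambda c: c[0])
        tabu[i] = it + tenure                        # make this weight tabu for a while
        if err < best_err:
            best, best_err = current.copy(), err
    return best, best_err

# Example usage on the XOR problem, a common NN training benchmark.
if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    w, err = tabu_search_train(X, y)
    print(f"final training MSE: {err:.4f}")
```

In this sketch the tabu attribute is simply the index of the most recently perturbed weight, and the aspiration criterion overrides tabu status whenever a candidate improves on the best solution found so far; the paper's own intensification and diversification strategies would take the place of these simple choices.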



Author information

Corresponding author

Correspondence to B Dengiz.


About this article

Cite this article

Dengiz, B., Alabas-Uslu, C. & Dengiz, O. A tabu search algorithm for the training of neural networks. J Oper Res Soc 60, 282–291 (2009). https://doi.org/10.1057/palgrave.jors.2602535


  • DOI: https://doi.org/10.1057/palgrave.jors.2602535
