Abstract
The most widely used training algorithm for neural networks (NNs) is back-propagation (BP), a gradient-based technique that requires significant computational effort. Metaheuristic search techniques such as genetic algorithms, tabu search (TS) and simulated annealing have recently been used to address major shortcomings of BP, such as its tendency to converge to a local optimum and its slow convergence rate. In this paper, an efficient TS algorithm that employs different strategies to balance intensification and diversification is proposed for the training of NNs. The proposed algorithm is compared with other metaheuristic techniques from the literature on published test problems and is found to outperform them in the majority of the test cases.
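The TS ingredients named in the abstract (neighbourhood moves over the weight space, a short-term tabu memory, and an aspiration criterion that lets a tabu move through when it improves on the best solution found) can be illustrated on a toy one-neuron network. The sketch below is not the authors' algorithm: the move operator, tabu-list encoding, and all parameter values are illustrative assumptions.

```python
import math
import random

def mse(w, data):
    # Mean squared error of a single sigmoid neuron: y = sigmoid(w0*x + w1).
    err = 0.0
    for x, t in data:
        y = 1.0 / (1.0 + math.exp(-(w[0] * x + w[1])))
        err += (y - t) ** 2
    return err / len(data)

def tabu_train(data, n_iter=200, n_neigh=20, tabu_len=10, step=0.5, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(2)]
    best_w, best_e = w[:], mse(w, data)
    tabu = []  # short-term memory: rounded weight vectors visited recently
    for _ in range(n_iter):
        cands = []
        for _ in range(n_neigh):
            # Neighbourhood move: Gaussian perturbation of every weight.
            n = [wi + rng.gauss(0.0, step) for wi in w]
            key = tuple(round(x, 1) for x in n)
            e = mse(n, data)
            # Aspiration: a tabu move is allowed if it beats the global best.
            if key not in tabu or e < best_e:
                cands.append((e, n, key))
        if not cands:
            continue  # entire neighbourhood tabu; try a fresh sample
        e, w, key = min(cands)   # greedy: best admissible neighbour
        tabu.append(key)
        if len(tabu) > tabu_len:
            tabu.pop(0)          # forget the oldest move
        if e < best_e:
            best_e, best_w = e, w[:]
    return best_w, best_e
```

Shrinking `step` intensifies the search around the incumbent, while a longer tabu list forces diversification away from recently visited regions; tuning that trade-off is the balance the paper addresses.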
Dengiz, B., Alabas-Uslu, C. & Dengiz, O. A tabu search algorithm for the training of neural networks. J Oper Res Soc 60, 282–291 (2009). https://doi.org/10.1057/palgrave.jors.2602535