Global optimization via neural network approximation of inverse coordinate mappings with evolutionary parameter control

Open access

A hybrid global optimization method, NNAICM-PSO, is presented. It combines neural network approximation of inverse mappings from objective function values to coordinates with particle swarm optimization to find the global minimum of a continuous objective function of multiple variables under bound constraints. The objective function is treated as a black box. The method employs groups of moving probe points attracted by goals, as in particle swarm optimization. One possible goal is determined by mapping decreased objective function values to coordinates with modified Dual Generalized Regression Neural Networks constructed from the probe points. The search parameters are controlled by an evolutionary algorithm that maintains a population of evolving rules, each containing a tuple of parameter values. There are two measures of fitness: short-term (charm) and long-term (merit). Charm is used to select rules for reproduction and application, while merit determines the survival of an individual. This two-fold system protects potentially useful individuals from extinction caused by short-term changes in the search situation. Test problems of 100 variables were solved. The results indicate that evolutionary control outperforms random variation of parameters for NNAICM-PSO. For some problems, when rule bases are reused, the error progressively decreases over subsequent runs, which means that the method adapts to the problem.
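The central idea of the inverse coordinate mapping can be illustrated with a minimal sketch. The following is not the paper's exact Dual GRNN, but a Nadaraya-Watson regression of probe-point coordinates on objective values, evaluated at a target value somewhat below the current best; the function name, the `decrease` parameter, and the bandwidth heuristic are assumptions for illustration.

```python
import numpy as np

def inverse_grnn_goal(points, values, decrease=0.1, sigma=None):
    """Estimate coordinates for a decreased objective value by
    Nadaraya-Watson regression of coordinates on function values
    (the core idea of a GRNN applied in the inverse direction)."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    # Goal value: somewhat below the best objective value found so far.
    target = values.min() - decrease * span
    if sigma is None:
        sigma = 0.1 * span + 1e-12  # bandwidth heuristic (an assumption)
    # Gaussian kernel weights in objective-value space.
    w = np.exp(-0.5 * ((values - target) / sigma) ** 2)
    w /= w.sum()
    return w @ points  # weighted average of probe-point coordinates

# Usage: probe points sampled on a 2-variable sphere function.
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(50, 2))
f = (X ** 2).sum(axis=1)
goal = inverse_grnn_goal(X, f)  # lies near the low-value probe points
```

In the full method such a goal point is one of several attractors for the moving probe points, alongside the usual personal-best and global-best attractors of particle swarm optimization.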


Global optimization, heuristic methods, evolutionary algorithms, neural networks, parameter setting, parameter control, particle swarm optimization
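The two-fold fitness scheme for parameter-control rules (charm for selection, merit for survival) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the class layout, the update rule, and the decay rates `fast` and `slow` are assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Rule:
    params: dict          # a tuple of search parameter values
    charm: float = 1.0    # short-term fitness: drives selection for use
    merit: float = 1.0    # long-term fitness: decides survival

def select_rule(population):
    """Charm-proportional (roulette-wheel) selection of a rule to apply."""
    weights = [max(r.charm, 1e-9) for r in population]
    return random.choices(population, weights=weights, k=1)[0]

def reward(rule, improvement, fast=0.5, slow=0.05):
    """Charm tracks recent performance quickly; merit changes slowly,
    so a useful rule can survive a temporary bad streak."""
    rule.charm += fast * (improvement - rule.charm)
    rule.merit += slow * (improvement - rule.merit)

def cull(population, size):
    """Survival is decided by the long-term measure (merit)."""
    return sorted(population, key=lambda r: r.merit, reverse=True)[:size]

# Usage: two rules, one of which recently performed better.
pop = [Rule({"inertia": 0.7}), Rule({"inertia": 0.4})]
reward(pop[0], 2.0)
reward(pop[1], 0.0)
chosen = select_rule(pop)       # biased toward the high-charm rule
survivors = cull(pop, 1)        # survival, however, follows merit
```

Because merit moves slowly, a rule that was useful over many steps is not discarded after a few unlucky applications, which is the preservation effect described in the abstract.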

Short address: https://sciup.org/143169794

IDR: 143169794   |   DOI: 10.25209/2079-3316-2019-10-2-3-31

References: Global optimization via neural network approximation of inverse coordinate mappings with evolutionary parameter control

  • W. M. Spears, K. A. De Jong, T. Bäck, D. B. Fogel, H. De Garis. “An overview of evolutionary computation”, European Conference on Machine Learning, Lecture Notes in Computer Science, vol. 667, Springer, 1993, pp. 442--459. DOI: 10.1007/3-540-56602-3_163
  • F. Neri, V. Tirronen. “Recent advances in differential evolution: a survey and experimental analysis”, Artificial Intelligence Review, 33:1--2 (2010), pp. 61--106. DOI: 10.1007/s10462-009-9137-2
  • N. Siddique, H. Adeli. “Nature inspired computing: an overview and some future directions”, Cognitive computation, 7:6 (2015), pp. 706--714. DOI: 10.1007/s12559-015-9370-8
  • D. H. Wolpert, W. G. Macready. “No free lunch theorems for optimization”, IEEE transactions on evolutionary computation, 1:1 (1997), pp. 67--82. DOI: 10.1109/4235.585893
  • G. Karafotias, M. Hoogendoorn, A. E. Eiben. “Parameter control in evolutionary algorithms: Trends and challenges”, IEEE Transactions on Evolutionary Computation, 19:2 (2015), pp. 167--187. DOI: 10.1109/TEVC.2014.2308294
  • A. Aleti, I. Moser. “A systematic literature review of adaptive parameter control methods for evolutionary algorithms”, ACM Computing Surveys (CSUR), 49:3 (2016), 56. DOI: 10.1145/2996355
  • R. Poli, J. Kennedy, T. Blackwell. “Particle swarm optimization”, Swarm Intelligence, 1:1 (2007), pp. 33--57. DOI: 10.1007/s11721-007-0002-0
  • V. D. Koshur. “Reinforcement swarm intelligence in the global optimization method via neuro-fuzzy control of the search process”, Optical Memory and Neural Networks, 24:2 (2015), pp. 102--108. DOI: 10.3103/S1060992X15020083
  • Sh. A. Akhmedova, V. V. Stanovov, E. S. Semenkin. “Cooperation of bio-inspired and evolutionary algorithms for neural network design”, Journal of Siberian Federal University. Mathematics & Physics, 11:2 (2018), pp. 148--158. DOI: 10.17516/1997-1397-2018-11-2-148-158
  • E. Semenkin, M. Semenkina. “Self-configuring genetic algorithm with modified uniform crossover operator”, ICSI 2012, Lecture Notes in Computer Science, vol. 7331, Springer, 2012, pp. 414--421. DOI: 10.1007/978-3-642-30976-2_50
  • G. Karafotias, A. E. Eiben, M. Hoogendoorn. “Generic parameter control with reinforcement learning”, ACM, 2014, pp. 1319--1326. DOI: 10.1145/2576768.2598360
  • G. Karafotias, M. Hoogendoorn, B. Weel. “Comparing generic parameter controllers for EAs”, IEEE, 2014, pp. 46--53. DOI: 10.1109/FOCI.2014.7007806
  • A. Rost, I. Petrova, A. Buzdalova. “Adaptive parameter selection in evolutionary algorithms by reinforcement learning with dynamic discretization of parameter range”, ACM, 2016, pp. 141--142. DOI: 10.1145/2908961.2908998
  • V. Koshur, K. Pushkaryov. “Global optimization via neural network approximation of inverse coordinate mappings”, Optical Memory and Neural Networks, 20:3 (2011), pp. 181--193. DOI: 10.3103/S1060992X1103009X
  • V. D. Koshur, K. V. Pushkaryov. “Global'naya optimizatsiya na osnove neyrosetevoy approksimatsii inversnykh zavisimostey [Global optimization via neural network approximation of inverse mappings]”, XIII Vserossiyskaya nauchno-tekhnicheskaya konferentsiya “Neyroinformatika-2011” [XIII All-Russian Scientific and Technical Conference “Neuroinformatics”], 1 (2010), pp. 89--98.
  • K. V. Pushkaryov, V. D. Koshur. “Gibridnyy evristicheskiy parallel'nyy metod global'noy optimizatsii [Hybrid heuristic parallel method of global optimization]”, Vychislitel'nye metody i programmirovanie [Computational Methods and Programming], 16 (2015), pp. 242--255. DOI: 10.26089/NumMet.v16r224
  • V. D. Koshur, K. V. Pushkaryov. “Dual'nye obobshchenno-regressionnye neyronnye seti dlya resheniya zadach global'noy optimizatsii [Dual Generalized Regression Neural Networks for global optimization]”, XII Vserossiyskaya nauchno-tekhnicheskaya konferentsiya “Neyroinformatika-2010” [XII All-Russian Scientific and Technical Conference “Neuroinformatics”], 2 (2010), pp. 219--227.
  • J. Nocedal, S. J. Wright. Numerical Optimization, Second Edition, Springer-Verlag, New York, 2006. DOI: 10.1007/978-0-387-40065-5
  • N. H. Awad, M. Z. Ali, P. N. Suganthan, J. J. Liang, B. Y. Qu. “Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization”, 2016. URL: http://web.mysites.ntu.edu.sg/epnsugan/PublicSite/Shared%20Documents/CEC-2017/Bound-Constrained/Definitions%20of%20%20CEC2017%20benchmark%20suite%20final%20version%20updated.pdf
  • G. Karafotias, M. Hoogendoorn, A. E. Eiben. “Why parameter control mechanisms should be benchmarked against random variation”, IEEE, 2013, pp. 349--355. DOI: 10.1109/CEC.2013.6557590
  • E. Jones, T. Oliphant, P. Peterson, et al. SciPy: Open source scientific tools for Python, 2001--. URL: http://www.scipy.org/
  • R Core Team. R: A language and environment for statistical computing, 2018. URL: https://www.R-project.org/
Research article