Search and Optimization by Metaheuristics



Document information

Ke-Lin Du, M.N.S. Swamy
Search and Optimization by Metaheuristics: Techniques and Algorithms Inspired by Nature

Ke-Lin Du: Xonlink Inc., Ningbo, Zhejiang, China; and Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
M.N.S. Swamy: Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada

ISBN 978-3-319-41191-0
ISBN 978-3-319-41192-7 (eBook)
DOI 10.1007/978-3-319-41192-7
Library of Congress Control Number: 2016943857
Mathematics Subject Classification (2010): 49-04, 68T20, 68W15

© Springer International Publishing Switzerland 2016. This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. Printed on acid-free paper. This book is published under the trade name Birkhäuser. The registered company is Springer International Publishing AG Switzerland (www.birkhauser-science.com).

To My Friends Jiabin Lu and Biaobiao Zhang (Ke-Lin Du)
To My Parents (M.N.S. Swamy)

Preface

Optimization is a branch of applied mathematics and numerical analysis. Almost every problem in engineering, science, economics, and life can be formulated as an optimization or search problem. While some of these problems are simple enough to be solved by traditional optimization methods based on mathematical analysis, most are very hard to solve with analysis-based approaches. Fortunately, we can tackle these hard optimization problems by drawing inspiration from nature, since nature is a system of vast complexity that nonetheless generates near-optimum solutions.

Natural computing is concerned with computing inspired by nature, as well as with computation taking place in nature. Well-known examples of natural computing are evolutionary computation, neural computation, cellular automata, swarm intelligence, molecular computing, quantum computation, artificial immune systems, and membrane computing. Together, they constitute the discipline of computational intelligence.

Among all the nature-inspired computational paradigms, evolutionary computation is the most influential. It is a computational method for obtaining the best possible solutions in a huge solution space, based on Darwin's survival-of-the-fittest principle. Evolutionary algorithms are a class of effective global optimization techniques for many hard problems. More and more biologically inspired methods have been proposed in the past two decades.
The most prominent ones are particle swarm optimization, ant colony optimization, and the immune algorithm. These methods are widely used because of their particular features compared with evolutionary computation. All of these biologically inspired methods are population-based: computation is performed by autonomous agents, and these agents exchange information through social behaviors. The memetic algorithm models the propagation of knowledge among animals.

There are also many other nature-inspired metaheuristics for search and optimization, including methods inspired by physical laws, chemical reactions, biological phenomena, social behaviors, and animal thinking. Metaheuristics are a class of intelligent self-learning algorithms for finding near-optimum solutions to hard optimization problems, mimicking intelligent processes and behaviors observed in nature, sociology, thinking, and other disciplines. Metaheuristics may be nature-inspired paradigms, stochastic algorithms, or probabilistic algorithms. Metaheuristics-based search and optimization are widely used for fully automated decision-making and problem-solving.

In this book, we provide a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization. While each metaheuristic has its specific strengths for particular cases, according to the no free lunch theorem it has the same performance as random search when averaged over the entire set of search and optimization problems. Thus, when we talk about the performance of an optimization method, the comparison is actually based on benchmark examples that are representative of some particular class of problems.

This book is intended as an accessible introduction to metaheuristic optimization for a broad audience. It provides fundamental insights into metaheuristic optimization and serves as a helpful starting point for more in-depth studies. The computational paradigms described in this book are general purpose in nature. The book can be used as a textbook for advanced undergraduate and graduate students, and all those interested in search and optimization can benefit from it. Readers interested in a particular topic will benefit from the appropriate chapter.

A roadmap for navigating through the book is given as follows. Apart from the introductory Chapter 1, the contents of the book can be roughly divided into five categories and an appendix.

• Evolution-based approaches are covered in Chapters 3-8: Chapter 3 Genetic Algorithms, Chapter 4 Genetic Programming, Chapter 5 Evolutionary Strategies, Chapter 6 Differential Evolution, Chapter 7 Estimation of Distribution Algorithms, Chapter 8 Topics in Evolutionary Algorithms.
• Swarm intelligence-based approaches are covered in Chapters 9-15: Chapter 9 Particle Swarm Optimization, Chapter 10 Artificial Immune Systems, Chapter 11 Ant Colony Optimization, Chapter 12 Bee Metaheuristics, Chapter 13 Bacterial Foraging Algorithm, Chapter 14 Harmony Search, Chapter 15 Swarm Intelligence.
• Sciences-based approaches are covered in Chapters 2 and 16-18: Chapter 2 Simulated Annealing, Chapter 16 Biomolecular Computing, Chapter 17 Quantum Computing, Chapter 18 Metaheuristics Based on Sciences.
• Human-based approaches are covered in Chapters 19-21: Chapter 19 Memetic Algorithms, Chapter 20 Tabu Search and Scatter Search, Chapter 21 Search Based on Human Behaviors.
• General optimization problems are treated in Chapters 22-23: Chapter 22 Dynamic, Multimodal, and Constrained Optimizations, Chapter 23 Multiobjective Optimization.
• The appendix contains auxiliary benchmarks helpful for testing new and existing algorithms.

In this book, hundreds of different metaheuristic methods are introduced. However, due to space limitations, we give detailed descriptions only for a number of the most popular metaheuristic methods. Some computational examples for representative metaheuristic methods are given. The MATLAB codes for these examples are available at the book website. We have also collected MATLAB codes for some other metaheuristics. These codes are general purpose in nature; the reader needs only to run them with their own objective functions.

For instructors, this book has been designed to serve as a textbook for courses on evolutionary algorithms or nature-inspired optimization. The book can be taught in twelve two-hour sessions. We recommend that Chapters 1-11, 19, 22, and 23 be taught. In order to acquire a mastery of these popular metaheuristic algorithms, some programming exercises using the benchmark functions given in the appendix should be assigned to the students. The MATLAB codes provided with the book are useful for learning the algorithms.

For readers, we suggest that you start with Chapter 1, which covers basic concepts in optimization and metaheuristics. When you have digested the basics, you can delve into one or more specific metaheuristic paradigms that interest you or that suit your specific problems. The MATLAB codes accompanying the book are very useful for learning those popular algorithms, and they can be directly used for solving your specific problems. The benchmark functions are also very useful for researchers evaluating their own algorithms.

We would like to thank Limin Meng (Zhejiang University of Technology, China) and Yongyao Yang (SUPCON Group Inc., China) for their consistent help. We would like to thank all the helpful and thoughtful staff at Xonlink Inc. Last but not least, we would like to recognize the assistance of Benjamin Levitt and the production team at Springer.

Ningbo, China: Ke-Lin Du
Montreal, Canada: M.N.S. Swamy

Contents

1 Introduction: 1.1 Computation Inspired by Nature; 1.2 Biological Processes; 1.3 Evolution Versus Learning; 1.4 Swarm Intelligence (1.4.1 Group Behaviors; 1.4.2 Foraging Theory); 1.5 Heuristics, Metaheuristics, and Hyper-Heuristics; 1.6 Optimization (1.6.1 Lagrange Multiplier Method; 1.6.2 Direction-Based Search and Simplex Search; 1.6.3 Discrete Optimization Problems; 1.6.4 P, NP, NP-Hard, and NP-Complete; 1.6.5 Multiobjective Optimization Problem; 1.6.6 Robust Optimization); 1.7 Performance Indicators; 1.8 No Free Lunch Theorem; 1.9 Outline of the Book; References
2 Simulated Annealing: 2.1 Introduction; 2.2 Basic Simulated Annealing; 2.3 Variants of Simulated Annealing; References
3 Genetic Algorithms: 3.1 Introduction to Evolutionary Computation (3.1.1 Evolutionary Algorithms Versus Simulated Annealing); 3.2 Terminologies of Evolutionary Computation; 3.3 Encoding/Decoding; 3.4 Selection/Reproduction; 3.5 Crossover; 3.6 Mutation; 3.7 Noncanonical Genetic Operators; 3.8 Exploitation Versus Exploration; 3.9 Two-Dimensional Genetic Algorithms; 3.10 Real-Coded Genetic Algorithms; 3.11 Genetic Algorithms for Sequence Optimization; References
4 Genetic Programming: 4.1 Introduction; 4.2 Syntax Trees; 4.3 Causes of Bloat; 4.4 Bloat Control (4.4.1 Limiting on Program Size; 4.4.2 Penalizing the Fitness of an Individual with Large Size; 4.4.3 Designing Genetic Operators); 4.5 Gene Expression Programming; References
5 Evolutionary Strategies: 5.1 Introduction; 5.2 Basic Algorithm; 5.3 Evolutionary Gradient Search and Gradient Evolution; 5.4 CMA Evolutionary Strategies; References
6 Differential Evolution: 6.1 Introduction; 6.2 DE Algorithm; 6.3 Variants of DE; 6.4 Binary DE Algorithms; 6.5 Theoretical Analysis on DE; References
7 Estimation of Distribution Algorithms: 7.1 Introduction; 7.2 EDA Flowchart; 7.3 Population-Based Incremental Learning; 7.4 Compact Genetic Algorithms; 7.5 Bayesian Optimization Algorithm; 7.6 Convergence Properties; 7.7 Other EDAs (7.7.1 Probabilistic Model Building GP); References

Appendix A: Benchmarks

[Tail of the preceding benchmark:] Decision space: $[-5, 5]^2$. Minimum: $-1.03163$.

Sphere Function
$f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$  (A.24)
Decision space: $[-100, 100]^n$. Minimum: 0 at $\mathbf{x}^* = \mathbf{0}$.

Drop Wave Function
$f(\mathbf{x}) = -\dfrac{1 + \cos(12\|\mathbf{x}\|)}{\frac{1}{2}\|\mathbf{x}\|^2 + 2}$  (A.25)
Decision space: $[-5.12, 5.12]^n$. Minimum: $-1$ at $\mathbf{x} = (0, 0)^T$.

Easom Function
$f(\mathbf{x}) = -\cos x_1 \cos x_2 \exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$  (A.26)
Decision space: $[-100, 100]^2$. Minimum: $-1$ at $\mathbf{x} = (\pi, \pi)^T$.

Griewank Function
$f(\mathbf{x}) = \dfrac{\|\mathbf{x}\|^2}{4000} - \prod_{i=1}^{n}\cos\dfrac{x_i}{\sqrt{i}} + 1$  (A.27)
Decision space: $[-600, 600]^n$. Minimum: 0 at $\mathbf{x}^* = \mathbf{0}$.

Michalewicz Function
$f(\mathbf{x}) = -\sum_{i=1}^{n}\sin x_i \left(\sin\dfrac{i x_i^2}{\pi}\right)^{20}$  (A.28)
Decision space: $[0, \pi]^n$. Minimum: $-1.8013$ at $\mathbf{x}^* = (2.20, 1.57)^T$ for $n = 2$.

Pathological Function
$f(\mathbf{x}) = \sum_{i=1}^{n-1}\left(0.5 + \dfrac{\sin^2\sqrt{100 x_i^2 + x_{i+1}^2} - 0.5}{1 + 0.001\left(x_i^2 - 2x_i x_{i+1} + x_{i+1}^2\right)^2}\right)$  (A.29)
Decision space: $[-100, 100]^n$. Minimum: 0.

Rastrigin Function
$f(\mathbf{x}) = 10n + \sum_{i=1}^{n}\left(x_i^2 - 10\cos(2\pi x_i)\right)$  (A.30)
Decision space: $[-5.12, 5.12]^n$. Minimum: 0 at $\mathbf{x}^* = \mathbf{0}$.

Rosenbrock Function
$f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$  (A.31)
Decision space: $[-100, 100]^n$. Minimum: 0 at $\mathbf{x}^* = (1, 1, \ldots, 1)^T$.

Salomon Function
$f(\mathbf{x}) = 1 - \cos(2\pi\|\mathbf{x}\|) + 0.1\|\mathbf{x}\|$  (A.32)
Decision space: $[-100, 100]^n$. Minimum: 0 at $\mathbf{x}^* = \mathbf{0}$.

Needle-in-Haystack Function
$f(\mathbf{x}) = \left(\dfrac{a}{b + (x_1^2 + x_2^2)}\right)^2 + (x_1^2 + x_2^2)^2$  (A.33)
with $a = 3.0$, $b = 0.05$, $\mathbf{x} \in [-5.12, 5.12]^2$.

Schaffer Function
$f(\mathbf{x}) = 0.5 + \dfrac{\sin^2\sqrt{x_1^2 + x_2^2} - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$  (A.34)
Decision space: $[-100, 100]^2$. Minimum: 0 at $\mathbf{x} = \mathbf{0}$.

Schwefel Function
$f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\sqrt{|x_i|}$  (A.35)
Decision space: $[-500, 500]^n$. Minimum: 0 at $\mathbf{x} = (420.9687, \ldots, 420.9687)^T$.

Sum of Powers Function
$f(\mathbf{x}) = \sum_{i=1}^{n} |x_i|^{i+1}$  (A.36)
Decision space: $[-1, 1]^n$. Minimum: 0.

Tirronen Function
$f(\mathbf{x}) = 3\exp\left(-\dfrac{\|\mathbf{x}\|^2}{10n}\right) - 10\exp\left(-8\|\mathbf{x}\|^2\right) + \dfrac{2.5}{n}\sum_{i=1}^{n}\cos\left(5\left(x_i + (1 + i \bmod 2)\cos\|\mathbf{x}\|^2\right)\right)$  (A.37)
Decision space: $[-10, 5]^n$.

Whitley Function
$f(\mathbf{x}) = \sum_{i=1}^{n}\sum_{j=1}^{n}\left(\dfrac{y_{i,j}^2}{4000} - \cos y_{i,j} + 1\right)$  (A.38)
$y_{i,j} = 100\left(x_j - x_i^2\right)^2 + (1 - x_i)^2$  (A.39)
Decision space: $[-100, 100]^n$. Minimum: 0.

Zakharov Function
$f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2 + \left(\sum_{i=1}^{n} \tfrac{1}{2} i x_i\right)^2 + \left(\sum_{i=1}^{n} \tfrac{1}{2} i x_i\right)^4$  (A.40)
Decision space: $[-5, 10]^n$. Minimum: 0.

Axis Parallel Hyper-ellipsoid Function
$f(\mathbf{x}) = \sum_{i=1}^{n} i\, x_i^2$  (A.41)
Decision space: $[-5.12, 5.12]^n$. Minimum: 0.

Moved Axis Function
$f(\mathbf{x}) = \sum_{i=1}^{n} 5 i\, x_i^2$  (A.42)
Decision space: $[-5.12, 5.12]^n$.

Test Functions for Multimodal Optimization

The test functions listed in Section A.2.1 that contain sine and cosine terms exhibit periodic properties and can therefore be used as benchmarks for multimodal optimization. For example, the Ackley, Rastrigin, Griewank, and Schwefel functions are typically used.
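The book's MATLAB codes for such benchmarks are distributed from its website; the snippet below is only an illustrative sketch (not the accompanying code), showing how a few of the functions above can be written as MATLAB anonymous functions and sanity-checked at their known minimizers. The variable names are our own.

```matlab
% Illustrative sketch (not the book's accompanying code): a few of the
% Appendix A benchmarks as anonymous functions of a column vector x.
sphere     = @(x) sum(x.^2);                                  % (A.24): minimum 0 at x = 0
rastrigin  = @(x) 10*numel(x) + sum(x.^2 - 10*cos(2*pi*x));   % (A.30): minimum 0 at x = 0
rosenbrock = @(x) sum(100*(x(2:end) - x(1:end-1).^2).^2 ...
                      + (x(1:end-1) - 1).^2);                 % (A.31): minimum 0 at x = (1,...,1)

n = 10;                          % dimension used for the quick check below
disp(sphere(zeros(n,1)));        % prints 0
disp(rastrigin(zeros(n,1)));     % prints 0
disp(rosenbrock(ones(n,1)));     % prints 0
```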
A.2.2 Test Functions for Constrained Optimization

The following test functions for constrained optimization are extracted from [7].

g06
$f(\mathbf{x}) = (x_1 - 10)^3 + (x_2 - 20)^3$  (A.43)
subject to
$g_1(\mathbf{x}) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0$  (A.44)
$g_2(\mathbf{x}) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$  (A.45)
where $13 \le x_1 \le 100$ and $0 \le x_2 \le 100$. The minimum is $f(\mathbf{x}^*) = -6961.81387558015$ at $\mathbf{x}^* = (14.09500000000000064, 0.8429607892154795668)^T$.

g08
$f(\mathbf{x}) = -\dfrac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)}$  (A.46)
subject to
$g_1(\mathbf{x}) = x_1^2 - x_2 + 1 \le 0$  (A.47)
$g_2(\mathbf{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0$  (A.48)
where $0 \le x_1 \le 10$ and $0 \le x_2 \le 10$. The minimum is $f(\mathbf{x}^*) = -0.0958250414180359$ at $\mathbf{x}^* = (1.227, 4.245)^T$.

g11
$f(\mathbf{x}) = x_1^2 + (x_2 - 1)^2$  (A.49)
subject to
$h(\mathbf{x}) = x_2 - x_1^2 = 0$  (A.50)
where $-1 \le x_1 \le 1$ and $-1 \le x_2 \le 1$. The minimum is $f(\mathbf{x}^*) = 0.7499$ at $\mathbf{x}^* = (-0.707036070037170616, 0.500000004333606807)^T$.

A.2.3 Test Functions for Unconstrained Multiobjective Optimization

Test functions for unconstrained and constrained multiobjective optimization can be found in [2,12] and at http://www.tik.ee.ethz.ch/~zitzler/testdata.html. Benchmarks can also be constructed using the WFG Toolkit [5]. The IEEE Congress on Evolutionary Computation provides the CEC2009 MOEA Competition benchmark for multiobjective optimization [11].

Schaffer
Objective functions:
$f_1(x) = x^2, \qquad f_2(x) = (x - 2)^2$  (A.51)
Variable bounds: $[-10^3, 10^3]$. Optimal solutions: $x \in [0, 2]$. This function has a convex, continuous Pareto optimal front.

Fonseca
$f_1(\mathbf{x}) = 1 - \exp\left(-\sum_{i=1}^{n}\left(x_i - \tfrac{1}{\sqrt{n}}\right)^2\right)$  (A.52)
$f_2(\mathbf{x}) = 1 - \exp\left(-\sum_{i=1}^{n}\left(x_i + \tfrac{1}{\sqrt{n}}\right)^2\right)$  (A.53)
$n = 3$. Variable bounds: $[-4, 4]$. Optimal solutions: $x_1 = x_2 = x_3 \in \left[-\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}\right]$. This function has a nonconvex, continuous Pareto optimal front.

ZDT1
$f_1(\mathbf{x}) = x_1$  (A.54)
$f_2(\mathbf{x}) = g(\mathbf{x})\left(1 - \sqrt{x_1/g(\mathbf{x})}\right)$  (A.55)
$g(\mathbf{x}) = 1 + 9\sum_{i=2}^{n} x_i/(n-1)$  (A.56)
$n = 30$. Variable bounds: $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$, $x_i \in [0, 1]$. Optimal solutions: $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \ldots, 30$. This function has a convex, continuous Pareto optimal front, which corresponds to $g(\mathbf{x}) = 1$.

ZDT2
$f_1(\mathbf{x}) = x_1$  (A.57)
$f_2(\mathbf{x}) = g(\mathbf{x})\left(1 - (x_1/g(\mathbf{x}))^2\right)$  (A.58)
$g(\mathbf{x}) = 1 + 9\sum_{i=2}^{n} x_i/(n-1)$  (A.59)
Variable bounds: $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$, $x_i \in [0, 1]$. Optimal solutions: $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \ldots, 30$. This function has a nonconvex, continuous Pareto optimal front, which corresponds to $g(\mathbf{x}) = 1$.

ZDT3
$f_1(\mathbf{x}) = x_1$  (A.60)
$f_2(\mathbf{x}) = g(\mathbf{x})\left(1 - \sqrt{x_1/g(\mathbf{x})} - \dfrac{x_1}{g(\mathbf{x})}\sin(10\pi x_1)\right)$  (A.61)
$g(\mathbf{x}) = 1 + 9\sum_{i=2}^{n} x_i/(n-1)$  (A.62)
Variable bounds: $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$, $x_i \in [0, 1]$. Optimal solutions: $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \ldots, 30$. This function has a convex, discontinuous Pareto optimal front, which corresponds to $g(\mathbf{x}) = 1$.

ZDT4
$f_1(\mathbf{x}) = x_1$  (A.63)
$f_2(\mathbf{x}) = g(\mathbf{x})\left(1 - \sqrt{x_1/g(\mathbf{x})}\right)$  (A.64)
$g(\mathbf{x}) = 1 + 10(n-1) + \sum_{i=2}^{n}\left(x_i^2 - 10\cos(4\pi x_i)\right)$  (A.65)
$n = 30$. Variable bounds: $x_1 \in [0, 1]$, $x_i \in [-5, 5]$, $i = 2, \ldots, n$. Optimal solutions: $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \ldots, 30$. This function has a nonconvex, discontinuous Pareto optimal front, which corresponds to $g(\mathbf{x}) = 1$.

ZDT6
$f_1(\mathbf{x}) = 1 - \exp(-4x_1)\sin^6(6\pi x_1)$  (A.66)
$f_2(\mathbf{x}) = g(\mathbf{x})\left(1 - (f_1(\mathbf{x})/g(\mathbf{x}))^2\right)$  (A.67)
$g(\mathbf{x}) = 1 + 9\left(\sum_{i=2}^{n} x_i/(n-1)\right)^{0.25}$  (A.68)
$n = 30$. Variable bounds: $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$, $x_i \in [0, 1]$. Optimal solutions: $x_1 \in [0, 1]$, $x_i = 0$, $i = 2, \ldots, 30$. This function has a nonconvex, many-to-one, nonuniformly spaced Pareto optimal front, which corresponds to $g(\mathbf{x}) = 1$.
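As a rough illustration of how a bi-objective benchmark such as ZDT1 above is evaluated inside a multiobjective metaheuristic, the sketch below codes (A.54)-(A.56) in MATLAB and compares a Pareto-optimal point with a dominated one. It is our own sketch rather than the book's code, and the names g, zdt1, x_opt, and x_dom are arbitrary.

```matlab
% Illustrative sketch (not the book's code): ZDT1 with n = 30 variables in [0,1].
n    = 30;
g    = @(x) 1 + 9*sum(x(2:end))/(n-1);              % (A.56)
zdt1 = @(x) [x(1), g(x)*(1 - sqrt(x(1)/g(x)))];     % returns [f1, f2] from (A.54)-(A.55)

x_opt = [0.5; zeros(n-1,1)];     % on the Pareto set: x1 in [0,1], all other xi = 0, so g = 1
x_dom = [0.5; 0.5*ones(n-1,1)];  % dominated point: same f1, but g > 1 inflates f2

disp(zdt1(x_opt));   % approximately [0.5, 0.2929]
disp(zdt1(x_dom));   % same f1 = 0.5, but a much larger f2
```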
A.2.4 Test Functions for Constrained Multiobjective Optimization

Osyczka2
Objective functions:
$f_1(\mathbf{x}) = -\left[25(x_1 - 2)^2 + (x_2 - 2)^2 + (x_3 - 1)^2 + (x_4 - 4)^2 + (x_5 - 1)^2\right]$  (A.69)
$f_2(\mathbf{x}) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2$  (A.70)
Constraints:
$g_1(\mathbf{x}) = x_1 + x_2 - 2 \ge 0$  (A.71)
$g_2(\mathbf{x}) = 6 - x_1 - x_2 \ge 0$  (A.72)
$g_3(\mathbf{x}) = 2 - x_2 + x_1 \ge 0$  (A.73)
$g_4(\mathbf{x}) = 2 - x_1 + 3x_2 \ge 0$  (A.74)
$g_5(\mathbf{x}) = 4 - (x_3 - 3)^2 - x_4 \ge 0$  (A.75)
$g_6(\mathbf{x}) = (x_5 - 3)^3 + x_6 - 4 \ge 0$  (A.76)
Variable bounds: $x_1 \in [0, 10]$, $x_2 \in [0, 10]$, $x_3 \in [1, 5]$, $x_4 \in [0, 6]$, $x_5 \in [1, 5]$, $x_6 \in [0, 10]$.

Tanaka
Objective functions:
$f_1(\mathbf{x}) = x_1$  (A.77)
$f_2(\mathbf{x}) = x_2$  (A.78)
Constraints:
$g_1(\mathbf{x}) = -x_1^2 - x_2^2 + 1 + 0.1\cos\left(16\arctan(x_1/x_2)\right) \le 0$  (A.79)
$g_2(\mathbf{x}) = (x_1 - 0.5)^2 + (x_2 - 0.5)^2 \le 0.5$  (A.80)
Variable bounds: $x_i \in [-\pi, \pi]$.

ConstrEx
Objective functions:
$f_1(\mathbf{x}) = x_1$  (A.81)
$f_2(\mathbf{x}) = (1 + x_2)/x_1$  (A.82)
Constraints:
$g_1(\mathbf{x}) = x_2 + 9x_1 \ge 6$  (A.83)
$g_2(\mathbf{x}) = -x_2 + 9x_1 \ge 1$  (A.84)
Variable bounds: $x_1 \in [0.1, 1.0]$, $x_2 \in [0, 5]$.

Srinivas
Objective functions:
$f_1(\mathbf{x}) = (x_1 - 2)^2 + (x_2 - 1)^2 + 2$  (A.85)
$f_2(\mathbf{x}) = 9x_1 - (x_2 - 1)^2$  (A.86)
Constraints:
$g_1(\mathbf{x}) = x_1^2 + x_2^2 \ge 225$  (A.87)
$g_2(\mathbf{x}) = x_1 - 3x_2 \ge -10$  (A.88)
Variable bounds: $x_i \in [-20, 20]$.

A.2.5 Test Functions for Dynamic Optimization

The Moving Peaks Benchmark (http://www.aifb.uni-karlsruhe.de/~jbr/MovPeaks/) is a test benchmark for DOPs. The idea is to have an artificial multidimensional landscape consisting of several peaks, where the height, width, and position of each peak are altered slightly every time a change in the environment occurs. A repository on EAs for dynamic optimization problems is available at http://www.aifb.uni-karlsruhe.de/~jbr/EvoDOP.

The test functions listed in Section A.2.3 can be modified to act as benchmarks for dynamic multiobjective optimization.

DZDT1
$f_1(\mathbf{y}) = y_1$  (A.89)
$f_2(\mathbf{y}) = g(\mathbf{y})\left(1 - \sqrt{y_1/g(\mathbf{y})}\right)$  (A.90)
$g(\mathbf{y}) = 1 + 9\sum_{i=2}^{n} y_i/(n-1)$  (A.91)
$t = f_c / FES_c$  (A.92)
$y_1 = x_1$  (A.93)
$y_i = |x_i - t/n_T| / H(t), \quad i = 2, \ldots, n$  (A.94)
$H(t) = \max\{|1 - t/n_T|, |-1 - t/n_T|\}$  (A.95)
$n = 30$. Variable bounds: $x_1 \in [0, 1]$, $x_i \in [-1, 1]$, $i = 2, \ldots, n$.

DZDT2
$f_1(\mathbf{y}) = y_1$  (A.96)
$f_2(\mathbf{y}) = g(\mathbf{y})\left(1 - (y_1/g(\mathbf{y}))^2\right)$  (A.97)
$g(\mathbf{y}) = 1 + 9\sum_{i=2}^{n} y_i/(n-1)$  (A.98)
$t = f_c / FES_c$  (A.99)
$y_1 = x_1$  (A.100)
$y_i = |x_i - t/n_T| / H(t), \quad i = 2, \ldots, n$  (A.101)
$H(t) = \max\{|1 - t/n_T|, |-1 - t/n_T|\}$  (A.102)
$n = 30$. Variable bounds: $x_1 \in [0, 1]$, $x_i \in [-1, 1]$, $i = 2, \ldots, n$.

DZDT3
$f_1(\mathbf{y}) = y_1$  (A.103)
$f_2(\mathbf{y}) = g(\mathbf{y})\left(1 - \sqrt{y_1/g(\mathbf{y})} - \dfrac{y_1}{g(\mathbf{y})}\sin(10\pi y_1)\right)$  (A.104)
$g(\mathbf{y}) = 1 + 9\sum_{i=2}^{n} y_i/(n-1)$  (A.105)
$t = f_c / FES_c$  (A.106)
$y_1 = x_1$  (A.107)
$y_i = |x_i - t/n_T| / H(t), \quad i = 2, \ldots, n$  (A.108)
$H(t) = \max\{|1 - t/n_T|, |-1 - t/n_T|\}$  (A.109)
$n = 30$. Variable bounds: $x_1 \in [0, 1]$, $x_i \in [-1, 1]$, $i = 2, \ldots, n$.

DZDT4
$f_1(\mathbf{y}) = y_1, \qquad f_2(\mathbf{y}) = g(\mathbf{y})\left(1 - \sqrt{y_1/g(\mathbf{y})}\right)$  (A.110)
$g(\mathbf{y}) = 1 + 10(n-1) + \sum_{i=2}^{n}\left(y_i^2 - 10\cos(4\pi y_i)\right)$  (A.111)
$t = f_c / FES_c$  (A.112)
$y_1 = x_1$  (A.113)
$y_i = |x_i - t/n_T| / H(t), \quad i = 2, \ldots, n$  (A.114)
$H(t) = \max\{|1 - t/n_T|, |-1 - t/n_T|\}$  (A.115)
$n = 10$. Variable bounds: $x_1 \in [0, 1]$, $x_i \in [-1, 1]$, $i = 2, \ldots, n$.

Problem A.1 Plot the deceptive multimodal objective function
$f(x) = -0.9x + \left(5|x|^{0.001}/5^{0.001}\right)^2, \quad x \in [-5, 5].$
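To give a feel for how a constrained benchmark such as g06 in Section A.2.2 is typically wired into a metaheuristic, the sketch below evaluates its objective together with a simple static penalty on constraint violation, one option among the penalty function methods covered in Chapter 22. It is our own hedged sketch, not the book's code; the penalty weight lambda = 1e3 and the function names are illustrative choices only.

```matlab
% Illustrative sketch (not the book's code): g06 objective (A.43) with a
% static penalty on the inequality constraints g1(x) <= 0 and g2(x) <= 0.
f  = @(x) (x(1) - 10)^3 + (x(2) - 20)^3;                 % (A.43)
g1 = @(x) -(x(1) - 5)^2 - (x(2) - 5)^2 + 100;            % (A.44)
g2 = @(x)  (x(1) - 6)^2 + (x(2) - 5)^2 - 82.81;          % (A.45)

lambda    = 1e3;   % assumed penalty weight, for illustration only
penalized = @(x) f(x) + lambda*(max(0, g1(x))^2 + max(0, g2(x))^2);

x_near = [14.095; 0.84296];    % close to the reported minimizer
disp(f(x_near));               % approximately -6961.8
disp(penalized(x_near));       % nearly the same: both constraints are (almost) active here
```

A static weight is the simplest choice; adaptive penalties and reformulations using multiobjective techniques (Section 22.3) are common refinements.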
References

1. Chu PC, Beasley JE. A genetic algorithm for the multidimensional knapsack problem. J Heuristics. 1998;4:63–86.
2. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans Evol Comput. 2002;6(2):182–97.
3. Drezner Z. The p-center problem: heuristic and optimal algorithms. J Oper Res Soc. 1984;35(8):741–8.
4. Hopfield JJ, Tank DW. Neural computation of decisions in optimization problems. Biol Cybern. 1985;52:141–52.
5. Huband S, Barone L, While RL, Hingston P. A scalable multiobjective test problem toolkit. In: Proceedings of the 3rd international conference on evolutionary multi-criterion optimization (EMO), Guanajuato, Mexico, March 2005. p. 280–295.
6. Huband S, Hingston P, Barone L, While L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput. 2006;10(5):477–506.
7. Kramer O. Self-adaptive heuristics for evolutionary computation. Berlin: Springer; 2008.
8. Matsuda S. "Optimal" Hopfield network for combinatorial optimization with linear cost function. IEEE Trans Neural Netw. 1998;9(6):1319–30.
9. Reinelt G. TSPLIB: a traveling salesman problem library. ORSA J Comput. 1991;3:376–84.
10. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, Tiwari S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical Report, Nanyang Technological University, Singapore, and KanGAL Report No. 2005005, Kanpur Genetic Algorithms Laboratory, IIT Kanpur, India, May 2005. http://www.ntu.edu.sg/home/EPNSugan/
11. Zhang Q, Zhou A, Zhao S, Suganthan PN, Liu W, Tiwari S. Multiobjective optimization test instances for the CEC 2009 special session and competition. Technical Report CES-487, University of Essex and Nanyang Technological University, Essex, UK/Singapore, 2008.
12. Zitzler E, Deb K, Thiele L. Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput. 2000;8(2):173–95.

Index

A: Adaptive coding, 43; Affinity, 180; Affinity maturation process, 180; Algorithmic chemistry, 304; Allele, 40; Animal migration optimization, 243; Annealing, 29; Antibody, 180; Antigen, 180; Artificial algae algorithm, 222; Artificial fish swarm optimization, 249; Artificial immune network, 184; Artificial physics optimization, 296; Artificial selection, 41
B: Backtracking search, 58; Bacterial chemotaxis algorithm, 222; Baldwin effect; Bare-bones PSO, 156; Bat algorithm, 246; Bee colony optimization, 210; Belief space, 316; Big bang big crunch, 301; Binary coding, 42; Bin packing problem, 417; Biochemical network, 267; Bit climber, 49; Black hole-based optimization, 302; Bloat phenomenon, 71; Boltzmann annealing, 31; Boltzmann distribution, 30; Building block, 123; Building-block hypothesis, 123
C: Cauchy annealing, 33; Cauchy mutation, 58; Cell-like P system, 272; Cellular EA, 128, 132; Central force optimization, 296; Chemical reaction network, 306; Chemical reaction optimization, 304; Chemotaxis, 217; Chromosome, 40; Clonal crossover, 178; Clonal mutation, 178; Clonal selection, 177, 178; Clonal selection algorithm, 180; Clone, 178; Cloud computing, 134; CMA-ES, 88; Cockroach swarm optimization, 251; Coevolution, 136; Collective animal behavior algorithm, 242; Combinatorial optimization problem, 14; Compact GA, 110; Computational temperature, 30; Constrained optimization, 359; Cooperative coevolution, 133; Crossover, 46, 56; Crowding, 351; Cuckoo search, 243; Cycle crossover, 60
D: Danger theory, 178; Darwinian model; Deceptive function, 125; Deceptive problem, 126; Deme, 128, 356; Dendritic cell algorithm, 186; Deterministic annealing, 34; Differential mutation, 94; Diffusion model, 128; Diffusion search, 258; DNA computing, 268
E: Ecological selection, 41; Electromagnetism-like algorithm, 297; Elitism strategy, 45; Evolutionary gradient search, 85; Evolutionary programming, 83; Exchange market algorithm, 343; Exploitation/Exploration, 51
F: Firefly algorithm, 239; Fitness, 41; Fitness approximation, 139; Fitness imitation, 141; Fitness inheritance, 140; Fitness landscape, 41; Fitness sharing, 350; Flower pollination algorithm, 256; Free search, 243
G: Gaussian mutation, 57; Gene, 40; Gene expression programming, 78; Generational distance, 386; Genetic assimilation; Genetic diversity, 47, 51; Genetic drift, 41; Genetic flow, 41; Genetic migration, 41; Genotype, 40; Genotype–phenotype map, 41; Glowworm swarm optimization, 238; Golden ball metaheuristic, 342; GPU computing, 135; Gradient evolution, 85; Gravitational search algorithm, 295; Gray coding, 42; Great deluge algorithm, 300; Group search optimization, 240; Grover's search algorithm, 286; Guided local search, 10
H: Hamming cliff phenomenon, 42; Heat transfer search, 299; Heuristics; Hill-climbing operator, 49; Hyper-heuristics
I: Immune algorithm, 180; Immune network, 178; Immune selection, 182; Immune system, 175; Imperialist competitive algorithm, 340; Individual, 40; Intelligent water drops algorithm, 299; Invasive tumor growth optimization, 224; Invasive weed optimization, 255; Inversion operator, 48; Ions motion optimization, 297; Island, 128; Island model, 130; Iterated local search, 11; Iterated tabu search, 330
J: Jumping-gene phenomenon, 50
K: Kinetic gas molecule optimization, 299; KKT conditions, 13; Knapsack problem, 416; Krill herd algorithm, 250
L: Lagrange multiplier method, 12; Lamarckian strategy, 5, 319; (λ + μ) strategy, 85; (λ, μ) strategy, 85; Large-scale mutation, 48; League championship algorithm, 342; Levy flights, 244; Lexicographic order optimization, 17; Location-allocation problem, 414; Locus, 40
M: Magnetic optimization algorithm, 298; MapReduce, 134; Markov chain analysis, 124; Marriage in honeybees optimization, 209; Master–slave model, 129; Maximum diversity problem, 417; Melody search, 233; Membrane computing, 271; Memetic algorithm, 318; Memory cell, 175; Messy GA, 53; Metaheuristics; Metropolis algorithm, 29; MOEA/D, 380; Multimodal optimization, 350; Multipoint crossover, 47; Mutation, 48, 57
N: Natural selection, 41; Negative selection, 178; Negative selection algorithm, 185; Neo-Darwinian paradigm, 37; Niching, 350; Niching mechanism, 350; No free lunch theorem, 22; Nondominated sorting, 372, 384; NP-complete, 14; NSGA-II, 374; Nuclear magnetic resonance, 284; Nurse rostering problem, 417
O: One-point crossover, 46; Opposition-based learning, 310; Order crossover, 60
P: PAES, 378; Pareto method, 18; Pareto optimum, 18; Partial matched crossover, 60; Partial restart, 53; Particle, 153; Pathogen, 180; Path relinking, 333; Penalty function method, 360; Permutation encoding, 60; Permutation problem, 142; Phenotype, 41; Phenotypic plasticity, 6, 41; Physarum polycephalum algorithm, 222; Plant growth algorithm, 300; Plant propagation algorithm, 256; Point mutation, 48; Population, 39; Population space, 316; Population-based incremental learning, 108; Premature convergence, 44; Principle of natural selection
Q: Quadratic assignment problem, 413
R: Random keys representation, 60; Ranking selection, 44; Ray optimization, 298; Real-coded GA, 56; Rearrangement operator, 48; Replacement strategy, 45; Reproduction, 43; Roach infestation optimization, 251; Roulette-wheel selection, 44
S: Scatter search, 331; Schema theorem, 121, 122; Seeker optimization algorithm, 337; Selection, 43; Selfish gene theory, 141; Sequence optimization problem, 60; Seven-spot ladybird optimization, 252; Sexual selection, 41; Sheep flock heredity algorithm, 141; Shuffled frog leaping, 241; Simplex search, 14; Social spider optimization, 247; Sorting, 303; SPEA2, 377; Squeaky wheel optimization, 342; States of matter search, 298; Statistical thermodynamics, 30; Suppress cell, 180; Survival of the fittest; Swarm intelligence; Syntax tree, 72
T: Tabu list, 193, 328; Teaching–learning-based optimization, 338; Tournament selection, 44; Transposition operator, 50; Traveling salesman problem, 415; Two-dimensional GA, 55; Two-point crossover, 46
U: Uniform crossover, 47; Uniform mutation, 57
V: Vaccination, 182; Variable neighborhood search, 10; Vortex search, 301
W: Wasp swarm optimization, 212; Water cycle algorithm, 300; Wind driven optimization, 302

Ngày đăng: 14/05/2018, 15:01

Từ khóa liên quan

Mục lục

  • Preface

  • Contents

  • Abbreviations

  • 1 Introduction

    • 1.1 Computation Inspired by Nature

    • 1.2 Biological Processes

    • 1.3 Evolution Versus Learning

    • 1.4 Swarm Intelligence

      • 1.4.1 Group Behaviors

      • 1.4.2 Foraging Theory

    • 1.5 Heuristics, Metaheuristics, and Hyper-Heuristics

    • 1.6 Optimization

      • 1.6.1 Lagrange Multiplier Method

      • 1.6.2 Direction-Based Search and Simplex Search

      • 1.6.3 Discrete Optimization Problems

      • 1.6.4 P, NP, NP-Hard, and NP-Complete

      • 1.6.5 Multiobjective Optimization Problem

      • 1.6.6 Robust Optimization

    • 1.7 Performance Indicators

    • 1.8 No Free Lunch Theorem

    • 1.9 Outline of the Book

  • 2 Simulated Annealing

    • 2.1 Introduction

    • 2.2 Basic Simulated Annealing

    • 2.3 Variants of Simulated Annealing

  • 3 Genetic Algorithms

    • 3.1 Introduction to Evolutionary Computation

      • 3.1.1 Evolutionary Algorithms Versus Simulated Annealing

    • 3.2 Terminologies of Evolutionary Computation

    • 3.3 Encoding/Decoding

    • 3.4 Selection/Reproduction

    • 3.5 Crossover

    • 3.6 Mutation

    • 3.7 Noncanonical Genetic Operators

    • 3.8 Exploitation Versus Exploration

    • 3.9 Two-Dimensional Genetic Algorithms

    • 3.10 Real-Coded Genetic Algorithms

    • 3.11 Genetic Algorithms for Sequence Optimization

  • 4 Genetic Programming

    • 4.1 Introduction

    • 4.2 Syntax Trees

    • 4.3 Causes of Bloat

    • 4.4 Bloat Control

      • 4.4.1 Limiting on Program Size

      • 4.4.2 Penalizing the Fitness of an Individual with Large Size

      • 4.4.3 Designing Genetic Operators

    • 4.5 Gene Expression Programming

  • 5 Evolutionary Strategies

    • 5.1 Introduction

    • 5.2 Basic Algorithm

    • 5.3 Evolutionary Gradient Search and Gradient Evolution

    • 5.4 CMA Evolutionary Strategies

  • 6 Differential Evolution

    • 6.1 Introduction

    • 6.2 DE Algorithm

    • 6.3 Variants of DE

    • 6.4 Binary DE Algorithms

    • 6.5 Theoretical Analysis on DE

  • 7 Estimation of Distribution Algorithms

    • 7.1 Introduction

    • 7.2 EDA Flowchart

    • 7.3 Population-Based Incremental Learning

    • 7.4 Compact Genetic Algorithms

    • 7.5 Bayesian Optimization Algorithm

    • 7.6 Convergence Properties

    • 7.7 Other EDAs

      • 7.7.1 Probabilistic Model Building GP

  • 8 Topics in Evolutionary Algorithms

    • 8.1 Convergence of Evolutionary Algorithms

      • 8.1.1 Schema Theorem and Building-Block Hypothesis

      • 8.1.2 Finite and Infinite Population Models

    • 8.2 Random Problems and Deceptive Functions

    • 8.3 Parallel Evolutionary Algorithms

      • 8.3.1 Master–Slave Model

      • 8.3.2 Island Model

      • 8.3.3 Cellular EAs

      • 8.3.4 Cooperative Coevolution

      • 8.3.5 Cloud Computing

      • 8.3.6 GPU Computing

    • 8.4 Coevolution

      • 8.4.1 Coevolutionary Approaches

      • 8.4.2 Coevolutionary Approach for Minimax Optimization

    • 8.5 Interactive Evolutionary Computation

    • 8.6 Fitness Approximation

    • 8.7 Other Heredity-Based Algorithms

    • 8.8 Application: Optimizing Neural Networks

  • 9 Particle Swarm Optimization

    • 9.1 Introduction

    • 9.2 Basic PSO Algorithms

      • 9.2.1 Bare-Bones PSO

      • 9.2.2 PSO Variants Using Gaussian or Cauchy Distribution

      • 9.2.3 Stability Analysis of PSO

    • 9.3 PSO Variants Using Different Neighborhood Topologies

    • 9.4 Other PSO Variants

    • 9.5 PSO and EAs: Hybridization

    • 9.6 Discrete PSO

    • 9.7 Multi-swarm PSOs

  • 10 Artificial Immune Systems

    • 10.1 Introduction

    • 10.2 Immunological Theories

    • 10.3 Immune Algorithms

      • 10.3.1 Clonal Selection Algorithm

      • 10.3.2 Artificial Immune Network

      • 10.3.3 Negative Selection Algorithm

      • 10.3.4 Dendritic Cell Algorithm

  • 11 Ant Colony Optimization

    • 11.1 Introduction

    • 11.2 Ant-Colony Optimization

      • 11.2.1 Basic ACO Algorithm

      • 11.2.2 ACO for Continuous Optimization

  • 12 Bee Metaheuristics

    • 12.1 Introduction

    • 12.2 Artificial Bee Colony Algorithm

      • 12.2.1 Algorithm Flowchart

      • 12.2.2 Modifications on ABC Algorithm

      • 12.2.3 Discrete ABC Algorithms

    • 12.3 Marriage in Honeybees Optimization

    • 12.4 Bee Colony Optimization

    • 12.5 Other Bee Algorithms

      • 12.5.1 Wasp Swarm Optimization

  • 13 Bacterial Foraging Algorithm

    • 13.1 Introduction

    • 13.2 Bacterial Foraging Algorithm

    • 13.3 Algorithms Inspired by Molds, Algae, and Tumor Cells

  • 14 Harmony Search

    • 14.1 Introduction

    • 14.2 Harmony Search Algorithm

    • 14.3 Variants of Harmony Search

    • 14.4 Melody Search

  • 15 Swarm Intelligence

    • 15.1 Glowworm-Based Optimization

      • 15.1.1 Glowworm Swarm Optimization

      • 15.1.2 Firefly Algorithm

    • 15.2 Group Search Optimization

    • 15.3 Shuffled Frog Leaping

    • 15.4 Collective Animal Search

    • 15.5 Cuckoo Search

    • 15.6 Bat Algorithm

    • 15.7 Swarm Intelligence Inspired by Animal Behaviors

      • 15.7.1 Social Spider Optimization

      • 15.7.2 Fish Swarm Optimization

      • 15.7.3 Krill Herd Algorithm

      • 15.7.4 Cockroach-Based Optimization

      • 15.7.5 Seven-Spot Ladybird Optimization

      • 15.7.6 Monkey-Inspired Optimization

      • 15.7.7 Migrating-Based Algorithms

      • 15.7.8 Other Methods

    • 15.8 Plant-Based Metaheuristics

    • 15.9 Other Swarm Intelligence-Based Metaheuristics

  • 16 Biomolecular Computing

    • 16.1 Introduction

      • 16.1.1 Biochemical Networks

    • 16.2 DNA Computing

      • 16.2.1 DNA Data Embedding

    • 16.3 Membrane Computing

      • 16.3.1 Cell-Like P System

      • 16.3.2 Computing by P System

      • 16.3.3 Other P Systems

      • 16.3.4 Membrane-Based Optimization

  • 17 Quantum Computing

    • 17.1 Introduction

    • 17.2 Fundamentals

      • 17.2.1 Grover's Search Algorithm

    • 17.3 Hybrid Methods

      • 17.3.1 Quantum-Inspired EAs

      • 17.3.2 Other Quantum-Inspired Hybrid Algorithms

  • 18 Metaheuristics Based on Sciences

    • 18.1 Search Based on Newton's Laws

    • 18.2 Search Based on Electromagnetic Laws

    • 18.3 Search Based on Thermal-Energy Principles

    • 18.4 Search Based on Natural Phenomena

      • 18.4.1 Search Based on Water Flows

      • 18.4.2 Search Based on Cosmology

      • 18.4.3 Black Hole-Based Optimization

    • 18.5 Sorting

    • 18.6 Algorithmic Chemistries

      • 18.6.1 Chemical Reaction Optimization

    • 18.7 Biogeography-Based Optimization

    • 18.8 Methods Based on Mathematical Concepts

      • 18.8.1 Opposition-Based Learning

  • 19 Memetic Algorithms

    • 19.1 Introduction

    • 19.2 Cultural Algorithms

    • 19.3 Memetic Algorithms

      • 19.3.1 Simplex-based Memetic Algorithms

    • 19.4 Application: Searching Low Autocorrelation Sequences

  • 20 Tabu Search and Scatter Search

    • 20.1 Tabu Search

      • 20.1.1 Iterative Tabu Search

    • 20.2 Scatter Search

    • 20.3 Path Relinking

  • 21 Search Based on Human Behaviors

    • 21.1 Seeker Optimization Algorithm

    • 21.2 Teaching–Learning-Based Optimization

    • 21.3 Imperialist Competitive Algorithm

    • 21.4 Several Metaheuristics Inspired by Human Behaviors

  • 22 Dynamic, Multimodal, and Constrained Optimizations

    • 22.1 Dynamic Optimization

      • 22.1.1 Memory Scheme

      • 22.1.2 Diversity Maintaining or Reinforcing

      • 22.1.3 Multiple Population Scheme

    • 22.2 Multimodal Optimization

      • 22.2.1 Crowding and Restricted Tournament Selection

      • 22.2.2 Fitness Sharing

      • 22.2.3 Speciation

      • 22.2.4 Clearing, Local Selection, and Demes

      • 22.2.5 Other Methods

      • 22.2.6 Metrics for Multimodal Optimization

    • 22.3 Constrained Optimization

      • 22.3.1 Penalty Function Method

      • 22.3.2 Using Multiobjective Optimization Techniques

  • 23 Multiobjective Optimization

    • 23.1 Introduction

    • 23.2 Multiobjective Evolutionary Algorithms

      • 23.2.1 Nondominated Sorting Genetic Algorithm II

      • 23.2.2 Strength Pareto Evolutionary Algorithm 2

      • 23.2.3 Pareto Archived Evolution Strategy (PAES)

      • 23.2.4 Pareto Envelope-Based Selection Algorithm

      • 23.2.5 MOEA Based on Decomposition (MOEA/D)

      • 23.2.6 Several MOEAs

      • 23.2.7 Nondominated Sorting

      • 23.2.8 Multiobjective Optimization Based on Differential Evolution

    • 23.3 Performance Metrics

    • 23.4 Many-Objective Optimization

      • 23.4.1 Challenges in Many-Objective Optimization

      • 23.4.2 Pareto-Based Algorithms

      • 23.4.3 Decomposition-Based Algorithms

    • 23.5 Multiobjective Immune Algorithms

    • 23.6 Multiobjective PSO

    • 23.7 Multiobjective EDAs

    • 23.8 Tabu/Scatter Search Based Multiobjective Optimization

    • 23.9 Other Methods

    • 23.10 Coevolutionary MOEAs

  • Appendix A: Benchmarks

    • A.1 Discrete Benchmark Functions

    • A.2 Test Functions

      • A.2.1 Test Functions for Unconstrained and Multimodal Optimization

      • A.2.2 Test Functions for Constrained Optimization

      • A.2.3 Test Functions for Unconstrained Multiobjective Optimization

      • A.2.4 Test Functions for Constrained Multiobjective Optimization

      • A.2.5 Test Functions for Dynamic Optimization

  • Index
