4 Tabu Search and Evolutionary Scatter Search for 'Tree-Star' Network Problems, with Applications to Leased-Line Network Design

Jiefeng Xu, Steve Y. Chiu and Fred Glover

Telecommunications Optimization: Heuristic and Adaptive Techniques. Edited by David W. Corne, Martin J. Oates, George D. Smith. Copyright © 2000 John Wiley & Sons Ltd. ISBNs: 0-471-98855-3 (Hardback); 0-470-84163X (Electronic)

4.1 Introduction

Digital Data Service (DDS) is widely used for providing private, high-quality digital transport service in the telecommunications industry. The network connections of DDS are permanent and its transmission facilities are dedicated, enabling it to transfer digital data with less interference and greater security than switched service. DDS also proves appropriate for linking sites that run applications requiring a permanent connection and a demonstrated need for frequent data transfer. For example, it can be used for remote Local Area Network (LAN) access, entry into frame relay networks, and support for transaction-based systems, and it can be incorporated in IBM's System Network Architecture (SNA) and other networks. With optimal DDS network design and sufficient use, DDS becomes economically competitive with frame relay service in the higher transmission speed ranges, and with analog private line service in the lower transmission speed ranges.

In this chapter, we address a fundamental DDS network design problem that arises in practical applications of a telecommunications company in the United States. The decision elements of the problem consist of a finite set of inter-offices (hubs) and a finite set of customer locations that are geographically distributed on a plane.
A subset of hubs is chosen to be active, subject to the restriction of forming a network in which every two active hubs can communicate with each other, hence constituting a spanning tree. Each hub has a fixed cost for being chosen active, and each link (edge) has a connection cost for being included in the associated spanning tree. Each customer location must be connected directly to its own designated end office, which in turn must be connected to exactly one active hub, thereby permitting every two customers to communicate with each other via the hub network. This also incurs a connection cost on the edge between the customer location and its associated hub. The objective is to design such a network at minimum cost.

Figure 4.1: A DDS network.

Figure 4.1 shows a practical scenario of a small DDS network. The number of dedicated lines required for the link between an end office and its assigned hub equals the number of customer locations connected to that end office. Since the links between customer locations and end offices are always fixed, the costs of these links are constant and can therefore be ignored in the network design. In practice, the line connection cost is distance sensitive and is calculated according to the tariff charges established by the Federal Communications Commission (FCC). These charges include a fixed cost for use and a variable cost related to the distance. For each active hub, in addition to the fixed bridging cost, a charge is also assessed for each incoming and outgoing line connected to that hub. To illustrate how these costs are associated with the DDS network, suppose the monthly cost data are given as in Table 4.1. Then the monthly costs for the network in Figure 4.1 are as detailed in Table 4.2.

The foregoing representation of the DDS network design problem can be simplified by reference to a Steiner tree framework.
Since the linking cost per line between an end office and a potential hub is known, and the bridging cost per line for that hub is also available, we can pre-calculate the cost of connecting a customer location to a hub by adding up these two terms. Thus the intermediate end offices can be eliminated, and the DDS network problem can be converted into an extension of the Steiner tree problem. This extended problem was first investigated by Lee et al. (1996), who denote the hubs as 'Steiner nodes' and the customer locations as 'target nodes', thus giving the problem the name Steiner tree-star (STS) problem.

[Figure 4.1 depicts digital hubs, end offices and customer locations, joined by links with lengths of 0m, 3m, 4m, 5m, 8m, 10m and 16m.]

Table 4.1 Example monthly cost data for leased line networks.

  Fixed bridging cost       $82.00
  Bridging cost per line    $41.00

  Line connecting cost:
  Mileage    Fixed Cost    Variable Cost (per mile)
  <1         $30.00        $0.00
  1–15       $125.00       $1.20
  ≥16        $130.00       $1.50

Table 4.2 Monthly costs for the network of Figure 4.1, based on Table 4.1.

  Bridging cost
    fixed cost:     $82.00 × 4 = $328.00
    variable cost:  $41.00 × 14 = $574.00
  Line connecting cost
    fixed cost:     $30.00 × 2 + $125.00 × 8 + $130.00 × 1 = $1190.00
    variable cost:  $1.20 × (3 + 4 × 3 + 4 + 5 + 8 + 10) + $1.50 × 16 = $74.40
  Total monthly cost: $2166.40

Literature on the STS problem is limited. Lee et al. (1994) show that the STS problem is strongly NP-hard and identify two mixed zero-one integer programming formulations for it. Lee et al. (1996) further investigate valid inequalities and facets of the underlying polytope of the STS problem, and implement them in a branch and cut scheme. More recently, Xu et al. (1996a; 1996b) have developed a Tabu Search (TS) based algorithm. Their computational tests demonstrate that the TS algorithm is able to find optimal solutions for all problem instances up to 100 nodes.
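The tariff arithmetic above is easy to mechanize and check. The sketch below is our own reconstruction (the function names and the `lengths` list are inferred from the Table 4.2 breakdown, not given explicitly in the chapter); it reproduces the Figure 4.1 totals from the Table 4.1 rates:

```python
def line_cost(miles):
    """Monthly cost of one leased line under the Table 4.1 tariff bands."""
    if miles < 1:
        return 30.00                      # fixed charge only, no mileage charge
    if miles <= 15:
        return 125.00 + 1.20 * miles      # fixed plus per-mile charge
    return 130.00 + 1.50 * miles

def bridging_cost(active_hubs, hub_lines):
    """Fixed bridging charge per active hub plus a per-line charge at hubs."""
    return 82.00 * active_hubs + 41.00 * hub_lines

# Figure 4.1 network: 4 active hubs, 14 line terminations at hubs, and
# line lengths (miles) inferred from the Table 4.2 cost breakdown.
lengths = [0, 0, 3, 4, 4, 4, 4, 5, 8, 10, 16]
total = bridging_cost(4, 14) + sum(line_cost(m) for m in lengths)
# total is $2166.40, matching Table 4.2 (up to floating-point rounding)
```

Summing the pieces recovers the $2166.40 monthly total of Table 4.2, which confirms the breakdown of the fixed and variable components.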
Applied to larger problems that the branch and cut procedure (Lee et al., 1996) could not solve, the TS algorithm consistently outperforms the construction heuristic described in Lee et al. (1996).

In this chapter, we explore an implementation of Scatter Search (SS) for the STS problem. Scatter search, and its generalized form called path relinking, are evolutionary methods that have recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s (Glover, 1963; 1965) for combining decision rules and problem constraints, these methods use strategies for combining solution vectors that have proved effective for scheduling, routing, financial product design, neural network training, optimizing simulation, and a variety of other problem areas (see, e.g., Glover (1999)).

Our chapter is organized as follows. The problem formulation is presented in the next section. Section 4.3 briefly describes the tabu search algorithm for the STS problem. We further describe the SS based heuristic for the STS problem in section 4.4 and examine several relevant issues, such as the diversification generator, the reference set update method, the subset generation method, the solution combination method and the improvement method. In section 4.5, we report computational results on a set of carefully designed test problems, accompanied by comparisons with the solutions obtained by the TS algorithm (Xu et al., 1996a; 1996b), which has been documented as the best heuristic available prior to this research. In the concluding section, we summarize our methodology and findings.

4.2 Mathematical Formulation

We formulate the STS problem as a 0-1 integer programming problem as follows.
First we define:

  M       set of target nodes;
  N       set of Steiner nodes;
  c_{ij}  cost of connecting target node i to Steiner node j;
  d_{jk}  cost of connecting Steiner nodes j and k;
  b_j     cost of activating Steiner node j.

The decision variables of this formulation are:

  x_j     a binary variable equal to 1 if and only if Steiner node j is selected to be active;
  y_{jk}  a binary variable equal to 1 if and only if Steiner node j is linked to Steiner node k;
  z_{ij}  a binary variable equal to 1 if and only if target node i is linked to Steiner node j.

The model is then to minimize:

  \min \sum_{j \in N} b_j x_j + \sum_{j,k \in N,\, j<k} d_{jk} y_{jk} + \sum_{i \in M} \sum_{j \in N} c_{ij} z_{ij}    (4.1)

subject to:

  \sum_{j \in N} z_{ij} = 1,  for i \in M    (4.2)

  z_{ij} \le x_j,  for i \in M, j \in N    (4.3)

  y_{jk} \le (x_j + x_k)/2,  for j,k \in N, j < k    (4.4)

  \sum_{j,k \in N,\, j<k} y_{jk} = \sum_{j \in N} x_j - 1    (4.5)

  \sum_{j,k \in S,\, j<k} y_{jk} \le \sum_{j \in S \setminus \{w\}} x_j,  for S \subset N, |S| \ge 3, w \in S    (4.6)

  x_j \in \{0,1\},  for j \in N    (4.7)

  y_{jk} \in \{0,1\},  for j,k \in N, j < k    (4.8)

  z_{ij} \in \{0,1\},  for i \in M, j \in N    (4.9)

In this formulation, the objective function (4.1) seeks to minimize the sum of the connection costs between target nodes and Steiner nodes, the connection costs between Steiner nodes, and the setup costs for activating Steiner nodes. Constraint (4.2) specifies the star topology that requires each target node to be connected to exactly one Steiner node. Constraint (4.3) indicates that a target node can only be connected to an active Steiner node. Constraint (4.4) stipulates that two Steiner nodes can be connected only if both nodes are active. Constraints (4.5) and (4.6) express the spanning tree structure over the active Steiner nodes. In particular, (4.5) specifies the condition that the number of edges in the spanning tree must equal one fewer than the number of active nodes, while (4.6) is an anti-cycle constraint that also ensures that connectivity will be established for each active Steiner node via the spanning tree.
Constraints (4.7)–(4.9) express the discrete requirements: all of the decision variables are binary. Clearly, the decision vector x is the critical one for the STS problem. Once this n-vector is determined, we can trivially determine the y_{jk} values by building the minimal spanning tree over the selected Steiner nodes (those for which x_j = 1), and then determine the z_{ij} values for each target node i by connecting it to its nearest active Steiner node; i.e. we have z_{ij} = 1 if and only if c_{ij} = min {c_{ik} | x_k = 1}.

4.3 The Tabu Search Algorithm

In this section, we provide an overview of the tabu search algorithm for this problem, which was first proposed in Xu et al. (1996b). Although we do not describe the method in minute detail, we are careful to describe enough of its form to permit readers to understand both the similarities and differences between this method and the scatter search method that is the focus of our current investigation.

The tabu search algorithm starts from a trivial initial solution and proceeds iteratively. At each iteration, a set of candidate moves is extracted from the neighborhood for evaluation, and a 'best' (highest evaluation) move is selected. The selected move is applied to the current solution, thereby generating a new solution. During each iteration, certain neighborhood moves are considered tabu and excluded from the candidate list. The best non-tabu move can be determined either deterministically or probabilistically. An aspiration criterion can over-ride the choice of a best non-tabu move by selecting a highly attractive tabu move. The algorithm proceeds in this way until a pre-defined number of iterations has elapsed, and then terminates. At termination, the algorithm outputs the all-time best feasible solution. In subsequent subsections, we describe the major components of the algorithm.
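Because x alone determines the rest of the solution, a small decoder suffices. The sketch below uses our own helper names, with plain Prim's algorithm standing in for any minimum spanning tree routine; it recovers the y values (tree edges), the z values (nearest-hub assignment) and the total cost:

```python
def decode(x, b, d, c):
    """Given the 0-1 hub vector x, activation costs b[j], hub-hub costs
    d[j][k] and target-hub costs c[i][j], recover the rest of the solution:
    an MST over active hubs (the y variables) and a nearest-active-hub
    assignment for each target node (the z variables)."""
    active = [j for j, xj in enumerate(x) if xj]
    # Prim's algorithm over the active Steiner nodes.
    in_tree, tree = {active[0]}, []
    while len(in_tree) < len(active):
        j, k = min(((j, k) for j in in_tree for k in active if k not in in_tree),
                   key=lambda e: d[e[0]][e[1]])
        in_tree.add(k)
        tree.append((j, k))
    # Each target node connects to its cheapest active hub.
    assign = [min(active, key=lambda j: row[j]) for row in c]
    cost = (sum(b[j] for j in active)
            + sum(d[j][k] for j, k in tree)
            + sum(c[i][j] for i, j in enumerate(assign)))
    return tree, assign, cost

# Tiny illustrative instance (data invented): 3 Steiner nodes, 2 targets.
tree, assign, cost = decode([1, 1, 0], b=[5, 5, 100],
                            d=[[0, 2, 9], [2, 0, 9], [9, 9, 0]],
                            c=[[1, 4, 9], [7, 3, 9]])
# tree == [(0, 1)], assign == [0, 1], cost == 16
```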
4.3.1 Neighborhood Structure

Once the set of active Steiner nodes is determined, a feasible solution can easily be constructed by connecting the active Steiner nodes using a spanning tree and by linking the target nodes to their nearest active Steiner nodes. Based on this observation, we consider three types of moves: constructive moves, which add a currently inactive Steiner node to the current solution; destructive moves, which remove an active Steiner node from the current solution; and swap moves, which exchange an active Steiner node with an inactive Steiner node. Swap moves induce a more significant change in the current solution and hence require a more complex evaluation. For efficiency, swap moves are executed less frequently. More specifically, we execute a swap move once every certain number of iterations (for perturbation), and several times consecutively when the search fails to improve the current solution for a pre-specified number of iterations (for intensification). Outside the swap move phase, constructive and destructive moves are executed, selecting the best candidate move based on the evaluation and aspiration criteria applied to a subset of these two types of moves. In addition, since destructive moves deform the current spanning tree, we restrict the nodes removed to those active Steiner nodes whose degree does not exceed three. This restriction has the purpose of facilitating the move evaluation, as described next.

4.3.2 Move Evaluation and Error Correction

To quickly evaluate a potential move, we provide methods to estimate the cost of the resulting new solution according to the various move types. For a constructive move, we calculate the new cost by summing the fixed cost of adding the new Steiner node with the connection cost for linking the new node to its closest active Steiner node.
For a destructive move, since we only consider active Steiner nodes with degree at most three in the current solution, we can reconstruct the spanning tree as follows. If the degree of the node to be dropped is one, we simply remove the node. If the degree is two, we remove the node and add the link that joins its two neighboring nodes. If the degree is three, we choose the least-cost pair of links that reconnects the three nodes previously adjacent to the removed node. The cost of the new solution can then be calculated by adjusting the connection cost for the new spanning tree and the fixed cost of the removed node. A swap move can be treated as a combination of the destructive and constructive moves: first removing a tree node and then adding a non-tree node.

The error introduced by the preceding estimates can be corrected by executing a minimum spanning tree algorithm. We apply this error correction procedure every few iterations, and also whenever a new best solution is found. Throughout the algorithm, we maintain a set of elite solutions that represent the best solutions found so far. The error correction procedure is also applied to these solutions periodically.

4.3.3 TS Memory

Our TS approach uses both a short-term memory and a long-term memory to prevent the search from being trapped in a local minimum, and to intensify and diversify the search. The short-term memory operates by imposing restrictions on the set of solution attributes that are permitted to be incorporated in (or changed by) candidate moves. More precisely, a node added to the solution by a constructive move is prevented from being deleted for a certain number of iterations, and likewise a node dropped from the solution by a destructive move is prevented from being added for a certain (different) number of iterations.
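The degree-bounded tree repair can be sketched directly. The helper below (names ours) handles the three cases; for degree three it tries all three pairs of reconnecting links among the former neighbours and keeps the cheapest:

```python
def drop_node(tree_edges, v, d):
    """Repair a spanning tree after removing node v (degree(v) <= 3).
    tree_edges: set of frozenset({u, w}) edges; d[j][k]: connection costs.
    Returns the repaired edge set, or None if v's degree exceeds three."""
    incident = [e for e in tree_edges if v in e]
    if len(incident) > 3:
        return None                          # move disallowed by the restriction
    nbrs = [next(u for u in e if u != v) for e in incident]
    rest = set(tree_edges) - set(incident)
    if len(nbrs) <= 1:                       # degree 1: just drop the node
        return rest
    if len(nbrs) == 2:                       # degree 2: bridge the two neighbours
        return rest | {frozenset(nbrs)}
    a, b, c = nbrs                           # degree 3: cheapest reconnecting pair
    pairs = [{frozenset((a, b)), frozenset((a, c))},
             {frozenset((a, b)), frozenset((b, c))},
             {frozenset((a, c)), frozenset((b, c))}]
    best = min(pairs, key=lambda p: sum(d[min(e)][max(e)] for e in p))
    return rest | best

# Example (costs invented): node 1 has degree 3 with neighbours 0, 2, 3.
tree = {frozenset((0, 1)), frozenset((1, 2)), frozenset((1, 3))}
d = [[0, 9, 1, 1], [9, 0, 9, 9], [1, 9, 0, 2], [1, 9, 2, 0]]
repaired = drop_node(tree, 1, d)
# repaired == {frozenset((0, 2)), frozenset((0, 3))}: the cheapest pair
```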
For constructive and destructive moves, therefore, these restrictions ensure that the changes caused by each move will not be 'reversed' for the next few iterations. For each swap move, we impose tabu restrictions that affect both the added and the dropped node. The number of iterations during which a node remains subject to a tabu restriction is called the tabu tenure of the node. We establish a relatively small range for the tabu tenure, which depends on the type of move considered, and each time a move is executed, we select a specific tenure randomly from the associated range. We also use an aspiration criterion to over-ride the tabu classification whenever the move would lead to a new solution that is among the best two solutions found so far.

The long-term memory is a frequency-based memory that records the number of times each particular node has been added to or dropped from the solution. We use it to discourage the types of changes that have already occurred frequently (thus encouraging changes that have occurred less frequently). This represents a particular form of frequency memory based on attribute transitions (changes). Another type of frequency memory is based on residence, i.e. the number of iterations that nodes remain in or out of the solution.

4.3.4 Probabilistic Choice

As stated above, a best candidate move can be selected at each iteration according to either probabilistic or deterministic rules. We find that a probabilistic choice of candidate move is appropriate in this application, since the move evaluation contains 'noise' due to the estimation errors. The selection of the candidate move can be summarized as follows. First, all neighborhood moves (including tabu moves) are evaluated. If the move with the highest evaluation satisfies the aspiration criterion, it is selected. Otherwise, we consider the list of moves ordered by their evaluations. For this purpose, tabu moves are treated as moves with highly penalized evaluations.
We select the top move with probability p, and reject it with probability 1 − p. If the move is rejected, we consider the next move on the list in the same fashion. If no move has been selected at the end of this process, we select the top move. We also make the selection probability vary with the quality of the move, replacing p by p·β₁·r^(−β₂), where r is the ratio of the current move evaluation to the value of the best solution found so far, and β₁ and β₂ are two positive parameters. This fine-tuned probability increases the chance of selecting 'good' moves.

4.3.5 Solution Recovery for Intensification

We implement a variant of the restarting and recovery strategy in which the recovery of elite solutions is postponed until the last stage of the search. The elite solutions, which are the best K distinct solutions found so far, are recovered in reverse order, from the worst solution to the best. The list of elite solutions is updated whenever a new solution is found that is better than the worst solution in the list: the new solution is added to the list and the worst is dropped. During each solution recovery, the designated elite solution taken from the list becomes the current solution, and all tabu restrictions are removed and reinitialized. A new search is then launched for a fixed number of iterations, until the next recovery starts. Once the recovery process reaches the best solution in the list, it moves circularly back to the worst solution and restarts the above process. (Note that our probabilistic move selection keeps the process from repeating the previous search trajectory.)

4.4 The SS Algorithm

Our SS algorithm is specifically designed for the STS problem and consists of the following components, based on Glover (1997):

1.
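The list-walking selection rule is simple to sketch. In the hedged version below (function name ours; the per-move fine-tuning of p described above is omitted for brevity), the scan accepts each move with probability p and falls back to the top move if nothing is accepted:

```python
import random

def choose_move(ordered_moves, p, rng=random.random):
    """Scan moves in decreasing evaluation order, accepting each with
    probability p; if none is accepted, fall back to the top move."""
    for move in ordered_moves:
        if rng() < p:
            return move
    return ordered_moves[0]
```

With p = 1 the rule is purely greedy (always the top move); smaller p values spread the choice down the list, which is what injects the randomization that keeps restarts from retracing the same trajectory.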
A Diversification Generator: to generate a collection of diverse trial solutions, using an arbitrary trial solution (or seed solution) as an input.

2. An Improvement Method: to transform a trial solution into one or more enhanced trial solutions. (Neither the input nor the output solutions are required to be feasible, though the output solutions will more usually be expected to be so. If no improvement of the input trial solution results, the 'enhanced' solution is considered to be the same as the input solution.)

3. A Reference Set Update Method: to build and maintain a Reference Set consisting of the b best solutions found (where the value of b is typically small, e.g. between 20 and 40), organized to provide efficient access by other parts of the method.

4. A Subset Generation Method: to operate on the Reference Set, producing a subset of its solutions as a basis for creating combined solutions.

5. A Solution Combination Method: to transform a given subset of solutions produced by the Subset Generation Method into one or more combined solution vectors.

In the following subsections, we first describe the framework of our SS algorithm, and then describe each component as specifically designed for the STS problem.

4.4.1 Framework of SS

We specify the general template in outline form as follows. This template reflects the type of design often used in scatter search and path relinking.

Initial Phase

1. (Seed Solution Step.) Create one or more seed solutions, which are arbitrary trial solutions used to initiate the remainder of the method.

2. (Diversification Generator.) Use the Diversification Generator to generate diverse trial solutions from the seed solution(s).

3. (Improvement and Reference Set Update Methods.) For each trial solution produced in Step 2, use the Improvement Method to create one or more enhanced trial solutions.
During successive applications of this step, maintain and update a Reference Set consisting of the b best solutions found.

4. (Repeat.) Execute Steps 2 and 3 until some designated total number of enhanced trial solutions has been produced as a source of candidates for the Reference Set.

Scatter Search Phase

5. (Subset Generation Method.) Generate subsets of the Reference Set as a basis for creating combined solutions.

6. (Solution Combination Method.) For each subset X produced in Step 5, use the Solution Combination Method to produce a set C(X) that consists of one or more combined solutions. Treat each member of C(X) as a trial solution for the following step.

7. (Improvement and Reference Set Update Methods.) For each trial solution produced in Step 6, use the Improvement Method to create one or more enhanced trial solutions, while continuing to maintain and update the Reference Set.

8. (Repeat.) Execute Steps 5–7 in repeated sequence, until reaching a specified cut-off limit on the total number of iterations.

We follow the foregoing template and describe each of the components in detail in the subsequent subsections.

4.4.2 Diversification Generators for Zero-One Vectors

Let x denote a 0-1 n-vector in the solution representation. (In our STS problem, x represents the vector of decision variables that determines whether each Steiner node is active.) The first type of diversification generator we consider takes such a vector x as its seed solution, and generates a collection of solutions associated with an integer h = 1, 2, ..., h*, where h* ≤ n − 1 (recommended: h* ≤ n/5). We generate two types of solutions, x′ and x″, for each h, by the following pair of solution generating rules:

Type 1 Solution: Let the first component x′_1 of x′ be 1 − x_1, and let x′_{1+kh} = 1 − x_{1+kh} for k = 1, 2, 3, ..., k*, where k* is the largest integer satisfying k* ≤ n/h. Remaining components of x′ equal 0.
To illustrate for x = (0,0,...,0): the values h = 1, 2 and 3 respectively yield x′ = (1,1,...,1), x′ = (1,0,1,0,1,...) and x′ = (1,0,0,1,0,0,1,0,0,1,...). This progression suggests the reason for preferring h* ≤ n/5: as h becomes larger, the solutions x′ for two adjacent values of h differ from each other proportionately less than when h is smaller. An option to exploit this is to allow h to increase by an increasing increment for larger values of h.

Type 2 Solution: Let x″ be the complement of x′. Again illustrating for x = (0,0,...,0): the values h = 1, 2 and 3 respectively yield x″ = (0,0,...,0), x″ = (0,1,0,1,...) and x″ = (0,1,1,0,1,1,0,...). Since x″ duplicates x for h = 1, the value h = 1 can be skipped when generating x″.

We extend the preceding design to generate additional solutions as follows. For values of h ≥ 3, the solution vector is shifted so that the index 1 is replaced by a variable index q, which can take the values 1, 2, 3, ..., h. Continuing the illustration for x = (0,0,...,0) with h = 3: in addition to x′ = (1,0,0,1,0,0,1,...), the method also generates the solutions x′ = (0,1,0,0,1,0,0,1,...) and x′ = (0,0,1,0,0,1,0,0,1,...), as q takes the values 2 and 3.

The following pseudo-code indicates how the resulting diversification generator can be structured, where the parameter MaxSolutions indicates the maximum number of solutions to be generated. (In our implementation, we set MaxSolutions equal to the number of 'empty slots' in the reference set, so the procedure terminates either once the reference set is filled, or after all of the indicated solutions are produced.) Comments within the code appear in parentheses.

  NumSolutions = 0
  For h = 1 to h*
      Let q* = 1 if h < 3, and otherwise let q* = h
          (q* denotes the value such that q will range from 1 to q*.
          We set q* = 1 instead of q* = h for h < 3 because otherwise the
          solutions produced for the special case of h < 3 will duplicate
          other solutions or their complements.)
      For q = 1 to q*
          Let k* = (n − q)/h, rounded down
          For k = 0 to k*
              x′_{q+kh} = 1 − x_{q+kh}
          End k
          If h > 1, generate x″ as the complement of x′
              (x′ and x″ are the current output solutions.)
          NumSolutions = NumSolutions + 2 (or + 1 if h = 1)
          If NumSolutions ≥ MaxSolutions, then stop generating solutions
      End q
  End h

The number of solutions x′ and x″ produced by the preceding generator is approximately h*(h* + 1). Thus if n = 50 and h* = n/5 = 10, the method will generate about 110 different output solutions, while if n = 100 and h* = n/5 = 20, it will generate about 420. Since the number of output solutions grows fairly rapidly as n increases, this number can be limited, while still creating a relatively diverse subset of solutions, by allowing q to skip over various values between 1 and q*. The greater the number of values skipped, the less 'similar' the successive solutions (for a given h) will be. Also, as previously noted, h itself can be incremented by a value that differs from 1.

[...]

... the best heuristic available among the various construction heuristics.

4.5.1 Parameter Settings

Our TS method requires a few parameters to be set at appropriate values. These values are initialized based on our computational experience and common sense, and then fine-tuned using a systematic approach (Xu et al., 1998). First we select an initial solution ...
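The diversification pseudo-code translates almost line for line into Python. A sketch (the function name is ours) that reproduces the progression shown earlier for an all-zeros seed:

```python
def diversification_generator(x, h_star, max_solutions=None):
    """Type 1/Type 2 diverse trial vectors from a 0-1 seed vector x."""
    n = len(x)
    solutions = []
    for h in range(1, h_star + 1):
        q_star = h if h >= 3 else 1        # q* = 1 for h < 3 (avoids duplicates)
        for q in range(1, q_star + 1):
            x1 = [0] * n                   # remaining components equal 0
            for i in range(q - 1, n, h):   # flip components q, q+h, q+2h, ...
                x1[i] = 1 - x[i]
            solutions.append(x1)
            if h > 1:                      # complement duplicates x when h = 1
                solutions.append([1 - v for v in x1])
            if max_solutions and len(solutions) >= max_solutions:
                return solutions
    return solutions

seed = [0] * 6
sols = diversification_generator(seed, h_star=3)
# sols[0] == [1,1,1,1,1,1]; sols[1] == [1,0,1,0,1,0]; sols[2] its complement
```

For this seed the call yields nine vectors: one for h = 1, a vector and its complement for h = 2, and three shifted vectors with their complements for h = 3, matching the roughly h*(h* + 1) count noted above.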
... procedure that generates subsets X of RefSet that have useful properties, while avoiding the duplication of subsets previously generated. Our approach for doing this is organized to generate the following four different collections of subsets of RefSet, which we refer to as SubSetType = 1, 2, 3 and 4. Let bNow denote the number of solutions ...

... avoid the effort of transforming and improving solutions already generated. Avoidance of duplications by controlling the combined solutions, which includes submitting them to constructive and improving heuristics, can be a significant factor in producing an effective overall procedure. To do this, we store only the r = rNow most recent solutions generated (allowing rNow to grow to a maximum of rMax different ...

... Let E0 and Hash0 be the evaluation and hash function value for solution x′, and denote the associated values for the xsave[r] array by Esave(r) and Hashsave(r). These are accompanied by a 'depth' value, which is 0 if no duplication occurs, and otherwise tells how deep in the list – how far back from the last solution recorded – a duplication ...

Duplication Check Subroutine

4.4.6 Improvement Method

We apply a local search heuristic to improve the initial solution and the trial solutions produced by the combination method. The heuristic employs the same neighborhood of moves as used for the tabu search algorithm, i.e. constructive moves, destructive moves and swap moves. We also ...

... moves is exact.) If the true cost of the best move for all three types is lower (better) than the cost of the current solution, that move is executed and the search proceeds. Otherwise, the local search heuristic terminates with the current solution. Since the local search improvement method always ends with a local optimum, it is very likely to terminate with the same solution for different starting solutions ...

... than TS. It also ties 14 problems with TS, and produces four worse solutions, but the differences are truly marginal (less than 0.1%). Given the fact that our TS approach has been documented as the best heuristic available for the STS problem, and that it has produced optimal solutions for all test problems with up to 100 Steiner nodes (Xu et al., 1996b), the quality of our SS method is quite high. We ...

... does not take long-term memory into consideration. More specifically, our local search pays the same attention to constructive moves, destructive moves and swap moves. However, statistics show that the constructive and swap moves are more time consuming and therefore should be executed less frequently to achieve greater speed. The ...

... rule (1) does, which requires fewer subsequent constructive moves to generate a complete solution. The SS 4 approach precisely matches the solutions produced by SS, but takes more time (approximately twice as much). This suggests that the offspring produced by SS are quite likely a subset of those produced by SS 4, but a subset that ...
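The duplication-check fragments above suggest a cheap prescreening structure: compare an evaluation value and a simple hash first, and fall back to a full element-wise comparison only when both match. A sketch under those assumptions (the class design and the hash choice are ours, not the chapter's):

```python
class RecentSolutions:
    """Keep the most recent solutions (up to r_max) and report how deep in
    the list a duplicate of a candidate sits (0 means no duplicate)."""

    def __init__(self, r_max):
        self.r_max = r_max
        self.saved = []          # most recent first: (eval, hash, tuple(x))

    @staticmethod
    def _hash(x):
        # Position-weighted sum: cheap to compare before a full check.
        return sum((i + 1) * v for i, v in enumerate(x))

    def check_and_add(self, x, e0):
        x, h0 = tuple(x), self._hash(x)
        for depth, (e, h, xs) in enumerate(self.saved, start=1):
            if e == e0 and h == h0 and xs == x:   # full check only on match
                return depth
        self.saved.insert(0, (e0, h0, x))
        del self.saved[self.r_max:]               # retain only r_max entries
        return 0

rs = RecentSolutions(r_max=2)
rs.check_and_add([0, 1], 5)      # new -> depth 0
rs.check_and_add([1, 0], 7)      # new -> depth 0
# re-submitting [0, 1] now reports depth 2: two solutions back in the list
```

Because the evaluation and hash rarely collide for distinct vectors, the expensive element-wise comparison runs only on genuine candidates for duplication.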
We Telecommunications Optimization: Heuristic and Adaptive Techniques 60 further describe the SS based heuristic for the STS problem in section 4.4 and examine several. Improvement Method We apply a local search heuristic to improve the initial solution and the trial solution produced by the combination method. The heuristic employs the same neighborhood of moves

Ngày đăng: 01/07/2014, 10:20

Tài liệu cùng người dùng

  • Đang cập nhật ...

Tài liệu liên quan