International Journal of Computer Integrated Manufacturing, Vol. 23, No. 10, 2010


International Journal of Computer Integrated Manufacturing, Vol. 23, No. 10, October 2010, 853–875

Implementation of product lifecycle management tools using enterprise integration engineering and action-research

Nicolás Peñaranda (a), Ricardo Mejía (b), David Romero (a) and Arturo Molina (a)*

(a) Tecnológico de Monterrey, México; (b) Universidad EAFIT, Colombia

(Received November 2009; final version received 18 May 2010)

This paper describes how enterprise integration engineering (EIE) and action-research (A-R) can be used to support the implementation of product lifecycle management (PLM) tools. The EIE concept is used to align corporate strategies with the use of PLM technologies in order to impact the key performance indicators (KPIs) of the enterprise. An EIE reference framework is proposed to define strategies, evaluate performance measures, design/re-design processes and establish the enabling tools and technologies that support the enterprise strategies, while A-R is proposed to guide the PLM tools implementation at various stages of the product development process. An industrial application is described to demonstrate the benefits of applying EIE, A-R and PLM in an enterprise.

Keywords: enterprise integration; enterprise modelling; product lifecycle; action-research; industrial application

1. Introduction

Business managers are looking for new ways of improving their company's performance. For this reason, concepts such as enterprise integration (EI) and product lifecycle management (PLM) have emerged to help companies succeed in facing these challenges. EI is a domain of research developed since the 1990s as an extension of computer integrated manufacturing (CIM). EI research is mainly carried out within two distinct research disciplines: enterprise modelling and information technology. The first discipline refers to EI as a set of concepts and approaches that allow the definition of a global architecture for a system, the consistency of system-wide decision making, the
notion of a process whose activity flow model goes beyond the borders of functions, the dynamic allocation of resources, and the consistency of data (Vernadat 2002). In the second discipline, information technology, EI is carried out through the integration of several enterprise systems, such as enterprise resource planning (ERP), supply chain management (SCM), customer relationship management (CRM) and business process management systems (BPMS), and also through authoring functional applications such as computer aided design (CAD), computer aided manufacturing (CAM), computer aided engineering (CAE), office automation, etc. (Panetto and Molina 2008). All these systems and applications support the implementation of the processes that sustain the enterprise operations.

*Corresponding author. Email: armolina@itesm.mx
ISSN 0951-192X print/ISSN 1362-3052 online
© 2010 Taylor & Francis
DOI: 10.1080/0951192X.2010.495136
http://www.informaworld.com

Enterprise integration engineering (EIE) is the collection of modelling principles, methodologies and tools that support the integration of different enterprise lifecycle entities (e.g. enterprise, project, product, processes). The EIE foundation relies on the creation of an enterprise model of the different entities in an enterprise, aiming at building a complete representation of an enterprise that consists of the definition of their mission, strategies, key performance indicators (KPIs), processes and competencies, and their relationships (Nof et al. 2006). EIE allows a detailed description of all the key elements of an entity (e.g. activities, information/knowledge, organisational aspects, human and technological resources), and several languages may be used (Cuenca et al. 2006). In an enterprise model, this description provides the means to connect and communicate all the functional areas of an organisation to improve synergy within the enterprise, and to achieve its mission and vision in an effective and efficient manner (Molina et al.
2005). Furthermore, EIE enables an enterprise to share key information/knowledge in order to achieve business process coordination and cooperative decision-making, and therefore to achieve enterprise integration.

PLM is a strategic business approach that is used to achieve 'enterprise integration' for product development. Its intention is to reduce inefficiencies across the whole product lifecycle (Grieves 2006). The PLM concept is focused on the integration of lifecycle information and knowledge supported by computer aided engineering technologies such as CAD, CAM, CAE and knowledge-based engineering systems (KBES). PLM aims to support the management of the product development process through the stages of its lifecycle, from conception to recycling or disposal. PLM is recognised by the world's leading universities, institutes and solution vendors as the next big wave in enterprise software applications in the market, and as a key technology to support the new competitive strategy, value chain strategy and production/service strategy in an enterprise (Ming et al. 2005). The emerging software market is a suite of tools used to plan, manage and execute lifecycle activities, which include identifying business opportunities, prioritising R&D efforts, developing new products, and supporting their production and introduction to the market (Rozwell and Halpern 2004), or even closing the lifecycle loop, as Jun et al. (2007) proposed, by integrating new technologies to gather real-time feedback. However, PLM systems might also be considered an important concept for complete enterprise integration in an enterprise that aims to carry out lifecycle engineering activities. The work presented by Jianjun et al. (2008) describes an example of product lifecycle engineering design based on a design for excellence (DFX) approach, treating information exchange issues in order to lead the engineering design to an effective and efficient adoption of a
sustainable product development paradigm. Gao et al. (2003) at Cranfield University have integrated product data management (PDM) and PLM technologies to demonstrate that PLM can improve an enterprise's ability to effectively manage its supply chains and the collaboration around concurrent product developments between separate offices, and also with sub-contractors, enabling enterprise integration.

PLM integration and coordination in an enterprise remain challenging because of their knowledge-intensive nature. The study carried out by Siddiqui et al. (2004) investigates the problems and issues faced by companies when implementing PDM systems, which are one of several components needed for a complete PLM implementation. A set of key factors, such as lack of management support, implementation issues, user acceptance and costs, should be considered. Furthermore, according to Bygstad et al. (2008), the turbulence of the business environment and the complexity of the technical environment are the main challenges to face. Schilli and Dai (2006) emphasise the necessity of a deeper understanding of the current business, the design of appropriate processes and the implementation of a supporting IT architecture. Garetti et al. (2005) propose a set of experimental learning techniques and a change management approach in order to reach a better PLM implementation, recognising the central role of virtual simulation, business process analysis techniques and process mapping, and remarking on the importance of adopting solutions that are flexible and adaptable owing to the constant changes in enterprise processes. Another important component of PLM systems is workflow management, which is an issue as illustrated by Rouibah et al. (2007). The enhancement of process design through the creation of building blocks, as well as the enhancement of organisational structure through the usage of roles as a resource for process activities, is a major achievement for PLM definitions. For these reasons there is a strong need
for a systematic, methodological and technology-supported approach to develop and sustain a successful PLM implementation in an enterprise, aligned to achieve complete enterprise integration. Action-research (A-R) is proposed in this paper as a methodology to support the implementation of PLM technologies in an enterprise.

This paper describes how enterprise integration engineering (EIE) – a framework and a methodology for enterprise integration – has been used to align the strategic objectives of an enterprise to improve its engineering processes using information technologies, in particular in the implementation of PLM tools. The underlying methodology used to support the PLM implementation process is A-R, in order to take a systematic approach of planning, implementing, observing and evaluating the process. By using A-R it is possible to improve key performance indicators (KPIs) in the enterprise and justify the implementation of PLM technologies. A case study in a real enterprise is presented to demonstrate the usage of this methodology. The paper is organised as follows: Section 2 describes how the EIE reference framework can be used to guide the PLM realisation project; Section 3 describes how A-R can be used to guide a PLM implementation in three cycles; finally, a case study is described in Section 4 to demonstrate the applicability of EIE and A-R in PLM implementation projects.

2. Enterprise integration engineering reference framework

The EIE reference framework components are depicted in Table 1. The EIE reference framework has its foundations on the CIMOSA, ARIS, PERA, ZACHMAN and GERAM reference models and frameworks. EIE uses reference models and frameworks to support strategy development by applying three key concepts: (1) lifecycle principles, (2) enterprise models, and (3) instantiation in different domains (Chen and Vernadat 2004) (see Figure 1).

Figure 1. EIE reference framework, foundations and applications.

Each of the different components provides guidelines, methodologies and tools to engineer business process changes (Molina et al. 2005). The components are: (1) strategy and performance evaluation systems, (2) reference models for enterprise modelling, (3) decision-making and simulation models and (4) knowledge/information technology.

Table 1. EIE reference framework components for PLM implementation.

(1) Strategy and performance evaluation systems.
    Activities: define strategies (competitive strategy, value chain strategy and production/service strategy); define KPIs (quality, volume, time, costs, flexibility and environment).
    Tools: SWOT, Porter's five forces, scenario planning, balanced scorecards.

(2) Reference models for enterprise modelling.
    Activities: define the enterprise model and core processes; describe the enterprise model AS-IS and TO-BE (functions, information, resources and organisation); determine the KPIs of core processes.
    Tools: enterprise modelling languages (IDEF0, UML), Business Process Model and Notation (BPMN), event-driven process chains (EPC).

(3) Decision-making and simulation models.
    Activities: define logic models of best business practices and IT and their impact on KPIs; design AS-IS and TO-BE simulation models to evaluate decision-making; evaluate KPIs based on the use of best business practices and IT implementation.
    Tools: program logical models, system dynamics models, discrete event simulation, business process analysis.

(4) Knowledge/information technology.
    Activities: define data, information and knowledge models; decide the type of IT application (functional, coordination, collaboration or knowledge management); design the IT architecture; determine the IT infrastructure.
    Tools: product lifecycle management (PLM), business process management systems (BPMS), business process intelligence (BPI), enterprise systems (ERP, CRM, SCM, etc.), enterprise content management (ECM).

Strategy and performance evaluation systems: they support the definition of three types of strategies in the enterprise (Molina 2003), namely: (1) competitive strategy, which should be translated into a set of decisions on how an organisation can deliver value to the customer; (2) value chain strategy, which is about making decisions on how an enterprise will establish an organisational model (external and internal) that will exploit the different possibilities to build an effective and efficient value chain; and (3) production/service strategy, which defines how the enterprise will produce or deliver its products and/or services. All these strategies are associated with performance measures to evaluate the impact of the strategy pursued in an organisation.

Table 2. Guidelines for the strategy definition activity: business strategies, key performance indicators and examples.

Business strategies and KPIs:
    Competitive strategy: product leadership, operational excellence, customer focus. KPIs: new sales, new products, operational costs, time to deliver, customer satisfaction, customer loyalty.
    Value chain strategy: vertical integration, strategic business units, horizontal integration, collaboration (vertical network, horizontal network). KPIs: quality, volume, time, cost, flexibility, environment.
    Production/service strategy: make-to-stock (MTS), make-to-order (MTO), assemble-to-order (ATO), configure-to-order (CTO), build-to-order (BTO), engineer-to-order (ETO). KPIs: quality, volume, time, cost, flexibility, environment.

Examples:
    Product leadership (KPIs: number of new products introduced to the market, percentage of sales related to new products). Value chain: horizontal integration to share resources among strategic business units, and horizontal collaboration to incorporate industry partners in product innovation (KPIs: flexibility, cost, time). Production/service: engineer-to-order (ETO) to innovate and create new products (KPIs: flexibility, cost, time).
    Operational excellence (KPIs: operational costs, time to market). Value chain: vertical collaboration to create a network of suppliers to reduce costs and assure time to market (KPIs: logistic costs, time to deliver). Production/service: configure-to-order (CTO) to reduce manufacturing cost and time to market (KPIs: manufacturing costs, time to deliver).
    Customer focus (KPIs: customer satisfaction). Value chain: strategic business units to better understand customer requirements (KPIs: customer requirements met, customer claims). Production/service: build-to-order (BTO) to create customised products (KPIs: customer rejections, mass-customised products delivered).

Reference models for enterprise modelling: they support the visualisation of enterprise knowledge, processes and associated performance measures in order to identify areas of opportunity for improvement. They comprise five groups of the main business processes to describe a generic structure of an ideal intra- and inter-integrated extended enterprise: (1) strategic planning; (2) product, process and manufacturing system development; (3) marketing, sales and service; (4) order fulfilment and supply chain management; and (5) support services. In Figure 2, the process groups and their interactions are depicted.

Decision-making and simulation models: they support the evaluation of different strategies and the implementation of the best manufacturing practices and information technologies using different simulation tools, such as dynamic systems and discrete event simulation. Best practices are defined in terms of logic program models to describe their impacts on business performance.

System dynamics simulation: the applied theory of system dynamics and the dynamic systems modelling method come primarily from the work of Forrester (1980). The models are built on feedback loops of key performance measures, cause-and-effect models, feedback influences and impacts or effects. Therefore, enterprise models of behaviour have been developed to demonstrate the effects and impacts of best practice implementation on performance measures (Molina and Medina 2003).

Discrete event simulation: simulation is the most common method used to evaluate (predict) performance. The reason for this is that a quite complex
(and realistic) simulation model can be constructed using actors, attributes, events and statistics accumulation. Business process simulation can be performed, for example, in order to evaluate resource usage and to predict performance measures, among others (e.g. delivery time, costs, capacity usage, etc.) (Molina et al. 2005).

Knowledge/information technology: PLM systems allow product data management and the use of corporate intellectual capital (knowledge). PLM, BPMS and business process intelligence (BPI) tools support the execution and analysis of processes from business and IT perspectives. BPMS allow process design, execution and tracking based on process engine technology. BPI analysis supports decision making for predicting and optimising processes. Enterprise systems include applications such as ERP, CRM and SCM. Enterprise content management (ECM) integrates the management of structured, semi-structured and unstructured information, such as software code embedded in content presentations, together with metadata, in solutions for content production, storage, publication and utilisation in organisations (Päivärinta and Munkvold 2005). Therefore, the utilisation of PLM, BPMS and ECM systems, together with BPI analysis capabilities, makes it possible to track the document lifecycle and capture experiences in the process design executed. These systems also allow companies to support business change using a technology-driven approach, and they provide project visibility, establishing who has to deliver each activity, what it consists of and when it is due, as well as enabling information and knowledge sharing along the whole product lifecycle. The final goal is to integrate all the applications in order to achieve enterprise integration.

Figure 2. Enterprise model and integrated business processes.

The EIE reference framework can be applied to different fields such as business process management (Li et al. 2005), integrated product development (Chin et al. 2005), process redesign/reengineering, knowledge management
and project management. The application presented in this paper offers to the scientific and industrial communities a different consideration of the design process, as the integration of key business processes that can therefore be treated with EIE formalisms. This consideration improves implementations since, nowadays, companies have a certain level of maturity around enterprise systems such as ERP, SCM or CRM, but PLM is a novel strategy that should be considered in the same way. A novel methodology, validated through a case study, is then proposed, based on A-R, in order to follow a methodical approach to implement PLM tools, enabling KPI definition and process modelling to identify the key activities, people, information and resources needed for a successful implementation.

3. Methodology for implementation of PLM based on EIE and A-R

The methodology proposed in the EIE reference framework is based on action-research (A-R) (Baskerville and Wood-Harper 1996). A-R is defined as a spiral process that allows action (change and improvement) and research (understanding and knowledge) to be achieved at the same time (Baskerville and Pries-Heje 1999). A-R, which emphasises collaboration between researchers and practitioners, has much potential for the information systems field, because it represents a potentially useful qualitative research method, and it supports practical problem solving as well as theoretical knowledge generation (Avison et al. 2001, Chiasson et al. 2008). In this methodology, an A-R cycle is constituted by four phases:

(1) Plan
(2) Act
(3) Observe
(4) Reflect

Figure 3. Methodology for a PLM implementation based on EIE and A-R.

For the PLM implementation in an integrated way, three A-R cycles are proposed, which increase the knowledge of the business model and suggest improvements in the AS-IS process (see Figure 3). By the accomplishment of the third A-R cycle, it can be said that the PLM system is
implemented. However, as EI is the integration of several enterprise systems, the A-R cycles may continue, but oriented towards achieving a complete EI implementation by considering other enterprise systems (e.g. ERP, SCM, CRM) if they are not implemented yet. An improvement of the PLM system may be carried out, if needed. The different cycles of this methodology are described in the following sections.

This methodology provides a progressive way to evaluate the existing processes, define KPIs, and design, develop and implement an improved PLM process. It provides practical benefits, as it is suited to projects of high industrial potential (consulting oriented) that implement novel and complex technologies. For this kind of approach, A-R has shown itself to be a valuable method to implement PLM systems, with evolutionary knowledge and experiences gained through reflective cycles. It provides consistency across projects, enabling better planning based on conclusions issued from the reflection phases, without sacrificing the flexibility to match project complexity.

3.1 First A-R cycle – enterprise strategy and AS-IS modelling

In the first cycle the enterprise strategy has to be understood and clarified. The objectives of this first cycle are: (1) to describe the enterprise strategy and its KPIs, (2) to model the process AS-IS and (3) to suggest new improvements to the AS-IS model. These objectives are achieved through interviews with strategists and process owners, who know the current strategy of the enterprise and understand its product development process. The different stages of this first cycle are described next.

3.1.1 Plan

Define the work team, responsibilities, activities and resources. A project plan is made according to the scope, resources and work team defined. The integration of a multidisciplinary team is suggested, which could include strategic planners, process owners and information technology analysts, to incorporate a diversity of perspectives during the
AS-IS and TO-BE model definitions.

Analyse the vision, mission and strategic objectives of the enterprise. This activity is a fundamental step to align the product development process improvements with the enterprise strategy. External consultants may improve the process definition, because they act as impartial actors and can perform an analysis without the influence of personal interests. The strategic objectives of the enterprise can be presented as KPIs related to quality, volume, time, costs, flexibility and environment. These indicators will lead to the following benefits:

Economical benefits: profit, sales, ROI.
Productivity benefits: value added per employee, value added per invested capital.
Strategic benefits: according to the competitive strategies selected by the enterprise, these can relate to operational excellence (e.g. cycle time, process cost, yield), customer focus (e.g. customer loyalty, customer satisfaction) and product/process innovation (e.g. sales of new products, time for developing new products, time for recovering investment).

Define the project scope, impacts and benefits for the enterprise. The PLM implementation impacts and benefits must be defined, and they must have a clear influence on KPIs (e.g. cost reduction, time to market, or improved capacity to develop products). The EIE concept can guide the efforts of implementing the PLM system in pursuit of enterprise integration.

Analyse the business strategic elements and key performance indicators (KPIs). To set the context for the PLM system implementation, there is a need to clarify the enterprise strategy (competitive strategy, value chain strategy and production/service strategy). The competitive strategy aims to achieve competitive advantage by following at least one of three possible strategies: (a) operational excellence, (b) product leadership or (c) customer focus. Such generic strategies are related to Porter's (1990, 1996) proposal: cost leadership (operational excellence strategy), differentiation (product leadership strategy) and focus (customer focus strategy). Once the enterprise competitive strategy is understood, it is possible to translate it into a set of decisions about how the organisation can deliver its value proposition to the customer. The value chain strategy is about making decisions on how an enterprise will establish an organisational model that will best exploit its potentials and opportunities to build an effective and efficient value chain. Different directions can be considered and adopted as a value chain strategy: (a) vertical integration, (b) strategic business units, (c) horizontal integration and (d) collaboration (vertical or horizontal network). Finally, a production/service strategy is based on the following elements:

Product description: defines the criteria required for an enterprise to qualify for, or to win, an order in a specific market.
Customers and suppliers characterisation: defines customers' expectations and the requirements imposed on suppliers.
Process definition: specifies the performance measures required in the execution of the activities in the process.

All these factors are defined by order-qualification and order-winning criteria (Hill 1989). The criteria are: price, volume, quality, lead-time, delivery speed and reliability, flexibility, product innovation and design, and lifecycle status. Based on all these performance measures, the following production/service strategies can be defined (Rehg and Kraebber 2005): make-to-stock (MTS), make-to-order (MTO), assemble-to-order (ATO) and engineer-to-order (ETO). New production/service strategies have been defined by Molina et al. (2007), which include configure-to-order (CTO) and build-to-order (BTO). The product/service strategy defines the criteria that must be satisfied by the enterprise in order to be able to compete in the selected markets and industries. After the enterprise strategy has been clarified, KPIs are selected to monitor the impact on it.

Identify the key business process with the highest impact and drivers of change. PLM could
support different business processes. Some of particular interest for the authors are: co-design, co-engineering and product development. Some KPIs in PLM implementations may be: time to market, cost reductions, increased collaboration between stakeholders, improved organisational efficiency and reduction of project execution time. Other indicators defined by IT analysts are, among others, how long it takes for a process to be executed and what resources were used to execute that process (Peñaranda et al. 2006). It is important to define which process is going to be analysed, including specific stages of the whole business process in the enterprise. The stages selected must have high benefits and impacts on the selected KPIs.

3.1.2 Act

Model the process AS-IS. The AS-IS model represents how the product lifecycle process (e.g. product, process or manufacturing system) is currently executed. In order to perform an efficient AS-IS analysis, the use of graphical representations is suggested, which helps to identify duplicated information, parallel activities, and information and material flows. There are some standard notations and languages recommended to model business processes. The first one recommended by the authors, and possibly the most used business modelling language, is ARIS (Scheer 2000). ARIS is a union of methodologies (Kalnins 2004), where modelling with its eEPC (extended event-driven process chain) diagram and other related diagrams is only a part, as it takes into account different views of the business process. There are some other tools whose suitability will depend on the level of confidence and expressibility needed, such as IDEF0 (integrated definition methods), UML (unified modelling language) and BPMN (Business Process Model and Notation). Some authors give a set of parameters to select the most suitable language, for example those from Bertoni and Cugini (2008): formality extent of the modelling language, easiness of understanding, level of detail, goals description and process simulation.

Whichever of the mentioned languages meets the authors' requirements, four domains must be represented to build the AS-IS model for the identification of the current enterprise state: process, information/knowledge, organisation and resources (Mejía et al. 2004, Molina and Medina 2003).

Process domain: it describes the activities of an integrated product development, identifying the information flow through the product lifecycle, and the resources, controls, inputs and outputs incorporated in each activity. The objective is to identify the core processes and activities of an enterprise.

Information/knowledge domain: this domain allows the detailed description of the data, information and knowledge required in an integrated product development. Their structure must be considered in order to define the starting specifications for a product data model (PDM). This facilitates the understanding of how product and manufacturing information is structured.

Organisation domain: the identification of human resources and the way they are organised are defined within the organisational domain. It must establish the relations among functional areas and departments, as well as the partners involved in a simultaneous engineering environment (e.g. concurrent engineering). The organisation structure is important in order to identify the key players in engineering activities, not only for execution, but also for reviewing, supervising and monitoring.

Resource domain: it identifies the different technologies and applications used for the operation and management of the organisation's processes. Table 3 describes some technologies that can be classified into functional, coordination, collaboration and information management (Mejía et al. 2007).

3.1.3 Observe

Evaluate the AS-IS model. Build and use discrete event or dynamic system simulations to identify improvement areas in the AS-IS model. Using these simulations, it is possible to identify which specific activities in the AS-IS process could be reformulated, and also which tools could improve this process. The indicators defined are measured to obtain the initial state of the model (AS-IS) before the TO-BE model implementation.

3.1.4 Reflect

Analyse and propose improvements to the AS-IS model. Decide what recommendations and improvements can be made to the AS-IS model and propose KPIs for the new model (TO-BE). Evaluate the implications of changing the process in the process, information, resources and organisation domains.

3.2 Second A-R cycle – TO-BE model definition

In the second A-R cycle, the TO-BE model definition is proposed and analysed. The core process identified is improved within the enterprise strategies. These improvements are achieved using tools such as dynamic system simulations and logical models. The different stages of this second A-R cycle are described in the following subsections.

International Journal of Computer Integrated Manufacturing, Vol. 23, No. 10, October 2010, 942–956

Metaheuristics to minimise makespan on parallel batch processing machines with dynamic job arrivals

Huaping Chen (a), Bing Du (a)* and George Q. Huang (b)

(a) School of Management, University of Science and Technology of China, Hefei, China; (b) Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong, China

(Received 13 November 2009; final version received 18 May 2010)

Batch processing machines that can process a group of jobs simultaneously are often encountered in semiconductor manufacturing and metal heat treatment. This research investigates the scheduling problem on parallel batch processing machines in the presence of dynamic job arrivals and non-identical job sizes. The processing time and ready time of a batch are equal to the largest processing time and release time among all jobs in the batch, respectively. This problem is NP-hard in the strong sense, and hence two lower bounds were proposed to evaluate the performance of approximation algorithms. An ERT-LPT heuristic rule was next presented
to assign batches to parallel machines. Two metaheuristics, a genetic algorithm (GA) and an ant colony optimisation (ACO), are further proposed using ERT-LPT to minimise makespan. The performances of the two approaches, along with a BFLPT-ERTLPT (BE) heuristic, were compared by computational experiments. The results show that both metaheuristics outperform BE. GA is able to obtain better solutions when dealing with small-job instances compared to ACO, whereas ACO dominates GA in large-job instances.

Keywords: batch scheduling; makespan; dynamic job arrivals; genetic algorithm; ant colony optimisation

1. Introduction
This paper studies a dynamic scheduling problem on parallel batch processing machines (BPMs). There are n jobs to be processed on m identical parallel machines. Each job j is associated with a processing time pj, a size sj and a release time rj. A BPM is capable of processing several jobs simultaneously in a batch as long as the total size of all the jobs in the batch is less than or equal to the machine capacity C. No job has a size exceeding the machine capacity, and no job can be split across batches. The processing time PTb and ready time RTb of a batch b are determined by the longest processing time and the latest release time of the jobs in the batch, respectively. Once the processing of a batch has started, it cannot be interrupted, and no jobs can be added to or removed from the batch while it is being processed. The objective is to minimise the makespan, i.e. the completion time of the last job leaving the system.

Scheduling BPMs is a substantive and significant issue in the manufacturing industry. One of the earliest works appears to be that of Ikura and Gimple (1986). The research was mainly motivated by burn-in operations in semiconductor manufacturing (Lee et al 1992). The purpose of burn-in operations is to subject batches of integrated circuits to thermal stress

*Corresponding author. Email: toto@mail.ustc.edu.cn
ISSN 0951-192X print/ISSN 1362-3052
online © 2010 Taylor & Francis
DOI: 10.1080/0951192X.2010.495137
http://www.informaworld.com

to bring out latent defects. This process is done by maintaining the circuits in an oven for an extended period of time. As the processing time of a burn-in operation is generally longer than those of other testing operations, the burn-in operation forms a bottleneck in assembly lines. These burn-in ovens should therefore be effectively scheduled. By minimising the makespan, the utilisation of the ovens can be increased (Pinedo 2002). Consequently, reducing the makespan should also lead to a higher throughput of the manufacturing system. This concern led the authors to consider the makespan performance measure as the scheduling objective.

The problem of scheduling BPMs has been studied extensively by other investigators. However, these studies either assume that all jobs arrive at time zero, or deal with cases where jobs have identical sizes. Li et al (2005) and Chou et al (2006) are the only researchers who have examined the dynamic scheduling problem with non-identical job sizes. The purpose of this paper is to extend their model from the single-BPM environment to the parallel-BPM environment. Two metaheuristics, i.e. a genetic algorithm (GA) and an ant colony optimisation algorithm (ACO), were also proposed to minimise makespan.

The structure of the paper is as follows. In the next section, previous studies on the BPM scheduling problem are briefly reviewed. Section 3 gives two lower bounds for the problem under consideration. GA and ACO algorithms are described in Section 4. Their performance is evaluated through extensive computational experiments in Section 5. A summary and discussion of future research directions concludes the paper.

2. Literature review
The problems related to scheduling on BPMs have been examined extensively by many researchers since they were first proposed by Ikura and Gimple (1986). They
provided an efficient algorithm to determine whether a schedule in which all jobs are completed by their due dates exists for the case where release times and due dates are agreeable and all jobs have identical processing times. Lee et al (1992) presented efficient dynamic programming-based algorithms for minimising a number of different performance measures on a BPM where the processing time of the batch depends on the jobs in the batch. Li and Lee (1997) proved that minimising the maximum tardiness and minimising the number of tardy jobs are strongly NP-hard even when the release times and due dates of the jobs are agreeable. Chandru et al (1993) examined the objective of minimising total completion time. They provided optimal and heuristic algorithms for the case of a single BPM, and heuristics for the parallel-BPM problem.

Most of the early works on BPMs focused on the model with identical job sizes. Uzsoy (1994) later proposed the problems of minimising makespan and total completion time on a single BPM with non-identical job sizes. Both problems were proved NP-hard and several heuristics were presented. Uzsoy and Yang (1997) then proposed a branch-and-bound approach to minimise the total weighted completion time on a BPM of this type. Dupont and Ghazvini (1998) provided two heuristics, SKP and BFLPT, which achieved better makespan than the FFLPT heuristic given by Uzsoy. Melouk et al (2004) were probably the first researchers to use metaheuristics for BPM problems with non-identical job sizes. For makespan minimisation they employed a simulated annealing (SA) approach, which outperformed the commercial solver CPLEX. Damodaran et al (2006) examined the same problem using a genetic algorithm (GA), and the results indicated that the GA approach was able to obtain better makespan than SA. Kashan et al (2006) proposed a batch-based hybrid GA (BHGA) that generated random batches and ensured feasibility by using knowledge of the problem. A pairwise swapping heuristic (PSH) was hybridised
with BHGA to improve its effectiveness.

Some other researchers concentrated on theoretical analysis. Zhang et al (2001) proved that the worst-case ratio of the FFLPT algorithm is no greater than 2. Moreover, they proposed an approximation algorithm with worst-case ratio 7/4. Li et al (2005) studied the scheduling problem with arbitrary job release times and presented an approximation algorithm with worst-case ratio 2 + ε, where ε can be made arbitrarily small.

Recent studies have been trying to extend BPM problems to parallel machines and flow shop environments. Chang et al (2004) studied identical parallel BPMs using a simulated annealing approach. Malve and Uzsoy (2007) examined the problem of minimising maximum lateness on parallel BPMs with dynamic job arrivals. They proposed a family of iterative improvement heuristics and combined them with a genetic algorithm based on the random keys encoding. The computational results showed that one of the proposed GAs could achieve a good trade-off between solution time and quality. Damodaran and Chang (2008) proposed several heuristics to minimise makespan on parallel BPMs, and the obtained results were compared to simulated annealing and CPLEX. Kashan et al (2008) and Damodaran et al (2009) each gave a different genetic algorithm for minimising makespan. As for the flow shop environment, Sung and Yoon (1997) considered a model in which a finite number of jobs arrive dynamically. Cheng and Wang (1998) examined another problem in the two-machine flow shop where one of the machines is a discrete processor and the other is a batch processor. In a recent paper, Tang and Liu (2009) dealt with two-machine flow shop scheduling with batching and release times. They derived a lower bound and developed dynamic programming-based heuristic algorithms to solve the scheduling problem. For a comprehensive review on scheduling with batching, a summary of results in this area was given by Potts and Kovalyov (2000). In addition, Mathirajan and Sivakumar
(2006) presented a detailed literature review and two classification schemes to systematically organise the published articles related to BPM scheduling problems.

3. Lower bounds
In this section, two lower bounds for Pm|batch, rj, sj ≤ C|Cmax are presented. It is easy to see that the problem is strongly NP-hard by setting the number of machines m = 1 and the release time of all jobs rj = 0, ∀j ∈ J. This special case is then equivalent to the problem 1|batch, sj ≤ C|Cmax proposed by Uzsoy (1994). Since the special case is strongly NP-hard, the result follows. This conclusion indicates that any exact algorithm for the problem would be extremely time-consuming and impractical for large-scale problems. For this reason approximation algorithms are often a good compromise, and consequently some lower bound is needed to evaluate the performance of approximation algorithms.

The first lower bound is derived from FBLPT (Full Batch Longest Processing Time) (Lee and Uzsoy 1999), which can optimally solve the problem 1|batch, C|Cmax. By relaxing the problem to allow jobs to be split and processed in different batches, a lower bound can be given as follows:

Algorithm LB1
Step 1. ∀j ∈ J, replace job j with sj jobs of unit size and processing time pj, ignoring their release times. A new set of jobs J′ is thus obtained.
Step 2. Arrange the obtained jobs in decreasing order of their processing times.
Step 3. Form batches by placing jobs iC + 1 through (i + 1)C together in a batch, for i = 0, …, ⌊|J′|/C⌋, where ⌊x⌋ denotes the largest integer not greater than x.
Step 4. Let B′ denote the set of batches obtained from Step 3.
Step 5. Then LB1 = max{ min_{j∈J} rj + ⌈ Σ_{b∈B′} PT^b / |M| ⌉, max_{j∈J} {rj + pj} } is a lower bound.

Proposition 1. LB1 is a valid lower bound.

Proof. Let C*max denote the optimal makespan of the problem. On the one hand, the processing time and release time of a batch are determined by the largest processing time and release time of the jobs in the batch, respectively; therefore C*max ≥ rj + pj, ∀j ∈ J, and C*max ≥ max_{j∈J} {rj + pj}. On the other hand, the FBLPT rule produces an optimal makespan for the problem 1|batch, C|Cmax, in which the makespan is equivalent to the total processing time of the batches. Therefore Σ_{b∈B′} PT^b ≤ Σ_{h∈B} PT^h, where B is given by any feasible batching plan. Hence, the total processing time of the optimal solution for Pm|batch, rj, sj ≤ C|Cmax is no less than ⌈ Σ_{b∈B′} PT^b / |M| ⌉. Moreover, since processing will not start before the earliest job arrives, the formula C*max ≥ min_{j∈J} rj + ⌈ Σ_{b∈B′} PT^b / |M| ⌉ clearly holds. Therefore, LB1 is a valid lower bound. □

The lower bound based on FBLPT relaxes the problem by ignoring the release times of most jobs. Following the definition of a job unit, another lower bound will be introduced which takes the release times of all jobs into account.

Definition 1. A job with unit size and unit processing time is called a job unit. Job j, with processing time pj, size sj and release time rj, is composed of pj·sj job units with release times from rj to rj + pj − 1.

Definition 2. The load of machine m at time t is the number of job units processed on machine m at time t, denoted by Lm(t).

The following algorithm provides another lower bound for Pm|batch, rj, sj ≤ C|Cmax by splitting jobs into job units.

Algorithm LB2
Step 1. Split each job j ∈ J into pj·sj job units; the release time of the ith job unit is r_{ji} = rj + ⌊(i − 1)/sj⌋, i = 1, 2, …, pj·sj.
Step 2. Let JU denote the set of job units obtained from Step 1. Order the job units in JU in increasing order of their release times.
Step 3. Initialise the system time t = 0. Let U(t) denote the set of job units that have not been released at time t, and set U(0) = JU. Let W(t) denote the set of job units available at time t but still waiting to be processed, and set W(0) = ∅. Let P(t) denote the set of job units being processed at time t, and set P(0) = ∅. Let C(t) denote the set of job units that have finished processing at time t, and set C(0) = ∅.
Step 4. Move all the job units i satisfying r_i
= t in U(t) into W(t). Move all job units in P(t) into C(t). If C(t) ≠ JU, go to Step 5; otherwise go to Step 7.
Step 5. Move C|M| job units from W(t) to P(t); if the number of job units in W(t) is less than C|M|, then move them all to P(t).
Step 6. Set t = t + 1 and go to Step 4.
Step 7. Export LB2 = t as a lower bound.

To prove the validity of LB2, the following lemma is needed.

Lemma 1. Let A and B denote two instances of the problem Pm|batch, rj, sj ≤ C|Cmax, and let sol(A) and sol(B) be solutions of A and B respectively. F_sol(A)(t) = Σ_{i=0}^{t} Σ_{m∈M} L_m^{sol(A)}(i) is the number of job units processed by time t for sol(A), and F_sol(B)(t) = Σ_{i=0}^{t} Σ_{m∈M} L_m^{sol(B)}(i) for sol(B). Suppose the following conditions are satisfied:
(1) A and B have the same number of job units, namely Σ_{j∈J_A} (pj sj) = Σ_{j∈J_B} (pj sj);
(2) F_sol(A)(t) ≥ F_sol(B)(t), ∀t ∈ Z+.
Then C_max^{sol(A)} ≤ C_max^{sol(B)}, where C_max^{sol(A)} and C_max^{sol(B)} are the makespans obtained from sol(A) and sol(B) respectively.

Proof. Since the load of a machine at any time is non-negative, F_sol(A)(t) and F_sol(B)(t) are increasing functions on Z+. Assume that C_max^{sol(A)} > C_max^{sol(B)}; then:

F_sol(A)(C_max^{sol(B)}) < F_sol(A)(C_max^{sol(A)}) = Σ_{j∈J_A} (pj sj)    (1)
F_sol(B)(C_max^{sol(B)}) = Σ_{j∈J_B} (pj sj)    (2)

The equation Σ_{j∈J_A} (pj sj) = Σ_{j∈J_B} (pj sj) holds according to condition (1); thus F_sol(A)(C_max^{sol(B)}) < F_sol(B)(C_max^{sol(B)}), which contradicts condition (2). Therefore C_max^{sol(A)} ≤ C_max^{sol(B)}. □

The following proposition can be derived from Lemma 1.

Proposition 2. LB2 is a valid lower bound.

Proof. Let C*max denote an optimal makespan for the problem Pm|batch, rj, sj ≤ C|Cmax. Let F_LB2(t) be the number of job units processed by time t for Algorithm LB2, and F*(t) be the number of job units processed by time t for the optimal solution. It can be proved by mathematical induction that F_LB2(t) ≥ F*(t), ∀t ∈ Z+.

(1) t = 0. Let R(t) be the set of jobs arriving at time t; then F*(0) ≤ min{ Σ_{i∈R(0)} si, C|M| }. According to Step 1 of Algorithm LB2, the jobs in R(0) are split into Σ_{i∈R(0)} (pi si) job units, Σ_{i∈R(0)} si of which arrive at time zero; hence F_LB2(0) = min{ Σ_{i∈R(0)} si, C|M| } and F_LB2(0) ≥ F*(0).

(2) Suppose F_LB2(T) ≥ F*(T) holds. When t = T + 1: it should be noted that at any time t, any job unit j is always in one of the following states: (A) j has not yet been released; (B) j has been released, but is still waiting to be processed; (C) j is being processed at the moment; or (D) the processing of j finished before time t. The four states correspond to the four sets of job units in Step 3 of Algorithm LB2. At time t = T + 1, the following equations hold:

|U*(T+1)| + |W*(T+1)| + |P*(T+1)| + |C*(T+1)| = Σ_{j∈J} pj sj    (3)
|U_LB2(T+1)| + |W_LB2(T+1)| + |P_LB2(T+1)| + |C_LB2(T+1)| = Σ_{j∈J} pj sj    (4)

By definition, the function F(t) is the number of job units processed by time t, so the following equation holds for both LB2 and the optimal solution:

F(T+1) = |C(T+1)| + |P(T+1)| = F(T) + |P(T+1)|    (5)

If at time t = T + 1 all the machines are fully loaded for LB2, namely |P_LB2(T+1)| = C|M|, then |P_LB2(T+1)| ≥ |P*(T+1)|, as C is the maximal load of a machine. According to the assumption F_LB2(T) ≥ F*(T) and Equation (5), F_LB2(T+1) ≥ F*(T+1).

If at time t = T + 1 not all the machines are fully loaded for LB2, namely |P_LB2(T+1)| < C|M|, then it can be concluded that W_LB2(T+1) = ∅ (otherwise job units in W_LB2(T+1) would have been moved into P_LB2(T+1) in Step 5). Moreover, the release time of any job unit remains unchanged in Step 1 of Algorithm LB2, so |U*(T+1)| = |U_LB2(T+1)|. Therefore:

|U*(T+1)| + |W*(T+1)| ≥ |U_LB2(T+1)| + |W_LB2(T+1)|    (6)

Substituting Equations (3) and (4) into the above formula gives |P*(T+1)| + |C*(T+1)| ≤ |P_LB2(T+1)| + |C_LB2(T+1)|; that is, F_LB2(T+1) ≥
F*(T+1).

It can now be concluded that F_LB2(t) ≥ F*(t), ∀t ∈ Z+, i.e. condition (2) in Lemma 1 is satisfied. In addition, splitting jobs into job units does not change the total number of job units in the given set of jobs J. This indicates that condition (1) in Lemma 1 is also satisfied; therefore LB2 ≤ C*max. □

4. Metaheuristic approaches
As mentioned in the previous sections, the problem under study is NP-hard in the strong sense. This implies that enumeration schemes such as branch-and-bound algorithms would be extremely time-consuming when the number of jobs is large. Therefore the use of metaheuristics is a reasonable choice; the objective is to obtain a satisfactory solution within an acceptable time. Two interdependent decisions are made for makespan minimisation: (1) forming batches and (2) scheduling the batches on parallel machines. In this section, two metaheuristics, a genetic algorithm (GA) and an ant colony optimisation (ACO), are presented to form batches, and an ERT-LPT (Earliest Ready Time & Longest Processing Time) heuristic is proposed to arrange the batches on parallel machines.

4.1 ERT-LPT heuristic
Since the makespan cannot be calculated until the batches are assigned to parallel machines, the following ERT-LPT heuristic is proposed to schedule batches on the parallel machines.

Algorithm ERT-LPT
Step 1. Arrange the batches in increasing order of their ready times.
Step 2. Find the machine m in the set of machines with the earliest available time ta, which is the completion time of the last batch currently assigned to the machine.
Step 3. Move all the batches satisfying RT^b ≤ ta from the set of batches B to the set of available batches AB.
Step 4. Select the batch b in AB with the longest processing time, remove b from AB and assign it to machine m. If AB is empty, then assign the first batch in B to machine m.
Step 5. Repeat Step 2 to Step 4 until the sets B and AB are both empty.

An example of batches and machines is chosen to illustrate the ERT-LPT heuristic. The ready times and processing times of the batches, and their arrangement on the machines, are shown in Figure 1.

Figure 1. Illustration of the ERT-LPT heuristic.

4.2 Genetic algorithm
The genetic algorithm is a population-based metaheuristic proposed by Holland in the 1970s. A GA searches the solution space using simulated evolution, i.e. the survival-of-the-fittest strategy. As GAs are able to locate good solutions even for difficult search spaces in a relatively short amount of time, they have been widely used for many combinatorial problems, including scheduling problems (Wang and Uzsoy 2002, Koksalan and Keha 2003, Sevaux and Dauzere-Peres 2003, Zhang et al 2003, Chou et al 2006, Damodaran et al 2006, Kashan et al 2006, Luo et al 2009). The GA presented in this paper is based on the previous study by Damodaran et al (2006), which aims at minimising makespan on a single BPM. The ERT-LPT heuristic was used to calculate the fitness of each chromosome in this study, and improved selection and crossover operators were also defined. The following steps describe in detail how this GA was implemented in this research.

4.2.1 Initialisation
A chromosome in the GA approach corresponds to a sequence of jobs. The initial population consists of a group of chromosomes generated in different ways. The first one is formed by arranging the jobs in decreasing order of processing times, and the second by ordering the jobs in increasing order of release times. The other chromosomes are obtained by arbitrarily arranging the job sequences. The BF (Best Fit) heuristic (Bramel and Simchi-Levi 1997) is then employed to form batches, after which the previously described ERT-LPT heuristic is used to give a final solution.

4.2.2 Selection
The selection operator involves randomly choosing members of the population to enter a mating pool. Here a roulette wheel mechanism is used to probabilistically select individuals based on their fitness values. Because the makespan is to be minimised, it is necessary to transform the fitness function for
maximisation. The fitness function of chromosome c is defined as follows:

fitness(c) = 1 / (C_max^c − max{LB1, LB2} + 1)^q    (7)

In Equation (7), q is a parameter used to determine the fitness value. A larger value of q increases the gap in fitness value between different chromosomes, and consequently speeds up the convergence of the GA; however, it may also lead to premature convergence of the algorithm, while too small a value of q may take a longer time to converge and produce bad solutions as well. The probability of selecting a chromosome c is calculated by fitness(c) / Σ_{c′∈P} fitness(c′), where P is the current population.

4.2.3 Crossover
The crossover operator creates a new individual's representation from parts of its parents' representations. A two-point crossover is used, as it gave better results than single-point crossover when the number of jobs is large. However, a direct exchange of genes between chromosomes may produce invalid job sequences. For example, consider two job sequences of five jobs, (4, 2, 5, 3, 1) and (2, 4, 1, 5, 3), and assume the two crossover points to be 2 and 4; a direct exchange of genes then produces two new chromosomes, (4, 4, 1, 5, 1) and (2, 2, 5, 3, 3), both of which are invalid job sequences. Consequently, it is necessary to redefine the crossover operator. For each crossover operation, two crossover points are randomly selected and the two new chromosomes are generated in the following manner. For one chromosome, the genes between the crossover points are rearranged in the order of the corresponding jobs in the other chromosome, and the remainder stays unchanged. For the other chromosome, both sides of the chromosome are reordered while the central part remains unchanged. This crossover operator ensures that the new individuals produced by crossover operations are always valid. An example of ten jobs shows how the crossover operator works (see Figure 3).

4.2.4 Mutation
The
mutation operator is used to maintain genetic diversity from one generation of the population to the next. Two jobs in a chromosome are randomly selected and their positions in the chromosome are interchanged with a predefined mutation probability pm.

The stopping conditions of the proposed GA are as follows: (1) the makespan of some chromosome is equal to the lower bound LB = max{LB1, LB2}, i.e. an optimal solution is found; or (2) a preset execution time of the GA is reached.

Figure 2. Flowchart of the proposed GA.
Figure 3. Crossover mechanism.

4.3 Ant colony optimisation
The ant colony optimisation algorithm (ACO) is a probabilistic technique for solving computational problems. It has drawn extensive attention since it was proposed and has been successfully applied to many applications in practice, including some scheduling problems (Merkle et al 2002, T'Kindt et al 2002, Rajendran and Ziegler 2004, Mak et al 2007). ACO algorithms are stochastic search procedures. They are based on a parameterised model called the pheromone model, which is used to sample the search space probabilistically. In the model, artificial ants incrementally construct solutions by adding opportunely defined solution components to a partial solution until a complete solution is built. The construction of solutions is guided by pheromone trails and problem-specific heuristic information. In the context of combinatorial optimisation problems, pheromones indicate the intensity of ant trails with respect to solution components, and such trails are determined on the basis of the contribution to the objective function. Before the next iteration starts, some of the solutions are used for performing a pheromone update. The details of the proposed ACO in this study are given as follows.

4.3.1 Initialisation
Each solution of the ACO is coded as a binary symmetric matrix, in which the element a(i, j) is equal to 1 if jobs i and j are in the same batch and 0 otherwise. The ERT-LPT dispatching rule is then used to obtain a final solution, as in the GA. Another important issue before applying ACO is to define the pheromone trails. A possible plan of
pheromone trail is for τ_{i,j}(t) to represent the desirability of job i following job j in the sequence at time t. However, this would require an extra heuristic such as BF to form batches, and might lead to results similar to the GA; therefore in this study τ_{i,j}(t) denotes the trail intensity associated with jobs i and j being in the same batch. The ACO used here is basically the max-min ant system (MMAS) version (Stutzle and Hoos 2000), in which the range of possible pheromone trails on each solution component is limited to an interval [τmin, τmax] to avoid premature convergence. The pheromone trails are initialised to τmax, achieving in this way a higher exploration of the solution space at the beginning. Additionally, other parameters are also initialised, including the number of ants na, the pheromone evaporation rate ρ, the relative importance of the pheromone trail α, and the relative importance of the heuristic information β.

Figure 4. Flowchart of the proposed ACO.

4.3.2 Construction of a solution
The basic ingredient of any ACO algorithm is a constructive process for probabilistically building solutions. Starting from an empty set of batches, each ant makes use of trail intensities and heuristic information to determine the job to be added to the current batch at each step. If the current batch does not have enough residual capacity to accommodate any unassigned job, then a new batch is created and an unassigned job with the longest processing time is added to it, so as to determine the processing time of the batch. Each ant repeats the above process until all the jobs are assigned to some batch.

Heuristic information is optional in ACO, but is often needed for achieving good performance. For makespan minimisation, it is evident that the residual capacity of each batch should be minimised, which signifies reducing waste in
the job size dimension. On the other hand, due to different processing times, some jobs may already have finished processing but cannot be removed from the machine until the processing of the batch is completed. This can be regarded as waste in the job processing time dimension. Moreover, if the release time of a job in a batch is too late, it may also delay the processing of the whole batch. However, this cannot be determined until all the batches are scheduled, so it is very difficult to take release times into account when forming batches. The following equation is used to calculate the heuristic value η_{b,j} between the current batch b and a job j:

η_{b,j} = 1 − [ (PT^b − p_j)s_j + λ(C − S_b − s_j)PT^b ] / [ PT^b(C − S_b) ]    (8)

Equation (8) can be illustrated by Figure 5. When a job j is added to batch b, waste in the processing time dimension corresponds to Area I, i.e. (PT^b − p_j)s_j, and waste in the size dimension corresponds to Area II, i.e. (C − S_b − s_j)PT^b. However, since there may be other jobs to be added to the current batch, some parts of Area II will probably be used in the future. Hence a coefficient λ is introduced to estimate the ratio of the wasted area to Area II:

λ = |{i | i ∈ Ju ∧ s_i > C − S_b − s_j}| / |{i | i ∈ Ju}|    (9)

where Ju denotes the set of unassigned jobs. On the one hand, if all the unassigned jobs have sizes larger than the residual capacity of the batch after job j has been added, then λ = 1, as Area II will be entirely wasted. On the other hand, if no unassigned job has a size larger than the residual capacity, then λ = 0.

Figure 5. Illustration of heuristic information.

The transition probabilities, i.e. the probabilities for choosing the next solution component, are then defined. Let Fb = {j | j ∈ Ju ∧ s_j ≤ C − S_b} denote the set of jobs that can possibly be added to the current batch b; the probability of adding job j to batch b is:

p_{b,j} = (τ̄_{b,j})^α (η_{b,j})^β / Σ_{i∈Fb} (τ̄_{b,i})^α (η_{b,i})^β    (10)

where τ̄_{b,j} = Σ_{i∈b} τ_{i,j} / |b| is the average value of the pheromone trails between job j and the jobs in batch b.

4.3.3 Pheromone trail update
After the ants have completed the assignment of jobs, the pheromone trails are updated to increase the pheromone values on solution components that have been found in high-quality solutions. This is done first by lowering the pheromone trails at an evaporation rate ρ and then by allowing the ants to deposit pheromone on the paths searched. All the ants are used to update the pheromone trails after each iteration; however, the amount of pheromone deposited by an ant differs in terms of solution quality. The pheromone trail update rule is given below:

τ_{i,j}(t + 1) = (1 − ρ)τ_{i,j}(t) + ρ Σ_{a=1}^{na} Δτ_{i,j}^a    (11)

where na is the number of ants, and the amount of pheromone deposited by ant a follows this equation:

Δτ_{i,j}^a = max{LB1, LB2} / (C_max^a − max{LB1, LB2} + 1) if i and j are in the same batch in the solution found by ant a at the current iteration, and Δτ_{i,j}^a = 0 otherwise    (12)

where C_max^a denotes the makespan of the solution found by ant a.

It is possible that the algorithm might sometimes get trapped in local minima. This may happen if, at each choice point, the pheromone trail is significantly higher for one choice than for all the others. In order to avoid this situation and maintain the diversity of solutions, explicit limits τmin and τmax are imposed on the minimum and maximum pheromone trails, such that τmin ≤ τ_{i,j}(t) ≤ τmax for all pheromone trails τ_{i,j}(t). After each update, if τ_{i,j}(t) < τmin then τ_{i,j}(t) is set to τmin; analogously, τ_{i,j}(t) is set to τmax if τ_{i,j}(t) > τmax. This ensures that every path retains at least a small amount of pheromone, and thus the probability of choosing any solution component is never zero. The algorithm stops when a preset running time is reached, or when the best makespan found is equal to the lower bound LB = max{LB1, LB2}.

5. Computational experiments
5.1 Experimental design
Random instances were generated in a way similar to Kashan et al (2008) to evaluate the performance of the proposed metaheuristics with respect to solution quality and run time.
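Both metaheuristics rely on Algorithm ERT-LPT from Section 4.1 to turn a set of batches into a schedule. The rule can be sketched in Python as follows. This is a minimal sketch under the paper's definitions of RT^b and PT^b, not the authors' implementation; the `Batch` container and function name are ours:

```python
from dataclasses import dataclass

@dataclass
class Batch:
    ready_time: int   # RT_b: latest release time of the jobs in the batch
    proc_time: int    # PT_b: longest processing time of the jobs in the batch

def ert_lpt(batches, num_machines):
    """Assign batches to parallel machines with the ERT-LPT rule and
    return the resulting makespan."""
    # Step 1: arrange batches in increasing order of their ready times.
    pending = sorted(batches, key=lambda b: b.ready_time)   # the set B
    available = []                                          # the set AB
    machine_free = [0] * num_machines  # completion time of last batch per machine
    makespan = 0
    while pending or available:
        # Step 2: machine m with the earliest available time t_a.
        m = min(range(num_machines), key=lambda i: machine_free[i])
        ta = machine_free[m]
        # Step 3: move all batches with RT_b <= t_a from B to AB.
        while pending and pending[0].ready_time <= ta:
            available.append(pending.pop(0))
        if available:
            # Step 4: pick the available batch with the longest processing time.
            b = max(available, key=lambda x: x.proc_time)
            available.remove(b)
            start = ta
        else:
            # AB empty: assign the first (earliest-ready) batch in B.
            b = pending.pop(0)
            start = max(ta, b.ready_time)
        machine_free[m] = start + b.proc_time
        makespan = max(makespan, machine_free[m])
    return makespan
```

On a small instance such as four batches with (RT, PT) = (0, 5), (0, 3), (2, 4), (6, 2) on two machines, the rule fills the earliest-free machine with the longest released batch first, and idles only when no batch is ready.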
Several factors that affect problem solution were identified: the number of machines, the number of jobs, the variation in job processing times, and the variation in job sizes. In addition, the following equation was used to determine the maximum release time:

rmax = R · E(p)E(s)|J| / (C|M|)    (13)

where E(p) and E(s) are the mathematical expectations of the processing time p and the size s, and the factor R determines the relative frequency of job arrivals. If R = 0, all jobs become available simultaneously at time zero, while as R increases the jobs arrive over a longer interval. The factors and their levels or ranges are shown in Table 1.

Table 1. Summary of parameters used to generate instances.
Factor | Levels | Number of levels
J | 10, 20, 50, 100 and 200 | 5
M | Fixed = 2 | 1
p | Uniform [1, 10] | 1
s | Uniform [1, 10], [2, 4] and [4, 8] | 3
r | Uniform [0, rmax], R = 0.5, 1 | 2
Instances per category: 10; total instances: 300.

Five different problem sizes, with the number of jobs varying from 10 to 200, were chosen to conduct the experiments, as suggested in previous works (Damodaran et al 2006, Kashan et al 2008). The number of machines was fixed to 2 because all the algorithms tested in the experiments employ the same ERT-LPT dispatching rule to assign batches to machines, which led to similar results in cases with four machines or more. The range of processing times was fixed from 1 to 10; the pilot experiments showed that the impact would be limited if the variability in processing times increased, e.g. from [1, 10] to [1, 20]. By contrast, job sizes and release times significantly affected the performance of the algorithms; therefore, different levels were used to identify their impacts. For each combination of problem factors, ten instances were randomly generated, and 300 different instances were thus obtained. Each instance-algorithm combination was run several times with different random seeds, and the best result was selected to report the performance of an algorithm.

Each category of problems is represented by a run code. Since the
number of machines and the processing times have only one level each, these two fields are simply omitted. For example, a problem category with 10 jobs, job sizes generated from the interval [4, 8], and R = 1 was denoted by J1s3r2.

5.2 Evaluation of the lower bounds
The quality of a lower bound is important because a tighter bound can help to better assess the performance of an approximation algorithm. In this subsection, a small experiment was conducted to examine the tightness of the two lower bounds relative to optimal solutions. However, as no branch-and-bound method has been developed for the problem Pm|batch, rj, sj ≤ C|Cmax to the authors' knowledge, the proposed metaheuristics (GA and ACO) were employed to approximate optimal solutions in small instances with 10 jobs. Both algorithms were allowed to run for ten minutes with a very large population size (1000 for GA, and 500 for ACO).

Column 1 in Table 2 presents the run code, which has been explained previously. Column 2 presents the average makespan obtained from the 10 instances of each problem category; the best result reported by GA or ACO for each instance is selected, and such results are very close to the optimal solutions considering the exhaustive search in the solution space. Columns 3 and 4 report the ratios of the two lower bounds to C*max, respectively.

Table 2. Evaluation of the two lower bounds.
Run code | C*max | LB1/C*max | LB2/C*max
s1r1 | 21.6 | 0.8935 | 0.8704
s2r1 | 14.2 | 0.9085 | 0.8944
s3r1 | 23.3 | 0.8112 | 0.8026
s1r2 | 25.5 | 0.8980 | 0.9176
s2r2 | 16.0 | 0.9063 | 0.9063
s3r2 | 26.1 | 0.8774 | 0.9004

It can be observed that the ratio of the lower bounds to C*max lies in the interval [0.87, 0.92] in most cases, except for category s3r1. LB1 is a tighter lower bound for r1 problems, whereas LB2 is better for r2 problems. This may be due to the relaxation of different constraints: LB1 keeps the processing time constraint within a batch, but simply ignores the release times; by contrast, LB2 relaxes the processing time
constraint, but allows each job unit with a release time 5.3 Parameter tuning of the proposed algorithms Algorithms tested in experiments include BFLPTERTLPT (BE for short), the proposed GA, and ACO BE is a heuristic using BFLPT(Ghazvini and Dupont, 1998) to form batches, and then assigning the batches to machines by ERT-LPT A pre-experiment was performed to determine the appropriate parameter settings of both GA and ACO There are several parameters that affect the search behaviour of the proposed GA, and choosing a proper value for them will improve solution quality These parameters include the population size pop, mutation probability pm and fitness parameter q The population size is closely related to instance size, e.g a population size of 30 may be enough for a 10-job instance, but a 200-job instance would require a much bigger population size In this study, a formula pffiffiffi pop ¼ 10 n was used to determine the population size as suggested by Rardin and Uzsoy (2001) For mutation probability, pm ¼ 0.05, 0.10, 0.15, 0.20 were 951 considered Based on the pilot experiment, the appropriate value was found to be 0.20 Another important parameter to be determined is the fitness parameter q, which directly affects the quality and time of convergence The following figure illustrates the convergence processes of GA with different q values As shown in Figure 6, when q ¼ the algorithm gives the best result, and the convergence time in this case is also acceptable Hence it is preferred to set the fitness parameter q ¼ The parameters needed to determine for ACO include the ants’ population na, pheromone evaporation rate r, relative importance of the pheromone a, and relative importance of the heuristic b It could be found in pilot experiments that using a dynamic population size ispfavourable So the number of ants ffiffiffi was set to na ¼ n To determine the proper values of r, a and b, different combinations of the parameters were evaluated The results are shown in 
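Before turning to the tuned results, the BE baseline named at the start of this subsection deserves a concrete illustration. The text only names its two rules, so the Python sketch below fills in assumed, standard definitions: BFLPT takes jobs longest processing time first and places each in the fullest batch that still has room (best fit), and ERT-LPT assigns the resulting batches, longest first, to the machine that becomes ready earliest. Release times are ignored here for brevity, consistent with the later remark that BFLPT does not consider them; the function names and details are illustrative, not the authors' code.

```python
def bflpt(jobs, capacity):
    """BFLPT batching (assumed form): jobs are (proc_time, size) pairs.

    Jobs are taken longest processing time first; each goes into the
    fullest batch that can still hold it (best fit), else a new batch.
    A batch's processing time is the longest proc_time it contains.
    """
    batches = []  # each batch: {"used": total size, "p": batch proc time}
    for p, s in sorted(jobs, key=lambda j: -j[0]):
        fits = [b for b in batches if b["used"] + s <= capacity]
        if fits:
            best = max(fits, key=lambda b: b["used"])  # fullest feasible batch
            best["used"] += s
            best["p"] = max(best["p"], p)
        else:
            batches.append({"used": s, "p": p})
    return batches


def ert_lpt(batches, machines):
    """ERT-LPT assignment (assumed form): batches longest first, each to
    the machine with the earliest ready time; returns the makespan."""
    ready = [0] * machines
    for b in sorted(batches, key=lambda b: -b["p"]):
        i = min(range(machines), key=lambda k: ready[k])
        ready[i] += b["p"]
    return max(ready)
```

On a toy five-job instance with batch capacity 10 and two machines, this sketch forms three batches and reports a makespan of 10.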
As can be observed in Figure 7, the combination of ρ = 0.2, α = 1 and β = 10 achieves the best result. This setting was selected for the ACO in the experiments.

Figure 6. Convergence process of the GA with different q values. Note: (1) the figure shows the convergence process for problem category J5s1r1, with the other parameters pop = 150 and pm = 0.2; (2) the algorithm was allowed to run for 60 s to ensure convergence.

H. Chen et al.

Figure 7. Parameter tuning for the ACO. Note: the results are obtained from 10 instances belonging to category J5s1r1; each instance was allowed to run for 60 s to ensure convergence. The vertical axis represents the average makespan of the instances.

To compare the performance of the GA and the ACO, both algorithms were allowed to run for the same amount of time. The preset computation time differs with the problem size and should be long enough to ensure convergence of both algorithms. All the relevant parameters are listed in the table below.

Table. Summary of the parameters for the GA and the ACO

Problem size   Running time (s) per instance   GA          ACO
J = 10         10                              pop = 30    na = 15
J = 20         30                              pop = 50    na = 20
J = 50         …                               pop = 70    na = 35
J = 100        …                               pop = 100   na = 50
J = 200        …                               pop = 150   na = 70

For all sizes, the GA uses pm = 0.2 and q = 4, and the ACO uses ρ = 0.2, α = 1 and β = 10.

All the algorithms, along with LB1 and LB2, were coded in Visual C# 2008; a Core 2 2 GHz computer with 2 GB RAM was used to run the experiments.

5.4 Experimental results

To assess the quality of the proposed algorithms, the algorithms were compared by measuring the relative difference (or gap) between the solution they found and the lower bound C_max^LB = max{LB1, LB2}. The gap percentage %Gap_LB is given by the following equation:

%Gap_LB(A) = (C_max^A − C_max^LB) / C_max^LB × 100%    (14)

where C_max^A is the best makespan value obtained by algorithm A.

Figure 8 reports the results for problem category s1r1. The vertical rectangles represent the average performance of the different algorithms, while the broken lines represent the worst-case performance. It can be observed that both the GA and the ACO outperform BE. The difference between BE and the other algorithms becomes marginal as the number of jobs increases: when problem sizes are small, the solution space is limited and a metaheuristic is able to explore it almost exhaustively, whereas metaheuristics become less efficient on large-scale problems owing to the exponential growth of the solution space. The GA and the ACO give similar results in this problem category: the GA performs slightly better when the problem size is small, whereas the ACO outperforms the GA when the number of jobs is larger than 50. The worst-case performance follows the same trends.

Figures 9 and 10 present the performance for problem categories s2r1 and s3r1, respectively. When the job sizes are small (s2), the difference between BE and the metaheuristics is clear, and the ACO provides a more robust performance than the GA. For the problems with large jobs (s3), all the algorithms perform similarly, although the ACO still outperforms the other two. The search space in this case is significantly smaller than that of the s2 problems: 40% of the jobs must lie in exactly one batch (the jobs with sizes 7 and 8), and the other 60% of the jobs need to be assigned efficiently to batches. Furthermore, any batch can accommodate at most two jobs, which also reduces the complexity of the problems. As for the worst-case performance, it is evident that the ACO is the most robust in most cases, except for problem s2r1 with 200 jobs.

Figure 8. Results for problem category s1r1.
Figure 9. Results for problem category s2r1.
Figure 10. Results for problem category s3r1.
Figure 11. Results for problem category s1r2.
Figure 12. Results for problem category s2r2.

The results for category r2 are shown in Figures 11, 12 and 13. For this kind of problem, the release times of jobs are generated over a longer interval.
In other words, jobs arrive less frequently than in category r1. It should be noted that in this case the performance of BE is rather unsatisfactory, especially in the worst case. An important reason for this is that BE employs the BFLPT rule to form batches, which does not take job release times into consideration. When jobs arrive frequently, the impact of release times is limited, but when the release times are generated over a long interval, ignoring them may result in very poor performance on some instances. Although the GA employs the same rule to form batches, the release times are considered implicitly through the feedback of the fitness function.

Comparing the GA with the ACO, there is generally no considerable difference between the two algorithms. An interesting trend is that the GA prefers problems with small jobs (s2), while the ACO is preferable for problems with large jobs (s3). The reason may be considered in two ways. First, the GA uses the BFLPT rule to form batches, which requires visiting all the existing batches before determining the 'best-fit' batch to accommodate the current job. In small-job cases the number of batches is also small, as each batch is able to accommodate more jobs; hence the GA visits fewer batches to make a decision, which reduces the complexity and computation time of constructing a solution. Second, at each step of constructing a batch, the ACO needs to determine the next job to be added to the current batch. For problems with large jobs there are fewer candidate jobs owing to the capacity constraint, so it is relatively simple and fast to calculate the transition probabilities. When the running time of the ACO is given in advance, this results in more iterations, and thus allows the ACO to provide better performance on large-job instances.

Figure 13. Results for problem category s3r2.

Concluding remarks

In this paper the problem of minimising makespan on BPMs in the presence of dynamic job arrivals and non-identical job sizes was investigated. The authors extended the single-machine problem considered by Li et al. (2005) and Chou et al. (2006) to the parallel-machine environment. The problem was shown to be NP-hard in the strong sense, so no polynomial-time algorithm solves it optimally. Two lower bounds, along with their validity proofs, were presented to evaluate the performance of approximation algorithms. An ERT-LPT heuristic was presented to assign batches to parallel machines. Two metaheuristics, namely a genetic algorithm and an ant colony optimisation, were then proposed to minimise makespan using ERT-LPT. Computational experiments were performed to assess the proposed metaheuristics along with the BE heuristic. The results indicate that both metaheuristics outperform BE; the GA obtains better solutions than the ACO on small-job problems, whereas the ACO dominates the GA on large-job instances.

There are a number of important directions for future research. The heuristic information and the pheromone update rule of the ACO may be further improved, and local search procedures can be developed to make the algorithm more effective and efficient. As for the GA, considering different batching rules to group jobs is also of interest. The possibility of using other metaheuristics, such as particle swarm optimisation and differential evolution, may be investigated as well. Furthermore, this research can be extended to optimise other objectives, such as total completion time and due-date-related performance measures. BPM problems with job families or flow-shop environments are also promising future directions.

Acknowledgements

The authors would like to thank Xiaolin Li, Qi Tan, and Song Zhang for technical assistance. This work was supported by the National Natural Science Foundation of China (70821001), the Research Fund for the Doctoral Program of
Higher Education of China (200803580024), HKSAR ITF (GHP/042/07LP) and HKSAR RGC GRF (HKU 712508E).

References

Bramel, J. and Simchi-Levi, D., 1997. The logic of logistics: Theory, algorithms, and applications for logistics management. Berlin: Springer Verlag.
Chandru, V., Lee, C.Y., and Uzsoy, R., 1993. Minimizing total completion time on batch processing machines. International Journal of Production Research, 31 (9), 2097–2121.
Chang, P.Y., Damodaran, P., and Melouk, S., 2004. Minimizing makespan on parallel batch processing machines. International Journal of Production Research, 42 (19), 4211–4220.
Cheng, T.C.E. and Wang, G.Q., 1998. Batching and scheduling to minimize the makespan in the two-machine flowshop. IIE Transactions, 30 (5), 447–453.
Chou, F.D., Chang, P.C., and Wang, H.M., 2006. A hybrid genetic algorithm to minimize makespan for the single batch machine dynamic scheduling problem. International Journal of Advanced Manufacturing Technology, 31 (3–4), 350–359.
Damodaran, P. and Chang, P.Y., 2008. Heuristics to minimize makespan of parallel batch processing machines. International Journal of Advanced Manufacturing Technology, 37 (9–10), 1005–1013.
Damodaran, P., Hirani, N.S., and Velez-Gallego, M.C., 2009. Scheduling identical parallel batch processing machines to minimise makespan using genetic algorithms. European Journal of Industrial Engineering, (2), 187–206.
Damodaran, P., Manjeshwar, P.K., and Srihari, K., 2006. Minimizing makespan on a batch-processing machine with non-identical job sizes using genetic algorithms. International Journal of Production Economics, 103 (2), 882–891.
Dupont, L. and Ghazvini, F.J., 1998. Minimizing makespan on a single batch processing machine with non-identical job sizes. European Journal of Automation, 32 (4), 431–440.
Ghazvini, F.J. and Dupont, L., 1998. Minimizing mean flow times criteria on a single batch processing machine with non-identical jobs sizes. International Journal of Production Economics, 55 (3), 273–280.
Ikura, Y. and Gimple, M., 1986. Efficient scheduling algorithms for a single batch processing machine. Operations Research Letters, (2), 61–65.
Kashan, A.H., Karimi, B., and Jenabi, M., 2008. A hybrid genetic heuristic for scheduling parallel batch processing machines with arbitrary job sizes. Computers & Operations Research, 35, 1084–1098.
Kashan, A.H., Karimi, B., and Jolai, F., 2006. Effective hybrid genetic algorithm for minimizing makespan on a single-batch-processing machine with non-identical job sizes. International Journal of Production Research, 44 (12), 2337–2360.
Koksalan, M. and Keha, A.B., 2003. Using genetic algorithms for single-machine bicriteria scheduling problems. European Journal of Operational Research, 145 (3), 543–556.
Lee, C.Y. and Uzsoy, R., 1999. Minimizing makespan on a single batch processing machine with dynamic job arrivals. International Journal of Production Research, 37 (1), 219–236.
Lee, C.Y., Uzsoy, R., and Martin-Vega, L.A., 1992. Efficient algorithms for scheduling semiconductor burn-in operations. Operations Research, 40 (4), 764–775.
Li, C.L. and Lee, C.Y., 1997. Scheduling with agreeable release times and due dates on a batch processing machine. European Journal of Operational Research, 96 (3), 564–569.
Li, S.G., Li, G.J., Wang, X.L., and Liu, Q.M., 2005. Minimizing makespan on a single batching machine with release times and non-identical job sizes. Operations Research Letters, 33 (2), 157–164.
Luo, H., Huang, G.Q., Zhang, Y.F., Dai, Q.Y., and Chen, X., 2009. Two-stage hybrid batching flowshop scheduling with blocking and machine availability constraints using genetic algorithm. Robotics and Computer-Integrated Manufacturing, 25 (6), 962–971.
Mak, K.L., Peng, P., Wang, X.X., and Lau, T.L., 2007. An ant colony optimization algorithm for scheduling virtual cellular manufacturing systems. International Journal of Computer Integrated Manufacturing, 20 (6), 524–537.
Malve, S. and Uzsoy, R., 2007. A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Computers & Operations Research, 34 (10), 3016–3028.
Mathirajan, M. and Sivakumar, A.I., 2006. A literature review, classification and simple meta-analysis on scheduling of batch processors in semiconductor. International Journal of Advanced Manufacturing Technology, 29 (9–10), 990–1001.
Melouk, S., Damodaran, P., and Chang, P.Y., 2004. Minimizing makespan for single machine batch processing with non-identical job sizes using simulated annealing. International Journal of Production Economics, 87 (2), 141–147.
Merkle, D., Middendorf, M., and Schmeck, H., 2002. Ant colony optimization for resource-constrained project scheduling. IEEE Transactions on Evolutionary Computation, (4), 333–346.
Pinedo, M., 2002. Scheduling: theory, algorithms and systems. Upper Saddle River, NJ: Prentice-Hall.
Potts, C.N. and Kovalyov, M.Y., 2000. Scheduling with batching: A review. European Journal of Operational Research, 120 (2), 228–249.
Rajendran, C. and Ziegler, H., 2004. Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research, 155 (2), 426–438.
Rardin, R.L. and Uzsoy, R., 2001. Experimental evaluation of heuristic optimization algorithms: A tutorial. Journal of Heuristics, (3), 261–304.
Sevaux, M. and Dauzere-Peres, S., 2003. Genetic algorithms to minimize the weighted number of late jobs on a single machine. European Journal of Operational Research, 151 (2), 296–306.
Stutzle, T. and Hoos, H.H., 2000. MAX-MIN Ant System. Future Generation Computer Systems, 16, 889–914.
Sung, C.S. and Yoon, S.H., 1997. Minimizing maximum completion time in a two-batch-processing-machine flowshop with dynamic arrivals allowed. Engineering Optimization, 28 (3), 231–243.
T'kindt, V., Monmarche, N., Tercinet, F., and Laugt, D., 2002. An ant colony optimization algorithm to solve a 2-machine bicriteria flowshop scheduling problem. European Journal of Operational Research, 142 (2), 250–257.
Tang, L.X. and Liu, P., 2009. Minimizing makespan in a two-machine flowshop scheduling with batching and release time. Mathematical and Computer Modelling, 49 (5–6), 1071–1077.
Uzsoy, R., 1994. Scheduling a single batch processing machine with nonidentical job sizes. International Journal of Production Research, 32 (7), 1615–1635.
Uzsoy, R. and Yang, Y., 1997. Minimizing total weighted completion time on a single batch processing machine. Production and Operations Management, (1), 57–73.
Wang, C.S. and Uzsoy, R., 2002. A genetic algorithm to minimize maximum lateness on a batch processing machine. Computers & Operations Research, 29 (12), 1621–1640.
Zhang, G.C., Cai, X.Q., Lee, C.Y., and Wong, C.K., 2001. Minimizing makespan on a single batch processing machine with nonidentical job sizes. Naval Research Logistics, 48 (3), 226–240.
Zhang, Y., Jiang, P., and Zhou, G., 2003. GA-driven part e-manufacturing scheduling via an online e-service platform. Integrated Manufacturing Systems, 14 (7), 575–585.


Table of contents

  • Cover

  • Implementation of product lifecycle management tools using enterprise integration engineering and action-research

  • A web-based collaborative design architecture for developing immersive VR driving platform

  • Third-generation STEP systems that aggregate data for machining and other applications

  • The lifecycle of active and intelligent products: The augmentation concept

  • Visual inspection of glass bottlenecks by multiple-view analysis

  • Metaheuristics to minimise makespan on parallel batch processing machines with dynamic job arrivals
