
Dynamic Parameter Design By Ant Colony Optimization and Neural Networks

July 29, 2007

By Chang, Hsu-Hwa; Chen, Yan-Kwang; Chen, Mu-Chen

Parameter design is the most important phase in the development of new products and processes, especially with regard to dynamic systems. Statistics-based approaches are usually employed to address dynamic parameter design problems; however, these approaches have some limitations when applied to dynamic systems with continuous control factors. This study proposes a novel three-phase approach, combining continuous ant colony optimization (CACO) with neural networks, for resolving dynamic parameter design problems as well as static characteristic problems. The proposed approach trains a neural network model to construct the relationship function among the response, inputs, and parameters of a dynamic system, which is then used to predict the responses of the system. Three performance functions are developed to evaluate the fitness of the predicted responses. The best parameter settings can be obtained by performing a CACO algorithm according to the fitness value, and are no longer restricted to the values of the control factor levels. The proposed approach is demonstrated with two illustrative examples. Results show that the proposed approach outperforms the Taguchi method.

Keywords: Ant colony optimization; neural networks; dynamic characteristic; robust parameter design.

1. Introduction

In today’s rapidly changing manufacturing setting, time-to-market, quality, and individualism are essential for the timely introduction of new products into global marketplaces. Therefore, manufacturers have to improve their production capability to satisfy customers’ needs. In general, the product design process consists of three phases: system design, parameter design, and tolerance design. Phadke (1989) and Fowlkes and Creveling (1995) provide a more detailed description of these three design phases. Parameter design in particular is the phase most emphasized by manufacturers because it most influences total production time and cost. In the design process, a number of parameters can affect the quality characteristic or response of the product. Parameters within the system may be classified as signal factors, noise factors, and control factors. Signal factors are set by the operator of the product to express the intended value for the response of the product. Noise factors are parameters that cannot be controlled by the designer, or are difficult and expensive to control. Noise factors cause the response to deviate from the target specified by the signal factor. Control factors are parameters that can be specified freely by the designer, who determines their best settings (also called levels) so that the system has the least sensitivity to the effect of the noise factors. Figure 1 provides the Parameter Diagram (P-diagram) representing the different types of parameters and their relationships (Phadke, 1989).

Parameter design problems can be divided into two classes: static and dynamic characteristics. In a static system, the aim is to obtain responses as close as possible to a specified target, whereas in a dynamic system the aim is to make the responses approach floating targets that depend on the input signal values assigned by the system operator. In other words, a system has a static characteristic if the signal factor is fixed, and a dynamic characteristic if the signal factor is variable (Fowlkes and Creveling, 1995). Since dynamic systems are frequently encountered in practice and are more difficult to analyze, they have received increasing attention. A recent review of dynamic parameter design can be found in Zang et al. (2005).

Taguchi (1987) used the signal-to-noise ratio (SNR) to evaluate robustness and variation in a system. A higher SNR value signifies that the system has better performance, that is, less response variation. Taguchi (1987) classifies SNRs into three types according to the quality characteristics of the responses, i.e. nominal-the-best (NTB), smaller-the-better (STB), and larger-the-better (LTB). In the NTB case the system has a specific target response that is larger than zero, e.g. the length of a component. In the STB case the ideal response is zero, e.g. the loudness of an engine, or the eccentricity of a tire. In the LTB case the ideal target response is infinity, e.g. efficiency, yield, or braking strength. For a dynamic system, let y_ij denote the response corresponding to a particular setting of the control factors at the ith noise factor setting and the jth signal factor setting. Under the assumption of a linear ideal function with no intercept, we have y_ij = βM_j, where M_j denotes the jth signal factor setting and β is the slope (sensitivity). For the STB, NTB, and LTB cases, the ideal slope values are β = 0, a finite positive target value, and β → ∞, respectively.

Fig. 1. P-diagram of a system.

To evaluate a dynamic system’s performance, the SNR formula SNR = 10 log10(β²/MSE) is used, where MSE represents the mean square error of the distance between the measured responses and the best fitted line. To improve a system’s robustness, the SNRs are utilized to perform the two-phase optimization procedure shown in Fig. 2. The procedure is as follows (Wu, 1999; Wu and Hamada, 2000):
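The slope and SNR computation can be sketched in a few lines. The response values below are hypothetical, and the zero-intercept least-squares fit follows the ideal function y = βM described above:

```python
import numpy as np

# Hypothetical responses y_ij: 2 noise settings (rows) x 3 signal levels (cols).
M = np.array([1.0, 2.0, 3.0])                 # signal factor levels M_j
y = np.array([[1.1, 2.0, 3.2],
              [0.9, 1.9, 2.8]])

# Zero-intercept least-squares slope of the ideal function y = beta * M.
beta = np.sum(y * M) / (len(y) * np.sum(M ** 2))

# MSE: mean squared distance of the responses from the fitted line.
mse = np.mean((y - beta * M) ** 2)

snr = 10.0 * np.log10(beta ** 2 / mse)        # dynamic SNR in decibels
```

A tighter scatter about the fitted line shrinks the MSE and so raises the SNR, which is exactly what Phase 1 below tries to achieve.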

Phase 1: Select levels for the significant factors to maximize the dynamic SNRs. A significant factor is a control factor that has an effect on variation.

Phase 2: Select levels for the adjustment factors to bring the slope on target (according to the characteristics of the responses, i.e. dynamic LTB, NTB, and STB). An adjustment factor is a control factor that has an effect on slope but not on variation.

In Fig. 2, the shadow indicates the outputs of a system, and the straight line denotes the ideal function y = βM. Figure 2(a) shows that the responses of an initial design have large variation and deviation from the ideal function. Figure 2(b) shows that the system after executing Phase 1 has a smaller variation. Figure 2(c) depicts how the desired target outputs are achieved in terms of three different cases. By selecting the appropriate levels of the adjustment factors, we can manipulate the outputs to approach the ideal outputs. In the NTB case, the outputs are moved to a specific ideal function to reduce the deviation. In the LTB case, the outputs are moved upward to find a larger value. For the STB case, we make the outputs approach zero as closely as possible. After performing the two-phase optimization, the system is supposed to be robust, that is, the responses have smaller variation and lower deviation from the desired targets.

Fig. 2. Two-phase procedure for the dynamic characteristic problem.

From the viewpoint of optimization, the parameter design can be treated as a black-box problem because we cannot formulate the system’s actual functions in mathematical equations based on our engineering knowledge (Goh, 1993). Engineers usually apply the Taguchi method to analyze the experimental outputs of the black-box system. Although the Taguchi method is considered a cost-effective approach for determining the optimal parameter settings of a product or a process, it cannot be applied to all parameter design problems and has some limitations and inefficiencies (Robinson et al., 2004; Maghsoodloo et al., 2004). There have been several articles published focusing on the methods of improving dynamic parameter design (Wasserman, 1996; Lunani et al., 1997; McCaskey and Tsui, 1997; Tsui, 1999; Miller, 2002; Lesperance and Park, 2003; Chen, 2003).

This study aims to transfer the black-box system to a mathematical programming problem so that more precise solutions can be obtained by using meta-heuristic algorithms. Neural networks and ant colony optimization (ACO) are employed in this study to optimize dynamic parameter design with regard to quantitative control factors. Neural networks are widely used for mapping the relationship function between a system’s inputs and outputs, which makes them suitable for our problems. Many researchers have successfully applied neural networks to resolve diverse parameter design problems (Rowlands et al., 1996; Anjum et al., 1997; Chiu et al., 1997; Su and Chang, 2000; Chow et al., 2002; Sarkar and Modak, 2003; Aijun et al., 2004; Su et al., 2005). The results showed that neural networks are capable of treating quantitative parameter values. Alternatively, continuous ACO (namely CACO) has been tested for its performance in optimizing continuous functions. The results indicate better performance in comparison to other algorithms such as genetic algorithms, evolutionary programming, hill-climbing, and population-based incremental learning (Bonabeau et al., 1999).

The rest of this article is organized as follows. Section 2 briefly introduces the neural network and CACO theories. Section 3 develops a three-phase approach for optimizing dynamic parameter design. Section 4 demonstrates the effectiveness of the proposed approach with two illustrative examples and compares the results with the Taguchi method. Discussion and conclusions are presented in Section 5.

2. Neural Networks and Ant Colony Optimization

2.1. Neural networks

A back-propagation neural network is a multi-layer network with learning ability, and is usually employed to approximate a nonlinear continuous mapping from input patterns to output patterns. The nonlinear sigmoid transfer function f(x) = 1/(1 + e^(-x)) is used between the connections of the input layer, hidden layer, and output layer. Back-propagation learning employs a gradient-descent algorithm to minimize the MSE between the target data and the predictions of the neural network. The gradient-descent learning algorithm enables a network to enhance its performance through self-learning. Learning rules are commonly divided into two types: supervised and unsupervised learning. A back-propagation neural network employs a supervised learning rule. The data set is collected before training the neural network model; under a supervised learning rule, each pattern in the data set comprises an input and an actual output. While training the network model, the performance of the model is sensitive to the choice of network structure and to the settings of the learning rate and momentum coefficient. A common approach for obtaining a well-trained network is to select the best one from several candidate networks that are trained with different numbers of hidden layers and of nodes in each hidden layer. The process of training a network is as follows (Fausett, 1994):

1. Determine the network structure. Randomly initialize weights between layers. Select the learning schedule (i.e. set the transfer function, learning rate, momentum, and learning count).

2. Repeat Steps 3-6 until the learning count is reached or the error criterion is satisfied.

3. Enter the inputs and target outputs into the network.

4. Apply the transfer function to predict the outputs.

5. Calculate the error between the predicted output and the target output.

6. Adjust the weights of the network.
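The six steps above can be sketched as a minimal training loop. This is an illustrative toy, not the paper's setup: the 2-4-1 structure, learning rate, momentum, learning count, and synthetic data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic training data: a smooth target scaled into the sigmoid's (0, 1) range.
X = rng.uniform(-1.0, 1.0, size=(64, 2))
y = ((X[:, 0] + X[:, 1]) / 4.0 + 0.5).reshape(-1, 1)

# Step 1: 2-4-1 structure, random initial weights, learning schedule.
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
lr, mom = 0.3, 0.8
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

for epoch in range(3000):                    # Step 2: repeat for the learning count
    h = sigmoid(X @ W1 + b1)                 # Steps 3-4: forward pass through
    out = sigmoid(h @ W2 + b2)               #   hidden and output layers
    err = out - y                            # Step 5: prediction error
    d_out = err * out * (1.0 - out)          # Step 6: back-propagate gradients
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    vW2 = mom * vW2 - lr * (h.T @ d_out) / len(X); W2 += vW2   # momentum update
    vb2 = mom * vb2 - lr * d_out.mean(axis=0);     b2 += vb2
    vW1 = mom * vW1 - lr * (X.T @ d_h) / len(X);   W1 += vW1
    vb1 = mom * vb1 - lr * d_h.mean(axis=0);       b1 += vb1

mse = float(np.mean(err ** 2))               # final training error
```

The momentum terms (vW1, vW2, ...) accumulate a decaying average of past gradients, which is what the "momentum coefficient" in the learning schedule controls.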

2.2. Ant colony optimization

Ant colony optimization (ACO) algorithms are multi-agent systems which mimic the foraging behavior of real ants looking for the shortest path between a food source and the nest. The pheromone, deposited on the trail by an ant, is used to guide the colony during the search. The amount of pheromone deposited depends on the quality of the constructed solution: the better the solution, the greater the amount of pheromone deposited, and the higher the probability that the path will be chosen by other ants in later iterations. ACO has been widely applied to various discrete optimization problems; engineering applications can be found in Dorigo and Stützle (2004).

Solving a parameter design problem sometimes amounts to continuous function optimization. Several modified ant algorithms have been developed to optimize functions over continuous spaces; examples include Bilchev and Parmee (1995), Wodrich and Bilchev (1997), Jayaraman et al. (2000), Wang et al. (2002), Socha (2004), and Pourtakdoust and Nobahari (2004). The modified CACO presented by Jayaraman et al. (2000) is adopted to resolve our problems. The CACO utilizes a bi-level search procedure including both local and global searches. To apply CACO, the initial solutions are classified into superior and inferior solutions according to their fitness values. The local and global search ants move to destinations of increasing fitness by repeatedly searching the superior (local) and inferior (global) solutions, respectively. A local search finds a better solution based on the pheromone trail values, whereas a global search uses a genetic algorithm to generate offspring solutions.

2.2.1. Local search

Local search ants select a local solution i with probability P_i(t) = τ_i(t)/Σ_k τ_k(t), where k indexes the solutions and τ_i(t) is the pheromone trail on solution i at time t. Initially, set τ_i(t = 0) = τ_0 for all solutions. After selecting a destination, the ant moves through a short distance; the direction of the movement is the same as the previous direction if there is an improvement in fitness, otherwise the ant searches in a random direction. In the above search, if a higher fitness is not obtained, the age (i.e., time t) of the solution is increased. The moving distance is defined by the relation Δ(t, R) = R(1 − r^((1 − t/T)^b)), where R is the maximum search radius, r is a random number from [0, 1], T is the total number of iterations of the algorithm (so that Δ(t, R) converges to 0 as t tends to T), and b is a positive parameter controlling the degree of nonlinearity. The pheromone value decreases by evaporation over time, and increases by a deposited increment which depends on how much the fitness has improved.
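The roulette-wheel selection rule and the shrinking move distance can be sketched as follows; the default b and the arguments in the usage hints are illustrative values, not those of the original study:

```python
import random

def select_solution(tau):
    """Pick solution i with probability P_i(t) = tau_i(t) / sum_k tau_k(t)."""
    pick, acc = random.uniform(0.0, sum(tau)), 0.0
    for i, t in enumerate(tau):
        acc += t
        if pick <= acc:
            return i
    return len(tau) - 1          # float-rounding fallback

def step_size(t, T, R, b=2.0):
    """Delta(t, R) = R(1 - r^((1 - t/T)^b)); shrinks to 0 as t approaches T."""
    r = random.random()
    return R * (1.0 - r ** ((1.0 - t / T) ** b))
```

At t = T the exponent (1 − t/T)^b is zero, so r^0 = 1 and the step collapses to 0, which is what forces the local search to settle as the solution ages.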

2.2.2. Global search

The global search is done sequentially by crossover, mutation, and trail diffusion operations. The crossover operation selects a parent randomly and sets the first variable of the child’s position vector (which for parameter design problems holds the control factor values) equal to the first element of that parent’s position vector. Each subsequent variable of the child is set to the corresponding value of a randomly chosen parent with a crossover probability (denoted by P_c). The mutation operation adds or subtracts a value to or from each variable with a mutation probability (denoted by P_m); the mutation step size is given by the same relation Δ(t, R) = R(1 − r^((1 − t/T)^b)). The trail diffusion operation randomly selects two parents from the present superior solutions. Each variable of the child’s position vector can take either (1) the value of the corresponding variable from the first parent, (2) the corresponding value from the second parent, or (3) a weighted average of the two: x_i(child) = αx_i(parent_1) + (1 − α)x_i(parent_2), where α is a random number from [0, 1]. The probability of selecting the third option is set equal to the mutation probability, with equal probability allotted to the first two options.
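The crossover and trail diffusion operations can be sketched as follows (the mutation step reuses the Δ(t, R) relation already given). The default probabilities and the parent vectors in the usage hints are illustrative assumptions:

```python
import random

def crossover(parents, pc=0.8):
    """Child takes variable 1 from a random parent; each later variable is
    replaced from another randomly chosen parent with probability pc."""
    child = list(random.choice(parents))
    for i in range(1, len(child)):
        if random.random() < pc:
            child[i] = random.choice(parents)[i]
    return child

def trail_diffusion(p1, p2, pm=0.5):
    """Each child variable is x_i(p1), x_i(p2), or the weighted average
    alpha*x_i(p1) + (1 - alpha)*x_i(p2); the average occurs with probability
    pm, and the two copy options share the remaining probability equally."""
    child = []
    for a, b in zip(p1, p2):
        if random.random() < pm:
            alpha = random.random()
            child.append(alpha * a + (1.0 - alpha) * b)
        else:
            child.append(random.choice((a, b)))
    return child
```

With pm = 1 every trail-diffusion variable is a convex combination of the parents, so the child always lies inside the box spanned by them.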

3. Proposed Approach

The approach for optimizing dynamic parameter design consists of three phases, and is based on the integration of a back-propagation network and a CACO. The first phase involves training a neural network model to formulate the relationship function between the responses and the parameters of a dynamic system. The trained model is then used to predict the responses of a specific parameter combination. The second phase involves developing performance functions for the three types of dynamic systems to evaluate the predicted responses. The performance measures are then transferred into fitness values for use in the next phase. The third and final phase uses the modified CACO presented by Jayaraman et al. (2000) to find the best parameter combination by optimizing the fitness value.

3.1. Identify the relationship function

To apply neural networks to our problems, the inputs are assigned the setting values of the control factors, noise factor, and signal factor, and the output is the response. A well-trained neural network represents a mathematical model of the system’s relationship function y = R(X_ij), where y is the response value for parameter combination X at the ith noise factor and jth signal factor settings. Through the trained network we can predict the corresponding response for any set of values within the ranges of the parameters; the network is therefore capable of treating continuous control factors. A schematic representation of the network is shown in Fig. 3. The process of identifying R(X_ij) is as follows:

Step 1. Collect the data for input and output layers from the experiments.

Step 2. Obtain the training and testing patterns by randomly selecting the data.

Step 3. Select several neural networks structures including input nodes, hidden layers, hidden nodes and output nodes.

Step 4. Set learning rate, momentum coefficient and execution iterations.

Fig. 3. Three-layer network for a dynamic robust design.

Step 5. Execute the training processes of the neural networks.

Step 6. Choose the best network from the several trained networks as the system’s relationship function R(X_ij). The performance evaluation criterion for neural network training is herein the root mean-square error (RMSE).
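Step 6 amounts to picking the candidate with the smallest testing-set RMSE. A minimal sketch, in which the candidate structure names and prediction values are hypothetical:

```python
import numpy as np

def rmse(pred, target):
    """Root mean-square error, the model-selection criterion of Step 6."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(target)) ** 2)))

# Hypothetical testing-set predictions from three candidate networks.
target = [1.0, 2.0, 3.0]
candidates = {
    "7-4-1": [1.3, 1.6, 3.5],
    "7-6-1": [1.05, 1.98, 2.96],
    "7-8-1": [0.8, 2.3, 2.9],
}
best = min(candidates, key=lambda k: rmse(candidates[k], target))
```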

3.2. Measure performance

To evaluate how good a combination of control factors is, the corresponding responses at the full combination of signal and noise levels are predicted and then combined into a single performance value. According to the different types of quality characteristics of dynamic parameter designs, three performance functions are developed to evaluate the performance of a candidate solution, as follows.
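The paper's exact performance functions are not reproduced here. The following are only illustrative analogues, based on standard Taguchi-style quality losses for the three characteristics, and should not be read as the authors' formulas:

```python
import numpy as np

def fitness_ntb(y, M, beta_target=1.0):
    """NTB analogue: penalize deviation of the predicted responses from the
    ideal line y = beta_target * M over all noise/signal settings."""
    return float(np.mean((np.asarray(y) - beta_target * np.asarray(M)) ** 2))

def fitness_stb(y):
    """STB analogue: the ideal slope is zero, so penalize the responses."""
    return float(np.mean(np.asarray(y) ** 2))

def fitness_ltb(y):
    """LTB analogue: larger is better; minimize the mean squared reciprocal."""
    return float(np.mean(1.0 / np.asarray(y, dtype=float) ** 2))
```

All three are written to be minimized, matching the fitness convention of Phase 3 below.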

3.3. Perform the optimization search

A CACO algorithm is used to search the best parameter combination in terms of fitness value. To apply the CACO to the problems, we have to define the following elements:

(1) Construction of solutions

A complete solution is composed of a set of control factors. An initial solution is generated randomly within parameter bounds, e.g. for a set of five control factors (A, B, C, D, E), the settings might be (2, 4.5, 12.4, 3, 0.7).

(2) Generating new solutions

In a local search and in the mutation operation, a neighboring solution is selected by adding or subtracting a value to or from each variable of the present solution, e.g. the above solution with the variations (+1, +0.6, -0.8, -1, +0.4) becomes (3, 5.1, 11.6, 2, 1.1). In the crossover operation, new solutions are generated from a pair of solutions. For instance, assume that the parents are (2, 4.5, 12.4, 3, 0.7) and (3, 5.1, 11.6, 2, 1.1) and the cut-point is the third variable. After performing the crossover operation, the new solutions will be (2, 4.5, 11.6, 2, 1.1) and (3, 5.1, 12.4, 3, 0.7).

(3) Fitness function
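The cut-point crossover of the worked example can be reproduced directly:

```python
def cut_point_crossover(p1, p2, cut):
    """Exchange every variable from position `cut` (0-indexed) onward."""
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

p1 = (2, 4.5, 12.4, 3, 0.7)
p2 = (3, 5.1, 11.6, 2, 1.1)
c1, c2 = cut_point_crossover(p1, p2, 2)   # cut-point at the third variable
# c1 == (2, 4.5, 11.6, 2, 1.1) and c2 == (3, 5.1, 12.4, 3, 0.7)
```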

F(X), calculated in Phase 2, represents the fitness value for a solution of parameter combination X. In our problems, the fitness function is to be minimized under the restrictions of parameter bounds. A smaller fitness value means the corresponding solution has a better performance.

(4) Pheromone updating rule

The pheromone is updated by the rule τ_i(t + 1) = ρτ_i(t) + Δτ_i(t), where ρ is the evaporation rate and Δτ_i(t) is the deposited pheromone. Herein, the pheromone increment Δτ_i(t) is defined as 1/F_i(t), where F_i(t) is the fitness value of solution i at iteration t.
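The update rule in code form; the evaporation rate and fitness values below are illustrative:

```python
def update_pheromone(tau, fitness, rho=0.9):
    """tau_i(t+1) = rho * tau_i(t) + 1 / F_i(t): evaporate the old trail, then
    deposit an increment inversely proportional to the minimized fitness."""
    return [rho * t + 1.0 / f for t, f in zip(tau, fitness)]

tau = update_pheromone([1.0, 1.0], [2.0, 4.0])
# The solution with the smaller (better) fitness receives the larger deposit.
```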

(5) Radius size

For a continuous factor, the maximum search radius R is defined as one tenth of the range of the parameter value; the search radius decreases as the age increases. For a qualitative factor, R is the full range of the parameter value, and the moving step size remains one for all ages.

(6) Stopping criterion

The algorithm stops when the total number of iterations T is reached.

To perform a CACO, ants are repeatedly sent to trail solutions in order to optimize the fitness value. The total number of ants (denoted by A) is set as half the total number of trail solutions (denoted by S). The number of global ants (denoted by G) and the number of local ants (denoted by L) are set as 80% and 20% of the total number of ants, respectively. Furthermore, 90% of the global ants are sent for crossover and mutation, and the remaining 10% are sent for trail diffusion. Figure 4 shows the flow chart of the proposed CACO. The stepwise procedure is as follows.

Step 1. Set the values of the parameters S, A, ρ, τ_0, P_c, P_m, T, and b.

Step 2. Set the radius size R and bounds of each control factor.

Step 3. Create S trail solutions.

Step 4. Set t = 0 for all trail solutions.

Step 5. Estimate the outputs of the trail solutions through the trained neural network.

Step 6. Evaluate the fitness values of the trail solutions.

Fig. 4. Flowchart of the proposed CACO.

Step 7. Sort the trail solutions.

Step 8. Send L ants to the selected trail solutions for local search.

Step 9. Move the ants a short distance to neighboring trail solutions.

Step 10. If the best solution has improved, replace the previous one, and set t = 0. Otherwise set t = t + 1.

Step 11. Send G ants to global solutions: 90% of G for crossover and mutation, and the remaining 10% for trail diffusion.

Step 12. Generate a random number r_1 from [0, 1]; if r_1 > P_c, go to Step 15.

Step 13. Select randomly a pair of solutions and the cut-point.

Step 14. Crossover.

Step 15. Generate a random number r_2 from [0, 1]; if r_2 > P_m, go to Step 17.

Step 16. Move the ants a short distance to neighboring trail solutions.

Step 17. Perform trail diffusion.

Step 18. If the best solution has improved, replace the previous one, and set t = 0. Otherwise set t = t + 1.

Step 19. Update the trail pheromone.

Step 20. If t &lt; T, go to Step 5; otherwise, go to Step 21.

Step 21. Print the best solution.
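The whole procedure can be condensed into a toy end-to-end sketch. Everything here is an assumption for illustration: the quadratic fitness function stands in for the trained-network prediction of Phases 1-2, and the population size, probabilities, iteration count, and bounds are arbitrary.

```python
import random

def fitness(x):
    """Toy stand-in for Phases 1-2 (predict responses, then score them).
    The optimum of this illustrative function is at (3, 3, 3)."""
    return sum((v - 3.0) ** 2 for v in x)

def delta(t, T, R, b=2.0):
    """Shrinking move distance Delta(t, R) = R(1 - r^((1 - t/T)^b))."""
    return R * (1.0 - random.random() ** ((1.0 - t / T) ** b))

def caco(bounds, n_var=3, S=20, T=60, rho=0.9, pc=0.8, pm=0.5):
    random.seed(0)
    lo, hi = bounds
    R = (hi - lo) / 10.0                               # Step 2: radius size
    sols = [[random.uniform(lo, hi) for _ in range(n_var)] for _ in range(S)]
    tau = [1.0] * S                                    # Step 1: tau_0 = 1
    for t in range(T):                                 # Steps 5-20: main loop
        sols.sort(key=fitness)                         # Step 7: sort solutions
        for i in range(S // 2, S):                     # global ants, inferior half
            if random.random() < pc:                   # crossover (Steps 12-14)
                parent = random.choice(sols[:S // 2])
                cut = random.randrange(1, n_var)
                sols[i] = sols[i][:cut] + parent[cut:]
            if random.random() < pm:                   # mutation (Steps 15-16)
                j = random.randrange(n_var)
                v = sols[i][j] + random.choice((-1, 1)) * delta(t, T, R)
                sols[i][j] = min(hi, max(lo, v))
        for _ in range(S // 4):                        # local ants (Steps 8-10)
            # Roulette selection over the superior half; pheromone is tied to
            # the sorted rank here purely for simplicity.
            pick, acc, i = random.uniform(0.0, sum(tau[:S // 2])), 0.0, 0
            for j in range(S // 2):
                acc += tau[j]
                if pick <= acc:
                    i = j
                    break
            cand = [min(hi, max(lo, v + random.choice((-1, 1)) * delta(t, T, R)))
                    for v in sols[i]]
            if fitness(cand) < fitness(sols[i]):       # keep only improvements
                sols[i] = cand
        tau = [rho * v + 1.0 / (1e-9 + fitness(s))     # Step 19: evaporate, deposit
               for v, s in zip(tau, sols)]
    return min(sols, key=fitness)                      # Step 21: best solution

best = caco((0.0, 10.0))
```

Note that this sketch omits the trail diffusion share of the global ants and the age-reset bookkeeping of Steps 10 and 18; it is meant only to show how the pieces fit together.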

4. Illustrative Examples

4.1. Case 1

A dynamic LTB experiment presented by Wu et al. (2005) for the design of a quartz crystal nanobalance (QCN) gas sensor is reanalyzed. In the QCN design the quality characteristic is the resonance frequency shift value, which is ideally as large as possible in order to enhance the detection capabilities of the device. Table 1 lists the relevant parameter values at each level. This case adopted an L18 orthogonal array to arrange the experiments. An orthogonal array is a method of setting up experiments that requires only a fraction of the full factorial combinations; L18 is one of the standard orthogonal arrays tabulated by Taguchi. An array’s name indicates the number of rows it has, so L18 means that 18 different control factor settings are to be conducted. For more detailed information, readers can refer to Fowlkes and Creveling (1995) and Phadke (1989). The factor allocations and the resulting data are listed in Table 2. The optimal parameter settings obtained by the Taguchi method are (2, 0.006, 0.002, 4/3, 25) for (A, B, C, D, E), respectively.

To build the relationship function of the system, several different structures of neural networks are trained by assigning the values of the control factors, noise factor, and signal factor as inputs and the response as output. We randomly select 138 training patterns and 24 testing patterns from Table 2 to train the network models. Table 3 lists several options for the network architecture and the learning schedule; the 7-6-1 structure is selected for its better performance. This study makes use of the neural network software package Qnet(R) (www.qnetv2k.com).

Table 1. The parameters and their levels.

Table 2. Control factor allocations and the responses.

Table 3. The candidate neural network models.

Table 4. Predicted responses at the best combination of control factors (case 1).

Table 5. A comparison of the Taguchi method and the proposed approach (case 1).

4.2. Case 2

Kissel (1992) conducted a dynamic NTB experiment for the design of a motor controller test circuit. The signal factor is the power supply voltage; the response is the measured voltage. The slope of the ideal function is equal to one. The L12 orthogonal array was used to allocate the parameter settings. Table 6 lists the values of the parameter levels and the responses of the experiments. The optimal parameter settings obtained by the Taguchi method are (44, 1, 2, 2000, 10, 6, 0.1, 5500, 1, 8, 15) for factors (A, B, C, D, E, F, G, H, I, J, K), respectively.

Reanalyzing this case, 184 training patterns and 32 testing patterns are randomly selected from Table 6 to build several options of the network architectures.

Table 6. Control factors values and responses of the experiment.

Table 7. Predicted responses at the best solution (Case 2).

Table 8. A comparison of the Taguchi method and the proposed approach (Case 2).

5. Discussion and Conclusion

Parameter design with dynamic characteristics is a critical engineering problem for efficiently developing new products. Engineers conventionally utilize Taguchi’s two-phase procedure to optimize the design, which selects the levels of the adjustment factors in order to bring the slope of a system toward the ideal target. However, adjustment factors cannot be guaranteed to exist in practice. Furthermore, even if adjustment factors exist, interactions among adjustment factors and significant factors may occur, complicating the designers’ trade-off judgments. Moreover, the experimental design approach can only provide discrete levels of the factors. This study presents a three-phase approach that avoids the above limitations. The main contribution of the proposed approach is that it integrates neural networks into a CACO so as to efficiently optimize the parameter design of dynamic systems with continuous control factors. The effectiveness of the approach has been verified with two numerical cases, and the results have been compared with those of previously published works in which Taguchi methods were used. In Case 1, the response values of the LTB system have been increased by 7%. In Case 2, the robustness of the NTB system has been greatly improved, by 26.2%. Both cases demonstrate that the approach is able to efficiently treat dynamic parameter design. The properties of the proposed approach are summarized as follows:

(1) The approach is capable of simultaneously optimizing the location and dispersion of the response; no adjustment factors or interactions need to be considered.

(2) The approach can easily treat both continuous and discrete control factors. The obtained solutions are not limited to the values of the experimental levels if the factors are continuous.

(3) With appropriate modification, the approach can be reduced to deal with static parameter design, in which the signal factor is fixed.

Acknowledgments

This research project was sponsored by the National Science Council of Taiwan under Grant No. NSC94-2416-H-141-003.

References

Aijun, LA, L Hejun, L Kezhi and G Zhengbing (2004). Applications of neural network and genetic algorithms to CVI processes in carbon/carbon composites. Acta Materialia, 52, 299-305.

Anjum, MF, I Tasaddug and AS Ahaled (1997). Response surface methodology: A neural network approach. European Journal of Operational Research, 101, 65-73.

Bilchev, G and I Parmee (1995). The ant colony metaphor for searching continuous design spaces. In: Proceedings of AISB Workshop on Evolutionary Computing, Lecture Notes in Computer Science, Vol. 993, pp. 25-39. Berlin: Springer-Verlag.

Bonabeau, E, M Dorigo and G Theraulaz (1999). Swarm Intelligence: From Natural to Artificial Systems. New York, NY: Oxford University Press.

Chen, SP (2003). Robust design with dynamic characteristics using stochastic sequential quadratic programming. Engineering Optimization, 35(1), 79-89.

Chiu, CC, CT Su, GH Yang, JS Huang, SC Chen and NT Cheng (1997). Selection of optimal parameters in gas-assisted injection molding using a neural network model and the Taguchi method. International Journal of Quality Science, 2, 106-120.

Chow, TT, GQ Zhang, Z Lin and CL Song (2002). Global optimization of absorption chiller system by genetic algorithm and neural network. Energy and Building, 34, 103-109.

Dorigo, M and T Stützle (2004). Ant Colony Optimization. Cambridge, MA: MIT Press.

Fausett, L (1994). Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Prentice Hall.

Fowlkes, WY and CM Creveling (1995). Engineering Methods for Robust Product Design. Addison-Wesley.

Goh, TN (1993). Taguchi methods: some technical, cultural and pedagogical perspectives. Quality and Reliability Engineering International, 9, 185-202.

Jayaraman, VK, BD Kulkarni, S Karale and P Shelokar (2000). Ant colony framework for optimal design and scheduling of batch plants. Computers and Chemical Engineering, 24, 1901-1912.

Kissel, R (1992). Taguchi method in electronics – a case study. Case Studies & Tutorials: 10th Taguchi Symposium, ASI, pp. 289-308.

Lesperance, ML and SM Park (2003). GLMs for the analysis of robust design with dynamic characteristics. Journal of Quality Technology, 35(3), 253-263.

Lunani, M, VN Nair and GS Wasserman (1997). Graphical methods for robust design with dynamic characteristics. Journal of Quality Technology, 29, 327-338.

Maghsoodloo, S, G Ozdemir, V Jordan and CH Huang (2004). Strengths and limitations of Taguchi’s contributions to quality, manufacturing, and process engineering. Journal of Manufacturing Systems, 23(2), 73-126.

McCaskey, SD and KL Tsui (1997). Analysis of dynamic robust design experiments. International Journal of Production Research, 35(6), 1561-1574.

Miller, A (2002). Analysis of parameter design experiments for signal-response systems. Journal of Quality Technology, 34(2), 139-151.

Phadke, MS (1989). Quality Engineering Using Robust Design. New Jersey: Prentice Hall.

Pourtakdoust, SH and H Nobahari (2004). An extension of ant colony system to continuous optimization problems. In: Ant Colony Optimization and Swarm Intelligence, Proceedings of the 4th International Workshop, ANTS 2004, M Dorigo, M Birattari, C Blum, LM Gambardella, F Mondada and T Stützle (eds.), pp. 294-301. Berlin: Springer-Verlag.

Robinson, TJ, CM Borror and RH Myers (2004). Robust parameter design: a review. Quality and Reliability Engineering International, 20, 81-101.

Rowlands, H, MS Packianather and E Oztemel (1996). Using artificial neural networks for experimental design in off-line quality. Journal of Systems Engineering, 6, 46-59.

Sarkar, D and JM Modak (2003). ANNSA: a hybrid artificial neural network/simulated annealing algorithm for optimal control problem. Chemical Engineering Science, 58, 3131-3142.

Socha, K (2004). ACO for continuous and mixed-variable optimization. In: Ant Colony Optimization and Swarm Intelligence, Proceedings of the 4th International Workshop, ANTS 2004, M Dorigo, M Birattari, C Blum, LM Gambardella, F Mondada and T Stützle (eds.), pp. 25-36. Berlin: Springer-Verlag.

Su, CT and HH Chang (2000). Optimization of parameter design: an intelligent approach using neural network and simulated annealing. International Journal of Systems Science, 31(12), 1543-1549.

Su, CT, MC Chen and HL Chan (2005). Applying neural network and scatter search to optimize parameter design with dynamic characteristics. Journal of the Operational Research Society, 56(10), 1132-1140.

Taguchi, G (1987). System of Experimental Design. White Plains, NY: Unipub/Kraus International.

Tsui, KL (1999). Modeling and analysis of dynamic robust design experiments. IIE Transactions, 31, 1113-1122.

Wang, L, Q Wu and F Qiao (2002). Ant system algorithm research and its applications. High Technology Letters, 8(4), 91-96.

Wasserman, GS (1996). Parameter design with dynamic characteristics: a regression perspective. Quality and Reliability Engineering International, 12, 113-117.

Wodrich, M and G Bilchev (1997). Cooperative distributed search: the ants way. Control and Cybernetics, 26(3), 413-445.

Wu, A (1999). Taguchi Method Five Days Seminar. American Supplier Institute.

Wu, CFJ and M Hamada (2000). Experiments: Planning, Analysis, and Parameter Design Optimization. John Wiley & Sons, Inc.

Wu, DH, WT Chien and YJ Tsai (2005). Applying Taguchi dynamic characteristics to the robust design of a piezoelectric sensor. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 52(3), 480-486.

Zang, C, MI Friswell and JE Mottershead (2005). A review of robust optimal design and its application in dynamics. Computers and Structures, 83, 315-326.

HSU-HWA CHANG*, YAN-KWANG CHEN[dagger] and MU-CHEN CHEN[double dagger]

* Department of Business Administration

National Taipei College of Business

321, Sec. 1, Chi-Nan Rd., Taipei, Taiwan

* hhchang@webmail.ntcb.edu.tw

[dagger] Department of Logistics Engineering and Management

National Taichung Institute of Technology

129 Sanmin Road, Sec. 3, Taichung, Taiwan

[double dagger] Institute of Traffic and Transportation

National Chiao Tung University

4F, 118, Sec. 1, Chung Hsiao W. Road, Taipei, Taiwan

Received 31 March 2005

Revised 15 March 2006

* Corresponding author.

Hsu-Hwa Chang is an Associate Professor in the Department of Business Administration at National Taipei College of Business, Taiwan. He received his Ph.D. degree in Industrial Engineering and Management from National Chiao Tung University, Taiwan. His current research interests include advanced quality engineering and data mining applications. His articles have appeared in a variety of journals, including Expert Systems with Applications, International Journal of Systems Science and International Journal of Industrial Engineering.

Yan-Kwang Chen is an Associate Professor in the Department of Logistics Engineering and Management at National Taichung Institute of Technology, Taiwan. He received his Ph.D. in Industrial Engineering and Management from National Chiao Tung University, Taiwan and his M.S. in Statistics from National Tsing Hua University, Taiwan. His main research interests include quality control, production planning, and decision-making problems. His articles have appeared in a variety of journals, including Quality and Reliability Engineering International, International Journal of Production Research, International Journal of Systems Science, European Journal of Operational Research and International Journal of Production Economics.

Mu-Chen Chen received both his Ph.D. and M.Sc. degrees in Industrial Engineering and Management from National Chiao Tung University, and his B.S. degree in Industrial Engineering from Chung Yuan Christian University. He is currently a Professor in the Institute of Traffic and Transportation at National Chiao Tung University, Taiwan. His teaching and research interests include data mining, logistics and supply chain management, and meta-heuristics. He is also involved with research and industry through a range of NSC (National Science Council, Taiwan) and enterprise projects.

Copyright World Scientific Publishing Co. Pte., Ltd. Jun 2007

(c) 2007 Asia – Pacific Journal of Operational Research. Provided by ProQuest Information and Learning. All rights Reserved.
