Reliability-based design optimization using a genetic algorithm: application to bonded copper thin-film areas on polypropylene

In the present study, a system reliability-based design optimization (RBDO) methodology is applied to thin copper films deposited on flat polypropylene surfaces. The input data for the design of such a polymer-metal joint were the normal stresses required to detach the copper film from the polypropylene substrate, obtained from uniaxial tensile tests at different forces and interface areas. The RBDO methodology was implemented to find the smallest bonding area that assures different reliability levels, minimizing the detachment probability and the cost (a smaller amount of metallic film). The reliability level is treated as input data and enters the optimization problem as a constraint. The optimization method used is the Genetic Algorithm (GA), chosen because GAs behave well with functions that may exhibit nonlinear behavior. The results show that the applied methodology is efficient, and it is concluded that high reliability requirements may impose larger areas. For a 98% reliability of not detaching the thin film, the resulting bonded area is 3.18 times greater than the initial value without safety requirements.


INTRODUCTION
The desire to obtain optimal design conditions, which represent lower costs, higher performance and improved efficiency, is a permanent objective in all areas of engineering. In this work, a Genetic Algorithm (GA) was applied as a global search method to find minimum bonded-area values for metal thin films, considering the mechanical strength under specific loads. The authors of [1] note that solving optimization problems involving uncertainty requires nondeterministic methods that account for this randomness, such as reliability-based optimization methods like RBDO (Reliability-Based Design Optimization).
The metallization of polymers is applied in several fields [2]: for example, in the electronics industry for printed circuit boards, in the food industry for metallic food packaging, or even in biomechanics, in prostheses with polymer components, to improve surface wear conditions. For the successful use of metallized polymers, among their many applications, it is necessary to control the quality of the adhesion of the metal film to the polymer substrate. It is known that temperature significantly affects the strength of polymers [3]; therefore, it is very important to control this variable during the deposition of the metal film [4]. Thus, the effectiveness of the adhesion interface is an important function of the bonding system, since it ensures that the physico-chemical mechanisms will operate and that no anchoring problems, such as peeling, formation of micro-cracks or wear of the thin film, will occur.
Adhesion mechanisms should ensure the role of the composite material, and they define the most suitable metal-polymer material pair. This choice results from a trade-off analysis between the energy spent and the required physical properties. The effectiveness of adhesion can be determined qualitatively and quantitatively. When one refers to adhesion quantitatively, the concept is related to the mechanical property requirements needed to resist the normal and shear stresses generated by the loads. Thus, it is possible to define geometric values and select materials at the design stage. There are several studies in the literature on bonding polymers. In these studies, the adhesion interface conditions are modified in order to evaluate the influence on the joint strength [5][6][7][8][9]. The surface-cleaning technique of ion spraying on the workpiece was reported by [10], who observed the influence of the pre-treatment applied to titanium films on polymeric substrates; the results demonstrated that changing the polymer morphology improved the adhesion between the materials. Other techniques are also used to activate the bonding surface, such as acid etching or plasma immersion [11]. All methods share the common goal of changing the adhesion strength by improving the contact interface. Another important limiting factor is low surface energy, as in polypropylene, which hinders chemical bonding at the film-substrate interface [12].

SURFACE TESTS
Thermoplastic polymer specimens, injection-molded to the dimensions of bending test bodies [13], are usually cut with a band saw to dimensions of 3.2 x 12.5 x 25.0 mm. The roughness of the injected samples is similar to that of the injection mold. The metallization of the polymer is performed by evaporation of copper in a vacuum chamber, where the samples are protected from impurities and receive only the electrolytic copper vapor until the desired layer thickness is reached (approximately 0.5 µm in this paper, guaranteed by the applied manufacturing process).
The preparation of the polymeric substrates coated with a metallic thin film is the first step of the process; after that, detachment tests are performed on thin copper films deposited on two polymers: polyamide 6 and polypropylene. The adhesion is evaluated by measuring the normal load in a testing machine with a load cell covering loads from 1 N to 300 N, depending on the adhesion strength range. The test follows the standard recommendations [14], with a rated load speed of 2 mm/min, at 23 °C room temperature, 50% relative humidity and 86 to 106 kPa pressure.
The test results are the maximum force required to tear off the metal film and the area over which it is bonded; therefore, from the normal force and the adhesion area, the normal stress of the adhesion strength is obtained. Figure 1 a) shows the experimental test rig, where a rod is bonded to the copper surface; Figure 1 b) shows the rod pulled from the bonded area.
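The conversion from the measured pull-off force and circular bonded area to the adhesion stress can be sketched as follows; the numerical values used in the example are illustrative, not the paper's data:

```python
import math

def adhesion_stress(force_n: float, diameter_mm: float) -> float:
    """Normal adhesion stress (MPa) from the pull-off force (N)
    and the diameter (mm) of the circular bonded spot."""
    area_mm2 = math.pi * diameter_mm ** 2 / 4.0  # circular bonded area
    return force_n / area_mm2                    # N/mm^2 equals MPa

# Illustrative values only: 1 N pull-off force on a 2.71 mm diameter spot
stress = adhesion_stress(1.0, 2.71)
```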

FIRST ORDER RELIABILITY METHOD (FORM)
According to [15,16], reliability analysis has been applied to many engineering fields. Reliability is the complement of the failure probability, i.e., the likelihood of survival (of a specific event) of a system. A mathematical expression relating the failure probability (violation of a specific constraint) to the random variables can be stated as:

P_f = P[g(X_1, …, X_n) ≤ 0]    (1)

where g is the limit state function (LSF) that defines the constraint to be fulfilled and X_i, i = 1, …, n, are the variables that affect that constraint [17]. Some of these variables may present random behavior. g(.) ≤ 0 means that the system is in the failure domain and g(.) > 0 means that it is in the safety domain. The probability of failure can be evaluated using the joint probability density function f_X(X_1, …, X_n):

P_f = ∫_D f_X(x_1, …, x_n) dx_1 … dx_n = Φ(−β)    (2)

where D is the failure domain g(.) ≤ 0, Φ is the standard Gaussian cumulative distribution function and β is an index related to the system's safety.
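A crude but general way to approximate the failure-probability integral P_f is Monte Carlo sampling of the random variables; a minimal sketch, where the g function and the distribution parameters are illustrative:

```python
import random

def monte_carlo_pf(g, sample, n=100_000, seed=0):
    """Estimate P_f = P[g(X) <= 0] by sampling the random vector X.
    g: limit state function of a tuple x; sample: draws one realization."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(sample(rng)) <= 0)
    return failures / n

# Illustrative resistance-load margin: R ~ N(10, 1), S ~ N(5, 1)
pf = monte_carlo_pf(lambda x: x[0] - x[1],
                    lambda rng: (rng.gauss(10, 1), rng.gauss(5, 1)))
```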
Let σ_i be the mechanical stress measured in a loaded component i, and assume a failure situation in which this value exceeds an imposed material limit value σ_lim. The limit state function can then be written as:

g(X) = σ_lim − σ_i    (3)

The solution of equation (2) is difficult since, most of the time, several random variables are involved; closed-form solutions exist only for some particular cases. Moreover, the statistics of f_X(X) are not known a priori, and the number of samples may not be enough to ensure confidence in the attributed distributions. A very simple but robust way to estimate the reliability index β (which is related to the probability of survival) is to use the first and second moments (mean and variance) of the probability density function of the limit state function g(X_1, …, X_n). When the limit state function g(X) is linear and the random variables are normally distributed and uncorrelated, the reliability index β is given by (see Figure 2):

β = µ_g / σ_g    (4)

where µ_g and σ_g represent the mean value and the standard deviation (square root of the variance) of the LSF g(X), respectively. When the safety margins are nonlinear, approximate values of µ_g and σ_g are obtained by linearizing g(X) with a Taylor expansion up to the linear terms; the point where the linearization is performed affects the µ_g and σ_g values.
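For the linear, uncorrelated Gaussian case above, the reliability index can be computed in closed form; a minimal sketch for a margin g = R − S (means and standard deviations below are illustrative):

```python
import math
from statistics import NormalDist

def beta_linear(mu_r, sd_r, mu_s, sd_s):
    """Reliability index for g = R - S with independent normal R and S:
    mu_g = mu_r - mu_s, sd_g = sqrt(sd_r^2 + sd_s^2), beta = mu_g / sd_g."""
    mu_g = mu_r - mu_s
    sd_g = math.hypot(sd_r, sd_s)
    return mu_g / sd_g

beta = beta_linear(10.0, 1.0, 5.0, 1.0)  # illustrative values
pf = NormalDist().cdf(-beta)             # P_f = Phi(-beta)
```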
A method to obtain the reliability index β that is independent of the limit state function formulation, known as AFOSM (Advanced First Order Second Moment), was proposed by [18]. Uncorrelated random variables X_i are transformed into normalized ones U_i by:

U_i = Φ⁻¹[F_Xi(x_i)]    (5)

where F_Xi(x_i) and Φ⁻¹(.) are the cumulative distribution function of the random variable X_i and the inverse of the Gaussian cumulative function, respectively. In this way, the limit state function in the actual space X is transformed to the uncorrelated normalized space U:

g(X_1, …, X_n) → h(U_1, …, U_n)    (6)

The linearization of the limit state function is performed at the point U*, which presents the shortest distance to the origin of the uncorrelated space U while satisfying h(U) = 0. The point U* is called the design point, and the reliability index β is the distance from the origin to this point:

β = ||U*|| = min { ||U|| : h(U) = 0 }    (7)
Hasofer-Lind-Rackwitz-Fiessler (HL-RF) Algorithm
In order to solve equation (7), the recurrent algorithm proposed by [19] is used. It is described by the following pseudocode:
Step 1: Set the limit state function for the problem, g(X) = 0.
Step 2: Assume initial values for the design point in the actual space, X* = (X_1, …, X_n)ᵀ, and evaluate the corresponding value of the limit state function g(X) (for instance, taking the mean values of the random variables as the initial design point).
Step 3: Evaluate the equivalent Gaussian mean value µ_Xi and standard deviation σ_Xi for each random variable.
Step 4: Transform the random variables from the actual space X to the normal uncorrelated space U; the design point values are U_i* = (X_i* − µ_Xi)/σ_Xi.
Step 5: Evaluate the sensitivities ∂g(X)/∂X_i at the design point X*.
Step 6: Evaluate the partial derivatives ∂h(U)/∂U_i in the normal uncorrelated space using the chain rule, ∂h(U)/∂U_i = σ_Xi ∂g(X)/∂X_i.
Step 7: Evaluate the new design point U* in the uncorrelated space using the recurrence U* = [∇h(U)ᵀU − h(U)] ∇h(U)/||∇h(U)||².
Step 8: Evaluate the distance from the origin to this new point and estimate the new reliability index, β = ||U*||.
Step 9: Verify the convergence of the β values along the iterations to a predefined tolerance.
Step 10: Evaluate the random variables at the new design point using X_i* = µ_Xi + σ_Xi U_i*.
Step 11: Evaluate g(X) for the new random variables and check a convergence criterion, for instance ∆g(X) < tolerance and ∆X < tolerance.
Step 12: If both criteria are met, stop iterating; otherwise repeat steps 3 to 11.
This algorithm assumes all the random variables to be uncorrelated in the original space. If correlation exists between random variables, a Cholesky decomposition of the covariance matrix can be used to transform the correlated variables into uncorrelated ones, and the previous algorithm remains valid [20][21].
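The steps above can be sketched for the special case of independent Gaussian variables, where the transformation to the U space reduces to u_i = (x_i − µ_i)/σ_i. The function names and the finite-difference gradient below are simplifications for illustration, not the paper's implementation:

```python
import math

def hlrf(g, mu, sd, tol=1e-8, max_iter=100):
    """HL-RF reliability index for independent normal variables.
    g: limit state function of a list x; mu, sd: means and std devs."""
    n = len(mu)
    u = [0.0] * n  # start at the mean (u = 0)
    beta = 0.0
    for _ in range(max_iter):
        x = [mu[i] + sd[i] * u[i] for i in range(n)]
        gx = g(x)
        # finite-difference gradient in u-space: dh/du_i = sd_i * dg/dx_i
        eps = 1e-6
        grad = []
        for i in range(n):
            xp = list(x)
            xp[i] += eps * sd[i]
            grad.append((g(xp) - gx) / eps)
        norm2 = sum(gi * gi for gi in grad)
        dot = sum(grad[i] * u[i] for i in range(n))
        # HL-RF recurrence: u_new = (grad.u - g) * grad / |grad|^2
        u_new = [(dot - gx) * grad[i] / norm2 for i in range(n)]
        beta_new = math.sqrt(sum(ui * ui for ui in u_new))
        if abs(beta_new - beta) < tol:
            return beta_new
        u, beta = u_new, beta_new
    return beta

# Linear margin g = R - S: the exact index is (10 - 5)/sqrt(1 + 1)
beta = hlrf(lambda x: x[0] - x[1], [10.0, 5.0], [1.0, 1.0])
```

For a linear limit state the recurrence converges in one step to the exact index; for nonlinear g it iterates toward the design point.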

RBDO (RELIABILITY BASED DESIGN OPTIMIZATION)
In reliability-based design optimization (RBDO), the objective function to be optimized must satisfy predefined probabilistic constraints, set as initial constraints of the problem. Failure analyses are performed along the optimization process in order to verify the probabilistic constraints and guide the optimization towards the target reliability level.
The simplest formulation for RBDO implements the algorithm as a double loop, where the optimization is split into two stages: (a) in the first stage, the objective function is optimized with respect to the design variables; (b) in the second stage, the optimization is performed over the random variables, starting from the design variables of the outer loop. More details can be found in [22].
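The double loop can be sketched as an outer search over the design variable with an inner reliability evaluation for each candidate. In the sketch below, bisection stands in for the paper's GA outer loop, a closed-form β for the linear Gaussian margin stands in for the FORM inner loop, and the parameter values are illustrative:

```python
import math
from statistics import NormalDist

def inner_beta(d, mu_f=1.0, cv_f=0.10, mu_s=0.5413, cv_s=0.344):
    """Inner loop: reliability index of the margin g = s*A - f for a
    bonded spot of diameter d (mm), with independent normal load f and
    strength s (illustrative parameters, closed form)."""
    area = math.pi * d * d / 4.0
    mu_g = mu_s * area - mu_f
    sd_g = math.hypot(cv_s * mu_s * area, cv_f * mu_f)
    return mu_g / sd_g

def outer_minimize_area(beta_target, lo=0.1, hi=300.0, iters=60):
    """Outer loop: smallest diameter meeting the reliability target.
    Bisection is valid here because beta grows monotonically with area."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inner_beta(mid) >= beta_target:
            hi = mid   # feasible: try a smaller bonded spot
        else:
            lo = mid   # infeasible: enlarge it
    return math.pi * hi * hi / 4.0

beta_t = NormalDist().inv_cdf(0.98)  # target index for 98% reliability
area = outer_minimize_area(beta_t)   # minimal feasible bonded area (mm^2)
```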
A deterministic minimization model can be generally defined as [23]:

minimize f(v_p, p)

subject to:

g_i(v_p, p) = 0, i = 1, …, ne
g_i(v_p, p) ≤ 0, i = ne + 1, …, nr
v_dl ≤ v_p ≤ v_du

where v_p is the vector of design variables, p is the vector of fixed parameters of the optimization problem, g_i(.) is the i-th model constraint, with ne equality constraints and nr − ne inequality constraints, and v_du and v_dl are the vectors containing the upper and lower bounds of the design variables (Figure 3).
However, a deterministic optimization considers neither the uncertainties in the design variables nor those in the fixed design parameters. In RBDO, probabilistic constraints are added to the deterministic ones. Since the reliability index is defined in terms of the cumulative probability function of the limit state function (and vice versa), the following holds:

P_f = Φ(−β) ⇔ β = −Φ⁻¹(P_f)

In this paper, the reliability constraint is formulated as:

g_j^r(.) = 1 − β/β_t ≤ 0

where g_j^r(.) is a dimensionless ratio between the evaluated reliability index β and the target reliability index β_t. This means that if the reliability index β evaluated during the optimization is larger than the target index β_t, then g_j^r(.) ≤ 0 and the probabilistic criterion is met; otherwise, the objective function is penalized. Figure 3 shows the main differences between deterministic and reliability-based design optimization, based on a geometric interpretation. The optimization can be implemented by two different approaches: RIA (Reliability Index Approach) or PMA (Performance Measure Approach).

Reliability Index Approach, RIA
In this approach, the reliability constraint is treated as an extra constraint formulated in the uncorrelated design space through the reliability index β:

Find p in order to minimize f(u, p)
subject to the deterministic constraints g_i(u, p) ≤ 0, i = 1, …, m, and h_j(u, p) = 0, j = 1, …, n,
and the probabilistic constraints f_k(β) ≤ 0, k = 1, …, p, and f_l(β) = 0, l = 1, …, q,

where u is the vector of normalized uncorrelated random variables, g_i and h_j are the m inequality and n equality deterministic constraints, and f_k and f_l are the p inequality and q equality probabilistic constraints. During the iterations, the reliability index varies and can assume values other than β_t, but it is expected to converge to the target value as the algorithm iterates.

Performance Measure Approach, PMA
This formulation is the inverse of the RIA analysis:

Find p to minimize f(u, p)

with the constraints set in terms of the probability of failure instead of the reliability index. During the iterations, the reliability index is kept fixed at β_t.
The advantages and disadvantages of this formulation compared to the previous one can be found in [24].

GENETIC ALGORITHM
Genetic Algorithms (GA) currently represent a powerful search tool for solving complex and nonlinear problems. The method searches for minima/maxima of functions and is based on Darwin's theory of evolution, which assumes that individuals evolve, acquiring traits that make them more likely to survive and to pass these characteristics to their offspring. The GA begins by generating a random population of chromosomes. These structures are evaluated and associated with a probability of reproduction, so that the individuals more likely to survive correspond to the chromosomes with better fitness values. The fitness function is typically defined with respect to the current population and undergoes some modifications in order to meet the needs of the selection process [26].
The GA simulates the evolutionary process numerically, in a simplified way. The parameters of a given problem are encoded into a vector of bits. As in genetics, chromosomes consist of genes and represent candidate solutions to the problem. In its simplest form, the GA encodes the vector of design variables into "bits", and the vector of "bits" is decoded back into the respective real parameter values. A fitness function of the decoded design variables is used as the basis for comparisons between the individuals of a population.
A simple genetic algorithm consists of three basic operators, namely selection, reproduction (or crossover) and mutation. Details about these genetic operators can be found in [27].
The algorithm starts with a population of individuals, each representing a possible solution to the problem. As in nature, the individuals undergo these three basic operators and evolve over generations in which Darwin's survival of the fittest prevails; as a result, a population of better-adapted individuals is the natural outcome of the process.
In terms of reproduction, the evaluation of the fitness function indicates which individuals are more likely to transfer genetic material to the offspring. In the genetic operations, pairs of genes of the individuals are exchanged and, as in nature, this exchange can occur in various forms, commonly called crossover or recombination.
Some of the advantages of GA compared to conventional techniques can be summarized as follows:
- The GA searches using an encoded form of the parameters, not the parameters directly;
- The GA works with a population of solutions, representing a diversity of possible solutions to the problem, and not just one solution at a time;
- Most optimization algorithms require the evaluation of derivatives of the objective function, whereas the GA only requires the value of the fitness function;
- Only probabilistic rules and the rule of natural selection are used, which allows the algorithm to escape local optima and makes it more likely to find the global optimum.
When working with the genetic algorithm in binary form, each real parameter to be optimized, b_i, is translated into a binary code according to:

s = bin_n[(b_i − P_min(k)) (2ⁿ − 1) / (P_max(k) − P_min(k))]

where bin_n indicates the binary translation to a string s of n "bits", n is the number of bits, and P_min(k) and P_max(k) are the minimum and maximum values allowed for each design variable k. In order to transform the binary codes back into real values, the inverse mapping is used:

b_i = P_min(k) + bin_n⁻¹(s) (P_max(k) − P_min(k)) / (2ⁿ − 1)

where bin_n⁻¹(s) is the translation of the binary coded value back to the respective integer. It is noteworthy that this formulation implies that the mapping has a resolution of (P_max(k) − P_min(k))/(2ⁿ − 1). This restricts the search space of the real parameters to discrete values, which could lead to local maxima/minima. This can be circumvented using real-coded genetic algorithms, which assume real values for each variable. The main differences appear in the crossover operator; several crossover methods exist for real-coded genetic algorithms, such as flat crossover, simple crossover, arithmetical crossover, Wright's crossover, linear BGA crossover, etc.
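The encode/decode mapping above can be sketched as follows (the function names are ours; the step term matches the (P_max − P_min)/(2ⁿ − 1) resolution discussed above):

```python
def encode(value, pmin, pmax, n_bits):
    """Map a real parameter in [pmin, pmax] to an n_bits binary string."""
    step = (pmax - pmin) / (2 ** n_bits - 1)
    k = round((value - pmin) / step)  # nearest representable level
    return format(k, f'0{n_bits}b')

def decode(bits, pmin, pmax):
    """Map a binary string back to its (quantized) real value."""
    step = (pmax - pmin) / (2 ** len(bits) - 1)
    return pmin + int(bits, 2) * step

s = encode(0.5, 0.0, 1.0, 8)  # quantized to steps of 1/255
x = decode(s, 0.0, 1.0)
```

A round trip recovers the parameter only up to the quantization step, which is exactly the discretization that real-coded GAs avoid.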
Figure 4 shows the main steps followed by the real-coded genetic algorithm to maximize functions.

RESULTS
This study aimed to minimize the bonded area A, considering predefined system reliability constraints (reliability levels of 90%, 93%, 95% and 98%). Experimental detachment tests for loads F from 1 to 300 N were performed on thin copper films on a polypropylene substrate. The load is considered a random variable following a normal distribution with a coefficient of variation (CV) of 10%. The design variable is the diameter D of the bonded area, which can assume values from 0.1 mm to 300 mm. The limit state function is defined as g = σA − F, and the function to be minimized is the bonded area, F_obj = A = πD²/4.
The uncertainties in the probabilistic model were obtained from the available experimental data. The material uniaxial strength follows a Gaussian distribution with mean value σ_lim = 0.5413 MPa and a CV of 34.4%. The GA prevents the diameter D from assuming values beyond the upper and lower limits; for the reliability constraint, however, the RIA approach is used and a penalty formulation is applied to the objective function.
The value of the penalty increases with the violation of the reliability constraint. Specifically, the penalized objective is F_obj + C·H, where H is the jump function (zero if there is no violation; otherwise it assumes the actual value of the existing violation) and C is a penalization constant. The GA proceeds as follows: initialize the time t = 0, the population size m, the probability of mutation P_m, the probability of recombination P_c, the number of chromosomes nc and the allowable limits for each chromosome, P_max(nc) and P_min(nc); generate the initial population B_0 = (b_{1,0}, b_{2,0}, …, b_{m,0}); then iterate the generation loop while the stop condition is not satisfied. In this study, a maximum of 50 generations with 100 individuals per generation was used. Table 1 shows the values obtained for the objective function (area) for each reliability level constraint and applied load, representing the area that ensures the target reliability level.
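The penalized objective described above can be sketched as follows; the closed-form β for the linear Gaussian margin, the constant C and the parameter values are illustrative stand-ins for the paper's FORM evaluation inside the GA:

```python
import math
from statistics import NormalDist

def penalized_objective(d, beta_target, C=1e3,
                        mu_f=1.0, cv_f=0.10, mu_s=0.5413, cv_s=0.344):
    """GA fitness: bonded area plus C * H(violation).
    beta for the margin g = s*A - f is closed form here
    (independent normals); C and the data values are illustrative."""
    area = math.pi * d * d / 4.0
    mu_g = mu_s * area - mu_f
    sd_g = math.hypot(cv_s * mu_s * area, cv_f * mu_f)
    beta = mu_g / sd_g
    violation = max(0.0, 1.0 - beta / beta_target)  # g_r = 1 - beta/beta_t
    return area + C * violation  # H is zero when the constraint is met

beta_t = NormalDist().inv_cdf(0.98)       # 98% target reliability
f_ok = penalized_objective(10.0, beta_t)  # feasible diameter: area only
f_bad = penalized_objective(1.0, beta_t)  # infeasible: heavily penalized
```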
Figure 5 shows curves for the required (optimized) areas for each reliability level, considering different load levels.
Based on this graph, the spacing of the curves shows that, even for small differences in the reliability level, the required area increases considerably as the applied load increases.
The interpretation of the graph suggests that, for the copper film deposition process and a 98% reliability level of not detaching, there will be no pull-out of the film under normal traction if the optimized areas are used. Assuming 98% reliability of not detaching, an applied load of 1 N and a bonded area of 5.76 mm², the effective adhesion stress is σ_lim,d = 1/5.76 = 0.17 MPa. In a traditional design, a reduction factor of γ = 3.18 would have to be applied to the mean experimental adhesion stress in order to meet the corresponding reliability level (σ_lim,d = σ_lim/γ = 0.5413/3.18 = 0.17 MPa). A similar argument applies to the other reliability levels. It should also be noted that these reduction factors take different values depending on the reliability level and the applied load, which makes a single reduction factor value unsuitable for all reliability levels or loads.

Figure 1. a) Rod bonded on the copper surface; b) rod pulled from the bonded area.

Figure 2. Geometric definition of the reliability index β.

Figure 3. Geometric interpretation of reliability-based design optimization.

Figure 4. Pseudocode for the genetic algorithm with real chromosome encoding.

Figure 5. Obtained bonded-area values for different loads and reliability level constraints.

Table 1. Results of the optimized area for each reliability level.