Design of Multi-Objective Optimization Problems
Page: 1-16 (16)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010003
PDF Price: $15
Abstract
Multi-objective optimization (MOO) problems belong to programming approaches in which the decision-maker is faced with a multiplicity of conflicting objectives. This situation occurs in real-world problems involving engineering design, chemical processes, financial management, etc. Achieving an optimum for one objective function then requires compromises on one or more of the other objectives. In such problems, we obtain a set of equally good solutions rather than a single one. The decision variables or parameters of MOO problems can be continuous, 0-1 binary, or mixed-integer variables. The feasible region of a MOO problem is an n-dimensional space satisfying bounds on the variables, equalities, and inequalities. Equality constraints arise from mass, energy, and momentum balances and can be algebraic or differential equations. Inequality constraints come from requirements of the system, such as a reactor temperature that must not exceed a particular value, failure of the material, and other technical features. We begin by presenting a short history of global optimization. The development of multi-objective optimization techniques is due to the combined effects of new approaches and challenging applications. The new techniques are evolutionary algorithms inspired by Nature, and various real-world applications remain difficult to solve. A classification of MOO methods is based on two criteria: the number of Pareto solutions and the decision-maker's preferences. Examples illustrate the resolution of continuous and combinatorial problems from the literature (e.g., the minimum spanning tree and the bicriteria knapsack problem).
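A bicriteria knapsack problem like the one this chapter mentions can be sketched by brute force at small size. The code below uses assumed illustrative data (weights, two profit vectors, and a capacity that are not taken from the book): it enumerates every feasible item subset and keeps only the nondominated objective vectors.

```python
from itertools import combinations

# Assumed toy bicriteria 0-1 knapsack data (illustrative, not from the book):
# maximize two profit objectives under a single weight capacity.
weights = [3, 4, 2, 5]
profit1 = [5, 6, 3, 8]
profit2 = [4, 2, 7, 3]
capacity = 8

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Enumerate every feasible subset and record its objective vector.
feasible = []
for r in range(len(weights) + 1):
    for subset in combinations(range(len(weights)), r):
        if sum(weights[i] for i in subset) <= capacity:
            point = (sum(profit1[i] for i in subset),
                     sum(profit2[i] for i in subset))
            feasible.append((subset, point))

# A solution is Pareto-optimal if no feasible point dominates it.
pareto = [(s, p) for s, p in feasible
          if not any(dominates(q, p) for _, q in feasible)]
for subset, point in pareto:
    print(subset, point)
```

Note how the result is a set of equally good trade-offs, not a single optimum: no Pareto solution improves one profit without sacrificing the other.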
Extended Multi-Objective Optimization Problems
Page: 17-45 (29)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010004
Abstract
Another important feature of mathematical programming is the existence of uncertainties. These uncertainties may be due to a lack of information, to measurement errors, or to evaluations in a fuzzy environment. All these features condition the formulation of a MOO problem, which may be a standard continuous or combinatorial programming problem, a robust optimization problem, or a fuzzy optimization problem. Real-life optimization problems involve more complex situations with two types of difficulties, namely nonlinearities and uncertainties. Bimatrix games with associated quadratic programming problems and geometric programming illustrate the first type of real-world problems. The second type, programming models under uncertainty, includes robust programming and fuzzy programming. Robust programming has been developed to increase the quality and reliability of engineering processes. Fuzzy programming problems refer to situations where decision-makers are faced with imprecise or uncertain data. The fuzzy environment includes uncertain preferences, fuzzy objectives, fuzzy constraints, and fuzzy data. In such problems, decision-makers maximize their degree of satisfaction in a specified decision set. The extension to multiple objectives is illustrated with numerical examples.
Pareto Optimality
Page: 46-66 (21)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010005
Abstract
Pareto optimality is based on the concept of dominance, whose definitions and properties are proposed. We distinguish weakly and strongly Pareto-optimal sets. The dominance binary relation is a strict partial order. This approach allows a comparison between feasible solutions in both the objective space and the decision space. The nondominated solution sets yield Pareto fronts. Different methods are proposed to find good approximations of the Pareto sets when a Pareto front cannot be determined analytically. Numerous examples from the literature show connected and disconnected Pareto-optimal fronts in both decision space and fitness space. In particular, we can observe that objectives are conflicting, that the shape of a Pareto front may be convex or nonconvex, connected or disconnected, and that Pareto fronts change if we maximize instead of minimize the objectives. Necessary and sufficient conditions for Pareto optimality in constrained multi-objective optimization problems are also outlined.
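One common way to approximate a Pareto front when it cannot be determined analytically is to sample the decision space and filter out dominated points. The sketch below does this for Schaffer's classic biobjective test problem (used here as an assumed illustration): minimize f1(x) = x^2 and f2(x) = (x - 2)^2, whose exact Pareto-optimal set is x in [0, 2].

```python
# Schaffer's biobjective test problem: minimize f1(x)=x^2 and f2(x)=(x-2)^2.
def f(x):
    return (x * x, (x - 2) ** 2)

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

# Sample the decision space [-2, 4] and keep the nondominated points.
points = [(i * 0.1, f(i * 0.1)) for i in range(-20, 41)]
front = [(x, y) for x, y in points
         if not any(dominates(y2, y) for _, y2 in points)]
print(f"front spans x in [{min(x for x, _ in front):.1f}, "
      f"{max(x for x, _ in front):.1f}]")
```

The filtered sample recovers the analytic Pareto-optimal interval: every nondominated sample lies in [0, 2], where decreasing one objective necessarily increases the other.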
Founding Multi-Objective Optimization Techniques
Page: 67-108 (42)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010006
Abstract
Classical optimization methods solve vector optimization problems. These methods produce approximations of Pareto-optimal sets, from which we analyze their performances and limitations. The research initially extended existing methods. Zeleny's simplex algorithm illustrates this approach by extending the simplex method to multiple linear objective functions. The simplex method is an iterative procedure that finds an optimal solution to a linear single-objective programming problem; it is one of the numerous techniques proposed to solve linear SOO problems. The process uses a finite number of iteration steps. At the outset, the program is reformulated so that slack variables convert the inequality constraints into equalities; these slack variables form the initial basis. The process then moves from one extreme point of the feasible space to an adjacent one. Multi-objective simplex tableaus are augmented to solve linear MOO problems using similar principles and processes, with one row for each objective function. Numerical examples illustrate the whole computation procedure. The weighting objective method is another founding class of techniques for solving nonlinear MOO problems. The method consists of aggregating (or scalarizing) the objective functions, which are normalized before aggregation. The weighting can be a convex linear combination of objectives with different weights. These weights differ according to the method, as in the weighted sum method, the weighted metric method, and the weighted exponential method. Numerical examples illustrate each case.
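The weighted sum method can be sketched on an assumed toy problem (not taken from the book): minimize w1*f1 + w2*f2 with f1(x) = x^2 and f2(x) = (x - 2)^2. Setting the derivative 2*w1*x + 2*w2*(x - 2) to zero gives the closed-form scalarized minimizer x* = 2*w2/(w1 + w2), so sweeping the weights traces the Pareto front.

```python
# Weighted-sum scalarization of two convex objectives:
# minimize w1*x^2 + w2*(x-2)^2, with closed-form minimizer x* = 2*w2/(w1+w2).
def weighted_sum_minimizer(w1, w2):
    return 2.0 * w2 / (w1 + w2)

pareto_points = []
for k in range(11):
    w1 = k / 10.0          # sweep convex weight combinations
    w2 = 1.0 - w1
    x = weighted_sum_minimizer(w1, w2)
    pareto_points.append((x, x * x, (x - 2) ** 2))

for x, f1, f2 in pareto_points:
    print(f"x={x:.2f}  f1={f1:.3f}  f2={f2:.3f}")
```

As the weight on f1 grows, the minimizer slides from x = 2 (best for f2) to x = 0 (best for f1). This sweep recovers the whole front here because the problem is convex; on nonconvex fronts the weighted sum can miss nondominated points.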
Preference-Based Methods of MOO Problems
Page: 109-137 (29)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010007
Abstract
Preference-based methods are classical optimization techniques that integrate the decision-maker's preferences at some stage of the resolution process. These preferences are required before the resolution process begins. In this class of methods, we find the value function method, the ε-constraint method, goal programming, and the generalized center method. The foundation of the value function method is the theory of individual choices, in which the DM's preferences rest on rationality assumptions. The DM compares pairs of alternatives (i.e., pairs of objective function values) and ranks them according to preference relations. The different types of preferences are Leontief, Cobb-Douglas, or CES preferences. The ε-constraint method requires the selection of one objective while all other objectives are constrained to some value. A payoff table is constructed by solving, for each chosen objective, a SOO problem with additional constraints. Goal programming is another preference-based method, in which the DM decides a particular goal for each objective; the programming problem is to minimize the total deviation of the solutions from the targets using a specified distance. In the generalized center method, level constraints on the objective function values restrict the feasible region in successive steps, and less performant half-space level sets are discarded in the process. The method can be described as a sequence of unconstrained minimization problems using a distance function. Numerical examples from the literature illustrate the different classical methods.
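The ε-constraint method can be illustrated on the same kind of assumed toy problem (not from the book): keep f1(x) = x^2 as the objective and constrain f2(x) = (x - 2)^2 <= eps. The constraint restricts x to [2 - sqrt(eps), 2 + sqrt(eps)], so the constrained minimizer of x^2 has the closed form x* = max(0, 2 - sqrt(eps)); sweeping eps traces the Pareto front.

```python
import math

# epsilon-constraint method on a toy biobjective problem:
# minimize f1(x) = x^2 subject to f2(x) = (x - 2)^2 <= eps.
def eps_constraint_solution(eps):
    return max(0.0, 2.0 - math.sqrt(eps))

for eps in [4.0, 2.0, 1.0, 0.25, 0.0]:
    x = eps_constraint_solution(eps)
    print(f"eps={eps:5.2f}: x*={x:.3f}  f1={x * x:.3f}  f2={(x - 2) ** 2:.3f}")
```

Tightening eps expresses a stronger preference for the constrained objective; unlike the weighted sum method, this scheme can also reach nondominated points on nonconvex fronts.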
Mixed-Integer Nonlinear Programming
Page: 138-172 (35)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010008
Abstract
Real-world problems may require large-scale systems with particular features. Thus, water resource systems (WRS) can be described by large-size multi-objective optimization systems. The main characteristics of such systems are notably their large scale, with mixed-integer decision variables, and the multiplicity of objectives. Adapted methods for solving such systems are required; the generalized Benders decomposition (GBD) and branch-and-bound techniques are efficient methods of this kind. Suppose a MOO problem for which we have an equivalent parametric MINLP (pMINLP). A decomposition-based algorithm describes an iterative process in which subproblems interact with a master problem. For a SOO programming problem, a GBD algorithm generates an upper bound and a lower bound on the solution at each iteration step: a primal NLP subproblem provides information about the upper bounds and Lagrange multipliers, and the master ILP problem then calculates the new set of lower bounds. In this study, the Benders decomposition method is used for solving single-objective and multi-objective MINLP problems.
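A full GBD implementation is lengthy, so the bounding idea is sketched here with the chapter's other tool, branch-and-bound, on a tiny 0-1 knapsack with assumed illustrative data (not from the book): the LP relaxation (fractional knapsack) supplies an upper bound at each node, and branching fixes one item in or out, mirroring how upper and lower bounds bracket the optimum at each step.

```python
# Branch-and-bound for a 0-1 knapsack (maximization); assumed toy data.
values = [10, 13, 8, 11]
weights = [4, 6, 3, 5]
capacity = 10

def lp_bound(included, start):
    """Upper bound at a node: items before `start` are decided (those in
    `included` are taken); remaining items are added greedily, fractionally."""
    weight = sum(weights[i] for i in included)
    if weight > capacity:
        return -1.0  # infeasible node
    bound = float(sum(values[i] for i in included))
    cap = capacity - weight
    for i in sorted(range(start, len(values)),
                    key=lambda i: values[i] / weights[i], reverse=True):
        take = min(1.0, cap / weights[i])
        bound += take * values[i]
        cap -= take * weights[i]
        if cap <= 0:
            break
    return bound

best = {"value": 0, "items": ()}

def branch(included, i):
    if lp_bound(included, i) <= best["value"]:
        return  # prune: this node cannot beat the incumbent lower bound
    if i == len(values):
        best["value"] = sum(values[j] for j in included)
        best["items"] = tuple(included)
        return
    branch(included + [i], i + 1)  # fix item i in
    branch(included, i + 1)        # fix item i out

branch([], 0)
print(best)
```

The incumbent plays the role of the lower bound and the relaxation that of the upper bound; the search stops exploring a node as soon as the two cross, just as GBD terminates when its bounds meet.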
Nested Optimization of Hierarchical Systems
Page: 173-203 (31)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010009
Abstract
What does decision-making involve in hierarchical organizations? How can the overall objectives at the top level of a firm be reconciled with the productivity and marketing objectives at a lower level? A hierarchy of decision-makers is a typical situation for local governments and planning agencies. The decision variables are partitioned among an upper level and different lower levels. The programming problem looks like a set of nested programming problems with agents belonging to hierarchical levels. The problem is similar to a static noncooperative two-person game in the sense of Stackelberg. Within each level, the agents play an n-person non-zero-sum game; between the levels, the agents play an n-person Stackelberg game. Let the problem correspond to a bilevel programming (BLP) problem. The two players optimize their payoffs by controlling their decision variables, and both have perfect information about the objectives and strategies of the opponent. The leader plays first but must anticipate all the possible reactions, and the followers react optimally. The algorithms for finding the Nash-Stackelberg solution fall into four classes of solution methods: reformulation using optimality conditions, penalty methods, metaheuristics such as SA or GA algorithms, and enumeration of extreme points. Thus, under convexity and regularity conditions, the initial problem can be reformulated as a single nonlinear optimization problem by using the KKT optimality conditions. The Kth-best algorithm computes the global solution of a BLP problem by enumerating the extreme points of the constrained region. Several problems illustrate the process, such as Bard's BLP problem and two other challenges with one and two followers.
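The nested leader-follower structure can be sketched with assumed quadratic payoffs (an illustration, not a problem from the book): the leader minimizes F(x, y(x)) while anticipating the follower's best response y(x) = argmin_y f(x, y). With f(x, y) = (y - x)^2 the response is simply y(x) = x, and the leader's problem reduces to a one-dimensional search.

```python
# A tiny Stackelberg (bilevel) sketch with assumed quadratic payoffs.
def follower_response(x):
    # Follower solves min_y (y - x)^2, so the best response is y = x.
    return x

def leader_objective(x):
    # Leader minimizes F(x, y) = (x - 1)^2 + (y - 2)^2, anticipating y(x).
    y = follower_response(x)
    return (x - 1) ** 2 + (y - 2) ** 2

# Grid search over the leader's decision (a sketch, not a general BLP solver).
xs = [i / 100.0 for i in range(-200, 401)]
x_star = min(xs, key=leader_objective)
print(x_star, follower_response(x_star), leader_objective(x_star))
```

Substituting y = x makes the leader's objective (x - 1)^2 + (x - 2)^2, minimized at x = 1.5, which the grid recovers; general BLP methods (KKT reformulation, penalties, the Kth-best enumeration) exist precisely because the follower's response is rarely available in closed form.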
Fuzzy Logic Programming
Page: 204-237 (34)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010010
Abstract
Besides the usual "true" and "false" statements, there is also a place for "vague" or "fuzzy" statements in real-world decision-making problems. The linguistic statements may be "possible," "almost sure," "hardly fulfilled," "approximately equal to," "considerably larger than," etc. The essential elements of fuzzy logic are presented first for single-objective fuzzy logic programming; next, we show the natural extension to multi-objective optimization problems. The symmetric method for SOO fuzzy problems consists of different steps, which include the determination of the membership functions, the fuzzy feasible set, and the fuzzy set of the optimal values. The problem is solved by using a maximin operator. The extension to multiple objectives is based on similar principles. Membership functions are associated with objectives and constraints. A fuzzy decision set can result from the Bellman-Zadeh principle, which provides an appropriate aggregation approach. Different fuzzy decision sets can be considered, depending on the chosen rule (i.e., the intersection rule, the convex rule, or the product rule). Using the Bellman-Zadeh criterion, the problem maximizes a satisfaction level subject to inequality constraints for the objectives and the inequality constraints of the problem. An example illustrates the full process for finding parametrized solutions.
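The symmetric maximin scheme can be sketched with assumed toy membership functions (illustrative, not from the book): a linear membership for a fuzzy goal and one for a fuzzy constraint are aggregated by the Bellman-Zadeh intersection (min) rule, and the overall satisfaction level is maximized by grid search.

```python
# Symmetric fuzzy optimization sketch: maximize min_i mu_i(x).
def mu_goal(x):
    # Satisfaction grows as the payoff 3x approaches the aspiration 12
    # (zero satisfaction below 6, full satisfaction at or above 12).
    return max(0.0, min(1.0, (3 * x - 6) / 6.0))

def mu_constraint(x):
    # Satisfaction falls as resource use 2x exceeds 4, vanishing at 10.
    return max(0.0, min(1.0, (10 - 2 * x) / 6.0))

def satisfaction(x):
    # Bellman-Zadeh intersection rule: the decision set's membership is
    # the minimum of the goal and constraint memberships.
    return min(mu_goal(x), mu_constraint(x))

xs = [i / 100.0 for i in range(0, 601)]
x_star = max(xs, key=satisfaction)
print(x_star, satisfaction(x_star))
```

The maximin optimum sits where the two memberships cross (here near x = 3.2 with satisfaction 0.6): pushing the goal membership higher would violate the fuzzy constraint by exactly as much, which is the balance the symmetric method seeks.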
Noncooperative Games in a Fuzzy Environment
Page: 238-260 (23)
Author: Andre A. Keller
DOI: 10.2174/9781681085685117010011
Abstract
Noncooperative games correspond to situations where two or more players are in conflict. Each player has mixed strategies, given payoffs, and an objective function. The data of the game may be either crisp numbers or fuzzy numbers. The game is equivalent to a minimax bilinear programming problem. A Nash equilibrium solution is achieved if the objectives of the two players are fulfilled simultaneously. Different techniques can be used to compute equilibrium solutions, such as the Mangasarian-Stone optimality conditions or the Lemke-Howson pivot algorithm. A single-objective bimatrix game is first explored in a crisp environment. These nonzero-sum games are such that the sum of payoffs differs from zero for some pair of pure strategies. Thereafter, a fuzzy context can be considered to handle possibly imprecise data and vague statements of the players. Equilibrium is defined w.r.t. the degree of attainment of fuzzy goals. Numerical examples are solved in both crisp and fuzzy environments. Multi-objective matrix games (zero-sum games) describe situations where the players have several objectives; such problems are solved through equivalent programming problems. These multi-objective games can also be placed in a fuzzy environment where the goals are fuzzy. Thereafter, more general multi-objective bimatrix games are developed in both crisp and fuzzy environments. Numerical examples illustrate different game situations.
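For a crisp 2x2 bimatrix game, a completely mixed Nash equilibrium can be computed directly from the indifference conditions (a sketch with assumed illustrative payoff matrices, not data from the book): each player mixes so that the opponent is indifferent between both pure strategies.

```python
# Mixed Nash equilibrium of a 2x2 bimatrix game via indifference conditions.
A = [[2, 0],   # player 1's payoff matrix (assumed toy data)
     [0, 1]]
B = [[1, 0],   # player 2's payoff matrix (assumed toy data)
     [0, 2]]

# Player 1 plays row 0 with probability p chosen so that player 2 is
# indifferent between columns: p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1].
p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])

# Player 2 plays column 0 with probability q making player 1 indifferent
# between rows: q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1].
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

print(f"player 1 mixes rows ({p:.3f}, {1 - p:.3f}); "
      f"player 2 mixes columns ({q:.3f}, {1 - q:.3f})")
```

At this profile neither player can improve by deviating, since every pure strategy of the opponent yields the same expected payoff; larger games need the equivalent programming formulations or the Lemke-Howson pivoting named in the abstract.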
Introduction
Multi-Objective Optimization in Theory and Practice takes a traditional two-part approach to solving multi-objective optimization (MOO) problems, namely the use of classical methods and of evolutionary algorithms. This first book is devoted to classical methods, including the extended simplex method by Zeleny and preference-based techniques. It covers three main topics through nine chapters. The first topic focuses on the design of MOO problems, their complexities (including nonlinearities and uncertainties), and optimality theory. The second topic introduces the founding solution methods, including the extension of the simplex method to linear MOO problems and the weighting objective methods. The third topic deals with particular structures of MOO problems, such as mixed-integer programming, hierarchical programming, fuzzy logic programming, and bimatrix games. Multi-Objective Optimization in Theory and Practice is a user-friendly book with detailed, illustrated calculations, examples, test functions, and small-size applications in Mathematica® (among other mathematical packages) and from the scholarly literature. It is an essential handbook for students and teachers involved in advanced optimization courses in engineering, information science, and mathematics degree programs.