Applications of Mathematical Programming for Automated Design and Operation of Chemical Processes
The mathematical programming approach to design, integration and operation problems, or more generally synthesis problems, consists of three major steps. The first is the development of a representation of alternatives from which the optimum solution is selected. The second is the formulation of a mathematical program that generally involves discrete and continuous variables for the selection of the configuration and operating levels, respectively. The third is the solution of the optimization model from which the optimal solution is determined. Significant advances have taken place with this methodology, which offers the possibility of developing automated tools to support the exploration of alternatives and optimization by design engineers.
Over the last decade there have been considerable advances in mathematical programming techniques. For instance, the solution of mixed-integer nonlinear programming problems and the rigorous global optimization of nonlinear programs have become a reality. Furthermore, there have been great advances in the capability of solving very large problems, particularly with linear and mixed-integer linear programming techniques. There has also recently been a trend towards new logic-based formulations that can facilitate the modeling and solution of these problems. Finally, great progress has been made in the availability of modeling systems that facilitate the formulation of optimization problems, as well as in the development of alternative solution strategies.
We first present an overview of methods for mixed-integer linear and nonlinear problems, and their more recent formulation as generalized disjunctive programming problems. We also give a brief review of methods for global optimization. We next discuss several ideas that have emerged for the development of superstructure representations, and models at various levels of abstraction, ranging from aggregated to detailed models.
Design and synthesis problems give rise to discrete/continuous optimization problems which, when represented in algebraic form, correspond to mixed-integer optimization problems of the following form:

    min  f(x, y)
    s.t. h(x, y) = 0
         g(x, y) <= 0                      (MIP)
         x ∈ X,  y ∈ {0, 1}^m
where f(x, y) is the objective function (e.g. cost), h(x, y) = 0 are the equations that describe the performance of the system (mass and heat balances, design equations), and g(x, y) <= 0 are inequalities that define the specifications or constraints for feasible choices. The variables x are continuous and generally correspond to state or design variables, while y are the discrete variables, which are generally restricted to 0-1 values that define the selection of an item or an action. Problem (MIP) corresponds to a mixed-integer nonlinear program (MINLP) when any of the functions involved are nonlinear. If all functions are linear, it corresponds to a mixed-integer linear program (MILP). If there are no 0-1 variables, (MIP) reduces to a nonlinear program (NLP) or a linear program (LP), depending on whether the functions are nonlinear or linear.
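As a concrete illustration, the hypothetical two-unit fixed-charge problem below (all data invented for illustration) shows the split between the discrete selection variables y and the continuous operating variables x. Since this toy problem has only two 0-1 variables and the inner continuous problem reduces to a one-dimensional LP with a closed-form solution, explicit enumeration suffices:

```python
from itertools import product

# A hypothetical two-unit fixed-charge problem (all data invented):
#   min  3*y1 + 2*y2 + 1.5*x1 + 2.5*x2      (fixed + operating cost)
#   s.t. x1 + x2 = 5                        (demand balance: h(x, y) = 0)
#        x1 <= 4*y1,  x2 <= 3*y2            (capacity/logic: g(x, y) <= 0)
#        x1, x2 >= 0,  y1, y2 in {0, 1}

def inner_lp(y1, y2):
    """With y fixed, the continuous part is a 1-D LP: eliminating x2 = 5 - x1
    leaves x1 in [max(0, 5 - 3*y2), min(4*y1, 5)] with cost decreasing in x1."""
    lo, hi = max(0.0, 5 - 3 * y2), min(4 * y1, 5)
    if lo > hi:                     # no feasible operating point for this y
        return None
    x1 = hi                         # unit 1 is cheaper to operate: load it fully
    return 3 * y1 + 2 * y2 + 1.5 * x1 + 2.5 * (5 - x1), (x1, 5 - x1)

best = None                         # enumerate the four discrete alternatives
for y1, y2 in product((0, 1), repeat=2):
    res = inner_lp(y1, y2)
    if res is not None and (best is None or res[0] < best[0]):
        best = (res[0], res[1], (y1, y2))

print(best)   # -> (13.5, (4, 1), (1, 1)): select both units, run unit 1 at capacity
```

Of the four discrete alternatives, only y = (1, 1) admits a feasible operating point, which is exactly the kind of interaction between configuration and operating levels that problem (MIP) captures.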
The formulation and solution of major types of mathematical programming problems can be effectively performed with modeling systems such as GAMS and AMPL. While these require that the model be expressed explicitly in algebraic form, they have the advantage that they automatically interface with codes for solving the various types of problems. They also perform automatic differentiation and allow the use of indexed equations, with which large scale models can be readily generated. It should also be noted that these modeling systems now run mostly on desktop and PC computers, making their use and application widely available.
The solution of LP problems relies largely on the simplex algorithm, although lately interior-point methods have received increased attention for solving very large problems, given their polynomial complexity. MILP methods rely largely on simplex LP-based branch and bound methods, which consist of a tree enumeration in which LP subproblems are solved at each node and nodes are eliminated based on bounding properties. These methods are being improved through cutting plane techniques, which produce tighter lower bounds for the optimum. LP and MILP codes are widely available. The best known include CPLEX, OSL and XPRESS, all of which have achieved impressive improvements in their capabilities for solving problems. On the other hand, since MILP problems are NP-complete, it is always possible to run into time limitations when solving problems with a large number of 0-1 variables, especially if the integrality gap is large.
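The bounding idea behind LP-based branch and bound can be sketched on a small 0-1 knapsack problem (a simple MILP; data invented). For the knapsack, the LP relaxation at each node is solved greedily by sorting items by value density and allowing at most one fractional item, and the resulting bound is used to prune nodes:

```python
# LP-based branch and bound sketch on a 0-1 knapsack MILP (data invented).
# At each node the LP relaxation is solved greedily (items sorted by value
# density; at most one fractional item), giving an upper bound used to prune.

items = [(10, 4), (7, 3), (6, 3)]          # (value, weight), density-sorted
CAP = 7

def lp_bound(i, cap, value):
    """Upper bound at a node: take remaining items fractionally (LP relaxation)."""
    for v, w in items[i:]:
        if w <= cap:
            value += v
            cap -= w
        else:
            value += v * cap / w           # fractional item closes the LP
            break
    return value

best = {"value": 0}                        # incumbent integer solution

def branch(i, cap, value):
    if value > best["value"]:
        best["value"] = value
    if i == len(items) or lp_bound(i, cap, value) <= best["value"]:
        return                             # prune: bound cannot beat incumbent
    if items[i][1] <= cap:                 # branch y_i = 1 (take item i)
        branch(i + 1, cap - items[i][1], value + items[i][0])
    branch(i + 1, cap, value)              # branch y_i = 0 (skip item i)

branch(0, CAP, 0)
print(best["value"])   # -> 17 (take the first two items: weight 7, value 17)
```

The same tree search applies to general MILPs; the only difference in full codes is that the node relaxations are solved with the simplex method rather than greedily.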
The solution of NLP problems relies either on the successive quadratic programming (SQP) algorithm or on the reduced gradient method. Major codes include MINOS and CONOPT for the reduced gradient method, and OPT (Vasantharajan et al., 1990) for the SQP algorithm. These NLP methods are guaranteed to find the global optimum if the problem is convex (i.e. convex objective function and constraints). When the NLP is nonconvex, a global optimum cannot be guaranteed. One option is to try to convexify the problem, usually through exponential transformations, although the number of cases in which this is possible is rather small. Alternatively, one can use rigorous global optimization methods, which over the last few years have made significant advances. These methods assume special structures such as bilinear, linear fractional and concave separable functions. Although this may appear to be quite restrictive, Smith and Pantelides (1996) have shown that algebraic models are always reducible to these structures, provided they do not involve trigonometric functions. Computer codes for global optimization still remain in the academic domain; the best known are BARON and α-BB. It should also be noted that non-rigorous techniques such as simulated annealing and genetic algorithms, which have also become popular, do not make any assumptions on the functions, but then they cannot guarantee rigorous solutions, at least not in a finite amount of time. Also, these methods do not formulate the problem as a mathematical program, since they involve procedural search techniques that in turn require some type of discretization. Furthermore, violation of constraints is handled through ad-hoc penalty functions.
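The way such special structures are exploited can be illustrated with the bilinear case: global optimization codes replace a bilinear term w = x*y by a variable bounded by the McCormick envelopes, which are linear and therefore yield a convex relaxation. The sketch below (box bounds chosen arbitrarily for illustration) verifies numerically that the envelopes bracket the bilinear term over the box:

```python
# McCormick envelopes for the bilinear term w = x*y on the box
# [xL, xU] x [yL, yU] (bounds chosen arbitrarily for illustration).
# Global optimization codes replace x*y by a variable w constrained
# between these linear under- and overestimators.

xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0

def envelopes(x, y):
    under = max(xL * y + x * yL - xL * yL,   # valid underestimators of x*y
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,    # valid overestimators of x*y
               xL * y + x * yU - xL * yU)
    return under, over

# The envelopes bound x*y everywhere on the box (and are exact at its corners).
for i in range(21):
    for j in range(21):
        x = xL + (xU - xL) * i / 20
        y = yL + (yU - yL) * j / 20
        lo, hi = envelopes(x, y)
        assert lo <= x * y + 1e-9 and x * y <= hi + 1e-9

lo, hi = envelopes(1.0, 2.0)       # box midpoint, where x*y = 2
print(lo, hi)                      # -> 1.0 3.0 (the relaxation gap at that point)
```

Tightening the variable bounds shrinks this gap, which is why bound reduction plays such a central role in rigorous global optimization algorithms.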
Major methods for MINLP problems include, first, Branch and Bound (BB), which is a direct extension of the linear case, except that NLP subproblems are solved at each node. Generalized Benders Decomposition (GBD) and Outer-Approximation (OA) are iterative methods that solve a sequence of NLP subproblems with all the 0-1 variables fixed, and MILP master problems that predict lower bounds and new values for the 0-1 variables. The difference between the GBD and OA methods lies in the definition of the MILP master problem: the OA method uses accumulated linearizations of the functions, while GBD uses accumulated Lagrangian functions parametric in the 0-1 variables. The LP/NLP-based branch and bound essentially integrates both subproblems within one tree search, while the Extended Cutting Plane (ECP) method does not solve the NLP subproblems and relies exclusively on successive linearizations. All these methods assume convexity to guarantee convergence to the global optimum. Nonrigorous methods for handling nonconvexities include the equality relaxation algorithm and its augmented penalty version. The only commercial code for MINLP is DICOPT (OA-GAMS), although there are a number of academic versions (MINOPT, α-ECP).
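The alternation between NLP subproblems and a linearization-based master problem can be sketched on a toy convex MINLP (one binary and one continuous variable; data invented). Since the toy master has a single 0-1 variable, it is solved here by enumeration over y with a fine grid over x rather than by an MILP code; the structure of the iteration is nevertheless that of Outer-Approximation:

```python
# Outer-Approximation sketch on a toy convex MINLP (data invented):
#   min  5*y + (x - 3)**2   s.t.  x <= 4*y,  0 <= x <= 4,  y in {0, 1}
# NLP subproblems (y fixed) give upper bounds; the master problem, built from
# accumulated linearizations of the nonlinear term, gives lower bounds.

GRID = [i / 100 for i in range(401)]        # x grid on [0, 4]

def nlp_sub(y):
    """NLP subproblem with y fixed (solved by grid search for simplicity)."""
    x = min((x for x in GRID if x <= 4 * y + 1e-9), key=lambda x: (x - 3) ** 2)
    return x, 5 * y + (x - 3) ** 2

def master(cuts):
    """Master problem: min 5*y + eta, eta >= each linearization, x <= 4*y."""
    best = None
    for y in (0, 1):
        for x in GRID:
            if x > 4 * y + 1e-9:
                continue
            obj = 5 * y + max(c(x) for c in cuts)
            if best is None or obj < best[0]:
                best = (obj, y)
    return best

y, ub, cuts, incumbent = 1, float("inf"), [], None
while True:
    xk, obj = nlp_sub(y)                    # upper bound from the NLP
    if obj < ub:
        ub, incumbent = obj, (xk, y)
    # linearization of (x - 3)**2 at xk (supporting cut, since the term is convex)
    cuts.append(lambda x, a=xk: (a - 3) ** 2 + 2 * (a - 3) * (x - a))
    lb, y = master(cuts)                    # lower bound and new y from the master
    if lb >= ub - 1e-6:
        break

print(incumbent, ub)   # -> (3.0, 1) 5.0
```

On this example the bounds meet after two NLP subproblems and two master problems, which mirrors the finite convergence argument for OA on convex problems.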
In recent years a new trend has emerged of formulating and solving discrete/continuous optimization problems through a model known as Generalized Disjunctive Programming (GDP). The basic idea in GDP models is to use Boolean and continuous variables, and to formulate the problem with an objective function subject to three types of constraints: (a) global inequalities that are independent of discrete decisions; (b) disjunctions that are conditional constraints involving an OR operator; (c) pure logic constraints that involve only the Boolean variables. More specifically, the problem is given as follows:

    min   Σk ck + f(x)
    s.t.  g(x) <= 0
          ∨i∈Dk [ Yik;  hik(x) = 0;  ck = γik ],   k ∈ K        (GDP)
          Ω(Y) = True
          x ∈ Rⁿ,  ck ∈ R,  Yik ∈ {True, False}
where x are continuous variables and Y are the Boolean variables. The objective function involves the term f(x) for the continuous variables (e.g. cost) and the fixed charges ck that depend on the discrete choices. The equalities/inequalities g(x) must hold regardless of the discrete decisions, and hik(x) = 0 are conditional equations that must be satisfied when the corresponding Boolean variable Yik is True for the i-th term of the k-th disjunction. Also, the fixed charge ck is assigned the value γik for that same variable. Finally, the constraints Ω(Y) involve logic propositions in terms of the Boolean variables.
Problem (GDP) represents an extension of disjunctive programming , which in the past has been used as a framework for deriving cutting planes for the algebraic problem (MIP). It is interesting to note that any GDP problem can be reformulated as a MIP problem, and vice-versa. It is more natural, however, to start with a GDP model, and reformulate it as a MIP problem. This is accomplished by reformulating the disjunctions using the convex hull transformation or with "big-M" constraints. The propositional logic statements are reformulated as linear inequalities. For the linear case of problem GDP, and when no logic constraints are involved, Beaumont proposed a branch and bound method that does not rely on 0-1 variables and branches directly on the equations of the disjunctions. This method was shown to outperform the solution of the alternative algebraic MILP models. Raman and Grossmann developed a branch and bound method for solving problem GDP in hybrid form; i.e. with disjunctions and mixed-integer constraints. For this they introduced the notion of "w-MIP representability" to denote those disjunctive constraints that can be transformed into mixed-integer form without loss in the quality of the relaxation.
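The effect of the two reformulations on relaxation quality can be checked numerically on the single disjunction x = 0 OR x ∈ [2, 5] (objective and big-M value invented for illustration): when y is relaxed to [0, 1], the big-M model admits a weaker lower bound than the convex hull model, while both agree once y is integer:

```python
# Big-M vs convex hull relaxations of the disjunction  x = 0  OR  x in [2, 5],
# with indicator y (y = 1 selects the second term). Objective and M invented.
#   min 10*y   s.t.  x >= 1   (forces the second term when y is integer)
# Big-M (M = 10):   x <= 10*y,   x >= 2 - 10*(1 - y)
# Convex hull:      2*y <= x <= 5*y
# Both relaxations are scanned on a fine (x, y) grid.

def scan(feasible, ys):
    """Minimize 10*y over the grid of points satisfying x >= 1 and `feasible`."""
    best = None
    for y in ys:
        for i in range(501):
            x = i / 100                     # x grid on [0, 5]
            if x >= 1 and feasible(x, y):
                obj = 10 * y
                if best is None or obj < best:
                    best = obj
    return best

bigm = lambda x, y: x <= 10 * y + 1e-9 and x >= 2 - 10 * (1 - y) - 1e-9
hull = lambda x, y: 2 * y - 1e-9 <= x <= 5 * y + 1e-9

relaxed_y = [i / 100 for i in range(101)]
lb_bigm = scan(bigm, relaxed_y)     # LP relaxation of the big-M model
lb_hull = scan(hull, relaxed_y)     # LP relaxation of the hull model
opt = scan(hull, [0, 1])            # integer solution (same for both models)

print(lb_bigm, lb_hull, opt)        # -> 1.0 2.0 10 (the hull relaxation is tighter)
```

This tightness ordering is exactly why the convex hull transformation is preferred when the growth in problem size it entails is acceptable.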
For the nonlinear case of problem (GDP), and for the case of process networks, Türkay and Grossmann proposed a logic-based Outer-Approximation algorithm. This algorithm is based on the idea of extending the Outer-Approximation algorithm by solving the NLP subproblems in reduced space, in which the constraints of disjunction terms that do not apply are disregarded, with which both the efficiency and the robustness can be improved. In this method the MILP master problems correspond to the convex hull of the linearizations of the nonlinear inequalities. Also, several NLP subproblems must be solved to initialize the master problem in order to cover all the terms in the disjunctions. Penalties can also be added to handle the effect of nonconvexities, as in the method by Viswanathan and Grossmann. This method has been implemented in LOGMIP, a GAMS-based prototype computer code. Finally, it should be noted that a new method for solving GDP problems has recently been reported by Lee and Grossmann. These authors have developed reformulations and algorithms that rely on the convex hull of nonlinear convex inequalities. Also, their method is not restricted to process networks.
From the above review, it should be clear that LP and MILP codes have become quite powerful. NLP methods are being significantly advanced by rigorous global optimization algorithms, which, however, can still be relatively expensive to apply. Finally, as for MINLP methods, the exciting new direction is logic-based optimization methods, such as Generalized Disjunctive Programming, which promise to facilitate problem formulation and to improve solution efficiency and robustness.
In the application of mathematical programming techniques to design and synthesis problems it is always necessary to postulate a superstructure of alternatives. This is true whether one uses a high-level aggregated model or a fairly detailed model. Most of the previous work has relied on representing the superstructure for each particular problem at hand, but without following general principles. Two major issues arise in postulating a superstructure. The first is, given a set of alternatives to be analyzed, what are the major types of representations that can be used, and what are their implications for the modeling. The second is, for a given representation, what are all the feasible alternatives that must be included to guarantee that the global optimum is not overlooked.
As for types of superstructures, Yeomans and Grossmann have characterized two major types of representations. The first is the State-Task Network (STN), which is motivated by the work in scheduling by Kondili, Pantelides and Sargent. The basic idea here is that the representation makes use of two types of nodes: states and tasks (see Fig. 1.a). The assignment of equipment is dealt with implicitly through the model. Both the case of one-task one-equipment [OTOE] and that of variable task-equipment assignment [VTE] can be considered. The second representation is the State-Equipment Network (SEN), which is motivated by recent work of Smith, and in which the basic idea is to work with two types of nodes: states and equipment (see Fig. 1.b). The tasks in this case are treated implicitly through the model. This representation considers the case of variable task-equipment assignment [VTE]. Yeomans and Grossmann have developed GDP models for each of the two types of representations. These can then be used for solution with a GDP algorithm, or they can be reformulated as MILP or MINLP problems.
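As a small illustration of the STN idea, the network of Fig. 1.a for a ternary mixture ABC can be encoded directly as two sets of nodes, states and tasks, with equipment left implicit; the sketch below (state and task names invented) enumerates the separation sequences embedded in such a superstructure:

```python
# State-Task Network sketch for separating a ternary mixture ABC (cf. Fig. 1.a).
# Two node types: states (mixtures/products) and tasks (sharp separations);
# equipment assignment is left implicit, as in the STN representation.

tasks = {                       # task: (input state, output states)
    "A/BC": ("ABC", ("A", "BC")),
    "AB/C": ("ABC", ("AB", "C")),
    "B/C":  ("BC",  ("B", "C")),
    "A/B":  ("AB",  ("A", "B")),
}
products = {"A", "B", "C"}

def sequences(pending, chosen=()):
    """Enumerate task sequences that reduce every state to a pure product."""
    mixtures = [s for s in pending if s not in products]
    if not mixtures:
        yield chosen                # every remaining state is a pure product
        return
    state = mixtures[0]             # next mixture still requiring a separation
    for name, (feed, outs) in tasks.items():
        if feed == state:
            rest = [s for s in pending if s != state] + list(outs)
            yield from sequences(rest, chosen + (name,))

for seq in sequences(["ABC"]):
    print(seq)   # -> ('A/BC', 'B/C') then ('AB/C', 'A/B')
```

The two sequences recovered are precisely the direct and indirect distillation sequences, i.e. the alternatives embedded in the superstructure of Fig. 1.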
(a) State Task Network
(b) State Equipment Network
Figure 1. Alternative superstructure representations for ternary distillation
As for the issue of how to systematically generate the superstructure that includes all the alternatives of interest, Friedler et al. have proposed a novel graph-theoretic approach of polynomial complexity for finding all the interconnections in process networks, given that the nodes for processes and chemicals are specified. This procedure has been successfully applied to synthesizing process networks for waste minimization. These authors have also used these ideas to perform the search in the optimization more efficiently.
Closely related to the selection of the superstructure, is the selection of level of detail of the optimization model. A common misconception about the mathematical programming approach is that models are always detailed and require a great deal of information. This, however, is not necessarily true. In general mathematical programming models can be classified into three main classes:
a) Aggregated models. These refer to high-level representations in which the design or synthesis problem is greatly simplified by concentrating on an aspect or objective that tends to dominate the problem at hand.
b) Short cut models. These refer to fairly detailed superstructures that involve cost optimization (investment and operating costs), but in which the performance of the units is predicted with relatively simple nonlinear models in order to reduce the computational cost, and/or for exploiting the algebraic structure of the equations, especially for global optimization.
c) Rigorous models. These also rely on detailed superstructures, but involve rigorous and complex models for predicting the performance of the units. The area of synthesis of distillation sequences (ideal and non-ideal) is perhaps the one that has received the most attention for developing rigorous models.
It should be noted that aggregated models give rise to simpler types of optimization models. They are often LP, NLP or MILP models of modest size, which are easier to solve than larger MINLP models. In contrast, both short cut and detailed models give rise almost exclusively to MINLP problems, which, as mentioned above, can also be formulated as GDP problems. The important point to realize here is that mathematical programming can accommodate models of various degrees of complexity.
There are several solution strategies that can be used in the optimization of mathematical programming models for design and synthesis. The two major strategies are simultaneous optimization and sequential optimization. In the simultaneous strategy a single model is optimized at once. The optimization is rigorous in that all the trade-offs are taken into account simultaneously. The model used is commonly of a single type, but hybrids are possible. For example, one can perform simultaneous optimization of a flowsheet in which the reaction, separation and heat integration are each represented by aggregated models. Alternatively, simultaneous optimization can be applied to the synthesis of subsystems, for example heat exchanger networks or heat-integrated distillation units. In the latter example, one might use detailed models for both the distillation and the heat integration, or detailed models for the distillation and aggregated models for the heat integration.
The sequential optimization strategy consists of solving a sequence of subproblems, normally at an increasing level of detail. The major motivation is to solve simpler problems so as to avoid solving a single large problem (normally an MINLP). A good example is the procedure implemented in MAGNETS, in which an LP is solved first to target the utility cost, next an MILP to determine the identity of the fewest number of matches, and finally an NLP superstructure problem in which the interconnections of the exchangers for the predicted matches are determined. Another example is the hierarchical decomposition procedure by Douglas, in which the flowsheet is sequentially optimized through various levels, from a simple input-output model to the detailed structure.
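The first, utility-targeting step of such a sequence can be illustrated with the classic problem-table (heat cascade) calculation, which for simple stream data reproduces the minimum-utility target that the LP yields (stream data invented; temperatures in °C, FCp in kW/°C and assumed constant):

```python
# Minimum utility targeting by the problem-table heat cascade, which reproduces
# the result of the LP targeting step (stream data invented; FCp constant).

def utility_targets(hot, cold, dtmin):
    """hot/cold: lists of (T_in, T_out, FCp). Hot temperatures are shifted
    down by dtmin so that feasible exchange needs no further checks."""
    shifted = [(tin - dtmin, tout - dtmin, f) for tin, tout, f in hot]
    temps = sorted({t for s in shifted + cold for t in s[:2]}, reverse=True)
    cascade, r = [], 0.0
    for hi, lo in zip(temps, temps[1:]):    # temperature intervals, top down
        net = 0.0
        for tin, tout, f in shifted:
            if tin >= hi and tout <= lo:    # hot stream spans the interval
                net += f * (hi - lo)        # heat released
        for tin, tout, f in cold:
            if tout >= hi and tin <= lo:    # cold stream spans the interval
                net -= f * (hi - lo)        # heat absorbed
        r += net
        cascade.append(r)
    q_hot = max(0.0, -min(cascade))         # heating must remove all deficits
    q_cold = q_hot + cascade[-1]            # overall energy balance
    return q_hot, q_cold

hot = [(180, 80, 2.0), (130, 40, 1.0)]      # invented stream data
cold = [(60, 160, 3.0)]
print(utility_targets(hot, cold, dtmin=10))   # -> (40.0, 30.0)
```

With these targets fixed, the subsequent MILP and NLP stages of the sequence only need to decide which matches realize them and how the exchangers are interconnected.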
While the simultaneous and sequential strategies have been used for a long time, several new variants have been proposed. These include the combination of mathematical programming with physical insights, and the combined hierarchical decomposition and MINLP optimization by Daichendt and Grossmann. What is important to realize is that the application of mathematical programming models can be achieved through a variety of strategies, which in turn make use of models at various levels of complexity. For instance, in sequential decomposition one can use aggregated, short-cut and detailed models. In simultaneous optimization one can use a model of a single type, or a mix of several types. Finally, it is possible to use mathematical programming in combination with other approaches, most notably physical insights.
A detailed review of the mathematical programming approach to process synthesis, with an extensive list of references, can be found in:
Grossmann, I.E.; Caballero, J.A.; Yeomans, H. Mathematical Programming Approaches to the Synthesis of Chemical Process Systems. Korean J. Chem. Eng. 16(4), 407-426 (1999).