
Interior point optimization methods: theory, implementations and engineering applications


Anthony Vannelli, Victor H. Quintana, and Luis Vargas, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1

The objective of this paper is to present, in a tutorial way, the basic ideas involved in Narendra Karmarkar's projective scaling algorithm for solving linear programming problems and to show two of its most important extensions, the dual and the dual affine algorithms. Implementation issues regarding the practical solution of the search direction are also discussed. A new parallel preconditioned conjugate gradient solution technique is introduced. This parallel search direction solver provides an almost perfect speedup of p for p processors. A summary of application results on very large scale integration (VLSI) circuit layout and economic dispatch in power systems is presented. In all these applications, the interior point methods exhibit an average speedup of about six to 100 times over a well-known MINOS simplex code.

This paper presents, in a tutorial manner, the fundamental ideas behind Narendra Karmarkar's projective scaling algorithm for the solution of linear programming problems and illustrates two of its most important extensions, the dual and dual affine algorithms. Implementation issues involved in computing the search direction in practice are also discussed. A new parallel preconditioned conjugate gradient solution method is presented; this parallel search-direction solver achieves nearly a p-fold speed-up with p processors. A summary of applications to VLSI circuit layout and to economic dispatch in power networks is given. For all the applications considered, the interior point method is from five to 100 times faster than the well-known MINOS simplex code.

I. Introduction

The simplex method, developed in the mid-1940s by George Dantzig, has been the workhorse algorithm for solving Linear Programming (LP) optimization problems. Since 1984, interest in linear programming has gained considerable strength due to the publication of work by Narendra Karmarkar [1] on a new linear programming algorithm. This new method, called the projective scaling algorithm, has been extensively discussed in the scientific community due, mainly, to the following reasons:

• first, Karmarkar's method can be implemented in such a way that it is considerably faster than any other known algorithm for solving large-scale linear programs [2];

• second, it is a polynomial-time algorithm (the simplex method is not); this means that its running time does not explode as the problem size increases; and

• third, the ideas behind this new method are being utilized in nonlinear and integer programming problems to devise polynomial-time algorithms [3].

This paper consists of seven sections. In section II, we present the basic ideas of Karmarkar's original method and the basic algorithm. In sections III and IV, the dual and dual affine algorithms, respectively, are explained. The main implementation issues related to the search direction are discussed in section V. In section VI, we present three new applications of the dual affine algorithm to different areas in electrical engineering. Conclusions about the features and performance of interior point methods are presented in section VII.

II. The basic Karmarkar's method

A. The standard LP form

The following standard form of a linear programming problem is considered:

$\min_x \; c^T x$,  (1.1)

such that

$Ax = 0$,  (1.2)

$e^T x = 1$,  (1.3)

$x \ge 0$,  (1.4)

where $c \in \Re^n$; $x \in \Re^n$; $A \in \Re^{m \times n}$ is a rank-$m$ matrix; and $e$ is a vector of ones. Also, the condition $m \le n - 1$ must be satisfied; otherwise the standard LP problem (1) would have no solution.

In solving the LP problem (1), the following necessary conditions must be satisfied:

i) $Ae = 0$, so that $x = (1/n)e$ is a feasible point.

ii) The minimum of the objective function is zero; i.e.,

$c^T x^* = 0$,  (2)

where $x^*$ is the minimizing point.

In the literature, linear programming problems are usually written in the simplex form:

$\min_x \; c^T x$,  (3.1)

such that

$Ax = b$,  (3.2)


$x \ge 0$.  (3.3)

Figure 1: Feasible region and descent direction.

This formulation may be put into the required form of Karmarkar's method by using elementary algebraic operations [2].

Karmarkar's method for an iterative solution of (1.1)-(1.4) is based on the following two fundamental ideas:

1) If the current point of an iterative optimization process is near the centre of the feasible region, then it makes sense to move in a direction of steepest descent to get closer to the optimal point. To illustrate this idea, let us consider the feasible region and descent direction shown in Fig. 1. If we move from the point $x_0$ along the direction of steepest descent ($c$), we can significantly improve the solution. Such a significant improvement can be achieved mainly because $x_0$ is near the centre of the feasible region. However, if we use the steepest descent direction from point $x_1$, the boundary is reached before much improvement occurs in the performance (cost) function.

2) The solution space can be transformed so as to place the current solution point near the centre of the transformed feasible region. Without changing the problem in any essential way, such a transformation can be done by using an appropriate type of transformations, called projective transformations. The idea in projective transformations is to rescale the original problem in such a way that the current point is put into the centre of the feasible region.

Using these two fundamental ideas, a basic outline of Karmarkar's method may be given as follows:

a) Start the iterative solution from a point ($x^0$) near the centre of the feasible region. Initialize the iteration counter: $k = 0$.

b) Put point $x^k$ into the centre of the feasible region using a projective transformation.

c) Find a new solution point near the boundary of the current (transformed) feasible region using the steepest descent direction.

d) Put the new point found in the transformed space back into the original problem space to obtain $x^{k+1}$.

e) Check the optimality conditions. If the optimum has not been reached, go back to b). Otherwise, stop.

B. Projective transformations

As we have seen in the preceding subsection, the crucial step in Karmarkar's method is the transformation of the current point to the centre of the feasible region. The projective transformation $T$ that performs such a transformation is defined as

$y = T(x) = \dfrac{D^{-1} x}{e^T D^{-1} x}$,  (4)

where $D$ is a nonsingular diagonal matrix, and $T : \Re^n \to \Re^n$.

If we define $D$ as

$D = \mathrm{diag}\{x_1, x_2, \ldots, x_n\}$,  (5)

where $x_i$ represents the $i$th component of vector $x$, then the projective transformation $T(x)$ gives a new point $y$ at the centre of the transformed feasible region. Notice that we always have $y = (1/n)e$. The inverse transformation is given by

$x = T^{-1}(y) = \dfrac{D y}{e^T D y}$.  (6)

Karmarkar has shown [1] that by using projective transformations, we change the equality constraints $Ax = 0$ to $ADy = 0$, while $e^T x = 1$ remains the same; that is, $e^T y = 1$.
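As a purely illustrative check of (4)-(6), the short Python fragment below applies the projective transformation and its inverse to an arbitrary interior point; the function names T and T_inv are ours, not the paper's.

```python
import numpy as np

def T(x, d):
    """Projective transformation (4): y = D^{-1} x / (e^T D^{-1} x), with D = diag(d)."""
    z = x / d
    return z / z.sum()

def T_inv(y, d):
    """Inverse transformation (6): x = D y / (e^T D y)."""
    z = d * y
    return z / z.sum()

# Current interior point; D is built from its components, as in (5).
x = np.array([0.6, 0.1, 0.3])
d = x.copy()                  # D = diag(x)

y = T(x, d)
print(y)                      # [1/3, 1/3, 1/3]: the current point is mapped to the centre
print(T_inv(y, d))            # recovers the original x
```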

C. The potential objective function

The objective function $c^T x$ changes in an undesired way because linear functions are not invariant under a projective transformation. For this reason, Karmarkar has proposed changing the original objective function to an equivalent objective function that is invariant under projective transformations. He employs for this purpose the so-called potential functions, defined as

$p(x) = \sum_{j=1}^{n} \ln\!\left(\dfrac{f(x)}{x_j}\right) + K$,  (7)

where $f(x)$ is a linear function and $K$ is a constant.

Thus, the problem solved by Karmarkar is

$\min_x \; p(x) = \sum_{j=1}^{n} \ln\!\left(\dfrac{c^T x}{x_j}\right)$,  (8.1)

such that

$Ax = 0$,  (8.2)

$e^T x = 1$,  (8.3)

$x \ge 0$.  (8.4)

Another important result in Karmarkar's developments is that the transformed-space gradient of p(x) is approximately Dc.

D. Karmarkar's algorithm

In the previous subsection we have presented the equations corresponding to the constraints and the gradient of the objective function in the space transformed by the projective transformation $T$. Thus we have all the elements to get the descent directions in this space. The complete Karmarkar algorithm to solve (8.1)-(8.4) can be stated as follows:

i) Start with the initial solution point $x^0$, and iteration counter $k = 0$.

ii) If $c^T x^k < \varepsilon$, stop; otherwise, continue to step iii).

iii) Put the current point $x^k$ in the centre of the feasible region using the projective transformation of (4); i.e., $y^k = T(x^k)$.

iv) Transform the constraint matrix $A$, and the gradient of the objective function $c$; i.e., $A \to AD$ and $c \to Dc$.

v) Find the feasible descent direction in the transformed space as in usual linear programming methods [6]; i.e., project the gradient $Dc$ into the space spanned by the constraints (columns of $AD$ plus $e^T$). For convenience we define the matrix

$B \equiv \begin{bmatrix} AD \\ e^T \end{bmatrix}$  (9)

and the orthogonal projector

$P = I - B^T (B B^T)^{-1} B$,  (10)

where $I$ is the identity matrix. Therefore, the new feasible descent direction $d^k$ (orthogonal to the active constraints) is given by

$d^k = -PDc = -[I - B^T (B B^T)^{-1} B]\, Dc$.  (11)

This step is the main computational burden of the method. It corresponds to the solution of the least squares problem

$\min_u \; \| B^T u - Dc \|_2$.  (12)

If $u^*$ is the solution to this problem, then

$d^k = B^T u^* - Dc$.  (13)

vi) Choose the step size $\lambda$ using

$\lambda = \min_i \{\lambda_i \mid y_i^k + \lambda_i d_i^k = 0;\ \lambda_i > 0\}$.  (14)

vii) Evaluate the new point in the transformed space:

$y^{k+1} = y^k + \alpha \lambda d^k$.  (15)

In practice, $\alpha \in (0.90, 0.98)$. In the original paper Karmarkar used the value $\alpha = 1/4$; however, experience shows that this value leads to slower convergence.

viii) Return to the original problem; i.e.,

$x^{k+1} = T^{-1}(y^{k+1}) = \dfrac{D y^{k+1}}{e^T D y^{k+1}}$.  (16)

Go to step ii).
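The following Python sketch of steps iii)-viii) is our own illustration, not the authors' code; it assumes the problem is already in the standard form (1.1)-(1.4) and uses a dense least-squares solve for (12)-(13). With α = 0.9 its first iterate matches the worked example in the next subsection.

```python
import numpy as np

def karmarkar_step(A, c, x, alpha=0.9):
    """One iteration of the projective scaling algorithm (steps iii-viii)."""
    n = len(x)
    D = np.diag(x)                                   # (5)
    B = np.vstack([A @ D, np.ones(n)])               # (9): rows AD and e^T
    Dc = D @ c
    u, *_ = np.linalg.lstsq(B.T, Dc, rcond=None)     # (12)
    d = B.T @ u - Dc                                 # (13)
    y = np.full(n, 1.0 / n)                          # centred current point
    neg = d < 0
    if not neg.any():
        raise ValueError("no blocking component; step (14) is unbounded")
    lam = np.min(-y[neg] / d[neg])                   # (14)
    y_new = y + alpha * lam * d                      # (15)
    return D @ y_new / (np.ones(n) @ D @ y_new)      # (16)

# The small LP of the illustrative example: min -x1 + x2 + 2*x3, A = [1 1 -2], x0 = e/3.
A = np.array([[1.0, 1.0, -2.0]])
c = np.array([-1.0, 1.0, 2.0])
x = np.full(3, 1.0 / 3.0)
for _ in range(5):
    x = karmarkar_step(A, c, x)
    print(x, c @ x)                                  # objective value decreases toward zero
```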

1. Illustrative example

Let us consider the following elementary LP problem:

$\min_x \{-x_1 + x_2 + 2 x_3\}$,

such that

$x_1 + x_2 - 2 x_3 = 0$,

$x_1 + x_2 + x_3 = 1$,

$x_1 \ge 0, \; x_2 \ge 0 \text{ and } x_3 \ge 0$.

This LP example is of the standard form (1.1)-(1.4) and satisfies the assumptions proposed by Karmarkar; i.e., the minimum of the objective function is zero, and $(1/n)e$ is a feasible point.

In the iterative solution, we follow the steps described above for Karmarkar's algorithm, and we use a tolerance $\varepsilon = 0.01$ for the convergence criterion.

a. First iteration

i) The starting point is $(x^0)^T = (1/3)(1, 1, 1)$. Iteration counter $k = 0$.

ii) Compute the objective function:

$c^T x^0 = \frac{1}{3}(-1 + 1 + 2) = \frac{2}{3}$.

As $2/3 > \varepsilon$, we continue to step iii).

iii) Put the current point in the centre of the feasible region, $y^0 = T(x^0)$. For this case,

$D = \frac{1}{3}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$;

therefore, $y^0 = \frac{1}{3}(1, 1, 1)^T$.

iv) Transform the constraint matrix and the gradient vector:

$(AD)^T = \frac{1}{3}\begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix}$ and $Dc = \frac{1}{3}\begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix}$.

v) Find the feasible descent direction:

$B = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & -\frac{2}{3} \\ 1 & 1 & 1 \end{bmatrix} \;\rightarrow\; d^0 = \frac{1}{3}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$.

vi) Step size $\lambda$:

$y_1^1 = 0 \;\rightarrow\; \frac{1}{3} + \lambda_1 \frac{1}{3} = 0 \;\rightarrow\; \lambda_1 = -1$,

$y_2^1 = 0 \;\rightarrow\; \frac{1}{3} - \lambda_2 \frac{1}{3} = 0 \;\rightarrow\; \lambda_2 = 1$,

$y_3^1 = 0 \;\rightarrow\; \frac{1}{3} + \lambda_3 \cdot 0 = 0 \;\rightarrow\; \lambda_3 = \infty$.

Therefore, $\lambda^0 = \lambda_2 = 1$.

vii) Evaluate the new point in the transformed space:

$y^1 = \frac{1}{3}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + (0.9)(1)\,\frac{1}{3}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} = \frac{1}{30}\begin{bmatrix} 19 \\ 1 \\ 10 \end{bmatrix}$.

viii) Return to the original problem:

$x^1 = T^{-1}(y^1) = \dfrac{D y^1}{e^T D y^1} = \frac{1}{30}\begin{bmatrix} 19 \\ 1 \\ 10 \end{bmatrix}$

(since $D = \frac{1}{3}I$ here, the normalization leaves $y^1$ unchanged).

b. Second iteration

At the end of the second iteration we get

$D = \frac{1}{30}\begin{bmatrix} 19 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 10 \end{bmatrix}, \qquad y^2 = \begin{bmatrix} 0.5433 \\ 0.0333 \\ 0.5169 \end{bmatrix}, \qquad x^2 = T^{-1}(y^2) = \dfrac{D y^2}{e^T D y^2} = \begin{bmatrix} 0.6649 \\ 0.0021 \\ 0.3329 \end{bmatrix},$

and the value of the objective function is

$c^T x^2 = (-1)(0.6649) + (1)(0.0021) + (2)(0.3329) = 0.003$.

Since $0.003 < \varepsilon$, stop. The optimal solution has been obtained.

E. Objective function

As we have seen, the minimum of the objective function must be zero. This is a nontrivial condition, and there is no generally accepted way to satisfy it. In the beginning, Karmarkar proposed two methods which, in practice, increase the complexity and computational burden of the original algorithm; therefore, they are unsuitable for practical use. Many other approaches have been proposed in the literature [4], [7]-[9], without a general agreement.

In the next two sections we discuss two alternatives to overcome this problem, which lead to new implementations of Karmarkar's algorithm.

III. The dual algorithm

The dual algorithm has been proposed by Todd and Burrel [5]. It basically uses an iterative estimation of the optimum of the objective function.


Let $f^*$ be the optimum of the objective function. Then, if we solve the problem $\min(c^T x - f^*)$, the solution is obviously zero. Since $e^T x = 1$, we can write

$c^T x - f^* = c^T x - f^* e^T x = (c - f^* e)^T x$.  (17)

Therefore, $\min c^T x$ is equivalent to $\min (c - f^* e)^T x$.

Todd and Burrel denote as $f$ the corresponding value of the dual objective function which, by duality, is a lower bound on $f^*$. Thus, at each iteration, a computation of the dual problem is performed to get the updated $f$.

The dual problem is defined as

$\max \; \{f\}$,  (18.1)

such that

$A^T u + f e \le c$.  (18.2)

For a given $u$, if we choose $f$ as

$f = \min_j \, (c - A^T u)_j$,  (19)

i.e., $f$ is the minimum of all components of the vector $c - A^T u$, then $(u, f)$ is a solution of the dual problem.

The dual algorithm is as follows:

i) Start with $x^0$, and $k = 0$.

ii) Compute the solution of the dual problem,

$A A^T u^0 = A c \;\rightarrow\; u^0$,  (20.1)

$f^0 = \min_j \, (c - A^T u^0)_j$.  (20.2)

iii) If $c^T x^k - f^k < \varepsilon$, stop; otherwise, continue to step iv).

iv) Put the current point into the centre of the feasible region using the projective transformation $y^k = T(x^k)$.

v) Update the dual variables:

$A D^2 A^T u^{k+1} = A D^2 c(f^k)$  (21)

and

$\bar f = \min_j \, (c - A^T u^{k+1})_j$.  (22)

If $\bar f > f^k$, then

$f^{k+1} = \bar f$;  (23.1)

otherwise,

$f^{k+1} = f^k$.  (23.2)

vi) Update the equivalent cost vector,

$c^{k+1} = c^k - f^{k+1} e$.  (24)

vii) Update $u^{k+1}$ with the updated cost,

$A D^2 A^T u^{k+1} = A D^2 c^{k+1} \;\rightarrow\; u^{k+1}$.  (25)

viii) Transform the constraints and the gradient of the objective function,

$A \to AD$ and $c \to Dc$.

ix) Find the feasible descent direction as in the usual linear programming methods; i.e., project the gradient of the objective function ($Dc$) into the space spanned by the constraints (columns of $AD$ plus $e^T$). For convenience, we define the matrix

$B \equiv \begin{bmatrix} AD \\ e^T \end{bmatrix}$.  (26)

Therefore, the new feasible descent direction $d^k$ is given by

$d^k = -[I - B^T (B B^T)^{-1} B]\, Dc$.  (27)

In practice, we do not solve (27) because we have already done most of the work necessary to get its solution. In fact, we should recall that $d^k = -P_B Dc$, where $P_B$ is the orthogonal projector defined previously in the basic Karmarkar algorithm, and which may also be obtained from $P_B = P_e P_{AD}$. The projectors $P_e$ and $P_{AD}$ are the orthogonal projectors associated with the vector $e$ and the matrix $AD$, respectively; i.e.,

$P_e = I - \dfrac{e e^T}{n}$  (28)

and

$P_{AD} = I - (AD)^T [AD (AD)^T]^{-1} AD$.  (29)

Thus, we can compute $d^k$ as

$d^k = -P_e [D c^{k+1} - D A^T u^{k+1}]$.  (30)

x) Choose the step size $\lambda$ using

$\lambda = \min_i \{\lambda_i \mid y_i^k + \lambda_i d_i^k = 0;\ \lambda_i > 0\}$.  (31)

xi) Get the new point,

$y^{k+1} = y^k + \alpha \lambda d^k$.  (32)

In practice, $\alpha \in (0.90, 0.98)$.

xii) Return to the original problem,

$x^{k+1} = T^{-1}(y^{k+1}) = \dfrac{D y^{k+1}}{e^T D y^{k+1}}$.  (33)

Go to step iii).

The above algorithm generates primal and dual solutions with objective values converging to the common optimal primal and dual value, in polynomial time.
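The lower-bound computation of steps ii) and v) can be summarized in a few lines; the sketch below is our own (the function name and the generic diagonal weighting standing in for D² are ours): it solves the normal equations for u and then applies (19).

```python
import numpy as np

def dual_lower_bound(A, c, D=None):
    """Dual estimate as in (19)-(20): solve the (weighted) normal equations for u,
    then f = min_j (c - A^T u)_j is a valid lower bound on the LP optimum.
    D is the current diagonal scaling (identity at start-up, as in (20.1))."""
    W = np.eye(A.shape[1]) if D is None else D @ D          # D^2
    u = np.linalg.solve(A @ W @ A.T, A @ W @ c)
    f = np.min(c - A.T @ u)
    return u, f

# Example with the small LP of section II: A = [1 1 -2], c = (-1, 1, 2).
A = np.array([[1.0, 1.0, -2.0]])
c = np.array([-1.0, 1.0, 2.0])
print(dual_lower_bound(A, c))        # f is a lower bound on the optimal value 0
```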

IV. The dual affine method

A. Description of the method

This method has been recently proposed by Adler, Resende, Veiga and Karmarkar [9]. Rather than expressing the dual problem in the standard form,

$\min_y \; b^T y$,  (34.1)

such that

$A^T y = c$,  (34.2)

$y \ge 0$,  (34.3)

they use the inequality dual formulation because there are some computational advantages in the selection of search directions under this formulation. To do so, let us now consider the following LP problem:

$\max_x \; c^T x$,  (35.1)

such that

$A x \le b$,  (35.2)

where $c$ and $x$ are $n$-vectors, $b$ is an $m$-vector, $A$ is a full-rank $m \times n$ matrix, $m > n$ and $c \ne 0$. Also, an initial interior feasible solution $x^0$ is required.


The algorithm described below is a variation of Karmarkar's original projective algorithm [1]. It is obtained by substituting an affine transformation for the projective transformation, and the objective function for the potential function.

Starting at $x^0$, the algorithm generates a sequence of feasible interior points $\{x^1, x^2, \ldots, x^k, \ldots\}$ with monotonically increasing objective values; i.e.,

$b - A x^k > 0$  (36)

and

$c^T x^{k+1} > c^T x^k$.  (37)

The algorithm terminates when a stopping criterion is satisfied.

Introducing slack variables to the problem formulation (35), we have

$\max_x \; c^T x$,  (38.1)

such that

$A x + v = b$,  (38.2)

$v \ge 0$,  (38.3)

where $v$ is the $m$-vector of slack variables.

The affine variant of Karmarkar's algorithm consists of a scaling operation followed by a search procedure that determines the next iterate. At each iteration $k$, with $v^k$ and $x^k$ as the current iterates, a linear transformation is applied to the solution space,

$\hat v = D_k v$,  (39)

where

$D_k = \mathrm{diag}\left\{\dfrac{1}{v_1^k}, \ldots, \dfrac{1}{v_m^k}\right\}$.  (40)

The slack variables are scaled so that $x^k$ is equidistant to all hyperplanes generating the closed half spaces whose intersection forms the transformed feasible polyhedral set,

$\{x \in \Re^n \mid D_k A x \le D_k b\}$.  (41)

Rewriting (35.1) and (35.2) in terms of scaled slack variables, we have

$\max_x \; c^T x$,  (42.1)

such that

$A x + D_k^{-1} \hat v = b$,  (42.2)

$\hat v \ge 0$.  (42.3)

The set of feasible solutions is denoted by

$F = \{x \in \Re^n \mid A x \le b\}$  (43)

and the set of the feasible scaled slacks is denoted by

$S = \{\hat v \in \Re^m \mid A x + D_k^{-1} \hat v = b \text{ for } x \in F\}$.  (44)

If matrix $A$ is full rank, then

$\hat v = D_k (b - A x)$  (45)

and

$x(\hat v) = (A^T D_k^2 A)^{-1} A^T D_k (D_k b - \hat v)$.  (46)

The new feasible direction is obtained from

$h_x^k = (A^T D_k^2 A)^{-1} c$.  (47)

The corresponding direction for the slack variables is

$h_v^k = -A h_x^k$.  (48)

The updating equation is $x^{k+1} = x^k + \alpha h_x^k$.

B. The dual affine algorithm

i) Choose the starting point $x^0$ such that

$A x^0 < b$.  (49)

Set $k = 0$.

ii) If $k > 0$, check the stopping criterion

$\dfrac{|c^T x^{k+1} - c^T x^k|}{\max\{1, |c^T x^k|\}} < \varepsilon$.  (50)

If (50) is not satisfied, continue to step iii); if satisfied, stop.

iii) Calculate the slack vector,

$v^k = b - A x^k$.  (51)

iv) Calculate the diagonal matrix using

$D_k = \mathrm{diag}\left\{\dfrac{1}{v_1^k}, \ldots, \dfrac{1}{v_m^k}\right\}$.  (52)

v) Calculate the projection of $c$ onto the space spanned by the columns of $A D_k$,

$h_x^k = (A^T D_k^2 A)^{-1} c$.  (53)

vi) Get

$h_v^k = -A h_x^k$.  (54)

vii) Calculate the step size

$\alpha = \gamma \times \min_i \{-v_i^k / (h_v^k)_i \mid (h_v^k)_i < 0,\ i = 1, \ldots, m\}$,  (55)

where $0 < \gamma < 1$.

viii) Calculate the interior point,

$x^{k+1} = x^k + \alpha h_x^k$.  (56)

Go to step ii).

The stopping criterion of (50) has been improved in [9] by using the dual variables $y^k = D_k^2 h_v^k$. The algorithm terminates when the following conditions are satisfied:

$y_i^k \ge -\varepsilon_1 \quad \text{for } i = 1, \ldots, m$  (57.1)

and

$|y_j^k v_j^k| \le \varepsilon_2 \, \|y^k\|_2 \, \|v^k\|_2 \quad \text{for } j = 1, \ldots, m$,  (57.2)

for given small positive tolerances $\varepsilon_1$ and $\varepsilon_2$.
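To make section IV.B concrete, here is a compact Python version of steps i)-viii) using dense linear algebra and the relative-improvement test (50); it is our own sketch (the names dual_affine, gamma and eps are ours), not the DUALAFFINE code, and it omits the improved stopping test (57).

```python
import numpy as np

def dual_affine(A, b, c, x, gamma=0.95, eps=1e-6, max_iter=100):
    """Dual affine algorithm (section IV.B) for max c^T x subject to Ax <= b.
    x must be a strictly interior starting point (Ax < b)."""
    obj_prev = None
    for _ in range(max_iter):
        v = b - A @ x                                  # (51) slack vector
        Dk = 1.0 / v                                   # (52) diagonal of D_k
        M = A.T @ (Dk[:, None] ** 2 * A)               # A^T D_k^2 A
        h_x = np.linalg.solve(M, c)                    # (53)
        h_v = -A @ h_x                                 # (54)
        neg = h_v < 0
        if not neg.any():                              # objective unbounded above
            raise ValueError("problem is unbounded")
        alpha = gamma * np.min(-v[neg] / h_v[neg])     # (55)
        x = x + alpha * h_x                            # (56)
        obj = c @ x
        if obj_prev is not None and abs(obj - obj_prev) / max(1.0, abs(obj_prev)) < eps:
            break                                      # (50)
        obj_prev = obj
    return x

# Tiny example: max x1 + x2 over the unit box [0,1]^2, written as Ax <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
print(dual_affine(A, b, c, x=np.array([0.5, 0.5])))    # approaches (1, 1)
```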

V. Implementation issues

As we have seen in the preceding sections, the main computational burden of interior point methods is the solution of a least squares problem of the form

$A^T D^2 A \, x = c$,  (58)

where $A$ is an $m \times n$ matrix.

From this equation it is clear that these methods are tailor-made for LP problems for which the number of constraints is larger than the number of variables ($m > n$). If, for instance, $A$ is a 10 000 x 100 matrix, then $A^T D^2 A$ is a 100 x 100 matrix.


In this section we review the most important methods that have been proposed to solve (58).

A. Direct methods

Direct methods use a noniterative process to solve a system of linear equations. They perform a factorization of the left-hand side of (58), and the solution is obtained in two stages, usually called forward and backward substitution.

Tomlin [13] has used Q-R methods, such as Householder and Givens, to solve (58), but he has obtained results slower than the simplex method.

The solution to (58) has been successfully achieved by using an LU decomposition technique of the form

$A^T D^2 A = LU$.  (59)

This implementation, proposed by Adler et al. [15], was applied to a wide range of problems. The results show that the interior point method is, on the average, 1.8 to 6.8 times faster than the MINOS 4.0 simplex code. The factorization proposed in this implementation takes into account the fact that the sparsity structure of (58) remains the same through all the iterations (only the diagonal elements of $D$ change in the updating process). The matrix $A^T D^2 A$ is ordered in such a way that minimum fill-in occurs during the factorization. In addition, the algorithm retains the structural information of the matrix for the subsequent iterations; this procedure is called symbolic factorization.

Any dense column in $A$ degrades the sparsity of $A^T D^2 A$ and, for practical problems, a few dense columns demand prohibitively high computational effort and storage requirements. Furthermore, for a degenerate problem, the left-hand side of (58) becomes increasingly ill-conditioned as the solution is approached [4], and the accuracy of the computed version of $A^T D^2 A$ correspondingly deteriorates.

To overcome these difficulties, the solutions of (58) are obtained by iterative methods, which, for large problems ($n > 1000$), are also faster than direct methods. In the next subsection we review one of the most important iterative methods: the preconditioned conjugate gradient method.

B. Preconditioned conjugate gradient technique

The conjugate gradient technique is an optimization algorithm that minimizes the objective function $\Phi$, defined as

$\Phi(x) = \tfrac{1}{2}\, x^T A^T D^2 A \, x - c^T x$.  (60)

The minimum of (60) corresponds to the solution of (58). The rate of convergence of the conjugate gradient algorithm depends on the condition number (ratio of the maximum and minimum eigenvalues) of the matrix $A^T D^2 A$. The preconditioned algorithm consists of using the conjugate gradient method as applied to the following equivalent equation:

$[P^{-1} A^T D^2 A (P^T)^{-1}]\,(P^T x) = P^{-1} c$.  (61)

This equation has the form A'x' = b', which is the same structure as (58). The matrix P is called the preconditioner of the system, and is chosen to improve the condition number of A' [20].

1. Incomplete Cholesky factorization

This approach, proposed by Munksgaard [16], uses an incomplete Cholesky factorization to precondition the matrix $A^T D^2 A$. Munksgaard finds a matrix $P^{-1}$ such that $P$ is an approximate factored matrix of $A^T D^2 A$ without too many fill-ins in the $L$ Cholesky factor of $P$; that is,

$P = L L^T \approx A^T D^2 A$  (62)

and

$P^{-1} A^T D^2 A \approx I$.  (63)

This procedure improves the condition number of $A^T D^2 A$ and, as a consequence, fewer matrix-vector multiplications in a conjugate gradient technique are required to find the solution to the system of linear equations (58). Munksgaard [16] uses a preconditioned method similar to Ortega [18, pp. 196-200] to solve the better-conditioned system of linear equations (61),

$P^{-1} (A^T D^2 A)\, x = P^{-1} c$.  (64)

We have developed a FORTRAN code, called DUALAFFINE, which implements the above-mentioned features.* In addition, this code incorporates many other structural properties of LP problems (i.e., incomplete Cholesky factorization).
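For reference, a generic preconditioned conjugate gradient routine for (58) is sketched below in Python; the preconditioner enters only through a solve-callback m_solve, which in Munksgaard's scheme would apply the incomplete Cholesky factors L L^T and which is illustrated here with a simple diagonal (Jacobi) approximation. This is our own sketch, not the DUALAFFINE implementation.

```python
import numpy as np

def pcg(B, c, m_solve, x0=None, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for B x = c, B symmetric positive definite.
    m_solve(r) must return an approximate solution of P z = r, where P ~ B."""
    x = np.zeros_like(c) if x0 is None else x0.copy()
    r = c - B @ x
    z = m_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Bp = B @ p
        a = rz / (p @ Bp)
        x += a * p
        r -= a * Bp
        if np.linalg.norm(r) < tol:
            break
        z = m_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: B = A^T D^2 A from a random LP-like A, with a Jacobi preconditioner.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
d = rng.uniform(0.5, 2.0, size=50)
B = A.T @ (d[:, None] ** 2 * A)
c = rng.standard_normal(10)
jacobi = np.diag(B)
x = pcg(B, c, m_solve=lambda r: r / jacobi)
print(np.allclose(B @ x, c, atol=1e-6))
```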

2. A constraint reduction approach

The matrix $A^T D^2 A$ can be made more sparse if those $d_i$'s that are much smaller than others can be deleted from the calculation. The diagonal element $d_i$ is the inverse of the slack value of row $i$. If row $i$ has very little slack, then $d_i$ is large; on the other hand, if row $i$ has a large slack (that is, the constraint is not binding at this stage), then $d_i$ is small.

The large $d_i$ values have a dominant effect on the calculation of $A^T D^2 A$ and of the projection. The deletion of small $d_i$ values has very little effect on the accuracy of the evaluation of $A^T D^2 A$. However, such a deletion can make $A^T D^2 A$ much more sparse.

a. Illustrative example

These important properties can be illustrated by the following 4 x 3 matrix $A$ and 4 x 4 diagonal matrix $D$:

$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 0 \\ 0 & 0 & 0 & 100 \end{bmatrix}.$

For this case we have

$DA = \begin{bmatrix} 1 & 1 & 1 \\ 100 & 0 & 0 \\ 0 & 100 & 0 \\ 0 & 0 & 100 \end{bmatrix}, \qquad A^T D^2 A = \begin{bmatrix} 10001 & 1 & 1 \\ 1 & 10001 & 1 \\ 1 & 1 & 10001 \end{bmatrix}.$

Note that if we delete $d_1$ or set it to zero, we have the following results:

$\hat D = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 0 \\ 0 & 0 & 0 & 100 \end{bmatrix}, \qquad \hat D A = \begin{bmatrix} 0 & 0 & 0 \\ 100 & 0 & 0 \\ 0 & 100 & 0 \\ 0 & 0 & 100 \end{bmatrix}, \qquad A^T \hat D^2 A = \begin{bmatrix} 10000 & 0 & 0 \\ 0 & 10000 & 0 \\ 0 & 0 & 10000 \end{bmatrix}.$

Thus, the matrix $A^T \hat D^2 A$ is approximately the same as $A^T D^2 A$ and is very sparse, since it has become a diagonal matrix.

The solution of the linear system (58) obtained by replacing $A^T D^2 A$ with $A^T \hat D^2 A$ is not substantially affected; however, the sparsity of the resulting system will allow us to find the solution much more quickly by Cholesky factorization or preconditioned conjugate gradient approaches.

* An executable of the code can be made available from the authors for MS-DOS, UNIX and VM/CMS environments, for noncommercial use.


The large values $d_i$ retained in the approximate calculation of the projection matrix are selected according to a specified tolerance $\delta$; i.e.,

$\hat d_i = \begin{cases} d_i & \text{if } d_i > \delta \\ 0 & \text{otherwise.} \end{cases}$  (65)

In practical problems, 10-50% of all the constraints in (59) are not used in obtaining the optimal solution. In the more general global routing problem, Vannelli [10] has shown that 50-95% of such arc constraints have little effect on the optimal solution. This implies that these arcs have small $d_i$ values and can be deleted. Unlike the simplex approaches, which cannot delete these constraints, the interior point approach allows one to reduce the number of constraints as the algorithm proceeds.
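The effect of the tolerance δ in (65) can be sketched in a few lines (our own illustration): rows whose d_i does not exceed δ are simply omitted when A^T D^2 A is assembled.

```python
import numpy as np

def reduced_normal_matrix(A, d, delta):
    """Form A^T D^2 A using only the rows with d_i > delta, as in (65).
    Rows with small d_i (large slack, non-binding constraints) are dropped."""
    keep = d > delta
    Ak = A[keep]
    dk = d[keep]
    return Ak.T @ (dk[:, None] ** 2 * Ak)

# The 4 x 3 example given above: dropping d_1 = 1 (delta = 10) leaves a diagonal matrix.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
d = np.array([1.0, 100.0, 100.0, 100.0])
print(reduced_normal_matrix(A, d, delta=10.0))   # diag(10000, 10000, 10000)
```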

3. Matrix-splitting preconditioned conjugate gradient

We can use the observations of the previous subsection to formulate another preconditioned conjugate gradient technique to solve (58). Note that one must take care in determining which $d_i$ values are set to 0. Setting such values to 0 can lead the interior point algorithm into a projection direction which does not increase the objective function value. In an attempt to take advantage of the larger $d_i$ values and account for smaller $d_i$ values, we develop a matrix splitting preconditioned conjugate gradient technique, initially described in Concus, Golub and O'Leary [19] and later generalized for positive-definite linear systems of equations by Golub and Van Loan [20, pp. 373-376].

We define a matrix splitting of $A^T D^2 A$ as follows:

$A^T D^2 A = A^T D_H^2 A + A^T D_L^2 A = P - R$,  (66.1)

where

$d_{iH} = \begin{cases} d_i & \text{if } d_i > \delta \\ 0 & \text{if } d_i \le \delta \end{cases}$  (66.2)

and

$d_{iL} = \begin{cases} d_i & \text{if } d_i \le \delta \\ 0 & \text{if } d_i > \delta. \end{cases}$  (66.3)

In addition, all the nonnegativity constraints are placed in $P = A^T D_H^2 A$. This assures that matrix $P$ is positive definite. Golub and Van Loan [20] show that, when using (66) at iteration $k$, a new solution update of (60) is found by solving basically the simpler and sparser system of equations

$P x^k = R x^{k-1} + c$.  (67)

The interested reader is referred to [20] for details on updating the system of equations (67).

Based on these concepts, we have developed a second preconditioned conjugate gradient FORTRAN code that uses a matrix splitting preconditioner, DUALAFFINE(MS).
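A bare-bones version of the splitting iteration (67) is sketched below for illustration; P and R are formed from (66), the toy data reuse the 4 x 3 example above, and in [19], [20] this splitting is accelerated by conjugate gradients rather than iterated directly as done here.

```python
import numpy as np

def split_solve(A, d, delta, c, iters=50):
    """Plain splitting iteration for (A^T D^2 A) x = c based on (66)-(67):
    P = A^T D_H^2 A (large d_i), R = -A^T D_L^2 A (small d_i), and P x_k = R x_{k-1} + c."""
    dH = np.where(d > delta, d, 0.0)          # (66.2)
    dL = np.where(d <= delta, d, 0.0)         # (66.3)
    P = A.T @ (dH[:, None] ** 2 * A)
    R = -(A.T @ (dL[:, None] ** 2 * A))
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.linalg.solve(P, R @ x + c)     # (67)
    return x

# Example reusing the 4 x 3 matrix of the previous subsection; check against the full system.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
d = np.array([1.0, 100.0, 100.0, 100.0])
c = np.array([1.0, 2.0, 3.0])
x = split_solve(A, d, delta=10.0, c=c)
print(np.allclose(A.T @ (d[:, None] ** 2 * A) @ x, c))
```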

4. Parallel preconditioned conjugate gradient solver

In this section, we describe a simple parallel preconditioned conjugate gradient approach for the key projection step in solving (58). The technique presented here uses a simple block diagonal preconditioner to solve the projection step efficiently.

The key to establishing an effective parallel preconditioned conjugate gradient approach for solving

$B x = (A^T D^2 A)\, x = c$  (68)

is to select a matrix $M$ so that

$M^{-1} B \approx I$.  (69)

We show how this is done using the following preconditioned conjugate gradient algorithm from Ortega [18].

a. Preconditioned conjugate gradient (PCG) algorithm

• Choose $x^0$ and $M$. Set $r^0 = c - B x^0$. Solve $M \tilde r^0 = r^0$. Set $p^0 = -\tilde r^0$.

• For $k = 0, 1, 2, \ldots$:

1) $\alpha_k = \dfrac{(\tilde r^k, r^k)}{(p^k, B p^k)}$;

2) $x^{k+1} = x^k - \alpha_k p^k$;

3) $r^{k+1} = r^k + \alpha_k B p^k$; test for convergence;

4) solve

$M \tilde r^{k+1} = r^{k+1}$;  (70)

5) $\beta_k = \dfrac{(\tilde r^{k+1}, r^{k+1})}{(\tilde r^k, r^k)}$;

6) $p^{k+1} = -\tilde r^{k+1} + \beta_k p^k$.

A simple parallel implementation results from solving (70). For instance, in the dual affine Karmarkar implementation, if

$A = [A_1, A_2, \ldots, A_p]$,  (71)

then $M$ can be chosen so that

$M = \mathrm{diag}\{M_1, M_2, \ldots, M_p\}$,  (72)

where

$M_i = A_i^T D^2 A_i$  (73)

and each $M_i$ is still positive definite. For many problems, where $x_i$ is 0 or 1 in the optimal solution, the constraints

$x_i \ge 0$

result in large diagonal values in $M_i$ that dominate the off-diagonal elements in $M_i$ at optimality. Then the inverse of the diagonal elements of $M_i$ is a good preconditioner for

$B x = (A^T D^2 A)\, x = c$.
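The block-diagonal preconditioner (71)-(73) can be sketched as follows (our own sequential illustration, not the Transputer code): the columns of A are split into p groups, one block M_i = A_i^T D^2 A_i is formed per group, and applying M^{-1} reduces to p independent block solves, which is the step that parallelizes.

```python
import numpy as np

def block_preconditioner(A, d, p):
    """Return a function r -> M^{-1} r, where M = diag{M_1,...,M_p} and
    M_i = A_i^T D^2 A_i for a column partition A = [A_1, ..., A_p], as in (71)-(73)."""
    n = A.shape[1]
    groups = np.array_split(np.arange(n), p)
    blocks = [A[:, g].T @ (d[:, None] ** 2 * A[:, g]) for g in groups]

    def solve(r):
        z = np.empty_like(r)
        for g, M_i in zip(groups, blocks):     # each block solve is independent (parallelizable)
            z[g] = np.linalg.solve(M_i, r[g])
        return z

    return solve

# This callback can be supplied as the preconditioner of any PCG routine for B x = (A^T D^2 A) x = c.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 40))
d = rng.uniform(0.1, 10.0, size=200)
m_solve = block_preconditioner(A, d, p=4)
r = rng.standard_normal(40)
print(m_solve(r).shape)
```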

VI. Applications

In this section we present three applications of the dual affine variant of Karmarkar's method in electrical engineering problems. In all these applications the corresponding LP formulation exhibits a variety of sparsity structures and the ratio of number of columns to rows typically ranges from 2 to 4.

A. Detailed routing in VLSI circuit layout

In the simplest terms, the basic circuit layout problem is to place electronic modules on a chip in a non-overlapping manner, and then to route or connect the mutually non-interfacing wires according to a given wiring list or netlist. In the fundamental (detailed) routing problem the modules to be connected are placed on two or more layers, and connections are performed such that each track contains no more than one wire (or net).

To introduce the mathematical formulation of the routing problem, let us consider a simplified model in which the wires that connect the modules are routed over two layers only (typically metal and silicon) such that the wires never touch over both layers. By convention, vertical wires connect modules in the metal layer and horizontal wires connect modules in the silicon layer.

We assign the variable $x_j$ to each different routing which connects a net [10]. The variable $x_j$ takes the values

$x_j = \begin{cases} 1 & \text{if routing } j \text{ is used} \\ 0 & \text{otherwise.} \end{cases}$

The selection of one and only one routing for the net is represented by the condition

$\sum_{j \in N_k} x_j = 1, \quad k = 1, 2, \ldots, p$,

where $p$ is the number of nets, and $N_k$ is the set containing the different single horizontal track routings that we wish to consider for net $k$.

We can represent all possible ways to connect nets in two layers by a (0, 1) matrix $A$. For a grid graph with $m$ arcs, there will be $m$ rows in $A$, where the $i$th row corresponds to the $i$th arc in the grid graph and each column corresponds to the different ways of connecting a net. The $a_{ij}$ element in $A$ can be represented by

$a_{ij} = \begin{cases} 1 & \text{if routing } j \text{ goes through arc } i \\ 0 & \text{otherwise.} \end{cases}$

Thus the detailed routing problem is equivalent to solving the following 0-1 integer programming problem [10]:

$\min_x \; \sum_{i=1}^{N} c_i x_i$,  (74.1)

such that

$\sum_{j \in N_k} x_j = 1, \quad k = 1, 2, \ldots, p$,  (74.2)

$\sum_{j} a_{ij} x_j \le 1, \quad i = 1, 2, \ldots, m$,  (74.3)

$x_j = 0 \text{ or } 1$,  (74.4)

where $c_i$ is the wire length, that is, the sum of vertical and horizontal edges required to connect net $x_i$, and $N = \sum_k N_k$.

Note that the number of equality constraints (74.2) is equal to the number of nets, and the number of inequality constraints (74.3) is equal to the number of arcs in the grid graph. The number of vertical arcs is a smaller subset of the total number of arcs (and corresponding constraints) in the grid graph.

A useful linear programming relaxation of (74) can be placed in the form

$\max_x \; \sum_{i=1}^{N} x_i$,  (75.1)

such that

$\sum_{j \in N_k} x_j \le 1, \quad k = 1, 2, \ldots, p$,  (75.2)

$\sum_{j} a_{ij} x_j \le 1, \quad i = 1, 2, \ldots, m$,  (75.3)

$x_j \ge 0$.  (75.4)
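To show the structure of (74)-(75) on data, the fragment below (an entirely hypothetical instance: the nets, candidate routings and arc sets are made up) assembles the 0-1 matrix A of arc constraints, the net-selection constraints and the wire lengths c_j; these arrays are what an LP solver such as DUALAFFINE or MINOS would receive.

```python
import numpy as np

# Toy instance: 3 nets, each with candidate routings given as sets of arcs
# on a grid graph with 6 arcs; the wire length c_j is the number of arcs used.
routings = [[{0, 1}, {2, 3}],          # net 1: two candidate routings
            [{1, 4}, {3, 5}],          # net 2
            [{0, 5}, {2, 4}]]          # net 3
m_arcs = 6

cols, lengths, net_of_col = [], [], []
for k, cands in enumerate(routings):
    for arcs in cands:
        col = np.zeros(m_arcs)
        col[list(arcs)] = 1.0          # a_ij = 1 if routing j goes through arc i
        cols.append(col)
        lengths.append(len(arcs))      # wire length c_j
        net_of_col.append(k)

A = np.column_stack(cols)              # arc constraints: sum_j a_ij x_j <= 1  (75.3)
c = np.array(lengths, dtype=float)
# net-selection constraints: sum of x_j over the routings of net k <= 1  (75.2)
E = np.array([[1.0 if net_of_col[j] == k else 0.0 for j in range(len(c))]
              for k in range(len(routings))])
print(A.shape, E.shape)                # (6, 6) and (3, 6)
```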

Maze-runner heuristic techniques for solving the routing problem have been developed by Ting and Tien [21]. Vecchi and Kirkpatrick [22] describe a simulated annealing approach which requires large computer running times and in which the solutions are dependent on critical annealing schedules. Linear programming (LP) approaches have been used to model large VLSI circuit layout problems. However, these approaches have not been pursued because of two important limitations. First, such problems are quite large; problems may contain thousands (or even hundreds of thousands) of constraints and variables. Second, the usual linear programming solution codes (e.g., MINOS [23]) employ a simplex method which can move very slowly from corner point to corner point of the underlying constraint region. Empirically, one finds that simplex computer codes require as many iterations (corner identifications) to solve this optimization problem as there are constraints. Since the number of constraints is already quite large, running times can become excessively long.

In this paper we use the interior point algorithms DUALAFFINE and DUALAFFINE(MS) to solve the routing problem. For comparison purposes, the well-developed and efficient FORTRAN simplex code MINOS 5.0 [23] has been used to obtain the resulting lower bound on the detailed routing problem (75). Four placements are tested. These placements are processed by the maze-runner algorithm of Ting and Tien [21] to find routings. The netlists contain an average of four modules per net. The specifications of these placements are given in Table 1.

Table 1: Specifications of the placements

  Problem no.   No. of modules   No. of nets   Grid size   No. of constraints   No. of variables
  1             50               100           50 x 10     1083                 810
  2             100              200           75 x 15     2153                 1515
  3             200              400           100 x 20    4433                 3120
  4             400              800           150 x 30    8367                 5730

The numbers of constraints and variables in Table 1 are the sizes of the constraint matrices solved by DUALAFFINE and DUALAFFINE(MS).

DUALAFFINE, DUALAFFINE(MS) and MINOS 5.0 were run on an IBM 3081 mainframe computer at the University of Waterloo. The total wire lengths obtained on the detailed routing problem using the interior point method and the well-known river routing algorithm of Ting and Tien [21] are compared in Table 2.

Table 2: Total wire lengths

  Problem no.   Interior point   Ting & Tien technique [21]
  1             823              907
  2             1243             1412
  3             2679             2897
  4             5467             5976

The LP solutions of problems 1 and 3 are, in fact, optimal detailed routings by this new interior point technique.

The four test problems were initially solved by the preconditioned conjugate gradient using incomplete Cholesky factorization, DUALAFFINE. These results are shown in Table 3.


Table 3: Test results - DUALAFFINE

  Problem no.   MINOS 5.0 CPU s   MINOS 5.0 Iters.   DUALAFFINE CPU s   DUALAFFINE Iters.   MINOS/DUALAFFINE
  1             8.3               857                14.3               21                  0.58
  2             32.6              2283               27.7               23                  1.18
  3             185.1             5812               55.3               32                  3.34
  4             *                 *                  134.2              35                  >4.47

  * Optimal solution was not found after 600 CPU s.

Several striking differences between the two LP solvers become apparent from Table 3. First, although the number of iterations (pivot steps taken) increases dramatically using MINOS, the number of iterations (projection steps) using DUALAFFINE stays in the 20-40 range. Most important, the running time of MINOS is also amplified as the problem size increases. In fact, MINOS was not able to solve problem 4 in the limited time of 600 CPU s. On the other hand, DUALAFFINE's running time seems to increase almost linearly as the problem size increases. The final column in Table 3 shows the relative speed of DUALAFFINE versus MINOS 5.0. Notice that DUALAFFINE obtains the lower bound on the minimal channel width at least 4.47 times faster than MINOS on the largest test problem.

These four test problems were solved again using the matrix splitting preconditioned conjugate gradient technique DUALAFFINE(MS). The values for the diagonal entries of the matrices $D_H$ and $D_L$ are chosen as follows:

$d_{iH} = \begin{cases} d_i & \text{if } d_i > \delta = 50 \\ 0 & \text{if } d_i \le \delta = 50 \end{cases}$  (76.1)

and

$d_{iL} = \begin{cases} d_i & \text{if } d_i \le \delta = 50 \\ 0 & \text{if } d_i > \delta = 50. \end{cases}$  (76.2)

These results are shown in Table 4.

Table 4: Test results - DUALAFFINE(MS)

  Problem no.   MINOS 5.0 CPU s   MINOS 5.0 Iters.   DUALAFFINE(MS) CPU s   DUALAFFINE(MS) Iters.   MINOS/DUALAFFINE(MS)
  1             8.3               857                10.8                   21                      0.77
  2             32.6              2283               21.6                   24                      1.51
  3             185.1             5812               30.4                   29                      6.09
  4             *                 *                  61.1                   32                      >9.82

  * Optimal solution was not found after 600 CPU s.

DUALAFFINE(MS) is much faster than DUALAFFINE on the tested problems and again seems to become faster than the incomplete Cholesky approach as the problem size increases. Clearly the factorization of the matrix $A^T D^2 A$ has become more sparse.

B. A parallel interior point implementation

The parallel preconditioned conjugate gradient algorithm developed in the previous section was tested on several global routing problems that appear in the VLSI circuit layout literature [5]. A parallel FORTRAN implementation was run on an 8-CPU (20 MHz) INMOS T800 Transputer system. The results are summarized in Table 5 for different problem sizes.

Table 5: Running time (s)

  Matrix A (m x n)   Simplex method (one CPU)   PCG method (one CPU)   Parallel PCG (eight CPUs)
  1000 x 100         35.7                       20.5                   2.9
  2000 x 500         823.3                      96.8                   14.2
  5000 x 1000        3597.8                     154.8                  22.9
  10 000 x 2000      18 791.8                   314.5                  45.6

The speed-up provided by the parallel PCG method is approximately seven times compared with the fastest interior point method on a single CPU (the PCG method). The parallel PCG method is over 400 times faster than the simplex method (MINOS 5.2) on one CPU. Other iterative methods could be used to exploit the underlying structure of

$B x = (A^T D^2 A)\, x = c$.

Banded and block tridiagonal preconditioned conjugate gradient techniques have also led to fast parallel implementations [27]. These approaches are also being investigated.

C. Constrained economic dispatch

In this application, the Successive Linear Programming (SLP) method is used to solve the Security-Constrained Economic Dispatch (SCED) problem in electric power systems [24]. Results considering two standard IEEE systems are presented. In addition, some implementation issues related to the interior point methods in the solution process of SLP are discussed.

1. Formulation of the SCED problem

The SCED problem deals with the calculation of the optimal operating schedule of a power system subject to transmission and generation constraints. The classic formulation for this problem is as follows:

$\min \; C_t = \sum_{i \in G} C_i(P_{G_i})$,  (77.1)

such that

$\sum_{i \in G} P_{G_i} = P_D + P_L$,  (77.2)

$P_{G_i}^{m} \le P_{G_i} \le P_{G_i}^{M}, \quad i \in G$,  (77.3)

$|S_l| \le S_l^{M}, \quad l \in L$,  (77.4)

where $C_t$ is the total generation cost, $P_{G_i}$ is the active generating power at bus $G_i$, $P_D$ is the total demand in the system, $P_L$ is the active losses in the system, $S_l$ is the apparent power through element $l$, $G$ is the set of all generator buses, $L$ denotes the set of all system lines, and superscripts $m$ and $M$ denote lower and upper limits.

A common practice in industry is to represent the cost function of each generator through a monotonically increasing second-order polynomial of the form

$C_i(P_{G_i}) = a_i + b_i P_{G_i} + c_i P_{G_i}^2$.  (78)

The successive linear programming technique solves the nonlinear optimization problem (77) via a sequence of linear programs in such a way that, in the limit, the successive solutions of the linear programming problems converge to the solution of the nonlinear programming problem [25]. To linearize the set of equations (77), the following assumptions are used:

1) The constraint (77.4) may be approximated as [24]

$|P_l| \le P_l^{M}, \quad l \in L$,  (79)


where $P_l$ is the active power flow through element $l$.

2) To relate the active power flows with the generation power, we use Generalized Generation Distribution Factors (GGDFs) [28]:

$P_l = \sum_{i \in G} \beta_{l,i} P_{G_i}$.  (80)

3) To relate the active losses with the generation power in (77.2), we use Incremental Transmission Losses Factors (ITLFs):

$\Delta P_L = \sum_{i \in G} \gamma_i \, \Delta P_{G_i}$,  (81)

where $\gamma_i$ is the ITLF that relates the losses with the real power generation of bus $i$.

The factors $\beta_{l,i}$ are obtained at the beginning of the procedure and remain constant as long as the topology of the network is preserved. The factors $\gamma_i$ are updated from the current ac power flow solution. In this application, the active part of the fast decoupled load flow (FDLF) [26] is used throughout the simulations.

By using assumptions 1), 2) and 3), the linearized SCED problem at the kth iteration becomes

$\min \; \Delta C_t = \sum_{i \in G} (b_i + 2 c_i P_{G_i}^k) \, \Delta P_{G_i}$,  (82.1)

such that

$\sum_{i \in G} (1 - \gamma_i) \, \Delta P_{G_i} = 0$,  (82.2)

$\Delta P_{G_i}^{m} \le \Delta P_{G_i} \le \Delta P_{G_i}^{M}, \quad i \in G$,  (82.3)

$\sum_{i \in G} \beta_{l,i} \, \Delta P_{G_i} \le \Delta P_l^{M}, \quad l \in L$,  (82.4)

where $\Delta P_{G_i}^{m}$ and $\Delta P_{G_i}^{M}$ are the step bounds on $\Delta P_{G_i}$.

At each iteration, the linear programming problem (82) is solved by an appropriate numerical technique.

The stopping criterion for the SLP algorithm is

$\dfrac{|C_t(P_G^{k+1}) - C_t(P_G^{k})|}{|C_t(P_G^{k+1})|} < \delta$,

where $P_G^k$ is the vector of active power generation at the kth iteration and $\delta$ is the tolerance for the SLP algorithm.
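The outer SLP loop around (82) can be outlined as follows; this is our own toy illustration with two generators, losses and line limits neglected, SciPy's linprog standing in for the LP solver (the paper uses MINOS or IPMF at this step), and a simple step-bound halving heuristic that the paper does not describe.

```python
import numpy as np
from scipy.optimize import linprog

# Toy economic dispatch via successive linear programming:
# two generators with quadratic costs (78), fixed demand, losses neglected.
a = np.array([0.0, 0.0])
b = np.array([2.0, 3.0])
cq = np.array([0.02, 0.01])
P_min, P_max = np.array([10.0, 10.0]), np.array([100.0, 100.0])
P = np.array([60.0, 60.0])       # feasible starting schedule (sums to the demand of 120 MW)
step = 10.0                      # step bounds on dP, as in (82.3)
delta = 1e-4                     # SLP tolerance

def cost(P):
    return float(np.sum(a + b * P + cq * P ** 2))

for _ in range(100):
    grad = b + 2 * cq * P        # linearized incremental costs, as in (82.1)
    lo = np.maximum(P_min - P, -step)
    hi = np.minimum(P_max - P, step)
    res = linprog(grad, A_eq=[[1.0, 1.0]], b_eq=[0.0],   # power balance (82.2), losses neglected
                  bounds=list(zip(lo, hi)), method="highs")
    P_new = P + res.x
    if cost(P_new) >= cost(P):   # no improvement: shrink the step bounds and retry
        step *= 0.5
        if step < 1e-3:
            break
        continue
    if abs(cost(P_new) - cost(P)) / abs(cost(P_new)) < delta:   # SLP stopping test
        P = P_new
        break
    P = P_new

print(P, cost(P))                # approaches the equal-incremental-cost dispatch
```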

2. Numerical results

In this section, we compare the performance of the simplex method and the proposed interior point method in the solution of the linear programming problems of SLP. Tests considering different tolerance criteria and covering a realistic range of numbers of constraints are addressed.

The standard IEEE 30-bus and IEEE 118-bus systems, at the maximum demand condition, were used in all the simulations. The number of variables in each case is the number of generation units: six for the 30-bus system and 18 for the 118-bus system. All the software was run on a micro VAX computer at the University of Waterloo. The convergence criterion for the SLP is δ = 0.001.

For comparison purposes, the well-developed and efficient FORTRAN simplex code of MINOS 5.0 was used to solve the resulting linear programming problems. The interior point method was implemented in a FORTRAN code called IPMF, and the stopping criterion for the IPMF code was set at $\varepsilon = 10^{-4}$ (see section IV.B). The numerical results are shown in Table 6.

Table 6: Test results

  Problem     MINOS 5.0 (s)   MINOS 5.0 Iters.   IPMF (s)   IPMF Iters.   Speed-up MINOS/IPMF
  30 buses    6.99            3                  1.51       3             4.63
  118 buses   16.55           5                  8.70       5             1.90

In Table 6, the second and third columns show the total running time and the number of iterations when the MINOS code was used in the whole SLP process. The corresponding values for the IPMF method are shown in the fourth and fifth columns. The final column in Table 6 shows the relative speed of IPMF versus MINOS.

From these results, it can be seen that the SLP running time when using IPMF is much shorter than that obtained with the MINOS code. In fact, with IPMF the optimal solution is obtained at least 1.9 times faster than with MINOS 5.0.

It is important to note that in the simulations, the optimal values of the generation units obtained by using simplex are the same as those obtained by the IPMF code.

3. Improved solutions

One of the main features of the interior point methods is that they get a good approximation of the optimal point after the first few iterations. Geometrically, a typical interior point path, described by the points $x_i$, is shown in Fig. 2. In this figure, the simplex scheme, whose solution goes from corner point to corner point, is indicated by the points $\bar x_i$. The steepest descent direction is represented by $c$.

The interior point method leads to a "good estimate" of the optimal solution after the first few iterations (three iterations in Fig. 2). This feature is very important in SLP because, for each linearization of the original problem, only an approximate solution of the linear programming problem is required. Moreover, it is usually good enough to obtain a point near the optimal solution.

Figure 2: Polytope of a two-dimensional feasible region.

Basically, in the interior point method there are two ways to control the proximity of the current solution to the optimal point. The most direct approach is to fix the maximum number of iterations internally in the code. This approach has the disadvantage that, for some problems, the solution may be far from the current point when the maximum number of iterations is reached. Therefore, the performance of the SLP is worsened. To avoid this problem, it is better to increase the tolerance $\varepsilon$ in the stopping criterion. With this change, we stop the iterations in the internal path of the feasible region before we reach the LP solution. The advantage of this approach is that we can control the proximity of the interior point to the true solution (the one provided by the simplex method) by decreasing or increasing the tolerance $\varepsilon$.

By using this feature we have set $\varepsilon = 10^{-3}$ in the first two iterations and $\varepsilon = 10^{-4}$ in the following iterations. The results are shown in Table 7.

Table 7: Time comparison with different values for ε

  Problem     MINOS 5.0 (s)   MINOS 5.0 Iters.   IPMF (s)   IPMF Iters.   Speed-up MINOS/IPMF
  30 buses    6.99            3                  1.21       4             5.73
  118 buses   16.55           5                  6.70       7             2.47

From Table 7 we see that although the number of iterations increases slightly, the total running time decreases. The solutions using the IPMF are at least 2.4 times faster than those using the MINOS code.

When the tolerance ε is changed, the final results in the SLP process obtained by using the IPMF code are the same as those obtained by the simplex code. Therefore, we are not modifying the solution point when we change the tolerance.

These simulations show that the use of interior point methods to solve nonlinear optimization problems increases the efficiency of the management of a power network. The proposed algorithm for active power scheduling can be run with a higher frequency, which provides a finer control of the generating units; therefore, it gives more economical solutions to the active power scheduling of the network.

VII. Conclusions

The results gathered in this paper show that, on the basis of CPU time, the dual affine variant of Karmarkar's method seems clearly superior to the simplex method for solving large LP problems encountered in different areas of electrical engineering.

The performance is better in systems with a sparse structure. The number of iterations does not increase significantly with the size of the problem.

For the solution of nonlinear problems by a successive linear programming method, an efficient interior point implementation has been presented. According to the test results, significant speed-ups over the simplex method may be obtained with this approach. Furthermore, by taking advantage of the nature of the SLP process, even faster solutions may be achieved by changing the tolerance in the convergence of the interior point method.

Since the implementations of interior point algorithms are still in the testing stage, further improvements in the next few years are expected. Therefore, this approach seems very promising as an alternative to the simplex method for solving linear programming problems. The solution of very large scale engineering optimization problems that were unsolvable by traditional methods can now be obtained by these novel techniques.

Acknowledgements

The authors acknowledge the financial support to conduct this research provided by the Natural Sciences and Engineering Research Council of Canada, under grants OGP-8835 and OGP-0044456.

References

[1] N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica, vol. 4, no. 4, 1984, pp. 373-395.

[2] J.H. Hooker, "Karmarkar's linear programming algorithm," Interfaces, vol. 16, no. 4, 1986, pp. 75-90.

[3] Y. Ye and E. Tse, "An extension of Karmarkar's projective algorithm for convex quadratic programming," Math. Programming, vol. 44, no. 2, 1989, pp. 157-171.

[4] P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin and M.H. Wright, "On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method," Math. Programming, vol. 36, no. 2, 1986, pp. 183-209.

[5] M.J. Todd and B.P. Burrel, "An extension of Karmarkar's algorithm for linear programming using dual variables," Algorithmica, vol. 1, no. 4,1986, pp. 409-424.

[6] G. Strang, Introduction to Applied Mathematics, Wellesley, Mass.: Wellesley-Cambridge Press, 1986, pp. 673-689.

[7] R.J. Vanderbei, M.J. Meketon and B.A. Freedman, "A modification of Karmarkar's linear programming algorithm," Algorithmica, vol. 1, no. 4, 1986, pp. 395-407.

[8] E.R. Barnes, "A variation on Karmarkar's algorithm for solving linear programming problems," Math. Programming, vol. 36, no. 2, 1986, pp. 174-182.

[9] I. Adler, M. Resende, G. Veiga and N. Karmarkar, "An implementation of Kar­markar's algorithm for linear programming," Math. Programming, vol. 44, no. 3, 1989, pp. 297-335.

[10] A. Vannelli, "An adaptation of the interior point method for solving the global routing problem," IEEE Trans. CAD/ICAS, vol. CAD-10, no. 2, Feb. 1991, pp. 193-203.

[11] K. Ponnambalam, A. Vannelli and T.E. Unny, "An application of Karmarkar's interior-point linear programming algorithm for multi-reservoir operations optimization," Stoch. Hydrology and Hydraulics, vol. 3, no. 1, 1989, pp. 17-29.

[12] T.C. Hu and M.T. Shing, "A decomposition algorithm for circuit routing," VLSI Circuit Layout: Theory and Design, T.C. Hu and E.S. Kuh, Eds., New York: IEEE Press, 1985, pp. 144-152.

[13] J.A. Tomlin, "An experimental approach to Karmarkar's projective method for linear programming," Math. Programming Study, vol. 31, 1987, pp. 175-191.

[14] N. Karmarkar and K.G. Ramakrishnan, "Implementation and computational results of the Karmarkar's algorithm for linear programming, using an iterative method for computing projections," presented at 13th Int. Math. Programming Symp., Tokyo, Japan, Aug. 1988.

[15] I. Adler, N. Karmarkar, M.G.C. Resende and G. Veiga, "An implementation of Karmarkar's algorithm for linear programming," Dept. of Industrial Engineering and Operations Research, U. of California at Berkeley, working paper, to appear in Math. Programming, 1992.

[16] N. Munksgaard, "Solving sparse symmetric sets of linear equations by preconditioned conjugate gradients," ACM Trans. Math. Software, vol. 6, no. 2, 1980, pp. 206-219.

[17] A. George and J.W. Liu, "User guide for SPARSPAK: Waterloo sparse linear equations package," Dept. of Computer Sci., U. of Waterloo, Report CS-78-30, 1978.

[18] J.M. Ortega, Introduction to Parallel and Vector Solution of Linear Systems, New York: Plenum Press, 1988.

[19] P. Concus, G.H. Golub and D.P. O'Leary, "A generalized conjugate gradient method for the numerical solution of elliptic partial differential equations," in Sparse Matrix Computations, J.R. Bunch and D.J. Rose, Eds., New York: Academic Press, 1976.

[20] G.H. Golub and C.F. Van Loan, Matrix Computations, Baltimore: Johns Hopkins University Press, 1984.

[21] B.S. Ting and B.N. Tien, "Routing techniques for gate arrays," IEEE Trans. CAD/ICAS, vol. CAD-2, no. 4, 1983, pp. 301-312.

[22] M.P. Vecchi and S. Kirkpatrick, "Global wiring by simulated annealing," IEEE Trans. CAD/ICAS, vol. CAD-2, no. 4, 1983, pp. 215-222.

[23] B.A. Murtagh and M.A. Saunders, "MINOS user's guide," Dept. of Operations Research, Stanford U., Systems Optimization Lab. Rep. No. 77-9, 1977.

[24] R. Mota-Palomino and V.H. Quintana, "A penalty function-linear programming method for solving power system constrained economic operation problems," IEEE Trans. Power Appar. Systems, vol. PAS-103, no. 6, June 1984, pp. 1414-1422.

[25] F. Palacios-Gomez, L. Lasdon and M. Engquist, "Nonlinear optimization by successive linear programming," Management Sci., vol. 28, no. 10, Oct. 1982, pp. 1106-1120.

[26] B. Stott and O. Alsac, "Fast decoupled load flow," IEEE Trans. Power Appar. Systems, vol. PAS-93, no. 3, May/June 1974, pp. 859-896.

[27] J.J. Dongarra and A.H. Sameh, "On some parallel banded system solvers," Parallel Computing, vol. 1, nos. 3 & 4, 1984, pp. 223-235.

[28] W.Y. Ng, "Generalized generation distribution factors for power system security evaluation," IEEE Trans. Power Appar. Systems, vol. PAS-100, no. 3, 1981, pp. 1001-1005.