# The greedy strategy in optimizing the Perron eigenvalue
† The research is supported by the RSF grant 17-11-01027.

###### Abstract

We address the problems of minimizing and of maximizing the spectral radius over a convex family of non-negative matrices. These problems are hard in general but can be efficiently solved for some special families. We consider the so-called product families, where each matrix is composed of rows chosen independently from given sets. A recently introduced greedy method works surprisingly fast; however, it is applicable mostly to strictly positive matrices. For sparse matrices, it often diverges and gives a wrong answer. We present the “selective greedy method”, which works equally well for all non-negative product families, including sparse ones. For this method, we prove a quadratic rate of convergence and demonstrate its exceptional efficiency in numerical examples. In dimensions up to 2000, the matrices with minimal/maximal spectral radii in product families are found within a few iterations. Applications to dynamical systems and to graph theory are considered.

Keywords: iterative optimization method, non-negative matrix, spectral radius, relaxation algorithm, cycling, spectrum of a graph, quadratic convergence, dynamical system, stability

AMS 2010 subject classification: 15B48, 90C26, 15A42

1. Introduction

The problem of minimizing or maximizing the spectral radius of a non-negative matrix is notoriously hard. There are no efficient algorithms even for solving this problem over a compact convex (say, polyhedral) set of positive matrices. This is because the objective function is neither convex nor concave in the matrix coefficients and, in general, not even Lipschitz. There may be many points of local extrema, which are, moreover, hard to identify. Nevertheless, for some special sets of matrices efficient methods do exist. In this paper we consider the so-called product families. Their nice spectral properties were discovered and analysed by Blondel and Nesterov in [5]; the first methods of optimizing the spectral radius over such families originated in [24]. One of them, the spectral simplex method, was further developed in [28]. This method is quite simple in realization and converges fast to the global minimum/maximum. Its modification, the greedy method [1, 25], has a fantastic rate of convergence: within 3-4 iterations, it finds the global minimum/maximum with good precision. Moreover, the number of iterations does not essentially grow with the dimension. However, it is applicable only under a firm assumption: all matrices must be strictly positive or, at least, irreducible. Otherwise the method may either cycle or converge to a very rough estimate. We provide a corresponding example in Section 3; in practical problems this trouble occurs very often, especially for sparse matrices. This is a serious disadvantage, since most applications deal with sparse matrices.

In this paper we develop a greedy method which works equally well for all non-negative matrices, including sparse ones. Numerical results demonstrated in Section 6 show that even in dimensions of several thousand, the greedy method manages to find the minimal and maximal spectral radii within a few iterations. We also prove bounds on the rate of convergence and thus theoretically justify the efficiency of the method. Finally, we consider several applications.

Before introducing the main concepts, let us fix some notation. We denote vectors by bold letters and their components by standard letters, so $\boldsymbol{x} = (x_1, \ldots, x_n)$. The support of a non-negative vector is the set of indices of its nonzero components. For a non-negative matrix $A$, we denote by $a_{ij}$ its entries and by $\rho(A)$ its spectral radius, which is the maximal modulus of its eigenvalues. By the Perron-Frobenius theorem, $\rho(A)$ is equal to the maximal non-negative eigenvalue, called the leading eigenvalue of the matrix $A$. The corresponding non-negative eigenvector $\boldsymbol{v}$ is also called leading. We usually normalize it as $\|\boldsymbol{v}\| = 1$, where $\|\cdot\|$ is the Euclidean norm. A non-negative matrix is reducible if after some renumbering of coordinates it takes a block upper-triangular form. For an irreducible matrix, the associated graph is strongly connected.

###### Definition 1

Let $\mathcal{F}_1, \ldots, \mathcal{F}_n$ be arbitrary nonempty compact sets of non-negative $n$-dimensional rows, referred to as uncertainty sets. Consider a matrix $A$ such that for each $i$, the $i$th row of $A$ belongs to $\mathcal{F}_i$. The family of all such matrices is denoted by $\mathcal{F}$ and called a product family, or a family with product structure.

Thus, we compose a matrix by choosing each row from the corresponding uncertainty set. The family of all such matrices is, in a sense, a product of the uncertainty sets. Of course, not every compact set of non-negative matrices has a product structure. Nevertheless, the class of product families is important in many applications: dynamical and asynchronous systems [18, 20, 28], graph theory [10, 21, 24], mathematical economics [8, 22, 5], game theory [3, 1, 16, 29], matrix population models (see [22] and references therein), etc.

We consider the problems of optimising the spectral radius over a product family $\mathcal{F}$:

$$\rho_{\max} \;=\; \max_{A \in \mathcal{F}}\, \rho(A), \qquad \rho_{\min} \;=\; \min_{A \in \mathcal{F}}\, \rho(A). \qquad (1)$$

Both the minimization and the maximization problems can be efficiently solved [24]. The spectral simplex method for finding the global minimum and maximum demonstrates a surprising efficiency [28]. For example, if all the sets $\mathcal{F}_i$ are two-element (so the family $\mathcal{F}$ has the structure of a Boolean cube and contains $2^n$ matrices), then the matrices with minimal and maximal spectral radii are found (precisely!) within a modest number of iterations, in 10-15 seconds on a standard laptop. If each uncertainty set consists of many rows, then the spectral simplex method still performs relatively few iterations and solves problem (1) in less than a minute. We present the algorithm in the next section, but its idea can be described within a few lines. Consider the maximization problem (1). We start with an arbitrary matrix $A \in \mathcal{F}$ with rows $\boldsymbol{a}_1, \ldots, \boldsymbol{a}_n$ and compute its leading eigenvector $\boldsymbol{v}$. Then we solve the problem $\max_{\boldsymbol{a} \in \mathcal{F}_1} (\boldsymbol{a}, \boldsymbol{v})$; in other words, we find an element of $\mathcal{F}_1$ that makes the biggest projection onto the eigenvector $\boldsymbol{v}$. If the row $\boldsymbol{a}_1$ is already optimal, then we leave it and do the same with the second row and the set $\mathcal{F}_2$, etc. If all rows of $A$ are optimal, then $A$ has the biggest spectral radius. Otherwise, we replace the first non-optimal row by an optimal element of its uncertainty set and thus obtain the next matrix. Then we perform the next iteration, etc. If all the sets $\mathcal{F}_i$ consist of strictly positive rows, then the spectral radius increases at each iteration. So, we have a relaxation scheme which always converges to a global optimum. However, if the rows may have zeros, then the relaxation is non-strict (the spectral radius may stay the same), and the algorithm may cycle or converge to a non-optimal matrix. In [28] this trouble was resolved and the method was modified to be applicable to sparse matrices. We formulate the idea in Theorem A in the next section.
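The one-row correction described above can be sketched in a few lines of code. The sketch below is our own illustration, not the authors' implementation: it assumes finite uncertainty sets given as lists of candidate rows, and the function names (`dot`, `improve_one_row`) are ours.

```python
# Sketch of one row-correction step of the spectral simplex method
# (maximization problem), for finite uncertainty sets.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def improve_one_row(A, v, sets, tol=1e-12):
    """Scan the rows of A; replace the first row that is not maximal
    with respect to the leading eigenvector v by the best element of
    its uncertainty set.  Returns (new matrix, changed index or None)."""
    for i, F_i in enumerate(sets):
        best = max(F_i, key=lambda a: dot(a, v))
        if dot(best, v) > dot(A[i], v) + tol:
            B = [row[:] for row in A]
            B[i] = list(best)
            return B, i
    return A, None          # A is maximal in each row w.r.t. v

# Toy family: F_1 = {(1,0), (0,2)}, F_2 = {(1,1)}.  The matrix
# A = [[1,0],[1,1]] has a leading eigenvector v = (0,1); the step
# replaces row 1 by (0,2), raising the spectral radius from 1 to 2.
sets = [[[1, 0], [0, 2]], [[1, 1]]]
B, i = improve_one_row([[1, 0], [1, 1]], [0.0, 1.0], sets)
```

On the second call with the leading eigenvector $(1, 1)$ of the new matrix, no row can be improved, so the step reports termination.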

Recently, in [1, 25], the spectral simplex method was further improved: in each iteration we maximize not one row but all rows simultaneously. According to numerical experiments, this slight modification leads to exceptionally fast convergence: in all practical examples the algorithm terminates within a few iterations, the absolute minimum/maximum is found within a few seconds, and the number of iterations hardly grows with the dimension. The authors of [1] and of [25] came to problem (1) from different angles. The paper [1] studies entropy games with a fixed number of states; there the new method was called the policy iteration algorithm. It is based not only on the spectral simplex method but also on earlier game-theoretic ideas of Hoffman and Karp [16] and of Rothblum [29]. The paper [25] solves the problem of finding the closest stable matrix in a certain norm. The authors derive the same method and call it the greedy method; we will use the latter name. This modification of the spectral simplex method, although it has much faster convergence, inherits the main disadvantage: it works only for strictly positive matrices. For sparse matrices, it often gets stuck, cycles, or converges to a wrong solution. None of the ideas from [28], which successfully modified the spectral simplex method, helps for the greedy method. This issue is discussed in detail in Section 3. In [1] the positivity assumption is relaxed to irreducibility of all matrices at each step, but this condition is also very restrictive and is never satisfied in practice for sparse matrices. In [25] the greedy method was modified for sparse matrices, but only for the minimization problem, and at the cost of a significant complication of the procedure. Therefore, in this paper we attack three main issues:

1. Is it possible to drop the strict positivity assumption for the greedy method? Can this method be applied to sparse matrices?

2. To estimate the rate of convergence theoretically and thereby reveal the secret of the extremely fast convergence of the greedy method.

3. To apply the greedy method to known problems.

The first issue is solved in Section 4. We derive the selective greedy method, which works for all kinds of non-negative matrices. To this end we introduce and apply the notion of the selected leading eigenvector. The new greedy method is as simple in realisation and as fast as the previous one.

In Section 5 we prove two main theorems about the rate of convergence. We show that both the simplex and the greedy methods have a global linear rate of convergence. Then we show that the greedy method (but not the spectral simplex!) has a local quadratic convergence, which explains its efficiency. We also estimate the constants and parameters of the convergence.

Numerical results for matrix families in various dimensions are reported in Section 6. Section 7 shows applications of the greedy method to problems of dynamical systems and of graph theory. Finally, in Section 8, we discuss the details of practical implementation.

2. Minimizing/maximizing the spectral radius of product families

2.1. The description of the methods

A matrix $A \in \mathcal{F}$ is said to be minimal in each row if it has a leading eigenvector $\boldsymbol{v}$ such that $(\boldsymbol{a}_i, \boldsymbol{v}) = \min_{\boldsymbol{a} \in \mathcal{F}_i} (\boldsymbol{a}, \boldsymbol{v})$ for all $i$. It is maximal in each row if it has a strictly positive leading eigenvector $\boldsymbol{v}$ such that $(\boldsymbol{a}_i, \boldsymbol{v}) = \max_{\boldsymbol{a} \in \mathcal{F}_i} (\boldsymbol{a}, \boldsymbol{v})$ for all $i$. These notions are not completely analogous: minimality in each row is defined with respect to an arbitrary leading eigenvector $\boldsymbol{v}$, while maximality requires a strictly positive $\boldsymbol{v}$.

We first briefly describe the ideas of the methods and then write the formal procedures. Optimisation of the spectral radius over a product family is done by a relaxation scheme. We consider the maximization problem first.

Starting with some matrix $A_1 \in \mathcal{F}$, we build a sequence of matrices $A_1, A_2, \ldots$ such that $\rho(A_1) \le \rho(A_2) \le \cdots$. Each matrix $A_{k+1}$ is constructed from $A_k$ by the same rule, which depends on the method:

Spectral simplex method. $A_{k+1}$ is obtained from $A_k$ by changing one of its rows $\boldsymbol{a}_i$ so that the scalar product $(\boldsymbol{a}_i, \boldsymbol{v}_k)$ increases, where $\boldsymbol{v}_k$ is a leading eigenvector of $A_k$. We fix $i$ and take the element which maximizes the scalar product with the leading eigenvector $\boldsymbol{v}_k$ over all $\boldsymbol{a} \in \mathcal{F}_i$. The algorithm terminates when no further step is possible, i.e., when the matrix is maximal in each row with respect to $\boldsymbol{v}_k$.

Thus, each iteration makes a one-row correction of the matrix to increase the projection of this row onto the leading eigenvector $\boldsymbol{v}_k$. Of course, such a row may not be unique. In [28] the smallest index rule was used: each time we take the smallest $i$ for which the row $\boldsymbol{a}_i$ is not maximal with respect to $\boldsymbol{v}_k$. This strategy provides very fast convergence to the optimal matrix. In [1] the convergence was further sped up by the pivoting rule: $i$ is the index for which the relative increase of the scalar product is maximal. Thus, we always choose the steepest increase of the scalar product. One iteration of this method requires the exhaustion of all indices $i = 1, \ldots, n$, which may be a disadvantage in the case of complicated sets $\mathcal{F}_i$.

The greedy method. $A_{k+1}$ is obtained by replacing all rows of $A_k$ with maximal rows in their uncertainty sets with respect to the leading eigenvector $\boldsymbol{v}_k$.

So, in contrast to the spectral simplex method, we change not only one row of $A_k$ but all non-optimal rows, i.e., all rows where the scalar product with $\boldsymbol{v}_k$ can be increased. If the row $\boldsymbol{a}_i$ is already optimal, we do not change it and set $\boldsymbol{a}_i^{(k+1)} = \boldsymbol{a}_i^{(k)}$. Otherwise we replace it by a row that gives the maximal scalar product $(\boldsymbol{a}, \boldsymbol{v}_k)$ over all $\boldsymbol{a} \in \mathcal{F}_i$.

Realization. The maximal scalar product $(\boldsymbol{a}, \boldsymbol{v}_k)$ over all $\boldsymbol{a} \in \mathcal{F}_i$ is found by solving the corresponding convex problem over $\mathcal{F}_i$. If $\mathcal{F}_i$ is finite, this is done merely by exhaustion of all its elements. If $\mathcal{F}_i$ is a polyhedron, then we find a maximizer among its vertices by solving an LP problem with the (usual) simplex method.

Convergence. Denote $\rho_k = \rho(A_k)$. It was shown in [28] that both methods are actually relaxations: $\rho_{k+1} \ge \rho_k$ for all $k$. Moreover, if $A_k$ is irreducible, then this inequality is strict. Hence, if in all iterations the matrices $A_k$ are irreducible (this is the case, for instance, when all the uncertainty sets are strictly positive), then both methods produce a strictly increasing sequence of spectral radii $\rho_k$, and the algorithm never cycles. If all the sets $\mathcal{F}_i$ are finite or polyhedral, then the set of extreme points of the family $\mathcal{F}$ is finite; therefore the algorithm arrives at the maximal spectral radius within finite time. This occurs at a matrix maximal in each row.

If the $\mathcal{F}_i$ are general compact sets, then it may happen that a matrix maximal in each row, although it exists, is not reached within finite time. In this case the algorithm can be interrupted at an arbitrary step, after which we apply the a posteriori estimates defined below. For an arbitrary matrix $A \in \mathcal{F}$ with leading eigenvector $\boldsymbol{v}$, we define the following values.

$$r_{\max}(A) \;=\; \max_{1 \le i \le n}\; \frac{1}{v_i}\, \max_{\boldsymbol{a} \in \mathcal{F}_i} (\boldsymbol{a}, \boldsymbol{v}). \qquad (2)$$

Thus, $r_{\max}(A)$ is the maximal ratio between the value $\max_{\boldsymbol{a} \in \mathcal{F}_i} (\boldsymbol{a}, \boldsymbol{v})$ and the $i$th component of $\boldsymbol{v}$. Similarly, for the minimum:

$$r_{\min}(A) \;=\; \min_{1 \le i \le n}\; \frac{1}{v_i}\, \min_{\boldsymbol{a} \in \mathcal{F}_i} (\boldsymbol{a}, \boldsymbol{v}). \qquad (3)$$

Then the following obvious estimates for $\rho_{\max}$ and $\rho_{\min}$ hold:

###### Proposition 1

[28] For both the spectral simplex method and the greedy method (for the maximization and the minimization problems respectively), we have in the $k$th iteration:

$$\rho(A_k) \;\le\; \rho_{\max} \;\le\; r_{\max}(A_k), \qquad r_{\min}(A_k) \;\le\; \rho_{\min} \;\le\; \rho(A_k). \qquad (4)$$

In Section 5 we show that, at least for strictly positive matrices, the estimates (4) converge to $\rho_{\max}$ and to $\rho_{\min}$ respectively at a linear rate. Moreover, for the greedy method the convergence is locally quadratic, which explains its efficiency in all practical examples.
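For finite uncertainty sets, the a posteriori bounds above are a one-line computation. The following is our own illustrative sketch (the function names are ours): for any matrix in the family with a strictly positive leading eigenvector $\boldsymbol{v}$, every matrix $B$ of the family satisfies $B\boldsymbol{v} \le r_{\max}\boldsymbol{v}$ entrywise, whence $\rho_{\max} \le r_{\max}$.

```python
# A posteriori bounds (2)-(3) for finite uncertainty sets.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def r_max(sets, v):
    """Value (2): the maximal ratio of max_{a in F_i}(a, v) to v_i."""
    return max(max(dot(a, v) for a in F) / v[i] for i, F in enumerate(sets))

def r_min(sets, v):
    """Value (3): the minimal ratio of min_{a in F_i}(a, v) to v_i."""
    return min(min(dot(a, v) for a in F) / v[i] for i, F in enumerate(sets))

# Toy family: F_1 = {(1,0), (0,2)}, F_2 = {(1,1)}.  The matrix
# A = [[0,2],[1,1]] has rho(A) = 2 and leading eigenvector v = (1,1);
# r_max returns 2, certifying via (4) that rho_max = 2.
sets = [[[1, 0], [0, 2]], [[1, 1]]]
upper = r_max(sets, [1.0, 1.0])
lower = r_min(sets, [1.0, 1.0])
```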

However, for sparse matrices the situation is more difficult. The inequalities $\rho_{k+1} \ge \rho_k$ are not necessarily strict: the algorithm may stay at the same value of the spectral radius for many iterations and may even cycle. Before addressing this issue, we write a formal procedure for both algorithms.

2.2. The algorithms

The spectral simplex method.

Initialization. Taking arbitrary rows $\boldsymbol{a}_i^{(1)} \in \mathcal{F}_i$, $i = 1, \ldots, n$, we form a matrix $A_1$. Denote its rows by $\boldsymbol{a}_i^{(1)}$ and its leading eigenvector by $\boldsymbol{v}_1$.

Main loop. The $k$th iteration. We have a matrix $A_k$ composed of rows $\boldsymbol{a}_i^{(k)} \in \mathcal{F}_i$. Compute its leading eigenvector $\boldsymbol{v}_k$ (if it is not unique, take any of them) and, for $i = 1, 2, \ldots$ successively, find a solution $\bar{\boldsymbol{a}}_i$ of the following problem:

$$\bar{\boldsymbol{a}}_i \;=\; \arg\max_{\boldsymbol{a} \in \mathcal{F}_i}\, (\boldsymbol{a}, \boldsymbol{v}_k). \qquad (5)$$

If $\mathcal{F}_i$ is finite, then this problem is solved by exhaustion; if $\mathcal{F}_i$ is polyhedral, then it is solved as an LP problem, and $\bar{\boldsymbol{a}}_i$ is found among its vertices.

If $(\bar{\boldsymbol{a}}_i, \boldsymbol{v}_k) = (\boldsymbol{a}_i^{(k)}, \boldsymbol{v}_k)$ and $i < n$, then we set $\boldsymbol{a}_i^{(k+1)} = \boldsymbol{a}_i^{(k)}$ and solve problem (5) for the next row. If this equality also holds for $i = n$, then the algorithm terminates. In this case, if $\boldsymbol{v}_k > 0$, the matrix $A_k$ is maximal in each row, and $\rho(A_k) = \rho_{\max}$, i.e., $A_k$ is a solution. If $(\bar{\boldsymbol{a}}_i, \boldsymbol{v}_k) > (\boldsymbol{a}_i^{(k)}, \boldsymbol{v}_k)$, then we set $\boldsymbol{a}_i^{(k+1)} = \bar{\boldsymbol{a}}_i$ and $\boldsymbol{a}_j^{(k+1)} = \boldsymbol{a}_j^{(k)}$ for all $j \ne i$, and form the corresponding matrix $A_{k+1}$. Thus, $A_{k+1}$ is obtained from $A_k$ by replacing its $i$th row by $\bar{\boldsymbol{a}}_i$. Then we start the $(k+1)$st iteration.

If we obey the pivoting rule, we solve problems (5) for all $i = 1, \ldots, n$ and take the row for which the corresponding ratio in (2) is maximal. Then $A_{k+1}$ is obtained from $A_k$ by replacing this row by $\bar{\boldsymbol{a}}_i$.

Termination. If $A_k$ is maximal in each row and $\boldsymbol{v}_k > 0$, then $A_k$ is a solution. Otherwise, we get an estimate for $\rho_{\max}$ by inequality (4).

End.

###### Remark 1

If each row of $A_k$ makes the maximal scalar product with the eigenvector $\boldsymbol{v}_k$, but $\boldsymbol{v}_k$ is not strictly positive (i.e., $A_k$ is “almost” maximal in each row), then we cannot conclude that $\rho(A_k) = \rho_{\max}$. However, in this case the family is reducible: the subspace spanned by the vectors with the same support as $\boldsymbol{v}_k$ is invariant with respect to all matrices from $\mathcal{F}$. Hence, the problem of maximizing the spectral radius reduces to two problems of smaller dimensions. Therefore, before running the algorithm we can factorize $\mathcal{F}$ into several irreducible families; the procedure is written in detail in [28]. However, this way is not very efficient, since the case when the final leading eigenvector is not strictly positive occurs rarely. Another way is to factorize the family after termination into two families, as written above, and then run the algorithm separately on each of them.

The spectral simplex method for the minimization problem is the same, with maximum replaced by minimum in problem (5) and the positivity assumption omitted at the termination of the algorithm.

The greedy method

Initialization. Taking arbitrary rows $\boldsymbol{a}_i^{(1)} \in \mathcal{F}_i$, $i = 1, \ldots, n$, we form a matrix $A_1$ and take its leading eigenvector $\boldsymbol{v}_1$.

Main loop. The $k$th iteration. We have a matrix $A_k$ composed of rows $\boldsymbol{a}_i^{(k)} \in \mathcal{F}_i$. Compute its leading eigenvector $\boldsymbol{v}_k$ (if it is not unique, take any of them) and, for each $i = 1, \ldots, n$, find a solution $\bar{\boldsymbol{a}}_i$ of problem (5). For each $i$, we do:

If $(\bar{\boldsymbol{a}}_i, \boldsymbol{v}_k) = (\boldsymbol{a}_i^{(k)}, \boldsymbol{v}_k)$, then we set the $i$th row of $A_{k+1}$ equal to that of $A_k$, i.e., $\boldsymbol{a}_i^{(k+1)} = \boldsymbol{a}_i^{(k)}$.

Otherwise, if $(\bar{\boldsymbol{a}}_i, \boldsymbol{v}_k) > (\boldsymbol{a}_i^{(k)}, \boldsymbol{v}_k)$, then we set $\boldsymbol{a}_i^{(k+1)} = \bar{\boldsymbol{a}}_i$.

We form the corresponding matrix $A_{k+1}$. If the first case took place for all $i$, i.e., $A_{k+1} = A_k$, and if $\boldsymbol{v}_k > 0$, then $A_k$ is maximal in each row; thus, the answer is $\rho_{\max} = \rho(A_k)$. If $\boldsymbol{v}_k$ has some zero components, then the family is reducible; we stop the algorithm and factorize the family (see Remark 1). Otherwise, go to the $(k+1)$st iteration.

Termination. If $A_k$ is maximal in each row, then $A_k$ is a solution. Otherwise, we get an estimate for $\rho_{\max}$ by inequality (4).

End.

The greedy method for the minimization problem is the same, with maximum replaced by minimum in problem (5) and the positivity assumption omitted at the termination of the algorithm.
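The whole greedy loop for finite uncertainty sets fits in a few dozen lines. The sketch below is our own illustrative code (not the authors' implementation): leading eigenvectors are approximated by the power method, each iteration replaces every non-optimal row by the best element of its uncertainty set, and the loop stops when the matrix is maximal in each row.

```python
# Illustrative pure-Python sketch of the greedy method (maximization)
# for finite uncertainty sets.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def leading_eigenpair(A, iters=1000):
    """Approximate leading eigenvalue/eigenvector by the power method,
    with max-norm scaling of the iterates."""
    v = [1.0] * len(A)
    rho = 0.0
    for _ in range(iters):
        w = [dot(row, v) for row in A]
        rho = max(w)
        if rho == 0.0:                 # the power of A vanished: rho(A) = 0
            return 0.0, v
        v = [x / rho for x in w]
    return rho, v

def greedy_max(sets, max_iters=100):
    """Each iteration replaces all non-optimal rows simultaneously."""
    A = [list(F[0]) for F in sets]     # arbitrary initial choice of rows
    for _ in range(max_iters):
        rho, v = leading_eigenpair(A)
        B = []
        for a_i, F in zip(A, sets):
            best = max(F, key=lambda a: dot(a, v))
            # keep the current row when it is already optimal
            B.append(a_i if dot(a_i, v) >= dot(best, v) - 1e-9 else list(best))
        if B == A:                     # maximal in each row: done
            return A, rho
        A = B
    return A, leading_eigenpair(A)[0]

# Toy family: F_1 = {(1,0), (0,2)}, F_2 = {(1,1)}; the maximal
# spectral radius 2 is attained at [[0,2],[1,1]].
sets = [[[1, 0], [0, 2]], [[1, 1]]]
A_opt, rho_opt = greedy_max(sets)
```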

As we mentioned, in the case of strictly positive uncertainty sets both algorithms work efficiently and, if all the $\mathcal{F}_i$ are finite or polyhedral, they find the optimal matrix within finite time. However, if rows from the $\mathcal{F}_i$ have some zero components, the situation may be different. We begin with a corresponding example; then we modify the methods to converge for arbitrary non-negative uncertainty sets. In Section 5 we estimate the rate of convergence.

3. The cycling and the divergence phenomena

Both the spectral simplex method and the greedy method work very efficiently for strictly positive product families. However, if some matrices have zero components, the methods may not work at all. For the spectral simplex method, this phenomenon was observed in [28]. Here is an example for the greedy method.

We consider the product family of matrices defined by the following three uncertainty sets:

So, the sets $\mathcal{F}_1, \mathcal{F}_2, \mathcal{F}_3$ consist of finitely many rows each; hence, we have finitely many matrices in total. Taking the first row in each set we compose the first matrix $A_1$ (which is positive, hence irreducible), and the algorithm runs as follows: $A_1 \to A_2 \to A_3 \to A_2 \to \cdots$, where

The matrix $A_1$ has a unique leading eigenvector. Taking the maximal row in each set with respect to this eigenvector, we compose the next matrix $A_2$. It has a multiple leading eigenvalue. Choosing one of its leading eigenvectors, we make the next iteration and obtain the matrix $A_3$; it also has a multiple leading eigenvalue. Choosing one of its leading eigenvectors, we make the next iteration and come back to the matrix $A_2$. The algorithm cycles.

We see that the greedy method gets stuck on a cycle of length two, and the corresponding value of the spectral radius is not the maximal one. The maximal spectral radius of this family is realized by the matrix

which is missed by the algorithm.

In this example the algorithm never terminates and, when stopped, gives a wrong value of the maximal spectral radius; the a posteriori estimate (4) also gives only a rough bound. This is the best the greedy method can do for this family.

4. Is the greedy method applicable for sparse matrices?

The example in the previous section raises a natural question: is the idea of the greedy algorithm applicable to strictly positive matrices only? Can it be modified to work with sparse matrices? To attack this problem, we first reveal two main difficulties caused by sparsity and then see how to treat them. We also mention that the same troubles occur for the spectral simplex method; they were overcome in [28], but those modifications are totally inapplicable to the greedy method.

4.1. Two problems: cycling and multiple choice of the leading eigenvector

In Section 3 we saw that if some vectors from the uncertainty sets have zero components, then the greedy method may cycle and, what is worse, may miss the optimal matrix and give a quite rough final estimate of the optimal value. Indeed, numerical experiments show that cycling often occurs for sparse matrices, especially for the minimization problem (for maximization it is rare but also possible). Moreover, the sparser the matrix, the more often cycling occurs.

The reason for cycling lies in multiple leading eigenvectors. Recall that the geometric multiplicity of an eigenvalue is the dimension of the subspace spanned by all eigenvectors corresponding to that eigenvalue; it never exceeds the algebraic multiplicity (the multiplicity of the corresponding root of the characteristic polynomial). We call the subspace spanned by the leading eigenvectors the Perron subspace. If at some iteration we get a matrix with a multiple leading eigenvector, then there is an ambiguity in choosing the eigenvector from the Perron subspace, and an “unsuccessful” choice may cause cycling. In the example in Section 3, both matrices $A_2$ and $A_3$ have Perron subspaces of dimension greater than one, and this is the reason for the trouble. On the other hand, if in all iterations the leading eigenvectors are simple, i.e., their geometric multiplicity is equal to one, then cycling never occurs, as the following proposition guarantees.

###### Proposition 2

If in all iterations of the greedy method the leading eigenvectors are simple, then the method does not cycle and (in the case of finite or polyhedral sets $\mathcal{F}_i$) terminates within finite time. The same is true for the spectral simplex method.

This fact follows from Theorem 2 proved below. Of course, if all matrices are strictly positive, or at least irreducible, then by the well-known results of the Perron-Frobenius theory [13, chapter 13, §2], all leading eigenvectors are simple, and cycling never occurs. But how can we guarantee the simplicity of the leading eigenvectors in the case of sparse matrices? For the spectral simplex method, this problem was solved by the following curious fact:

Theorem A. [28] Consider the spectral simplex method in the maximization problem. If the initial matrix has a simple leading eigenvector, then so do all successive matrices and the method does not cycle.

So, in the spectral simplex method, the issue is solved merely by choosing a proper initial matrix $A_1$. In this case there is no uncertainty in choosing the leading eigenvectors in any iteration. Moreover, the leading eigenvectors depend continuously on the matrices, hence the algorithm is stable under small perturbations. Note that this holds only for the maximization problem! For minimizing the spectral radius, cycling can be avoided as well, but in a totally different way [28].

However, Theorem A does not hold for the greedy method. This is seen already in the example from Section 3: the strictly positive matrix $A_1$ (hence having a simple leading eigenvector) produces the matrix $A_2$ with multiple leading eigenvectors. So, a proper choice of the initial matrix does not guarantee that the greedy method avoids cycling!

In practice, because of rounding errors, not only multiple but also “almost multiple” leading eigenvectors (when the second largest eigenvalue is very close to the first one) may cause cycling, or at least an essential slowdown of the algorithm.

Nevertheless, for the greedy method a way to avoid cycling does exist, even for sparse matrices. We suggest a strategy and demonstrate its efficiency both theoretically and numerically. Even for very sparse matrices, the algorithm works as fast as for positive ones.

4.2. Anti-cycling by selection of the leading eigenvectors

A naive idea to avoid cycling would be to slightly change all the uncertainty sets, making them strictly positive: for example, to replace them by the perturbed sets $\mathcal{F}_i^{(\varepsilon)} = \{\boldsymbol{a} + \varepsilon \boldsymbol{e} : \boldsymbol{a} \in \mathcal{F}_i\}$, where $\boldsymbol{e}$ is the vector of ones and $\varepsilon > 0$ is a small constant. So, we add a positive number to each entry of the vectors from the uncertainty sets. All matrices become strictly positive: $A_\varepsilon = A + \varepsilon E$, where $E$ is the matrix of ones. By Proposition 2, the greedy method for the perturbed family does not cycle. However, this perturbation can significantly change the answer of the problem. For example, if a matrix $A$ has ones over the main diagonal and all other elements zero ($a_{ij} = 1$ if $j = i+1$ and $a_{ij} = 0$ otherwise), then $\rho(A) = 0$, while $\rho(A_\varepsilon) \ge \varepsilon^{1/n}$. Even if $\varepsilon$ is very small, say $\varepsilon = 10^{-n}$, we have $\rho(A_\varepsilon) \ge 10^{-1} = 0.1$, while $\rho(A) = 0$. Hence, we cannot merely replace the family $\mathcal{F}$ by $\mathcal{F}_\varepsilon$ without the risk of a big error. Our crucial idea is to deal with the original family $\mathcal{F}$ but use the eigenvectors of the perturbed family $\mathcal{F}_\varepsilon$.
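The danger of perturbing the matrices themselves can be checked numerically. The following is our own sketch, mirroring the shift-matrix example above with the illustrative choice $n = 8$ and $\varepsilon = 10^{-8}$: the spectral radius jumps from $0$ to roughly $\varepsilon^{1/n} = 0.1$.

```python
# The n x n shift matrix S (ones above the diagonal) is nilpotent,
# so rho(S) = 0; the perturbed matrix S + eps*E has spectral radius
# of order eps^(1/n), which here is about 0.1.

n, eps = 8, 1e-8

def power_rho(A, iters=5000):
    """Leading-eigenvalue estimate by the power method (max-norm)."""
    v = [1.0] * len(A)
    rho = 0.0
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        rho = max(w)
        if rho == 0.0:          # the iterate vanished: rho(A) = 0
            return 0.0
        v = [x / rho for x in w]
    return rho

S = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
S_eps = [[s + eps for s in row] for row in S]

rho_S = power_rho(S)        # exactly 0
rho_eps = power_rho(S_eps)  # close to 0.1, despite eps = 1e-8
```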

###### Definition 2

The selected leading eigenvector of a non-negative matrix $A$ is the limit $\lim_{\varepsilon \to 0} \boldsymbol{v}_\varepsilon$, where $\boldsymbol{v}_\varepsilon$ is the normalized leading eigenvector of the perturbed matrix $A_\varepsilon = A + \varepsilon E$.

The next result gives a way to find the selected eigenvector explicitly. Before formulating it we make some observations.

If the leading eigenvector is simple, then it coincides with the selected eigenvector. If it is multiple, then the selected eigenvector is some vector from the Perron subspace. Thus, Definition 2 provides a way to select one vector from the Perron subspace of every non-negative matrix. We are going to see that this selection works for the greedy method.

Consider the power method for computing leading eigenvectors: $\boldsymbol{x}_{k+1} = A \boldsymbol{x}_k / \|A \boldsymbol{x}_k\|$. In most practical cases it converges to a leading eigenvector linearly with the rate $|\lambda_2|/\lambda_1$, where $\lambda_1, \lambda_2$ are the first and the second largest by modulus eigenvalues of $A$, or sublinearly if the leading eigenvalue has non-trivial (i.e., of size bigger than one) Jordan blocks. In rare cases, when there are several largest by modulus eigenvalues, the “averaged” sequence $\frac{1}{h}(\boldsymbol{x}_k + \cdots + \boldsymbol{x}_{k+h-1})$ converges to a leading eigenvector, where $h$ is the imprimitivity index (see [13, chapter 13]). In all these cases we will say that the power method converges and, for the sake of simplicity, assume that there is only one largest by modulus eigenvalue, maybe multiple, i.e., that $|\lambda_2| < \lambda_1$.

###### Theorem 1

The selected leading eigenvector of a non-negative matrix $A$ is proportional to the limit of the power method applied to the matrix $A$ with the initial vector $\boldsymbol{e}$.

Proof. It may be assumed that . The spectral radius of the matrix strictly increases in , so for some . We have

Denoting by the sum of entries of the vector , we obtain

and hence

Since , we have

and obtain

Assume now that the power method for the matrix and for converges to some vector . In case ( is the imprimitivity index), this means that as . Then the direction of the vector converges as to the direction of the vector . Since as , the theorem follows. If , then . Arguing as above we conclude again that the direction of the vector converges as to the direction of the vector . This completes the proof.
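Theorem 1 can be illustrated on a small matrix with a multiple leading eigenvalue. The code below is our own sketch (the matrix is our illustrative choice): for $A = \mathrm{diag}(2, 2, 1)$ the Perron subspace is two-dimensional, and the power method started from $\boldsymbol{e} = (1,1,1)$ converges to the direction $(1, 1, 0)$, which is exactly the limit of the leading eigenvectors of $A + \varepsilon E$ as $\varepsilon \to 0$.

```python
# Power method started from the vector of ones, max-norm scaling.

def power_vector(A, iters=200):
    v = [1.0] * len(A)
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        s = max(w)
        v = [x / s for x in w]
    return v

A = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]

v = power_vector(A)   # approaches the selected direction (1, 1, 0)
```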

###### Definition 3

The greedy method that uses the selected leading eigenvectors in all iterations is called the selective greedy method.

Now we arrive at the main result of this section. We show that using selected eigenvectors avoids cycling.

###### Theorem 2

The selective greedy method does not cycle.

Proof. Consider the case of maximization; the case of minimization is proved in the same way. In the $k$th iteration of the algorithm we have a matrix $A_k$ and its selected leading eigenvector $\boldsymbol{v}_k$. Assume the algorithm cycles: $A_{k+p} = A_k$, and let $p \ge 2$ be the minimal length of the cycle (for $p = 1$, the equality $A_{k+1} = A_k$ means that $A_k$ is the optimal matrix, so this is not cycling). Consider the chain of perturbed matrices $(A_k)_\varepsilon, \ldots, (A_{k+p})_\varepsilon$, where

1) if $(\bar{\boldsymbol{a}}_i, \boldsymbol{v}_k) = (\boldsymbol{a}_i^{(k)}, \boldsymbol{v}_k)$, then this row is not changed in this iteration: $\boldsymbol{a}_i^{(k+1)} = \boldsymbol{a}_i^{(k)}$. In this case the corresponding rows of the perturbed matrices coincide as well.

2) if $(\bar{\boldsymbol{a}}_i, \boldsymbol{v}_k) > (\boldsymbol{a}_i^{(k)}, \boldsymbol{v}_k)$, then the row $\boldsymbol{a}_i^{(k)}$ is not optimal. Hence the same inequality is true for the perturbed matrices and for the eigenvector of $(A_k)_\varepsilon$, whenever $\varepsilon$ is small enough.

Thus, in both cases, each row of $(A_{k+1})_\varepsilon$ makes a bigger or equal scalar product with the eigenvector than the corresponding row of $(A_k)_\varepsilon$. Moreover, in at least one row this scalar product is strictly bigger, since otherwise $A_{k+1} = A_k$, which contradicts the minimality of the cycle. This, in view of the strict positivity of the eigenvector, implies that the spectral radius strictly increases [28, lemma 2]. We see that along the chain of perturbed matrices the spectral radius strictly increases. Therefore $(A_{k+p})_\varepsilon \ne (A_k)_\varepsilon$, and hence $A_{k+p} \ne A_k$. The contradiction completes the proof.

###### Example 1

We apply the selective greedy method to the family from Section 3, on which the (usual) greedy method cycles and arrives at a wrong solution. Now we have:

Thus, the selective greedy method arrives at the optimal matrix at the third iteration. This matrix is maximal in each row with respect to its leading eigenvector , as the last iteration demonstrates.

The selective greedy method repeats the first iteration of the (usual) greedy method, but in the second and the third iterations it chooses different leading eigenvectors (the selected eigenvectors), thanks to which it does not cycle and goes directly to the optimal solution.

4.3. Realization of the selective greedy method

Thus, the greedy method does not cycle, provided it uses the selected leading eigenvectors in all iterations. If the leading eigenvector is unique, then it coincides with the selected leading eigenvector, and there is no problem; the difficulty occurs when the leading eigenvector is multiple. The following crucial fact is a straightforward consequence of Theorem 1.

###### Corollary 1

The power method applied with the initial vector $\boldsymbol{e}$ converges to the selected leading eigenvector.

Therefore, to realize the selective greedy method we compute all the leading eigenvectors by the power method, always starting with the same vector $\boldsymbol{e}$ (the vector of ones). After a sufficient number of iterations we come close to the limit, which is, by Corollary 1, the selected eigenvector (up to normalization). So, we actually perform the greedy method with approximations of the selected eigenvectors, which are the leading eigenvectors of perturbed matrices $A + \varepsilon E$ for some small $\varepsilon$.

Thus, to avoid cycling, we compute all eigenvectors by the power method starting with the same initial vector $\boldsymbol{e}$. Because of computer approximations, we actually deal with strictly positive matrices from the family $\mathcal{F}_\varepsilon$ for some small $\varepsilon$.

4.4. The procedure of the selective greedy method

We precisely follow the algorithm of the greedy method presented in subsection 2.2, and in each iteration we compute the corresponding leading eigenvectors by the power method starting with the vector of ones $\boldsymbol{e}$. The precision parameter $\varepsilon$ of the power method is chosen in advance and is the same in all iterations. By Theorem 2, if $\varepsilon$ is small enough, then the greedy method does not cycle and finds the solution within finite time. In practice, if the algorithm cycles for a given $\varepsilon$, then we reduce $\varepsilon$ and restart the algorithm.

To avoid any trouble with the power method for sparse matrices, we can use the following well-known fact: if $A$ is a non-negative matrix, then the matrix $A + I$ has a unique (maybe multiple) eigenvalue equal by modulus to $\rho(A + I)$. This implies that the power method applied to the matrix $A + I$ always converges to its leading eigenvector. Of course, $A + I$ has the same leading eigenvectors as $A$. Hence, we can replace each uncertainty set $\mathcal{F}_i$ by the set $\{\boldsymbol{a} + \boldsymbol{e}_i : \boldsymbol{a} \in \mathcal{F}_i\}$, where $\boldsymbol{e}_i$ is the $i$th canonical basis vector. After this, each matrix $A$ is replaced by $A + I$, and we have no trouble with the convergence of the power method.
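The effect of the shift $A \mapsto A + I$ is easy to see on an imprimitive matrix. The sketch below (our own code; the matrix $A = [[0,2],[1,0]]$ is an illustrative choice with $\rho(A) = \sqrt{2}$) shows that the plain power method oscillates and returns a wrong estimate, while on $A + I$ it converges; subtracting $1$ recovers $\rho(A)$.

```python
# Power method with max-norm scaling; returns (rho estimate, vector).

def power_rho_vec(A, iters=200):
    v = [1.0] * len(A)
    rho = 0.0
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        rho = max(w)
        v = [x / rho for x in w]
    return rho, v

A = [[0.0, 2.0], [1.0, 0.0]]            # imprimitive: eigenvalues +-sqrt(2)
rho_plain, _ = power_rho_vec(A)         # oscillates, wrong estimate

A_shift = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(2)]
           for i in range(2)]           # A + I is primitive
rho_shift, v = power_rho_vec(A_shift)
rho_A = rho_shift - 1.0                 # recovers rho(A) = sqrt(2)
```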

5. The rate of convergence

In this section we address two issues: revealing the secret of the extremely fast convergence of the greedy method, and explaining why the spectral simplex method converges more slowly (although still very fast). We restrict ourselves to the maximization problem, where we find the maximal spectral radius $\rho_{\max}$. For minimization problems, the results and the proofs are the same.

First of all, let us remark that, due to possible cycling, no unconditional estimates for the rate of convergence exist: indeed, in Section 3 we saw that the greedy method may not converge to the solution at all. That is why we estimate the rate of convergence only under some favourable assumptions (positivity of the limit matrix, positivity of its leading eigenvector, etc.). Actually, they are not very restrictive, since the selective greedy method has the same convergence rate as the greedy method for a strictly positive family $\mathcal{F}_\varepsilon$.

We are going to show that under those assumptions, both methods have global linear convergence, but the greedy method, in contrast to the spectral simplex method, also has local quadratic convergence. In both cases we estimate the parameters of linear and quadratic convergence.

5.1. Global linear convergence

The following result is formulated for both the greedy and the spectral simplex methods. The same holds for the method with the pivoting rule. We denote by $A_k$ the matrix obtained in the $k$th iteration, by $\rho_k$ its spectral radius, and by $u_k, v_k$ its left and right leading eigenvectors (if there are multiple ones, we take any of them). We also denote by $\rho_{\max}$ the maximal spectral radius over the family. As usual, $\{e_1, \dots, e_d\}$ is the canonical basis in $\mathbb{R}^d$ and $e$ is the vector of ones.

###### Theorem 3

In both the greedy method and the spectral simplex method, in each iteration we have

(6) |

provided .

Proof. Consider the value defined in (2). Let the maximum in formula (2) be attained in the $i$th row. Then for every matrix , we have , and therefore . Thus, . On the other hand, in both the greedy and the simplex methods, the $i$th component of the vector is equal to the $i$th component of the vector . In all other components, we have . Therefore,

Multiplying this inequality by the vector from the left, we obtain

Since , we have and

Dividing by and taking into account that , we arrive at (6).

Rewriting (6) in the form

we see that the $k$th iteration reduces the distance to the maximal spectral radius by at least

###### Corollary 2

Each iteration of the greedy method and of the spectral simplex method reduces the distance to the optimal spectral radius by at least

If all uncertainty sets are strictly positive, then the contraction coefficient from Corollary 2 can be roughly estimated from above. Let $a$ and $b$ be the smallest and the largest entries, respectively, of all vectors from the uncertainty sets. For each matrix $A$ from the family, we denote by $v$ its leading eigenvector, by $\rho$ its spectral radius, and by $s$ the sum of entries of $v$. Then for each $i$, we have $\rho\, v_i \,=\, \sum_{j} a_{ij} v_j \,\ge\, a\, s$, hence $v_i \ge a s / \rho$. Similarly, $v_i \le b s/\rho$. Hence, for every matrix from the family, the ratio between the smallest and the largest entries of its leading eigenvector is at least $a/b$. The same is true for the left eigenvector. Therefore,

We have arrived at the following theorem on the global linear convergence.

###### Theorem 4

If all uncertainty sets are strictly positive, then the rate of convergence of both the greedy method and the spectral simplex method is at least linear with the factor

(7) |

where $a$ and $b$ are the smallest and the largest entries, respectively, of vectors from the uncertainty sets.

Thus, in each iteration of the greedy method and of the spectral simplex method, we have

(8) |
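The entrywise bound on leading eigenvectors used in the estimate above is easy to check numerically. The sketch below is our own illustration: it samples random matrices with all entries in $[a, b]$ and verifies that the ratio of the smallest to the largest entry of the Perron eigenvector never drops below $a/b$.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, d = 0.2, 3.0, 8
worst = 1.0
for _ in range(200):
    A = rng.uniform(a, b, size=(d, d))       # all entries of A lie in [a, b]
    vals, vecs = np.linalg.eig(A)
    v = vecs[:, np.argmax(vals.real)].real   # Perron eigenvector is real
    v = np.abs(v)                            # fix the sign of the eigenvector
    worst = min(worst, v.min() / v.max())
# Perron theory gives v.min() / v.max() >= a / b for every such matrix
```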

5.2. Local quadratic convergence

Now we are going to see that the greedy method actually has a quadratic rate of convergence in a small neighbourhood of the optimal matrix. This explains its fast convergence. We establish this result under the following two assumptions: 1) the optimal matrix, at which the maximal spectral radius is attained, is strictly positive; 2) in a neighbourhood of the optimal matrix, the uncertainty sets are bounded by smooth strictly convex surfaces. The latter assumption is, of course, not satisfied for polyhedral sets. Nevertheless, the quadratic convergence for sets with a smooth boundary explains the fast convergence for finite or polyhedral sets as well.

The local quadratic convergence means that there are constants $C > 0$ and $\delta > 0$ for which the following holds: if $\rho_{\max} - \rho_k < \delta$, then $\rho_{\max} - \rho_{k+1} \le C\,(\rho_{\max} - \rho_k)^2$. In this case, the algorithm converges whenever $C\,(\rho_{\max} - \rho_{k_0}) < 1$ at some iteration $k_0$, in which case for every iteration $k \ge k_0$, we have
$$\rho_{\max} - \rho_k \;\le\; \frac{1}{C}\,\bigl(C\,(\rho_{\max} - \rho_{k_0})\bigr)^{2^{\,k-k_0}}$$

(see, for instance, [4]).
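The digit-doubling effect of quadratic convergence can be seen on a toy error sequence. This is our own illustration with arbitrary constants $C$ and $e_0$, not data from the paper:

```python
# Model the quadratic recursion e_{k+1} = C * e_k**2 with C * e_0 < 1.
C, e0 = 2.0, 0.1
errors = [e0]
for _ in range(5):
    errors.append(C * errors[-1] ** 2)
# Closed form: e_k = (1/C) * (C*e0)**(2**k), so the error is squared
# (up to the constant C) at every step and the exponent doubles.
```

After five steps the error is already of order $10^{-23}$, which matches the behaviour of the greedy method observed in the numerical experiments: a few iterations suffice.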

As usual, we use the Euclidean norm. We also need the -norm. We denote by $I$ the identity matrix; $A \succeq 0$ means that the matrix $A$ is positive semidefinite.

###### Theorem 5

Suppose the optimal matrix is strictly positive and all uncertainty sets are strongly convex and smooth in a neighbourhood of it; then the greedy algorithm has local quadratic convergence to the optimal matrix.

In the proof we use the following auxiliary fact.

###### Lemma 1

If is a strictly positive matrix with and with the leading eigenvector , then for every and for every vector such that , we have , where

Proof. Let . Denote , where are some non-negative numbers. Choose one for which the value is maximal. Let it be and let (the opposite case is considered in the same way). Since and , it follows that there exists for which . Since , we have . Therefore,

(9) |

The last sum is greater than or equal to the single term . Note that for all other , we have , hence . Substituting into (9), we get . Therefore, .

Hence, , which completes the proof.

Proof of Theorem 5. Without loss of generality it can be assumed that . It is known that any strongly convex smooth surface can be defined by a smooth convex function so that and for all . For example, one can set equal to the distance from to for points outside or, in some neighbourhood, inside it. For each , we set and denote by such a function that defines the surface . Fix an index and denote by and the th rows of the matrices and respectively (so, we omit the subscript to simplify the notation). Since , it follows that . Denote by and the lower and upper bounds of the quadratic form in some neighbourhood of , i.e., at all points such that . Writing the Taylor expansion of the function at the point , we have for small :

where the remainder . Substituting and taking into account that and belong to , we obtain . Thus, . Hence , because both

This holds for each row of , therefore , and hence , where the value is defined in Lemma 1.

Further,