A prescient quote from Tjalling Koopmans in the introduction to [64] reads: "It has been found so far that, for any computation method which seems useful in relation to some set of data, another set of data can be constructed for which that method is obviously unsatisfactory." (This compares strikingly with the quote from Bixby et al. [13] at the end of this section.)
In [30], Dantzig writes: "Luckily the particular geometry used in my thesis was the one associated with the columns of the matrix instead of its rows. This column geometry gave me the insight which led me to believe that the simplex method would be an efficient solution technique. I earlier had rejected the method when I viewed it in the row geometry because running around the outside edges seemed so unpromising."
Since much has been written about the early history (and pre-history) of linear programming, for example in [29], Chapter 2, [30], and [83], pp. 209–225, this paper will concentrate more on developments since the seventies. I hope to intrigue the reader enough to investigate some of the byways and alleys associated with linear programming as well as the more well-travelled highways. We will look at simplex, ellipsoid, and interior-point methods, and also at least mention some other approaches. Of course, I hope the reader will forgive my personal bias in the topics selected. (Let me mention here Megiddo's article [75], which also surveys some recent developments from a different viewpoint.)
Following the development of the simplex method in 1947 [27], the '50s were the decade of developing the theoretical underpinnings of linear programming, of extending its applicability in industrial settings and to certain combinatorial problems, and of the first general-purpose codes. The '60s saw the emergence of large-scale linear programming, of exploitation of special structure (again pioneered by Dantzig and Dantzig-Wolfe in [28,31]), and of extensions to quadratic programming and linear complementarity. If the '50s and the '60s were the decades of unbridled enthusiasm, the '70s were the decade of doubt, as the theory of computational complexity was developed and Klee and Minty [60] showed that the simplex method with a common pivot rule requires an exponential number of pivots in the worst case. We will concentrate on the developments since that time; hope has been restored by new polynomial-time algorithms, by bounds on the expected number of pivot steps, and by amazing computational studies on problems with numbers of variables ranging up to the millions.
Linear programming studies the optimization of a linear function over a feasible set defined by linear inequalities, hence a polyhedron. The problem is in some sense trivial, since it is only necessary to examine a finite number of vertices (and possibly edges), but if one is interested in efficient computation, the topic is wonderfully rich and has been the subject of numerous surprising new insights.
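The remark that the problem is "in some sense trivial" can be made concrete with a brute-force sketch (not taken from this survey, and emphatically not an efficient method): in two dimensions, every vertex of the feasible polyhedron is the intersection of two constraint boundaries, so one can enumerate all such intersections, discard the infeasible ones, and keep the best objective value. The example problem and function names below are invented for illustration.

```python
# Brute-force vertex enumeration for a 2-D linear program: maximize c.x
# over {x : A x <= b}. Each candidate vertex is the intersection of two
# constraint boundaries; we keep feasible ones and take the best value.
from itertools import combinations

def solve_2x2(a1, a2, rhs):
    # Solve the 2x2 system [a1; a2] x = rhs by Cramer's rule; None if singular.
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None
    x = (rhs[0] * a2[1] - a1[1] * rhs[1]) / det
    y = (a1[0] * rhs[1] - rhs[0] * a2[0]) / det
    return (x, y)

def brute_force_lp(c, A, b):
    # Returns (optimal value, optimal vertex), or None if infeasible/empty.
    eps = 1e-9
    best = None
    for i, j in combinations(range(len(A)), 2):
        v = solve_2x2(A[i], A[j], (b[i], b[j]))
        if v is None:
            continue  # parallel boundaries, no vertex
        if all(A[k][0] * v[0] + A[k][1] * v[1] <= b[k] + eps
               for k in range(len(A))):
            val = c[0] * v[0] + c[1] * v[1]
            if best is None or val > best[0]:
                best = (val, v)
    return best

# Maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
A = [(1, 1), (1, 0), (-1, 0), (0, -1)]
b = (4, 3, 0, 0)
print(brute_force_lp((1, 2), A, b))  # optimum 8 at vertex (0, 4)
```

With m constraints in n variables, this approach examines on the order of m-choose-n candidate points, which is exactly why efficient computation, rather than mere finiteness, is the interesting question.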