CGGC Seminar: Approximate Greatest Common Divisors and Polynomial Roots
9:30-11:30, 13:00-15:00, Taub 337
Joab Winkler (CS, University of Sheffield)
Tuesday, 19.8.2008, 09:30
Room 337-8 Taub Bld.
The discussion of the structured condition number and Gauss' method for computing the roots of a polynomial shows that a sequence of greatest common divisor (GCD) computations of two polynomials must be performed. This is a classical problem in mathematics that is often solved by Euclid's algorithm. This algorithm is satisfactory for symbolic computations with exact polynomials, but the computation of the GCD of inexact polynomials is not trivial because it is ill-posed (not merely ill-conditioned): the output (the GCD) is a discontinuous function of the input (the polynomials). In particular, the polynomials p(x) and q(x) may have a non-constant GCD, but their perturbed forms p(x) + δp(x) and q(x) + δq(x) are, with probability almost one, coprime. Furthermore, the calculation of the degree of the GCD reduces to the determination of the rank of a matrix, which is a challenging problem in computational linear algebra. This topic will be addressed in Lecture 6.
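The discontinuity described above is easy to demonstrate. A minimal sketch using SymPy (the polynomials are illustrative, not taken from the talk): two polynomials share a factor, yet perturbing a single coefficient by 10^-10 makes the pair coprime.

```python
# Sketch of why GCD computation is ill-posed: an arbitrarily small
# perturbation of the coefficients destroys a non-constant GCD.
import sympy as sp

x = sp.symbols('x')

# p and q share the factor (x - 1), so their GCD is non-constant.
p = (x - 1) * (x - 2)
q = (x - 1) * (x - 3)
print(sp.gcd(p, q))        # x - 1

# Perturb the constant coefficient of p by 10^-10: the perturbed
# pair has no common factor, so the GCD collapses to a constant.
eps = sp.Rational(1, 10**10)
p_pert = sp.expand(p) + eps
print(sp.gcd(p_pert, q))   # 1
```

This is why no tolerance on Euclid's algorithm alone suffices for noisy data: the exact GCD of the perturbed pair is genuinely constant, and only an approximate GCD of nearby polynomials is meaningful.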

Data from practical examples is always corrupted by noise, and it is therefore assumed that the given polynomials, the GCD of which it is required to compute, are inexact. It therefore follows that, with high probability, they are coprime, and thus only an approximate greatest common divisor (AGCD) can be computed. The term approximate greatest common divisor arises because the given inexact polynomials must be perturbed slightly in order to induce a non-constant GCD, and thus the computed GCD is approximate with respect to the given inexact polynomials. Several different methods can be used to compute an AGCD, and the technique based on the Sylvester resultant matrix S(f, g) of the polynomials f(x) and g(x) will be described. This matrix, which arises in classical algebraic geometry, is convenient for computations because it has a well-defined linear structure that can be preserved when it is desired to compute a perturbed form of this matrix. As noted above, it is assumed that f(x) and g(x) are inexact and therefore coprime, and it is therefore necessary to calculate the perturbations δf(x) and δg(x) such that f(x) + δf(x) and g(x) + δg(x) have a non-constant GCD. This is achieved by applying structured matrix methods to S(f+δf, g+δg) and its subresultants, which are matrices derived from S(f+δf, g+δg) by deleting some rows and columns. More precisely, it will be shown that the computation of an AGCD yields a least squares equality problem, which can be solved by the QR decomposition.
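The connection between the Sylvester matrix and the degree of the GCD can be sketched in a few lines of NumPy (an illustrative construction, not the speaker's implementation): for f of degree m and g of degree n, S(f, g) is the (m+n) x (m+n) matrix of shifted coefficient rows, and deg gcd(f, g) = m + n - rank S(f, g).

```python
# Sketch: the rank deficiency of the Sylvester matrix equals the
# degree of the GCD of the two polynomials.
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f (degree m) and g (degree n); coefficient
    arrays are ordered from the highest-degree term downwards."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                 # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

# f = (x-1)^2 (x-2) and g = (x-1)(x-3): gcd = x - 1, degree 1.
f = np.array([1.0, -4.0, 5.0, -2.0])   # x^3 - 4x^2 + 5x - 2
g = np.array([1.0, -4.0, 3.0])         # x^2 - 4x + 3
m, n = len(f) - 1, len(g) - 1
rank = np.linalg.matrix_rank(sylvester(f, g))
print(m + n - rank)                    # degree of the GCD: 1
```

With exact coefficients this rank test is clean; for inexact polynomials S(f, g) has full rank, and the numerical rank of the perturbed matrix must be estimated instead, which is the challenging problem mentioned above.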

Numerous computations have been performed and excellent results on high degree polynomials with roots of multiplicity 20 and 30 have been obtained. Some of these results will be shown and it will be explained that the simple linear structure of the Sylvester matrix introduces a degree of freedom that can be exploited to obtain significantly improved results. The introduction of this degree of freedom is not required when computations are performed in a symbolic environment, but it is crucial when the computations are performed in a floating point environment. Its importance for obtaining good results on high degree polynomials with a large number of roots of high multiplicity will be shown, and a simple and fast algorithm to obtain its optimal value will be discussed.
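One natural degree of freedom in a Sylvester-based formulation is the relative scaling of the two polynomials, i.e. working with S(f, αg) for a free parameter α. The following NumPy sketch is an assumption about how such a parameter might be chosen (a plain grid search, not the fast algorithm of the talk): it looks for the α that makes the numerical rank deficiency most visible, by maximizing the gap between consecutive singular values.

```python
# Hedged sketch: sweep the scaling parameter alpha in S(f, alpha*g)
# and pick the value that best separates the singular-value spectrum.
# The polynomials and the selection criterion are illustrative.
import numpy as np

def sylvester(f, g):
    """Sylvester matrix; coefficients ordered highest degree first."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

# Inexact versions of f = (x-1)^2 (x-2) and g = (x-1)(x-3).
rng = np.random.default_rng(0)
f = np.array([1.0, -4.0, 5.0, -2.0]) + 1e-8 * rng.standard_normal(4)
g = np.array([1.0, -4.0, 3.0])

best_alpha, best_gap = None, -np.inf
for alpha in np.logspace(-3, 3, 61):
    s = np.linalg.svd(sylvester(f, alpha * g), compute_uv=False)
    s /= s[0]                  # normalize by the largest singular value
    gaps = s[:-1] / s[1:]      # ratios of consecutive singular values
    gap = gaps.max()           # largest drop -> clearest numerical rank
    if gap > best_gap:
        best_alpha, best_gap = alpha, gap

print(f"best alpha ~ {best_alpha:.3g}, singular-value gap ~ {best_gap:.3g}")
```

In exact arithmetic the rank of S(f, αg) is independent of α, which is why the parameter is irrelevant symbolically; in floating point it changes the conditioning of the matrix, and a well-chosen value sharpens the numerical rank decision.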

More details:

