
Multiplication is crucial in many fields: adapting a recipe's proportions, calculating percentages, solving mathematics problems, and running computer programs. It is thus hardly surprising that it is one of the four basic arithmetic operations taught at school, along with addition, subtraction, and division. Yet in recent research, Joris van der Hoeven from the Laboratoire d'informatique de l'École polytechnique in Palaiseau (Essonne), and his Australian colleague David Harvey from the University of New South Wales, developed a method that can multiply whole numbers (integers, without decimals) more quickly. It has the stuff "to help increase the calculation speed of computers," van der Hoeven enthuses.

## The unbelievable slowness of the classical method

To get a better understanding, let us take another look at the classical multiplication method taught in school. It involves placing the two numbers to be multiplied on top of one another, then multiplying each figure in the first number by each figure in the second one, and finally adding the results.

For example, to multiply two integers of 3 figures each, 3 x 3 multiplications of figures are needed, or 9 operations, along with the addition of the 9 results. For two numbers of 5 figures each, 5 x 5 multiplications are required, or 25, along with the addition of the 25 results. More generally, multiplying two numbers of n figures each requires n x n, or n², digit multiplications.

As a result, the larger the numbers being multiplied, the more operations are needed.
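The schoolbook method described above can be sketched in a few lines of Python. This is an illustrative toy (the function name and the operation counter are ours, not from the article): it multiplies digit by digit and counts the single-digit multiplications, confirming the n x n growth.

```python
# Minimal sketch of the schoolbook multiplication method: multiply two
# non-negative integers digit by digit, counting how many single-digit
# multiplications are performed along the way.

def schoolbook_multiply(a: int, b: int) -> tuple[int, int]:
    """Return (a * b, number of single-digit multiplications used)."""
    digits_a = [int(d) for d in str(a)][::-1]  # least-significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result = 0
    count = 0
    for i, da in enumerate(digits_a):
        for j, db in enumerate(digits_b):
            result += da * db * 10 ** (i + j)  # shift by the digit positions
            count += 1
    return result, count

product, ops = schoolbook_multiply(123, 456)
# Two 3-figure numbers take 3 x 3 = 9 digit multiplications.
```

Running it on two 5-figure numbers reports 25 digit multiplications, matching the article's count.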

"Multiplying two numbers of a billion figures each requires a billion times a billion multiplications, in other words a quintillion (a billion billions, or 1018) steps. So if we assume that we need one second to perform a billion operations, as is the case for your average computer today, this requires a billion seconds... or nearly 32 years!" van der Hoeven explains. Hence the interest of developing faster techniques.

## An unprecedented algorithm

A number of multiplication algorithms that are faster than the classical method were developed in the past, especially in 1971 when the German mathematicians Arnold Schönhage and Volker Strassen discovered a procedure that reduces the number of operations needed to n x log(n) x log(log(n)). (The logarithm is a mathematical function that "transforms" multiplications into additions; for example, log(100) = 2, log(1,000) = 3 and log(1,000,000) = 6.) "This advance reduced the time it takes to multiply two numbers of a billion figures each to thirty seconds using one of today's laptops, as opposed, it bears reminding, to over 30 years," van der Hoeven underscores. In fact, Schönhage and Strassen's algorithm is now used by most software programs that calculate with large numbers.

Schönhage and Strassen also predicted the existence of an even faster algorithm, one that could multiply numbers with n figures in just n x log(n) operations. "Our recent article gives the first known example of an algorithm that achieves this," van der Hoeven points out. More precisely, the new algorithm reduces the number of operations that must be performed ("multiplications, but also and especially additions and subtractions," van der Hoeven emphasizes) to C x n x log(n), where C is a constant that depends on the speed of the machine performing the calculations. For example, if C equals 1, multiplying two numbers of 10 figures each would require 10 x log(10), in other words 10 operations, as opposed to 10 x 10, or 100 operations, with the classical technique. And multiplying two numbers of 100 figures each would require 200 operations instead of 10,000.
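The gap between the two bounds widens rapidly with n. A small illustrative comparison, taking C = 1 and the base-10 logarithm used in the article's examples:

```python
# Compare operation counts: classical n x n versus the new n x log(n)
# bound (with C = 1 and base-10 logarithms, as in the article's examples).

import math

for n in (10, 100, 1000, 10**9):
    classical = n * n
    new = round(n * math.log10(n))
    print(f"n = {n:>10}: classical {classical:.0e}, new bound {new:.0e}")
```

For 10 figures the counts are 100 versus 10; for a billion figures, 10^18 versus only 9 x 10^9.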

"If this new technique–which is being verified by experts–proves correct, it would represent a major advance in fundamental research in the field of computational complexity theory," observes Fredrik Johansson, a researcher at l’Institut de mathématiques de Bordeaux.3

## Valid only for very large numbers

A problem nevertheless remains: at the moment, the new method is valid only for very large numbers, "with more than 20 billion quintillion figures," van der Hoeven points out. In other words, astronomical numbers that are difficult to even imagine! By way of comparison, the age of the Universe is "only" 14 billion years... Hence for now, the new method is of no use for simple everyday multiplication, and will not help us in filing our taxes.

"We designed our demonstration to be as short and simple as possible, which entailed limiting it to very large numbers," van der Hoeven continues. "That said, in the future we plan to identify variants of the algorithm that can be applied to smaller numbers."

## The stuff to accelerate mathematical research?

The new algorithm should improve the calculation speed of computers in certain fields. "One example is in mathematical research, for calculating the innumerable decimal places of the number pi (3.14159...), a constant that is used in numerous math, engineering, and physics formulas," van der Hoeven adds. It is worth noting that last March, Google announced that it had set a new record, with the calculation of 31,000 billion decimal places.

"Variants of our algorithm could also be embedded in software programs using what is known as the ‘Fourier fast transform’ (or FFT) algorithm, which is widely used to process and interpret digital signals in various fields including computer simulation, fluid mechanics, and image processing."

Is it possible that an even faster algorithm will be discovered one day? "Probably not, for in 1971 Schönhage and Strassen predicted that n x log(n) is the best result possible," answers van der Hoeven, who has co-published no fewer than six articles on the subject. "This time, barring a major surprise, we have surely brought this research to a close."
