15 of the Most Important Algorithms That Helped Define Math…


Algorithms are used by all of us all the time, with or without our direct knowledge. They have applications in many different disciplines, from mathematics and physics to, of course, computing.

It turns out algorithms have a long and illustrious history stretching back as far as the Babylonians. But which of these popular and complex calculation processes could be considered some of the most important?

These 15 are probably good candidates.

The following list is far from exhaustive and is in no particular order. 

1. Babylonian Algorithms Are the Oldest Ever Found

Creator: Unknown

When it was created: circa 1600 BC

Its impact/implications on the world: The world’s first known algorithm

Although there is some evidence of early multiplication algorithms in Egypt (around 2000-1700 BC), the oldest known algorithm is widely accepted to have been found on a set of Babylonian clay tablets that date to around 1800-1600 BC. 

Their true significance only came to light in 1972, when the computer scientist and mathematician Donald E. Knuth published the first English translations of various cuneiform mathematical tablets.

Here are some excerpts from his 1972 paper that explain these early algorithms:-

“The calculations described in Babylonian tablets are not merely the solutions to specific individual problems; they are actually general procedures for solving a whole class of problems.” – Pages 672 to 673 of “Ancient Babylonian Algorithms”.

The tablets also appear to have been an early form of instruction manual:-

“Note also the stereotyped ending, ‘This is the procedure,’ which is commonly found at the end of each section on a tablet. Thus the Babylonian procedures are genuine algorithms, and we can commend the Babylonians for developing a nice way to explain an algorithm by example as the algorithm itself was being defined….” – Pages 672 to 673 of “Ancient Babylonian Algorithms”.

An 18th Century BC medical tablet. Source: Osama Shukir Muhammed Amin FRCP/Wikimedia Commons

2. Euclid’s Algorithm is Still in Use Today

Creator (if known): Euclid

When it was created (if known): c. 300 BC

Its impact/implications on the world: Euclid’s Algorithm is one of the earliest algorithms ever created and, with some alterations, is still used by computers today.

The Euclidean algorithm is a procedure used to find the greatest common divisor (GCD) of two positive integers. It was first described by Euclid in his treatise the Elements, written around 300 BC.

It is a very efficient computation that is still used today by computers in some form or other. 

Euclid’s algorithm requires the successive division and calculation of remainders until the result is reached. It is best described by use of an example (courtesy of Encyclopedia Britannica.com):-

What is the GCD of 56 and 12?

Step 1 – Divide the larger by the smaller number:-

56 / 12 = 4 (remainder 8)

Step 2 – Divide the divisor by the remainder from the previous step:-

12 / 8 = 1 (remainder 4)

Step 3 – Continue step 2 until no remainder is left (in this case, it is a simple 3-step process):-

8 / 4 = 2 (no remainder)

In this case, the GCD is 4.
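For the programmatically inclined, here is a minimal Python sketch of the same remainder procedure (using the modern modulo operator rather than Euclid's original formulation):

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    # the last nonzero value of a is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

print(gcd(56, 12))  # 4
```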

This algorithm is widely used for reducing common fractions to their lowest terms and in advanced mathematics applications such as finding integer solutions in linear equations.

Comparison of Euclid’s and Nicomachus’ methods for finding a GCD.  Source: Wvbailey/Wikimedia Commons

3. The ‘Sieve of Eratosthenes’ is an Ancient, Simple Algorithm

Creator (if known): Eratosthenes 

When it was created (if known): c. 200 BC

Its impact/implications on the world: The ‘Sieve of Eratosthenes’ is widely accepted as one of the most ancient algorithms of all time. It allows you to find all the prime numbers up to any limit you choose.

To perform it, you list the numbers from 2 up to your limit, then cross out every number divisible by 2 (other than 2 itself). You then do the same for the multiples of the next number that has not been crossed out, 3, and so on, until every remaining number in the table is prime.
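A short Python sketch of the sieve (the function name and limit are illustrative):

```python
def primes_up_to(n: int) -> list[int]:
    # Sieve of Eratosthenes: cross out the multiples of each prime,
    # starting at its square; whatever survives is prime.
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```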


4. Boolean (Binary) Algebra Was The Foundation For The Information Age

Creator (if known): George Boole

When it was created (if known): 1847

Its impact/implications on the world: This algorithm is widely recognized as the foundation of modern computer coding – it is still in use today, especially in computer circuitry.

You might be familiar with the term Boolean in mathematics, logic and computer coding. If so, this is its origin.

Boolean algebra is a branch of algebra in which a variable can only ever be true or false – so-called truth values (usually represented as binary 1 or 0). It was first set out by George Boole in his 1847 work “The Mathematical Analysis of Logic” and developed further in his 1854 book “The Laws of Thought”.

His Boolean algebra is widely credited as being the foundation for the Information Age.
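A quick Python illustration of the idea: with only two truth values, the whole algebra can be laid out as a truth table.

```python
# Boolean algebra in miniature: every variable is 0 (false) or 1 (true),
# and AND, OR, and NOT generate everything else.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={a & b}  OR={a | b}  NOT a={1 - a}")
```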

 

5. Ada Lovelace’s Algorithm Was the First Computer Program

Creator (if known): Ada Lovelace

When it was created (if known): 1840s

Its impact/implications on the world: This algorithm is widely recognized as the first computer program.

Ada Lovelace spent the best part of a year translating into English a paper on one of Charles Babbage’s lectures that had been written up in French by an Italian engineer, Luigi Menabrea (yes, really). During this process, she dutifully added extensive explanatory notes of her own. 

Her notes were labeled A to G, with the last describing an algorithm for the Analytical Engine to compute Bernoulli numbers. This ‘Note G’ is now widely accepted as the first recorded example of computer code – making her the first ever computer programmer.

The engine was never built, and so, her algorithm was never tested during her lifetime. Her work was rediscovered in 1953 when her notes were republished. 

Ada Lovelace's Algorithm
Source: Ada Lovelace/Wikimedia Commons

6. Fast Fourier Transform Breaks Down Signals Into Frequencies

Creator (if known): Carl Gauss, Joseph Fourier, James Cooley and John Tukey

When it was created (if known): 1802, 1822, 1965

Its impact/implications on the world: This algorithm is used to break down a signal into the frequencies that compose it – much like a musical chord can be expressed in frequencies, or pitches, of each note therein. 

The Fast Fourier Transform (FFT) algorithm traces its origins to Carl Gauss, who first devised it to calculate the orbits of asteroids. The underlying mathematics was later expanded on by Joseph Fourier in 1822.

But the more modern and widely used form of the algorithm was created and published by James Cooley and John Tukey in 1965. This version is arguably the most far-reaching algorithm in applied mathematics, and it revolutionized digital signal processing.

“FFT relies on a divide-and-conquer strategy to reduce an ostensibly O(N²) chore to an O(N log N) frolic. But unlike Quicksort, the implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms.” – Barry A. Cipra.
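A compact recursive Python sketch of the radix-2 Cooley-Tukey idea (it assumes the input length is a power of two; real applications would use a library FFT):

```python
import cmath

def fft(x):
    # Divide and conquer: transform the even- and odd-indexed halves,
    # then combine them with 'twiddle factor' rotations.
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]

# A pure tone sampled 8 times: all the energy lands in one frequency bin.
signal = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
print([round(abs(v), 3) for v in fft(signal)])  # [0.0, 8.0, 0.0, ...]
```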

7. Google’s Ranking Algorithm (PageRank) Could Be the Most Widely Used Algorithm

Creator (if known): Larry Page (mainly) and Sergey Brin

When it was created (if known): 1996

Its impact/implications on the world: PageRank is, arguably, the most used algorithm in the world today. It is, of course, the foundation of the ranking of pages on Google’s search engine. 

Interestingly, PageRank is named after Larry Page rather than literally meaning ‘to rank web pages’. It is not the only algorithm Google uses nowadays to order its search results, but it is the oldest and best known of them.

PageRank was devised by Larry Page and Sergey Brin whilst they were studying at Stanford University as part of their research project on a new kind of search engine. 

The main premise is to rank pages based on their relative ‘importance’ or ‘popularity’. Generally speaking, a page appears higher in the hierarchy if it has more backlinks – that is, links from other pages pointing to it.

The PageRank algorithm is given by the following formula:-

PR(A) = (1-d) + d (PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

Where:-

  • PR(A) is the PageRank of page A,
  • PR(Ti) is the PageRank of pages Ti which link to page A,
  • C(Ti) is the number of outbound links on page Ti, and
  • d is a damping factor which can be set between 0 and 1.

From this, we can see that the algorithm doesn’t rank websites as a whole but ranks each page individually. It can also be likened to a ‘pyramid scheme’, in which a particular page is ranked recursively depending on which other pages link to it. 
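A toy Python sketch of the iteration the formula describes (the three-page ‘web’ is made up, and d = 0.85 follows the value suggested in Page and Brin's original paper):

```python
def pagerank(links, d=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    # Repeatedly apply PR(A) = (1 - d) + d * sum(PR(Ti) / C(Ti)).
    pages = list(links)
    pr = {p: 1.0 for p in pages}  # initial guess
    for _ in range(iterations):
        pr = {p: (1 - d) + d * sum(pr[q] / len(links[q])
                                   for q in pages if p in links[q])
              for p in pages}
    return pr

# Hypothetical web: A and B link to each other, C links only to A.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))
```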

It has fallen out of favor in recent years but is still used as part of a general suite of other algorithms at Google. 


8. Monte Carlo Method (Metropolis Algorithm) Was Used at Los Alamos

Creator (if known): John von Neumann, Stan Ulam, and Nick Metropolis

When it was created (if known): 1946

Its impact/implications on the world: It was used as a Los Alamos code word for the stochastic simulations applied to build better atomic bombs after the Second World War.

The Monte Carlo method has been defined as “the art of approximating an expectation by the sample mean of a function of simulated random variables.”

This important algorithm is used to approximate solutions to numerical problems of unmanageable complexity by mimicking a random process. It relies on repeated random sampling to obtain a result – in effect using randomness to solve problems that might be deterministic in principle.

The algorithm is widely used in physical and mathematical problems and is most useful when it is difficult or impossible to use other approaches.

Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
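The classic toy example is estimating π by throwing random points at a square; here is a minimal Python sketch:

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    # Count random points in the unit square that fall inside the
    # quarter circle; the hit rate approximates pi / 4.
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # ~3.14, sharpening as the sample count grows
```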

Basic visualization of the Monte Carlo method. Source: Pbroks13/Wikimedia Commons

9. The Simplex Method for Linear Programming was Widely Adopted by Industry

Creator (if known): George Dantzig

When it was created (if known): 1947

Its impact/implications on the world: The Simplex Method of linear programming is one of the most successful algorithms of all time. It has been widely used in industry and in any other situation where economic survival rests on the ability to maximize efficiency within a budget and/or other constraints.

Devised by George Dantzig in 1947 at the RAND Corporation, this algorithm has become one of the most widespread in history, despite the fact that most real-world problems are very rarely linear in nature.

It does, however, offer a simple and elegant way of deriving solutions to such problems. While its worst-case running time can be exponential, in practice it is highly efficient – it usually takes around 2m to 3m iterations to complete, where m is the number of equality constraints.

It works by systematically generating and testing candidate vertex solutions of a linear program. At each iteration, the algorithm chooses the variable that makes the biggest improvement toward the optimal solution.

That variable then replaces one of its covariables – the one most drastically limiting it – thereby shifting the simplex method to another part of the solution set and toward the final solution. 
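In practice, few people hand-roll the simplex method. Here is a sketch using SciPy's linprog on a made-up two-variable production problem (recent SciPy versions default to the HiGHS solvers, which include a dual simplex implementation):

```python
from scipy.optimize import linprog

# Hypothetical problem: maximize profit 3x + 2y subject to
# x + y <= 4 and x + 3y <= 6, with x, y >= 0.
# linprog minimizes, so the objective is negated.
result = linprog(
    c=[-3, -2],
    A_ub=[[1, 1], [1, 3]],
    b_ub=[4, 6],
    bounds=[(0, None), (0, None)],
)
print(result.x, -result.fun)  # optimal plan [4, 0] and profit 12
```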

10. Krylov Subspace Iteration Methods Are Still Used Today

Creator (if known): Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos

When it was created (if known): 1950

Its impact/implications on the world: The Krylov subspace iteration method is one of the ten most important classes of numerical methods in the world.

The Krylov subspace iteration methods are a set of algorithms that were developed at the Institute for Numerical Analysis at the National Bureau of Standards in the 1950s. They address the deceptively simple task of solving equations of the form Ax = b.

Of course, it isn’t quite as simple as that: A is an enormous n × n matrix, which means the algebraic answer x = A⁻¹b isn’t exactly child’s play to calculate.

“Iterative methods—such as solving equations of the form Kxᵢ₊₁ = Kxᵢ + b − Axᵢ with a simpler matrix K that’s ideally “close” to A—lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, Krylov subspaces are spanned by powers of a matrix applied to an initial “remainder” vector r₀ = b − Ax₀.” – Barry A. Cipra.

“Lanczos found a nifty way to generate an orthogonal basis for such a subspace when the matrix is symmetric. Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method, for systems that are both symmetric and positive definite.”
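A bare-bones NumPy sketch of the conjugate gradient method on a tiny symmetric positive-definite system (illustrative only; production Krylov solvers add preconditioning and far more careful numerics):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    # Builds the solution inside growing Krylov subspaces
    # span{r0, A r0, A^2 r0, ...} using A-conjugate search directions.
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual r0
    p = r.copy()             # first search direction
    while np.linalg.norm(r) > tol:
        alpha = (r @ r) / (p @ (A @ p))
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # ~[0.0909, 0.6364]
```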

These algorithms have been continually tweaked over the past 50 years. Their current iterations for nonsymmetric systems include the generalized minimal residual method (GMRES) and the biconjugate gradient stabilized method (Bi-CGSTAB). 

11. The Kalman Filter is Great For Predicting the Future, Sort Of

Creator (if known): Rudolf E. Kálmán

When it was created (if known): 1958-1960

Its impact/implications on the world: The Kalman Filter is a general and powerful tool for combining information in the presence of uncertainty.

Kalman Filtering, aka Linear Quadratic Estimation (LQE), is an algorithm that uses a series of measurements, observed over time and including statistical noise, to produce an estimate of unknown variables via a joint probability distribution. 

In other words, it helps you make an educated guess about what a system is likely going to do next, within reason of course. Kalman Filters are great for situations where systems are constantly changing. 

Another great thing about this algorithm is that it is light on ‘memory’. Only the result of the previous step is needed to progress which makes it very fast and ideally suited for real-time problem-solving.
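A minimal one-dimensional Python sketch, assuming a roughly constant true value and illustrative noise variances:

```python
def kalman_1d(measurements, q=1e-4, r=0.01):
    # q: process noise variance, r: measurement noise variance.
    x, p = 0.0, 1.0                # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain: trust in the measurement
        x = x + k * (z - x)        # update estimate toward the measurement
        p = (1 - k) * p            # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a true value of 1.0 settle toward 1.0.
print(kalman_1d([0.9, 1.1, 1.05, 0.95, 1.02]))
```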

12. QR Algorithms For Computing Eigenvalues Have Proved Incredibly Useful

Creator (if known): John G. F. Francis and, independently, Vera N. Kublanovskaya

When it was created (if known): Late 1950s

Its impact/implications on the world: The QR algorithm greatly simplified the calculation of eigenvalues (among the most important numbers associated with matrices). After its development, calculating these pesky numbers became a routine task rather than a formidable and labor-intensive process.

The QR algorithm, aka the eigenvalue algorithm, is important in numerical linear algebra. In addition to enabling the swift calculation of eigenvalues, it also aids in the computation of the eigenvectors of a given matrix. 

Its basic function is to perform QR decomposition – writing the matrix as the product of an orthogonal matrix and an upper triangular matrix – then multiplying the factors in the reverse order and iterating.
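A bare-bones NumPy sketch of the unshifted iteration (practical implementations first reduce to Hessenberg form and add shifts for speed):

```python
import numpy as np

def qr_eigenvalues(A, iterations=100):
    # Factor A = QR, form RQ (a similar matrix with the same
    # eigenvalues), and repeat; the diagonal converges to the
    # eigenvalues for well-behaved matrices.
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)

print(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # ~[3.0, 1.0]
```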

13. The Fortran Optimizing Compiler Could Be The Most Important Event in Programming History

Creator (if known): John Backus (and IBM team)

When it was created (if known): 1957

Its impact/implications on the world: Quite possibly the most important algorithm/event in computer programming.

FORTRAN was developed by John Backus and his team at IBM in the late 1950s. Its creation is often cited as the most important event in the history of computer programming. 

It enabled scientists, and other users, to tell a computer what they wanted it to do without getting ‘bogged down’ in the minutiae of machine code. The FORTRAN optimizing compiler was modest by modern-day standards, at “23,500 assembly-language instructions—the early compiler was nonetheless capable of surprisingly sophisticated computations.” – Barry A. Cipra.

Backus would recognize its significance to the world later, in 1998, when he recalled the history of Fortran I, II and III for the IEEE Annals of the History of Computing. The compiler, Backus said, “produced code of such efficiency that its output would startle the programmers who studied it.” 

14. Quicksort Is Great at Helping Sort Things

Creator (if known):  Tony Hoare of Elliott Brothers, Limited, London

When it was created (if known): 1962

Its impact/implications on the world: It provided a means of quickly and efficiently sorting lists alphabetically and numerically.

Sorting a set of items alphabetically or numerically had always been a laborious and tedious task. In 1962, Tony Hoare produced an algorithm to perform it quickly.

His Quicksort algorithm uses a recursive ‘divide and conquer’ strategy to rapidly reach a solution. It would prove to be two to three times faster than its main ‘competitors’, merge sort and heapsort.

It works by choosing one element to be the ‘pivot’. All others are then sorted into “bigger” and “smaller” piles of elements relative to the pivot.

This process is then repeated in each pile. 
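A minimal Python sketch of the idea (a real in-place quicksort avoids building the extra lists):

```python
def quicksort(items):
    # Pick a pivot, split the rest into smaller and bigger piles,
    # and recurse on each pile.
    if len(items) <= 1:
        return items
    pivot, *rest = items
    smaller = [x for x in rest if x <= pivot]
    bigger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(bigger)

print(quicksort([33, 10, 55, 71, 29, 3, 18]))  # [3, 10, 18, 29, 33, 55, 71]
```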

“Although it’s possible to get stuck doing all N(N – 1)/2 comparisons (especially if you use as your pivot the first item on a list that’s already sorted!), Quicksort runs on average with O(N log N) efficiency.” – Barry A. Cipra.

The simplicity and elegance of this algorithm won it great praise in its day and made it the poster child of computational complexity in the late ’90s. 

An example of quicksort on a random set of numbers. Source: Znupi/Wikimedia Commons

15. JPEG and Other Data Compression Algorithms Are Incredibly Useful

Creator (if known): N/A

When it was created (if known): JPEG – 1992

Its impact/implications on the world: Data compression algorithms, like those behind JPEG, MP3, zip, or MPEG-2, are widely used the world over. Most have become the de facto standard for their particular application. They have made computer systems cheaper and more efficient over time.

It is difficult to single out any one data compression algorithm, as their ‘value’ or importance depends on the application. They have become so widespread that they are used whenever you download a file to your computer, play music or video games, stream videos, use a database, or engage with the cloud.
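For a feel of lossless compression, here is a quick example using Python's built-in zlib module (DEFLATE, the scheme behind zip and PNG; JPEG and MP3 are lossy and work quite differently):

```python
import zlib

# Highly repetitive data compresses dramatically and reversibly.
data = b"ABABABABABABABAB" * 16
compressed = zlib.compress(data)
print(len(data), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == data  # lossless round trip
```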

 

Via: uta.edu, gizmodo.com, nickhigham.wordpress.com
