Operational Research: Yesterday (2/4)

Tom Walker
“All the ascendancy of the Hurricanes and Spitfires would have been fruitless but for this system which had been devised and built before the war.

“It had been shaped and refined in constant action, and all was now fused together into a most elaborate instrument of war, the like of which existed nowhere in the world.” - Churchill on operational research’s contribution to the Battle of Britain

Much of the early impetus for a scientific approach to operational decision making came from the existential challenges of the World Wars. 

In February 1917, German U-boats intensified their attacks on the United Kingdom’s supply lines. At this time, 1 in 10 British ships were being sunk. 

Two Royal Navy experts, Reginald Henderson and Rollo Appleyard, brought a sceptical Admiralty round to the case for convoy defence. Evaluating optimal convoy sizes, speeds and frequencies in papers full of algebraic equations and diagrams, they applied an early mathematical rigour to the study of such logistical problems. By September 1917, following the switch to the convoy system off the back of this work, losses had fallen twentyfold, to 1 in 200. 

From these early beginnings, the British established operational research (OR) as a formal discipline during their detailed defence preparations against the growing Nazi threat, from the 1930s through WWII. 

Radar was a standout development in what Churchill described as the “wizard war” – the application of hard science to the management of a nation’s complex defence systems. An early proposal for a “death ray” that could direct focussed radio beams at incoming enemy aircraft was deemed infeasible, but the investigation turned into concerted development of radar technology to detect those aircraft instead. 

As well as technological supremacy in radar hardware (the cavity magnetron), it was the UK’s organisational supremacy in managing this new network of radar stations that tipped the scales in the Battle of Britain. 

Patrick Blackett, a brilliant experimental physicist, brought quantitative rigour to the design of what became known as the Dowding system. Flows of information were intricately perfected – from the new radar stations, through to local HQs, to a continually updated master-map of threats, to the fighter pilots. Interception angles from RAF bases to enemy bombers were optimised.

The result was that the Dowding system acted as a “force multiplier”, extracting the maximal results from Britain’s defence assets and staff. Prior to the system, interception rates of 30% were typical. Under the Dowding system, rates of 90% were common, with some Luftwaffe raids intercepted with 100% accuracy. Churchill was clear-eyed about the contribution of the system to winning the Battle of Britain.

After the war, the torch for developing operational research was handed on across the pond, where US scientists brought fresh algorithmic sophistication to the field.

Many operational research problems are optimisation problems: maximising an objective (or minimising a cost), subject to constraints. One might want to minimise distance for a bicycle courier, or maximise profit for a network of shops. Constraints in the problem setup encode information about the dynamics of the system being optimised (for example: the courier only delivers to a given location once; clothes prices once discounted can’t be subsequently raised).

The problem with these kinds of optimisation problems is that they are often NP-hard (read: hard!). The time to find optimal solutions to these problems explodes as the systems being studied increase in size (more stops for the bike courier; more SKUs to price).
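A toy sketch makes the blow-up concrete. Taking the courier example, with a handful of made-up stop coordinates, brute force must check every ordering of the stops – and the number of orderings grows factorially with the number of stops:

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical stop coordinates for the courier (illustrative only).
stops = [(0, 0), (2, 3), (5, 1), (1, 4), (4, 4)]

def route_length(order):
    """Total distance of visiting the stops in the given order."""
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

# Brute force: evaluate every possible ordering and keep the shortest.
best = min(permutations(stops), key=route_length)
print(round(route_length(best), 2))

# The catch: n stops means n! orderings to check.
print(factorial(5), factorial(10), factorial(20))
# 5 stops -> 120 routes; 10 -> ~3.6 million; 20 -> ~2.4 * 10^18
```

Five stops is trivial; twenty is already beyond any brute-force computation, which is exactly why approximate and structured methods are needed.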

As a result, a great deal of work has gone into approximations to these optimisation problems, so that they can be solved tractably.

One such approach is linear programming – approximating a problem with a linear objective and linear constraints – and one such efficient algorithm is the simplex algorithm, developed by George Dantzig in 1947. 

Dantzig was identified as a mathematical genius from a young age: the story of him solving two famously open problems in statistics – which he had mistaken for homework after arriving late to a graduate class at Berkeley – inspired the scene in Good Will Hunting where Matt Damon’s caretaker solves similar problems on the blackboard during his nightshift.

He was present at the outset of the field of operational research during WW2, heading the combat analysis branch of the US Army Air Forces’ statistical control division. It was here that he began to work on linear programming, using it as an optimisation framework for minimising costs to the military and maximising losses to the enemy. 

His work during the war was kept secret, but in 1947 he devised his simplex algorithm: a method which could solve linear programs in a reliably efficient manner. Dantzig’s work on the simplex algorithm sparked insights from John von Neumann, mentioned in the introduction, who further developed the underlying theory. Their work established linear programming as a practical approach to optimisation problems, and allowed the nascent field of operational research to flourish as a hard science.
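To make the idea concrete, here is a minimal tableau version of the simplex method for small problems of the form “maximise c·x subject to Ax ≤ b, x ≥ 0” – a sketch for illustration only, since real solvers also handle degeneracy, numerical stability and vastly larger problems:

```python
def simplex(c, A, b):
    """Maximise c.x subject to A x <= b, x >= 0 (with b >= 0),
    using the standard tableau simplex method."""
    m, n = len(A), len(c)
    # Slack variables give an initial basic feasible solution.
    tab = [list(row) + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
           for i, row in enumerate(A)]
    tab.append([-ci for ci in c] + [0.0] * m + [0.0])  # objective row
    basis = [n + i for i in range(m)]
    while True:
        # Entering variable: the most negative reduced cost.
        col = min(range(n + m), key=lambda j: tab[-1][j])
        if tab[-1][col] >= -1e-9:
            break  # no improving direction: optimal
        # Leaving variable: minimum ratio test.
        ratios = [(tab[i][-1] / tab[i][col], i)
                  for i in range(m) if tab[i][col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        basis[row] = col
        # Pivot: normalise the pivot row, eliminate the column elsewhere.
        piv = tab[row][col]
        tab[row] = [v / piv for v in tab[row]]
        for i in range(m + 1):
            if i != row and abs(tab[i][col]) > 1e-12:
                f = tab[i][col]
                tab[i] = [v - f * w for v, w in zip(tab[i], tab[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = tab[i][-1]
    return x, tab[-1][-1]

# A classic textbook example: maximise 3x + 5y subject to
# x <= 4, 2y <= 12, 3x + 2y <= 18.
x, z = simplex(c=[3, 5], A=[[1, 0], [0, 2], [3, 2]], b=[4, 12, 18])
print(x, z)  # x = [2.0, 6.0], optimum 36.0
```

The algorithm walks from vertex to vertex of the feasible region, always improving the objective – which is why it is so efficient in practice despite the huge number of possible vertices.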

One last mathematical giant, and his contribution, should be mentioned in this account of the early days of OR. Richard Bellman was an applied mathematician, born in New York City in 1920. As with many of the other figures in this early history, he applied his mathematical knowledge in the service of the government during WW2, working in the Theoretical Physics Division at Los Alamos. After the war, he joined the newly formed RAND Corporation, a government-funded think tank, and it was there that he developed the field of dynamic programming.

Dynamic programming is an approach which simplifies a complex decision problem into a sequence of decision steps over time. Bellman’s famous equation describes how the larger decision problem can be solved by recursively solving these sub-problems. The approach proved very general and powerful in application to optimisation and control problems of the time.
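As a sketch of the recursion on a made-up toy problem: for a deterministic shortest-path problem, the Bellman equation says the value of a state is the cheapest one-step cost plus the value of the successor state, V(s) = min over actions of [cost + V(s′)], with memoisation ensuring each sub-problem is solved only once:

```python
from functools import lru_cache

# A hypothetical toy network: state -> list of (cost, next_state).
graph = {
    "A": [(1, "B"), (4, "C")],
    "B": [(2, "C"), (5, "D")],
    "C": [(1, "D")],
    "D": [],  # terminal state: nothing left to decide
}

@lru_cache(maxsize=None)
def value(state):
    """Bellman equation: V(s) = min over actions of (cost + V(next))."""
    if not graph[state]:
        return 0.0
    return min(cost + value(nxt) for cost, nxt in graph[state])

print(value("A"))  # cheapest route A -> D is A-B-C-D, total cost 4.0
```

The same recursive structure, with expectations over random transitions and a discount factor, is what reappears in reinforcement learning below.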

Today, Bellman’s work underpins the exciting field of reinforcement learning, which is both helping companies like DeepMind master chess and Go, and giving modern computer scientists and neuroscientists tantalising insights into the nature of human consciousness (if you’d like to know more, read on!).