The necessity of multi-disciplinary scholarship for finance: On Ayache and Roffe

Timothy C. Johnson
Finance and Society, 2016, 2(2): 189-204.
DOI: http://dx.doi.org/10.2218/finsoc.v2i2.1733

Corresponding author:
Timothy C. Johnson, Department of Actuarial Mathematics and Statistics, Heriot-Watt University, Edinburgh EH14 4AS, UK.
Email: t.c.johnson@hw.ac.uk

Abstract:

Ayache presents a view of markets and mathematics that attempts to conform to the philosophies of Alain Badiou and Quentin Meillassoux. However, this attempt is unsuccessful because Ayache adopts a view of probability rooted in nineteenth-century conceptions that cannot accommodate the radical uncertainty of the markets. This is unfortunate, as it is reasonable to believe that the ideas of Badiou and Meillassoux, when synthesised with contemporary ideas of probability, could offer interesting insights. Roffe presents a better-argued synthesis of Deleuze and markets; however, he makes similar assumptions about contemporary probability that undermine his conclusions.

Keywords: Derivative pricing, financial mathematics, probability, uncertainty, contingency

Ayache, E. (2015) The Medium of Contingency: An Inverse View of the Market. London: Palgrave Macmillan. 414 pp., $50.00 (hbk), ISBN 978-1-137-28654-3

Roffe, J. (2015) Abstract Market Theory. London: Palgrave Macmillan. 180 pp., $100.00 (hbk), ISBN 978-1-137-51174-4

Introduction

At the Congress of the Bachelier Finance Society in 2008 Steven Shreve, in his presidential address, argued that financial mathematicians would emigrate from mathematics departments, just as computer scientists had, because financial mathematics is essentially a multi-disciplinary area while mathematics is protective of its distinctive culture. Finance is quite clearly a social phenomenon, and so is far removed from the usual subjects of applied mathematics. Financial mathematicians cannot rely on the immutability of physical laws or the slow evolution of natural environments. While this means that financial mathematics can never claim to be indubitable, it nonetheless can provide insights into how society should respond to uncertainty.

Although mathematics has a strong culture, it is not a monoculture. The general conception is that mathematics provides a mechanism for calculating the correct result. This is the view of non-mathematicians, and is dismissed by the aphorism ‘There are three types of mathematicians; those that can count and those that can’t’. More popular amongst mathematicians is the idea that mathematics is concerned with Platonic forms, and so a mathematical truth is a universal truth. Bourbaki’s expulsion of diagrams from textbooks was a manifestation of this view and resulted in a breakdown of relations between Western mathematicians and physicists in the 1970s. There is a third tradition in mathematics, which accepts that mathematics is constructed (see Poincaré and Gould, 2001); 1+1 was not equal to 2 at the time of the dinosaurs because the dinosaurs did not speak. One aspect of this tradition is that the value of mathematics is in delivering proof where experimentation is not possible. A mathematician ‘discovered’ the Higgs boson particle before the Large Hadron Collider was constructed, and mathematics is essential in the markets because there is no consistency in social laws that would enable experimentation. In this tradition, mathematics is not about objects but the relationships between objects, which are discerned through a process of abstraction and generalisation. The mathematician, a theorem-prover as distinct from a calculator, gains perspective by pulling away from the detail, something that can be hard to grasp for scientists who either value the instrumental aspect of mathematics in calculation or admire the ethereal nature of mathematics in identifying ideal forms.

Elie Ayache’s critique of probability theory, presented in The Medium of Contingency, is rooted in the mind-set of an instrumental user, not a creator, of mathematics who genuflects to the supposed power of mathematics in delivering universal Truths. His book seems oblivious to what abstract mathematics has achieved in trying to identify the essential nature of markets. The book will have its admirers, particularly amongst the cohort of ‘quants’ who graduated from physics and applied mathematics to the markets in the 1980s and then saw the partial-differential equations that they were expert in replaced by measure theoretic probability through the 1990s. Its usefulness to readers with a background in social and human sciences, however, is limited and potentially misleading.

The medium of contingency

Ayache’s book aims to introduce a new metaphysical ‘matter’ pertinent to financial markets. What it does is present how his experience of the markets has changed his conception of probability theory, rather than offering a well-founded critique of contemporary probability theory (xiii-xv). The book offers no new insights to inform derivative pricing, theory or technology, since the ideas he presents as innovative are, in fact, conventional. For example, one of the conclusive statements is:

Another formulation of the recalibration problem is the fundamental principle according to which states of the world in the market are prices, all the prices and nothing but the prices (i.e. they are not abstract states of the world). You cannot couch this principle in a probabilistic theoretical framework. (363)

The first sentence is well known, understood and uncontroversial. The second is wrong as contemporary financial mathematics, in theory and in practice, defines events in terms of prices in a probabilistic theoretical framework. This contradiction arises because Ayache’s understanding of probability theory seems to be rooted in a conception of probability employed in the physical sciences, and he does not appear to grasp the full implications of the measure theoretic probability upon which mathematical theories of asset pricing are constructed. As a consequence, he is unable to draw clear connections with the philosophies of Alain Badiou and Quentin Meillassoux that would have been worthwhile, and which are present in the background throughout the book. Jon Roffe employs these apparent connections in his book Abstract Market Theory. Without commenting in detail on Roffe’s analysis, its weakness rests on an overreliance on Ayache’s flawed understanding of probability theory, which is the focus of this review.

The foundations of probability

Ayache’s argument is based substantially on measure theoretic probability, which was developed by Andrei Kolmogorov and presented in its full form in his 1933 Foundations of the Theory of Probability. Both Ayache and Roffe emphasise the importance of mathematical formalism, implying its significance in modern probability theory, but Kolmogorov is generally regarded as being of the intuitionist school, characterised by Poincaré, where mathematics is motivated by observed phenomena rather than the abstract “game played according to certain simple rules with meaningless marks on paper” of Hilbert and Bourbaki (Kendall et al., 1990: 63).

The Foundations was contingent, in the sense that it occurred when different streams came together. The background was that Kolmogorov had established his international reputation as a mathematician in 1922 when, at the end of the Russian Civil War, he produced a significant result in functional analysis. He received his doctorate in 1929, having published 18 papers on mathematics, and was sent to France and Germany, returning to Moscow as a Professor of Mathematics in 1931. Kolmogorov became interested in probability as he left on his trip abroad.

Kolmogorov’s decision was made in a cultural context. In 1927, Russell (2009: 220) observed that “opinions differ” when it comes to what is meant by probability, highlighting a philosophical void at the heart of science. Meanwhile, mathematics had undergone a paradigm shift with Cantor’s introduction of set theory in a series of papers published between 1874 and 1884. While most mathematicians embraced Cantor’s theory, and its transfinite numbers, many Marxist mathematicians, such as Struik and Brouwer, rejected proofs that relied on ‘ideal’ entities that had no physical manifestation. In 1930 Kolmogorov’s doctoral supervisor, Nikolai Luzin, was criticised for being too abstract and bourgeois in this context, and he would be publicly condemned for such ‘crimes’ in the Luzin affair of 1936. Kolmogorov might have been sensitive to these issues, and since probability had been an important topic in Russian mathematics (but rather peripheral in France and Germany at the time), deciding to focus on the subject could have been regarded as choosing to concentrate on ‘Soviet’ mathematics.

Cantor’s work related directly to algebra and analysis (the synthesis of geometry and algebra that started in the seventeenth century and incorporates calculus), and the leading French mathematicians, Lebesgue and Borel, were central to developing measure theory in this context. Kolmogorov recast probability theory on this basis and in the spirit of the (dangerously idealistic) Hilbert Programme, which sought to lay sound foundations for mathematics. The Programme would collapse with Gödel’s Incompleteness Theorems of 1931.

In the late 1920s there were two distinctive approaches to probability: the frequentist approach rooted in physical sciences, and the subjectivist approach grounded in the social sciences.

Problems with the conventional, frequentist approach to probability based on counting relative frequencies became apparent with the turn to probability in physics. In 1890, Poincaré had won King Oscar’s prize addressing the stability of the solar system by proving that any physical system confined to a finite space with fixed total energy (as the universe is assumed to be) must eventually return arbitrarily close to its initial state. This result implied the universe ‘recurs’, and was picked up by those interested in Nietzsche’s discussion of ‘eternal recurrence’ in Die fröhliche Wissenschaft (1882). The issue for physics was that, in the context of the probability theory of the time, this result led to the implication that time was reversible and the second law of thermodynamics was broken. Better known today is the impact of quantum physics. At the end of the nineteenth century, continuous phenomena in physics were regarded as deterministic, while discrete phenomena were seen as random (von Plato, 1994: 135-136). The identification of discrete energy states in 1877 resulted in Planck’s 1900 quantum hypothesis and the idea of discrete time and length. This discretisation suggested that the laws of physics were probabilistic and not deterministic. [en. 1]

The most sophisticated attempt to address the problem of probability from the physical sciences came from Richard von Mises, an Austrian engineer linked to the Vienna Circle of logical positivists, who attempted to lay down the axioms of probability based on observable facts. The result was published in German in 1931 and popularised in English as Probability, Statistics and Truth; it is now regarded as the main justification for the frequentist approach to probability, resting on a principle of indeterminism related to Laplace’s demon and so placing probability within Platonic realism.

Meanwhile, economics was similarly challenging the conventional approach to probability. Frank Knight, in Risk, Uncertainty and Profit (1921), took the view that economics had developed a theory of competition that brought the value (price) of economic goods to equality with their cost. However, this equality was in fact only an ‘occasional accident’. Profit (and loss) in economic affairs was a radically uncertain event (Knightian uncertainty) that was not amenable to analysis based on ‘risk’ (i.e. known probabilities). Knight argued, from the perspective of institutional economics, that if uncertainty did not dominate chance then all prices would be known and the entrepreneur would be redundant: there would be no ‘free will’ in economics. Simultaneously, in his Treatise on Probability, John Maynard Keynes observed that in some cases cardinal probabilities could be deduced. In others, ordinal probabilities – one event was more or less likely than another, an intensive magnitude – could be inferred, but there was a large class of problems that was not reducible to the concept of probability. In time, Keynes, like Knight, would place uncertainty at the heart of his economics.

Frank Ramsey challenged Keynes in his essay Truth and Probability (1926), rejecting Keynes’s claim that objective probability relations exist between a premise and a conclusion. Ramsey defined ‘probability’ in the subjective sense of ‘a degree of belief’, which can be established through a (betting) market. Keynes, a friend and mentor of Ramsey, appears to have been satisfied with the argument, but whether it is true, as claimed by many modern economists, that Ramsey justifies rational expectations is open to investigation. Ramsey’s approach is better known through the Italian actuary Bruno de Finetti and the American statistician Leonard Savage. Collectively, these approaches are considered subjectivist or Bayesian, pointing to their relationship to the eighteenth-century Bayes’ Rule.

Kolmogorov lays the foundations of probability by equating probability with a measure of an event. A random variable is a mapping from an event space to a number, and this means that a mathematical expectation becomes an integral of the random variable, over all outcomes, with respect to the probability measure. On this basis, Kolmogorov is able to derive both the Law of Large Numbers, fundamental to the frequentist conception of probability, and Bayes’ Rule, fundamental to the subjectivist conception. This is good mathematics for a number of reasons. It abstracts from the specific phenomena of interest to von Mises or Bayes into functional analysis and, in so doing, the fundamental relations of probability are identified. This generalisation leads to the synthesis of the physical and social sciences’ approaches to probability; the mathematical method is to abstract in order to generalise and identify connections. Also, the theory is parsimonious – Kolmogorov does not appear to add anything new other than present probability as a branch of analysis; he applies well-developed theory in a different domain in a clear and easy-to-follow manner.
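In standard notation (the symbols are generic and are not taken from either book under review), Kolmogorov’s construction can be summarised as follows. A probability space is a triple

\[
(\Omega, \mathcal{F}, P), \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \quad \text{for disjoint } A_i \in \mathcal{F},
\]

where Ω is the sample space of outcomes, F the event space (a σ-algebra of subsets of Ω) and P the probability measure; a random variable is a measurable map whose expectation is a Lebesgue integral,

\[
X : \Omega \to \mathbb{R}, \qquad \mathbb{E}[X] = \int_{\Omega} X(\omega)\, \mathrm{d}P(\omega),
\]

so that the Law of Large Numbers and Bayes’ Rule become theorems about the same object, the measure P.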

One of Ayache’s central arguments is that mathematical probability theory fails because it cannot identify all possible states of the world. For example, ex ante we cannot account for “The radically-emergent event [that] is not part of a previous range of possibilities” (19). However, Ayache does not appear to distinguish between a state of the world and an event, as probability theory does.

Probability theory starts with a ‘sample space’ that represents all possible states of the future world. The sample space is a set whose elements include outcomes such as ‘9/11’, ‘an asteroid hits New York on 23 November 2020’, ‘a contingent claim written on 11 September 2001 to pay out on 11 September 2021 is lost on 23 November 2020’, and so forth. It should be obvious that it is impossible to identify all the possible outcomes, and mathematics realises this; probability theory is not extensional. Mathematics gets around this issue by augmenting the sample space with what is colloquially known as an ‘event space’. The pair of an (unmeasurable) sample space augmented with an event space is a ‘measurable space’, and has been the foundation of functional analysis since at least 1901, when Lebesgue introduced his integral.

Kolmogorov identified a random variable as a mapping from an event space (not a sample space) because it involves a specific property: it takes us from the event space to a number, but it must also be able to take us from a specific number back to a specific event. In particular, we are unable to distinguish outcomes that map onto the same number. For example, consider the random variable that represents the value of a traded asset. The event that maps onto a price of ‘0’ could be made up of outcomes such as ‘an asteroid hits New York on 23 November 2020’, ‘a contingent claim written on 11 September 2001 to pay out on 11 September 2021 is lost on 23 November 2020’, and so on. Because these outcomes lead to the same value of the random variable (the same price in this context), mathematics does not have to, and cannot, distinguish the different outcomes as different events. Clearly the sample space can include outcomes that are inconceivable ex ante and so are technically immeasurable. Mathematics handles this by focusing not on the sample space but on the measurable event space. In the case of asset prices, the problem is, in fact, straightforward. Since the possible future price of an asset is an element of a finite set, [en. 2] there are a finite number of possible events that are relevant to asset prices and so they could be defined extensionally.
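A minimal Python sketch may make the distinction concrete; the outcomes and prices below are invented for illustration and are not taken from Ayache’s text.

    # Hypothetical illustration: a random variable maps outcomes to prices, and an
    # event is the pre-image of a price, so distinct outcomes that map to the same
    # price are indistinguishable to the mathematics.
    sample_space = {
        "asteroid hits New York on 23 November 2020": 0,
        "claim written 11 Sep 2001, paying out 11 Sep 2021, is lost on 23 Nov 2020": 0,
        "business as usual": 100,
        "takeover bid announced": 140,
    }

    def event(price):
        """The event associated with a price: the set of outcomes mapping to it."""
        return {outcome for outcome, p in sample_space.items() if p == price}

    print(event(0))    # two distinct outcomes, a single event
    print(event(100))  # one outcome, one event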

Sets of measure zero

Ayache’s argument also highlights issues with (his conception of) sets of measure zero. The most obvious measure of a set is to count its elements, as in the classical frequentist approach to probability. For example, I might have a set of drawings comprising my own doodles and those of my young children. I could measure the events defined by my own and my children’s drawings by counting them. The paradigm shift that Kolmogorov initiated consists in associating a probability with an abstract measure, freeing probability theory from being shackled to concepts rooted in counting elements of sets. Using Kolmogorov’s formulation of probability, I could measure the events by considering the area of the drawings, another physical measure, in which case the measure of one large picture by my daughter could exceed that of the dozen smaller drawings by my son.

To fully grasp the meaning of sets of measure zero, consider the situation if I were an art dealer. It is conceivable that I would measure the pictures in my collection by the value I believe they would achieve at public auction, not by the physical measures of number or size. In this case it might turn out that the events of paintings by my children would have measure zero.
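A toy calculation may help fix the idea (the drawings, areas and valuations are invented): the same collection supports several measures, and an event of auction-value zero is in no sense excluded from the sample space.

    # Toy illustration: three different measures on the same set of drawings.
    # Measure zero under one measure (auction value) does not make the event
    # impossible; it still has positive count and area.
    drawings = {
        "daughter's large picture": {"count": 1,  "area_cm2": 2500.0, "auction_gbp": 0.0},
        "son's dozen doodles":      {"count": 12, "area_cm2": 1200.0, "auction_gbp": 0.0},
        "minor master's sketch":    {"count": 1,  "area_cm2": 300.0,  "auction_gbp": 15000.0},
    }

    def measure(event, key):
        """Measure of an event (a set of drawings) under the chosen measure."""
        return sum(drawings[d][key] for d in event)

    children = {"daughter's large picture", "son's dozen doodles"}
    print(measure(children, "count"))        # 13
    print(measure(children, "area_cm2"))     # 3700.0
    print(measure(children, "auction_gbp"))  # 0.0: measure zero, yet not impossible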

This simple example suggests that Ayache’s statement at the start of this argument betrays a serious misunderstanding of this fundamental idea:

For this reason, such events are neither ‘improbable’ nor ‘extremely improbable’. They are not even ‘impossible’‒ what probability theory characterizes as events of measure 0. They are literally immeasurable. I call them im-possible, to emphasize the fact that they are external to the whole regime of possibility. (19)

In assigning a measure of zero to my children’s pictures, I am in no way implying that my children’s pictures are ‘im-possible’. It might come to pass that my daughter is the world’s most renowned artist on 23 November 2020, in which case I would change my value-measure of her work, and mathematics does not inhibit me from doing this. In fact, Ayache’s association of sets of measure zero with ‘im-possibility’ is precisely the misconception that led nineteenth-century scientists to believe that time was reversible, and it suggests a mind-set locked into the classical, frequentist conception of probability.

Ayache develops his beliefs in the context of Nassim Taleb’s (2007) idea of the ‘Black Swan’ event. His purpose is to undermine Taleb’s assertion that we cannot use probability to determine the price of an asset in order to advance his own thesis that the issue is not in pricing technology, but with pricing theory. In criticising probability, rather than its application in derivative pricing, Ayache is taking on a greater challenge than Taleb.

The concept of the ‘Black Swan’ appears in Juvenal’s Satires as an impossibly rare event, and even St. Augustine noted that while we could imagine a black swan because we have experienced a swan and the colour black, this does not mean it could exist. Juvenal’s idea persisted through the ages but recently Taleb (2007: xvii) has more narrowly defined a ‘Black Swan’ event as one that has a significant impact, and which “lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility” and is retrospectively justified. In this formulation, Taleb demonstrates his understanding of probability by distinguishing the random variable (the quantified impact of the event) from the measure of the event (its probability). Taleb’s third criterion is epistemic and is similar to Knight’s and Keynes’. In essence, his argument is concerned with our inability to measure the probability of rare events, even though the ex-ante impossible outcome was clearly conceivable ex-post.

One might think that Ayache is having difficulty expressing himself, and that his issue is not with mathematics’ inability to handle events subject to inconceivable outcomes but the inability of users of mathematics to measure the probability of such events. However, Ayache is clear about where he sees the issue:

The first predicament of probability theory and consequentially its inability to deal with the medium (the market), lie in identifying the possible states, not in their subsequent probabilistic weighting. (147)

This “predicament of probability” is a consequence of Ayache’s peculiar conception of probability theory (that it rests on states not events), rather than it being an issue with the theory itself. In the mainstream contemporary approach to probability theory, taken by mathematics in general and Taleb in particular, the problem is with the inability to measure the probability of the inconceivable outcome that contributes to the event.

Brownian motion

Ayache demonstrates that he is familiar with the details of measure theoretic probability, if not its meaning, in the context of presenting his own interpretation of probability (104). He does this by discussing the Law of Large Numbers, contrasting von Mises’ interpretation of probability with Kolmogorov’s, and introducing Cournot’s principle, leading to:

a really provocative (perhaps even revolutionary) thought: … that the derivatives market ‒ which is real, of course ‒ may really be the consequence of true, mathematical Brownian motion. (137-38)

This section is very difficult to follow for anyone with more than a superficial understanding of contemporary probability, and perhaps my confusion explains why I find a later statement – that “The market of contingent claims is not the consequence of the mathematical model” (298) – contradictory to this earlier one. My immediate concern is that Ayache has argued the case for the ‘radically emergent event’, which is not controversial given Knight and Keynes. However, he argues that the derivatives market, dominated by radical uncertainty, is a ‘consequence’ of Brownian motion. The problem is that, from the perspective of mathematics, Brownian motion is not that uncertain.

Brownian motion is a physical phenomenon: the independent motion of inanimate pollen grains. It was a subject of widespread discussion in the nineteenth century, and the economist William Stanley Jevons believed that the phenomenon was electrical, calling it pedesis, and suggested that it could be addressed using probability theory (Brush, 1976: 665). It became a fundamental object in physics when it was the subject of one of four papers published in Annalen der Physik by Einstein in 1905, his annus mirabilis. Einstein’s objective was to explain the behaviour of liquids and solids in terms of the motion of atoms and molecules; it was a proof of the existence of atoms.

There is a view that Bachelier had pre-empted Einstein, but since their objectives were very different this is difficult to justify, not least since Bachelier never referred to the phenomenon. Bachelier was developing an idea that goes back to the origins of mathematical probability – the idea that a price follows a random walk. This was central to the canonical foundation of mathematical probability in the 1654 correspondence between Pascal and Fermat, which, in modern terms, addresses the pricing of a digital option on a binomial tree. In the mid-nineteenth century, the French financier Regnault used the binomial tree as the foundation of his Calcul des Chances et Philosophie de la Bourse (1863) (Jovanovic and Le Gall, 2001). It was in this long tradition that Bachelier wrote his 1900 Théorie de la Spéculation, where he developed Regnault’s discrete time model into a continuous time model (compare this to how Cox-Ross-Rubinstein discretised Black-Scholes-Merton in the late 1970s, making it more accessible). Bachelier’s approach was not unique at the time; independently, in 1908, Vincenz Bronzin published a text using the same basic price model to price derivatives (Zimmermann and Hafner, 2007).

Ayache highlights the difference between the physical phenomenon of Brownian motion and the context Bachelier was working in when he asserts “Price is perfect for Brownian motion” (301). However, a standard criticism of the use of Brownian motion in finance is that it is a continuous process, whereas price processes are not, because the possible prices of an asset are discrete. Brownian motion is a useful model of prices, and as such it has the same limitations as a sketch of a body has in guiding a brain surgeon. Ayache gets around this problem by putting the cart before the horse, presenting price as a model of Brownian motion:

Better to say that price is an ideal model of the formal-mathematical Brownian motion, yet as a model and interpretation of the formalism, there is something material attaching to it. (302)

This step appears to be at the heart of Ayache’s revolutionary argument. Ayache’s point is that Brownian motion is a Form and price the materialisation of the Form.

The Wiener process is the mathematical object that represents the physical (and financial) phenomenon, Brownian motion. In 1913, the American Norbert Wiener was awarded a scholarship to study philosophy at Cambridge. While in Cambridge, Bertrand Russell suggested he should attend lectures in mathematics and Wiener was introduced to measure theory. At the end of the First World War, Wiener joined M.I.T. and turned his attention to specifying, mathematically, Brownian motion.

Imagine that a fly is seen to be on the inside of a window at a certain spot at noon, but ten minutes later that same fly is sitting on the space bar of a typewriter on a table in the room. During those ten minutes the fly was buzzing around all over the room while nobody paid attention to it. What are the odds that the fly touched the ceiling during the intervening ten minutes? (Heims, 1980: 62)

What Wiener realised was that there were an infinite number of directions in which the fly could leave the window (because directions are continuous), and at each point of its flight, it could carry on in an infinite number of directions. Wiener needed to measure the relative size of these infinite numbers of possible paths the fly could take to the typewriter via the ceiling and compare that to the infinite number of ways the fly could travel from the window to the typewriter without hitting the ceiling. The problem confronting Wiener was how to construct this intuition into a well-defined mathematical object.

Wiener solved the problem in 1921 (a decade before Kolmogorov’s Foundations) by specifying two properties of the Wiener process. Firstly, he specified that the fly’s motion at each point was independent of what had happened before; technically, the random process representing the fly’s flight has independently and identically distributed changes. Secondly, he specified that, from each point, the change in the fly’s location over a given interval would have a Gaussian (Normal) distribution with mean zero and variance equal to the length of the interval. These properties made the apparently unsolvable problem tractable.
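A minimal simulation makes the two properties concrete; the time horizon, step count and seed below are arbitrary choices for illustration.

    import math
    import random

    def wiener_path(horizon=1.0, steps=1000, seed=42):
        """Simulate a Wiener process on [0, horizon] from Wiener's two properties:
        independent increments, each Gaussian with mean zero and variance equal
        to the time step."""
        random.seed(seed)
        dt = horizon / steps
        w = [0.0]
        for _ in range(steps):
            w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))
        return w

    path = wiener_path()
    print(path[-1])  # across simulations, W(horizon) is Gaussian, mean 0, variance equal to horizon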

If a process has independently and identically distributed changes, it is ‘stationary ergodic’ and its statistical properties can be deduced from a single, sufficiently long sample of the process. Furthermore, because the Wiener process is continuous, it is predictable in the sense that we can predict it will hit a value if it comes sufficiently close to that value. [en. 3] These properties, inherited by all other processes driven by a Wiener process, are not unimportant; they mean that ideas such as dynamic replication, in finance, and stochastic control, in general, can be developed. This would be a much harder task if the process were not continuous, and it explains why the Wiener process is used to model discrete price processes. However, the stationary ergodic property also means that the radically emergent event cannot be accommodated. Contrary to Ayache’s (113) assertion, Kolmogorov made no assumption of either independence or identical distributions in formulating his theory. He only makes the assumption in the proof of the Law of Large Numbers, making the same assumptions as von Mises had. Ayache’s argument that markets are characterised by radical uncertainty (the radically emergent event) becomes incoherent as soon as he tries to link this theme with the idea that markets are conceptually related to a process, Brownian motion, which can be completely described statistically. Ayache argues that Taleb, in focusing on the epistemological problem, criticises science and, somewhat incoherently, that he also “never really dropped the frequency or probability-based notion of event” (54). However, in focussing on Brownian motion, this is exactly what Ayache is doing.

This flaw is not peripheral. For example, in his discussion of recalibration (30-35) he does not appear to realise that recalibration is a necessity because the pricing model is stationary ergodic whereas the market, generating radically emergent events, is not. Ayache needs to explain how, if price is the manifestation of Brownian motion, prices can manifest the radically emergent event. He might argue that the solution lies in a regime-switching model (30-32), but I would contend the issue is more fundamental: the markets, in reality, cannot be associated with any of the stationary ergodic processes on which he bases his subsequent arguments. My impression is that Ayache has an ideological commitment to a conception of probability rooted in the physical sciences. His response to the challenge that his experience of markets makes to this ideology is to project it onto measure theoretic probability. Instead of recognising the fault in his own understanding, and rather than engaging with the well-established and rigorously tested theory, he seeks to argue for “The End of Probability”.

Financial mathematics

Moving from probability theory to the financial application, there is a similar lack of clarity that implies a lack of understanding in Ayache’s discussion of incomplete and complete markets. Financial mathematics, in both its theory and practice, is built on the ideas of replication and an absence of arbitrage. Pricing on the basis of an absence of arbitrage is an ancient concept. It is the idea that asset prices should be coherent in the sense that trading cannot deliver a sure profit. It is explained in Fibonacci’s 1202 text on commercial arithmetic, the Liber Abaci; it is the basis of Black and Scholes’ (1973: 637) argument, which opens with the statement “It should not be possible to make sure profits”; and it is central to all contemporary financial mathematics in the identification of a risk-neutral pricing measure and an absence of arbitrage in the Fundamental Theorem of Asset Pricing.

The idea of dynamic replication follows immediately from the principle of no arbitrage and was not “first shown”, as asserted by Ayache (285), in the formulation of the Black-Scholes-Merton (BSM) model. It was employed by Jan de Witt in his 1671 paper, The Worth of Life Annuities in Proportion to Redeemable Bonds. Bronzin also employed the technique in 1908 when pricing derivatives by ‘covering’ or hedging them with portfolios of other assets on the basis of ‘equivalence’ (Zimmermann and Hafner, 2007).

The BSM approach to pricing presents a replication argument that is a consequence of the continuity of the Wiener process, and it delivers a deterministic function of five observable parameters to give a derivative’s price. This removal of uncertainty in pricing options by BSM had a particular, important, effect: it meant that trading financial options was not illegal gambling, since there was, apparently, no randomness in the activity. The sociologist Donald MacKenzie discussed its effect with Burton Rissman, the legal counsel to the Chicago Board Options Exchange when the formula emerged. Rissman made the point that:

Black-Scholes was what really enabled the exchange to thrive … we were faced in the late 60s and early ‘70s with the issue of gambling. That fell away, and I think Black-Scholes made it fall away. It wasn’t speculation or gambling it was efficient pricing … I never hear the word ‘gambling’ again in relation to stock options traded on the Chicago Board Options Exchange. (MacKenzie, 2008: 158)

BSM delivers a unique, deterministic price because the Wiener process is stationary ergodic with continuous paths; it represents a complete market that involves Knight’s ‘risk’ but not radical uncertainty.
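For reference, a minimal sketch of the standard BSM call-price formula as a deterministic function of five observable parameters (spot, strike, riskless rate, volatility and time to expiry); the function name and the figures in the final line are mine, for illustration only.

    from math import erf, exp, log, sqrt

    def norm_cdf(x):
        """Standard Normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bsm_call(spot, strike, rate, vol, expiry):
        """Black-Scholes-Merton price of a European call: a deterministic
        function of five observable parameters."""
        d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * expiry) / (vol * sqrt(expiry))
        d2 = d1 - vol * sqrt(expiry)
        return spot * norm_cdf(d1) - strike * exp(-rate * expiry) * norm_cdf(d2)

    print(bsm_call(spot=100.0, strike=105.0, rate=0.02, vol=0.2, expiry=1.0))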

When asset prices are modelled by stationary ergodic processes that are not continuous, we lose the ability to construct a replication argument. This does not mean the market now contains arbitrage, just that we cannot identify a unique strategy that precludes arbitrage and replicates a claim’s pay-off. In this case the market is incomplete; we cannot identify a unique pricing measure and hence a unique price of an asset. We are still in the world of Knight’s ‘risk’ if our models are based on stationary ergodic processes, but we lose the uniqueness of a model’s output, so the price is uncertain and the entrepreneur is not redundant. This account is standard, but Ayache ignores it when he argues that, since there is a single price in a market, and given his account of that market, “[BSM] is the start of something different where talk of incomplete or complete market becomes spurious, and even dishonest” (270).
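A toy numerical sketch of the point, with all figures invented for illustration: in a one-period market with three future states but only one risky asset, many pricing measures are consistent with no arbitrage, so a call option has a band of coherent prices rather than a unique one.

    # Toy incomplete market: one risky asset, three future states (invented numbers).
    # The risk-neutral measure is not unique, so the call has a range of
    # arbitrage-free prices rather than a single one.
    s0, rate = 100.0, 0.0
    future = [120.0, 100.0, 80.0]                    # up, middle, down states
    payoff = [max(s - 105.0, 0.0) for s in future]   # call struck at 105

    prices = []
    steps = 1000
    for i in range(steps + 1):
        q_up = 0.5 * i / steps    # scan candidate measures
        q_down = q_up             # forced by repricing the underlying with these numbers
        q_mid = 1.0 - q_up - q_down
        if min(q_up, q_mid, q_down) < 0.0:
            continue
        # each candidate measure reprices the underlying, so it admits no arbitrage
        assert abs(q_up * future[0] + q_mid * future[1] + q_down * future[2] - s0 * (1 + rate)) < 1e-9
        prices.append((q_up * payoff[0] + q_mid * payoff[1] + q_down * payoff[2]) / (1 + rate))

    print(min(prices), max(prices))  # a band of coherent prices, not a unique price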

How Ayache conceives of a market, which delivers ‘unique’ prices, is interesting if not perplexing. In Ayache’s presentation ‘the market’ resembles a ticker-tape machine that informs the reader what the price of an asset is at that time (268-72). Nothing is said as to how the machine works. In order to appreciate the limitations of this approach, let me be explicit: when financial markets set a price they do so in a discursive manner. [en. 4] A market-maker will make an assertion as to the price of an asset by giving the market a bid and offer price. If the other traders agree with the bid-offer, they let it pass and do nothing. If, however, another trader feels the market-maker has mispriced the asset, they will act ‒ challenging the assertion ‒ by executing a trade, and a price is recorded based on the transaction. [en. 5] In such a set-up, recorded prices represent not a belief in the price quoted by the market-maker, but a disbelief in the market-maker’s valuation (since they would not trade if they agreed with the market-maker). The market represents a discursive arena wherein market-makers make claims as to what the true price of an asset is. These claims are challenged by other traders taking the prices, and it is through this process that the market seeks to converge on the Truth of an asset price. However, the Truth, like the gold at the end of the rainbow, is always one step away, since the false-pricings, evidenced by the stream of quotes, never cease. So, when Ayache justifies his arguments on the basis that the market delivers unique prices, he passes over the fact that the market is made up of wilful participants and a quoted price is a disputed price.

Ayache moves on and creates a distinction between a contingent claim (the contract) and a contingent pay-off, which he associates with the output of the BSM model (285-286). He stresses that Harrison and Pliska (1981, 1983) do not explicitly make this distinction. In the conventional understanding, a contingent pay-off is an input to the model. It is the random variable representing the pay-out (contingent on events) that needs to be delivered (attained) in the uncertain future under the terms of the contract. The contract defines the pay-off. This distinction is implicit in Harrison and Kreps (1979) and Harrison and Pliska (1981, 1983), while Pliska (1997: 112) explicitly defines a contingent claim as a random variable. Ayache argues that there is “equivocation” (286) in Harrison and Pliska’s lack of distinction, but I believe the ambiguity derives from Ayache’s misunderstanding of what Harrison, Kreps and Pliska aim to achieve, as well as how they do it.

In the course of this exposition, Ayache introduces the work of Shafer and Vovk (2001), who construct a theory of probability based on game theory rather than measure theory. An essential point of their book, Probability and Finance, is that probability is derived from quoted prices, not from sets of outcomes. Shafer and Vovk point out that their approach, based on dynamic replication and the exclusion of arbitrage, is more in tune with the origins of mathematical probability than abstract measure theory.

This is not innovative; Ramsey argued that prices in a betting market gave probabilities, while both Hald (1990: 69-70) and Sylla (2006: 28), in their accounts of the early development of probability, describe how in the first text on mathematical probability, Huygens formulated the classical theory of probability on the basis of calculating the price of a mixture. Nor is it controversial; it is explicit in all contemporary mathematical finance that the probabilities used in pricing are derived directly from prices. In modern financial mathematics, prices are logically anterior to probabilities. This is apparent in the simplest asset-pricing framework, the single period binomial model that generates the BSM framework.

This model consists of three prices for the underlying asset: the current price and two possible values of the asset at a specified time in the future. For the market to preclude arbitrage, the current price must lie between the lower and upper future prices. This yields a simple geometric interpretation of the probabilities that the asset reaches each of its two possible future values: they indicate where the current price lies between the future prices, and the no-arbitrage condition ensures that they satisfy Kolmogorov’s definition of probabilities. In more complex cases, the binary choice of prices is replaced with a distribution of future prices. In the BSM framework, the prices are log-normally distributed, with mean defined by the riskless rate of interest and variance by the ‘volatility’. Volatility ‘smiles’ and ‘skews’ tell traders how the market prices diverge from the model distribution, or equivalently how markets assess the probabilities of different prices materialising; and recalibration occurs because the distribution of the future price of the underlying asset evolves in a manner impossible to know.
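A minimal sketch of the single-period binomial calculation, with illustrative numbers of my own: the probability is read off from where the current price, grown at the riskless rate, sits between the two future prices, and that probability then prices any claim on the asset.

    # Single-period binomial model (illustrative numbers): the risk-neutral
    # probability is the position of the current price, grown at the riskless
    # rate, between the two possible future prices; it is derived from prices.
    s0, s_up, s_down, rate = 100.0, 120.0, 90.0, 0.02

    q = (s0 * (1 + rate) - s_down) / (s_up - s_down)   # geometric interpolation weight
    assert 0.0 < q < 1.0                               # the no-arbitrage condition

    def price(payoff_up, payoff_down):
        """Arbitrage-free price of a claim paying payoff_up or payoff_down."""
        return (q * payoff_up + (1 - q) * payoff_down) / (1 + rate)

    print(q)                 # the probability implied by the three prices (0.4 here)
    print(price(15.0, 0.0))  # e.g. a call struck at 105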

The ‘underlying’ asset defines the market geometry on which all contingent claims based on the underlying must be priced. The mathematical theory of derivative pricing has nothing to say as to the efficient allocation of scarce resources or about predicting the future. It is only concerned with coherent pricing of assets such that an arbitrage is not possible. This comes as somewhat of a disappointment to many who believe that mathematics can, somehow, identify the ‘true’ price of an asset or provide a technique to predict the future. This approach is observed in practice, for example, by Beunza and Stark (2012: 391), who make it explicit that traders convert prices into probabilities.

Shafer and Vovk’s contribution is a realisation that in the markets (and their account is specific to probability in finance), mathematicians are dealing not with a frequentist or subjective version of probability, where the expected value is contingent and might never be realised, but a prescriptive version, where you must price by the risk-neutral, geometric, probabilities. The problem for financial mathematics, according to Shafer and Vovk’s approach, is not that it is discordant with the mainstream but that it is focused on the particular, and mathematicians prefer general to particular frameworks. Measure theory incorporates the frequentist (von Mises), subjective (Ramsey, de Finetti, Savage) and financial (Shafer and Vovk) conceptions of probability, hence it is to be preferred. The achievement of Harrison, Kreps and Pliska was in representing the approach developed in BSM in terms of measure theory. The benefit of this was immediately apparent to the theorists. There had always been a dissonance between the Black-Scholes approach, rooted in the Capital Asset Pricing Model, and Merton’s, rooted in stochastic calculus (Johnson, 2015b: 52‒53). The connection between the two approaches became apparent in the ‘Radon-Nikodym derivative’ central to Harrison, Kreps and Pliska’s efforts.
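In standard notation (generic, and not specific to Harrison, Kreps and Pliska’s papers), that connection runs through the change of measure: the arbitrage-free price of a claim X paying at time T is an expectation under the risk-neutral measure Q, which can be rewritten as an expectation under the ‘real-world’ measure P weighted by the Radon-Nikodym derivative,

\[
\pi_0 = \mathbb{E}^{Q}\big[e^{-rT} X\big] = \mathbb{E}^{P}\Big[\tfrac{\mathrm{d}Q}{\mathrm{d}P}\, e^{-rT} X\Big],
\]

where Q is equivalent to P and makes the discounted price of the underlying asset a martingale.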

There is, however, another issue with Ayache’s appeal to Shafer and Vovk, connected to the Cournot principle, which argues that small probabilities should be ignored. As such, it is inconsistent with Ayache’s assertion that the problem with probability is in not being able to measure the ‘im-possible’ event. It is worth noting that the origin of Cournot’s principle is in the discussion of the Petersburg game, which was prominent in the eighteenth century when the financial (Huygens, Bernoulli) and classical (de Moivre, Montmort) conceptions of probability were debated. Cournot’s principle was displaced by the concept of utility only in the mid-twentieth century.

While Ayache’s knowledge of probability theory is out of date and consequently his understanding of financial mathematics is flawed, the closing stages of The Medium of Contingency are the least problematic from a mathematical point of view. The essence of what Ayache is saying, that price is anterior to probability and that models need to be re-calibrated if based on stationary ergodic processes, is conventional. There are still difficulties, though, when Ayache claims: “Price has everything to do with volatility (price is local) and nothing to do with probability or the long run” (326). This is misguided because volatility has everything to do with probability in financial mathematics, and is now used as a proxy for probability. The statement is informative in that it highlights the central issue with Ayache’s account: his conception of probability seems to be based on what he was formally taught and relates to the conception of probability rooted in the physical sciences. His experiences in the markets challenge this conception, pointing to a more subjectivist view. However, financial mathematics points to a third conception, that probability derives from price and that prices represent events. None of this is controversial. The problem with Ayache’s argument is not in its end-point but that it is built on a peculiar conception of measure theory that leads him to create straw men, which he demolishes with gusto, only to arrive at an uncontroversial position that was there at the very origins of mathematical probability. His conclusive statements (363-364) reflect this, being a mixture of the obvious and the incorrect, as I have highlighted above.

Measure theory provides financial mathematicians with a sophisticated toolbox that enables them to discern something of the nature of markets. It is not the job, or even the aspiration, of financial mathematicians to determine the ‘true’ price of assets (Johnson, 2011; 2015a). Mathematics tells us that if asset prices in a market were deterministic functions (derivatives) of a continuous stationary ergodic process (the underlying asset) and the market does not admit arbitrage opportunities (opportunities to generate a risk-less profit), then there is a unique, indisputable, price for those assets. If asset prices are deterministic functions of discontinuous stationary ergodic processes, we no longer have a unique price for the assets, though we still know the distribution of the underlying asset and so can make informed decisions as to the prices of derivative assets (Cont and Tankov, 2004: Section 10.5.2). In the case of real markets, where the driving asset prices are not given by stationary ergodic processes, we have the tools of measure theory to use to search for results. One current initiative is to develop pricing models without having to specify how an asset price evolves; rather, the input is quoted asset prices, for example as described in Hobson (2011). This is the response of mathematics to the well-known issues that Ayache highlights, which leaves me with the impression that Ayache’s thinking is outdated.

Abstract market theory

The weakness in Ayache’s argument derives from its narrow basis in the literature. Ayache seems oblivious to recent scholarship investigating the pre-Laplacian development of probability or the pricing of derivatives before Bachelier, scholarship that renders his account banal. Ayache’s account is of interest because of the links he draws to contemporary French philosophy, notably Badiou and Meillassoux. However, since his account of the mathematical theory is so weak, it is difficult to see how firm connections can really be made. Yet it is precisely through these connections that Jon Roffe employs Ayache in his book Abstract Market Theory. Roffe’s book presents a shorter, clearer argument that approaches ‘the market’ from the standpoint of French philosophy. This is a line similar to Ole Bjerg’s Making Money (2014), where Slavoj Žižek, rather than Deleuze, Badiou, and Meillassoux, provides the framework. While Roffe’s argument is well presented, it conveys the impression that it is principally a work of hermeneutics, a feature shared with Bjerg’s book, and it does not really get to grips with the actual phenomenon of ‘the market’.

In particular, while Roffe aims to “philosophically engage with the question of the market, a central yet neglected object of economics, on non-economic grounds” (3), the basis of his understanding of markets comes from a few principal sources: Ayache, Taleb, Fama and Graeber. The result of the imbalance between source materials and analytic methods is that, after around 150 pages of analysis, Roffe presents the reader with 26 Propositions. One would have hoped to get to something more concrete than a collection of statements with no suggestion that they might, justifiably, be elevated to Conjectures. For example, the second proposition – “Probability and its root possibility are incoherent concepts that rely on incompatible logical and temporal predispositions” (21) – seems to be based on Ayache’s arguments, arguments that as a probabilist I find fundamentally flawed.

Technically I think Roffe’s line is weak. In his introduction, Roffe (4) states that:

the argument advanced here involves (1) developing a concept of price, (2) developing a concept of the market, and making clear the nature of the market-price relationship, and (3) with these concepts in hand, situating in precise terms the regime of the social in relation to that of the market.

Roffe then proceeds by first discussing probability before moving onto prices. The consequence of this is that we have no clear articulation of what is meant by ‘the market’ until we are presented with Fama’s definition in relation to the Efficient Markets Hypothesis (123), which is exactly the same definition that Bjerg (2014: 51) uses. It is not unreasonable for an abstract theory of the market to travel towards a definition of the market; the problem is that Fama’s definition is very specific and closely related to the economic definition (the efficient allocation of scarce resources), which Roffe claims to wish to avoid.

To appreciate the significance of these comments, consider Roffe’s (31-33) discussion of Ayache’s rejection of Collateralised Debt Obligations (CDOs) as legitimate financial assets. Contrary to Ayache’s belief, CDOs are not novel inventions. The sixteenth-century Fuggers employed ‘corpo’ and ‘supracorpo’ structures to bundle their loan portfolio into ‘tranches’ that were sold to investors (Palmer, 1974: 554; Poitras, 2000: 269), while Mortgage Backed Securities were introduced in the USA during the nineteenth century (Levy, 2012: Chapter 5). The issue with CDOs is that they are products manufactured by financial institutions and then sold, as any other product might be made and sold. Despite being manufactured by financial institutions, they are not widely traded in financial markets. This is why they do not fit into Ayache’s account.

Conclusion

The financial crises that have occurred since 2007 highlight that the radical uncertainty of markets demands that we are flexible in our thinking, and that the consequences of ideologies that inhibit reflexivity are significant. Steven Shreve’s advice, given at the Congress of the Bachelier Finance Society, was that mathematicians must find collaborators in the social sciences, because it is important to view the markets from many perspectives and to search for connections between different perspectives if we are going to get to grips with finance. The weakness in Ayache’s argument derives from its narrow basis in the literature. He seems to relish ignoring ethnographic and sociological work on markets, and while he might feel himself to be a competent calculator he has demonstrated that he has no real understanding of mathematics. Neither Ayache nor Roffe engage with the historical literature. I understand this approach from the perspective of bankers; it is difficult to justify fat fees if you acknowledge you are peddling a financial technology, such as securitisation, that was already present in medieval finance as the triple-contract. I cannot explain why scholarly efforts to understand the markets ignore history, not least because Roffe is a colleague of James Franklin, who would have been able to explain that (historically) price is anterior to probability (see Franklin, 2001).

Ayache’s criticism of measure theoretic probability is rather conventional. Von Mises (1982: 99) criticised it as unnecessarily complex, while the statistician Maurice Kendall (1949: 102) argued that measure theory fails “to found a theory of probability as a branch of scientific method”. More recently, the physicist Edwin Jaynes (2003: 655) has described Leonard Savage’s subjectivism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science” in comparison with measure theory. When Shafer and Vovk (2001) proposed their alternative to measure-theoretic probability they argued that game-theoretic probability “captures the basic intuitions of probability simply and effectively”. While all these particular expressions of probability might be locally useful, they do not pull themselves away from the specific issue in order to gain a general sense of what is going on. Harrison, Kreps and Pliska, in building a mathematical theory of derivative pricing on the basis of measure theory, did capture the essence in the Fundamental Theorem of Asset Pricing, which can be summed up by Black and Scholes’ famous quip that it should not be possible to make sure profits. Herein lies the matter of markets as revealed by mathematics; it is as much a normative as a positive pursuit, with roots in the scholastic injunction that a riskless profit is turpe lucrum ‒ a shameful gain.

The really disappointing consequence of the limited scholarship on which both Ayache and Roffe rely is that there may well be substantial links between finance and the philosophy of Badiou and others that remain unexplored. Taleb’s third characteristic of ‘Black Swan’ events, that they are retrospectively justified, is reminiscent of Engels’ analysis of Hegel: “all that was previously real becomes unreal, loses its necessity, its right of existence, its rationality” (Engels et al., 1941: 11). Could there be a relationship between Badiou’s statement “of ‘a’, but also of ‘not a’” (Meillassoux, 2011: 3) and the idea that an event, in measure theory, could relate to different outcomes in the sample space? Perhaps the most interesting correspondence between Badiou and markets is whether Badiou’s point that one can never be certain of an event having taken place – it relies on constant re-affirmation through a “faithful procedure” (Badiou, 2007: Section 35.4) – relates to the market process of traders challenging each other’s quotes, described above. The relevance is that while Badiou argued only science and politics (along with art and love) can identify ‘Truth’, there is a growing body of literature that argues Western concepts related to democracy and science emerged out of commercial practice (Hadden, 1994; Kaye, 1998; Seaford, 2004).

Notes

  1. Ayache suggests that the issue is more specific and related to the quantum mechanical wave function being a meta-probabilistic predictive tool (23). Strictly speaking, the square modulus of the wave function (a complex-valued, not real-valued, function) can be interpreted as a probability, in the sense that it represents a density, but it is not a probability as mathematicians conceive of it. This approach is used as an analogue in explaining quantum mechanics to lay audiences, such as through the so-called Copenhagen Interpretation. There is a topic, quantum probability, which creates a mathematical theory of probability in which the wave function does represent a probability, but this is peripheral even amongst physicists.
  2. I argue that the price of any asset must be bounded by a finite multiple of the number of atoms in the universe, which is itself finite. Since all prices must be recordable, the set of possible prices within this finite bound is finite. The price of an asset is never going to be ‘purple’.
  3. When Ayache says “that the next price cannot but be unpredictable, for if it wasn’t it wouldn’t be the next price, it would be the present price” (316), he cannot maintain this claim and still assert a significant association between prices and the Wiener process, given this fundamental property of the Wiener process/Brownian motion.
  4. This account might be becoming outdated as automated traders replace traditional market-makers, but it is relevant to Ayache’s argument.
  5. Note that the specification of a bid-offer pair by a market-maker is critical; offering to sell air for £1,000/kg would not demonstrate anything, whereas offering to buy air at £999.95/kg would be challenged as a mispricing.

References

  • Badiou, A. (2007) Being and Event. London: Continuum.
  • Beunza, D. and Stark, D. (2012) From dissonance to resonance: Cognitive interdependence in quantitative finance. Economy and Society, 41(3): 383-417.
  • Bjerg, O. (2014) Making Money: The Philosophy of Crisis Capitalism. London: Verso.
  • Black, F. and Scholes, M. (1973) The pricing of options and corporate liabilities. Journal of Political Economy, 81(3): 637-54.
  • Brush, S. G. (1976) The Kind of Motion We Call Heat: A History of the Kinetic Theory of Gases in the 19th Century. Amsterdam: North-Holland.
  • Cont, R. and Tankov, P. (2004) Financial Modelling with Jump Processes. London: Chapman & Hall/CRC.
  • Engels, F., Marx, K., and Dutt, C. (1941) Ludwig Feuerbach and the Outcome of Classical German Philosophy. New York, NY: International Publishers.
  • Franklin, J. (2001) The Science of Conjecture: Evidence and Probability before Pascal. Baltimore, MD: Johns Hopkins University Press.
  • Hadden, R.W. (1994) On the Shoulders of Merchants: Exchange and the Mathematical Conception of Nature in Early Modern Europe. New York, NY: State University of New York Press.
  • Hald, A. (1990) A History of Probability and Statistics and their Applications before 1750. New York, NY: Wiley.
  • Harrison, J.M. and Kreps, D.M. (1979) Martingales and arbitrage in multi-period securities markets. Journal of Economic Theory, 20(3): 381-401.
  • Harrison, J.M. and Pliska, S.R. (1981) Martingales and stochastic integrals in the theory of continuous trading. Stochastic Processes and their Applications, 11(3): 215-60.
  • Harrison, J.M. and Pliska, S.R. (1983) A stochastic calculus model of continuous trading: Complete markets. Stochastic Processes and their Applications, 15(3): 313-16.
  • Heims, S.J. (1980) John von Neumann and Norbert Wiener: From Mathematicians to the Technologies of Life and Death. Cambridge, MA: MIT Press.
  • Hobson, D. (2011) The Skorokhod Embedding problem and model-independent bounds for option prices. In: Carmona, R. (ed.) Paris-Princeton Lectures on Mathematical Finance 2010. Berlin: Springer, 267-318.
  • Jaynes, E.T. (2003) Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.
  • Johnson, T.C. (2011) What is financial mathematics? In: Pitici, M. (ed.) The Best Writing on Mathematics: 2010. Princeton, NJ: Princeton University Press, 43-46.
  • Johnson, T.C. (2015a) Finance and mathematics: Where is the ethical malaise? The Mathematical Intelligencer, 37(4): 8-11.
  • Johnson, T.C. (2015b) Reciprocity as a foundation of Financial Economics. The Journal of Business Ethics, 131(1): 43-67.
  • Jovanovic, F. and Le Gall, P. (2001) Does God practice a random walk? The ‘financial physics’ of a nineteenth-century forerunner, Jules Regnault. The European Journal of the History of Economic Thought, 8(3): 332-62.
  • Kaye, J. (1998) Economy and Nature in the Fourteenth Century. Cambridge: Cambridge University Press.
  • Kendall, D.G., Batchelor, G.K., and Bingham, N.H. et al. (1990) Andrei Nikolaevich Kolmogorov (1903-1987). Bulletin of the London Mathematical Society, 22(1): 31-100.
  • Kendall, M.G. (1949). On the reconciliation of theories of probability. Biometrika, 36(1/2): 101-16.
  • Levy, J. (2012) Freaks of Fortune: The Emerging World of Capitalism and Risk in America. Cambridge, MA: Harvard University Press.
  • MacKenzie, D. (2008) An Engine, Not a Camera: How Financial Models Shape Markets. Cambridge, MA: MIT Press.
  • Meillassoux, Q. (2011) History and event in Alain Badiou. Parrhesia, 12: 1-11.
  • Palmer, G. (1974) The emergence of modern finance in Europe 1500-1750. In: Cipolla, C. (ed.) The Fontana Economic History of Europe: The Sixteenth and Seventeenth Centuries. London: Collins/Fontana, 527-94.
  • Pliska, S. (1997) Introduction to Mathematical Finance: Discrete Time Models. London: Blackwell.
  • Poincaré, H. and Gould, S.J. (2001) The Value of Science: Essential Writings of Henri Poincaré. New York, NY: Modern Library.
  • Poitras, G. (2000) The Early History of Financial Economics, 1478‒1776. Cheltenham: Edward Elgar.
  • Russell, B. (2009) An Outline of Philosophy. Abingdon: Routledge.
  • Seaford, R. (2004) Money and the Early Greek Mind: Homer, Philosophy, Tragedy. Cambridge: Cambridge University Press.
  • Shafer, G. and Vovk, V. (2001) Probability and Finance: It’s Only a Game! New York, NY: Wiley.
  • Sylla, E.D. (2006) Commercial arithmetic, theology and the intellectual foundations of Jacob Bernoulli’s Art of Conjecturing. In: Poitras, G. (ed.) Pioneers of Financial Economics: Contributions Prior to Irving Fisher. Cheltenham: Edward Elgar, 11-45.
  • Taleb, N.N. (2007) The Black Swan: The Impact of the Highly Improbable. New York, NY: Random House.
  • von Mises, R. (1982) Probability, Statistics and Truth. New York, NY: Dover.
  • von Plato, J. (1994) Creating Modern Probability. Cambridge: Cambridge University Press.
  • Zimmermann, H. and Hafner, W. (2007) Amazing discovery: Vincenz Bronzin’s option pricing models. Journal of Banking and Finance, 31(2): 531-46.
