
Response to Johnson: A random sample versus the radical event

Finance and Society, 2016, 2(2): 205-16.
DOI: http://dx.doi.org/10.2218/finsoc.v2i2.1734

Corresponding author:
Elie Ayache, ITO 33, 36 rue Lacépède, 75005 Paris, France.
Email: elie@ito33.com

Abstract:

Timothy Johnson’s working hypothesis in his review of my latest book, The Medium of Contingency, is that I (as well as the ‘quants’ involved in the derivative pricing industry) do not understand the foundations of abstract probability theory. In this response, I show that this is not the case. On the contrary, rules and protocols that are common in the derivative pricing industry, and whose result can be an extension of abstract probability theory as it now stands, seem to elude Johnson. To address these failings, I provide theoretical reflections on probability theory and its formalisms.

Keywords: Abstract probability theory, random sample, random variable, contingency, derivatives market

Introduction

From the start of his review, Johnson (2016b) establishes a demarcation line between the quants, who build derivatives pricing technologies for the derivatives traders, and the guardians of abstract probability theory and the fine points of its formalism, who lecture in financial mathematics. According to him, the quants use probability theory as a mathematical tool but are not aware of its foundations. Consequently, they do not understand measure theory. This is false. What is true is that Johnson, who by his own admission belongs to a different culture than the quants, is not aware of the ramifications of their practice and the ways they apply quantitative models in finance. It even appears that Johnson himself is confused about the very foundations of probability theory, which he advertises with such authority. In response to Johnson, I first address measure theory and his confusion between ‘sample space’ and ‘state of the world’. Next, I address the importance of thinking the recalibration of the derivative pricing model as a radically contingent event. In the final section, I reflect on Johnson’s failure to grasp the implications of radical changes of context.

Measure theory

Johnson claims that I don’t understand one of the main concepts of measure theory: sets of measure zero. I wonder whatever gave him this idea. Here is evidence to the contrary in my book: “Measure theory in its finest was invented to account for sets of measure zero” (Ayache, 2015: 95); “Measure theory is of the essence as soon as infinitary events are considered, and we need to exclude sets of measure zero” (Ayache, 2015: 105). And there are many more such passages. I could have saved the reader the pain of this forensic analysis by reminding Johnson that I went to the same school as Lebesgue, Borel, and, for that matter, Poincaré. So did the majority of French quants who surround me.

Similarly, I wonder how Johnson could ever entertain the thought that a ‘state of the world’ is the same thing as what the formalism of probability theory defines as a ‘sample’, or an element of the ‘sample space’, or again, a ‘random outcome’. Johnson (2016b) writes: “Probability theory starts with a ‘sample space’ that represents all possible states of the world”. This is a confused understanding of probability theory. The concrete world from which samples are taken cannot be in a ‘state’. The notion of state is an abstraction; only a model, or an abstract representation of the world, can be in a state. States of the world belong in the state space, not in the sample space. They are the values of the state variables that are identified by the physical model [en. 1] or, in the probabilistic case, the values of the random variables.

In finance, we deal mainly with stochastic processes, or collections of random variables. When a quant uses the Black-Scholes-Merton partial differential equation (PDE) to value derivatives, the price of the underlying asset is the state variable. When he uses a more advanced stochastic volatility model, such as Heston, volatility is the second state variable. There is nothing even remotely related to a sample in this.
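To fix ideas, the two models can be written out in textbook notation (a minimal sketch; the symbols are the standard ones, not drawn from Johnson’s text):

```latex
% BSM: a single state variable, the underlying price S
\frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - r V = 0

% Heston: a second state variable, the instantaneous variance v
dS_t = \mu S_t\, dt + \sqrt{v_t}\, S_t\, dW_t^{1}, \qquad
dv_t = \kappa(\theta - v_t)\, dt + \xi \sqrt{v_t}\, dW_t^{2}
```

The derivative value is a function V(S, t), or V(S, v, t) in Heston, of the state variables and of time; no sample ω appears anywhere in the formulation.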

Probability theory is a mathematical abstraction which applies to the concrete world. Random phenomena are physically or socially produced, and as a consequence probability theory has to mention the concrete sample or concrete outcome. However, it cannot account for its concreteness in full, what Jacques Bonitzer (1984: 157) calls its “absolute concreteness”. For instance, a human sample that a cancer research institution draws from a smoking population is not limited to a pair of lungs, but is a full-blooded human being. Similarly, every time we throw the die, the random outcome in its absolute concreteness is not limited to the number that will show up. The full description of the concrete situation might have to include the strength of the throwing arm (which may fail to lift the die) or the intensity of gravity (which may vanish). The whole situation consisting of gravity on earth, of the physical ability of the player and perhaps even of the expression on his face, will have to be considered as the random generating device that is triggered every time the die is thrown and produces a number on its face.

Probability theory only recognizes as a measurable event a certain characteristic of the fully concrete outcome. We must, crucially, exercise on the random phenomenon what Bonitzer (1984: 80) calls a “point of view”, keeping in mind that this point of view might change, due to unforeseen circumstances. From the point of view of lung cancer research, only the pair of lungs matters. From the point of view of gambling, only the number engraved on the face matters. A measurable event can then be defined as this number showing up. A more abstract event could be that an odd number shows up. Which fully concrete random outcome has given rise to it will only matter to the extent that it has the relevant number as one of its characteristics or properties.

While properties belong to individuals in the physical world, in mathematical ontology individuals belong to properties. In this case, the concrete human being will belong to the healthy or the unhealthy pair of lungs. The fully concrete and individual random outcome giving rise to the number ‘six’ will belong to the property ‘six’, or to the event ‘six’. The event ‘six’ is realized if the given physical outcome belongs to it, or gives rise to it. In this sense, the event is an abstraction of the random outcome. As Bonitzer (1984: 81) writes:

The category of ‘point of view’ is very closely linked to the category of abstraction. For someone observing a certain natural or social phenomenon, to exercise abstraction is to select a partial set of characteristics from the infinity of all those that belong to the phenomenon in its concreteness.
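A toy sketch may make the operation of a ‘point of view’ tangible (the listed characteristics are my own illustrative choices, and the code is only a rendering of Bonitzer’s category, not anyone’s production model):

```python
import random

# A fully concrete outcome: a bundle of characteristics, of which any
# given point of view will retain only one. Attribute names are illustrative.
def concrete_throw():
    return {
        "face": random.randint(1, 6),     # what counts for the gambler
        "arm_strength": random.random(),  # ignored by the gambling point of view
        "gravity": 9.81,                  # ignored, until it isn't
        "players_expression": "focused",  # ignored
    }

# Exercising the point of view: abstract the outcome to one characteristic.
def face(outcome):
    return outcome["face"]

# Events are properties that the concrete outcome may or may not instantiate.
def event_six(outcome):
    return face(outcome) == 6

def event_odd(outcome):
    return face(outcome) % 2 == 1

outcome = concrete_throw()
print(face(outcome), event_six(outcome), event_odd(outcome))
```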

The formalism is great in expressing only the hinge between the fully concrete outcome and the abstract event, namely that the one is an element of the other. That is to say, it is great precisely in refraining from saying what the full concreteness consists of and, on the other hand, in refraining from saying who decided which events were of interest. The beauty of the formalism lies in what it doesn’t say. Thus, the formalism opens the possibility of its own interpretation.

One should limit the fully concrete random outcome, in the case of the die, to the material device that we recognize as the die, and not extend it to the larger device known as the ‘world’. As a matter of fact, the fully concrete random outcome can be limited to the actual number showing up; that is to say, one can safely describe the material device known as a die as a device generating random numbers, abstracting the die itself, in a way, and no longer just the outcome. [en. 2] Again, as Bonitzer (1984: 147-8) writes:

Nothing stops us from reasoning on the numerical representation of the random outcome (or on its abstract representation) instead of reasoning on the ‘concrete’ random outcome. For instance, we can represent a throw of the die simply by the number that comes out as a result. As a matter of fact, we will see that, in many cases, we are led to put in correspondence a ‘concrete’ sample space and an ‘abstract’ sample space, and to move from one to the other depending on the requirements of the reasoning.

This paves the way for the introduction of the concept of random variable, which is just the extension of the concept of the event. As the Fields medallist, Terence Tao (2010), writes: “An event E can be in just one of two states: the event can hold or fail, with some probability assigned to each. But we will usually need to consider the more general class of random variables which can be in multiple states”. [en. 3] Usually, the states are numerical values, for instance the prices of an asset.
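Tao’s remark can be made explicit in the standard Kolmogorov notation (a minimal formal sketch, nothing beyond the textbook definitions):

```latex
% An event E is a two-state random variable, namely its indicator:
\mathbf{1}_E(\omega) = \begin{cases} 1 & \omega \in E \\ 0 & \omega \notin E \end{cases}

% A random variable X : \Omega \to \mathbb{R} induces its own algebra of events,
% where \mathcal{B} is the Borel \sigma-algebra of the real numbers:
\sigma(X) \;=\; X^{-1}(\mathcal{B}) \;=\; \{\, X^{-1}(B) : B \in \mathcal{B} \,\}
```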

Quants work with random variables and the only algebra of events that matters for them is the one induced by the random variable. As Bonitzer (1984: 158) writes: “The standard Borel σ-algebra B of the real numbers allows us to associate with any random variable X the σ-algebra X⁻¹(B); the corresponding measurable space will be sufficient for our purposes, so long as they depend only on values of the random variable X”. And further: “We could, after all, if not dispense altogether with the concept of the random sample (which is indispensable in constructing the very important concept of the random variable), at least very well dispense with considering it a primary concept” (Bonitzer, 1984: 159). As Tao (2010) writes:

Core concepts of probability theory, such as random variables, can be defined abstractly, without explicit mention of a measure space. Probability theory is only ‘allowed’ to study concepts and perform operations which are preserved with respect to extension of the underlying sample space. As long as one is adhering strictly to this dogma, one can insert as many new sources of randomness (or reorganise existing sources of randomness) as one pleases; but if one deviates from this dogma and uses specific properties of a single sample space, then one has left the category of probability theory.

Bonitzer (1984: 148) concurs: “The ‘concrete’ sample space has a specific function: that of keeping open the possibilities of broadening the abstract representation”. Tao (2010) continues:

This dogma is an important aspect of the probabilistic way of thinking, much as the insistence on studying concepts and performing operations that are invariant with respect to coordinate changes or other symmetries is an important aspect of the modern geometric way of thinking. With this probabilistic viewpoint, we shall soon see the sample space essentially disappear from view altogether, after a few foundational issues are dispensed with.

This is the stage the quants have reached. They do not linger on foundational issues. When Johnson says that mathematics cannot account for all possible outcomes, he, too, insists that the sample space must essentially disappear from view. However, he perverts the spirit of this abstraction, and consequently the benefit it might have for probability theory. For instance, Johnson considers the random variable known as the price of an exchange-traded asset. Like all other random variables, it is an abstraction taken over the concrete world. It is certainly the ‘concrete world’ that triggers the random changes of asset prices; however, outcomes such as 9/11 or a meteorite wiping out Manhattan are too real or too concrete to matter as such, according to Johnson. They are not measurable and were never intended to be. Only their impact on Wall Street is measurable.

Fine as this point may be, it presupposes that probability theory has already listed all the possible future outcomes of the world, and that they are simply revealed in time. The outcomes may not be identified in advance; however, they exist. They are already ‘there’, potentially contributing to our measurable events or triggering our recognized random variables. To exist, in mathematical ontology, is to be an element of a set, and the outcomes, or samples, are by definition elements of the measurable events.

I feel very uncomfortable stretching the comprehension of the sample space to such an extreme. The sample space just cannot be said to be equal to the world in its totality – I mean, in the totality of its ‘outcomes’, both past and future. One can hardly measure the degree of metaphysical commitment that is involved in saying, for instance, that a future unknown ‘outcome’, such as a meteorite wiping out Manhattan – that any future unknown outcome, as a matter of fact – is already an outcome of the random generating device known as the ‘world’. In saying this, Johnson deviates from the dogma above and uses a “specific property of a single sample space” (Tao, 2010).

It is not without reason that Bonitzer (1984: 146) insists that the random device generating the outcomes cannot be anything whatsoever, but should be constrained by a rule or a protocol. In the case of the die, it is a material object. It has to be consistent. It cannot be the ‘world’. The world is not a total of cases. It is not a defined set. The whole philosophy of Meillassoux (2007) is an argument against such totalization. As a matter of fact, not only is this stretching of the collectivizing character of the sample space metaphysically unwarranted, but physical reality itself will manifest cases of invalidity of such backward rewriting and counterfactual reasoning.

According to Meillassoux (2007: 70), probability posits “a pre-given set of possible cases which no becoming is supposed to modify”. It identifies the world “with a universe of possible cases indexable in principle, that is to say, pre-existing their ultimate discovery, and thereby constituting the potentialities of that universe” (ibid.). “The affirmation of a fundamental hazard”, he writes, “thus does not challenge, but on the contrary presupposes, the essential fixity of such a becoming, since chance can only operate on the presupposition of a universe of cases determined once and for all” (ibid.).

Meillassoux recognizes that the random outcomes, or what he calls the ‘cases’, pre-exist their discovery and their identification. They may be inconceivable ex-ante, as Johnson remarks; however, they exist; they are part of the pre-given set of possible cases, as probability theory, which is based on extensional set theory, requires. “One cannot deduce in univocal fashion the succession of events, but one can in principle index these events in their totality – even if, in fact, their apparent infinity prohibits for all time the definitive foreclosure of their recollection” (Meillassoux, 2007: 70). Meillassoux then adds that such a belief “would constitute a metaphysics of chance”, in so far as chance “would prescribe the fixed set of events within which time finds itself free to oscillate” (ibid.). “The belief in chance is inevitably a metaphysical belief” (ibid.), he remarks. It is such a decision that Meillassoux claims to have “extracted himself from by detotalising the possible” (71). He now distinguishes “the infinite from the All”, since, as he writes, “the infinity of the possible cannot be equated with its exhaustion” (ibid.).

By reassigning any conceivable or inconceivable outcome into the sample space, Johnson is committed to the metaphysical belief and to the metaphysics of chance that Meillassoux is exposing. So is Taleb (2007), by the way. For Taleb too, the fire in the casino, or generally events that do not belong, as he says, “inside the casino’s building”, are too concrete to matter as such. What matters is their abstraction, or their impact on the random variable known as the P&L of the casino. In this, Taleb demonstrates his understanding of probability, as Johnson insists. The cost, however, is the same naïve tendency as in Johnson to confuse the world and history with one sample space or one random generating device whose outcomes merely wreak havoc upon the existing catalogue of our random variables. As I write in The Blank Swan: “The question of history (or of the market) is too great and too overwhelming a phenomenon to fall under Taleb’s hasty metaphysical reductionism and to be identified with a mere output that would be available to his inspection, as if from outside” (Ayache, 2010: 11). Indeed, Taleb (2007) speaks of the “script that produces events” (8); of “the generator of history”, the “generator of the world” (268), and the “generator of reality” (270); of the “model which runs the world” (267); and, in other places, of “that big machine that generates events” (12). Even truths are “generated by mechanisms” (20), according to him. “The true explanation is unique”, he writes, “whether or not it is within our reach” (72).

We all agree that the sample space must be kept implicit and ultimately disappear from view. However, as both Tao and Bonitzer indicate, the main benefit is not to hide the concrete world but, on the contrary, to introduce new sources of randomness, that is to say, new random variables, and consequently new states of the world. This is radically different from producing old (yet hidden) samples. This is called recalibration, or a radical change of the point of view, or a radical change of context, or a change of the algebra of events – in a word, a change of the world. It cannot be produced by an existing outcome, but only by a reinterpretive event. To repeat, the ‘state of the world’ is not a sample. It is the state of the interpreted and continually reinterpreted world.

If massive meteorites continue falling randomly on the major cities of the planet, surely a new source of randomness will have to be recognized, and consequently Wall Street will have to introduce a new random variable measuring this effect. Consider, on the other hand, the case of no gravity that we have already mentioned. To what event, among those that were identified in the context of playing dice, should such an outcome belong? This is a context-disrupting outcome, so it leaves us with an alternative: either recognize that it doesn’t count in the context of playing dice, or admit that we have been kicked into a different context altogether, in which the intensity of gravity is now properly identified as a random variable, and in which a certain sub-algebra was merely confused, by some foolish and carefree men, with the algebra of events relevant to playing dice in that corner of the world.

Note, however, that even this is not guaranteed, because the two contexts in question may not be commutative. If God had been first to play dice with the earth’s gravity, it may indeed have never come to pass that man should invent the game of dice or even exist. Extension of algebras of events to larger algebras, or set-theoretical inclusion of sub-algebras in larger ones, is not the only way to vary their structure. Contexts or algebras of events can be incompatible with each other and physical reality can be such that observables and events are not jointly measurable. It may not be possible to ‘add’ sources of randomness one after the other, as the Kolmogorov measure space (and its exegesis by Bonitzer and Tao) seems to suggest, because ‘physical objects’, their ‘properties’ and the ‘states’ they are in may not be stable and re-identifiable as we move from one context to the next. Reality is more general than objectivist reality and it may definitely produce statistics, i.e. something real and tangible, which are incompatible with any possible variation of the Kolmogorov σ-algebras and the probability measures defined on them (witness the violation of the Bell inequality in quantum mechanics).

In fact, Kolmogorov makes too strong an assumption in postulating that outcomes can be collected in a single universe. Reality is ‘weaker’ and it is actually the case that whole ranges of possible states, together with all their possible set-theoretic combinations, can be incompatible with each other, and therefore necessitate a richer structure than Kolmogorov’s σ-algebra. This is called an ortho-algebra and a meta-probability calculus corresponds to it, known as the wave function.
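For reference, the statistics in question are captured by the CHSH form of the Bell inequality (a textbook statement, reproduced only to fix ideas):

```latex
% CHSH correlator of two-valued observables a, a' (one context) and b, b' (another)
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b')

% Embedding all four observables in a single Kolmogorov space forces
|S| \;\le\; 2

% whereas quantum statistics attain the Tsirelson bound
|S| \;\le\; 2\sqrt{2}
```

No variation of a single σ-algebra and probability measure can produce a correlator exceeding 2; the experimentally recorded violations are precisely the ‘real and tangible’ statistics referred to above.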

It is not that reality is more restrictive and less general than abstract probability theory. On the contrary, it is probability theory that is too specific. In an article in which she establishes that “quantum mechanics is neither a ‘normal’, nor an ‘abnormal’ realization of the abstract theory of probabilities, but a pioneering materialization of a possible future extension of the abstract theory of probabilities as it now stands”, Mugur-Schächter (1992: 89, 100) concludes that “the concept of a probabilizable space is very restrictive”. Indeed, probability theory is based on quite regional random generating devices, which can only generate classical statistics. Caution should therefore be exercised when the world at large is considered as the ‘random generating device’. In general, incompatible ranges of possibility may arise, and this general view of reality is the one I have adopted and very explicitly advertised, when I introduced the void of possibilities from which the event radically emerges.

I am grateful to Johnson for instructing the reader of my book in Kolmogorov, von Mises, Keynes, Knight, Ramsey, Wiener and Poincaré, and all the luminaries of probability theory, but there is one major generalization of probability theory which he has missed, by far the most significant in my own construction. It is a meta-contextual probability theory, an instance of which is quantum mechanics. Because and only because it is more general and ‘weaker’ than probability theory, and admits of probability theory only as a sub-case, I endorse Johnson’s characterization of my understanding of probability theory as weak.

Thinking the contingent event

The world is such that it can turn against us, as an ‘outcome’, the failure of our whole procedure of grouping outcomes into events, and events into algebras. For how long are we going to keep our eyes fixed on what used to matter and might, for that matter, no longer exist after the event? For how long are we going to keep applying to the event the backward-looking mirror that Taleb himself criticizes? While the outcome of no gravity ‘may not count’ in the dice-throwing game, or may not be part of the corresponding world, players of the financial market cannot display such frivolity. The random phenomenon they are engaged with is truly the world at large, or history in all its history-changing ‘outcomes’ (I prefer to call them events), and this is the reason why I believe that probability theory meets its limit here, and should be supplemented by a more general calculus which can account for changes of context.

This meta-probabilistic calculus is, in my view, the market of contingent claims. Probability theory needs not only to augment the sample space with an event space, as Johnson lectures us; it needs to augment itself with the whole derivatives market, in order to re-establish contact with reality and with the massive randomness of the concrete world – only it will do so from the opposite end to the sample space. This is my ambition for the market, advertised in my book from start to finish, and it is exactly what Johnson has missed.

The formalism of abstract probability theory doesn’t tell us how to change the context and reorganize the ‘random outcomes’ of the world in a different algebra of events. While mathematics, as Johnson says, cannot and need not recognize the outcomes, we can, and we must. This is probably because we are physicists, or better, metaphysicians. The formalism cannot attend to the meta-formalism. Someone locked inside a formalism cannot understand that those who are not have a superior understanding of the formalism.

Meillassoux (2007: 72) continues: “I will call contingency the property of an indexed set of cases (not of a case belonging to an indexed set) of not itself being a case of a set of sets of cases; and virtuality the property of every set of cases of emerging within a becoming which is not dominated by any pre-constituted totality of possibles”. Here, Meillassoux very clearly refers to the radically emergent event. It is not that an inconceivable outcome becomes conceivable when it occurs and is subsequently said to have all along been an element of a measurable event, therefore to pose no problem to the probability formalism. Rather, it is an event, or a set of cases, as Meillassoux very clearly states, that emerges and was not part of the previous set of sets of cases. It is an event that disrupts the previous algebra of events. Such events, writes Meillassoux, “cease to be doubled by phantomatic possibilities which prefigure them before they occur, to be conceived instead as pure emergences, which before being are nothing, or, once again, which do not pre-exist their existence” (72). He adds: “Time throws the die, but only to shatter it, to multiply its faces, beyond any calculus of possibilities” (74).

As I write in The Medium of Contingency:

My point of rupture with sets and enumeration of states remains the event, in the sense that the event belongs to nothing that exists before, to no previous situation or ontology. My point is that the void of possibilities, which is the only ‘thing’ that the emergence of the event can possibly lean against, can be filled with the market as medium of the event and can take place in the marketplace. (Ayache, 2015: 23)

Further: “The existence of a level of reality in which we have to conceive of such a thing as incompatible ranges of possibilities is, to us, a pressing indication of the existence of a level of reality in which we may conceive of no range of possibilities at all” (Ayache, 2015: 21, original emphasis).

Just as it is impossible, in quantum mechanics, to measure conjugate variables in the same context and to join together the ranges of possible states of the corresponding observables, and just as it is impossible to reduce the purely contingent event to the register of possibility and to identify beforehand the range of possibilities (or states of the world) of which the event is supposed to be the realization, recalibration is im-possible and cannot be accounted for in possibility. (Ayache, 2015: 25)

I have, as ambition for the market, precisely that it connects us, through recalibration, with the non-total of possibilities that Meillassoux is talking about. As I write: “If you wish to trade and engage dynamically in the market, you become dependent on all the prices of derivatives of all degrees of complexity – in other words, you become dependent on the total ‘state’ of the market, which can never be reduced to a total of states” (Ayache, 2015: 50). A really challenging criticism of my work would be that the market ultimately falls short of such an ambition because it remains a ‘calculus’ no matter what I do. Compare this to the regressive criticism that the market poses no problem for probability and never deviates from its formalism.

Johnson acknowledges the necessity of recalibration toward the end of his article. However, like anything else I propose, he thinks recalibration is uncontroversial. Indeed, it is a known fact that the price processes are non-stationary and that jumps and stochastic volatility will sooner or later irrupt and force us to recalibrate a basic model like BSM. But wait a minute! What probabilistic formalization has Johnson to offer for recalibration? If recalibration is the very matter the market is made of, how is it covered by abstract probability theory, which, according to Johnson, has it all covered already? Rebonato (2004: 15), who has written the reference book of the quant’s world, recognizes: “Possibly no aspect of derivatives trading has a deeper-reaching impact on pricing than the joint practices of out-of-model hedging and model recalibration.” And further: “Similarly important, universal and difficult to justify theoretically is the practice of re-calibrating a model to the current market plain-vanilla prices throughout the life of the complex trade” (ibid.).

When Johnson writes that ‘outcomes’ such as 9/11 or the Manhattan-wiping meteorite are not measurable and as such do not pose a problem to probability theory – that on the contrary they confirm its formalism, which recognizes them as unmeasurable outcomes – he saves the formalism and offers what is in fact a probabilistic formalization of 9/11 and of the meteorite. He reduces them to the mathematical symbol ω ∈ {S&P 500 = 0}. But he proposes no such formalization of recalibration.

By contrast, I explicitly ask:

Why can’t the usage of the BSM formula, which everybody agrees is tantamount to the practice of calibrating and recalibrating BSM to derivatives prices in ways which obviously contradict and misappropriate its probabilistic make-up, be built into a rigorous meta-discourse of the BSM theory? (Ayache, 2015: 93, original emphasis)

And further:

Extending the market of the underlying into a market for contingent claims written on it, or, again, formalizing the market in the full sense of the term – which is that the underlying price and the derivative price are of the same nature – does not proceed along the line of underlying stochastic processes of increasing complexity but along the line of recalibrations to contingent claims of increasing complexity. (Ayache, 2015: 361)

In derivative pricing technology, and contrary to Johnson’s claim, recalibration has nothing to do with the price process of the underlying asset and any stationarity or non-stationarity thereof. The quant cannot wait for volatility eventually to become stochastic or for jumps eventually to irrupt. Hedging against changes of the parameters that are constant in the model – what Rebonato calls ‘out-of-model hedging’ – has to be part of the package from the start. Why? Because it is not hedging against unforeseen changes of the process of the underlying price (or recalibration in Johnson’s sense); it is hedging against changes of the market price of the derivative. This is part of the package because the model is destined for a derivative trader, and it is part of the design of the shipped technology that the market price of the derivative should be different, both in nature and in numerical value, from its model value. Recalibration is acknowledging that the value of the derivative is a market price, which therefore introduces a new range of states of the world.
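Here is a minimal sketch of the joint practice Rebonato describes (the BSM call formula is standard; the function names, the bracketing interval and the market data are illustrative assumptions, not any vendor’s API):

```python
from math import log, sqrt, exp
from statistics import NormalDist
from scipy.optimize import brentq  # assumes SciPy is available

N = NormalDist().cdf

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton value of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def recalibrate(S, K, T, r, market_price):
    """Invert the model: find the sigma that reproduces the market price.
    Inside the model sigma is a constant; the practice contradicts this
    by re-solving for it every time the market price of the option moves."""
    return brentq(lambda s: bsm_call(S, K, T, r, s) - market_price, 1e-6, 5.0)

# Illustrative numbers only: the derivative's value is read off the market,
# and the model is bent to agree with it.
sigma_today = recalibrate(S=100.0, K=100.0, T=1.0, r=0.01, market_price=8.0)
print(f"recalibrated implied volatility: {sigma_today:.4f}")
```

The point of the sketch is the direction of the computation: the market price of the derivative is the input and the model parameter is the output, not the other way round.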

Probability theory abstracts the reality known as the ‘market’. From all the concrete random outcomes that this reality produces, it retains only what it can measure, namely, their contribution to changes of the random variable known as the ‘price’ of a certain asset. Functions of that random variable, known as ‘contingent payoffs’, are equally unproblematic. No-arbitrage imposes that their values are deterministic functions of the state variable, in this case, the price of their underlying asset. The universe is closed and even finite, as Johnson insists. Now, the fact that the derivative valuation model should become part of the pricing technology and that the derivative trader should use it to create the market of the derivative, adding new ranges of states of the world and breaching the closure of the previous world – this whole practice and even industry is too real and too concrete to matter, according to Johnson. For him, it is and has always been just another unmeasurable concrete outcome of the reality we have abstracted anyway. How could a market for the derivative ever emerge when our ‘abstraction’ imposes that only a market for the underlying asset exists?

My book focuses on making problematic what Johnson believes is unproblematic. It consists in recognizing the specific nature of price. I sincerely doubt that abstract mathematics can “identify the essential nature of markets”, as Johnson claims. The reality and the technology that the quants are immersed in are better candidates, I think. I will let the sociologists, the philosophers and the artists out there decide to which they prefer to turn their attention.

Radical changes of context and the failure of the notion of state

Johnson’s whole misreading of my book is explained by his failure to grasp the incompatibility of contexts and the meta-probabilistic level it calls for. When I claim that the becoming-price of the derivative value, or the emergence of the derivative price as a new state of the world, is an unmeasurable event relative to the σ-algebra of BSM, and therefore calls for a radical change of context, Johnson replies that it doesn’t. To him ‘unmeasurable’ can only suggest the unproblematic sample, not the extraordinary event; something below the algebra of events, not something transcending it.

Yet, as soon as non-commutative contexts are considered, calling for the richer structure of ortho-algebras, an event may no longer be divisible by another. Tanaka (2013) calls them incommensurable events, in a different sense of the word ‘measure’, obviously, than the one relative to the inclusion of the probabilistic sample in the probabilistic event. “The dogmatic thesis of the divisibility of all events” is a presupposition, according to Tanaka, which may not be “faithful to the concrete situation of experimental contexts”. He insists that “where irreducible contingency appears in the context of observation”, indivisible events may exist.

It is part of the essential (and as a matter of fact, extraordinary) nature of price, not only that it should be random, but that any statistics concerning it should be writable as a contingent claim, therefore tradable. The realized volatility of price can be written as a volatility swap; the jump in the price can be written as a gap option. These contingent claims trade and admit of prices in turn, and for this reason the pricing model has to account for stochastic volatility and for jumps. Crucially, this is all happening on the same plane and in the same instant, otherwise known as the market, and not happening in a sequence. There is not one context, not one stochastic structure, in which the market could ever be framed. Ultimately, this metamorphic matter is itself incompatible with the notion of state of the world. This is the full meaning of my statement: “Another formulation of the recalibration problem is the fundamental principle according to which states of the world in the market are prices, all the prices and nothing but the prices (i.e. they are not abstract states of the world)” (Ayache, 2015: 363).
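To fix ideas on the writability of statistics as contingent claims (standard payoff definitions, quoted only as illustrations):

```latex
% Realized volatility over n returns, annualized with 252 trading days
\sigma_{\mathrm{realized}} =
  \sqrt{\frac{252}{n} \sum_{i=1}^{n} \left( \ln \frac{S_i}{S_{i-1}} \right)^{2}}

% Volatility swap: a statistic of the price, written as a tradable payoff
\mathrm{payoff} = N \left( \sigma_{\mathrm{realized}} - K_{\mathrm{vol}} \right)
```

Here N is the vega notional and K_vol the volatility strike; the moment the statistic is so written, it admits a price of its own.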

Of course, Johnson finds unproblematic that states of the world in the market are prices and nothing but prices. But he doesn’t seem to notice the second clause of my statement, which is that they are all the prices. By that I mean that the prices of all the derivatives, of all grades of complexity and encoding any imaginable statistics – what’s more, written one upon the other in an infinite chain – should be part of the same ever-extending medium. Clearly this universe is not finite! My speculation is that, as per Meillassoux, it fails even to constitute a total set of states of the world.

Another serious misreading occurs when Johnson attributes to me the thought that “mathematical probability theory fails because it cannot identify all possible states of the world”. The actual statement I make, which, ironically enough, Johnson quotes in full, is the following: “The first predicament of probability theory, and consequently its inability to deal with absolute contingency and its medium (the market), lie in identifying the possible states, not in their subsequent probabilistic weighting” (Ayache, 2015: 147, original emphasis). So it is not the failure to identify the states of the world that poses a problem for me, but exactly the opposite! As the emphasis indicates, it is the identification of states, it is the whole notion of state of the world, the whole stochastic structure underlying probability theory, which I criticize. All of part III of my book is dedicated to this criticism. It finds no mention in Johnson’s review.

Irony becomes farce when Johnson promotes Hobson at the closing of his review. In Johnson’s reading of Hobson, the proposal is no longer to “specify how an asset price evolves”, but rather to consider the “quoted asset prices” as input. I wonder whether Johnson really understands what he is transcribing. For the exact proposal, in Hobson (2011), is to drop any stochastic assumption relating to the underlying asset and to consider, instead, the market prices of derivatives written on that asset as the basis. In other words, it is to drop the framework of probability and the identification of states of the world, in favour of the quant’s constant and universal recalibration to prices of contingent claims!

Lorenzo Bergomi, the head quant of Société Générale, is a few steps ahead on the same path. He introduces the important distinction between model and pricing equation. He writes: “A pricing equation is essentially an analytical accounting device: rather than predicting anything about the future dynamics of the underlying securities, a model’s pricing equation supplies a decomposition of the P&L experienced on a derivative position” (Bergomi, 2016: 2). Further: “The argument goes this way and not the other way around. Modelling in finance does not start with the assumption of a stochastic process” (Bergomi, 2016: 7, original emphasis).
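Bergomi’s distinction admits a one-line statement: the textbook decomposition of the P&L of a delta-hedged option position under BSM (reproduced here only as a sketch of what a pricing equation ‘accounts for’):

```latex
% P&L of a delta-hedged option over an interval dt, with gamma \Gamma and
% pricing (implied) volatility \sigma: realized versus implied variance
d\mathrm{P\&L} \;=\; \tfrac{1}{2}\, \Gamma S^{2}
  \left( \frac{(dS)^{2}}{S^{2}} - \sigma^{2}\, dt \right)
```

Nothing in this line predicts the dynamics of S; it merely attributes the realized P&L to the gap between the variance the market delivers and the variance the pricing equation was fed.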

At least as charming as his final surrender to Hobson is Johnson’s final admission that market prices are set in a “discursive manner” and that they do not resemble numbers printed on ticker tapes. Am I hearing right? Where has our probability champion gone, all of a sudden? I thought the essential nature of markets was supposed to be identified by abstract mathematics! Could we be finally looking at the end of probability? And did it ever cross Johnson’s mind that my whole book might be an essay in the ontology of the market, precisely exceeding mathematical finance, yet resisting a dissolution in sociology or anthropology? It has always been my motto that price is not a number (Ayache, 2015: 52, 81, 98, 223, 362).

My book is an attempt to lay the metaphysical foundations of, and consequently to formalize, what Bergomi, Rebonato, and as a matter of fact, Hobson, are up to. It aims to introduce the material price of the derivative, as opposed to the probability of the underlying price. The two categories at play, here, both exceeding abstract probability theory, are writing and money. Johnson has no notion of either. As a result, not only does he completely misunderstand my deduction of the matter of price from a novel interpretation of Brownian motion[en. 4] (part II) and of incomplete markets (part IV), but he equally misinterprets the work of Shafer and Vovk (2001). Indeed they propose that money is more fundamental than probability, and consequently their game-theoretic probability is an alternative to measure theory as a whole, not an uncontroversial case of risk-neutral probability, as Johnson thinks.

To crown my frustration, I have discovered that the main statements Johnson makes here (that abstract probability theory is no longer associated with counting, or that the market is not identifiable with a ticker tape) he has actually made elsewhere, in articles addressed to the general public. [en. 5] This makes me think that Johnson has not in fact reviewed my book but repeated a previous lecture. Perhaps he will emerge from this tunnel when he realizes that the bonfire he puts up at the end (Hobson) is in fact the entrance lighting of my book.


Notes

  1. Or metaphysical model: indeed, the terminology of ‘states of the world’ is very common among the metaphysicians of possible worlds, or generally the users of possible-worlds semantics.
  2. However, ‘six’, now considered as an outcome, would still have to be distinguished from ‘six’ as event. ‘Six’, as outcome, tells us what has physically or concretely happened; as such, it contributes to ‘six’, the abstract event that counts. Formally, this translates into saying that the outcome ‘six’ is an element of the event ‘six’, which has the outcome ‘six’ as its only element: ω ∈ {ω}.
  3. Note that Tao (2010) rightly assigns probabilities to states, thus confirming their difference from samples.
  4. Speaking of Brownian motion, I literally fell off my chair when reading this statement from Johnson (2016b): “Because the Wiener process is continuous, it is predictable in the sense that we can predict it will hit a value if it comes sufficiently close to that value”. But what is meant by ‘sufficiently close’ here, and on what scale? Is it dollars, cents, or fractions of cents? Try and explain that to a high-frequency trader who is booking larger and larger trades in smaller and smaller time intervals. The Wiener process may be continuous, but it is unpredictable on any scale – it lacks a trend on any scale – because it is nowhere differentiable! A formal statement of this scaling property is sketched after these notes.
  5. In an article posted on his blog, Johnson (2016a) writes: “On this basis we can describe a market made by market-makers as a discursive arena”. In his contribution to The Best Writing on Mathematics 2010, he writes: “If I want to measure the value of a painting, I can do this by measuring the area that the painting occupies” (Johnson, 2010).
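As announced in note 4, the scaling argument can be stated formally (a standard property of the Wiener process, added only for completeness):

```latex
% Brownian scaling: for any c > 0, the rescaled process is again a Wiener process
(W_{ct})_{t \ge 0} \;\overset{d}{=}\; (\sqrt{c}\, W_t)_{t \ge 0}

% Increments are of order \sqrt{\Delta t}, so the 'trend' diverges at every scale:
\frac{|W_{t+\Delta t} - W_t|}{\Delta t} \;\sim\; \Delta t^{-1/2} \;\longrightarrow\; \infty
\qquad (\Delta t \downarrow 0)
```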

References

  • Ayache, E. (2010) The Blank Swan: The End of Probability. Chichester: Wiley & Sons.
  • Ayache, E. (2015) The Medium of Contingency: An Inverse View of the Market. New York, NY: Palgrave Macmillan.
  • Bergomi, L. (2016) Stochastic Volatility Modeling. Boca Raton, FL: CRC Press.
  • Bonitzer, J. (1984) Philosophie du Hasard. Paris: Éditions Sociales.
  • Hobson, D. (2011) The Skorokhod embedding problem and model-independent bounds for option prices. In: Carmona, R. (ed.) Paris-Princeton Lectures on Mathematical Finance 2010. New York, NY: Springer, 267-318.
  • Johnson, T. (2010) What is financial mathematics? In: Pitici, M. (ed.) The Best Writing on Mathematics 2010. Princeton, NJ: Princeton University Press, 43-46.
  • Johnson, T. (2016a) Sincerity – the subjective rationality of markets. Available at: <http://magic-maths-money.blogspot.fr/2016/01/sincerity-subjective-rationality-of.html/>. Accessed 31 August 2016.
  • Johnson, T. (2016b) The necessity of multi-disciplinary scholarship for finance: On Ayache and Roffe. Finance and Society, 2(2): 189-204.
  • Meillassoux, Q. (2007) Potentiality and virtuality. Collapse: Philosophical Research and Development, II: 55-81.
  • Mugur-Schächter, M. (1992) The probability trees of quantum mechanics: Probabilistic meta-dependence and meta-meta-dependence. In: Carvallo, M.E. (ed.) Nature, Cognition and System II. Dordrecht: Kluwer Academic Publishers, 89-111.
  • Rebonato, R. (2004) Volatility and Correlation: The Perfect Hedger and the Fox. Chichester: Wiley & Sons.
  • Shafer, G. and Vovk, V. (2001) Probability and Finance: It’s Only a Game! Chichester: Wiley & Sons.
  • Taleb, N.N. (2007) The Black Swan: The Impact of the Highly Improbable. New York, NY: Random House.
  • Tanaka, Y. (2013) The problem of the indeterminate past in quantum physics and Whitehead’s epochal theory of time. Available at: <http://pweb.sophia.ac.jp/process/>. Accessed 31 August 2016.
  • Tao, T. (2010) 254A, Notes 0: A review of probability theory. Available at: <https://terrytao.wordpress.com/2010/01/01/254a-notes-0-a-review-of-probability-theory/>. Accessed 31 August 2016.