The Role of Ignorance about Keynes's Inexact, Approximation Approach to Measurement in A Treatise on Probability in the Keynes-Tinbergen Exchanges of 1938-1940 in the Economic Journal

J. Tinbergen and J. M. Keynes held diametrically opposed positions on measurement. Tinbergen's physics background led him to deploy an exact approach to measurement based on the specification of probability distributions, such as the normal and log normal, with exact and precise probabilities that were linear, additive, and definite. All probabilities for Tinbergen were assumed to be well-defined, precise, exact, determinate, definite, additive, linear, independent, single-number answers, whether the field was physics or economics. Keynes's approach was an inexact one. Probabilities for Keynes were, with a few exceptions, partially defined, imprecise, inexact, indefinite, indeterminate, non-additive, non-linear, and dependent. Probability estimates for Keynes required two numbers to specify the probability within a lower and an upper bound (limit), and not the one single number of Tinbergen's approach. Keynes called this approach Approximation. Keynesian probabilities are interval valued. The problem, from Keynes's perspective, was that Tinbergen was trying to apply to economic data techniques which were only sound when applied in physics, where laboratory-controlled environments with detailed experimental design could generate data and replicate/duplicate the experiments. Keynes had always argued that economics was not a physical or life science like physics, engineering, biology, or chemistry, and that it could never be like physics.

"For it is not this probability that we have discovered, when the accession of new evidence makes it possible to frame a numerical estimate. Possibly this theory of unknown probabilities may also gain strength from our practice of estimating arguments, which, as I maintain, have no numerical value, by reference to those that have. We frame two ideal arguments, that is to say, in which the general character of the evidence largely resembles what is actually within our knowledge, but which is so constituted as to yield a numerical value, and we judge that the probability of the actual argument lies between these two. Since our standards, therefore, are referred to numerical measures in many cases (author's note: Keynes's emphasis in italics on the two numerical measures), we come to believe that it must also, if only we knew it, possess such a measure itself" (Keynes, 1921, pp. 31-32; bold face added). This quotation is the foundation for F. Y. Edgeworth's conclusion in his 1922 review that Keynesian probabilities are intervals. Keynes also explicitly discusses intervals on pp. 22, 23, 24, 29, and 35 of chapter III. Keynes's diagram on page 39 (p. 42 of the CWJMK version of the TP) presents no theory of ordinal probability, just as chapter 3 of the GT contains no theory of expected aggregate demand and expected aggregate supply.
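Keynes's bounding procedure can be made concrete in a few lines of code. The sketch below is the present author's illustration, not Keynes's own notation: the class name and the example numbers are invented for exposition, and a Tinbergen-style precise probability appears as the degenerate case in which the two bounds coincide.

```python
# A minimal sketch of an interval-valued (Keynesian) probability: two
# numbers, obtained from two "ideal arguments", bound the non-numerical
# probability of the actual argument from below and above.
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalProbability:
    lower: float  # numerical value of the weaker ideal argument
    upper: float  # numerical value of the stronger ideal argument

    def __post_init__(self) -> None:
        if not 0.0 <= self.lower <= self.upper <= 1.0:
            raise ValueError("bounds must satisfy 0 <= lower <= upper <= 1")

    @property
    def is_precise(self) -> bool:
        # Tinbergen's single-number case is the degenerate interval.
        return self.lower == self.upper

keynes_estimate = IntervalProbability(0.3, 0.6)       # two numbers required
tinbergen_estimate = IntervalProbability(0.45, 0.45)  # one number suffices
print(keynes_estimate.is_precise, tinbergen_estimate.is_precise)  # False True
```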

Keynes's Method of Inexact Measurement and Approximation in Part II of the TP
Keynes's characterization of his approach again emphasizes the importance of probabilities being "between" an upper bound and a lower bound: "It is evident that the cases in which exact numerical measurement is possible are a very limited class, generally dependent on evidence which warrants a judgment of equi-probability by an application of the Principle of Indifference. The fuller the evidence upon which we rely, the less likely is it to be perfectly symmetrical in its bearing on the various alternatives, and the more likely is it to contain some piece of relevant information favouring one of them. In actual reasoning, therefore, perfectly equal probabilities, and hence exact numerical measures, will occur comparatively seldom. The sphere of inexact numerical comparison is not, however, quite so limited. Many probabilities, which are incapable of numerical measurement, can be placed nevertheless between (author's note: Keynes's emphasis) numerical limits. And by taking particular non-numerical probabilities as standards a great number of comparisons or approximate measurements become possible. If we can place a probability in an order of magnitude with some standard probability, we can obtain its approximate measure by comparison. This method is frequently adopted in common discourse" (Keynes, 1921, p. 160).
Pp. 161-163 are of great importance, as Keynes there demonstrates his method of analysis using approximation, as he does also in chapter 17 on pp. 186-194. Keynes's presentation of his method of inexact measurement and approximation in Part II can only lead to the complete and total rejection of the fundamentalist claim that purports to have discovered Keynes's "theory" on pages 38-40 of the TP, in an analysis that Keynes himself calls "brief".

Keynes's Method of Inexact Measurement and Approximation in Part III of the TP
"There is one class of probabilities, however, which I called the numerical class, the ratio of each of whose members to certainty can be expressed by some number less than unity; and we can sometimes compare a non-numerical probability in respect of more and less with one of these numerical probabilities. This enables us to give a definition of "finite probability" which is capable of application to non-numerical as well as to numerical probabilities. I define a "finite probability" as one which exceeds some numerical probability, the ratio of which to certainty can be expressed by a finite number. The principal method, in which a probability can be proved finite by a process of argument, arises either when its conclusion can be shown to be one of a finite number of alternatives, which are between them exhaustive or, at any rate, have a finite probability, and to which the Principle of Indifference is applicable; or (more usually), when its conclusion is more probable than some hypothesis which satisfies this first condition" (Keynes, 1921, p. 237; bold face added).
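On one plausible reading of this definition, a "finite probability" is simply one whose lower bound provably exceeds some positive numerical standard. The helper below is a hypothetical illustration of that reading, not a formula taken from the TP:

```python
# A sketch of Keynes's "finite probability" test applied to an
# interval-valued probability: the probability is finite if it exceeds
# some numerical standard probability.
def is_finite_probability(lower_bound: float, standard: float) -> bool:
    """True when the (possibly non-numerical) probability provably
    exceeds the numerical standard probability, 0 < standard < 1."""
    if not 0.0 < standard < 1.0:
        raise ValueError("standard must be a numerical probability in (0, 1)")
    return lower_bound > standard

# A probability known only to lie in [0.2, 0.7] is still finite, because
# it exceeds the numerical standard 1/10; a purely ordinal "degree" with
# no numerical content could never pass such a test.
print(is_finite_probability(0.2, 0.1))  # True
```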
Keynes repeats himself in chapter 22: "There is a vagueness, it may be noticed, in the number of instances, which would be required on the above assumptions to establish a given numerical degree of probability, which corresponds to the vagueness in the degree of probability which we do actually attach to inductive conclusions. We assume that the necessary number of instances is finite, but we do not know what the number is. We know that the probability of a well-established induction is great, but, when we are asked to name its degree, we cannot. Common sense tells us that some inductive arguments are stronger than others, and that some are very strong. But how much stronger or how strong we cannot express. The probability of an induction is only numerically definite when we are able to make definite assumptions about the number of independent equi-probable influences at work.
Otherwise, it is non-numerical, though bearing relations of greater and less to numerical probabilities according to the approximate limits within which our assumption as to the possible number of these causes lies" (Keynes, 1921, p. 259). Keynes elaborates further: "…methods of empirical proof, by which we strengthen the probability of our conclusions, are not at all dissimilar, when we apply them to the discovery of formal truth, and when we apply them to the discovery of the laws which relate material objects, and that they may possibly prove useful even in the case of metaphysics; but that the initial probability which we strengthen by these means is differently obtained in each class of problem. In logic it arises out of the postulate that apparent self-evidence invests what seems self-evident with some degree of probability; and in physical science, out of the postulate that there is a limitation to the amount of independent variety amongst the qualities of material objects. But both in logic and in physical science we may wish to consider hypotheses which it is not possible to invest with any à priori probability and which we entertain solely on account of the known truth of many of their consequences. An axiom which has no self-evidence, but which it seems necessary to combine with other axioms which are self-evident in order to deduce the generally accepted body of formal truth, stands in this category. A scientific entity, such as the ether or the electron, whose qualities have never been observed but whose existence we postulate for purposes of explanation, stands in it also. If the analysis of Part III. is correct, we can never attribute a finite probability to the truth of such axioms or to the existence of such scientific entities, however many of their consequences we find to be true. They may be convenient hypotheses, because, if we confine ourselves to certain classes of their consequences, we are not likely to be led into error; but they stand, nevertheless, in a position altogether different from that of such generalizations as we have reason to invest with an initial probability" (Keynes, 1921, pp. 299-300; bold face added).
Keynes's footnote 1 adds the following point, namely that ordinal probability cannot be applied: "I am assuming that there is no argument, arising either from self-evidence or analogy, in addition to the argument arising from the truth of their consequences, in favour of the truth of such axioms or the existence of such objects; but I daresay that this may not certainly be the case. The reader may be reminded also that, when I deny a finite probability this is not the same thing as to affirm that the probability is infinitely small. I mean simply that it is not greater than some numerically measurable probability" (Keynes, 1921, pp. 299-300; bold face added).
The reader should note again that this passage makes absolutely no sense if Keynes's probabilities are ordinal, since finite probabilities must apply to both numerical and non-numerical interval-valued probabilities. Consider chapter 26, which was viewed as being very important by both Edgeworth and Russell, since the discussions in chapter 26 required a theory of inexact measurement to have been already presented in an earlier part of the book (Part II), according to Russell (1922): "In Chapter III. of Part I. I have argued that only in a strictly limited class of cases are degrees of probability numerically measurable. It follows from this that the 'mathematical expectations' of goods or advantages are not always numerically measurable; and hence, that even if a meaning can be given to the sum of a series of non-numerical 'mathematical expectations,' not every pair of such sums are numerically comparable in respect of more and less. Thus even if we know the degree of advantage which might be obtained from each of a series of alternative courses of actions and know also the probability in each case of obtaining the advantage in question, it is not always possible by a mere process of arithmetic to determine which of the alternatives ought to be chosen. If, therefore, the question of right action is under all circumstances a determinate problem, it must be in virtue of an intuitive judgment directed to the situation as a whole, and not in virtue of an arithmetical deduction derived from a series of separate judgments directed to the individual alternatives each treated in isolation. We must accept the conclusion that, if one good is greater than another, but the probability of attaining the first less than that of attaining the second, the question of which it is our duty to pursue may be indeterminate, unless we suppose it to be within our power to make direct quantitative judgments of probability and goodness jointly. It may be remarked, further, that the difficulty exists, whether the numerical indeterminateness of the probability is intrinsic or whether its numerical value is, as it is according to the Frequency Theory and most other theories, simply unknown" (Keynes, 1921, p. 312).
Keynes has just ruled out ordinal probability due to intervals overlapping with one another, meaning that "…not every pair of such sums are numerically comparable in respect of more and less." It has been known for centuries that it is mathematically impossible to add (sum), subtract, divide, or multiply ordinal probabilities. Keynes's comment can only be applied to overlapping interval-valued probabilities. It is not possible to understand Part IV of the TP if Keynes's theory of probability is an ordinal one, since finite probabilities can't possibly be defined. Indeterminate probability means for Keynes, as it meant for Boole, that the addition of more relevant evidence is not going to result in a narrowing of the wide gap that exists between the lower and upper probabilities.
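The logical point can be restated computationally: comparison of intervals is only a partial order. The sketch below (the author's illustration, with invented example intervals) returns a verdict only when the intervals are disjoint; overlapping intervals are incomparable "in respect of more and less", which is exactly what an ordinal (totally ordered) theory cannot accommodate.

```python
# Interval comparison is a partial order: overlapping intervals admit
# no comparison of more and less, so they cannot be ranked ordinally.
from typing import Optional, Tuple

Interval = Tuple[float, float]  # (lower, upper) bounds on a probability

def compare(a: Interval, b: Interval) -> Optional[str]:
    """Return 'less' or 'greater' for disjoint intervals, and None when
    the intervals overlap and no ranking is possible."""
    if a[1] < b[0]:
        return "less"
    if b[1] < a[0]:
        return "greater"
    return None  # overlap: numerically incomparable

print(compare((0.1, 0.3), (0.5, 0.8)))  # 'less' (disjoint, comparable)
print(compare((0.2, 0.6), (0.4, 0.9)))  # None   (overlap, incomparable)
```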

Keynes's Method of Inexact Measurement and Approximation in Part V of the TP
Keynes repeatedly refers to indirect and inexact measurement using intervals all through Part V of the TP: "The 'Stability of Statistical Frequencies' would be a much better name for it. The former suggests, as perhaps Poisson intended to suggest, but what is certainly false, that every class of event shows statistical regularity of occurrence if only one takes a sufficient number of instances of it. It also encourages the method of procedure, by which it is thought legitimate to take any observed degree of frequency or association, which is shown in a fairly numerous set of statistics, and to assume with insufficient investigation that, because the statistics are numerous, the observed degree of frequency is therefore stable. Observation shows that some statistical frequencies are, within narrower or wider limits, stable. But stable frequencies are not very common, and cannot be assumed lightly. The gradual discovery, that there are certain classes of phenomena, in which, though it is impossible to predict what will happen in each individual case, there is nevertheless a regularity of occurrence if the phenomena be considered together in successive sets" (Keynes, 1921, p. 336). The passages below underscore the fact that Tinbergen was committed to exact measurement using precise probabilities, while Keynes was committed to inexact measurement using imprecise probability based on his approximation technique from chapters 15-17 of the TP, or on methods like those of the statistician Yule (a numerical sketch of two of Keynes's mathematical points follows this list):
• Beginning with Bernoulli's Theorem, we will consider the various solutions of this problem which have been propounded and endeavour to determine the proper limits within which each method has validity (Keynes, 1921, pp. 337-338).
• …the value of this probability being calculable by a process of approximation… (Keynes, 1921, p. 338).
• For the second part of the theorem some method of approximation is required… (Keynes, 1921, p. 338).
• It is possible, of course, by more complicated formulae to obtain closer approximations than this.* But there is an objection, which can be raised to this approximation, quite distinct from the fact that it does not furnish a result correct to as many places of decimals as it might. This is, that the approximation is independent of the sign of h, whereas the original expression is not thus independent. That is to say, the approximation implies a symmetrical distribution for different values of h about the value for h = 0; while the expression under approximation is unsymmetrical. It is easily seen that this want of symmetry is appreciable unless mpq is large. We ought, therefore, to have laid it down as a condition of our approximation, not only that m must be large, but also that mpq must be large. Unlike most of my criticisms, this is a mathematical, rather than a logical, point (Keynes, 1921, pp. 338-339). (This asymmetry is illustrated numerically in the sketch after this list.)
• This "fiction" will do no harm so long as it is remembered that we are now dealing with a particular kind of approximation…the probability* that the number of occurrences will lie between… (Keynes, 1921, p. 339).
• This same expression measures the probability that the proportion of occurrences will lie between… (Keynes, 1921, pp. 339-340).
• The probability that the proportion of occurrences will lie between given limits varies with the magnitude of the square root of 2pq/m, and this expression is sometimes used, therefore, to measure the 'precision' of the series. Given the à priori probabilities, the precision varies inversely with the square root of the number of instances. (Keynes, 1921, p. 340). (Also illustrated in the sketch after this list.)
• Such a condition is very seldom fulfilled. If our initial probability is partly founded upon experience, it is clear that it is liable to modification in the light of further experience. It is, in fact, difficult to give a concrete instance of a case in which the conditions for the application of Bernoulli's Theorem are completely fulfilled. At the best we are dealing in practice with a good approximation, and can assert that no realised series of moderate length can much affect our initial probability… For this is an approximate formula which requires for its validity that the series should be long; whilst it is precisely in this event, as we have seen above, that the use of Bernoulli's Theorem is more than usually likely to be illegitimate (Keynes, 1921, p. 342).
• …the probability that the number of occurrences m of the event in the s trials will lie between the limits sp ± l is given by… The probability that the number of occurrences of the event will lie between sp ± γk√s is given by… (Keynes, 1921, p. 345).
• "It seems in plain opposition to good sense that on such evidence we should be able with practical certainty… to estimate the number of female births within such narrow limits. And we see that the conditions laid down in § 11 have been flagrantly neglected…" (Keynes, 1921, p. 352).
• "Leibniz's reply goes to the root of the difficulty. The calculation of probabilities is of the utmost value, he says, but in statistical inquiries there is need not so much of mathematical subtlety as of a precise statement of all the circumstances. The possible contingencies are too numerous to be covered by a finite number of experiments, and exact calculation is, therefore, out of the question. Although nature has her habits, due to the recurrence of causes, they are general, not invariable. Yet empirical calculation, although it is inexact, may be adequate in affairs of practice" (Keynes, 1921, p. 368).
• "In dealing with the correspondence of Leibniz and Bernoulli, I have not been mainly influenced by the historical interest of it. The view of Leibniz, dwelling mainly on considerations of analogy, and demanding "not so much mathematical subtlety as a precise statement of all the circumstances", is, substantially, the view which will be supported in the following chapters. The desire of Bernoulli for an exact formula, which would derive from the numerical frequency of the experimental results a numerical measure of their probability, preludes the exact formulas of later and less cautious mathematicians, which will be examined immediately" (Keynes, 1921, p. 369).
• "They showed, that is to say, that certain observed series of events would have been very improbable, if we had supposed independence between some two factors or if some occurrence had been assumed to be as likely as not, and they inferred from this that there was in fact a measure of dependence or that the occurrence had probability in its favour. But they did not endeavour to pass from the observed frequency of occurrence to an exact measure of the probability. With the advent of Laplace more ambitious methods took the field" (Keynes, 1921, pp. 369-370).
• "Thus, given the frequency of occurrence in μ trials, these writers infer the probability of occurrence at subsequent trials within certain limits, just as, given the à priori probability, Bernoulli's Theorem is employed in circumstances of greater or less validity" (Keynes, 1921, pp. 370-371).
• "What, in the first place, does Laplace mean by an unknown probability? He does not mean a probability, whose value is in fact unknown to us, because we are unable to draw conclusions which could be drawn from the data; and he seems to apply the term to any probability whose value, according to the argument of Chapter III., is numerically indeterminate. Thus he assumes that every probability has a numerical value and that, in those cases where there seems to be no numerical value, this value is not non-existent but unknown; and he proceeds to argue that where the numerical value is unknown, or as I should say where there is no such value, every value between 0 and 1 is equally probable. With the possible interpretations of the term "unknown probability", and with the theory that every probability can be measured by one of the real numbers between 0 and 1, I have dealt, as carefully as I can, in Chapter III. If the view taken there is correct, Laplace's theory breaks down immediately. But even if we were to answer these questions, not as they have been answered in Chapter III., but in a manner favourable to Laplace's theory, it remains doubtful whether we could legitimately attribute a value to the probability of an unknown probability's having such and such a value. If a probability is unknown, surely the probability, relative to the same evidence, that this probability has a given value, is also unknown; and we are involved in an infinite regress" (Keynes, 1921, p. 373).
Of course, Keynes's approximation approach uses two of the real numbers between 0 and 1. Chapter III is Keynes's introduction to his critique of exact and precise probability measurement, with two exceptions (the Principle of Indifference is applicable; the statistical data satisfy the Lexis-Q test for stability of frequency data). Chapters 10-17 of Part II contain the many details of his critique.
• "Nobody supposes that we can measure exactly the probability of an induction. Yet many persons seem to believe that in the weaker and much more difficult type of argument, where the association under examination has been in our experience, not invariable, but merely in a certain proportion, we can attribute a definite measure to our future expectations and can claim practical certainty for the results of predictions which lie within relatively narrow limits. Coolly considered, this is a preposterous claim, which would have been universally rejected long ago, if those who made it had not so successfully concealed themselves from the eyes of common sense in a maze of mathematics" (Keynes, 1921, pp. 388-389).
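Two of the mathematical points quoted above can be checked numerically. The sketch below is the author's illustration (the parameter values are invented): the first loop shows that the exact binomial term is asymmetric in h while the de Moivre-Laplace normal approximation is necessarily symmetric, so the approximation misleads unless mpq is large; the second loop shows Keynes's 'precision', the square root of 2pq/m, shrinking only with the square root of the number of instances.

```python
import math

def binomial_pmf(m: int, p: float, k: int) -> float:
    """Exact probability of k occurrences in m independent trials."""
    return math.comb(m, k) * p**k * (1 - p) ** (m - k)

def normal_approx(m: int, p: float, k: int) -> float:
    """De Moivre-Laplace approximation: depends on h = k - mp only
    through h**2, hence is symmetric about h = 0."""
    q = 1 - p
    h = k - m * p
    return math.exp(-h * h / (2 * m * p * q)) / math.sqrt(2 * math.pi * m * p * q)

m, p = 20, 0.1         # mpq = 1.8 is small, so the asymmetry is appreciable
mean = int(m * p)      # = 2
for h in (1, 2):
    exact_hi = binomial_pmf(m, p, mean + h)
    exact_lo = binomial_pmf(m, p, mean - h)
    approx = normal_approx(m, p, mean + h)  # identical value at mean - h
    print(f"h={h}: exact {exact_hi:.4f} vs {exact_lo:.4f}; normal {approx:.4f} either way")

# Keynes's 'precision' of the series: halving the width of the interval
# within which the proportion of occurrences lies requires roughly four
# times as many instances.
for n in (100, 400, 1600):
    print(n, round(math.sqrt(2 * p * (1 - p) / n), 4))
```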
It can be concluded from the extensive material covered in this section that Keynes's view of measurement was that exact and precise probability measurement and assessment are, in general, not possible in macroeconomics, whereas inexact and imprecise probability measurement and assessment are, in general, possible. While exact measurement may be feasible in the physical and life sciences, it is doubtful in most social sciences, liberal arts, and behavioral sciences, and especially in most areas of economics, finance, and business. The exception would be studies of consumer consumption spending and business inventory demand, which are highly stable over the short run (1-5 years). Irving Fisher's belief that there was an analogy between a particle and an individual (Fisher, 1925, p. 85), so that a particle in mechanics corresponds to an individual in economics, only holds under the dubious claim that the sum of the parts is the whole, so that no interactions occur among the parts, as in a pizza that has been cut into 8 slices. The analogy breaks down completely once it is realized that a particle is an inanimate object while an individual is an animate one.

Discussion
Consider the following exchange between Tinbergen and two interviewers in 1987: "From Keynes's criticisms of your League of Nations first report, it seems fairly clear that he knew very little of the developments in econometrics over the 1920's and 1930's…" (Magnus & Morgan, 1987).
Tinbergen's response to the comment by Magnus and Morgan was that "Indeed, I did feel that, at least on certain points, he was badly informed. It was a bit strange to me because he had written the Treatise on Probability, so I thought he was somewhat familiar with statistics" (Magnus & Morgan, 1987, p. 129). Tinbergen's decision to use a multivariate normal distribution, or log normal, in his regression analysis meant that he had to treat the problem of technological innovation, advance, change, and obsolescence as part of the residuals, or as some lagged function of past views, even though it is the major factor causing changes in the composition of durable, physical, capital producer (investment) goods over the business cycle, as well as the main focal point of concern of businessmen in forming forward-looking, long-run business expectations. There are a few econometricians who have grasped Keynes's point, even if they are not completely familiar with the inexact approach to measurement advocated by Keynes in both the TP and the GT: "The empirical content of economic theories is therefore systematically lower than those from physical theory. Testing an economic theory in quantitative form requires the introduction of all sorts of ad hoc statistical or econometric modelling assumptions so as to arrive at a fully specified empirical model (Blommestein, 1985). This ad hoc nature of economic model building generates a significant degree of specification uncertainty. For example, as noted above, the empirical pricing models for structured products such as CDOs and CDSs is hampered by a considerable degree of specification uncertainty.
Semantically insufficient theories make it therefore very hard to formulate reliable empirical models. In other words, the big problem with economic theories is not that they are too simplistic or that so-called "unrealistic" assumptions are being used, but it is their semantical insufficiency (low degree of testability) … [together with the standard assumptions] that the errors are IID (Independently and Identically Distributed) and that functions and parameters are LCC (Linear with Constant Coefficients). Tellingly, DF refer to them as "articles of faith" that had (and continue to have) a considerable influence on the applied literature" (Blommestein, 2009, p. 3). "However, economics and physics are qualitatively different enterprises, and always will be. Economics or finance as a social science differs in at least two important respects from the physical sciences. First, physical theories often formulate scientific predictions in the form of individual events, while predictions in economic theories (and other social sciences) are usually specified as patterns of a certain kind or type (Hayek, 1967; Blommestein, 1985, pp. 86-90). Second, theories of economic behaviour need to take into account reflexivity, the human capability for self-reference. Self-referential calculations have major (complicating) implications for economic explanations and predictions. For example, financial asset markets have a reflexive nature in the sense that prices are generated by the expectations of traders. But the latter expectations are formed on the basis of the anticipation of others' expectations. This implication precludes the formation of expectations using deductive rules (Spear, 1989). In a fundamental way, economic issues (and social science topics in general) are harder and far more …" (Blommestein, 2009, pp. 3-4).
And "Against the backdrop of this analysis, bringing together the insights from different parts of economics such as macroeconomics and financial economics is not necessarily the best response to the recent wave of criticism of academic finance. The end result [of this intended merger of theories] would still not go beyond disguised mathematics or sets of tautological structures. For example, the underlying behavioural paradigm of macroeconomic/macro finance models is that economic agents have rational expectations, meaning that the "representative" agent is at all times and everywhere completely informed, while fully understanding the complexities of the world (De Grauwe, 2009). This would imply that bankers and their clients at all times fully understand the complexities of the new financial landscape. It is difficult to reject the impression that these behavioural assumptions have been introduced to facilitate the underlying math and the formulation of quantitative models with closed-form solutions (Blommestein, 2009, pp. 5-6). Thus, Blommestein's work, along with that of D.
Freedman (2009), E Leamer (1983), and H. Keuzenkamp (2000),if it became dominant in econometric practice ,would go along way toward satisfying Keynes's original objections about the use of econometrics in attempting to quantify business cycle research over time.

Conclusions
The main reason for the completely divergent views about measurement expressed by Keynes and Tinbergen in their 1938-1940 exchanges in the Economic Journal was their completely different technical backgrounds and their opposed positions on measurement as either exact or inexact.
Keynes's background was in logic, mathematics and philosophy. Tinbergen's background was in physics. There simply was no common ground between the two academics.