I will be using this site to post progress in making Horn and Twynam family documents accessible to the extended family in a durable, reliable and balanced form. I invite all whose interests intersect with this to contribute their own perspectives, results and corrections.

Where the family stories can be of wider public significance, the site can serve to launch well-grounded publication in whatever format serves this interest.

The project has been motivated by finding myself in custody of papers, objects and correspondence bundles assembled and carefully stowed by far-sighted relatives who wished that their story, and that of their origins, be passed on to new generations.


State Spaces and the wide open spaces

Speaker: Peter Caley (Data61, CSIRO)

Topic: Some issues of inference in abundance trends for wide-ranging wildlife species

Highly mobile species that form aggregations can present special challenges for inferring trends in abundance where aggregations are spatially sparse, highly localised and sometimes transient in nature. Drawing from Australian examples of waterbirds, flying foxes and cockatoos, this talk explores how practitioners have grappled with some of these issues, and appeals to the statistical brains trust to get involved.

Peter Caley is a research scientist with CSIRO Data61. He has a background in applying quantitative methods for addressing contemporary problems in the environmental sciences. Topics have included wildlife & human disease epidemiology, vertebrate pest ecology & management, plant & insect biosecurity and extinction inference.

In the lead-up to the next Australian Statistics Conference (ASC2018, Melbourne, 26–29 August) I have been refreshing my reading around surveys, state spaces and inference. Peter Caley has been using a state-space approach to measure rates of decline or growth in populations of wild waterbirds within the Murray-Darling and Lake Eyre catchments, based on 30 years of random transect surveying. Those thirty years have been punctuated by two major flooding events – in the early 70s and 80s – so the challenge is to filter out a message on the ecological health of inland Australia from expert observations within a fixed collection design: effectively repeated surveys. As birds move readily in response to climate variation, and do not necessarily stay within one catchment, there is much scope for leakage. There has nonetheless been some success in devising robust inference by using the strengths of this ‘observational experimental’ scheme: the ability to split numbers by species, with separate counts for the 50 or so waterbirds encountered; a consistent method of collection (small single-wing planes able to fly along transects at low altitude); and a now-accumulated 30-year data series split between two comparable regions with contrasting climate histories. His conclusions were phrased in terms of the individual species – whether or not they were in decline – and after state-space correction he could say that there was not a case for ‘steep decline’ across the board; while it is possible to rank species on vulnerability to decline, on the whole most species were holding their own. I wonder if it may be possible to robustify this commentary further by using a multivariate filter and a new diversity index. This may be a better measure if species presence were weighted as much as breeding numbers.
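As a concrete if much-simplified illustration of the state-space idea, the sketch below fits a local-level model (a random-walk latent state observed with noise) to a simulated 30-year log-abundance series using a Kalman filter. All numbers, including the noise variances `q` and `r`, are invented for the example and have no connection to Caley’s actual analysis.

```python
import numpy as np

def kalman_local_level(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter for a local-level state-space model:
        state:       x[t] = x[t-1] + w_t,  w_t ~ N(0, q)
        observation: y[t] = x[t]   + v_t,  v_t ~ N(0, r)
    Returns filtered state means and variances."""
    n = len(y)
    m = np.empty(n)
    p = np.empty(n)
    m_prev, p_prev = m0, p0
    for t in range(n):
        m_pred, p_pred = m_prev, p_prev + q      # predict step
        k = p_pred / (p_pred + r)                # Kalman gain
        m[t] = m_pred + k * (y[t] - m_pred)      # update with observation y[t]
        p[t] = (1 - k) * p_pred
        m_prev, p_prev = m[t], p[t]
    return m, p

# Simulated 30-year series: latent log-abundance drifts as a random
# walk; aerial counts observe it with substantial noise.
rng = np.random.default_rng(0)
true_state = 5.0 + np.cumsum(rng.normal(0.0, 0.1, size=30))
counts = true_state + rng.normal(0.0, 0.5, size=30)
m, p = kalman_local_level(counts, q=0.01, r=0.25)
```

The filtered series `m` is the model’s best running estimate of the latent abundance; smoothing over both flood spikes and observation error is what lets a trend statement survive noisy counts.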
Routine or cyclic environmental changes will advantage some species at the expense of others; long-term trends connected with global phenomena can affect all species adversely, both in numbers and in support for diversity, with the most vulnerable disappearing first. A further opening may be in counting ‘roosts’, that is, congregations of birds using a water resource for feeding, breeding, refuge or layover. Roost numbers may be detectable at a distance, even remotely, or by on-site observation without disturbing the birds for the purpose of counting. Roost behaviour may be easier to monitor over time, and there may be a way of establishing enduring identification.
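To make the presence-versus-numbers idea floated above concrete, here is a minimal sketch of a Shannon diversity index alongside a presence-weighted variant that shrinks each species’ count toward bare presence. The species names, counts and the blending weight `alpha` are all hypothetical inventions for the sketch.

```python
import math
from collections import Counter

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c > 0)

def presence_weighted_index(counts, alpha):
    """Shrink each observed species' count toward 1 (mere presence)
    by weight alpha, then take the Shannon index of the blend."""
    blended = {s: alpha * 1.0 + (1.0 - alpha) * c
               for s, c in counts.items() if c > 0}
    return shannon_index(blended)

# Hypothetical single-survey counts for four waterbird species.
survey = Counter({"pelican": 120, "ibis": 40, "teal": 3, "egret": 1})
h_raw = shannon_index(survey)
h_weighted = presence_weighted_index(survey, alpha=0.8)
```

Because shrinking toward presence evens out the proportions, the weighted index comes out higher for an uneven flock like this one: rare species count for nearly as much as abundant ones.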

Peter’s talk included the fate of the white-necked ibis – ubiquitous in the cities as a bin scavenger. I have heard a talk on its cousin, the straw-necked ibis of the plains, where radio-tracking of individual birds demonstrated how widely they move and their ability to find their way to surface water over long distances. It is also apparent that flocks and individuals give different profiles. This dynamic structure in the populations of free-moving creatures is interesting in itself, and would inform any design; it can interfere with inference in a common design like Caley’s, but may also be a useful correlate for the overall diversity question. No one pretends that the system is easily abstracted, making a state space a good choice of inference model.

It strikes me that there is much to be gained by working in step with the scientists monitoring the health of our natural systems subject to global climate challenge. This can only increase confidence in what we are attempting to sustain.

A little night music

I saw seven girls in saris
Move up our street; How sweet

Ignore the four islander boys
outdoor arm chair guffaws, to-all.

On the hill crest clouds
Explode in rose and mauve

Pass by squinting through glass
ordinary folk engulfed

In their shadowy flickering,
Made to fit, capsules for living

We walk out into the night,
Dogs in tow, follow that same path

Up and down not seeing,
Not good at seeing, trite.

Do Australians care about science?

Well, Canadians do (apparently); and they care more than just about anyone else. See the release posted today at http://cnw.ca/He1J3, reporting on a study commissioned by the Council of Canadian Academies.

The Council – “an independent, not-for-profit organization that began operation in 2005. [It] supports evidence-based, expert assessments to inform public policy development in Canada. Assessments are conducted by independent, multidisciplinary panels of experts from across Canada and abroad. [Its] blue-ribbon panels serve free of charge and many are Fellows of the Council’s Member Academies: the Royal Society of Canada; the Canadian Academy of Engineering; and the Canadian Academy of Health Sciences. The Council’s vision is to be a trusted voice for science in the public interest.”

The manner of summarising bears some examination, but that aside, the message it conveys is worthy of reflection, given that its Australian counterpart has this month embarked on a new exercise to measure the dollar contribution of the core sciences to the Australian economy.

The Australian Chief Scientist has asked the Australian Academy of Science to devise an Australian version of a recently released report into the contribution of the mathematical sciences to the UK economy (conducted for the Royal Society by the accounting firm Deloitte). The AAS has in turn commissioned the Centre for International Economics to undertake this assignment for the core Australian science disciplines – interpreted as mathematics, physics, chemistry and earth sciences.

It is the inverse question to that posed in Canada: not whether people rank science as important in their world view, and whether they are equipped to understand the scientific ramifications of public policy, but rather how much of our present prosperity we owe to ‘new’ science, and so the value of science as an activity or infrastructure in the policy field.

In other words, why it is perhaps dangerous for the public to be unconcerned about science done in its name, or to support activities impinging on social cohesion and the ordinary enjoyment of life; why disinvestment in science as a nationally recognised activity – in secondary and tertiary education, in public and private research – may be more than a little dangerous.

The most direct way to make this argument is to map current economic activity in relation to its reliance on a science base. The absence of that base can then be seen to come at a measurable cost that can be weighed against other calls on government.

It could well be that Australians have unknowingly been enjoying the benefits of past investment in science, but have been excluded from, or left behind in, the scientific literacy needed to secure that investment, leaving us exposed and languishing – as the Canadian report, by extension, indicates.

And what about statistics?

Should not our national agencies be monitoring not only the financial and material investment in science – the knowledge economy – but the human investment as well? The value – moral as much as ethical or monetary – of knowing about the world and our place in it. How can people and their representatives be aware of the role of science in their lives, and the role of scientific evidence in decisions, without an objective official statistical frame?

The Office of the Chief Scientist may stand in for a “Council of Academies” that can supply briefs to the political process on the implications of foundational research (and why this is critical to the future – industrial and social). Certainly the present initiative could be said to accord with the role that the CCA has mapped out for itself in Canada. Neither, however, will be effective without expert guidance in the planning, collection, extraction, accumulation, interpretation, presentation and use of official statistics.

But official statistics is here to be interpreted more broadly than usual: clearly agency collections are hostage to a conservative interpretation, as set out in legislation and circumscribed by a hierarchy of users and a diminishing budget.

What is needed is official statistics embraced as an area of expertise, of objective and constructive advice, working with public-interest organisations in parallel with the portfolio responsibilities of government, but not limited to priorities set by government; instead addressing the domain of public policy, in the sense used by the CCA.

Furthermore, representatives of the discipline of official statistics can act as (and be seen as) a ‘disinterested party’ alongside the core professions, the institutions of organised science, and the enthusiastic advocacy of citizens for policy enlightened by good research, whether local or global – not coloured by the wishful thinking or distorted lenses applied to partial data that are typical under the constraints of public advocacy.

Your ideas?

Stephen Horn       

The omics of Official Statistics

Professor Terry Speed’s AMSI-SSAI Lecture today at the Knibbs theatre provokes the following reflection.

Nuisances crowd out the signal – this is as true in genomics (or any of the bioinformatic omics spawned therefrom – proteomics, metabolomics, transcriptomics) as it is in modern official statistics, handmaiden to policy and socio-econometric modelling.

Nuisance, however, deserves attention. In an ideal world all data provided in statistical returns is simultaneously correct and perfectly recorded and transmitted. Furthermore, the design of this ideal collection is itself perfect: the data collected is sufficient to answer the questions posed by users in their collectivity, without altering the inclination of respondents to cooperate, or altering their behaviour in so doing. That is, the measurement process is dimensionless.

No one pretends that these conditions hold, or even approximately hold.

Instead, the data resulting from the collection effort is conditioned by a quality framework that allows nuisance to recede into the background. Official releases thus come with two crutches: first, formal rules of population inference – what can be inferred, its accuracy (centring on a true value) and its precision (the width of the interval around an estimate containing the true value with certain confidence); and second, adherence to the nuisance-containing practices embodied in the collection operation.

These practices comprise the design. And this explains why official statistics is stubbornly design-based, even as statistics proper has struck out into the protean world of model building and model-based inference.
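The first crutch can be made concrete with the simplest design-based case: a mean estimated from a simple random sample drawn without replacement, with a normal-approximation interval using the finite population correction. The population here is simulated; this is a sketch of the inference rule, not any agency’s actual practice.

```python
import math
import random

def srs_mean_ci(sample, pop_size, z=1.96):
    """Estimate a population mean from a simple random sample drawn
    without replacement, with a 95% z-interval using the finite
    population correction (fpc)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)   # sample variance
    fpc = 1.0 - n / pop_size                               # finite population correction
    se = math.sqrt(fpc * var / n)                          # standard error
    return mean, (mean - z * se, mean + z * se)

# Simulated population of 10,000 units; draw an SRS of 400.
random.seed(1)
population = [random.gauss(50.0, 10.0) for _ in range(10_000)]
sample = random.sample(population, 400)
est, (lo, hi) = srs_mean_ci(sample, len(population))
```

The interval states the precision; the accuracy claim (centring on the true value) rests on the design itself – here, that every unit had an equal chance of selection.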

Both model-based and design-based approaches have been compromised by nuisance effects, despite the loud and redundant appeals to ‘scientific method’ or ‘quality assurance’ respectively. In the one case, data richness (and sample size) and spurious replicability have obscured the real limitations of data acquisition; in the other, the drag induced by quality assurance has required a stability in underlying processes that has patently been compromised in an external context of open data borders.

Can the negative control method elegantly applied to bioinformatics save official statistics too? Or rather if we take nuisance more seriously may we be inspired to find a more solid platform for the presentation of statistics used in public discourse?

To restate the issue slightly differently: how do we extract a consistent, reliable and useful signal of bearing on social governance from a multiplicity of data frames, where the criterion for signal quality (analogous to the deeper scientific truths underpinning bioinformatics, or statistical investigation of physical or chemical phenomena) is encoded in the legislative ethos of government itself?

This not only allows nuisance but assumes it: the act of reducing an uncontrolled flow to a signal under metastatistical protocols (such as pre-existing or circumstantially imposed indicator series, or standards) is the badge of official statistics, best expressed by appeal to design. Certainly it is possible to improve on theory, most transparently by reviewing how deviations from design (for instance, dealing with overlapping discordant collections) build a core assurance mechanism.

It happens that the methods put forward by Professor Speed in bioinformatics, and the discordancy-accepting extension results that can be built from the geometric basis of sampling theory in Paul Knottnerus’s text, play similar roles in their respective contexts. In both cases a fresh appraisal of the context in which statistics is applied has led to results with immediate application as well as great generality.

Knottnerus, P. (2003), Sample Surveys – a Euclidean View, Springer.

Measuring the worth of Mathematics

There is a sense in which every self-directed, extended human activity has an economic value: if nothing else, the opportunity cost of being occupied doing one (more or less productive) thing as opposed to other (less or more productive) things. If the time thus spent is directed to some external project, it figures in the ultimate balance of value that stems from the project, however accounted (cost-plus; capital gain; assurance; demand shift; monetarised policy objective; speculative gain…).

Mathematical training equips one for a class of problems and projects that require abstract thinking (or thinking in the abstract): bridging the conceptual gaps in tackling a new domain, or revisiting a well-travelled domain where new parameters or boundaries apply.

Advancing the corpus of mathematical knowledge is (or should be) the standard against which all subsequent application is measured. This is how the subject is taught: abstractions beget abstractions. It is also the hardest place to claim monetary value. A lifetime in mathematics does not leave such visible monuments; indeed some of the best mathematicians have led short and ignominious lives, yet their work is as central to the concept of the discipline as any public achievement by a Pasteur in biology, a Fermi in physics, or a Davy in chemistry, all of whom can claim to have added, and to continue to add, to economic achievements.

It is necessary to show that the derivative mathematics most of us acquire in our school years is qualitatively different from mathematics as practised in and of itself. It might equally be said that conversance and fluency in the theory of statistics has placed the products of statistical reasoning in the hands of other scientists, indeed of most people working with real and unruly, yet tameable, data sets. Then why still invest in the discipline?

There is a large element of speculation in any investment in the core disciplines, as distinct from support for the governance mechanisms at the core of enterprises, public or private. The existing knowledge base is for many purposes sufficient; its mastery is implied in standard disciplinary training. Managing uncertainty, when expressed at an executive level, reduces to a question of personality: only rarely is it seen as scientific. That scientific authority is contested makes its dismissal easier, and makes the case for investing in the hard disciplines of science tenuous.

Yet it is the creative output of these disciplines – the part most speculative – that yields dividends, that renews the worth of the discipline for the public, and from whence comes its most direct source of authority – external as well as internal.

But is it really such a high-stakes game? A state rests not on force of arms but on its cultural strengths – the well-being of its people; its interpretation of its history and the reconciliation of past and present; its resilience to the uncertainties of nature; and its respect for the process of questioning old knowledge and acquiring new – not as elided into net current productive value, but in another economy: what we need to know collectively about the world in which we are immersed if we are to be truly human.

There is a misconception about science that sees it as universal, as trafficable, as imperial – draftable into one or another enterprise of the state or its proxies in the market. This appears to be a truism, as only such bodies can afford to build the scientific edifice, can align forces towards some goal (the eradication of malaria, sending a man to the moon) – as if science can be engineered.

Of course it can, and natural alliances are obvious when science is providing the knowledge in knowledge-based industry. Unfortunately the power of engineering – encapsulated in the idea of high technology – is too easily mistaken for the standard of worth of the disciplines that have fed it. The culture that allows those disciplines to thrive hinges on a respect for knowledge at large, as well as for those elements of knowledge that contribute to economic progress.

Economies become vulnerable when resting only on the marketable, on what works. Things work, or make a profit, or generate jobs and wealth, only up to the limits of our ability to meet the unforeseen. The unforeseen is what is totally external (or seemingly so), like a GFC or a meteor or a war or an eruption; equally, what has not yet been fully observed (unforeseen effects of a treatment), or properly internalised (adverse effects of fertiliser treatment), or manipulated to give a profit at the expense of competing values (sand mining; drilling the reef; mining Antarctica; cross-contour ploughing). In other words, what has been operationalised on market knowledge, not on a forensic analysis of performance or public answerability for the use of privatised knowledge.

The impacts of economic activity should be as accountable as the productive capacity generated, and it is as much an engineering as a scientific question as to how to design a process that is tuned to its environment.

This leads back to the core disciplines, founded as they are on human experience and aspirations. By bringing together the transformational goal of the activity (‘adding value’) and its transactional implications it may be possible to humanise progress, to the extent of reducing costs and distributing benefits. That we think about ourselves in this fashion is a constant; the way we do is as process-tied as progress in the disciplines concerned: advancing by long periods of quiescent mastery, and short bursts of creative change.

How then do we measure the health of a discipline like mathematics? One way is in the strength of renewal: the quality of teaching; the export of success; the attraction of collaborators; peer recognition (important in a competitive market for talent). Another is in the breadth and sophistication of application – the passage from discovery to problem application, and its reverse: public awareness of the role of the discipline, of skill value in innovation teams, in quality assurance for industrial processes, in the construction of algorithms and software, in the spawning of satellite disciplines – analytics, computer programming, genomics, biometry, actuarial science, evaluation, operations research. In each case the core is not questioned, but the application builds the apparatus for understanding the foundational knowledge in the context of solving a problem or feeding a process.

These two pillars separately define the social and economic worth of the discipline – what the discipline stands for – and prevent it from spiralling into debased obscurity, or pseudo-knowledge. They are the foundation for intervention and authority; they will draw the next generation of trained scientists and consumers of science (the public, in government, among the entrepreneurial class). Both celebratory and performative, they are inextricably linked.

A crude model for economic value (deriving from the state investing in the core disciplines) involves accounting for influence: on students – through direct teaching, textbooks, examination, inspiration, extension; on colleagues – administrative support, collaboration, superstructure; on industrial partners – consultancies, algorithms and software; on the public at large – the cultural element, adding to national coherence and respect for its institutions, attracting collaborative agreements, diplomacy; and on government – advice and policy contributions.

Not all of this can be measured from output through to outcome without the use of models or speculation (or both). Yet all of it provides indicators of health, and can be used to detect deficiencies, costs (opportunity costs), inefficiencies and flow-on effects. This overall health, combined with standardised output measures, will identify the value of the discipline and the sources and fluctuations of that value over time.


Prepared ahead of a two-day meeting of the Australian Academy of Science in the context of a consultancy on the economic gain from national core science investment.

Useful further reading

Stephan, P. E. (1996), The economics of science, Journal of Economic Literature 34(3). http://www.jstor.org/discover/10.2307/2729500

Dasgupta, Partha and David, Paul A. (1994), Toward a new economics of science, Research Policy 23, 487–521.

natureOUTLOOK, Assessing science, lessons from Australia and New Zealand, 24 July 2014/ Vol 511/ Issue No 7510



Aesthetics and topology

This remains a title looking for an argument, at the moment. Those doing mathematics don’t need the garnish of an extra category to place what they do or how it is intended to be received, as would be the case, for instance, in the realisation of a building project, the creation of a film, or a work of art. But the practice of mathematics is governed by a severe aesthetic – from setting or defining the problem within a theory, to searching through heuristics for some way to advance it, to achieving and then refining a solution. The nature of this aesthetic is perhaps best revealed in some of the spectacular failures: the failure to secure a logical foundation for analysis; the ultimate failure of Euclidean geometry to satisfactorily encompass the world of experience; the failure of the program to formalise mathematics logically. In each case the elusive aesthetic drove mathematics into new territory; an unsatisfactory state demanded resolution.

Rather than closing off, however, the result was an opening out to new terrains of abstraction, as well as a striking modernisation of mathematics as a tool for advance into new fields of knowledge or practice. The modern world needed the apparatus made available by nonstandard analysis; by non-Euclidean geometry (a precursor of quantum physics); by the infinite recursions in which meadow programming flourished. The failure of Russell and Whitehead’s program led to spectacular advances in set theory, algebra and number theory. Something similar could be said of the other roadblocks listed.

What is this aesthetic then? A striving but never arriving; a fecundity in what is not yet accomplished, vis-à-vis what is known, what can be demonstrated, what can be mastered. Is it worth spending time on this meta-mathematical whimsy? Can we indeed apply the discipline of metamathematics to better understand what motivates a proof in the first place – what drives discovery in a practice that eschews speculation, that demands extreme deducibility, that seems to call out of the air new limits, new rules?

Part 2

Let us for a moment step back from the aesthetics of doing mathematics – of creating mathematics, seeing mathematics as a performance – and reorient the question towards a mathematical interpretation of sense experience. There is, after all, a mathematics of space relations (geometry) and a mathematics of sound (Fourier series and the functional analysis derived from it) – but a mathematics of taste itself? It seems unlikely, or unproductive. Yet this drive toward some resolution of experience reaches into mathematics: the music of the spheres; the golden mean; the magic of conic sections – a theory elucidated by Pascal; the intriguing fixity of the regular polyhedra and how somehow they influence the distribution of the planets (Kepler).

This search for regularity beyond human agency reassures us that we make sense within some wider narrative, be it one of numbers or shapes or laws not conditioned on the physicality of things and how they come to occupy the forms they do. In some sense the shape, the number or the law was there first. Behind superficially simple things are profoundly simple things. This is the realm of mathematics as an aesthetic medium, a precursor to experience.

Perhaps this search for perfection, be it in form or in time, is indeed universal.

Further Reading

Tom McCarthy (2014) ‘Ulysses and Its Wake’ London Review of Books pp39-41, 19 June 2014

Philip J. Davis & Reuben Hersh, (1980) The Mathematical Experience, Pelican Books

Jason Socrates Bardi (2006) The Calculus Wars – Newton, Leibniz and the Greatest Mathematical Clash of All Time, Thunder’s Mouth Press

GH Hardy (1940) A Mathematician’s Apology, Cambridge University Press. https://archive.org/details/AMathematiciansApology

R. Thom (1970) Topologie et Linguistique, Essays on Topology, Volume dedie a G. de Rham, Springer

Hermann Weyl (1952) Symétrie et mathématique moderne, Flammarion

Limitations of statistics

The purpose of this note is to look at the limits of statistics, using a sampling statistician as illustration and drawing on the opening remarks of the author of a newly published treatment of sampling theory (Singh, 2004).


In the context of setting out the theory of sampling, it is useful to keep in mind the grounds on which it operates. Sampling theory is based on a few simple concepts: populations, variables, sampling units, variability, estimators and the qualities of estimators, sample spaces and Borel sets of probability measures defined on sample spaces. And so on. Armed with these tools it is possible to construct procedures that fulfil the primary purpose of any branch of statistics: to extend understanding of quantifiable but inherently stochastic phenomena. The statistician stands between an expert who has command of a theoretical apparatus, or controls or owns or has a proprietary interest in a data-generating concern, and an experimentalist or field manager who makes controlled and verifiable measures reflecting on the organisational or theoretical construct. How do these measures relate to this construct? If the measures are made of the apparatus itself, ‘without error’, there is no need for intervention. Because constructs and the ability to understand them have parted company, while the imperative of governing remains, the terrain for statistical work, by (let us call them) approximaticians, has opened.


An amusing exercise may be to classify the various branches of statistics by the sociological properties of this A–X–B relationship: who owns the knowledge, who initiates the collection, who controls access to the source, who owns the resultant data, who judges the outcome, and who pays for the exercise. A commands the priors, B the evidence – the posteriors. X improves on the priors using the posteriors. We leave this particular endeavour for another occasion, to look more closely at the idea of statistical knowledge – if it is not a contradiction – as justifying the science of statistics, and distinguishing it from technique pure and simple: to look from the inside out.[1]


Statistics is founded on observations of random phenomena. The randomness is subjective: the immediate observer cannot predict the outcome of any observational episode, though an ultimate observer may. It is assumed, however, that the observation bears on some underlying process about which it is desired to draw some inference.


What are its limits?


a) Statistics does not deal with individual measurements

The randomness may be in the selection of what is to be measured, in the measurement itself, or in what is being measured. Coupled with randomness is incompleteness: we have at best limited access to what is under study. While any quantified study can be framed in these terms, there is a sensible domain restriction based on tolerance: at what point do individual measures, interpreted one at a time, become unreliable in predicting the behaviour of the whole?


This dealing with the collectivity of measurement marks out the domain of statistics. The specificity of measures is likewise left to psychologists, anthropologists, physicists, biologists, lawyers, politicians or policemen. That is not to say that cognitive or thermal or political properties may not be important in the transformations to which statisticians are party. Questionnaire design is informed by how people perceive, interpret and respond to questions; but the ultimate purpose of a survey does not revolve around how individuals respond: it lies in advancing a discrete understanding of the state of the population under study. It is of profound indifference to the statistician whether a respondent is truthful, provided the estimators used are efficient.


b) Statistics deals exclusively with what is quantifiable

Is the world a better place? Is a model correct? Is a result important? For a statistician, only in so far as there is an objective measure attached, and then only in so far as it gives interesting results. Surveys are useless for gauging feelings or preferences unless these are discretised, synchronised and rendered equivalent. This divides the statistics profession from others whose investigations are guided in different ways: for an economist, maximising utility is not something that requires quantification to be implementable; for a medical researcher, manifest cause may lie within a postulated chemical pathway inferred from known mechanisms rather than from observation; for a lawyer, a whole chapter of legal doctrine may flow from a single case. Statistics as a body of reasoning begins with repeated measurements.


Sampling theory institutionalises repetition in schemes for randomising and systematising repeated measures toward some predetermined informational goal. It deals with an economy of collection under design, and an efficiency of estimation drawing on aggregated and repeated measures external to the design and from the experience of collection. Because measures vary (over individuals, time, and circumstances of collection), statistical judgement is required; without repetition there is no variation.


Sampling theory deals with all manner of ways in which this quantification of repeated measures can be formalised into decision procedures for which control over the collection mechanism is retained. Research goals that cannot be translated into a sample design are ipso facto outside its ambit. Thus if the goal is to rid the world of cholera, a sampler would be at a loss; not so an epidemiologist. She would know what to look for to make this fundamental biological conjuncture quantifiable, to make a survey sensible.


c) Statistics results are only true on average

Are they true at all? Statistics as a means of guiding decisions is never true. Truth lies in the relation between the theory (A) and its realisations (B1, B2, B3…). Can we talk about what causes cholera in a way that will lead to actions which, conceived with some certitude, will advance a policy of eradication? Certainly we can use exemplary statistical techniques to show cholera prevalence dropping after we reduce reporting funding, or after we do nothing and populations are wiped out by the disease, but that is through no advancement in knowledge. If we are superior statisticians, and have undergone a thorough training in Bayesian methods, we may well induce positive insight in the scientists who have engaged us.


The truth statistics shares with other branches of mathematics lies in the application of functional (in this case stochastic) relations to actual situations; or rather in the translation of real-world indeterminacy to a logical calculus standing outside the observable world, but with some claim to extend an initial assignment of value to statements which hold in that world. The version of truth involved is ‘stochastic truth’: truth with an element of indeterminacy. Or associative truth, rather than the logically closed, reductive causal truth that empiricists search for: this goes with this more often than not; which direction does the evidence pull in?


Sampling theory is built around a decision model: information to make a particular decision is incomplete for the purposes of the problem; how can what I now know contribute to a ‘good’ decision – not the right one, the one I would have made if exposed to full information and perfect judgement (not to speak of an adequate ethical construction), but the one that makes best use of what I can know, or come to know, using the means at my disposal. In this regard truth is neither here nor there. On average a ‘good’ decision will resemble the ‘right’ decision. That is, in expectation over all possible samples, the data-informed good decision is the right decision – the only decision that could be made consistent with what has been observed.
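This "good on average" claim can be made concrete with a small simulation (my own sketch, with invented numbers). Suppose the ‘right’ decision is to intervene whenever a population mean truly exceeds some threshold; the ‘good’ decision applies the same rule to a sample mean. Any single sample can mislead, but in expectation over repeated samples the good decision tracks the right one.

```python
import random
import statistics

random.seed(7)

# Hypothetical setting: intervene if the population mean exceeds 50.
# The true mean is 52, so the 'right' decision is to intervene.
population = [random.gauss(52, 10) for _ in range(50_000)]

def decide(n=25):
    """The 'good' decision: intervene when the sample mean exceeds 50."""
    return statistics.fmean(random.sample(population, n)) > 50

# Over many possible samples, the data-informed decision agrees with
# the right decision most of the time, even though any one sample may err.
decisions = [decide() for _ in range(2000)]
print(round(statistics.fmean(decisions), 2))
```

The printed fraction is the probability that the good decision coincides with the right one; it rises toward 1 as the sample size grows, which is the sense in which truth, for the decision model, is "neither here nor there" yet recovered on average.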


d) Statistics can be misused or misinterpreted

The reasons statistical data are collected are rarely disinterested. A researcher may seek an argument for advancing or dismissing a theory; a department may wish to target a given population for some action, assistance or retribution. The value of statistical intervention (for an expert or a data analyst) lies in uncovering interpretable patterns in the data. Whether the data can sustain interpretation should be the first concern of a competent statistician. In the absence of such an assurance, statistics invite misuse.


Sampling theory furnishes the context – ‘design’, ‘process quality’, ‘estimation’ – for the assemblage and manipulation of data into statistical form, that is, into functions of sampled data which throw light on the character of the underlying population. Misapplication of statistics results from the disengagement of these design elements from the analytical knowledge pool; or, more commonly, from investing the sampled elements with the qualities of their population counterparts. We interpret the sample not as one realisation of many possible, but as an archetype of the phenomenon under study.
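The "one realisation of many possible" point can be sketched numerically (again my own illustration, with invented figures). A single sample proportion looks like a fixed fact about the population; only by drawing many possible samples does its status as one draw from a whole distribution of realisations become visible.

```python
import random
import statistics

random.seed(3)

# Hypothetical population in which 30% of units carry some attribute.
population = [1] * 3000 + [0] * 7000

# One realisation of a size-50 sample: a single number, easily
# mistaken for an archetype of the population.
one_sample = random.sample(population, 50)
print(statistics.fmean(one_sample))

# ...versus the spread across many possible realisations of the
# same design: the single figure is just one point in this range.
props = [statistics.fmean(random.sample(population, 50)) for _ in range(1000)]
print(min(props), max(props))
```

Reading the first printed figure as the population's value is exactly the investment of the sample with the qualities of its population counterpart that the passage warns against; the second line shows how wide the set of equally legitimate realisations is.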


As an antidote to the worst of this abuse, sampling theory informs how data are to be assembled, processed and interpreted, entirely free of what they ‘mean’. Data inform a researcher or subject analyst to the extent that the collection design faithfully reflects the research design as conveyed to the statistician responsible for it and its implementation. Things go wrong – disastrously for the reputation of otherwise respectable branches of science – when the statistical artifice is mistaken for epistemic fact. Eysenck’s use of quantitative genetics to derive an organic theory comes unstuck because the quantitative givens (regressions, correlations …) are treated as the elements of a theory of heritability, whereas they take their meaning irrevocably within a statistical frame. He mishandles statistics badly in the course of imparting authority to a tendentious opinion. A reappraisal of the evidence, correctly employing the statistical constructs, gives strong grounds for doubt in relation to his primary hypothesis.



Singh, S., Advanced Sampling Theory with Applications, Kluwer, 2004

Matthews, R. A. J., Facts versus Factions: the use and abuse of subjectivity in scientific research, The European Science and Environment Forum, Cambridge, 1998

Velden, M., Vexed Variations, Review of Intelligence, by Hans J. Eysenck, in the Times Literary Supplement, April 16 1999.

Lindley, D. V., Seeing and Doing: the Concept of Causality, International Statistical Review (2000), 70, 2, 191-214

Quaresma Goncalves, S. P., A Brave New World, Q2004, Mainz, May 2004


[1] But for the other side of the picture, see Quaresma: ‘These options [filtering incorrect records or their translations to other codes] must be presented to the statisticians responsible for data production and who should choose which solution to adopt. Data ownership is always respected and ensured, and the data analyst role is only to help and assist statisticians along the process.’

Why performance indicators fail

PIs fail because they succeed! They are designed to separate a normal from an abnormal state in a ‘system under management’. Yet because they are brought into the system – into the way the system is managed – and not merely used to give an outward measure, they reduce or distort the capacity of the system to adapt: perhaps in inconsequential ways, perhaps in a collusive state dependency.

Examples are easy to find, whether in mechanical systems – the malfunctioning probe that overrides normal system adjustment – or in more diffuse systems, such as a financial system running on normal activity but with artificially maintained valuations. In such cases the detachment of the system of measurement from the nature of the system being measured is obvious after the event: shocking perhaps, but there is in fact no guarantee that certitude in the performance measure translates to macroscopic performance – to the quality of governance. Nor that an absence of evidence of performance translates to an absence of performance.

Yet ‘high performance’ is the currency of work contracts, of individuals as of organisations. It is how we judge managers, and how managers regulate their own behaviour. KPIs are the public face of managerial ability: how rewards are determined, how strategic pathways are mapped, and how political programs are framed. The ‘gap’ in public discourse is as real as any moral imperative, and in fact the more to be trusted because it lies outside the moral or intellectual failures of the past. Closing the gap has moral urgency because it has subsumed the debate on responsibility for the past, and the continuing failures of comprehension in the policy frameworks adopted.

The gaps – the contrast in outcomes according to social state, a form of social-state determinism, a relic of class consciousness perhaps, if one were to venture a psychoanalytical interpretation – show up social performance in a large sense. What happens then is beyond any effort or ingenuity in their construction. The ‘gap’ so revealed can be read as a managerial lever: the way to repair policy shortcomings most effectively is to act on the elements of the indicator – reading rates, school performance scores, income poverty levels, crowding and so forth. As such the indicators have the quality of moving things forward while keeping intact the apparatus that led to the gap.

That is the danger. For how are the citizenry, or their body of servants and representatives, to know whether the system is healthy or not? The levers of government function as legislated; proximate effects are manifest.

Enter official statistics. Without fear or favour, it reflects the nation to itself. What is important, and what is simply activity? OS rests on consent – on the authority vested in measures published outside the performance frameworks of government programs – and on privileged access. OS, as expounded by NSOs, labours under a cloud of irrelevance if not illegitimacy, ironically a complement to the fatal success of the KPIs on which much management theory now seems to rely.

My presentation at the forthcoming ASC-IMS conference relates this heuristic to an emerging foundational account of inference within the reality of multiple data sourcing, drawing on the ever-fecund concept of a learning organisation from the engineering literature. It nevertheless remains foundationally and linguistically statistical.