Introduction
As a refresher, this week we present a synoptic account of the largely forgotten scientific revolution at the heart of our own work on change, the revolution we referred to last time.
In the way we described in our eponymous previous post, we ourselves at Change are “contemporaries of the future” in Pauwels’s sense, insofar as we stand astride the present, with one foot planted firmly in the future and the other rooted firmly in the past, that is, in this remarkable tradition of still overlooked past scientific work. That vital, revolutionary scientific work, of inestimable importance to the future of humanity, is in danger of being lost.
Today’s post is excerpted from our very much longer and more wide-ranging three-part article, “Philosophy Without Arguments,” published here in Change last year, which we have frequently had occasion to reference in subsequent posts.
You may find the relatively brief synopsis below useful background for digging deeper and for understanding a little of what actually lies behind much of our own contrarian thinking about Change.
—The Editors
Catching Up with the Past: Looking Back to Find Our Way Forward
The 20th Century witnessed a revolution in ideas justly declared by the redoubtable Gregory Bateson in 1966 to be “the biggest bite out of the fruit of the Tree of Knowledge that mankind has taken in the last 2000 years,” although he added ruefully, “most such bites out of the apple have proved to be rather indigestible.”1 His words, sadly, were increasingly revealed to have been prophetic as the decades rolled on.
Heralded by the Macy Conferences in the late 1940s, this intellectual upheaval, the most important since the 17th Century, built on stunning scientific and philosophical breakthroughs made mainly in Germany in the 1920s, but also in the United States and in Britain, and it progressed most rapidly in the 1950s, ‘60s and early ‘70s, mainly in Britain and the United States.
And then, for a host of purely adventitious reasons, the work began to get lost, even though, here and there, it has once more started to regain a little momentum in the second decade of the 21st Century, particularly in the biological and social sciences, and in perhaps unexpected places, like the University of Tartu (founded 1632) in Estonia.
Strange Bedfellows and a Cornucopia of Problems
Let me begin by giving you a small glimpse of the kind of “revolution in ideas” we’re talking about, and what manner of men and women these 20th Century scientific revolutionaries were.
Here was a scientifically grounded philosophical worldview, exhaustively worked out in all its concrete details—in rigorous scientific theorizing and painstaking empirical research over more than a century, with dramatic technological, practical and clinical applications across a staggering array of fields of science.
Beginning a few years before, during, and in the decade just following the Second World War, some of the greatest thinkers of the 20th Century—a host of more-or-less maverick but highly distinguished scientists working across countless disciplines—were thrown together in the course of their work on various, apparently unrelated and usually practical problems, many or perhaps most of these initially in war work, and often far from their day jobs.
This was work carried out, field by field of research, over—I shudder to think how many—perhaps some hundreds of thousands of man-years of tireless effort by first-rate scientific investigators.
Most were prolific polymaths, including amongst them more than their fair share of former child prodigies. Norbert Wiener graduated from Tufts at age 14. Walter Pitts at age 13 was invited by Bertrand Russell to do a doctorate at Cambridge, or alternatively, Russell said in his letter, if he already had a PhD, to join him on the Faculty there; Russell had no idea that he was corresponding with a schoolboy, as we related in our previous post a fortnight ago.
Some hundreds of those who were most prominent in developing and promulgating the new thinking were among the 20th Century’s most distinguished names within the confines of their own disciplines. They were to number amongst them a dazzling array of Fellows of the Royal Society and a prodigious number of Nobel Laureates and Nominees. For these were intellectual giants in an age of giants. And this work could only have been the work of giants, to be sure.
Despite coming from wildly different scientific disciplines, they worked side-by-side in ever varying multidisciplinary groupings, labouring on the frontiers of their own fields and of Science writ large.
Working together in interdisciplinary teams were the century’s most distinguished mathematicians and linguists, anthropologists and engineers, ethologists and ethnologists, physicists and physiologists, philologists and geneticists, chemists and biochemists, psychologists and zoologists, psychoanalysts and neuroscientists, anatomists and astrophysicists, quantum mechanicists and biosemioticians, mathematical logicians and ecologists, economists and computer scientists, statisticians and physicians, cognitive scientists and epidemiologists, information theorists and psychiatrists, and a bewilderingly diverse host of others.
They published their results prolifically in the most prestigious, peer-reviewed scientific journals, now in one discipline, now in another (once one of them, who later regretted it, invented the peer-review system2), or else published their findings in Nature or in Science or in the host of new journals demanded by the new fields of study they were opening up.
They worked together on the most bewildering array of pure and applied scientific problems: the construction of reliable signal amplifiers from unreliable components, the chemical mediation of homeostasis, the eerie similarities in nerve-firing pattern between epileptic seizures and normal contraction in heart muscle, the mathematics of controlling antiaircraft guns, the phenomena of hypnotic trance and of psychotic hallucinations, fluid mechanics, the communicative behavior of dolphins, nervous system organization, the complex communicative significance of lighting someone’s cigarette, embryogenesis, the selection of army officers, the pseudo-mating rituals of groups of artificial turtles (each with two-neuron “brains”), the sophisticated chemotactic navigational abilities of E. coli, the mechanics of frogs’ visual perception, psychopathology and psychotherapy, the training of guide dogs, the psychology and neurophysiology of laughter and humour, the organization of mental hospitals, communication between Down’s Syndrome children, the psychoanalytic treatment of schizophrenia, the interaction between ventriloquists and their puppets, family dynamics, the development of optical devices for the Apollo space program, the genesis of self-organizing dynamic systems, the word-salad of schizophrenics and the play of otters, along with such miscellaneous curiosities as the movement of circling-arm lawn sprinklers, as well as more central concerns such as the micro-dynamics of sub-cellular and inter-cellular processes and of human interaction-in-context—just to pick a handful of examples, more-or-less at random, of the divers problems on which they worked seamlessly together in multidisciplinary teams to make what were often groundbreaking discoveries.
These scientists from far-flung fields of research worked in surprising interdisciplinary combinations, strange bedfellows indeed. Despite coming from such widely disparate disciplines, and from both sides of the Atlantic, they actually came to know one another well, whether through conferences or correspondence, through universities and journals, babysitting one another’s kids or vacationing together every summer.
Self-styled Revolutionaries
With few exceptions,3 the overwhelming majority became better known for their narrower, disciplinary contributions to their own fields, which only a handful of them saw as being their main contribution to knowledge.
For the investigators I am talking about largely saw themselves and quite explicitly described themselves as part of a disorderly but unified global scientific movement, a movement that was never to acquire an agreed-upon name. They worked frequently, in many cases primarily, across disciplines—together making a scientific revolution on a scale and with a scope not seen since the 18th Century. They knew just how high the stakes were. Their shared rhetoric was in terms of “revolution,” a word they used frequently and without hyperbole.
The work was scattered across Europe and North America, but there were a few epicentres of the work, to be sure, including, amongst many others equally important to this scientific movement: Bell Labs, Manhattan and the University of Hamburg, Germany in the late ’20s and early ’30s; then Cambridge University and MIT; Harvard University and Stanford; the Biological Computer Laboratory at the University of Illinois and the National Physical Laboratory at Teddington, Middlesex; Northfield Military Hospital, Birmingham, England and The Cassel Hospital, Richmond, Surrey; the Malvern Radar Establishments (both of them in fact), Worcestershire, England and Brunel University, London; Chestnut Lodge Sanatorium, Rockville, Maryland and Austen Riggs, Stockbridge, Massachusetts; and perhaps most pivotally, the Macy Conferences and the legendary, London-based Ratio Club, to name some more obvious ones.
The connections between all of these and more get less surprising the deeper one digs into the history and traces and maps out the professional and social networks.
As I mentioned last time, I have been at work for many years now on researching and writing a comprehensive history of this movement from the First World War onwards, which also traces the all-important earlier history in considerable detail, going right back to the opening years of the 1600s, and with a particular emphasis on the physiological sciences in the Germanies throughout the 18th Century and early 19th Century.
In an early, book-length review of this scientific movement in 1956, a sophisticated contemporary observer, Pierre de Latil in Paris, set out the new ideas and heralded them, with great prescience, as making for no less than “a revolution in metaphysics,” taking up where Kant had left off, he said. There was justice in the assessment, and some of Kant’s work had in fact been an important precursor to many of the key ideas. As it turned out, however, the new thinking would eventually knock Kant’s whole metaphysics into a cocked hat.
One or two of these self-styled revolutionaries, such as Norbert Elias, throughout their life’s work quite explicitly took on Kant as their chief adversary, yet they numbered amongst them some of the Century’s most distinguished Kantians and neo-Kantians. Others took aim, as the Newtonians once had, at Descartes. But one way and another, all were convinced that philosophy, let alone science, or technology or medicine, would never be the same again.
So what was the Big Idea?
Well, genuine novelty cannot be briefly summarized. So I can hardly do justice to the answer to that question here.
However, I will do my best to adumbrate a very brief, synoptic view of “the big idea” that emerged from this scientific revolution, trying to convey its broad outlines as accessibly as possible and in terms as non-technical as the subject matter will allow.
This regrettably rough sketch, of necessity presented dogmatically, should at least give you a sense of what the fuss was all about, and how this 20th Century revolution in ideas provided the foundations on which our own scientific research on change was built, along with its practical applications in catalyzing major transformations by minimalist means.
I shall come at the topic initially from left field.
Life as Autonomy—the Abrogation of Causality
When I am driving along the road, my car is buffeted by crosswinds and jigged about by the uneven road surface, yet I stay on the road as it twists and turns. I keep jiggling the steering wheel in tiny, unconscious movements to maintain my perception of the front of my car as centred on its piece of road, between the curb and the central line. That’s all I need to do to stay on the road.
My movements of the steering wheel, of which I am unaware, are simply cancelling out any errors from my selected reference condition, “centred in my lane,” thus cancelling out the effect of any perturbations in real time—before they can even become perturbations, as far as I’m concerned.
This is the fundamental biological phenomenon of control, and it was in this sense that behaviour, whether of human beings or of E. coli, was seen now as the control of perception.4 And from the study of this phenomenon there soon emerged a radically new conception of living organisms.
The behaviour of all living things could no longer be explained as a response to external stimuli or as the effect of internal or external causes or both. It turned out that nature just doesn’t work that way. Behaviour of any kind could be accounted for only in terms of the purposeful cancelling-out of error.
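The bare logic of such a control loop is simple enough to run for yourself. Here is a minimal sketch in Python, in the spirit of Powers’s working models though in no way drawn from his published code: the controller senses only its own perception, never the disturbances, and acts solely to cancel the error between that perception and its reference condition. Every name and number in it is an illustrative assumption.

```python
# A minimal, illustrative sketch of a single perceptual control loop in the
# spirit of the steering example above. All names, gains and disturbance
# magnitudes are assumptions made up for this example.

import random

reference = 0.0   # the desired perception: "centred in my lane" (0 = centre)
perception = 0.0  # the currently perceived lateral position
gain = 0.8        # how strongly error is converted into corrective action

for step in range(200):
    # Crosswinds and bumps push the car about. The controller never senses
    # these disturbances directly; it senses only its own perception.
    disturbance = random.uniform(-0.5, 0.5)

    # Error: the gap between the reference condition and what is perceived.
    error = reference - perception

    # Output: tiny "steering" adjustments that act to cancel the error.
    action = gain * error

    # The perception is shaped jointly by the action and the disturbance;
    # the loop keeps it hovering near the reference despite the buffeting.
    perception += action + disturbance

print(f"final deviation from lane centre: {perception:.3f}")
```

Run repeatedly, the loop keeps the perceived position close to the reference however the disturbances vary, and that error-cancelling is all the “behaviour” there is.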
Effectively, the behaviour of living things at all levels was demonstrated compellingly to consist of taking a selected environmental variable and extracting it from the causal nexus,5 echoing William James’s insight in the late 19th Century, that whereas in the inorganic realm, variable ends were attained by consistent means, in the organic realm consistent ends were attained by variable means.6 Far from being seen as the creature of causes, living things were now recognized as living only insofar as they cancelled out the effects of any would-be causes and so functioned purely autonomously at every level, down to the tiniest organelle.
More importantly, for the first time in the history of science, the formal mechanics of purpose were worked out in excruciating theoretical detail and put to empirical test, and the outright impossibility of any causal model for explaining behaviour was conclusively demonstrated empirically—whether or not anyone outside the field, least of all our benighted philosophical Academy, even bothered to notice!
For it could be demonstrated compellingly, and endlessly reconfirmed to the Nth decimal point, that the entire organic realm consists of a hierarchy of control systems, whose logical structure in turn consists of the cancelling out of any and all causal perturbations, from whatever source.
Nature abrogates causality. It’s how the entire organic realm works. And like the rest of the organic realm, the psychological realm could henceforth only be understood scientifically as not the creature of causes at all but as autonomous and purposeful—the systematic nullifying of any causal influences.7
Perception Out in the World
What is more, at least—but not only—in all higher mammals, behaviour was fundamentally expressive, communicative, purposeful and concerned above all with relationships, as was dramatically demonstrated by Bateson in the case of dolphins.
You can teach dolphins to perform tricks by rewarding them with fish when they randomly do what you’re looking for, so-called “operant conditioning”—that’s classical learning theory 101. But Bateson and his colleagues found a number of oddities. To take just one example, you can also reward dolphins for novelty: every time a dolphin does something new that you’ve never seen it do before, you throw in some fish, and you never reward it for the same trick twice.
But there comes a point, perhaps after fourteen or fifteen training sessions, when the dolphin suddenly “gets it,” has a porpoise’s epiphany, figures out what’s expected, and produces a flurry of novel tricks, including a number of pieces of behaviour never before seen in the species.
There was a lot going on that was not accounted for in the learning theory of the day: Bateson showed that classical learning theory missed most of what was going on with the dolphins. For starters, there was learning about context; there was the overwhelming influence of the trainers’ expectations, and of the trainer’s relationship with the “trainee”; and there was purposeful communication between them—most of it, by definition, non-verbal—about all such matters.
Nor could purpose and meaning be ignored in the understanding of the physiological mechanics of perception, for even at the lowest synaptic levels of visual processing, even in the lowly frog, meaning and action, and especially purpose and context, were demonstrated to play not only an inextricable role, but indeed to take the lead role in all perception. Frogs are on the lookout for bugs because they want to catch and eat them, and they are built to be good at seeing bugs. But frogs don’t deduce bugs from uninterpreted buggy sense data; the eye sends no such data.
Rather, as Horace Barlow had first proposed, the frog’s eye sends its brain news of tasty bugs in the vicinity. In their much-cited watershed 1959 paper, “What the Frog’s Eye Tells the Frog’s Brain,” Lettvin, Maturana, McCulloch and Pitts, drawing on the earlier seminal work of Barlow and on Oliver Selfridge’s work on pattern recognition, would effectively put paid once and for all to a number of classical empiricist dogmas: the myth of the irrelevance of purpose and context to what is perceived, the whole 19th Century conception of a visual field, the myth of abstracting meaning from sense-data, and much else besides, including the very distinction between sensation and perception, for it turned out that it’s all perception, at least at the conscious level.
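By way of a toy caricature only, and making no claim whatever about the actual retinal circuitry Lettvin and his colleagues mapped, the difference between transmitting raw sense data and sending the brain “news of bugs” can be suggested in a few lines of Python; the frames, threshold and function name below are all invented for the illustration.

```python
# A toy caricature (not the actual retinal circuitry) of the difference
# between sending raw sense data and sending "news of bugs": a detector
# that reports only newly appeared small dark spots and otherwise stays
# silent. Frames, threshold and function name are invented for the example.

def bug_events(prev_frame, frame, dark_threshold=0.3):
    """Return 'news of bugs': coordinates where a dark spot has just appeared."""
    events = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            dark_now = value < dark_threshold
            dark_before = prev_frame[y][x] < dark_threshold
            # Report only dark spots that were not there a moment ago,
            # i.e. a small dark object on the move, not the whole scene.
            if dark_now and not dark_before:
                events.append((x, y))
    return events

# Two successive "retinal images": mostly bright sky, one dark spot that moves.
frame_a = [[1.0, 1.0, 1.0],
           [1.0, 0.1, 1.0],
           [1.0, 1.0, 1.0]]
frame_b = [[1.0, 1.0, 1.0],
           [1.0, 1.0, 0.1],
           [1.0, 1.0, 1.0]]

print(bug_events(frame_a, frame_b))  # -> [(2, 1)]: "bug here", and nothing else
```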
This was also the conclusion of the great Hungarian experimental psychologist, neuroscientist and (before emigrating to America) sometime psychoanalyst Béla Julesz, in his pioneering work on vision at Bell Labs published the following year, which confirmed the suggestive findings on frogs from a new and unexpected direction.
Spatial depth was directly perceived rather than constructed or interpreted out of sense data. In fact, near the end of his long life’s work devoted to the biophysics and neurophysiology of vision, Julesz once summed up the philosophical import of his seminal scientific findings by saying that the entire perceptual process is remarkable for being unconscious at every stage, with the very first conscious stage being the readily actionable or verbalizable end-product (“hey, that’s one of those new Ford Thunderbirds!”).
Barlow went further, concluding that human perception itself was a derivative of the verbal, social communication of descriptive aspects of the environment (say, Ford Thunderbirds) that happen to be of local interest to us and our pals. We might aphoristically translate Barlow’s conclusions into contemporary Twitter-speak, by saying that so-called perceptual consciousness is merely an abstraction from all that we tweet—or might tweet if we chose—about our immediate environment.
The neuroscientist Donald MacKay, in a similar vein to Barlow, had earlier characterized conscious perception in terms of patterns of conditional readiness-to-respond-in-action.
Both Barlow and MacKay, incidentally, had taken Wittgenstein’s account of aspect-perception8 as an inspiration for their empirical research, and it provided some of their work’s philosophical foundations. Both of them had studied with Wittgenstein at Cambridge, and his influence on them permeates their work and that of many of our scientific revolutionaries.
Perception is direct and unmediated, inextricable from action; and consciousness, far from being a brain process at all, is no more than an infelicitous, abstract characterization of our publicly verbalizable patterns-of-conditional-readiness-to-respond.
The work by these neuroscientists and their colleagues and students, which continued over two generations, also put paid to the myth of the privacy of consciousness and especially the myth of “internal representations.”
For, what is more, they showed that the brain does not conjure with internal representations of external objects—there are no such representations to be found encoded anywhere in the brain (the myth of internal representations being a more “pompous” and even more wrong-headed version of Locke’s “demonology” of ideas in modern dress, as Gilbert Ryle stingingly characterized it).9
Nor are any such alleged internal representations needed to account scientifically for any known phenomena, it turned out, nor is the nervous system wired that way, or indeed in any way that could conceivably have any room or need for such “representations.” It is remarkable how many pages of philosophy books and journals are wasted, even today, in theorizing about these entirely mythical ‘inventities’, the so-called “mental representations,” under one name or another. There ain’t no such animal.
What is more, as Ryle had already argued on philosophical grounds as early as 1949,10 there simply is no ghostly inner theatre, whether immaterial or neurally instantiated, and at last we could now understand why that is—from a purely scientific point of view—as well. Incidentally, it does philosophy no credit at all to go on writing in 2023 (as it largely does) as if biology had not advanced since the 1890s, or psychology since the 1880s, or physics since the 1860s.
Spontaneous Interaction in Context
Maturana and his colleagues, extending the work with Lettvin et al. in subsequent decades, were to carry these conclusions further still, beyond questions of perception and consciousness, action and cognition, ultimately yielding a thoroughgoing interactional approach to understanding the self-organization of living systems of any kind, in which the interacting elements were—in an important sense—shown to be not really interacting at all from each element’s own point of view, in contrast to the point of view of the observer.
Rather, they were each completely autonomous, purposeful, Leibnizian windowless monads, forming with their environment an inextricable unity—a self-contained micro-universe of the kind presaged in great detail and with great philosophical sophistication by von Uexküll in his influential theory of the Umwelt.11
But more radical conclusions were still to come.
Spontaneity of action, as the neurologist Kurt Goldstein was amongst the earliest of the new thinkers to recognize (by no means the first, but perhaps one of the most influential on the topic), turned out to be an irreducible biological phenomenon, logically prior to mere reaction.
But more to the point, in neurophysiology, indeed in all organic activity, the fully functioning whole could readily be shown to be more elementary than its parts, and the nature, indeed the very identity of the parts themselves could only be understood in the context of the whole. It was only in the context of this particular, functioning idiosyncratic whole that one could even say, in the first place, what the parts themselves were.
Just as for Goldstein the nervous system was not a bunch of neurons strung together, and as each neuron was, on the contrary, merely a node in a living network that functioned only as an integrated whole (with each neuron merely recruited to play its allotted role in responding to, and thus contributing to, the total context at any given moment), so all psychodynamics, in the human sphere, were now to be understood contextually in terms of interactional patterns embedded integrally within the whole interpersonal network in which those patterns live and move and have their being.12
All the same, individuals, though embedded inextricably in patterns of interpersonal interaction, nonetheless remain free agents, responding to their understood, self-defined, contingent situations by electing to do one thing rather than another in relation to their desired outcomes.13 Patterns of interpersonal interaction provide the essential context for the deliberation and choice of actions, within which their own part is exquisitely choreographed unconsciously, but those interactions in no way determine or direct specific, substantive actions.14
The Mental and the Physical: Freedom and the New Science
For these reasons along with a host of others, changes in the mental realm could be shown to account for changes on the physical level but—even in principle—explanation could never work the other way around.
Physical changes could never explain mental changes, with the possible exception of relatively rare cases of physical pathology, when the system is effectively breaking down. MacKay, perhaps best known for his work on the physiology of seeing aspects as we mentioned above, was adamant on this point, while remaining equally insistent that there could never be a change on the mental level without a corresponding change on the physical level.
The properties or states of the physical could never account for the mental, any more than the circuitry of a radio set, they argued, let alone its ‘genetic blueprint’—the code followed by the manufacturers at Zenith, first laid down by the engineers who designed it—could ever account for the rich content of the Third Programme of the BBC.
D. J. Stewart pointed out decades ago15 that the logic of this had already been recognized by Hume in the 18th Century, but that—as a large body of seminal scientific investigation went on to demonstrate—we at last had the hard science to show that it could not possibly work in the other direction, at least not in this particular universe of ours. The mental could account for the physical, but never the other way around.
Free will thus turned out to be real, and worse yet for the determinists, had at last been put on solid empirical grounds. Far from being scientifically inexplicable, free will was from now on scientifically inexpugnable.
Many of the revolutionaries—Kenneth Craik explicitly, and perhaps also Bateson, MacKay, even McCulloch—might have described themselves as hylozoists, seeing life and mind as essential properties of matter, but in no way as a function of the physics of matter, which they took to be largely irrelevant here. Biology, for example, could never again be a matter of following that old pious hope of a recipe, “take Physics, and just add carbon.” Rather, life and mind were a function chiefly of the organization of matter, its design.
Thus most were clearly also, in effect, hylomorphists, regarding all being, indeed any particular entity (on Aristotle’s act-potency scheme) as a unity of form and matter. The concept goes back to Aristotle’s biology, though the term “hylomorphism” itself dates only from the 1920s when once again the notion had an important role to play in the biological sciences, particularly in Central Europe.
While all of these scientific revolutionaries set their faces against all forms of reductionism, determinism, physicalism (in the sense in which the term is used today) and genetic dogmatism, all were arguably in some sense mechanists, albeit of an entirely novel kind—interested neither in the alleged causes of things (what could that now even mean in the universe they had come to realize they inhabited?) nor in the physics of things, which no longer played more than a subsidiary explanatory role outside physics proper.
Rather, they were interested in (let us call it) the dynamics of things, what many of them informally liked to call “the cybernetics of it”—the loops and feedbacks and all that makes any given, idiosyncratic situation tick—in other words, the precise mechanism of the thing, the Machine in the Ghost.
They Were All from Missouri
In seeking to apply rigorous, scientific understanding to every idiosyncratic situation put forward for study, they would unravel its equally idiosyncratic interconnections, in order to grasp the dynamics of what made any given thing tick.
But to do this they didn’t construct models of what they were investigating, for the most part, nor did they abstract byzantine “systems” which could be mapped out on paper, in the manner of the emerging ‘opposing camp’, the systems theorists. For they could show the futility of such exercises for most scientific purposes, except where the real thing was either too dangerous to meddle with or had not yet been built, as in the case of nuclear reactors or suspension bridges, for example.16
If they used models at all, they were working models that could be tested on the bench and their behaviour analyzed. The most notable examples of how this should be done were Ashby’s famous homeostat, which he somehow carted around with him to scientific meetings, and the working models of W. T. Powers and his colleagues, with open source code provided to the rest of the scientific community.
These were not spaghetti-and-meatballs diagrams, pictures, metaphors, useless mid-level abstractions and fine words that don’t butter any parsnips, but carefully programmed artifacts, actual physical pieces of kit, constructed from bits of hardware or run on a computer, whose behaviour could be scientifically studied. From now on, we’re all from Missouri: “you’ve got to show me.”
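To give a flavour of what such a runnable “working model” looks like in miniature, here is a toy Python sketch of the ultrastability idea that Ashby’s homeostat embodied in hardware. It is an illustrative assumption from start to finish, not a reconstruction of Ashby’s device or of Powers’s published models.

```python
# A toy sketch of ultrastability, the idea Ashby's homeostat embodied in
# hardware: keep an essential variable within limits, and reselect the
# unit's own "wiring" at random whenever it strays. Every name, number and
# threshold here is an illustrative assumption, not a reconstruction of
# Ashby's device or of Powers's published models.

import random

LIMIT = 1.0                        # survival limits for the essential variable
essential_variable = 5.0           # starts well outside its limits
feedback = random.uniform(-2, 2)   # current wiring; may or may not stabilize

for step in range(1000):
    # The variable evolves under the current wiring, plus a little noise.
    essential_variable += 0.1 * feedback * essential_variable
    essential_variable += random.uniform(-0.05, 0.05)

    # The "uniselector" step: if the essential variable leaves its limits,
    # the wiring is reselected at random and the variable is pulled back.
    if abs(essential_variable) > LIMIT:
        feedback = random.uniform(-2, 2)
        essential_variable = max(-LIMIT, min(LIMIT, essential_variable))

print(f"settled wiring parameter: {feedback:.2f}")
print(f"essential variable at rest: {essential_variable:.2f}")
```

The point of such a toy is simply that its behaviour can be run, watched and criticized, rather than admired on paper.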
For the most part, from this new perspective, a scientist had to work “directly with pieces of the real world, in a close and delicate manner,” as Stewart put it, and after all, “the best material model of a cat,” Arturo Rosenblueth more than once famously intoned, “is another—or preferably the same—cat.”
Whatever it was they happened to be studying, the aim was to understand what Kenneth Craik liked to call “the go of it,” using the Scots colloquial expression so famously deployed by Clerk Maxwell from earliest childhood when demanding of his parents, “What’s the go of it?”—and if not satisfied with their answer, “but what’s the particular go of it?” And it was always the particular go of a thing that was now to be of chief scientific interest.
It was not that matter and energy had no role at all to play in “the go” of the workings of the cosmos, and hence in the fabric of reality. On the contrary, matter and energy conveniently provided a medium of communication and suitable raw materials to be organized—as passive building blocks—by the flows of information (“differences on the move”), in turn organized, as D. J. Stewart later demonstrated,17 by values and preferences (“imparities on the move”).
Like a hod carrier or a forklift truck, energy could do the heavy lifting and matter could provide the bricks and mortar, but these merely enabled the work of organization to be implemented, the structures to be built and moved, in the context, according to the architectural demands of purpose and reason, custom, intention and design.
Causality hence could no longer play any explanatory role in a scientific understanding of what takes place in the universe, at least once we rise above the crudest level of comparatively trivial physical phenomena and enter the biological realm, let alone the human realm.
For most of the distinct phenomena in the universe were, in effect, immune to causal influence—after all, the universe harbours infinitely more distinct phenomena in Berlin than in Betelgeuse. And even the physicists had abandoned causality over a century ago. Causality was clearly long past its “use by” date.
Negative Explanation and the Primacy of the Local and Idiosyncratic
So what then? If there was no longer any place for the old conceptions of causality, which seemed now to lack any scientific application, and if the laws and concepts of physics were clearly irrelevant to scientific explanation when it came to the overwhelming majority of natural phenomena, then we had to look elsewhere, beyond matter and energy, power and forces, if we were to find any unifying principles running through the fabric of this highly patterned but fundamentally anarchic, non-rule-governed universe of ours—something to take the place of the superannuated notion of “cause-and-effect.” But what was the alternative?
On the view of Nature that had held sway for four centuries, the persistence of order was taken to be the status quo ante, with any change needing to be accounted for in causal terms. On the new perspective, which was like a photographic negative of the old picture, we were entitled to expect continuous, random flux everywhere, and it was the persistence of any particular order or pattern in any region of the universe that was viewed as highly improbable, needing to be accounted for.18
Persistence now presupposed mechanism, and it was henceforth the scientist’s job to elucidate the specific mechanism that accounted for the persistence of any descriptive invariance—any pattern. If there were ceaseless, unconstrained, random flux, there would be an infinity of possibilities that might now be realized just here.
But since only a small subset of those possibilities are currently realized, our scientific inquiry is aimed at revealing the nature of the constraints precluding any state of affairs other than the one observed. What are the operative constraints in play, such that nothing else is currently possible, that is, nothing other than what we observe?19
These constraints may include certain universal invariances, e.g. that light cannot travel faster than c. However, in specifying more fully the set of constraints-on-variance precluding all states of affairs other than the one we are seeking to account for, we now had to specify, with the highest degree of scientific rigor, those far more numerous sources of constraint which are not universal at all but quite concrete, local, and idiosyncratic to the situation—the cybernetics of it, “the particular go of it.”
Our scientific explanation would not be complete until we could satisfy a questioner’s quite specific “why this rather than that,” and specify the constraints idiosyncratic to this particular context. A state-of-affairs was now only to be regarded as having been explained scientifically once it could be demonstrated to be the only state-of-affairs not currently precluded, and from now on scientific explanations would be explanations in terms of unities—local, idiosyncratic unities at that.
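The bare logic of such negative explanation can be caricatured in a few lines of Python, with the possibilities and the constraints invented purely for the sake of the illustration: we strike out whatever the operative constraints preclude, and the explanation is complete when only the observed state of affairs survives.

```python
# A toy rendering of the logic of negative explanation: begin with everything
# that unconstrained flux might have realized here, strike out whatever the
# operative constraints preclude, and the explanation is complete when only
# the observed state of affairs survives. The "possibilities" and constraints
# below are invented purely for the sake of the illustration.

possibilities = {"A", "B", "C", "D"}   # what random flux might realize just here

constraints = [
    lambda s: s != "A",               # one local constraint precludes A
    lambda s: s not in {"B", "D"},    # another precludes B and D
]

not_precluded = {s for s in possibilities if all(c(s) for c in constraints)}

# Exactly one state of affairs is not currently precluded: the one observed.
print(not_precluded)  # -> {'C'}
```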
The primacy of the local, along with a decided preference for explanation in terms of the particular, concrete and situated over the universal, abstract and timeless, and of idiographic explanation in terms of unique individual histories over nomothetic explanation in terms of universal laws, were all hallmarks of the new science.
And from all this, and from the logic of negative explanation, there was an additional, and very big, payoff: from now on, scientific explanations can come to an end; they can be complete and categorical in relation to the specific question being addressed.
For rather than seeking an explanation of how (per impossibile, on the new cosmology) something has been “brought about,” we seek to discover and demonstrate, in any given case, how some initially puzzling state-of-affairs is the only state-of-affairs currently possible given the constraints revealed to be currently in place. We stop not because we have run out of steam but because we can show that, in logic, there is nothing more to be said in response to the particular question being asked.
In the new epistemology, the notions of object-and-forces, cause-and-effect, were entirely replaced by the notions of pattern-and-context, flux-and-constraint. We soon found ourselves back at home in our familiar world of diversity and idiosyncrasy, spontaneity and improvisation, context and purpose—a world of autonomous beings navigating their way through an anarchic universe according to their designs.
And yet the fabric of reality, at every level—the go of it—could now be accounted for in terms of the revolutionary new science, with logical rigour and even mathematical precision.
However, it also followed that what anything even was, in the first place, depended fundamentally on the specific, idiosyncratic context and on the observer’s purposes and point of view and the specific question being asked about it at any given time. Every situation had to be accounted for in its own terms, and to a particular questioner’s satisfaction.
The old myth was that all explanations must be of the same form,20 or at least must all fit together, and join up somehow, even if only “at the back.” But as scientists we were not all working together on some single, shared scientific enterprise. There was no City of Truth to be built. You could no longer put phenomena into categories and talk about “kinds of situation” or “this sort of thing”—except in relation to now one question, now another.
The Administrative Fallacy
The notion of something being “a case of” something was still to be very important, as it always was in science, perhaps more important than ever before, but what things could be considered to be cases of was no longer the preserve of any given academic discipline. There was no way to know in advance what kind of knowledge needed to be brought to bear.
Nor could you rationally any longer prejudge what sort of phenomenon you were dealing with, and then simply call in an investigator from the appropriate research specialism. That old, rationalist Epistemology of the Yellow Pages was dead; disciplines, for all practical purposes, were now for the birds.
The tacit assumption had previously been that the great structure of human knowledge itself came ready-divided into specialist disciplines, corresponding to the peculiar administrative structure common to most universities after the 1860s.21
You might call this “the Administrative Fallacy.” But all at once, with the new thinking, that quaint picture of the nature and structure of knowledge began to collapse like a house of cards.
For apart from the fundamental theoretical difficulties with the notion of knowledge as departmentalized, difficulties to which I’ve already adverted, it became clear that all too often the artificial divides between disciplines simply got in the way of good science. Mark Twain’s prescient remark perhaps got too close to the bone: “The researches of many commentators have already thrown much darkness on this subject, and it is probable that, if they continue, we shall soon know nothing at all about it.” That is precisely the situation we have in most fields of science today.
The exceptions, as the late John Ziman pointed out throughout his later work, are mostly where science has fortunately become “post-academic,” which is why some of the best science, even some of the best pure science, came to be carried out in multidisciplinary places like Bell Labs and Xerox PARC. Outside Schleiermacher and von Humboldt’s discipline-riven Academy, the name of the game was to learn something new that was truly innovative and of demonstrable value, and not merely to push back the boundaries of some single discipline.22
Tantamount to a wholesale attack on modernist rationalism, this unsettling logical consequence of the new thinking escaped few, if any, of the scientific revolutionaries.
Nor were the partisans merely “talking their own book,” as they say on Wall Street, in finding fatal flaws in the post-18th-century, rationalist conception of knowledge.
For it is no accident that these discoveries could never have been made had the scientists stayed within their own disciplines.
The universe, in reality, just isn’t divided up that way.
© Copyright 2012, 2022, 2023 Dr James Wilk
The moral right of the author has been asserted
“From Versailles to Cybernetics,” in Steps to an Ecology of Mind, London: Paladin Books, 1972
The great neuroscientist Ralph Gerard, one of the revolutionary partisans, in a moment of weakness at the Office of Naval Research (for which he is doubtless still turning in his grave), gave us the peer review system in the precise form still in use everywhere today, in grant-funding and in academic publishing. It remains an unrivalled and widely reviled system which, in one of those typical ironies with which the history of science is littered, has consistently been one of the main counter-revolutionary forces to have slowed the wider adoption of the new thinking. The story goes that in 1946, after the Babel of the first, cantankerous, Macy Conference, a wholesale bun fight in which a badly-behaved Warren McCulloch, the Chairman, insulted the eminent neuroscientist Gerard and rudely stopped him from speaking (McCulloch’s immortal words were, “Oh shut up, Gerard, you can talk next year!”), and before Gerard’s eminently successful experiences, by his own account, at the convivial 1948 interdisciplinary Hixon Symposium and the comparatively civilized post-1949 Macy Conferences, he had prematurely concluded that interdisciplinary science was simply a non sequitur, or at least a bloody pain in the arse to be avoided at all costs. In Gerard’s enduring peer review system, we are today still the doubtful beneficiaries of the great man’s understandable umbrage. (In 20th-century History of Science, the equivalent of Cleopatra’s Nose in changing the course of naval history may well be McCulloch’s Cheek.)
Here, the main exceptions that come to mind at once are probably Harry Black, Kurt Goldstein, Norbert Wiener, Gregory Bateson, Warren McCulloch, Ross Ashby, William T. Powers, Humberto Maturana, Heinz von Foerster, Stafford Beer, George Spencer Brown, and Francisco Varela.
In W. T. Powers’s felicitous formulation; my example of driving the car is also borrowed from him.
Again, I owe this elegant and accurate formulation to W. T. Powers
See, for example, the opening pages of his two-volume Principles of Psychology, 1890.
The whole corpus of Powers’s work, including his seminal 1973 masterpiece, Behavior: The Control of Perception, provides a clear introduction to this vast field of scientific research
Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe, Oxford: Basil Blackwell, 1953
With all due respect to Locke, who, Ryle recognized, was not attempting to set out a philosophical psychology or anything of the sort (unlike Hume, who aspired to be the Newton of the mental realm). Rather, Locke’s whole “demonology” of ideas, along with his “intermittent lapses” into “wires-and-pulleys” questions (in his influential 1689 Essay, which it is important to recognize was written for the general public), was only in the service of assisting the disputants of that most disputatious age in which Locke was writing to recognize the kinds of propositions they were disputing about.
in The Concept of Mind
Jakob Johann von Uexküll (1864–1944) first published his groundbreaking ideas in his 1909 monograph Umwelt und Innenwelt der Tiere; they were developed further in his 1913 Bausteine zu einer biologischen Weltanschauung, and were perhaps most comprehensively presented in his Theoretische Biologie (1920/28). Despite the flaws in many of the answers proffered in Maturana and Varela’s epic 1980 monograph, Autopoiesis and Cognition (W. T. Powers and his colleagues were on much solider ground here), still, here at last were the right kinds of questions being posed in the right kind of language, albeit not with the degree of clarity that their redoubtable conceptual innovations deserved, perhaps—but these are difficult matters to write about, and it is worth persevering with Maturana’s writing. His 1980 volume with Varela was a worthy successor to von Uexküll’s seminal work, and much influenced by it.
This was a significant extension of, and advance on, Goldstein’s original organismic conception, due primarily to Michael Foulkes’s psychoanalytic integration of Goldstein’s work in neurology with the work of the sociologist Norbert Elias on figurations.
Cf. Michael Oakeshott, On Human Conduct, Oxford: Clarendon Press, 1991
Ibid.
In, for example, "A Ternary Domanial Structure as a Basis for Cybernetics and its Place in Knowledge", Kybernetes, Vol. 18, No. 4, pp. 19–28 (1989)
I owe this point to Dr D. J. Stewart
See, for example, his 1989 paper, op. cit. The mechanisms of this were unravelled in the work of W. T. Powers and his colleagues.
James Wilk, “Mind, Nature and the Emerging Science of Change: An Introduction to Metamorphology,” in Gustaaf C. Cornelis, Sonja Smets and Jean Paul Van Bendegem (eds.), Metadebates on Science, Brussels and London: VUB University Press, Vrije Universiteit Brussel, and Kluwer Academic Publishers, 1999, pp. 71-89
Ibid.
See J. C. B. Gosling (1973), Plato, in The Arguments of the Philosophers series, Chapter XVII, “Preferred Explanations.”
In 1810, implementing a reform proposal of Schleiermacher’s, von Humboldt had founded the first universitas litterarum, the University of Berlin (with the ambitious Fichte, of all people, as its first Vice Chancellor!), and it became the model for all 19th Century German research universities, and soon for all universities worldwide. For a century and a half, in fact, since just after the American Civil War, when America’s older colleges began to embrace the new German model of the universitas litterarum and the first American universities were born, this tacit assumption has been knocking about. For an insightful discussion of this, and some of its implications for liberal arts education in America and beyond, see Prof. Anthony T. Kronman’s excellent Education’s End: Why Our Colleges and Universities Have Given Up on the Meaning of Life, New Haven: Yale University Press, 2007. Cf. also Stephen Toulmin’s 2001 Return to Reason (Cambridge, Mass.: Harvard University Press).
See, for example, Ziman’s “‘Post-Academic Science’: Constructing Science with Networks and Norms,” in Science Studies, Vol. 9, 1996.