Introduction:
At the end of Part I of this 3-part article, we introduced the 20th Century scientific revolution on which our work on change was founded.
We indicated the extraordinary array of disciplines from which came the investigators engaged in this interdisciplinary research, conducted mostly between the First World War and 1970, and we offered a small sample of the wide range of problems on which they worked together.
We noted that one leading scientist among them declared this work to be “the biggest bite out of the fruit of the Tree of Knowledge that mankind has taken in the last 2000 years,” although he added ruefully that “most such bites out of the apple have proved to be rather indigestible.”
We also cited an early survey of this scientific movement declaring it to have made for no less than a revolution in metaphysics, taking up where Kant had left off, and we noted the way in which the radical, sweeping implications of the work most certainly did not escape the attention of those involved in it.
But what were these ideas and findings that were supposed to be so revolutionary? This week’s post, Part II, sets out a summary of what all the excitement was about.
—The Editors
Philosophy Without Arguments—Part II
Think Before You Think
So what was the Big Idea?
Well, genuine novelty cannot be briefly summarized. So I can hardly do justice to the answer to that question here.
However I will do my best to adumbrate a very brief, synoptic view of “the big idea” that emerged from this scientific revolution, trying to convey its broad outlines as accessibly as possible and in terms as non-technical as the subject matter will allow. This regrettably rough sketch, of necessity presented dogmatically, will make up the whole of this week’s edition of Change (Part II of a 3-part article, “Philosophy Without Arguments”), and should at least give you a sense of what the fuss was all about, and how this 20th Century revolution in ideas provided the foundations on which our own scientific research was built, along with its practical applications in catalyzing major transformations by minimalist means.
I shall come at the topic initially from left field.
Life as Autonomy—the Abrogation of Causality
When I am driving along the road, my car is buffeted by crosswinds and jigged about by the uneven road surface, and I stay on the road as it twists and turns. I keep jiggling the steering wheel in tiny unconscious movements to maintain my perception of the front of my car as centred on its piece of road, between the curb and the central line. That’s all I need to do to stay on the road.
My movements of the steering wheel, of which I am unaware, are simply cancelling out any errors from my selected reference condition, “centred in my lane,” thus cancelling out the effect of any perturbations in real time—before they can even become perturbations, as far as I’m concerned.
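To make the logic of this concrete, here is a minimal computational sketch in the spirit of Powers’s control-of-perception model, though not his published code: the variable names, numbers and the slowly drifting “crosswind” are my own illustrative assumptions. The simulated driver never measures the disturbance at all; it merely keeps cancelling the error between its perception and its reference condition, and its steering output ends up equal and opposite to whatever the world throws at it.

```python
# A toy sketch of a perceptual control loop (illustrative only, not Powers's simulations).
# The "driver" never senses the crosswind itself; it only cancels the error between
# its perception and its reference condition ("centred in my lane").

import random

reference = 0.0        # desired perception: centred in the lane
perception = 0.0       # perceived lateral offset from the lane centre
output = 0.0           # cumulative steering correction
disturbance = 0.0      # slowly drifting crosswind / road camber (unknown to the "driver")
gain, dt = 10.0, 0.05  # loop gain and time step

for _ in range(2000):
    disturbance += random.uniform(-0.02, 0.02)   # the world keeps perturbing the car
    perception = output + disturbance            # what the driver actually perceives
    error = reference - perception               # discrepancy from the reference
    output += gain * error * dt                  # tiny corrections cancel the error

print(f"disturbance = {disturbance:+.3f}, steering output = {output:+.3f}, "
      f"perceived offset = {perception:+.3f}")   # output ends up ~ -disturbance; offset ~ 0
```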
This is the fundamental biological phenomenon of control, and it was in this sense that behaviour, whether of human beings or of E. coli, was seen now as the control of perception.1 And from the study of this phenomenon there soon emerged a radically new conception of living organisms.
The behaviour of all living things could no longer be explained as a response to external stimuli or as the effect of internal or external causes or both. It turned out that nature just doesn’t work that way. Behaviour of any kind could be accounted for only in terms of the purposeful cancelling-out of error.
Effectively, the behaviour of living things at all levels was demonstrated compellingly to consist of taking a selected environmental variable and extracting it from the causal nexus,2 echoing William James’s insight in the late 19th Century, that whereas in the inorganic realm, variable ends were attained by consistent means, in the organic realm consistent ends were attained by variable means.3 Far from being seen as the creature of causes, living things were now recognized as living only insofar as they cancelled out the effects of any would-be causes and so functioned purely autonomously at every level, down to the tiniest organelle.
More importantly, for the first time in the history of science, the formal mechanics of purpose were worked out in excruciating theoretical detail and put to empirical test, and the outright impossibility of any causal model for explaining behaviour was conclusively demonstrated empirically—whether or not anyone outside the field, least of all our benighted philosophical Academy, even bothered to notice!
For it could be demonstrated compellingly, and endlessly reconfirmed to the Nth decimal place, that the entire organic realm consists of a hierarchy of control systems, whose logical structure in turn consists of the cancelling out of any and all causal perturbations, from whatever source.
Nature abrogates causality. It’s how the entire organic realm works. And like the rest of the organic realm, the psychological realm could henceforth only be understood scientifically as not the creature of causes at all but as autonomous and purposeful—the systematic nullifying of any causal influences.4
Perception Out in the World
What is more, at least—but not only—in all higher mammals, behaviour was fundamentally expressive, communicative, purposeful and concerned above all with relationships, as was dramatically demonstrated by Bateson in the case of dolphins.5
Nor could purpose and meaning be ignored in the understanding of the physiological mechanics of perception, for even at the lowest synaptic levels of visual processing, even in the lowly frog, meaning and action, and especially purpose and context, were demonstrated to play not only an inextricable role, but indeed to take the lead role in all perception. Frogs are on the lookout for bugs because they want to catch and eat them, and they are built to be good at seeing bugs. But frogs don’t deduce bugs from uninterpreted buggy sense data; the eye sends no such data.
Rather, as Horace Barlow had first proposed, the frog’s eye sends its brain news of tasty bugs in the vicinity. In their much-cited watershed 1959 paper, “What the Frog’s Eye Tells the Frog’s Brain,” Lettvin, Maturana, McCulloch and Pitts, drawing on the earlier seminal work of Barlow and on Oliver Selfridge’s work on pattern recognition, would effectively put paid once and for all to a number of classical empiricist dogmas: the myth of the irrelevance of purpose and context to what is perceived, the whole 19th Century conception of a visual field, the myth of abstracting meaning from sense-data, and much else besides, including the very distinction between sensation and perception, for it turned out that it’s all perception, at least at the conscious level.
This was also the conclusion of the great Hungarian experimental psychologist, neuroscientist and (before emigrating to America) sometime psychoanalyst Béla Julesz, in his pioneering work on vision at Bell Labs published the following year, which confirmed the suggestive findings on frogs from a new and unexpected direction.
Spatial depth was directly perceived rather than constructed or interpreted out of sense data. In fact, near the end of his long life’s work devoted to the biophysics and neurophysiology of vision, Julesz once summed up the philosophical import of his seminal scientific findings by saying that the entire perceptual process is remarkable for being unconscious at every stage, with the very first conscious stage being the readily actionable or verbalizable end-product (“hey, that’s one of those new Ford Thunderbirds!”).
Barlow went further, concluding that human perception itself was a derivative of the verbal, social communication of descriptive aspects of the environment (say, Thunderbirds) that happen to be of local interest to us and our pals. We might aphoristically translate Barlow’s conclusions into contemporary Twitter-speak, by saying that so-called perceptual consciousness is merely an abstraction from all that we tweet—or might tweet if we chose—about our immediate environment.
The neuroscientist Donald MacKay, in a similar vein to Barlow, had earlier characterized conscious perception in terms of patterns of conditional readiness-to-respond-in-action. Both Barlow and MacKay, incidentally, had taken Wittgenstein’s account of aspect-perception6 as an inspiration for their empirical research, and it provided some of their work’s philosophical foundations.7
Perception is direct and unmediated, inextricable from action; and consciousness, far from being a brain process at all, is no more than an infelicitous, abstract characterization of our publicly verbalizable patterns-of-conditional-readiness-to-respond.
A number of our contemporary philosophers8 seem to have rediscovered this conception in a somewhat watered-down form, but amongst these early investigators all this, along with its radical implications, had been taken as read by the early 50s at the latest, and by the early 60s it was no longer a matter of establishing it empirically but of adding finer-grained neurophysiological detail to the picture.
The work by these neuroscientists and their colleagues and students, which continued over two generations, also put paid to the myth of the privacy of consciousness and especially the myth of “internal representations.” For, what is more, they showed that the brain does not conjure with internal representations of external objects—there are no such representations to be found encoded anywhere in the brain (the myth of internal representations being a more “pompous” and even more wrong-headed version of Locke’s “demonology” of ideas in modern dress, as Gilbert Ryle stingingly characterized it).9
Nor are any such alleged internal representations needed to account scientifically for any known phenomena, it turned out, nor is the nervous system wired that way, or indeed in any way that could conceivably have any room or need for such “representations.” It is remarkable how many pages of philosophy books and journals are wasted, even today, in theorizing about these entirely mythical ‘inventities’, the so-called “mental representations,” under one name or another. There ain’t no such animal.
What is more, as Ryle had argued on philosophical grounds in 1949,10 there simply is no ghostly inner theatre, whether immaterial or neurally instantiated, and at last we could understand why that is—from a purely scientific point of view—as well. Incidentally, it does philosophy no credit at all to go on writing in 2022 as if biology had not advanced since the 1920s, or psychology since the 1880s, or physics since the 1860s.
Spontaneous Interaction in Context
Maturana and his colleagues, extending the work with Lettvin et al. in subsequent decades, were to carry these conclusions further still, beyond questions of perception and consciousness, action and cognition, ultimately yielding a thoroughgoing interactional approach to understanding the self-organization of living systems of any kind, in which the interacting elements were—in an important sense—shown to be not really interacting at all from each element’s own point of view, in contrast to the point of view of the observer.
Rather, they were each completely autonomous, purposeful, Leibnizian windowless monads, forming with their environment an inextricable unity—a self-contained micro-universe of the kind presaged in great detail and with great philosophical sophistication by von Uexküll in his influential theory of the Umwelt.11 But more radical conclusions were still to come.
Spontaneity of action, as the neurologist Kurt Goldstein was amongst the earliest of the new thinkers to recognize, and perhaps the most influential on the topic, turned out to be an irreducible biologic phenomenon, logically prior to mere reaction. But more to the point, in neurophysiology, indeed in all organic activity, the fully functioning whole could readily be shown to be more elementary than its parts, and the nature, indeed the identity of the parts themselves could only be understood in the context of the whole. It was only in the context of this particular, functioning idiosyncratic whole that one could even say, in the first place, what the parts themselves were.
Just as for Goldstein the nervous system was not a bunch of neurons strung together, and as each neuron was, on the contrary, merely a node in a living network that functioned only as an integrated whole (with each neuron merely recruited to play its allotted role in responding to, and thus contributing to, the total context at any given moment), so all psychodynamics, in the human sphere, were now to be understood contextually in terms of interactional patterns embedded integrally within the whole interpersonal network in which those patterns live and move and have their being.12
All the same, individuals, though embedded inextricably in patterns of interpersonal interaction, nonetheless remain free agents, responding to their understood, self-defined, contingent situations by electing to do one thing rather than another in relation to their desired outcomes.13 Patterns of interpersonal interaction provide the essential context for the deliberation and choice of actions, within which their own part is exquisitely choreographed unconsciously, but those interactions in no way determine or direct specific, substantive actions.14
The Mental and the Physical: Freedom and the New Science
For these reasons along with a host of others, changes in the mental realm could be shown to account for changes on the physical level but—even in principle—explanation could never work the other way around. Physical changes could never explain mental changes, with the possible exception of relatively rare cases of physical pathology, when the system is effectively breaking down. MacKay, perhaps best known for his work on the physiology of seeing aspects, was adamant on this, while remaining equally insistent that there could never be a change on the mental level without a corresponding change on the physical level.
The properties or states of the physical could never account for the mental, any more than the circuitry of a radio set, they argued, let alone its ‘genetic blueprint’—the code followed by the manufacturers at Zenith, first laid down by the engineers who designed it—could ever account for the rich content of the Third Programme of the BBC.
D. J. Stewart pointed out decades ago15 that the logic of this had already been recognized by Hume, but that—as a large body of seminal scientific investigation went on to demonstrate—we at last had the hard science to show that it could not possibly work in the other direction, at least not in this particular universe of ours. The mental could account for the physical, but never the other way around.
Free will thus turned out to be real, and worse yet for the determinists, had at last been put on solid empirical grounds. Far from being scientifically inexplicable, free will was from now on scientifically inexpugnable.
Many of the revolutionaries—Kenneth Craik explicitly, and perhaps also Bateson, MacKay, even McCulloch—might have described themselves as hylozoists, seeing life and mind as essential properties of matter, but in no way as a function of the physics of matter, which they took to be largely irrelevant here. Biology, for example, could never again be a matter of following that old pious hope of a recipe, “take Physics, and just add carbon.” Rather, life and mind were a function chiefly of the organization of matter, its design.16
While all of these scientific revolutionaries set their face against all forms of reductionism, determinism, physicalism (in the sense in which the term is used today) and genetic dogmatism, all were arguably in some sense mechanists, albeit of an entirely novel kind—insofar as they were interested neither in the alleged causes of things (what could that now even mean in the universe they had come to realize they inhabited?) nor in the physics of things, which no longer played more than a subsidiary explanatory role outside physics proper.
Rather, they were interested in (let us call it) the dynamics of things, what many of them informally liked to call “the cybernetics of it”—the loops and feedbacks and all that makes any given, idiosyncratic situation tick—in other words, the precise mechanism of the thing, the Machine in the Ghost.
In seeking to apply rigorous, scientific understanding to every idiosyncratic situation put forward for study, they would unravel its equally idiosyncratic interconnections, in order to grasp the dynamics of what made any given thing tick.
But to do this they didn’t construct models of what they were investigating, for the most part, nor did they abstract byzantine “systems” which could be mapped out on paper, in the manner of the emerging ‘opposing camp’, the systems theorists. For they could show the futility of such exercises for most scientific purposes, except where the real thing was either too dangerous to meddle with or had not yet been built, as in the case of nuclear reactors or suspension bridges, for example17.
If they used models at all, they were working models that could be tested on the bench18 and their behaviour analyzed. For the most part, from this new perspective, a scientist had to work “directly with pieces of the real world, in a close and delicate manner,” as Stewart put it, and after all, “the best material model of a cat,” Arturo Rosenblueth famously intoned, “is another—or preferably the same—cat.”
Whatever it was they happened to be studying, the aim was to understand what Kenneth Craik liked to call “the go of it,” using the Scots colloquial expression so famously deployed by Clerk Maxwell from earliest childhood when demanding of his parents, “What’s the go of it?”—and if not satisfied with their answer, “but what’s the particular go of it?” And it was always the particular go of a thing that was now to be of chief scientific interest.
It was not that matter and energy had no role at all to play in “the go of it”—the workings of the cosmos—and hence in the fabric of reality. On the contrary, matter and energy conveniently provided a medium of communication and suitable raw materials to be organized—as passive building blocks—by the flows of information (“differences on the move”), in turn organized, as D. J. Stewart later demonstrated,19 by values and preferences (“imparities on the move”).
Like a hod carrier or a forklift truck, energy could do the heavy lifting and matter could provide the bricks and mortar, but these merely enabled the work of organization to be implemented, the structures to be built and moved, in the context, according to the architectural demands of purpose and reason, custom, intention and design.
Causality hence could no longer play any explanatory role in a scientific understanding of what takes place in the universe, at least once we rise above the crudest level of comparatively trivial physical phenomena and enter the biological realm, let alone the human realm.
For most of the distinct phenomena in the universe were, in effect, immune to causal influence—after all, the universe harbours infinitely more distinct phenomena in Berlin than in Betelgeuse. And even the physicists had abandoned causality over a century ago. Causality was clearly long past its “use by” date.
Negative Explanation and the Primacy of the Local and Idiosyncratic
So what then? If there was no longer any place for the old conceptions of causality, which seemed now to lack any scientific application, and if the laws and concepts of physics were clearly irrelevant to scientific explanation when it came to the overwhelming majority of natural phenomena, then we had to look elsewhere, beyond matter and energy, power and forces, if we were to find any unifying principles running through the fabric of this highly patterned but fundamentally anarchic, non-rule-governed universe of ours—something to take the place of the superannuated notion of “cause-and-effect.” But what was the alternative?
On the view of Nature that had held sway for four centuries, the persistence of order was taken to be the status quo ante, with any change needing to be accounted for in causal terms. On the new perspective, which was like a photographic negative of the old picture, we were entitled to expect continuous, random flux everywhere, and it was the persistence of any particular order or pattern in any region of the universe that was viewed as highly improbable, needing to be accounted for.20
Persistence now presupposed mechanism, and it was henceforth the scientist’s job to elucidate the specific mechanism that accounted for the persistence of any descriptive invariance—any pattern. If there were ceaseless, unconstrained, random flux, there would be an infinity of possibilities that might now be realized just here.
But since only a small subset of those possibilities are currently realized, our scientific inquiry is aimed at revealing the nature of the constraints precluding any state of affairs other than the one observed. What are the operative constraints in play, such that nothing else is currently possible, that is, nothing other than what we observe?21
These constraints may include certain universal invariances, e.g. that light cannot travel faster than c. However, in specifying more fully the set of constraints-on-variance precluding all states of affairs other than the one we are seeking to account for, we now had to specify, with the highest degree of scientific rigour, those far more numerous sources of constraint which are not universal at all but quite concrete, local, and idiosyncratic to the situation—the cybernetics of it, “the particular go of it.”
Our scientific explanation would not be complete until we could satisfy a questioner’s quite specific “why this rather than that,” and specify the constraints idiosyncratic to this specific context. A state-of-affairs was now only to be regarded as having been explained scientifically once it could be demonstrated to be the only state-of-affairs not currently precluded, and from now on scientific explanations would be explanations in terms of unities—local, idiosyncratic unities at that.
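The bare logic of such negative explanation can be put in a toy sketch (my own, purely illustrative; the candidate states and constraints are invented for the example): enumerate what unconstrained flux might have realized here, apply the operative local constraints, and the explanation is complete when the observed state of affairs is the only one left unprecluded.

```python
# A toy illustration of "negative explanation" (illustrative only, not from the original
# authors): a state of affairs counts as explained once the operative constraints are shown
# to preclude every alternative, leaving the observed state as the only one possible.

candidate_states = ["A", "B", "C", "D"]   # what unconstrained flux might realize here

# Idiosyncratic, local constraints: each one precludes certain candidates.
constraints = [
    lambda s: s != "A",   # precluded, say, by the local geometry of the situation
    lambda s: s != "B",   # precluded by this particular system's history
    lambda s: s != "D",   # precluded by the current configuration of its neighbours
]

not_precluded = [s for s in candidate_states
                 if all(constraint(s) for constraint in constraints)]

observed = "C"
if not_precluded == [observed]:
    print(f"{observed!r} is explained: it is the only state not currently precluded.")
else:
    print(f"explanation incomplete; states still possible: {not_precluded}")
```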
The primacy of the local, a decided preference for explanation in terms of the particular, concrete and situated over the universal, abstract and timeless, and a preference for idiographic explanation in terms of unique individual histories over nomothetic explanation in terms of universal laws were all hallmarks of the new science.
And from all this, and from the logic of negative explanation, there was an additional, and very big, payoff: From now on, scientific explanations can come to an end: they can be complete and categorical in relation to the specific question being addressed.
For rather than seeking an explanation of how (per impossibile, on the new cosmology) something has been “brought about,” we seek to discover and demonstrate, in any given case, how some initially puzzling state-of-affairs is the only state-of-affairs currently possible given the constraints revealed to be currently in place. We stop not because we have run out of steam but because we can show that, in logic, there is nothing more to be said in response to the particular question being asked.
In the new epistemology, the notions of object-and-forces, cause-and-effect, were entirely replaced by the notions of pattern-and-context, flux-and-constraint. We soon found ourselves back at home in our familiar world of diversity and idiosyncrasy, spontaneity and improvisation, context and purpose—a world of autonomous beings navigating their way through an anarchic universe according to their designs. And yet the fabric of reality, at every level—the go of it—could now be accounted for in terms of the new science, with logical rigour and even mathematical precision.
However, it also followed that what anything even was, in the first place, depended fundamentally on the specific, idiosyncratic context and on the observer’s purposes and point of view and the specific question being asked about it at any given time. Every situation had to be accounted for in its own terms, and to a particular questioner’s satisfaction.
The old myth was that all explanations must be of the same form,22 or at least must all fit together, and join up somehow, even if only “at the back.” But as scientists we were not all working together on some single, shared scientific enterprise. There was no City of Truth to be built. You could no longer put phenomena into categories and talk about “kinds of situation” or “this sort of thing”—except in relation to now one question, now another.
The Administrative Fallacy
The notion of something being “a case of” something was still to be very important, as it always was in science, perhaps more important than ever before, but what things could be considered to be cases of was no longer the preserve of any given academic discipline. There was no way to know in advance what kind of knowledge needed to be brought to bear.
Nor could you rationally any longer prejudge what sort of phenomenon you were dealing with, and then simply call in an investigator from the appropriate research specialism. That old, rationalist Epistemology of the Yellow Pages was dead; disciplines, for all practical purposes, were now for the birds.
The tacit assumption had previously been that the great structure of human knowledge itself came ready divided up into specialist disciplines, corresponding to the peculiar administrative structure common to most universities after the 1860s.23 You might call this “the Administrative Fallacy.” But all at once, with the new thinking, that quaint picture of the nature and structure of knowledge began to collapse like a house of cards.
For apart from the fundamental theoretical difficulties with the notion of knowledge as departmentalized, difficulties to which I’ve already adverted, it became clear that all too often the artificial divides between disciplines simply got in the way of good science. Mark Twain’s prescient remark perhaps got too close to the bone: “The researches of many commentators have already thrown much darkness on this subject, and it is probable that, if they continue, we shall soon know nothing at all about it.” That is precisely the situation we have in most fields of science today.24
Tantamount to a wholesale attack on modernist rationalism, this unsettling logical consequence of the new thinking escaped few, if any, of the scientific revolutionaries.25 Nor were the partisans merely “talking their own book,” as they say on Wall Street, in finding fatal flaws in the post-18th-century, rationalist conception of knowledge.
For it is no accident that these discoveries could never have been made had the scientists stayed within their own disciplines. The universe, in reality, just isn’t divided up that way.
© Copyright 2012, 2022 Dr James Wilk
The moral right of the author has been asserted
Next week, this article continues in Part III: Next Friday we will be considering, in the final instalment of this 3-part article (our last post before the holidays), the problem of duelling epistemologies arising from this revolution in ideas. We will take up the question of where dialogue between radically, irreconcilably conflicting views becomes either productive and fruitful, on the one hand, or else an unproductive total waste of time. We will then begin to make sense of the loss to the world when philosophers start talking only to themselves, abandoning their post where and when they are most needed: in making sense of new directions in scientific thinking. But we will also consider when it is best that the philosophers stay clear out of the way.
In the felicitous formulation of W. T. Powers, from whom my example of driving the car has been borrowed.
Again, I owe this elegant and accurate formulation to W. T. Powers
See, for example, the opening pages of his two-volume Principles of Psychology, 1890.
The whole corpus of Powers’s work, including his seminal 1973 masterpiece, Behavior: The Control of Perception, provides a clear introduction to this vast field of scientific research.
You can teach dolphins to perform tricks by rewarding them with fish when they randomly do what you’re looking for, so-called “operant conditioning”—that’s classical learning theory 101. But Bateson and his colleagues found a number of oddities. To take just one example, you can also reward dolphins for novelty—every time they did something new that you’d never seen them do before, you’d throw in some fish, and you would never reward them for the same trick twice. But there comes a point, perhaps after fourteen or fifteen training sessions, when the dolphin suddenly “gets it,” has a porpoise’s epiphany, figures out what’s expected, and then suddenly it produces a flurry of novel tricks, including a number of pieces of behaviour that have never before been seen in the species. There was a lot going on that was not accounted for in the learning theory of the day: Bateson showed that classical learning theory missed most of what was going on with the dolphins, and that, for starters, there was learning about context, there was the overwhelming influence of the trainers’ expectations, the trainer’s relationship with the “trainee,” and purposeful communication between them, most of it—by definition—non-verbal, about all such matters.
Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe, Oxford: Basil Blackwell, 1953
Both of them, I recall, had studied with Wittgenstein at Cambridge.
Alva Noë and the late Susan Hurley come to mind, and there are many others.
With all due respect to Locke, who, Ryle recognized, was not attempting to set out a philosophical psychology or anything of the sort (unlike Hume, who aspired to be the Newton of the mental realm). Rather, Locke’s whole “demonology” of ideas, along with his “intermittent lapses” into “wires-and-pulleys” questions (in his influential 1689 Essay, which it is important to recognize was written for the general public), was only in the service of assisting the disputants of that most disputatious age in which Locke was writing to recognize the kinds of propositions they were disputing about.
in The Concept of Mind
Jakob Johann von Uexküll’s (1864–1944) groundbreaking ideas were first published in his 1909 monograph Umwelt und Innenwelt der Tiere, developed further in his 1913 Bausteine zu einer biologischen Weltanschauung, and were perhaps most comprehensively presented in his Theoretische Biologie (1920/28). Despite the flaws in many of the answers proffered in Maturana and Varela’s epic 1980 monograph, Autopoiesis and Cognition (W. T. Powers and his colleagues were on much solider ground here), still, here at last were the right kinds of questions being posed in the right kind of language, albeit not with the degree of clarity that their redoubtable conceptual innovations deserved, perhaps—but these are difficult matters to write about, and it is worth persevering with Maturana’s writing. His 1980 volume with Varela was a worthy successor to von Uexküll’s seminal work, and much influenced by it.
This was a significant extension of, and advance on, Goldstein’s original organismic conception, due primarily to Michael Foulkes’s psychoanalytic integration of Goldstein’s work in neurology with the work of the sociologist Norbert Elias on figurations.
Cf. Michael Oakeshott, On Human Conduct, Oxford: Clarendon Press, 1991
Ibid.
in, for example, "A Ternary Domanial Structure as a Basis for Cybernetics and its Place in Knowledge", Kybernetes, Vol. 18, No. 4, pp. 19–28 (1989)
Thus most were clearly also, in effect, hylomorphists, regarding all being, indeed any particular entity (on Aristotle’s act-potency scheme) as a unity of form and matter. The concept goes back to Aristotle’s biology, though the term “hylomorphism” itself dates only from the 1920s when once again the notion had an important role to play in the biological sciences, particularly in Central Europe.
I owe this point to Dr D. J. Stewart
The most notable examples of how this should be done would be the working models of W. T. Powers and his colleagues, with open source code provided to the rest of the scientific community. These were not spaghetti-and-meatballs diagrams, pictures, metaphors, useless mid-level abstractions and fine words that don’t butter any parsnips, but carefully programmed artifacts, pieces of kit, run on a computer or constructed from bits of hardware, whose behaviour could be scientifically studied. From now on, we’re all from Missouri: “you’ve got to show me.”
See, for example, his 1989 paper, op. cit. The mechanisms of this were unravelled in the work of W. T. Powers and his colleagues.
James Wilk, “Mind, Nature and the Emerging Science of Change: An Introduction to Metamorphology,” in Gustaaf C. Cornelis, Sonja Smets and Jean Paul Van Bendegem (eds.), Metadebates on Science, Brussels and London: VUB University Press, Vrije Universiteit Brussel, and Kluwer Academic Publishers, 1999, pp. 71-89
Ibid.
See J. C. B. Gosling (1973), Plato, in The Arguments of the Philosophers series, Chapter XVII, “Preferred Explanations.”
In 1810, implementing a reform proposal of Schleiermacher’s, von Humboldt had founded the first universitas litterarum, the University of Berlin (with the ambitious Fichte, of all people, as its first Vice Chancellor!) and it became the model for all 19th Century German research universities, and soon for all universities worldwide. For nearly a century and a half, in fact, since just after the American Civil War when America’s older colleges began to embrace the new German model of the universitas litterarum and the first American universities were born, this tacit assumption has been knocking about. For an insightful discussion of this, and some of its implications for liberal arts education in America and beyond, see Prof. Anthony T. Kronman’s excellent Education’s End: Why our Colleges and Universities Have Given Up on the Meaning of Life, New Haven: Yale University Press, 2007. Cf. also Stephen Toulmin’s 2001 Return to Reason (Cambridge, Mass: Harvard University Press)
The exceptions, as the late John Ziman pointed out throughout his later work, are mostly where science had fortunately become “post-academic,” which is why some of the best science, even some of the best pure science, was now being carried out in multidisciplinary places like Bell Labs and Xerox PARC. Outside Schleiermacher and von Humboldt’s discipline-riven Academy, the name of the game was to learn something new that was truly innovative and of demonstrable value, and not merely to push back the boundaries of some single discipline. See for example, Ziman’s “‘Post-Academic Science’: Constructing Science with Networks and Norms,” in Science Studies, Vol. 9, 1996.
With the possible exception, for a fatefully brief and most unfortunate time, of the great neuroscientist Ralph Gerard (one of the revolutionary partisans), who, in a moment of weakness at the Office of Naval Research (for which he is doubtless still turning in his grave), gave us the peer review system in the precise form still in use everywhere today, in grant-funding and in academic publishing—a still unrivalled and universally reviled system which, in one of those typical ironies with which the history of science is littered, has consistently been one of the main counter-revolutionary forces to have slowed the wider adoption of the new thinking. The story goes that in 1946, after the Babel of the first, cantankerous, Macy Conference, a wholesale bun fight in which a badly-behaved Warren McCulloch, the Chairman, insulted the eminent neuroscientist Gerard and rudely stopped him from speaking (McCulloch’s immortal words were, “Oh shut up, Gerard, you can talk next year!”), and before Gerard’s eminently successful experiences, by his own account, at the convivial 1948 interdisciplinary Hixon Symposium and comparatively civilized post-1949 Macy Conferences, he had prematurely concluded that interdisciplinary science was simply a non sequitur, or at least a bloody pain in the arse to be avoided at all costs. In Gerard’s enduring peer review system, we are today still the doubtful beneficiaries of the great man’s understandable umbrage. (In 20th-century History of Science, the equivalent of Cleopatra’s Nose in changing the course of naval history may well be McCulloch’s Cheek.)
"It was only in the context of this particular, functioning idiosyncratic whole that one could even say, in the first place, what the parts themselves were. " - so the opposite of synergy. is there a word for it? anyway, it does suggest novel compression methods.
"why this rather than that," - i think the standard answer in biology is path dependence (history). i am not sure you mean this to be one of your constraints.