Introduction
We’re back!
After the better part of a year’s sick leave recovering from a brain haemorrhage which left me effectively paralysed and without an intelligible speaking voice for months, I am virtually 100% recovered, soon to be 100%. I’m now fully back to work and workouts, and back to my long-haul international travel schedule.
Best of all, I am able to write and type again, and to speak clearly, the last things to recover. Fortunately, no sensory or cognitive functions were affected, just a few right-sided motor functions put out of action (you could say it cost me an arm and a leg!), but I did need to learn to sit, stand, walk, talk, and use my right hand and arm again, and every learning, each small step forward, was hard-won.
“The basic difference between an ordinary man and a warrior is that a warrior takes everything as a challenge, while an ordinary man takes everything as a blessing or a curse.” (Castañeda) Following my near-death experience (statistically I had only a one-in-four chance of surviving) it’s been one big challenge after another—an all-out battle—and full-on hard work, but after a year of struggle against overwhelming odds, I’ve done it, through sheer willpower teaching my brain to develop and follow new pathways.
This week’s post, on a theme I promised to write about in my most recent post of March 29th, 2024, is something of an experiment. It is longer than usual, but more significantly, rather than writing to a plan like my previous pieces, this week’s post was written (as planned) stream-of-consciousness, and after a while it seemed to write itself. And it’s a much more personal piece than usual.
I reflect on the nature of reality and the scientific spirit. I touch on some personal autobiographical themes and explore the roots of my work on change in a body of transdisciplinary work in the middle decades of the 20th Century. I go on to explore the study of singularities, the poison of Rationalism, the way language maps onto the world and for the most part fails utterly to do so, and the insidious role of language in blocking off possibilities before us.
Think of it as joining me for a cozy fireside chat in my library over a glass of wine, far from a formal lecture. But it is a unified argument, with no wasted detours, and it somehow all holds together. In the final section, it attempts to show us a hopeful way out of our benighted state, teaching our brains to develop and follow new pathways.
—James
Your Way with Words
Most of the time we are not in touch with reality, which we see only through a glass darkly, approaching it with the thinnest of thin descriptions, and only one or two of those at that—most involving a greater or lesser degree of misunderstanding—and to which we cling.
We cleave to our chosen descriptions, to the exclusion of all others; and we cleave to our framing, and to our pet theory which we have made up unawares and take for part of the fabric of reality. And we filter and embellish and confabulate and trim our experience to fit the Procrustean bed of baseless fantasy we take, under oath, for fact.
Our understanding of things is a misunderstanding. And we ourselves only feel understood by those who share or endorse our particular misunderstanding.
After all, we can marshal any amount of corroborated “facts” to bolster our opinion, no matter what the opinion may be. “Facts,” however, can be the enemy of truth, and oftentimes the corroboration ultimately reflects no more than the consensus of the herd.
Facts are usually only the flagbearers of partial truths which, in context, can completely obscure what is really going on. For Reality exists only at the local, idiosyncratic level where “facts” are all but useless as we are typically dealing with a sample size of one.
Facts are the preserve of third-rate journalists and advertising copywriters. Experts love citing facts. Real scientists do not. “Science,” said the great physicist and teacher of science Richard Feynman, “is the belief in the ignorance of experts. When someone says ‘science teaches such and such’, he is using the word incorrectly. Science doesn’t teach it; experience teaches it.”1
.
The Scientific Spirit
Science never deals in facts. Science knows only observations, questions, and conjecture. It does not pronounce. At its best, science is about the search for greater understanding, looking out for anomalies and what we ourselves have missed, where we had got hold of the wrong end of the stick. It is about delighting in changing our views, and gaining an ever-renewed and modified perspective on things. If all goes especially well, as scientists we can celebrate our having to newly adopt a radically altered, sometimes completely inverted view from our previous one. Science is about not knowing, about humility in the face of wonder, and going to work every day looking forward to learning something new. It is about taking genuine pride and pleasure in changing your mind. It is about not knowing what you are looking for and finding it in the most unexpected place. It is about puzzlement and insight, the quest for the next question. It is exploration and discovery. It is opening your eyes and mind to what has been there all along, unnoticed. It is patient hard graft, creativity, and eventually a stroke of luck. It is about forswearing certainty and always maintaining a healthy disrespect for the received view, and especially a healthy scepticism with respect to our own current views.
The ‘scientific spirit’ necessarily extends beyond the lab to embrace our attitude to the whole intellectual domain, to the world of affairs, and to everyday life. It values objective truth and honesty above partisan views and arid abstractions, where an abstraction is (roughly) anything we cannot encounter—see and hear—without learning how to talk about it (Prof. Elmer Sprague). A small child can come to learn a lot about bugs by observing them, even if he does not know what they are called; not so with credit, or overdrafts, or compassion or intelligence. These are not things given in the world, and we need to learn how to use the word if we are to be able to talk about the abstraction as if it were a thing like bugs or stones.
And so the very technical abstractions we have to deploy as scientists are merely tentative, negotiable faute de mieux, something to be getting on with until we change our minds in favour of ones more useful given a better understanding as we learn more—merely a pis aller as we have to say something. Scientific advance takes the form of a change of language. Freud redefined mind as the realm of the meaningful. Einstein redefined simultaneity.
In the past three or four hundred years, the scientific spirit has never been rarer than today. It is fast disappearing altogether. As recently as the 1950s, in the age of unrestricted interdisciplinary grants, it was still in ample supply. But the heyday of the scientific spirit was in the Age of Enlightenment once Rationalism was on the run, especially in the Germanies by the early 18th century, and certainly by century’s end.
In the vanguard of the anti-Rationalist movement were Bacon and Newton followed by the Newtonians. Even in France, by the middle of the 18th century, the Newtonian experimentalists had succeeded in supplanting the Rationalist Galilean-Cartesian epistemology and worldview, at least until, by the end of the 18th century, Rationalism had made an unexpected comeback and, ever since, has continued its conquest of territories formerly in the hands of those imbued by the scientific spirit, creating a monstrous hybrid of science and Rationalism, cunning and seductive. We will, later on, have more to say about Rationalism, which Michael Oakeshott called “the most remarkable intellectual fashion of post-Renaissance Europe.”
After a mid-20th century renascence of the scientific spirit, the Soviet Union’s launch of Sputnik at the height of the Cold War sparked the stellar rise of institutional Big Science in the West, henceforth largely the creature of the military-industrial complex, and the true scientific spirit was in danger of extinction. It was and still is viewed variously with nostalgia, or with contempt as a quaint relic of a bygone age, or as a mythical creature that never really existed. Today it is left to just a few brave souls.
Real science is overdue for a comeback. After all, the next great scientific frontier is biology, where Cartesian Rationalism just doesn’t and cannot work. It is far too complex.2 The decoding of the genome, advances in biochemistry and molecular biology, and the use of high-speed computers and other new technologies such as Big Data and Artificial Intelligence have distracted us and diverted resources away from fundamental work to answer questions of importance that might actually increase our fundamental understanding of physiology in health and disease.
One true exemplar of the scientific spirit who could use and advance these new technologies in biology without being distracted is the great Robert G. Shulman who celebrated his 100th birthday last year.
He is one of the last of the true scientific giants, last for the time being, a wide-ranging intellect and a humane creative genius of the first water. A chemist and physicist by training, after many years in solid-state physics he moved into biophysics, a field to which he made major contributions. He has also made major contributions to evolutionary theory, the understanding of cell metabolism, cardiology, diabetes, neurological disorders, imaging including brain imaging, molecular biology, consciousness studies, and a host of other fields, including, for example, discovering the explanation of the Warburg effect (how cells convert glucose to ethanol and lactate, a long-standing mystery), and he has made significant contributions to the philosophy of science.
He has tremendous knowledge of and experience with all the imaging techniques, including fMRI, spectroscopy and EXAFS, and made great use of them in his research, but really he made NMR (nuclear magnetic resonance) his own, measuring energy levels across the entire brain and showing beyond the shadow of a doubt, on current knowledge, that in everything we do the entire brain is actively involved, as the great Kurt Goldstein and a legion of 20th Century neurophysiologists had always been convinced.
When I met and spent a couple of days talking with him in his younger days (he was a young 91 then), we talked the whole time about consciousness, his main interest at that time and the subject of his excellent recent book.3 Mainly, we discussed Wittgenstein, as he was a fellow Wittgensteinian and like me a huge fan of Dr Peter Hacker’s work in philosophy, and he understood consciousness in a very Wittgensteinian and Ryleian way.
He knew exactly what the limitations of brain imaging were, and how crude attempts to apply neuroscience to understanding consciousness produced whole libraries of nonsense. He has not been hypnotized by all the clever gadgetry. An empiricist to the core, he is certainly no friend of Rationalism. I told him how I first came to choose to qualify in the neuroscience field as an 18-year-old undergraduate, in part because at the time the so-called Australian Materialists like J. J. C. Smart and David Armstrong in philosophy were writing obvious nonsense about the mind based on their (mis-)understanding of neuroscience, and I thought I’d better study neurophysiology seriously if I was to combat what I rightly feared would become a growing trend in philosophy. An always nuanced thinker and explorer of the depths, Bob Shulman is my ideal of the scientist’s scientist. Above all, he could use the latest tools without forgetting how to think.
But I digress. Because I want to talk in this article specifically about the theoretical basis of Minimalist Intervention (“MI”), how difficult it is to talk about and why, and how this bears on our everyday misunderstanding of reality, and how that tacit, everyday epistemological error gets us into hot water over and over again in our practical endeavours.
.
Cybernetics and Psychoanalysis
Sometime in 1979, I went into Blackwell’s in Oxford, at that time the largest academic bookstore in the world, and after a fruitless search through the shelves I asked a manager, a man in his 50s, where they kept the cybernetics books. He laughed out loud and “reminded” me condescendingly that it was 1979 not 1949, and “informed” me that no one reads cybernetics anymore and suggested I try the Bodleian Library. I might as well have asked the manager of Halfords where they kept their buggy whips. Psychoanalysis, for its part, was still very much in fashion in 1979. Fred Crews hadn’t yet published his ignorant mad ravings, and so the first shot had not yet been fired in the Freud Wars, and psychoanalysis was still all the rage amongst British intellectuals and academics.
The situation is reversed today. Cybernetics sounds perfectly modern, even trendy, while psychoanalysis is often seen as having gone out with high-button shoes. “No one, even in some psychoanalytic circles, reads Freud anymore,” or so you will be told. “It’s 2025, not 1925.”
Now the theory behind Minimalist Intervention owes much to a great many scientific and allied fields, including, inter alia, psychoanalysis, cybernetics, biosemiotics, physiology (including, but not principally, neurophysiology), analytical philosophy, and so on. Psychoanalysis has in fact probably had a larger influence on the development of MI than any other field, and certainly a greater one than cybernetics. Yet people always glom onto the cybernetics bit alone, to the neglect of the rest.
Why? There are several reasons for this.
We’ve already drawn attention to one reason: fashion. Cybernetics cool. Psychoanalysis uncool. And worse, people are bewildered and embarrassed by the association of psychoanalysis in their minds, and in the popular mind generally, with sex, a topic which for some bizarre reason still embarrasses most 21st Century humans. Go figure! But there is more to the avoidance of psychoanalysis in this context than fashion and sex.
For to understand the contribution of psychoanalysis to metamorphology (the science behind MI) takes too much study and in-depth specialist knowledge, and the connections between metamorphological theory and psychoanalytic theory are harder to convey because they go via a much more technical route. The same is true of biophysics, biosemiotics and all the other fields I mentioned and a few I haven’t mentioned.
It’s actually true of cybernetics as well! But when laymen refer to “cybernetics” they often know it only as a name, and are thinking of cybernetics minus the mathematics, or even minus the science, as if the maths and science didn’t matter. It is like references to quantum mechanics or relativity theory by non-mathematicians.
Not long ago, I ordered a copy of Sir Edmund Whittaker’s classic two-volume A History of the Theories of Aether and Electricity, first because it is one of the most universally highly acclaimed works on the history of science, which I somehow had never read, second because it was meant to contain the most lucid and accessible exposition of General Relativity, and third, most pressing, because I was seriously interested in the way in which theories of the aether are making a comeback in physics, and how some of the great quantum mechanicists had never relinquished it.
When Whittaker’s history arrived, at nearly 1000 pages, it weighed a ton. I opened it with great excitement and bated breath. What a disappointment! It was 1000 pages of almost unrelieved mathematics. His argument, his great history, was essentially the history of certain seminal equations and mathematical proofs. Yet, I came to see, that was the way he had to tell the story. It is the only way to relate the history of the subject from the scientists’ point of view, to capture their thought processes, what they were trying to do, and the excitement of discovery.
It is much the same with cybernetics, or for that matter psychoanalysis. The substance, and the truly exciting part, is all in the technical detail, impenetrable and deadly boring to the non-specialist casual reader.
There is a natural human tendency to sciolism, as there is so much to know in the world, increasing every day and increasingly specialist, and we have so little time. We are encouraged in this unfortunate tendency by widespread half-baked science journalism, and by the threefold social imperative amongst the chattering classes not to appear wrong, not to be without an opinion even on things you know nothing whatsoever about, and above all not to appear ignorant. If you say, “We know from cybernetics that blah-di-blah,” in most circles you can safely assume that no one will be able to contradict you.
The British humorist Stephen Potter, who coined the phrase and the concept “one-upmanship,” talked about one species of that genus, “But-not-in-the-Southmanship.” He gives the example of an expert “who has just come back from a fortnight in Florence,” saying, “And I was glad to see with my own eyes that this Left-wing Catholicism is definitely on the increase in Tuscany,” where all that is required to appear more knowledgeable than the speaker is to interject, “Yes, but not in the South,” and the discomfited speaker is bound to agree, “Yes yes, true,” when in fact he hasn’t a clue one way or the other.
You don’t really need to know any cybernetics to speak about it without fear of contradiction—it is enough to know some takeaways—because you are unlikely in everyday social intercourse to run into a real cybernetician. The same is true of running into a psychoanalyst. Cybernetics minus the mathematics, and often minus the science and the context, is all too easy to put into aphoristic, gnomic remarks, without genuine understanding. For this reason, as with psychoanalysis, or the philosophy of Wittgenstein, cybernetics is a field more likely to attract the attention and casual followership of cranks and poseurs than far more popular but equally technical fields requiring as great an amount of study to master, such as law or medicine. For quoting provocative epigrams in social conversation and for personal interest you can get by with just a few takeaways from cybernetics, but for serious understanding such sciolism, skating on the surface of knowledge, won’t help you, and will more likely mislead.
I’m reminded of the old Mr Natural cartoon, “Hey Mr Natural, what’s ‘Do-wah-diddy-diddy’?” “If you don’t know, don’t mess with it.”
These days, “Hey Siri,” let alone ChatGPT, is unlikely to get you such a wise and honest response when the context of the query actually demands it.
When asked about the theoretical underpinnings of MI, and about metamorphology, the science that alone makes MI possible, and where it all comes from, my team members typically fall back on saying, “from cybernetics.” I plead guilty to having sometimes done so myself, out of laziness or not feeling like getting drawn into conversation, because that one word, like the word “sex,” is often pretty well guaranteed to get your interlocutors to want to talk about something else, anything else.
But “it comes from cybernetics” is misleading and to some extent a non sequitur. Anyone saying “it comes from cybernetics” is referring, if truth be told, almost entirely to my own contributions to cybernetics, made mainly over a 15-year period, which moreover have not yet been published, and/or to those of my mentor and cybernetics colleague Dr D. J. Stewart, which have likewise not been published. Otherwise, the most seminal relevant works of cybernetics are so old, rare and obscure, many of them almost unobtainable now, that only a specialist of a certain age would even know they existed.
Even among my own team, few have even seen those rare works and none have mastered them. And the bulk of Dr Stewart’s monumental work has only been passed down orally to his students. And for the time being, at least, the same is true of mine. If, upon learning “it comes from cybernetics,” the questioner were to try and throw more light on MI and our view of change by going to the library and taking out a pile of books on cybernetics and making a thorough study of them, they would be very disappointed and have nothing to show for their efforts. Which is why it is particularly misleading to say, “this comes from cybernetics.” They would be better off reading and studying their way through the 75 papers (and counting) in the archive of past posts here on Change.
.
A Transdisciplinary Quest
Even allowing for all this, to say “from cybernetics” ignores the equally great or in some cases greater contributions to metamorphology of psychoanalysis, biosemiotics, analytical philosophy, physiology, kinesics, general semantics and a host of other fields, in work beginning before 1912, that for the most part was interdisciplinary or transdisciplinary.
To understand this huge body of transdisciplinary work properly, to grasp the significance of a particularly seminal paper or book, symposium or conference, therefore takes some fluency, some working knowledge, in multiple fields, typically including physiology, cybernetics, biophysics, mathematics, experimental psychology, linguistics, academic philosophy, and psychoanalysis and, at the time the seminal texts were written and the major symposia convened, the then still nascent field of “low-voltage” electrical engineering. This is a tall order, a big ask, in today’s overspecialised world, but was perfectly expectable among the original transdisciplinary work’s intended audience.
Most of the early cyberneticians, for example, most of whom had at least dabbled in constructing electro-mechanical models, were primarily interested in psychoanalytic or psychiatric matters. A sizeable number were medically qualified; those who, as in many cases, were Central European émigrés would have had to acquire a good background in academic philosophy back home as trained medical scientists and physicians; and very few indeed would have lacked sufficient knowledge of all the above-mentioned fields to follow a given author’s very technical argument conducted principally in any one of those fields.
Gregory Bateson found he sorely lacked the mathematics, however; he was fine with the rest. John Weakland, a chemical engineer in New York, sought him out when Bateson was staying there for a grant-seeking meeting with the storied Chester Barnard, President of The Rockefeller Foundation, and Bateson hoped Weakland would be able to tutor him and help him understand the mathematics that was over his head. But Weakland didn’t really want to talk about mathematics; he only wanted to talk about how he could get out of engineering and into anthropology, Bateson’s original discipline, because that’s why he had tracked Bateson to his lair. When Bateson got the Rockefeller Foundation grant the next day he invited Weakland to go out to California and join him, as his first team member, on the newly minted Palo Alto Project, as it was later known.4 The mathematics was long forgotten.
The world was different then, and a good deal more intellectually sophisticated, in part because better educated. I am somewhat unusual, for my generation, in having trained and qualified in all these fields and more, and only because I had become fascinated as a teenager with this extraordinary 20th-Century revolution in ideas, and from that tender age I determined to make a serious study of it which required me to pursue a lot of—what are today (that’s the operative word!)—widely diverse subjects.
What are ‘widely diverse subjects’? Leonardo da Vinci was highly accomplished as a painter, sculptor, draughtsman, craftsman, architect, engineer, mathematician, cartographer, astronomer, anatomist, botanist, palaeontologist and inventor, and quite a few other things besides, such as a designer of military ordnance and builder of stage machinery for court spectacles; he was undoubtedly a genius at a level of accomplishment standing head and shoulders above his contemporaries; but was he a polymath? Was he ahead of his time or typical of it? In ranging across most of his fields of work he was actually no different from a great many of his contemporaries, or only by a matter of degree. Many were not only equally ‘wide-ranging’ over much the same fields, but also dazzlingly ingenious and prolifically inventive.
His chosen areas of work and study were not wide-ranging back then; it was not “many things” he was into, for most of these fields of endeavour were part and parcel of a single métier. It had not yet been divided up into separate specialisms. It was a thing. Call it “artist-engineer,” or “quattrocento engineer,” or “master craftsman to the gentry,” or perhaps “elite jobbing designer.” It was what you did if you were someone who wanted to do that sort of work, and it was a well-known if insufficiently well regarded sort of work. Science as we know it hadn’t been invented yet.
I’m reminded of the definition of an engineer, as “someone who learns whatever he needs to learn to get the job done.” Leonardo did that, and his relatively recently rediscovered private Notebooks bear testimony to a relentlessly curious intellect. This is not to detract from Leonardo’s greatness, but to see it other than through a veil of decidedly anachronistic hype. In terms of profession, he was one of a great many “Renaissance men” who did much the same range of things he did. And his true greatness and originality were undoubtedly as an artist.
Moving forward in time five centuries, consider the ease with which Bob Shulman has still been able to move between disciplines—you’ll recall he moved from chemistry to solid-state physics, then biophysics, evolutionary theory, molecular biology, cardiology, endocrinology, neurology, consciousness studies, philosophy of science and philosophical psychology to name just a few areas to which he has made major contributions over the course of his long and distinguished career. He never dabbled. Even for someone of Shulman’s genius, it would be rather more difficult to do the same today—the silos are taller and windowless, and the barriers to entry into each are far higher and shockingly bureaucratic.
I myself meandered across so many academic disciplines at a time when those barriers to entry were far higher than in Shulman’s day but nowhere near as high as they are nowadays—I actually had to be taking a degree course or pursuing a professional programme leading to a formal qualification in each field I felt I needed to master, in order to sit at the master’s feet. But it’s what I needed to do. I was on a quest. Hunting down “the thing in the bushes.” And unlike my late friend Harry, it behoved me to see each course through to a qualification.
My Canadian friend Harry Moore was introduced to me with the words, “This is Harry Moore. Harry should be in a museum.” It was nearer the truth to say that Harry was a museum, a museum of 20th Century history of science. He was a cybernetician and everything else, a generation or two older than me, managed to study in far more scientific fields than I ever did and to meet and work with seemingly every great scientist of the 20th Century, all household names (if you live in the right household), because he purposefully went to sit at the feet of the greatest living scientists in multiple fields of study, signing up for and starting loads of degrees and never bothering to finish any of them. For he was pursuing his scientific interests for pleasure and didn’t need a degree or to worry about a CV. He wanted to learn from the great scientific minds of the time and so he went to wherever they were teaching, but he hated exams and grades so he dispensed with them. You see, although he came from a modest background, he had become independently wealthy before he was 21, having invested all his schoolboy pocket money and earnings from his paper round in a highly speculative copper mine near his hometown in British Columbia as a young teenager, and it struck pay dirt, big time. He couldn’t be bothered getting any richer, he had more money all of a sudden than he knew what to do with, so he travelled the world, and above all the world of science.
In my own case, I only studied the range of topics I did because I wanted to understand how change worked in theory and in the field, and I was sure that these guys in the vanguard of the 20th Century revolution in ideas had been getting close to an answer. I was right. But I felt I needed to read and study and understand their stuff in some depth, and to study with them or their students.
It is of great significance that these investigators, through the 1960s, were each tied to no single discipline, and had little or no idea what they were dealing with. Above all, it is of great significance that they knew it. They were breaking new ground in entirely unexplored territory. They had no ready-made categories or abstractions with which to conceptualise their empirical observations. There could be no question of generalising their findings at first and for a long while.
For example, the aforementioned Palo Alto Project, whatever Bateson may have babbled to the Rockefeller Foundation when he approached Chester Barnard for funding, internally had the sole objective of answering the question, “What is this project about?”
A new world was opening up. The host of divers new phenomena they were observing and studying had never been studied before. Some scientists among these new-look investigators, transdisciplinary to a man, would build something that emitted behaviour never before observed, and they would put it on the lab bench, literally or metaphorically, and study it, and muse on it, without having any idea at first how to think about, let alone talk about, what they were finding. They found, er, uh, um, uh, some findings. But they didn’t know in advance what they were looking for, or know what to make of it when they found it, or even what the “it” was. They certainly didn’t know what to make of their findings. In time, theories would emerge.
The self-referential Palo Alto Project, to take the same example again, despite the vagueness of its aims, or perhaps because of it, made numerous major contributions to science, of which the best known, but arguably least important, was the double-bind theory of schizophrenia, which itself spawned a vast body of research. In that first meeting, after Bateson had tried incoherently to convey what he was interested in exploring, Chester Barnard finally sighed in exasperation and interrupted Bateson’s rambling gobbledygook discourse to respond, “Look. I don’t know what it is you’re wanting to research, or what you’re going to discover. But if I did, there’d be no point in giving you any money, now would there?” With those words he got out the Rockefeller Foundation cheque book and asked, “How much do you need?” Bateson named the first big number that came into his head, and the Project was up and running.
.
The Study of Singularities . . . and Its Significance
One thing this new breed of scientific revolutionaries all had in common was that they were always studying singularities, singletons, one-offs. Although this was not unusual in itself for scientific investigators in the pre-Sputnik era, what was perhaps more unusual was that the significance of scientifically studying only singularities, and the unique opportunities it afforded, was not lost on any of them.
“The best material model for a cat,” in the famous formulation of Arturo Rosenblueth, the great Mexican physiologist and cybernetician (kicked off the faculty at Harvard in 1944 for the explicit reason that he was discovered to be of Jewish extraction), “is another, or preferably the same, cat.” This was a new way of doing science. You knew you were always dealing with a sample size of one. Or rather, not a new way of doing science as such, but a newly self-conscious variant of what had always been core to the practice of a great many scientists through the ages—perhaps the majority before the rise of Big Science in the post-Sputnik 1960s.
In many (I am tempted to venture to say most) sciences, and in almost all applied sciences, we are, again, only ever dealing with a sample size of one. In sciences and applied sciences as diverse as archaeology, botany, psychoanalysis, cybernetics, and engineering, there is little and often no point in concocting large statistical studies with a control group et cetera, as if imitating drug trials, where, exceptionally, there is some point. Statistically-based studies, as we now know since John Ioannidis’s work on “why most published research findings are false,” are fraught with difficulty, to say the least!
Even in physics, most great advances dealt with a sample size of one. Popular misconceptions of what scientists do, combine with Karl Popper’s grotesque armchair misconceptions of scientific proof, to cloud even our own view of our job, as scientists. Grant-driven Big Science also must share a lot of the blame, if not most of it.
But in my own scientific work, I recognised this early on and set about working out the logic of single-case studies, which required major epistemological reform. In short, it involves substituting the concept of constraint for the concept of cause. But as I have written about this large subject throughout the pages of Change, for now I will confine myself to only a few brief remarks in this connection.
If you are studying a single unique instance, a singleton or singularity, you are open to all its idiosyncratic detail, which of course is infinite. (“Reality is infinitely re-describable.”) And you are not only studying something in all its individuality but seeking to explain it in its own terms, and not in terms of something else.
Studying a single unique considerability, you can put negative and positive occurrences on a par, observing what might have happened but did not. You can thus view each occurrence as an item in a set, where of all the alternative possibilities, mutually exclusive, only this has occurred—which in Ashby’s terminology is the exhibiting of constraint, a particular kind of relationship between sets. And we can seek to account for the observed constraint in terms of the occurrence of other constraints, and these in terms of still other constraints, until we can explain, in purely idiosyncratic terms, how all possibilities are excluded except the one currently observed. That means that we can view an idiosyncratic situation or phenomenon as the only state-of-affairs not currently precluded, and explanation can come to an end.
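For readers who find a toy model helpful, Ashby’s notion of constraint can be sketched in a few lines of code. The sets and names below are invented purely for illustration and are not drawn from Ashby or from metamorphology: a constraint is exhibited whenever the occurrences actually observed form a proper subset of the possibilities conceivable a priori, and explanation proceeds by compounding constraints until only one state-of-affairs remains unprecluded.

```python
# Illustrative sketch (invented example): Ashby's "constraint" as a
# relationship between sets. A constraint is exhibited when the set of
# actual occurrences is a proper subset of the conceivable possibilities.

def constraint_exists(conceivable: set, observed: set) -> bool:
    """True when some conceivable possibilities never in fact occur."""
    return observed < conceivable  # proper subset

# Begin with every conceivable, mutually exclusive state-of-affairs...
conceivable = {"A", "B", "C", "D"}

# ...then successively apply constraints, each excluding possibilities,
# until all are excluded except the one currently observed.
constraints = [{"A", "B", "C"}, {"A", "C"}, {"C"}]

remaining = conceivable
for c in constraints:
    remaining = remaining & c  # intersect: keep only what is not precluded

print(constraint_exists(conceivable, remaining))  # True: constraint exhibited
print(remaining)  # the single state-of-affairs not currently precluded
```

The point of the sketch is only the logical shape: each further constraint narrows the set of unprecluded possibilities, and explanation can come to an end when exactly one remains.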
.
The Hermeneutics of Desire
Now as Wilhelm Windelband showed,5 every science makes use of two distinct modes of explanation.
Sometimes we seek to generalise across broad, artificial abstract categories of things or events, seeking to identify some ever tentative, always defeasible universal scientific laws, and explain a given considerability as being a case of the operation of some general rule or regularity. This is called ‘nomothetic’ explanation.
Then again, sometimes we particularise a given unique considerability, considered now in all its idiosyncratic individuality, and explain it in its own terms—so-called ‘idiographic’ explanation. Here we are more concerned with the meaning of an event or pattern.
In different sciences one or other of these two modes of explanation will tend to predominate—for example, nomothetic in physics and chemistry, and idiographic in archaeology and psychoanalysis. But no science does without both modes.
In the human domain, you cannot get away with not considering meaning, and language assumes an overarching importance at multiple levels. This is certainly true of metamorphological analysis, and therefore of MI.
Psychoanalysis, whatever else it may be, is above all a hermeneutics of desire. But the same is true of metamorphological analysis, and the design of a minimalist intervention involves translating the problem owner’s or change agent’s desire for a particular outcome into a set of concrete, immediate actions that will realize that outcome straight away. In making this translation we are following out, and steering the course of, the play of the signifier.
There are a significant number of laws in metamorphology, and a much larger number of principles that guide the analysis, guide how we steer, but these basically concern how the universe works, for want of a better phrase. To that extent, there are certainly rather prominent nomothetic aspects to the science behind MI.
However, when it comes to analysing a particular case, we are in the purely idiographic realm, though we are guided by the principles, while the laws act as kerbstones showing where the road isn’t, marking out the limits of the possible. But a metamorphological analysis, as a hermeneutics of desire, is really a purely interpretative matter, trying to arrive at a reading of the client’s text.
This requires a good deal of exegesis, even to discern what the individual’s desired outcome is, let alone how to achieve it right now with minimum effort in the context of other parties’ desires and the ineluctable constraints in the situation. A true hermeneutics of desire. We already know enough to know that we haven’t a clue what the client is talking about—if we think we do at any point, we are quick to shake off that illusion. We don’t take the client’s word; we get underneath it, to what, at the end of the day, and unbeknownst to them, they really mean.
In all this, we are clearly only dealing with singularities, idiosyncratic through and through, with situations which, rightly understood, have never occurred before anywhere in the world and will never occur again. This despite all appearances to the contrary, at least if we were to take seriously the abstractions with which the person’s utterances are replete. Of course we don’t.
.
The Flight from Abstractions
The client may, to start with, think they have a cultural problem, or a talent problem, or a quality control problem or what have you. We ignore that diagnosis. It’s just noise. Those words are mere mouthfuls of air, empty signifiers, communicating no information of any value.
Our job is to get underneath all the abstractions with their built-in, severely limited range of options. We aren’t interested in what the client thinks. That’s neither here nor there. But by listening in a different way, hanging on their every word, reading the text differently, we gradually open the client up to their actual reality, stripped of abstractions and limiting assumptions. The real work begins, the big fun begins, only once we at last drop the abstractions and get real.
We begin the analysis knowing nothing about the client’s situation and what to do about it—we only know we’ve never seen or heard of anything like this before—and after three or four hours, if all goes well, we end up with absolute certainty about the singular way forward.
They walk in our door with three or four, or at most twelve or fifteen, possible avenues between which they feel forced to choose. That is never reality. That is abstractionland.
In reality there is an almost infinite range of possible routes for intervening in their situation to secure their desired outcome. The challenge is in filtering it down to the one optimum route.
At the level of abstractions there are few possibilities, none of them optimal, or anywhere near. The usual Rationalist approach accepts and respects abstractions—it takes them seriously and it takes them for reality. Accordingly, it is a search for, and application of, a small number of abstract principles, rationally comprehensible, generalizable across whole classes of problems, supposedly universal, and characterized by their obvious relevance.
If all goes well, the result is four or five possible routes to the goal, “evidence-based” and known to “work” (after a fashion) and which may have been used successfully (up to a point) elsewhere, which the client can choose between. Those who have used them were in “exactly your situation” at the level of abstractions. You then choose your route, and embark on an uncertain journey to your desired outcome. Good luck.
By contrast, MI, as you will have gathered by now, is no respecter of abstractions. We don’t take abstractions seriously in the sense of mistaking them for reality. We are certain of only one thing: that no one has been in your situation before and no one, including you, will be again. MI grapples instead with an infinite number of concrete details, unique to this situation and resolutely local, ungeneralizable, rationally incomprehensible, and characterized by their apparent complete irrelevance. They are certainly irrelevant to the problem but will be key to the solution.
MI is always an adventure, a daring expedition into the unknown, and a thrilling encounter with the infinitely idiosyncratic, correspondingly ripe with infinite possibility. And the result is always a singular set of idiosyncratic actions which will without a doubt secure the desired outcome straight away. No routes needed, and so no maps are possible or necessary. “You have arrived at your destination.”
.
Language and Reality: The Museum Fallacy
An abstraction is an arrest in description (Oakeshott). Language doesn’t name things that we encounter in the world.6 Rather, we are interested in or wish to refer to some selected aspects of reality, and choose a signifier—an abstraction—to go proxy for the collection of attributes we wish to advert to.
A friend of mine and his younger brother grew up near a railway line, before the last few remaining steam trains were phased out in the 1960s. Every time a steam train approached the station, it would blow its whistle. The parents were very proud to let their 18-month-old show off for dinner guests by shouting “train, train!” and rushing to the window and pointing every time he heard the whistle blow. Until one evening they put a kettle on the stove to make tea for the guests, and when the kettle’s whistle blew, he ran excitedly into the kitchen shouting, “train, train!” and pointing at the kettle.
A great deal of metamorphological theory deals in enormous detail with our own novel account of the way in which language latches onto the world, and it is impossible to summarise here. But the usual misunderstanding of the nature of abstractions and their relationship to reality leads to errors far graver than the toddler’s mistake about “train,” and with grave consequences: we come to think of the abstractions we bandy about as signifying real things in the world, components of reality which we accept as given.7
One of the most common, all-pervasive and dangerous logical fallacies is the Museum Fallacy. You won’t find it in the textbooks or online—I identified and wrote it up decades ago and have used it regularly in my teaching at Oxford, but I don’t believe I published it except perhaps in the pages of Change on Substack. I also talk about it alternatively in terms of the Museum Theory of Nature or the Museum Theory of Reality. This is the fallacious theory that everything in the world could be affixed with a label stating what it is.
It doesn’t apply even to physical objects (Austin’s “medium-sized dry goods”), let alone most of the things we have occasion to talk about, and yet almost everyone in practice acts as if it were true. We act as if we know how to deal with something, or there is someone who does, once we know what it is. Then we know “what it is we are dealing with” and can deal with it accordingly.
But to say it once again, “reality is infinitely redescribable” (Stuart Hampshire). And unfortunately, our chosen descriptions dictate our available options—a self-imposed, ridiculously limited range of options—and we rarely go back and reconsider those descriptions. We’re like the Triumph motorcycle between the wars with the hard rubber tyres, of which it was said that if it ever got stuck in the tram tracks, it would go all the way to the end of the line. No wonder we so often end up somewhere other than where we wanted to go.
.
Rationalism and Self-Deception
The descriptions we have chosen, whether we’re aware or—worse—unaware of having chosen them, not only set our course and determine what questions we ask and answer next and what actions we take, but they also distort our perception. They literally blind us to all kinds of critically important information which simply drops off our radar. We either don’t look for it or we discount it as irrelevant when it emerges. We “turn a blind eye” quite unconsciously.
Sometimes, probably often, we actually consciously or unconsciously take in things which ought to prompt us to reconsider our views and actions, so egregiously do they contradict our approach. But the cognitive dissonance is too great, and our chosen delusory descriptions win out, the countervailing evidence being either ignored, interpreted according to our delusional system of descriptions, or rendered effectively invisible. To indicate just how real the invisibility can be, consider the popular video clip of the players passing a basketball. If you haven’t seen it…
Perversely, reality is simultaneously taken note of and ignored or denied.
Self-deception is the norm—the human condition.
We view the world only through a veil of abstractions—through the categories and classifications to which we assign things. Most of the complexity of reality, along with its infinity of possibilities, is completely ignored, for we have already filtered out all that is unique and idiosyncratic, which is to say, virtually everything.
We see only the surface and miss all the depth where the action really is, what makes things tick, what moves the world and guides its course—its productive powers, Reality writ large.
We dwell in a world of caricatures and stereotypes. Our two- and sometimes one-dimensional view of the world has, more often than not, all the depth of a Hollywood backlot.
Everything is far more complex than our models of it, which are largely fanciful, though we take these favoured models and metaphors for verified fact, endlessly corroborated.
Yet complexity is also an illusion—we look for complexity in the wrong place, among our generic abstractions, not in the realm of the purely local and idiosyncratic where it in truth resides.
Abstractions do not advert to features of the world. They do not designate part of the territory or part of our map but part of the territory as we have chosen—more or less arbitrarily on a specific occasion, in a specific context—to map it. The abstractions exist not in reality but in the realm of description. They are “made up.” The thinner they are, the more useless.
Nothing is more complex than interpersonal interaction, which is not only infinitely more complex than we give it credit for, but infinitely more complex than we can even imagine. It is infinitely, inexhaustibly, and veridically re-describable.
Rationalism, as I have defined and described it at length elsewhere,8 is the enemy:
Rationalism can be epitomized as the Myth of Disembodied Knowledge, the myth that knowledge principally takes the form of general propositions independent of the context of their application. The Rationalist takes knowledge to consist of truths that are abstract rather than concrete, holding universally rather than ‘merely’ locally and forming a systematic whole. For him, all genuine knowledge is technical knowledge, knowledge of technique. It can be formulated in rules, encoded in the symbols of language and mathematics, expressed in purely theoretical terms, and put in a book. It is certain, admitting of no exceptions, and applies universally to the objects with which it deals, irrespective of differing local practical circumstances or contexts of inquiry. Items in the world can be identified once and for all, labeled with the appropriate abstractions, and systematically classified according to their kind. Once we know what abstract kind of thing something is we can select the appropriate technique for dealing with it, which follows from our theory of how such things work. Hence Rationalism is the basis also of the contemporary, spurious authority of the expert. The map becomes the territory. Procedure is all.
Disturbingly, today we tolerate descriptions that are more abstract and thinner than ever before in history. Our understanding of understanding has become irredeemably Rationalist.
As for interpersonal interaction, our reality where we live and move and have our being, our cosmological home, no machine learning—no AI—could ever grasp it. It is ungraspable. It cannot be compassed in its entirety, even when considering just the interaction between only two people at a particular point in time.
Its description in words, impossible in principle except for now this aspect, now that, would involve a non-denumerable infinity of contextual question-and-answer complexes. No artificial intelligence, no matter how advanced and no matter how far in the future, no cloud-based network of quantum computers, no cosmic plasma superbrain9 in the vastness of space, could ever begin to describe it, let alone explain it. Not least, though not only, because it has more than a googolplex of dimensions (1 followed by a googol of zeroes, where a googol is 1 followed by a hundred zeroes), to say nothing of the number of possible values along each of those dimensions, which would need to be considered. I daresay it is by far the single most complex phenomenon in the entire universe, with its two trillion galaxies each with some 100 billion stars and the countless worlds that orbit them.
The mystery of interpersonal interaction makes sciolists of us all. For, faced with such bewildering complexity, we must needs fall back on guesswork and the crudest categories, dogma and abstractions, arid and barren. In consequence, we are living life at many removes from the richness of idiosyncratic reality, lush and fertile, ripe with possibility.
Once we leave behind us the desert of abstractionland, however, we enter a land overflowing with milk and dripping with honey, where the desert offered us only mirages of oases that vanished as we approached.
But we are more comfortable with our mirages, our illusions, more comfortable keeping reality at a distance. After all, “humankind cannot bear very much reality.” It gives us some sense of control.
This sense of control is illusory of course. We address only a surrogate world of our own creation. We arrive on the scene with our stock of ready-made categories, dogmas, frames, prejudices and postjudices, favoured abstractions, hopes and fears, and so on, and we attempt to deal with this surrogate world with its artificial sense of coherence.
It gives us some comfort at least to fancy we know where we stand. No matter how uncomfortable our situation as we have construed it, it’s comforting to “know” that “I’m all right, Jack—it’s the other buggers.” And it’s a reassuring illusion that we command a clear and unambiguous view of the mess we’re in, complete with a clear and sensible plan of action for our fool’s errand, our myopic, quixotic adventure.
We spend half our life tilting at windmills. Our home is castles in the air. And we call all this, “being realistic.” And in our hearts we believe it!
.
Miracles
There is an old saying, “be realistic, plan for a miracle.” This is much more sensible than the quixotic Rationalist approach. As Augustine said, “Miracles happen, not in contradiction to nature, but in contradiction to our understanding of nature.”
Of course, everything is a miracle. A fairy fly is no less a miracle than the sudden appearance of a fairy would be. In fact, the humble housefly is as much a miracle as an angel. Everything in Creation is a miracle, from the structure of a leaf to the automatic functioning of the human body. Miracles are everywhere. They surround us. They are the norm rather than the exception. On this basis, should we not expect miracles to happen? The realist is the man who believes in miracles, looks out for and expects them, but does not rely on them.
MI is able to work miracles, for the simple reason Augustine articulated. We have a radically different understanding of nature, a radically different epistemology. We can see the possibilities hidden in plain sight, which were overlooked by those saddled with the old epistemology E1 and ridden by the Museum Fallacy.
They had bamboozled themselves with the wrong descriptions, essentially the wrong words, and got on the wrong track. And as Bernard Malamud said, “if you’re on the wrong track, every station you come to will be the wrong station.”
The wrong track they were on had previously been proven right by the endlessly corroborated facts—which proved wrong. Or useless. The facts were completely the wrong facts. And besides, all the facts in the world and $2.90 would buy them a ride on the New York subway. They needed to think again.
Reality, as we said, and as we’ve written about extensively in Change, consists of an infinity of question-and-answer complexes. It is infinitely rich and laden with possibilities. That tacit everyday epistemological error, the Museum Fallacy, gets us into hot water over and over again in our practical endeavours.
The cybernetician Stafford Beer once said,10 “the appearance of coincidence in your life is simply a reflection of your previous failure to recognise what was really important.” Let’s say you leave an MI session with a minimalist intervention that involves buttonholing a colleague whom you said you “never just run into,” but on getting back to the office and taking the elevator up, it stops at the third floor and ‘miraculously’ he gets on, and as it’s only the two of you in the elevator, you implement the intervention then and there.11 You probably shared an elevator with him umpteen times before but never noticed him, never gave him a second thought. This is what Beer was talking about.
But every minimalist intervention which seems miraculous to you was likewise due only to your previous failure to recognise what was salient in your situation.
.
Your Way With Words
It is often said, after someone has very aptly characterized something, put their finger on it, and indeed it has been said to me, “I wish I had your way with words….”
The mark of a true, thoroughgoing practitioner of MI is that they might well go on to say, “…But I’d rather have my way with your words.” Ashby once put his finger on it, got to the nub of it, when he characterized one aspect of cybernetics as “the science of How To Get Your Own Way.” And when we are trying to help the Other get to their destination as quickly as possible, and with minimum risk and effort, metamorphology, or one aspect of it, can justly be characterized in that context as the science of How to Get Your Own Way with the Words of the Other, “my way with your words” (the play of the signifier again).
There is a whole science to utilizing the other person’s language, descriptions, framing, construal of a portion of reality—in short, their words—to get our own way. And especially, when we are doing MI or thinking in MI terms, “our own way” means the way the other person themselves has been trying to get. But they can’t seem to get there, or can’t seem to get all the way there on their own—they need our helping hand. For they are blinded by their own overly restricted, subjective take on their reality, and are compounding their error by taking it for the real thing, the whole enchilada. Beguiled by their own way with words, taken in by their own take and swallowing it whole—hook, line, and sinker—they can nonetheless, via the magic of well-chosen language, our way with words, be gently escorted back to the path that will take them where they really wanted to go in the first place, utilizing their own language, combining creative forces, ours and theirs, in a way we described in “Knowledge, Co-Design and Strategy.”
But even if we are only trying to have our own way, so long as we are operating from within the new epistemology E2, which again we’ve written extensively about here in Change, that will inevitably be a way that works even better for the other person. For that’s the only way to get our own way, at least besides the E1 (old epistemology) way, which is almost always a form of violence, however subtle.
Indeed, once E2 becomes second nature, it’s so easy to see something else that was hidden in plain sight: how much subtle (or not-so-subtle) violence people do to one another when operating, as the whole world and his brother and sister operate, from E1, and often chiefly because they are operating from E1.
Making sure the other gets their own way, as the only effective way to get our own way, is baked into, hardwired into, the metaphorical ‘firmware’ of the E2 epistemology itself, which à la Ashby we might dub the Epistemology of the Win-Win.
What gets you into trouble and makes your work or your life a good deal harder than it needs to be, and has you as often as not “barking a long way up the wrong gum tree” (to borrow Austin’s playful pleonastic phrase), is your way with words.
Mired in E1, you literally talk yourself into your problems, because it’s the wrong talk, the wrong descriptions, the wrong selection from the infinite richness of idiosyncratic concrete reality with its dazzling, infinite possibilities, any one of which could be yours …and is already, tantalizingly, at your fingertips.
And it’s the wrong talk because much of it is also based on false assumptions, or on empty abstractions, empty signifiers.
You’re not alone. We all do it. I do it too when I’m off duty, in my own life. Sometimes I can be the worst offender, even though I know better. In my personal life I am surrounded only by E1 thinkers; no one with an E2 epistemology is anywhere to be found. And so I have on hand no one I can talk with about my situation from a genuinely E2 perspective.
Hence there is no one around whom I can trust to confront me with my own blindness, to critically and creatively complement my vision, to see around my soul’s blind spot—to see for me, from an E2 perspective, what I cannot see for myself, helping me to navigate the territory of my situation with its rough terrain, painful contradictions and paradoxes, and open me up to the unsuspected possibilities, and to the miracles only waiting for a chance to happen.
And so I am no better off than anyone else, slipping into the same old errors, Museum Fallacy and all, hobbled by false tacit assumptions I take for the real world, dancing with empty signifiers.
Where I am better off than others is that, scattered around the world across ten time zones, I have my own team members to whom I can turn. And so, to take a leaf out of Ryle’s book, the ailment for which I have laboured my entire career to concoct an effective prescription is a malady I know about best from my own case, and have attempted to treat, though not always successfully. I have taken some of my own medicine, at least on the woefully rare occasions when I’ve caught myself—in time! (it’s usually too late)—falling into an E1 approach. As I say, not always successfully.
“Physician, heal thyself.” Yes, I agree in principle, but in the case of my own medicine, it’s difficult if not impossible to treat yourself adequately. You can’t see what’s in your own blind spot unaided. And since it’s in everyone else’s blind spot too, you need to find someone who knows their onions—their metamorphology and MI—and who is therefore not in thrall to the old epistemology. That’s the fastest way to break out of the prison you yourself have built with words—the nearest thing to a Get Out of Jail Free card.
.
Recapitulation and Envoi
As I said at the outset, most of the time we are not in touch with reality because we approach it at any juncture with only one or two of the thinnest of thin descriptions, themselves either misunderstandings or irrelevant to the solution or, usually, both.
And as I said, worse yet, we cling to our chosen descriptions, to the exclusion of all others, along with our framing of our situation and our pet theory about it which we have made up unconsciously but take for part of the very fabric of reality, filtering and embellishing our experience, confabulating a past for ourselves that only ever existed in our heads.
It’s all baseless fantasy for the most part and we take it for fact, ‘confirmed’ by other baseless fantasies of ours or borrowed from others who have no special hotline to the truth, limiting our experience and possibilities accordingly—imprisoning ourselves cruelly and without trial, often subjecting ourselves to torture—imprisoning and torturing ourselves with nothing more or less than words in our head of which we ourselves are the author.
We limit ourselves with language, building a linguistic prison for ourselves out of descriptions. But the path to liberation is also through language.
Is it any surprise that there exists a ‘talking cure’? That we can liberate ourselves through language, the very thing with which we hornswoggled ourselves and imprisoned ourselves in the first place? That through language we can liberate ourselves from the stultifying effect of our own way with words?
It is difficult, I realise, when everyone you meet, almost everyone in the world in 2025, is a Rationalist, systematically confusing words and things, seeing the world as made up of their own chosen selection of abstractions—yes, chosen!—thinking that they can generalise about people and events and so on, and suffering from the dangerous delusion that they can discern or deduce what’s really going on from first- or second- or third-hand descriptions.
Rationalist to the core, they do not even dream of looking at every situation, every person, every relationship, every interaction as genuinely unique and idiosyncratic, as singularities.
They cannot understand that what they know from elsewhere, or from other people or from others’ situations as they or others (mis-)understand them, cannot possibly in truth apply to us. They don’t know how to think about singularities, how to inquire into them in a truly scientific spirit.
They cannot imagine an infinitely rich world of possibilities as being our reality, for they think of reality as containing only fixed abstractions with names, and that these define our available possibilities, between which one can choose. Hence they cannot conceive of real freedom—unlimited, uncatalogued, uncategorisable richness of possibility which has no names.
They live under the unquestioned illusion that knowledge consists principally of abstract general propositions or rules holding universally of the objects with which they deal, independent of the context of their application, inquiry or practical circumstances, and admitting of no exceptions—applying to situations and persons that the formulators of the rules or generalisations have never themselves met. And that this pseudo-knowledge can be formulated as rules, written down and put in a book or online.
And do they stop to consider that only the tiniest fraction of one percent of knowledge about the world will ever be written down and put online no matter how far into the future, or that today most of what is known is nowhere to be found online?
In this way—“garbage in, garbage out”—AI can only reflect the world’s current state of natural ignorance and cobble it together for us conveniently in concise but garbled form, introducing a few risible errors, non-sequiturs and inanities of its own, all through the application of Artificial Ignorance. Worse still, it can only seek to capture an image of a static world, whereas the real world in which we all live is dynamic and living and on the move, resisting any fixed description, even in the infinitesimal instant.
At least Science as a human activity is a dynamic, living thing dedicated to constantly changing its views, often radically, offering epistemic complacency no safe harbour. Scientific knowledge is not cumulative. It is continuously revolutionary. The true scientist is perhaps more than anything a born rebel, constantly ready to rise up against all and against herself and overthrow her own most cherished views.
Metamorphology, the study of change and the branch of science devoted to the understanding of singularities, is itself constantly changing and developing.
In application, particularly in the technology of Minimalist Intervention, it becomes, as we discussed, a hermeneutics of desire, eschewing abstractions, revealing unsuspected tacit assumptions, getting underneath the manifest content of what people say they want, and translating it into what will truly satisfy them. And by taking into account the desires of others, it then follows out the play of the signifier to pinpoint how, “miraculously,” to make the apparently impossible realisable right now through the most minimal and unobtrusive of interventions.
I know of no more thrilling adventure. It has been a privilege to devote my career to it; it is a privilege to pursue the journey further over the decades ahead, not knowing what exciting new discoveries still await me around the next corner; and it is an especial privilege to share the adventure with you.
© Copyright 2025 Dr James Wilk
The moral right of the author has been asserted
The Pleasure of Finding Things Out. London: Penguin Books, 2001, p.187
Cf. the redoubtable Walter M. Elsasser, for example in his Reflections on a Theory of Organisms, Baltimore and London: Johns Hopkins University Press, 1998
Brain Imaging: What it Can (and Cannot) Tell Us About Consciousness, New York: Oxford University Press, 2013.
even though it was based not in Palo Alto, California but in nearby Menlo Park.
in Geschichte und Naturwissenschaft, Strassburg: Heitz & Mündel, 1894
The only exceptions (though even this is not a straightforward matter) are logically proper names, identifying unique individuals such as Fred Bloggs or the Titanic or Excalibur.
There most definitely are givens in reality but not at the abstract level, and it is the real external world that dictates what abstractions we select to describe it—there is nothing arbitrary or contingent about it. Again, we reject and refute both a “social constructionist” and a “constructivist” account; however considerations of time, space and relevance to our topic mean we cannot go into the matter here.
in a paper of mine, soon to be published here in Change. The words here are mine, but I am heavily indebted to Michael Oakeshott whose classic account I am merely trying to summarise.
Robert Temple and Chandra Wickramasinghe, “Kordylewski Dust Clouds: Could They Be Cosmic ‘Superbrains’?” Advances in Astrophysics, Vol. 4, No. 4, November 2019, pp. 129-132.
personal communication, 1990
This is an actual and typical example of what happens following an MI session.
Life is real when I am.