Reading the news has always provided a salutary reminder of how fast the world we value and take for granted can be transformed beyond recognition. Throughout history, even before newspapers, people have been all too aware of such vulnerability. But why are we all so ready to accept that things can, suddenly, be utterly transformed for the worse, and yet find it hard to believe that things can suddenly be transformed, unrecognizably, for the better?
This discrepancy is easily explained. For it doesn’t take an imaginative gift or creative leap to appreciate what you already have—what matters to you most—and imagine the myriad ways it might be taken away. However, to conceive of something specific and unexpected, new and desirable entering your life tomorrow and transforming it for the better takes a leap of imagination. By definition, you don’t yet know what it will be or whence it might come. As the possibilities are infinite, the details of the ones that actually materialize invariably take us by surprise. You already know all the potential benefits and all the small print of the business deal or grant award you are worried you might lose. But about the even better offer just around the corner, the one that will seem to come out of nowhere, you can say nothing until it comes knocking at your door.
With hindsight, sudden, pleasant surprises and improvements tend to occur more frequently in our life than the innumerable sudden disasters we spend so much time worrying about. “I am an old man,” quipped Mark Twain, “and have seen many troubles. Most of which never happened.” We humans, like all living things, are thankfully hardwired to focus more of our time and mental energy on spotting potential dangers and avoiding them than on—impossibly—forecasting the particular strokes of good fortune that may come our way.
These considerations account not only for our near-universal tendency to worry unnecessarily, but also for the discrepancy in our conception—and perception—of the suddenness of across-the-board transformations. We find it easier to imagine across-the-board overnight disasters than across-the-board overnight improvements. It is true that what takes a generation to build can be destroyed in a morning. Yet sudden, dramatic, across-the-board improvements are far more common occurrences in organizations, in business performance, and in the wider world, than you might think at first.
My topic here is, specifically, a subset of these across-the-board improvements: the desired changes we deliberately set out to achieve in our business and in the world, and how we need to think about them—perhaps the most widely misunderstood matter today, not only in the business and financial world but in the world of affairs more generally.
Change is Quick and Easy
For contrary to popular belief, creating lasting, significant change, depending on how you go about it, can in every case be easy, effortless, and take little or no time. The caveat about “how you go about it” is, of course, critical.
My own consistent experience over nearly four decades in the C-suite of some of the world’s leading corporations and of major public-sector and third-sector organizations, as well as the experience of all my closest, longtime colleagues—a mix of accomplished, international leaders and their chief advisers—is this: designing an effective intervention to create a major transformation in an organization, to solve even the most intractable problem, or to reach an ambitious target typically takes two people no more than about three to five hours. The intervention is then ready for immediate implementation, which should itself take virtually no time at all.
The ensuing change catalyzed, which invariably takes place in an all-or-none flip virtually overnight and precisely as designed, will typically then be validated within a matter of a week or two at most, and after that it’s just “business as usual.” Not the old usual, of course, but the new usual—business conducted from now on as it had never been before. The desired change sticks—it’s permanent—and the marginal cost of the intervention usually turns out to have been close to zero.
Admittedly, our experience is unusual. The change methods I have just described, which we have collectively researched, developed, observed, and extensively tested over many years, are unique and remain cutting-edge, decades ahead of anything else out there, as far as we have been able to determine. However, the point is that our consistent experience over nearly forty years has demonstrated compellingly what is actually possible, and how very far removed this is from conventional management thinking circa 2022.
Logically speaking, such consistent outcomes must prompt us all to radically recalibrate our expectations regarding change. There can be no going back. But the good news is that, starting now, we can all get out from under the superannuated, demoralizing myth that change takes time, a myth operating destructively as a self-fulfilling prophecy. For if you assume change will take years, then it will.
Our findings to date apply equally to eliminating comparatively small but irritating short-term problems, and to resolving major, longstanding intractable corporate problems, a great many of which were valued in the billions or tens of billions of dollars. They also apply equally to designing an intervention enabling a company to reach new, ambitious, sometimes even impossible-seeming targets in accelerated, often “impossible” timeframes: It need only take a couple of people four hours to design an intervention ready for immediate implementation, with the ensuing transformation being complete and validated within a week or two.
And precisely the same timeframe seems to hold for changing the corporate culture across the board, as I can tell you from firsthand observation of some thousands of successful transformations, a significant proportion of which were aimed at changing the corporate culture of major corporations. Culture change in any organization, in my experience, takes days, sometimes even weeks; but it need never take months or years.
Scale, Scope and Difficulty are Irrelevant
Nor does it seem to matter what the issue is, or how longstanding the problems are, or indeed how big the transformation is that might be required. Neither changes nor problems come in sizes; rather, they have lesser or greater consequences. Nor does the length of time a problem has persisted have any bearing on how long it will take to resolve it. Whether a clock has shown the wrong time for five minutes or five years, the remedy may be the same—say, plugging it in.
Moreover, a problem’s alleged “difficulty” is only relative to what solutions have been tried. Our findings likewise have demonstrated conclusively that there are in reality no “wicked” problems in organizations of the kind alleged in a currently fashionable management myth (which inappropriately repurposed Horst Rittel and Melvin Webber’s otherwise useful 1973 term of art, which was relevant only to government policy making). If you think your organization has a “wicked” problem (if you have come across that bit of jargon), then you are simply thinking about it wrong. You need to think again.
In this short, two-part, introductory article I cannot begin to convey anything of the epistemology, hard science, and R&D behind this radically new approach to change, let alone the highly technical analysis itself. However, future issues of Change will be devoted exclusively to throwing as much light as we can on all of these topics.
Here, for now, I want only to alert you to this radical but tried-and-tested, emerging new paradigm for creating change, by way of encouraging you in the first instance—if nothing else—to raise your expectations. And here I can also try, at least, to make a little sense for you of why you would, after all, actually expect things to work this way. You may even start to notice when they do, and seize opportunities more often to effect lasting change far more quickly and easily.
Sooner or later, however, we will all need to make sense of these game-changing observations, because they turn out after all to be the reality of things, however much the real-world results challenge conventional received opinion and upend widely-held, fallacious tacit assumptions.
Shifting Our Tacit Assumptions
To achieve such results as these likewise requires a challenge to your own tacit assumptions—not only your assumptions about the nature of change in general but about the specific change you are attempting. Tacit assumptions don’t appear to be assumptions at all; they are not hidden assumptions, or rather they are hidden in plain sight. They look and feel exactly like rock-solid, objective reality, part of the very furniture of the world—how things just are. And that’s where we all get taken in by appearances.
Sir Francis Bacon (1561-1626), in his Novum Organum of 1620, “the birth certificate of modern science,” declared, “It would be an unsound fancy and self-contradictory to expect that things which have never yet been done can be done except by means which have never yet been tried.” Laying the philosophical foundations, doing the hard science, and developing and testing a reliable, rigorous technology for achieving such fast and effective change took our team of investigators over half a century of dedicated scientific research and development, along with decades of parallel academic research we carried out at the University of Oxford, Brunel University, London (in what was formerly the Institute of Cybernetics), and elsewhere around the world.
One upshot was the development of a unique, radically new technology for analyzing any system to pinpoint—in advance of intervening in it—the smallest intervention into the system that would flip it from the existing state to the desired state and no other, all at once, with nothing in between, and with absolute precision. But along the way what we also discovered was, more significantly, that this first entailed a new and unfamiliar perspective on the familiar—nothing less than a revolution in epistemology.
Too often, for decision makers, reality is artificially restricted to what their peers in other organizations are already doing and thinking. For Bacon, however, as for all innovators, scientific discovery and technological advance require of us, above all, that we continually wrest ourselves away from conventional opinion, prejudices, and unquestioned assumptions, devising experiments to accumulate diverse sets of observations, continually critiquing and revising our own conception of things as we go. “Nature can be commanded,” he said, “only by obeying her”: we need to do things nature’s way, and to do that we first need to learn exactly how she does things, and not how our peers already happen to do things.
Sixteen centuries ago, St. Augustine argued that miracles happen not in contradiction to nature but in contradiction to our understanding of nature. Every day in the news we read of new technological advances that only a few years ago would have seemed like miracles. To have made such seeming miracles possible, what changed was our understanding of nature, and with it, our ability to tap its latent potential.
The Information Age?
The 20th Century revolution in ideas from which this radical new approach to change emerged substituted a new epistemology of form-and-pattern, information-and-communication for the old epistemology of substance-and-forces, cause-and-effect. These ideas, which took off in Central Europe just after the First World War, had roots going back to the early 18th Century.
The cybernetic revolution just after the Second World War, of which computers were but one spin-off amongst many others, did much to further develop and promulgate these ideas, this time mostly in the anglophone world. Bear in mind, however, that most of the pioneers of cybernetics, though they were largely polymaths, were not engineers or mathematicians by trade at all but medical men, psychiatrists and physiologists for the most part, their scientific work driven by a desire to promote human health and welfare.
One of the great ironies of the recent history of science is that a stepchild of cybernetics, computer science, has in many respects turned back the clock. By making it all too easy for us to tolerate unnecessary complexity, leaving technology to handle it for us rather than keeping things simple in the first place, computing has arguably done more than anything else to derail this revolution in ideas and postpone rather than hasten the advent of a true Information Age.
Likewise, the technological culture that was a spinoff of advances in computer technology, ironically, did much to bury these radical ideas and to ensure that widespread conventional prejudices and myths would slow down rather than speed up the pace of achieving desired change in every area of human life.
The Myth of Complexity
Information technology enables us to model and manage complexity; cybernetics, by contrast, was all about filtering it out of account in the first place. It viewed complexity as a fault of our maps, not a feature of the territory. Complexity, from this point of view, exists only in the eye of the beholder. As Einstein said, “If you understand something it is simple.” Here is an example:
For many years, a large European industrial operation had been faced with a supposedly complex and increasingly expensive problem. Every year, a group of technicians and their managers arranged their annual leave. They were each entitled to five weeks’ holiday per year, of which three weeks could be taken in succession. Everyone wanted to take their three weeks’ consecutive leave during the five-week school holiday period. However, it was required by law that there be at all times a minimum of 50% engineering coverage of the plant at this level of technical responsibility. Non-compliance courted exemplary government fines in the tens of millions of euros or more. Each manager was responsible for about 40 of these engineers. And every year, each manager was involved in scores of hours of expensive management time negotiating and arranging annual leave.
Across the board, hundreds and hundreds of man-hours were wasted this way every year, and it always went wrong. Complaints to the workers’ council over broken leave agreements were numerous, and not infrequently led to bitter industrial relations crises. And it often transpired, particularly during the summer, that there was well under 50% coverage on a considerable number of days, leaving the company exposed to potentially grave risk. Last-minute changes in holiday plans of course caused total havoc.
To solve the annual leave problem, the top management consultancy companies, one after another, brought their complex, expensive, high-tech digital solutions. All were tried; all failed.
After years of attempted technological solutions and complex, time-consuming workarounds, the eventual solution, simplicity itself and obvious with hindsight, took just a couple of hours of inquiry to devise, and it resolved the problem instantly and permanently. It took each manager no more than 20 minutes in total to implement, and has delivered perfect results every year since, with complete flexibility for the engineers to change their holiday arrangements whenever they wanted to alter their leave dates. There has always been a minimum of 50% coverage, without a single exception. And no digital assistance was needed at all.
The most inflexible constraint was ensuring 50% coverage of the plant, while permitting free choice in booking vacation days. But what does “50% coverage” even mean? How would a seven-year-old draw a picture of 50% coverage? Well, maybe there would be stick figures on the beach in hard hats playing with a beachball, and there would be a crayon line from each figure on the beach to one other stick-figure back on duty at the plant. The situation was, in other words: how do we ensure that every technician on holiday is matched by someone who is covering back at the plant?
For anyone but a computer, this was now a no-brainer to solve, as the only action required was to ask the engineers all to choose a partner and pair up, telling them, “book your leave days for whenever you like, so long as your partner covers for you; and change it around later and as often as you like, subject only to the same proviso. Oh and by the way, you’ll probably want to pick as a partner someone you can be sure you will never want to go on vacation with, and whom you wouldn’t miss if they were out of your hair for at least 10 weeks a year!”
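For readers who like to see the logic spelled out, the pairing rule above can be sketched in a few lines of code. This is a minimal illustration, not the company’s actual system: the engineer names, data structures, and function names are my own illustrative assumptions. The point it demonstrates is that a single local rule—never overlap with your partner—guarantees the global 50% coverage constraint with no central scheduling at all.

```python
def book_leave(bookings, engineer, days, partner_of):
    """Book the given set of leave days for an engineer.

    The only rule: the requested days must not overlap the
    partner's existing leave. Returns True if booked, False if
    rejected (the engineers then sort it out between themselves).
    """
    partner = partner_of[engineer]
    if days & bookings.get(partner, set()):
        return False  # partner is away then, so no cover available
    bookings[engineer] = bookings.get(engineer, set()) | days
    return True


def coverage_ok(bookings, partner_of):
    """Check that on every booked day at least 50% of staff are in.

    Because leave within each pair is disjoint, at most one member
    of every pair is away on any given day, so this always holds.
    """
    staff = set(partner_of)
    all_days = set().union(*bookings.values()) if bookings else set()
    return all(
        sum(day in bookings.get(e, set()) for e in staff) <= len(staff) // 2
        for day in all_days
    )
```

Notice that `coverage_ok` never needs to reject anything: the pairing rule enforced locally in `book_leave` makes the global check pass by construction, which is exactly why the solution needed no digital assistance.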
This example may seem simple, but remember, a lot of smart people had spent untold millions of dollars on management consultants and other outside contractors, and on a series of failed high-tech solutions including AI, before the problem was finally solved without them. Like every epiphany it may have been obvious afterwards but it had eluded the best minds for years. And in our experience, even the most complex, intractable problems can be solved with equally simple-sounding solutions, and even the most dramatic and game-changing corporate transformations can invariably be achieved with equally simple interventions.
The Unintelligent Use of Artificial Intelligence
This is not to disparage digital solutions as such. Even if overhyped and still at a very early stage of development, the future potential of artificial intelligence, including the promise of machine learning, is very real and an inescapable part of our collective future, as is IoT, particularly with the rise of the industrial internet.
However, the current misuse and overuse of AI is concerning and destructive—especially our insatiable appetite for complex, high-tech solutions to low-tech, fundamentally simple, “high-touch” problems, as in the annual leave case. The availability of artificial intelligence is neither an invitation to forgo the exercise of natural intelligence nor an excuse for doing so. In the race to secure our place in the future we are, quite naturally, loath to be left behind. But in running to catch the latest bandwagon we can easily trip ourselves up and fall out of the running altogether.
Too often these massive, only half-understood investments, while sometimes helping to fuel business growth, instead become “toys for boys,” only to be discarded on Boxing Day or, far worse, persisted in to justify the investment long after they have proved counterproductive, in order to protect the hapless executives who had signed off on the big spend. Take the case of
a prohibitively costly, centralized, countrywide, “Big Data” AI system that top management were convinced, without real evidence, could anticipate supermarket demand better than local store managers. Ever since the system’s launch, local store staff could only look on helplessly as unsold goods went stale on the shelves and customers were driven away to competitors as their favourites were always out of stock. The desperate pleas of the store manager were dismissed by corporate HQ with, “you can’t expect to argue with the computer! Can you match that much hard data?”1
As the inventor and computer scientist Danny Hillis put it, “Technology is all the stuff that doesn't work yet.”
© Copyright 2022 Dr James Wilk
The moral right of the author has been asserted
Next: The flight into abstraction and the lost art of management
1. I followed this case firsthand through the general manager of a local supermarket and it will be described in detail in a future issue of Change.