“Talent is wasted on servicing expensive machines for data acquisition instead of advancing new ways of thinking.” Mihai Nadin’s assessment of current research trends is scathing. A Professor in Interactive Arts, Technology, and Computer Science at the University of Texas at Dallas and the founder and key protagonist of anticipation research, Nadin is one of the most vocal advocates for a new approach to knowledge, and to human-machine interaction in particular.
The reason he gives is simple: Relying on machines with huge data processing capacity, we limit our perspective on life around us, and on the challenges it poses to us. A machine “is the outcome of mathematics, and not a characteristic of life,” he insists. In fact, by losing our willingness to see more of the world than machines are able to see, we enslave ourselves to the workings of the machine: “the machine rewards those who accept that they are machines—and who behave accordingly—by making them more machine-like. Many experiments turn out to be mere instances of conditioning.”
This should definitely give us pause for thought. Are we being conditioned, much like Pavlov’s dog, when we keep asking for survey data?
Explains Nadin, “The empirical observation of changes in the living always leads to multiple descriptions, corresponding to multiple possible states, sometimes simultaneous, none exclusive of the other.”
And here’s the beauty of it: by breaking away from the narrow conventions of data analysis, we gain a window onto what characterises “the living, which is purposeful”; we are now able to ask: ‘What does it mean?’ Data does not interpret itself – we need to make sense of it. And the questions guiding our interpretation will not be thrown up by data itself.
Nadin reminds us that data is simply a man-made representation of the world, and “representation is not neutral. In the representation, the represented is reduced to whatever is intentionally, or accidentally, of interest, to what is significant. The illusion that a representation, such as a number, is objective, independent of the representer, is the source of the many ‘religions’ developed inside science over time.” Strong stuff. So let’s ask Mihai Nadin himself for more guidance!
Professor Nadin, in a recent interview you pointed out that (algorithmic) computers may be able to answer many of our questions but will never come up with any. What is it that makes questions so important? Isn’t it more important to find answers and come up with solutions to the problems that surround us, rather than to create new problems?
Questions, fully articulated or only expressed in actions, are a bridge to the future. The finger in the wind is a question! Any movement of our bodies is a question, as are the actions in the living world. “What was that noise?” is a question not about the physics of sound propagation, but about sensing danger or opportunity. Machines can answer many questions about that noise: How loud was it? What was its frequency or spectrum? What might be its source?
With machine learning and, even more so, with deep learning (the next step in training machines to identify patterns), we can really get answers to almost everything that experience has confronted us with. But to build a bridge into the future – as we do when we ask a question – means to become aware of change and retain the ability to act in accordance with our own goals and ideals. Neither of these has been produced by machines. They intimately reflect our uniqueness as individuals asking questions. No machine is meant to be individual; au contraire, machines had better be identical and always operate in the same way!
You say that we want – and should be able – to react to it in a way that allows us to pursue our goals and not betray our ideals. To what extent does the availability of data – perhaps even as a result of the processes you describe (analysis of frequency, spectrum, etc.) – make this easier for us? Are we able to make ‘better’ decisions because we are no longer ‘alone’ with the sudden noise, as it were? In other words, do machines put us in a very different position from which to enact our ideals? And where does the line run between, on the one hand, information provided by machines and, on the other, our autonomous, sovereign decision?
We hear not only with our ears. Therefore a perfect machine that hears everything, and can distinguish even sound values that human ears do not discriminate, would be as useful as a machine replacing the human being. There is no difference between the noise produced by a tree falling in a forest where nobody is there to hear or see it and the same sound recorded by this perfect machine. The difference is that the machine does not facilitate the constitution of meaning. In the physical world there is no meaning. The sound we humans hear is also a sound we “see,” “smell,” “taste,” “touch.” We are holistic.
Machine assistance is in itself of great help. And it does not take any deep prophecy to realize that technology will continue to permeate our lives. But that’s the mechanistic viewpoint.
Machines do not realize the meaning of the data they process. Anticipation actually means to make sense of what is happening; and we make sense usually by coming up with questions (to ourselves, to others). The sound associated with the image of a prowling cat leads us to a different action than the sound associated with some danger around the corner. The major difference between us and any form of machine is that machines process data fed into them, while humans not only take in information, but also produce information.
So you are in fact saying that the nature of machines themselves imposes restrictions on what we can do with them. Why do these limits matter?
Some parts of the world are what we call ‘decidable’: they can be completely described in a consistent manner. Physics produces such descriptions. These descriptions are useful, and they inform our activity as it pertains to physical phenomena. We build machines that are fully specified and work in a consistent manner. The enormous success of technology is the expression of our mastering the decidable. It is a scientific accomplishment in the first place—think of Newton’s physics and Einstein’s view of dynamics. Big data is part of the description—a rocket will have a larger description than the one describing the falling of a stone, although the same laws apply. Being decidable guarantees that the car and the rocket will behave consistently.
The living cannot be fully described—it evolves; there is always an open-endedness within which change takes place. If we could measure all the parameters of a person (from the physical—height, weight, etc.—to the physiological—blood pressure, heartbeat, blood sugar, and whatever else each doctor tries to get at through blood analysis), we would still not get a full image of the person, precisely because with our next heartbeat we are already different. Moreover, the data (HUGE, not merely big) will be inconsistent—high blood pressure when you run or are sexually active is not a symptom of deficient health. The opposite is the case! What counts is the meaning: high blood pressure caused by cardiac insufficiency is a different story from that of high performance. The pill given to people with high blood pressure addresses symptoms—but that is not what health is. The data that has informed cancer treatment for the last 20 years was such that the meaning of healthcare was lost. 80% of cancer interventions proved to be of minor, if not null, consequence. Worse: they did not improve the quality of life of those suffering.
The reactive model of medicine—treating the human being as a machine, i.e., as a decidable entity—leads to drug abuse and even to the current crisis of opiate addiction. We do not need more knees and hips replaced: what we need is healing—way slower, way more challenging, but in line with life and not with the physics-based medicine of spare parts, built on data that is inevitably inconclusive!