
NLP, Science and Intersubjectivity

Mark McKergow, Principal Consultant, Mark McKergow Associates

Jaap Hollander's recent article ('NLP and Science - Six Recommendations for a Better Relationship', Hollander, 1999) is a welcome and informed addition to the debate about NLP and science. I particularly like his suggestion of the 'modelling trail' method of description, which would help others to be clearer about the claims and limitations of models derived using NLP (or any other) methods. I also acknowledge the stubborn and closed-minded reception accorded to NLP ideas by "scientists" with little understanding of or interest in the field. In my view these people devalue their profession by failing to find out about NLP before commenting.

The relationship between NLP and science is an interesting one, with potential for misunderstandings on both sides. As a one-time professional scientist myself (with a physics PhD to show for it), I will address some of the points raised by Jaap Hollander, and hope to outline a number of alternative ways of improving the relationship. I will start by examining the question of what scientists believe, which in my view turns out to be more NLP-friendly than Hollander might suppose. We will then look at the ways in which science could investigate NLP, and finally at how similar processes are being used to investigate other psycho-social processes.

The process of "science"

Let us start by looking at a definition of science. Microsoft Encarta 2000 defines the word thus:

Science may be defined as the systematic investigation of objectively verifiable material phenomena. In the purest-minded view of the profession, its tools are rationality, experimentation, objectivity, and the free exchange of reliable information.

It is this desire for reliable information that lies at the heart of the scientific endeavour. It is tempting to look at the concept of reliability here as a digital parameter - either something is reliable, and hence 'scientifically proven', or it is not. However, most scientists would agree that reliability is more of an analogue parameter - we might think of a scale of reliability.

So how can we define 'reliable' in this context? We may start by examining Hollander's (unsourced) claim that scientists believe

'that there is a universal truth, that statistically evaluated experiments generate the best understanding of this truth, and that this understanding will be beneficial to mankind.'

This may be Hollander's view. I cannot find these words elsewhere, nor do I believe they are the words a scientist would choose. I think that Hollander sets up a straw man - a caricature of a scientist - which he then objects to. The issues of subjectivity and epistemology in science have been known about (and worried about) by thoughtful scientists and philosophers for many years. There are regrettably still people adhering to scientism (the belief that all knowledge should be like physics or some other scientific branch), and it may be such people that Hollander has had run-ins with in the past. However, they are outside the mainstream of thoughtful scientists.

Those scientists who take these matters seriously have a different and useful view. Professor John Ziman, formerly Professor of Physics at the University of Bristol and, coincidentally, my former Head of Department, has put forward a considered view of the way in which scientific knowledge is gathered and judgements of reliability may be made. In his book Reliable Knowledge (1978), Ziman considers the philosophical issues of subjectivity and objectivity, and states that the 'goal of scientific method is reliable knowledge - knowledge which is coherent and maximally consensual'.

The idea of consensuality is very interesting. Ziman states that, rather than getting bogged down in the old arguments about objectivity, the scientific process can be very accurately described by talking about subjectivity - but subjectivity created between people, in the form of socially constructed and tested shared knowledge. We might think of this as intersubjective knowledge - that which is agreed upon by different people.

I have seen this kind of intersubjective knowledge being used at NLP conferences. When I ask someone to 'pass me a handout please', they usually respond by picking up some papers and handing them to me. We have agreed, intersubjectively and without comment, that that thing is, for us, a handout. Situation normal - two people agree that there is a pile of handouts, which makes it a bit more reliable. Now other people approach the pile - there's a shout from the back, 'Are there handouts?' - 'Yes!', cry two or three others. Before we know where we are, twenty or thirty people have joined in our intersubjective game of reliably knowing about handouts.

There is another possibility, of course. I ask for the handout and am met by blank looks and puzzled grunts. 'The handouts!!' I exclaim. 'No, no handouts here mate'. Unreliable knowledge - not shared, not intersubjective.

This idea is laid out very nicely in Fritz Simon's book 'My Psychosis, My Bicycle and I' (1996), and shown in Figure 1. The 'objective' is the shared intersection between two subjective individual descriptions.

[Figure 1] Caption: Intersubjectivity: Two individuals concur to create a piece of 'objective' (for them) knowledge. After Simon, 1996.

Ziman's point is that the more people concur with a piece of knowledge, the more reliable it is. Knowledge can be more or less reliable - an example near the 'totally reliable' end being the force we call gravity (for example, that an object, when released and in the absence of other restraining forces, moves downwards towards the earth's surface). This can be easily and quickly tested by anyone with a stone, a cannonball or an apple to hand. An example nearer the 'unreliable' end would be the 'cold fusion' affair - a few people have observed rather different things from the majority, and the results have proved impossible to reproduce.
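To make this 'analogue scale' of reliability concrete, here is a minimal sketch in Python (a toy illustration of my own, not any formalism of Ziman's) that treats the reliability of a claim as the fraction of independent observer reports concurring with it:

```python
def reliability(reports, claim):
    """Toy measure of intersubjective reliability: the fraction of
    independent observer reports that concur with a given claim."""
    if not reports:
        return 0.0
    return sum(report == claim for report in reports) / len(reports)

# Gravity: everyone who releases a stone reports that it falls.
print(reliability(["falls"] * 1000, "falls"))  # 1.0 - near 'totally reliable'

# 'Cold fusion': a handful report excess heat; most observers do not.
print(reliability(["no effect"] * 95 + ["excess heat"] * 5, "excess heat"))  # 0.05
```

On this crude scale, reliability is an analogue parameter rather than a digital one: a claim becomes more reliable as more independent observers concur with it.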

When we start to look at science as intersubjective, the process of scientific observation takes on a different light. Rather than each scientist publishing his work and claiming it to be 'true', we see a range of trained observers each reporting 'what they have observed'. Scientific papers are always clearly labelled with the names of the observers, so we know whose subjective reports we're looking at.

The communication and repeatability of results is therefore central to science. This demands a reduction of ambiguity in communicating results - how else can we know that we are comparing like with like? Hence the preference for numbers, which are less ambiguous than words (as long as the numbers relate to something well defined, of course!).

I always used to wonder about the scientific status of unrepeatable events. For example, did England beat West Germany by 4 goals to 2 in the soccer World Cup final of 1966? If we were to repeat the 'experiment' later on, even one day afterwards, the result would not be the same - a different game would be played, sometimes with a different winner. In the intersubjective view, the consensuality comes from the facts that there were many independent observers (about 100,000 at the stadium, with millions more watching on TV), that there are many recordings of the event on film, tape and in the press, that there are relics of the day (the ball, the jerseys), and that all these reports basically fit together. There are still disputes about whether England's third goal crossed the line or not, but everyone involved agrees that the goal was given. That, as they say, is (as close as we can get to) a fact.

This mode of study works very well with the natural sciences, where the observations are usually seen to be consensual quite easily. Is this an accident? No - the physicists study the things they study precisely BECAUSE these things are tractable using the method. The results are generally established as consensual, sometimes after a period of debate and disagreement, and are 'firmly established and accepted without serious doubt by an overwhelming majority of competent well qualified scientists' (Ziman).

Application of 'science' processes to psycho-social areas

The extension of the scientific method outlined above into the vastly more complex areas of human actions and relationships is fraught with difficulties. However, I submit that it is not the insurmountable problem that Hollander suggests. Hollander paints a gloomy picture with his outline experiment for examining opinion-development in children. Although the lines of his experiment are logically correct, he seeks to be so rigorous and reliable about a small piece of knowledge that the attempt ends up looking ludicrous. As a piece of stimulating overstatement this is entertaining, and makes Hollander's point for him. As a sensible scientific line to take in psycho-social research, it is comedic.

If we look at what knowledge can be gathered reliably and consensually, and accept that there are degrees of both these things, then some different possibilities begin to emerge. NLP is about helping people to find the well-loved 'difference that makes a difference', and as Hollander points out this involves working with a sample of one. The 'right' method for that individual is indeed the one that works.

The point, surely, is to what extent that person finds benefit from the process. There are, in addition to the qualitative research approach outlined by Hollander, various ways to do this. These are commonly employed in evaluating psychotherapeutic approaches.

1. Outcome research. This quantitative broad-brush approach simply looks at the extent to which the individuals involved meet their goals. For example, someone who asks for help with their depression will be asked whether they are less depressed. There are a number of variants, following the NLP 1st, 2nd and 3rd positions. Most often the individual will be asked for their own assessment; the therapist or operator may be asked for theirs (by definition not so good, as the client is in a better position to know about their own depression, but useful too). A 3rd position assessment might include standardised questionnaires, or the views of others who know the client. This may be done straight after treatment, and/or at some more distant time such as 6 months later.

It's interesting to ponder what this approach measures. Assuming the operator is doing 'their thing' rather than following some standard procedure, it measures the efficacy of the operator. It says little about the method, other than this particular operator's degree of skill with it. So a zero result could as easily be the sign of a hopeless operator as of an ineffective method. However, a high result equally does not indicate that the method 'works' - it indicates that this operator produces these results. In terms of validating a method, then, it's not much help, other than indicating that there may be something worth further investigation here. This in itself is valuable - it weeds out the charlatans who claim that they can provide results, and then don't. Ziman says that it is in nobody's interest for interesting phenomena to be ignored, and this is one way of finding out how interesting the phenomenon under study is.

This method could be made a little more reliable and consensual by having different operators carry out the same kinds of study, and seeing how the results match up - as the sketch below illustrates. Where there is consistently good performance, there is more reliable evidence of something worth investigating.
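To show what 'seeing how the results match up' could look like in practice, here is a minimal Python sketch; the operators and outcome figures are entirely invented for illustration:

```python
# Hypothetical outcome data: for each operator, whether each client
# reported meeting their (well-formed) goal at follow-up.
outcomes = {
    "operator_a": [True, True, False, True, True, True, False, True],
    "operator_b": [True, False, True, True, False, True, True, True],
    "operator_c": [False, True, True, True, True, False, True, True],
}

for operator, results in outcomes.items():
    rate = sum(results) / len(results)
    print(f"{operator}: {rate:.0%} of clients report meeting their goal")

# Consistently high rates across independent operators are more consensual
# (hence more reliable) evidence of something worth investigating further -
# while still saying nothing about the method itself, only its operators.
```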

2. Empirical quantitative research. This is a first step along the road to validating a particular method. For this, outcome data is obtained as above, but in addition the operators follow some kind of agreed process. There may also be a comparison or 'control' of this process with something else - another method, or doing nothing. Where the method under study shows good results, particularly in comparison with the control, we start to get mildly reliable knowledge that the method is helping some people. (A sketch of the comparison arithmetic appears at the end of this section.)

This method is, however, in use to examine the effectiveness of solution focused brief therapy (SFBT). SFBT, although stemming from roots similar to NLP's - Erickson, Bateson, systemic therapies and language - has a much simpler model than NLP (see for example de Shazer, 1988 and Berg & De Jong, 1998). The derivation of an agreed procedure which allowed the therapist flexibility to respond to clients individually, yet had enough commonality that everyone was doing something like the same thing, was merely agonising and slow rather than next to impossible.

The results gathered so far (Gingerich and Eisengart, 1999) are interesting. In the 15 studies published to date, clients reported improvement in 60-80% of cases. These studies have been carried out in a wide variety of settings, including mental health, school behaviour problems, anger management, family and marital therapy, occupational health and rehabilitation, problem drinking and prison. These figures are as good as or better than those for comparative treatments, and were mostly achieved in between one and five sessions. Interestingly, in all but one of the studies the work was implemented by relatively inexperienced workers, in many cases only recently trained.

This step towards empirical validation is seen by those in the SFBT community as a way of demonstrating that their model has wide application and demonstrable results, and thence of garnering support from funding bodies, controlling institutions and so on. When Hollander claims that, on his reading of science, 'no psycho-social method can honestly claim to be scientifically supported', I suggest that there are matters of degree, and that some have sought support more effectively than others.
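To show the shape of the comparison involved in such empirical work, here is a minimal sketch with entirely invented figures (a real study would of course need proper design, sampling and follow-up):

```python
import math

# Invented illustrative figures: clients reporting improvement at follow-up.
method_success, method_n = 42, 60    # group treated with the method under study
control_success, control_n = 25, 60  # control group (e.g. a waiting list)

p1, p2 = method_success / method_n, control_success / control_n
pooled = (method_success + control_success) / (method_n + control_n)

# Standard two-proportion z-test for the difference in success rates.
se = math.sqrt(pooled * (1 - pooled) * (1 / method_n + 1 / control_n))
z = (p1 - p2) / se

print(f"method {p1:.0%} vs control {p2:.0%}, z = {z:.2f}")
# |z| > 1.96 means the difference would be unlikely to arise by chance at the
# 5% level - 'mildly reliable' knowledge that the method is helping some people.
```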

How could NLP be supported 'scientifically'?

I must start this section by agreeing with Hollander that NLP is not about applying a recipe-book approach to a given situation. Rather, it is about applying a set of beliefs, presuppositions and skills to the situation in hand to reach an outcome. This does indeed involve studying the exceptions and taking every case afresh, an approach derived from Erickson. This means that conventional experiments which look at bits of NLP-ish stuff, interesting though they are, are not 'full' tests of NLP per se. (Neither do they claim to be.) An example may be found in Hancox and Bass (1995), where altering submodalities changed the rate of saliva production in human subjects. If we assume that it would be desirable to have some ammunition to counter Hollander's scientism devotees, then there is at least one way forward.

Outcome research

I've always thought that NLP was particularly amenable to this approach. With the focus on outcome - acuity - flexibility, the well-formed outcome gives an excellent basis for evaluating progress. As I pointed out above, this kind of research primarily examines the efficacy of the practitioners concerned, rather than 'the model'. However, it would give some indication that there was something there to investigate further. It would also give Jaap Hollander something to tell his English tobacco-smoking scientists about. I am not aware of any such study, and would certainly be most interested to learn of any which may have been done.

I have written in these pages before about one particularly controversial meta-study of outcome research. Miller, Duncan and Hubble (1997) examined outcome research from different therapy models, and concluded that the therapeutic model made no difference to the number of patients who were successfully treated! Rather, they found that the differences which made differences overall were the therapeutic relationship, the client, hope, expectation and chance events. I asked then, and I ask again, whether the NLP community is interested in responding to these findings. I have seen no response to date.

Empirical quantitative research

This method seeks to compare the NLP way of doing things with something else, to obtain comparative data. I have already mentioned the difficulty of arriving at a simple description of what the NLP way of doing things actually is. An extension of outcome research might be possible, but the scientific community may still be sceptical if what is done is not well documented.

There is a kind of experiment I've seen suggested by well-meaning but ignorant scientists, along the lines of 'Do a V/K dissociation on 50 people with phobias and we'll find out whether NLP works'. This ignores, of course, the possibility that a client may be producing their phobia by some other means, and so it would not be a good test even of that technique, let alone of the whole canon of NLP. Indeed, the richness and complexity of NLP would make the design of an agreed process difficult, if not impossible - and probably futile too.

'But NLP is about subjective experience?'

There are those who may be thinking at this point that NLP states that it is the study of the structure of subjective experience, and therefore cannot be examined using fuddy-duddy old 'objective' science. I think that the view of science as intersubjective gives a useful new angle. If different people find similar things happen to their subjective experience (insofar as they can be compared), then that's potentially scientific knowledge.

Also, if NLP is to help people be better communicators, therapists, influencers and so on, there must be some kind of impact at an external level. The basic way we interact with others is by doing things - talking, moving around, gesturing, wearing things - which the other person can sense and respond to. Such changes may start at an internal, subjective level, but must then show themselves in order to make an interactional difference to the world. If we want the world to respond differently to us, we must surely make some kind of change to the world. The differences may be subtle, small, even unconscious - but they must be there for a difference to be made. And as soon as an external difference is made, we are out of the solely subjective world and into the intersubjective one.

NLP and science - a relationship

Hollander concludes his piece with recommendations to improve the relationship between NLP and science. His first four recommendations were for scientists. I can think of braver places than NLP World to publish finger-wagging criticisms of the world scientific community, and so will hold my ideas for scientists until my next paper in Nature. The following ideas may appeal to NLPW readers, be you scientists or not.

1. The view of science as an intersubjective process gives an interesting way of viewing 'knowledge' which may be helpful to NLPers. After all, if NLP is totally subjective, why the need for journals, trainings or conferences?

2. I think that there are various ways in which NLP could be researched 'scientifically'. These include outcome research as a possible first step.

3. The extent to which the NLP community is interested in collaborating with scientists seems questionable. The field has been around for nearly 25 years now, with little in the way of scientific activity to be seen. Indeed, all the papers listed over the years in NLP Abstracts have been from 'NLP' journals (Rapport, Anchor Point, etc.). There is no sign that the NLP community wants or welcomes independent observation or examination. This is a choice that we can all make.

4. If we all choose not to pursue a scientific line, then scientists will most likely continue to suck their teeth and question what we do. We can choose to live with that or not.

5. Anyone who states in response to some negative finding that NLP is the study of the structure of subjective experience and so has nothing to say about the outside world is engaging in sophistry, and might consider prefacing their work by announcing 'This may not help you to change the way you respond to the world'.

Finally, Hollander says that scientists have never proved that the scientific method will lead, immediately or eventually, to better results than non-scientific methods. Is there a generalisation here? To which fields of endeavour is Hollander referring? Taking a few fields at random, the scientific method has given us antiseptics, electricity and the computer on which I write this. Please send examples of the results of non-scientific methods, by foot, to the NLP World offices.

References

Berg IK and De Jong P, Interviewing for Solutions, Brooks/Cole (1998)

De Shazer S, Clues: Investigating Solutions in Brief Therapy, WW Norton (1988)

Gingerich WJ and Eisengart S, paper prepared for presentation to the International Family Therapy Association, Akron, Ohio, April 15 1999. More information at www.gingerich.net

Hancox J and Bass A, NLP World Vol 2, No 3, pp 43-52 (1995)

Hollander J, NLP World Vol 6, No 3, pp 45-75 (1999)

McKergow M, Dinosaur or dolphin, NLP World Vol 5, No 2, pp 63-65 (1998)

Miller S, Duncan B and Hubble M, Escape from Babel, WW Norton (1997)

Simon F, My Psychosis, My Bicycle and I, Aronson (1996)

Ziman J, Reliable Knowledge, Cambridge University Press (1978)