Kurt and I just returned from an exhilarating conference in Mexico that inaugurated the new Mexican Society for Biofeedback and Neurofeedback with its first “International Congress of Neurofeedback in Mexico”, taking place at the resort hotel Hacienda San Miguel Regla, in the mountainous regions north of Mexico City.
The scheduling of the conference had been adjusted a number of times to suit the needs of the invited speakers, and ultimately most of the invitees were able to make it: Barry Sterman, Joel Lubar, Jay Gunkelman, Marvin Sams, Peter Smith, Tom Collura, Steven Baskin, and I. The conference organizers had received governmental support for the conference, and in fact the health minister of the State of Hidalgo was present for the opening session and even addressed the audience.
I had two presentations on the program: one on migraine, and one on the autism spectrum. The migraine presentation was on the first day, and on this occasion most of the other presenters were present to hear about the inter-hemispheric training approach. Steve Baskin was now hearing this for the second time, and this time decided to keep to himself any reservations he might still have had. This was perhaps Joel’s first exposure to our current approach, and Barry’s first opportunity to confront me directly and publicly on the method.
In the question session, Barry reminded the audience of his own and Joel’s early work on the worsening of seizures with up-training at low frequency. That research is well known to us, and probably accounts for the extreme conservatism with which we first approached the extension of this protocol to low frequencies years ago. So, are we now thirty years behind by virtue of not taking his research into account? Or are we in fact thirty years ahead of his research in the early seventies, as indeed we should be?
I have the recollection that Barry walked in near the end of my talk just to ask his question, and in fact had not been there to hear the presentation of the data. The disagreement with the early data is probably not as great as he thinks. The essential point of the talk was that the training always needed to be tailored to the individual, particularly when clients are as reactive to the training as migraineurs are. The proper reward frequency could almost always be unambiguously determined at the half-Hertz level, and probably always to within one Hertz. The flip side is that other reward frequencies in the band would not lead to as favorable an outcome, and might even trigger a migraine. We would therefore not even disagree with the proposition that any fixed reward band, in particular one at low frequency, might lead to adverse outcomes if applied indiscriminately to a clinical population of seizure patients or migraineurs. The early research by Joel and Barry therefore does not even contradict our proposition that for any individual a benign and beneficial reward frequency can be found somewhere within the band of 0-30 Hz.
Barry’s discomfort level goes deeper, however. The trouble starts with interpreting the inter-hemispheric training in terms of the traditional single-site up-training that has become standard in the field. The ground truth is that this training does not seem to behave the same way. We don’t routinely get up-training effects in the reward band that one might expect. Then again, we did not routinely get up-training effects in the SMR and low beta bands with single-site training either. This disagreeable fact is what originally drove Lubar to adopt the theta-beta ratio as a better criterion of progress. And that criterion really indexed the partial normalization of theta amplitudes that usually occurs rather than increases in SMR or beta amplitudes.
Gruzelier and his group did publish an early study on SMR training with normal college students that showed a systematic increase in SMR amplitudes, but the increases were quite small, and against a background of a normal EEG. In a clinical population, the initial band amplitudes are more likely to be elevated, in which case one would expect a decline toward normal values even if up-training is employed. The more fundamental reality is that the training, if it is effective, will strengthen the regulatory networks and thus lead toward more normal EEG distributions, wherever those may lie.
It is in the complexity of the EEG that the scientific enterprise gets to wallow in its autistic propensities to engage with detail. Problem is, the larger truth of neurofeedback is not to be found in these details. This is perhaps the most critical stumbling block that has hindered scientific acceptance of the claims of both biofeedback and neurofeedback. The case for specificity of technique breaks down, and the case for specificity of outcome, where you actually get specifically what you train for, also breaks down. Resolution of these apparent paradoxes can only be found at a higher level, namely one of understanding the interplay of complex regulatory networks. In that regard, I prefer to think in terms of classical models of the stability of feedforward/feedback regulatory systems because they are familiar to me from physics and engineering courses in servomechanisms. Val Brown prefers to think in terms of the emerging models of the CNS as a nonlinear dynamical (NLD) system. Either of these models gets you there conceptually. That is to say, an understanding of these models leaves us untroubled by the clinical realities as we find them (although the more inclusive NLD model is also the more amorphous). Without such models, however, you don’t get there at all. Rather, you remain beset by a host of paradoxes.
On the second day, I presented on autism, and on this occasion was happy to be able to present data that Leslie Hendrickson had just sent me. These data beautifully make the case for the Disregulation Model, in which symptoms across the board are seen to remediate jointly and collectively. All attempts at categorization of the training effects break down, and that holds true even for our own attempts at classification into the domains of arousal, attention, affect regulation, cognitive function, sensory system excitability, motor control, working memory, and so on. The truer account is one that accommodates the generality of effects rather than their selectivity. Our remaining burden of paradox is to square the specificity of training parameters for the inter-hemispheric approach with the observed generality of effects.
In this regard, it is easier to say what is not the case, i.e. to argue by exclusion. Jay argued on his eeg_images list recently that Richard Soutar’s tracking of the instantaneous results of inter-hemispheric training during the session (and reported at the last Winter Brain meeting) ruled out our supposition that the induction of phase change was the important variable. This again conflates the effect with the cause. It is obvious from the mathematics alone that inter-hemispheric training disfavors synchronous activity at the homologous sites. That follows straightforwardly from the phase-sensitivity of common-mode rejection of the differential amplifier. Hence the training challenges the phase relationship between the sites at the center frequency of the reward band and rewards ‘anything but synchrony.’ We are effectively doing anti-coherence training. But this does not mean that phase changes corresponding to the challenge are also the expected outcome of the training. The execution of any functional challenge will move the brain toward synchrony at the same homologous sites, and that will remain the case even after training. We are not in any way handicapping or constraining the brain in its ability to organize its affairs.
Rather, the reinforcement takes the brain out of its instantaneous comfort zone and compels it to react. The challenge utilizes one of the most dynamic variables that is actively managed by the CNS, i.e. the phase, and this is clearly a challenge to which the brain responds promptly and massively. Like all successful neurofeedback, the challenge must be administered within the zone of stability of the CNS, and at a frequency that minimizes the emergence of adverse side effects. The effects of the reinforcement are non-local and non-specific, in the sense that any other neurofeedback protocol that was equally successful in addressing the symptoms would have essentially the same outcome in EEG terms. That proposition has not yet been proved, and could serve as a testable hypothesis. (I would submit that if all of us had to get it right at the detail level when challenging the brain, we would not all be getting such good results across the board.)
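The common-mode-rejection point above can be checked with a quick numerical sketch (a toy illustration with hypothetical unit amplitudes, not real EEG data): the difference of two equal-amplitude sinusoids has peak amplitude 2A·|sin(φ/2)|, so perfectly synchronous activity at the two homologous sites (φ = 0) produces no differential signal at all, and hence nothing for the reward band to reward, while anti-phase activity yields the maximum.

```python
import math

def bipolar_amplitude(a=1.0, phase=0.0, n=1000):
    """Peak amplitude of the difference of two equal-amplitude
    sinusoids -- the signal a differential amplifier passes.
    Analytically this equals 2 * a * abs(sin(phase / 2))."""
    return max(abs(a * math.sin(t) - a * math.sin(t + phase))
               for t in (2 * math.pi * k / n for k in range(n)))

# Synchronous activity at the two sites cancels completely:
print(bipolar_amplitude(phase=0.0))          # 0.0
# Anti-phase activity yields the largest differential signal:
print(bipolar_amplitude(phase=math.pi))      # 2.0
# Intermediate phase offsets fall in between:
print(round(bipolar_amplitude(phase=math.pi / 3), 3))   # 1.0
```

In this sense the reward criterion is phase-sensitive by construction: ‘anything but synchrony’ is simply what survives common-mode rejection.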
If one surveys the EEG anomalies that are associated with one or another of the psychopathologies, there is a strong bias toward excess EEG amplitudes, which can also be understood as an excess in local synchrony. With respect to coherence and phase anomalies, there is a general bias toward increased coupling between sites as opposed to diminished coupling. In general, therefore, training that tends toward the desynchronization of networks whenever the EEG amplitudes are elevated is almost universally a good idea. If we are dealing with a disconnect syndrome, the training challenges the two sites into communication, so we are helpful here as well. We call this the homeopathy model, in which even training in the ostensibly wrong direction will also effect the desired outcome.
But back to the conference: Barry Sterman presented some additional findings out of the early research history which supported his theory that the consolidation of learning in neurofeedback is contingent on a period of functional disengagement, i.e. a PRS interlude (for post-response synchronization). Building on his results with the B-2 pilots, where optimum performance in a continuous performance test was accompanied by a respite between trials in which the brain made excursions into alpha spindles occipitally, Barry has been suggesting (since at least the ISNR Conference in St. Paul, Minn., where I first heard him on this topic) that optimum feedback should be done with discrete trials, with discrete rewards, and with a refractory period built in that would allow consolidation of the learning. In support of this proposition, Barry showed the results of one study in which the training time to criterion level in rats was inversely related to the amplitude of the PRS component in their EEG. (Of course it is possible that PRS values also correlate with anxiety levels in an inverse relationship, and that anxious rats don’t learn as well….) The original suggestion that all learning is contingent on “drive reduction” goes back to one of the earliest researchers in the biofeedback field, the late Neil Miller. Said Miller: “Successful feedback must establish drive and provide for episodes of drive reduction.”
Barry’s model reminds us of just how far we have moved away from his original conception in the way training is now commonly done in the field. Whereas Barry would like to assert universality for his model—in consequence of which the rest of us are simply out to lunch—I would rather see this as yet another training strategy that surely has a place in our emerging clinical multiverse. Currently the clinical world divides around the issue of restoration of normalcy. A key observation one might make with respect to the early cat work is that normalcy of the EEG had nothing to do with it. Barry was training for specific brain behavior and that had specific beneficial effects. The cats did not qualify by virtue of being abnormal or deficient in any way. Subsequently, both the venture into QEEG analysis and the broadening of neurofeedback to cover the whole field of psychopathologies have brought about a focus on normality that was quite absent at the outset. Even Joe Kamiya was taken aback when it was once suggested that his alpha training restored some kind of normalcy. “Normalcy has nothing to do with it,” he said. The current preoccupation with “renormalization” does not apply to Barry’s early work. The heightening of the seizure threshold in the “normal” cats cannot be explained in terms of a general self-regulation model, but is quite clearly the result of a specific brain challenge that had a specific result. Why should the same not be true of humans, even though the sensorimotor rhythm is not synchronized in the waking state? Barry has always thought so, and he’s probably right.
So pushing along that original research track may indeed hold additional promise for seizures and perhaps other instabilities. That would be welcome indeed. But no matter how good the outcome in those cases might be, this discrete feedback model cannot be used to erase or invalidate what has been accomplished in the framework of the General Self-Regulation Model and through the emergence of continuous analog feedback. Not only are discrete rewards not necessary to effect good clinical outcomes, the EEG is not even necessary! How is it possible to still insist at this point that there is only one right way to do this work when Hershel Toomim and Jeff Carmen are getting wonderful results without reference to any EEG at all, not to mention the absence of discrete feedback for discrete events? If their work has any meaning whatsoever then it must at a minimum succeed in broadening the scope of the discussion. The more inclusive perspective would allow that any variable which reflects the disregulation can serve as a target of reinforcement in a “renormalization” model, and even variables that do not reflect the disregulation can be used in an “exercise” model. Much as Barry would like to think otherwise, his early work was just the firing of the starter pistol. It was not the whole race.
Peter Smith and Marvin Sams reported on further progress with their study of eating disorders. They established three research groups. The first utilizes the traditional SMR/beta protocols and inter-hemispheric training at 12-15 Hz. The remaining two utilize QEEG-based protocols, with one getting the latest in Marvin Sams’ QEEG-based schema. I will be reporting on this research in more detail after I have a chance to review the data. The outcome, unsurprisingly, was that by far the best results were obtained with Marvin Sams’ current protocols. He does a 9-site QEEG prior to every neurofeedback session, and then trains accordingly, with a bias toward inhibits. The objective is to increase communication efficiency between sites. Unfortunately the protocol-based training used in comparison is a bit stale—like at least five years or so. Nevertheless, some 86% of the participants rated neurofeedback as effective for themselves. Long-term outcomes are being tracked.
The eating disorder population has the highest mortality of any of the psychopathologies, which is surely traceable among other things to the prominence of severe psychological trauma in this population. With the inter-hemispheric training we start getting at the heart of the problem even before alpha-theta training is initiated. But one must be willing to train at low frequencies. I see the Marvin Sams approach and the protocol-based approach as having very different virtues. The cortical features that Marvin locks onto selectively focus on the physiological aspects of disregulation, which may or may not get at the core issue of trauma. The protocol-based training may or may not get at all of the physiological complaints, but more assuredly it will address the core issue of trauma, without which long-term remediation is questionable. I was happy to see Peter Smith sit in on my workshop, so he is now acquainted with the latest clinical decision tree for his continuing work.
Steven Baskin presented on headache and migraine. He emphasized the need to characterize the headache very specifically. That may be so, but surely the need for this diminishes when one has a technique of getting rid of the headaches irrespective of their detailed characteristics and of their specific diagnoses. I am reminded of a panel discussion at a California Biofeedback Society meeting some years ago in which the challenge was to characterize panic disorder in full clinical detail in order to inform a better biofeedback strategy. We were already getting good results with panic disorder at the time, without much engagement with any of the particulars being discussed. There is simply no way to connect up the EEG world with all the peripheral measures being discussed. Yet we were clearly succeeding. The answer was not to be found in the details. This is obvious when it comes to the EEG, but most likely it applies with respect to the peripheral measures as well. The same is likely true in the case of headaches also.
It was of course a preoccupation with the specifics of a migraine that got the whole field off the rails with the vascular hypothesis of migraine genesis. I recall the time when Steve Baskin put me off years ago with regard to our claims for neurofeedback on the basis of the (by now discredited) vascular hypothesis. So why do migraine researchers persist in wallowing in the minutiae of a migraine trajectory when it misled them so badly once already? I suspect that this interest will fade once it is realized that a categorical remedy is clearly available. Jeff Carmen has documented a 95% success rate in 100 cases with pIR HEG, a report that is currently in press, and we are matching that with our entirely different technique.
Baskin has one argument in particular that gives him pause with respect to our claims: the prominence of rebound headaches in the chronic migraine population. These folks may eventually end up on a medication strategy that is counter-productive, with the medication exacerbating headache incidence. The mere reduction of medication is often sufficient to bring about a thirty to fifty percent improvement in headache incidence. Knowing this, docs often try a strategy of medication reduction. Of course this cannot explain the neurofeedback results, since medication reduction—if any—comes only after clinical improvement is observed, not before. It cannot be the prime mover in the non-medical setting of a neurofeedback office, since NF clinicians cannot direct patients to reduce their meds. Moreover, our expectation is for a 100% reduction in migraine incidence, not thirty to fifty percent. By now Baskin allows that he believes neurofeedback holds promise. With respect to EMG feedback he says: “I couldn’t care less what the EMG does. I am interested in what it says about the brain…. That’s why I think NF is the future….”
Jaime Romano Micha, who practices in Mexico City, presented on a model for neurofeedback that combines information on symptom presentation, the clinical EEG, and the QEEG to derive an individualized training protocol. This was satisfying to hear. Interestingly, he argues against the use of any of the databases. He asks: “Should we have a database of normal EEGs or of normal persons?” The former would essentially consist of a “super-normal” population, one in which outliers are systematically excluded—for no other reason than that they are outliers. He points out that early Alzheimer’s patients may continue to have entirely normal QEEGs despite the relentless advance of their functional deterioration. And at this conference Marvin Sams acknowledged that “We have children who are very malnourished, but they do not recover weight even though their EEG normalizes.”
Mischievously, Dr. Romano Micha showed one EEG pattern that looked very much like a pathological pattern of synchronous firing at low frequencies, but it was merely a case of “hypnagogic hypersynchrony,” deemed to be of no clinical significance. That is to say, one cannot judge from the EEG alone. One must know enough of the context to recognize whether a certain pattern is state-appropriate.
Somewhere along the line a neurologist spoke up and reminded people that the Fourier transform assumes stationarity of the signal, and thus is not appropriate for the processing and evaluation of paroxysmal activity. From the ambiguous response to this remark, it became clear to me that confusion may still reign on this point even among the elite. It is transient events, complex waveform morphology, and linkages to the instantaneous state of the subject that make it necessary for us to maintain our connection to real-time clinical EEG data. One cannot live with the QEEG alone, and that was a key message from Dr. Romano Micha both here and in his recent ISNR presentation.
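The stationarity point can be made concrete with a toy discrete Fourier transform (plain Python with illustrative signals; no claim is made here about any particular QEEG package): a stationary oscillation concentrates its energy in a single frequency bin, whereas a single transient spike smears its energy flat across every bin, so an averaged spectrum gives no hint of when, or even that, a discrete paroxysmal event occurred.

```python
import cmath
import math

def dft_magnitudes(x):
    """Magnitude spectrum of a signal via a naive DFT."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

n = 64
# A stationary 8-cycle sine: the DFT captures it perfectly.
sine = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
# A single transient spike: energy spread evenly across all bins.
spike = [1.0 if t == n // 2 else 0.0 for t in range(n)]

sine_spec = dft_magnitudes(sine)
spike_spec = dft_magnitudes(spike)

print(round(sine_spec[8], 3))     # 0.5   -- all energy in bin 8 (and its mirror)
print(round(max(spike_spec), 4))  # 0.0156 -- ~1/64 in every bin, no localization
```

The transient leaves no spectral fingerprint to localize it, which is why the time-domain record, not the averaged spectrum, is where paroxysmal activity must be read.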
The collective weight of all these presentations from the QEEG contingent was a huge burden on the assembled clinicians, so I think my workshop on the third day was received with some relief. It was also very well attended. Jay’s and Joel’s workshops were largely about the analysis of EEG data. Barry’s workshop had been entirely about obtaining a valid QEEG data set and analyzing it with SKIL. The consistent message was that QEEG data is necessary to do good biofeedback. One treasures the honesty of a David Joffe, the creative mind behind Lexicor, who said recently that “we were aware as early as 1995 that our claims for the QEEG were not being borne out in practice.”
Kurt attended Barry’s workshop, and told me afterwards that a local practitioner had asked him what he thought about the use of inter-hemispheric training at low frequency, since she was in fact using that technique successfully. Barry was rather brusque and dismissive in his answer, challenging her to show him the research that documents such training. Well, as a matter of fact there is the recent publication by Andrea Sime on trigeminal neuralgia in the JNT. And there was the 1995 paper by Douglas Quirk, which covers a larger “n” than any other study on neurofeedback that I am aware of. It refers to the inter-hemispheric protocol only in the SMR band of 12-15 Hz, but then Barry’s opposition to the use of inter-hemispheric training is categorical, and not restricted to low frequencies.
While leisurely scouting this ancient hacienda, which dates back to the 1740s, I variously tried to make sense of this insistent drumbeat on behalf of a very flawed model. At the top level, the problem is that the factoids of the EEG are taken as evidence for much larger claims, and the inductive leaps involved are rarely examined. At another level, one sees a kind of Las Vegas phenomenon of talking about one’s winnings and forgetting about the losses. At yet another level, the different perspectives on the Q have to compete with each other, which is a further disincentive to be forthright about the shortcomings. And finally, the preoccupation with QEEG data serves the objective of restoring the proper hierarchy, in which the scientists are at the top and the clinicians at the bottom.
So in my lecture I was very conscious of having to deliver “the clinical perspective” as a kind of counter-balance to the weight of all the other presentations. But of course I am not a clinician either, and by instinct and training just as much a left-brained scientist as the rest of them. The plain fact was that among the Gringo speakers brought to the conference, there was not a single person who makes his living by clinical work! Matters were somewhat put to rights by the Spanish-speaking invitees, many of whom were clinicians. But those presentations weren’t always translated back into English.
Back to my musings among the ancient ruins. What strikes me is the similarity in outlook between traditional religion and modern science. It is the very essence of monotheism that there be one right answer, and in our religious tradition that answer is codified in Scripture. Our partitioning of the world into the absolutes of right and wrong, of light and dark, goes back to Zoroastrianism. Time and again we encounter facts that contradict beliefs, and it is the beliefs that survive intact. That’s fine when we are dealing with religious beliefs that cannot be adjudicated by mere facts, but it is amazing to find how much scientists are in the grip of belief systems that should be more amenable to modification.
Part of this religion of science is the insistence that we publish our data before anyone will give it credence. If it does not appear in Scripture then it does not exist. Some of us think of facts as existing independently of being so recognized. Some of us think that conversations should be possible among colleagues about data that has not yet been published. The real science happens upon discovery, not when the scribes take over. Tom Collura pointed out in this regard that Bertrand Russell’s writing of the Principia Mathematica was separable from the essentially creative act that preceded it. Likewise, the creative event in the discovery of Special Relativity was a right-hemisphere phenomenon in which Einstein “felt” what the answer needed to be well before the answer was enshrined in a paper, or even in an equation.
Perhaps the case needs to be made that clinical work is always in essence a creative act, one that is not reducible to what can be readily articulated. It therefore does not stand subservient to the scribes but apart from them. I am reminded of the psychologist who described his work with the San Diego Chargers. He recognized that the offense and defense are very different. The offense constructs plans for an orderly universe. The defense must be prepared for whatever confronts it. The clinician must be a bit of both, contending with a nature that is always larger and more ambiguous than can be readily comprehended, but decisions need to be made provisionally nonetheless, just as plays need to be called on the field even in the face of uncertain intelligence.
Is there a solution to the challenge of the QEEG that accommodates the complexity and the absence of stationarity that attends the clinical setting? I am reminded of a description by my high school history teacher: “The Greeks and Persians divided the ocean between them. The Greeks took the upper half, and the Persian flotilla took the lower.” At least on this particular day. In any event, we need a different solution. Curiously absent from the debate has been an acknowledgment of the disparity in “areas of excellence” in the two key approaches to neurofeedback. When Barry wants to mount his best case he shows us a QEEG for a venous malformation or a hamartoma or cyst formation. When we want to make our best case, we talk about aborting migraines in a thirty-minute session from a standing start and ending rapid-cycling bipolar swings in twenty sessions. (Incidentally, Peter Smith told me on this occasion that he has disrupted an established migraine in as little as two minutes.) There is a whole ocean of psychopathology in between.
When it comes to the instabilities, I would most want to do the dynamically-based training. The QEEGs of bipolars and of narcolepsy and multiple chemical sensitivities and of multiple personality disorder are hopelessly unstable and variable. The “average” does not tell the story here, no matter how meticulously obtained. When it comes to specific learning disabilities, on the other hand, Kirt Thornton’s activation procedure would seem to be the best-established pathway to a targeted training strategy. For almost everything else, matters are much less settled. For our part, it is obvious that the QEEG-based model, in its current state, is completely incapable of accommodating our clinical data. Yet the bumblebee flies.
Marvin Sams may yet demonstrate superiority of his approach over the latest mechanisms-based approaches, but that remains to be seen. Even so, there is the matter of cost-effectiveness and of teachability. There is also, finally, the political realm to consider. The more the ultimate point of reference is the functioning of the individual, the more we cement this technique in the mainstream of the mental health community. The more the point of reference becomes EEG data that is detached from organismic behavior, the more we prepare the ground for the medical usurpation of this technique.
It was inevitable that when this group of speakers got together this intimately for such an extended period of time, all of the historically significant issues would be warmed up again. That should not obscure the fact that this occasion was seen by all as a very propitious beginning for a new and enthusiastic practitioner community. And on that positive note I should also mention Tom Collura’s presentation toward the end on what the future might hold in store for us. Over the near term, we are headed for a clinical model that requires minimal clinical decision-making with respect to neurofeedback itself. The burden on the clinician will be reduced, not enlarged. (If Val Brown had been here, he would say that that point has already been reached.) Over the longer term the future becomes much more murky. This will be the topic for a panel discussion at the upcoming Winter Brain Conference. We are at this moment still hearing the reverberations of the starter pistol. This field has only just begun. This is not the time to constrain our thinking, and even less to consolidate an orthodox position.