Thursday, March 19, 2009
Stoller’s Rules: Thoughts on Psychoanalysis and Education Research
In his 1985 book, OBSERVING THE EROTIC IMAGINATION, the late psychoanalyst Robert J. Stoller offers a number of rules for psychoanalytic research that could well be useful in considering what exists and what is possible in conducting (and consuming) education research (as well as research in the other social sciences). I first give Stoller’s list of rules without putting them in the precise context in which they were originally presented, for fear of prejudicing some readers against taking these ideas seriously. Note that words in brackets are changes I have inserted to make the rules more obviously relevant in the context of educational research. The original language being replaced is specifically psychoanalytic.
Rule 1: anyone can assert anything.
Rule 2: no one can show anyone is wrong, since no one can check anyone’s observations (including his or her own).
Rule 3: ignorance can be wisdom (“The way toward better understanding, then, begins with our understanding how little we understand.”)
Rule 4: use [description of motives] warily.
Rule 5: ease up, forswear rhetoric, love clarity, relax.
Rule 6: describe people as we see, hear, or otherwise sense them, carefully and in detail. Do not use [educational jargon] in the midst of . . . descriptive sentences.
Rule 7: when it comes to [using educational jargon], less is more.
Rule 8: stop picking on students or teachers.
Rule 9: let us then, regarding [education], start afresh.
Final note: this is not really a report about [psychoanalysis], but, rather, one that uses psychoanalysis as an example of the failure of [educational research], so far, as science.
So, what is this? More bashing of education research, but from an unexpected source? Hardly. In fact, it is a call for honesty on the part of educators (at various levels), administrators (from buildings, to districts, to states, to Washington, DC), researchers, policy-makers, politicians, journalists, talking heads, business leaders, think-tank pundits, foundation funders, ideologues, and various other stakeholders when it comes to what education research is, can be, or should be.
Like Stoller in his remarks on psychoanalytic research literature and its jargon, I offer this commentary out of concern that both practitioners and critics are destroying education research by trying to make of it something it is not: a scientific literature akin to that of physics. I will resist at this juncture the temptation to go further by looking at the work of Paul Feyerabend and other post-modern philosophers of science who argue that even the hardest science is not devoid of subjectivity, judgment, and, for want of a better term, humanity. It suffices to state that it is an error to try to make a "pure" science out of an area of inquiry that cannot afford to lose contact with its essential humanness and humanity.
Thus, for those who think that "the answer" to the problems of education and of education research is one and the same, and that this alleged single answer is foundational, I suggest that we turn to the instructive precedent of the history of the philosophy of mathematics itself, where logicism (as exemplified by the efforts of Russell and Whitehead in PRINCIPIA MATHEMATICA) failed to establish purely logical grounds for all of mathematics, due to underlying problems with completeness and consistency revealed by the work of Kurt Gödel. The failure of the Russell and Whitehead project did not lead to the death of mathematics, of course, but rather to the demise of foundationalist attempts at creating some "ultimate" underpinnings for mathematics. Among the implications of Gödel's incompleteness theorems is the realization that there will always be true but unprovable theorems, as well as false hypotheses that forever resist refutation. Is this a tragedy, or in fact, as I believe, a signal to mathematicians that new ideas and inventions in their discipline will never be exhausted?
Returning to education research: if mathematics, the "queen of sciences," cannot be ultimately grounded, why should it be necessary to ground education or its research literature strictly in quantifiable, statistical/mathematical terms? The fact is that no such grounding in some sort of absolute and objective reality is possible. Nor would such a chimerical pursuit necessarily be desirable even if it could be attained.
One of the peculiar things I noticed as a new (but hardly young) graduate student in mathematics education at the University of Michigan in July 1992 was the ongoing controversy and conflict in my newly chosen field between those who advocated for purely quantitative research and those who supported primarily qualitative methods. Given that my adviser and the principal investigator on the project that funded my graduate work at the time was engaged in fundamentally qualitative research, it might seem predictable that I would be unduly prejudiced against quantitative methods. It bears noting, however, that I had already taken two graduate-level courses in statistics, quantitative methods, and experimental design while doing graduate work at the University of Florida in psychological foundations of education. I was not ignorant of or intimidated by statistics. I did, however, find myself somewhat skeptical of the notion that educational issues would readily be settled through the kinds of experiments that could be well analyzed by the quantitative methods I studied. (Somewhat ironically, the other students in the courses I took were all doctoral candidates in clinical psychology, many of whom planned to be psychotherapists, not research psychologists.) And the text we used, the classic STATISTICS FOR EXPERIMENTERS by Box, Hunter, & Hunter, seemed to draw all its examples from industry: nothing could have been more quantitative and, seemingly, objective. I could make sense of it, but I didn't see it applying readily to educational research, and I still do not.
Neither, I suspect, would Robert J. Stoller. He makes clear in book after book that he feels obligated to remove a false sense of objective truth (conveyed in no small part through the use of psychoanalytic jargon, which he both decries and eschews) from his psychoanalytic studies of particular patients (several of his books deal primarily or exclusively with a single case history) and of issues of gender identity and of sexual attitudes, practices, and feelings. In the latter part of his career, he worked directly with an ethnographer/anthropologist, Gilbert Herdt, and even traveled to meet with and study the Sambia tribe in New Guinea that Herdt had been investigating (they co-authored a book on this collaboration). Stoller began doing a form of what he termed clinical ethnography that extended to studying several marginalized segments of American society.
Throughout this work, he makes crystal clear in the books he authored that in the sort of science he engaged in, he, the analyst, interviewer, ethnographer, IS the instrument of (not under) investigation, and hence cannot be kept out of the awareness of readers. The idea is not to guarantee a sort of false reliability by, in effect, telling readers: "Look, I'm drawing your attention to the fact that I'm the lens through which all of this is being filtered, and so you can trust me because I'm telling you this." (It's important to note that Stoller made a major point of reviewing analytic notes and manuscripts with patients and those he interviewed for other studies before publishing, and always got their explicit approval.) Rather, he repeatedly states that the best he can do is to periodically remind readers of his own choices of what to report and how, and to do his best to get his viewpoint, biases, and methods "out there on the table" for readers to consider and examine. This practice doesn't make him right, or reliable, or objective, or anything of the kind. But it does make him much more honest than many other writers. And it does go a long distance toward demystifying and "descientizing" what he is up to.
In my view, this approach of Stoller's is precisely what educational researchers should be taking. While there are places where purely quantitative research may be possible, I believe those places are far fewer than many would have us believe, especially those during the last eight years who have pushed for so-called "data-based research" as the alleged gold standard. My sense is that this move has been a smokescreen for marginalizing research that looks closely at individual and small cases, the very sort of work that brings to life what happens in real classrooms with real teachers and kids. It may be possible that a blend of qualitative and quantitative research methods will emerge during this century that will allow research teams to combine close studies of individuals with "bigger picture" statistical studies, the former fleshing out and giving real life to the latter. My fear, however, is that biases both outside and inside the research community will continue to make such work difficult. The misguided desire to appear scientific will allow pressures from politically motivated forces to keep qualitative researchers on the defensive, when in fact it may well be purely quantitative research that we need to be most suspicious of. The notions that "data speak," and that there is some objectivity about deciding what data to collect, how to collect them, what experiments to conduct, what statistical methods to employ, and how to interpret the results of statistical analysis, are all highly doubtful and dangerous. This sort of thinking is similar to that of people who believe that documentary films are somehow objective, when in fact every single aspect of them entails subjective choices by the filmmaker(s), choices that shape what the viewer gets to see and attempt to craft a particular sort of response. The fact remains that such films can be, if anything, less true and less honest than "fictional" movies.
If it were possible to do so, I would strive to convince all educational researchers to cast off the guise of objective science and to turn to methods similar to those employed by Stoller. I would urge them to put themselves and their prejudices and viewpoints more explicitly into their research reports, while preserving the sound tradition within social science research of examining alternative interpretations and hypotheses, something Stoller does with great frequency. And I would counsel that they abandon as much as possible the jargon-filled writing that keeps teachers in the field, parents, and other stakeholders from being able to make sense or use of much educational research, jargon that is primarily aimed, I believe, at making the research appear weightier and more objectively scientific than in fact it is or could possibly be.
Like Stoller, I suspect that I am here trying to swim upstream. But if his work is as valuable to psychoanalysis as I believe it to be, and if he, a physician who could readily have hidden behind both psychoanalytic and medical jargon, refused to do so in the belief that his work would be far more meaningful and beneficial if written in more accessible language, then how much easier should it be for educational researchers to abandon much of the pseudo-scientific trappings of their work? And how much more effective would that work be, in the long run, if through honesty and simplicity they stopped trying to pretend to be physicists and stood up to those who demand that educational research produce theorems and laws to which the enormous complexity that comprises both teaching and learning can supposedly be reduced?