Thursday, March 19, 2009
In his 1985 book, OBSERVING THE EROTIC IMAGINATION, the late psychoanalyst Robert J. Stoller offers a number of rules for psychoanalytic research that could well be useful in considering what exists and what is possible in conducting (and consuming) education research (as well as research in the other social sciences). I first give Stoller’s list of rules, without putting them in the precise context in which they were originally presented, for fear of prejudicing some readers against taking these ideas seriously. Note that words in brackets are changes I have inserted to make the rules more obviously relevant in the context of educational research. The original language being replaced is specifically psychoanalytic.
Rule 1: anyone can assert anything.
Rule 2: no one can show anyone is wrong, since no one can check anyone’s observations (including his or her own).
Rule 3: ignorance can be wisdom (“The way toward better understanding, then, begins with our understanding how little we understand.”)
Rule 4: use [description of motives] warily.
Rule 5: ease up, forswear rhetoric, love clarity, relax.
Rule 6: describe people as we see, hear, or otherwise sense them, carefully and in detail. Do not use [educational jargon] in the midst of . . . descriptive sentences.
Rule 7: when it comes to [using educational jargon], less is more.
Rule 8: stop picking on students or teachers.
Rule 9: let us then, regarding [education], start afresh.
Final note: this is not really a report about [psychoanalysis], but, rather, one that uses [psychoanalysis] as an example of the failure of [educational research], so far, as science.
So, what is this? More bashing of education research but from an unexpected source? Hardly. In fact, it is a call for honesty on the part of educators (at various levels), administrators (from buildings, to districts, to states, to Washington, DC), researchers, policy-makers, politicians, journalists, talking heads, business leaders, think-tank pundits, foundation funders, ideologues, and various other stakeholders when it comes to what education research is, can be, or should be.
Like Stoller in his remarks on psychoanalytic research literature and its jargon, I offer this commentary out of concern that both practitioners and critics are destroying education research by trying to make of it something it is not: a scientific literature akin to that of physics. I will resist at this juncture the temptation to go further by looking at the work of Paul Feyerabend and other post-modern philosophers of science, who criticize the notion that even the hardest science is devoid of subjectivity, judgment, and, for want of a better term, humanity. It suffices to state that it is an error to try to make a "pure" science out of an area of inquiry that cannot afford to lose contact with its essential humanness and humanity.
For those who think that "the answer" to problems in education and in education research is the same, and that this alleged single answer is foundational, I suggest that we turn to the instructive precedent of the history of the philosophy of mathematics itself, where logicism (as exemplified by the efforts of Russell and Whitehead in PRINCIPIA MATHEMATICA) failed to establish purely logical grounds for all of mathematics, owing to underlying problems with completeness and consistency revealed by the work of Kurt Gödel. The failure of the Russell and Whitehead project did not lead to the death of mathematics, of course, but rather to the demise of foundationalist attempts to create some "ultimate" underpinnings for mathematics. Among the implications of Gödel's incompleteness theorems is the realization that there will always be true but unprovable statements as well as false hypotheses that forever resist refutation. Is this a tragedy, or in fact, as I believe, a signal to mathematicians that new ideas and inventions in their discipline will never be exhausted?
Returning to education research: if mathematics, the "queen of sciences," cannot be ultimately grounded, why should it be necessary to ground education or its research literature strictly in quantifiable, statistical/mathematical terms? The fact is that no such grounding in some sort of absolute and objective reality is possible. Nor is such a chimerical pursuit necessarily desirable, even if it could be attained.
One of the peculiar things I noticed as a new (but hardly young) graduate student in mathematics education at the University of Michigan in July 1992 was the ongoing controversy and conflict in my newly-chosen field between those who advocated for purely quantitative research and those who supported primarily qualitative methods. Given that my adviser and the principal investigator on the project that funded my graduate work at the time was engaged in fundamentally qualitative research, it might seem predictable that I would be unduly prejudiced against quantitative methods. It bears noting, however, that I had already taken two graduate-level courses in statistics, quantitative methods, and experimental design while doing graduate work at the University of Florida in psychological foundations of education. I was not ignorant of or intimidated by statistics. I did, however, find myself somewhat skeptical of the notion that educational issues would readily be settled through the kinds of experiments that could be well-analyzed by the quantitative methods I studied. (Somewhat ironically, the other students in the courses I took were all doctoral candidates in clinical psychology, many of whom planned to be psychotherapists, not research psychologists. And the text we used, the classic STATISTICS FOR EXPERIMENTERS by Box, Hunter, and Hunter, seemed to draw all its examples from industry: nothing could have been more quantitative and, seemingly, objective.) I could make sense of it, but I didn't see it applying readily to educational research, and I still do not.
Neither, I suspect, would Robert J. Stoller. He makes clear in book after book that he feels obligated to remove a false sense of objective truth (conveyed in no small part through the use of psychoanalytic jargon, which he both decries and eschews) from his psychoanalytic studies of particular patients (several of his books deal primarily or exclusively with but a single case history) and issues of gender identity, sexual attitudes, practices, and feelings, etc. In the latter part of his career, he worked directly with an ethnographer/anthropologist, Gilbert Herdt, and even traveled to meet with and study the Sambia tribe in New Guinea that Herdt had been investigating (they co-authored a book on this collaboration). Stoller began doing a form of what he termed clinical ethnography that extended to studying several marginalized segments of American society.
Throughout this work, he makes crystal clear in the books he authored that in the sort of science he engaged in, he, the analyst, interviewer, ethnographer, IS the instrument of (not under) investigation, and hence cannot be kept out of the awareness of readers. The idea is not to guarantee a sort of false reliability by, in effect, telling readers: "Look, I'm drawing your attention to the fact that I'm the lens through which all of this is being filtered (it's important to note that Stoller made a major point of reviewing analytic notes and manuscripts with patients and those he interviewed for other studies before publishing and always got their explicit approval), and so you can trust me because I'm telling you this." Rather, he repeatedly states that the best he can do is to periodically remind readers of his own choices of what to report and how, and to do his best to get his viewpoint, biases, and methods "out there on the table" for readers to consider and examine. This practice doesn't make him right, or reliable, or objective, or anything of the kind. But it does make him much more honest than many other writers. And it does go a long distance towards demystifying and "descientizing" what he is up to.
On my view, this approach of Stoller's is precisely what educational researchers should be doing. While there are places where purely quantitative research may be possible, I believe those places are far fewer than many would have us believe, especially those during the last eight years who have pushed for so-called "data-based research" as the alleged gold standard. My sense of this move is that it has been a smokescreen for marginalizing research that looks closely at individual and small cases, the very sort of work that brings to life what happens in real classrooms with real teachers and kids. It may be that a blend of qualitative and quantitative research methods will emerge during this century that allows research teams to combine close studies of individuals with "bigger picture" statistical studies, the former fleshing out and giving real life to the latter. My fear, however, is that the biases both outside and inside the research community will continue to make such work difficult. The misguided desire to appear scientific will allow pressures from politically-motivated forces to keep qualitative researchers on the defensive, when in fact it may well be that purely quantitative research is what we need to be most suspicious of. The notions that "data speak," and that there is some objectivity about deciding what data to collect, how to collect them, what experiments to conduct, what statistical methods to employ, and how to interpret the results of statistical analysis, are all highly doubtful and dangerous. This sort of thinking is similar to that of people who believe that documentary films are somehow objective, when in fact every single aspect of them entails subjective choices by the filmmaker(s) that shape what the viewer gets to see and attempt to craft a particular sort of response. The fact remains that such films can be, if anything, less true and less honest than "fictional" movies.
If it were possible to do so, I would strive to convince all educational researchers to cast off the guise of objective science and to turn to methods similar to those employed by Stoller. I would urge them to put themselves and their prejudices and viewpoints more explicitly into their research reports, while preserving the sound tradition within social science research of examining alternative interpretations and hypotheses, something Stoller does with great frequency. And I would counsel that they abandon as much as possible the jargon-filled writing that prevents teachers in the field, parents, and other stakeholders from being able to make sense or use of much educational research, jargon that is aimed primarily, I believe, at making the research appear weightier and more objectively scientific than in fact it is or could possibly be.
Like Stoller, I suspect that I am here trying to swim upstream. But if his work is as valuable to psychoanalysis as I believe it to be, and if he, a physician who could readily have hidden behind both psychoanalytic and medical jargon, refused to do so in the belief that his work would be far more meaningful and beneficial if written in more accessible language, then how much easier should it be for educational researchers to abandon much of the pseudo-scientific trappings of their work? And how much more effective would that work be, in the long run, if through honesty and simplicity they stopped trying to pretend to be physicists and stood up to those who demand that educational research produce theorems and laws to which the enormous complexity that comprises both teaching and learning can be supposed to have been reduced?
Tuesday, March 17, 2009
From 2000 to 2003, I taught the same intermediate algebra course semester after semester to (mostly) high school sophomores whom I was trying to prepare to take and pass with at least a C the same course given by a community college mathematics department for dual-enrollment credit (this was at what is called a "middle college," located in Ann Arbor and serving a diverse population of students drawn from around eight counties in southeast Michigan).
I taught from a variety of materials during the nine semesters in which I taught this course, from very traditional to more contemporary and progressive textbooks, all with accompanying use of graphing calculators to varying degrees. Following the order of topics in the books always resulted in presenting quadratic equations, their graphs, and the relationships between their transformations and parameters before exploring the same issues with absolute value equations and their graphs in the Cartesian plane. And student understanding and mastery as evidenced by performance on assessments was often poor on the first topic and abysmal on the second. When we had to look at quadratic and absolute value inequalities and their respective graphs and transformations, things deteriorated further for many. (Of course, many kids "got it" all the time, based on their test results and occasional class participation, and overall my students did well both in my class and when they moved into the college course, but I'm speaking here of the ones who did not).
One semester, for reasons I don't recall, I reversed the order. Remarkably, or so it seemed to me at the time, many of the students who had seemed indifferent and/or lost when we worked on linear equations gave evidence, both in classroom discussions and on subsequent assessments, of "getting" how the graphs of absolute value equations looked and moved around as they played with the parameters (and vice versa). Later, when we looked at the same issues for quadratic equations, their understanding seemed to carry over. The overall success of the two units went up dramatically compared with past semesters. Had I inadvertently stumbled upon something of value, or was the result an utter fluke that could rarely, if ever, be replicated by other instructors or me?
Before considering that question, let me share my speculations on why things may have gone as they did. It struck me that for students who had some minimal understanding of the behavior of linear equations and their graphs, it might have been easier to move to a look at absolute value equations and their graphs because those graphs consist of two linear "legs" that meet at a vertex. A feel for the interplay between the graphs and the algebraic expressions that produced them was easier to gain if one graphed by hand, because all that was needed, once students understood the basic shape of these graphs, was to find the vertex and one point on each leg. Naturally, with the use of graphing calculators (or computer software) it would be even easier to play with and think about the graphs, but for students who did not have access to these tools, or who were expected to work without them at the beginning of (or even entirely throughout) each unit, graphing a symmetric pair of line segments that meet at a common vertex is relatively easy by hand. It is also very easy to find the coordinate pairs needed to produce the graphs.
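To make the hand-graphing point concrete, here is a minimal sketch (my own illustration, not drawn from any of the textbooks mentioned; the function name is hypothetical) of just how few points are needed to graph an absolute value equation in vertex form, y = a|x - h| + k:

```python
# Illustrative sketch (not from any curriculum discussed above):
# graphing y = a|x - h| + k by hand requires only the vertex (h, k)
# plus one point on each linear "leg".

def abs_value_points(a, h, k):
    """Return three points sufficient to sketch y = a|x - h| + k:
    the vertex, and one point on each leg, one unit to either side."""
    vertex = (h, k)
    left = (h - 1, a * abs((h - 1) - h) + k)   # one step left: y = a*1 + k
    right = (h + 1, a * abs((h + 1) - h) + k)  # one step right: same height
    return vertex, left, right

# Example: y = 2|x - 3| + 1 has vertex (3, 1); the legs have slopes -2 and +2.
print(abs_value_points(2, 3, 1))  # ((3, 1), (2, 3), (4, 3))
```

By symmetry, the two leg points always sit at the same height, so a student who plots these three points and draws two rays has the complete graph.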
By contrast, calculating the y-values for quadratic equations can be more challenging for many students. And anticipating how the graphs will look is complicated by how certain parts of the graph (e.g., for non-zero x-values between -1 and 1) turn out: squaring numbers in that interval yields outputs of smaller absolute value for simple quadratic expressions, a somewhat counter-intuitive result for many students.
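That counter-intuitive behavior near the origin is easy to verify numerically; this is just an illustrative check of the arithmetic, not anything from the course materials:

```python
# For nonzero x strictly between -1 and 1, squaring shrinks the number:
# |x^2| < |x|. This is why the parabola y = x^2 hugs the x-axis near the
# origin, while the absolute value graph's legs keep a constant slope.
for x in [0.5, -0.5, 0.9, -0.1]:
    assert abs(x**2) < abs(x)
    print(f"x = {x:+.1f}: |x| = {abs(x):.2f}, x^2 = {x**2:.2f}")
```

Outside that interval the relationship flips (squaring 2 gives 4), which is part of what makes anticipating the parabola's shape harder than anticipating two straight legs.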
My sense was afterwards that students were able to deal with these absolute value equations and their graphs more easily when they saw them immediately after looking at linear expressions and graphs, and before they had been (possibly) confused by the quadratic ones. They were then more able to look at the transformations and subsequently apply what they learned to the quadratic situation.
Of course, it would be wildly irresponsible to claim that my experience with these students would obtain consistently, or even in a majority of cases, with other students and/or other instructors. While my analysis may be plausible, to know whether it's correct would require significant further research. To get teachers, textbook authors, policy makers, and other stakeholders to consider seriously that this may be a more effective order in which to teach the topics in question, there would need to be reliable data supporting the above. Indeed, for some, only a controlled, double-blind experiment would suffice.
Unfortunately, it is not possible to conduct such an experiment. Teachers know the order in which they teach topics. Students know the order in which they are being taught them. If different classes in the same school are taught in different ways, it is extremely difficult to keep the differences between the approaches walled off from one another. Students have friends in other classes. These are only a few of the obstacles. Indeed, it would not be easy even to conduct the simpler, non-blind experiment with controls, because, as a rule, parents are not happy about letting their children be subject to "educational experiments." And when it comes to investigating mathematics education, they are perhaps least inclined to do so, given the hostile propaganda against meaningful reform that has been spread by the American media, fed to them by conservative think-tanks, foundations, pundits, and propaganda groups like Mathematically Correct and NYC-HOLD. Testing even as simple a question as the order in which it is more effective to teach two basic and related topics in elementary algebra would likely face real opposition, should it come to the attention of such groups. The seeds of suspicion have been sown and in many places have already taken root.
Opposition aside, a sole practitioner would find it daunting to try to conduct research of this kind. Finding support for it within a public school setting would be far from trivial. Why, after all, should other teachers, let alone administrators, take the question seriously? And if they did, why should they take the risk of investigating it given some of the things I've raised above? Where would the funding come from, especially in these difficult economic times? Why not leave well enough alone?
Perhaps the most viable option for a single classroom teacher looking to investigate the sort of question I've raised here would be to connect with a university-based researcher who has or could obtain funding. With the funding and (relative) influence and authority of a professor, it might be possible to convince a district to allow such research, though many of the above-mentioned concerns and limitations would still obtain.
I'm not suggesting that research is impossible or that teachers shouldn't be reflecting on practice and using it to inform future teaching. But I do despair to some extent that in the recent and current educational climate, rhetoric about "data-based research" is probably the biggest obstacle to actually conducting meaningful research there is.
Friday, March 13, 2009
In the Feb/Mar 2009 MAA FOCUS, current President David Bressoud's column is entitled, "Mind the Gap." In response, Dom Rosa, former associate professor of mathematics at Teikyo Post University in Waterbury, Connecticut and full-time crank, posted to firstname.lastname@example.org:
According to the article: "For four-year undergraduate programs, calculus and advanced mathematics enrollments dropped from 10.5% of all students in 1985 to 6.36% in 2005. This occurred while high school students were taking ever more mathematics at ever higher levels."
This should not surprise anyone who is minimally aware of the current state of pseudo-education in the U.S. I keep meeting more and more students who do nothing more than scribble on photocopied handouts that are distributed in so-called Geometry, Algebra II, and Precalculus courses. They never open their bloated doorstops. Recently I met a student who was told, "You don't need your [junk] book; just leave it at home."
The article also points out: "... more students arrive at college having earned credit for Calculus I, but they have not produced larger enrollments for Calculus II. Over these same 20 years, Fall term enrollments in Calculus II dropped from 115,000 to 104,000. Across the board, students are arriving at college and failing to take the next course in their mathematical progression."
My question is this: How is it possible that Bressoud who is the current MAA president, his predecessors, and other hierarchs appear to be so clueless about the mathematical pseudo-education of American students? It seems to me that this is where the real "Gap" exists.
This post drew a couple of rather skeptical replies, the first from Metropolitan State College of Denver mathematician Lou Talman, who asked:
Dom, have you considered the possibility that "the current MAA
president, his predecessors, and other hierarchs appear to be so
clueless about the mathematical pseudo-education of American
students" because *you* perceive a problem that isn't there?
And Lou's response engendered the following from anti-progressive Internet ghost, Haim Pipik (aka, Edmund David):
The premise of the argument is,
"According to the article: 'For four-year undergraduate
programs, calculus and advanced mathematics enrollments
dropped from 10.5% of all students in 1985 to 6.36% in
2005. This occurred while high school students were
taking ever more mathematics at ever higher levels.'"
Do you accept the above as an accurate representation of the facts? If this is a fact, do you accept that declining enrollments in the calculus is a problem?
Dom thinks this declining enrollment is a problem, and he asserts that the source of the problem is bad teaching in K-12. If you agree it is a problem, what do you think are some of the major contributing causes?
It's simple enough to dismiss Dom Rosa as a conspiracy theorist: he has a track record of posting and writing letters to various media outlets about his three favorite themes: the pseudo-education of America's youth in mathematics; "doorstop" textbooks; and various "rackets" in education, from standardized testing to calculators to any notion, book, method, or tool conceived after his beloved golden era of learning math in Massachusetts, a time when, if he is to be believed, teachers, books, and students were all simply wonderful, and everyone graduated with a genuinely deep, rich knowledge of mathematics. The question of how we still managed to produce yet another generation of Americans who fear and loathe mathematics during that period never seems to cross Dom's mind, or, if it does, he ignores it.
Nonetheless, the far less "cranky," but far more dangerous neo-conservative cum Libertarian, Mr. Pipik, uses Dom's latest bit of crankery to once again raise the specter of evil on the part of America's public schools. For those not playing along at home, Pipik's recurrent themes are that: we spend too much money on public education, much of it wasted, mis-spent, pilfered, given inordinately to lazy teachers and administrators who fail to show accountability and don't produce high test scores, etc.; the private sector would do a much better job of educating kids; we're wasting time and money trying to teach mathematics to most kids, who really can't learn it, don't want it, and don't need it; and anyone who disagrees with him is stupid and a communist who loves Stalin, Mao, and wants to eat the rich (well, maybe I'm exaggerating with that very last part).
Thus, while I don't think Dom or Haim has much of a case, I offer the following serious reply:
My Response: Having just read Bressoud's piece, I'm wondering if either Dom or Haim bothered to do so. Had they actually read it, they'd see that Bressoud gives us a good deal to chew over, all of which needs to be looked at carefully, not skimmed to find the one or two tidbits that can be spun just the way the reader has already concluded the "truth" requires.
It's necessary to look at Bressoud's analysis paragraph by paragraph to get a realistic picture of what real questions it raises:
DB: “Mind the Gap” is an appropriate metaphor for one of the greatest challenges facing undergraduate mathematics education today. There is a significant gap between students’ experience of mathematics in high school and the expectations they face on entering college, and there are troubling signs that this gap may be widening. There are serious problems in K–12 mathematics education, but college faculty also need to look to their own house and think about the first-year experience of their own students.
MPG: It's a safe bet that Mr. Rosa is NOT looking to his own house. He never does. And Mr. Pipik? We don't even know what his house might be, but he rarely criticizes departments of mathematics (too many Mathematically Correct and HOLD allies there to risk doing so). No, his focus, like Dom's, is always on the other guys: K-12 public school teachers, not because he gives a rat's patootie about how much math gets taught or learned by most kids, but because of his political agenda: destroying public education and turning things over to the private sector. But since he has nothing to contribute to any conversation about how to actually improve teaching at any level in any subject (unless being snide and smug count as tips for bettering things), he can be safely ignored for the duration of this analysis.
DB: In my article “Is the Sky Still Falling?” (2009), I observed that four-year college mathematics enrollments at the level of calculus and above declined from 1985
to 1995 and have since recovered to slightly below the 1990 numbers.
MPG: Well, let's be sure to gloss over that last statement. I hate to be the first person to point this out, but there are cycles in most areas of study, and the reasons vary enormously from cycle to cycle. But if there was a drop from 1985 to 1995 and then a recovery to just below 1990 levels, is the sky actually falling, or merely moving up and down rather lazily? And one might assume that four-year colleges, not community colleges, represent the majority of stronger, better-prepared students.
DB: Two-year colleges saw calculus enrollments rise in the early ‘90s, then fall to well below the 1990 number, while the number of their students requiring remedial mathematics exactly doubled. In percentages, the picture is dismal. For four-year undergraduate programs, calculus and advanced mathematics enrollments dropped from 10.05% of all students in 1985 to 6.36% in 2005.
MPG: This has been debated and discussed widely. Bressoud doesn't offer any real analysis here. No hypothesis, just numbers and the word "dismal," which of course implies a great deal but tells us nothing. If enrollments in community colleges are growing rapidly, it's quite conceivable that what this means regarding the PERCENTAGES of students who enroll in calculus and beyond SHOULD be dropping. Indeed, it would almost have to drop, because we're getting vastly more students in such colleges who are coming from the lower echelons of high school graduates. It does not follow from his statements that things are any worse in terms of the qualifications of comparable percentile ranks of high school graduates today and at various points in the past. Until someone actually comes up with that sort of comparison, with concrete examples to illustrate the nature of any apparent decline or growth, we really don't know squat based on the above other than that more kids are enrolling in two-year colleges but a lower percentage of them go into higher math. And when one considers who goes to these schools and why, this seems utterly NON-dismal.
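The arithmetic of the point is worth spelling out: a percentage can fall even while absolute enrollments rise, so long as the denominator grows faster. The figures below are invented purely for illustration; they are NOT Bressoud's data:

```python
# Hypothetical round numbers (not from Bressoud's article), chosen only
# to show that a growing student body can drive the calculus SHARE down
# even as the absolute NUMBER of calculus students goes up.
calc_1985, total_1985 = 100_000, 1_000_000
calc_2005, total_2005 = 120_000, 2_000_000

share_1985 = calc_1985 / total_1985  # 0.10, i.e., 10.0%
share_2005 = calc_2005 / total_2005  # 0.06, i.e., 6.0%

assert calc_2005 > calc_1985    # more students taking calculus...
assert share_2005 < share_1985  # ...yet a smaller percentage of the total
print(f"{share_1985:.1%} -> {share_2005:.1%}")  # 10.0% -> 6.0%
```

So a falling percentage, by itself, is compatible with rising enrollment; whether it is "dismal" depends entirely on what happened to the denominator.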
DB: This happened while high school students were taking ever more mathematics at ever higher levels. In 1982, only 44.5% of high school graduates had completed mathematics at the level of Algebra II or higher. By 2004, this had risen to 76.7%. In 1982, 10.7% had completed precalculus. By 2004, it was 33.0%, over a million high school graduates arriving in college ready — at least in theory — to begin or continue the study of calculus. Yet over the years 1985–2005, Fall term enrollments in Calculus I dropped from 264,000 to 252,000.
MPG: Hmm. Why do I not find myself panicking over a drop in freshman calculus enrollment of 12,000 students? Aside from many issues that even Bressoud touches on below, anyone paying attention may have noticed that we're not exactly dying for professional mathematicians in this country. Any opening to teach mathematics at the post-secondary level has dozens of qualified applicants. In some parts of the country, it's no easy task to find a public school that is hiring for grades 6-12 mathematics classrooms. Engineers are getting laid off in many disciplines. No general cry for more physicists has gone out. Straight A's in four semesters of calculus is no guarantee of admission to medical school, even less so into veterinary school (though exactly why those courses are prerequisites for either program remains a bit of a mystery). That 12,000-student drop, which remains unanalyzed as to possible causes, doesn't seem linked to the current economic crisis: indeed, one might reasonably assume that all those brilliant economists and MBAs took lots and lots of calculus.
DB: Admittedly, many more students today arrive at college already having earned credit for Calculus I, but they have not produced larger enrollments for Calculus II. Over these same 20 years, Fall term enrollments in Calculus II dropped from 115,000 to 104,000. Across the board, students are arriving in college and
failing to take what should be a next course in their mathematical progression.
MPG: It would be important to know if Bressoud's figures account for two things: first, how many of those students who come to college with a semester's credit already in hand wisely choose to wait until second semester to take Calculus II with friends and to give themselves an easier first semester in college, something I have frequently recommended to high school seniors. There isn't a ticking clock, and there are lots of other courses in which a bright student might wish to enroll regardless of whether s/he is ready for second-semester calculus in the fall of freshman year; and second, what percentage of those students NEVER take another mathematics class at the level of calculus or above? Without providing information on either of these concerns, Dr. Bressoud really has no grounds for concern in the above numbers, yet he seems to imply we should be terribly worried.
DB: The college community is not blameless. Too many good students are turned off by their initial college experience in mathematics. Too often, first-year courses are large and impersonal, instructors — especially adjunct faculty and graduate teaching assistants — are under-prepared, and little thought has gone into implementing appropriate pedagogies. Moreover, a common complaint that I hear from high school teachers is that colleges focus exclusively on what students do not know, with the result that many students find themselves assigned to classes they find stultifying.
MPG: I'm not willing to put too much blame on adjuncts or graduate students. I have had some very good instruction from some of them. At the University of Michigan, one of the finest mathematics professors I studied with was a post-doctoral student. When I decided to take a pair of refresher summer classes in calculus, both instructors were graduate students and were very clear, concerned and competent instructors. On the other hand, one of the most horrid mathematics classes I had was taught by a full professor. He lacked just about every requisite skill imaginable for good teaching other than subject matter competence.
That said, Bressoud would have it right and do students a huge service if he didn't try to foist the responsibility for bad pedagogy on lower-rank teachers and admitted that there is a vast amount of bad instruction in mathematics departments regardless of the rank of the teacher. Of course, there are also outstanding mathematics professors of higher rank (and I don't merely mean great mathematicians who can communicate with three or four truly gifted students, but the Polyas, the Edward Burgers, and others who are deeply committed to teaching mathematics to as many students as they can possibly reach).
DB: This last is a tricky issue. The answer cannot be that colleges lower their expectations of what it means to know algebra or calculus. It does mean that colleges need to rethink how to get students from where they are as they enter college to where they need to be. It does mean offering more routes into good mathematics and restructuring existing courses so that they acknowledge and build upon what students do know while remaining mindful of and addressing the gaps in this knowledge. Especially when a student needs to relearn a topic that appears familiar, we must ensure that the course is structured so that it provides fresh challenges that entice students to keep moving forward.
MPG: No argument with the above, especially if Dr. Bressoud can make a commitment for the professional mathematics community in post-secondary institutions to communicate the above to parents, employers, K-12 educators, administrators, guidance counselors, college admissions officers, politicians, and the media. Further, he should try to minimize the counterproductive activities of the small, vocal minority of activists within or affiliated with the professional mathematical community who are set on preserving the status quo at all costs and who actively oppose any sort of reforms along the lines he mentions. It is high time that both major organizations of professional mathematicians took a principled stand against groups such as Mathematically Correct and NYC-HOLD.
Additionally, if Dr. Bressoud is serious about creating alternative routes and attractive options for students without sacrificing mathematical standards, it is high time that the professional mathematics community support teaching discrete mathematics, the underlying mathematics of computer science, and powerful connections to technology and the Internet as important paths for youngsters to explore and study before they reach college. The notion that calculus is the exclusive meaningful goal for K-12 mathematics education is an outmoded idea that will be modified in the minds of the public and teaching community only when it comes with the support of professional mathematicians.
DB: We have learned a lot about teaching undergraduates in the past 20 years. There are proven programs for bridging the gap. The Emerging Scholars Program is one. Stretching Calculus I over two terms with precalculus topics treated on a just-in-time basis is another. But there are no magic bullets. Each college and university must examine what others have done and adapt to its own situation those programs that are most appropriate.
MPG: I can agree with the statement about the lack of panaceas. But if the focus remains solely on getting students into calculus, all the other fine things Dr. Bressoud mentions will simply break against the walls of tradition, in all likelihood. As long as mathematics is viewed so narrowly through the lens of analysis, the traditionalists will feel they still have a mandate to demand control over the K-12 curriculum. The preceding sixteen years of the Math Wars show clearly that the hard-core educational conservatives will never concede an inch towards meaningful reform of mathematics teaching at ANY level, K-12 or post-secondary, as long as they can promulgate the idea that they represent the entirety of professional mathematicians or nearly so.
It is incumbent upon anyone in Dr. Bressoud's position, if s/he wishes to make positive changes feasible, to be much more careful in the examination, analysis and interpretation of statistics such as those cited in "Mind the Gap." While few people doubt or deny that we can continue to improve the quality and effectiveness of mathematics education at all levels in this country, the sky really is not falling. American kids are neither more stupid nor more ignorant today than in the past. Nor are they being "pseudo-educated." They could use, however, better guidance from the professional mathematics and mathematics education communities about what mathematics really is and the many ways in which they could positively engage with it. As always, the question is whether those who understand the need for change have the courage to stand up for it, even against respected colleagues desperate to stand in its way.
Friday, March 6, 2009
In a recent post to Gerald Bracey's EDDRA (aka the Education Disinformation Detection and Reporting Agency) discussion list, Mark Shapiro, who blogs as The Irascible Professor, wrote regarding FairTest, a group that monitors and fights against the abuse of standardized and other tests:
I'm no great fan of PISA. But really "Fair Test" is basically "No Test".
While I don't think that high stakes tests alone should be used to determine a student's fate, I do think that standardized tests have a useful place in the assessment of educational progress both at the individual and global level.
Just as there are no perfect teachers, no perfect teaching techniques, there likely will never be perfect tests. But that doesn't mean the tests are worthless. Imperfect knowledge is better than complete ignorance.
I think that in fact there are times when ignorance is actually better than partial knowledge, especially if one uses the partial knowledge as if it were complete knowledge, with the predictive power we normally reserve for physical laws. People who recognize the extreme limitations of their partial knowledge and proceed with utmost caution seem to be all too few when it comes to educational and psychological testing and measurement, unfortunately for all of us (except maybe those who profit from acting as if limited knowledge is omniscience).
So-called "intelligence" testing has a particularly long, ugly, racist, xenophobic, and highly politicized history, perhaps more so in the US than anywhere else on the planet (England may be the runner-up, but I'm not certain of that). Much of that history spilled over quite predictably into education, with destructive results for millions of Americans (not to mention those would-be Americans left stranded to die in Nazi camps in Europe because they could not emigrate to the US under restrictive immigration laws passed by Congress in 1924, due in no small part to Carl Brigham's testimony, which was grounded in the meaningless results of Yerkes's data from the Army intelligence tests given during WWI recruitment:
The decline of American intelligence will be more rapid than the decline of the intelligence of European national groups, owing to the presence of the negro. These are the plain, if somewhat ugly, facts that our study shows. The deterioration of American intelligence is not inevitable, however, if public action can be aroused to prevent it. There is no reason why legal steps should not be taken which would insure a continuously upward evolution.
The steps that should be taken to preserve or increase our present intellectual capacity must of course be dictated by science and not by political expediency. Immigration should not only be restrictive but highly selective. And the revision of the immigration and naturalization laws will only afford a slight relief from our present difficulty. The really important steps are those looking toward the prevention of the continued propagation of defective strains in the present population. (Brigham 1923) (The Mismeasure of Man, Stephen Jay Gould, p. 260)
The horror and wrongness of what he'd done led Brigham to later publicly recant:
“This review has summarized some of the more recent test findings which show that comparative studies of various national and racial groups may not be made with existing tests, and which show, in particular, that one of the most pretentious of these comparative racial studies—the writer’s own—was without foundation.” [Brigham, C. C. 1930. Intelligence tests of immigrant groups. Psychological Review 37:158-165. (p. 165)]
though this didn't help the dead people he helped doom.)
The damage done to kids through the abuse of tests of all sorts (psychological, academic, and so forth) is never going to be fully documented. It goes on every day. It is not for the most part a matter of willful abuse, of course: few educators have adequate training in assessment. Far fewer still have any real understanding of statistics or psychometrics. Given some comments on this and other lists I read, I'd say that some opinions here appear to reflect similar ignorance.
The conclusion isn't to do away with tests. Assessment is necessary. It's the nature of assessment, the quality of the tests, the way in which test results are used (or misused), and the purposes behind testing that must be vigilantly interrogated at all times. I see nothing to lead me to the conclusion that FairTest is against testing. What it is against, as we all should be, is testing abuse. The idea that any data is better than no data is ridiculous if the damage done by collecting the data and then misusing it can be reasonably shown to far outstrip the good done with it.
If someone knows and openly admits that the meaning and usefulness of data s/he collects is highly speculative at best, and vigilantly seeks to protect those being assessed and others from unfair punishment and needless suffering (assuming anyone really has that much power), then it may be ethical to collect that data in hopes that it will lead to better, less speculative understanding and assessment in the future. But far more often, test results become the evils unleashed from Pandora's box, and no one has the power to return them once it is opened. Kids are wrongly made to feel stupid and to suffer, pushed around in the school system in idiotic ways, all because of bad testing practices. Teachers, schools, administrators, districts, states (and now even nations) are wrongly viewed and mistreated due to unethical and/or ignorant abuse of basic principles of psychometrics. Parents and non-parents alike suffer as their property values are hurt by test scores, regardless of whether those scores mean a great deal or nothing at all.
And what makes this all so heinous is that there are ostensibly intelligent people, some with backgrounds in mathematics and/or science (but apparently not in statistics or psychometrics) who become apologists or even cheerleaders for bad testing practices and attack mercilessly anyone who dares even question what's going on. In my experience, such people are invariably politically and/or economically motivated, and the most polite word I can use for this practice is "unethical," though some may prefer one of the following: immoral, amoral, hypocritical.
Once again, there is nothing wrong with the idea of assessment, but there is much wrong with how it's done in this country and a great deal of evil perpetrated as a result of testing abuse.
For further reading on this vital subject, I recommend the following:
Gould, Stephen Jay: The Mismeasure of Man
Hoffman, Banesh: The Tyranny of Testing
Mansell, Warwick: Education By Numbers: The Tyranny of Testing
Nichols, Sharon L. and Berliner, David C.: Collateral Damage: How High-Stakes Testing Corrupts America's Schools
Owen, David: None of the Above: Behind The Myth Of Scholastic Aptitude