Saturday, November 24, 2007

The Book Gods?

Something that always blows me away is how teachers will follow books blindly in the face of what should be big warnings from what they know about their own students. All too few teachers are immune to book worship, having been led to believe, by their own experience as kids and as education students, that the math textbook (and its magical authors) knows more than any regular old K-6 teacher.

Two cases in point: I was coaching upper elementary teachers in math at a low-performing K-6 school in a district near Detroit a few years ago. They were using the Everyday Math program for the first time. I was asked to guest teach some lessons on fractions in a couple of the 4th and 5th grade classrooms. I noticed that in one lesson, involving pattern blocks, there were three problems for classroom discussion. The first one was clearly needed to establish the relationships among the smaller shapes (triangles, parallelograms, and trapezoids) that could be fit together to make a hexagon. If you have the standard pattern blocks, the relationships were that two triangles formed a parallelogram; three triangles made a trapezoid; and six triangles made a hexagon, which in the first problem was the "whole" or "1." Also, you could make the trapezoid from a triangle and a parallelogram, and you could make the hexagon from two trapezoids or from three parallelograms, etc.

In the first problem, therefore, students establish that the triangle is 1/6 of the unit hexagon (which is an outline drawing in the book that you cover with these various combinations); the parallelogram is 1/3 of the unit hexagon; and the trapezoid is 1/2 of the unit hexagon.

While some students had difficulty with this, most did pretty well as I had figured. But when I saw the second problem, I sensed danger (interestingly, the third problem was much easier, and relatively few students struggled with it). #2 was an outline drawing of TWO of the hexagons with an adjacent edge, but the line where the edges met was erased, so you had a double hexagon as the new unit. The idea was that students would cover this new figure and see it as a new "unit"; it would take, for instance, 12 triangles to cover it, and so the triangle would now represent 1/12 of this unit, and so on. Each figure would represent a fraction half its previous size, since the new unit was double the old unit.
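Without the problem drawings, the unit shift can be sketched numerically. This is my own illustration, not part of the EM lesson; the areas are just the standard pattern-block relationships, measured in triangles:

```python
from fractions import Fraction

# Areas of the standard pattern blocks, measured in triangles.
shapes = {"triangle": 1, "parallelogram": 2, "trapezoid": 3, "hexagon": 6}

def fraction_of_unit(shape, unit_in_triangles):
    """What fraction of the chosen unit does this shape represent?"""
    return Fraction(shapes[shape], unit_in_triangles)

# Problem 1: the unit is one hexagon (6 triangles).
print(fraction_of_unit("triangle", 6))      # 1/6
print(fraction_of_unit("trapezoid", 6))     # 1/2

# Problem 2: the unit is the double hexagon (12 triangles).
# Every fraction is halved -- exactly the leap the students resisted.
print(fraction_of_unit("triangle", 12))     # 1/12
print(fraction_of_unit("trapezoid", 12))    # 1/4
```

The same green triangle covers the same area in both problems; only the choice of "1" changes, which is precisely what the students could not accept.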

I would like to say that I brilliantly smelled a rat, but I didn't. Or, that is to say, I smelled it, but I didn't trust my reactions sufficiently. Then again, I can plead lamely that I was not experienced with this book (though the mathematics wasn't a problem for me) or with teaching kids this age. I was there because I had some previous experience as a coach, because I knew the math well, and because I am very quick to adapt to new ideas and approaches.

In any event, I went ahead with the first group and had them do the problems in the order given by the book. As I had sensed but failed to act upon, the students struggled mightily with the second problem. They couldn't wrap their minds around the shift in the unit. The picture, I suspect, was perceived by most of them not as a new "unit" but simply as "two"; "obviously" it WAS equal to two of the OLD units connected to one another. They likely were visually filling in the removed boundary line where the adjacent edges met. It was very difficult to get a lot of them to make the leap to seeing this as a new "one." Even when they did the individual tasks ("cover this new shape with the triangles. How many triangles does it take?") and got the correct numbers, they were not going to budge from the notion that if a little green triangle was 1/6th in the first problem, then it was still 1/6th in the second problem. When I asked about why it now took 12 triangles, not 6, to cover the figure, they just said in essence, "Well, sure: there are two hexagons there."

I hope I have made this clear enough without the exact problem drawings, which would likely have made it more transparent.

So, when I guest-taught the same lesson in another class, I changed the order of the problems. I got much better results, overall. And I warned them that the problem just described, now their last problem, was challenging and might prove upsetting until they played with it a while. Interestingly, some of the metaphors I tried that bombed the previous time worked well here (e.g., a quarter is a half of a half-dollar, but it's only 1/4th of a whole dollar, etc.). Was it really that simple? Just change the order, build their confidence, warn them of quicksand, and things would go better? I'm not sure.

But the teacher in the second class was truly SHOCKED that I changed the order, and after class, when we debriefed, it took a lot to convince her that I had made an informed choice based on the previous class and my own previously ignored intuition that the second problem was too great a leap to go to directly from the first one, without offering some warning bells. Her feeling was that the textbook's authors knew more than she did and must have a good reason for the order of the problems.

I assured her that SHE was the expert on her students in her class, and no author would dare usurp that position. I was making an educated guess, and had the experience of the previous class to back me up. (I know one of the main authors of EM personally, and I told the teacher I would be e-mailing him with a summary of the experiences I had that day. I told her that I had no doubt at all he would understand my decision. Maybe he would even suggest a change in the next edition as a result.) This was all amazing to her. And she was not a rookie teacher.

In another example from the same school and book, I chose to omit entirely a couple of problems that introduced mixed numbers into a situation where they were NOT the main point of the lesson. I anticipated (and this time trusted my instinct) that throwing mixed numbers into the fray was an error that would distract students from the real mathematical residue I wanted them to take away. Again, the classroom teacher (not the same one) was surprised by my choice and not confident that she was allowed to make a decision to remove problems, temporarily or permanently (I made clear in class to the students that we would come back to them another day).

I think this is tragic. And I think it extends even to home schooling parents, even though they know their own child(ren) and can make pedagogical choices based on that knowledge, as well as other factors. There's no textbook that can anticipate the needs of each kid, and a good teacher must intervene to make the best choices s/he can given the limitations of either whole-class or individual instruction. Clearly, teaching one child or only a few is a huge advantage and allows great flexibility. Not being under the thumb of a district, or even the state or NCLB, makes things even better. But the god of the book is intimidating. What if you're wrong?

But I must add this caveat: if math isn't "your thing," you need to think carefully about your choices, and you are going to make some definite mistakes. Few of them will be fatal (I'm talking about the sorts of pedagogical choices already described, not mathematical errors, which are a separate issue entirely). There is what is called "pedagogical content knowledge": an understanding of both the subject and effective ways to teach it. If you're weak in content, it's hard to be strong in this area, but being strong in content is no guarantee either. (College math departments prove this daily throughout the land.) It takes a lot of thinking, before, during, and after teaching lessons, to be effective. It takes a willingness to really anticipate how a reasonable-seeming problem could be a minefield for your student(s), and to think about where your student(s) may go astray in the task and/or topic at hand, based on their strengths and weaknesses AND your knowledge of the mathematics. (Obvious example: kids learning fractions are prone to add fractions by adding the numerators and adding the denominators. It's a good idea to be prepared with an example like 2/3 + 1/5, so that you can ask the student(s) whether it makes sense that the answer would be 3/8, since 2/3 is already greater than 1/2, but 3/8 is less than 1/2.)
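The numerator-plus-numerator error and the sanity check against it are easy to demonstrate. A quick sketch (the function name is mine, standing in for the student's mistaken procedure):

```python
from fractions import Fraction

def naive_add(a, b):
    """The common student error: add numerators, add denominators."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

correct = Fraction(2, 3) + Fraction(1, 5)          # 13/15
naive = naive_add(Fraction(2, 3), Fraction(1, 5))  # 3/8

# The sanity check from the text: 2/3 alone is already greater than 1/2,
# so the sum cannot possibly be 3/8, which is less than 1/2.
print(correct, naive)
print(naive < Fraction(1, 2) < Fraction(2, 3))     # True
```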

So I don't advocate just tossing things out because they look unappealing or you don't like the topic or something like that. Or because you read somewhere that a particular method isn't good. You want to think it through and in terms of the student(s). Then, you do what you can, being prepared to change course if necessary. No shame in that. Indeed, it's a wise teacher who can admit error, doubt, and change course. Too bad some people in Washington, DC seem to lack that wisdom.

Friday, November 23, 2007

Mastery of What?

There's a lot of conversation on one of the lists I read about spiraling vs. mastery in mathematics curricula. Nothing new about that. Just another issue that strikes me as adding more confusion than clarity by creating false dichotomies instead of seeing that most of these things go hand in hand. Frankly, I'm hard-pressed to see how it would be possible to teach or study even the tiny slice of something as enormous as mathematics that we want all students to learn and be able to use in K-12 education (which brings us up to pretty much nothing invented in the field as recently as the 17th century, and still excludes enormous amounts of what was already known when Newton and Leibniz were inventing the differential and integral calculus) without doing a reasonable amount of "spiraling" (which is to say that we must revisit already-explored ideas when students have more sets of numbers to look at, say, or have developed sufficient mathematical maturity to delve deeper into things that many of us mistakenly think of as simple, elementary, "easy," basic, etc.). At the same time, it's hard to move forward (at least in the linear way we teach the subject in the US) if you haven't attained some facility with the procedures (if not the actual ideas behind them) that are often referred to as "the basics" (that's the stuff educational conservatives keep telling us we desperately need to "get back to," as if we've somehow been skipping that and teaching partial differential equations, differential geometry, and category theory to elementary students while they weren't watching us crazy progressives carefully enough, and now we must return to sensible arithmetic, a taste of geometry, some faux algebra, etc.).
However, interesting questions arise about whether it is absolutely mandatory to shove every child through the same narrow funnel, and whether much wouldn't be gained by giving kids (and teachers) a lot more options about what route(s) they want to explore on their ways up and around in the tree of mathematics. (See Dan Kennedy's provocative "Climbing Around In The Tree of Mathematics.")

In any event, here's some of what I've been thinking about regarding the whole "mastery" thing that so many people worry about (or say that they do):

People talk a great deal about "mastery" in K-12 math, as if mathematics was somehow like typing. You practice and attain mastery.

But that part of mathematics, while important up to a point, isn't really what mathematics is except for kids (and then, only a small piece of it).

Consider the following quotation:

"There is something odd about the way we teach mathematics. We teach it as if assuming our students will themselves never have occasion to make new mathematics. We do not teach language that way. . . the nature of mathematics instruction is such that when a teacher assigns a theorem to prove, the student ordinarily assumes that the theorem is true and that a proof can be found. This constitutes a kind of satire on the nature of mathematical thinking and the way new mathematics is made. The central activity in the making of new mathematics lies in making and testing conjectures." (Judah I. Schwartz and Michal Yerushalmy, quoted in "Geometer's Sketchpad in the Classroom" by Tim Garry, in GEOMETRY TURNED ON, p. 55).

You could readily change a few words above and talk about problem-solving as well. We pose problems to students the solutions for which are well-known. Students cry "Foul!" when confronted by: 1) problems they haven't specifically been trained to do. That is, they think it's dirty pool to be asked to solve problems that aren't identical, or nearly so, to others the teacher/book has explicitly worked through with them. These, of course, aren't problems, but rather exercises, just like typing drills: I show you how to use the keys with your right index finger, then drill you on that, and so forth; 2) problems for which the solution pushes them beyond the immediate topic, perhaps calling on general strategies they've learned, things they've worked on, but also a bit more, stretching their minds, asking them to reach a bit, speculate, imagine; and 3) any sort of problem that has no known solution (or maybe just no solution with the methods or numbers they're familiar with), posed just so they learn that math never ends, that mathematics is always being extended and invented, and that part of what drives it is unsolved problems (along with new problems, new math, and so on). It could be something as simple as asking a student who hasn't learned about negative numbers what 6 - 8 equals, or someone who hasn't learned about complex numbers to consider the equation x^2 + 1 = 0. Or it could be having students explore accessible but unsolved problems like Fermat's Last Theorem (when it was still unsolved) or the Goldbach Conjecture. Or something like the three utilities problem or the Königsberg bridge problem, which led Euler (see photo above) to invent new mathematics (graph theory). Fun stuff, really, but many kids think all problems in math class should be trivial exercises, not real problems for them to speculate about and experiment with.
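As a taste of how accessible such open problems are, the Goldbach Conjecture (every even number greater than 2 is a sum of two primes) can be explored in a few lines. A minimal sketch of my own; checking small cases is exactly the kind of conjecture-testing described above, and of course proves nothing in general:

```python
def is_prime(n):
    """Trial division; fine for the small numbers a student would explore."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pairs(even_n):
    """All ways to write an even number > 2 as a sum of two primes."""
    return [(p, even_n - p) for p in range(2, even_n // 2 + 1)
            if is_prime(p) and is_prime(even_n - p)]

for n in (4, 10, 28, 100):
    print(n, goldbach_pairs(n))
```

A student can vary the inputs, notice that every even number tried has at least one pair, and then confront the real question: how could anyone know it works for ALL even numbers?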

I think much of the error we make in putting so much emphasis on mastery lies in cheating students of knowing what it means to think mathematically, even though they are quite capable of doing so. There is a body of work out there that suggests kids can do much more mathematical thinking than we give them credit for. But for most, by the time they get to do some, they hate mathematics (even though they actually don't know what it is, really).

Just a little late-night food for thought.

Thursday, November 15, 2007

Language, division, and calculators.

Since several related issues are floating around on various math education lists I'm reading these days, I thought the following problem and what I observed recently with some African-American students would be worth sharing.

This problem appeared on an actual ACT exam; it should be noted that it was only #5 of 60 questions in a section of a test which allows 60 minutes total time for those problems:

The oxygen saturation level of a river is found by dividing the amount of dissolved oxygen the river water currently has per liter by the dissolved oxygen capacity per liter of the water and then converting to a percent. If the river water currently has 7.3 milligrams of dissolved oxygen per liter of water and the dissolved oxygen capacity is 9.8 milligrams per liter, what is the oxygen saturation level, to the nearest percent?

A) 34% B) 70% C) 73% D) 74% E) 98%

First, I want to observe that there are quite a lot of words in the above problem, most of which make it difficult to read and which don't flow terribly smoothly. The actual mathematics is hardly at the high end of difficulty for high school students, if they get to it, or at least we would hope that to be the case. But the language seems quite "high end" for where this problem appears in the section, and that would potentially be an obstacle for students who are not good readers, not native English speakers, or who may be thrown by a science situation with which they are unfamiliar. Is it a good idea to embed this particular mathematical task in language like this? What, exactly, is being tested? Here in Michigan, where the ACT is now the official "exit" exam for high school students, these concerns are not trivial.

That said, let's look first at how I went over this problem with some students, once they had it set up as 7.3/9.8 or agreed that such was a reasonable way to begin. Of course, the ACT allows calculators (but unlike the SAT, does not permit the TI-89, which has a computer algebra system included). But I always start with the assumption that it might be quicker and safer to do mental math, and that knowing how to do the problems several ways is worthwhile. (That's also a personal holdover from the period prior to 1995, when calculators first were allowed on the SAT. I learned how to excel on these timed tests by studying them starting in 1979. Mental math, estimation, and knowing that fractions are often the easiest form in which to do quick arithmetic has stayed with me and I still encourage students to think that way on timed tests. I rarely use calculators for doing them, though it's not a bad idea to have one, if you know how to use it intelligently.)

First, I pointed out that we could multiply by 10/10, getting the equivalent expression 73/98. Then I asked about 34%, hoping that students would see that it was far too low, 73 being clearly more than 50% of 98. Most students agreed. I suggested further that 98% was unlikely, as 73 is not "almost all of" 98. Again, most of the students were amenable to this line of reasoning. I next asked about 73%. Some students saw that for that to be correct, the denominator would have to be 100. That left 70% or 74%, and most (though not all, of course) students at this point saw that if the denominator was less than 100, then 73 would comprise more than 73% of it, so that 74% was more reasonable than 70% would be.

The advantage of this sort of estimating and "number sense" approach is that it avoids some of the errors that I found occurred with alarming frequency among those who attacked the problem with more traditional methods or with calculators. One student in particular ran into interesting mistakes, not unique to him. Without my help, he correctly set up 73/98. He articulated that he should divide, but then proceeded to start dividing 98 by 73. Since this yields approximately 1.34, it's not a coincidence that one of the choices is 34%: students simply assume that they should "drop" the "1" and round, rather than considering that they've converted a fraction considerably less than one into a percentage that initially (before the groundless dropping of the 1) represented more than 100%. Would they be likely to do this with the mental math approach?

But this student took it further. He grabbed his low-end calculator (non-graphing), pressed some buttons, and told me that he had been right to begin with, because he still got "34%" as the answer. Of course, he'd failed to realize or recall that when entering 73/98 into a calculator, his "wrong" impulse in the hand calculation was now the correct order (and such would be the case, as far as the order of the numbers is concerned, were he using an HP or other Reverse Polish Notation device).
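Both the correct computation and the inverted-division trap are easy to reproduce. A sketch of the arithmetic only, not of any test-prep procedure:

```python
dissolved = 7.3   # mg of dissolved oxygen currently per liter
capacity = 9.8    # mg of dissolved oxygen capacity per liter

# Correct order: current amount divided by capacity, converted to a percent.
saturation = round(dissolved / capacity * 100)
print(saturation)            # 74

# The trap: dividing capacity by the current amount gives about 1.34,
# and "dropping the 1" lands exactly on the distractor, 34%.
inverted = capacity / dissolved
print(round(inverted, 2))    # 1.34
```

Note how cheaply the test-makers manufactured the 34% distractor: it is nothing but the inverted quotient with its leading 1 discarded.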

What I find provocative is that students can wind up going wrong on the same problem in so many ways and for reasons that aren't strictly mathematical. Not only is the language in which the calculation is imbedded more dense than one might expect, but the language of division "73 divided by 98" is in the opposite order from the order in which students are taught to divide, leading many to then try to divide 98 by 73. And then, having made that error, they find their mistake reinforced by the fact that the calculator wants them to enter things exactly in the order they appear in print (at least in THIS example). Our way of expressing division in language is at least partly to blame for a lot of the confusion in even setting up a correct calculation, let alone carrying it out successfully.

There are other difficulties with division surrounding the conceptual basis for the algorithm, but here we see how students can be led astray regardless of whether they know how to do long division by hand on an exercise that simply offered 98, the side-ways "L" we put to its immediate right, and 34 placed above it. This student did the wrong problem correctly by hand. He then made precisely the error the test-makers set him up to make by selecting 34%. I would have been interested to see how he would have done with the calculator alone, or with the calculator first, at any rate, as far as the order of the numbers was concerned.

To conclude, I would hesitate to say that this student didn't know how to do long division. He could successfully carry out the algorithm. His problems had to do with issues that may have been more linguistic than anything else. How he said the problem to himself, or heard me expressing it, was not reliably leading him to set up the right calculation. And the answers were, of course, tempting him to make some unsound and unjustified moves to retrofit his results to the answer choice that seemed to fit. This occurred both when he did the problem by hand AND with a calculator. The mistake had nothing to do with his relying on the calculator, in fact, since it was his choice initially to do the problem without it. He did the division he set up properly. Yet his answer was dead wrong, regardless of the technology or its absence. I don't claim this proves anything, but it is thought-provoking. And this incident does help emphasize the potential power of estimation and mental math.

Wednesday, November 7, 2007


I am writing to support David Wasserman's decision to refuse to administer a test in which he did not believe and to decry the way in which he was subsequently dealt with by his superiors. I am a mathematics teacher educator, teacher, and expert on standardized test preparation with more than 30 years' experience working with students on various instruments (e.g., SAT, GRE, ACT, LSAT, and GMAT) as well as with grading state tests from Michigan, New York, and Connecticut. With that experience and expertise in mind, I am deeply troubled by the manner in which this nation has been pushed further and further towards accepting an ill-founded religious belief in the power of (for the most part) multiple-choice, multiple-guess tests to measure not only student achievement, a concept which is at best open to question, but teacher, administrator, school, district, and state competency (not to mention national status when viewing similar international tests such as the TIMSS), in total violation of one of the basic principles of psychometrics: never use a test to measure something it has not been specifically designed and normed to measure. This country has long been enamored with numbers and rankings, going back to the early decades of the 20th century, when we shamefully abused IQ scores to restrict immigration in ways that can only be viewed as unscientific and utterly racist. I urge everyone to read Stephen Jay Gould's definitive work on the abuse of "intelligence" testing, THE MISMEASURE OF MAN, for a shocking and sobering account of how standardized tests have been misused and abused in the United States, generally out of racist and chauvinistic ignorance and bias.

It takes a brave person to risk his job and his livelihood, to put himself and his family in jeopardy, in the face of blind obedience on the part of so many of his fellow teachers and education professionals to what is nothing more than an outlandish political ploy to destroy public education, undermine teacher authority and autonomy, punish students, parents, teachers, administrators, schools, and districts MOST in need of support, and to shamelessly promote vouchers and privatization to help those already most advantaged and least in need. Sadly, there is not a single member of the US Congress (and, I suspect, of any state legislature) who has a balanced view of educational politics, who actually has K-12 teaching experience, who has a background in either education or psychometrics, and who understands that measuring something is not the way to improve it. Regardless of political party or identity as liberal, conservative, reactionary, moderate, or libertarian, our politicians have little interest in reading educational research, theory, or case studies beyond what they need to figure out which way the political winds are blowing and what policies will sound good to the voters, regardless of what professional educators believe or know from hard experience. No Child Left Behind has devolved, predictably, into No Child Left Untested, Untraumatized, Unabused by high-stakes, high-pressure exams that for the most part are not grounded in state or district curricular frameworks, do not reflect best practices, and are not scientifically sound when misapplied for a host of purposes for which they were never intended by the test authors.
I have personally witnessed teachers being instructed by administrators in my home state of Michigan to lie to elementary school students about the implications for individual students of their scores on the state tests, to wit, that they should tell these children that their scores will "go on their permanent record and follow them throughout their lives." This is a bald-faced lie, as everyone reading this knows full well, but 7-year-olds are not so savvy, and often neither are their parents. These administrators were not evil, but merely people succumbing to immoral and unethical bullying pressure originating with a president and federal government bureaucracy that has been acting cynically in the name of helping the very children they are helping to destroy.

Few individuals whose careers and families depend upon their acting like good little Germans and pleading ignorance of wrong-doing have had the courage to do what David Wasserman did. And how was he rewarded? He was immediately made to feel the power of the institution and individuals that wield power over him and his students. Instead of applauding his integrity, they endeavored to crush him or force him to capitulate. And as a parent and husband, he did so, though I am sure it was very painful for him to have to choose between his family and his students, between his principles and his fear.

It is shameful that the administrators in the Madison Metropolitan School District did no more (or less) than those fine citizens of Munich, Berlin, Frankfurt, and so many other cities in Germany and other European countries brought under Nazi rule: they used their power to snuff out the first sign of real bravery and dissent, the first act that really was designed to see that no child is forced to take meaningless tests that measure little, if anything, that most educators value. A multiple-choice test can easily reward luck over knowledge, mechanical regurgitation of mere facts over thought, imagination, and creativity, and, of course, conformity over individuality. I would be very much surprised if there is a state or national high-stakes test being used in conjunction with NCLB that is not primarily or entirely of this type. David Wasserman had the guts to stand up for his students and for meaningful assessment over shallow, cheaply processed "data"-gathering and number worship. His colleagues, principal, and superintendent should have applauded him. I suspect many of his students were grateful for even a moment's thought for their plight. Instead, we saw no acts of courage from those with a little more power than a mere classroom teacher. It was business as usual, full speed ahead, and testing über alles. How utterly sad, and how utterly tragic for real kids and real learning.

Tuesday, November 6, 2007

The Games Ideologues Play

In a 2006 article for the Hoover Institution, Barry Garelick wrote:

"It was another body blow to education. In December 2004, media outlets across the country were abuzz with the news of the just-released results of the latest Trends in International Mathematics and Science Study (TIMSS) tests. Once again despite highly publicized efforts to reform American math education(some might say BECAUSE of the reform efforts) over the past two decades, the United States did little better than average." ["Miracle Math" by Barry Garelick, EDUCATION NEXT, Fall 2006 (vol. 6, no. 4)].

This example of Mr. Garelick's purple prose, replete with "body blows," and the laughable image of media outlets buzzing over ANY education-related item, is a masterful piece of conservative propagandizing in which he clearly sets the groundwork for a classic Hoover Institution tactic: having it both ways.

You see, in 2006, Mr. Garelick was all abuzz, if no one else was, about the shortcomings of U.S. mathematics achievement as measured by TIMSS, with the blame neatly dropped on US reform efforts (and you know who THAT means) of "the past two decades." Hmm, now. He's writing in 2006, so these efforts have been going on since 1986. . .

Quick: ignoring the question of what percentage of U.S. public school classrooms are now using, or have exclusively used for the past eleven years, what he and the rest of us would agree are "reform" methods AND materials, what are the numbers for 1986-1996? Pinning the blame for all that alleged failure over twenty years on programs that didn't even exist in 1986 is pretty neat. But it gets better.

Because Barry Garelick argues, in a piece that appeared Sunday entitled "It Works for Me: An Exploration of 'Traditional Math,' Part 1," that actually we HAVEN'T been failing. And why not? Well, he cites The Way We Were?: The Myths and Realities of America's Student Achievement, a book by one of the best and most liberal-minded education reporters in the country, Richard Rothstein, to argue what many folks, most notably Gerald Bracey, have been pointing out for years: the sky isn't falling, or at least not in general, when it comes to US education.

But Mr. Garelick, of course, wants it both ways.

Instead of an honest assessment of the successes AND shortcomings of traditional instruction in mathematics in this country (let alone something similar for progressive mathematics programs and methods), Mr. Garelick is offering a completely skewed version of reality in which all failures are BECAUSE of reform efforts, whether they even were in existence, let alone implemented, while all successes are due to the good old traditional instructional methods of his youth. Or maybe due to Singapore Math. I'm not completely clear on his notions of causality.

In either event, this is remarkable sophistry on his part. Because there's only one undeniable thing in this debate, and that is that there were no NCTM-style reforms going on to any discernible extent prior to 1986 (and pretty much prior to the mid-1990s, if in fact they are truly the dominant texts and methods of today and the past dozen years or so, a very doubtful proposition in itself), but the instructional methods through the mid-1980s were indeed Mr. Garelick's beloved "traditional" teaching.

So we're asked to believe: a) all was well, or nearly so, up until 19___ (I fear to complete the date, because whatever is put there will be so ludicrous that I may be struck dead for writing it even as a hypothetical or as someone else's belief); b) we went down the slippery slope in 1986 or so; c) but, see, the sky wasn't really falling after all; d) except that it was, in 2006; e) but it wasn't, in 2007; f) and anyway, it was traditional methods that had us all rosy-cheeked and healthy in mathematical competence up until that 19___ year, and Singapore Math is a MIRACLE that will bring us all back to health again; g) and all this regardless of the fact that Singapore Math is actually a diverse set of materials and methods, that there is no clear picture of what methods it allows or proscribes, and that the US has so little in common with Singapore (4 million people, ethnically homogeneous by comparison, a highly rigid, restrictive form of government, and a level of affluence that any US politician would KILL to be able to brag about as TYPICAL of his constituents) that it's just a tad difficult to believe that some books alone are going to save us. And this is NOT coming from a negative view of Singapore Math. Far from it. It's coming from a sense that Mr. Garelick is not an educator but an ideologue, that he is like other Hoover Institution folks in his dedication to conservative views of education and education policy, that he is playing a dishonest rhetorical game, and finally that not even Singaporeans are so naive (or dishonest) as to believe that they have all, or even most, of the answers to teaching and learning mathematics.

But don't expect moderation from Mr. Garelick. Or modest claims. Or honesty. Just expect propaganda, trying to have it both ways, trashing NCTM, NSF, and any and all reform ideas and methods (even when they are strikingly similar to what people do in Singapore). Don't expect him to mention that it's difficult to find an Asian mathematics educator who comes to the US who isn't interested in learning from us and isn't critical of the shortcomings in their own countries' methods, materials, and accomplishments.

No, from the Mathematically Correct/HOLD and right-wing think-tank groups that Mr. Garelick represents, there are only Singapore Math "miracles," wonderful, golden days of traditional instruction that worked for pretty much everyone, and horrible failures and stupid ideas, books, and practices from progressive educators, the NSF, and NCTM. It may be fun and easy to live in a fantasy world in which things are either black or white, wonderful or horrid, perfect or completely flawed, but unfortunately, US educators have to work in the real world, with real kids, in real communities, with real poverty, real malnutrition, real parental ignorance, real abuse of kids, real drug and substance abuse, real crime, real ethnic and religious conflict and hatred, and a host of other challenges. These are not excuses: they are simply part of reality. We have that reality and the many professionals working with it and doing research to try to improve the quality, depth and effectiveness of mathematics teaching and learning, and then we have Mr. Garelick and the Mathematically Correct/HOLD/Hoover folks with their simplistic viewpoints, propaganda, rejection of research, phony talk of miracle cures, and even in the case of one outspoken former member of NYC-HOLD, a person so cowardly he refuses to use his real name on the internet, the assertion that there are no open questions about mathematics pedagogy.

Yes, the cartoon at the top of this post pretty much says it all.