Earlier this week news broke that one of the authors of a widely celebrated study on persuasion and same-sex marriage has disavowed the study and asked the journal Science to retract it. Donald Green, a political science professor at Columbia, has said that his co-author Michael LaCour—a UCLA graduate student who was slated to join the faculty of Princeton this summer—falsified the data behind the study, which claimed to show how short conversations with canvassers can change people’s minds about same-sex marriage. The study, published in December, was the subject of extensive media coverage.
Some observers have rushed to compare LaCour’s apparently faked study with another high-profile study concerning same-sex marriage—sociologist Mark Regnerus’ 2012 “New Family Structures Study,” which found different outcomes in the lives of children raised by a parent who had same-sex relationships and those raised by their married, biological parents.
Over at the First Thoughts blog, Matthew J. Franck explains why such comparisons are bogus:
Leave it to the New York Times to find someone willing to claim, ridiculously, that the Regnerus and LaCour cases are fundamentally similar as instances of “debunked” research. Except that in one case we have actual data, validly obtained, rich in their findings, about the interpretation of which the scholars are quarreling. In the other, we have strong reason to believe the data the young scholar claimed to be reporting didn’t exist at all.
The true similarity between the two cases is a rather different one. Regnerus offered the first example of sound social science questioning what the elite in the academy and media desperately want to believe—that same-sex marriage will have absolutely no negative fallout for the young and vulnerable. So of course his research had to be attacked, mischaracterized, or explained away by “re-analysis” at your local dry cleaner. Never mind that his results actually accorded with common sense and historical experience about parents and children.
LaCour, on the other hand, offered those same academic and media elites an astounding reversal of conventional wisdom on public opinion formation, but one that delighted them because it made their work look easier, their future brighter, and their pet cause more imminently triumphant. So of course this was the most exciting breakthrough in social science of the last year!
Franck mentions the recent “re-analysis” of Regnerus’ study, which is being touted in some places as a definitive debunking of his work. Regnerus, who served as a peer reviewer for the new study, argues that it discards significant amounts of relevant data and thus arrives at a different—and more ideologically acceptable—interpretation:
To their credit, the authors helpfully pointed out a handful of cases that were questionable—respondents whose unlikely answers to other questions (like height, weight, etc.) suggest they weren’t being honest survey-takers. Such a critique is certainly fair and welcome; it’s part of the long-term process of cleaning and clarification in any dataset of substantial size. And removing those questionable cases actually strengthened my original analytic conclusions—and the authors say so: “. . . these adjustments have minimal effect on the outcomes . . . these corrections actually increase the number of significant differences . . .”
However, Professors Simon Cheng and Brian Powell do far more than this, and that’s where my appreciation ends—and the recognition of a historical pattern in the conduct of social science research begins. …
Powell and Cheng “control” their way to few or no significant differences between children of intact biological families and those who spend time in same-sex couple households. How? By sorting respondents according to the stability of their parents’ same-sex relationships, longevity of time in their household, by pooling together married moms and dads with those who eventually divorced or who shared joint custody throughout the respondent’s childhood, and by adding a control for childhood experience of poverty in addition to the income control I had already employed. (Seventy percent of households with a mother, her same-sex partner, and the respondent child received social welfare at some point.) But there are so few stable same-sex relationships in the data that, when analyzed in this way, the statistical power to detect real differences diminishes considerably. Powell and Cheng themselves admit this, and the estimates of difference are hence no longer significant. That’s how one can go from a majority of 40 outcome variables displaying significant differences to just one or two. Had stability rather than instability been “endemic” to the same-sex relationships in the NFSS, I would have split the sample myself!
“The data will often say anything you’d like them to say, if you adjust (and justify) how you analyze them,” Regnerus says. He concludes: “Social science was never going to save marriage’s male-female infrastructure. I never presumed it could or would. What it can do—and that’s what I will always love about it—is reveal what is going on. It has a difficult time laying blame or taking credit, because causality is always challenging to discern. I just wish the charged atmosphere could begin to sustain a healthy and fair debate. Not just yet, it seems.”
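As a purely editorial aside, the statistical-power problem Regnerus describes—that slicing a sample into small subgroups makes real differences hard to detect—can be illustrated with a toy simulation. The numbers below are synthetic and have nothing to do with the NFSS data; they simply show how the same true effect that is reliably detected in a large sample is usually missed once the group being compared shrinks.

```python
# Illustrative simulation (synthetic data, not the NFSS): how shrinking
# a subgroup erodes statistical power to detect a real difference.
import math
import random

def simulate_power(n_per_group, true_diff, sigma=1.0,
                   trials=2000, seed=42):
    """Fraction of trials in which a two-sample z-test (alpha = 0.05,
    two-sided, known sigma) detects a true mean difference."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, sigma) for _ in range(n_per_group)]
        b = [rng.gauss(true_diff, sigma) for _ in range(n_per_group)]
        mean_a = sum(a) / n_per_group
        mean_b = sum(b) / n_per_group
        se = sigma * math.sqrt(2.0 / n_per_group)  # SE of the difference
        if abs(mean_b - mean_a) / se > z_crit:
            hits += 1
    return hits / trials

# A modest real difference (0.3 SD) is almost always detected with 400
# respondents per group, but rarely with only 10 per group.
print(simulate_power(400, 0.3))  # high power
print(simulate_power(10, 0.3))   # low power
```

A non-significant result in the small subgroup, in other words, is not evidence that the difference is absent—only that the analysis no longer has the power to see it.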