Psychology Journal Bans Significance Testing « Science-Based Medicine
Red herring. No one actually says that p>0.05 instantly makes the results "useless". The required significance level always depends on context and on the judgment of experts. For example, in quantum physics experiments you might say that any p>0.001 is not significant. In psychology, perhaps p>0.1 is more appropriate, but it depends on other factors.

Concur with cmind. There may be valid reasons for discouraging use of the p-value, but the 0.05 thing isn't one of them, and you'll notice that it wasn't a reason cited by the article. A p-value of 0.051 would be interpreted pretty much like a p-value of 0.049; nobody really thinks there's a binary cut-off.
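To put both points in concrete terms, here is a small sketch (my own illustration in Python, assuming SciPy is available; the z statistics and alpha levels are arbitrary): two test statistics that differ only in the second decimal place land on opposite sides of p = 0.05, and whether either one "counts" as significant depends entirely on the threshold chosen for the field.

```python
# Illustrative sketch (not from the thread): nearly identical z statistics give
# p-values just either side of 0.05, and the verdict flips with the chosen alpha.
from scipy import stats

alphas = (0.001, 0.05, 0.1)            # e.g. a physics-style, a conventional, and a lenient threshold
for z in (1.95, 1.97):                 # two nearly identical test statistics
    p = 2 * stats.norm.sf(z)           # two-sided p-value under the null
    verdicts = ", ".join(f"alpha={a}: {'yes' if p < a else 'no'}" for a in alphas)
    print(f"z = {z:.2f} -> p = {p:.4f}  ({verdicts})")
```

Running it gives p = 0.0512 and p = 0.0488: practically the same evidence, different verdicts at the 0.05 line, and neither clears 0.001.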
I think my issue about the 0.05 thing has to do with how research gets published. What I mean is, if somebody ends up with p>0.05 in their statistical analysis, how often are their results ignored? I don't mean by the wider community; I mean by the researchers themselves. Negative results aren't published nearly as much as positive results. I should've been more clear.
The worst thing that can happen to a good cause is, not to be skillfully attacked, but to be ineptly defended. - Frédéric Bastiat
I try to deny myself any illusions or delusions, and I think that this perhaps entitles me to try and deny the same to others, at least as long as they refuse to keep their fantasies to themselves. - Christopher Hitchens
Formerly known as BLUELINE976
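Picking up the point above about negative results rarely being published: here is a rough simulation (my own construction, assuming NumPy and SciPy; the true effect, sample size, and study count are made-up numbers) of what that selection does. Every simulated study is unbiased on its own, but if only the ones that clear p < 0.05 get written up, the average published effect overstates the true one.

```python
# Toy file-drawer simulation (illustrative only): many honest studies of a small
# true effect, but only the "significant" ones are counted as published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, n_studies = 0.2, 30, 5000      # small effect, modest samples

published = []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, n)   # one simulated study
    _, p = stats.ttest_1samp(sample, 0.0)      # test against "no effect"
    if p < 0.05:                               # only positive results reach print
        published.append(sample.mean())

print(f"true effect:           {true_effect}")
print(f"studies published:     {len(published)} of {n_studies}")
print(f"mean published effect: {np.mean(published):.2f}")
```

The inflation comes entirely from the selection step, which is why results left in the drawer distort the literature even when every individual study is sound.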
I think a bigger issue, especially in the softer sciences, is the failure of some researchers to differentiate between observational variables and experimental variables. Usually, this means treating the former as if it were the latter. The difference between them is that an observation can only tell you about correlations, whereas an experiment can tell you about causes.
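To make the correlation-versus-causation point concrete, here is a toy example (my own sketch, assuming NumPy; all of the numbers are arbitrary). A hidden confounder drives both the "treatment" and the outcome, so the treatment has no causal effect at all, yet the observational correlation is large; assigning the treatment at random, as an experiment would, makes the association vanish.

```python
# Illustrative sketch: observational association driven purely by a confounder,
# versus a randomized assignment of the same treatment.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
confounder = rng.normal(size=n)                      # hidden variable nobody measured

# Observational data: the confounder drives both who "takes" the treatment and
# the outcome; the treatment itself never enters the outcome at all.
treatment_obs = confounder + rng.normal(scale=0.5, size=n)
outcome = confounder + rng.normal(scale=0.5, size=n)

# Experimental data: the treatment is assigned at random, independent of everything.
treatment_rand = rng.normal(size=n)

print("observational correlation:", round(np.corrcoef(treatment_obs, outcome)[0, 1], 2))
print("randomized correlation:   ", round(np.corrcoef(treatment_rand, outcome)[0, 1], 2))
```

The first number comes out around 0.8 and the second around 0.0, even though the outcome is generated the same way in both cases.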