
bemildred

(90,061 posts)
Sun Oct 27, 2013, 03:13 PM Oct 2013

Science has lost its way, at a big cost to humanity

In today's world, brimful as it is with opinion and falsehoods masquerading as facts, you'd think the one place you can depend on for verifiable facts is science.

You'd be wrong. Many billions of dollars' worth of wrong.

A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology.

The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test — that the original results couldn't be reproduced because the findings were especially novel or described fresh therapeutic approaches.

http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column

Science has lost its way, at a big cost to humanity (Original Post) bemildred Oct 2013 OP
It's known as the "decline effect" and it is indeed a serious problem. Warren Stupidity Oct 2013 #1
It is one of my pet peeves. bemildred Oct 2013 #2
Having taken the time to go through all that now: bemildred Oct 2013 #9
It's pretty disturbing. Warren Stupidity Oct 2013 #12
Why is it disturbing? HuckleB Oct 2013 #14
Medical research is on a pretty shaky foundation. Warren Stupidity Oct 2013 #16
The article is what is truly shaky. More bad "science" journalism. HuckleB Oct 2013 #17
Ironic.... xocet Oct 2013 #3
+1 HuckleB Oct 2013 #4
An interesting piece, but I wonder how much context it lacks. HuckleB Oct 2013 #5
Skepticism is always good. bemildred Oct 2013 #6
OK, I went through them all. bemildred Oct 2013 #18
Anthropomorphizing is a great way to avoid the blame. Igel Oct 2013 #7
Indeed. bemildred Oct 2013 #8
I posted an article a while ago...Frackademia Exposed: Federally Funded, Industry Driven adirondacker Oct 2013 #10
The need of money corrupts everything it touches. nt bemildred Oct 2013 #11
"PubMed Commons ... to counteract the "perverse incentives" in scientific research and publishing kristopher Oct 2013 #13
He means medical "science", as in medicine. jsr Oct 2013 #15
 

Warren Stupidity

(48,181 posts)
1. It's known as the "decline effect" and it is indeed a serious problem.
Sun Oct 27, 2013, 03:32 PM
Oct 2013
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

The philosophical underpinnings of the scientific method require reproducibility as one of the lynchpins of scientific knowledge.

bemildred

(90,061 posts)
2. It is one of my pet peeves.
Sun Oct 27, 2013, 03:38 PM
Oct 2013

The misuse of what is essentially a doctrine and methodology of extreme doubt and empiricism (phenomenology, essentially) to flog propaganda for dogmatic and authoritarian commercial and political bullshit. And we are submerged in that.

bemildred

(90,061 posts)
9. Having taken the time to go through all that now:
Sun Oct 27, 2013, 08:27 PM
Oct 2013

It sounds like a lot of cases of outlier results being followed by reversion to the norm, plus a certain optimism about the reliability of statistical methods, plus confirmation bias; it's sort of like magical thinking. I'm appalled at what I read there, now that I think about it.
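(To put that "reversion to the norm" point in concrete terms, here is a toy simulation, not something from the thread: if you only follow up on results that crossed the significance threshold, the replications will on average come in smaller even when nothing improper happened. The numbers below are made up for the example.)

```python
# Toy simulation of the "decline effect" as selection plus regression to the mean:
# keep only original studies that hit significance, then replicate them with the
# same sample size and compare average effect sizes.
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_per_arm = 5000, 20
true_effect = 0.2                         # modest true standardized effect
se = np.sqrt(2.0 / n_per_arm)             # approx. SE of a two-sample mean difference (SD = 1)

original = rng.normal(true_effect, se, n_studies)   # observed effect sizes
significant = original / se > 1.96                  # selected for publication/follow-up

replication = rng.normal(true_effect, se, significant.sum())

print(f"mean original effect (significant only): {original[significant].mean():.2f}")
print(f"mean replication effect:                 {replication.mean():.2f}")
print(f"true effect:                             {true_effect}")
# Typical result: the selected originals average well above the true effect,
# while the replications drop back toward it -- a "decline" produced by
# selection and noise alone, with no misconduct anywhere.
```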

 

Warren Stupidity

(48,181 posts)
16. Medical research is on a pretty shaky foundation.
Mon Oct 28, 2013, 07:50 AM
Oct 2013

Read the linked article. What we "know" is highly questionable.

xocet

(3,871 posts)
3. Ironic....
Sun Oct 27, 2013, 06:13 PM
Oct 2013

It seems that the main initial complaint is that a (possibly large) number of biomedical research studies have not been confirmed by independent studies and that the original researchers are driven to publish questionable and sensational results so that they can achieve a measure of professional success - however fleeting that may be.

The irony is that in order to achieve journalistic success - however fleeting that may be - the author of the article resorts to a sensational, over-broad headline to garner attention for his story.

Science is a lot more than just biomedical research, and the fact that other scientists are going back over original research to check it is not in itself a problem. The real problem is that science is not properly supported financially, but one cannot have that when the House Committee on Science, Space, and Technology is run by a bunch of Republican Congressmen.

At any rate, the article's sensationalistic headline serves neither the scientific community nor the journalistic community.


Science has lost its way, at a big cost to humanity
Researchers are rewarded for splashy findings, not for double-checking accuracy. So many scientists looking for cures to diseases have been building on ideas that aren't even true....

By Michael Hiltzik

October 27, 2013

...

The Economist recently estimated spending on biomedical R&D in industrialized countries at $59 billion a year. That's how much could be at risk from faulty fundamental research.

...

http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column#axzz2ixV2MVQI


Here is a suggestion for a follow-up article:


Journalism has lost its way, at a big cost to humanity
Journalists are rewarded for splashy headlines, not for accuracy. So many citizens looking for reliable information upon which to base their opinions have been building on ideas that aren't even true....

By Michael Hiltzik

October 28, 2013

....

http://www.latimes.com/business/la-fi-hiltzik-20131038,1,2339992.column#axzz3ixV3MVQII


404


HuckleB

(35,773 posts)
5. An interesting piece, but I wonder how much context it lacks.
Sun Oct 27, 2013, 06:56 PM
Oct 2013

Pilot studies are almost always "wrong." That's nothing new. It doesn't mean they're not valuable.

Are Most Medical Studies Wrong?
http://theness.com/neurologicablog/index.php/are-most-medical-studies-wrong/

Reporting Preliminary Findings
http://www.sciencebasedmedicine.org/reporting-preliminary-findings/

Further, there is a growing movement to push for the publication of negative studies. (THANK GOODNESS!)

Negative results in medical research and clinical trials – an interview with Ben Goldacre
http://blog.f1000research.com/2013/06/10/negative-results-in-medical-research-and-clinical-trials-an-interview-with-ben-goldacre/

bemildred

(90,061 posts)
6. Skepticism is always good.
Sun Oct 27, 2013, 07:17 PM
Oct 2013

I thought that was the point of the OP, really: more skepticism. So we can certainly turn that back on it, and thereby reaffirm its point.

WRT your posts, I would want any serious work published, positive, inconclusive, or negative. Of course, you are going to have fights over what is and is not "serious", but sometimes there is just no closed-form solution and you have to rely on heuristics.

I think all findings are preliminary, and ought to stay that way.

I do not know enough to comment on what ought or ought not to be published in medical research beyond what I already have, and that was my math training speaking.

bemildred

(90,061 posts)
18. OK, I went through them all.
Mon Oct 28, 2013, 12:01 PM
Oct 2013

I don't really see that they disagree with each other much.

I do find Ioannidis' argument very questionable, but since he is just arguing for plausibility, it seems OK.

It's a very narrow discussion, which Hiltzik transposes into a wider context.

I think Ioannidis' summary is as good as any:

How Can We Improve the Situation?

Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability.

Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown “gold” standard. However, large studies may still have biases and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistical significant difference for a trivial effect that is not really meaningfully different from the null [32–34].

Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized controlled trials.

Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values—the pre-study odds—where research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test [36].

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context.


Since he is basically arguing for more context, for considering studies in their context, and for less enthusiasm, more doubt, and more looking around, I would think that would be good.
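(For concreteness, the arithmetic behind Ioannidis's "pre-study odds" argument fits in a few lines. This is a minimal sketch of the post-study probability formula from his paper, PPV = (1 − β)R / (R − βR + α); the parameter names and example values here are mine.)

```python
def ppv(pre_study_odds_r, alpha=0.05, power=0.8):
    """Post-study probability that a statistically significant finding is true,
    using PPV = (1 - beta) * R / (R - beta * R + alpha), ignoring bias and
    multiple competing teams."""
    beta = 1.0 - power
    return (power * pre_study_odds_r) / (pre_study_odds_r - beta * pre_study_odds_r + alpha)

# Exploratory, hypothesis-generating field: maybe 1 true relationship per 100 probed.
print(f"R = 0.01 -> PPV = {ppv(0.01):.2f}")   # about 0.14: most "findings" are false
# Well-grounded question going into a large confirmatory trial.
print(f"R = 1.00 -> PPV = {ppv(1.00):.2f}")   # about 0.94
```

Which is exactly his point about targeting large, low-bias studies at questions where the pre-study probability is already considerably high.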

Igel

(35,323 posts)
7. Anthropomorphizing is a great way to avoid the blame.
Sun Oct 27, 2013, 07:47 PM
Oct 2013

It's no "science." It's "scientists."

Reproducibility matters, but there's no career gain in duplicating research. You wind up reproducing research only as graduate-student training, when you use it in your own work and fail to get the expected results and go backtracking for the reason, or when you think it's whacked.

Lots of research never gets checked. Much of that gets believed, however.


Often the problem is that the scientists, psychologists, etc., have taken a "statistics for the _________ sciences" class, one that focuses on how to use various statistical techniques without actually having to understand the underpinnings of the techniques you're using.

Saw dozens of prestigious, peer-reviewed papers come unravelled when a guy with an MS in statistics took on his PI's groundbreaking paper and showed that, for that technique, the most common stat reference had omitted an important point: you start counting degrees of freedom not from 1 but from 0. (Later on there's a +1 added to n, but it simplifies things.) Everybody started counting from 1. Oops. That was only discovered because the PI thought a competitor's article was horribly wrong but couldn't find the error. So he had a couple of students replicate it. When it didn't fail replication, he had them reanalyse his own data. It was a problem, so he had a student who was about to dissertate summarize the literature for the lab. Anybody without a good grounding in stats would have beaten his head against the wall.
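(The anecdote doesn't say which technique or test was involved, but here is a generic illustration of how much an off-by-one in degrees of freedom can matter near the threshold, using a t distribution via SciPy.)

```python
# Off-by-one in degrees of freedom near p = 0.05. Purely illustrative;
# the actual technique from the anecdote above is not specified.
from scipy import stats

t_stat = 2.75
for df in (4, 5):                     # counting from 0 vs. counting from 1
    p = 2 * stats.t.sf(t_stat, df)    # two-sided p-value
    print(f"df = {df}: p = {p:.3f}")
# With few observations the two p-values land on opposite sides of the
# conventional 0.05 cutoff; with large samples the difference nearly vanishes.
```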

I've seen other papers where the data were assumed to be random but had been carefully constructed. The stats weren't applicable, but you couldn't tell that from the paper or from the dataset that was made available online.


This, of course, is a different matter from the sweeping generalizations you see in the popular science press. And the even more egregious examples in mainstream science reporting.

bemildred

(90,061 posts)
8. Indeed.
Sun Oct 27, 2013, 08:01 PM
Oct 2013

I am reminded of the propensity of software engineers to fail to test their own code thoroughly, to fail to build a test harness for it as they go along, or to fail to test it at all as they go.

I remember the start-from-zero thing from the stats course, but it never became natural until I started coding C, where it is the norm for everything except public display.

Every time I look at randomness I start thinking we don't have any idea what we are talking about. But it's useful.
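(A "test harness as you go along" doesn't have to be elaborate. A generic Python sketch, not tied to anything in this thread: the checks sit next to the code and run every time the module is executed directly.)

```python
# Minimal "test as you go" harness: assertions live alongside the code they check.
def running_mean(values):
    """Mean of a sequence; the kind of small routine that rarely gets tested."""
    return sum(values) / len(values) if values else 0.0

def test_running_mean():
    assert running_mean([]) == 0.0            # edge case: empty input
    assert running_mean([1, 2, 3]) == 2.0     # ordinary case
    assert running_mean([1, 2]) == 1.5        # non-integer result

if __name__ == "__main__":
    test_running_mean()
    print("all checks passed")
```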

adirondacker

(2,921 posts)
10. I posted an article a while ago...Frackademia Exposed: Federally Funded, Industry Driven
Sun Oct 27, 2013, 09:11 PM
Oct 2013

by Katherine Cirullo


"Recently, Steve Horn of the DeSmog Blog uncovered shocking information that leaves us shaking our head at our nation’s leaders and our once trusted scholars. Embedded in the Energy Policy Act of 2005 is section 999, which describes the U.S. Department of Energy-run Research Partnership to Secure Energy for America (RPSEA). We knew previously that oil and gas companies and industry executives have funded and advised academic research on fracking, but the U.S. government has a major role in these projects, too. Federal funding of oil and gas industry controlled “frackademia” leaves us concerned for the future of fracking, and for our air, water and public safety."

https://www.commondreams.org/view/2013/09/18-5

kristopher

(29,798 posts)
13. "PubMed Commons ... to counteract the "perverse incentives" in scientific research and publishing
Sun Oct 27, 2013, 11:45 PM
Oct 2013

This is encouraging news, imo. It shows movement towards open access and long-term accountability - things that translate directly into better quality science.

...PubMed Commons is an effort to counteract the "perverse incentives" in scientific research and publishing, says David J. Lipman, director of NIH's National Center for Biotechnology Information, which is sponsoring the venture.

The Commons is currently in its pilot phase, during which only registered users among the cadre of researchers whose work appears in PubMed — NCBI's clearinghouse for citations from biomedical journals and online sources — can post comments and read them. Once the full system is launched, possibly within weeks, commenters still will have to be members of that select group, but the comments will be public.

Science and Nature both acknowledge that peer review is imperfect. Science's executive editor, Monica Bradford, told me by email that her journal, which is published by the American Assn. for the Advancement of Science, understands that for papers based on large volumes of statistical data — where cherry-picking or flawed interpretation can contribute to erroneous conclusions — "increased vigilance is required." Nature says that it now commissions expert statisticians to examine data in some papers.

But they both defend pre-publication peer review as an essential element in the scientific process — a "reasonable and fair" process, Bradford says.

Yet there's been some push-back by the prestige journals against the idea that they're encouraging flawed work ...

jsr

(7,712 posts)
15. He means medical "science", as in medicine.
Mon Oct 28, 2013, 02:57 AM
Oct 2013

It's well known that much of medical research is make-work garbage.
