Boojatta

(12,231 posts)
Sun Mar 25, 2012, 04:10 PM

Question for Determinists

Intro to Question:
Suppose that some pharmaceutical is being tested for its effectiveness and safety in preventing, curing, or treating some medical condition. Suppose, in particular, that a double-blind testing protocol, with some randomization procedure, is used. Regardless of intentions, the outcome of a so-called "randomization" procedure is, by assumption, predetermined.

Question:
How do we know that the procedures that are intended to provide randomization for double-blind testing aren't providing more-or-less accurate predictions? How do we know that the difference between receiving the pharmaceutical and receiving a placebo is what causes the difference in the two sets of outcomes?
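To make the worry concrete: in practice, "randomization" usually means a seeded pseudorandom number generator, so the whole treatment assignment is a deterministic function of the seed. A minimal Python sketch of the point (my illustration, not any actual trial protocol; the seed and group labels are invented):

import random

def assign_groups(patient_ids, seed=42):
    """'Randomly' assign patients to drug or placebo.

    With a fixed seed the assignment is completely predetermined:
    anyone who knows the seed can reproduce, i.e. predict, every
    outcome of the "randomization" procedure.
    """
    rng = random.Random(seed)  # seeded PRNG: deterministic by design
    return {pid: rng.choice(["drug", "placebo"]) for pid in patient_ids}

ids = list(range(10))
print(assign_groups(ids) == assign_groups(ids))  # True: same seed, same "random" split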

Question for Determinists (Original Post) Boojatta Mar 2012 OP
Faith? orpupilofnature57 Mar 2012 #1
It all depends on who looks at the data Speck Tater Mar 2012 #2
Do you know how each group evaluated the data? Jim__ Mar 2012 #3
This was forty years ago, and I don't recall the details. However, Speck Tater Mar 2012 #8
You just described the human condition deacon_sephiroth Mar 2012 #4
The term 'pseudoskeptic' tama Mar 2012 #6
There are producers of random data tama Mar 2012 #5
Suppose the pharmaceutical is polonium 210 FarCenter Mar 2012 #7
 

Speck Tater

(10,618 posts)
2. It all depends on who looks at the data
Sun Mar 25, 2012, 05:48 PM

In graduate school I prepared a batch of computer-simulated data with a small but deliberate bias in the outcomes. I gave identical copies of the data to two groups of math and physics professors, asking them whether the data was truly random or showed any signs of systematic bias.
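A sketch of how such data might be generated (my Python reconstruction; the post doesn't give the original code, and the 0.52 hit rate is an invented bias):

import random

def biased_trials(n, p_hit=0.52, seed=1):
    """Simulate n binary trials with a small, deliberate bias.

    An unbiased source would give p_hit = 0.5; 0.52 is a bias small
    enough to look random at a glance but detectable in a large sample.
    """
    rng = random.Random(seed)
    return [1 if rng.random() < p_hit else 0 for _ in range(n)]

data = biased_trials(10_000)
print(sum(data) / len(data))  # roughly 0.52 rather than 0.50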

Group 1 was told the data represented the output from a hardware random number generator I was designing.
Group 2 was told the data came from an ESP experiment.

Everyone in group 1 agreed unanimously that the data showed a definite systematic bias and that the random number generator was flawed because it was not behaving in a truly random manner.

Everyone in group 2 agreed unanimously that the data was completely random and showed no bias of any kind, and that the outcome was due entirely to chance.

So the same data will lead to different conclusions depending on

1. The preconceptions of the person looking at the data, and
2. What they think the data they are looking at represents

This taught me to be skeptical of the skeptics. They believe that they know far more than they actually do. For them, the data, regardless of what it objectively contains, will be seen to confirm their own beliefs.

Jim__

(14,083 posts)
3. Do you know how each group evaluated the data?
Mon Mar 26, 2012, 02:07 PM

Tests for randomness are usually deterministic mathematical tests. But, of course, you can run the data against different hypotheses, and you can agree to accept different levels of error.

For Group 1, were the random numbers supposedly from a uniform(0,1) distribution? I think that's where most random number generators begin. There may be a standard suite of tests to run against this type of data.
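For illustration, one standard check on supposedly uniform(0,1) data is a chi-square goodness-of-fit test on binned counts. A minimal sketch, assuming SciPy is available (the bin count and the 0.05 error level are my assumptions, not from the post):

import random
from scipy.stats import chisquare

# Data that is supposed to be uniform(0,1): bin it and compare the
# observed bin counts against the equal counts a uniform source implies.
rng = random.Random(0)
sample = [rng.random() for _ in range(10_000)]

bins = 10
observed = [0] * bins
for x in sample:
    observed[min(int(x * bins), bins - 1)] += 1

stat, p = chisquare(observed)  # expected counts default to equal frequencies
print(f"chi2 = {stat:.1f}, p = {p:.3f}")
# Declare the data non-random only if p falls below the error level
# agreed on in advance, e.g. 0.05.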

For Group 2, how did data that could be generated by a random number generator represent data from an ESP experiment? Testing this type of data may be more ad hoc than tests of random number generators.

The experiment that you describe is interesting. I think that the statistical tests of data are accurate within the given error bounds of the tests. But how people decide which tests to run, and what error level to accept, could carry a psychological bias. Did you make that type of determination?

 

Speck Tater

(10,618 posts)
8. This was forty years ago, and I don't recall the details. However,
Mon Mar 26, 2012, 11:16 PM

at the time, the conclusion I came to was that whether a deviation was considered significant or not depended on a subjective judgement of the probability of the hypothesis claimed to explain the deviation. It is very believable that a grad student could design a faulty piece of hardware that failed to generate truly random numbers. It is very difficult to believe that something happened which you think contradicts the basic laws of nature.

So you have a balancing act:

Case 1: probability of student error vs. probability that the deviation is random
Case 2: probability of something you believe to be impossible vs. probability that the deviation is random

Or an even more extreme case: suppose you were told that the deviations were caused by Leprechauns dancing on the computer when the trials were run. Which would you believe:

Case 3: probability that dancing Leprechauns caused the deviation vs. probability that the deviation is random.

Given those two extreme alternatives, any sane person would conclude that the deviation was random before admitting that dancing Leprechauns had anything to do with it.
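That balancing act reads naturally in Bayesian terms: the same data supply one likelihood ratio, and what differs across the three cases is the prior. A worked Python sketch (every number here is invented purely for illustration):

def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Suppose the observed deviation is 20x likelier under the offered
# hypothesis than under pure chance (an invented likelihood ratio).
LR = 20.0

for name, prior in [("student hardware error", 0.10),
                    ("ESP", 1e-6),
                    ("dancing Leprechauns", 1e-12)]:
    print(f"{name}: prior {prior:g} -> posterior {posterior(prior, LR):.2e}")

Identical data, identical likelihood ratio: the plausible-error hypothesis climbs to about 0.69, while the Leprechaun hypothesis stays vanishingly small.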

The problem is that events considered impossible under today's paradigm might not actually be impossible. The French Academy rejected claims of meteors falling from space because, under their paradigm, such a thing was impossible. So suppose, just suppose, that some small but measurable ESP effect did exist, and that some theoretical framework were found to explain it without "violating the laws of physics". Then, given plausible theoretical underpinnings, the balance between ESP and random deviation might shift, and the same data could be seen to support the claim of real ESP. But such a change of attitude can never happen without a paradigm shift, and so the reigning paradigm rejects statistically significant data because accepting it as significant would be anathema within the paradigm.

And so surgeons could not be convinced to wash their hands between an autopsy and a surgery, and respected astronomers could not be persuaded to believe that rocks were floating around in outer space, and scientists (except physicists, of course. But they're always ahead of their time) cannot be persuaded to accept the possibility that information transfer between biological computers is possible without violating any laws of physics.

Arthur C. Clarke put it best: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

Or as Bertrand Russell said: "What we need is not the will to believe but the will to find out." And those who see only impossibility in the data lack the will to find out. Or, to quote my own words from an essay on the subject: "If they say 'I can't imagine how this could be true,' they have not proven it false; they have simply confessed to a failure of imagination."

deacon_sephiroth

(731 posts)
4. You just described the human condition
Mon Mar 26, 2012, 04:30 PM

the desire for your thoughts and ideas to be right, or supported by "data" of some kind, is just part of being a human being. It's nice to feel right, but the greater discussion is how do we KNOW what's right and wrong, and what can we not know. Be skeptical; critical thinking is the name of the game. And yes, be skeptical of the skeptics, but forgive me if I'm skeptical of you. I've learned to be skeptical of those who are skeptical of the skeptics.

 

tama

(9,137 posts)
6. The term 'pseudoskeptic'
Mon Mar 26, 2012, 06:17 PM

was coined by Marcello Truzzi, a cofounder of CSICOP. He has been called the skeptic's skeptic, and his criticism can be taken as a Pyrrhonian skeptical critique of scientific skepticism, which likes to believe in at least something: at least in the individual and cultural prejudices of the scientific skeptic in question.

 

tama

(9,137 posts)
5. There are producers of random data
Mon Mar 26, 2012, 06:09 PM

based on the QM theory, or interpretation, that physical events like radioactive decay are "genuinely" random.

But even there, in the randomness of radioactive decay, there is a strange regularity called the Shnoll effect, first observed in biomatter. Some info here: http://noosphere.princeton.edu/shnoll2.html

