
 Topic: ESP evidence airs science's dirty laundry

  • ESP evidence airs science's dirty laundry
     OP - January 17, 2012, 07:35 PM

    A barrage of experiments seems to show that we can predict the future – but they may tell us more about the scientific method

    FEW sounds quicken the pulse like the clatter of the roulette ball as it drops. Fortunes can be won or lost as it settles into its numbered slot. Find a way to predict where it will come to rest, and you would soon become the envy of every other gambler and the worst nightmare of every casino.

    Michael Franklin thinks he might be able to make that claim. Over thousands of trials, he seems to have found a way to predict, with an accuracy slightly better than chance, whether the ball will fall to red or black. You'll find no divining rods or crystal balls about Franklin's person. Nor does he operate from a murky tent swathed in lace and clouded with the fumes of burning incense; he works in a lab at the University of California, Santa Barbara.

    Franklin is one of a small group of psychologists who are investigating precognition - the ability to foretell the future. Astonishingly, some of the groups, including Franklin's, are returning positive results. "I still want to see that I can actually win money with this," says Franklin, who rates his confidence in the data so far at about 7.5 out of 10. "As a scientist, I need to be agnostic."

    If precognition does turn out to be real, it will shake the foundations of science and philosophy. Few researchers will be putting money on this conclusion, though; most expect that the puzzling results will begin to evaporate as others attempt to repeat the experiments. Even so, the episode could change science as we know it. Franklin and his colleagues are all using standard research methods that normally go unquestioned. If those methods can lead respected scientists to such startling errors, how many other studies might be similarly flawed?

    A closer look

    Despite widespread scepticism from mainstream scientists, studies of precognition and other forms of extrasensory perception crop up time and again. In the 1940s it was card-guessing, in the 1980s the ability to influence random-number generators, and in the last decade so-called "presentiment" - in which volunteers showed changes in skin conductance just before they saw disturbing images. In every case, however, independent researchers failed to repeat the initial results, eventually concluding that they were the result of procedural flaws or coincidence.

    However, a year ago even the sceptics had to take a closer look when Daryl Bem, a well-respected psychologist at Cornell University, New York, reported some positive results in the Journal of Personality and Social Psychology (vol 100, p 407). "When Daryl Bem speaks, we listen," says Jeff Galak, of Carnegie Mellon University in Pittsburgh, Pennsylvania.

    The appeal of Bem's work is that he used well-accepted psychological experiments, but with a twist. Countless studies, for example, have established that writing out a list of words makes it easier to recall those words later. Bem merely switched the order of the two events. His subjects viewed the list briefly and were tested on their initial recall. They were then given a smaller, random selection of words from the same list, which they were asked to type out and memorise. Surprisingly, they were more likely to have remembered these words during the initial memory test - before they had even practised them. The difference between the two sets of words was slight - only 2.27 per cent - but statistically significant.

    Another of Bem's experiments focused on a psychological effect known as habituation: if volunteers are asked to choose between two similar images, they will tend to prefer an image they have seen before over one they have not. Again, Bem reversed the sequence - he showed subjects two new images and asked them to choose which one they liked better, then showed them one of the two images again immediately afterwards. Bem found that about 54 per cent of the time, people preferred the image they would end up seeing again.

    He reported nine such time-reversed experiments, in eight of which later events appeared to influence people's earlier behaviour. The effects were subtle, typically swaying the results by 2 to 4 per cent, but Bem's statistical tests suggested that they were unlikely to have occurred by chance. Taken together, the experiments strongly suggested that people could indeed "feel the future", Bem concluded.

    Since Bem's study appeared, two other researchers presented similar results at the Towards a Science of Consciousness conference in Stockholm, Sweden, in May 2011. Franklin was one of them. He asked volunteers to identify certain complex geometric shapes, some of which they would use for practice later on. In line with Bem's results, their performance on the first task seemed to be affected by the shapes they saw in the later practice run.

    Franklin wondered whether he could adapt his method to make more useful predictions. Suppose the choice of practice shapes was determined by the spin of a roulette wheel, for example. He could then use the subject's test results to predict whether the ball would fall on red or black ahead of time.

    "If you know which practice condition they're going to get, then you can say OK, I'm going to predict that the roulette spin is going to be red," Franklin says. Sure enough, when he actually tried this he correctly called the fall of a roulette ball in another room about 57 per cent of the time - enough to make a tidy profit, had he bet real money on a game.

    Dick Bierman, a physicist and cognitive psychologist at the University of Amsterdam in the Netherlands, meanwhile, showed volunteers a standard optical illusion known as a Necker cube - a two-dimensional drawing of a cube that appears to alternate between top and bottom views. The subjects had to indicate which view they saw and when their perception switched.

    Then Bierman showed each volunteer a filled-in drawing of a cube that was unambiguously either top or bottom view. Those who later saw the top view spent more time perceiving the Necker cube in top view, while those who later saw the bottom view tended to see the Necker cube in bottom view, he found. It was as if their initial view predicted what they would be shown the second time. As with Bem's study, the difference was slight - about 3 per cent - but unlikely to have occurred by chance.

    At first glance, these experiments appear to blow the conventional notion of cause and effect out of the water, suggesting that your hunches may really tell you something about the future. Perhaps you could even "remember" next week's lottery results.

    Surprisingly, none of this would break the laws of physics as we know them, since the equations of physics are mostly "time-symmetric", notes Daniel Sheehan, a physicist at the University of California, San Diego. "If you were to say the past influences the present, everyone would say OK. But you can equally say that the future boundary conditions affect the present," says Sheehan, who organised a symposium on reverse causation in 2011 for a regional meeting of the American Association for the Advancement of Science in San Diego. "The future has an equal say in the present as does the past."

    Cracks appear

    Indeed, even some physics experiments can be interpreted as causation moving backward in time. A photon passing through one or two narrow slits will behave as either a wave or a particle, depending on how many slits are open - even if the slits are adjusted after the photon has passed through (Science, vol 315, p 966). The experiment can also be explained without invoking reverse causation, but the arguments tend to be tortuous, says Sheehan.

    The vast majority of scientists won't be embracing these interpretations just yet. "Before you jump to attempts to accommodate these phenomena in physics, show us that the phenomena are there," says James Alcock, a psychologist at York University in Toronto, Canada. And if you take a closer look at the latest studies, cracks begin to appear in their seemingly convincing facade.

    To begin with, the statistical analyses, which are meant to weed out results that could have occurred by chance, may in fact be hiding false positives. Using the standard technique, researchers try to estimate the probability of their results occurring under a so-called "null hypothesis" that nothing unusual is going on - in this case, the idea that precognition does not occur. If this probability is suitably small, they conclude that an alternative hypothesis must be true instead - in these experiments, the idea that precognition really exists.

    But nowhere in this process do the experimenters calculate how likely the observations would be under the alternative hypothesis. When you do perform this calculation, using a different method known as Bayesian statistics, the results can be disconcerting. Suppose you toss a coin 1000 times and end up getting 527 tails. The odds of getting this far away from a 50:50 split with a fair coin are a little less than 1 in 20. It's tempting to think this means the odds are 20 to 1 that the coin is biased - but that's not the case, says Jeff Rouder, a psychologist at the University of Missouri in Columbia. Even the best alternative - a coin that gives tails 52.7 per cent of the time - yields a probability of just 0.025 of getting exactly 527 tails, which is only a four-fold improvement over the odds of producing exactly 527 tails with a fair coin. "In other words, this new alternative is not much more probable than the one I rejected," says Rouder. Other alternatives - that the coin comes up tails 55 per cent of the time, say - offer even less of an edge over the null.
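
    Rouder's arithmetic is easy to check. Here is a minimal sketch in Python, using the 527-tails-in-1000-flips example above (scipy is assumed purely for convenience; any binomial calculator would do):

        # Checking the coin-toss arithmetic from the paragraph above.
        from scipy.stats import binom

        n, k = 1000, 527

        # Classical tail probability: chance of a result at least this far
        # above 50:50 with a fair coin - "a little less than 1 in 20".
        p_value = binom.sf(k - 1, n, 0.5)

        # Probability of *exactly* 527 tails under the fair coin, and under
        # the best-supported alternative (tails 52.7% of the time).
        p_exact_null = binom.pmf(k, n, 0.5)
        p_exact_alt = binom.pmf(k, n, k / n)

        print(f"one-sided p-value: {p_value:.3f}")       # ~0.047
        print(f"exact, fair coin:  {p_exact_null:.4f}")  # ~0.006
        print(f"exact, 52.7% coin: {p_exact_alt:.4f}")   # ~0.025
        print(f"likelihood ratio:  {p_exact_alt / p_exact_null:.1f}")  # ~4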

    Looking at Bem's data in this way, Bayesian reanalyses by Eric-Jan Wagenmakers, a mathematical psychologist at the University of Amsterdam, and Rouder found the case for precognition to be unconvincing. Such analyses have their own issues, though, since Bayesian statisticians must initially estimate the size of the possible effect - in this case, how much precognition should sway the results of the tests. Bem argues that Wagenmakers's estimate was too high. His own Bayesian analysis concluded that his experiments, taken as a group, provided odds of about 13,000 to 1 in favour of precognition.
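
    To see how much that prior matters, here is a toy Bayesian comparison in Python - an illustrative sketch only, not Bem's or Wagenmakers's actual analysis. It reuses the coin example above, treating the effect as a small bias above 50 per cent whose assumed size is set by a half-normal prior:

        # Toy Bayesian model comparison: how the Bayes factor for a small
        # effect depends on the prior placed on the effect size.
        from scipy import integrate, stats

        n, k = 1000, 527  # the coin example again

        def bayes_factor(prior_sd):
            # H0: p = 0.5 exactly. H1: p = 0.5 + e, with the bias e drawn
            # from a half-normal prior of scale prior_sd (truncation of
            # the prior at p = 1 is ignored for simplicity).
            def integrand(e):
                return (stats.binom.pmf(k, n, 0.5 + e)
                        * stats.halfnorm.pdf(e, scale=prior_sd))
            marginal_h1, _ = integrate.quad(integrand, 0, 0.5,
                                            points=[k / n - 0.5])
            return marginal_h1 / stats.binom.pmf(k, n, 0.5)

        for sd in (0.01, 0.1, 0.3):
            print(f"prior sd = {sd}: BF (effect vs null) = {bayes_factor(sd):.2f}")

    With these numbers the verdict flips with the prior's width: a narrow prior (a small expected effect) returns modest odds in favour of a bias, while a broad prior actually favours the fair coin - which is, in miniature, the Bem-Wagenmakers disagreement over how big an effect one should expect.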

    Whatever the outcome of this debate, most sceptics agree that another factor - prevalent throughout science - may have boosted the chances of producing these puzzling results. The issue arises from the difficulties of setting up a study, when you choose which variables to measure, how many samples to take, and which confounding factors to consider.

    Ideally, all these decisions are made ahead of time, before any measurements have been collected. But in practice, experimenters often wing it, adjusting their techniques based on the results they see. For example, a researcher might measure two related variables, then report the one that gives the clearer (that is, more significant) result - providing two places for a false positive to pop up. The common practice of "optional stopping", which involves analysing the data, then collecting more if the result is not quite significant, can also sway the chances of a false result.
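
    Both practices are easy to simulate. The sketch below (a toy simulation, not taken from any of the studies discussed) generates pure noise - so the null hypothesis is true by construction - then applies each trick with a nominal 5 per cent significance threshold:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_sims = 2000

        def pick_better_of_two(n=40, r=0.5):
            # Measure two correlated outcome variables and report
            # whichever gives the smaller p-value.
            hits = 0
            for _ in range(n_sims):
                x = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n)
                p1 = stats.ttest_1samp(x[:, 0], 0).pvalue
                p2 = stats.ttest_1samp(x[:, 1], 0).pvalue
                hits += min(p1, p2) < 0.05
            return hits / n_sims

        def optional_stopping(start=20, step=10, max_n=100):
            # Peek at the data after every batch of subjects and stop
            # as soon as the result is "significant".
            hits = 0
            for _ in range(n_sims):
                x = list(rng.standard_normal(start))
                while stats.ttest_1samp(x, 0).pvalue >= 0.05 and len(x) < max_n:
                    x.extend(rng.standard_normal(step))
                hits += stats.ttest_1samp(x, 0).pvalue < 0.05
            return hits / n_sims

        # Both come out well above the nominal 5% false-positive rate.
        print("report the better of two variables:", pick_better_of_two())
        print("optional stopping:                  ", optional_stopping())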

    Torturing the data

    When researchers pick over their results in this way, they are in essence "torturing the data until they confess", says Wagenmakers. The effect can be huge: a study with four common errors of this sort could end up with false positives more than 60 per cent of the time, according to simulations by Leif Nelson, a psychologist at the University of California at Berkeley, and his colleagues.

    Bem says he has been careful to avoid these pitfalls. However, the sceptics - and even some who are inclined to think the results are real - wonder whether some unknown factors may have remained unchecked. "There's a genuine concern that these effects - including the ones that I've observed - could be a product of all the different ways that you could analyse the data to make the research fit the hypothesis," says Jonathan Schooler, a psychologist at the University of California at Santa Barbara who is a co-author of Franklin's study.

    Fortunately, there is a straightforward way to settle the question - repeating the studies, with all procedures specified ahead of time in a public forum where everyone can see them. This eliminates the flexibility that can inflate the risk of false positives. If these studies still find evidence of precognition, then it deserves a closer look. If they do not, then the initial results were probably just false positives.

    Bem, to his credit, has made this easier by making his set-up available to other researchers, and several replications have already been done. One, by Eugene Subbotsky of the University of Lancaster, UK, found results almost identical to Bem's, while six others, including tests by Galak and Nelson, Schooler, Wagenmakers and Richard Wiseman of the University of Hertfordshire, UK, have failed to find any supporting evidence. Franklin, too, has finished a replication of his results that gave just barely significant support for precognition.

    In the end, most scientists expect that most replications will fail and the current controversy over precognition will fade away, as all previous theories about ESP have before. "People are going to fail to replicate it, and that's what's going to make his statistics unimpressive in the end," says Rouder.

    But even if precognition does amount to nothing, Bem and Franklin may have made an important contribution to science by drawing our attention to how easily good researchers can be misled. After all, ESP researchers are not the only ones who torture their data. Thanks to increased computing power, researchers in every field have started to test one variable after another in search of interesting results, laying the ground for false positives to pop up like mushrooms after rain. "I think an awful lot of what's published out there is wrong," says Jim Berger, a statistician at Duke University in Durham, North Carolina.

    To remedy this, Schooler, Nelson and others are calling for a more rigorous approach, in which researchers routinely do what some are now doing for Bem's work: declare the experimental set-up beforehand, and report the results no matter what they look like.

    The proposal may be more radical than it seems, because most researchers are reluctant to do "mere replications" that aren't breaking new ground.

    "It's systemic," says Wiseman. "Funders don't like funding replications, scientists don't like carrying them out, and journals don't like publishing them." Changing that mindset may be the biggest challenge of all.

    Bob Holmes is a consultant for New Scientist based in Edmonton, Canada

    http://www.newscientist.com/article/mg21328471.800-esp-evidence-airs-sciences-dirty-laundry.html?full=true
  • Re: ESP evidence airs science's dirty laundry
     Reply #1 - January 17, 2012, 08:40 PM

    I find reincarnation research interesting as well.

    Say to the beauty in the black veil:
    what have you done to a devout worshipper?

    He had girded up his garments for prayer
    until you appeared to him at the mosque door.

    Give him back his prayers and his fasting;
    do not slay him, by the right of Muhammad's religion.
  • Re: ESP evidence airs science's dirty laundry
     Reply #2 - January 17, 2012, 09:06 PM

    ^Elaborate.
  • Re: ESP evidence airs science's dirty laundry
     Reply #3 - January 18, 2012, 03:16 AM

    Long article. I mean, precognition makes sense if you believe in a B-series view of time.

    I would be interested to see if the same results happened a year later, or if there was a direct statistical correlation between length of elapsed time and pre-cognition.
  • Re: ESP evidence airs science's dirty laundry
     Reply #4 - January 18, 2012, 03:29 AM

    GUYS, THE EXISTENCE OF PRECOGNITION OR OTHERWISE IS NOT THE POINT OF THE ARTICLE - READ IT ALL. It's about the shortfalls of modern scientific research. Which is kinda important, ya know? -_-
  • Re: ESP evidence airs science's dirty laundry
     Reply #5 - January 18, 2012, 03:37 AM

    Something you might find interesting

    http://www.youtube.com/watch?v=hBNeuG10-ac

  • Re: ESP evidence airs science's dirty laundry
     Reply #6 - January 18, 2012, 03:40 AM

    Couldn't get past the first 30 seconds.
  • Re: ESP evidence airs science's dirty laundry
     Reply #7 - January 18, 2012, 03:42 AM

    Suit yourself.

  • Re: ESP evidence airs science's dirty laundry
     Reply #8 - January 18, 2012, 03:44 AM

    Lulz, I wouldn't mind reading a transcript, but I don't have the attention span to watch even a minute-long vid at the best of times, and the guy's voice is a little too soothing.
  • Re: ESP evidence airs science's dirty laundry
     Reply #9 - January 18, 2012, 03:51 AM

    Well, here's a few articles that the video references:

    Why Most Published Research Findings are False
    Are Most Medical Studies Wrong?
    The cranks pile on John Ioannidis' work on the reliability of science

  • Re: ESP evidence airs science's dirty laundry
     Reply #10 - January 18, 2012, 04:02 AM

    k I watched it all anyway. We've briefly touched on data analysis in my physics labs, but apparently we're going to go more in-depth next month, so hopefully I'll have a firmer grip on the math/stats. But yeah, I think I need to read a lot more into how scientific research is conducted; it seems I had a slightly airbrushed view of it before coming to Uni.
  • Re: ESP evidence airs science's dirty laundry
     Reply #11 - January 18, 2012, 04:40 AM

    There isn't much to worry about really. The fact that most scientific studies are false is irrelevant when the scientific method takes that fact into account. It's normally the layman, the sensationalist journalist, the pseudo-scientist, or the ideologically-driven that misunderstand or abuse that fact.

  • Re: ESP evidence airs science's dirty laundry
     Reply #12 - January 18, 2012, 04:50 AM

    Well, I think the conveyance of science to the public is at least as important as scientific research. Even a lot of established journals sensationalise studies. And sure, you would think the scientific method would weed out the false positives eventually, but if the aims, publication and funding of studies are affected by errant mindsets, it's going to take a while longer than it should.
  • Re: ESP evidence airs science's dirty laundry
     Reply #13 - January 18, 2012, 05:06 AM

    Sure. Science education faces more than its fair share of problems. There is pretty much a war going on there.

    I mean, only the scientific method itself, as an ideologically impartial institution, isn't under threat. Articles like this are not going to find a loose end and pull at it until the whole foundation of science collapses and it is all exposed as a sham. Scientists are not stupid. They are not blindly adhering to archaic dogma or trapped in ritual and leading us to folly. There is a method to their madness: question everything.

  • Re: ESP evidence airs science's dirty laundry
     Reply #14 - January 18, 2012, 05:42 AM

    Science denialism is annoying enough, but I think the author of this article is simply trying to draw more attention to whether researchers are properly employing the scientific method. He has written a lot of good stuff for NS (although his analysis of the loaded-coin toss seems to be flawed).

    I don't know how prevalent these problems are, but I really want to find out.
  • Re: ESP evidence airs science's dirty laundry
     Reply #15 - January 18, 2012, 06:04 AM

    Scientific research is often poorly conducted? No shit.
  • Re: ESP evidence airs science's dirty laundry
     Reply #16 - January 18, 2012, 06:05 AM

    k I knew you were in a sour mood.
  • Re: ESP evidence airs science's dirty laundry
     Reply #17 - January 18, 2012, 06:09 AM

    http://www.youtube.com/watch?v=Kj6RSmnVFs8

  • Re: ESP evidence airs science's dirty laundry
     Reply #18 - January 18, 2012, 06:24 AM

    OK, I finally read all of your precious article. The issue here isn't with the scientific method, it's with statistical analysis - probably a bunch of scientists who don't know how to properly analyze data. My brother is a statistician and he often analyzes the same group of data several different ways. Different statisticians have different paradigms for how to measure data; I guess this article seems to advocate one specific method.
  • Re: ESP evidence airs science's dirty laundry
     Reply #19 - January 19, 2012, 09:35 PM

    My brother had this to say, if it helps:

    Quote
    This is basically a question of power and significance level. The significance level is basically the probability threshold you set for getting an "extreme" result given that the null hypothesis is true. It's meant to say: "Chances are, if we flip a coin 1000 times and get 800 tails, we can be very confident that the probability of getting tails is not 50%, because the chance of flipping 800 tails with a fair coin is very low."
     
    The second part of this analysis involves power. This is basically the chance that we will reject the null hypothesis given that it is actually false. For example, if the null hypothesis of 50% tails is not true and tails truly come up 51% of the time, then with 100 flips we would probably have low power: if we got 51 tails from 100 flips, we wouldn't conclude that the null hypothesis is false, because 51 tails is close enough to the 50 we would expect under the null.
     
    So basically the statistical power and significance level together determine the accuracy of a statistical test. With a fixed number of flips (say 100) there is a trade-off between the two: make the significance level stricter and you lose power, and vice versa. The key to improving both is to increase your sample size. For example, if you flip a coin 10000 times and get 5100 tails and 4900 heads, you would now probably have high power to reject the null hypothesis of 50% tails.
     
    It seems like the psychologist is missing the point of these tests - to find out the true distribution of the coin. He's right: you probably wouldn't flip exactly 527 tails even if the coin truly gives tails 52.7% of the time, which is why experiments are usually replicated before anything actually gets accepted as true.
     
    If one study shows that drug A is awesome at curing a disease and 20 others show that the drug does nothing, then it’s very possible that first study just had randomly better results with Drug A.
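
    To make the power side of that concrete, here's a short Python sketch of the calculation the quote describes (scipy assumed; the 51% coin and the flip counts come straight from the quote):

        # Power of a one-sided "is the coin fair?" test when tails really
        # come up 51% of the time, for a few sample sizes.
        from scipy.stats import binom

        def power(n, p_true=0.51, alpha=0.05):
            # Smallest tail-count that the test rejects at level alpha.
            k_crit = binom.isf(alpha, n, 0.5) + 1
            # Chance of reaching that count when the true rate is p_true.
            return binom.sf(k_crit - 1, n, p_true)

        for n in (100, 1000, 10000):
            print(f"{n:>6} flips: power = {power(n):.2f}")
        # 100 flips almost never detect a 51% coin; 10,000 flips detect it
        # far more often - the sample-size point made in the quote.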
