Social scientist Duncan Watts talks about how the Web can deliver on its decade-old promise of giving researchers unprecedented access to fodder for behavioral research
DUNCAN WATTS: For more than a decade social scientist Duncan Watts has been studying the Internet's impact on social behavior, not to mention the impact that social behavior is having on the Internet. Image: Courtesy of Microsoft Research
In many ways the Internet is the ultimate virtual laboratory. Social media and news sites tell the casual observer much about our priorities and interests, whether it's the grave prognosis of the U.S.'s ongoing "fiscal cliff" political negotiations or elation over England's royal pregnancy. Social scientists believe that, beyond such superficial revelations, the Internet can also be a tool for conducting expansive, yet inexpensive research experiments at unprecedented speed.
Duncan Watts has been studying the Internet's impact on social behavior, and vice versa, for more than a decade. In 2001 Watts and fellow Columbia University sociologists published the results of their Small World Project, an e-mail version of sociologist Stanley Milgram's famous 1967 "six degrees of separation" experiment that used snail mail to test the theory that every person on the planet is separated from everyone else by a chain of about six people. In 2006 Watts worked with a team of researchers on Music Lab, an online experiment that illustrated the difficulty of predicting a song's popularity among a diverse group of listeners.
Now a principal researcher at Microsoft Research's New York City offices, Watts is focusing on improving Internet-based research methods and finding new ways to more effectively leverage the Web as a tool for crowd-sourced knowledge. Superstorm Sandy, which hammered New York City and the U.S. Northeast little more than a month ago, is fresh in his mind, as are the ways in which the Web successfully and not so successfully coordinated emergency response and disaster relief efforts.
The Web could have a profound impact on social science because it offers unprecedented access to people willing to participate in experiments, Watts says. One such experiment, he adds, could be testing the value and accuracy of crowd-sourced information during emergencies as well as instructing Web users on how best to coordinate their resources when disasters strike.
Scientific American recently spoke with Watts about the Web's ability to revolutionize social science, why paying online test participants more money doesn't guarantee more accurate data, and how to make the most of crowd-sourcing to assess what's happening on the ground during a crisis.
[An edited transcript of the interview follows.]
Why are you so interested in the Web as a tool for conducting social science experiments?
The Web offers new opportunities for social science because it dramatically changes the cost structure for running experiments, the scale and speed at which those experiments can be run, and the diversity of the people you can include in your subject pool.
What did Small World and Music Lab teach you about conducting Web-based social science research?
Small World and Music Lab were successful, but in some ways they highlighted the difficulty of doing experiments online. One advantage was the ability to recruit tens of thousands of people to participate. It would be prohibitively expensive to pay that many participants, so, in effect, we had to "gamify" our experiments to make them appealing. This approach led to tradeoffs. On the one hand, by making the research fun and engaging for participants, we ran some very large experiments at very low cost. On the other hand, the most interesting research questions don't necessarily lend themselves to fun, engaging games, while conversely most fun games are too complicated to lend themselves to the kind of clean hypotheses that come from theory. Running experiments online also raises certain methodological problems to do with sampling and measurement. I really think we're in the middle of all that with respect to virtual lab-style experiments.
How has Amazon's Mechanical Turk digital labor marketplace, introduced in 2005, impacted online research?
The great thing about Mechanical Turk is that most of the tasks are incredibly boring, so we don't have to worry so much about making our experiments fun or gamelike, because we can pay participants after all, even if we only pay them a little bit.
A few years ago my [former] Yahoo colleague Winter Mason and I demonstrated how to use Mechanical Turk to conduct behavioral research and to make it easier for researchers to benefit from Amazon's platform (pdf). We looked at the effect of financial incentives on participant performance. If you pay people more money to do a particular job, how does it affect their performance? [The task in question asked participants to sort a set of images taken from a traffic camera at two-second intervals into chronological order.] We found that increasing payment will increase the amount of work people will do, but it does not improve the quality of their performance at all. Mostly what it does, in fact, is increase how much they think they should be paid!
It sounds as though the Web is tailor-made to be used as a research environment. Where is it lacking?
We keep finding that the biggest challenge in running experiments online isn't the database or the user interface or the algorithms for designing networks. All of that is pretty straightforward. The hard part is actually recruiting people in a reliable way. A lot of our thinking moving forward is about how to build a better infrastructure for recruiting and keeping track of people in a way that is transparent and subject to the usual principles of informed consent.
That sounds like a challenge you would have with more traditional social science experiments as well. Doesn't the Web make it easier to recruit participants?
For the last few experiments, we've recruited 100 people and used maybe 20 or 30 of them at a time for different studies. But six months or a year later, when we run the next experiment, all of the people from the past experiment have moved on because there's a lot of churn in Mechanical Turk land.