Kirk Hartley

Invitation from Acclaimed Researchers to Join an Expert Prediction Team as Part of an Experiment

The invitation/appeal pasted below is also online here. In short, it is an invitation to join an expert prediction team as part of an experiment comparing predictions made by smart people against predictions made from data alone, without human judgment.

The invitation arrives with a hat tip and thanks to Marginal Revolution, a blog focused (to use a very short label) on progress through marginal changes.

To whet your interest in the research and in one of the acclaimed researchers seeking your input, set out below is Wikipedia’s entry on Philip Tetlock and his finding that data produce better predictions than human judgment does. Then read on for more on the study and the invitation itself. But, again, the link to join is here.

_______________________________________________________________________________

From Wikipedia:

Philip E. Tetlock is Leonore Annenberg University Professor of Psychology at the University of Pennsylvania. He has also written several non-fiction books on politics, including Counterfactual Thought Experiments in World Politics (with Aaron Belkin; 1996) and Expert Political Judgment: How Good Is It? How Can We Know?

His Expert Political Judgment: How Good Is It? How Can We Know? (2005) describes a twenty-year study in which 284 experts in many fields, from professors to journalists, and with many opinions, from Marxists to free-marketeers, were asked to make 28,000 predictions about the future.[1][2] It found that they were only slightly more accurate than chance, and worse than basic computer algorithms. The book received the 2008 University of Louisville Grawemeyer Award for Ideas Improving World Order, and in 2005 Dr. Tetlock was awarded the Woodrow Wilson Award for the best book published on government, politics, or international affairs and the Robert E. Lane Award for the best book in political psychology, both from the American Political Science Association. Forecasters with the biggest news media profiles were especially bad. The study also compared the records of "foxes" and "hedgehogs" (two personality types identified in The Hedgehog and the Fox).[3][4]

The Value of Statistics

Tetlock’s conclusion about expert opinion is that statistics, when properly used, are more reliable than human judgment in every sphere of activity.[5]

_______________________________________________________________________________

Posted: 03 Aug 2011 02:28 PM PDT


"He is one of the most important social scientists working today, and he requests that I post this appeal:

Prediction markets can harness the “wisdom of crowds” to solve problems, develop products, and make forecasts. These systems typically treat collective intelligence as a commodity to be mined, not a resource that can be grown and improved. That’s about to change.

Starting in mid-2011, five teams will compete in a U.S.-government-sponsored forecasting tournament. Each team will develop its own tools for harnessing and improving collective intelligence and will be judged on how well its forecasters predict major trends and events around the world over the next four years.

The Good Judgment Team, based in the University of Pennsylvania and the University of California Berkeley, will be one of the five teams competing – and we’d like you to consider joining our team as a forecaster. If you’re willing to experiment with ways to improve your forecasting ability and if being part of cutting-edge scientific research appeals to you, then we want your help.

We can promise you the chance to: (1) learn about yourself (your skill in predicting – and your skill in becoming more accurate over time as you learn from feedback and/or special training exercises); (2) contribute to cutting-edge scientific work on both individual-level factors that promote or inhibit accuracy and group- or team-level factors that contribute to accuracy; and (3) help us distinguish better from worse approaches to generating forecasts of importance to national security, global affairs, and economics.

There is more at the link and they even offer a small honorarium."

______________________________________________________________________________


Set out below is more on the study, taken from the link in the invitation to join.

Despite its importance in modern life, forecasting remains (ironically) unpredictable. Who is a good forecaster? How do you make people better forecasters? Are there processes or technologies that can improve the ability of governments, companies, and other institutions to perceive and act on trends and threats? Nobody really knows.

The goal of the Good Judgment Project is to answer these questions. We will systematically compare the effectiveness of different training methods (general education, probabilistic-reasoning training, divergent-thinking training) and forecasting tools (low- and high-information opinion polls, prediction markets, and process-focused tools) in accurately forecasting future events. We also will investigate how different combinations of training and forecasting tools work together. Finally, we will explore how to communicate forecasts more effectively, in ways that avoid overwhelming audiences with technical detail or oversimplifying difficult decisions.

Over the course of each year, forecasters will have an opportunity to respond to 100 questions, each requiring a separate prediction, such as “How many countries in the Euro zone will default on bonds in 2011?” or “Will Southern Sudan become an independent country in 2011?” Researchers from the Good Judgment Project will look for the best ways to combine these individual forecasts to yield the most accurate “collective wisdom” results. Participants also will receive feedback on their individual results.
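The project materials do not spell out how the individual forecasts will be combined, but a minimal sketch of one common approach may help make the idea concrete: average the participants’ probability forecasts for a yes/no question (optionally weighting some forecasters more heavily) and score accuracy with the Brier score, a standard measure for probabilistic forecasts. The forecaster counts and probabilities below are purely illustrative, not data from the project.

```python
# Minimal sketch (not the Good Judgment Project's actual method): aggregate
# individual probability forecasts for a yes/no question and measure accuracy
# with the Brier score (lower is better). All numbers are illustrative.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

def aggregate(forecasts, weights=None):
    """Weighted average of probability forecasts (simple mean by default)."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total

# Hypothetical forecasts from five participants for one question, e.g.
# "Will Southern Sudan become an independent country in 2011?"
forecasts = [0.9, 0.7, 0.85, 0.6, 0.95]
outcome = 1  # the event occurred

crowd = aggregate(forecasts)
print(f"crowd forecast: {crowd:.2f}, Brier score: {brier_score(crowd, outcome):.3f}")

# Individual feedback: each forecaster's own Brier score on this question.
for i, f in enumerate(forecasts, start=1):
    print(f"forecaster {i}: forecast {f:.2f}, Brier score {brier_score(f, outcome):.3f}")
```

Weighted or recalibrated variants of this kind of averaging are exactly the sort of design choices a tournament like this is set up to compare.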

All training and forecasting will be done online. Forecasters’ identities will not be made public; however, successful forecasters will have the option to publicize their own track records.

Who We Are

The Good Judgment research team is based in the University of Pennsylvania and the University of California Berkeley. The project is led by psychologists Philip Tetlock, author of the award-winning Expert Political Judgment, Barbara Mellers, an expert on judgment and decision-making, and Don Moore, an expert on overconfidence. Other team members are experts in psychology, economics, statistics, interface design, futures, and computer science.

We are one of five teams competing in the Aggregative Contingent Estimation (ACE) Program, sponsored by IARPA (the U.S. Intelligence Advanced Research Projects Activity). The ACE Program aims "to dramatically enhance the accuracy, precision, and timeliness of forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts." The project is unclassified: our results will be published in traditional scholarly and scientific journals, and will be available to the general public.

You can keep in touch with the Good Judgment Project, see related news, and view updates of the project status at our blog: http://goodjudgmentproject.blogspot.com.
