The Systematizing Confidence in Open Research and Evidence (SCORE) project is an attempt to replicate hundreds of social science papers, and to search for patterns that predict what types of papers are more likely to replicate. You can read all about it at their site, and get a sense of its bigger picture importance in this great post by someone who participated in their prediction markets.
I’ve been involved in the project replicating and reviewing papers, and I plan to write a long post later this year about what it taught me. For now I just wanted to highlight that you can still join the project, but the window is closing soon, probably at the end of the month. I think it’s a great opportunity to advance science, work with the Center for Open Science on a DARPA-funded project, and get paid:
We are recruiting researchers and data analysts across the social-behavioral sciences for replication projects that use existing data that were not part of the original study. For these projects, you will select an original finding to replicate, receive or find alternative data to test the original claim, preregister your analysis plan, receive peer review of your plan, and report your findings in a structured format. You will receive $3,000-$7,800 for each replication study, and you will also be eligible for co-authorship on the report covering all replication studies.