DonorsChoose Supplement Part 3: Market Corrections

Note: In preparation for the results announcement by DonorsChoose, this series is meant to carve up different issues raised by my work on the DonorsChoose Data and address them directly and more fully.  You can find the original announcement and report at Predicting Success on DonorsChoose.org.

If, based on my findings, we believe that some deserving projects are being unfairly disadvantaged, by, say, teacher gender, metropolitan location, or even state of origin, there are a couple of ways we can use the algorithm to change the dynamics of the market and test the efficacy of those changes.  My philosophy is this: given that the DonorsChoose market is biased toward urban schools, for example, if we don’t believe urban schools are any more deserving than suburban or rural schools (see my discussion on deservingness for why this might be true), then I would call that a systematic under-valuation by the market.  Using the algorithm, we can test potential correctives for it.

The first intervention was actually suggested to me by Jonathan Eyler-Werve.  He suggested that search pages could weight the results by, for example, urban/suburban/rural status or by state, so that under-valued projects surface earlier.  Technically speaking, the random sort would be weighted: rural and suburban projects, randomly selected from those returned by a user’s search, would show up higher in the list.  Say you’re looking to help out a music project that’s coming down to the wire.  You might not care whether it’s urban or suburban, but, as things stand now, the higher number of urban projects in the system means that roughly 60% of the projects you’ll see will be urban.  You’re more likely to donate to an urban school by sheer roll of the dice.  With this weighted, random sort, the search results would balance out the proportions of urban, suburban, and rural projects.  Of course, this would not apply to searches that explicitly ask for urban, suburban, or rural projects.  Testing the impact of this correction would involve re-running the analysis that produced the model on the post-implementation success rates and seeing whether the significance of the urban/suburban/rural variables decreased.  If the significance decreases, then the bias has decreased.
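To make the mechanism concrete, here is a minimal sketch of one way to implement that balanced random sort.  The list-of-dicts result format and the "metro" field name are assumptions for illustration, not DonorsChoose’s actual data structures; interleaving round-robin across metro types is just one simple way to equalize their share of the results.

```python
import random
from collections import defaultdict

def balanced_shuffle(results, metro_field="metro"):
    """Interleave search results so each metro type ("urban", "suburban",
    "rural") appears in roughly equal proportion, rather than in proportion
    to its share of the listings."""
    buckets = defaultdict(list)
    for project in results:
        buckets[project[metro_field]].append(project)
    for bucket in buckets.values():
        random.shuffle(bucket)  # random order within each metro type
    balanced = []
    # Round-robin across metro types until every bucket is exhausted
    while any(buckets.values()):
        for metro in list(buckets):
            if buckets[metro]:
                balanced.append(buckets[metro].pop())
    return balanced
```

Searches that explicitly filter on urban, suburban, or rural would simply skip this step and return the unweighted results.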

Another form of market correction, and one I mentioned in the report, would be allowing donors to see a project’s chances of success or to sort their results by them.  This directly informs donors of the value the market assigns to these projects and lets donors decide whether a project really deserves only a 30% chance.  A donor could look at two similar projects, say two music programs in Chicago, and know that one has a 60% chance of success and the other an 80% chance.  If the donor thinks the first one is actually more deserving, they might be more motivated to donate to it to improve its chances.  They might even start a giving page around it.  This is a donor-driven market correction in which donors use their own preferences to determine whether the 60% project is really less deserving than the 80% project.
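As a rough illustration (not DonorsChoose’s actual interface or API), results could be annotated with the model’s predicted chance of success and sorted by it; the `predict_proba` call assumes a scikit-learn-style fitted model, and the field names are hypothetical.

```python
def annotate_and_sort(results, features, model, ascending=True):
    """Attach each project's predicted chance of success and sort so that
    lower-probability (under-valued) projects surface first by default."""
    probabilities = model.predict_proba(features)[:, 1]  # P(success)
    for project, p in zip(results, probabilities):
        project["predicted_success"] = round(float(p), 2)
    return sorted(results,
                  key=lambda r: r["predicted_success"],
                  reverse=not ascending)
```

Whether the default sort surfaces the long shots or the near-certainties is itself a design choice; the point is that the donor, not the algorithm, decides what to do with the number.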

Monitoring the effect of this implementation would involve re-running the model after it has been in place and testing for changes in projects’ predicted probabilities.  If the original algorithm predicted a 60% probability of success and the post-implementation data show some projects rising to 80%, we can see which variables in those projects correlate with the increase.  If we find, for example, that projects posted by female teachers increase in probability, we can infer that donors are correcting for the existing gender bias.  The same goes for any variable measured.
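Here is a rough sketch of that monitoring step, assuming the success model is a logistic regression fit with statsmodels and that the pre- and post-implementation data are DataFrames sharing the same columns.  The column names ("funded", "teacher_gender", "metro", "state") are placeholders, not the actual field names from the competition data.

```python
import pandas as pd
import statsmodels.formula.api as smf

def compare_bias(pre_df: pd.DataFrame, post_df: pd.DataFrame,
                 formula: str = "funded ~ teacher_gender + metro + state") -> pd.DataFrame:
    """Refit the success model before and after the intervention and line up
    coefficients and p-values so shifts in each variable's influence are visible."""
    pre_fit = smf.logit(formula, data=pre_df).fit(disp=False)
    post_fit = smf.logit(formula, data=post_df).fit(disp=False)
    # A coefficient shrinking toward zero (with a rising p-value) after the
    # change suggests that variable now explains less of a project's success.
    return pd.DataFrame({
        "pre_coef": pre_fit.params, "post_coef": post_fit.params,
        "pre_p": pre_fit.pvalues, "post_p": post_fit.pvalues,
    })
```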

Finally, offline strategic initiatives can be developed to target under-valued projects.  For example, a foundation focused on rural development may be very interested in building support on DonorsChoose for rural projects.  Most importantly, this research provides justification for that strategy, in that rural schools are less likely to reach project completion.  Such a foundation might be convinced to offer matching funds to rural projects or to distribute gift cards in rural areas to raise awareness of DonorsChoose and build the rural donor pool.  The same goes for under-engaged states.  I’m not sure what the retention rate of gift cards is, though it could easily be figured out from the data provided for this competition.  In the case of a matching-funds initiative, assessing the impact would involve the first method mentioned above: seeing whether the significance of the rural variable decreased during the fund period.  As for recruiting new donors through gift cards, not only can we assess how many people used the gift cards, but, using the second method mentioned above, we can estimate the lasting effect of the initiative.

If you have any other ideas for how this prediction algorithm might be used to improve the DonorsChoose market, please feel free to discuss them in the comments section below.


One Response to DonorsChoose Supplement Part 3: Market Corrections

  1. eylerwerve says:

    Thanks for the shout out. I think you have to be very careful in what you weight and how you communicate that to the users. I think search is actually a dangerous one – it’s not very transparent, and people expect a certain level of integrity there. Much safer to use “featured project” space, which is explicitly editorial — this is someone’s opinion of a good project, and we’re OK with that. Another would be a custom search for “unloved” projects, with an explanation of why some qualify as “unloved” and that maybe it would be nice to show them some love. Very transparent, very deliberate.

    I think full scale tinkering with search results is asking for a user revolt, especially since the history of algo tinkering is full of unintended weird results. As soon as someone finds a well documented, nonsense result, they’ll want an explanation, and might be a little upset about it.

    Also, small point: if 60% of projects are urban, and 60% of funded projects are urban, that’s a fair market because everyone is getting equal success rates. It’s only a bias problem if 40% of projects are urban and 60% of funded projects are urban — the urban projects are getting extra love.
