Bias Bounty Programs as a Method of Combatting Bias in AI

This policy is a response to the continuing deployment of biased Artificial Intelligence systems into production, which are quickly found to be biased, with the only consequence being unfavorable news coverage. Bias Bounty Programs could provide scalable oversight of harmful discrimination by AI.

This policy assumes the status quo is unfavorable for both parties:

Those affected by bias - the bias will rarely receive enough news coverage for them to get even an apology, let alone a fix.

Those deploying biased systems - usually a small, homogeneous group with no clear guidelines or imperative to debias their system (in fact, they could be penalized for taking the time to do so), who will at most get a retroactive slap on the wrist if the media picks up on the bias.

Proposal

A similar problem exists in information security, and one solution gaining traction is the “bug bounty program”. Bug bounty programs allow security researchers and laypeople to submit exploits directly to the affected parties in exchange for compensation.

The market rate for security bounties for the average company on HackerOne ranges from $100 to $1,000. Bigger companies can pay more. In 2017, Facebook disclosed paying $880,000 in bug bounties, with a minimum of $500 per bounty. Google pays from $100 to $31,337 per exploit and paid $3,000,000 in security bounties in 2016.

It seems reasonable to suggest that, at minimum, big companies with large market caps that already have bounty-reporting infrastructure reward and collaborate with those who find bias in their software, rather than having them take it to the press in frustration, with no compensation for their efforts.

Determining what bias is

It is assumed here that the company will determine what counts as bias in accordance with its own values, as it wants the market to perceive them.
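As a concrete illustration, a bounty submission might quantify the discrimination it reports with a simple fairness metric. The sketch below computes a demographic parity difference in Python; the metric, threshold, and example numbers are assumptions for illustration, not a standard any company has adopted.

```python
# Minimal sketch of the kind of evidence a bias bounty submission might
# include: the demographic parity difference between two groups.
# All names, numbers, and thresholds here are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of positive decisions (e.g., approvals) within a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a_outcomes, group_b_outcomes):
    """Absolute gap in selection rates between two demographic groups."""
    return abs(selection_rate(group_a_outcomes) - selection_rate(group_b_outcomes))

# Example: model decisions (1 = approved, 0 = denied) observed by a reporter.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approval
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approval

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")  # 0.38

if gap > 0.2:  # the payout threshold is an assumption for illustration
    print("Gap exceeds threshold; report qualifies for triage.")
```

Publishing the metrics and thresholds a company will pay out on would make its working definition of bias legible to reporters, rather than leaving it implicit in marketing language.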

Potential Problems

Spam

Possibly the most cited issue with security bounties is the number of false reports that come in and the number of people it takes to triage them. However, this does not seem like too much to ask of companies that professionally build content-prioritization software.

Reluctance to Hire Triage Staff

These companies are already controversial for not hiring staff to interact even with paying customers, so this could be a hard sell. However, press pressure has recently led to the hiring of more moderators at both YouTube and Facebook.

Adoption

Option A. Voluntary Enrollment

Companies decide this is a great idea (or at least better than eventual government intervention) and budget for and implement the programs themselves.

Option B. Regulation

Bounty programs could be mandated by the government, most easily for any software the government itself uses.

UX Practices

Where should the bias bounty program live?

  1. In the application? (e.g. under “help”)
  2. On a company run separate webpage?
  3. An independent bias bounty marketplace where companies can work together to share biased models?

I think a combination of options one and two is most likely, with one and two serving as the mobile and web versions of the submission form, respectively.
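Whichever surface hosts it, the submission form itself can stay simple. Below is a hedged sketch of the fields such a form might collect, written as a Python dataclass; every field name is a hypothetical for illustration, not an existing schema or API.

```python
# Hypothetical shape of a bias-report submission, whether it lives under
# "help" in the app or on a separate company-run page.
# Field names are assumptions, not an existing standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BiasReport:
    product: str                                    # which product or model surfaced the bias
    affected_group: str                             # who is harmed, as described by the reporter
    description: str                                # what the reporter observed
    reproduction_steps: List[str] = field(default_factory=list)
    evidence_urls: List[str] = field(default_factory=list)  # screenshots, exports, datasets
    contact: str = ""                               # for payout and follow-up

report = BiasReport(
    product="photo search",
    affected_group="darker-skinned users",
    description="Generated labels return demeaning tags for some portraits.",
    reproduction_steps=["Upload portrait", "Inspect generated labels"],
)
print(report)
```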

Conclusion

This is a first attempt at solving a hard problem. Feel free to send feedback to jb@rubinovitz dot com. I would love to find a way to hasten iteration on debiasing production AI models while compensating those affected by them, who have to expend labor reporting them.

Acknowledgements: Thank you to Omar Bohsali for sharing his expertise in information security bounties.
