To Peace, Love and Managing a Bug Bounty

Chris Howell, CTO

It was bound to happen someday, right? “If you manage a bug bounty program long enough,” a friend said to me almost three years ago, when Wickr was preparing to announce its program, “there will inevitably come a day when you are criticized for the way you manage it.” I laughed it off at the time, deep down fearing she was right but convinced that I could make it different.


Someday for me was a few days ago, when a colleague sent me this blog post describing how researchers at a security firm went public to criticize Wickr for not doing right by their bug bounty submissions. Needless to say, I took it personally. Rightfully so: as CTO, I’ve led the team that reviews these submissions, works with researchers, and proposes awards since the program’s inception. Heck, at times, I was the team.

When I looked into records of submissions from the group on this issue, I found the following:

1. Six submissions received in 2014; 

2. Three additional inbound communications (in 2015 and 2016) pertaining to the 2014 submissions;

3. Two replies to the inbound communications (excluding auto-reply), the latest of which was Oct 1, 2016.

The latest message we sent to the group's admin noted some confusion in accounting for all of their past submissions. In a recent query they mentioned a total of 11 issues reported to us; however, our records indicated only six, so we asked them to share the full set of issues so we could see what we might have missed. I’m pleased to report that, as of this writing, communications are ongoing.

Two things I take away from my review of the history of this issue: 1) we need to communicate better; and 2) we need to communicate better! Looking back at the related conversation threads (at least those records that survived x email clients and y laptop rebuilds in your typical dynamic startup environment), I lament that they were not nearly as rich and responsive as I would have liked. That’s the giant, honking, red-flag root of the issue. There were also clear differences between our assessment of the risk and significance of the reported issues and the researchers’ assessment. Such differences are common in the context of a bug bounty, but I’m willing to bet that if we had been more communicative we would have found common ground. I’m going to focus on making improvements in this area right away, starting with these researchers. I also plan to look into third parties to help keep the program on point as we grow and expand our product portfolio.

As to the specific claims made in the article: we have never failed, and would never fail, to pay a researcher for disclosing a worthy security issue, nor would we knowingly patch a significant issue reported under the program without recognizing its source. To do so would defeat the purpose of running a bug bounty program in the first place.

I take pride in the fact that almost 20% of all researchers who have provided qualifying submissions have earned a cash award over the history of our program. Many of these awards were granted simply because the issue was cool, even when it was not a particularly significant security risk. Encouraging security researchers to engage in responsible disclosure of security issues is the whole point of the program. That’s the world that I and so many other security professionals want to live in, and what I believe is best for our users – the peace and love part. The rest of it – managing a bug bounty – is, well, hard, especially in a startup. If you haven’t heard, let me be the first to tell you. But let me also tell you that it’s worth every hour spent reading submissions, performing analysis, and most of all interacting with a super talented, committed white-hat security/hacker community that’s doing its part to help protect our users.

*Nov. 2nd 2016 Update: In our experience, about 20% of qualifying security research submissions (i.e., those within the scope of the Bug Bounty program) are awarded a monetary bounty. The remaining 80% of submissions are duplicates of bugs that other researchers previously identified and were awarded for, are low-quality bugs as measured by likelihood and impact on our users, or do not rise to the level of what the team considers serious enough for the program (e.g., "you can take screenshots on iOS").

From our discussions with colleagues in the information security industry, it is clear that Wickr’s signal-to-noise experience in running a bug bounty isn’t unique. For instance, Bugcrowd’s 2016 annual report confirmed that: “of total submissions, 24,516 (45.38%) were marked invalid and 19,574 (36.23%) were marked duplicate. Valid, non-duplicate submissions account for the remaining 9,963 submissions, resulting in a signal-to-noise ratio of 18%.”
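The quoted figures are easy to sanity-check with a few lines of arithmetic. This is just a sketch using the submission counts from the Bugcrowd quote above; nothing here is Wickr data.

```python
# Submission counts as quoted from Bugcrowd's 2016 annual report.
invalid = 24_516    # marked invalid
duplicate = 19_574  # marked duplicate
valid = 9_963       # valid, non-duplicate

total = invalid + duplicate + valid
signal = valid / total  # fraction of submissions that are actionable

print(f"total submissions: {total}")
print(f"signal-to-noise:   {signal:.0%}")  # rounds to the reported 18%
```

The "signal-to-noise ratio" here is simply valid, non-duplicate reports divided by all submissions received.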
