When I began my bug bounty hunting journey, I was incredibly fascinated by the leaderboards. Maybe a bit too fascinated, as they nearly became an obsession. At the time, I couldn’t understand how the top hunters consistently earned far more reputation points than I could, month after month. Of course, they probably had access to more programs than I did back then, but that couldn’t be the only reason. I always considered myself a pretty decent penetration tester/red teamer with extensive real-life experience, so there was no way I was that bad. It was only much later that I understood what was happening: they were simply using a different approach than I was.
In the bug bounty space, hackers use various hunting styles. I would argue that each hacker probably has their own way of approaching targets. However, after some high-level, nonscientific analysis of these styles, I can safely group them into two main categories: systemic and manual. Of course, there are all kinds of hybrid styles in between, but I’ll keep things simple for the purposes of this post. In the following sections, I’ll explain the main differences between these two approaches and what they mean for hunters as well as program owners.
Have you ever heard of the concept of “bug bounty farming”? If you ask your favorite LLM, it’ll return definitions that, to me, aren’t exactly right. In my opinion, the real definition of “farming” is this: repeating the same exploitation techniques over and over again, across as many bounty programs as possible, to maximize monetary outcomes through sheer submission volume. To be fair to these researchers, though, it could also be called systemic testing.
Systemic testing is an automation-based approach. The goal is not precision but coverage: run as many testing scenarios as possible against as many endpoints as possible. Yes, that can mean running automated XSS detection on static files if the workflows aren’t properly configured. While I don’t know exactly how the “farmers” have built their infrastructure, they automate basically everything, from recon to reporting. Such an approach has significant advantages: almost zero human interaction, a high submission volume that earns more reputation points, and maximized potential income. However, there are consequences.
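To make the idea concrete, here is a minimal sketch of what a recon-to-scan pipeline could look like. The tool choices (subfinder, httpx, nuclei), flags, and file names are my own illustrative assumptions, not a description of any particular farmer’s infrastructure.

```python
# A minimal sketch of a recon-to-scan pipeline.
# Tool choices and flags are illustrative assumptions only.
import subprocess


def enumerate_subdomains(domain: str) -> list[str]:
    """Passively enumerate subdomains for a target domain."""
    out = subprocess.run(
        ["subfinder", "-d", domain, "-silent"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]


def probe_live_hosts(subdomains: list[str]) -> list[str]:
    """Keep only the hosts that actually answer over HTTP(S)."""
    out = subprocess.run(
        ["httpx", "-silent"],
        input="\n".join(subdomains),
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]


def scan(hosts: list[str], output_file: str = "findings.txt") -> None:
    """Run templated vulnerability checks and dump raw findings to a file."""
    with open("live_hosts.txt", "w") as f:
        f.write("\n".join(hosts))
    subprocess.run(["nuclei", "-l", "live_hosts.txt", "-o", output_file], check=True)


if __name__ == "__main__":
    subs = enumerate_subdomains("example.com")  # placeholder scope
    live = probe_live_hosts(subs)
    scan(live)
    # In a real farming setup, a final step would turn raw findings into
    # templated submissions with little or no human review.
```

A real setup would of course be far more elaborate, with distributed workers, result deduplication, and automated report generation, which is exactly where the cost and noise issues discussed next come from.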
The biggest caveat of systemic testing is obviously the cost. Although I don’t have a definite number, it can’t be cheap: several thousand dollars per month, maybe more. It requires substantial computational assets, storage, and maintenance. The second-biggest caveat is the noise ratio. Since there is very little human interaction in these workflows, false positives and low-impact bugs are likely to slip through and get submitted to programs. Such low-severity bugs, while still paid, drag down a researcher’s average impact statistics on the leaderboard.
From a program owner’s perspective, this style of hunting is easy to recognize. If you receive multiple templated reports for the same vulnerability type in a very short period of time (e.g., several subdomain takeovers or near-identical exploitation payloads), there is a very high probability that those issues were found using automation.
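As a rough illustration, a triage team could flag clusters of same-type reports that arrive within a short window. This is a hypothetical heuristic; the record fields, threshold, and window below are assumptions made for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical report records; field names are illustrative only.
reports = [
    {"type": "subdomain takeover", "submitted_at": datetime(2024, 5, 1, 9, 0)},
    {"type": "subdomain takeover", "submitted_at": datetime(2024, 5, 1, 9, 7)},
    {"type": "subdomain takeover", "submitted_at": datetime(2024, 5, 1, 9, 15)},
    {"type": "idor", "submitted_at": datetime(2024, 5, 3, 14, 0)},
]


def likely_automated(reports, window=timedelta(hours=1), threshold=3):
    """Flag vulnerability types with many near-simultaneous submissions."""
    by_type = defaultdict(list)
    for r in reports:
        by_type[r["type"]].append(r["submitted_at"])

    flagged = []
    for vuln_type, times in by_type.items():
        times.sort()
        # `threshold` reports of the same type landing inside `window`
        # is a strong hint of a templated, automated workflow.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.append(vuln_type)
                break
    return flagged


print(likely_automated(reports))  # ['subdomain takeover']
```

In practice, platforms and triage teams have far richer signals (researcher history, report templates, payload similarity), but even this crude grouping surfaces the pattern described above.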
The manual approach is much easier to explain. All-manual researchers simply run their recon tools by hand, analyze the data, understand the business context of the application, and then start finding issues. It is a much slower approach, so the risk of submitting duplicates is higher, but it goes much deeper into the application or target logic. The chance of finding high- to critical-impact vulnerabilities is significantly higher, since very few tools (if any) can reliably detect these issues. Very often, manual testers uncover tricky injection points or business logic flaws that have flown under the radar for years. Of course, the time commitment is much higher, and you’re unlikely to make the top five of the monthly leaderboard.
Researchers who hunt manually are also easy to identify. The issues they submit tend to be more complex, sometimes requiring many steps to reproduce, and are often chained with control bypasses (e.g., WAF bypasses) for more effective exploitation. That complexity can also make their reports appear lower in quality, simply because the technical details are harder to explain.
I’m going to give an engineering answer to the question of which approach is better: it depends. If you want to generate numerous submissions, accumulate points, and automate everything because you enjoy it, a systemic approach may be for you. Keep in mind, however, the much higher operating costs and the fact that your average impact will likely be medium to low. Conversely, if you prefer to take your time, focus on business context, and maximize the impact of each submission, a manual approach is essential, and it is far more likely to generate high- and critical-impact findings.
It’s important for program owners to understand these various approaches as well. Bug bounty programs should be tied to internal objectives. Whether your program is designed to tackle external perimeter attack surface management, challenge the crowd to find issues that automated tools missed, or test very specific functionality requiring unique expertise, you need to understand how the crowd thinks and approaches targets to maximize outcomes. In other words, you’ll need a mix of systemic and manual researchers depending on your objectives.
I’m sure you’ll now see the monthly leaderboards a little differently.