Bug Bounty: The road to my first $1000 through hacking public websites
Disclaimer: I do not work in cybersecurity, nor hold an IT job. I do this research out of pure passion and curiosity. This article is for educational purposes only and not to promote illegal activity. Always acquire the permission of a company before testing on their infrastructure and stay within scope.
What is bug bounty? In simple terms, bug bounties are payments from companies, awarded to researchers for finding security vulnerabilities in their scoped infrastructure. These can range from coding flaws that let an attacker run code in a victim's browser, to exposed sensitive information, denial of service, and more. Reported bugs are triaged by criticality. Criticality reflects the impact on the CIA triad (Confidentiality, Integrity and Availability) and is typically rated with a scoring system called CVSS (Common Vulnerability Scoring System). The more the triad is affected, the more severe the vulnerability, and (usually) the greater the payment for discovering and reporting it.
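To make "criticality" a bit more concrete, here is a rough illustration of how a CVSS v3.1 vector encodes impact on the CIA triad. The vector below is a generic, made-up example (not taken from any of my reports):

```python
# Hypothetical CVSS v3.1 vector for an unauthenticated information disclosure
# reachable over the internet (an illustration, not one of my findings).
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"

# AV:N - Attack Vector: Network (exploitable remotely)
# AC:L - Attack Complexity: Low
# PR:N - Privileges Required: None
# UI:N - User Interaction: None
# S:U  - Scope: Unchanged
# C:H  - Confidentiality impact: High
# I:N  - Integrity impact: None
# A:N  - Availability impact: None
#
# Fed into the official CVSS 3.1 calculator, this vector scores 7.5 (High):
# only Confidentiality is affected, but it is hit hard with no barriers.
print(vector)
```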
You may see many posts on X stating "Thanks HackerOne for the payment of $300!", and these same people may even be nice enough to call out their findings' bug classes, but they rarely expand on how they found them. What were they thinking, i.e. how did they get into the mindset of an attacker? What indicators led to the discovery? The purpose of this article is to teach beginners that thought process. The next five bug findings walk through it, each with a lesson learned, so you too can have better success at finding bugs and securing the internet while getting paid for your work.
Registered users' mass e-mail (info) disclosure
Severity: High — Payout: $250
Summary
I was testing a very large retail company's infrastructure, but due to an NDA I cannot disclose their name publicly, so we will call them redacted.com. During my testing, I came across an Application Programming Interface (API) on one of their subdomains. Inspecting the network traffic, I saw that config.json was requested when visiting the webroot, which is always something interesting to look at. In this file there was a path that no popular wordlist would pick up via fuzzing, containing the word "backend". Browsing to it exposed an API-style interface. From there, I fuzzed further and uncovered a hidden directory called "uploads". Simply browsing to it, I noticed a large response from the server, and upon inspection saw that it contained hundreds of e-mail addresses: some from gmail.com, hotmail.com, redacted.com, etc. I knew right away this should not be exposed to anyone and reported it immediately. It was triaged as a High finding and I was awarded $250. This was a low payment for a High finding simply because this company chooses not to pay for vulnerabilities below High/Critical severity, and payments are entirely at their discretion; in this case, they decided to award me $250. The company fixed the vulnerability within 48 hours.
Steps Taken
- Browsed to the subdomain manually and viewed the loaded sources.
- A loaded javascript config file contained a path to a /xyz.backend.tools/ directory.
- Browsing to this backend directory showed me a template API page for the Django REST Framework.
- By fuzzing this API directory, I discovered a hidden endpoint that, when simply browsed to, disclosed e-mail addresses. Anyone had unauthorized access to roughly 1,000 e-mail addresses of users registered to this application (a rough sketch of this recon flow follows after these steps).
- I reported this to the company, and they fixed it within 48 hours.
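For illustration, here is a minimal Python sketch of the recon flow above, using made-up hostnames and paths: pull the application's config.json, harvest anything that looks like a path, and see what each path returns when browsed to directly. Always confirm the target is in scope before running anything like this.

```python
import requests

BASE = "https://api.redacted.com"  # placeholder subdomain, not the real target

def walk(node):
    """Yield every string nested anywhere inside the parsed JSON config."""
    if isinstance(node, dict):
        for value in node.values():
            yield from walk(value)
    elif isinstance(node, list):
        for value in node:
            yield from walk(value)
    elif isinstance(node, str):
        yield node

config = requests.get(f"{BASE}/config.json", timeout=10).json()

# Keep only values that look like URL paths, e.g. "/xyz.backend.tools/"
paths = sorted({s for s in walk(config) if s.startswith("/")})

for path in paths:
    resp = requests.get(f"{BASE}{path}", timeout=10)
    # Unusually large responses (like the "uploads" listing) deserve a manual look
    print(resp.status_code, len(resp.content), path)
```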
Impact
With these 1,000 or so e-mail addresses, I could have launched a password-spraying attack to gain access to several accounts. Statistically speaking, some people set weak passwords, and with such a large set of e-mails (i.e. targets) I'm confident at least a couple of these accounts would have been breached with simple password guesses if I had sprayed passwords across these users.
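To put a rough number on that claim: if even 1% of users pick a guessable password, a spray over 1,000 accounts is practically guaranteed to hit at least one. A quick back-of-the-envelope calculation (the 1% figure is an assumption, not data from this program):

```python
# Probability that at least one of n accounts falls to a single weak-password
# guess, assuming each account independently has probability p of using it.
p = 0.01      # assumed fraction of users with a guessable password
n = 1000      # number of leaked e-mail addresses

at_least_one = 1 - (1 - p) ** n
print(f"{at_least_one:.4%}")   # ~99.9957% chance of at least one compromised account
```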
Lessons Learned
Always test APIs: fuzz them for endpoints and analyze the responses thoroughly to uncover content that shouldn’t be exposed to anyone.
Manual Subdomain Exploration: Manual exploration of subdomains is crucial, as automated processes may overlook critical pages that could lead to severe vulnerabilities.
CVE-2022-39195: Listserv Reflected XSS
Severity: Low — Payout: $200
Summary
As I always do on a wildcard-scoped domain, I began by enumerating subdomains. One of them ran LISTSERV (mailing list management software); let's call it listserv.redacted.com. When you find a piece of technology, you should always learn a bit about it first and then look for publicly available exploits, of which, in this case, there were several. The first few public exploits did not work (i.e. they had been patched), but luckily for me, one did after a slight code modification. It was a reflected XSS vulnerability that allowed me to run code in a victim's browser by sending them a maliciously crafted link. The report was triaged quite quickly, and I was awarded a Low severity bounty of $150, as I could not readily demonstrate a larger impact to the company. Once fixed, I was offered a $50 retest to confirm the code fix, bringing this one Low severity finding to $200.
Steps Taken
- Subdomain enumeration yielded listserv.redacted.com.
- My nuclei scan picked up CVE-2022-39195 on this subdomain, but the public exploit Proof of Concept (PoC) did not work when I visited it.
- With some reasoning, I realized this was because the file naming structure used by the public exploit differed slightly from what the company actually used.
- I modified the exploit slightly, which got it working and successfully demonstrated a Cross-Site Scripting (XSS) vulnerability on this domain; a small verification sketch follows below.
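When a public PoC "fails", I like to run a quick reflection check against a few candidate paths before writing it off. The sketch below is a generic example with placeholder host, paths and parameter names; it is not the actual CVE-2022-39195 exploit.

```python
import requests

# Harmless marker containing the special characters an XSS payload needs
marker = '"><svg onload=alert(1)>'

# Candidate script paths: the one the public exploit assumed vs. variations
# the target might actually use (all hypothetical here)
candidates = ["/scripts/wa.exe", "/cgi-bin/wa.exe", "/cgi-bin/wa"]

for path in candidates:
    url = f"https://listserv.redacted.com{path}"       # placeholder host
    resp = requests.get(url, params={"OK": marker}, timeout=10)
    if marker in resp.text:
        print(f"[+] {path}: marker reflected unescaped - promising")
    else:
        print(f"[-] {path}: {resp.status_code}, no unescaped reflection")
```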
Impact
An attacker can send a link on the company's own domain (which a user is more likely to trust) containing malicious instructions that execute JavaScript in the victim's context. This lets the attacker run malicious code in the browser of whoever clicks the link, which can lead to further requests being made on the victim's behalf, even to malicious sites controlled by the attacker.
Lessons learned
If a scanner flags something you think is a false positive, look into it anyway. Try tampering with the exploit a bit before giving up: the worst case is that the exploit fails and you move on; the best case is that you have helped secure a vulnerability for the company and made some money doing so.
Developer's WordPress migration database file exposure via /.git/index
Severity: Medium — Payout: $300
Summary
I came across a domain owned by this company that was based out of a different country (the TLD was from a European country). As the website looked older, I decided to search for hidden directories through fuzzing. I came across a /.git/index path that returned a 200 OK response from my fuzzer. I immediately browsed to it and watched my browser download the file. Inside were a bunch of paths you would typically find in a git index file, most of which I already knew about from simply browsing the site. One, however, had a .gz extension (a gzip-compressed file), which suggested it was probably a backup of some kind. Backup files are something you never want to miss when testing, as they may contain sensitive information such as customer data, credentials, source code, etc. This specific backup turned out to be a WordPress developer's migration file, containing the entire website packaged up as a WordPress database. The exposure included developer password hashes, API keys and the entire schema (tables, columns and all the values) of the website. Because this was a static website with no user registration, there was no immediate danger to customer data, as it was a migration file and not a live database. The password hashes were not easily crackable (I did not succeed), so the company deemed the issue no higher than a Medium finding.
Steps Taken
- Fuzzing a site that looked older led to the exposure of /.git/index.
- Visiting that endpoint downloaded the git index, which contained a path to a .gz compressed WordPress MySQL database migration file.
- Confirming the file contents revealed password hashes, the entire database schema and API keys (see the sketch after these steps).
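Here is a rough sketch of what I did by hand: download the exposed /.git/index and pick out the tracked file paths, flagging anything that looks like a backup. The URL is a placeholder, and the string extraction is deliberately crude; the index is a binary format, but the paths inside it are stored as plain ASCII.

```python
import re
import requests

raw = requests.get("https://old.redacted.eu/.git/index", timeout=10).content  # placeholder URL

# Pull out runs of printable ASCII; among them will be the tracked file paths
strings = re.findall(rb"[\x20-\x7e]{4,}", raw)

for s in strings:
    if s.lower().endswith((b".gz", b".zip", b".sql", b".bak", b".tar")):
        print("possible backup file:", s.decode())
```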
Impact
If the developers did not rotate their passwords frequently, or reused these same passwords elsewhere in the company, cracking these hashes could lead to developer account takeovers. If the API keys remain valid, an attacker can use them for their own benefit, which could come at a monetary cost to the company (some APIs are pay-per-use), or even grant access to personal data they should not be allowed to see. Think of API keys as potential keys to one (or many) safes.
Lessons Learned
Just because it's a static site with limited (or no) user interaction does not mean it cannot be attacked. Sometimes developers fail to protect hidden backup data, and such a slip-up can hand attackers sensitive data, like API keys or password hashes, that can be used for further attacks.
I do not usually spend a lot of time on static sites, but I'll always give them a once-over with a directory fuzz to see if I can find easily accessible data that was intended to stay hidden.
Disclosure of hard-coded credentials in a front-end JS file
Severity: Medium — Payout: $200
Summary
I came across a subdomain that contained the word admin. This is a strong hint that there will be some sort of authentication mechanism in place, as admin domains typically shouldn't be accessible to just anyone. I was right: upon browsing to it, I was met with a login screen and no ability to self-register. What can be done in this instance? My first go-to is to see what else is requested in the background. In this case, there was a JavaScript file that contained credentials. I tried using these credentials in the login panel, but that did not work. By analyzing the source code surrounding these credentials, I learned that they were used on a different endpoint. Passing them via HTTP Basic authentication (i.e. an Authorization: Basic <base64-encoded creds> header) against that endpoint worked. This was triaged as a Medium severity finding, as I did not demonstrate that I could exfiltrate user data, and was paid out at $200.
Steps Taken
- I navigated to an interesting admin version of a subdomain.
- Viewed a js file that contained hard-coded credentials.
- By analyzing the source code of this JS file, I found a way to test these creds via HTTP Basic authentication, which produced a different response on the vulnerable endpoint (i.e. a header such as Authorization: Basic c3dlaHRwYW50ejpwYXNzd29yZA==, where the Base64 string is the encoded user:pass). A minimal sketch follows below.
- I reported this without going further (which, in retrospect, I should have).
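A minimal sketch of that final step, assuming hypothetical hostname, path and credentials (the values below are placeholders, not the ones I actually found):

```python
import base64
import requests

USER, PASS = "someuser", "somepass"                  # placeholders for the hard-coded creds
URL = "https://admin.redacted.com/internal/api/"     # placeholder endpoint referenced by the JS

# requests builds the "Authorization: Basic <base64(user:pass)>" header for us
resp = requests.get(URL, auth=(USER, PASS), timeout=10)
print(resp.status_code, len(resp.content))

# The equivalent header built by hand, if you want to see exactly what is sent
token = base64.b64encode(f"{USER}:{PASS}".encode()).decode()
resp = requests.get(URL, headers={"Authorization": f"Basic {token}"}, timeout=10)
print(resp.status_code, len(resp.content))
```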
Impact
Exposed credentials can be disastrous for a company if an attacker can use them to escalate an attack by bypassing an authentication mechanism they otherwise shouldn't be able to bypass. This can lead to exposure of sensitive employee and customer data, which in some countries can result in severe fines for the company.
Lessons learned
If you find creds and they don’t work at first, try to understand other methods or different endpoints they could be used on. Reading the source code is a great start.
When you find credentials, go further. Try to escalate the issue to a Critical finding by demonstrating access to sensitive data, if the rules of engagement allow it. Reporting it right away gives the company room to triage it as a lower severity finding and fix it, and you may miss out on some serious cash like I did.
Reflected XSS on hidden endpoint
Severity: Medium — Payout: $200
Summary
This was a new private program, and many pages led to a mostly blank page with a small error message along the lines of "wrong URL". I figured there was some functionality hidden behind the right parameter. After some fuzzing (Param Miner is great for this), it became apparent that redirect=<anytext> caused a different response. I inspected the code in the response and saw that my input was reflected directly back, even special characters. When you see special characters reflected back, the first thing to look for is input sanitization, or in this case the lack of it. If the response does not sanitize what is presented back to the user, there is a possibility of cross-site scripting in a victim's context. Here, building a working XSS payload was straightforward.
Steps Taken
- Found the hidden endpoint through fuzzing.
- Navigating to it threw an error message about a wrong URL. Fuzzing the parameters produced a different response once the right query parameter (redirect) was supplied.
- The parameter value was reflected in the output with no input sanitization after trying payloads similar to "><s>test</s>.
- Constructing a successful XSS payload was trivial, as there was no WAF in place; it was something like <script>alert(1)</script>. A reflection-probe sketch follows below.
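A quick reflection probe along those lines, with placeholder host, path and parameter name: send a harmless marker containing special characters and check whether it comes back unencoded.

```python
import requests

marker = 'probe"><s>reflect</s>'   # harmless, but contains the characters XSS needs

resp = requests.get(
    "https://app.redacted.com/hidden-page",   # placeholder endpoint
    params={"redirect": marker},
    timeout=10,
)

if marker in resp.text:
    print("Special characters reflected unescaped: investigate for XSS")
elif "probe" in resp.text:
    print("Value reflected but encoded: output sanitization appears to be in place")
else:
    print("No reflection found")
```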
Impact
As with the XSS vulnerability described earlier, an attacker can run malicious code in a victim's browser by sending them a maliciously crafted link.
Lessons Learned
Analyze error responses. Never skip past an error message; it may be exactly the indicator you need of what's missing for your exploitation phase.
Conclusion
These were my first five paying bugs in bug bounty. Finding bugs is hard for most, as the field is very competitive, with lots of smart security researchers around the world testing the same targets as you. At the time of publishing this article, I have found a few additional bugs in other companies, but these were under Vulnerability Disclosure Programs (i.e. they do not pay for findings, though they appreciate them), so no cash in the wallet. I've since taken a short pause from hunting to hone my skills and study more esoteric tactics, so I can boost my hunting game shortly.
Although the competition is fierce, there is new code being deployed every single second out there on the internet, and luckily for us bug hunters, this code is usually written by humans. Humans make mistakes, and these mistakes can be very costly for a company, and profitable for attackers.
Set a goal for yourself. My goal was to get paid for at least one vulnerability finding from May ’23 (when I started hunting) to end of Dec ’23. I was paid out for five. In 2024, my goal is to secure a single finding that pays in the 4-digits… with persistence I’ll get there, and so will you if you put in the time and effort.
Recommended resources
There are way too many to list, but if you're looking for a starting point, read writeups either on Medium or on X (search for #bugbounty and/or #bugbountytips). Reading published bug reports is also a stellar way to improve your skills. Learning platforms such as TryHackMe, HackTheBox, TCM Security and PentesterLab were great sources of information, but for web app security, PortSwigger Academy takes the cake as the best free resource out there, in my opinion.
Although the list is non-exhaustive, here are some fantastic content creators who have helped me learn throughout my journey, some of whom I've had the fortune to meet in person:
Followed YouTubers: Jason Haddix, Nahamsec, InsiderPHD, Rana Khalil, rs0n_live, Critical Thinking Bug Bounty Podcast, TCM Security, STOK, John Hammond, Ryan John.
Some (of many) X followed accounts: @HunterMapping, @ADITYASHENDE17, @Jayesh25_, @bxmbn, @intigrity, @albinowax, @joaxcar, @Rhynorater, @hunter0x7, @disclosedh1.
Thanks for reading, and I wish you great success in your bug hunting journey. If this helped or if you have any comments, feel free to drop me a message on X : @swehtpantz.