README
• Features • Demo • Installation • Usage • Running Subsnort • Notes •
A simple but blazingly fast, unconventional yet surprisingly competent subdomain discovery tool written in Go, which will recursively crawl websites within a user-defined scope, looking for subdomains. Pretty damn effective when used in conjunction with other tools.
Features:
subsnort takes a list of URLs or domains from stdin, scrapes each one for URLs, adds those to the crawl queue, extracts the domain names and writes them to stdout, before crawling those URLs for more URLs and so on, concurrently and recursively.
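The crawl loop might look something like the minimal sketch below (hypothetical code, not Subsnort's actual internals): a fixed pool of goroutines (the -t / --threads knob) pulls URLs from a shared queue, writes each newly seen domain to stdout, and feeds scraped links back into the queue. The extractLinks helper is a stand-in for the real scraper, and depth limiting (-d) is omitted for brevity.

```go
// Minimal sketch of a concurrent, recursive crawl loop (hypothetical;
// not Subsnort's actual source).
package main

import (
	"bufio"
	"fmt"
	"net/url"
	"os"
	"sync"
)

// extractLinks is a stand-in for the real scraper: it would fetch the
// page at pageURL and return every URL found in it.
func extractLinks(pageURL string) []string { return nil }

func main() {
	jobs := make(chan string, 1024)
	seen := make(map[string]bool) // domains already written to stdout
	var mu sync.Mutex
	var wg sync.WaitGroup

	enqueue := func(u string) {
		wg.Add(1)
		select {
		case jobs <- u:
		default:
			wg.Done() // queue full; drop rather than deadlock
		}
	}

	// Worker pool (-t / --threads in the real tool).
	for i := 0; i < 10; i++ {
		go func() {
			for u := range jobs {
				if parsed, err := url.Parse(u); err == nil {
					mu.Lock()
					if host := parsed.Hostname(); host != "" && !seen[host] {
						seen[host] = true
						fmt.Println(host) // found domains go to stdout
					}
					mu.Unlock()
				}
				// Recurse: scraped links go back into the queue.
				for _, link := range extractLinks(u) {
					enqueue(link)
				}
				wg.Done()
			}
		}()
	}

	// Seed the queue from stdin, one URL or domain per line.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		enqueue(scanner.Text())
	}
	wg.Wait()
}
```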
A scope can be specified with -s / --scope. This will prevent Subsnort from crawling domains that do not match the specified string. Note that this only affects the crawling mechanism; found domains that don't match the scope will still be written to the output. Pipe the output to grep for further filtering.
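To make the "crawl vs. output" distinction concrete, here's a tiny runnable sketch (hypothetical code, assuming the substring-match semantics described above; not Subsnort's actual source):

```go
// Sketch of the scope rule: every discovered domain is written to
// stdout, but only in-scope URLs go back into the crawl queue.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// inScope mirrors the -s / --scope semantics: a plain substring match
// against the URL's hostname.
func inScope(host, scope string) bool {
	return scope == "" || strings.Contains(host, scope)
}

func main() {
	scope := "hackerone.com"
	found := []string{
		"https://api.hackerone.com/v1/reports",
		"https://cdn.example.net/asset.js", // out of scope
	}
	for _, raw := range found {
		u, err := url.Parse(raw)
		if err != nil {
			continue
		}
		fmt.Println(u.Hostname()) // always written, in scope or not
		if inScope(u.Hostname(), scope) {
			fmt.Println("  -> queued for further crawling")
		}
	}
}
```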
Subsnort will by default use a new random User-Agent header for every request, picked from a list of common ones (defined in agents.txt and embedded in the binary at build time). Additional headers can be specified with -H (like curl – for God's sake, developers, can't we agree to all use this syntax for our tools, pretty please??)
Demo:
Installation:
go install github.com/n0kovo/subsnort@latest
Usage:
acidbrn@gibson# subsnort -h
________ ________ ________ ________ _________ ________ ________ __________
╱ ╲ ╱ ╱ ╲ ╱ ╱ ╱ ╲ ╱ ╱ ╲ ╱ ╲ ╱ ╲ ╱ ╲
╱ ———╱ ╱ ╱ ╱ ╲ ╱ ———╱ ╱ ╱ ╱ ╱ ╱ ╱ ╱ ╱ ╱_ _╱
╱——— ╱ ╱ ╱ ╱ ╱ ╱ ╱——— ╱ ╱ ╱ ╱ ╱ ╱ ╱ _╱ ╱ ╱ v0.01 by
╲________╱ ╲________╱ ╲________╱ ╲________╱ ╲___╱_____╱ ╲________╱ ╲____╱___╱ ╲______╱ @n0kovo
[ Subsnort is an UnCoNveNTiOnal subdomain discovery tool that recursively crawls sites looking for subdomains ]
Usage of subsnort:
-d, --depth int Maximum depth to crawl (default 3)
-H, --header strings Adds a header to the requests. Can be specified multiple times.
--no-color Disable color output. (Stdout will always be colorless)
-q, --quiet Suppress status and error messages
-r, --random-agent Sets a random User-Agent for every HTTP request. (default true)
-s, --scope string Only crawl URLs where the domain name contains the specified string
-t, --threads int Maximum number of concurrent goroutines (default 10)
--timeout int Timeout in seconds (default 2)
Examples:
cat urls.txt | subsnort -s hackerone.com -d 3 --threads 10 -H "Cookie: uid=1312-161-HWDP-1337" -H "Accept: */*"
echo "hackerone.com" | subsnort -s hackerone --timeout 5 -d 10 -t 100 --random-agent --quiet | grep hackerone
subfinder -all -d hackerone.com | waybackurls | subsnort -s hackerone.com -d 3 -q -t 5
Running Subsnort:
Subsnort works best in conjunction with other tools like subfinder, waybackurls, dnsx, httpx, alterx, hakip2host etc.
See above for a couple of examples. Get those pipes steaming m8!
Notes:
Subsnort has been written by a very inexperienced Go developer (me), with a lot of help from our old friend GPT-4, so please don't judge my inevitable Go sins and shitty code. If it sucks, GPT-4 wrote it. If it's awesome, I did. (Please make PRs 🥹)
PS: If you're into subdomain enumeration, check out this cool wordlist I made, and the accompanying lengthy technical blog post I wrote about it:
Subdomain Enumeration: Creating A Highly Efficient Wordlist By Scanning The Entire Internet: A Case Study