# meg

meg is a tool for fetching URLs. Lots of URLs.

## Install

```
▶ go get -u github.com/tomnomnom/meg
```

Or [download a binary](https://github.com/tomnomnom/meg/releases).

## Basic Usage

Given a file full of *suffixes*:

```
/robots.txt
/.well-known/security.txt
/package.json
```

And a file full of *prefixes*:

```
http://example.com
https://example.com
http://example.net
```

`meg` will request each *suffix* for every *prefix*:

```
▶ meg --verbose suffixes prefixes
out/example.com/45ed6f717d44385c5e9c539b0ad8dc71771780e0 http://example.com/robots.txt (404 Not Found)
out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618 https://example.com/robots.txt (404 Not Found)
out/example.net/1432c16b671043271eab84111242b1fe2a28eb98 http://example.net/robots.txt (404 Not Found)
out/example.net/61deaa4fa10a6f601adb74519a900f1f0eca38b7 http://example.net/.well-known/security.txt (404 Not Found)
out/example.com/20bc94a296f17ce7a4e2daa2946d0dc12128b3f1 http://example.com/.well-known/security.txt (404 Not Found)
...
```
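
For clarity, here's a rough Go sketch of that expansion: read both files and pair every prefix with every suffix. It illustrates the behaviour described above, not meg's actual code, and error handling is elided for brevity.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLines returns the non-empty lines of a file.
func readLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var lines []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if line := sc.Text(); line != "" {
			lines = append(lines, line)
		}
	}
	return lines, sc.Err()
}

func main() {
	prefixes, _ := readLines("prefixes") // error handling elided
	suffixes, _ := readLines("suffixes")

	// Every suffix is requested for every prefix.
	for _, p := range prefixes {
		for _, s := range suffixes {
			fmt.Println(p + s)
		}
	}
}
```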

The output is saved in a directory called `./out`:

```
▶ head -n 20 ./out/example.com/45ed6f717d44385c5e9c539b0ad8dc71771780e0
http://example.com/robots.txt

> GET /robots.txt HTTP/1.1
> Host: example.com
> User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36

< HTTP/1.1 404 Not Found
< Expires: Sat, 06 Jan 2018 01:05:38 GMT
< Server: ECS (lga/13A2)
< Accept-Ranges: bytes
< Cache-Control: max-age=604800
< Content-Type: text/*
< Content-Length: 1270
< Date: Sat, 30 Dec 2017 01:05:38 GMT
< Last-Modified: Sun, 24 Dec 2017 06:53:36 GMT
< X-Cache: 404-HIT

<!doctype html>
<html>
<head>
```
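
Based on that layout (the URL on the first line, request lines prefixed with `>`, response headers prefixed with `<`, then the body), a small Go sketch can split a stored response back into its parts. The format details here are inferred from the example output above, not from a spec, so treat the parsing as approximate:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open(os.Args[1]) // path to one file under ./out
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var url string
	var request, response, body []string

	sc := bufio.NewScanner(f)
	for i := 0; sc.Scan(); i++ {
		line := sc.Text()
		switch {
		case i == 0:
			url = line // first line is the request URL
		case strings.HasPrefix(line, "> "):
			request = append(request, strings.TrimPrefix(line, "> "))
		case strings.HasPrefix(line, "< "):
			response = append(response, strings.TrimPrefix(line, "< "))
		case line != "":
			body = append(body, line) // rough: keeps non-empty body lines only
		}
	}

	fmt.Println("url:", url)
	fmt.Println("request lines:", len(request))
	fmt.Println("response header lines:", len(response))
	fmt.Println("body lines:", len(body))
}
```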

Without any arguments, meg will read suffixes from a file called `suffixes`
and prefixes from a file called `prefixes`, printing nothing to the terminal:

```
▶ meg
▶
```

But it will save an *index* file to `./out/index`:

```
▶ head -n 2 ./out/index
out/example.com/538565d7ab544bc3bec5b2f0296783aaec25e756 http://example.com/package.json (404 Not Found)
out/example.com/20bc94a296f17ce7a4e2daa2946d0dc12128b3f1 http://example.com/.well-known/security.txt (404 Not Found)
```

You can use the index file to find where the response is stored, but it's
often easier to find what you're looking for with `grep`:

```
▶ grep -Hnri '< Server:' out/
out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618:14:< Server: ECS (lga/13A2)
out/example.com/bd8d9f4c470ffa0e6ec8cfa8ba1c51d62289b6dd:16:< Server: ECS (lga/13A3)
```
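
If you'd rather work with the index programmatically, a minimal Go sketch can load it into a URL-to-path map, assuming the `path url (status)` layout shown above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("out/index")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	locations := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// e.g. "out/example.com/538565... http://example.com/package.json (404 Not Found)"
		fields := strings.SplitN(sc.Text(), " ", 3)
		if len(fields) < 2 {
			continue
		}
		locations[fields[1]] = fields[0] // URL -> file path
	}

	fmt.Println(locations["http://example.com/package.json"])
}
```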

If you want to request just one suffix, you can specify it directly as an argument:

```
▶ meg /admin.php
```

## Detailed Usage

meg's help output tries to actually be helpful:

```
▶ meg --help
Request many paths (suffixes) for many hosts (prefixes)

Usage:
  meg [suffix|suffixFile] [prefixFile] [outputDir]

Options:
  -c, --concurrency <val>    Set the concurrency level (default: 20)
  -d, --delay <val>          Milliseconds between requests to the same host (default: 5000)
  -H, --header <header>      Send a custom HTTP header
  -s, --savestatus <status>  Save only responses with a specific status code
  -v, --verbose              Verbose mode
  -X, --method <method>      HTTP method (default: GET)

Defaults:
  suffixFile: ./suffixes
  prefixFile: ./prefixes
  outputDir:  ./out

Suffix file format:
  /robots.txt
  /package.json
  /security.txt

Prefix file format:
  http://example.com
  https://example.edu
  https://example.net

Examples:
  meg /robots.txt
  meg hosts.txt paths.txt output
```

### Concurrency

By default meg will attempt to make 20 concurrent requests. You can change that
with the `-c` or `--concurrency` option:

```
▶ meg --concurrency 5
```

It's not very friendly to set the concurrency level higher than the number of
prefixes; you may end up sending many requests to a single host at once.
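
Under the hood this is the classic bounded-concurrency pattern: a fixed pool of workers draining a channel of URLs. A minimal Go sketch of the technique (not meg's internals) looks like this:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	concurrency := 20
	urls := make(chan string)

	// Start a fixed number of workers; each pulls URLs off the channel.
	var wg sync.WaitGroup
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for u := range urls {
				resp, err := http.Get(u)
				if err != nil {
					continue
				}
				resp.Body.Close()
				fmt.Println(u, resp.Status)
			}
		}()
	}

	// Feed the workers, then close the channel so they exit.
	for _, u := range []string{"http://example.com/robots.txt"} {
		urls <- u
	}
	close(urls)
	wg.Wait()
}
```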

### Delay

By default meg will wait 5000 milliseconds between requests to the same host.
You can override that with the `-d` or `--delay` option:

```
▶ meg --delay 10000
```

**Warning:** before reducing the delay, ensure that you have permission to make
large volumes of requests to the hosts you're targeting.
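
The idea behind the delay is simple per-host rate limiting: remember when each host was last hit and sleep out the remainder. A sequential Go sketch of that idea (meg's actual scheduling may differ):

```go
package main

import (
	"fmt"
	"net/url"
	"time"
)

func main() {
	delay := 5000 * time.Millisecond
	lastHit := make(map[string]time.Time) // host -> time of last request

	requests := []string{
		"http://example.com/robots.txt",
		"http://example.com/package.json",
		"http://example.net/robots.txt",
	}

	for _, raw := range requests {
		u, err := url.Parse(raw)
		if err != nil {
			continue
		}
		// If this host was hit recently, wait out the rest of the delay.
		if last, ok := lastHit[u.Host]; ok {
			if wait := delay - time.Since(last); wait > 0 {
				time.Sleep(wait)
			}
		}
		lastHit[u.Host] = time.Now()
		fmt.Println("requesting", raw)
	}
}
```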

### Adding Headers

You can set additional headers on the requests with the `-H` or `--header`
option:

```
▶ meg --header "Origin: https://evil.com"
▶ grep -h '^>' out/example.com/*
> GET /.well-known/security.txt HTTP/1.1
> Origin: https://evil.com
> Host: example.com
> User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
...
```
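
In Go's `net/http`, attaching such a header is a one-liner on the request. Here's a small sketch; splitting the `Name: value` flag format at the first colon is an assumption made for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	header := "Origin: https://evil.com"

	req, err := http.NewRequest("GET", "http://example.com/robots.txt", nil)
	if err != nil {
		panic(err)
	}

	// Split the flag value into a name and a value at the first colon.
	if name, value, ok := strings.Cut(header, ":"); ok {
		req.Header.Set(name, strings.TrimSpace(value))
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```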

### Saving Only Certain Status Codes

If you only want to save results that returned a certain status code, you can
use the `-s` or `--savestatus` option:

```
▶ meg --savestatus 200 /robots.txt
```
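
The behaviour amounts to checking the status code before writing anything to disk. A tiny Go sketch of that filter, where the "save" is just a print rather than meg's output format:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	saveStatus := 200

	resp, err := http.Get("http://example.com/robots.txt")
	if err != nil {
		return
	}
	defer resp.Body.Close()

	// Drop non-matching responses instead of saving them.
	if resp.StatusCode != saveStatus {
		return
	}
	fmt.Println("would save:", resp.Request.URL, resp.Status)
}
```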

### Specifying The Method

You can specify which HTTP method to use with the `-X` or `--method` option:

```
▶ meg --method TRACE
▶ grep -nri 'TRACE' out/
out/example.com/61ac5fbb9d3dd054006ae82630b045ba730d8618:3:> TRACE /robots.txt HTTP/1.1
out/example.com/bd8d9f4c470ffa0e6ec8cfa8ba1c51d62289b6dd:3:> TRACE /.well-known/security.txt HTTP/1.1
...
```
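
`net/http` accepts arbitrary method strings, so a custom method is just a parameter to the request constructor. A minimal sketch:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Any method string works here: TRACE, OPTIONS, PUT, ...
	req, err := http.NewRequest("TRACE", "http://example.com/robots.txt", nil)
	if err != nil {
		panic(err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```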
