Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Crawler ¶
type Crawler struct {
// contains filtered or unexported fields
}
Crawler is used to crawl webpages and populate a linked list with the extracted information.
func (*Crawler) Start ¶
Start is used to initiate the crawling process. It takes a root URL and spawns several goroutines to crawl the extracted links. It creates a buffered channel for the links, with a maximum buffer size provided to the Crawler, so that concurrency stays bounded. Once all the data is extracted, it loops over the linked list on the Crawler and populates and returns a slice of Page.