Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type CrossDomainFilter ¶
type CrossDomainFilter struct {
	Domain string
}
func (CrossDomainFilter) Filter ¶
func (filter CrossDomainFilter) Filter(urls []string) []string
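The package does not document Filter's behavior. Judging by the name and the Domain field, a plausible reading is that it drops URLs whose host differs from Domain. The following is a minimal, self-contained sketch of that assumed rule, not the package's actual implementation:

package main

import (
	"fmt"
	"net/url"
)

// CrossDomainFilter mirrors the type above. The filtering rule below is an
// assumption inferred from the name, not documented package behavior.
type CrossDomainFilter struct {
	Domain string
}

// Filter keeps only URLs whose host matches Domain, discarding cross-domain links.
func (filter CrossDomainFilter) Filter(urls []string) []string {
	var kept []string
	for _, raw := range urls {
		u, err := url.Parse(raw)
		if err != nil {
			continue // skip unparseable URLs
		}
		if u.Hostname() == filter.Domain {
			kept = append(kept, raw)
		}
	}
	return kept
}

func main() {
	f := CrossDomainFilter{Domain: "example.com"}
	fmt.Println(f.Filter([]string{"https://example.com/a", "https://other.org/b"}))
	// Output: [https://example.com/a]
}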
type DefaultProcessor ¶
type DefaultProcessor struct{}
DefaultProcessor is the default implementation of the crawler.Processor interface. It simply creates an appropriate CrawlReport instance from the results of the collector.
func (DefaultProcessor) Process ¶
func (processor DefaultProcessor) Process(URL *url.URL, res *http.Response, anchors []Anchor, err error) executor.Report
Process creates a CrawlReport instance from the given parameters. Note that when the http.Response is nil, the HTTPStatus in the CrawlReport is set to 0.
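CrawlReport's fields are not shown on this page, so the sketch below uses a hypothetical report struct (URL, HTTPStatus, and Err are illustrative names) to demonstrate the documented nil-response rule:

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// crawlReport is a hypothetical stand-in for the package's CrawlReport,
// used only to illustrate the documented nil-response behavior.
type crawlReport struct {
	URL        string
	HTTPStatus int
	Err        error
}

// process mirrors the documented rule: a nil *http.Response yields HTTPStatus 0.
func process(u *url.URL, res *http.Response, err error) crawlReport {
	report := crawlReport{URL: u.String(), Err: err}
	if res != nil {
		report.HTTPStatus = res.StatusCode
	}
	// When res is nil, HTTPStatus keeps Go's zero value, 0.
	return report
}

func main() {
	u, _ := url.Parse("https://example.com")
	fmt.Println(process(u, nil, fmt.Errorf("connection refused")))
}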
type NoneFilter ¶
type NoneFilter struct{}
func (NoneFilter) Filter ¶
func (filter NoneFilter) Filter(urls []string) []string
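NoneFilter is likewise undocumented; its name suggests a pass-through that performs no filtering. A minimal sketch of that assumed behavior:

package main

import "fmt"

type NoneFilter struct{}

// Filter returns the input unchanged. This no-op behavior is inferred
// from the type's name, not documented by the package.
func (filter NoneFilter) Filter(urls []string) []string {
	return urls
}

func main() {
	fmt.Println(NoneFilter{}.Filter([]string{"https://example.com"}))
}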
type Report ¶
Report represents the result of crawling a single URL.
type URLCollector ¶
URLCollector is an implementation of the collector interface; it collects Anchors from the pages the crawler visits.
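URLCollector's methods are not listed on this page. As an illustration of what collecting anchors from a visited page involves, the sketch below extracts anchor hrefs with the golang.org/x/net/html parser; it is a generic sketch under that assumption, not URLCollector's actual code:

package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

// collectAnchors is a hypothetical helper showing how anchor hrefs could be
// gathered from a fetched page's HTML; it is not URLCollector's implementation.
func collectAnchors(body string) []string {
	doc, err := html.Parse(strings.NewReader(body))
	if err != nil {
		return nil
	}
	var hrefs []string
	var walk func(n *html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "a" {
			for _, attr := range n.Attr {
				if attr.Key == "href" {
					hrefs = append(hrefs, attr.Val)
				}
			}
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
	return hrefs
}

func main() {
	page := `<html><body><a href="/about">About</a><a href="https://example.com">Home</a></body></html>`
	fmt.Println(collectAnchors(page))
	// Output: [/about https://example.com]
}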