package youtube
v0.7.0

Warning: This package is not in the latest version of its module.
Published: Jan 28, 2021 License: GPL-3.0 Imports: 17 Imported by: 3

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Title

func Title(id string) (string, error)

Title extracts the page title of the given youtube clip id.

Types

type Result

type Result struct {
	// contains filtered or unexported fields
}

Result represents a youtube.com search result, i.e.: a youtube clip.

func FromURL

func FromURL(u, title string) (*Result, error)

FromURL parses the given url to extract the id and creates a youtube result. See NewResult.
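As an illustration of the kind of parsing FromURL presumably performs (a hypothetical sketch using only the standard library, not the package's actual implementation), a clip id can be pulled out of the two common youtube url shapes like this:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// extractID is a hypothetical helper illustrating how a clip id might be
// recovered from a youtube url; it is not this package's implementation.
func extractID(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	// youtube.com/watch?v=<id> style urls.
	if id := u.Query().Get("v"); id != "" {
		return id, nil
	}
	// youtu.be/<id> style urls.
	if id := strings.Trim(u.Path, "/"); id != "" {
		return id, nil
	}
	return "", fmt.Errorf("no clip id in %q", raw)
}

func main() {
	for _, raw := range []string{
		"https://www.youtube.com/watch?v=dQw4w9WgXcQ",
		"https://youtu.be/dQw4w9WgXcQ",
	} {
		id, err := extractID(raw)
		if err != nil {
			panic(err)
		}
		fmt.Println(id)
	}
}
```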

func NewResult

func NewResult(id, title string) *Result

NewResult creates a new youtube result. ID is required and should not be empty for it to be a valid youtube clip. Title is an arbitrary string that will be used as the title; it can be fetched using Title(id string) or Result.UpdateTitle().
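A minimal usage sketch (the clip id is arbitrary and UpdateTitle needs network access; error handling is illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/frizinak/libym/youtube"
)

func main() {
	// Create a result with an empty title, then fetch the real one
	// from youtube.com using the clip id.
	r := youtube.NewResult("dQw4w9WgXcQ", "")
	if err := r.UpdateTitle(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(r.ID(), r.Title())
}
```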

func Search

func Search(q string) ([]*Result, error)

Search queries youtube.com for search results matching the given query.
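Typical usage might look like the following sketch (requires network access; the query is arbitrary):

```go
package main

import (
	"fmt"
	"log"

	"github.com/frizinak/libym/youtube"
)

func main() {
	// Query youtube.com and print each matching clip.
	results, err := youtube.Search("rick astley")
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range results {
		fmt.Println(r.ID(), r.Title(), r.URL())
	}
}
```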

func (*Result) DownloadURL

func (r *Result) DownloadURL() (*url.URL, error)

DownloadURL asks youtube-dl to create a (temporary) download / stream url of the clip's contents.

func (*Result) ID

func (r *Result) ID() string

ID returns the clip id.

func (*Result) Title

func (r *Result) Title() string

Title returns the title associated with this Result.

func (*Result) URL

func (r *Result) URL() *url.URL

URL constructs the youtube url for this clip.
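For illustration, a watch url for a clip id can be built with the standard library as sketched below (a hypothetical reconstruction of what URL presumably returns; the package's exact output may differ):

```go
package main

import (
	"fmt"
	"net/url"
)

// watchURL builds a youtube watch url for the given clip id.
// This is an illustrative sketch, not this package's implementation.
func watchURL(id string) *url.URL {
	return &url.URL{
		Scheme:   "https",
		Host:     "www.youtube.com",
		Path:     "/watch",
		RawQuery: url.Values{"v": {id}}.Encode(),
	}
}

func main() {
	fmt.Println(watchURL("dQw4w9WgXcQ"))
}
```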

func (*Result) UpdateTitle

func (r *Result) UpdateTitle() error

UpdateTitle uses Title to update the clip's title using its id.

type Scraper

type Scraper struct {
	// contains filtered or unexported fields
}

Scraper is a wrapper around github.com/frizinak/libym/scraper to extract youtube Results.

func NewScraper

func NewScraper(s *scraper.Scraper, cb func(*Result)) *Scraper

NewScraper creates a new youtube url scraper with the given scraper. cb will be called with each match after a call to Scrape or ScrapeWithContext.
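Wiring a Scraper together might look like the sketch below. Only the signatures documented on this page are used; construction of the underlying scraper.Scraper is deliberately left out (see the github.com/frizinak/libym/scraper package), and the uri is arbitrary:

```go
package main

import (
	"fmt"
	"log"

	"github.com/frizinak/libym/scraper"
	"github.com/frizinak/libym/youtube"
)

// collectClips scrapes uri with an already-configured scraper.Scraper
// and collects every youtube clip the callback receives.
func collectClips(s *scraper.Scraper, uri string) ([]*youtube.Result, error) {
	var results []*youtube.Result
	yt := youtube.NewScraper(s, func(r *youtube.Result) {
		results = append(results, r)
	})
	if err := yt.Scrape(uri); err != nil {
		return nil, err
	}
	return results, nil
}

func main() {
	// Construction of *scraper.Scraper is omitted in this sketch.
	var s *scraper.Scraper
	clips, err := collectClips(s, "https://example.com/some-playlist")
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range clips {
		fmt.Println(c.ID(), c.Title())
	}
}
```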

func (*Scraper) Scrape

func (s *Scraper) Scrape(uri string) error

Scrape calls ScrapeWithContext without a cancellation context.

func (*Scraper) ScrapeWithContext

func (s *Scraper) ScrapeWithContext(ctx context.Context, uri string) error

ScrapeWithContext starts the scrape of the given url and can be canceled using ctx.

type ScraperCallback

type ScraperCallback struct {
	// contains filtered or unexported fields
}

ScraperCallback is the actual url matcher used by Scraper; it is most likely what you want to use.

func NewScraperCallback

func NewScraperCallback(cb func(*Result)) *ScraperCallback

NewScraperCallback creates a new ScraperCallback.

func (*ScraperCallback) Callback

func (s *ScraperCallback) Callback(uri *url.URL, doc *goquery.Document, depth, item, total int) error

Callback is the actual function that can be passed to a github.com/frizinak/libym/scraper.Scraper.
