NZB
Golang NZB Library
NOTE: this package parses an NZB for processing, but also provides Save(file string) and Load(file string) functions. These functions do not write the NZB back as XML but as JSON, and the saved data also includes more information than the original NZB.
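Exactly how Save and Load hang together (return values, whether they are methods on *NZB) is not spelled out here; purely as a hedged sketch, assuming Save is a method on a parsed *NZB named n and Load is a package-level function returning a *NZB:

// Hypothetical usage; the receiver and return values of Save/Load are
// assumptions, not confirmed by this README.
if err := n.Save("download.json"); err != nil {
	panic(err)
}
restored, err := nzb.Load("download.json")
if err != nil {
	panic(err)
}
_ = restored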
NOTE: This library also fixes a common NZB download problem. It happens a lot that the article IDs inside an NZB contain escaped/special characters, because NZB is XML and the IDs get escaped automatically when the NZB is generated by tools like PowerPost. Without unescaping them you will receive an NNTP 430 Article not found, while the article is actually present on the server. This is fixed automatically within the library during parsing.
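For illustration only (the library's internal implementation may differ), Go's html.UnescapeString shows the kind of transformation involved:

package main

import (
	"fmt"
	"html"
)

func main() {
	// A hypothetical message-ID as it may appear escaped inside the NZB XML...
	escaped := "part1of10.ABC&amp;DEF@powerpost2000AA.local"
	// ...and the unescaped form the NNTP server actually knows about.
	fmt.Println(html.UnescapeString(escaped)) // part1of10.ABC&DEF@powerpost2000AA.local
}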
NOTE: For an implementation which uses this library, see NZB Verify.
Install
go get github.com/GJRTimmer/nzb
NZB Structure after parsing
*NZB
- Size (total size of the content in bytes)
- FileSets []*FileSet (an NZB may contain multiple filesets)
  - Name string
  - ParSet *ParSet
    - Parent *File (the parent *.par2 file)
    - TotalBlocks int (total number of par2 repair blocks)
    - Size (total size of the ParSet)
    - Files []*ParFile (all the *.vol###.par2 repair files; a ParFile is a *File extended with the number of par2 blocks)
  - Size (total size of the FileSet)
  - Files []*File (all the files of the fileset)
    - Filename string
    - Size (size of the file in bytes)
    - Poster string
    - Date int
    - Subject Subject
    - Groups []string (NNTP groups in which the file was posted)
    - Segments []*Segment (all the NNTP articles which make up a file)
      - ID string (NNTP article ID)
      - Number int (number of the segment within the File)
      - Bytes int (size of the segment in bytes)
      - Exists bool (see below)
Exists is a special addition: I use it when saving and loading because with the NNTP STAT command it is possible to check for the existence of an article on the NNTP server before fetching the body; this can be used to quickly verify whether a download is completely available.
// NZB .
type NZB struct {
Size Size `json:"Size"`
FileSets []*FileSet `json:"FileSets"`
}
// FileSet .
type FileSet struct {
Name string `json:"Name"`
ParSet *ParSet `json:"ParSet"`
Files []*File `json:"Files"`
Size Size `json:"Size"`
}
// File .
type File struct {
Filename string `json:"Filename"`
Size Size `json:"Size"`
// XML
Poster string `xml:"poster,attr" json:"Poster"`
Date int `xml:"date,attr" json:"Date"`
Subject Subject `xml:"subject,attr" json:"Subject"`
Groups []string `xml:"groups>group" json:"Groups"`
Segments []*Segment `xml:"segments>segment" json:"Segments"`
}
// ParSet .
type ParSet struct {
Parent *File `json:"Parent"`
TotalBlocks int `json:"TotalBlocks"`
Files []*ParFile `json:"Files"`
Size Size `json:"Size"`
}
// ParFile .
type ParFile struct {
*File
Blocks int `json:"Blocks"`
}
// Segment of File in NZB
type Segment struct {
ID string `xml:",innerxml" json:"ID"`
Number int `xml:"number,attr" json:"Number"`
Bytes int `xml:"bytes,attr" json:"Bytes"`
Exists bool `json:"Exists"`
}
// Size .
type Size uint64
// Subject of NZB File
type Subject string
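Tying this to the Exists field above: a minimal sketch of a verification pass, where statArticle is a placeholder for whatever NNTP client call you use to issue STAT (it is not part of this library):

// verifySegments marks each segment with the result of an NNTP STAT check
// and reports whether the whole file is available.
// statArticle is a hypothetical helper: it should issue "STAT <message-id>"
// against the NNTP server and report whether the article is present.
func verifySegments(f *nzb.File, statArticle func(id string) bool) bool {
	complete := true
	for _, s := range f.Segments {
		s.Exists = statArticle(s.ID)
		if !s.Exists {
			complete = false
		}
	}
	return complete
}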
Type Size
Size is a custom type which implements the fmt.Stringer interface; Size.String() prints the size in a human-readable format such as 372.12 MB.
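For example, printing a Size value goes through that Stringer implementation:

var total nzb.Size = 390199173
fmt.Println(total) // prints a human-readable value, e.g. "372.12 MB"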
Type Subject
Subject provides the following helper functions; see the code comments for more information, and a short usage sketch after this list:
ExtractFilename() (string, error)
ExtractPartNumber() (int, error)
ExtractTotalParts() (int, error)
ExtractYEncPartNumber() (int, error)
ExtractYEncTotalParts() (int, error)
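A short sketch of how these could be used, assuming they are methods on Subject; the subject line below is just an illustrative yEnc-style subject, not a format the library guarantees to parse:

subj := nzb.Subject(`"example" example.part01.rar (01/42) yEnc (1/105)`)

name, err := subj.ExtractFilename()
if err != nil {
	panic(err)
}
part, _ := subj.ExtractPartNumber()       // e.g. 1
total, _ := subj.ExtractTotalParts()      // e.g. 42
yPart, _ := subj.ExtractYEncPartNumber()  // e.g. 1
yTotal, _ := subj.ExtractYEncTotalParts() // e.g. 105
fmt.Println(name, part, total, yPart, yTotal)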
Chunks & Chunk
Looping over an NZB and downloading each segment one after the other is not very efficient, therefore this library provides a helper for download work. One reason is that the NNTP groups to which a file was posted are stored in the NZB under the corresponding file, but when downloading an article you also need to know the groups, because you must be able to issue the NNTP GROUP command to locate each article. The structure of an NZB is geared towards posting, not downloading.
Solution:
My personal favorite is to generate a Chunks list of everything that needs to be downloaded, then detach the NZB pointer from memory and only reload it from a JSON file (which is faster and more memory efficient) once all downloaded parts need to be processed.
A Chunk is a downloadable part of an NZB with all required information.
This will probably be updated and extended over time.
// Chunk is a downloadable part of an NZB
type Chunk struct {
Groups []string
Segment *Segment
}
As you can see, it contains the memory pointer to a segment of the NZB together with the required Groups, so you can easily issue the NNTP GROUP command and then fetch the article.
Chunks is nothing more than an array of Chunk with some private functions to aid in the generation of the array.
// Chunks list of all chunks within NZB
type Chunks struct {
c []*Chunk
mu *sync.Mutex
Total int
Marker int
}
Chunks also provides a Marker and a Total, which can be used to calculate the progress of the entire NZB download.
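For example, a rough progress indication (assuming Marker counts the chunks handed out so far):

progress := float64(chunks.Marker) / float64(chunks.Total) * 100
fmt.Printf("Progress: %d/%d chunks (%.1f%%)\n", chunks.Marker, chunks.Total, progress)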
The Chunks list is generated with the GenerateChunkList() *Chunks function on the *NZB object.
NOTE: The order of the Chunks list differs from that of the NZB; as a helper towards download clients, the chunk list is built according to the following schematic:
[]FileSet
- *.par2 parent file
- FileSet files
- par2 volume repair files
This ensures that a download client first fetches the parent par2, then the files of the FileSet and afterwards the par2 repair volumes. In the future it will be possible to exclude the par2 volumes from the Chunks list, so you can download the par2 parent and the FileSet, repair, check how many par2 repair blocks are missing, and then generate a Chunks list containing only the *Segments of the par2 files which hold the required repair blocks. (Work in progress :-))
Usage
To create an NZB object, use the Parse(r io.Reader) function, which takes an io.Reader.
Example Parsing
import (
	"fmt"
	"os"

	"github.com/GJRTimmer/nzb"
)

nFile, err := os.Open("test.nzb")
if err != nil {
	panic(err)
}
defer nFile.Close()

fmt.Printf("Parsing NZB...")

n, err := nzb.Parse(nFile)
if err != nil {
	panic(err)
}
fmt.Println("[DONE]")
Example Chunks
// Generate a Chunks list for downloading
chunks := n.GenerateChunkList()
If you want to get the first / next chunk from the chunk list, you can use the following; this will also automatically update the marker.
c := chunks.GetNext()
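For a simple sequential downloader you could drain the list in a loop; note that GetNext returning nil once the list is exhausted is an assumption here, not something this README states:

for c := chunks.GetNext(); c != nil; c = chunks.GetNext() {
	// select one of c.Groups via the NNTP GROUP command,
	// then fetch the article with ID c.Segment.ID
	_ = c
}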
When you are using a connection pool to an NNTP server with multiple connections, for example 30 connections to a single server, and you communicate over channels, the following helper generates a list of chunks which can be sent to the worker queue (yes, I have this all working, I just need the time to upload it). The additional + 10 is used to make sure the worker queue stays filled. See a working program at NZB Verify.
cList := chunks.GetChunks(maxConnections + 10)
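A sketch of feeding a worker queue over a channel; that GetChunks returns a []*Chunk (and an empty slice once everything has been handed out) is an assumption here:

maxConnections := 30
workQueue := make(chan *nzb.Chunk, maxConnections+10)

// Producer: keep the queue topped up with batches of chunks.
go func() {
	defer close(workQueue)
	for {
		batch := chunks.GetChunks(maxConnections + 10)
		if len(batch) == 0 {
			return
		}
		for _, c := range batch {
			workQueue <- c
		}
	}
}()

// Workers: one goroutine per NNTP connection in the pool.
for i := 0; i < maxConnections; i++ {
	go func() {
		for c := range workQueue {
			// issue GROUP for one of c.Groups, then fetch c.Segment.ID
			_ = c
		}
	}()
}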
Example for Dummies (Loop the files within the NZB)
// Loop over the FileSet(s) within the NZB
for _, fs := range n.FileSets {
// Loop over ParSet Files
for _, ps := range fs.ParSet.Files {
// ps == *ParFile
// Loop over the segments for each file
for _, s := range ps.Segments {
// s == *Segment
}
}
// Loop Files within the FileSet
for _, f := range fs.Files {
// f == *File
// Loop over the segments for each file
for _, s := range f.Segments {
// s == *Segment
}
}
}
// Enjoy