feedme
feedme is an infrastructure for creating Atom and RSS feeds from any website. It consists of a crawler and a web service. The crawler fetches feed definitions (such as the website URL and what to extract from the website) from a database backend, crawls the website, transforms the crawled information into consistent feed items and stores them in the database. The web service generates valid Atom and RSS feeds from the stored items of a given feed.
Requirements
- Go 1.2 or higher
- PostgreSQL as the database backend
Set up feedme
Please note that the following commands use the user's default PostgreSQL user, database and password. If you want to use different login settings, you have to specify them using the corresponding psql, feedme-crawler and feedme-server arguments.
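For example, assuming a dedicated PostgreSQL user and database both named feedme (placeholder names, adjust them to your setup), the corresponding calls could look like this:
psql -U feedme -d feedme < $GOPATH/src/github.com/zimmski/feedme/scripts/postgresql_ddl.sql
$GOBIN/feedme-crawler --spec="user=feedme dbname=feedme sslmode=disable"
$GOBIN/feedme-server --spec="user=feedme dbname=feedme sslmode=disable"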
Fetch feedme with the go command and install all dependencies.
go get github.com/zimmski/feedme
cd $GOPATH/src/github.com/zimmski/feedme
go get ./...
Initialize the database backend. Make sure that this works without errors.
psql < $GOPATH/src/github.com/zimmski/feedme/scripts/postgresql_ddl.sql
Create binaries for the crawler and server.
go install github.com/zimmski/feedme/feedme-crawler
go install github.com/zimmski/feedme/feedme-server
Please note that you could also use the usual go run command to start the crawler or server.
Start the server.
$GOBIN/feedme-server --enable-logging
Insert your feeds with the transformation definitions into the database and execute the crawler for the first time. Make sure that this works without errors.
$GOBIN/feedme-crawler --verbose
Test your feeds with your RSS reader or browser by going to http://localhost:9090/, http://localhost:9090/yourfeedname/atom and http://localhost:9090/yourfeedname/rss. If everything works, you can run the crawler as a cron job to update your feeds automatically.
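For example, a crontab entry along the following lines (the interval and the binary path are placeholders) would update all feeds every 30 minutes:
*/30 * * * * /path/to/feedme-crawler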
Add feeds to the database
Currently there is no UI for modifying feed definitions. You have to insert and update them manually through your favorite PostgreSQL interface. In the folder /examples you can find examples for transformations.
For example, /examples/dilbert.com.json holds the transformation for the divine Dilbert comic. This definition will add the current comic image of the dilbert.com home page if it does not already exist in the database.
You can add the dilbert.com feed to your database by issuing the following SQL statement.
INSERT INTO feeds(name, url, transform) VALUES ('dilbert.com', 'http://dilbert.com/', '{"items": [{"search": "div.STR_Image","do": [{"find": "a","do": [{"attr": "href","do": [{"regex": "/strips/comic/(.+)/","matches": [{"name": "date","type": "string"}]}]}]},{"find": "img","do": [{"attr": "src","do": [{"copy": true,"name": "image","type": "string"}]}]}]}],"transform": {"title": "Strip {{.date}}","uri": "/strips/comic/{{.date}}/","description": "<img src=\"http://dilbert.com{{.image}}\"/> Strip {{.date}}"}}');
The name column of the feeds table must be unique and is the identifying name of the feed used in the feed URLs of the web service. The url column defines which page should be fetched and transformed for the feed generation. The transform column holds the transformation definition.
Transformation (definition)
A transformation definition uses JSON as its format. The base consists of the two elements items (an array of selectors) and transform (a hash of templates for the feed item fields).
An empty transformation definition:
{
"items": [
],
"transform": {
}
}
The transform hash holds key-value pairs of templates. For example, the following transform hash would assign all found feed items the title "News title", the uri "/the/news/uri" and the description "This just in. An important news.":
{
"items": [
],
"transform": {
"title": "News title",
"uri": "/the/news/uri",
"description": "This just in. An important news."
}
}
A definition for the transform element does not make much sense without items that can be transformed. The items element defines selectors for selecting DOM elements from the feed's URL and also holds definitions on what information should be stored. Stored information can be accessed by the transform element through its identifiers.
For example
{
"items": [
],
"transform": {
"title": "News {{.title}}",
"uri": "/images/{{.image}}",
"description": "The title {{.title}} belongs to the image {{.image}}."
}
}
would access the stored information of title and image for each feed item.
The following identifiers are defined by default and can be overwritten:
- date - The current date formatted in ISO 8601
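For example, the default date identifier can be referenced in a transform template just like any stored identifier:
{
	"items": [
	],
	"transform": {
		"title": "News of {{.date}}"
	}
}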
Selecting nodes
Selecting nodes can be nested through their do element and can contain storing nodes.
search
Search uses a CSS selector to select many elements.
{
"search": "CSS selector",
"do": [
]
}
find
Find uses a CSS selector to select at most one element.
{
"find": "CSS selector",
"do": [
]
}
attr
Attr selects exactly one attribute of the parent element and can only contain storing nodes in its do element.
{
"search": "attribute name",
"do": [
]
}
text
Text extracts the combined text contents of the current node and its children.
{
"text": true,
"do": [
]
}
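For example, the following sketch (an assumption based on the descriptions above, not taken from the examples folder) would store the text content of the first a element under the identifier title:
{
	"find": "a",
	"do": [
		{
			"text": true,
			"do": [
				{
					"copy": true,
					"name": "title",
					"type": "string"
				}
			]
		}
	]
}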
Storing nodes
copy
Copy copies the attribute value directly for the feed item transformation.
{
"copy": true,
"name": "storing name",
"type": "int or string, which is the type of the value"
}
regex
Regex uses its regex string on the parent's attribute value to parse it and store matching groups for the feed item transformation. The matches element holds an array of name-type pairs for storing item information and must match the count of the capturing groups of the regex.
{
"regex": "regex with capturing groups",
"matches": [
{
"name": "storing name of first match",
"type": "int or string, which is the type of the value"
}
]
}
For example
{
"regex": "id=(\\d+)&image=(.+)",
"matches": [
{
"name": "id",
"type": "int"
},
{
"name": "image",
"type": "string"
}
]
}
would parse the value of the given attribute and store the parsed values into id and image for transforming the feed items.
Example file
{
"items": [
{
"search": "div.news",
"do": [
{
"find": "a",
"do": [
{
"attr": "href",
"do": [
{
"regex": "id=(\\d+)",
"matches": [
{
"name": "id",
"type": "int"
}
]
}
]
}
]
},
{
"find": "img",
"do": [
{
"attr": "src",
"do": [
{
"copy": true,
"name": "image",
"type": "string"
}
]
}
]
}
]
}
],
"transform": {
"title": "News {{.id}}",
"uri": "/news/{{.id}}",
"description": "<img src=\"{{.image}}\"/>"
}
}
This transformation selects all div.news elements of the fetched page and looks in every div.news element for the first a element and the first img element. The href attribute of the a element gets parsed by a regex for the news id, which is stored with the identifier id. The src attribute of the img element gets copied into the image identifier.
Every div.news element represents a feed item, as the selection for div.news elements is in the root array of the items transformation. All stored identifiers will be given to the templates of the fields in the transform hash. After inserting their information into the feed item field values, the final values are stored in the database.
feedme-crawler
CLI arguments
--config= INI config file
--config-write= Write all arguments to an INI config file or to STDOUT with "-" as argument
--feed= Fetch only the feed with this name (can be used more than once)
--list-feeds List all available feed names
--max-idle-conns= Max idle connections of the database (10)
--max-open-conns= Max open connections of the database (10)
-s, --spec= The database connection spec (dbname=feedme sslmode=disable)
--test-file= Instead of fetching feed URLs the content of this file is transformed. The result is not saved into the database
-t, --threads= Thread count for processing (Default is the system's CPU count)
-w, --workers= Worker count for processing feeds (1)
-v, --verbose Print what is going on
-h, --help Show this help message
By default, the crawler fetches all defined feeds. By using the --feed argument, which can be used more than once, it is possible to fetch only specific feeds. The --spec argument uses the connection string parameters of the excellent pg package. Please have a look at the official documentation if you need different settings.
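For example, the following call (using the dilbert.com feed from above and a placeholder connection spec) crawls only that one feed:
$GOBIN/feedme-crawler --feed=dilbert.com --spec="user=feedme dbname=feedme sslmode=disable" --verbose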
Configuration file
All CLI arguments can be defined via an INI configuration file, which can be initialized via the --config-write argument and then used via the --config argument.
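For example (the file name crawler.ini is just a placeholder):
$GOBIN/feedme-crawler --config-write=crawler.ini
$GOBIN/feedme-crawler --config=crawler.ini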
Please note that CLI arguments overwrite settings from the configuration file.
Environment variables
FEEDMESPEC sets the --spec CLI argument through the environment
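For example, the connection spec of the crawler can be set through the environment like this (the spec itself is a placeholder):
FEEDMESPEC="user=feedme dbname=feedme sslmode=disable" $GOBIN/feedme-crawler --verbose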
Please note that CLI arguments overwrite settings from the environment and the environment overwrites settings from the configuration file.
feedme-server
CLI arguments
--config= INI config file
--config-write= Write all arguments to an INI config file or to STDOUT with "-" as argument
--enable-logging Enable request logging
--max-idle-conns= Max idle connections of the database (10)
--max-open-conns= Max open connections of the database (10)
-p, --port= HTTP port of the server (9090)
-s, --spec= The database connection spec (dbname=feedme sslmode=disable)
-h, --help Show this help message
The --spec argument uses the connection string parameters of the excellent pg package. Please have a look at the official documentation if you need different settings.
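For example, the following call (with a placeholder connection spec) starts the server on port 8080 with request logging:
$GOBIN/feedme-server --port=8080 --spec="user=feedme dbname=feedme sslmode=disable" --enable-logging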
Configuration file
All CLI arguments can be defined via an INI configuration file, which can be initialized via the --config-write argument and then used via the --config argument.
Environment variables
FEEDMESPEC sets the --spec CLI argument through the environment
Please note that CLI arguments overwrite settings from the environment and the environment overwrites settings from the configuration file.
Routes
- / - Displays all feed definitions via JSON.
- /<feed name>/atom - Displays an Atom feed for the given feed.
- /<feed name>/rss - Displays an RSS feed for the given feed.
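For example, with the default port and the dilbert.com feed from above, the feeds can be fetched like this:
curl http://localhost:9090/
curl http://localhost:9090/dilbert.com/atom
curl http://localhost:9090/dilbert.com/rss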