Documentation ¶
Index ¶
- Constants
- Variables
- type ChartHandler
- type Conn
- type ConnectionStore
- type Count
- type Database
- func (db *Database) ApplyMigrations() error
- func (db *Database) Cleanup(ctx context.Context, threshold time.Time) (int, error)
- func (db *Database) Close() error
- func (db *Database) CountRows(ctx context.Context) (int, error)
- func (db *Database) Execute(ctx context.Context, sql string, args ...orm.QueryOption) error
- func (db *Database) ExecuteWrite(ctx context.Context, sql string, args ...orm.QueryOption) error
- func (db *Database) MarkAllHistoryConnectionsEnded(ctx context.Context) error
- func (db *Database) RemoveAllHistoryData(ctx context.Context) error
- func (db *Database) RemoveHistoryForProfile(ctx context.Context, profileID string) error
- func (db *Database) Save(ctx context.Context, conn Conn, enableHistory bool) error
- func (db *Database) UpdateBandwidth(ctx context.Context, enableHistory bool, processKey string, connID string, ...) error
- type DatabaseName
- type Equal
- type Manager
- type MatchType
- type Matcher
- type Min
- type OrderBy
- type OrderBys
- type Pagination
- type Query
- type QueryActiveConnectionChartPayload
- type QueryHandler
- type QueryRequestPayload
- type RuntimeQueryRunner
- type Select
- type Selects
- type Sum
- type TextSearch
Constants ¶
const (
    ConnTypeDNS = "dns"
    ConnTypeIP  = "ip"
)
Available connection types as their string representation.
const (
    LiveDatabase    = DatabaseName("main")
    HistoryDatabase = DatabaseName("history")
)
Databases.
const InMemory = "file:inmem.db?mode=memory"
InMemory is the "file path" to open a new in-memory database.
Variables ¶
var ConnectionTypeToString = map[network.ConnectionType]string{
    network.DNSRequest:   ConnTypeDNS,
    network.IPConnection: ConnTypeIP,
}
ConnectionTypeToString is a lookup map to get the string representation of a network.ConnectionType as used by this package.
var DefaultModule *module
DefaultModule is the default netquery module.
Functions ¶
This section is empty.
Types ¶
type ChartHandler ¶
type ChartHandler struct {
    Database *Database
}
ChartHandler handles requests for connection charts.
func (*ChartHandler) ServeHTTP ¶
func (ch *ChartHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request)
type Conn ¶
type Conn struct {
    // ID is a device-unique identifier for the connection. It is built
    // from network.Connection by hashing the connection ID and the start
    // time. We cannot just use the network.Connection.ID because it is only unique
    // as long as the connection is still active and might be, although unlikely,
    // reused afterwards.
    ID              string            `sqlite:"id,primary"`
    ProfileID       string            `sqlite:"profile"`
    Path            string            `sqlite:"path"`
    Type            string            `sqlite:"type,varchar(8)"`
    External        bool              `sqlite:"external"`
    IPVersion       packet.IPVersion  `sqlite:"ip_version"`
    IPProtocol      packet.IPProtocol `sqlite:"ip_protocol"`
    LocalIP         string            `sqlite:"local_ip"`
    LocalPort       uint16            `sqlite:"local_port"`
    RemoteIP        string            `sqlite:"remote_ip"`
    RemotePort      uint16            `sqlite:"remote_port"`
    Domain          string            `sqlite:"domain"`
    Country         string            `sqlite:"country,varchar(2)"`
    ASN             uint              `sqlite:"asn"`
    ASOwner         string            `sqlite:"as_owner"`
    Latitude        float64           `sqlite:"latitude"`
    Longitude       float64           `sqlite:"longitude"`
    Scope           netutils.IPScope  `sqlite:"scope"`
    WorstVerdict    network.Verdict   `sqlite:"worst_verdict"`
    ActiveVerdict   network.Verdict   `sqlite:"verdict"`
    FirewallVerdict network.Verdict   `sqlite:"firewall_verdict"`
    Started         time.Time         `sqlite:"started,text,time"`
    Ended           *time.Time        `sqlite:"ended,text,time"`
    Tunneled        bool              `sqlite:"tunneled"`
    Encrypted       bool              `sqlite:"encrypted"`
    Internal        bool              `sqlite:"internal"`
    Direction       string            `sqlite:"direction"`
    ExtraData       json.RawMessage   `sqlite:"extra_data"`
    Allowed         *bool             `sqlite:"allowed"`
    ProfileRevision int               `sqlite:"profile_revision"`
    ExitNode        *string           `sqlite:"exit_node"`
    BytesReceived   uint64            `sqlite:"bytes_received,default=0"`
    BytesSent       uint64            `sqlite:"bytes_sent,default=0"`

    // TODO(ppacher): support "NOT" in search query to get rid of the following helper fields
    Active bool `sqlite:"active"` // could use "ended IS NOT NULL" or "ended IS NULL"

    // TODO(ppacher): we need to profile here for "suggestion" support. It would be better
    // to keep a table of profiles in sqlite and use joins here
    ProfileName string `sqlite:"profile_name"`
}
Conn is a network connection that is stored in a SQLite database and accepted by the *Database type of this package. This also defines, using the ./orm package, the table schema and the model that is exposed via the runtime database as well as the query API.
Use ConvertConnection from this package to convert a network.Connection to this representation.
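For illustration, a minimal sketch of building a Conn by hand and persisting it with Save; the field values, db, and ctx are hypothetical, real Conn values are derived from a network.Connection, and the snippet assumes this package is imported as netquery alongside context and time:

func saveExampleConn(ctx context.Context, db *netquery.Database) error {
    // Sketch only: the values below are illustrative.
    conn := netquery.Conn{
        ID:         "example-connection-id",
        ProfileID:  "local/example-profile",
        Type:       netquery.ConnTypeIP,
        Domain:     "example.com",
        RemoteIP:   "203.0.113.10",
        RemotePort: 443,
        Started:    time.Now(),
    }

    // The last argument controls whether the connection is also written
    // to the persistent history database.
    return db.Save(ctx, conn, true)
}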
type ConnectionStore ¶
type ConnectionStore interface {
    // Save is called to persist the new or updated connection. It's up to
    // the implementation to figure out whether the operation is an insert
    // or an update.
    // The ID of Conn is unique and can be trusted to never collide with
    // other connections of the same device.
    Save(context.Context, Conn, bool) error

    // MarkAllHistoryConnectionsEnded marks all active connections in the history
    // database as ended NOW.
    MarkAllHistoryConnectionsEnded(context.Context) error

    // RemoveAllHistoryData removes all connections from the history database.
    RemoveAllHistoryData(context.Context) error

    // RemoveHistoryForProfile removes all connections for a given profile ID
    // (source/id) from the history database.
    RemoveHistoryForProfile(context.Context, string) error

    // UpdateBandwidth updates bandwidth data for the connection and optionally also writes
    // the bandwidth data to the history database.
    UpdateBandwidth(ctx context.Context, enableHistory bool, processKey string, connID string, bytesReceived uint64, bytesSent uint64) error
}
ConnectionStore describes the interface that is used by Manager to save new or updated connection objects. It is implemented by the *Database type of this package.
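Since *Database implements ConnectionStore, the relationship can be documented with a compile-time assertion (a sketch, not part of the package itself):

// Compile-time check that *Database satisfies ConnectionStore.
var _ netquery.ConnectionStore = (*netquery.Database)(nil)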
type Count ¶
type Count struct {
    As       string `json:"as"`
    Field    string `json:"field"`
    Distinct bool   `json:"distinct"`
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type Database ¶
type Database struct {
    Schema *orm.TableSchema
    // contains filtered or unexported fields
}
Database represents a SQLite3-backed connection database. Its use is tailored to the persistence and querying of network.Connection. Access to the underlying SQLite database is synchronized.
func New ¶
New opens a new in-memory database named path and attaches a persistent history database.
The returned Database uses connection pooling for read-only connections (see Execute). To perform database writes, use either Save() or ExecuteWrite(). Note that write connections are serialized by the Database object before being handed over to SQLite.
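A hedged sketch of opening a database and preparing the schema; it assumes New has the signature New(path string) (*Database, error), which this page does not spell out:

func openDatabase() (*netquery.Database, error) {
    // InMemory is the documented "file path" for a new in-memory database.
    db, err := netquery.New(netquery.InMemory)
    if err != nil {
        return nil, err
    }

    // Bring the connection table up-to-date with the built-in schema
    // before reading or writing (see ApplyMigrations below).
    if err := db.ApplyMigrations(); err != nil {
        _ = db.Close()
        return nil, err
    }

    return db, nil
}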
func NewInMemory ¶
NewInMemory is like New but creates a new in-memory database and automatically applies the connection table schema.
func (*Database) ApplyMigrations ¶
ApplyMigrations applies any table and data migrations that are needed to bring db up-to-date with the built-in schema. TODO(ppacher): right now this only applies the current schema and ignores any data-migrations. Once the history module is implemented this should become/use a full migration system -- use zombiezen.com/go/sqlite/sqlitemigration.
func (*Database) Cleanup ¶
Cleanup removes all connections that have ended before threshold from the live database.
NOTE(ppacher): there is no easy way to get the number of removed rows other than counting them in a first step. Though, that's probably not worth the cycles...
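For example, a periodic job that prunes connections which ended more than an hour ago could look like this (sketch; the threshold and logging are illustrative):

func cleanupOldConnections(ctx context.Context, db *netquery.Database) {
    removed, err := db.Cleanup(ctx, time.Now().Add(-time.Hour))
    if err != nil {
        log.Printf("netquery cleanup failed: %s", err)
        return
    }
    log.Printf("netquery cleanup removed %d connections", removed)
}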
func (*Database) Close ¶
Close closes the underlying database connection. db should not and cannot be used after Close() has returned.
func (*Database) Execute ¶
Execute executes a custom SQL query using a read-only connection against the SQLite database used by db. It uses orm.RunQuery() under the hood so please refer to the orm package for more information about available options.
func (*Database) ExecuteWrite ¶ added in v0.9.1
ExecuteWrite executes a custom SQL query using a writable connection against the SQLite database used by db. It uses orm.RunQuery() under the hood so please refer to the orm package for more information about available options.
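As a sketch, a raw write could look like the following; the table name "connections" is an assumption for illustration, and reads via Execute that need to capture rows require query options from the orm package (see its documentation):

func pruneEndedConnections(ctx context.Context, db *netquery.Database) error {
    // Writes must go through ExecuteWrite (or Save) so they are serialized
    // on the single write connection.
    return db.ExecuteWrite(ctx, "DELETE FROM connections WHERE ended IS NOT NULL")
}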
func (*Database) MarkAllHistoryConnectionsEnded ¶ added in v1.3.0
MarkAllHistoryConnectionsEnded marks all connections in the history database as ended.
func (*Database) RemoveAllHistoryData ¶ added in v1.3.0
RemoveAllHistoryData removes all connections from the history database.
func (*Database) RemoveHistoryForProfile ¶ added in v1.3.0
RemoveHistoryForProfile removes all connections from the history database for a given profile ID (source/id).
func (*Database) Save ¶
Save inserts the connection conn into the SQLite database. If conn already exists the table row is updated instead.
Save uses the database write connection instead of relying on the connection pool.
func (*Database) UpdateBandwidth ¶ added in v1.3.0
func (db *Database) UpdateBandwidth(ctx context.Context, enableHistory bool, processKey string, connID string, bytesReceived uint64, bytesSent uint64) error
UpdateBandwidth updates bandwidth data for the connection and optionally also writes the bandwidth data to the history database.
type DatabaseName ¶ added in v1.3.0
type DatabaseName string
DatabaseName is a database name constant.
type Equal ¶
type Equal interface{}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type Manager ¶
type Manager struct {
    // contains filtered or unexported fields
}
Manager handles feeds of new and updated network.Connections and persists them to a connection store. Manager also registers itself as a runtime database and pushes updates to connections using the local format. Users should use this update feed rather than the deprecated "network:" database.
func NewManager ¶
NewManager returns a new connection manager that persists all newly created or updated connections at store.
func (*Manager) HandleFeed ¶
func (mng *Manager) HandleFeed(ctx context.Context, feed <-chan *network.Connection)
HandleFeed starts reading new and updated connections from feed and persists them in the configured ConnectionStore. HandleFeed blocks until either ctx is cancelled or feed is closed. Any errors encountered when processing new or updated connections are logged but otherwise ignored. HandleFeed handles and persists updates one after the other! Depending on the system load, the user might want to use a buffered channel for feed.
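A sketch of feeding connection updates to a Manager; constructing the Manager itself (NewManager) is omitted, and the buffer size is illustrative:

func runFeed(ctx context.Context, mng *netquery.Manager, updates []*network.Connection) {
    // A buffered channel decouples producers from the serialized
    // persistence done by HandleFeed.
    feed := make(chan *network.Connection, 1024)

    go func() {
        defer close(feed)
        for _, conn := range updates {
            select {
            case feed <- conn:
            case <-ctx.Done():
                return
            }
        }
    }()

    // Blocks until ctx is cancelled or feed is closed.
    mng.HandleFeed(ctx, feed)
}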
type MatchType ¶
type MatchType interface {
    Operator() string
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type Matcher ¶
type Matcher struct {
    Equal    interface{}   `json:"$eq,omitempty"`
    NotEqual interface{}   `json:"$ne,omitempty"`
    In       []interface{} `json:"$in,omitempty"`
    NotIn    []interface{} `json:"$notIn,omitempty"`
    Like     string        `json:"$like,omitempty"`
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type Min ¶ added in v1.3.0
type Min struct {
    Condition *Query `json:"condition,omitempty"`
    Field     string `json:"field"`
    As        string `json:"as"`
    Distinct  bool   `json:"distinct"`
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type OrderBy ¶
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
func (*OrderBy) UnmarshalJSON ¶
UnmarshalJSON unmarshals an OrderBy from json.
type OrderBys ¶
type OrderBys []OrderBy
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
func (*OrderBys) UnmarshalJSON ¶
UnmarshalJSON unmarshals OrderBys from json.
type Pagination ¶
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type Query ¶
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
func (*Query) UnmarshalJSON ¶
UnmarshalJSON unmarshals a Query from json.
type QueryActiveConnectionChartPayload ¶
type QueryActiveConnectionChartPayload struct {
    Query      Query       `json:"query"`
    TextSearch *TextSearch `json:"textSearch"`
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type QueryHandler ¶
QueryHandler implements http.Handler and allows performing SQL queries and aggregate functions on Database.
func (*QueryHandler) ServeHTTP ¶
func (qh *QueryHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request)
type QueryRequestPayload ¶
type QueryRequestPayload struct {
    Select     Selects     `json:"select"`
    Query      Query       `json:"query"`
    OrderBy    OrderBys    `json:"orderBy"`
    GroupBy    []string    `json:"groupBy"`
    TextSearch *TextSearch `json:"textSearch"`

    // A list of databases to query. If left empty, both the LiveDatabase
    // and the HistoryDatabase are queried.
    Databases []DatabaseName `json:"databases"`

    Pagination
    // contains filtered or unexported fields
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
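An illustrative request body for QueryHandler, assembled from the JSON tags above; the shape of Query entries (column name mapped to a Matcher) and the specific field values are assumptions for illustration and may differ from the actual wire format:

// exampleQueryPayload is a hypothetical QueryRequestPayload in JSON form.
const exampleQueryPayload = `{
    "select": [
        {"field": "domain"},
        {"$count": {"field": "domain", "as": "total", "distinct": false}}
    ],
    "query": {
        "allowed": {"$eq": false},
        "domain":  {"$like": "%.example.com"}
    },
    "groupBy": ["domain"],
    "databases": ["main", "history"]
}`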
type RuntimeQueryRunner ¶
type RuntimeQueryRunner struct {
    // contains filtered or unexported fields
}
RuntimeQueryRunner provides a simple interface for the runtime database that allows direct SQL queries to be performed against db. Each resulting row of that query is marshaled as a map[string]interface{} and returned as a single record to the caller.
Using portbase/database#Query is not possible because portbase/database will complain about the SQL query being invalid. To work around that issue, RuntimeQueryRunner uses a 'GET key' request where the SQL query is embedded into the record key.
func NewRuntimeQueryRunner ¶
func NewRuntimeQueryRunner(db *Database, prefix string, reg *runtime.Registry) (*RuntimeQueryRunner, error)
NewRuntimeQueryRunner returns a new runtime SQL query runner that parses and serves SQL queries from GET <prefix>/<plain sql query> requests.
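Conceptually, a caller embeds the SQL directly in the requested record key; the prefix, database scope, and table name in this sketch are illustrative only:

// Illustrative only: a runtime record key carrying the SQL query,
// assuming the runner was registered with the prefix "netquery/sql".
const exampleRuntimeKey = "runtime:netquery/sql/SELECT count(*) AS cnt FROM connections"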
type Select ¶
type Select struct {
    Field    string  `json:"field"`
    Count    *Count  `json:"$count,omitempty"`
    Sum      *Sum    `json:"$sum,omitempty"`
    Min      *Min    `json:"$min,omitempty"`
    Distinct *string `json:"$distinct,omitempty"`
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
func (*Select) UnmarshalJSON ¶
UnmarshalJSON unmarshals a Select from json.
type Selects ¶
type Selects []Select
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
func (*Selects) UnmarshalJSON ¶
UnmarshalJSON unmarshals Selects from json.
type Sum ¶
type Sum struct {
    Condition Query  `json:"condition"`
    As        string `json:"as"`
    Distinct  bool   `json:"distinct"`
}
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
type TextSearch ¶
Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.