Documentation ¶
Index ¶
- Constants
- Variables
- func DecompressTileToRawLines(blob []byte) [][]byte
- func GetClassIDsInTileBlob(tile []byte) ([]uint32, error)
- func Migrations(log logs.Log) []migration.Migrator
- func VideoStreamNameForCamera(cameraLongLivedName string, resolution defs.Resolution) string
- type BaseModel
- type Event
- type EventDetectionsJSON
- type EventTile
- type ObjectJSON
- type ObjectPositionJSON
- type TileRequest
- type TrackedBox
- type TrackedObject
- type VideoDB
- func (v *VideoDB) Close()
- func (v *VideoDB) IDToString(id uint32) (string, error)
- func (v *VideoDB) IDsToString(ids []uint32) ([]string, error)
- func (v *VideoDB) MaxTileLevel() int
- func (v *VideoDB) ObjectDetected(camera string, cameraResolution [2]int, id uint32, box nn.Rect, ...)
- func (v *VideoDB) ReadEventTiles(camera string, request TileRequest) ([]*EventTile, error)
- func (v *VideoDB) ReadEvents(camera string, startTime, endTime time.Time) ([]*Event, error)
- func (v *VideoDB) SetMaxArchiveSize(maxSize int64)
- func (v *VideoDB) StringToID(s string) (uint32, error)
- func (v *VideoDB) StringsToID(s []string) ([]uint32, error)
- func (v *VideoDB) VideoStartTimeForCamera(camera string) (time.Time, error)
Constants ¶
const TileWidth = 1024
Number of pixels in one tile. At the highest resolution (level = 0), each pixel is 1 second.
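As an illustration only (this helper is not part of the package), a wall-clock time maps to a tile index by dividing its unix time by the tile span at the requested level, matching the EventTile.Start comment further down:

package main

import (
    "fmt"
    "time"
)

const TileWidth = 1024 // pixels per tile; at level 0, 1 pixel = 1 second

// tileIdxAtTime is a hypothetical helper showing the tile index arithmetic:
// unix seconds / (1024 * 2^level).
func tileIdxAtTime(t time.Time, level uint32) uint32 {
    secondsPerTile := int64(TileWidth) << level // 1024 * 2^level
    return uint32(t.Unix() / secondsPerTile)
}

func main() {
    t := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
    fmt.Println(tileIdxAtTime(t, 0)) // a level-0 tile spans 1024 seconds (~17 minutes)
    fmt.Println(tileIdxAtTime(t, 3)) // a level-3 tile spans 8192 seconds (~2.3 hours)
}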
Variables ¶
var ErrInvalidTimeRange = errors.New("invalid time range in tileBuilder.updateObject")
var ErrNoTime = errors.New("no time data in TrackedObject for tileBuilder.updateObject")
var ErrNoVideoFound = errors.New("No video found")
var ErrTooManyClasses = errors.New("too many classes")
Functions ¶
func DecompressTileToRawLines ¶
func DecompressTileToRawLines(blob []byte) [][]byte
This is for debug/analysis, specifically to create an extract of raw lines so that we can test our bitmap compression codecs. Returns a list of 128-byte bitmaps.
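A minimal sketch of dumping those raw lines to a file for offline codec experiments (the function name and the assumption that the blob comes from EventTile.Tile are illustrative; the videodb qualifier and os import are assumed):

func writeRawLines(tile *videodb.EventTile, filename string) error {
    lines := videodb.DecompressTileToRawLines(tile.Tile)
    f, err := os.Create(filename)
    if err != nil {
        return err
    }
    defer f.Close()
    for _, line := range lines {
        // Each line is a 128-byte bitmap
        if _, err := f.Write(line); err != nil {
            return err
        }
    }
    return nil
}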
func GetClassIDsInTileBlob ¶
func GetClassIDsInTileBlob(tile []byte) ([]uint32, error)
Decode just enough of a tile to find the class IDs inside it, and return that list of IDs.
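A sketch of pairing this with VideoDB.IDsToString to get human-readable class names (the videodb qualifier, fmt import, and function name are assumptions):

func printTileClasses(db *videodb.VideoDB, tile *videodb.EventTile) error {
    classIDs, err := videodb.GetClassIDsInTileBlob(tile.Tile)
    if err != nil {
        return err
    }
    names, err := db.IDsToString(classIDs) // resolves IDs via the 'strings' table
    if err != nil {
        return err
    }
    fmt.Println("tile", tile.Start, "contains classes:", names)
    return nil
}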
func VideoStreamNameForCamera ¶
func VideoStreamNameForCamera(cameraLongLivedName string, resolution defs.Resolution) string
Generate the name of the video stream for the given camera and resolution.
Types ¶
type BaseModel ¶
type BaseModel struct {
ID int64 `gorm:"primaryKey" json:"id"`
}
BaseModel is our base class for a GORM model. The default GORM Model uses int, but we prefer int64
type Event ¶
type Event struct {
    BaseModel
    Time       dbh.IntTime                         `json:"time"`       // Start of event
    Duration   int32                               `json:"duration"`   // Duration of event in milliseconds
    Camera     uint32                              `json:"camera"`     // LongLived camera name (via lookup in 'strings' table)
    Detections *dbh.JSONField[EventDetectionsJSON] `json:"detections"` // Objects detected in the event
}
An event is one or more frames of motion or object detection. For efficiency's sake, we limit events in the database to a maximum size and duration. SYNC-VIDEODB-EVENT
type EventDetectionsJSON ¶
type EventDetectionsJSON struct {
    Resolution [2]int        `json:"resolution"` // Resolution of the camera on which the detection was run.
    Objects    []*ObjectJSON `json:"objects"`    // Objects detected in the event
}
SYNC-VIDEODB-EVENTDETECTIONS
type EventTile ¶
type EventTile struct {
    Camera uint32 `gorm:"primaryKey;autoIncrement:false" json:"camera"` // LongLived camera name (via lookup in 'strings' table)
    Level  uint32 `gorm:"primaryKey;autoIncrement:false" json:"level"`  // 0 = lowest level
    Start  uint32 `gorm:"primaryKey;autoIncrement:false" json:"start"`  // Start time of tile (unix seconds / (1024 * 2^level))...... Rename to tileIdx?
    Tile   []byte `json:"tile"`                                         // Compressed tile data
}
SYNC-EVENT-TILE-JSON
type ObjectJSON ¶
type ObjectJSON struct {
    ID            uint32               `json:"id"`            // Can be used to track objects across separate Event records
    Class         uint32               `json:"class"`         // eg "person", "car" (via lookup in 'strings' table)
    Positions     []ObjectPositionJSON `json:"positions"`     // Object positions throughout event
    NumDetections int32                `json:"numDetections"` // Total number of detections witnessed for this object, before filtering out irrelevant box movements (eg box jiggling around by a few pixels)
}
An object detected by the camera. SYNC-VIDEODB-OBJECT
type ObjectPositionJSON ¶
type ObjectPositionJSON struct {
    Box        [4]int16 `json:"box"`        // [X1,Y1,X2,Y2]
    Time       int32    `json:"time"`       // Time in milliseconds relative to start of event.
    Confidence float32  `json:"confidence"` // NN confidence of detection (0..1)
}
Position of an object in a frame. SYNC-VIDEODB-OBJECTPOSITION
type TileRequest ¶
type TileRequest struct {
    Level    uint32
    StartIdx uint32 // inclusive
    EndIdx   uint32 // exclusive
    Indices  map[uint32]bool
}
TileRequest is a request to read tiles. Do ONE of the following:
1. Populate StartIdx and EndIdx
2. Populate Indices
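An illustrative sketch of the two forms (index values are arbitrary; the videodb qualifier is an assumption):

func buildTileRequests() (videodb.TileRequest, videodb.TileRequest) {
    // Form 1: a contiguous range of tile indices, [StartIdx, EndIdx)
    rangeReq := videodb.TileRequest{
        Level:    0,
        StartIdx: 1664170, // inclusive
        EndIdx:   1664180, // exclusive
    }
    // Form 2: a sparse set of specific tile indices
    sparseReq := videodb.TileRequest{
        Level:   0,
        Indices: map[uint32]bool{1664170: true, 1664175: true},
    }
    return rangeReq, sparseReq
}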
type TrackedObject ¶
type TrackedObject struct {
    ID               uint32
    Camera           uint32
    CameraResolution [2]int
    Class            uint32
    Boxes            []TrackedBox
    LastSeen         time.Time // In case you're not updating Boxes, or Boxes is empty. Maybe you're not updating Boxes because the object hasn't moved.
    NumDetections    int32     // Naively equal to len(Boxes), but can be different if some detections were so similar to the previous that we filtered them out. NumDetections >= len(Boxes)
}
func (*TrackedObject) TimeBounds ¶
func (t *TrackedObject) TimeBounds() (time.Time, time.Time)
Returns the min/max observed time of this object. We can have any mix of Boxes and LastSeen, but if none of them are set, then we return time.Time{} for both.
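For example, guarding against the zero-time case (the function name is illustrative; the videodb qualifier and fmt import are assumed):

func logObjectLifetime(obj *videodb.TrackedObject) {
    minT, maxT := obj.TimeBounds()
    if minT.IsZero() && maxT.IsZero() {
        // No Boxes and no LastSeen: the object carries no time information yet
        return
    }
    fmt.Println("object", obj.ID, "seen from", minT, "to", maxT)
}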
type VideoDB ¶
type VideoDB struct {
    // Root directory
    // root/fsv/...        Video file archive
    // root/videos.sqlite  Our SQLite DB
    Root string

    Archive *fsv.Archive
    // contains filtered or unexported fields
}
VideoDB manages recordings
func NewVideoDB ¶
Open or create a video DB
func (*VideoDB) MaxTileLevel ¶
func (v *VideoDB) MaxTileLevel() int
func (*VideoDB) ObjectDetected ¶
func (v *VideoDB) ObjectDetected(camera string, cameraResolution [2]int, id uint32, box nn.Rect, confidence float32, class string, lastSeen time.Time)
This is the way our users inform us of a new object detection. We'll get one of these calls on every frame where an object is detected. id must be unique enough that by the time it wraps around, the previous object is no longer in frame. Also, id must be unique across cameras. This is currently the way our 'monitor' package works, but I'm just codifying it here.
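A hedged sketch of a per-frame call (the camera name, ID, and class are placeholders; the nn.Rect value is assumed to come from your detector, since the nn package is not documented here):

func onDetection(db *videodb.VideoDB, box nn.Rect, confidence float32) {
    db.ObjectDetected(
        "driveway",         // long-lived camera name
        [2]int{1920, 1080}, // resolution the detection ran at
        42,                 // id: unique across cameras, reused only after the object has left the frame
        box,
        confidence,
        "person",   // class
        time.Now(), // lastSeen
    )
}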
func (*VideoDB) ReadEventTiles ¶
func (v *VideoDB) ReadEventTiles(camera string, request TileRequest) ([]*EventTile, error)
Fetch event tiles in the range [StartIdx, EndIdx), or at the tile Indices specified in the request.
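A sketch of reading a contiguous block of level-0 tiles (camera name and indices are placeholders; the videodb qualifier and fmt import are assumed):

func readRecentTiles(db *videodb.VideoDB) error {
    req := videodb.TileRequest{Level: 0, StartIdx: 1664170, EndIdx: 1664180}
    tiles, err := db.ReadEventTiles("driveway", req)
    if err != nil {
        return err
    }
    for _, tile := range tiles {
        fmt.Println("tile", tile.Start, "holds", len(tile.Tile), "bytes of compressed data")
    }
    return nil
}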
func (*VideoDB) ReadEvents ¶
func (v *VideoDB) ReadEvents(camera string, startTime, endTime time.Time) ([]*Event, error)
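A sketch of listing the last hour of events for one camera (camera name and time window are placeholders; the videodb qualifier and fmt/time imports are assumed):

func printRecentEvents(db *videodb.VideoDB) error {
    end := time.Now()
    events, err := db.ReadEvents("driveway", end.Add(-time.Hour), end)
    if err != nil {
        return err
    }
    for _, ev := range events {
        // ev.Detections (*dbh.JSONField[EventDetectionsJSON]) holds the detected
        // objects; unwrapping it depends on the dbh package and is not shown here.
        fmt.Println("event", ev.ID, "start", ev.Time, "duration(ms)", ev.Duration)
    }
    return nil
}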
func (*VideoDB) SetMaxArchiveSize ¶
func (v *VideoDB) SetMaxArchiveSize(maxSize int64)
The archive won't delete any files until this is called, because it doesn't know yet what the size limit is.
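For example, capping the archive at 500 GiB right after opening the database (the figure is arbitrary; the videodb qualifier is an assumption):

func configureArchive(db *videodb.VideoDB) {
    // The archive only starts pruning old recordings once a limit is set
    db.SetMaxArchiveSize(500 * 1024 * 1024 * 1024) // 500 GiB
}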
func (*VideoDB) StringToID ¶
func (v *VideoDB) StringToID(s string) (uint32, error)
Get a database-wide unique ID for the given string. At some point we should implement a cleanup method that gets rid of strings that are no longer used. It is beneficial to keep the IDs small, because smaller numbers produce smaller DB records due to varint encoding.
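A round-trip sketch pairing StringToID with IDToString (the function name and the literal "person" are illustrative; the videodb qualifier and fmt import are assumed):

func internClassName(db *videodb.VideoDB) error {
    id, err := db.StringToID("person")
    if err != nil {
        return err
    }
    name, err := db.IDToString(id)
    if err != nil {
        return err
    }
    fmt.Println("'person' interned as", id, "resolves back to", name)
    return nil
}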
func (*VideoDB) StringsToID ¶
func (v *VideoDB) StringsToID(s []string) ([]uint32, error)
Resolve multiple strings to IDs