p4dlog

package module
v0.13.3
Published: Aug 23, 2024 License: MIT Imports: 13 Imported by: 1

README

go-libp4dlog

go-libp4dlog is a Go language library to parse Perforce p4d text logs, with a command line executable log2sql to process them.

Check the releases page for the latest binary releases.

  • p4dlog - log analyzer (this page)
  • p4locks - lock analyzer - see p4locks README

P4D log files are written to the file specified by $P4LOG, or via the command line flag "p4d -L p4d.log". We would normally recommend setting the p4d configurables server=3 and track=1, though you need to ensure your log file is regularly rotated as it can grow large quite quickly.

For an outline of how to set up P4LOG, see:

https://www.perforce.com/manuals/p4sag/Content/P4SAG/DB5-79706.html
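
For example, assuming suitable admin permissions, these configurables can be set with:

p4 configure set server=3
p4 configure set track=1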

Running the log analyzer

The released binaries for this project are available for Linux/Mac/Windows. After downloading you may want to rename the binary to just log2sql (or log2sql.exe on Windows).

It is a single executable, log2sql, which will parse a p4d text log file and by default generate a Sqlite3 database and a VictoriaMetrics metrics file (historical metrics). For some strange philosophical reason, Prometheus does not provide the ability to import historical data (and is unlikely to ever have that implemented). Luckily VictoriaMetrics is both Prometheus-API compatible and more flexible as a long term data store. Highly recommended!

log2sql is considerably faster than (but compatible with) the log2sql.py script mentioned below.

Optionally you can get it to produce SQL insert statements which can be used with the sqlite3 CLI, or parsed for MySQL or similar.

$ ./log2sql -h
usage: log2sql [<flags>] [<logfile>...]

Parses one or more p4d text log files (which may be gzipped) into a Sqlite3 database and/or JSON or SQL format. The output of historical
Prometheus compatible metrics is also on by default. These can be viewed using VictoriaMetrics which is a Prometheus compatible data store,
and viewed in Grafana. Where referred to in help <logfile-prefix> is the first logfile specified with any .gz or .log suffix removed.

Flags:
  -h, --help                     Show context-sensitive help (also try --help-long and --help-man).
      --debug=DEBUG              Enable debugging level.
      --json                     Output JSON statements (to default or --json.output file).
      --sql                      Output SQL statements (to default or --sql.output file).
      --json.output=JSON.OUTPUT  Name of file to which to write JSON if that flag is set. Defaults to <logfile-prefix>.json
      --sql.output=SQL.OUTPUT    Name of file to which to write SQL if that flag is set. Defaults to <logfile-prefix>.sql
  -d, --dbname=DBNAME            Create database with this name. Defaults to <logfile-prefix>.db
  -n, --no.sql                   Don't create database.
      --no.metrics               Disable historical metrics output in VictoriaMetrics format (via Graphite interface).
  -m, --metrics.output=METRICS.OUTPUT
                                 File to write historical metrics to in Graphite format for use with VictoriaMetrics. Default is
                                 <logfile-prefix>.metrics
  -s, --server.id=SERVER.ID      server id for historical metrics - useful to identify site.
      --sdp.instance=SDP.INSTANCE
                                 SDP instance if required in historical metrics. (Not usually required)
      --update.interval=10s      Update interval for historical metrics - time is assumed to advance as per time in log entries.
      --no.output.cmds.by.user   Turns off the output of cmds_by_user - can be useful for large sites with many thousands of users.
      --output.cmds.by.user.regex=OUTPUT.CMDS.BY.USER.REGEX
                                 Specify a (golang) regex to match user ids in order to track cmds by user in one metric (e.g. '.*' or
                                 'swarm|jenkins').
      --no.output.cmds.by.IP     Turns off the output of cmds_by_IP - can be useful for large sites with many thousands of IP addresses in
                                 logs.
      --case.insensitive.server  Set if server is case insensitive and usernames may occur in either case.
      --no.completion.records    Set if log was generated with server=1 and thus no completion records expected.
      --debug.pid=DEBUG.PID      Set for debug output for specified PID - requires debug.cmd to be also specified.
      --debug.cmd=""             Set for debug output for specified command - requires debug.pid to be also specified.
      --version                  Show application version.

Args:
  [<logfile>]  Log files to process.

Examples

log2sql log2020-02-01.log

will produce a log2020-02-01.db (stripping off .gz and .log from name and appending .db) and also produce log2020-02-01.metrics.

log2sql -s mysite log2020-02-01.log

will produce same .db as above but sets the siteid to "mysite" in the log2020-02-01.metrics file.

log2sql -d logs log2020-02-01.log.gz

will create logs.db - automatically opening the gzipped log file and processing it.

It is also possible to parse multiple log files in one go:

log2sql -d logs log2020-02-*

To create a single logs.db (and logs.metrics) from multiple input files.

Typically you will want to run it in the background if it's going to take a few tens of minutes:

nohup ./log2sql -d logs > out1 &

Run tail -f out1 to keep an eye on progress.

To write SQL statements to a file without creating a Sqlite db:

log2sql --sql -n p4d.log
log2sql --sql --sql.output sql.txt -n p4d.log
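
The resulting SQL file can then be loaded into a database with the sqlite3 CLI, e.g. (a sketch, assuming the sql.txt output of the second command above includes the schema statements as well as the inserts):

sqlite3 logs.db < sql.txt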

Please note it is multi-threaded, and thus will use 2-3 cores if available (placing load on your system). You may wish to consider lowering its priority using the nice command.

Some sample SQL queries

See log2sql-examples.md
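
For instance, a hypothetical query for the 10 longest running commands might look like the following (the process table and column names are assumptions here - see log2sql-examples.md for tested queries):

sqlite3 logs.db "SELECT cmd, user, completedLapse FROM process ORDER BY completedLapse DESC LIMIT 10;"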

Viewing historical metrics via Grafana/Prometheus/VictoriaMetrics

Also contained within this project is a docker-compose environment so that you can run local docker containers, import the historical metrics, and then connect to a Grafana dashboard to view them.

  • Download a .zip file of this repository

  • cd into metrics/docker directory

    docker-compose up

The first run will build prerequisite containers, which may take a while.

When it has started you will be able to connect to Grafana on http://localhost:3000.

Default creds:

  • login - admin
  • password - admin

Select Skip to avoid having to change the password if you wish to run the container for only a short period of time.

Note that the VictoriaMetrics container exposes a port (default 2003) which you can use to load in your metrics once you have created them:

cat logfile.metrics | nc localhost 2003

This uses the standard Linux/Mac tool nc (netcat).

Then connect to Grafana, select the dashboard P4 Historical and view the time frame. Default is the last 6 months, but you should use options to narrow down to the time period covered by your log file.

You can review p4historical.json or import it into another Grafana setup quite easily (it is auto-installed in this configuration).

Closing down and removing data

If you just run docker-compose down you will stop the containers, but you will not remove any imported data. So if you restart the containers the metrics you previously imported will still be there.

In order to remove the data volumes so that they are empty next time:

docker-compose down -v

P4D Log Analysis

See the open source project referred to above, in particular the log2sql.py script.

Also see the relevant Perforce KB articles.

Output of this library

This library can output the results of log parsing as JSON (and also as SQL statements for SQLite or MySQL).
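
As a rough guide to driving the parser from your own Go code, here is a minimal sketch (the module import path and the nil timeChan are assumptions - see NewP4dFileParser and LogParser in the API documentation below):

package main

import (
	"bufio"
	"context"
	"fmt"
	"os"

	p4dlog "github.com/rcowham/go-libp4dlog" // assumed module path
	"github.com/sirupsen/logrus"
)

func main() {
	fp := p4dlog.NewP4dFileParser(logrus.New())

	linesChan := make(chan string, 100)
	// A nil timeChan is assumed acceptable here (it is only needed when
	// tailing a live log and time must advance without new lines).
	outChan := fp.LogParser(context.Background(), linesChan, nil)

	// Feed the log file line by line, closing the channel when done.
	go func() {
		defer close(linesChan)
		f, err := os.Open("p4d.log")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			linesChan <- scanner.Text()
		}
	}()

	// The output channel is chan interface{}; whether records arrive as
	// values or pointers is an assumption in this sketch.
	for rec := range outChan {
		switch r := rec.(type) {
		case p4dlog.Command:
			fmt.Println(r.String())
		case p4dlog.ServerEvent:
			fmt.Println(r.String())
		default:
			fmt.Println(rec) // pointer records or other types - see the API below
		}
	}
}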

It is used by:

p4locks - lock analyzer

See p4locks README

Building the log2sql binary

See the Makefile:

make

or

make dist

The latter will cross compile with xgo (due to the CGO Sqlite3 library in use). Before running it you will need:

docker pull crazymax/xgo:latest
go install github.com/crazy-max/xgo@latest

Documentation

Overview

Package p4dlog parses Perforce Helix Core Server text logs (not structured logs).

These are logs created by p4d, as documented by:

https://community.perforce.com/s/article/2525

It assumes you have set the configurable server=3 (or greater). You may also have decided to set track=1 to get more detailed usage of access to different tables.

See p4dlog_test.go for examples of log entries.

Index

Constants

This section is empty.

Variables

var BlockEndPrefixes = []string{
	"Rpc himark:",
	"server to client",
	"server to inter",
	"Forwarder set trusted client address",
	"NetSslTransport::SendOrReceive",
}

Various line prefixes that can both end a block and should be ignored - see ignoreLine

Functions

func FlagSet added in v0.8.0

func FlagSet(flag int, level DebugLevel) bool

FlagSet - true if specified level set

Types

type Block

type Block struct {
	// contains filtered or unexported fields
}

Block is a block of lines parsed from a file

type Command

type Command struct {
	ProcessKey              string    `json:"processKey"`
	Cmd                     string    `json:"cmd"`
	Pid                     int64     `json:"pid"`
	LineNo                  int64     `json:"lineNo"`
	User                    string    `json:"user"`
	Workspace               string    `json:"workspace"`
	StartTime               time.Time `json:"startTime"`
	EndTime                 time.Time `json:"endTime"`
	ComputeLapse            float32   `json:"computeLapse"`
	CompletedLapse          float32   `json:"completedLapse"`
	Paused                  float32   `json:"paused"` // How long command was paused
	IP                      string    `json:"ip"`
	App                     string    `json:"app"`
	Args                    string    `json:"args"`
	Running                 int64     `json:"running"`
	UCpu                    int64     `json:"uCpu"`
	SCpu                    int64     `json:"sCpu"`
	DiskIn                  int64     `json:"diskIn"`
	DiskOut                 int64     `json:"diskOut"`
	IpcIn                   int64     `json:"ipcIn"`
	IpcOut                  int64     `json:"ipcOut"`
	MaxRss                  int64     `json:"maxRss"`
	PageFaults              int64     `json:"pageFaults"`
	MemMB                   int64     `json:"memMB"`
	MemPeakMB               int64     `json:"memPeakMB"`
	RPCMsgsIn               int64     `json:"rpcMsgsIn"`
	RPCMsgsOut              int64     `json:"rpcMsgsOut"`
	RPCSizeIn               int64     `json:"rpcSizeIn"`
	RPCSizeOut              int64     `json:"rpcSizeOut"`
	RPCHimarkFwd            int64     `json:"rpcHimarkFwd"`
	RPCHimarkRev            int64     `json:"rpcHimarkRev"`
	RPCSnd                  float32   `json:"rpcSnd"`
	RPCRcv                  float32   `json:"rpcRcv"`
	FileTotalsSnd           int64     `json:"fileTotalsSnd"`
	FileTotalsRcv           int64     `json:"fileTotalsRcv"`
	FileTotalsSndMBytes     int64     `json:"fileTotalsSndMBytes"`
	FileTotalsRcvMBytes     int64     `json:"fileTotalsRcvMBytes"`
	NetFilesAdded           int64     `json:"netFilesAdded"` // Valid for syncs and network estimates records
	NetFilesUpdated         int64     `json:"netFilesUpdated"`
	NetFilesDeleted         int64     `json:"netFilesDeleted"`
	NetBytesAdded           int64     `json:"netBytesAdded"`
	NetBytesUpdated         int64     `json:"netBytesUpdated"`
	LbrRcsOpens             int64     `json:"lbrRcsOpens"` // Required for processing lbr records
	LbrRcsCloses            int64     `json:"lbrRcsCloses"`
	LbrRcsCheckins          int64     `json:"lbrRcsCheckins"`
	LbrRcsExists            int64     `json:"lbrRcsExists"`
	LbrRcsReads             int64     `json:"lbrRcsReads"`
	LbrRcsReadBytes         int64     `json:"lbrRcsReadBytes"`
	LbrRcsWrites            int64     `json:"lbrRcsWrites"`
	LbrRcsWriteBytes        int64     `json:"lbrRcsWriteBytes"`
	LbrRcsDigests           int64     `json:"lbrRcsDigests"`
	LbrRcsFileSizes         int64     `json:"lbrRcsFileSizes"`
	LbrRcsModTimes          int64     `json:"lbrRcsModTimes"`
	LbrRcsCopies            int64     `json:"lbrRcsCopies"`
	LbrBinaryOpens          int64     `json:"lbrBinaryOpens"`
	LbrBinaryCloses         int64     `json:"lbrBinaryCloses"`
	LbrBinaryCheckins       int64     `json:"lbrBinaryCheckins"`
	LbrBinaryExists         int64     `json:"lbrBinaryExists"`
	LbrBinaryReads          int64     `json:"lbrBinaryReads"`
	LbrBinaryReadBytes      int64     `json:"lbrBinaryReadBytes"`
	LbrBinaryWrites         int64     `json:"lbrBinaryWrites"`
	LbrBinaryWriteBytes     int64     `json:"lbrBinaryWriteBytes"`
	LbrBinaryDigests        int64     `json:"lbrBinaryDigests"`
	LbrBinaryFileSizes      int64     `json:"lbrBinaryFileSizes"`
	LbrBinaryModTimes       int64     `json:"lbrBinaryModTimes"`
	LbrBinaryCopies         int64     `json:"lbrBinaryCopies"`
	LbrCompressOpens        int64     `json:"lbrCompressOpens"`
	LbrCompressCloses       int64     `json:"lbrCompressCloses"`
	LbrCompressCheckins     int64     `json:"lbrCompressCheckins"`
	LbrCompressExists       int64     `json:"lbrCompressExists"`
	LbrCompressReads        int64     `json:"lbrCompressReads"`
	LbrCompressReadBytes    int64     `json:"lbrCompressReadBytes"`
	LbrCompressWrites       int64     `json:"lbrCompressWrites"`
	LbrCompressWriteBytes   int64     `json:"lbrCompressWriteBytes"`
	LbrCompressDigests      int64     `json:"lbrCompressDigests"`
	LbrCompressFileSizes    int64     `json:"lbrCompressFileSizes"`
	LbrCompressModTimes     int64     `json:"lbrCompressModTimes"`
	LbrCompressCopies       int64     `json:"lbrCompressCopies"`
	LbrUncompressOpens      int64     `json:"lbrUncompressOpens"`
	LbrUncompressCloses     int64     `json:"lbrUncompressCloses"`
	LbrUncompressCheckins   int64     `json:"lbrUncompressCheckins"`
	LbrUncompressExists     int64     `json:"lbrUncompressExists"`
	LbrUncompressReads      int64     `json:"lbrUncompressReads"`
	LbrUncompressReadBytes  int64     `json:"lbrUncompressReadBytes"`
	LbrUncompressWrites     int64     `json:"lbrUncompressWrites"`
	LbrUncompressWriteBytes int64     `json:"lbrUncompressWriteBytes"`
	LbrUncompressDigests    int64     `json:"lbrUncompressDigests"`
	LbrUncompressFileSizes  int64     `json:"lbrUncompressFileSizes"`
	LbrUncompressModTimes   int64     `json:"lbrUncompressModTimes"`
	LbrUncompressCopies     int64     `json:"lbrUncompressCopies"`
	CmdError                bool      `json:"cmderror"`
	Tables                  map[string]*Table
	// contains filtered or unexported fields
}

Command is a command found in the block

func (*Command) GetKey added in v0.1.0

func (c *Command) GetKey() string

GetKey - returns process key (handling duplicates)

func (*Command) MarshalJSON

func (c *Command) MarshalJSON() ([]byte, error)

MarshalJSON - handle time formatting
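
Since *Command implements json.Marshaler, a plain json.Marshal call picks up the custom time formatting - a minimal sketch (cmd is assumed to be a parsed p4dlog.Command):

buf, err := json.Marshal(&cmd)
if err == nil {
	fmt.Println(string(buf)) // startTime/endTime rendered by MarshalJSON
}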

func (*Command) String

func (c *Command) String() string

type DebugLevel added in v0.8.0

type DebugLevel int

DebugLevel - for different levels of debugging

const (
	DebugBasic DebugLevel = 1 << iota
	DebugDatabase
	DebugJSON
	DebugCommands
	DebugAddCommands
	DebugTrackRunning
	DebugUnrecognised
	DebugPending
	DebugPendingCounts
	DebugTrackPaused
	DebugMetricStats
	DebugLines
)
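
The DebugLevel values are bit flags, so they can be combined into a mask and tested with FlagSet above - a minimal sketch:

flags := int(p4dlog.DebugCommands | p4dlog.DebugJSON)
if p4dlog.FlagSet(flags, p4dlog.DebugJSON) {
	// JSON debugging is enabled
}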

type P4dFileParser

type P4dFileParser struct {
	CmdsCount         int // Count of commands processed
	ServerEventsCount int // Count of server event records processed
	// contains filtered or unexported fields
}

P4dFileParser - manages state

func NewP4dFileParser

func NewP4dFileParser(logger *logrus.Logger) *P4dFileParser

NewP4dFileParser - create and initialise properly

func (*P4dFileParser) CmdsPendingCount added in v0.0.3

func (fp *P4dFileParser) CmdsPendingCount() int

CmdsPendingCount - count of unmatched commands

func (*P4dFileParser) LogParser

func (fp *P4dFileParser) LogParser(ctx context.Context, linesChan <-chan string, timeChan <-chan time.Time) chan interface{}

LogParser - interface to be run on a go routine - commands are returned on cmdchan

func (*P4dFileParser) SetDebugMode added in v0.1.0

func (fp *P4dFileParser) SetDebugMode(level int)

SetDebugMode - turn on debugging - very verbose!
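
A sketch of turning on a couple of debug levels (SetDebugMode takes the combined DebugLevel bitmask as an int):

fp := p4dlog.NewP4dFileParser(logrus.New())
fp.SetDebugMode(int(p4dlog.DebugCommands | p4dlog.DebugTrackRunning))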

func (*P4dFileParser) SetDebugPID added in v0.8.0

func (fp *P4dFileParser) SetDebugPID(pid int64, cmdName string)

SetDebugPID - turn on debugging for a PID

func (*P4dFileParser) SetDurations added in v0.1.0

func (fp *P4dFileParser) SetDurations(outputDuration, debugDuration time.Duration)

SetDurations - for debugging

func (*P4dFileParser) SetNoCompletionRecords added in v0.12.3

func (fp *P4dFileParser) SetNoCompletionRecords()

SetNoCompletionRecords - don't expect completion records

type ServerEvent added in v0.13.0

type ServerEvent struct {
	EventTime        time.Time `json:"eventTime"`
	LineNo           int64     `json:"lineNo"`
	ActiveThreads    int64     `json:"activeThreads"`
	ActiveThreadsMax int64     `json:"activeThreadsMax"`
	PausedThreads    int64     `json:"pausedThreads"`
	PausedThreadsMax int64     `json:"pausedThreadsMax"`
	PausedErrorCount int64     `json:"pausedErrorCount"`
	PauseRateCPU     int64     `json:"pauseRateCPU"`     // Percentage 1-100
	PauseRateMem     int64     `json:"pauseRateMem"`     // Percentage 1-100
	CPUPressureState int64     `json:"cpuPressureState"` // 0-2
	MemPressureState int64     `json:"memPressureState"` // 0-2
}

ServerEvent - a server event record: active/paused thread counts and CPU/memory pressure states

func (*ServerEvent) MarshalJSON added in v0.13.0

func (s *ServerEvent) MarshalJSON() ([]byte, error)

MarshalJSON - handle formatting

func (*ServerEvent) String added in v0.13.0

func (s *ServerEvent) String() string

type Table added in v0.0.2

type Table struct {
	TableName          string  `json:"tableName"`
	PagesIn            int64   `json:"pagesIn"`
	PagesOut           int64   `json:"pagesOut"`
	PagesCached        int64   `json:"pagesCached"`
	PagesSplitInternal int64   `json:"pagesSplitInternal"`
	PagesSplitLeaf     int64   `json:"pagesSplitLeaf"`
	ReadLocks          int64   `json:"readLocks"`
	WriteLocks         int64   `json:"writeLocks"`
	GetRows            int64   `json:"getRows"`
	PosRows            int64   `json:"posRows"`
	ScanRows           int64   `json:"scanRows"`
	PutRows            int64   `json:"putRows"`
	DelRows            int64   `json:"delRows"`
	TotalReadWait      int64   `json:"totalReadWait"`
	TotalReadHeld      int64   `json:"totalReadHeld"`
	TotalWriteWait     int64   `json:"totalWriteWait"`
	TotalWriteHeld     int64   `json:"totalWriteHeld"`
	MaxReadWait        int64   `json:"maxReadWait"`
	MaxReadHeld        int64   `json:"maxReadHeld"`
	MaxWriteWait       int64   `json:"maxWriteWait"`
	MaxWriteHeld       int64   `json:"maxWriteHeld"`
	PeekCount          int64   `json:"peekCount"`
	TotalPeekWait      int64   `json:"totalPeekWait"`
	TotalPeekHeld      int64   `json:"totalPeekHeld"`
	MaxPeekWait        int64   `json:"maxPeekWait"`
	MaxPeekHeld        int64   `json:"maxPeekHeld"`
	TriggerLapse       float32 `json:"triggerLapse"`
}

Table stores track information per table (part of Command)
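
A sketch of iterating the per-table track information attached to a parsed command (field names as per the struct above; cmd is assumed to be a p4dlog.Command):

for name, t := range cmd.Tables {
	fmt.Printf("%s: readHeld=%d writeHeld=%d\n", name, t.TotalReadHeld, t.TotalWriteHeld)
}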

Directories

Path Synopsis
cmd
