Published: Sep 23, 2022 License: MIT Imports: 1 Imported by: 0


zlog

A lightweight Go library for logging


Index

  1. Overview
  2. Installation
  3. Usage
    1. Loggers usage
      1. Simple Logger
      2. Custom Logger
      3. Multilogger
      4. Output formats
      5. Modular events
      6. Channeled Logger
      7. Callstack in Metadata
    2. Storing log events
      1. Logger as a Writer
      2. Writing events to a file
      3. Writing events to a database
    3. Logging events remotely
      1. gRPC server / client
    4. Building the library
      1. Targets
      2. Target types
  4. Features
    1. Simple API
    2. Highly configurable
    3. Feature-rich events
      1. Data structure
      2. Event builder
      3. Log levels
      4. Structured metadata
      5. Callstack in metadata
    4. Multi-everything
    5. Different formatters
      1. Text
        1. Log Timestamps
      2. JSON
      3. BSON
      4. CSV
      5. XML
      6. Protobuf
      7. Gob
    6. Data Stores
      1. Writer interface
      2. Logfile
      3. Databases
        1. SQLite
        2. MySQL
        3. PostgreSQL
        4. MongoDB
    7. gRPC
      1. gRPC Log Service
      2. gRPC Log Server
        1. Log Server Configs
      3. gRPC Log Client
        1. Log Client Configs
        2. Log Client Backoff
      4. Connection Addresses
  5. Integration
    1. Protobuf code generation
    2. Adding your own configuration settings
    3. Adding methods to a Builder pattern
    4. Adding interceptors to gRPC server / client
  6. Benchmarks
  7. Contributing

Overview

This project started (like many others) as a means for me to learn and understand how logging works (in Go and in general), among other interesting Go design patterns.

Taking the standard library's log package as a starting point, the goal was to create a new, minimalist logger while introducing great features found in open-source projects like logrus.

It very quickly became apparent that minimalism would not survive the project's growth, as I kept adding new features while learning new technologies and techniques.

That being the case, the goal morphed from simplicity into being feature-rich yet developer-friendly: abstractions and wrappers allow more complex configuration or behavior when the developer wants it, while (trying to) keep usage idiomatic under simple or default configurations.


Installation

To use the library in a project you're working on, ensure that you've initialized your go.mod file by running:

go mod init ${package_name} # like github.com/user/repo
go mod tidy

After doing so, you can go get this library:

go get github.com/zalgonoise/zlog

From this point onward, you can import the library in your code and use it as needed.

There are plans to add a CLI version as well, serving either as a gRPC Log Server binary or as a one-shot logger binary. The corresponding go install instructions will be added at that point.


Usage

This section covers basic usage and typical use-cases for the different modules in this library. Several individual examples provide direct context, and the Features section goes in-depth on each module and its functionality. Each entry below references an action, shows a snippet from the corresponding example, and briefly explains what is happening.

Loggers usage
Simple Logger - example

Snippet

package main

import (
	"github.com/zalgonoise/zlog/log"
)

func main() {
	log.Print("this is the simplest approach to entering a log message")
	log.Tracef("and can include formatting: %v %v %s", 3.5, true, "string")
	log.Errorln("which is similar to fmt.Print() method calls")

	log.Panicf("example of a logger panic event: %v", true)
}

Output

[info]  [2022-07-26T17:05:46.208657519Z]        [log]   this is the simplest approach to entering a log message
[trace] [2022-07-26T17:05:46.208750114Z]        [log]   and can include formatting: 3.5 true string
[error] [2022-07-26T17:05:46.208759031Z]        [log]   which is similar to fmt.Print() method calls

[panic] [2022-07-26T17:05:46.208766425Z]        [log]   example of a logger panic event: true
panic: example of a logger panic event: true

goroutine 1 [running]:
github.com/zalgonoise/zlog/log.(*logger).Panicf(0xc0001dafc0, {0x952348, 0x23}, {0xc0001af410, 0x1, 0x1})
        /go/src/github.com/zalgonoise/zlog/log/print.go:226 +0x357
github.com/zalgonoise/zlog/log.Panicf(...)
        /go/src/github.com/zalgonoise/zlog/log/print.go:784
main.main()
        /go/src/github.com/zalgonoise/zlog/examples/logger/simple_logger/simple_logger.go:19 +0x183
exit status 2

The simplest approach to using the logger library is to call its built-in methods, as if they were fmt.Print()-like calls. The logger exposes methods for registering messages in different log levels, defined in its Printer interface.

Note that there are calls which are configured to halt the application's runtime, like Fatal() and Panic(). These exit calls can be skipped in the logger's configuration.

More information on the Printer interface and the Logger's methods in the Simple API section.
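As a rough sketch of what a leveled, Printer-style API involves (using only the standard library; the names below are illustrative, not zlog's actual definitions), per-level print methods can delegate to one shared formatting step:

```go
package main

import (
	"fmt"
	"os"
)

// formatEntry renders an entry as "[level]<tab>message", loosely mirroring
// the text output shown above.
func formatEntry(level, msg string) string {
	return fmt.Sprintf("[%s]\t%s", level, msg)
}

// printer is an illustrative stand-in for a Printer implementation,
// exposing one method per log level.
type printer struct{ out *os.File }

func (p printer) Print(msg string) { fmt.Fprintln(p.out, formatEntry("info", msg)) }
func (p printer) Error(msg string) { fmt.Fprintln(p.out, formatEntry("error", msg)) }

func main() {
	p := printer{out: os.Stdout}
	p.Print("this is the simplest approach to entering a log message")
	p.Error("and this one registers an error-level entry")
}
```

The real Printer interface also covers formatted (f-suffixed) and line-terminated (ln-suffixed) variants for each level, as seen in the snippet above.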

Custom Logger - example

Snippet

package main

import (
	"bytes"
	"fmt"

	"github.com/zalgonoise/zlog/log"
)

func main() {
	logger := log.New(
		log.WithPrefix("svc"),
		log.WithSub("mod"),
	)

	buf := new(bytes.Buffer)

	logger.SetOuts(buf)
	logger.Prefix("service")
	logger.Sub("module")

	logger.Info("message written to a new buffer")

	fmt.Println(buf.String())
}

Output

[info]  [2022-07-26T17:04:25.371617213Z]        [service]       [module]        message written to a new buffer

The logger is customized on creation, and any number of configuration options can be passed to it. This keeps it flexible for simple setups (where only defaults are applied) and granular enough for complex ones.

Furthermore, it also exposes certain methods to allow changes to the logger's configuration during runtime (Prefix(), Sub() and Metadata(), as well as the AddOuts() and SetOuts() methods). For more information on the available configuration functions for loggers, check out the Highly configurable section.
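The shape of this configuration style can be sketched with functional options; the type and option names below are illustrative (the real ones live in the log package), and the "log" default mirrors the default service name seen in the outputs above:

```go
package main

import "fmt"

// loggerConf holds the configurable fields for this sketch.
type loggerConf struct {
	prefix string
	sub    string
}

// Option is a functional option applied by New.
type Option func(*loggerConf)

func WithPrefix(p string) Option { return func(c *loggerConf) { c.prefix = p } }
func WithSub(s string) Option    { return func(c *loggerConf) { c.sub = s } }

// New applies defaults first, then every option in order, so later
// options override earlier ones.
func New(opts ...Option) loggerConf {
	c := loggerConf{prefix: "log"} // default prefix
	for _, opt := range opts {
		opt(&c)
	}
	return c
}

func main() {
	c := New(WithPrefix("svc"), WithSub("mod"))
	fmt.Printf("[%s] [%s]\n", c.prefix, c.sub)
}
```

Calling New() with no options simply yields the defaults, which is what keeps the simple path simple.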


MultiLogger - example

Snippet

package main

import (
	"bytes"
	"fmt"

	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
)

func main() {
	buf := new(bytes.Buffer)

	stdLogger := log.New() // default logger printing to stdErr
	jsonLogger := log.New( // custom JSON logger, writing to buffer
		log.WithOut(buf),
		log.CfgFormatJSONIndent,
	)

	// join both loggers
	logger := log.MultiLogger(
		stdLogger,
		jsonLogger,
	)

	// print messages to stderr
	logger.Info("some event occurring")
	logger.Warn("a warning pops-up")
	logger.Log(
		event.New().Level(event.Level_error).
			Message("and finally an error").
			Metadata(event.Field{
				"code":      5,
				"some-data": true,
			}).
			Build())

	// print buffer content
	fmt.Print("\n---\n- JSON data:\n---\n", buf.String())
}

Output

[info]  [2022-07-28T17:12:44.966084127Z]        [log]   some event occurring
[warn]  [2022-07-28T17:12:44.966220938Z]        [log]   a warning pops-up
[error] [2022-07-28T17:12:44.966246265Z]        [log]   and finally an error    [ code = 5 ; some-data = true ] 

---
- JSON data:
---
{
  "timestamp": "2022-07-28T17:12:44.966187177Z",
  "service": "log",
  "level": "info",
  "message": "some event occurring"
}
{
  "timestamp": "2022-07-28T17:12:44.966234771Z",
  "service": "log",
  "level": "warn",
  "message": "a warning pops-up"
}
{
  "timestamp": "2022-07-28T17:12:44.966246265Z",
  "service": "log",
  "level": "error",
  "message": "and finally an error",
  "metadata": {
    "code": 5,
    "some-data": true
  }
}

The call to log.MultiLogger merges any loggers provided as input. In this example, the caller leverages this to write the same events to different outputs (with different formats); it can also be used to apply per-logger level filters, so that one writer registers all events while another registers only events of level error and above.

This approach can be taken with all kinds of loggers, provided that they implement the same methods as the Logger interface. That includes the loggers within this library: a standard-error logger and a gRPC Log Client, for instance, can be joined in a single MultiLogger() call. More information on Multi-everything in its own section.


Output formats - example

Snippet

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/log/format/text"
)

func main() {

	// setup a simple text logger, with custom formatting
	custTextLogger := log.New(
		log.WithFormat(
			text.New().
				Color().
				DoubleSpace().
				LevelFirst().
				Upper().
				Time(text.LTRubyDate).
				Build(),
		),
	)

	// setup a simple JSON logger
	jsonLogger := log.New(log.CfgFormatJSON)

	// setup a simple XML logger
	xmlLogger := log.New(log.CfgFormatXML)

	// (...)

	// join all loggers
	multiLogger := log.MultiLogger(
		custTextLogger,
		jsonLogger,
		xmlLogger,
		// (...)
	)

	// example message to print
	var msg = event.New().Message("message from a formatted logger").Build()

	// print the message to standard out, with different formats
	multiLogger.Log(msg)
}

Output

[INFO]          [Sat Jul 30 13:17:31 +0000 2022]                [LOG]           message from a formatted logger
{"timestamp":"2022-07-30T13:17:31.744955941Z","service":"log","level":"info","message":"message from a formatted logger"}
<entry><timestamp>2022-07-30T13:17:31.744955941Z</timestamp><service>log</service><level>info</level><message>message from a formatted logger</message></entry>

When setting up a Logger, different formatters can be passed to it. Common formats are already implemented (like JSON, XML, CSV and BSON), as well as a modular text formatter.

New formatters can also be added seamlessly by complying with their corresponding interfaces. More information on all formatters in the Different Formatters section.


Modular events - example

Snippet

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
)

func main() {

	// Events can be created and customized with a builder pattern,
	// where each element is defined with a chained method until the
	// Build() method is called.
	//
	// This last method will apply the timestamp to the event and any
	// defaults for missing (required) fields.
	log.Log(
		event.New().
			Prefix("module").
			Sub("service").
			Level(event.Level_warn).
			Metadata(event.Field{
				"data": true,
			}).
			Build(),
		event.New().
			Prefix("mod").
			Sub("svc").
			Level(event.Level_debug).
			Metadata(event.Field{
				"debug": "something something",
			}).
			Build(),
	)
}

Output

[warn]  [2022-07-30T13:21:14.023201168Z]        [module]        [service]               [ data = true ] 
[debug] [2022-07-30T13:21:14.023467597Z]        [mod]   [svc]           [ debug = "something something" ]

The events are created under-the-hood when using methods from the Printer interface, but they can also be created with the exposed event builder. This keeps calls to the logger clean (through its Log() method) while keeping events as detailed as needed. More information on events in the Feature-rich Events section.
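The chained-setter style used by event.New() can be sketched as a plain builder; names and defaults below are illustrative, with the info default matching the outputs shown earlier:

```go
package main

import (
	"fmt"
	"time"
)

// eventBuilder accumulates fields through chained methods.
type eventBuilder struct {
	level, prefix, msg string
}

// builtEvent is the immutable result of Build.
type builtEvent struct {
	Time   time.Time
	Level  string
	Prefix string
	Msg    string
}

func newEvent() *eventBuilder { return &eventBuilder{} }

// each setter returns the builder, so calls can be chained
func (b *eventBuilder) Level(l string) *eventBuilder   { b.level = l; return b }
func (b *eventBuilder) Prefix(p string) *eventBuilder  { b.prefix = p; return b }
func (b *eventBuilder) Message(m string) *eventBuilder { b.msg = m; return b }

// Build stamps the timestamp and fills defaults for required fields.
func (b *eventBuilder) Build() builtEvent {
	if b.level == "" {
		b.level = "info" // default level, as in the library's output
	}
	return builtEvent{Time: time.Now(), Level: b.level, Prefix: b.prefix, Msg: b.msg}
}

func main() {
	e := newEvent().Prefix("module").Message("built with chained methods").Build()
	fmt.Printf("[%s] [%s] %s\n", e.Level, e.Prefix, e.Msg)
}
```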


Channeled Logger - example

Snippet

package main

import (
	"time"

	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/log/logch"
)

func main() {

	// create a new, basic logger directly as a channeled logger
	chLogger := logch.New(log.New())

	// send messages using its Log() method directly; like the simple one:
	chLogger.Log(
		event.New().Message("one").Build(),
		event.New().Message("two").Build(),
		event.New().Message("three").Build(),
	)

	// or, call its Channels() method to work with the channels directly:
	msgCh, done := chLogger.Channels()

	// send the messages in a separate goroutine, then close the logger
	go func() {
		msgCh <- event.New().Message("four").Build()
		msgCh <- event.New().Message("five").Build()
		msgCh <- event.New().Message("six").Build()

		// give it a millisecond to allow the last message to be printed
		time.Sleep(time.Millisecond)

		// send done signal to stop the process
		done <- struct{}{}
	}()

	// keep-alive until the done signal is received
	<-done

}

Output

[info]  [2022-07-31T12:15:40.702944256Z]        [log]   one
[info]  [2022-07-31T12:15:40.703050024Z]        [log]   two
[info]  [2022-07-31T12:15:40.703054422Z]        [log]   three
[info]  [2022-07-31T12:15:40.703102802Z]        [log]   four
[info]  [2022-07-31T12:15:40.703156352Z]        [log]   five
[info]  [2022-07-31T12:15:40.703169196Z]        [log]   six

Since loggers usually keep running in the background (as your app handles events and writes to its logger), you can launch a logger as a goroutine. To simplify this, the ChanneledLogger interface is provided. The gist of it is being able to launch a logger directly in a goroutine, with useful methods to interact with it (Log(), Close() and Channels()). More information on this logic in the Highly Configurable section.


Callstack in Metadata - example

Snippet

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
)

// set up an indented JSON logger; package-level as an example
//
// you'd set this up within your logic however it's convenient
var logger = log.New(log.CfgFormatJSONIndent)

// placeholder operation for visibility in the callstack
func operation(value int) bool {
	return subOperation(value)
}

// placeholder sub-operation for visibility in the callstack
//
// error is printed whenever input is zero
func subOperation(value int) bool {
	if value == 0 {
		logger.Log(
			event.New().
				Level(event.Level_error).
				Message("operation failed").
				Metadata(event.Field{
					"error": "input cannot be zero", // custom metadata
					"input": value,                  // custom metadata
				}).
				CallStack(true). // add (complete) callstack to metadata
				Build(),
		)
		return false
	}
	return true
}

func main() {
	// all goes well until something happens within your application
	for a := 5; a >= 0; a-- {
		if operation(a) {
			continue
		}
		break
	}
}

Output

{
  "timestamp": "2022-08-01T16:06:46.162355505Z",
  "service": "log",
  "level": "error",
  "message": "operation failed",
  "metadata": {
    "callstack": {
      "goroutine-1": {
        "id": "1",
        "stack": [
          {
            "method": "github.com/zalgonoise/zlog/log/trace.(*stacktrace).getCallStack(...)",
            "reference": "/go/src/github.com/zalgonoise/zlog/log/trace/trace.go:54"
          },
          {
            "method": "github.com/zalgonoise/zlog/log/trace.New(0x30?)",
            "reference": "/go/src/github.com/zalgonoise/zlog/log/trace/trace.go:41 +0x7f"
          },
          {
            "method": "github.com/zalgonoise/zlog/log/event.(*EventBuilder).CallStack(0xc000077fc0, 0x30?)",
            "reference": "/go/src/github.com/zalgonoise/zlog/log/event/builder.go:99 +0x6b"
          },
          {
            "method": "main.subOperation(0x0)",
            "reference": "/go/src/github.com/zalgonoise/zlog/examples/logger/callstack_in_metadata/callstack_md.go:31 +0x34b"
          },
          {
            "method": "main.operation(...)",
            "reference": "/go/src/github.com/zalgonoise/zlog/examples/logger/callstack_in_metadata/callstack_md.go:15"
          },
          {
            "method": "main.main()",
            "reference": "/go/src/github.com/zalgonoise/zlog/examples/logger/callstack_in_metadata/callstack_md.go:42 +0x33"
          }
        ],
        "status": "running"
      }
    },
    "error": "input cannot be zero",
    "input": 0
  }
}

When creating an event, you can chain the CallStack(all bool) method before building it. Its boolean parameter selects either a full (true) or a trimmed (false) callstack.

More information on event building in the Feature-rich Events section, and the Callstack in metadata section in particular.


Storing log events
Logger as a Writer - example

Snippet

package main

import (
	"fmt"
	"os"

	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
)

func main() {

	logger := log.New()

	n, err := logger.Write([]byte("Hello, world!"))
	if err != nil {
		fmt.Println("errored: ", err)
		os.Exit(1)
	}

	fmt.Printf("\n---\nn: %v, err: %v\n---\n", n, err)

	n, err = logger.Write(event.New().Message("Hi, world!").Build().Encode())
	if err != nil {
		fmt.Println("errored: ", err)
		os.Exit(1)
	}

	fmt.Printf("\n---\nn: %v, err: %v\n---\n", n, err)
}

Output

[info]  [2022-07-30T12:07:44.451547181Z]        [log]   Hello, world!

---
n: 69, err: <nil>
---
[info]  [2022-07-30T12:07:44.45166375Z] [log]   Hi, world!

---
n: 65, err: <nil>
---

Since the Logger interface also implements the io.Writer interface, it can be used more broadly. The example above shows how passing a plain (string) message as a slice of bytes replicates a log.Info() call, while passing an encoded event makes the logger read its parameters (level, prefix, etc.) and register the event accordingly. More information in the Writer Interface section.


Writing events to a file - example

Snippet

package main

import (
	"fmt"
	"os"

	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/store/fs"
)

func main() {
	// create a new temporary file in the default temp directory
	tempF, err := os.CreateTemp("", "zlog_test_fs-")

	if err != nil {
		log.Fatalf("unexpected error creating temp file: %v", err)
	}
	defer os.Remove(tempF.Name())

	// use the temp file as a LogFile
	logF, err := fs.New(
		tempF.Name(),
	)
	if err != nil {
		log.Fatalf("unexpected error creating logfile: %v", err)
	}

	// set up a max size in MB, to auto-rotate
	logF.MaxSize(50)

	// create a simple logger using the logfile as output, log messages to it
	var logger = log.New(
		log.WithOut(logF),
	)
	logger.Log(
		event.New().Message("log entry written to file").Build(),
	)

	// print out the file's content
	b, err := os.ReadFile(tempF.Name())
	if err != nil {
		log.Fatalf("unexpected error reading logfile's data: %v", err)
	}

	fmt.Println(string(b))
}

Output

[info]  [2022-08-02T16:28:07.523156566Z]        [log]   log entry written to file

The library also provides a package to simplify writing log events to a file, with helpful features such as max-size limits and auto-rotation.

More information on this topic in the Logfile section.
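The max-size check behind auto-rotation boils down to comparing the current file size against the configured limit; a sketch (the helper name is made up, and the 50 MB figure mirrors the MaxSize(50) call above):

```go
package main

import "fmt"

// shouldRotate reports whether the current file has reached the configured
// limit, at which point the store would move on to a fresh file.
func shouldRotate(sizeBytes, maxMB int64) bool {
	return sizeBytes >= maxMB<<20 // maxMB * 1024 * 1024
}

func main() {
	fmt.Println(shouldRotate(10<<20, 50)) // 10 MB file, 50 MB limit: false
	fmt.Println(shouldRotate(51<<20, 50)) // over the limit: true
}
```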


Writing events to a database

SQLite Snippet - example

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/store/db/sqlite"

	"os"
)

const (
	dbPathEnv string = "SQLITE_PATH"
)

func getEnv(env string) (val string, ok bool) {
	v := os.Getenv(env)

	if v == "" {
		return v, false
	}

	return v, true
}

func setupMethodOne(dbPath string) log.Logger {
	// create a new DB writer
	db, err := sqlite.New(dbPath)

	if err != nil {
		log.Fatalf("unexpected error: %v", err)
	}

	// create a new logger with the general DB config
	logger := log.New(
		log.WithDatabase(db),
	)

	return logger
}

func setupMethodTwo(dbPath string) log.Logger {
	// create one with the package function
	return log.New(
		sqlite.WithSQLite(dbPath),
	)
}

func main() {
	// load sqlite db path from environment variable
	sqlitePath, ok := getEnv(dbPathEnv)
	if !ok {
		log.Fatalf("SQLite database path not provided, from env variable %s", dbPathEnv)
	}

	// setup a logger by preparing the DB writer
	loggerOne := setupMethodOne(sqlitePath)

	// write a message to the DB
	loggerOne.Log(
		event.New().Message("log entry #1 written to sqlite db writer #1").Build(),
		event.New().Message("log entry #2 written to sqlite db writer #1").Build(),
		event.New().Message("log entry #3 written to sqlite db writer #1").Build(),
	)

	// setup a logger directly as a DB writer
	loggerTwo := setupMethodTwo(sqlitePath)

	// write a message to the DB
	loggerTwo.Log(
		event.New().Message("log entry #1 written to sqlite db writer #2").Build(),
		event.New().Message("log entry #2 written to sqlite db writer #2").Build(),
		event.New().Message("log entry #3 written to sqlite db writer #2").Build(),
	)
}

MySQL Snippet - example

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/store/db/mysql"

	"os"
)

const (
	_         string = "MYSQL_USER"     // account for these env variables
	_         string = "MYSQL_PASSWORD" // account for these env variables
	dbAddrEnv string = "MYSQL_HOST"
	dbPortEnv string = "MYSQL_PORT"
	dbNameEnv string = "MYSQL_DATABASE"
)

func getEnv(env string) (val string, ok bool) {
	v := os.Getenv(env)

	if v == "" {
		return v, false
	}

	return v, true
}

func setupMethodOne(dbAddr, dbPort, dbName string) log.Logger {
	// create a new DB writer
	db, err := mysql.New(dbAddr+":"+dbPort, dbName)

	if err != nil {
		log.Fatalf("unexpected error: %v", err)
	}

	// create a new logger with the general DB config
	logger := log.New(
		log.WithDatabase(db),
	)

	return logger
}

func setupMethodTwo(dbAddr, dbPort, dbName string) log.Logger {
	// create one with the package function
	return log.New(
		mysql.WithMySQL(dbAddr+":"+dbPort, dbName),
	)
}

func main() {
	// load mysql db details from environment variables
	mysqlAddr, ok := getEnv(dbAddrEnv)
	if !ok {
		log.Fatalf("MySQL database address not provided, from env variable %s", dbAddrEnv)
	}

	mysqlPort, ok := getEnv(dbPortEnv)
	if !ok {
		log.Fatalf("MySQL database port not provided, from env variable %s", dbPortEnv)
	}

	mysqlDB, ok := getEnv(dbNameEnv)
	if !ok {
		log.Fatalf("MySQL database name not provided, from env variable %s", dbNameEnv)
	}

	// setup a logger by preparing the DB writer
	loggerOne := setupMethodOne(mysqlAddr, mysqlPort, mysqlDB)

	// write a message to the DB
	loggerOne.Log(
		event.New().Message("log entry #1 written to mysql db writer #1").Build(),
		event.New().Message("log entry #2 written to mysql db writer #1").Build(),
		event.New().Message("log entry #3 written to mysql db writer #1").Build(),
	)

	// setup a logger directly as a DB writer
	loggerTwo := setupMethodTwo(mysqlAddr, mysqlPort, mysqlDB)

	// write a message to the DB
	loggerTwo.Log(
		event.New().Message("log entry #1 written to mysql db writer #2").Build(),
		event.New().Message("log entry #2 written to mysql db writer #2").Build(),
		event.New().Message("log entry #3 written to mysql db writer #2").Build(),
	)
}

PostgreSQL Snippet - example

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/store/db/postgres"

	"os"
)

const (
	_         string = "POSTGRES_USER"     // account for these env variables
	_         string = "POSTGRES_PASSWORD" // account for these env variables
	dbAddrEnv string = "POSTGRES_HOST"
	dbPortEnv string = "POSTGRES_PORT"
	dbNameEnv string = "POSTGRES_DATABASE"
)

func getEnv(env string) (val string, ok bool) {
	v := os.Getenv(env)

	if v == "" {
		return v, false
	}

	return v, true
}

func setupMethodOne(dbAddr, dbPort, dbName string) log.Logger {
	// create a new DB writer
	db, err := postgres.New(dbAddr, dbPort, dbName)

	if err != nil {
		log.Fatalf("unexpected error: %v", err)
	}

	// create a new logger with the general DB config
	logger := log.New(
		log.WithDatabase(db),
	)

	return logger
}

func setupMethodTwo(dbAddr, dbPort, dbName string) log.Logger {
	// create one with the package function
	return log.New(
		postgres.WithPostgres(dbAddr, dbPort, dbName),
	)
}

func main() {
	// load postgres db details from environment variables
	postgresAddr, ok := getEnv(dbAddrEnv)
	if !ok {
		log.Fatalf("Postgres database address not provided, from env variable %s", dbAddrEnv)
	}

	postgresPort, ok := getEnv(dbPortEnv)
	if !ok {
		log.Fatalf("Postgres database port not provided, from env variable %s", dbPortEnv)
	}

	postgresDB, ok := getEnv(dbNameEnv)
	if !ok {
		log.Fatalf("Postgres database name not provided, from env variable %s", dbNameEnv)
	}

	// setup a logger by preparing the DB writer
	loggerOne := setupMethodOne(postgresAddr, postgresPort, postgresDB)

	// write a message to the DB
	loggerOne.Log(
		event.New().Message("log entry #1 written to postgres db writer #1").Build(),
		event.New().Message("log entry #2 written to postgres db writer #1").Build(),
		event.New().Message("log entry #3 written to postgres db writer #1").Build(),
	)

	// setup a logger directly as a DB writer
	loggerTwo := setupMethodTwo(postgresAddr, postgresPort, postgresDB)

	// write a message to the DB
	loggerTwo.Log(
		event.New().Message("log entry #1 written to postgres db writer #2").Build(),
		event.New().Message("log entry #2 written to postgres db writer #2").Build(),
		event.New().Message("log entry #3 written to postgres db writer #2").Build(),
	)
}

MongoDB Snippet - example

package main

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
	"github.com/zalgonoise/zlog/store/db/mongo"

	"os"
)

const (
	_         string = "MONGO_USER"     // account for these env variables
	_         string = "MONGO_PASSWORD" // account for these env variables
	dbAddrEnv string = "MONGO_HOST"
	dbPortEnv string = "MONGO_PORT"
	dbNameEnv string = "MONGO_DATABASE"
	dbCollEnv string = "MONGO_COLLECTION"
)

func getEnv(env string) (val string, ok bool) {
	v := os.Getenv(env)

	if v == "" {
		return v, false
	}

	return v, true
}

func setupMethodOne(dbAddr, dbPort, dbName, dbColl string) (log.Logger, func()) {
	// create a new DB writer
	db, err := mongo.New(dbAddr+":"+dbPort, dbName, dbColl)

	if err != nil {
		log.Fatalf("unexpected error: %v", err)
	}

	// create a new logger with the general DB config
	logger := log.New(
		log.WithDatabase(db),
	)

	return logger, func() {
		db.Close()
	}
}

func setupMethodTwo(dbAddr, dbPort, dbName, dbColl string) log.Logger {
	// create one with the package function
	return log.New(
		mongo.WithMongo(dbAddr+":"+dbPort, dbName, dbColl),
	)
}

func main() {
	// load mongo db details from environment variables
	mongoAddr, ok := getEnv(dbAddrEnv)
	if !ok {
		log.Fatalf("Mongo database address not provided, from env variable %s", dbAddrEnv)
	}

	mongoPort, ok := getEnv(dbPortEnv)
	if !ok {
		log.Fatalf("Mongo database port not provided, from env variable %s", dbPortEnv)
	}

	mongoDB, ok := getEnv(dbNameEnv)
	if !ok {
		log.Fatalf("Mongo database name not provided, from env variable %s", dbNameEnv)
	}

	mongoColl, ok := getEnv(dbCollEnv)
	if !ok {
		log.Fatalf("Mongo collection name not provided, from env variable %s", dbCollEnv)
	}

	// setup a logger by preparing the DB writer
	loggerOne, closeFn := setupMethodOne(mongoAddr, mongoPort, mongoDB, mongoColl)
	defer closeFn()

	// write a message to the DB
	loggerOne.Log(
		event.New().Message("log entry written to mongo db writer #1").Build(),
	)

	// setup a logger directly as a DB writer
	loggerTwo := setupMethodTwo(mongoAddr, mongoPort, mongoDB, mongoColl)

	// write a message to the DB
	n, err := loggerTwo.Output(
		event.New().Message("log entry written to mongo db writer #2").Build(),
	)
	if err != nil {
		log.Fatalf("unexpected error: %v", err)
	}
	if n == 0 {
		log.Fatalf("zero bytes written")
	}

	log.Info("event written to DB writer #2 successfully")
}

There are packages available in store/db which allow for a simple setup when you prefer to write log events to a database (instead of a file or a buffer).

For this, most of the database integrations leverage GORM, making it seamless to use different databases. The supported types are SQLite, MySQL, PostgreSQL and MongoDB.

More information on this topic in the Databases section.
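Conceptually, a database store is just another writer where each event becomes an inserted row; a minimal in-memory stand-in (the real packages go through GORM or the mongo driver, but the writer-per-event shape is the same):

```go
package main

import "fmt"

// rowStore stands in for a database-backed log writer: each Write is
// treated as one inserted row.
type rowStore struct{ rows []string }

func (s *rowStore) Write(p []byte) (int, error) {
	s.rows = append(s.rows, string(p)) // one row per event
	return len(p), nil
}

func main() {
	store := &rowStore{}
	store.Write([]byte("log entry #1"))
	store.Write([]byte("log entry #2"))
	fmt.Println(len(store.rows)) // 2 rows "inserted"
}
```

Because rowStore satisfies io.Writer, it could be handed to a logger exactly like a file or buffer, which is what WithDatabase builds on.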


Logging events remotely
gRPC server / client - example

Server Snippet

package main

import (
	"os"

	"github.com/zalgonoise/zlog/log"

	"github.com/zalgonoise/zlog/grpc/server"
)

func getEnv(env string) (val string, ok bool) {
	v := os.Getenv(env)

	if v == "" {
		return v, false
	}

	return v, true
}

func getTLSConf() server.LogServerConfig {
	var tlsConf server.LogServerConfig

	var withCert bool
	var withKey bool
	var withCA bool

	certPath, ok := getEnv("TLS_SERVER_CERT")
	if ok {
		withCert = true
	}

	keyPath, ok := getEnv("TLS_SERVER_KEY")
	if ok {
		withKey = true
	}

	caPath, ok := getEnv("TLS_CA_CERT")
	if ok {
		withCA = true
	}

	if withCert && withKey {
		if withCA {
			tlsConf = server.WithTLS(certPath, keyPath, caPath)
		} else {
			tlsConf = server.WithTLS(certPath, keyPath)
		}
	}

	return tlsConf
}

func main() {

	grpcLogger := server.New(
		server.WithLogger(
			log.New(
				log.WithFormat(log.TextColorLevelFirstSpaced),
			),
		),
		server.WithServiceLogger(
			log.New(
				log.WithFormat(log.TextColorLevelFirstSpaced),
			),
		),
		server.WithAddr("127.0.0.1:9099"),
		server.WithGRPCOpts(),
		getTLSConf(),
	)
	grpcLogger.Serve()
}

Output

[info]          [2022-08-04T11:16:29.624287477Z]                [gRPC]          [listen]                gRPC server is listening to connections         [ addr = "127.0.0.1:9099" ] 
[debug]         [2022-08-04T11:16:29.624489336Z]                [gRPC]          [serve]         gRPC server is running          [ addr = "127.0.0.1:9099" ] 
[debug]         [2022-08-04T11:16:29.624530116Z]                [gRPC]          [handler]               message handler is running
[trace]         [2022-08-04T11:16:29.624588293Z]                [gRPC]          [Done]          listening to done signal

Client snippet

package main

import (
	"fmt"
	"os"
	"time"

	"github.com/zalgonoise/zlog/grpc/client"
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"
)

func getEnv(env string) (val string, ok bool) {
	v := os.Getenv(env)

	if v == "" {
		return v, false
	}

	return v, true
}

func getTLSConf() client.LogClientConfig {
	var tlsConf client.LogClientConfig

	var withCert bool
	var withKey bool
	var withCA bool

	certPath, ok := getEnv("TLS_CLIENT_CERT")
	if ok {
		withCert = true
	}

	keyPath, ok := getEnv("TLS_CLIENT_KEY")
	if ok {
		withKey = true
	}

	caPath, ok := getEnv("TLS_CA_CERT")
	if ok {
		withCA = true
	}

	if withCA {
		if withCert && withKey {
			tlsConf = client.WithTLS(caPath, certPath, keyPath)
		} else {
			tlsConf = client.WithTLS(caPath)
		}
	}

	return tlsConf
}

func main() {
	logger := log.New(
		log.WithFormat(log.TextColorLevelFirst),
	)

	grpcLogger, errCh := client.New(
		client.WithAddr("127.0.0.1:9099"),
		client.UnaryRPC(),
		client.WithLogger(
			logger,
		),
		client.WithGRPCOpts(),
		getTLSConf(),
	)
	_, done := grpcLogger.Channels()

	grpcLogger.Log(event.New().Message("hello from client").Build())

	for i := 0; i < 3; i++ {
		grpcLogger.Log(event.New().Level(event.Level_warn).Message(fmt.Sprintf("warning #%v", i)).Build())
		time.Sleep(time.Millisecond * 50)
	}

	for {
		select {
		case err := <-errCh:
			panic(err)
		case <-done:
			return
		}
	}
}

Client output

[debug] [2022-08-04T11:21:11.36101206Z] [gRPC]  [init]  setting up Unary gRPC client
[debug] [2022-08-04T11:21:11.361184863Z]        [gRPC]  [conn]  connecting to remote    [ addr = "127.0.0.1:9099" ; index = 0 ] 
[debug] [2022-08-04T11:21:11.361299493Z]        [gRPC]  [conn]  connecting to remote    [ index = 0 ; addr = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.361876301Z]        [gRPC]  [conn]  dialed the address successfully [ addr = "127.0.0.1:9099" ; index = 0 ] 
[debug] [2022-08-04T11:21:11.361995075Z]        [gRPC]  [log]   setting up log service with connection  [ remote = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.362029197Z]        [gRPC]  [log]   received a new log message to register  [ timeout = 30 ; remote = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.362151576Z]        [gRPC]  [conn]  dialed the address successfully [ addr = "127.0.0.1:9099" ; index = 0 ] 
[debug] [2022-08-04T11:21:11.36221311Z] [gRPC]  [log]   setting up log service with connection  [ remote = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.362242582Z]        [gRPC]  [log]   received a new log message to register  [ remote = "127.0.0.1:9099" ; timeout = 30 ] 
[debug] [2022-08-04T11:21:11.414202683Z]        [gRPC]  [conn]  connecting to remote    [ addr = "127.0.0.1:9099" ; index = 0 ] 
[debug] [2022-08-04T11:21:11.415095627Z]        [gRPC]  [conn]  dialed the address successfully [ index = 0 ; addr = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.415162552Z]        [gRPC]  [log]   setting up log service with connection  [ remote = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.415192529Z]        [gRPC]  [log]   received a new log message to register  [ timeout = 30 ; remote = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.464857675Z]        [gRPC]  [conn]  connecting to remote    [ index = 0 ; addr = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.465455812Z]        [gRPC]  [conn]  dialed the address successfully [ addr = "127.0.0.1:9099" ; index = 0 ] 
[debug] [2022-08-04T11:21:11.465509873Z]        [gRPC]  [log]   setting up log service with connection  [ remote = "127.0.0.1:9099" ] 
[debug] [2022-08-04T11:21:11.465537439Z]        [gRPC]  [log]   received a new log message to register  [ remote = "127.0.0.1:9099" ; timeout = 30 ]

Server output

(...)
[trace]         [2022-08-04T11:16:29.624588293Z]                [gRPC]          [Done]          listening to done signal
[warn]          [2022-08-04T11:21:11.361248552Z]                [log]           warning #0
[info]          [2022-08-04T11:21:11.361160675Z]                [log]           hello from client
[debug]         [2022-08-04T11:21:11.362648438Z]                [gRPC]          [handler]               input log message parsed and registered
[debug]         [2022-08-04T11:21:11.362701641Z]                [gRPC]          [handler]               input log message parsed and registered
[warn]          [2022-08-04T11:21:11.414114247Z]                [log]           warning #1
[debug]         [2022-08-04T11:21:11.415435299Z]                [gRPC]          [handler]               input log message parsed and registered
[warn]          [2022-08-04T11:21:11.464787592Z]                [log]           warning #2
[debug]         [2022-08-04T11:21:11.465792823Z]                [gRPC]          [handler]               input log message parsed and registered

Setting up a Log Server to register events over a network is made possible with the gRPC implementation of both client and server logic. The goal is to make remote logging both fast and simple to configure when it should not be a centerpiece of your application, but merely a debugging / development feature.

The gRPC framework allows a quick and secure means of exchanging messages as remote procedure calls, leveraging protocol buffers to generate source code on-the-fly. As such, protocol buffers carry a lot of weight in this library: the event data structure itself is defined as one.

Not only is it simple to set up and kick off a gRPC Log Server, with either unary or stream RPCs and many other neat features -- it's also extensible: you can take the same service and message .proto files and integrate the Logger into your own application, by generating the Log Client code for your environment.

More information on the gRPC service / server / client implementations can be found in the gRPC section.


Building the library

This library uses Bazel as a build system. Bazel is an amazing tool that promises a hermetic build -- one that will always output the same results. Beyond this greatest feature (which may not be achievable in build tools that depend on your local environment), it is also very extensible in terms of integrations, supports multiple languages, and provides clear recipes of the dependencies of a particular library or binary.

Bazel also offers features like remote execution (for example, with BuildBuddy) and remote caching (with a Docker container on another machine, for instance).

To install Bazel on a machine, I prefer to use the bazelisk tool, which ensures that Bazel is kept up-to-date and will not raise any version issues later on. To install it, you can use your system's package manager, or the following Go command:

go install github.com/bazelbuild/bazelisk@latest

Bazel is the build tool of choice for this library because (for me, personally) it makes it very easy to configure a new build target when required: you simply need to set up the WORKSPACE file from the presets provided by the tools used (Go rules, gazelle and buildifier), by adding the corresponding WORKSPACE code to your repo's file.

Once the WORKSPACE file is configured, you're able to use gazelle to auto-generate the build files for your Go code. This is done by running the //:gazelle targets listed in the section below:

Targets
Command Description
bazel run //:gazelle -- Generate the BUILD.bazel files for this and all subdirectories. It must be called at the root of the repository, where the WORKSPACE file is located
bazel run //:gazelle -- update-repos -from_file=go.mod Update the WORKSPACE file with the new dependencies, when they are added to Go code. It must be called at the root of the repository, where the WORKSPACE file is located
bazel build //log/... Builds the log package and all subpackages
bazel test //... Tests the entire library
bazel test //log/event:event_test Tests the event package, in the form of //path/to:target
bazel run //:zlog Executes the :zlog (/main.go) target
bazel run //examples/logger/simple_logger:simple_logger Executes the simple_logger.go example, in the form of //path/to:target
bazel run //:buildifier-check Executes the buildifier target in lint-checking mode for build files
bazel run //:buildifier-fix Executes the buildifier target and corrects any build file lint issues
bazel run //:lint Executes golangci-lint run ./... with the .go files in the repository.
Target types

Go Binary

If you have a binary (main.go file), it will be listed as a binary target in a BUILD.bazel file, such as:

load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")

go_binary(
    name = "zlog",
    embed = [":zlog_lib"],
    visibility = ["//visibility:public"],
)

These targets can be called with Bazel with a command such as:

bazel run //:zlog --

...Or if they are in some deeper directory, call the command from the repo's root:

bazel run //examples/logger/simple_logger:simple_logger

Go Library

Libraries are non-executable packages of (Go) code which will provide functionality to a binary, as referenced in its imports:

load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")

go_library(
    name = "simple_logger_lib",
    srcs = ["simple_logger.go"],
    importpath = "github.com/zalgonoise/zlog/examples/logger/simple_logger",
    visibility = ["//visibility:private"],
    deps = ["//log"],
)

go_binary(
    name = "simple_logger",
    embed = [":simple_logger_lib"], # uses the lib above
    visibility = ["//visibility:public"],
)

Gazelle

Used to generate Go build files for Bazel, it's declared in the WORKSPACE file and referenced in the main BUILD.bazel (at the root of the repository):

load("@bazel_gazelle//:def.bzl", "gazelle")

# gazelle:prefix github.com/zalgonoise/zlog
gazelle(name = "gazelle")

Gazelle usage is covered in the Targets section above, with both commands used to update Go build files and their dependencies for Bazel.

Buildifier

Used to verify (and fix) build files that you may have created or modified -- similar to go fmt, but for Bazel. It's declared in the WORKSPACE file and referenced in the main BUILD.bazel (at the root of the repository), with two targets available:

load("@com_github_bazelbuild_buildtools//buildifier:def.bzl", "buildifier")

buildifier(
    name = "buildifier-check",
    lint_mode = "warn",
    mode = "check",
    multi_diff = True,
)

buildifier(
    name = "buildifier-fix",
    lint_mode = "fix",
    mode = "fix",
    multi_diff = True,
)

Golangci-lint

Used to perform a lint check on Go files, this tool will not alter any data; it only outputs suggested changes to the terminal. Its AMD64 executable is fetched from the official repository and executed against all Go files, with the command golangci-lint run ./.... The binary is imported in WORKSPACE:

## golangci-lint
http_archive(
    name = "golangci_golangci-lint",
    strip_prefix = "golangci-lint-1.49.0-linux-amd64",
    sha256 = "5badc6e9fee2003621efa07e385910d9a88c89b38f6c35aded153193c5125178",
    build_file = "//:BUILD.golanglint-ci",
    urls = [
        "https://github.com/golangci/golangci-lint/releases/download/v1.49.0/golangci-lint-1.49.0-linux-amd64.tar.gz",
    ],
)

It is bundled locally in the BUILD.golanglint-ci file:

filegroup(
    name = "golangci",
    srcs = glob([
        "golangci-lint",
    ]),
    visibility = ["//visibility:public"],
)

And executed alongside all Go files of the repository in BUILD.bazel:

sh_binary(
  name = "lint",
  srcs = [ "@golangci_golangci-lint//:golangci" ],
  data = glob([ "**/*.go" ]),
  args = [ "run", "./..." ],
)

Features

This library provides a feature-rich structured logger, ready to write to many types of outputs (standard out / error, to buffers, to files and databases) and over-the-wire (via gRPC).

Simple API

See the Simple Logger example

The Logger interface in this library provides a complete set of idiomatic methods which allow you to either control the logger:

type Logger interface {
	io.Writer
	Printer

	SetOuts(outs ...io.Writer) Logger
	AddOuts(outs ...io.Writer) Logger
	Prefix(prefix string) Logger
	Sub(sub string) Logger
	Fields(fields map[string]interface{}) Logger
	IsSkipExit() bool
}

...or to use its Printer interface and print messages in the fmt.Print() / fmt.Println() / fmt.Printf() way:

type Printer interface {
	Output(m *event.Event) (n int, err error)
	Log(m ...*event.Event)

	Print(v ...interface{})
	Println(v ...interface{})
	Printf(format string, v ...interface{})

	Panic(v ...interface{})
	Panicln(v ...interface{})
	Panicf(format string, v ...interface{})

	Fatal(v ...interface{})
	Fatalln(v ...interface{})
	Fatalf(format string, v ...interface{})

	Error(v ...interface{})
	Errorln(v ...interface{})
	Errorf(format string, v ...interface{})

	Warn(v ...interface{})
	Warnln(v ...interface{})
	Warnf(format string, v ...interface{})

	Info(v ...interface{})
	Infoln(v ...interface{})
	Infof(format string, v ...interface{})

	Debug(v ...interface{})
	Debugln(v ...interface{})
	Debugf(format string, v ...interface{})

	Trace(v ...interface{})
	Traceln(v ...interface{})
	Tracef(format string, v ...interface{})
}

The logger configuration methods (listed below) can be used during runtime, to adapt the logger to the application's needs. However, special care needs to be taken when calling the writer-altering methods (AddOuts() and SetOuts()), considering that they can cause undesirable effects. It's recommended that these methods be called when the logger is initialized in your app, if called at all.

Method Description
SetOuts(...io.Writer) Logger sets (replaces) the defined io.Writer in the Logger with the input list of io.Writer.
AddOuts(...io.Writer) Logger adds (appends) the defined io.Writer in the Logger with the input list of io.Writer.
Prefix(string) Logger sets a logger-scoped (as opposed to message-scoped) prefix string to the logger
Sub(string) Logger sets a logger-scoped (as opposed to message-scoped) sub-prefix string to the logger
Fields(map[string]interface{}) Logger sets logger-scoped (as opposed to message-scoped) metadata fields to the logger
IsSkipExit() bool returns a boolean on whether this logger is set to skip os.Exit(1) or panic() calls.

Note: SetOuts() and AddOuts() methods will apply the multi-writer pattern to the input list of io.Writer. The writers are merged as one.

Note: Logger-scoped parameters (prefix, sub-prefix and metadata) allow calling the Printer interface methods (including event-based methods like Log() and Output()) without having to define these values. This can be especially useful when registering multiple log events in a certain module of your code -- however, the drawback is that these values persist in the logger, so you may need to unset them (by setting them to nil or empty values).

Note: IsSkipExit() is a useful method, used for example to determine whether a MultiLogger should be presented as a skip-exit-calls logger or not -- if at least one configured logger in a multilogger is not skipping exit calls, its output is false.
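That multilogger semantic can be sketched in a few lines of standard-library Go; the SkipLogger interface and stub types below are hypothetical stand-ins for illustration, not the library's actual types:

```go
package main

import "fmt"

// SkipLogger is a hypothetical, minimal stand-in for a logger that
// reports whether it skips os.Exit(1) / panic() calls.
type SkipLogger interface {
	IsSkipExit() bool
}

type stubLogger struct{ skip bool }

func (s stubLogger) IsSkipExit() bool { return s.skip }

// multiIsSkipExit mirrors the documented behavior: if at least one
// configured logger is NOT skipping exit calls, the result is false.
func multiIsSkipExit(loggers ...SkipLogger) bool {
	for _, l := range loggers {
		if !l.IsSkipExit() {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(multiIsSkipExit(stubLogger{true}, stubLogger{true}))  // true
	fmt.Println(multiIsSkipExit(stubLogger{true}, stubLogger{false})) // false
}
```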

Highly configurable

See the Custom Logger example

Creating a new logger with, for example, log.New() takes any number of configurations (including none, for the default configuration). This allows added modularity to the way your logger should behave.

Method / Variable Description
NilLogger() create a nil-logger (that doesn't write anything to anywhere)
WithPrefix(string) set a default prefix
WithSub(string) set a default sub-prefix
WithOut(...io.Writer) set (a) default writer(s)
WithFormat(LogFormatter) set the formatter for the log event output content
SkipExit config set the skip-exit option (to skip os.Exit(1) and panic() calls)
WithFilter(event.Level) set a log-level filter
WithDatabase(...io.WriteCloser) set a database writer (if using a database)

Beyond the functions and preset configurations above, the package also exposes the following preset for the default config:

var DefaultConfig LoggerConfig = &multiconf{
  confs: []LoggerConfig{
    WithFormat(TextColorLevelFirst),
    WithOut(),
    WithPrefix(event.Default_Event_Prefix),
  },
}

...and the following (initialized) presets for several useful "defaults":

var (
	DefaultCfg    = LoggerConfigs[0]  // default LoggerConfig
	SkipExit      = LoggerConfigs[1]  // skip-exits LoggerConfig
	StdOut        = LoggerConfigs[7]  // os.Stderr LoggerConfig
	PrefixDefault = LoggerConfigs[8]  // default-prefix LoggerConfig
	FilterInfo    = LoggerConfigs[9]  // Info-filtered LoggerConfig
	FilterWarn    = LoggerConfigs[10] // Warn-filtered LoggerConfig
	FilterError   = LoggerConfigs[11] // Error-filtered LoggerConfig
	NilConfig     = LoggerConfigs[12] // empty / nil LoggerConfig
	EmptyConfig   = LoggerConfigs[12] // empty / nil LoggerConfig
)

It's important to underline that the Logger interface can also be launched in a goroutine without any hassle by using the log/logch package, for its ChanneledLogger interface.

See the Channeled Logger example

This interface provides a narrower set of methods, but instead focuses on setting up controls to interact with the logger and goroutine. Note the list of methods available:

Method Description
Log(msg ...*event.Event) takes in any number of pointers to event.Event, iterates through each of them, and pushes them to the LogMessage channel.
Close() sends a signal (an empty struct{}) to the done channel, triggering the spawned goroutine to return
Channels() (logCh chan *event.Event, done chan struct{}) returns the LogMessage channel and the done channel, so that they can be used directly with the same channel messaging patterns

The ChanneledLogger interface can be initialized with the New(log.Logger) function, which creates both the message and done channels, and then kicks off the goroutine with the input logger listening for messages. Note that if you need multiple loggers converted to a ChanneledLogger, you should merge them with log.MultiLogger(...log.Logger) first.
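As a rough standard-library sketch of this pattern (the chLogger type and string messages here are hypothetical; the real ChanneledLogger works with *event.Event and a log.Logger), a channel-fed logger goroutine might look like:

```go
package main

import (
	"fmt"
	"sync"
)

// chLogger sketches the ChanneledLogger pattern: a goroutine drains
// a message channel until a done signal arrives.
type chLogger struct {
	msgCh  chan string
	done   chan struct{}
	out    []string
	closed sync.WaitGroup
}

func newChLogger() *chLogger {
	c := &chLogger{msgCh: make(chan string), done: make(chan struct{})}
	c.closed.Add(1)
	go func() {
		defer c.closed.Done()
		for {
			select {
			case m := <-c.msgCh:
				c.out = append(c.out, m) // the real logger would write the event here
			case <-c.done:
				return
			}
		}
	}()
	return c
}

// Log pushes each message to the message channel.
func (c *chLogger) Log(msgs ...string) {
	for _, m := range msgs {
		c.msgCh <- m
	}
}

// Close signals the done channel, triggering the goroutine to return.
func (c *chLogger) Close() {
	c.done <- struct{}{}
	c.closed.Wait()
}

func main() {
	l := newChLogger()
	l.Log("hello", "world")
	l.Close()
	fmt.Println(len(l.out)) // 2
}
```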

Feature-rich events

See the Modular events example

Data structure

The events are defined in a protocol buffer format, in proto/event.proto; to give it a seamless integration as a gRPC logger's request message:

message Event {
    optional google.protobuf.Timestamp time = 1;
    optional string prefix = 2 [ default = "log" ];
    optional string sub = 3;
    optional Level level = 4 [ default = info ];
    required string msg = 5;
    optional google.protobuf.Struct meta = 6;
}
Event builder

An event is created with a builder pattern, by defining a set of elements before producing the resulting object.

The event builder will allow chaining methods after event.New() until the Build() method is called. Below is a list of all available methods to the event.EventBuilder:

Method signature Description
Prefix(p string) *EventBuilder set the prefix element
Sub(s string) *EventBuilder set the sub-prefix element
Message(m string) *EventBuilder set the message body element
Level(l Level) *EventBuilder set the level element
Metadata(m map[string]interface{}) *EventBuilder set (or add to) the metadata element
CallStack(all bool) *EventBuilder grab the current call stack, and add it as a "callstack" object in the event's metadata
Build() *Event build an event with configured elements, defaults applied where needed, and by adding a timestamp
Log levels

Log levels are defined as a protobuf enum, as Level enum:

enum Level {
    trace = 0;
    debug = 1;
    info = 2;
    warn = 3;
    error = 4;
    fatal = 5;
    reserved 6 to 8;
    panic = 9;
}

The generated code creates a type and two maps which set these levels:

type Level int32

const (
	Level_trace Level = 0
	Level_debug Level = 1
	Level_info  Level = 2
	Level_warn  Level = 3
	Level_error Level = 4
	Level_fatal Level = 5
	Level_panic Level = 9
)

// Enum value maps for Level.
var (
	Level_name = map[int32]string{
		0: "trace",
		1: "debug",
		2: "info",
		3: "warn",
		4: "error",
		5: "fatal",
		9: "panic",
	}
	Level_value = map[string]int32{
		"trace": 0,
		"debug": 1,
		"info":  2,
		"warn":  3,
		"error": 4,
		"fatal": 5,
		"panic": 9,
	}
)

This way, the enum is ready for changes in a consistent and seamless way. Here is how you'd define a log level as you create a new event:

// import "github.com/zalgonoise/zlog/log/event"

e := event.New().
           Message("Critical warning!"). // add a message body to the event
           Level(event.Level_warn).      // set the log level
           Build()                       // build it

The Level type also has an exposed (custom) method, Int() int32, which acts as a quick converter from the map value to an int32 value.

Structured metadata

Example output for an event carrying structured metadata:

{
  "timestamp": "2022-07-14T15:48:23.386745176Z",
  "service": "module",
  "module": "service",
  "level": "info",
  "message": "Logger says hi!",
  "metadata": {
    "multi-value": true,
    "nested-metadata": {
      "inner-with-type-field": {
        "ok": true
      }
    },
    "three-numbers": [
      0,
      1,
      2
    ],
    "type": "structured logger"
  }
}

Metadata is added to the event.Event as a map[string]interface{}, which is compatible with JSON output (for the most common data types). This allows a list of key-value pairs where the key is always a string (an identifier) and the value is the data itself, regardless of its type.

The event package also exposes a unique type (event.Field):

// Field type is a generic type to build Event Metadata
type Field map[string]interface{}

The event.Field type exposes three methods to allow fast / easy conversion to structpb.Struct pointers; needed for the protobuf encoders:

// AsMap method returns the Field in its (raw) string-interface{} map format
func (f Field) AsMap() map[string]interface{} {}

// ToStructPB method will convert the metadata in the protobuf Event as a pointer to a
// structpb.Struct, returning this and an error if any.
//
// The metadata (a map[string]interface{}) is converted to JSON (bytes), and this data is
// unmarshalled into a *structpb.Struct object.
func (f Field) ToStructPB() (*structpb.Struct, error) {}

// Encode method is similar to Field.ToStructPB(), but it does not return any errors.
func (f Field) Encode() *structpb.Struct {}
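The JSON round-trip that ToStructPB() is described to perform (metadata → JSON bytes → structured object) can be illustrated with the standard library alone; a plain map stands in for *structpb.Struct below, so this only sketches the conversion path, not the actual protobuf types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Field mirrors the event.Field type: a generic map for event metadata.
type Field map[string]interface{}

// AsMap returns the Field in its raw map format.
func (f Field) AsMap() map[string]interface{} { return f }

// toStruct sketches ToStructPB's approach: marshal the metadata to JSON,
// then unmarshal it into the target structure (a plain map here, instead
// of a *structpb.Struct).
func (f Field) toStruct() (map[string]interface{}, error) {
	b, err := json.Marshal(f)
	if err != nil {
		return nil, err
	}
	out := map[string]interface{}{}
	if err := json.Unmarshal(b, &out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	f := Field{"ok": true, "type": "structured logger"}
	s, err := f.toStruct()
	if err != nil {
		panic(err)
	}
	fmt.Println(s["ok"]) // true
}
```

Note that, as with any JSON round-trip in Go, numeric values come back as float64.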
Callstack in metadata
{
  "timestamp": "2022-07-17T14:59:41.793879193Z",
  "service": "log",
  "level": "error",
  "message": "operation failed",
  "metadata": {
    "callstack": {
      "goroutine-1": {
        "id": "1",
        "stack": [
          {
            "method": "github.com/zalgonoise/zlog/log/trace.(*stacktrace).getCallStack(...)",
            "reference": "/go/src/github.com/zalgonoise/zlog/log/trace/trace.go:54"
          },
          {
            "method": "github.com/zalgonoise/zlog/log/trace.New(0x30?)",
            "reference": "/go/src/github.com/zalgonoise/zlog/log/trace/trace.go:41 +0x7f"
          },
          {
            "method": "github.com/zalgonoise/zlog/log/event.(*EventBuilder).CallStack(0xc000075fc0, 0x30?)",
            "reference": "/go/src/github.com/zalgonoise/zlog/log/event/builder.go:99 +0x6b"
          },
          {
            "method": "main.subOperation(0x0)",
            "reference": "/go/src/github.com/zalgonoise/zlog/examples/logger/callstack_in_metadata/callstack_md.go:31 +0x34b"
          },
          {
            "method": "main.operation(...)",
            "reference": "/go/src/github.com/zalgonoise/zlog/examples/logger/callstack_in_metadata/callstack_md.go:15"
          },
          {
            "method": "main.main()",
            "reference": "/go/src/github.com/zalgonoise/zlog/examples/logger/callstack_in_metadata/callstack_md.go:42 +0x33"
          }
        ],
        "status": "running"
      }
    },
    "error": "input cannot be zero",
    "input": 0
  }
}

See the Callstack in Metadata example

It's also possible to include the current callstack (at the time the log event is built / created) as metadata in the log entry, by calling the event builder's CallStack(all bool) method.

This call will add the map[string]interface{} output of a trace.New(bool) call to the event's metadata element, as an object named callstack. The trace package fetches the call stack from a runtime.Stack([]byte, bool) call, where the buffer size is limited to 1 kilobyte (make([]byte, 1024)).

This package will parse the contents of this call and build a JSON document (as a map[string]interface{}) with key callstack, as a list of objects. These objects will have three elements:

  • an id element, as the numeric identifier for the goroutine in question
  • a status element, like running
  • a stack element, which is a list of objects, each object contains:
    • a method element (package and method / function call)
    • a reference element (path in filesystem, with a pointer to the file and line)
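The underlying runtime.Stack capture can be reproduced with the standard library, using the 1-kilobyte buffer mentioned above; the parsing into the JSON document is the trace package's own logic and is not shown here:

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// captureStack grabs the raw call stack text, as the trace package is
// described to do, using a 1 kilobyte buffer; all=true includes every
// goroutine, not just the current one.
func captureStack(all bool) string {
	buf := make([]byte, 1024)
	n := runtime.Stack(buf, all)
	return string(buf[:n])
}

func main() {
	stack := captureStack(false)
	// the raw text starts with a goroutine header, e.g. "goroutine 1 [running]:"
	fmt.Println(strings.HasPrefix(stack, "goroutine")) // true
}
```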
Multi-everything

See the Multilogger example

In this library, there are many implementations of multiSomething, following the same logic of io.MultiWriter().

In the reference above, the data structure holds a slice of io.Writer interfaces, and implements the same methods as an io.Writer. Its implementation of the Write() method iterates through all configured io.Writer values, calling each one's Write() method accordingly.

It is a very useful concept in the sense that you're able to merge a slice of interfaces while working with a single one. It allows greater maneuverability, with maybe a few downsides or restrictions. It is not a required module but merely a helper, or a wrapper for a simple purpose.
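The pattern is easy to demonstrate with the standard library's io.MultiWriter directly -- a single Write call fans out to every configured writer:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// fanOut writes one message through an io.MultiWriter that merges two
// buffers, and returns what each buffer received.
func fanOut(msg string) (string, string) {
	var a, b bytes.Buffer

	// merge two writers into one; a single Write reaches both
	w := io.MultiWriter(&a, &b)
	fmt.Fprint(w, msg)

	return a.String(), b.String()
}

func main() {
	x, y := fanOut("log line")
	fmt.Println(x == y) // true
}
```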

The actual io.MultiWriter() is used when defining a io.Writer for the logger; to allow setting it up with multiple writers:

func (l *logger) SetOuts(outs ...io.Writer) Logger {
	// (...)

	l.out = io.MultiWriter(newouts...)
	return l
}

from log/logger.go

func (l *logger) AddOuts(outs ...io.Writer) Logger {
	// (...)

	l.out = io.MultiWriter(newouts...)
	return l
}

from log/logger.go

func WithOut(out ...io.Writer) LoggerConfig {
	// (...)

	if len(out) > 1 {
		return &LCOut{
			out: io.MultiWriter(out...),
		}
	}

	// (...)
}

from log/conf.go

...but even beyond this useful implementation, it is mimicked in other parts of the code base:

type LoggerConfig interface {
	Apply(lb *LoggerBuilder)
}

type multiconf struct {
	confs []LoggerConfig
}

func (m multiconf) Apply(lb *LoggerBuilder) {
	for _, c := range m.confs {
		if c != nil {
			c.Apply(lb)
		}
	}
}

func MultiConf(conf ...LoggerConfig) LoggerConfig {
	// (...)
}
type multiLogger struct {
	loggers []Logger
}

// func (l *multiLogger) {every single method in Logger}

func MultiLogger(loggers ...Logger) Logger {
	// (...)
}
func MultiWriteCloser(wc ...io.WriteCloser) io.WriteCloser {
	// (...)
}

type LogClientConfig interface {
	Apply(ls *gRPCLogClientBuilder)
}

type multiconf struct {
	confs []LogClientConfig
}

func (m multiconf) Apply(lb *gRPCLogClientBuilder) {
	for _, c := range m.confs {
		c.Apply(lb)
	}
}

func MultiConf(conf ...LogClientConfig) LogClientConfig {
	// (...)
}
type LogServerConfig interface {
	Apply(ls *gRPCLogServerBuilder)
}

type multiconf struct {
	confs []LogServerConfig
}

func (m multiconf) Apply(lb *gRPCLogServerBuilder) {
	for _, c := range m.confs {
		c.Apply(lb)
	}
}

func MultiConf(conf ...LogServerConfig) LogServerConfig {
	// (...)
}
type multiLogger struct {
	loggers []GRPCLogger
}

// func (l *multiLogger) {every single method in Logger and ChanneledLogger}

func MultiLogger(loggers ...GRPCLogger) GRPCLogger {
	// (...)
}
type multiLogger struct {
	loggers []LogServer
}

// func (l *multiLogger) Serve() {}
// func (l *multiLogger) Stop() {}

func MultiLogger(loggers ...LogServer) LogServer {
	// (...)
}
Different formatters

See the Output formats example

The logger can output events in several different formats, listed below:

Text

The text formatter allows an array of options, with the text formatter sub-package exposing a builder to create one. Below is the list of methods you can chain between text.New() and Build():

Method Description
Time(LogTimestamp) define the timestamp format, based on the exposed list of timestamps, from the table below
LevelFirst() place the log level as the first element in the line
DoubleSpace() place double-tab-spaces between elements (\t\t)
Color() add color to log levels (it is skipped on Windows CLI, as it doesn't support it)
Upper() make log level, prefix and sub-prefix uppercase
NoTimestamp() skip adding the timestamp element
NoHeaders() skip adding the prefix and sub-prefix elements
NoLevel() skip adding the log level element
Log Timestamps

Regarding the timestamp constraints, please note the available timestamps for the text formatter:

Constant Description
LTRFC3339Nano Follows the standard in time.RFC3339Nano
LTRFC3339 Follows the standard in time.RFC3339
LTRFC822Z Follows the standard in time.RFC822Z
LTRubyDate Follows the standard in time.RubyDate
LTUnixNano Displays a Unix timestamp, in nanos
LTUnixMilli Displays a Unix timestamp, in millis
LTUnixMicro Displays a Unix timestamp, in micros

The library also exposes a few initialized preset configurations using text formatters, as in the list below. While these are LoggerConfig presets, they're a wrapper for the same formatter, which is also available by not including the Cfg prefix:

var (
	CfgFormatText                  = WithFormat(text.New().Build())  // default
	CfgTextLongDate                = WithFormat(text.New().Time(text.LTRFC3339).Build())  // with a RFC3339 date format
	CfgTextShortDate               = WithFormat(text.New().Time(text.LTRFC822Z).Build())  // with a RFC822Z date format
	CfgTextRubyDate                = WithFormat(text.New().Time(text.LTRubyDate).Build())  // with a RubyDate date format
	CfgTextDoubleSpace             = WithFormat(text.New().DoubleSpace().Build()) // with double spaces
	CfgTextLevelFirstSpaced        = WithFormat(text.New().DoubleSpace().LevelFirst().Build()) // with level-first and double spaces
	CfgTextLevelFirst              = WithFormat(text.New().LevelFirst().Build()) // with level-first
	CfgTextColorDoubleSpace        = WithFormat(text.New().DoubleSpace().Color().Build()) // with color and double spaces
	CfgTextColorLevelFirstSpaced   = WithFormat(text.New().DoubleSpace().LevelFirst().Color().Build()) // with color, level-first and double spaces
	CfgTextColorLevelFirst         = WithFormat(text.New().LevelFirst().Color().Build()) // with color and level-first
	CfgTextColor                   = WithFormat(text.New().Color().Build()) // with color
	CfgTextOnly                    = WithFormat(text.New().NoHeaders().NoTimestamp().NoLevel().Build()) // with only the text content
	CfgTextNoHeaders               = WithFormat(text.New().NoHeaders().Build()) // without headers
	CfgTextNoTimestamp             = WithFormat(text.New().NoTimestamp().Build()) // without timestamp
	CfgTextColorNoTimestamp        = WithFormat(text.New().NoTimestamp().Color().Build()) // without timestamp
	CfgTextColorUpperNoTimestamp   = WithFormat(text.New().NoTimestamp().Color().Upper().Build()) // without timestamp and uppercase headers
)

JSON

The JSON formatter allows generating JSON events in different ways. These formatters are already initialized as LoggerConfig and LogFormatter objects.

This formatter allows creating JSON events separated by newlines or not, and also to optionally add indentation:

type FmtJSON struct {
	SkipNewline bool
	Indent      bool
}

Also note how the LoggerConfig presets are exposed. While these are a wrapper for the same formatter, they are also available as LogFormatter by not including the Cfg prefix:

var (
	CfgFormatJSON                  = WithFormat(&json.FmtJSON{})  // default
	CfgFormatJSONSkipNewline       = WithFormat(&json.FmtJSON{SkipNewline: true}) // with a skip-newline config
	CfgFormatJSONIndentSkipNewline = WithFormat(&json.FmtJSON{SkipNewline: true, Indent: true}) // with a skip-newline and indentation config
	CfgFormatJSONIndent            = WithFormat(&json.FmtJSON{Indent: true}) // with an indentation config
)
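A minimal formatter honoring these two options can be sketched with encoding/json; the Event struct and Format method below are simplified, hypothetical stand-ins for the library's protobuf-generated event type and its formatter interface:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is a simplified stand-in for the library's event type.
type Event struct {
	Level   string `json:"level"`
	Message string `json:"msg"`
}

// FmtJSON mirrors the two options above.
type FmtJSON struct {
	SkipNewline bool
	Indent      bool
}

// Format renders the event as JSON, optionally indented, and optionally
// terminated by a newline (for line-delimited JSON logs).
func (f FmtJSON) Format(e *Event) ([]byte, error) {
	var b []byte
	var err error
	if f.Indent {
		b, err = json.MarshalIndent(e, "", "  ")
	} else {
		b, err = json.Marshal(e)
	}
	if err != nil {
		return nil, err
	}
	if !f.SkipNewline {
		b = append(b, '\n')
	}
	return b, nil
}

func main() {
	out, _ := FmtJSON{}.Format(&Event{Level: "info", Message: "hi"})
	fmt.Print(string(out)) // {"level":"info","msg":"hi"}
}
```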
BSON

CSV

XML

Protobuf

Gob

Data Stores
Writer interface

Output of the Logger as a Writer example

Not only does the Logger interface use io.Writer to write to its outputs in its Output() method, it also implements io.Writer in its own Write() method, so it can be used directly as one. This gives the logger more flexibility, as it can be broadly integrated with other modules.

The input slice of bytes is decoded, in case it is an encoded event.Event. If the conversion is successful, the input event is logged as-is.

If it is not an event.Event (there will be an error from the Decode() method), then a new message is created where:

  • Log level is set to the default value (info)
  • Prefix, sub-prefix and metadata are added from the logger's configuration, or defaults if they aren't set.
  • the input byte stream is converted to a string, which becomes the log event's message body:
func (l *logger) Write(p []byte) (n int, err error) {
	// decode bytes
	m, err := event.Decode(p)

	// default to printing message as if it was a byte slice payload for the log event body
	if err != nil {
		return l.Output(event.New().
			Level(event.Default_Event_Level).
			Prefix(l.prefix).
			Sub(l.sub).
			Message(string(p)).
			Metadata(l.meta).
			Build())
	}

	// print message
	return l.Output(m)
}
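A standard-library-only sketch of this fallback logic could look like the following, with the event decoding stubbed out (the EVT: prefix is purely illustrative, not the library's actual encoding):

```go
package main

import (
	"errors"
	"fmt"
)

// decodeEvent is a stub for event.Decode: here it only accepts inputs
// with a hypothetical "EVT:" prefix, failing otherwise.
func decodeEvent(p []byte) (string, error) {
	if len(p) > 4 && string(p[:4]) == "EVT:" {
		return string(p[4:]), nil
	}
	return "", errors.New("not an encoded event")
}

// writer mimics the fallback in logger.Write: log the decoded event
// as-is, or wrap raw bytes into a default info-level message.
type writer struct{ lines []string }

func (w *writer) Write(p []byte) (int, error) {
	msg, err := decodeEvent(p)
	if err != nil {
		// fallback: raw bytes become the message body at the default level
		w.lines = append(w.lines, "[info] "+string(p))
		return len(p), nil
	}
	w.lines = append(w.lines, msg)
	return len(p), nil
}

func main() {
	w := &writer{}
	fmt.Fprint(w, "plain text")   // falls back to an info-level message
	w.Write([]byte("EVT:parsed")) // decoded path
	fmt.Println(w.lines[0])       // [info] plain text
}
```

Because the type satisfies io.Writer, it can be handed to anything that expects a writer -- which is exactly what makes the logger broadly integrable.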
Logfile

See the example in examples/datastore/file/

This library also provides a simple Logfile (an actual file in the disk where log entries are written to) configuration with appealing features for simple applications.

type Logfile struct {
	path   string
	file   *os.File
	size   int64
	rotate int
}

The Logfile exposes a few methods that could be helpful to keep the events organized:

Method Description
MaxSize(mb int) *Logfile sets the rotation indicator for the Logfile, i.e., the target size (in MB) at which the logfile should be rotated
Size() (int64, error) a wrapper for an os.File.Stat() followed by fs.FileInfo.Size()
IsTooHeavy() bool verifies the file's size and rotates it if it exceeds the configured maximum (in the Logfile's rotate element)
Write(b []byte) (n int, err error) implements the io.Writer interface, so Logfile can be used as an output for a Logger
Databases

It's perfectly possible to write log events to a database instead of the terminal, a buffer, or a file. This makes it more reliable for larger-scale operations, or for the long run.

This library leverages an ORM to handle interactions with most of the databases, for the sake of simplicity and streamlined testing -- tests should focus on using a database as a writer, not on re-testing database connections, configurations, and so on. This is why an ORM is being used; this library uses GORM for this purpose.

Databases are not configured on loggers as an io.Writer interface using the WithOut() method, but with their dedicated WithDatabase() method, which takes an io.WriteCloser interface.

To create this io.WriteCloser, either use the database package's appropriate New() function, or its package function for the same purpose, WithXxx().

Note the available database writers, and their features:

SQLite

See the example in examples/datastore/db/sqlite/

Symbol Type Description
New(path string) (sqldb io.WriteCloser, err error) function takes in a path to a .db file and creates a new instance of a SQLite3 object; returning an io.WriteCloser interface and an error
*SQLite.Create(msg ...*event.Event) error method registers any number of event.Event in the SQLite database, returning an error (exposed method, but mostly used internally)
*SQLite.Write(p []byte) (n int, err error) method implements the io.Writer interface, so SQLite DBs can be used as a Logger's writer
*SQLite.Close() error method added for compatibility with DBs that require it
WithSQLite(path string) log.LoggerConfig function takes in a path to a .db file, and returns a LoggerConfig so that this type of writer is set in a Logger
MySQL

See the example in examples/datastore/db/mysql/

Using this package will require the following environment variables to be set:

Variable Type Description
MYSQL_USER string username for the MySQL database connection
MYSQL_PASSWORD string password for the MySQL database connection
Symbol Type Description
New(address, database string) (sqldb io.WriteCloser, err error) function takes in a MySQL DB address and database name and creates a new instance of a MySQL object; returning an io.WriteCloser interface and an error
*MySQL.Create(msg ...*event.Event) error method registers any number of event.Event in the MySQL database, returning an error (exposed method, but mostly used internally)
*MySQL.Write(p []byte) (n int, err error) method implements the io.Writer interface, so MySQL DBs can be used as a Logger's writer
*MySQL.Close() error method added for compatibility with DBs that require it
WithMySQL(addr, database string) log.LoggerConfig function takes in an address to a MySQL server and a database name, and returns a LoggerConfig so that this type of writer is set in a Logger
PostgreSQL

See the example in examples/datastore/db/postgres/

Using this package will require the following environment variables to be set:

Variable Type Description
POSTGRES_USER string username for the Postgres database connection
POSTGRES_PASSWORD string password for the Postgres database connection
Symbol Type Description
New(address, port, database string) (sqldb io.WriteCloser, err error) function takes in a Postgres DB address, port and database name and creates a new instance of a Postgres object; returning an io.WriteCloser interface and an error
*Postgres.Create(msg ...*event.Event) error method registers any number of event.Event in the Postgres database, returning an error (exposed method, but mostly used internally)
*Postgres.Write(p []byte) (n int, err error) method implements the io.Writer interface, so Postgres DBs can be used as a Logger's writer
*Postgres.Close() error method added for compatibility with DBs that require it
WithPostgres(addr, port, database string) log.LoggerConfig function takes in an address and port to a Postgres server and a database name, and returns a LoggerConfig so that this type of writer is set in a Logger
MongoDB

See the example in examples/datastore/db/mongo/

Using this package will require the following environment variables to be set:

Variable Type Description
MONGO_USER string username for the Mongo database connection
MONGO_PASSWORD string password for the Mongo database connection
Symbol Type Description
New(address, database, collection string) (io.WriteCloser, error) function takes in a MongoDB address, database and collection names and creates a new instance of a Mongo object; returning an io.WriteCloser interface and an error
*Mongo.Create(msg ...*event.Event) error method registers any number of event.Event in the Mongo database, returning an error (exposed method, but mostly used internally)
*Mongo.Write(p []byte) (n int, err error) method implements the io.Writer interface, so Mongo DBs can be used as a Logger's writer
*Mongo.Close() error method used to terminate the live connection to the MongoDB instance
WithMongo(addr, database, collection string) log.LoggerConfig function takes in the address to the Mongo server, and a database and collection name; and returns a LoggerConfig so that this type of writer is set in a Logger
gRPC

To provide a solution for loggers that write over the wire, this library implements a gRPC log server with a number of useful features, as well as a gRPC log client which acts as a regular logger, but writes the log messages to a gRPC log server.

The choice for gRPC was simple. The framework is very solid and provides both fast and secure transmission of messages over a network. That's all that's needed, right? Nope! There are also protocol buffers, which helped shape the structure of this library in a more organized way (in my opinion).

Originally, the plan was to create the event data structures in Go (manually), and from that point integrate the logger logic as an HTTP writer or something -- note this is already possible, as the Logger interface implements io.Writer. But the problem there would be repetition in defining the event data structure: if gRPC were in fact the choice, there would be one data structure for Go and another for gRPC (with generated Go code), for the same thing.

So, easy-peasy: scratch off the Go data structure and keep the protocol buffers, even for (local) events and loggers. This worked great, it was easy enough to switch over, and the logic remained kinda the same way, in the end.

The added benefit is that gRPC and protobuf generate this code (from proto/event.proto and proto/service.proto to log/event/event.pb.go and proto/service/service.pb.go, respectively), which is a huge boost to productivity.

An added bonus is the very lightweight encoded format of the exchanged messages, as protocol buffer messages can be converted into byte slices, too.

Lastly, there are no network-logic implementation headaches, as creating a gRPC server and client is super smooth -- it only takes a few hours of reading documentation and examples. The benefit is being able to very quickly push out a server-client solution for your app, with zero effort spent on the engine transmitting those messages -- only on what you actually do with them.

On this note, it's also possible to easily implement a log client in a different programming language supported by gRPC. This means that if you really love this library, then you could create a Java-based gRPC log client for your Android app, while you run your gRPC log server in Go, in your big-shot datacenter.

This section will cover features that you will find in both server and client implementations.

gRPC Log Service

The service, defined in proto/service/service.go, is the implementation of the log server core logic, from the gRPC generated code.

This file will have the implementation of the LogServer struct and its Log() method and LogStream() method -- on how the server handles the messages exchanged in either configuration, as a unary RPC logger or a stream RPC logger.

It also contains additional methods used within the core logic of the gRPC Log Server, such as its Done() and Stop() methods.

gRPC Log Server

See the examples in examples/grpc/simple_unary_client_server/server/ and in examples/grpc/simple_stream_client_server/server/

The LogServer interface is found in grpc/server/server.go, which defines how the server is initialized, how it can be configured, and other features. It should be perceived as a simple wrapper for setting up a gRPC server using the logic in proto/service/service.go, with added features to make it even more useful:

type LogServer interface {
	Serve()
	Stop()
	Channels() (logCh, logSvCh chan *event.Event, errCh chan error)
}

A new Log Server is created with the public function New(...LogServerConfig), which parses any number of configurations (covered below). The resulting GRPCLogServer pointer will expose the following methods:

Method Description
Serve() a long-running, blocking function which will launch the gRPC server
Stop() a wrapper for the routine involved to (gracefully) stop this gRPC Log Server.
Channels() (logCh, logSvCh chan *event.Event, errCh chan error) returns channels for a Log Server's I/O. It returns a channel for log messages (for actual log event writes), a channel for the service logger (the server's own logger), and an error channel to collect Log Server errors from.
Log Server Configs

The Log Server can be configured in a number of ways, like specifying exposed address, the output logger for your events, a service logger for the Log Server activity (yes, a logger for your logger), added metadata like timing, and of course TLS.

Here is the list of exposed functions to allow a granular configuration of your Log Server:

Function Description
WithAddr(string) takes one address for the gRPC Log Server to listen to. Defaults to localhost:9099
WithLogger(...log.Logger) defines this gRPC Log Server's logger(s)
WithServiceLogger(...log.Logger) defines this gRPC Log Server's service logger(s) (for the gRPC Log Server activity)
WithServiceLoggerV(...log.Logger) defines this gRPC Log Server's service logger(s) (for the gRPC Log Server activity) in verbose mode -- by adding an interceptor that checks each transaction, if OK or not, and for errors (added overhead)
WithTiming() sets a gRPC Log Server's service logger to measure the time taken when executing RPCs, as added metadata (added overhead)
WithGRPCOpts(...grpc.ServerOption) allows passing in any number of grpc.ServerOption, which are added to the gRPC Log Server
WithTLS(certPath, keyPath string, caPath ...string) allows configuring TLS / mTLS for a gRPC Log Server. If only two parameters are passed (certPath, keyPath), it will run its TLS flow. If three parameters are set (certPath, keyPath, caPath), it will run its mTLS flow.

Lastly, the library also exposes some preset configurations:

var (
	defaultConfig LogServerConfig = &multiconf{
		confs: []LogServerConfig{
			WithAddr(""),
			WithLogger(),
			WithServiceLogger(),
		},
	}

	DefaultCfg        = LogServerConfigs[0] // default LogServerConfig
	ServiceLogDefault = LogServerConfigs[1] // default logger as service logger
	ServiceLogNil     = LogServerConfigs[2] // nil-service-logger LogServerConfig
	ServiceLogColor   = LogServerConfigs[3] // colored, level-first, service logger
	ServiceLogJSON    = LogServerConfigs[4] // JSON service logger
	LoggerDefault     = LogServerConfigs[5] // default logger
	LoggerColor       = LogServerConfigs[6] // colored, level-first logger
	LoggerJSON        = LogServerConfigs[7] // JSON logger
)
gRPC Log Client

See the examples in examples/grpc/simple_unary_client_server/client/ and in examples/grpc/simple_stream_client_server/client/

There is a gRPC Log Client implementation in Go, for the sake of providing an out-of-the-box solution for communicating with the gRPC Log Server; although this can simply serve as a reference for you to implement your own gRPC Log Client -- in any of the gRPC-supported languages.

This client will act just like a regular (channeled) Logger interface, with added features (and configurations):

// import (
// 	"github.com/zalgonoise/zlog/log"
// 	"github.com/zalgonoise/zlog/log/logch"
// )

type GRPCLogger interface {
	log.Logger
	logch.ChanneledLogger
}

Creating a new gRPC Log Client depends on whether you're setting up a Unary gRPC logger or a Stream gRPC one. The New(...LogClientConfig) function will serve as a factory, where depending on the configuration it will either spawn a Unary gRPC logger or a Stream gRPC logger. Similar to other modules, the underlying builder pattern as you create a GRPCLogger will apply the default configuration before overwriting it with the user's configs.

This client will expose the public methods as per the interfaces it contains, and nothing else. There are a few things to keep in mind:

Method Description
Close() iterates through all (alive) connections in the ConnAddr map and closes them. After doing so, it sends the done signal to its channel, which causes all open streams to cancel their context and exit gracefully
Output(*event.Event) (int, error) pushes the incoming Log Message to the message channel, which is sent to a gRPC Log Server, either via a Unary or Stream RPC. Note that it will always return 1, nil.
SetOuts(...io.Writer) log.Logger for compatibility with the Logger interface, this method must take in io.Writers. However, this is not how the gRPC Log Client will work to register messages. Instead, the input io.Writer needs to be of type ConnAddr. More info on this type below. This method overwrites the configured addresses.
AddOuts(...io.Writer) log.Logger for compatibility with the Logger interface, this method must take in io.Writers. However, this is not how the gRPC Log Client will work to register messages. Instead, the input io.Writer needs to be of type ConnAddr. More info on this type below. This method adds addresses to the configured ones.
Write([]byte) (int, error) consider that Write() will return a call of Output(). This means that you should expect it to return 1, nil.
IsSkipExit() bool returns a boolean on whether the gRPC Log Client's service logger is set to skip os.Exit(1) or panic() calls.
Log Client Configs

This Log Client can be configured in a number of ways, like specifying the target server address, the output logger for your events, a service logger for the gRPC / Log Server activity (yes, a logger for your logger), added metadata like timing, and of course TLS.

Here is the list of exposed functions to allow a granular configuration of your Log Client:

Function Description
WithAddr(...string) takes in any number of addresses, and creates a connections map with them, for the gRPC client to connect to the server. Defaults to localhost:9099
StreamRPC() sets this gRPC Log Client type as Stream RPC
UnaryRPC() sets this gRPC Log Client type as Unary RPC
WithLogger(...log.Logger) defines this gRPC Log Client's service logger. This logger will register the gRPC Client transactions; and not the log messages it is handling.
WithLoggerV(...log.Logger) defines this gRPC Log Client's service logger, in verbose mode. This logger will register the gRPC Client transactions; and not the log messages it is handling. (added overhead)
WithBackoff(time.Duration, BackoffFunc) takes in a time.Duration value to set as the exponential backoff module's retry deadline, and a BackoffFunc to customize the backoff pattern. Backoff is further described in the next section.
WithTiming() sets a gRPC Log Client's service logger to measure the time taken when executing RPCs. It is only an option, and is directly tied to the configured service logger. (added overhead)
WithGRPCOpts(...grpc.DialOption) allows passing in any number of grpc.DialOption, which are added to the gRPC Log Client.
Insecure() allows creating an insecure gRPC connection (maybe for testing purposes) by adding a new option for insecure transport credentials (no TLS / mTLS).
WithTLS(string, ...string) allows configuring TLS / mTLS for a gRPC Log Client. If only one parameter is passed (caPath), it will run its TLS flow. If three parameters are set (caPath, certPath, keyPath), it will run its mTLS flow.

Lastly, the library also exposes some preset configurations:

var (
	defaultConfig LogClientConfig = &multiconf{
		confs: []LogClientConfig{
			WithAddr(""),
			WithGRPCOpts(),
			Insecure(),
			WithLogger(),
			WithBackoff(0, BackoffExponential()),
		},
	}

	DefaultCfg     = LogClientConfigs[0] // default LogClientConfig
	BackoffFiveMin = LogClientConfigs[1] // backoff config with 5-minute deadline
	BackoffHalfMin = LogClientConfigs[2] // backoff config with 30-second deadline
)
Log Client Backoff

There is a backoff module available, to retry transactions in case they fail in any (perceivable) way. While optional, it was implemented considering that connections over a network may fail.

This package exposes the following types to serve as a core logic for any backoff implementation:

type BackoffFunc func(uint) time.Duration

type Backoff struct {
	counter     uint
	max         time.Duration
	wait        time.Duration
	call        interface{}
	msg         []*event.Event
	backoffFunc BackoffFunc
	locked      bool
	mu          sync.Mutex
}

BackoffFunc takes in an (unsigned) integer representing the attempt counter, and returns a time.Duration value for how long the module should wait before the next attempt / retry.

Backoff struct defines the elements of a backoff module, which is configured by setting a BackoffFunc to define the interval between each attempt.

Backoff will also try to act as a message buffer in case the server connection cannot be established -- as it will attempt to flush these records to the server as soon as connected.

Implementing backoff logic is as simple as writing a function that returns a function with the BackoffFunc signature. The parameters your function takes, and how it arrives at the returned time.Duration value, are completely up to you.

Two examples below, one for NoBackoff() and one for BackoffExponential():

func NoBackoff() BackoffFunc {
	return func(attempt uint) time.Duration {
		return 0
	}
}

func BackoffExponential() BackoffFunc {
	return func(attempt uint) time.Duration {
		return time.Millisecond * time.Duration(
			int64(math.Pow(2, float64(attempt)))+rand.New(
				rand.NewSource(time.Now().UnixNano())).Int63n(1000),
		)
	}
}

With this in mind, regardless of the exposed BackoffFunc presets, you may pass a deadline and your own BackoffFunc to WithBackoff() as you create your gRPC Log Client.

Here is a list of the preset BackoffFunc factories, available in this library:

Function Description
NoBackoff() returns a BackoffFunc that overrides the backoff module by setting a zero wait-between duration. This is detected as a sign that the module should be overridden.
BackoffLinear(time.Duration) returns a BackoffFunc that sets a linear backoff according to the input duration. If the input duration is 0, then the default wait-between time is set (3 seconds).
BackoffIncremental(time.Duration) returns a BackoffFunc that calculates exponential backoff according to a scalar method
BackoffExponential() returns a BackoffFunc that calculates exponential backoff according to its standard

Creating a new Backoff instance can be done manually, although it's not necessary considering it is embedded in the gRPC Log Client's logic:

// import "github.com/zalgonoise/zlog/grpc/client"

b := client.NewBackoff()
b.BackoffFunc(
	// your BackoffFunc here
)

From this point onwards, the Backoff module is called on certain errors. For example:

From grpc/client/client.go

func (c *GRPCLogClient) connect() error {
	// (...)

		// handle dial errors
		if err != nil {
			retryErr := c.backoff.UnaryBackoffHandler(err, c.svcLogger)
			if errors.Is(retryErr, ErrBackoffLocked) {
				return retryErr
			} else if errors.Is(retryErr, ErrFailedConn) {
				return retryErr
			} else {
				// (...)
			}
		// (...)
		}
	// (...)
}

While this Backoff logic can be taken as a reference for different implementations, it is very specific to the gRPC Log Client's logic, considering its init() and Register() methods, which are hooks allowing it to work with either Unary or Stream gRPC Log Clients.

Connection Addresses

Connection Addresses, or the ConnAddr type, is a custom type for map[string]*grpc.ClientConn that also exposes a few handy methods for this particular application.

Considering that the Logger interface works with io.Writer interfaces to write events, it was becoming pretty obvious that the gRPC client / server logic would need to either discard Write(), AddOuts() and SetOuts() while adding different methods (or configs) in replacement... or, why not keep working with an io.Writer interface?

The ConnAddr type implements this and other useful methods which will allow the gRPC client and server logic to leverage the same methods as in the Logger interface for the purposes that it needs. This, similar to the Logger interface, allows one gRPC Log Client to connect to multiple gRPC Log Servers at the same time, writing the same events to different endpoints (as needed).

There is also a careful verification that the input io.Writer interface is actually of the ConnAddr type, for example in the client's SetOuts() method:

func (c *GRPCLogClient) SetOuts(outs ...io.Writer) log.Logger {
	// (...)
	for _, remote := range outs {
		// ensure the input writer is not nil
		if remote == nil {
			continue
		}

		// ensure the input writer is of type *address.ConnAddr
		// if not, skip this writer and register this event
		if r, ok := remote.(*address.ConnAddr); !ok {		
			// (...)
		} else {
			o = append(o, r.Keys()...)
		}
	}
	// (...)
}

The core of the ConnAddr type stores addresses prior to verifying them. As the gRPC Log Client starts running, it iterates through all addresses in the map and connects to them (thus the map of strings to grpc.ClientConn pointers).

A ConnAddr is initialized with any number of parameters (at least one, otherwise it returns nil):

// import "github.com/zalgonoise/zlog/grpc/address"

a := address.New(
	"localhost:9099",
	"mycoolserver.io:9099",
	// more addresses
)

This type exposes a few methods that may be useful; although keep in mind that all of this logic is already embedded in the gRPC client and server implementations (in their WithAddr() configs and writer-related methods like AddOuts() and SetOuts()):

Method Description
AsMap() map[string]*grpc.ClientConn returns a ConnAddr type object in a map[string]*grpc.ClientConn format
Add(...string) allocates the input strings as entries in the map, with initialized pointers to grpc.ClientConn
Keys() []string returns a ConnAddr type object's keys (its addresses) in a slice of strings
Get(string) *grpc.ClientConn returns the pointer to a grpc.ClientConn, as referenced in the input address k
Set(string, *grpc.ClientConn) allocates the input connection to the input string, within the ConnAddr type object (overwriting it if existing)
Len() int returns the size of the ConnAddr type object
Reset() overwrites the existing ConnAddr type object with a new, empty one.
Unset(...string) removes the input addr strings from the ConnAddr type object, if existing
Write(p []byte) (n int, err error) an implementation of the io.Writer interface, so that the ConnAddr type can be used in a gRPC Log Client's SetOuts() and AddOuts() methods. These need to conform to the Logger interface, which implements the same methods. For the same layer of compatibility to be possible in a gRPC Log Client (which writes its log entries to a remote server), these methods are used to alter the existing connections, instead of dismissing this part of the implementation altogether. This is not a regular io.Writer interface.

Integration

This section will cover general procedures when performing changes, and general concerns when integrating a certain package or module to work with this logger. Generally, there are interfaces defined so you can adapt a feature to your own needs (like the formatters), but things may not be as clear for other parts of the code base.

Protobuf code generation

When making changes to the .proto files within the proto/ directory, such as when adding new log levels or a new element to the Event data structure, you will need to run the appropriate code generation command.

The commands referenced below refer to Go's GitHub repo for protoc-gen-go, and not the one from the instructions in the gRPC Quick Start guide. It is important to review this document, but the decision to go with Go's protoc release is only due to compatibility with (built-in) gRPC code generation. This may change in the future, adopting Google's official release versions.

Then, you can regenerate the code by running a command appropriate to your target:

Target Command
proto/event.proto protoc --go_out=. proto/event.proto
proto/service.proto protoc --go_out=plugins=grpc:proto proto/service.proto

Example of editing the proto/event.proto file

  1. Make changes to the .proto file, for example changing the default prefix or adding a new field:
message Event {
    optional google.protobuf.Timestamp time = 1;
    optional string prefix = 2 [ default = "system" ];
    optional string sub = 3;
    optional Level level = 4 [ default = info ];
    required string msg = 5;
    optional google.protobuf.Struct meta = 6;
    required bool breaking = 7;
}
  2. Execute the appropriate protoc command to regenerate the code:
protoc  --go_out=.  proto/event.proto
  3. Review changes in the diff. These changes will be reflected in the target .pb.go file, log/event/event.pb.go:
// (...)
const (
	Default_Event_Prefix = string("system")
	Default_Event_Level  = Level_info
)
// (...)
func (x *Event) GetBreaking() bool {
	if x != nil && x.Breaking != nil {
		return *x.Breaking
	}
	return false
}
  4. Use the new logic within the context of your application:
// (...)
if event.GetBreaking() {
	// handle the special type of event
}

NOTE: When performing changes to proto/service.proto and regenerating the code, you will notice that the proto/service/service.pb.go file will have an invalid import for event.

To correct this, simply change this import from:

event "./log/event"

...to:

event "github.com/zalgonoise/zlog/log/event"

For your convenience, the shell script in proto/gen_pbgo.sh is a simple wrapper for these actions.


Adding your own configuration settings

Most of the modules will be configured with a function that takes a variadic parameter for its configuration, which is an interface -- meaning that you can implement it too. This section will cover an example of both adding a new formatter and a new Logger configuration.

Example of adding a new Formatter

  1. Inspect the flow of configuring a formatter for a Logger. This is done in log/format.go by calling the WithFormat(LogFormatter) function, which returns a LoggerConfig. This means that to add a new formatter, a new LogFormatter type needs to be created, so it can be passed into this function when creating a Logger.

  2. Create your own type and implement the Format(*event.Event) ([]byte, error) method. This is an example with a compact text formatter:

package compact

import (
	"strings"
	"time"

	"github.com/zalgonoise/zlog/log/event"
)

type Text struct {}

func (Text) Format(log *event.Event) (buf []byte, err error) {
	// uses hyphen as separator, no square brackets, simple info only
	var sb strings.Builder

	sb.WriteString(log.GetTime().AsTime().Format(time.RFC3339))
	sb.WriteString(" - ")
	sb.WriteString(log.GetLevel().String())
	sb.WriteString(" - ")
	sb.WriteString(log.GetMsg())
	sb.WriteString("\n")

	buf = []byte(sb.String())
	return
}
  3. When configuring the Logger in your application, import your package and use it to configure the Logger:
package main 

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"

	"github.com/myuser/myrepo/mypkg/compact" // your package here
)

func main() {
	logger := log.New(
		log.WithFormat(compact.Text{}),
	)

	logger.Info("hi from custom formatter!")
}
  4. Test it out:
2022-08-06T18:02:34Z - info - hi from custom formatter!

Example of adding a new LoggerConfig

  1. Inspect the flow of configuring a Logger. This is done in log/conf.go by calling a function implementing the LoggerConfig interface, which applies your changes (from the builder object) to the resulting object. This means that to add a new configuration, a new LoggerConfig type needs to be created, so it can be passed as a parameter when creating the Logger.

  2. Create your own type and implement the Apply(lb *LoggerBuilder) method, and optionally a helper function. This is an example with a combo of prefix and subprefix config:

package id

import (
	"github.com/zalgonoise/zlog/log"
)

type id struct {
	prefix string
	sub    string
}

func (i *id) Apply(lb *log.LoggerBuilder) {
	// apply settings to prefix and subprefix
	lb.Prefix = i.prefix
	lb.Sub = i.sub
}

func WithID(prefix, sub string) log.LoggerConfig {
	return &id{
		prefix: prefix,
		sub:    sub,
	}
}

  3. When configuring the Logger in your application, import your package and use it to configure the Logger:
package main 

import (
	"github.com/zalgonoise/zlog/log"
	"github.com/zalgonoise/zlog/log/event"

	"github.com/myuser/myrepo/mypkg/id" // your package here
)

func main() {
	logger := log.New(
		id.WithID("http", "handlers"),
	)

	logger.Info("hi from custom formatter logger!")
}
  4. Test it out:
[info]  [2022-08-06T18:25:07.581200505Z]        [http]  [handlers]      hi from custom formatter logger!

Adding methods to a Builder pattern

Currently, the EventBuilder is the only data structure with a true builder pattern. This is to allow a message to be modified on each particular field independently, while wrapping up all of these modifications with the final Build() method, that spits out the actual Event object.

The methods in the EventBuilder are chained together and closed with the Build() call. This means that new methods can be added to the EventBuilder; to be chained-in on creation.

NOTE: this is a very specific case where you're creating a wrapper for the EventBuilder, so that you don't have to define all the existing methods again. This particular module (the log events) is based on the autogenerated data structure from the corresponding .proto file.

Example of adding a new method to the Event building process

  1. Inspect the flow of creating an Event. This is done in log/event/builder.go by calling a function to initialize the builder object, and then chaining any number of methods (at least Message()) until it is closed with the Build() call, which outputs an event (with timestamp). This is a very strict and linear pattern, but you're able to create your own custom chains by wrapping the builder object. The only concern is, at a certain point, returning the (original) EventBuilder type to regain access to its helper methods.

  2. Create your own builder type and implement the chaining methods you need, and optionally a helper function to initialize this object. The new builder type will only be a wrapper for the original one. This is an example with a combo of prefix and subprefix in a single method:

package newbuilder

import "github.com/zalgonoise/zlog/log/event"

type NewEventBuilder struct {
	*event.EventBuilder
}

func NewB() *NewEventBuilder { // initializing defaults just like the original New()
	var (
		prefix   string = event.Default_Event_Prefix
		sub      string
		level    event.Level = event.Default_Event_Level
		metadata map[string]interface{}
	)

	return &NewEventBuilder{
		EventBuilder: &event.EventBuilder{
			BPrefix:   &prefix,
			BSub:      &sub,
			BLevel:    &level,
			BMetadata: &metadata,
		},
	}
}

func (e *NewEventBuilder) PrefixSub(prefix, sub string) *event.EventBuilder {
	*e.BPrefix = prefix
	*e.BSub = sub

	return e.EventBuilder
}
  3. In your application, import your package and use it to build the events you log:
package main

import (
	"github.com/zalgonoise/zlog/log"

	"github.com/myuser/myrepo/mypkg/newbuilder" // your package here
)

func main() {
	event := newbuilder.NewB(). // initialize new builder
				PrefixSub("http", "handlers"). // use custom methods, returning original builder
				Message("hi from a custom event message!"). // use its methods
				Build() // output a valid message to log

	log.Log(event)
}
  4. Test it out:
[info]  [2022-08-07T18:53:16.683574327Z]        [http]  [handlers]      hi from a custom event message!

Adding interceptors to gRPC server / client

gRPC allows very granular middleware to be added to either client or server implementations, to suit even peculiar needs. An interceptor is simply a means to execute a function before (or after) a message is sent or received by either end.

To add multiple interceptors, this library uses the go-grpc-middleware library, which allows easy chaining of several interceptors instead of being limited to just one. That library also provides a number of useful examples that may suit your logger's needs; in essence, it exposes an interceptor capable of registering (and calling) the other interceptors you configure.
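The chaining idea itself is simple: each interceptor wraps the next handler, so a chain reduces to folding a slice of interceptors around the final handler. The sketch below uses simplified function types (not the real grpc signatures) to show the mechanism:

```go
// Dependency-free sketch of interceptor chaining: the types are
// simplified stand-ins for the real grpc handler/interceptor signatures.
package main

import "fmt"

type handler func(req string) string
type interceptor func(req string, next handler) string

// chain folds the interceptors right-to-left, so the first
// interceptor in the list runs outermost (first).
func chain(final handler, ics ...interceptor) handler {
	h := final
	for i := len(ics) - 1; i >= 0; i-- {
		ic, next := ics[i], h
		h = func(req string) string { return ic(req, next) }
	}
	return h
}

// tagger builds an interceptor that tags the request before passing it on
func tagger(name string) interceptor {
	return func(req string, next handler) string {
		return next(name + ">" + req)
	}
}

func main() {
	h := chain(
		func(req string) string { return "handled:" + req },
		tagger("a"), tagger("b"),
	)
	fmt.Println(h("msg")) // → handled:b>a>msg
}
```

go-grpc-middleware's chaining helpers follow this same shape, with the real unary/stream signatures shown in the steps below.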

Example of adding a simple server interceptor to add a UUID

  1. Inspect the flow of an existing interceptor, such as the server logging interceptor. This consists of implementing two wrapper functions that return functions with the following signatures:

| Purpose | Signature |
|---|---|
| Unary RPCs | `func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error)` |
| Stream RPCs | `func(srv interface{}, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error` |

These functions are executed on the corresponding action; take as an example these stripped versions (no actions taken):

Unary RPC -- stripped

func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	res, err := handler(ctx, req)

	return res, err
}

Stream RPC -- stripped

func(srv interface{}, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
	err := handler(srv, stream)

	return err
}

NOTE: while the approach for Unary RPCs is usually very simple (one message, one interceptor call), the stream RPCs are persistent connections. This means that if you want to handle the underlying messages being exchanged, you need to implement a wrapper for the stream, as shown below in the context of this example.

  2. Create your own interceptor(s) (for your use case's type of RPCs) by wrapping the functions in the table above. This example adds a preset ID to log events as the server replies back to the client. This is already implemented by default, but it serves as an illustration. Both Unary and Stream RPC interceptors are implemented; note how the Stream interceptor wraps the stream handler:

// UnaryServerIDModifier sets a preset UUID to log responses
func UnaryServerIDModifier(id string) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		// regular flow
		res, err := handler(ctx, req)
		
		// modify response's ID
		res.(*pb.LogResponse).ReqID = id

		// continue as normal
		return res, err
	}
}

// StreamServerIDModifier sets a preset UUID to log responses
func StreamServerIDModifier(id string) grpc.StreamServerInterceptor {
	return func(srv interface{}, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
		// wrap the stream, keeping a reference to the original
		wStream := modIDStream{
			stream: stream,
			id:     id,
		}

		// handle the messages using the wrapped stream
		err := handler(srv, wStream)
		return err
	}
}

// modIDStream is a wrapper for grpc.ServerStream that replaces
// the response's ID with the preset `id` string value
type modIDStream struct {
	stream grpc.ServerStream
	id     string
}

// SetHeader method is a wrapper for the grpc.ServerStream.SetHeader() method
func (w modIDStream) SetHeader(m metadata.MD) error { return w.stream.SetHeader(m) }

// SendHeader method is a wrapper for the grpc.ServerStream.SendHeader() method
func (w modIDStream) SendHeader(m metadata.MD) error { return w.stream.SendHeader(m) }

// SetTrailer method is a wrapper for the grpc.ServerStream.SetTrailer() method
func (w modIDStream) SetTrailer(m metadata.MD) { w.stream.SetTrailer(m) }

// Context method is a wrapper for the grpc.ServerStream.Context() method
func (w modIDStream) Context() context.Context { return w.stream.Context() }

// SendMsg method is a wrapper for the grpc.ServerStream.SendMsg(m) method, in which
// the preset ID is set on each outgoing response
func (w modIDStream) SendMsg(m interface{}) error {
	// set the custom ID
	m.(*pb.LogResponse).ReqID = w.id

	// continue as normal
	return w.stream.SendMsg(m)
}

// RecvMsg method is a wrapper for the grpc.ServerStream.RecvMsg(m) method
func (w modIDStream) RecvMsg(m interface{}) error { return w.stream.RecvMsg(m) }
  3. When configuring the gRPC Log Server in your application, import your package and use it to register the interceptor:
package main

import (
	"github.com/zalgonoise/zlog/log/grpc/server"

	"github.com/myuser/myrepo/mypkg/fixedid" // your package here
)

func main() {
	grpcLogger := server.New(
		server.WithAddr("example.com:9099"),
		server.WithGRPCOpts(),
		server.WithStreamInterceptor(
			"fixed-id",
			fixedid.StreamServerIDModifier("staging"),
		),
	)
	grpcLogger.Serve()
}
  4. Launch a client and send some messages to the server. The responses received by the client should contain the fixed ID in their metadata.

Benchmarks

Speed and performance are measured with benchmark tests, where different approaches across the many loggers are compared so it is clear where the library excels and where it lacks. This is done with multiple configurations of individual features, as well as comparisons with other Go loggers.

Benchmark tests are added and posted under the /benchmark directory, so that their imports stay separate from the rest of the library. If this approach still bloats the module, they can be moved to an entirely separate branch at any point.

As changes are made, the benchmarks' raw results are posted in the /benchmark/README.md file.


Contributing

To contribute to this repository, open an issue in GitHub's Issues section to kick off a discussion about your proposal. Proposals are best when complete with a (brief) description, current and expected behavior, a proposed solution (if any), and your view of the impact. In time there will be issue presets for bugs, feature requests, and anything in between.
