codec

package module
v1.0.0-alpha.1
Published: Feb 7, 2023 License: MIT Imports: 15 Imported by: 0

README

Large Payload Service

This repository contains an implementation of the Temporal Payload Codec. Temporal limits payload size to 4MB. This Payload Codec allows you to use payloads larger than this limit by transparently storing and retrieving these payloads to and from cloud storage.

In essence, the Large Payload Service implements a content-addressed storage (CAS) system for Temporal workflow payloads.

API

Architecturally, the Large Payload Service is an HTTP server offering the following API:

  • /v2/health/head: Health check endpoint using a HEAD request.

    Returns the HTTP response status code 200 if the service is running correctly. Otherwise, an error code is returned.

  • /v2/blobs/put: Upload endpoint expecting a PUT request.

    Required headers:

    • Content-Type set to application/octet-stream.
    • Content-Length set to the length of the payload data in bytes.
    • X-Temporal-Metadata set to the base64 encoded JSON of the Temporal Metadata.

    Query parameters:

    • namespace The Temporal namespace the client using the codec is connected to.

      The namespace forms part of the key for retrieval of the payload.

    • digest Specifies the checksum over the payload data using the format sha256:<sha256_hex_encoded_value>.

    The key returned by the put request needs to be stored and used for later retrieval of the payload. How the data is arranged in the backing data store is up to the Large Payload Server and the backend driver. The server will, however, honor the value of remote-codec/key-prefix in the Temporal Metadata passed via the X-Temporal-Metadata header, using the specified string as a prefix in the storage path.

  • /v2/blobs/get: Download endpoint expecting a GET request.

    Required headers:

    • Content-Type set to application/octet-stream.
    • X-Payload-Expected-Content-Length set to the expected size of the payload data in bytes.

    Query parameters:

    • key specifying the key for the payload to retrieve.

Usage

This repository does not provide any prebuilt binaries or images. The recommended approach is to build your own binary and image.

To programmatically build the Large Payload Service, you need to instantiate the driver and then pass it to server.NewHttpHandler. For example, to create a Large Payload Service instance backed by an S3 bucket, you would do something along these lines:

package main

import (
    "context"
    "log"
    "net/http"
    "os"

    "github.com/DataDog/temporal-large-payload-codec/server"
    "github.com/DataDog/temporal-large-payload-codec/server/storage/s3"
    "github.com/aws/aws-sdk-go-v2/config"
)

func main() {
    region, set := os.LookupEnv("AWS_REGION")
    if !set {
        log.Fatal("AWS_REGION environment variable not set")
    }
    bucket, set := os.LookupEnv("BUCKET")
    if !set {
        log.Fatal("BUCKET environment variable not set")
    }

    cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion(region))
    if err != nil {
        log.Fatal(err)
    }

    driver := s3.New(&s3.Config{
        Config: cfg,
        Bucket: bucket,
    })

    mux := http.NewServeMux()
    mux.Handle("/", server.NewHttpHandler(driver))

    if err := http.ListenAndServe(":8577", mux); err != nil {
        log.Fatal(err)
    }
}

On the Temporal side, you need to wrap your data converter with the codec when creating the options for your Temporal client (simplified, without error handling):

opts := client.Options{
...
}

lpsEndpoint, _ := os.LookupEnv("LARGE_PAYLOAD_SERVICE_URL")
lpc, _ := largepayloadcodec.New(
    largepayloadcodec.WithURL(lpsEndpoint),
    largepayloadcodec.WithNamespace("default"),
)
opts.DataConverter = converter.NewCodecDataConverter(opts.DataConverter, lpc)

temporalClient, _ := client.Dial(opts)

Development

Refer to CONTRIBUTING.md for instructions on how to build and test the Large Payload Service and for general contributing guidelines.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Codec

type Codec struct {
	// contains filtered or unexported fields
}

func New

func New(opts ...Option) (*Codec, error)

New instantiates a Codec. WithURL and WithNamespace are required options.

An error may be returned if incompatible options are configured or if a connection to the remote payload storage service cannot be established.

func (*Codec) Decode

func (c *Codec) Decode(payloads []*common.Payload) ([]*common.Payload, error)

func (*Codec) Encode

func (c *Codec) Encode(payloads []*common.Payload) ([]*common.Payload, error)

type Option

type Option interface {
	// contains filtered or unexported methods
}

func WithHTTPClient

func WithHTTPClient(client *http.Client) Option

WithHTTPClient sets a custom http.Client.

If unspecified, http.DefaultClient will be used.

func WithHTTPRoundTripper

func WithHTTPRoundTripper(rt http.RoundTripper) Option

WithHTTPRoundTripper sets custom Transport on the http.Client.

This may be used to implement use cases including authentication or tracing.

func WithMinBytes

func WithMinBytes(bytes uint32) Option

WithMinBytes configures the minimum size of an event payload needed to trigger encoding using the large payload codec. Any payload smaller than this value will be transparently persisted in workflow history.

The default value is 128000, or 128KB.

Setting this too low can lead to degraded performance, since decoding requires an additional network round trip per payload. This can add up quickly when replaying a workflow with a large number of events.

According to https://docs.temporal.io/workflows, the hard limit for workflow history size is 50k events and 50MB. A workflow with exactly 50k events can therefore in theory have an average event payload size of 1048 bytes.

In practice this worst case is very unlikely, since common workflow event types such as WorkflowTaskScheduled or WorkflowTaskCompleted do not include user defined payloads. If we estimate that one quarter of events have payloads just below the cutoff, then we can calculate how many events total would fit in one workflow's history (the point before which we must call ContinueAsNew):

AverageNonUserTaskBytes = 1024 (generous estimate for events like WorkflowTaskScheduled)
CodecMinBytes = 128_000
AverageEventBytes = (AverageNonUserTaskBytes * 3 + CodecMinBytes) / 4 = 32_768
MaxHistoryEventBytes = 50_000_000
MaxHistoryEventCount = MaxHistoryEventBytes / AverageEventBytes = 1525

func WithNamespace

func WithNamespace(namespace string) Option

WithNamespace sets the Temporal namespace the client using this codec is connected to. This option is mandatory.

func WithURL

func WithURL(u string) Option

WithURL sets the endpoint for the remote payload storage service. This option is mandatory.

func WithVersion

func WithVersion(version string) Option

WithVersion sets the version of the LPS API to use.
