services/ directory

v0.0.13
Published: Mar 30, 2020 License: Apache-2.0

README

Service packages structure

All service cloud function packages share a consistent structure:

Two functions and one type

Initialize function
  • Goal
    • Optimize cloud function performance by reducing the invocation latency
  • Implementation
    • Is executed once per cloud function instance, on cold start
    • Caches objects that are expensive to create, like clients
    • Retrieves settings once, like environment variables
    • Cached objects and retrieved settings are exposed in one global variable named global
Global type
  • A struct defining a global variable that carries the cached objects and retrieved settings prepared by the Initialize function and used by the EntryPoint function
EntryPoint function
  • Goal
    • Execute operations to be performed each time the cloud function is invoked
  • Implementation
    • Is executed on every event triggering the cloud function
    • Uses the cached objects and retrieved settings prepared by the Initialize function and carried by a global variable of type Global
    • Performs the task the given service is targeted to do, as described before the package keyword

Implementation example

How to implement a RAM service package in a Google Cloud Function?

go.mod

Replace <vX.Y.Z> in the go.mod file with the RAM release version to be used

module example.com/cloudfunction

go 1.11

require github.com/BrunoReboul/ram <vX.Y.Z>
function.go for a background function triggered by PubSub events

Replace <package_name> by the name of the service package to be used

// Package p contains a background cloud function
package p

import (
    "context"

    "github.com/BrunoReboul/ram/<package_name>"
    "github.com/BrunoReboul/ram/utilities/ram"
)

var global <package_name>.Global
var ctx = context.Background()

// EntryPoint is the function to be executed for each cloud function occurrence
func EntryPoint(ctxEvent context.Context, PubSubMessage ram.PubSubMessage) error {
    return <package_name>.EntryPoint(ctxEvent, PubSubMessage, &global)
}

func init() {
    <package_name>.Initialize(ctx, &global)
}
function.go for a background function triggered by Google Cloud Storage events

Replace <package_name> by the name of the service package to be used

// Package p contains a background cloud function
package p

import (
    "context"

    "github.com/BrunoReboul/ram/<package_name>"
    "github.com/BrunoReboul/ram/utilities/ram"
)

var global <package_name>.Global
var ctx = context.Background()

// EntryPoint is the function to be executed for each cloud function occurrence
func EntryPoint(ctxEvent context.Context, gcsEvent ram.GCSEvent) error {
    return <package_name>.EntryPoint(ctxEvent, gcsEvent, &global)
}
}

func init() {
    <package_name>.Initialize(ctx, &global)
}

Automatic retrying

Automatic retrying is implemented consistently across RAM service packages, as documented in the Google Cloud Functions best practice Retrying Background Functions.

Impact on the cloud functions' Stackdriver logs:

  • Error entries appear in the logs only for transient errors.
    • As an error is reported, the function is retried.
  • Other errors are logged as information to avoid unwanted retries.
    • To find such errors in the cloud function logs, you can use the following Stackdriver Logging filter:
resource.type="cloud_function"
textPayload:"error"

Directories

Path Synopsis
Package dumpinventory requests CAI to perform an export - Triggered by: Cloud Scheduler Job, through PubSub messages.
Package getgroupsettings retrieves one group's settings from the `Groups Settings API` - Triggered by: PubSub messages from the GCI groups topic - Instances: only one - Output: PubSub messages to a dedicated topic, formatted like Cloud Asset Inventory feed messages - Cardinality: one-one, one output message for each triggering event - Automatic retrying: yes - Required environment variables: - GCIADMINUSERTOIMPERSONATE email of the Google Cloud Identity super admin to impersonate - KEYJSONFILENAME name of the service account JSON file containing the key to authenticate against GCI - OUTPUTTOPICNAME name of the PubSub topic where to deliver feed messages - SERVICEACCOUNTNAME name of the service account used to access GCI
Package listgroupmembers extracts all members from a group in a Google Cloud Identity directory using the Admin SDK API - Triggered by: PubSub messages from the GCI groups topic - Instances: only one - Output: PubSub messages to a dedicated topic, formatted like Cloud Asset Inventory feed messages - Cardinality: one-many, one group may have many members.
Package listgroups extracts all groups from a Google Cloud Identity directory using the Admin SDK API - Triggered by: Cloud Scheduler Job, through PubSub messages - Instances: few, one per directory customer ID - Output: PubSub messages to a dedicated topic, formatted like Cloud Asset Inventory feed messages - Cardinality: one-several, one extraction job is scaled into x queries - x = (number of domains in the GCI directory) x (36 email prefixes) - email prefixes: a..z 0..9 - Automatic retrying: yes - Is recursive: yes - Required environment variables: - GCIADMINUSERTOIMPERSONATE email of the Google Cloud Identity super admin to impersonate - DIRECTORYCUSTOMERID directory customer identifier.
Package monitorcompliance checks asset compliance.
Package publish2fs publishes asset resource feeds as FireStore documents.
Package splitdump nibbles large Cloud Asset Inventory dumps into many PubSub asset feed messages - Triggered by: Google Cloud Storage event when a new dump is delivered - Instances: only one - Output: PubSub messages formatted like Cloud Asset Inventory real-time feed messages - Delivered in the same topics as used by CAI real-time - Tagged to differentiate them from CAI real-time feeds - Creates missing topics on the fly (best effort) in case one does not already exist for real-time - Cardinality: one-many, one dump is nibbled into many feed messages - To ensure scalability the function is recursive: if dump size > x lines, segment it into x-line child dumps, else nibble the dump - x is set through an environment variable - Automatic retrying: yes - Is recursive: yes - Required environment variables: - CAIEXPORTBUCKETNAME the name of the GCS bucket where the CAI dumps are delivered - IAMTOPICNAME the name of the topic used for all IAM policies feed messages
Package stream2bq streams from PubSub to BigQuery: 1) assets 2) compliance states 3) violations - Triggered by: messages in the related PubSub topics - Instances: one per BigQuery table (assets, compliance states, violations) - Output: streaming into BigQuery tables - Cardinality: one-one, one PubSub message - one stream insert in BQ - Automatic retrying: yes - Required environment variables: - ASSETSCOLLECTIONID the name of the FireStore collection grouping all assets documents - BQ_DATASET name of the BigQuery dataset hosting the table - BQ_TABLE name of the BigQuery table where to insert streams - OWNERLABELKEYNAME key name for the label identifying the asset owner - VIOLATIONRESOLVERLABELKEYNAME key name for the label identifying the asset violation resolver
Package upload2gcs stores feeds as JSON files in a Google Cloud Storage bucket.
