This is a core plugin for apid. It is responsible for collecting analytics data for
runtime traffic from the Micro and Enterprise Gateways and publishing it to Apigee.
Configuration
| name | description |
|------|-------------|
| apidanalytics_base_path | string. default: /analytics |
| apidanalytics_data_path | string. default: /ax |
| apidanalytics_collection_interval | int. seconds. default: 120 |
| apidanalytics_upload_interval | int. seconds. default: 5 |
| apidanalytics_uap_server_base | string. URL. required. |
| apidanalytics_use_caching | boolean. default: true |
| apidanalytics_buffer_channel_size | int. number of slots. default: 100 |
| apidanalytics_cache_refresh_interval | int. seconds. default: 1800 |
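The defaults above can be summarized in a small sketch. The `Config` struct and `DefaultConfig` constructor are illustrative names, not the plugin's actual types; the real plugin reads these properties through apid's configuration service.

```go
package main

import "fmt"

// Config mirrors the apidanalytics_* properties listed in the table above.
// Field and type names here are assumptions for illustration.
type Config struct {
	BasePath             string // apidanalytics_base_path
	DataPath             string // apidanalytics_data_path
	CollectionInterval   int    // apidanalytics_collection_interval, seconds
	UploadInterval       int    // apidanalytics_upload_interval, seconds
	UAPServerBase        string // apidanalytics_uap_server_base, required
	UseCaching           bool   // apidanalytics_use_caching
	BufferChannelSize    int    // apidanalytics_buffer_channel_size, slots
	CacheRefreshInterval int    // apidanalytics_cache_refresh_interval, seconds
}

// DefaultConfig applies the documented defaults; the required
// uapServerBase has no default and must be supplied by the deployer.
func DefaultConfig(uapServerBase string) Config {
	return Config{
		BasePath:             "/analytics",
		DataPath:             "/ax",
		CollectionInterval:   120,
		UploadInterval:       5,
		UAPServerBase:        uapServerBase,
		UseCaching:           true,
		BufferChannelSize:    100,
		CacheRefreshInterval: 1800,
	}
}

func main() {
	cfg := DefaultConfig("https://uap.example.com")
	fmt.Println(cfg.BasePath, cfg.CollectionInterval)
}
```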
Startup Procedure
Initialize crash recovery, the upload manager, and the buffering manager, which buffer analytics
messages to local files and then periodically upload those files to S3/GCS using a signed URL
obtained from the uapCollectionEndpoint exposed via the edgex proxy
Create a listener for Apigee-Sync events
Each time a snapshot is received, build an in-memory cache of data scopes
Each time a changelist is received, if data_scope info has changed, insert or delete the info for the changed scope in the tenant cache
Initialize the POST /analytics/{scope_uuid} and POST /analytics APIs
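The snapshot/changelist handling above can be sketched as a small cache keyed by scope UUID. The `DataScope` and `tenantCache` names and fields are assumptions for illustration, not the plugin's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// DataScope holds the tenant identifiers carried by an Apigee-Sync
// data_scope row; the field names are illustrative.
type DataScope struct {
	ScopeUUID string
	Org       string
	Env       string
}

// tenantCache is the in-memory cache rebuilt on every snapshot and
// patched on every changelist.
type tenantCache struct {
	mu     sync.RWMutex
	scopes map[string]DataScope
}

// OnSnapshot replaces the cache contents with the snapshot's scopes.
func (c *tenantCache) OnSnapshot(scopes []DataScope) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.scopes = make(map[string]DataScope, len(scopes))
	for _, s := range scopes {
		c.scopes[s.ScopeUUID] = s
	}
}

// OnChange inserts or deletes a single scope when a changelist touches
// the data_scope table.
func (c *tenantCache) OnChange(s DataScope, deleted bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.scopes == nil {
		c.scopes = make(map[string]DataScope)
	}
	if deleted {
		delete(c.scopes, s.ScopeUUID)
	} else {
		c.scopes[s.ScopeUUID] = s
	}
}

func main() {
	c := &tenantCache{}
	c.OnSnapshot([]DataScope{{ScopeUUID: "s1", Org: "acme", Env: "prod"}})
	c.OnChange(DataScope{ScopeUUID: "s2", Org: "acme", Env: "test"}, false)
	c.OnChange(DataScope{ScopeUUID: "s1"}, true)
	fmt.Println(len(c.scopes))
}
```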
Upon receiving requests
Validate and enrich each batch of analytics records. If scope_uuid is given, it is used for validation.
If scope_uuid is not provided, the payload must contain an organization and environment; the org/env
pair is then used to validate the scope for this cluster.
If valid, publish the records to an internal buffer channel
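The validation rule above (prefer scope_uuid, fall back to org/env) might look like the following sketch. `scopeInfo`, `resolveScope`, and the error value are hypothetical names for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// scopeInfo mirrors one entry of the in-memory tenant cache; these are
// illustrative names, not the plugin's actual types.
type scopeInfo struct {
	ScopeUUID string
	Org       string
	Env       string
}

var errNotAuthorized = errors.New("not a valid scope for this cluster")

// resolveScope validates a batch's target scope: use scope_uuid when
// present, otherwise look the scope up by the payload's org/env pair.
func resolveScope(cache map[string]scopeInfo, scopeUUID, org, env string) (scopeInfo, error) {
	if scopeUUID != "" {
		if s, ok := cache[scopeUUID]; ok {
			return s, nil
		}
		return scopeInfo{}, errNotAuthorized
	}
	for _, s := range cache {
		if s.Org == org && s.Env == env {
			return s, nil
		}
	}
	return scopeInfo{}, errNotAuthorized
}

func main() {
	cache := map[string]scopeInfo{
		"u1": {ScopeUUID: "u1", Org: "acme", Env: "prod"},
	}
	if s, err := resolveScope(cache, "", "acme", "prod"); err == nil {
		fmt.Println("validated scope", s.ScopeUUID)
		// At this point the real plugin would enrich the records and
		// send them on the internal buffer channel.
	}
}
```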
Buffering Logic
The buffering manager creates a listener on the internal buffer channel and consumes messages
as soon as they are put on the channel
Based on the current timestamp, either an existing directory is used to save these messages
or a new timestamp directory is created
If a new directory is created, an event is published on the closeBucketEvent channel
at the directory's expected closing time
The messages are stored in a file under tmp/<timestamp_directory>
Based on the collection interval, the routine listening on the closeBucketEvent channel periodically
closes the files in tmp and moves the directory to the staging directory
Upload Manager
The upload manager periodically checks the staging directory to look for new folders
When a new folder arrives here, it means all files under it are closed and ready to be uploaded
Tenant info is extracted from the directory name and the files are sequentially uploaded to S3/GCS
Based on the upload status:
If the upload succeeds, the directory is deleted from staging and previously failed uploads are retried
If the upload fails, it is retried 3 times before the directory is moved to the failed directory
Crash recovery is a one-time activity performed when the plugin starts, to
cleanly handle files left open by a previous apid stop or crash
Exposed API
POST /analytics/{bundle_scope_uuid}
POST /analytics
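A client call to the exposed API might be built as below. Only the two paths come from this document; the host/port, the `records` payload shape, and the helper name are hypothetical.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildAnalyticsRequest builds a POST to /analytics or, when a scope
// UUID is supplied, to /analytics/{bundle_scope_uuid}. The JSON body
// shape {"records": [...]} is an assumption for illustration.
func buildAnalyticsRequest(base, scopeUUID string, records []map[string]interface{}) (*http.Request, error) {
	body, err := json.Marshal(map[string]interface{}{"records": records})
	if err != nil {
		return nil, err
	}
	url := base + "/analytics"
	if scopeUUID != "" {
		url += "/" + scopeUUID
	}
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := buildAnalyticsRequest("http://localhost:9000", "my-scope-uuid", nil)
	fmt.Println(req.Method, req.URL.Path)
}
```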