filepaths

package
v4.38.10
Warning

This package is not in the latest version of its module.

Published: Nov 4, 2024 License: Apache-2.0 Imports: 1 Imported by: 0

Documentation

Overview

Defines all paths/file names used in S3 for storage of our data. This gives a more centralised view of where our data lives in S3 and makes changing storage layouts and paths easier.

Index

Constants

const Auth0PemFileName = "auth0.pem"

Auth0 PEM file which API uses to verify JWTs

const CSVFileSuffix = ".csv"

CSVFileSuffix - CSV files are <jobid>.csv

const DatasetCustomMetaFileName = "custom-meta.json"

File name for dataset custom meta file containing the title and other settings

const DatasetCustomRoot = "dataset-addons"

Root directory for all dataset add-ons. These are custom files that can be uploaded for a dataset to set its title, which the "default" image is, etc.

  • dataset-addons/
  • ----<dataset-id>/
  • --------custom-meta.json - Custom metadata for this dataset, usually to set dataset title, but can also contain matched image scale/bias or other fields
  • --------UNALIGNED/
  • ------------image, *.png or *.jpg
  • --------MATCHED/
  • ------------image, *.png, *.jpg or *.tif (if TIF it's considered an RGBU multi-spectral image)
  • --------RGBU/
  • ------------images, *.tif - NOTE: Went unused, these are now all stored as MATCHED images
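The layout above can be sketched as an S3 key-building helper. This helper is hypothetical (not part of this package), assuming keys are composed with path.Join:

```go
package main

import (
	"fmt"
	"path"
)

const DatasetCustomRoot = "dataset-addons"
const DatasetCustomMetaFileName = "custom-meta.json"

// datasetCustomMetaPath is a hypothetical helper (not part of this package)
// composing the key dataset-addons/<dataset-id>/custom-meta.json per the
// layout described above.
func datasetCustomMetaPath(datasetID string) string {
	return path.Join(DatasetCustomRoot, datasetID, DatasetCustomMetaFileName)
}

func main() {
	// Prints: dataset-addons/my-dataset/custom-meta.json
	fmt.Println(datasetCustomMetaPath("my-dataset"))
}
```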
const DatasetFileName = "dataset.bin"

The dataset file containing all spectra, housekeeping, beam locations, etc. Created by data-converter

const DatasetImageCacheRoot = "Image-Cache"
const DatasetImagesRoot = "Images"

Paths for v4 API:

const DatasetScansRoot = "Scans"
const DatasetUploadRoot = "UploadedDatasets"

Root directory to store uploaded dataset "raw" artifacts. These are then read by dataset importer to create a dataset in the dataset bucket

  • UploadedDatasets/
  • ----<dataset-id>/
  • --------Files for that dataset importer type. For example, with breadboards we expect:
  • --------import.json <-- Describes what's what
  • --------spectra.zip <-- Spectra .msa files zipped up
  • --------context_image_1.jpg <-- 1 or more context images
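The upload layout above can be sketched with a key-building helper. The helper name here is hypothetical (not part of this package), assuming keys are composed with path.Join:

```go
package main

import (
	"fmt"
	"path"
)

const DatasetUploadRoot = "UploadedDatasets"

// uploadedDatasetFilePath is a hypothetical helper (not part of this package)
// showing how raw upload artifacts sit under the upload root:
// UploadedDatasets/<dataset-id>/<file>
func uploadedDatasetFilePath(datasetID string, fileName string) string {
	return path.Join(DatasetUploadRoot, datasetID, fileName)
}

func main() {
	// Prints: UploadedDatasets/my-dataset/import.json
	fmt.Println(uploadedDatasetFilePath("my-dataset", "import.json"))
	fmt.Println(uploadedDatasetFilePath("my-dataset", "spectra.zip"))
}
```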
const DiffractionDBFileName = "diffraction-db.bin"

Diffraction peak database, generated by diffraction-detector when dataset is imported

const DiffractionPeakManualFileName = "manual-diffraction-peaks.json"

Name of user manually entered diffraction peaks file. NOTE: this file only exists as a shared file!

const DiffractionPeakStatusFileName = "diffraction-peak-statuses.json"

Name of file containing the status of diffraction peaks. The diffraction DB is generated when a dataset is created, but users can view a peak and mark it with a status, e.g. to delete it because it's not a real diffraction peak. NOTE: this file only exists as a shared file!

const PiquantConfigFileName = "config.json"

File name of "overall" piquant config file, which references the individual files PIQUANT will need

const PiquantConfigSubDir = "PiquantConfigs"

Piquant configs sub-dir

  • NOTE: Quant creation doesn't use GetDetectorConfigPath, maybe DetectorConfig is hard-coded into docker container! TODO: remove that
const PiquantLogSubdir = "piquant-logs"

Piquant logs sub-directory

const QuantFileSuffix = ".bin"

QuantFileSuffix - quant files are <jobid>.bin

const QuantLastOutputFileName = "output_data"

File name of last piquant output (used with fit command). Extension added as needed

const QuantLastOutputLogName = "output.log"

File name of last piquant output log file (used with fit command)

const QuantLogsSubDirSuffix = "-logs"

QuantLogsSubDirSuffix - this goes after job ID to form a directory name that stores the quant logs

const RootArchive = "Archive"

Root directory containing all archived data set zips as we downloaded them

  • Archive/
const RootDatasetConfig = "DatasetConfig"

Root directory containing all dataset configs

  • DatasetConfig/
  • ----import-times.json - Specifies when each dataset was imported (map id->unix time)
const RootDetectorConfig = "DetectorConfig"

Root directory containing all detector configs

  • DetectorConfig/
  • ----<config-name>/ - Name shown on UI, eg PIXL or Breadboard
  • --------pixlise-config.json - UI config values for this detector, eg detector energy range, window material, etc
  • --------PiquantConfigs/
  • ------------<version>/ - eg v1, v2, v3
  • ----------------config.json - The PIQUANT config file, used by quant "runner", in docker container. References other files
  • ----------------<other files>.msa or .csv - These are referenced by config.json and passed to PIQUANT exe as parameters
const RootJobData = "JobData"

This contains temporary files generated when running a long-running job (eg PIQUANT): parameters to the job, status files, log files from the job, and intermediate calculation files. These are in separate directories to aid listing: instead of returning hundreds of files per job, a caller that only wants job statuses gets one file per job.

  • JobData/
  • ----<dataset-id>/
  • --------<job-id>/
  • ------------node*.pmcs - PMC list for a given node running the job
  • ------------params.json - Job parameters as specified when created
  • ------------output/
  • ----------------node*.pmcs_result.csv - CSV generated by a single node, intermediate output
  • ----------------combined.csv - The final output generated by combining all the node*.pmcs_result.csv files
  • ------------piquant-logs/
  • ----------------node*.pmcs_piquant.log - PIQUANT log file for a given node
  • ----------------node*.pmcs_stdout.log - stdout for running PIQUANT on a given node
const RootPixliseConfigPath = "PixliseConfig"

Root directory of PIXLISE-specific config files

  • PixliseConfig/
  • ----auth0.pem - Certificate needed by Auth0 to verify a user request is valid
  • ----datasets.json - Dataset list (tiles)
  • ----piquant-version.json - Docker container for running PIQUANT
  • ----bad-dataset-ids.json - Contains a list of Dataset IDs to ignore when generating dataset tiles
const RootQuantificationPath = "Quantifications"
const RootUserContent = "UserContent"

Variables

This section is empty.

Functions

func GetConfigFilePath

func GetConfigFilePath(fileName string) string

Gets a config file path relative to the root of the bucket
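A minimal sketch of what this might do, assuming it simply joins the file name under RootPixliseConfigPath ("PixliseConfig"); the real implementation may differ:

```go
package main

import (
	"fmt"
	"path"
)

const RootPixliseConfigPath = "PixliseConfig"

// GetConfigFilePath sketch: join the file name under the PIXLISE config root.
// This is an assumed implementation based on the documented layout.
func GetConfigFilePath(fileName string) string {
	return path.Join(RootPixliseConfigPath, fileName)
}

func main() {
	// Prints: PixliseConfig/auth0.pem
	fmt.Println(GetConfigFilePath("auth0.pem"))
}
```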

func GetDetectorConfigPath

func GetDetectorConfigPath(configName string, version string, fileName string) string

Get a detector config path (used by PIQUANT) given the config name, version and optionally a file name. If file name is blank then the directory path above it is returned
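A sketch of this composition based on the DetectorConfig directory layout shown earlier (DetectorConfig/<config-name>/PiquantConfigs/<version>/...); the real implementation may differ:

```go
package main

import (
	"fmt"
	"path"
)

const RootDetectorConfig = "DetectorConfig"
const PiquantConfigSubDir = "PiquantConfigs"

// GetDetectorConfigPath sketch: build the versioned PIQUANT config path.
// With a blank fileName the enclosing version directory is returned, as the
// documentation above describes.
func GetDetectorConfigPath(configName string, version string, fileName string) string {
	p := path.Join(RootDetectorConfig, configName, PiquantConfigSubDir, version)
	if fileName == "" {
		return p
	}
	return path.Join(p, fileName)
}

func main() {
	// Prints: DetectorConfig/PIXL/PiquantConfigs/v2/config.json
	fmt.Println(GetDetectorConfigPath("PIXL", "v2", "config.json"))
	// Prints: DetectorConfig/PIXL/PiquantConfigs/v2
	fmt.Println(GetDetectorConfigPath("PIXL", "v2", ""))
}
```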

func GetImageCacheFilePath

func GetImageCacheFilePath(imagePath string) string

func GetImageFilePath

func GetImageFilePath(imagePath string) string

func GetJobDataPath

func GetJobDataPath(datasetID string, jobID string, fileName string) string

Retrieves the path of a given file for dataset, job id and file name. NOTE: if job ID is blank, it's omitted from the path, and same for file name
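A sketch matching the behaviour described above, where a blank jobID and/or fileName is simply omitted from the resulting path; the real implementation may differ:

```go
package main

import (
	"fmt"
	"path"
)

const RootJobData = "JobData"

// GetJobDataPath sketch: JobData/<dataset-id>/<job-id>/<file-name>, dropping
// any trailing component that is blank. Assumed implementation based on the
// documented behaviour.
func GetJobDataPath(datasetID string, jobID string, fileName string) string {
	p := path.Join(RootJobData, datasetID)
	if jobID != "" {
		p = path.Join(p, jobID)
	}
	if fileName != "" {
		p = path.Join(p, fileName)
	}
	return p
}

func main() {
	// Prints: JobData/my-dataset/job-123/params.json
	fmt.Println(GetJobDataPath("my-dataset", "job-123", "params.json"))
	// Prints: JobData/my-dataset
	fmt.Println(GetJobDataPath("my-dataset", "", ""))
}
```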

func GetQuantPath

func GetQuantPath(userId string, scanId string, fileName string) string

func GetScanFilePath

func GetScanFilePath(scanID string, fileName string) string

func GetUserLastPiquantOutputPath

func GetUserLastPiquantOutputPath(userID string, datasetID string, piquantCommand string, fileName string) string

func MakeQuantCSVFileName

func MakeQuantCSVFileName(quantID string) string

func MakeQuantDataFileName

func MakeQuantDataFileName(quantID string) string

func MakeQuantLogDirName

func MakeQuantLogDirName(quantID string) string
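The three Make* helpers can be sketched from the documented suffix constants, assuming each simply appends its suffix to the quant/job ID; the real implementations may differ:

```go
package main

import "fmt"

const CSVFileSuffix = ".csv"
const QuantFileSuffix = ".bin"
const QuantLogsSubDirSuffix = "-logs"

// Assumed implementations: append the documented suffix to the quant ID.
func MakeQuantCSVFileName(quantID string) string  { return quantID + CSVFileSuffix }
func MakeQuantDataFileName(quantID string) string { return quantID + QuantFileSuffix }
func MakeQuantLogDirName(quantID string) string   { return quantID + QuantLogsSubDirSuffix }

func main() {
	fmt.Println(MakeQuantCSVFileName("quant-abc"))  // quant-abc.csv
	fmt.Println(MakeQuantDataFileName("quant-abc")) // quant-abc.bin
	fmt.Println(MakeQuantLogDirName("quant-abc"))   // quant-abc-logs
}
```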

Types

This section is empty.
