file-benchmark-s3
file-benchmark-s3 is a tool to benchmark the speed of basic file operations, useful when you need to test the performance of S3 Object Storage.
Usage:
file-benchmark-s3 [flags]
Flags:
-h, --help                  help for file-benchmark-s3
-g, --generate-config       Generate a config file to connect to S3
-c, --concurrent-jobs int   Number of concurrent jobs writing files. (Maximum 10) (default 1)
-f, --files-number int      Number of files to be created. (default 1)
-m, --max-file-size int     Max size in MB allocated for each test file. (default 2000)
-i, --min-file-size int     Min size in MB allocated for each test file. (default 5)
-o, --output string         Output folder for the results. (default "./results")
-p, --path string           Path where the test files will be created. (default "./")
-s, --spinner               Show a spinner to indicate the process is still alive; do not use it when running in the background
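For unattended runs the spinner should stay off; one way to detach the process with standard shell tools (the flag values and paths here are only illustrative) is:
$ nohup ./file-benchmark-s3 -c 5 -f 50 -o ./results-bg -p /mnt/s3fs/ > benchmark.log 2>&1 &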
Config File Generation:
file-benchmark-s3 requires a config file, s3Config.ini, which can be generated using the command:
$ ./file-benchmark-s3 -g
After the file is generated, you only need to fill in the required variables.
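The variables themselves are defined by the generated file; purely as an illustrative sketch (the key names below are assumptions, not necessarily the tool's actual keys), connecting to S3 typically needs values along these lines:
; illustrative sketch only -- fill in the fields produced by ./file-benchmark-s3 -g
endpoint   = https://s3.example.com
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
bucket     = benchmark-bucket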
Usage Example:
$ ./file-benchmark-s3 -c10 -f 100 -m10 -i5 -o ./100files -p /mnt/s3fs/
This example creates 100 random files, between 5 MB and 10 MB in size, using 10 concurrent jobs.
The files will be created inside the paths /mnt/s3fs/{random name} and /mnt/s3fs/download_{random name}; after the test is complete, those folders are removed.
The results will be written to the path ./100files; those results will be: