Documentation ¶
Overview ¶
Package filemanager manages the splitting and encryption of a file into chunks that can be uploaded to a cloud storage. This package is part of the security strategy of 3nigm4: dividing the file into chunks and assigning each one a unique resource ID produces unrelated anonymous chunks that cannot be linked back to the original file metadata (length, hash and so on).
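The splitting step described above can be sketched as follows; this is a simplified stand-in (`splitInChunks` is a hypothetical helper, not part of the package API), showing only how a byte stream is cut into fixed-size chunks before each one is encrypted independently:

```go
package main

import "fmt"

// splitInChunks divides data into fixed-size chunks; the last chunk may be
// shorter than chunkSize. This mirrors the splitting step the package
// performs before encrypting each chunk on its own.
func splitInChunks(data []byte, chunkSize int) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		n := chunkSize
		if len(data) < n {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

func main() {
	chunks := splitInChunks([]byte("0123456789"), 4)
	fmt.Println(len(chunks)) // 3 chunks: "0123", "4567", "89"
}
```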
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ChunkFileId ¶
ChunkFileId calculates the file name for a specific chunk and returns a hex-encoded string that should be used to store it in a DataSaver implementation. Checksum data can be any hashed data usable to differentiate commonly named files (being derivable from metadata). An entropy component and a timestamp epoch are used to create a totally unique file ID.
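The exact construction used by ChunkFileId is not documented here; the sketch below only illustrates the ingredients the description names (chunk identity, checksum, entropy, epoch) hashed into one hex ID, so that two uploads of the same file never share identifiers:

```go
package main

import (
	"crypto/rand"
	"crypto/sha512"
	"encoding/hex"
	"fmt"
	"time"
)

// chunkID is a hypothetical illustration: it mixes the file name, chunk
// index, a checksum, random entropy and the current epoch into a SHA-384
// digest, then hex-encodes it. The real ChunkFileId may differ.
func chunkID(fileName string, chunkIndex int, checksum []byte) (string, error) {
	entropy := make([]byte, 16)
	if _, err := rand.Read(entropy); err != nil {
		return "", err
	}
	h := sha512.New384()
	fmt.Fprintf(h, "%s:%d:%d:", fileName, chunkIndex, time.Now().Unix())
	h.Write(checksum)
	h.Write(entropy)
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	a, _ := chunkID("doc.pdf", 0, []byte("checksum"))
	b, _ := chunkID("doc.pdf", 0, []byte("checksum"))
	fmt.Println(a != b) // the entropy component makes repeated calls differ
}
```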
func DeleteChunks ¶
func DeleteChunks(ds DataSaver, reference *ReferenceFile, operationID *ContextID) error
DeleteChunks removes all encrypted resources composing a file. It is not exposed as a struct method to avoid requiring the chunks to be loaded before deleting them (all authentication and authorisation logic will be implemented server side).
Types ¶
type ContextID ¶
type ContextID string
ContextID is used to pass back to the calling function a context usable to get progress information while interacting with the DataSaver interface.
type DataSaver ¶
type DataSaver interface {
    // ProgressStatus gets a requestID argument and returns progress infos about it.
    ProgressStatus(ContextID) (ProgressStatus, error)
    // SaveChunks saves chunks using a file name, bucket, actual data, a checksum
    // reference and an expire date.
    SaveChunks(string, [][]byte, []byte, time.Duration, *Permission, *ContextID) ([]string, error)
    // RetrieveChunks retrieves all resources composing a file.
    RetrieveChunks(string, []string, *ContextID) ([][]byte, error)
    // DeleteChunks removes all resources composing a file.
    DeleteChunks(string, []string, *ContextID) error
}
DataSaver is the interface of the actual saver for encrypted data: this can be a local file system, a remote fs, an API, or any other system capable of storing data chunks.
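To make the contract concrete, here is a hedged, minimal in-memory stand-in for a DataSaver backend. The method signatures are deliberately simplified (no ContextID, Permission or expiry handling, which real implementations must honour), so `memorySaver` is an illustration, not a conforming implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// memorySaver keeps chunks in a map; a real DataSaver would persist them to
// a local file system or a remote storage API.
type memorySaver struct {
	store map[string][]byte
}

func newMemorySaver() *memorySaver {
	return &memorySaver{store: make(map[string][]byte)}
}

// SaveChunks stores each chunk under a derived resource name and returns the names.
func (m *memorySaver) SaveChunks(filename string, chunks [][]byte) ([]string, error) {
	var paths []string
	for i, c := range chunks {
		id := fmt.Sprintf("%s.chunk%d", filename, i)
		m.store[id] = c
		paths = append(paths, id)
	}
	return paths, nil
}

// RetrieveChunks loads back all resources composing a file.
func (m *memorySaver) RetrieveChunks(filename string, paths []string) ([][]byte, error) {
	var chunks [][]byte
	for _, p := range paths {
		c, ok := m.store[p]
		if !ok {
			return nil, errors.New("missing chunk: " + p)
		}
		chunks = append(chunks, c)
	}
	return chunks, nil
}

// DeleteChunks removes all resources composing a file.
func (m *memorySaver) DeleteChunks(filename string, paths []string) error {
	for _, p := range paths {
		delete(m.store, p)
	}
	return nil
}

func main() {
	ms := newMemorySaver()
	paths, _ := ms.SaveChunks("doc", [][]byte{[]byte("aa"), []byte("bb")})
	got, _ := ms.RetrieveChunks("doc", paths)
	fmt.Println(len(got), string(got[1]))
}
```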
type EncryptedChunks ¶
type EncryptedChunks struct {
// contains filtered or unexported fields
}
EncryptedChunks holds encrypted data chunks together with their keys; these are the files that will be uploaded to the cloud storage. All keys, metadata and encryption algorithm details are saved locally only (never passed in plain text anywhere).
func LoadChunks ¶
func LoadChunks(ds DataSaver, reference *ReferenceFile, rawKey []byte, operationID *ContextID) (*EncryptedChunks, error)
LoadChunks loads chunks from a struct implementing the DataSaver interface, given a reference file in input. It returns a complete encrypted chunks structure from which the original file can be decrypted.
func NewEncryptedChunks ¶
func NewEncryptedChunks(rawKey []byte, filepath string, chunkSize uint64, compressed bool) (*EncryptedChunks, error)
NewEncryptedChunks creates a new encrypted chunks structure from a given file, a chunk size and a compression flag. If a rawKey is specified, it will be used to make the AES encryption stronger (this key will not be passed using a reference file). This function returns the initialised struct or an error if something went wrong.
func (*EncryptedChunks) GetFile ¶
func (e *EncryptedChunks) GetFile(filepath string) error
GetFile returns the recomposed file, merging all data chunks and verifying consistency. It saves the final result to the path specified as argument or returns an error if something went wrong.
func (*EncryptedChunks) SaveChunks ¶
func (e *EncryptedChunks) SaveChunks(ds DataSaver, expires time.Duration, permission *Permission, operationID *ContextID) (*ReferenceFile, error)
SaveChunks saves encrypted data chunks to a structure implementing the DataSaver interface.
type Metadata ¶
type Metadata struct {
    FileName string               `json:"filename" xml:"filename"`
    Size     int64                `json:"size" xml:"size"`
    ModTime  time.Time            `json:"modtime" xml:"modtime"`
    IsDir    bool                 `json:"isdir" xml:"isdir"`
    CheckSum [sha512.Size384]byte `json:"checksum" xml:"checksum"`
}
Metadata holds metadata related to the original file; it is managed locally together with the encryption keys.
type Permission ¶
type Permission struct {
    Permission   ct.Permission
    SharingUsers []string
}
Permission defines a file's associated access permissions.
type ProgressStatus ¶
type ProgressStatus interface {
    TotalUnits() int // the total number of processing units;
    Done() int       // the number of units already processed (out of the total).
}
ProgressStatus reports the status of a DataSaver-managed operation; it can be used to monitor how the operation is progressing and how quickly.
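A caller could derive a completion percentage from the two methods like this; `progress` and `percent` are hypothetical helpers, not part of the package:

```go
package main

import "fmt"

// progress is a trivial value satisfying the two-method shape of
// ProgressStatus, used here only for illustration.
type progress struct{ total, done int }

func (p progress) TotalUnits() int { return p.total }
func (p progress) Done() int       { return p.done }

// percent converts a ProgressStatus-like value into a completion percentage.
func percent(p interface {
	TotalUnits() int
	Done() int
}) float64 {
	if p.TotalUnits() == 0 {
		return 0
	}
	return 100 * float64(p.Done()) / float64(p.TotalUnits())
}

func main() {
	fmt.Println(percent(progress{total: 8, done: 2})) // 25
}
```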
type ReferenceFile ¶
type ReferenceFile struct {
    // metadata
    FileName string               `json:"filename" xml:"filename"`
    Size     int64                `json:"size" xml:"size"`
    ModTime  time.Time            `json:"modtime" xml:"modtime"`
    IsDir    bool                 `json:"isdir" xml:"isdir"`
    CheckSum [sha512.Size384]byte `json:"checksum" xml:"checksum"`
    // encryption
    DerivationRounds int      `json:"rounds" xml:"rounds"`
    Salt             []byte   `json:"salt" xml:"salt"`
    ChunksKeys       [][]byte `json:"chunkskeys" xml:"chunkskeys"`
    // chunks settings
    ChunksPaths []string `json:"chunkspaths" xml:"chunkspaths"`
    Compressed  bool     `json:"compressed" xml:"compressed"`
    ChunkSize   uint64   `json:"chunksize" xml:"chunksize"`
}
ReferenceFile is the locally saved output file; it contains all information required to later decrypt the data chunks. If it is lost there will be no way to recover the original data.