Documentation ¶
Index ¶
- Constants
- func DeleteCinderVolume(name string) error
- func PerformVolumeLifeCycleInParallel(f *framework.Framework, client clientset.Interface, namespace string, ...)
- func SIGDescribe(text string, body func()) bool
- func SkipUnlessLocalSSDExists(ssdInterface, filesystemType string, node *v1.Node)
- func VolumeCreateAndAttach(client clientset.Interface, namespace string, sc []*storageV1.StorageClass, ...)
- type NodeSelector
Constants ¶
const (
	MinNodes         = 2
	NodeStateTimeout = 1 * time.Minute
)
const (
	// default local volume type, aka a directory
	DirectoryLocalVolumeType localVolumeType = "dir"
	// creates a tmpfs and mounts it
	TmpfsLocalVolumeType localVolumeType = "tmpfs"
	// tests based on local ssd at /mnt/disks/by-uuid/
	GCELocalSSDVolumeType localVolumeType = "gce-localssd-scsi-fs"
)
const (
	InvalidDatastore = "invalidDatastore"
	DatastoreSCName  = "datastoresc"
)
const (
	Ext4FSType    = "ext4"
	Ext3FSType    = "ext3"
	InvalidFSType = "ext10"
	ExecCommand   = "/bin/df -T /mnt/volume1 | /bin/awk 'FNR == 2 {print $2}' > /mnt/volume1/fstype && while true ; do sleep 2 ; done"
)
const (
	SCSIUnitsAvailablePerNode = 55
	CreateOp                  = "CreateOp"
	AttachOp                  = "AttachOp"
	DetachOp                  = "DetachOp"
	DeleteOp                  = "DeleteOp"
)
This test calculates latency numbers for volume lifecycle operations (a timing sketch follows the list).

1. Create 4 types of storage classes
2. Read the total number of volumes to be created and the number of volumes per pod
3. Create PVCs for the total number of volumes
4. Create pods, each with the configured number of volumes attached
5. Verify access to the volumes
6. Delete the pods and wait for the volumes to detach
7. Delete the PVCs
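A minimal sketch of how per-operation latencies could be collected, keyed by the values of the CreateOp/AttachOp/DetachOp/DeleteOp constants above; measureOp and the elided operation bodies are assumptions, not the test's actual code:

	package main

	import (
		"fmt"
		"time"
	)

	// measureOp times fn and records the elapsed duration under the given
	// operation key (hypothetical helper; the real test differs).
	func measureOp(latency map[string]time.Duration, op string, fn func()) {
		start := time.Now()
		fn()
		latency[op] = time.Since(start)
	}

	func main() {
		latency := make(map[string]time.Duration)
		measureOp(latency, "CreateOp", func() { /* create the PVCs */ })
		measureOp(latency, "AttachOp", func() { /* create pods; wait for volumes to attach */ })
		measureOp(latency, "DetachOp", func() { /* delete pods; wait for volumes to detach */ })
		measureOp(latency, "DeleteOp", func() { /* delete the PVCs */ })
		for op, d := range latency {
			fmt.Printf("%s: %v\n", op, d)
		}
	}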
const (
	VmfsDatastore                              = "sharedVmfs-0"
	VsanDatastore                              = "vsanDatastore"
	Datastore                                  = "datastore"
	Policy_DiskStripes                         = "diskStripes"
	Policy_HostFailuresToTolerate              = "hostFailuresToTolerate"
	Policy_CacheReservation                    = "cacheReservation"
	Policy_ObjectSpaceReservation              = "objectSpaceReservation"
	Policy_IopsLimit                           = "iopsLimit"
	DiskFormat                                 = "diskformat"
	ThinDisk                                   = "thin"
	SpbmStoragePolicy                          = "storagepolicyname"
	BronzeStoragePolicy                        = "bronze"
	HostFailuresToTolerateCapabilityVal        = "0"
	CacheReservationCapabilityVal              = "20"
	DiskStripesCapabilityVal                   = "1"
	ObjectSpaceReservationCapabilityVal        = "30"
	IopsLimitCapabilityVal                     = "100"
	StripeWidthCapabilityVal                   = "2"
	DiskStripesCapabilityInvalidVal            = "14"
	HostFailuresToTolerateCapabilityInvalidVal = "4"
	DummyVMPrefixName                          = "vsphere-k8s"
	DiskStripesCapabilityMaxVal                = "11"
)
const (
DiskSizeSCName = "disksizesc"
)
const (
NodeLabelKey = "vsphere_e2e_label"
)
Perform vSphere volume lifecycle management at scale, based on a user-configurable number of volumes. The following actions are performed as part of this test (a goroutine fan-out sketch appears after the list).

1. Create storage classes of 4 categories (default, SC with a non-default datastore, SC with an SPBM policy, SC with VSAN storage capabilities).
2. Read VCP_SCALE_VOLUME_COUNT, VCP_SCALE_INSTANCES, VCP_SCALE_VOLUMES_PER_POD, VSPHERE_SPBM_POLICY_NAME, and VSPHERE_DATASTORE from the system environment.
3. Launch VCP_SCALE_INSTANCES goroutines for creating VCP_SCALE_VOLUME_COUNT volumes. Each goroutine is responsible for creating and attaching VCP_SCALE_VOLUME_COUNT/VCP_SCALE_INSTANCES volumes.
4. Read VCP_SCALE_VOLUMES_PER_POD from the system environment; each pod will have VCP_SCALE_VOLUMES_PER_POD volumes attached to it.
5. Once all the goroutines have completed, delete all the pods and volumes.
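A minimal sketch of the fan-out in step 3, assuming only the environment variables named above (the create/attach body is elided; the zero-instance guard just keeps the sketch runnable when unconfigured):

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"sync"
	)

	func main() {
		// Scale parameters come from the system environment, as in step 2.
		volumeCount, _ := strconv.Atoi(os.Getenv("VCP_SCALE_VOLUME_COUNT"))
		instances, _ := strconv.Atoi(os.Getenv("VCP_SCALE_INSTANCES"))
		if instances == 0 {
			instances = 1 // fallback so the sketch runs without env configuration
		}

		var wg sync.WaitGroup
		wg.Add(instances)
		for i := 0; i < instances; i++ {
			go func(id int) {
				defer wg.Done()
				// Each goroutine creates and attaches its share of the volumes.
				fmt.Printf("instance %d handles %d volumes\n", id, volumeCount/instances)
			}(i)
		}
		wg.Wait()
		// Once all goroutines are done, the pods and volumes are deleted.
	}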
Variables ¶
This section is empty.
Functions ¶
func DeleteCinderVolume ¶

func DeleteCinderVolume(name string) error
func PerformVolumeLifeCycleInParallel ¶
func PerformVolumeLifeCycleInParallel(f *framework.Framework, client clientset.Interface, namespace string, instanceId string, sc *storageV1.StorageClass, iterations int, wg *sync.WaitGroup)
PerformVolumeLifeCycleInParallel performs volume lifecycle operations; it is intended to run as a goroutine so that multiple instances execute in parallel.
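A hedged usage sketch: the caller adds to the WaitGroup and launches one goroutine per instance. instances, iterations, and the fixtures f, client, namespace, and sc are assumed to come from test setup:

	var wg sync.WaitGroup
	wg.Add(instances)
	for i := 0; i < instances; i++ {
		instanceID := fmt.Sprintf("instance-%d", i)
		go PerformVolumeLifeCycleInParallel(f, client, namespace, instanceID, sc, iterations, &wg)
	}
	wg.Wait() // each call presumably signals wg.Done when it finishes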
func SIGDescribe ¶

func SIGDescribe(text string, body func()) bool
func SkipUnlessLocalSSDExists ¶

func SkipUnlessLocalSSDExists(ssdInterface, filesystemType string, node *v1.Node)

SkipUnlessLocalSSDExists takes an ssdInterface (scsi/nvme) and a filesystemType (fs/block) and skips the test if a disk of that type does not exist on the node.
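For example, to skip a test unless the node has a SCSI local SSD with a filesystem (node being a *v1.Node obtained from the test's node list):

	SkipUnlessLocalSSDExists("scsi", "fs", node)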
func VolumeCreateAndAttach ¶
func VolumeCreateAndAttach(client clientset.Interface, namespace string, sc []*storageV1.StorageClass, volumeCountPerInstance int, volumesPerPod int, nodeSelectorList []*NodeSelector, nodeVolumeMapChan chan map[string][]string, vsp *vsphere.VSphere)
VolumeCreateAndAttach performs create and attach operations of vSphere persistent volumes at scale.
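A hedged calling sketch; the channel presumably reports a map from node name to the volumes attached there, and every variable below is assumed to come from test setup:

	nodeVolumeMapChan := make(chan map[string][]string)
	go VolumeCreateAndAttach(client, namespace, scList, volumeCountPerInstance, volumesPerPod, nodeSelectorList, nodeVolumeMapChan, vsp)
	nodeVolumeMap := <-nodeVolumeMapChan // blocks until the goroutine sends its result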
Types ¶
type NodeSelector ¶
type NodeSelector struct {
// contains filtered or unexported fields
}
NodeSelector holds the label key/value pair used to schedule a test pod onto a specific node.
Source Files ¶
- empty_dir_wrapper.go
- flexvolume.go
- framework.go
- pd.go
- persistent_volumes-disruptive.go
- persistent_volumes-gce.go
- persistent_volumes-local.go
- persistent_volumes-vsphere.go
- persistent_volumes.go
- pv_reclaimpolicy.go
- pvc_label_selector.go
- volume_io.go
- volume_metrics.go
- volume_provisioning.go
- volumes.go
- vsphere_scale.go
- vsphere_statefulsets.go
- vsphere_stress.go
- vsphere_utils.go
- vsphere_volume_cluster_ds.go
- vsphere_volume_datastore.go
- vsphere_volume_diskformat.go
- vsphere_volume_disksize.go
- vsphere_volume_fstype.go
- vsphere_volume_master_restart.go
- vsphere_volume_node_poweroff.go
- vsphere_volume_ops_storm.go
- vsphere_volume_perf.go
- vsphere_volume_placement.go
- vsphere_volume_vsan_policy.go