General idea
We at @SchweizerischeBundesbahnen have a lot of projects that need changes all the time. As those settings are (and that is fine) limited to administrator roles, we had to do a lot of manual changes, such as:
OpenShift:
- Creating new projects with certain attributes
- Updating projects metadata like billing information
- Updating project quotas
- Creating service-accounts
Persistent storage:
- Create gluster volumes
- Increase the size of a gluster volume
- Create PV, PVC, Gluster Service & Endpoints in OpenShift
Billing:
- Create billing reports for different platforms
AWS:
- Create and manage AWS S3 Buckets
Sematext:
- Create and manage Sematext Logsene apps
So we built this tool, which allows users to do these things in self-service. The tool checks permissions & certain conditions.
Components
Installation & Documentation
Self-Service Portal
# Create a project & a service-account
oc new-project ose-selfservice-backend
oc create serviceaccount ose-selfservice
# Add a cluster policy for the portal:
oc create -f clusterPolicy-selfservice.yml
# Add policy to service account
oc adm policy add-cluster-role-to-user ose:selfservice system:serviceaccount:ose-selfservice-backend:ose-selfservice
# Use the token of the service account in the container
Then just create the app from the Dockerfile with 'oc new-app'.
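For example (a sketch; the app name and repository URL are assumptions, adjust to your setup):
# read the service-account token for the OPENSHIFT_TOKEN parameter
oc sa get-token ose-selfservice -n ose-selfservice-backend
# build and deploy the backend from the Dockerfile in this repository
oc new-app <repository-url> --strategy=docker --name=ssp-backend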
Parameters
| Param | Description | Example |
| --- | --- | --- |
| GIN_MODE | Mode of the Gin web framework | debug/release |
| LDAP_URL | Your LDAP server | ldap.xzw.ch |
| LDAP_BIND_DN | LDAP bind DN | cn=root |
| LDAP_BIND_CRED | LDAP bind credentials | secret |
| LDAP_SEARCH_BASE | LDAP search base | ou=passport-ldapauth |
| LDAP_FILTER | LDAP filter | (uid=%s) |
| SESSION_KEY | A secret password to encrypt session information | secret |
| OPENSHIFT_API_URL | Your OpenShift API URL | https://master01.ch:8443 |
| OPENSHIFT_TOKEN | The token from the service account | |
| MAX_QUOTA_CPU | How many CPUs a user can assign to their project | 30 |
| MAX_QUOTA_MEMORY | How many GB of memory a user can assign to their project | 50 |
| GLUSTER_API_URL | The URL of your Gluster API | http://glusterserver01:80 |
| GLUSTER_SECRET | The basic auth password you configured on the Gluster API | secret |
| GLUSTER_IPS | IP addresses of the Gluster endpoints | 192.168.1.1,192.168.1.2 |
| MAX_VOLUME_GB | How many GB of storage a user can order | 100 |
| DDC_API | URL of the DDC Billing API | http://ddc-api.ch |
| AWS_PROD_ACCESS_KEY_ID | AWS access key ID to manage AWS resources for production buckets | AKIAIOSFODNN7EXAMPLE |
| AWS_PROD_SECRET_ACCESS_KEY | AWS secret access key to manage AWS resources for production buckets | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| AWS_NONPROD_ACCESS_KEY_ID | AWS access key ID to manage AWS resources for development buckets | AKIAIOSFODNN7EXAMPLE |
| AWS_NONPROD_SECRET_ACCESS_KEY | AWS secret access key to manage AWS resources for development buckets | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| AWS_S3_BUCKET_PREFIX | Prefix for all generated S3 buckets | mycompany |
| AWS_REGION | Region for all the AWS artifacts | eu-central-1 |
| SEMATEXT_API_TOKEN | Admin token for Sematext Logsene apps | mytoken |
| SEMATEXT_BASE_URL | Base URL for Sematext | for EU: https://apps.eu.sematext.com/ |
| LOGSENE_DISCOUNTCODE | Discount code for Sematext (optional) | yourcode |
| SEC_API_PASSWORD | Password for basic auth login of the SEC_API user (optional) | pass |
| NFS_API_URL | The URL of your NFS API (optional) | https://somenfsapi.ch |
| NFS_API_SECRET | The password of the NFS API (optional) | somesecret |
| NFS_PROXY | The proxy to access the NFS API (optional) | https://someproxy.ch:1234 |
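On OpenShift these parameters are passed to the backend as environment variables. A sketch using oc set env (the deployment config name ssp-backend and the values are assumptions):
oc set env dc/ssp-backend GIN_MODE=release OPENSHIFT_API_URL=https://master01.ch:8443 MAX_QUOTA_CPU=30 MAX_QUOTA_MEMORY=50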
Route timeout
The api/aws/ec2 endpoints wait until VMs have reached the desired state. This can exceed the default route timeout and result in a 504 error on the client.
Increasing the route timeout is described here: https://docs.openshift.org/latest/architecture/networking/routes.html#route-specific-annotations
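For example, a sketch using the route annotation described there (route name and timeout value are assumptions):
# raise the HAProxy timeout for the backend route
oc annotate route ssp-backend --overwrite haproxy.router.openshift.io/timeout=5m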
The GlusterFS API
Use/see the service unit file in ./glusterapi/install/
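A minimal install sketch, assuming the unit file is named glusterapi.service:
sudo cp ./glusterapi/install/glusterapi.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable glusterapi
sudo systemctl start glusterapi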
Parameters
glusterapi -poolName=your-pool -vgName=your-vg -basePath=/your/mount -secret=yoursecret -port=yourport
# poolName = The name of the existing LV-pool that should be used to create new logical volumes
# vgName = The name of the VG the pool resides on
# basePath = The path where the new volumes should be mounted. E.g. /gluster/mypool
# secret = The basic auth secret you specified above in the SSP
# port = The port where the server should run
# maxGB = Optionally specify the maximum size in GB a volume can have. Default is 100
Monitoring endpoints
The Gluster API has two public endpoints for monitoring purposes. Call them as follows:
The first endpoint returns usage statistics:
curl <yourserver>:<port>/volume/<volume-name>
{"totalKiloBytes":123520,"usedKiloBytes":5472}
The check endpoint reports whether the current usage percentage is below the defined threshold:
# Successful response
curl -i <yourserver>:<port>/volume/<volume-name>/check\?threshold=20
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 12 Jun 2017 14:23:53 GMT
Content-Length: 38
{"message":"Usage is below threshold"}
# Error response
curl -i <yourserver>:<port>/volume/<volume-name>/check\?threshold=3
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
Date: Mon, 12 Jun 2017 14:23:37 GMT
Content-Length: 70
{"message":"Error used 4.430051813471502 is bigger than threshold: 3"}
For the other (internal) endpoints, see the code (glusterapi/main.go).
Contributing
The backend can be started with Docker. All required environment variables must be set in the env_vars file.
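A minimal env_vars sketch with placeholder values from the parameter table (only a subset shown); the docker run command below strips the shell-style export and the quotes, because --env-file does not support them:
export GIN_MODE='debug'
export LDAP_URL='ldap.xzw.ch'
export LDAP_BIND_DN='cn=root'
export LDAP_BIND_CRED='secret'
export LDAP_SEARCH_BASE='ou=passport-ldapauth'
export LDAP_FILTER='(uid=%s)'
export SESSION_KEY='secret'
export OPENSHIFT_API_URL='https://master01.ch:8443'
export OPENSHIFT_TOKEN='<service-account-token>'
export MAX_QUOTA_CPU='30'
export MAX_QUOTA_MEMORY='50'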
# without proxy:
docker build -t ssp-backend .
# with proxy:
docker build --build-arg https_proxy=http://proxy.ch:9000 -t ssp-backend .
# --env-file does not support 'export' or quotes, so they are stripped from env_vars
docker run -it --rm -p 8080:8080 --env-file <(sed "s/export\s//" env_vars | tr -d "'") ssp-backend
There is a small script for locally testing the API. It handles authorization (login, token, etc.).
go run curl.go [-X GET/POST] http://localhost:8080/api/...