README
CDN In a Box (containerized)
This is intended to simplify the process of creating a "CDN in a box", easing the barrier to entry for newcomers as well as providing a way to spin up a minimal CDN for full system testing.
Note: For a more in-depth discussion of the CDN in a Box system, please see the official documentation.
Setup
The containers run on Docker, and require Docker (tested v17.05.0-ce) and Docker Compose (tested v1.9.0) to build and run. On most 'nix systems these can be installed via the distribution's package manager under the names docker-ce and docker-compose, respectively (e.g. sudo dnf install docker-ce).
Each container (except the origin) requires an .rpm file to install the Traffic Control component for which it is responsible. You can download these *.rpm files from an archive (e.g. under "Releases"), use the provided Makefile to generate them (simply type make while in the cdn-in-a-box directory), or create them yourself by using the pkg script at the root of the repository. If you choose the latter, copy the *.rpms without any version/architecture information to their respective component directories, such that their filenames are as follows:
edge/trafficcontrol-cache-config.rpm
mid/trafficcontrol-cache-config.rpm
traffic_monitor/traffic_monitor.rpm
traffic_ops/traffic_ops.rpm
traffic_portal/traffic_portal.rpm
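If you built the RPMs with the pkg script, the rename-and-copy step might look like the following sketch. The versioned filenames under dist/ are illustrative assumptions; match the globs to your actual build output:

```shell
# Illustrative only: the exact versioned/architecture-suffixed filenames
# produced under dist/ will differ between releases and build hosts.
cp dist/trafficcontrol-cache-config-*.rpm edge/trafficcontrol-cache-config.rpm
cp dist/trafficcontrol-cache-config-*.rpm mid/trafficcontrol-cache-config.rpm
cp dist/traffic_monitor-*.rpm             traffic_monitor/traffic_monitor.rpm
cp dist/traffic_ops-*.rpm                 traffic_ops/traffic_ops.rpm
cp dist/traffic_portal-*.rpm              traffic_portal/traffic_portal.rpm
```

Note that each cp both copies the file and strips the version/architecture information from its name, which is what the container builds expect.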
Finally, run the test CDN using the command:
docker-compose up --build
Readiness
To know if your CDN in a Box has started up successfully and is ready to use, you can optionally start the "readiness" container which will test your CDN and exit successfully when your CDN in a Box is ready:
docker-compose -f docker-compose.readiness.yml up --build
If the container does not exit successfully after a reasonable amount of time, something might have gone wrong with the main CDN services. Because the container continually runs end-to-end CDN requests, it will never exit successfully if there are issues with the main CDN services that cause the requests to fail. Check the log output of the main CDN services to see what might be getting stuck.
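One way to investigate is to inspect each service's state and logs individually. The service name used below is an assumption; run docker-compose ps first to see the actual service names defined in docker-compose.yml:

```shell
# List all CDN-in-a-Box services and their current states
docker-compose ps

# Follow the logs of a single service to find where startup is stuck
# (service name "trafficops" is an assumption; use a name from `docker-compose ps`)
docker-compose logs -f trafficops
```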
Components
The following assumes that the default configuration provided in
variables.env
is used.
Once your CDN is running, you should see a cascade of output on your terminal. This is
typically the output of the build, then setup, and finally logging infrastructure
(assuming nothing goes wrong). You can now access the various components of the CDN on
your local machine. For example, opening https://localhost
should
show you the default UI for interacting with the CDN - Traffic Portal.
Note: You will likely see a warning about an untrusted or invalid certificate for components that serve over HTTPS (Traffic Ops & Traffic Portal). If you are sure that you are looking at https://localhost:N for some integer N, these warnings may be safely ignored via e.g. the "Add Exception" button (possibly hidden behind e.g. "Advanced Options").
| Service | Ports exposed and their usage | Username | Password |
|---------|-------------------------------|----------|----------|
| DNS | DNS name resolution on 9353 | N/A | N/A |
| Edge-Tier Cache | Apache Trafficserver HTTP caching reverse proxy on port 9000 | N/A | N/A |
| Mid-Tier Cache | Apache Trafficserver HTTP caching forward proxy on port 9100 | N/A | N/A |
| Second Mid-Tier Cache (parent of the first Mid-Tier Cache) | Apache Trafficserver HTTP caching forward proxy on port 9100 | N/A | N/A |
| Mock Origin Server | Example web page served on port 9200 | N/A | N/A |
| Traffic Monitor | Web interface and API on port 80 | N/A | N/A |
| Traffic Ops | API on port 6443 | TO_ADMIN_USER in variables.env | TO_ADMIN_PASSWORD in variables.env |
| Traffic Ops PostgreSQL Database | PostgreSQL connections accepted on port 5432 (database name: DB_NAME in variables.env) | DB_USER in variables.env | DB_USER_PASS in variables.env |
| Traffic Portal | Web interface on 443 (Javascript required) | TO_ADMIN_USER in variables.env | TO_ADMIN_PASSWORD in variables.env |
| Traffic Router | Web interfaces on ports 3080 (HTTP) and 3443 (HTTPS), with a DNS service on 53 and an API on 3333 | N/A | N/A |
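Once ports are exposed on the host (see "Host Ports" below), a quick sanity check of individual services might look like this. The hostnames are assumptions based on the default variables.env configuration; substitute values from your own setup:

```shell
# Query the CDN's DNS service on port 9353
# (hostname is an assumed default from variables.env)
dig @localhost -p 9353 trafficrouter.infra.ciab.test

# Request the mock origin's page through the Edge-Tier cache on port 9000;
# the Host header must match a Delivery Service FQDN (value assumed here)
curl -H "Host: video.demo1.mycdn.ciab.test" http://localhost:9000/
```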
Host Ports
By default, docker-compose.yml
does not expose ports to the host. This allows the host to be running other services on those ports, as well as allowing multiple CDN-in-a-Boxes to run on the same host, without port conflicts.
To expose the ports of each service on the host, add the docker-compose.expose-ports.yml
file. For example, docker-compose -f docker-compose.yml -f docker-compose.expose-ports.yml up
.
Common Pitfalls
Traffic Monitor is stuck waiting for a valid Snapshot
Oftentimes you must take a CDN Snapshot in order for a valid Snapshot to be generated. This can be done through Traffic Portal's "CDNs" view: click on the "CDN-in-a-Box" CDN, press the camera button, and finally the "Perform Snapshot" button.
I'm seeing a failure to open a socket and/or set a socket option
Try disabling SELinux or setting it to 'permissive'. SELinux hates letting containers bind to certain ports. You can also try re-labeling the docker
executable if you feel comfortable.
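To quickly test whether SELinux is the culprit, you can switch it to permissive mode without making a permanent change (this reverts on reboot; for a persistent change, edit /etc/selinux/config):

```shell
# Check the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce

# Temporarily switch to permissive mode; reverts on reboot
sudo setenforce 0
```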
Traffic Vault container exits with cp: missing destination file operand after '/usr/local/share/ca-certificates'
Bring all components down, remove the traffic_ops/ca
directory, and delete the volumes with docker volume prune
. This will force the regeneration of the certificates.
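The recovery steps above might be run as follows, from the cdn-in-a-box directory:

```shell
docker-compose down       # bring all components down
rm -rf traffic_ops/ca     # remove the generated certificate authority directory
docker volume prune       # delete the volumes (confirm when prompted)
docker-compose up --build # certificates are regenerated on the next run
```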