NOTE: Moved to https://github.com/navikt/nada-markedsplassen
# Data management API for NAV

It serves a REST API for managing data products and provides functionality for self-service access to the data sources.
## Getting started with local development
- Install required dependencies
- Configure `gcloud` so you can access Nais clusters
- Log in to GCP and configure Docker (a quick sanity check is shown after this list):

  ```bash
  gcloud auth login --update-adc
  gcloud auth configure-docker europe-north1-docker.pkg.dev

  # There is also a make target for logging in to docker:
  make docker-login
  ```
- (Optional) If you are on a Mac with an ARM CPU (M1, M2, M3, etc.), install Rosetta:

  ```bash
  softwareupdate --install-rosetta
  ```
- Run some build commands:

  ```bash
  # Build all binaries
  make build

  # Run the tests
  make test
  ```
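If you want to verify that the gcloud login above took effect before you start building, listing the authenticated accounts is a quick sanity check:

```bash
# Shows which Google account is currently active for gcloud
gcloud auth list
```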
## Run with fully local resources
With this configuration all dependencies run as containers, as can be seen in `docker-compose.yaml`:
- Google BigQuery using bigquery-emulator, with additional mocks for the IAM Policy endpoints
- Google Cloud Storage using fake-gcs-server
- Metabase with a patch for enabling use of bigquery-emulator
- Fake API servers for teamkatalogen and naisconsole
A couple of services are still missing, though most functionality should work without them:
- Fetching of Google Groups
- Creating Google Cloud Service Accounts
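Even with those gaps, docker compose can tell you exactly which dependency containers the compose file defines at any given time (depending on your installation the command may be `docker-compose` rather than `docker compose`):

```bash
# List the service names defined in docker-compose.yaml
docker compose config --services
```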
- Start the dependencies and the API:

  ```bash
  # Starts the dependencies in the background, and runs the API in the foreground
  $ make run
  ```
- (Optional) Start the nada-frontend
- (Optional) Take a look at the locally running Metabase; the username is `nada@nav.no` and the password is `superdupersecret1`
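Once `make run` is up, a quick way to confirm that the dependency containers are healthy, and to find the host port Metabase is exposed on, is to ask docker compose (the ports and service names come from `docker-compose.yaml`, so treat that file as the source of truth):

```bash
# Show the running dependency containers and their exposed ports;
# the Metabase UI is reachable on whichever host port is mapped for it
docker compose ps
```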
## Making changes to the database or generated models and queries
- Migrations allow you to modify the existing database; these are applied automatically during startup of the application
- Queries let you generate new models and queries based on the existing structure
NB: If you make changes to the Queries, remember to run the generate command so your changes are propagated:

```bash
$ make generate
```
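As a rough sketch of the workflow (the directory layout and file naming are whatever the existing migrations and queries already use, so mirror those rather than this outline):

```bash
# Hypothetical workflow — the real paths and naming come from the existing files:
#   1. add a new numbered .sql migration next to the existing migrations
#      (these are applied automatically when the application starts)
#   2. add or edit the SQL queries that the models are generated from
#   3. regenerate the models and queries:
$ make generate
```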
## Bumping the Metabase version

The file `.metabase_version` controls the version of Metabase that is used in tests and for deployment to dev and prod. Check the Metabase releases page for available versions; we follow the Metabase Enterprise track.
When you bump this version, the following will happen when you open a PR:
- We build two Metabase images, which are used during integration tests and for local development
  - `metabase`: unmodified version of Metabase, used when running nada-backend locally against GCP services
  - `metabase-patched`: modified version of Metabase that allows us to connect to the bigquery-emulator running on the host
- We run the nada-backend integration tests using the new version of Metabase
- We deploy the new version of Metabase to dev
On merge to main:

- We deploy the new version of Metabase to prod
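The bump itself is a one-line change; for example (replace the placeholder with a version from the Metabase Enterprise releases, using the same format as the current contents of the file):

```bash
# Pin the new Metabase version and open a PR with the change
echo "<new-version>" > .metabase_version
```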
## Bumping the Mocks version
In the `Makefile` we set the target version for the mocks. If you change the mocks, you also need to bump `MOCKS_VERSION` so we get the latest changes.
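For example (a sketch, assuming `MOCKS_VERSION` is a plain variable assignment in the `Makefile`):

```bash
# Find the current value, then edit the assignment to the new version by hand
grep -n "MOCKS_VERSION" Makefile
```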
## Update the images locally
We build and push images for the patched Metabase and the customized bigquery-emulator to speed up local development and integration tests. If you need to make changes to these:
- Make changes to the base images

  Note: building the bigquery-emulator requires quite a bit of memory, so if you see something like `clang++: signal: killed` you need to increase the amount of memory allocated to your container runtime (a sketch of one way to do this follows this list).
- Build the new images locally:

  ```bash
  $ make build-all
  ```
- (Optional) Push the images to the container registry; requires that you have run `make docker-login`:

  ```bash
  $ make push-all
  ```
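How you increase the container runtime's memory depends on your setup: with Docker Desktop it is a setting in the UI, while with Colima it can be passed when starting the VM. A sketch for Colima (the amount is just an illustration, and depending on your Colima version you may need to recreate the VM for the change to take effect):

```bash
# Give the container runtime more memory (in GiB) so the bigquery-emulator
# build is not OOM-killed with clang++: signal: killed
colima stop
colima start --memory 8
```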
## Architecture
```mermaid
flowchart TD
    %% Define the layers
    Transport["Transport (e.g., HTTP)"] --> Router["Router (METHOD /path)"]
    Router --> Endpoint["Encoding and decoding (JSON)"]
    Endpoint --> Handler["Handler (e.g., Request Handlers)"]
    Handler --> Service1["Service1 (e.g., Data Processing Service)"]
    Handler --> Service2["Service2 (e.g., Authentication Service)"]
    Handler --> ServiceN["ServiceN"]
    Service1 --> Model1["Model1 (e.g., Big Query Model)"]
    Service2 --> Model2["Model2 (e.g., Data access)"]
    ServiceN --> ModelN["ModelN (e.g., Metabase)"]
    Service1 --> Storage1["Storage1 (e.g., PostgreSQL)"]
    Service2 --> Storage2["Storage2 (e.g., MongoDB)"]
    ServiceN --> StorageN["StorageN"]
    Service1 --> API1["External API 1 (e.g., GCP Big Query API)"]
    Service2 --> API2["External API 2 (e.g., Metabase API)"]
    ServiceN --> APIN["External API N"]

    %% Styling classes
    classDef service fill:#f9f,stroke:#333,stroke-width:2px;
    class Service1,Service2,ServiceN service;
    classDef model fill:#bbf,stroke:#333,stroke-width:2px;
    class Model1,Model2,ModelN model;
    classDef storage fill:#ffb,stroke:#333,stroke-width:2px;
    class Storage1,Storage2,StorageN storage;
    classDef api fill:#bfb,stroke:#333,stroke-width:2px;
    class API1,API2,APIN api;
```