
Development Guide

This guide covers the development process and tools used in the Whitebox project and how you can develop new features or plugins for Whitebox and contribute to the project.

Prerequisites

  • Docker (for running the Whitebox server)
  • git-lfs (for downloading asset files)
  • git (for cloning the repository)

Installation instructions for prerequisites can be found here.

Understanding Docker Configuration

To facilitate easy deployment and testing, Whitebox includes Docker configurations. Dockerfiles define the necessary environment for running or developing Whitebox, including all required dependencies and configuration. Compose files orchestrate the setup of the server, database, and other services.

The Dockerfile is usually configured in two stages: one for building the application and another for running it. This separation optimizes the build process and reduces the final image size.
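
As a rough illustration only (this is not the actual Whitebox Dockerfile; the base images, paths, and commands below are assumptions), a two-stage Dockerfile typically looks like this:

# Illustrative two-stage Dockerfile sketch; base images, paths, and commands
# are assumptions for demonstration, not the actual Whitebox configuration.

# Stage 1: build the application and install its dependencies
FROM python:3.11 AS build
WORKDIR /app
COPY . .
RUN pip install --prefix=/install .

# Stage 2: start from a slim image and copy in only the installed artifacts
FROM python:3.11-slim AS run
WORKDIR /app
COPY --from=build /install /usr/local
COPY --from=build /app /app
CMD ["python", "whitebox/manage.py", "runserver", "0.0.0.0:8000"]

The build stage carries all the tooling needed to install dependencies, while the run stage ships only the installed artifacts, which keeps the final image small.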

Setting up Development Environment

The recommended way for developers working on Whitebox is to use the dev containers: pre-configured Docker containers that include all the necessary dependencies and tools for development. Once launched, a container simply starts an environment.

To access the environment, you will need to enter the container's shell (think "SSH-ing" into the container). From there you can install project dependencies, run the required services, and so on, just as you would in an SSH session.

This allows you to develop code on your local machine and run it in the container without having to rebuild the container each time you make a change.
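
This typically works because the dev compose file bind-mounts your local source tree into the container. As a rough sketch only (the actual compose.dev.yml may differ; the paths and options below are assumptions), the relevant part of a service definition could look like:

# Illustrative compose.dev.yml fragment; paths and options are assumptions,
# not the actual Whitebox configuration.
services:
  backend-dev:
    build:
      context: .
    volumes:
      - ./:/app    # bind-mount local source so code changes are visible in the container
    ports:
      - "8000:8000"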

To start the dev container, run the following command:

docker compose -f compose.dev.yml up -d

You may encounter a warning about orphan containers being found for this project if you already have the Whitebox production containers running, as they share the same project and expose the same ports. It is recommended to stop the production containers before starting the development containers. The warning can look like this:

WARN[0073] Found orphan containers ([frontend backend redis postgres]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.

To enter the backend container shell, run the following command:

docker exec -it backend-dev bash

To enter the frontend container shell, run the following command:

docker exec -it frontend-dev bash

Both containers automatically install all the necessary dev dependencies during the build, so as soon as they are up, you are ready to start developing!

To install new dependencies, make sure to use the dev container shell. It is also recommended to rebuild the dev container after adding new dependencies so they are cached at the build stage, making future builds faster. To rebuild the container, run the following command:

docker compose -f compose.dev.yml up --build -d
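
For example, a typical flow (the package names below are placeholders) is to add the dependency from inside the relevant dev container and then rebuild:

# Backend: add a Python dependency with Poetry (placeholder package name)
docker exec -it backend-dev poetry add some-package

# Frontend: add a JavaScript dependency with npm (placeholder package name)
docker exec -it frontend-dev npm install some-package

# Rebuild the dev containers so the new dependencies are cached at build time
docker compose -f compose.dev.yml up --build -d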

There can be edge cases where container dependencies won't work correctly, for example when dependencies are reverted via Git. Under those circumstances, rebuild the container with the --no-cache option to ensure Docker ignores the dependency cache:

docker compose -f compose.dev.yml build --no-cache frontend-dev
docker compose -f compose.dev.yml up -d

Running the Django backend development server

  1. Ensure development environment is set up.

  2. Enter the backend container shell:

    docker exec -it backend-dev bash
    
  3. Run the Whitebox server:

    make run
    

This will start the Django development server on http://localhost:8000. Any changes you make to the backend will be automatically reloaded.

Plugins' JSX files are transpiled only on backend startup. If you make changes to the plugins' JSX code, you will need to restart the backend server for them to get re-transpiled, and then refresh the frontend app.

Running the React frontend development server

  1. Ensure development environment is set up.

  2. Enter the frontend container shell:

    docker exec -it frontend-dev bash
    
  3. Run the frontend development server:

    npm start
    

This will start the development server on http://localhost:3000. The development server has automatic reloading enabled, so you can see your changes in real-time.

This applies only to the frontend project's code. If you make changes to the plugins' JSX code, you will need to refresh the page for the changes to take effect.

Commands

Whitebox includes a Makefile with commands for common development tasks, such as:

  • download_external_assets: For Whitebox to function offline, it needs to download external assets (videos, external JS libraries, etc.) for both plugins and Whitebox itself. This command downloads all the external assets.
  • build_federation_modules: Builds federation modules for all the plugins that are available to Whitebox.
  • clean: Removes all downloaded external assets. Note that Whitebox will not function without these assets.
  • run: Start the Whitebox server for production
  • run-dev: Start the Whitebox server for development
  • test: Run tests
  • migrate: Apply database migrations
  • fmt: Format code using ruff
  • run-mkdocs: Run the MkDocs server for documentation
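
For example, to format the backend code with ruff, enter the backend dev container shell and run the corresponding target:

# Enter the backend container shell
docker exec -it backend-dev bash

# Format code using ruff
make fmt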

For creating migrations, use the default Django commands:

poetry run python whitebox/manage.py makemigrations <app_name>
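
After creating migrations, they can be applied with the migrate target, for example from the backend dev container:

# Apply database migrations inside the backend dev container
docker exec -it backend-dev make migrate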

Testing

Backend testing

Whitebox implements a custom test runner that allows running tests for Whitebox and plugins within a single test suite by discovering all plugin tests and loading them dynamically.

To run your plugin's tests along with the Whitebox tests, make sure your plugin adheres to the guidelines outlined in the Plugin Guide.
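
With the development environment up, the backend test suite (including any discovered plugin tests) can be run via the test target:

# Run the backend tests, including dynamically discovered plugin tests
docker exec -it backend-dev make test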

Frontend testing

Whitebox uses Vitest for frontend unit testing, and Playwright for integration testing. To run frontend tests:

  1. Ensure development environment is set up.

  2. Enter the frontend container shell:

    docker exec -it frontend-dev bash

  3. Run the frontend tests:

    make test

This will run the unit test suite first, followed by the integration test suite.

End-to-end testing

Whitebox uses Playwright for end-to-end testing. The setup steps for end-to-end testing are slightly different from the other tests, as it requires running both the frontend and backend servers together.

As the frontend and backend are running in separate containers, and the tests will be run from the frontend container, the target hosts will be different: the frontend will effectively live on localhost, while the backend will be on the backend host (if using production containers) or the backend-dev host (if using development containers).

For the tests to run successfully, you will need to ensure that the frontend application is configured to use the correct backend host from within the container.

To run end-to-end tests:

  1. Ensure development environment is set up.

  2. Run the backend environment that the frontend will use:

    docker exec -it backend-dev make run

  3. Run the frontend environment, ensuring that the backend host is set correctly for the frontend build:

    docker exec -it frontend-dev env VITE_API_HOST=backend-dev make run

  4. Run the test suite on the frontend container:

    docker exec -it frontend-dev make e2e_test

Running the tests on BrowserStack

Whitebox uses BrowserStack for running frontend integration and end-to-end tests on multiple browsers and devices.

When run during CI, the end-to-end tests are run against the sandbox environment, which is set up in one of the steps before the BrowserStack testing step. You can also run the tests on BrowserStack from localhost if you have an account.

BrowserStack is configured to run through a local tunnel, which allows for testing on local environments.

To run the tests on BrowserStack, you will need the following:

  • BrowserStack username, later set as environment variable named BROWSERSTACK_USERNAME
  • BrowserStack access key, later set as environment variable named BROWSERSTACK_ACCESS_KEY

Similarly to the standard end-to-end test run, you'll need to set up the frontend and backend environments, and then run the tests against them:

  1. Ensure development environment is set up.

  2. Run the backend environment that the frontend will use:

    docker exec -it backend-dev make run

  3. Run the frontend environment, ensuring that the backend host is set correctly for the frontend build:

    docker exec -it frontend-dev env VITE_API_HOST=backend-dev make run

  4. Run the BrowserStack test suites on the frontend container:

    # Enter the frontend container shell
    docker exec -it frontend-dev bash

    # Set the BrowserStack credentials in the environment
    export BROWSERSTACK_USERNAME=<your_browserstack_username>
    export BROWSERSTACK_ACCESS_KEY=<your_browserstack_access_key>

    # Run the integration tests on BrowserStack
    npm run test_integration:browserstack

    # Run the end-to-end tests on BrowserStack
    npm run test_e2e:browserstack

By default, the BrowserStack tests will run using the BrowserStack Local proxy tunnel, which allows for testing on local environments. When run, the test suite automatically downloads the BrowserStackLocal binary, which acts as a proxy between the BrowserStack cloud and your local network. This is useful when you want to run tests against the dev container running on your local machine.

As BrowserStack Local has been a bit flaky in our experience, this feature is disabled when tests run on CI, using an override: --browserstack.local="false"

If you want to disable this locally (e.g. to test against a sandbox), you can run the integration tests with:

E2E_TEST_URL="TARGET_URL_OF_THE_SANDBOX" npm run test_integration:browserstack -- --browserstack.local="false"

or end-to-end tests with:

E2E_TEST_URL="TARGET_URL_OF_THE_SANDBOX" npm run test_e2e:browserstack -- --browserstack.local="false"

Debugging

VSCode

  1. Install VSCode and open the whitebox project in VSCode.
  2. Add Python language support by installing the extension.
  3. JS language support and Chrome debugger support are available by default.
  4. If using the Firefox browser, you will need to install this extension.

Backend Debugging

  1. SSH into backend dev container: docker exec -it backend-dev bash
  2. Run: make debug (server won't start, move to next steps)
  3. Click the Run & Debug Logo located on the left sidebar.
  4. From the drop-down next to the play button, select Debug Backend.
  5. Click the play button to debug.

Frontend Debugging

  1. SSH into frontend dev container: docker exec -it frontend-dev bash
  2. Run: make debug
  3. Click the Run & Debug Logo located on the left sidebar.
  4. From the drop-down next to the play button, select Debug Frontend (Chrome) OR Debug Frontend (Firefox).
  5. Click the play button to debug.

PyCharm

Remote debugging in PyCharm works only in the Professional Edition; the Community Edition does not support remote debugging in any capacity. Even with the Professional Edition, it can take some work to fully set up.

You can find the detailed explanation and setup steps in PyCharm debugging.

Contributing

  1. Fork the repository
  2. Create a new branch for your feature
  3. Make your changes
  4. Run tests and ensure they pass
  5. Submit a pull request

Google Docstring Conventions should be followed for all code documentation.
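
For example, a Google-style docstring looks like the following (the function and its arguments are purely hypothetical, for illustration only):

def get_plugin_assets(plugin_name, include_external=False):
    """Collect the asset paths registered by a plugin.

    Args:
        plugin_name: Name of the plugin to look up.
        include_external: Whether to also include downloaded external assets.

    Returns:
        A list of asset paths belonging to the plugin.

    Raises:
        KeyError: If no plugin with the given name is registered.
    """
    ...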

Documentation

Documentation for Whitebox is generated using MkDocs. To run the documentation locally, make sure you have the Whitebox repository cloned and set up as per the instructions above.

To run the documentation server:

  1. Ensure development environment is set up.

  2. Enter the backend container shell:

    docker exec -it backend-dev bash

  3. Run the documentation server:

    make run-mkdocs

Versioning

Whitebox uses Semantic Versioning for versioning. In the CI configuration, the update_version stage is responsible for updating the version of the project. Backend and frontend versions are kept in sync, with the backend's version used as a reference point.

When a merge request is merged to main, the patch version is first bumped in the pyproject.toml file for the backend, and that same version number is then applied to package.json for the frontend. This is done by the script located in packaging/scripts/maintenance/whitebox_update_version.py.

Afterward, a commit will be made with the new version, which will then be tagged, and the CI will push these changes to the repository.

Adding temporary dependencies to CI

Sometimes you need to add a temporary dependency to the CI environment that you do not want to be included in the project's dependencies upon merge. You may want to do this when you are working on Whitebox core and a plugin in parallel. In these cases, the plugin's changes would only be available on its own branch, which is not published on PyPI, so you have to install it directly from Git.

To do this, you can add the dependency to the temporary-dependencies Poetry group. Tests and the sandbox will be run with these temporary dependencies installed (they take precedence over the "default" ones from the pyproject.toml file), but as it's an optional group, they won't be included in the final project dependencies. This change will be safe to merge, as the CI will perform the cleanup (for more info, take a look at how the maintenance CI step works).

You can reference a Git branch directly in the poetry add command, by doing:

poetry add --group temporary-dependencies git+GIT_URL#BRANCH_NAME

For example, to add a temporary dependency with Git URL https://gitlab.com/whitebox-aero/whitebox-plugin-gps-display.git, with a branch feature/my-new-feature, you can run:

poetry add --group temporary-dependencies git+https://gitlab.com/whitebox-aero/whitebox-plugin-gps-display.git#feature/my-new-feature

Additionally, during development you can verify that the built Docker image includes the temporary dependencies by adding the TEMPORARY_DEPENDENCIES=1 environment variable to the docker compose command:

# For building & running the dev container
TEMPORARY_DEPENDENCIES=1 docker compose -f compose.dev.yml up -d --build

# For building & running the prod container
TEMPORARY_DEPENDENCIES=1 docker compose up -d --build

Next Steps