Commit 1b228550 authored by Ilya Rassadin's avatar Ilya Rassadin

Add Docker-based integration tests for deploy templates

Spins up a recipient container (sshd + webmaster user) and a deployer
container (rsync + bats-core) to exercise the actual shell commands from
.deploy_code_to_server and .deploy_code_to_server_with_delete. Covers
file ownership, rsync-filter protect/exclude rules, DOC_ROOT_NAME rename,
and CMS-specific filters (bitrix). Also wires the tests into .gitlab-ci.yml
as a Docker-in-Docker integration-test stage.
parent 973fd14e

.gitignore

0 → 100644
+1 −0
tests/integration/.venv/
.gitlab-ci.yml

+20 −0
stages:
  - validate
  - integration-test

pre-commit:
  stage: validate
@@ -18,3 +19,22 @@ pre-commit:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "master"

integration-test:
  stage: integration-test
  image: docker:29
  services:
    - docker:29-dind
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375
  before_script:
    - docker info
    - docker compose version
  script:
    - docker compose -f tests/integration/docker-compose.yml run --rm deployer
  after_script:
    - docker compose -f tests/integration/docker-compose.yml down --volumes
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "master"
+18 −0
@@ -100,3 +100,21 @@ deploy_code_to_server_with_delete:
- Anchors (`&anchor_name`) are YAML anchors merged with `<<: *anchor_name` — not GitLab extends
- `extends:` pulls in the GitLab job template; `variables:` in the project job override template defaults
- Changes here affect all projects that include from master — test carefully
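Concretely, the two mechanisms differ like this (variable and job names are illustrative, not the real template):

```yaml
.defaults: &defaults                # plain YAML anchor
  RSYNC_OPTIONS: "-az"

.deploy_code_to_server:
  variables:
    <<: *defaults                   # YAML merge key: inlined by the parser itself

deploy_prod:
  extends: .deploy_code_to_server   # GitLab-level merge of the whole job template
  variables:
    RSYNC_OPTIONS: "-az --delete"   # project value overrides the template default
```

The anchor merge happens before GitLab ever sees the file, so it cannot cross `include:` boundaries; `extends:` is resolved by GitLab and can.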

## Integration Tests

End-to-end tests live in `tests/integration/`. They spin up two Docker containers (a recipient sshd server and a deployer running bats) and exercise the actual rsync/ssh commands from the deploy templates.

Run locally from the repo root:

```bash
docker compose -f tests/integration/docker-compose.yml run --rm deployer
```

**Rebuild the deployer image** after changing any `rsync-filter*` file, fixture, or `.bats` test — those files are baked into the image at build time:

```bash
docker compose -f tests/integration/docker-compose.yml build deployer
```

See `README.md` for full details.

README.md

0 → 100644
+78 −0
# Integration Tests

End-to-end tests for the deploy job templates. Two Docker containers are spun up:

- **recipient** — sshd server with a `webmaster` non-root user owning `/var/www/vhost.tld`; seeded with files that protect/delete rules should preserve or remove
- **deployer** — runs the same rsync/ssh commands the CI templates execute; assertions written with [bats-core](https://github.com/bats-core/bats-core)
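The two-container topology can be sketched as a compose file (the service names match the commands in this README; build contexts and the `command:` are assumptions):

```yaml
services:
  recipient:
    build: recipient/      # sshd + webmaster user (assumed build context)
  deployer:
    build: .               # rsync + bats-core + generated scripts (assumed build context)
    depends_on:
      - recipient
```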

## How it works

At image build time, `tests/integration/extract_scripts.py` parses
`before_script.yml` and `.deploy_code_to_server.yml` with PyYAML (YAML anchors
are expanded automatically) and writes shell scripts to `/tmp/generated/`:

| Generated script | Source |
|---|---|
| `set_path_vars.sh` | `&set_path_vars` anchor in `before_script.yml` |
| `deploy_code_to_server.sh` | `script:` of `.deploy_code_to_server` |
| `deploy_code_to_server_with_delete.sh` | `script:` of `.deploy_code_to_server_with_delete` |

The bats tests call these generated scripts via `run_deploy` / `run_deploy_with_delete`
helpers, so changes to the production YAML are exercised directly. Note that the scripts
are regenerated at image build time, so a rebuild of the deployer image is needed to pick
up YAML changes.
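The helper layer can be sketched as follows (a sketch only; the real helpers in the test suite may differ):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the run_deploy helper used by the bats tests.
# Paths follow this README; the helper's actual body is an assumption.
run_deploy() {
    # Load variables produced from the &set_path_vars anchor.
    # shellcheck disable=SC1091
    source /tmp/generated/set_path_vars.sh
    # Run the script extracted from .deploy_code_to_server's script: block.
    bash /tmp/generated/deploy_code_to_server.sh "$@"
}
```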

A mock `curl` binary (`tests/integration/bin/`) intercepts downloads from
`gitlab.cetera.ru/boilerplate/ci/raw/master/rsync-filter-*` and serves the
local filter files instead.
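The interception hinges on a single bash regex capture, reproduced here in isolation (the URL is just an example):

```bash
#!/usr/bin/env bash
# The same capture the mock uses: everything after .../raw/master/ is the
# filename to serve from the local /ci-repo mount.
URL="http://gitlab.cetera.ru/boilerplate/ci/raw/master/rsync-filter-bitrix"
if [[ "$URL" =~ gitlab\.cetera\.ru/boilerplate/ci/raw/master/(.+)$ ]]; then
    echo "served locally from /ci-repo/${BASH_REMATCH[1]}"
else
    echo "falls through to the real curl"
fi
```

Any argument that does not match the pattern makes the mock fall through to `/usr/bin/curl` with the original arguments.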

## Prerequisites

- Docker with Compose plugin (`docker compose version`)

## Running locally

```bash
# From the repo root:
docker compose -f tests/integration/docker-compose.yml run --rm deployer
```

Tear down afterwards:

```bash
docker compose -f tests/integration/docker-compose.yml down --volumes
```

## Rebuilding images

Images are built automatically on first run. Rebuild explicitly when you change:

| Changed file | Rebuild command |
|---|---|
| `before_script.yml`, `.deploy_code_to_server.yml`, `rsync-filter*`, fixtures, `.bats` files, or `extract_scripts.py` | `docker compose -f tests/integration/docker-compose.yml build deployer` |
| `recipient/Dockerfile` or `recipient/entrypoint.sh` | `docker compose -f tests/integration/docker-compose.yml build recipient` |

## Test coverage

| File | What it tests |
|---|---|
| `tests/deploy_basic.bats` | Deploy `www/` as root (with `--chown`) and as non-root; verifies file existence and ownership |
| `tests/deploy_with_delete.bats` | `protect` rules preserve `vendor/` and `.htaccess`; `exclude` rules keep `.env.project`, `Makefile`, `working/` off the server; stale server files are deleted; bitrix filter protects `www/upload/` and `www/bitrix/` |
| `tests/deploy_doc_root_name.bats` | `DOC_ROOT_NAME` rename logic: `www/` is renamed to the custom name before deploy |
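In rsync filter syntax, the protect/exclude split tested above looks roughly like this (patterns are illustrative, not the shipped filter files):

```
# P = protect: never delete on the receiver, even under --delete
P /vendor/
P /.htaccess
# - = exclude: never transferred from the sender at all
- /.env.project
- /Makefile
- /working/
```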

## Running the script extractor locally

The extractor requires PyYAML. Use a virtual environment to keep it isolated:

```bash
python3 -m venv tests/integration/.venv
source tests/integration/.venv/bin/activate
pip install -r tests/integration/requirements.txt
python3 tests/integration/extract_scripts.py /tmp/generated
```

Inside Docker the `python3-yaml` apt package is used instead — no venv needed there.

## Adding tests

1. Create a new `.bats` file in `tests/integration/tests/`.
2. Start with `load 'helpers'` for shared SSH helpers and `reset_recipient` to restore server state in `setup()`.
3. Add fixtures under `tests/integration/fixtures/` if needed and rebuild the deployer image.
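Following those steps, a new test file might start like this (the test body, the asserted path, and the `ssh_recipient` helper name are hypothetical; `load 'helpers'`, `reset_recipient`, and `run_deploy` come from the existing suite). This is a bats file, runnable with bats-core rather than plain bash:

```bash
#!/usr/bin/env bats
# tests/integration/tests/deploy_example.bats (hypothetical)

load 'helpers'

setup() {
    reset_recipient          # restore the recipient container to a known state
}

@test "deploy copies a fixture file to the server" {
    run_deploy
    # assert over ssh that the file landed; path is illustrative
    run ssh_recipient test -f /var/www/vhost.tld/www/index.php
    [ "$status" -eq 0 ]
}
```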
tests/integration/bin/curl

+39 −0
#!/bin/bash
# Mock curl: intercept gitlab.cetera.ru boilerplate CI file downloads,
# serve from /ci-repo instead of hitting the network.

URL=""
OUTPUT_FILE=""
USE_REMOTE_NAME=false
NEXT_IS_OUTPUT=false

for arg in "$@"; do
    if $NEXT_IS_OUTPUT; then
        OUTPUT_FILE="$arg"
        NEXT_IS_OUTPUT=false
        continue
    fi
    case "$arg" in
        -o) NEXT_IS_OUTPUT=true ;;
        -O) USE_REMOTE_NAME=true ;;
        http://*|https://*) URL="$arg" ;;
    esac
done

if [[ "$URL" =~ gitlab\.cetera\.ru/boilerplate/ci/raw/master/(.+)$ ]]; then
    remote_file="${BASH_REMATCH[1]}"
    src="/ci-repo/$remote_file"
    if $USE_REMOTE_NAME; then
        OUTPUT_FILE="$(basename "$remote_file")"
    fi
    if [[ -f "$src" ]]; then
        if [[ -n "$OUTPUT_FILE" ]]; then
            cp "$src" "$OUTPUT_FILE"
        else
            cat "$src"
        fi
        exit 0
    fi
fi

exec /usr/bin/curl "$@"