- Helm Chart:
- Added `app.kubernetes.io/component: exporter` to the exporter pod
labels via `values.yaml` to match the service selector.
- Updated image tag defaulting in `exporter-controller.yaml` to use
`.Chart.AppVersion` directly (e.g., "0.1.0" instead of "v0.1.0").
- Build Process (`.github/workflows/release.yml`):
- Configured `docker/metadata-action` to ensure image tags are generated
without a 'v' prefix (e.g., "0.1.0" from Git tag "v0.1.0").
This aligns the published image tags with the Helm chart's
updated image tag references.
- Repository:
- Added `rendered-manifests.yaml` and `rendered-manifests-updated.yaml`
to `.gitignore`.
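The tag-stripping behavior can be sketched as a `docker/metadata-action` configuration along these lines (step name, action version, and image path are assumptions, not taken from the workflow):

```yaml
# .github/workflows/release.yml (sketch)
- name: Docker metadata
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/${{ github.repository }}
    # For Git tag "v0.1.0", {{version}} resolves to "0.1.0" (no "v" prefix),
    # matching the chart's .Chart.AppVersion-based image tag.
    tags: |
      type=semver,pattern={{version}}
```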
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Adds missing Helm dependency setup steps (repo add, dependency build) to the release workflow, mirroring the CI workflow. This ensures that dependencies are correctly handled during linting and packaging in the release process.
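The added steps would look roughly like this (the repo alias and step names are assumptions; the repository URL and chart path come from elsewhere in this log):

```yaml
# Sketch of the dependency setup steps mirrored from CI
- name: Add Helm repositories
  run: |
    helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/
    helm repo update
- name: Build chart dependencies
  run: helm dependency build ./charts/iperf3-monitor
```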
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
* fix: Correct common lib repo URL, rename exporter template
- Reverted common library repository URL in Chart.yaml to
https://bjw-s-labs.github.io/helm-charts/.
- Ensured helm dependency commands are run after adding repositories.
- Renamed exporter template from exporter-deployment.yaml to
exporter-controller.yaml to better reflect its new role with the common library.
Note: Full helm lint/template validation with dependencies was not possible
in the automated environment due to issues with dependency file persistence
in the sandbox.
* fix: Integrate bjw-s/common library for exporter controller
- Corrected bjw-s/common library repository URL in Chart.yaml to the
traditional HTTPS URL and ensured dependencies are fetched.
- Renamed exporter template to exporter-controller.yaml.
- Updated exporter-controller.yaml to correctly use
`bjw-s.common.render.controllers` for rendering.
- Refined the context passed to the common library to include Values, Chart,
Release, and Capabilities, and initialized expected top-level keys
(global, defaultPodOptionsStrategy) in the Values.
- Ensured image.tag is defaulted to Chart.AppVersion in the template data
to pass common library validations.
- Helm lint and template commands now pass successfully for both
Deployment and DaemonSet configurations of the exporter.
* fix: Set dependencies.install to false by default
- Changed the default value for `dependencies.install` to `false` in values.yaml.
- Updated comments to clarify that users should explicitly enable it if they
need the chart to install a Prometheus Operator dependency.
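A minimal sketch of the resulting values (comment wording is illustrative):

```yaml
# charts/iperf3-monitor/values.yaml (sketch)
dependencies:
  # Set to true to have this chart install a Prometheus Operator
  # dependency; leave false when the operator is managed separately.
  install: false
```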
* fix: Update CI workflow to add Helm repositories and build dependencies
* hotfix: Pass .Template to common lib for tpl context
- Updated exporter-controller.yaml to include .Template in the dict
passed to the bjw-s.common.render.controllers include.
- This is to resolve a 'cannot retrieve Template.Basepath' error
encountered with the tpl function in older Helm versions (like v3.10.0 in CI)
when the tpl context does not contain the .Template object.
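Putting the pieces from these commits together, the context dict passed to the common library would look roughly like this (the exact template code is a sketch, not the file's verbatim contents):

```yaml
{{- /* charts/iperf3-monitor/templates/exporter-controller.yaml (sketch) */}}
{{- $ctx := dict
      "Values"       .Values
      "Chart"        .Chart
      "Release"      .Release
      "Capabilities" .Capabilities
      "Template"     .Template
}}
{{- include "bjw-s.common.render.controllers" $ctx }}
```

Including `.Template` gives `tpl` the `Template.BasePath` it expects under older Helm releases such as v3.10.0.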
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
* feat: Implement iperf3 exporter core logic and log level configuration
This commit completes the core functionality of the iperf3 exporter and adds flexible log level configuration.
Key changes:
- Added command-line (`--log-level`) and environment variable (`LOG_LEVEL`) options to configure the logging level.
- Implemented the main test orchestration loop (`main_loop`) which:
- Discovers iperf3 server pods via the Kubernetes API.
- Periodically runs iperf3 tests (TCP/UDP) between the exporter pod and discovered server pods.
- Avoids self-testing.
- Uses configurable test intervals, server ports, and protocols.
- Requires `SOURCE_NODE_NAME` to be set.
- Refined the `parse_and_publish_metrics` function to:
- Accurately parse iperf3 results for bandwidth, jitter, packets, and lost packets.
- Set `IPERF_TEST_SUCCESS` metric (0 for failure, 1 for success).
- Zero out all relevant metrics for a given path upon test failure to prevent stale data.
- Handle UDP-specific metrics correctly, zeroing them for TCP tests.
- Improved robustness in accessing iperf3 result attributes.
- Updated the main execution block to initialize logging, start the Prometheus HTTP server, and invoke the main loop.
- Added comprehensive docstrings and inline comments throughout `exporter/exporter.py` for improved readability and maintainability.
These changes align the exporter's implementation with the details specified in the design document (docs/DESIGN.MD).
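The flag/env-var precedence for the log level can be sketched as follows (the function name and exact argument handling are illustrative, not the exporter's actual code):

```python
import argparse
import logging
import os


def configure_logging(argv=None):
    """Resolve the log level: --log-level beats LOG_LEVEL, which beats INFO."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--log-level",
        default=os.environ.get("LOG_LEVEL", "INFO"),
        help="DEBUG, INFO, WARNING, ERROR, or CRITICAL",
    )
    args = parser.parse_args(argv)
    # Map the name to a logging constant; fall back to INFO on bad input.
    level = getattr(logging, args.log_level.upper(), None)
    if not isinstance(level, int):
        level = logging.INFO
    logging.basicConfig(level=level,
                        format="%(asctime)s %(levelname)s %(message)s")
    return level
```

This keeps the command-line flag authoritative while still honoring the `LOG_LEVEL` environment variable injected by the Helm chart.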
* feat: Update Helm chart and CI for exporter enhancements
This commit introduces updates to the Helm chart to support log level
configuration for the iperf3 exporter, and modifies the CI workflow
to improve image tagging for pull requests.
Helm Chart Changes (`charts/iperf3-monitor`):
- Added `exporter.logLevel` to `values.yaml` (default: "INFO") to allow
configuring the exporter's log level.
- Updated `templates/exporter-deployment.yaml` to use the
`exporter.logLevel` value to set the `LOG_LEVEL` environment
variable in the exporter container.
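The wiring from values to container environment would look roughly like this (the surrounding template structure is assumed):

```yaml
# charts/iperf3-monitor/values.yaml (assumed shape)
exporter:
  logLevel: "INFO"

# templates/exporter-deployment.yaml (container env, sketch)
env:
  - name: LOG_LEVEL
    value: {{ .Values.exporter.logLevel | quote }}
```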
CI Workflow Changes (`.github/workflows/ci.yaml`):
- Modified the Docker image build process to tag images built from
pull requests with `pr-<PR_NUMBER>`.
- Ensured that these PR-specific images are pushed to the container
registry.
- Preserved existing tagging mechanisms (e.g., SHA-based tags).
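With `docker/metadata-action`, this combination of tag rules is a plausible sketch (not the workflow's verbatim configuration): `type=ref,event=pr` produces `pr-<PR_NUMBER>` on pull request events, and `type=sha` keeps SHA-based tags.

```yaml
# docker/metadata-action tags input (sketch)
tags: |
  type=ref,event=pr
  type=sha
```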
* fix: Add Docker login and permissions to CI workflow
This commit fixes the Docker image push failure in the CI workflow
by adding the necessary Docker login step and ensuring the correct
permissions are set for the GITHUB_TOKEN.
- Added a Docker login step using `docker/login-action@v3` to the
`Build Docker Image` job in `.github/workflows/ci.yaml`. This
authenticates to GHCR before attempting to push images.
- Added a `permissions` block to the `Build Docker Image` job, granting
`packages: write` scope to the `GITHUB_TOKEN`. This is required
to allow pushing packages to the GitHub Container Registry.
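A sketch of how these two additions fit into the job (step layout and the `contents: read` scope are assumptions):

```yaml
# Build Docker Image job (sketch)
permissions:
  contents: read
  packages: write
steps:
  - uses: docker/login-action@v3
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
```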
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
* feat: Add support for arm64 architecture
This commit introduces support for the arm64 architecture by:
1. **Updating the Dockerfile:**
* The `exporter/Dockerfile` now uses the `TARGETARCH` build argument to dynamically determine the correct path for `libiperf.so.0`. This allows the same Dockerfile to be used for building both `amd64` and `arm64` images.
2. **Modifying GitHub Workflows:**
* The CI workflow (`.github/workflows/ci.yaml`) and the Release workflow (`.github/workflows/release.yml`) have been updated to build and push multi-architecture Docker images (`linux/amd64` and `linux/arm64`).
* This involves adding the `docker/setup-qemu-action` for cross-compilation and specifying the target platforms in the `docker/build-push-action`.
3. **Helm Chart:**
* No changes were required for the Helm chart as the image tag will now point to a multi-arch manifest, and the default iperf3 server image (`networkstatic/iperf3:latest`) is assumed to be multi-arch. Node selectors in the chart are not architecture-specific.
These changes enable the deployment of the iperf3-monitor on Kubernetes clusters with arm64 nodes.
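The `TARGETARCH`-based library selection can be sketched like this (base images, package names, and paths are assumptions; a later commit in this log refines the amd64 path handling):

```dockerfile
# exporter/Dockerfile (sketch)
FROM python:3.12-slim-bookworm AS builder
ARG TARGETARCH
RUN apt-get update && apt-get install -y --no-install-recommends iperf3
# Map the build architecture to its multiarch library directory and
# stage libiperf.so.0 at one canonical path for the final COPY.
RUN LIBDIR=$([ "$TARGETARCH" = "arm64" ] && echo aarch64-linux-gnu || echo x86_64-linux-gnu) \
    && cp "/usr/lib/$LIBDIR/libiperf.so.0" /tmp/libiperf.so.0

FROM python:3.12-slim-bookworm
COPY --from=builder /tmp/libiperf.so.0 /usr/lib/libiperf.so.0
```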
* fix: Ensure multi-platform builds with Docker Buildx
This commit updates the GitHub Actions workflows to correctly set up
Docker Buildx for multi-platform (amd64, arm64) image builds.
Previously, the workflows were missing the `docker/setup-buildx-action`
step, which led to errors when attempting multi-platform builds as the
default Docker driver does not support this.
The following changes were made:
1. **Added `docker/setup-buildx-action@v3`:**
- This step is now included in both the CI (`.github/workflows/ci.yaml`) and Release (`.github/workflows/release.yml`) workflows before the QEMU setup and build/push actions.
2. **Dockerfile (`exporter/Dockerfile`):**
- Remains as per the previous commit, using `TARGETARCH` to correctly copy architecture-specific libraries. This part was already correct for multi-arch builds.
3. **Helm Chart:**
- No changes were required for the Helm chart.
This ensures that the CI/CD pipeline can successfully build and push
Docker images for both `linux/amd64` and `linux/arm64` architectures.
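The resulting step ordering would look roughly like this (the `build-push-action` version and build context are assumptions):

```yaml
# Multi-platform build steps (sketch)
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    context: ./exporter
    platforms: linux/amd64,linux/arm64
    push: true
```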
* fix: Correct Dockerfile lib path and add Helm dependency toggle
This commit includes two main changes:
1. **Fix Dockerfile library path for amd64:**
- Updated the `exporter/Dockerfile` to correctly determine the source path for `libiperf.so.0` when building for different architectures.
- Specifically, for `TARGETARCH=amd64`, the path `/usr/lib/x86_64-linux-gnu/libiperf.so.0` is now used.
- For `TARGETARCH=arm64`, the path `/usr/lib/aarch64-linux-gnu/libiperf.so.0` is used.
- This is achieved by copying the library to a canonical temporary location in the builder stage based on `TARGETARCH`, then copying it from that location into the final image. This resolves an issue where builds for `amd64` would fail to find the library.
2. **Add Helm chart option to disable dependencies:**
- Added a new option `dependencies.install` (default: `true`) to `charts/iperf3-monitor/values.yaml`.
- This allows disabling the installation of managed dependencies (i.e., Prometheus Operator via `kube-prometheus-stack` or `prometheus-operator` from TrueCharts) even if `serviceMonitor.enabled` is true.
- Updated the `condition` for these dependencies in `charts/iperf3-monitor/Chart.yaml` to `dependencies.install, serviceMonitor.enabled, ...`.
- This is useful when the Prometheus Operator installation is managed separately.
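A sketch of the dependency entry in `Chart.yaml` (repository and version are placeholders; trailing condition paths are elided as above):

```yaml
# charts/iperf3-monitor/Chart.yaml (sketch)
dependencies:
  - name: kube-prometheus-stack
    repository: https://prometheus-community.github.io/helm-charts
    version: "x.y.z"  # placeholder
    condition: dependencies.install,serviceMonitor.enabled
```

Note that Helm evaluates only the first condition path it can resolve in values, so ordering the paths matters.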
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
This commit uses a verified yq command syntax in the
`.github/workflows/release.yml` file to ensure correct and reliable
updating of the Chart.yaml version and appVersion from Git tags.
Previous attempts ran into issues with yq argument parsing and
environment variable substitution. The new commands:
    VERSION=$VERSION yq e -i '.version = strenv(VERSION)' ./charts/iperf3-monitor/Chart.yaml
    VERSION=$VERSION yq e -i '.appVersion = strenv(VERSION)' ./charts/iperf3-monitor/Chart.yaml
were tested and confirmed to modify the Chart.yaml file as intended.
This change should resolve the issues where chart versions were being
set incorrectly or to empty strings during the release process.
Configure automated checks for pull requests including:
- Linting the Helm chart.
- Building the exporter Docker image.
- A placeholder for future tests.
Add core components for continuous cluster network validation:
- Python exporter (`exporter/`) to run iperf3 tests and expose Prometheus metrics.
- Helm chart (`charts/iperf3-monitor/`) for deploying the exporter as a
Deployment and iperf3 server as a DaemonSet.
- CI/CD workflow (`.github/workflows/release.yml`) for building/publishing
images and charts on tag creation.
- Initial documentation, license, and `.gitignore`.