# iperf3-monitor/exporter/Dockerfile

# Stage 1: Build stage with dependencies
FROM python:3.9-slim AS builder
# Declare TARGETARCH for use in this stage
ARG TARGETARCH
WORKDIR /app
# Install iperf3 and build dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc iperf3 libiperf-dev && \
    rm -rf /var/lib/apt/lists/*
# Determine the correct libiperf source directory based on TARGETARCH
# and copy libiperf.so.0 to a canonical temporary location /tmp/lib/ within the builder stage.
RUN echo "Builder stage TARGETARCH: ${TARGETARCH}" && \
    LIBIPERF_SRC_DIR_SEGMENT="" && \
    if [ "${TARGETARCH}" = "amd64" ]; then \
        LIBIPERF_SRC_DIR_SEGMENT="x86_64-linux-gnu"; \
    elif [ "${TARGETARCH}" = "arm64" ]; then \
        LIBIPERF_SRC_DIR_SEGMENT="aarch64-linux-gnu"; \
    else \
        echo "Unsupported TARGETARCH in builder: ${TARGETARCH}" && exit 1; \
    fi && \
    mkdir -p /tmp/lib && \
    cp "/usr/lib/${LIBIPERF_SRC_DIR_SEGMENT}/libiperf.so.0" /tmp/lib/libiperf.so.0
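# Note: TARGETARCH is one of Docker's automatic platform build arguments. It is
# populated by BuildKit when building with an explicit platform, for example:
#   docker buildx build --platform linux/amd64,linux/arm64 .
# With the legacy (non-BuildKit) builder it is not set automatically and may
# need to be passed explicitly, e.g. --build-arg TARGETARCH=amd64.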
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Stage 2: Final runtime stage
FROM python:3.9-slim
WORKDIR /app
# Copy iperf3 binary from the builder stage
COPY --from=builder /usr/bin/iperf3 /usr/bin/iperf3
# Copy the prepared libiperf.so.0 from the builder's canonical temporary location
# into a standard library path in the final image.
COPY --from=builder /tmp/lib/libiperf.so.0 /usr/lib/libiperf.so.0
# Copy installed Python packages from the builder stage
COPY --from=builder /usr/local/lib/python3.9/site-packages /usr/local/lib/python3.9/site-packages
# Copy the exporter application code
COPY exporter.py .
# Expose the metrics port
EXPOSE 9876
# Start the exporter
CMD ["python", "exporter.py"]
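
# Example usage (hypothetical image name; substitute your own registry path).
# Build and push a multi-arch image from the repository root:
#   docker buildx build --platform linux/amd64,linux/arm64 \
#     -t <registry>/iperf3-exporter:latest --push exporter/
# Local smoke test on the host architecture:
#   docker run --rm -p 9876:9876 <registry>/iperf3-exporter:latest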