Compare commits


2 Commits
v0.2.4 ... main

Author SHA1 Message Date
Malar Kannan 0d93c9ea67
Fix: Align exporter labels, image tags, and build process (#25)
- Helm Chart:
  - Added `app.kubernetes.io/component: exporter` to the exporter pod
    labels via `values.yaml` to match the service selector.
  - Updated image tag defaulting in `exporter-controller.yaml` to use
    `Chart.appVersion` directly (e.g., "0.1.0" instead of "v0.1.0").

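  For context, the selector/label relationship this fixes looks roughly like
  the following minimal sketch; the Service name and port are assumed for
  illustration, not taken from the chart:

  apiVersion: v1
  kind: Service
  metadata:
    name: iperf3-monitor-exporter  # hypothetical name
  spec:
    selector:
      app.kubernetes.io/name: iperf3-monitor
      app.kubernetes.io/component: exporter  # must now match the exporter pod labels
    ports:
      - name: metrics
        port: 9100  # assumed metrics port
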
- Build Process (`.github/workflows/release.yml`):
  - Configured `docker/metadata-action` to ensure image tags are generated
    without a 'v' prefix (e.g., "0.1.0" from Git tag "v0.1.0").
    This aligns the published image tags with the Helm chart's
    updated image tag references.

- Repository:
  - Added `rendered-manifests.yaml` and `rendered-manifests-updated.yaml`
    to `.gitignore`.

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 16:29:20 +05:30
Malar Kannan 587290f1fb
Fix(exporter): Use namespaced pod listing for iperf server discovery (#23)
- Modified `exporter/exporter.py` to use `list_namespaced_pod()`
  instead of `list_pod_for_all_namespaces()`. This resolves the
  RBAC error where the exporter was incorrectly requesting cluster-scoped
  pod listing permissions.
- The exporter now correctly lists pods only within the namespace
  specified by the `IPERF_SERVER_NAMESPACE` environment variable.

- Reverted Helm chart RBAC templates (`charts/iperf3-monitor/templates/rbac.yaml`)
  and `values.yaml` to their simpler, original state. The previous
  parameterization of `serviceAccount.namespace` is no longer needed,
  as the primary fix is in the exporter code.

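With the namespaced call, a namespace-scoped Role is sufficient. A minimal
sketch of the kind of RBAC this implies (object names assumed; the chart's
actual rbac.yaml is authoritative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: iperf3-monitor  # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: iperf3-monitor  # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: iperf3-monitor
subjects:
  - kind: ServiceAccount
    name: iperf3-monitor  # the chart's ServiceAccount
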
The Helm chart should be deployed into the same namespace where the
`iperf3-monitor` ServiceAccount resides and where iperf3 server pods
are located. The `IPERF_SERVER_NAMESPACE` environment variable for the
exporter pod must be set to this namespace.
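
One way to satisfy this, sketched under the assumption that the exporter and
the iperf3 servers share a namespace, is to set the variable from the pod's
own namespace via the Downward API (not necessarily how the chart wires it):

env:
  - name: IPERF_SERVER_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace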

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 14:19:56 +05:30
5 changed files with 27 additions and 4 deletions

.github/workflows/release.yml

@@ -63,6 +63,11 @@ jobs:
         uses: docker/metadata-action@v4
         with:
           images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          tags: |
+            type=semver,pattern={{version}}
+          # This ensures that for a git tag like "v0.1.0",
+          # an image tag "0.1.0" is generated.
+          # It will also generate "latest" for the most recent semver tag.
       - name: Build and push Docker image
         uses: docker/build-push-action@v4
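
For illustration, the mapping this configuration produces (registry and image
name assumed):

# Pushing Git tag "v0.1.0" yields, e.g.:
#   ghcr.io/OWNER/IMAGE:0.1.0   # {{version}} strips the "v" prefix
#   ghcr.io/OWNER/IMAGE:latest  # added for the newest semver tag by default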

.gitignore

@@ -37,3 +37,7 @@ Thumbs.db
 # Helm
 !charts/iperf3-monitor/.helmignore
 charts/iperf3-monitor/charts/
+
+# Rendered Kubernetes manifests (for local testing)
+rendered-manifests.yaml
+rendered-manifests-updated.yaml

charts/iperf3-monitor/templates/exporter-controller.yaml

@@ -72,12 +72,23 @@ Proceed with modifications only if the exporter controller is defined.
 {{- /*
 Ensure the container image tag is set, defaulting to Chart.AppVersion if empty,
 as the common library validation requires it during 'helm template'.
+NOTE: The BJW-S common library normally defaults image.tag to Chart.appVersion
+when image.tag is empty or null in values. The custom logic below, which
+previously prepended "v", is specific to this chart and may be redundant if the
+common library's default is preferred. It is kept because an unset tag was the
+cause of earlier 'helm template' failures (empty output), resolved by
+explicitly setting the tag (e.g. via --set or in values.yaml). The common
+library's validation appears to require *a* tag to be present in the values
+passed to it, even one derived from AppVersion; this block guarantees that.
+If the common library's defaulting suffices, this block could be removed and
+image.tag in values.yaml set to "" or null.
 */}}
 {{- $exporterContainerCfg := get $exporterControllerConfig.containers "exporter" -}}
 {{- if $exporterContainerCfg -}}
 {{- if not $exporterContainerCfg.image.tag -}}
 {{- if $chart.AppVersion -}}
-{{- $_ := set $exporterContainerCfg.image "tag" (printf "v%s" $chart.AppVersion) -}}
+{{- $_ := set $exporterContainerCfg.image "tag" (printf "%s" $chart.AppVersion) -}}{{- /* "v" prefix removed */ -}}
 {{- else -}}
 {{- fail (printf "Error: Container image tag is not specified for controller '%s', container '%s', and Chart.AppVersion is also empty." $exporterControllerKey "exporter") -}}
 {{- end -}}
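
With this defaulting in place, values.yaml can leave the tag unset and the
template falls back to Chart.appVersion; a hedged fragment (repository name
assumed):

image:
  repository: ghcr.io/OWNER/iperf3-exporter  # assumed repository
  tag: ""  # empty: the template defaults this to Chart.appVersion, e.g. "0.1.0"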

charts/iperf3-monitor/values.yaml

@@ -45,7 +45,8 @@ controllers:
       # -- Annotations for the exporter pod.
       annotations: {}
       # -- Labels for the exporter pod.
-      labels: {} # The common library will add its own default labels.
+      labels:
+        app.kubernetes.io/component: exporter # Ensure pods get the component label for service selection
       # -- Node selector for scheduling exporter pods.
       nodeSelector: {}
       # -- Tolerations for scheduling exporter pods.

exporter/exporter.py

@@ -92,16 +92,18 @@ def discover_iperf_servers():
         logging.info(f"Discovering iperf3 servers with label '{label_selector}' in namespace '{namespace}'")
-        ret = v1.list_pod_for_all_namespaces(label_selector=label_selector, watch=False)
+        # Use list_namespaced_pod to query only the specified namespace
+        ret = v1.list_namespaced_pod(namespace=namespace, label_selector=label_selector, watch=False)
         servers = []
         for item in ret.items:
+            # No need to filter by namespace here as the API call is already namespaced
             if item.status.pod_ip and item.status.phase == 'Running':
                 servers.append({
                     'ip': item.status.pod_ip,
                     'node_name': item.spec.node_name  # Node where the iperf server pod is running
                 })
-        logging.info(f"Discovered {len(servers)} iperf3 server pods.")
+        logging.info(f"Discovered {len(servers)} iperf3 server pods in namespace '{namespace}'.")
         return servers
     except config.ConfigException as e:
         logging.error(f"Kubernetes config error: {e}. Is the exporter running in a cluster with RBAC permissions?")