Compare commits


4 Commits
v0.2.2 ... main

Author SHA1 Message Date
Malar Kannan 0d93c9ea67
Fix: Align exporter labels, image tags, and build process (#25)
- Helm Chart:
  - Added `app.kubernetes.io/component: exporter` to the exporter pod
    labels via `values.yaml` to match the service selector.
  - Updated image tag defaulting in `exporter-controller.yaml` to use
    `Chart.appVersion` directly (e.g., "0.1.0" instead of "v0.1.0").

- Build Process (`.github/workflows/release.yml`):
  - Configured `docker/metadata-action` to ensure image tags are generated
    without a 'v' prefix (e.g., "0.1.0" from Git tag "v0.1.0").
    This aligns the published image tags with the Helm chart's
    updated image tag references.

- Repository:
  - Added `rendered-manifests.yaml` and `rendered-manifests-updated.yaml`
    to `.gitignore`.

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 16:29:20 +05:30
Malar Kannan 587290f1fb
Fix(exporter): Use namespaced pod listing for iperf server discovery (#23)
- Modified `exporter/exporter.py` to use `list_namespaced_pod()`
  instead of `list_pod_for_all_namespaces()`. This resolves the
  RBAC error where the exporter was incorrectly requesting cluster-scoped
  pod listing permissions.
- The exporter now correctly lists pods only within the namespace
  specified by the `IPERF_SERVER_NAMESPACE` environment variable.

- Reverted Helm chart RBAC templates (`charts/iperf3-monitor/templates/rbac.yaml`)
  and `values.yaml` to their simpler, original state. The previous
  parameterization of `serviceAccount.namespace` is no longer needed,
  as the primary fix is in the exporter code.

The Helm chart should be deployed into the same namespace where the
`iperf3-monitor` ServiceAccount resides and where iperf3 server pods
are located. The `IPERF_SERVER_NAMESPACE` environment variable for the
exporter pod must be set to this namespace.
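One way to satisfy this without hard-coding the namespace is to inject the pod's own namespace through the Kubernetes Downward API. A minimal sketch of the exporter container's `env` entry (the surrounding container spec is assumed, not taken from this chart's values):

```yaml
# Sketch only: expose the pod's own namespace to the exporter so that
# the namespaced pod listing queries the namespace the exporter runs in.
env:
  - name: IPERF_SERVER_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```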

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 14:19:56 +05:30
Malar Kannan 24904ef084
Add grafana dashboard configmap (#24)
* feat: Add Grafana dashboard as ConfigMap

Adds the Grafana dashboard for iperf3-monitor as a ConfigMap to the Helm chart.

The dashboard is sourced from the project's README and stored in
`charts/iperf3-monitor/grafana/iperf3-dashboard.json`.

A new template `charts/iperf3-monitor/templates/grafana-dashboard-configmap.yaml`
creates the ConfigMap, loading the dashboard JSON and labeling it with
`grafana_dashboard: "1"` to enable auto-discovery by Grafana.
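Auto-discovery only works if Grafana runs the dashboard sidecar watching for that label. A hedged example of the relevant settings in a kube-prometheus-stack style values file (the exact layout depends on how Grafana is deployed and is an assumption here, not part of this chart):

```yaml
# Assumed Grafana sidecar settings that pick up ConfigMaps labeled
# grafana_dashboard: "1" and load them as dashboards.
grafana:
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      searchNamespace: ALL   # or limit to the namespace where this chart is installed
```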

* fix: Correct Helm chart label in Grafana dashboard ConfigMap

Updates the `helm.sh/chart` label in the Grafana dashboard ConfigMap
to use `{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}`.
This resolves a Helm linting error caused by an incorrect template reference.

This builds on the previous commit in this PR, which added the Grafana dashboard as a ConfigMap (described above).

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 14:03:50 +05:30
Malar Kannan 966985dc3e
Jules/align helm release workflow (#22)
* ci: Align Helm dependency setup in release workflow

Adds missing Helm dependency setup steps (repo add, dependency build) to the release workflow, mirroring the CI workflow. This ensures that dependencies are correctly handled during linting and packaging in the release process.

* refactor: Scope exporter RBAC to namespace for least privilege

Changed the exporter's ClusterRole and ClusterRoleBinding to a namespaced Role and RoleBinding.

This modification ensures that the exporter, by default, only has permissions to get, list, and watch pods within its own installation namespace. This aligns with the default behavior of IPERF_SERVER_NAMESPACE, which also defaults to the pod's own namespace, thereby adhering more strictly to the principle of least privilege.

Verified with `helm template` that the Role and RoleBinding are correctly created within the release namespace.
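Rendered, the namespaced RBAC described here looks roughly like the following (an illustrative sketch, not the chart's literal template output; resource names are placeholders):

```yaml
# Namespaced Role/RoleBinding granting only pod read access, as described above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: iperf3-monitor-exporter   # placeholder name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: iperf3-monitor-exporter   # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: iperf3-monitor-exporter
subjects:
  - kind: ServiceAccount
    name: iperf3-monitor          # placeholder; the chart's exporter ServiceAccount
```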

* fix: Add 'v' prefix to default image tag for exporter

Updated the logic in `charts/iperf3-monitor/templates/exporter-controller.yaml`
to ensure that when the exporter's image tag is not specified in
`values.yaml`, it defaults to `v<Chart.AppVersion>` instead of just
`<Chart.AppVersion>`.

This change ensures the default tag matches image tagging conventions
where a 'v' prefix is used for versions (e.g., `v0.1.0`).
If an image tag is explicitly provided in `values.yaml`, that tag is
used directly without modification.

Verified with `helm template` for both default and custom tag scenarios.
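Condensed, the defaulting behaves like the expression below (a sketch only; the chart actually routes the tag through the bjw-s common library values rather than a single template line, the values path is hypothetical, and this 'v' prefix is dropped again in #25):

```yaml
# Hypothetical one-line equivalent of the defaulting described above:
# image.tag unset + Chart.AppVersion "0.1.0" -> "v0.1.0"; an explicit tag is used as-is.
tag: {{ .Values.image.tag | default (printf "v%s" .Chart.AppVersion) }}
```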

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 13:29:08 +05:30
7 changed files with 234 additions and 4 deletions

.github/workflows/release.yml

@@ -63,6 +63,11 @@ jobs:
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=semver,pattern={{version}}
            # This ensures that for a git tag like "v0.1.0",
            # an image tag "0.1.0" is generated.
            # It will also generate "latest" for the most recent semver tag.
      - name: Build and push Docker image
        uses: docker/build-push-action@v4

.gitignore

@@ -37,3 +37,7 @@ Thumbs.db
# Helm
!charts/iperf3-monitor/.helmignore
charts/iperf3-monitor/charts/
# Rendered Kubernetes manifests (for local testing)
rendered-manifests.yaml
rendered-manifests-updated.yaml

charts/iperf3-monitor/grafana/iperf3-dashboard.json (new file)

@@ -0,0 +1,194 @@
{
"__inputs": [],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "8.0.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"links": [],
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"gridPos": {
"h": 9,
"w": 24,
"x": 0,
"y": 0
},
"id": 2,
"targets": [
{
"expr": "avg(iperf_network_bandwidth_mbps) by (source_node, destination_node)",
"format": "heatmap",
"legendFormat": "{{source_node}} -> {{destination_node}}",
"refId": "A"
}
],
"cards": { "cardPadding": null, "cardRound": null },
"color": {
"mode": "spectrum",
"scheme": "red-yellow-green",
"exponent": 0.5,
"reverse": false
},
"dataFormat": "tsbuckets",
"yAxis": { "show": true, "format": "short" },
"xAxis": { "show": true }
},
{
"title": "Bandwidth Over Time (Source: $source_node, Dest: $destination_node)",
"type": "timeseries",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 9
},
"targets": [
{
"expr": "iperf_network_bandwidth_mbps{source_node=~\"^$source_node$\", destination_node=~\"^$destination_node$\", protocol=~\"^$protocol$\"}",
"legendFormat": "Bandwidth",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "mbps"
}
}
},
{
"title": "Jitter Over Time (Source: $source_node, Dest: $destination_node)",
"type": "timeseries",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 9
},
"targets": [
{
"expr": "iperf_network_jitter_ms{source_node=~\"^$source_node$\", destination_node=~\"^$destination_node$\", protocol=\"udp\"}",
"legendFormat": "Jitter",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms"
}
}
}
],
"refresh": "30s",
"schemaVersion": 36,
"style": "dark",
"tags": ["iperf3", "network", "kubernetes"],
"templating": {
"list": [
{
"current": {},
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"definition": "label_values(iperf_network_bandwidth_mbps, source_node)",
"hide": 0,
"includeAll": false,
"multi": false,
"name": "source_node",
"options": [],
"query": "label_values(iperf_network_bandwidth_mbps, source_node)",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
},
{
"current": {},
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"definition": "label_values(iperf_network_bandwidth_mbps{source_node=~\"^$source_node$\"}, destination_node)",
"hide": 0,
"includeAll": false,
"multi": false,
"name": "destination_node",
"options": [],
"query": "label_values(iperf_network_bandwidth_mbps{source_node=~\"^$source_node$\"}, destination_node)",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
},
{
"current": { "selected": true, "text": "tcp", "value": "tcp" },
"hide": 0,
"includeAll": false,
"multi": false,
"name": "protocol",
"options": [
{ "selected": true, "text": "tcp", "value": "tcp" },
{ "selected": false, "text": "udp", "value": "udp" }
],
"query": "tcp,udp",
"skipUrlSync": false,
"type": "custom"
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Kubernetes iperf3 Network Performance",
"uid": "k8s-iperf3-dashboard",
"version": 1,
"weekStart": ""
}

charts/iperf3-monitor/templates/exporter-controller.yaml

@@ -72,12 +72,23 @@ Proceed with modifications only if the exporter controller is defined.
{{- /*
Ensure the container image tag is set, defaulting to Chart.AppVersion when empty,
because the bjw-s common library validation requires a tag during 'helm template'.
NOTE: the common library normally defaults image.tag to Chart.appVersion itself when
the value is empty or null, so this block may be redundant and could be removed, with
image.tag left as "" or null in values.yaml. It is kept because an unset tag previously
caused 'helm template' to fail (empty output) until a tag was supplied explicitly
(via --set or in values.yaml). Forcing a tag here, even one derived from
Chart.AppVersion, keeps the common library's validation satisfied.
*/}}
{{- $exporterContainerCfg := get $exporterControllerConfig.containers "exporter" -}}
{{- if $exporterContainerCfg -}}
{{- if not $exporterContainerCfg.image.tag -}}
{{- if $chart.AppVersion -}}
{{- $_ := set $exporterContainerCfg.image "tag" $chart.AppVersion -}}
{{- $_ := set $exporterContainerCfg.image "tag" (printf "%s" $chart.AppVersion) -}} # Removed "v" prefix
{{- else -}}
{{- fail (printf "Error: Container image tag is not specified for controller '%s', container '%s', and Chart.AppVersion is also empty." $exporterControllerKey "exporter") -}}
{{- end -}}

charts/iperf3-monitor/templates/grafana-dashboard-configmap.yaml (new file)

@@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-grafana-dashboard
  labels:
    grafana_dashboard: "1"
    app.kubernetes.io/name: {{ include "iperf3-monitor.name" . }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
  iperf3-dashboard.json: |
{{ .Files.Get "grafana/iperf3-dashboard.json" | nindent 4 }}

charts/iperf3-monitor/values.yaml

@@ -45,7 +45,8 @@ controllers:
      # -- Annotations for the exporter pod.
      annotations: {}
      # -- Labels for the exporter pod.
      labels: {} # The common library will add its own default labels.
      labels:
        app.kubernetes.io/component: exporter # Ensure pods get the component label for service selection
      # -- Node selector for scheduling exporter pods.
      nodeSelector: {}
      # -- Tolerations for scheduling exporter pods.
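This added pod label only matters because the exporter Service selects on it. A sketch of the selector shape the label is expected to satisfy (the Service is presumably generated by the bjw-s common library, so the manifest below is an assumption, with placeholder names and port):

```yaml
# Assumed shape of the exporter Service whose selector the added
# app.kubernetes.io/component: exporter pod label must match.
apiVersion: v1
kind: Service
metadata:
  name: iperf3-monitor-exporter                 # placeholder name
spec:
  selector:
    app.kubernetes.io/name: iperf3-monitor
    app.kubernetes.io/instance: iperf3-monitor  # placeholder release name
    app.kubernetes.io/component: exporter
  ports:
    - name: metrics
      port: 9090        # placeholder; use the exporter's actual metrics port
      targetPort: 9090
```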

exporter/exporter.py

@@ -92,16 +92,18 @@ def discover_iperf_servers():
        logging.info(f"Discovering iperf3 servers with label '{label_selector}' in namespace '{namespace}'")
        ret = v1.list_pod_for_all_namespaces(label_selector=label_selector, watch=False)
        # Use list_namespaced_pod to query only the specified namespace
        ret = v1.list_namespaced_pod(namespace=namespace, label_selector=label_selector, watch=False)
        servers = []
        for item in ret.items:
            # No need to filter by namespace here as the API call is already namespaced
            if item.status.pod_ip and item.status.phase == 'Running':
                servers.append({
                    'ip': item.status.pod_ip,
                    'node_name': item.spec.node_name  # Node where the iperf server pod is running
                })
        logging.info(f"Discovered {len(servers)} iperf3 server pods.")
        logging.info(f"Discovered {len(servers)} iperf3 server pods in namespace '{namespace}'.")
        return servers
    except config.ConfigException as e:
        logging.error(f"Kubernetes config error: {e}. Is the exporter running in a cluster with RBAC permissions?")