Compare commits

9 Commits

Author SHA1 Message Date
google-labs-jules[bot]
972767a410 Fix: Align exporter labels, image tags, and build process
- Helm Chart:
  - Added `app.kubernetes.io/component: exporter` to the exporter pod
    labels via `values.yaml` to match the service selector.
  - Updated image tag defaulting in `exporter-controller.yaml` to use
    `Chart.appVersion` directly (e.g., "0.1.0" instead of "v0.1.0").

- Build Process (`.github/workflows/release.yml`):
  - Configured `docker/metadata-action` to ensure image tags are generated
    without a 'v' prefix (e.g., "0.1.0" from Git tag "v0.1.0").
    This aligns the published image tags with the Helm chart's
    updated image tag references.

- Repository:
  - Added `rendered-manifests.yaml` and `rendered-manifests-updated.yaml`
    to `.gitignore`.
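
A minimal sketch of the label/selector alignment described above (the values layout is approximated from the chart; treat names like `controllers.exporter.pod.labels` as illustrative rather than exact):

```yaml
# values.yaml (sketch): give exporter pods the component label
controllers:
  exporter:
    pod:
      labels:
        app.kubernetes.io/component: exporter

# Rendered exporter Service (sketch): the selector only matches pods
# that carry the same component label
selector:
  app.kubernetes.io/name: iperf3-monitor
  app.kubernetes.io/instance: my-release   # illustrative release name
  app.kubernetes.io/component: exporter
```

Without the pod label, the Service selector matches no endpoints and the exporter's metrics port is unreachable through the Service.
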
2025-07-02 10:58:42 +00:00
587290f1fb Fix(exporter): Use namespaced pod listing for iperf server discovery (#23)
- Modified `exporter/exporter.py` to use `list_namespaced_pod()`
  instead of `list_pod_for_all_namespaces()`. This resolves the
  RBAC error where the exporter was incorrectly requesting cluster-scoped
  pod listing permissions.
- The exporter now correctly lists pods only within the namespace
  specified by the `IPERF_SERVER_NAMESPACE` environment variable.

- Reverted Helm chart RBAC templates (`charts/iperf3-monitor/templates/rbac.yaml`)
  and `values.yaml` to their simpler, original state. The previous
  parameterization of `serviceAccount.namespace` is no longer needed,
  as the primary fix is in the exporter code.

The Helm chart should be deployed into the same namespace where the
`iperf3-monitor` ServiceAccount resides and where iperf3 server pods
are located. The `IPERF_SERVER_NAMESPACE` environment variable for the
exporter pod must be set to this namespace.
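
One way to meet that requirement without hard-coding a value is the Kubernetes Downward API, which the chart's exporter template uses to inject the pod's own namespace (a sketch of the container env, not the exact rendered manifest):

```yaml
# Exporter container env (sketch)
env:
  - name: IPERF_SERVER_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace   # resolves to the namespace the exporter pod runs in
```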

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 14:19:56 +05:30
24904ef084 Add grafana dashboard configmap (#24)
* feat: Add Grafana dashboard as ConfigMap

Adds the Grafana dashboard for iperf3-monitor as a ConfigMap to the Helm chart.

The dashboard is sourced from the project's README and stored in
`charts/iperf3-monitor/grafana/iperf3-dashboard.json`.

A new template `charts/iperf3-monitor/templates/grafana-dashboard-configmap.yaml`
creates the ConfigMap, loading the dashboard JSON and labeling it with
`grafana_dashboard: "1"` to enable auto-discovery by Grafana.

* fix: Correct Helm chart label in Grafana dashboard ConfigMap

Updates the `helm.sh/chart` label in the Grafana dashboard ConfigMap
to use `{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}`.
This resolves a Helm linting error caused by an incorrect template reference.

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 14:03:50 +05:30
966985dc3e Jules/align helm release workflow (#22)
* ci: Align Helm dependency setup in release workflow

Adds missing Helm dependency setup steps (repo add, dependency build) to the release workflow, mirroring the CI workflow. This ensures that dependencies are correctly handled during linting and packaging in the release process.

* refactor: Scope exporter RBAC to namespace for least privilege

Changed the exporter's ClusterRole and ClusterRoleBinding to a namespaced Role and RoleBinding.

This modification ensures that the exporter, by default, only has permissions to get, list, and watch pods within its own installation namespace. This aligns with the default behavior of IPERF_SERVER_NAMESPACE, which also defaults to the pod's own namespace, thereby adhering more strictly to the principle of least privilege.

Verified with `helm template` that the Role and RoleBinding are correctly created within the release namespace.

* fix: Add 'v' prefix to default image tag for exporter

Updated the logic in `charts/iperf3-monitor/templates/exporter-controller.yaml`
to ensure that when the exporter's image tag is not specified in
`values.yaml`, it defaults to `v<Chart.AppVersion>` instead of just
`<Chart.AppVersion>`.

This change ensures the default tag matches image tagging conventions
where a 'v' prefix is used for versions (e.g., `v0.1.0`).
If an image tag is explicitly provided in `values.yaml`, that tag is
used directly without modification.

Verified with `helm template` for both default and custom tag scenarios.
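
Expressed directly in template syntax, the defaulting rule is roughly the following (a sketch; the chart actually sets the tag inside the controller config it hands to the bjw-s common library, and the value paths below are illustrative):

```yaml
# exporter-controller.yaml (sketch of the tag defaulting)
image:
  repository: {{ .Values.controllers.exporter.containers.exporter.image.repository }}
  # Fall back to "v<Chart.AppVersion>" (e.g. v0.1.0) when no tag is set in values.yaml
  tag: {{ .Values.controllers.exporter.containers.exporter.image.tag | default (printf "v%s" .Chart.AppVersion) | quote }}
```

Note that the most recent commit in this compare (972767a410) drops the 'v' prefix again so the default matches the un-prefixed image tags produced by the release workflow.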

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 13:29:08 +05:30
d3cb92eb0f Jules/align helm release workflow (#21)
* ci: Align Helm dependency setup in release workflow

Adds missing Helm dependency setup steps (repo add, dependency build) to the release workflow, mirroring the CI workflow. This ensures that dependencies are correctly handled during linting and packaging in the release process.

* refactor: Scope exporter RBAC to namespace for least privilege

Changed the exporter's ClusterRole and ClusterRoleBinding to a namespaced Role and RoleBinding.

This modification ensures that the exporter, by default, only has permissions to get, list, and watch pods within its own installation namespace. This aligns with the default behavior of IPERF_SERVER_NAMESPACE, which also defaults to the pod's own namespace, thereby adhering more strictly to the principle of least privilege.

Verified with `helm template` that the Role and RoleBinding are correctly created within the release namespace.

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 12:57:00 +05:30
4cce553441 ci: Align Helm dependency setup in release workflow (#20)
Adds missing Helm dependency setup steps (repo add, dependency build) to the release workflow, mirroring the CI workflow. This ensures that dependencies are correctly handled during linting and packaging in the release process.

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 11:56:38 +05:30
a0ecc5c11a fix: Correct common lib repo URL, rename exporter template (#19)
* fix: Correct common lib repo URL, rename exporter template

- Reverted common library repository URL in Chart.yaml to
  https://bjw-s-labs.github.io/helm-charts/.
- Ensured helm dependency commands are run after adding repositories.
- Renamed exporter template from exporter-deployment.yaml to
  exporter-controller.yaml to better reflect its new role with the common library.

Note: Full helm lint/template validation with dependencies was not possible
in the automated environment due to issues with dependency file persistence
in the sandbox.
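
For reference, the corrected dependency entry in Chart.yaml looks roughly like this (the version constraint is illustrative; use whatever the chart actually pins):

```yaml
# charts/iperf3-monitor/Chart.yaml (sketch)
dependencies:
  - name: common
    version: 3.x.x   # illustrative constraint
    repository: https://bjw-s-labs.github.io/helm-charts/
```

After `helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/`, running `helm dependency build ./charts/iperf3-monitor` fetches the library chart, which is exactly what the CI and release workflow changes below add.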

* fix: Integrate bjw-s/common library for exporter controller

- Corrected bjw-s/common library repository URL in Chart.yaml to the
  traditional HTTPS URL and ensured dependencies are fetched.
- Renamed exporter template to exporter-controller.yaml.
- Updated exporter-controller.yaml to correctly use
  `bjw-s.common.render.controllers` for rendering.
- Refined the context passed to the common library to include Values, Chart,
  Release, and Capabilities, and initialized expected top-level keys
  (global, defaultPodOptionsStrategy) in the Values.
- Ensured image.tag is defaulted to Chart.AppVersion in the template data
  to pass common library validations.
- Helm lint and template commands now pass successfully for both
  Deployment and DaemonSet configurations of the exporter.

* fix: Set dependencies.install to false by default

- Changed the default value for `dependencies.install` to `false` in values.yaml.
- Updated comments to clarify that users should explicitly enable it if they
  need the chart to install a Prometheus Operator dependency.

* fix: Update CI workflow to add Helm repositories and build dependencies

* hotfix: Pass .Template to common lib for tpl context

- Updated exporter-controller.yaml to include .Template in the dict
  passed to the bjw-s.common.render.controllers include.
- This resolves a 'cannot retrieve Template.Basepath' error raised by the
  tpl function in older Helm versions (such as v3.10.0, used in CI) when the
  context passed to tpl does not contain the .Template object.

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 11:17:05 +05:30
5fa41a6aad Use 'v' prefix for default exporter image tag (#17)
2025-07-02 10:11:31 +05:30
49fb881f24 Add grafana dashboard configmap (#18)
* feat: Add Grafana dashboard as ConfigMap

Adds the Grafana dashboard for iperf3-monitor as a ConfigMap to the Helm chart.

The dashboard is sourced from the project's README and stored in
`charts/iperf3-monitor/grafana/iperf3-dashboard.json`.

A new template `charts/iperf3-monitor/templates/grafana-dashboard-configmap.yaml`
creates the ConfigMap, loading the dashboard JSON and labeling it with
`grafana_dashboard: "1"` to enable auto-discovery by Grafana.

* fix: Correct Helm chart label in Grafana dashboard ConfigMap

Updates the `helm.sh/chart` label in the Grafana dashboard ConfigMap
to use `{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}`.
This resolves a Helm linting error caused by an incorrect template reference.

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-07-02 01:37:14 +05:30
10 changed files with 284 additions and 22 deletions

CI workflow

@@ -19,7 +19,16 @@ jobs:
- name: Set up Helm
uses: azure/setup-helm@v3
with:
version: v3.10.0
version: v3.10.0 # Using a specific version, can be updated
- name: Add Helm repositories
run: |
helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/ --force-update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts --force-update
helm repo update
- name: Build Helm chart dependencies
run: helm dependency build ./charts/iperf3-monitor
- name: Helm Lint
run: helm lint ./charts/iperf3-monitor

.github/workflows/release.yml

@@ -22,6 +22,15 @@ jobs:
with:
version: v3.10.0
- name: Add Helm repositories
run: |
helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/ --force-update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts --force-update
helm repo update
- name: Build Helm chart dependencies
run: helm dependency build ./charts/iperf3-monitor
- name: Helm Lint
run: helm lint ./charts/iperf3-monitor
@@ -54,6 +63,11 @@ jobs:
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=semver,pattern={{version}}
# This ensures that for a git tag like "v0.1.0",
# an image tag "0.1.0" is generated.
# It will also generate "latest" for the most recent semver tag.
- name: Build and push Docker image
uses: docker/build-push-action@v4
@@ -86,6 +100,15 @@ jobs:
sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq &&\
sudo chmod +x /usr/bin/yq
- name: Add Helm repositories
run: |
helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/ --force-update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts --force-update
helm repo update
- name: Build Helm chart dependencies
run: helm dependency build ./charts/iperf3-monitor
- name: Set Chart Version from Tag
run: |
VERSION=$(echo "${{ github.ref_name }}" | sed 's/^v//')

.gitignore

@@ -37,3 +37,7 @@ Thumbs.db
# Helm
!charts/iperf3-monitor/.helmignore
charts/iperf3-monitor/charts/
# Rendered Kubernetes manifests (for local testing)
rendered-manifests.yaml
rendered-manifests-updated.yaml

charts/iperf3-monitor/grafana/iperf3-dashboard.json

@@ -0,0 +1,194 @@
{
"__inputs": [],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "8.0.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"links": [],
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"gridPos": {
"h": 9,
"w": 24,
"x": 0,
"y": 0
},
"id": 2,
"targets": [
{
"expr": "avg(iperf_network_bandwidth_mbps) by (source_node, destination_node)",
"format": "heatmap",
"legendFormat": "{{source_node}} -> {{destination_node}}",
"refId": "A"
}
],
"cards": { "cardPadding": null, "cardRound": null },
"color": {
"mode": "spectrum",
"scheme": "red-yellow-green",
"exponent": 0.5,
"reverse": false
},
"dataFormat": "tsbuckets",
"yAxis": { "show": true, "format": "short" },
"xAxis": { "show": true }
},
{
"title": "Bandwidth Over Time (Source: $source_node, Dest: $destination_node)",
"type": "timeseries",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 9
},
"targets": [
{
"expr": "iperf_network_bandwidth_mbps{source_node=~\"^$source_node$\", destination_node=~\"^$destination_node$\", protocol=~\"^$protocol$\"}",
"legendFormat": "Bandwidth",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "mbps"
}
}
},
{
"title": "Jitter Over Time (Source: $source_node, Dest: $destination_node)",
"type": "timeseries",
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 9
},
"targets": [
{
"expr": "iperf_network_jitter_ms{source_node=~\"^$source_node$\", destination_node=~\"^$destination_node$\", protocol=\"udp\"}",
"legendFormat": "Jitter",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"unit": "ms"
}
}
}
],
"refresh": "30s",
"schemaVersion": 36,
"style": "dark",
"tags": ["iperf3", "network", "kubernetes"],
"templating": {
"list": [
{
"current": {},
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"definition": "label_values(iperf_network_bandwidth_mbps, source_node)",
"hide": 0,
"includeAll": false,
"multi": false,
"name": "source_node",
"options": [],
"query": "label_values(iperf_network_bandwidth_mbps, source_node)",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
},
{
"current": {},
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"definition": "label_values(iperf_network_bandwidth_mbps{source_node=~\"^$source_node$\"}, destination_node)",
"hide": 0,
"includeAll": false,
"multi": false,
"name": "destination_node",
"options": [],
"query": "label_values(iperf_network_bandwidth_mbps{source_node=~\"^$source_node$\"}, destination_node)",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
},
{
"current": { "selected": true, "text": "tcp", "value": "tcp" },
"hide": 0,
"includeAll": false,
"multi": false,
"name": "protocol",
"options": [
{ "selected": true, "text": "tcp", "value": "tcp" },
{ "selected": false, "text": "udp", "value": "udp" }
],
"query": "tcp,udp",
"skipUrlSync": false,
"type": "custom"
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Kubernetes iperf3 Network Performance",
"uid": "k8s-iperf3-dashboard",
"version": 1,
"weekStart": ""
}

charts/iperf3-monitor/templates/exporter-controller.yaml

@@ -45,7 +45,7 @@ Proceed with modifications only if the exporter controller is defined.
{{- $_ := set $baseExporterEnv "IPERF_SERVER_NAMESPACE" (dict "valueFrom" (dict "fieldRef" (dict "fieldPath" "metadata.namespace"))) -}}
{{- $_ := set $baseExporterEnv "IPERF_TEST_TIMEOUT" ($exporterControllerConfig.appConfig.testTimeout | default "10" | toString) -}}
{{- $serverLabelSelectorDefault := printf "app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s,app.kubernetes.io/component=server" $appName $release.Name -}}
{{- $serverLabelSelector := tpl ($exporterControllerConfig.appConfig.serverLabelSelector | default $serverLabelSelectorDefault) (dict "Release" $release "Chart" $chart "Values" $localValues) -}}
{{- $serverLabelSelector := tpl ($exporterControllerConfig.appConfig.serverLabelSelector | default $serverLabelSelectorDefault) . -}}
{{- $_ := set $baseExporterEnv "IPERF_SERVER_LABEL_SELECTOR" $serverLabelSelector -}}
{{- end -}}
@@ -72,12 +72,23 @@ Proceed with modifications only if the exporter controller is defined.
{{- /*
Ensure the container image tag is set, defaulting to Chart.AppVersion if empty,
as the common library validation requires it during 'helm template'.
NOTE: BJW-S common library typically handles defaulting image.tag to Chart.appVersion
if image.tag is empty or null in values. The custom logic below prepending "v"
is specific to this chart and might be redundant if the common library's default
is preferred. For now, we keep it as it was the reason for previous errors if tag was not set.
However, if common library handles it, this block could be removed and image.tag in values.yaml set to "" or null.
Forcing the tag to be set (even if to chart.appVersion) ensures the common library doesn't complain.
The issue encountered during `helm template` earlier (empty output) was resolved by
explicitly setting the tag (e.g. via --set or by ensuring values.yaml has it).
The common library's internal validation likely needs *a* tag to be present in the values passed to it,
even if that tag is derived from AppVersion. This block ensures that.
*/}}
{{- $exporterContainerCfg := get $exporterControllerConfig.containers "exporter" -}}
{{- if $exporterContainerCfg -}}
{{- if not $exporterContainerCfg.image.tag -}}
{{- if $chart.AppVersion -}}
{{- $_ := set $exporterContainerCfg.image "tag" $chart.AppVersion -}}
{{- $_ := set $exporterContainerCfg.image "tag" (printf "%s" $chart.AppVersion) -}} # Removed "v" prefix
{{- else -}}
{{- fail (printf "Error: Container image tag is not specified for controller '%s', container '%s', and Chart.AppVersion is also empty." $exporterControllerKey "exporter") -}}
{{- end -}}
@@ -124,6 +135,6 @@ Ensure defaultPodOptionsStrategy exists, as common lib expects it at the root of
Call the common library's main render function for controllers.
This function iterates through all controllers defined under $localValues.controllers
(in our case, just "exporter") and renders them using their specified type and configuration.
The context passed must mirror the global Helm context, including 'Values', 'Chart', 'Release', and 'Capabilities'.
The context passed must mirror the global Helm context, including 'Values', 'Chart', 'Release', 'Capabilities', and 'Template'.
*/}}
{{- include "bjw-s.common.render.controllers" (dict "Values" $localValues "Chart" $chart "Release" $release "Capabilities" .Capabilities) | nindent 0 -}}
{{- include "bjw-s.common.render.controllers" (dict "Values" $localValues "Chart" $chart "Release" $release "Capabilities" .Capabilities "Template" .Template) | nindent 0 -}}

charts/iperf3-monitor/templates/grafana-dashboard-configmap.yaml

@@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-grafana-dashboard
labels:
grafana_dashboard: "1"
app.kubernetes.io/name: {{ include "iperf3-monitor.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
iperf3-dashboard.json: |
{{ .Files.Get "grafana/iperf3-dashboard.json" | nindent 4 }}

charts/iperf3-monitor/templates/rbac.yaml

@@ -7,9 +7,10 @@ metadata:
{{- include "iperf3-monitor.labels" . | nindent 4 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
kind: Role
metadata:
name: {{ include "iperf3-monitor.fullname" . }}-role
namespace: {{ .Release.Namespace }}
labels:
{{- include "iperf3-monitor.labels" . | nindent 4 }}
rules:
@@ -18,9 +19,10 @@ rules:
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
kind: RoleBinding
metadata:
name: {{ include "iperf3-monitor.fullname" . }}-rb
namespace: {{ .Release.Namespace }}
labels:
{{- include "iperf3-monitor.labels" . | nindent 4 }}
subjects:
@@ -28,7 +30,7 @@ subjects:
name: {{ include "iperf3-monitor.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
kind: Role # Changed from ClusterRole
name: {{ include "iperf3-monitor.fullname" . }}-role
apiGroup: rbac.authorization.k8s.io
{{- end -}}

Exporter Service template

@@ -11,7 +11,7 @@ spec:
{{- include "iperf3-monitor.selectorLabels" . | nindent 4 }}
app.kubernetes.io/component: exporter
ports:
- name: metrics
port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
- name: metrics # Assuming 'metrics' is the intended name, aligns with values structure
port: {{ .Values.service.main.ports.metrics.port }}
targetPort: {{ .Values.service.main.ports.metrics.targetPort }}
protocol: {{ .Values.service.main.ports.metrics.protocol | default "TCP" }}

charts/iperf3-monitor/values.yaml

@@ -45,7 +45,8 @@ controllers:
# -- Annotations for the exporter pod.
annotations: {}
# -- Labels for the exporter pod.
labels: {} # The common library will add its own default labels.
labels:
app.kubernetes.io/component: exporter # Ensure pods get the component label for service selection
# -- Node selector for scheduling exporter pods.
nodeSelector: {}
# -- Tolerations for scheduling exporter pods.
@@ -86,13 +87,15 @@ controllers:
# key: mykey
# -- Ports for the exporter container.
# Expected by Kubernetes and bjw-s common library as a list of objects.
ports:
metrics: # Name of the port, will be used in Service definition
- name: metrics # Name of the port, referenced by Service's targetPort
# -- Port number for the metrics endpoint on the container.
port: 9876 # Default, should match service.targetPort
containerPort: 9876
# -- Protocol for the metrics port.
protocol: TCP # Common library defaults to TCP if not specified.
enabled: true # This port is enabled
protocol: TCP
# -- Whether this port definition is enabled. Specific to bjw-s common library.
enabled: true
# -- CPU and memory resource requests and limits for the exporter container.
resources:
@@ -198,7 +201,8 @@ networkPolicy:
# Dependency Configuration (for Prometheus Operator)
dependencies:
# -- If true, install Prometheus operator dependency (used if serviceMonitor.enabled=true).
install: true
# -- If true, use TrueCharts Prometheus Operator instead of kube-prometheus-stack.
# -- Set to false by default. Set to true to install a Prometheus operator dependency (used if serviceMonitor.enabled=true).
# -- If false (default), and serviceMonitor.enabled is true, you must have a compatible Prometheus Operator already running in your cluster.
install: false
# -- If true, use TrueCharts Prometheus Operator instead of kube-prometheus-stack (used if dependencies.install is true).
useTrueChartsPrometheusOperator: false

exporter/exporter.py

@@ -92,16 +92,18 @@ def discover_iperf_servers():
logging.info(f"Discovering iperf3 servers with label '{label_selector}' in namespace '{namespace}'")
ret = v1.list_pod_for_all_namespaces(label_selector=label_selector, watch=False)
# Use list_namespaced_pod to query only the specified namespace
ret = v1.list_namespaced_pod(namespace=namespace, label_selector=label_selector, watch=False)
servers = []
for item in ret.items:
# No need to filter by namespace here as the API call is already namespaced
if item.status.pod_ip and item.status.phase == 'Running':
servers.append({
'ip': item.status.pod_ip,
'node_name': item.spec.node_name # Node where the iperf server pod is running
})
logging.info(f"Discovered {len(servers)} iperf3 server pods.")
logging.info(f"Discovered {len(servers)} iperf3 server pods in namespace '{namespace}'.")
return servers
except config.ConfigException as e:
logging.error(f"Kubernetes config error: {e}. Is the exporter running in a cluster with RBAC permissions?")