Executive Summary
This paper documents the deployment of a self-hosted engineering quality platform on Kubernetes, combining Allure TestOps for test result history and trend analysis with SonarQube for static analysis and code coverage. Authentication is unified through oauth2-proxy backed by a self-hosted Gitea OIDC provider. Persistent state is secured via a Longhorn encrypted StorageClass. The analysis surfaces two distinct production defects in the oauth2-proxy-Gitea integration — a trailing-slash issuer mismatch and an nginx loopback condition — whose root causes are non-obvious and whose failure modes produce silent authentication rejection rather than actionable error messages. In addition, the paper establishes that nextest’s native JUnit XML output bridges cleanly to Allure TestOps without requiring native Rust annotation libraries, substantially reducing integration complexity for teams deploying this stack.
Key Findings
- nextest’s native JUnit XML output is fully compatible with Allure TestOps ingestion, eliminating the need for native Rust Allure annotation libraries and decoupling test framework tooling from CI pipeline tooling.
- Gitea’s OIDC discovery endpoint returns an issuer claim with a trailing slash; oauth2-proxy performs a strict string comparison against the configured oidc_issuer_url, and a single-character discrepancy causes silent authentication failure with no actionable error surfaced to the end user.
- Setting the nginx auth_url directive to an external hostname routes the subrequest back through nginx itself, creating a loopback condition; cluster-internal DNS resolves this at the network layer without changes to the nginx configuration logic.
- Longhorn’s encrypted StorageClass requires all three CSI secret references to be present simultaneously; partial configuration does not produce an error at provisioning time, but encryption does not activate, and the omission is undetectable without explicit verification.
- The nextest JUnit XML bridge pattern decouples test framework tooling from CI pipeline tooling, allowing each component to evolve independently and eliminating a class of version-lock dependency that native annotation libraries introduce.
1. Self-Hosted Quality Infrastructure Achieves Feature Parity With Managed Services at the Cost of Operational Depth
Self-hosted Kubernetes environments present a recurring infrastructure challenge: tooling that is trivially available in managed CI/CD SaaS offerings must be deployed, secured, and maintained as first-class platform components. Test result visualization and static analysis fall into this category. Teams operating in air-gapped or compliance-constrained environments cannot rely on external services for these functions. The platform described in this paper satisfies four requirements:
- Test result history and trend analysis across multiple Rust services, with per-run JUnit XML ingestion
- Static analysis and code coverage with LCOV-format Rust coverage data
- Unified OIDC authentication delegated to the existing self-hosted Gitea instance
- Encrypted persistent storage for all stateful components
The rust-ci.yml shared workflow coordinates lint, test, coverage collection, and upload across all Rust service repositories.
| Component | Role | Persistent Storage |
|---|---|---|
| Allure TestOps | Test result history, trend analysis | Longhorn encrypted (PostgreSQL backend) |
| SonarQube | Static analysis, coverage ingestion | Longhorn encrypted (data + extensions volumes) |
| oauth2-proxy | OIDC authentication proxy | None (stateless) |
| Gitea | OIDC provider (existing) | Pre-existing |
2. nextest’s Native JUnit XML Output Eliminates the Allure Annotation Library Dependency
2.1 Deployment
Allure TestOps is deployed via the frankescobar/allure-docker-service Docker image as a Kubernetes Deployment. The service exposes a REST API for result upload and a web UI for report visualization. The PostgreSQL backend uses the Longhorn encrypted StorageClass described in Section 5.
The Deployment manifest specifies a single replica with a PersistentVolumeClaim for the PostgreSQL data directory:
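A sketch of such a manifest is shown below; the namespace, volume names, mount path, storage size, and the longhorn-encrypted StorageClass name are assumptions, not values from the original deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: allure
  namespace: quality
spec:
  replicas: 1
  selector:
    matchLabels:
      app: allure
  template:
    metadata:
      labels:
        app: allure
    spec:
      containers:
        - name: allure
          image: frankescobar/allure-docker-service
          ports:
            - containerPort: 5050
          volumeMounts:
            - name: data
              mountPath: /app/projects   # illustrative mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: allure-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: allure-data
  namespace: quality
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn-encrypted   # encrypted StorageClass from Section 5
  resources:
    requests:
      storage: 10Gi
```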
2.2 The JUnit XML Bridge: Replacing Native Annotation Libraries
The standard approach to integrating Rust test output with Allure requires the allure-rs annotation library, which instruments test functions with Allure-specific macros. This approach introduces a compile-time dependency on a third-party crate that must be maintained alongside the test suite, and whose version must remain compatible with the Allure server’s ingestion format as both components evolve.
The nextest JUnit XML approach eliminates this dependency entirely. cargo nextest run --profile ci produces a JUnit-format XML report at target/nextest/ci/junit.xml. This XML is structurally identical to the output produced by Java, Python, and Go test frameworks that Allure natively ingests. The Allure TestOps REST API accepts it without modification.
A comparison of the two approaches:
| Dimension | allure-rs Annotation Approach | nextest JUnit XML Bridge |
|---|---|---|
| Test framework dependency | allure-rs crate required | None beyond cargo-nextest |
| Instrumentation requirement | Per-test macro annotations | None — nextest is the test runner |
| Version coupling | Annotation library version must match server API | JUnit XML is a stable, version-independent format |
| Failure mode | Library version mismatch breaks ingestion | Format mismatch produces clear parse error |
| Rich metadata (steps, attachments) | Supported via macros | Not supported — pass/fail/duration only |
| Adoption cost | Requires modifying test code | Requires only CI pipeline change |
Teams that require step-level metadata or attachments can still adopt allure-rs independently.
The upload step in the CI pipeline posts the JUnit XML to the Allure REST API after the test run completes:
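A sketch of that step, in Gitea Actions syntax; the ALLURE_URL value, upload endpoint path, and auth header are placeholders for the deployment’s actual Allure ingestion API, not documented endpoints:

```yaml
- name: Upload JUnit XML to Allure
  if: always()   # upload results even when the test job fails
  run: |
    curl -sf -X POST \
      -H "Authorization: Bearer ${ALLURE_TOKEN}" \
      -F "file=@target/nextest/ci/junit.xml" \
      "${ALLURE_URL}/upload"
```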
The if: always() condition is critical. Without it, the upload step is skipped when tests fail — precisely the case where historical failure data is most valuable for trend analysis. Setting if: always() ensures Allure receives result data regardless of test outcome.
The .config/nextest.toml file configures the JUnit output path and format:
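A minimal profile configuration, assuming nextest’s standard .config/nextest.toml location in the repository:

```toml
# .config/nextest.toml
[profile.ci.junit]
# Enabling this section turns on JUnit output for the ci profile;
# the report lands under target/nextest/ci/ by default.
path = "junit.xml"
```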
The report is written to target/nextest/ci/junit.xml relative to the workspace root on every cargo nextest run --profile ci invocation.
3. SonarQube LCOV Ingestion Works Directly Without Format Conversion in Community Edition
3.1 SonarQube Deployment
SonarQube is deployed as a Kubernetes Deployment using the official sonarqube:community image. The community edition supports all features required for this use case: static analysis, LCOV coverage ingestion, branch analysis, and quality gate enforcement.
SonarQube requires two persistent volumes: one for application data (/opt/sonarqube/data) and one for extensions (/opt/sonarqube/extensions). Both use the Longhorn encrypted StorageClass.
SonarQube’s startup requirements include elevated vm.max_map_count and fs.file-max kernel parameters, typically set via an init container:
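A sketch of that init container; the values match SonarQube’s documented minimums, while the image tag is an assumption:

```yaml
initContainers:
  - name: init-sysctl
    image: busybox:1.36
    securityContext:
      privileged: true   # required to write kernel parameters on the node
    command:
      - sh
      - -c
      - |
        sysctl -w vm.max_map_count=262144
        sysctl -w fs.file-max=131072
```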
3.2 LCOV Coverage Collection
Rust coverage is collected using cargo-llvm-cov, which instruments the binary at the LLVM IR level and produces coverage data in LCOV format. LCOV is SonarQube’s preferred Rust coverage format — it is consumed directly by the SonarQube scanner without intermediate conversion.
The coverage collection step in the CI pipeline:
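A sketch of the step using cargo-llvm-cov’s documented flags; the output path is an assumption:

```yaml
- name: Collect coverage
  run: |
    # --lcov/--output-path produce an LCOV report; arguments after `--`
    # reach the test binary, where --test-threads=1 serializes execution.
    cargo llvm-cov --lcov --output-path lcov.info -- --test-threads=1
```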
The --test-threads=1 flag serializes test execution during coverage collection. Parallel test execution with LLVM coverage instrumentation produces race conditions in the coverage counters for shared global state, resulting in non-deterministic coverage data.
The SonarQube scanner reads the LCOV file via sonar.coverageReportPaths:
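A sketch of the scanner configuration; the project key and source directory are placeholders:

```properties
# sonar-project.properties
sonar.projectKey=my-rust-service
sonar.sources=src
# Points the scanner at the LCOV report produced by cargo-llvm-cov.
sonar.coverageReportPaths=lcov.info
```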
3.3 The ci-sonar Docker Image and JRE Symlink Fix
The CI pipeline uses a custom ci-sonar Docker image that bundles the SonarQube scanner alongside the Rust toolchain. The SonarQube scanner requires a JRE at a specific path; the exact path differs between JDK distributions. When the JRE is not found at the expected path, the scanner fails with a JAVA_HOME resolution error that does not identify the missing symlink as the cause.
The Dockerfile includes an explicit symlink to normalize the JRE path across distributions:
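A sketch of that fragment; the source JRE path depends on the base image’s JDK distribution and is an assumption here:

```dockerfile
# Normalize the JRE location so the scanner finds java at a stable path.
RUN ln -s /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/default-jvm \
    && ln -s /usr/lib/jvm/default-jvm/bin/java /usr/local/bin/java
ENV JAVA_HOME=/usr/lib/jvm/default-jvm
```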
4. Two Production Defects in the oauth2-proxy–Gitea Integration Produce Silent Authentication Failures
4.1 Architecture
oauth2-proxy is deployed as a sidecar-pattern proxy in front of both Allure and SonarQube. Incoming requests are intercepted; unauthenticated sessions are redirected to Gitea’s OIDC authorization endpoint; authenticated sessions receive a session cookie and are proxied upstream. The nginx Ingress configuration delegates authentication to oauth2-proxy via an auth_url subrequest on each protected request:
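A sketch of the annotation-based delegation; hostnames, namespace, and Service name are placeholders, and Sections 4.2–4.3 cover the pitfalls in choosing these values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: allure
  annotations:
    # Server-side auth subrequest target (cluster-internal, see Section 4.3).
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth"
    # Browser redirect target for unauthenticated sessions.
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.internal.example.com/oauth2/start?rd=$scheme://$host$request_uri"
```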
4.2 Production Bug: Trailing-Slash Issuer Mismatch
Gitea’s OIDC discovery endpoint, available at /.well-known/openid-configuration, returns an issuer claim that includes a trailing slash:
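An abbreviated discovery document with a placeholder hostname; note the trailing slash on the issuer:

```json
{
  "issuer": "https://git.internal.example.com/",
  "authorization_endpoint": "https://git.internal.example.com/login/oauth/authorize",
  "token_endpoint": "https://git.internal.example.com/login/oauth/access_token"
}
```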
oauth2-proxy performs a strict string comparison between this issuer and the oidc_issuer_url value in its configuration. If the configured value does not include the trailing slash, the comparison fails. The resulting behavior is a silent authentication rejection: oauth2-proxy presents the login flow normally, the user authenticates against Gitea, and the callback is rejected without a user-visible error message identifying the issuer mismatch as the cause.
The corrected oauth2-proxy configuration:
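A sketch of the relevant fragment; hostnames are placeholders and the client credentials are deliberately elided:

```toml
provider = "oidc"
# Must match Gitea's issuer character-for-character, including the trailing slash.
oidc_issuer_url = "https://git.internal.example.com/"
client_id = "..."       # elided
client_secret = "..."   # elided
redirect_url = "https://auth.internal.example.com/oauth2/callback"
```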
The trailing slash must be preserved in oidc_issuer_url even though it is unconventional for URL configuration values. Without it, the OIDC flow completes on the Gitea side and fails on the oauth2-proxy side with no user-visible indication.
4.3 Production Bug: nginx Loopback via External Hostname
The second production defect arises from nginx’s auth_url subrequest mechanism. When auth_url is set to an external hostname — for example, https://auth.internal.example.com/oauth2/auth — nginx routes the subrequest through its own listener, because the external hostname resolves to the same cluster IP that serves nginx. The subrequest arrives at nginx, which processes it as a new ingress request, routes it to the auth Ingress, and issues another subrequest — completing a loopback.
The loopback condition manifests as request timeout errors on protected resources, not as an authentication error. Diagnosis requires inspecting nginx access logs to observe the recursive subrequest pattern.
The resolution is to configure auth_url using the cluster-internal DNS name of the oauth2-proxy Service:
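For example, assuming the oauth2-proxy Service runs in an auth namespace on oauth2-proxy’s default port 4180:

```yaml
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth"
```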
The auth-signin annotation may continue to reference the external hostname, as it is used for browser redirects rather than server-side subrequests:
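For example, with a placeholder external hostname:

```yaml
nginx.ingress.kubernetes.io/auth-signin: "https://auth.internal.example.com/oauth2/start?rd=$scheme://$host$request_uri"
```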
5. Longhorn Provisions Unencrypted Volumes When Any of Three Required CSI Secret References Are Missing
5.1 Encrypted StorageClass Configuration
Longhorn provides block-level encryption via dm-crypt, configured through a StorageClass with CSI secret references. The encrypted StorageClass requires three distinct secrets to be referenced simultaneously. The Longhorn documentation presents each secret reference individually, which can lead practitioners to configure only the subset relevant to their immediate use case. However, all three references must be present for encryption to activate; Longhorn does not produce an error when references are absent — it silently provisions an unencrypted volume. The three required CSI secret references and their roles:
| Parameter | Role | Applied |
|---|---|---|
| csi.storage.k8s.io/provisioner-secret-name | Global encryption configuration | At volume provisioning time |
| csi.storage.k8s.io/node-stage-secret-name | Per-node encryption key | When the volume is staged on a node |
| csi.storage.k8s.io/node-publish-secret-name | Per-volume publish secret | When the volume is published to a Pod |
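A sketch of an encrypted StorageClass carrying all three references, assuming a single Secret named longhorn-crypto in the longhorn-system namespace (each reference also carries a matching -namespace parameter):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-encrypted
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"
  encrypted: "true"
  # All three secret references must be present for encryption to activate.
  csi.storage.k8s.io/provisioner-secret-name: "longhorn-crypto"
  csi.storage.k8s.io/provisioner-secret-namespace: "longhorn-system"
  csi.storage.k8s.io/node-stage-secret-name: "longhorn-crypto"
  csi.storage.k8s.io/node-stage-secret-namespace: "longhorn-system"
  csi.storage.k8s.io/node-publish-secret-name: "longhorn-crypto"
  csi.storage.k8s.io/node-publish-secret-namespace: "longhorn-system"
```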
The numberOfReplicas: "1" setting is appropriate for single-node homelab deployments where no additional nodes are available for replica placement. In multi-node production environments, increase this value to match the replication factor required by the availability SLA. Longhorn will refuse to schedule a volume if insufficient nodes are available for the configured replica count.
5.2 Verification
The only reliable method to verify that encryption activated is to inspect the block device on the Longhorn node after the volume is staged. An encrypted volume appears as a dm-N device with FSTYPE=crypto_LUKS on the underlying block device and ext4 on the dm-crypt layer. An unencrypted volume appears as ext4 directly on the block device. The distinction is not visible from within the Pod or from Kubernetes API objects.
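The check can be sketched with lsblk on the node; the device names below are illustrative of the layout described above, not captured output:

```shell
# Run on the Longhorn node hosting the replica, not inside the Pod.
lsblk -f
# Encrypted volume: crypto_LUKS on the backing device, ext4 on the mapper.
#   sdb            crypto_LUKS
#   └─dm-0         ext4
# Unencrypted volume: ext4 directly on the backing device.
```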
6. A Reusable Shared Workflow Centralizes Lint, Test, and Coverage Logic Across All Service Repositories
6.1 Shared Workflow Design
The CI pipeline is implemented as a reusable shared workflow (rust-ci.yml) consumed by all Rust service repositories. The workflow exposes input parameters that allow per-service customization without duplicating pipeline logic. This pattern ensures that changes to the lint configuration, test invocation, or Allure upload URL propagate to all services simultaneously.
The workflow defines two jobs: lint and test-and-sonar.
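A sketch of the reusable workflow interface in Gitea Actions syntax; input names beyond run_tests, the runner label, and the step commands are assumptions:

```yaml
name: rust-ci
on:
  workflow_call:
    inputs:
      run_tests:
        type: boolean
        default: false   # opt-in; see Section 6.2
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --check
      - run: cargo clippy -- -D warnings
  test-and-sonar:
    if: ${{ inputs.run_tests }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo nextest run --profile ci
```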
6.2 The run_tests Default: false
The run_tests input defaults to false. This is an intentional design decision driven by a recurring constraint: several services in the platform consume sufficient memory during parallel test execution to exhaust the CI runner’s available RAM, causing the test process to be killed by the kernel OOM handler with no actionable error output.
The default false value ensures that no service executes tests unless explicitly opted in. Each service repository overrides the default in its own workflow call:
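For example, with a placeholder repository path for the shared workflow:

```yaml
name: ci
on: [push, pull_request]
jobs:
  ci:
    uses: org/workflows/.github/workflows/rust-ci.yml@main
    with:
      run_tests: true   # explicit opt-in for this service
```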
6.3 End-to-End Test Migration
End-to-end tests were migrated from direct API calls to execution via the platform’s internal CLI tool. This change ensures that end-to-end tests exercise the full user-facing request path, including authentication middleware, tenant context extraction, and rate limiting — components that direct API calls may bypass depending on the test harness configuration. The migration does not affect the CI pipeline structure: end-to-end tests continue to produce JUnit XML output via nextest and are uploaded to Allure using the same mechanism as unit tests.
7. Recommendations for Deploying Quality Platforms on Self-Hosted Kubernetes
- Use nextest’s --profile ci with JUnit XML output as your default Rust test integration for Allure TestOps. The JUnit XML bridge eliminates the native annotation library dependency, reduces test suite coupling to the reporting infrastructure, and produces a stable, version-independent artifact. Reserve native annotation libraries for use cases that explicitly require step-level attachment data.
- Always set if: always() on Allure result upload steps. The most analytically valuable data — failure patterns and duration trends — is produced precisely when tests fail. A conditional upload that skips on test failure defeats the primary purpose of historical result tracking.
- Configure oidc_issuer_url in your oauth2-proxy with an explicit trailing slash when Gitea is the OIDC provider. Inspect Gitea’s /.well-known/openid-configuration endpoint and copy the issuer value character-for-character into the oauth2-proxy configuration. Do not normalize the URL by removing the trailing slash.
- Use cluster-internal Service DNS names for nginx auth_url subrequest targets. External hostnames that resolve to the cluster ingress IP create a loopback through nginx. The pattern http://service-name.namespace.svc.cluster.local/oauth2/auth is correct for any nginx Ingress auth subrequest and should be treated as a policy, not a workaround.
- Include all three CSI secret references in every Longhorn encrypted StorageClass you provision. Longhorn does not validate the presence of all three references at provisioning time; absent references result in silent unencrypted provisioning. Verify encryption activation by inspecting block device LUKS status on the Kubernetes node after the first PVC is bound.
- Default run_tests to false in your reusable CI workflows for memory-intensive Rust services. Allow individual service repositories to opt in explicitly after verifying that the CI runner has sufficient memory headroom. Establish a nextest test-threads profile for services that require reduced parallelism.
- Migrate your end-to-end tests to exercise the full user-facing request path, including CLI and middleware layers, rather than bypassing them with direct API calls. Tests that bypass authentication and tenant context do not validate the behavior that users experience.
Conclusion
The quality platform architecture described in this paper demonstrates that self-hosted Kubernetes environments can achieve feature parity with managed quality tooling SaaS offerings, at the cost of operational depth that managed services abstract away. The two oauth2-proxy production defects — trailing-slash issuer mismatch and nginx loopback — exemplify a class of integration failure that arises specifically at the boundary between independently developed components: each component behaves correctly in isolation, but their interaction produces a failure mode that neither component’s documentation anticipates.
The nextest JUnit XML bridge finding has broader applicability beyond Rust. Any test framework that produces standard JUnit XML output can integrate with Allure TestOps without native annotation libraries, provided the output conforms to the schema that Allure’s ingestion API accepts. This reduces the barrier to Allure adoption for teams whose primary language or framework does not have a mature native Allure library.
As self-hosted Kubernetes adoption continues to expand among engineering organizations operating under data residency, compliance, or cost constraints, the operational patterns documented here — unified OIDC proxy authentication, encrypted storage for quality data, reusable shared CI workflows — will become baseline infrastructure expectations rather than advanced configurations. The failure modes documented for each component represent current integration surface area that future versions of Gitea, oauth2-proxy, and Longhorn may address through improved validation and error reporting.
Code examples are sanitized and generalized. No proprietary information is shared. Opinions are my own and do not reflect my employer’s views.