Platform Architecture

This page explains how the Portefaix sub-projects relate to each other, the reasoning behind the layered architecture, and the design decisions that shaped the platform.

Reading this page will give you a mental model of the whole system. For step-by-step deployment instructions, see the Getting Started tutorial or the cloud-specific how-to guides.

The four layers

Portefaix is structured as four distinct layers. Each layer has a clear responsibility and depends only on the layers below it. This separation makes the platform composable: you can adopt one layer at a time, and replace components within a layer without affecting others.

Layer 1 — Cloud infrastructure

Managed by portefaix-infrastructure (Terraform). This layer creates the cloud resources that everything else runs on: VPCs, managed Kubernetes clusters (GKE, EKS, AKS, Kapsule), IAM roles, object storage buckets, and DNS zones.

The infrastructure layer is cloud-specific but structurally uniform. Every cloud provider module exposes the same outputs: cluster endpoint, workload identity configuration, and storage locations. This uniformity means higher layers do not need to know which cloud they are running on.
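That uniform output contract can be sketched in Terraform. The output names and module references below are illustrative, not the actual portefaix-infrastructure interface:

```hcl
# Illustrative output contract for a cloud provider module.
# Every provider module (GCP, AWS, Azure, Scaleway) exposes the
# same outputs, so higher layers stay cloud-agnostic.
output "cluster_endpoint" {
  description = "API server endpoint of the managed Kubernetes cluster"
  value       = module.kubernetes.endpoint
}

output "workload_identity" {
  description = "Workload identity configuration consumed by in-cluster operators"
  value       = module.kubernetes.workload_identity
}

output "storage_bucket" {
  description = "Object storage bucket used by the platform"
  value       = module.storage.bucket_name
}
```

Because every provider module satisfies the same contract, swapping GKE for EKS changes the module internals but not the interface the platform layer consumes.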

Layer 2 — Platform services

Managed by portefaix-kubernetes (ArgoCD + Helm). This layer installs the services that make Kubernetes production-ready: ingress controllers, certificate managers, secret operators, policy engines, and the observability stack.

Components in this layer are selected for CNCF maturity and alignment with open standards. They are deployed using Helm charts curated by the portefaix-hub project, which pins chart versions and applies Portefaix-specific default values.
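A component in this layer is typically described by an ArgoCD Application pointing at a pinned chart. The repository URL, chart path, and version below are hypothetical:

```yaml
# Hypothetical Application for a platform service; the real
# portefaix-hub repository layout and chart versions may differ.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/portefaix/portefaix-hub.git
    path: charts/cert-manager
    targetRevision: v1.0.0           # pinned chart version
    helm:
      valueFiles:
        - values.yaml                # Portefaix-specific defaults
        - values-gcp-staging.yaml    # environment overrides
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```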

Layer 3 — Policy and governance

Managed by portefaix-policies (Kyverno + OPA). Policies run as admission controllers and audit rules inside the cluster. They enforce:

  • Security baselines (no root containers, required labels, pod disruption budgets)
  • Resource governance (CPU/memory limits, namespace quotas)
  • Compliance controls (image registry allowlists, network policy requirements)
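A baseline rule such as a required label can be expressed as a small Kyverno ClusterPolicy. The policy name and label key here are illustrative:

```yaml
# Illustrative policy: reject Deployments missing a `team` label.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # block non-compliant resources at admission
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value
```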

Policies are stored in Git alongside the rest of the platform configuration and deployed by ArgoCD like any other resource. This means policy changes go through code review and can be tested in staging before being applied to production.
Setting `validationFailureAction` to audit mode in staging, then to enforce mode in production, is the usual path for rolling out a new rule safely.

Layer 4 — Application workloads

This layer consists of the applications deployed by platform consumers. It is not managed by Portefaix directly, but Portefaix provides the foundation — GitOps patterns, secret management, ingress, TLS, and observability — that application teams build on.

Why Kubernetes as the foundation?

Kubernetes was chosen not because it is simple (it is not) but because it provides a universal API surface. The same resource model — Deployments, Services, Ingresses, Custom Resources — works across every cloud provider and on-premises. This is the abstraction that makes Portefaix cloud-agnostic.

The operator pattern, in particular, is what enables platform automation at scale. Operators like External Secrets Operator and cert-manager turn imperative provisioning tasks (create a certificate, sync a secret) into declarative, reconciled resources.
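With cert-manager, for example, the imperative task "create a certificate" becomes a declarative resource that the operator reconciles continuously. The names and domain below are placeholders:

```yaml
# Placeholder Certificate; cert-manager obtains and renews the TLS
# secret, re-issuing it automatically before expiry.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-tls
  namespace: my-app
spec:
  secretName: app-tls          # Secret the operator creates and maintains
  dnsNames:
    - app.example.com
  issuerRef:
    name: letsencrypt-prod     # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer
```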

GitOps as the control plane

The platform does not have a control plane API or dashboard that operators use to make changes. The control plane is Git. Every change — deploying a new chart version, updating a policy, rotating a secret reference — is a pull request.

This means the operational model is familiar to any developer: branch, edit, PR, review, merge. There is no separate "ops console" to learn. ArgoCD observes the Git state and makes it real in the cluster.

Observability by default

Every component in the platform is instrumented before it is deployed. The observability stack (Prometheus, Grafana, Loki, Tempo) is installed in wave 3 of the ArgoCD sync order — before application workloads — so metrics, logs, and traces are available from the first deployment.
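Sync ordering uses the standard ArgoCD sync-wave annotation on each Application. A sketch, with an assumed Application name:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
  annotations:
    # Lower waves sync first: the observability stack lands in
    # wave 3, ahead of application workloads in later waves.
    argocd.argoproj.io/sync-wave: "3"
```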

portefaix-kubernetes includes pre-built Grafana dashboards for every platform component. These dashboards use standardised labels and follow the Grafana dashboard conventions so they compose cleanly with custom application dashboards.

Design decisions

Helm over Kustomize

Portefaix uses Helm for all components because Helm's values system provides a clean interface for cloud-specific and environment-specific overrides. A single chart can be deployed with different values to GCP staging, AWS production, and a local kind cluster. Kustomize patches are harder to review and easier to misapply.
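In practice the override model is one chart with layered values files, later files taking precedence. The keys below are hypothetical chart values, not the actual Portefaix defaults:

```yaml
# values.yaml: chart defaults (hypothetical keys)
replicaCount: 2
ingress:
  className: nginx
  host: app.example.com
---
# values-gcp-staging.yaml: overrides only what differs for GCP staging
replicaCount: 1
ingress:
  className: gce
```

At install time, `helm upgrade --install <release> <chart> -f values.yaml -f values-gcp-staging.yaml` merges the files left to right, with later files winning; ArgoCD's `helm.valueFiles` list behaves the same way.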

ArgoCD over FluxCD

Both tools implement GitOps correctly. ArgoCD was chosen because its ApplicationSet controller makes multi-cluster management declarative, and its UI provides a valuable operational view of sync status across all managed applications. FluxCD's image automation is more sophisticated, but Portefaix handles image versioning at the chart values layer.
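A minimal sketch of the ApplicationSet pattern, using the cluster generator to stamp out one Application per registered cluster (repository URL and chart path are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: ingress-nginx
  namespace: argocd
spec:
  generators:
    - clusters: {}               # one Application per cluster ArgoCD knows about
  template:
    metadata:
      name: '{{name}}-ingress-nginx'   # {{name}} = registered cluster name
    spec:
      project: platform
      source:
        repoURL: https://github.com/portefaix/portefaix-hub.git
        path: charts/ingress-nginx
        targetRevision: HEAD
      destination:
        server: '{{server}}'     # {{server}} = cluster API endpoint
        namespace: ingress-nginx
```

Adding a cluster to the fleet then requires no new Application manifests: the generator discovers it and the template produces the Application declaratively.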

Kyverno over OPA/Gatekeeper

Kyverno uses native Kubernetes resources (YAML policies) rather than a separate policy language (Rego). This lowers the barrier for platform teams to write and review policies. OPA/Rego is still available and used for more complex rule logic via the portefaix-policies project.

Further reading