Deploy Portefaix on Scaleway
This guide shows you how to deploy a Portefaix platform on Scaleway using Kapsule managed Kubernetes, Scaleway Object Storage for Terraform state, and Scaleway Cockpit for managed observability.
Goal: a running Kapsule cluster with Portefaix stacks continuously reconciled by ArgoCD, with metrics and logs shipped to Scaleway Cockpit.
Prerequisites
- Scaleway account with Project Owner or IAM permissions for Kubernetes and Object Storage
- scw CLI configured (`scw init`)
- Terraform ≥ 1.5, kubectl, and Helm installed locally
- Scaleway API key pair (Access Key + Secret Key) with the `KubernetesFullAccess` and `ObjectStorageFullAccess` permission sets
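Before continuing, it can help to fail fast when one of these tools is missing. A minimal sketch (tool names taken from the list above; `require` is a hypothetical helper, not part of Portefaix):

```shell
# Sketch: fail fast when a prerequisite CLI is missing.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1"; return 1; }
}

missing=0
for tool in scw terraform kubectl helm; do
  require "$tool" || missing=1
done
[ "$missing" -eq 0 ] && echo "all prerequisites found" || echo "install the tools listed above"
```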
1. Configure your environment
Add the Scaleway credentials to your Portefaix config file at `$HOME/.config/portefaix/portefaix.sh`:
```shell
function setup_scaleway() {
    export SCW_ACCESS_KEY="SCWXXXXXXXXXXXXXXXXX"
    export SCW_SECRET_KEY="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    export SCW_DEFAULT_PROJECT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    export SCW_DEFAULT_ORGANIZATION_ID="$SCW_DEFAULT_PROJECT_ID"
    # Scaleway Object Storage exposes an S3-compatible API
    export AWS_ACCESS_KEY_ID="$SCW_ACCESS_KEY"
    export AWS_SECRET_ACCESS_KEY="$SCW_SECRET_KEY"
    export AWS_DEFAULT_REGION="fr-par"
    export AWS_REGION="fr-par"
}
```

Then source the file and select your target environment:

```shell
. ./portefaix.sh scaleway
export SCW_DEFAULT_REGION="fr-par"
export PORTEFAIX_ENV="staging"
```

2. Create Terraform remote state storage
Scaleway Object Storage is S3-compatible. Create a bucket to hold Terraform state files:
```shell
scw object bucket create \
    portefaix-tfstate \
    --region $SCW_DEFAULT_REGION

# Enable versioning for state history
scw object bucket update portefaix-tfstate \
    --enable-versioning \
    --region $SCW_DEFAULT_REGION
```

3. Provision the Kapsule cluster with Terraform
```shell
cd portefaix-infrastructure/terraform/scaleway/kapsule
cp terraform.tfvars.example terraform.tfvars
```

Key variables in `terraform.tfvars`:
```hcl
region       = "fr-par"
zone         = "fr-par-1"
project_id   = "your-project-id"
cluster_name = "portefaix-staging"
k8s_version  = "1.31"
cni          = "cilium"
```

Initialize Terraform with the Scaleway backend configuration:

```shell
terraform init \
    -backend-config="bucket=portefaix-tfstate" \
    -backend-config="key=kapsule/$PORTEFAIX_ENV.tfstate" \
    -backend-config="region=$SCW_DEFAULT_REGION" \
    -backend-config="endpoint=https://s3.$SCW_DEFAULT_REGION.scw.cloud"
```
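Equivalently, the backend settings can be committed in the Terraform configuration instead of passed as `-backend-config` flags. A sketch, using this guide's bucket name and the staging state key; the two `skip_*` options are standard Terraform S3-backend settings that are commonly needed because the endpoint is Scaleway, not AWS:

```hcl
terraform {
  backend "s3" {
    bucket                      = "portefaix-tfstate"
    key                         = "kapsule/staging.tfstate"
    region                      = "fr-par"
    endpoint                    = "https://s3.fr-par.scw.cloud"
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
```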
```shell
terraform plan -out=tfplan
terraform apply tfplan
```

4. Fetch cluster credentials
```shell
export CLUSTER_ID="$(terraform output -raw cluster_id)"
scw k8s kubeconfig install "$CLUSTER_ID" \
    --region $SCW_DEFAULT_REGION
kubectl get nodes
```

5. Configure Scaleway Cockpit
Scaleway Cockpit provides a managed Prometheus-compatible metrics endpoint and a Loki-compatible logs endpoint. Create dedicated tokens and store them as Kubernetes secrets — the platform components will use these to push telemetry:
```shell
# Create push tokens
scw cockpit token create \
    name=portefaix-metrics \
    type=metrics \
    --project-id $SCW_DEFAULT_PROJECT_ID

scw cockpit token create \
    name=portefaix-logs \
    type=logs \
    --project-id $SCW_DEFAULT_PROJECT_ID
```

Store the tokens as a Kubernetes secret:

```shell
kubectl create namespace monitoring

kubectl create secret generic cockpit-tokens \
    --namespace monitoring \
    --from-literal=metrics-token="YOUR_METRICS_TOKEN" \
    --from-literal=logs-token="YOUR_LOGS_TOKEN"
```

Retrieve your Cockpit push endpoints (used in the ArgoCD values file):
```shell
scw cockpit get --project-id $SCW_DEFAULT_PROJECT_ID
```

6. Deploy Portefaix stacks via ArgoCD
```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

helm install argocd argo/argo-cd \
    --namespace argocd --create-namespace \
    --values portefaix-kubernetes/gitops/argocd/values-scaleway.yaml \
    --wait

kubectl apply -f portefaix-kubernetes/gitops/argocd/bootstrap/app-of-apps-scaleway-$PORTEFAIX_ENV.yaml
```
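The app-of-apps manifest applied above boils down to a single ArgoCD `Application` that points at a directory of further `Application` definitions, which ArgoCD then reconciles continuously. A hypothetical sketch of its shape (repo URL, path, and sync policy here are illustrative assumptions, not the actual file contents):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: portefaix-bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/portefaix/portefaix-kubernetes
    targetRevision: main
    # Directory containing one Application per Portefaix stack
    path: gitops/argocd/apps/scaleway/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```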
```shell
argocd app wait portefaix-bootstrap --health --timeout 600
```

7. Automate with Spacelift (optional)
Portefaix supports Spacelift
as an alternative to running Terraform locally. Create a Spacelift Stack pointing to
portefaix-infrastructure/terraform/scaleway/kapsule and add your Scaleway
credentials as Stack environment variables. Spacelift handles plan previews on PRs and
applies on merge automatically.
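Using the Spacelift Terraform provider, such a Stack could be declared roughly as follows (a sketch: the resource types come from the `spacelift` provider, while the repository name and credential wiring are assumptions):

```hcl
resource "spacelift_stack" "kapsule" {
  name         = "portefaix-scaleway-kapsule"
  repository   = "portefaix-infrastructure"
  branch       = "main"
  project_root = "terraform/scaleway/kapsule"
  autodeploy   = true # apply automatically on merge
}

# Mark the secret write-only so it never appears in plan output
resource "spacelift_environment_variable" "scw_secret_key" {
  stack_id   = spacelift_stack.kapsule.id
  name       = "SCW_SECRET_KEY"
  value      = var.scw_secret_key
  write_only = true
}
```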
Stacks available on Scaleway
| Stack | Description | Scaleway service used |
|---|---|---|
| Observability | Prometheus, Grafana, Loki, Tempo | Cockpit (managed metrics + logs endpoints) |
| Long-term storage | Thanos, Loki chunks, Tempo blocks | Object Storage (S3-compatible) |
| Secret management | External Secrets Operator | Scaleway Secret Manager |
| DNS management | External DNS | Scaleway DNS |
| TLS certificates | cert-manager | Scaleway DNS for DNS-01 challenges |
| Policy enforcement | Kyverno | — |
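For the secret-management stack, the External Secrets Operator reaches Scaleway Secret Manager through a `SecretStore`. A sketch of what that could look like (field names follow ESO's Scaleway provider; the namespace, store name, and credential secret keys are assumptions):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: scaleway-secret-manager
  namespace: external-secrets
spec:
  provider:
    scaleway:
      region: fr-par
      projectId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
      accessKey:
        secretRef:
          name: scaleway-credentials
          key: access-key
      secretKey:
        secretRef:
          name: scaleway-credentials
          key: secret-key
```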
Tip: Kapsule clusters ship with Cilium as the default CNI. Set `cni = "cilium"` in Terraform to enable Hubble for in-cluster network observability; it integrates with the Portefaix observability stack automatically.
Troubleshooting
Terraform S3 backend unreachable
Scaleway Object Storage requires the region-specific endpoint. Verify your backend config uses:

```hcl
endpoint = "https://s3.fr-par.scw.cloud"  # for fr-par
endpoint = "https://s3.nl-ams.scw.cloud"  # for nl-ams
endpoint = "https://s3.pl-waw.scw.cloud"  # for pl-waw
```

Cockpit tokens rejected
```shell
# List tokens and verify they exist
scw cockpit token list --project-id $SCW_DEFAULT_PROJECT_ID

# Re-create if missing
scw cockpit token create name=portefaix-metrics type=metrics \
    --project-id $SCW_DEFAULT_PROJECT_ID
```