Deploy Portefaix on AWS
This guide shows you how to deploy a production-ready Portefaix platform on Amazon Web Services using a multi-account AWS Organization, EKS with EKS Pod Identity for keyless IAM authentication, and GitHub Actions for infrastructure automation.
Goal: a running EKS cluster in a dedicated AWS account, with Portefaix stacks continuously reconciled by ArgoCD and IAM roles bound to workloads via EKS Pod Identity — no static credentials anywhere in the cluster.
Prerequisites
- AWS root account — you will create an Organization and sub-accounts from it
- AWS CLI v2 configured with admin credentials (`aws configure` or AWS SSO)
- Terraform ≥ 1.5 and kubectl installed locally
- A GitHub organisation for your GitOps repositories
1. Set up AWS Organizations
Portefaix follows a multi-account model: a root management account, and separate workload accounts per environment (staging, production). This isolates blast radius and simplifies IAM governance.
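Governance guardrails across member accounts are typically enforced with Service Control Policies. As a minimal illustration (not shipped by the Portefaix modules), an SCP that prevents member accounts from detaching themselves from the Organization looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeaveOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```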
From the AWS Console, create an Organization and enable Service Control Policies (SCPs) and Tag Policies. Then apply the Portefaix root Terraform module to create the Organizational Units and accounts declaratively:
```shell
cd portefaix-infrastructure/terraform/aws/root
terraform init
terraform plan -out=tfplan
terraform apply tfplan
```

Also enable the AWS Health organizational view for cross-account incident visibility:

- Go to AWS Health Dashboard → Organizational view → Configurations
- Click Enable organizational view
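Under the hood, the root module applied above manages Organizational Units and accounts declaratively with resources along these lines (an illustrative sketch, not the module's actual code):

```hcl
resource "aws_organizations_organizational_unit" "workloads" {
  name      = "workloads"
  parent_id = aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_account" "staging" {
  name      = "portefaix-staging"
  email     = "aws-staging@example.com"
  parent_id = aws_organizations_organizational_unit.workloads.id
}
```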
2. Configure Portefaix environment credentials
Create your local Portefaix configuration file at `$HOME/.config/portefaix/portefaix.sh`. This file is sourced before running Terraform or kubectl commands and sets the environment variables used across all tooling:
```shell
HOME_IP=$(curl -s http://ifconfig.me)
SLACK_WEBHOOK_NOTIFS="https://hooks.slack.com/services/xxx/xxx"

function setup_aws() {
    export AWS_ACCESS_KEY_ID="..."
    export AWS_SECRET_ACCESS_KEY="..."
    export AWS_DEFAULT_REGION="eu-west-1"
    export AWS_REGION="eu-west-1"

    # Terraform variable pass-through
    export TF_VAR_slack_webhook_url="$SLACK_WEBHOOK_NOTIFS"
    export TF_VAR_org_email="your-root@example.com"
    export TF_VAR_org_email_domain="example.com"
    export TF_VAR_org_admin_username="portefaix-admin"
    export TF_VAR_admin_ipv4="[\"$HOME_IP/32\"]"
}
```

Load the credentials for your session:
```shell
. ./portefaix.sh aws
```

3. Create Terraform remote state storage
Terraform state for the AWS workload account is stored in S3 with DynamoDB locking. Create these resources in the target account before running any other Terraform modules:
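The commands below assume `AWS_ACCOUNT_ID` is set; if it is not, it can be derived from the current credentials:

```shell
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```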
```shell
# Create the S3 bucket for Terraform state
aws s3api create-bucket \
    --bucket portefaix-tfstate-$AWS_ACCOUNT_ID \
    --region $AWS_REGION \
    --create-bucket-configuration LocationConstraint=$AWS_REGION

aws s3api put-bucket-versioning \
    --bucket portefaix-tfstate-$AWS_ACCOUNT_ID \
    --versioning-configuration Status=Enabled

# Create the DynamoDB table for state locking
aws dynamodb create-table \
    --table-name portefaix-tfstate-lock \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --region $AWS_REGION
```

4. Provision the EKS cluster with Terraform
The EKS Terraform module creates the cluster, VPC, node groups, and all IAM roles required for Pod Identity. Edit the variables file for your target environment, then apply:
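The `terraform init` step further down supplies the backend settings on the command line; this partial configuration assumes the module declares an empty S3 backend block, for example:

```hcl
terraform {
  backend "s3" {}
}
```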
```shell
cd portefaix-infrastructure/terraform/aws/eks
cp terraform.tfvars.example terraform.tfvars
```

Key variables to set in `terraform.tfvars`:
```hcl
aws_region      = "eu-west-1"
cluster_name    = "portefaix-staging"
cluster_version = "1.31"
vpc_cidr        = "10.0.0.0/16"

# Enable EKS Pod Identity (replaces IRSA on new clusters)
pod_identity_enabled = true
```

Initialise Terraform against the remote state backend created in step 3, then apply:

```shell
terraform init \
    -backend-config="bucket=portefaix-tfstate-$AWS_ACCOUNT_ID" \
    -backend-config="key=eks/staging.tfstate" \
    -backend-config="region=$AWS_REGION" \
    -backend-config="dynamodb_table=portefaix-tfstate-lock"

terraform plan -out=tfplan
terraform apply tfplan
```

5. Authenticate to the cluster
After the Terraform apply, assume the administrator role for the target account and update your local kubeconfig:
```shell
# Assume the admin role for the staging account and export the temporary
# credentials into the current shell. Note: eval is required here — piping
# into `source /dev/stdin` runs in a subshell, so the exports would be lost.
eval "$(aws sts assume-role \
    --role-arn "arn:aws:iam::$AWS_ACCOUNT_ID:role/portefaix-admin" \
    --role-session-name portefaix-session \
    --output json \
    | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"')"
```
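The `jq` filter that rewrites the STS response into `export` statements can be sanity-checked locally against a canned payload (sample values, no AWS call):

```shell
# Feed a fake Credentials document through the same jq filter
printf '%s' '{"Credentials":{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"secret","SessionToken":"token"}}' \
  | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"'
# prints one export line per credential field
```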
```shell
# Update kubeconfig
aws eks update-kubeconfig \
    --name portefaix-staging \
    --region $AWS_REGION
```

Verify the cluster nodes are ready:
```
kubectl get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-13-85.eu-west-1.compute.internal    Ready    <none>   10m   v1.31.0-eks-a737599
ip-10-0-29-115.eu-west-1.compute.internal   Ready    <none>   10m   v1.31.0-eks-a737599
ip-10-0-60-137.eu-west-1.compute.internal   Ready    <none>   10m   v1.31.0-eks-a737599
```

6. Configure EKS Pod Identity
EKS Pod Identity lets Pods assume IAM roles without storing credentials. The Terraform module installs the Pod Identity Agent add-on and creates the Pod Identity Associations. Verify the agent is running:
```shell
kubectl get daemonset -n kube-system eks-pod-identity-agent
```

List all Pod Identity Associations created by Terraform:
```shell
aws eks list-pod-identity-associations \
    --cluster-name portefaix-staging \
    --region $AWS_REGION
```

Each platform component (External Secrets Operator, External DNS, cert-manager, etc.) has a dedicated IAM role with least-privilege policies, bound to the corresponding Kubernetes service account through a Pod Identity Association.
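Each of these roles trusts the EKS Pod Identity service principal rather than an OIDC provider; the trust policy on every such role follows the documented Pod Identity pattern:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```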
7. Connect via Bastion (optional)
If your EKS cluster runs in a private VPC, use AWS Systems Manager Session Manager to reach EC2 instances and EKS nodes without exposing SSH ports. Install the Session Manager plugin for the AWS CLI, then start a session:
```shell
# List running instances
aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[*].Instances[*].[InstanceId,Tags[?Key=='Name'].Value|[0]]" \
    --output table

# Start a session (no SSH key required)
aws ssm start-session --target i-0123456789abcdef0
```

8. Deploy Portefaix stacks via ArgoCD
Install ArgoCD using the AWS-specific values file, then apply the app-of-apps bootstrap:
```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

helm install argocd argo/argo-cd \
    --namespace argocd --create-namespace \
    --values portefaix-kubernetes/gitops/argocd/values-aws.yaml \
    --wait
```
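To log in to the ArgoCD UI while the bootstrap syncs, the generated admin password can be read from the secret ArgoCD creates at install time:

```shell
kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 -d
```

The API server is not exposed by default; `kubectl port-forward svc/argocd-server -n argocd 8080:443` reaches it locally.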
```shell
kubectl apply -f portefaix-kubernetes/gitops/argocd/bootstrap/app-of-apps-aws-staging.yaml
```

Monitor the initial sync — all applications should reach Healthy / Synced:
```shell
argocd app list
argocd app wait portefaix-bootstrap --health --timeout 600
```

9. Automate with GitHub Actions
The Portefaix infrastructure repository ships GitHub Actions workflows for all Terraform operations. Workflows authenticate via OIDC federation — no AWS credentials stored as GitHub secrets.
Configure the OIDC provider in your AWS account (the Terraform root module does this):
```shell
# Verify the OIDC provider exists
aws iam list-open-id-connect-providers
```
Once configured, the GitHub Actions plan and apply workflows run automatically on pull requests and merges. See `.github/workflows/terraform-aws.yml` in portefaix-infrastructure for the workflow definition.
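Inside a workflow, a job exchanges the GitHub OIDC token for a temporary IAM role session; the relevant fragment looks like this (the role ARN is a placeholder):

```yaml
permissions:
  id-token: write
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/portefaix-github-actions
      aws-region: eu-west-1
```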
Stacks available on AWS
| Stack | Description | AWS service used |
|---|---|---|
| Observability | Prometheus, Grafana, Loki, Tempo | S3 for long-term storage (Thanos, Loki, Tempo) |
| Secret management | External Secrets Operator | Secrets Manager + SSM Parameter Store |
| DNS management | External DNS | Route 53 |
| TLS certificates | cert-manager | Route 53 for DNS-01 challenges |
| Load balancing | AWS Load Balancer Controller | ALB / NLB |
| Cluster autoscaling | Karpenter | EC2 (Spot + On-Demand) |
| Policy enforcement | Kyverno | — |
Tip: for cost optimisation, enable Karpenter with mixed instance types and Spot interruption handling. Set `karpenter_enabled = true` in the EKS Terraform module. Karpenter works alongside EKS Managed Node Groups — keep at least one node group for critical system workloads.
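With Karpenter enabled, mixed Spot/On-Demand capacity is expressed through a NodePool; a minimal sketch using the Karpenter v1 API (names and the referenced EC2NodeClass are illustrative):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
```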
Troubleshooting
Pod Identity not working
Verify the Pod Identity Agent DaemonSet is running on all nodes:
```shell
kubectl get daemonset -n kube-system eks-pod-identity-agent -o wide
```

Check that the Pod Identity Association exists for the service account:
```shell
aws eks describe-pod-identity-association \
    --cluster-name portefaix-staging \
    --association-id a-xxxxxxxxxxxxxxxxx \
    --region $AWS_REGION
```

kubectl access denied after role assumption
If you get `error: You must be logged in to the server (Unauthorized)`, your assumed role may not be in the cluster's EKS access entries. Add it:
```shell
aws eks create-access-entry \
    --cluster-name portefaix-staging \
    --principal-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/portefaix-admin \
    --type STANDARD \
    --region $AWS_REGION

aws eks associate-access-policy \
    --cluster-name portefaix-staging \
    --principal-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/portefaix-admin \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster \
    --region $AWS_REGION
```