Chainguard VMs FAQ
Frequently asked questions about Chainguard VMs, including availability, supported ecosystems, compliance, and more
Chainguard VMs provide a secure, minimal foundation for running workloads in cloud environments. Integrating them with Karpenter on AWS EKS allows for efficient, on-demand node provisioning using custom Chainguard AMIs. This guide covers the setup and configuration based on Karpenter v1.x, which uses EC2NodeClass for node management.
Karpenter v1.x introduces EC2NodeClass to replace the deprecated Provisioners from earlier versions (e.g., v0.31 or older). This enables more flexible node configuration, including custom AMI selection and block device mappings.
Security Note: This guide follows least privilege principles for IAM permissions. Always audit and minimize permissions after deployment, removing any unused access to maintain a secure posture.
This guide assumes the environment variables CLUSTER_NAME, AWS_DEFAULT_REGION, KARPENTER_NAMESPACE, and CUSTOM_AMI_ID are set; they are defined in the installation section below. Before configuring Karpenter, verify your Chainguard AMI details to ensure correct configuration:
# Verify AMI details (replace with your actual AMI ID and region)
aws ec2 describe-images --image-ids ${CUSTOM_AMI_ID} --region ${AWS_DEFAULT_REGION} --query 'Images[*].{Name:Name,RootDeviceName:RootDeviceName,BlockDeviceMappings:BlockDeviceMappings,Architecture:Architecture,Description:Description}'

Key details to verify:

- RootDeviceName: /dev/sda1 for Chainguard AMIs

Example output:
{
"Name": "chainguard-eks-1.32-dev-x86_64-20251024-0318",
"RootDeviceName": "/dev/sda1",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"VolumeSize": 8,
"VolumeType": "gp3",
"Encrypted": false
}
}
],
"Architecture": "x86_64",
"Description": "chainguard-eks-1.32-dev-x86_64-20251024-0318"
}

Follow the official Karpenter installation guide for the complete setup process. This section covers only the Chainguard-specific modifications needed.
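Because the root device name feeds directly into the EC2NodeClass blockDeviceMappings configured later, it can be convenient to pull just that field. A short sketch, assuming CUSTOM_AMI_ID and AWS_DEFAULT_REGION are already exported:

```shell
# Print only the AMI's root device name (e.g., /dev/sda1)
aws ec2 describe-images \
  --image-ids "${CUSTOM_AMI_ID}" \
  --region "${AWS_DEFAULT_REGION}" \
  --query 'Images[0].RootDeviceName' \
  --output text
```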
Set these variables for your Chainguard integration:
export CLUSTER_NAME="your-cluster-name"
export AWS_DEFAULT_REGION="us-west-2"
export KARPENTER_VERSION="1.8.1" # Use latest stable version
export KARPENTER_NAMESPACE="karpenter"
export CUSTOM_AMI_ID="ami-01f687414663bd721" # Your Chainguard AMI ID
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)" # Used in the IAM policy below

Important: The standard Karpenter CloudFormation stack may not include all permissions needed for custom AMI usage. Depending on your setup, additional permissions might be required for the controller role.
Add these permissions to your Karpenter controller role if you encounter access denied errors:
# Add basic permissions (if not already included in CloudFormation stack)
aws iam put-role-policy \
  --role-name "${CLUSTER_NAME}-karpenter" \
  --policy-name "KarpenterEKS" \
  --policy-document "$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["eks:DescribeCluster"],
      "Resource": "arn:aws:eks:${AWS_DEFAULT_REGION}:${AWS_ACCOUNT_ID}:cluster/${CLUSTER_NAME}"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstanceTypes"],
      "Resource": "*"
    }
  ]
}
EOF
)"
# Add comprehensive permissions for custom AMI operations
aws iam put-role-policy \
--role-name "${CLUSTER_NAME}-karpenter" \
--policy-name "KarpenterCompletePermissions" \
--policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstanceTypes",
"ec2:DescribeInstanceTypeOfferings",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets"
],
"Resource": "*"
}
]
}'

Note: Monitor CloudTrail logs for "Access Denied" errors and add only the minimum permissions needed for your specific environment.
Security: Follow least privilege by auditing these permissions after deployment and removing any unused access.
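To support that audit, the inline policies attached to the controller role can be listed and inspected. The policy name below matches the one added earlier in this guide:

```shell
# List all inline policies on the Karpenter controller role
aws iam list-role-policies --role-name "${CLUSTER_NAME}-karpenter"

# Inspect one policy document in detail
aws iam get-role-policy \
  --role-name "${CLUSTER_NAME}-karpenter" \
  --policy-name "KarpenterCompletePermissions"
```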
The initial managed nodegroup created during cluster setup can coexist with Karpenter-managed nodes. For production clusters, consider keeping a small managed nodegroup for stability and core components. Only remove it if you want Karpenter to handle all node provisioning.
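Since Karpenter v1.x replaces Provisioners with NodePool and EC2NodeClass, it is worth confirming the v1 CRDs are installed before applying the resources below. A quick check, assuming kubectl is pointed at your cluster:

```shell
# Both CRDs must exist for the manifests in this guide to apply
kubectl get crd nodepools.karpenter.sh ec2nodeclasses.karpenter.k8s.aws
```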
To integrate Chainguard VMs, configure a NodePool and an EC2NodeClass in Karpenter. The key components are amiSelectorTerms (to select your Chainguard AMI), blockDeviceMappings (for the root volume), and an amiFamily setting such as AL2023 or Custom. Here's the example configuration:
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r", "t"]
        - key: karpenter.k8s.aws/instance-size
          operator: NotIn
          values: ["metal"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: chainguard-eks
      expireAfter: 720h # 30 days
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: chainguard-eks
spec:
  amiFamily: AL2023
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  amiSelectorTerms:
    - id: "${CUSTOM_AMI_ID}"
  blockDeviceMappings:
    - deviceName: /dev/sda1 # Chainguard AMI root device name (verify with: aws ec2 describe-images --image-ids <AMI_ID>)
      ebs:
        volumeSize: 100Gi
        volumeType: gp3
        encrypted: true
        deleteOnTermination: true
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"

Apply the configuration using:
cat <<EOF | envsubst | kubectl apply -f -
# Paste the YAML above
EOF

Key points about this configuration:

- Use amiSelectorTerms to specify your Chainguard AMI. This is required for custom AMIs and ensures Karpenter provisions nodes with the correct image.
- This example uses amiFamily: AL2023; the alternative is amiFamily: Custom, where you must manage bootstrapping manually.
- blockDeviceMappings in EC2NodeClass allows customization of the root volume. For Chainguard AMIs, the deviceName is /dev/sda1 (the AMI's root device name). You can specify the size (e.g., 100Gi) and volume type (gp3). Always verify the AMI's root device name before configuration using: aws ec2 describe-images --image-ids <AMI_ID>.
- Karpenter v1.x no longer supports Provisioners, which are deprecated. Ensure your cluster is updated for EC2NodeClass support.
- The expireAfter: 720h setting in the NodePool ensures nodes are replaced after 30 days, aligning with Chainguard VM best practices for node replacement over in-place upgrades.

After applying the configuration, verify that nodes are provisioned correctly and the file system aligns with your settings.
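A quick way to confirm the resources were accepted; the resource names here match the example manifests above:

```shell
# Each should report Ready once Karpenter reconciles it
kubectl get nodepool default
kubectl get ec2nodeclass chainguard-eks
```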
Check node details:
kubectl get nodes

To confirm the root volume size, query the node's stats summary (replace <node-name> with a name from kubectl get nodes):

kubectl get --raw /api/v1/nodes/<node-name>/proxy/stats/summary | jq '{node_fs: .node.fs, runtime_fs: .runtime.fs, rootfs: .node.rootfs}'

Expected output for a 100Gi volume:
{
"node_fs": {
"time": "2025-10-22T22:58:07Z",
"availableBytes": 102086111232,
"capacityBytes": 106233311232,
"usedBytes": 4147200000,
"inodesFree": 51874115,
"inodes": 51904496,
"inodesUsed": 30381
},
"runtime_fs": null,
"rootfs": null
}

The capacityBytes should reflect the 100Gi volume: 106,233,311,232 bytes ≈ 98.9 GiB, slightly under 100 GiB because filesystem overhead consumes part of the raw volume.
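The byte-to-GiB conversion can be checked directly in the shell:

```shell
# Convert the reported capacityBytes to whole GiB (1 GiB = 1024^3 bytes)
capacity_bytes=106233311232
gib=$((capacity_bytes / 1024 / 1024 / 1024))
echo "${gib} GiB"  # prints "98 GiB" -- the 100 GiB volume minus filesystem overhead
```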
Device Name Mismatch: Always verify the Chainguard AMI’s root device name before configuration. Chainguard AMIs typically use /dev/sda1 as the root device. Verify using:
# Check AMI details before configuration
aws ec2 describe-images --image-ids <AMI_ID> --region <region> --query 'Images[*].{RootDeviceName:RootDeviceName,BlockDeviceMappings:BlockDeviceMappings,ImageId:ImageId}'
# Alternative: Check from running instance
aws ec2 describe-instances --instance-ids <instance-id> --region <region> --query 'Reservations[*].Instances[*].{RootDeviceName:RootDeviceName,BDM:BlockDeviceMappings,ImageId:ImageId}'

NodeClaim Cleanup: If nodes fail to provision, clean up stuck NodeClaims:
kubectl get nodeclaims.karpenter.sh -A -o name | xargs -I{} kubectl delete {}

IAM Permissions: Verify the Karpenter controller role has sufficient permissions for custom AMI operations; the standard CloudFormation stack may not include everything needed. Additional permissions might be required for ec2:DescribeImages, ec2:DescribeInstances, and other read-only operations. Check CloudTrail logs for "Access Denied" errors and add only the minimum required permissions following least privilege principles.
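Before deleting anything, it can help to see why a NodeClaim is stuck; its status conditions and events usually name the failing step (for example, an IAM or AMI error):

```shell
# List NodeClaims and their current status
kubectl get nodeclaims.karpenter.sh

# Show conditions and events for one NodeClaim (substitute the real name)
kubectl describe nodeclaim <nodeclaim-name>
```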
Check if Chainguard AMI is being used:
# Get node and instance mapping
kubectl get nodes -o wide
# Get instance ID and check AMI
aws ec2 describe-instances --instance-ids <instance-id> --region <region> --query 'Reservations[*].Instances[*].ImageId' --output text

Check root volume configuration:
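As a shortcut, each node's EC2 instance ID can be read from its providerID rather than cross-referencing the kubectl get nodes -o wide output (substitute your node name):

```shell
# providerID has the form aws:///<az>/<instance-id>; the last path segment is the instance ID
kubectl get node <node-name> -o jsonpath='{.spec.providerID}'
```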
kubectl get --raw /api/v1/nodes/<node-name>/proxy/stats/summary | jq '{node_fs: .node.fs, runtime_fs: .runtime.fs, rootfs: .node.rootfs}'

For more details on Karpenter, refer to the official documentation.
Last updated: 2025-10-23 18:55