Upgrade Notes for Karpenter (from v0.13.2 to v0.21.1)
December 30, 2022
This post demonstrates how to upgrade Karpenter using Helm, specifically from v0.13.2 to v0.21.1.
helm repo list
NAME        URL
karpenter   https://charts.karpenter.sh
helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "karpenter" chart repository
Update Complete. ⎈Happy Helming!⎈

Note that starting with v0.17.0 the Karpenter chart is published to an OCI registry, so the upgrade below pulls from oci://public.ecr.aws/karpenter/karpenter rather than from this legacy repository.
export CLUSTER_NAME=<EKS cluster name>
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export KARPENTER_IAM_ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output text)"
export KARPENTER_VERSION=v0.21.1
Log out of public.ecr.aws so that Helm performs an unauthenticated pull of the chart from the public ECR registry:

docker logout public.ecr.aws
Removing login credentials for public.ecr.aws
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version ${KARPENTER_VERSION} \
  --namespace karpenter --create-namespace \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN} \
  --set settings.aws.clusterName=${CLUSTER_NAME} \
  --set settings.aws.clusterEndpoint=${CLUSTER_ENDPOINT} \
  --set settings.aws.interruptionQueueName=${CLUSTER_NAME} \
  --wait
Release "karpenter" has been upgraded. Happy Helming! NAME: karpenter LAST DEPLOYED: Fri Dec 30 15:58:40 2022 NAMESPACE: karpenter STATUS: deployed REVISION: 5 TEST SUITE: None
Instances launched by Karpenter must run with an InstanceProfile that grants permissions necessary to run containers and configure networking. This command adds the Karpenter node role to your aws-auth configmap, allowing nodes to connect.
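The edit itself is done directly on the aws-auth ConfigMap in the kube-system namespace, for example:

kubectl edit configmap aws-auth -n kube-system

Add a mapRoles entry like the one below: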
...
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111122223333:role/<EC2 role name>
      username: system:node:{{EC2PrivateDNSName}}
...
kubectl replace -f https://raw.githubusercontent.com/aws/karpenter/v0.21.1/pkg/apis/crds/karpenter.sh_provisioners.yaml
customresourcedefinition.apiextensions.k8s.io/provisioners.karpenter.sh replaced
kubectl replace -f https://raw.githubusercontent.com/aws/karpenter/v0.21.1/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
customresourcedefinition.apiextensions.k8s.io/awsnodetemplates.karpenter.k8s.aws replaced
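Optionally verify that both CRDs are present after the replace:

kubectl get crd provisioners.karpenter.sh awsnodetemplates.karpenter.k8s.aws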
If you hit the error below when applying the Provisioner, it is most likely because the CRDs were not replaced. The fix is to run the two "kubectl replace -f" commands above. The reason: some parameters were added in later versions, but a Helm upgrade does not upgrade the CRDs that describe the Provisioner, so they must be replaced manually.
k apply -f provisioner-x86.yaml
error: error validating "provisioner-x86.yaml": error validating data: [ValidationError(Provisioner.spec): unknown field "consolidation" in sh.karpenter.v1alpha5.Provisioner.spec, ValidationError(Provisioner.spec.kubeletConfiguration): unknown field "maxPods" in sh.karpenter.v1alpha5.Provisioner.spec.kubeletConfiguration, ValidationError(Provisioner.spec.kubeletConfiguration): unknown field "systemReserved" in sh.karpenter.v1alpha5.Provisioner.spec.kubeletConfiguration, ValidationError(Provisioner.spec): unknown field "weight" in sh.karpenter.v1alpha5.Provisioner.spec]; if you choose to ignore these errors, turn validation off with --validate=false
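The provisioner-x86.yaml used above is not reproduced in this post. As a hypothetical sketch only, a v1alpha5 Provisioner exercising the fields the old CRD rejects (consolidation, weight, kubeletConfiguration.maxPods, kubeletConfiguration.systemReserved) could look like this; the name x86, the requirement values, and the providerRef name are illustrative assumptions:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: x86                        # hypothetical name
spec:
  weight: 10                       # priority when multiple provisioners match
  consolidation:
    enabled: true                  # one of the fields the old CRD does not know
  kubeletConfiguration:
    maxPods: 110
    systemReserved:
      cpu: 100m
      memory: 100Mi
  requirements:
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
  providerRef:
    name: default                  # assumes an AWSNodeTemplate named "default"

After replacing the CRDs, kubectl explain provisioner.spec.consolidation should resolve instead of returning an unknown-field error.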
Check the deployment:
k logs -f deployment/karpenter -c controller -n karpenter
Found 2 pods, using pod/karpenter-6766c7c5cc-****
2022-12-30T07:59:08.206Z DEBUG Successfully created the logger.
2022-12-30T07:59:08.206Z DEBUG Logging level set to: debug
{"level":"info","ts":1672387148.210557,"logger":"fallback","caller":"injection/injection.go:63","msg":"Starting informers..."}
...
Enable debug logging to make troubleshooting easier (this step is optional):
k patch configmap config-logging -n karpenter --patch '{"data":{"loglevel.controller":"debug"}}'
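If the extra verbosity becomes noisy, the same patch can set the level back, e.g. to info:

k patch configmap config-logging -n karpenter --patch '{"data":{"loglevel.controller":"info"}}'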
References
https://karpenter.sh/v0.21.1/getting-started/getting-started-with-eksctl/