Upgrade Notes of Kubernetes (from v1.29 to v1.30)
June 16, 2024
The post Release Notes of Site Upgrades holds the catalog of all upgrade notes.
This post focuses on the upgrade of Kubernetes from version 1.29 to 1.30.
Update Kubernetes Version for Amazon EKS cluster
1. Compare the Kubernetes version of your cluster control plane to the Kubernetes version of your nodes.
Get the Kubernetes version of the cluster's control plane.
% kubectl version
Client Version: v1.29.0-eks-5e0fdde
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.4-eks-036c24b
Get the Kubernetes version of the nodes. This command returns all self-managed and managed Amazon EC2 and Fargate nodes. Each Fargate pod is listed as its own node.
% k get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-8-131.us-west-2.compute.internal   Ready    <none>   29d   v1.29.3-eks-ae9a62a
ip-10-0-8-214.us-west-2.compute.internal   Ready    <none>   29d   v1.29.3-eks-ae9a62a
ip-10-0-9-116.us-west-2.compute.internal   Ready    <none>   29d   v1.29.3-eks-ae9a62a
ip-10-0-9-144.us-west-2.compute.internal   Ready    <none>   34h   v1.29.3-eks-ae9a62a
ip-10-0-9-241.us-west-2.compute.internal   Ready    <none>   29d   v1.29.3-eks-ae9a62a
Before updating the control plane to a new Kubernetes version, make sure that the Kubernetes minor version of the managed nodes in the K8s cluster is the same as the control plane's version. For example, if the K8s control plane is running version 1.29 and one of the K8s nodes is running version 1.28, then you must update the K8s nodes to version 1.29 before updating the K8s control plane to 1.30. It is also recommended that you update the K8s self-managed nodes to the same version as the K8s control plane before updating the control plane.
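A quick way to compare the two side by side (a sketch; it assumes your current kubectl context points at this cluster):
% kubectl version | grep 'Server Version'
% kubectl get nodes -o custom-columns='NODE:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion'
The server's minor version and every kubeletVersion should match before you proceed.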
2. Check whether the eks.privileged pod security policy still exists in the cluster. PodSecurityPolicy was removed in Kubernetes 1.25, so on a 1.29 cluster the resource type is no longer served:
% k get psp eks.privileged
error: the server doesn't have a resource type "psp"
3. You might need to remove a discontinued term from your CoreDNS manifest.
Check to see if your CoreDNS manifest has a line that only has the word upstream.
% kubectl get configmap coredns -n kube-system -o jsonpath='{$.data.Corefile}' | grep upstream
No output is returned.
If no output is returned, your manifest doesn't have the line; in that case, skip to the next step. If the word upstream is returned, remove the line.
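If the line is present, one way to remove it is to edit the ConfigMap in place (a sketch; consider saving a copy of the Corefile first):
% kubectl -n kube-system edit configmap coredns
In the editor, delete the line that contains only the word upstream, then save and quit.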
4. Update your Amazon EKS cluster with the following AWS CLI command. Replace the example values with your own.
If you're updating the K8s cluster to version 1.25 or later and have the AWS Load Balancer Controller deployed in the cluster, then update the controller to version 2.4.7 or later before updating the K8s cluster version to 1.25. For details, refer to Update AWS Load Balancer Controller (from 2.4.5 to v2.6.2).
% aws eks update-cluster-version --region <region-code> --name <EKS-cluster-name> --kubernetes-version 1.30
An example output is as follows.
{ "update": { "id": "287489cb-****-****-****-ed98412de143", "status": "InProgress", "type": "VersionUpdate", "params": [ { "type": "Version", "value": "1.30" }, { "type": "PlatformVersion", "value": "eks.3" } ], "createdAt": "2024-06-16T17:56:32.528000+08:00", "errors": [] } }
Monitor the status of your cluster update with the following command. Use the cluster name and update ID that the previous command returned. When a Successful status is displayed, the update is complete. The update takes several minutes to complete.
% aws eks describe-update --region <region-code> --name <EKS-cluster-name> --update-id 287489cb-****-****-****-ed98412de143
An example output is as follows.
{ "update": { "id": "2dcbec62-****-****-****-147c978e2c15", "status": "Successful", "type": "VersionUpdate", "params": [ { "type": "Version", "value": "1.29" }, { "type": "PlatformVersion", "value": "eks.1" } ], "createdAt": "2024-02-12T17:09:21.589000+08:00", "errors": [] } }
Here's a bash shell script that checks the status of an AWS EKS update every minute and exits when the status changes to "Successful". Replace your-cluster-name with the name of your EKS cluster and your-update-id with the update ID of the EKS update you are monitoring.
#!/bin/bash

# Variables
CLUSTER_NAME="your-cluster-name"   # Replace with your EKS cluster name
UPDATE_ID="your-update-id"         # Replace with your EKS update ID

while true; do
    # Get the update status
    STATUS=$(aws eks describe-update --name $CLUSTER_NAME --update-id $UPDATE_ID --query 'update.status' --output text)

    # Check if the status is "Successful"
    if [ "$STATUS" == "Successful" ]; then
        echo "Update is successful."
        exit 0
    fi

    # Wait for 1 minute before checking again
    sleep 60
done
Save this script as check-eks-upgrade-status.sh. Make it executable with chmod +x check-eks-upgrade-status.sh.
.% chmod +x check-eks-upgrade-status.sh
Run it in your terminal.
% ./check-eks-upgrade-status.sh
Update is successful.
5. After the K8s cluster update is complete, update the K8s nodes to the same Kubernetes minor version as your updated cluster. For more information, see Self-managed node updates and Updating a managed node group.
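For a managed node group that uses the default EKS-optimized AMI (no custom launch template), a minimal sketch of the update command looks like the following; the cluster and node group names are placeholders:
% aws eks update-nodegroup-version --cluster-name <EKS-cluster-name> --nodegroup-name <EKS-node-group-name> --kubernetes-version 1.30
Node groups that use a custom launch template are handled in the next section.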
Updating AMI for Self-managed Node Groups
Retrieve the AMI ID of the latest recommended Amazon EKS optimized Amazon Linux AMI from the AWS Systems Manager Parameter Store. Replace 1.30 with a supported version. Replace region-code with the AWS Region that your cluster is in. Replace amazon-linux-2 with amazon-linux-2-arm64 to see the AMI ID for the ARM CPU architecture.
% aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2/recommended/image_id --region region-code --query "Parameter.Value" --output text
% aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2-arm64/recommended/image_id --region us-west-2 --query "Parameter.Value" --output text
ami-029b69cad40e85814
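As an optional sanity check, the returned AMI ID can be resolved back to its image name (assuming the same us-west-2 region as above):
% aws ec2 describe-images --image-ids ami-029b69cad40e85814 --region us-west-2 --query 'Images[0].Name' --output text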
NOTE
There is an automated mechanism to get the latest AMI ID. For more information, refer to Update Worker Node AMI in Launch Template with Automation.
Update the launch templates for the EKS worker nodes. Then, update the launch template version for each node group using the "Force update" update strategy.
Update the launch template version using the following script:
EKS_cluster_name=<EKS-cluster-name>
EKS_node_group_name=<EKS-node-group-name>
launch_template_name=<launch-template-name>
launch_template_version=<launch-template-version>

aws eks update-nodegroup-version \
    --cluster-name ${EKS_cluster_name} \
    --nodegroup-name ${EKS_node_group_name} \
    --launch-template name=${launch_template_name},version=${launch_template_version} \
    --force
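To confirm that the node group finished rolling out, its status can be polled with the same variables (a sketch; aws eks wait blocks until the node group returns to ACTIVE):
% aws eks wait nodegroup-active --cluster-name ${EKS_cluster_name} --nodegroup-name ${EKS_node_group_name}
% aws eks describe-nodegroup --cluster-name ${EKS_cluster_name} --nodegroup-name ${EKS_node_group_name} --query 'nodegroup.status' --output text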
Update the Amazon VPC CNI plugin for Kubernetes, CoreDNS, and kube-proxy add-ons. It is recommended to update the add-ons to the minimum versions listed in Service account tokens.
- If you are using Amazon EKS add-ons, select Clusters in the left navigation pane of the Amazon EKS console, then select the name of the cluster that you updated. Notifications appear in the console. They inform you that a new version is available for each add-on that has an available update. To update an add-on, select the Add-ons tab. In one of the boxes for an add-on that has an update available, select Update now, select an available version, and then select Update.
- Alternatively, you can use the AWS CLI or eksctl to update add-ons. For more information, see Updating an add-on. A CLI example is sketched after this list.
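As a sketch of the AWS CLI path, the available versions of an add-on can be listed and one of them applied; vpc-cni is used as an example, and the add-on version is a placeholder taken from the first command's output:
% aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.30 --query 'addons[].addonVersions[].addonVersion' --output text
% aws eks update-addon --cluster-name <EKS-cluster-name> --addon-name vpc-cni --addon-version <addon-version> --resolve-conflicts PRESERVE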
Install or update kubectl
Note
You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.28 kubectl client works with Kubernetes 1.27, 1.28, and 1.29 clusters.
Determine whether you already have kubectl installed on your device.
% kubectl version --client
Client Version: v1.28.3-eks-e71965b
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Install or update kubectl on macOS
Download the binary for your cluster's Kubernetes version from Amazon S3 (the first command below is for Kubernetes 1.30, the second for Kubernetes 1.29).
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.30.0/2024-05-12/bin/darwin/amd64/kubectl
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.29.0/2024-01-04/bin/darwin/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 50.1M  100 50.1M    0     0  7015k      0  0:00:07  0:00:07 --:--:-- 8616k
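(Optional) The download can be verified against its SHA-256 checksum. This assumes the checksum file is published next to the binary with a .sha256 suffix, as is the convention for the EKS-vended kubectl builds:
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.30.0/2024-05-12/bin/darwin/amd64/kubectl.sha256
% shasum -a 256 kubectl
The printed hash should match the contents of the kubectl.sha256 file.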
Apply execute permissions to the binary.
% chmod +x ./kubectl
Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then it is recommended to create a $HOME/bin/kubectl and ensure that $HOME/bin comes first in your $PATH.
% cp ./kubectl $HOME/bin/kubectl
Add the $HOME/bin path to the shell initialization file so that it is configured when you open a shell.
For example, for zsh:
% vim ~/.zshrc
...
export PATH=$HOME/bin:$PATH
...
% source ~/.zshrc
After you install kubectl, you can verify its version.
% kubectl version
Client Version: v1.30.0-eks-036c24b
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1-eks-49c6de4
Check kubelet version:
% k get nodes -o yaml | grep -i 'kubeletVersion'
    kubeletVersion: v1.30.0-eks-036c24b
    kubeletVersion: v1.30.0-eks-036c24b
    kubeletVersion: v1.30.0-eks-036c24b
    kubeletVersion: v1.30.0-eks-036c24b
    kubeletVersion: v1.30.0-eks-036c24b
References
Installing or updating kubectl
Updating an Amazon EKS cluster Kubernetes version