Upgrade Notes of Kubernetes (from v1.28 to v1.29)

February 11, 2024


The post Release Notes of Site Upgrades holds the catalog of all the upgrade notes.

This post focuses on the upgrade of Kubernetes, from version 1.28 to 1.29.


Update Kubernetes Version for Amazon EKS cluster

Compare the Kubernetes version of your cluster control plane to the Kubernetes version of your nodes.

Get the Kubernetes version of the cluster's control plane.
kubectl version
Client Version: v1.28.3-eks-e71965b
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.5-eks-5e0fdde

Get the Kubernetes version of the nodes. This command returns all self-managed and managed Amazon EC2 and Fargate nodes. Each Fargate pod is listed as its own node.
kubectl get nodes
NAME                                       STATUS   ROLES    AGE     VERSION
ip-10-0-*-**.us-west-2.compute.internal    Ready    <none>   23d     v1.28.5-eks-5e0fdde
ip-10-0-*-**.us-west-2.compute.internal    Ready    <none>   23d     v1.28.5-eks-5e0fdde
ip-10-0-*-***.us-west-2.compute.internal   Ready    <none>   11m     v1.28.5-eks-5e0fdde
ip-10-0-*-***.us-west-2.compute.internal   Ready    <none>   23d     v1.28.5-eks-5e0fdde
ip-10-0-*-**.us-west-2.compute.internal    Ready    <none>   4d18h   v1.28.5-eks-5e0fdde
ip-10-0-*-**.us-west-2.compute.internal    Ready    <none>   4d18h   v1.28.5-eks-5e0fdde

Before updating the control plane to a new Kubernetes version, make sure that the Kubernetes minor version of both the managed nodes and the Fargate nodes in your cluster is the same as your control plane's version.
For example, if your control plane is running version 1.28 and one of your nodes is running version 1.27, then you must update your nodes to version 1.28 before updating your control plane to 1.29.
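A quick way to spot a skewed node is to list the distinct kubelet versions across all nodes; a sketch using a plain jsonpath query:
kubectl get nodes -o jsonpath='{range .items[*]}{.status.nodeInfo.kubeletVersion}{"\n"}{end}' | sort -u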

Check that the cluster no longer uses the removed PodSecurityPolicy API; the eks.privileged pod security policy should already be gone (pod security policies were removed in Kubernetes 1.25).
% kubectl get psp eks.privileged
error: the server doesn't have a resource type "psp"

Check that the CoreDNS Corefile does not contain the removed upstream keyword; no output should be returned.
% kubectl get configmap coredns -n kube-system -o jsonpath='{$.data.Corefile}' | grep upstream
No output is returned.

If you're updating your cluster to version 1.25 or later and have the AWS Load Balancer Controller deployed in your cluster, then update the controller to version 2.4.7 or later before updating your cluster version to 1.25.
For details, refer to Update AWS Load Balancer Controller (from 2.4.5 to v2.6.2).
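One way to check which controller version is currently deployed is to read the controller image tag; this sketch assumes the default deployment name and namespace used by the Helm chart:
% kubectl get deployment aws-load-balancer-controller -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'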

Update the EKS cluster
Update your Amazon EKS cluster with the following AWS CLI command. Replace the example values with your own.

% aws eks update-cluster-version --region <region-code> --name <EKS-cluster-name> --kubernetes-version 1.29
An example output is as follows.
{
    "update": {
        "id": "2dcbec62-****-****-****-147c978e2c15",
        "status": "InProgress",
        "type": "VersionUpdate",
        "params": [
            {
                "type": "Version",
                "value": "1.29"
            },
            {
                "type": "PlatformVersion",
                "value": "eks.1"
            }
        ],
        "createdAt": "2024-02-12T17:09:21.589000+08:00",
        "errors": []
    }
}

Monitor the status of your cluster update with the following command. Use the cluster name and update ID that the previous command returned. When a Successful status is displayed, the update is complete. The update takes several minutes to complete.
% aws eks describe-update --region us-west-2 --name <EKS-cluster-name> --update-id 2dcbec62-****-****-****-147c978e2c15
An example output is as follows.
{
    "update": {
        "id": "2dcbec62-****-****-****-147c978e2c15",
        "status": "Successful",
        "type": "VersionUpdate",
        "params": [
            {
                "type": "Version",
                "value": "1.29"
            },
            {
                "type": "PlatformVersion",
                "value": "eks.1"
            }
        ],
        "createdAt": "2024-02-12T17:09:21.589000+08:00",
        "errors": []
    }
}
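Instead of rerunning describe-update by hand, you can poll until the update leaves the InProgress state. A minimal sketch; the region, cluster name, and update ID are placeholders:
# Poll the update status every 30 seconds until it reaches a terminal state.
while true; do
  status=$(aws eks describe-update --region <region-code> --name <EKS-cluster-name> \
    --update-id <update-id> --query 'update.status' --output text)
  echo "$(date) ${status}"
  [ "${status}" != "InProgress" ] && break
  sleep 30
done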


After your cluster update is complete, update your nodes to the same Kubernetes minor version as your updated cluster. For more information, see Self-managed node updates and Updating a managed node group.

Update the AMI for Worker Node Groups

Retrieve the ID of the latest Amazon EKS optimized Amazon Linux AMI from the SSM Parameter Store. Replace 1.29 with a supported version. Replace region-code with the AWS Region that your cluster is in. Replace amazon-linux-2 with amazon-linux-2-arm64 to get the Arm64 AMI ID.
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.29/amazon-linux-2/recommended/image_id --region region-code --query "Parameter.Value" --output text

aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.29/amazon-linux-2-arm64/recommended/image_id --region us-west-2 --query "Parameter.Value" --output text
ami-029b69cad40e85814

NOTE
There is an automated mechanism to get the latest AMI ID. For more information, refer to Update Worker Node AMI in Launch Template with Automation.

Update the launch templates of the EKS worker nodes with the new AMI ID. Then, update the launch template version used by each node group, using the "Force update" update strategy.
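A sketch of creating a new launch template version that only swaps in the new AMI ID; the launch template name and source version are placeholders, and the new version should be reviewed before it is rolled out:
# Look up the recommended AMI ID, then base a new launch template version on an existing one.
ami_id=$(aws ssm get-parameter \
    --name /aws/service/eks/optimized-ami/1.29/amazon-linux-2/recommended/image_id \
    --region <region-code> --query "Parameter.Value" --output text)
aws ec2 create-launch-template-version \
    --launch-template-name <launch-template-name> \
    --source-version <source-version> \
    --launch-template-data "{\"ImageId\":\"${ami_id}\"}"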

Update each node group to the new launch template version using the following script. The --force flag replaces nodes even if their pods cannot be drained because of a pod disruption budget issue:
EKS_cluster_name=<EKS-cluster-name>
EKS_node_group_name=<EKS-node-group-name>
launch_template_name=<launch-template-name>
launch_template_version=<launch-template-version>
aws eks update-nodegroup-version \
    --cluster-name ${EKS_cluster_name} \
    --nodegroup-name ${EKS_node_group_name} \
    --launch-template name=${launch_template_name},version=${launch_template_version} \
    --force
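The node group update rolls nodes gradually and can take a while. One way to wait for it to finish from the same shell, reusing the variables above, is the built-in waiter:
# Block until the node group returns to the ACTIVE state.
aws eks wait nodegroup-active \
    --cluster-name ${EKS_cluster_name} \
    --nodegroup-name ${EKS_node_group_name}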

Update the Amazon VPC CNI plugin for Kubernetes, CoreDNS, and kube-proxy add-ons. It is recommended to update the add-ons to at least the minimum versions listed in Service account tokens.
  • If you are using Amazon EKS add-ons, select Clusters in the left navigation pane of the Amazon EKS console, then select the name of the cluster that you updated. Notifications appear in the console informing you that a new version is available for each add-on that has an update. To update an add-on, select the Add-ons tab, select Update now in the box of the add-on that has an update available, select an available version, and then select Update.
  • Alternatively, you can use the AWS CLI or eksctl to update add-ons, as sketched below. For more information, see Updating an add-on.
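A sketch of the CLI route, using kube-proxy as an example; the cluster name, target version, and the PRESERVE conflict-resolution choice are placeholders to adapt:
# List the add-on versions published for Kubernetes 1.29, then update the add-on.
aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.29 \
    --query 'addons[].addonVersions[].addonVersion' --output text
aws eks update-addon --cluster-name <EKS-cluster-name> --addon-name kube-proxy \
    --addon-version <addon-version> --resolve-conflicts PRESERVE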


Install or update kubectl


Note
You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.28 kubectl client works with Kubernetes 1.27, 1.28, and 1.29 clusters.

Determine whether you already have kubectl installed on your device.

% kubectl version --client
Client Version: v1.28.3-eks-e71965b
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

Install or update kubectl on macOS
Download the binary for your cluster's Kubernetes version from Amazon S3 (for Kubernetes 1.29).
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.29.0/2024-01-04/bin/darwin/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 53.2M  100 53.2M    0     0  3823k      0  0:00:14  0:00:14 --:--:-- 2800k
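Optionally, verify the download against the SHA-256 checksum that is published alongside the binary at the same path; compare the two values manually:
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.29.0/2024-01-04/bin/darwin/amd64/kubectl.sha256
% shasum -a 256 kubectl
% cat kubectl.sha256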

Apply execute permissions to the binary.
chmod +x ./kubectl

Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then it is recommended to copy the new binary to $HOME/bin/kubectl and to ensure that $HOME/bin comes first in your $PATH.
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl

Add the $HOME/bin path to the shell initialization file so that it is configured when you open a shell.
For example, for zsh:
% vim ~/.zshrc
...
export PATH=$HOME/bin:$PATH
...
% source ~/.zshrc
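Confirm that the shell now resolves kubectl from $HOME/bin rather than an older copy:
% which kubectl
The output should be a path under $HOME/bin.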

After you install kubectl, you can verify its version.
kubectl version
Client Version: v1.29.0-eks-5e0fdde
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.5-eks-5e0fdde


References


Installing or updating kubectl


Category: AWS Tags: public
