Upgrade Notes of Kubernetes (from v1.27 to v1.28)

December 15, 2023


The post Release Notes of Site Upgrades holds the catalog of all the upgrade notes.

This post focuses on upgrading Kubernetes (K8s) from version 1.27 to 1.28.

Compare the Kubernetes version of your cluster control plane to the Kubernetes version of your nodes.

Get the Kubernetes version of your cluster control plane
% kubectl version

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.8-eks-8cb36c9", GitCommit:"fca3a8722c88c4dba573a903712a6feaf3c40a51", GitTreeState:"clean", BuildDate:"2023-11-22T21:52:13Z", GoVersion:"go1.20.11", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.25) and server (1.27) exceeds the supported minor version skew of +/-1

The client (v1.25) is outside the supported ±1 minor version skew from the server (v1.27), and the cluster is moving to 1.28, so update kubectl first.

To install or update kubectl on macOS
Download the binary for your cluster's Kubernetes version from Amazon S3.
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.3/2023-11-14/bin/darwin/amd64/kubectl
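
(Optional) Verify the binary against the SHA-256 checksum that Amazon publishes alongside it, then compare the two values:
% curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.3/2023-11-14/bin/darwin/amd64/kubectl.sha256
% openssl sha1 -sha256 kubectl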

Apply execute permissions to the binary.
% chmod +x ./kubectl

Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then we recommend creating a $HOME/bin/kubectl and ensuring that $HOME/bin comes first in your $PATH.
% cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

Add the $HOME/bin path to your shell initialization file so that it is configured when you open a shell.
For example, for zsh:
% vim ~/.zshrc
...
export PATH=$HOME/bin:$PATH
...
% source ~/.zshrc

% kubectl version
Client Version: v1.28.3-eks-e71965b
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.8-eks-8cb36c9

Get the Kubernetes version of your nodes. This command returns all self-managed and managed Amazon EC2 and Fargate nodes. Each Fargate Pod is listed as its own node.

% kubectl get nodes
NAME                                       STATUS   ROLES    AGE    VERSION
ip-10-0-x-yyy.us-west-2.compute.internal   Ready    <none>   156d   v1.27.1-eks-2f008fe
ip-10-0-x-yy.us-west-2.compute.internal    Ready    <none>   156d   v1.27.1-eks-2f008fe
ip-10-0-x-yyy.us-west-2.compute.internal   Ready    <none>   156d   v1.27.1-eks-2f008fe
ip-10-0-x-yyy.us-west-2.compute.internal   Ready    <none>   156d   v1.27.1-eks-2f008fe
ip-10-0-x-yy.us-west-2.compute.internal    Ready    <none>   156d   v1.27.1-eks-2f008fe
Before updating your control plane to a new Kubernetes version, make sure that the Kubernetes minor version of both the managed nodes and Fargate nodes in your cluster are the same as your control plane's version. For example, if your control plane is running version 1.27 and one of your nodes is running version 1.26, then you must update your nodes to version 1.27 before updating your control plane to 1.28.
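
A convenient way to check is to list only the kubelet versions and count the distinct values (any node on an older minor version must be updated first):
% kubectl get nodes -o jsonpath='{range .items[*]}{.status.nodeInfo.kubeletVersion}{"\n"}{end}' | sort | uniq -c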

If you're updating your cluster to version 1.25 or later and have the AWS Load Balancer Controller deployed in your cluster, then update the controller to version 2.4.7 or later before updating your cluster version to 1.25.
For details, refer to Update AWS Load Balancer Controller (from 2.4.5 to v2.6.2).
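
To check which controller version is currently running, you can inspect its image tag (assuming the default deployment name and namespace used by the Helm chart):
% kubectl get deployment aws-load-balancer-controller -n kube-system \
-o jsonpath='{.spec.template.spec.containers[0].image}'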


Update the EKS cluster
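
The control plane itself can be updated from the EKS console, or with the AWS CLI along these lines (the cluster name blog is taken from the bootstrap script below; the update runs asynchronously, so poll it with describe-update):
% aws eks update-cluster-version --region us-west-2 --name blog --kubernetes-version 1.28
% aws eks describe-update --region us-west-2 --name blog --update-id <id-from-the-previous-output>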

After your cluster update is complete, update your nodes to the same Kubernetes minor version as your updated cluster. For more information, see Self-managed node updates and Updating a managed node group.


Updating an existing self-managed node group

Replace 1.28 with a supported version, and replace region-code with the AWS Region that your cluster is in. Replace amazon-linux-2 with amazon-linux-2-arm64 to get the ID of the Arm-based AMI instead.
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.28/amazon-linux-2/recommended/image_id --region region-code --query "Parameter.Value" --output text

% aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.28/amazon-linux-2-arm64/recommended/image_id --region us-west-2 --query "Parameter.Value" --output text
ami-02a830e38e10ecb22

% aws ssm get-parameter \
--name /aws/service/eks/optimized-ami/1.28/amazon-linux-2/recommended/image_id \
--region us-west-2 \
--query "Parameter.Value" \
--output text
ami-0568c24ececf803e7

Update launch template for ARM worker nodes:
Description: Template for K8s worker node, EKS 1.28, t4g.medium, OD, ARM, EBS 8 GB, ami-02a830e38e10ecb22
AMI: ami-02a830e38e10ecb22
EBS volume type: gp3
EBS size: 8 GiB
userdata:
#!/bin/bash
set -ex
B64_CLUSTER_CA=LS0t***LQo=
API_SERVER_URL=https://BFDB***D49F.gr7.us-west-2.eks.amazonaws.com
K8S_CLUSTER_DNS_IP=172.20.0.10
/etc/eks/bootstrap.sh blog \
--kubelet-extra-args '--node-labels=eks.amazonaws.com/nodegroup-image=ami-02a830e38e10ecb22,eks.amazonaws.com/capacityType=ON_DEMAND --max-pods=17' \
--b64-cluster-ca $B64_CLUSTER_CA \
--apiserver-endpoint $API_SERVER_URL \
--dns-cluster-ip $K8S_CLUSTER_DNS_IP \
--use-max-pods false

# Create the mount point and mount the EFS file system that holds the blog data.
mkdir -p /blogdata
yum install -y amazon-efs-utils
mount -t efs fs-fd376857:/ /blogdata
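
The template edit above can also be scripted; a sketch with the AWS CLI, where eks-arm-worker is a placeholder for the actual launch template name:
% aws ec2 create-launch-template-version \
--launch-template-name eks-arm-worker \
--source-version '$Latest' \
--launch-template-data '{"ImageId":"ami-02a830e38e10ecb22"}'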

Update launch template for X86 worker nodes:
Description: Template for K8s worker node, EKS 1.28, t3.medium, OD, X86, EBS 8 GB, ami-0568c24ececf803e7
AMI: ami-0568c24ececf803e7
EBS volume type: gp3
EBS size: 8 GiB
userdata:
#!/bin/bash
set -ex
B64_CLUSTER_CA=LS0t***LQo=
API_SERVER_URL=https://BFDB***D49F.gr7.us-west-2.eks.amazonaws.com
K8S_CLUSTER_DNS_IP=172.20.0.10
/etc/eks/bootstrap.sh blog \
--kubelet-extra-args '--node-labels=eks.amazonaws.com/nodegroup-image=ami-0568c24ececf803e7,eks.amazonaws.com/capacityType=ON_DEMAND --max-pods=17' \
--b64-cluster-ca $B64_CLUSTER_CA \
--apiserver-endpoint $API_SERVER_URL \
--dns-cluster-ip $K8S_CLUSTER_DNS_IP \
--use-max-pods false

# Create the mount point and mount the EFS file system that holds the blog data.
mkdir -p /blogdata
yum install -y amazon-efs-utils
mount -t efs fs-fd***57:/ /blogdata
PS:
- With a 6 GB EBS volume, the nodes hit disk-pressure errors, hence the 8 GiB size above.

Update the node group version for the ARM worker nodes, using the "Force update" update strategy.
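
In the console this is the "Update node group version" action; the AWS CLI equivalent is roughly the following (the node group name is a placeholder):
% aws eks update-nodegroup-version \
--cluster-name blog \
--nodegroup-name <arm-nodegroup-name> \
--region us-west-2 \
--force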

% kubectl get pod -n istio-system | grep 'ContainerStatusUnknown\|Completed\|Evicted\|Error' | awk '{print $1}' | xargs kubectl delete pod -n istio-system
% kubectl get pod -n knative-serving | grep 'ContainerStatusUnknown\|Completed\|Evicted\|Error' | awk '{print $1}' | xargs kubectl delete pod -n knative-serving
% kubectl get pod -n default | grep 'ContainerStatusUnknown\|Completed\|Evicted\|Error' | awk '{print $1}' | xargs kubectl delete pod -n default
% kubectl get pod -n blog | grep 'ContainerStatusUnknown\|Completed\|Evicted\|Error' | awk '{print $1}' | xargs kubectl delete pod -n blog
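
The forced replacement tends to leave pods behind in Completed, Evicted, or Error states, which is what the commands above clean up. The same cleanup can be written as one loop over the affected namespaces (an equivalent sketch):
% for ns in istio-system knative-serving default blog; do
  kubectl get pod -n "$ns" --no-headers | grep 'ContainerStatusUnknown\|Completed\|Evicted\|Error' | awk '{print $1}' | xargs kubectl delete pod -n "$ns"
done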

Update the node group version for the x86 worker nodes, using the "Force update" update strategy.

Update the Amazon VPC CNI plugin for Kubernetes, CoreDNS, and kube-proxy add-ons. We recommend updating the add-ons to the minimum versions listed in Service account tokens.
  • If you are using Amazon EKS add-ons, select Clusters in the Amazon EKS console, then select the name of the cluster that you updated in the left navigation pane. Notifications appear in the console. They inform you that a new version is available for each add-on that has an available update. To update an add-on, select the Add-ons tab. In one of the boxes for an add-on that has an update available, select Update now, select an available version, and then select Update.
  • Alternatively, you can use the AWS CLI or eksctl to update add-ons. For more information, see Updating an add-on.

When updating Amazon EKS managed add-ons this time, I received the following error:
Conflicts found when trying to apply. Will not continue due to resolve conflicts mode. Conflicts: ConfigMap coredns - .data.Corefile Deployment.apps coredns - .spec.template.spec.containers Deployment.apps coredns - .spec.template.spec.containers[name="coredns"].securityContext.capabilities.drop Deployment.apps coredns - .spec.template.spec.containers[name="coredns"].securityContext.capabilities.drop Deployment.apps coredns - .spec.template.spec.containers[name="coredns"].securityContext.capabilities.drop

Amazon EKS advanced configuration for managed add-ons lets you override existing add-on configurations. To override custom configurations, use the --resolve-conflicts OVERWRITE parameter.
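
With the AWS CLI, list the add-on versions available for 1.28 and then update with OVERWRITE (the coredns version below is only an example):
% aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.28 \
--query 'addons[].addonVersions[].addonVersion' --output text
% aws eks update-addon --cluster-name blog --addon-name coredns \
--addon-version v1.10.1-eksbuild.6 \
--region us-west-2 \
--resolve-conflicts OVERWRITE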


References

Configuring the AWS Security Token Service endpoint for a service account

How do I prevent configuration conflicts when I create or update my Amazon EKS managed add-ons?


Category: AWS Tags: public
