Install kubeadm and kubelet on the controlplane and node01 nodes.
Use the exact version of 1.31.0-1.1 for both
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sysctl net.ipv4.ip_forward
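If the setting took effect, the verification command should report the forwarding flag as enabled (expected output, assuming the sysctl was applied):
net.ipv4.ip_forward = 1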
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
Normally this would be sudo apt-get install -y kubelet kubeadm kubectl, but since the task requires a specific version:
sudo apt-get install -y kubelet=1.31.0-1.1 kubeadm=1.31.0-1.1 kubectl=1.31.0-1.1
sudo apt-mark hold kubelet kubeadm kubectl
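To confirm that the pinned version is actually available in the repository before installing, the candidate versions can be listed first (an optional check, not part of the task):
apt-cache madison kubeadm
apt-cache madison kubelet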
controlplane ~ ➜ kubectl get node
E0108 20:55:55.075167 12581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server could not find the requested resource"
E0108 20:55:55.078152 12581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server could not find the requested resource"
E0108 20:55:55.080570 12581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server could not find the requested resource"
E0108 20:55:55.083258 12581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server could not find the requested resource"
E0108 20:55:55.086365 12581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server could not find the requested resource"
Error from server (NotFound): the server could not find the requested resource
Running any kubectl command at this point produces errors like the above, because the cluster has not been initialized yet.
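A quick way to confirm that the control plane really has not been set up is to look for the files kubeadm init would create (a sketch using the default kubeadm paths):
ls /etc/kubernetes/manifests/    # static pod manifests appear here only after kubeadm init
ls /etc/kubernetes/admin.conf    # the admin kubeconfig does not exist yet either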
Initialize the controlplane node.
Initialize Control Plane Node (Master Node). Use the following options:
apiserver-advertise-address - Use the IP address allocated to eth0 on the controlplane node
apiserver-cert-extra-sans - Set it to controlplane
pod-network-cidr - Set to 10.244.0.0/16
Once done, set up the default kubeconfig file and wait for node to be part of the cluster.
Check the IP address assigned to eth0 with the ip add command.
controlplane ~ ➜ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d2:6b:aa:07 brd ff:ff:ff:ff:ff:ff
inet 172.12.0.1/24 brd 172.12.0.255 scope global docker0
valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default
link/ether f2:85:d7:64:c4:f5 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 92:2a:5e:43:46:fa brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
valid_lft forever preferred_lft forever
13849: eth0@if13850: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:c0:12:d7:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.18.215.9/24 brd 192.18.215.255 scope global eth0
valid_lft forever preferred_lft forever
13851: eth1@if13852: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:19:00:38 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.25.0.56/24 brd 172.25.0.255 scope global eth1
valid_lft forever preferred_lft forever
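The eth0 address can also be extracted non-interactively (a sketch, assuming the interface is named eth0 as above) and substituted into --apiserver-advertise-address below:
ip -4 addr show eth0 | awk '/inet / {print $2}' | cut -d/ -f1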
kubeadm init --apiserver-advertise-address=192.18.215.9 --apiserver-cert-extra-sans=controlplane --pod-network-cidr=10.244.0.0/16
When it completes, a message like this appears:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.18.215.9:6443 --token sl8f8x.1dvkt1ydhyetpr0y \
--discovery-token-ca-cert-hash sha256:3a483805d08904795891e27aad3a8d21c30d4e8af83fb953e59083c4dd60e1e9
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Copy and run this part of the output.
controlplane ~ ➜ kubectl get node
NAME STATUS ROLES AGE VERSION
controlplane NotReady control-plane 71s v1.31.0
It works. The node shows NotReady for now because no pod network add-on (CNI) has been installed yet, which is expected at this stage.
Join the node01 node to the cluster.
On node01, run the join command that kubeadm init printed:
kubeadm join 192.18.215.9:6443 --token sl8f8x.1dvkt1ydhyetpr0y \
--discovery-token-ca-cert-hash sha256:3a483805d08904795891e27aad3a8d21c30d4e8af83fb953e59083c4dd60e1e9
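If the token has expired or the original join command was lost, a fresh one can be generated on the controlplane (a standard kubeadm fallback):
kubeadm token create --print-join-command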
Install flannel.
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
The friendly message even includes a link.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
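Once the manifest is applied, the flannel pods should start and both nodes should eventually report Ready (a quick verification; the kube-flannel namespace name comes from the upstream manifest):
kubectl get pods -n kube-flannel
kubectl get nodes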