There is a problem with the application deployment. Fix the cluster issue.
controlplane ~ ➜ kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/app-776bb5c68f-jgxr2 0/1 Pending 0 46s
kube-flannel pod/kube-flannel-ds-7ppwx 1/1 Running 0 88s
kube-system pod/coredns-77d6fd4654-rvn8b 1/1 Running 0 88s
kube-system pod/coredns-77d6fd4654-w92x5 1/1 Running 0 88s
kube-system pod/etcd-controlplane 1/1 Running 0 92s
kube-system pod/kube-apiserver-controlplane 1/1 Running 0 92s
kube-system pod/kube-controller-manager-controlplane 1/1 Running 0 92s
kube-system pod/kube-proxy-j4r8s 1/1 Running 0 88s
kube-system pod/kube-scheduler-controlplane 0/1 RunContainerError 3 (3s ago) 47s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 95s
kube-system service/kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP,9153/TCP 92s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel daemonset.apps/kube-flannel-ds 1 1 1 1 1 <none> 91s
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 92s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/app 0/1 1 0 46s
kube-system deployment.apps/coredns 2/2 2 2 92s
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/app-776bb5c68f 1 1 0 46s
kube-system replicaset.apps/coredns-77d6fd4654 2 2 2 88s
The deployment in the default namespace is not Ready,
and the scheduler pod is failing as well.
Let's start with the scheduler.
controlplane ~ ➜ kubectl config set-context --current --namespace=kube-system
Context "kubernetes-admin@kubernetes" modified.
controlplane ~ ➜ kubectl describe po kube-scheduler-controlplane
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m40s (x10 over 4m56s) kubelet Back-off restarting failed container kube-scheduler in pod kube-scheduler-controlplane_kube-system(3aa60fbba62f9faa79076d6a3f6cb9ba)
Normal Pulled 3m25s (x5 over 4m58s) kubelet Container image "registry.k8s.io/kube-scheduler:v1.31.0" already present on machine
Normal Created 3m25s (x5 over 4m58s) kubelet Created container kube-scheduler
Warning Failed 3m25s (x5 over 4m58s) kubelet Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "kube-schedulerrrr": executable file not found in $PATH: unknown
Check the scheduler's static pod manifest:
controlplane /etc/kubernetes/manifests ➜ vim kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-schedulerrrr
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: registry.k8s.io/kube-scheduler:v1.31.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
Change kube-schedulerrrr in spec.command to kube-scheduler,
then delete the old scheduler pod so the kubelet recreates it from the fixed manifest.
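Since static pods are just files under /etc/kubernetes/manifests, the edit can also be done in one line; a minimal sketch, assuming the typo is exactly `kube-schedulerrrr` as the error message shows:

```shell
# Replace the misspelled binary name in the static pod manifest.
# The kubelet watches this directory and recreates the pod when the
# file changes, so deleting the pod afterwards just speeds things up.
MANIFEST=/etc/kubernetes/manifests/kube-scheduler.yaml
sed -i 's/kube-schedulerrrr/kube-scheduler/' "$MANIFEST"

# Confirm the command entry is correct now
grep -n -- '- kube-scheduler$' "$MANIFEST"
```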
controlplane /etc/kubernetes/manifests ➜ kubectl delete po kube-scheduler-controlplane
pod "kube-scheduler-controlplane" deleted
controlplane /etc/kubernetes/manifests ➜ kubectl get po
NAME READY STATUS RESTARTS AGE
coredns-77d6fd4654-rvn8b 1/1 Running 0 10m
coredns-77d6fd4654-w92x5 1/1 Running 0 10m
etcd-controlplane 1/1 Running 0 10m
kube-apiserver-controlplane 1/1 Running 0 10m
kube-controller-manager-controlplane 1/1 Running 0 10m
kube-proxy-j4r8s 1/1 Running 0 10m
kube-scheduler-controlplane 1/1 Running 0 27s
Confirmed the scheduler is running normally again.
Scale deployment app to 2 pods.
controlplane ~ ➜ kubectl config set-context --current --namespace=default
Context "kubernetes-admin@kubernetes" modified.
controlplane ~ ➜ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
app 1/1 1 1 12m
controlplane ~ ➜ kubectl get deploy -o yaml > ./app.yaml
controlplane ~ ➜ vim app.yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2025-01-11T12:23:45Z"
    generation: 1
    labels:
      app: app
    name: app
    namespace: default
    resourceVersion: "972"
    uid: 4315a282-944a-41e4-b13d-00e8ddc8cb69
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: app
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: app
      spec:
        containers:
        - image: nginx:alpine
          imagePullPolicy: IfNotPresent
          name: nginx
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2025-01-11T12:32:42Z"
      lastUpdateTime: "2025-01-11T12:32:42Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2025-01-11T12:23:45Z"
      lastUpdateTime: "2025-01-11T12:32:42Z"
      message: ReplicaSet "app-776bb5c68f" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
Set replicas to 2 in the YAML and apply it, or simply:
kubectl scale deploy app --replicas=2
The deployment was scaled to 2, but no new pod actually appears. Fix the issue.
When scaling does not take effect, check the controller-manager first: the ReplicaSet controller that actually creates the pods runs inside it.
controlplane ~ ➜ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default app-776bb5c68f-jgxr2 1/1 Running 0 26m
kube-flannel kube-flannel-ds-7ppwx 1/1 Running 0 27m
kube-system coredns-77d6fd4654-rvn8b 1/1 Running 0 27m
kube-system coredns-77d6fd4654-w92x5 1/1 Running 0 27m
kube-system etcd-controlplane 1/1 Running 0 27m
kube-system kube-apiserver-controlplane 1/1 Running 0 27m
kube-system kube-controller-manager-controlplane 0/1 CrashLoopBackOff 7 (3m47s ago) 14m
kube-system kube-proxy-j4r8s 1/1 Running 0 27m
kube-system kube-scheduler-controlplane 1/1 Running 0 17m
The controller-manager is in CrashLoopBackOff.
controlplane ~ ➜ kubectl logs kube-controller-manager-controlplane -n kube-system
I0111 12:46:33.040609 1 serving.go:386] Generated self-signed cert in-memory
E0111 12:46:33.040712 1 run.go:72] "command failed" err="stat /etc/kubernetes/controller-manager-XXXX.conf: no such file or directory"
controlplane /etc/kubernetes/manifests ➜ cat kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=172.17.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager-XXXX.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=172.20.0.0/16
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.31.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
There is a bad flag under spec.command: --kubeconfig points at /etc/kubernetes/controller-manager-XXXX.conf, the file the log said does not exist.
Correct it to /etc/kubernetes/controller-manager.conf, then delete the failing pod.
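The same file-level fix works here; a sketch, assuming the only bad token is the `-XXXX` suffix on the --kubeconfig flag:

```shell
# Point --kubeconfig back at the file the other *-kubeconfig flags already use.
MANIFEST=/etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i 's#controller-manager-XXXX.conf#controller-manager.conf#' "$MANIFEST"

# All kubeconfig flags should now reference /etc/kubernetes/controller-manager.conf
grep -n 'kubeconfig=' "$MANIFEST"
```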
controlplane /etc/kubernetes/manifests ➜ kubectl delete po kube-controller-manager-controlplane -n kube-system
pod "kube-controller-manager-controlplane" deleted
controlplane /etc/kubernetes/manifests ➜ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default app-776bb5c68f-jgxr2 1/1 Running 0 30m
default app-776bb5c68f-zzgxx 1/1 Running 0 9s
kube-flannel kube-flannel-ds-7ppwx 1/1 Running 0 30m
kube-system coredns-77d6fd4654-rvn8b 1/1 Running 0 30m
kube-system coredns-77d6fd4654-w92x5 1/1 Running 0 30m
kube-system etcd-controlplane 1/1 Running 0 31m
kube-system kube-apiserver-controlplane 1/1 Running 0 31m
kube-system kube-controller-manager-controlplane 1/1 Running 0 5s
kube-system kube-proxy-j4r8s 1/1 Running 0 30m
kube-system kube-scheduler-controlplane 1/1 Running 0 20m
controlplane /etc/kubernetes/manifests ➜ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/app-776bb5c68f-jgxr2 1/1 Running 0 30m
pod/app-776bb5c68f-zzgxx 1/1 Running 0 35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 31m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app 2/2 2 2 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/app-776bb5c68f 2 2 2 30m
Confirmed everything is working normally.
Scaling is broken again. Fix it so the deployment can be scaled to 3 replicas.
controlplane /etc/kubernetes/manifests ➜ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/app-776bb5c68f-jgxr2 1/1 Running 0 32m
pod/app-776bb5c68f-zzgxx 1/1 Running 0 2m20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 33m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app 2/3 2 2 32m
NAME DESIRED CURRENT READY AGE
replicaset.apps/app-776bb5c68f 2 2 2 32m
controlplane /etc/kubernetes/manifests ➜ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default app-776bb5c68f-jgxr2 1/1 Running 0 32m
default app-776bb5c68f-zzgxx 1/1 Running 0 2m25s
kube-flannel kube-flannel-ds-7ppwx 1/1 Running 0 33m
kube-system coredns-77d6fd4654-rvn8b 1/1 Running 0 33m
kube-system coredns-77d6fd4654-w92x5 1/1 Running 0 33m
kube-system etcd-controlplane 1/1 Running 0 33m
kube-system kube-apiserver-controlplane 1/1 Running 0 33m
kube-system kube-controller-manager-controlplane 0/1 CrashLoopBackOff 3 (7s ago) 54s
kube-system kube-proxy-j4r8s 1/1 Running 0 33m
kube-system kube-scheduler-controlplane 1/1 Running 0 23m
The controller-manager is failing once again.
controlplane /etc/kubernetes/manifests ➜ kubectl logs kube-controller-manager-controlplane -n kube-system
I0111 12:56:54.045409 1 serving.go:386] Generated self-signed cert in-memory
E0111 12:56:54.304387 1 run.go:72] "command failed" err="unable to load client CA provider: open /etc/kubernetes/pki/ca.crt: no such file or directory"
controlplane /etc/kubernetes/pki ➜ cat ca.crt
-----BEGIN CERTIFICATE-----
MIIDBTCCAe2gAwIBAgIIGPEenQtTYb4wDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yNTAxMTExMjE3MzRaFw0zNTAxMDkxMjIyMzRaMBUx
EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQDN3pYpw67ce5xgsvXDhO7bxyzYjyONn3Y/oC5lfSZ9DHu/DrONWovufF5b
AIqELVBCUpni53KZsF7WvB80C1Bh5xvJYvHEvh6rwy5eIYj2gshvQa31PLI0/vaq
d9LQuAfU6FC7fRRfQD/iuGMvf6TttJt8pD5ZWiQGS7hyz3pC0okdB/3s4ZMA/Duh
MtL0KLGzcYHedkaSOZExvRbiGejGKe/bK9IkRU/f5QtgDpxBblveuhi9OlR7VQvz
EJO33K/KjrUQJs8WAHQ78PpNFmS/T3zqddTx0sFWUzrcD9H4934txVIxUK7u66Wh
ihXlDgRj4N/QqqrYezIK2Se1bzc3AgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBS65f6Ghzjb0oJn4alufpGaVQPzIjAV
BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQCnlZt1NVro
rgshmfWF84IMGl9pTO2y2bfsJgMRZO2A9p3H1HRuH6rkHUP8Og7mrFKlxxm3JT6D
ZiZDY08fI6UAE5AGJokAVuoBmLWXWUoi7XmgsyJTRIRxXT87y5qeHKB6xfChlZa0
3E0JpBO7/t/lLAmqR+SZsm7sLM9nw2btGF0HR7EUiTWtmqRQ/Gp1v/0DxwSOjyiD
2WFbXS9P7AffhrvXHSNnuDIAFQ7diIAvqwQ88vs4KQs/DPQRS+4ysbu7cXmVV1Xd
FTgbLLAxNKhTR5kSdYUDQYT1pWF7tC/veji6vHG8feXpN0+FGtKxoXaoltcrUY+J
UbXi+HkBsMuk
-----END CERTIFICATE-----
ca.crt clearly exists at /etc/kubernetes/pki on the host, yet the container reports it cannot find the file, which strongly suggests the volume mount itself is wrong.
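The suspicion can be checked mechanically before rereading the whole manifest: list every hostPath in the static pod manifest and flag any that do not exist on the node. A sketch, assuming the default kubeadm manifest location:

```shell
MANIFEST=/etc/kubernetes/manifests/kube-controller-manager.yaml
# hostPath entries are the only bare "path:" keys in the manifest;
# print each one and report those missing on the host.
grep -E '^[[:space:]]*path:' "$MANIFEST" | awk '{print $2}' | while read -r p; do
  [ -e "$p" ] || echo "MISSING: $p"
done
```

On this cluster it would be expected to print only the bogus PKI path, pointing straight at the k8s-certs volume.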
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=172.17.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=172.20.0.0/16
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.31.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/WRONG-PKI-DIRECTORY
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
The k8s-certs volume's hostPath points at the wrong directory (/etc/kubernetes/WRONG-PKI-DIRECTORY instead of /etc/kubernetes/pki); correct it and the kubelet restarts the pod with the proper mount.
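This too can be scripted; a sketch, assuming the bogus path is literally /etc/kubernetes/WRONG-PKI-DIRECTORY as shown above:

```shell
# Repair the k8s-certs hostPath so the real PKI directory is mounted into the pod.
MANIFEST=/etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i 's#/etc/kubernetes/WRONG-PKI-DIRECTORY#/etc/kubernetes/pki#' "$MANIFEST"

# The k8s-certs volume should now point at the real directory
grep -n 'path: /etc/kubernetes/pki$' "$MANIFEST"
```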
controlplane /etc/kubernetes/manifests ➜ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-77d6fd4654-rvn8b 1/1 Running 0 42m
coredns-77d6fd4654-w92x5 1/1 Running 0 42m
etcd-controlplane 1/1 Running 0 42m
kube-apiserver-controlplane 1/1 Running 0 42m
kube-controller-manager-controlplane 1/1 Running 0 80s
kube-proxy-j4r8s 1/1 Running 0 42m
kube-scheduler-controlplane 1/1 Running 0 32m
controlplane /etc/kubernetes/manifests ➜ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/app-776bb5c68f-cdwzk 1/1 Running 0 62s
pod/app-776bb5c68f-jgxr2 1/1 Running 0 41m
pod/app-776bb5c68f-zzgxx 1/1 Running 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 42m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app 3/3 3 3 41m
NAME DESIRED CURRENT READY AGE
replicaset.apps/app-776bb5c68f 3 3 3 41m
The deployment scaled to 3 successfully.