A kubeconfig file called admin.kubeconfig has been created in /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
Fix /root/CKA/admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJWTcrSHdPVmNQYjR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBeE1UZ3hNekkwTVRKYUZ3MHpOVEF4TVRZeE16STVNVEphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNzRTg3dUVxL3BKbU81VC9nVXlFOXYwTlhFVkx6TXFGYXBQa0JZa1Q0YUc4WENBRmgvem13KzlSNzYKa0wzendiOUsvK0xxTEZKcEkycEpPMkxWaFpiYTlHSytvZitSUzJETHdmWHBMOVQyOFVrdXNRUGJMa1M1L1NNQwpZQWFhT0NGOXlMWGZ2cDR3S2pMVkdmMmFXV1o1NTEvQUdHZTYrTWNIbTlXSWdnWHlnVW1teDN4dHlWUzZYdGJNCjFBQnV3UmtSTklUR2V2OXc0K1Zrak1mNjFkVUl4cGlXQUN5Z2kwUVRLL2k4R1hIeGx6ZWJrL3hJQzBDUGlIcFoKemVqNDd3Ym9tTmllT3VlTkFIZWFEWnN1TlFDbWZNeHY5SzdWWWJwWUYvcks1S3NqL1FpQit4RzVUUVpQamxRNAo0ZWx5bDMwSG1ONzlWb3JFc05vVlBnWFdmeEs5QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUVS8ycm1tN3dlVUpFMHpmMG1nZGJyaTVLUElqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQU1GOE1hVU9qRwpmQ2pVYzNjMGJVeGhxUTU3MTFxdjZuelExN2FBcUVEUW1TdXg0RVlLQ1AvK2daeWZsdmhCVGFMbGpoUWdqSTNKCmpxdTFzdVY1VzdPZjcyamthaUEreWRQMlBvQURZZUZxYnRPM2FBY3ZKZjhydUFWRktRMXgzSlJsay9FVTVjdEUKSzdnSTk5b0JCcDNrOEE3bVlKa2NEbmZyYm54TDR5L0c4RmUwMHlxa05VelAzNW9vZkRrZnNrR3o4M21HT3A5Qwp0WmpXRzdxNkZoem40YVAyY3RYNkE3SHNyenpNc2NpV3JQZm51SzJDREJoZ3FMSnNPQW1IZnRRU1hBdUMweHpyCjFXcElYbHVRTXJlWkprdnpRdGZnU1dLMDJ4Mk5qOVdxOFZIZnNOd3hRaXJtRkhUY0YvMTc0UzlIby9PMUtMQ2EKVVRLcnIyeHJkQWtFCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://controlplane:4883
  name: kubernetes
The server port looks wrong, so check the correct one with kubectl cluster-info:
controlplane ~ ➜ kubectl cluster-info
Kubernetes control plane is running at https://controlplane:6443
CoreDNS is running at https://controlplane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
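The API server is on 6443, not 4883. One quick way to apply the fix is a sed one-liner (a sketch; it assumes 4883 appears only in the server URL of this file):

```shell
# Replace the wrong port in the server URL
sed -i 's|https://controlplane:4883|https://controlplane:6443|' /root/CKA/admin.kubeconfig

# Sanity check: the fixed kubeconfig should now reach the cluster
kubectl --kubeconfig /root/CKA/admin.kubeconfig get nodes
```

Editing the file directly in vim works just as well; the result is the corrected kubeconfig below.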
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJWTcrSHdPVmNQYjR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBeE1UZ3hNekkwTVRKYUZ3MHpOVEF4TVRZeE16STVNVEphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNzRTg3dUVxL3BKbU81VC9nVXlFOXYwTlhFVkx6TXFGYXBQa0JZa1Q0YUc4WENBRmgvem13KzlSNzYKa0wzendiOUsvK0xxTEZKcEkycEpPMkxWaFpiYTlHSytvZitSUzJETHdmWHBMOVQyOFVrdXNRUGJMa1M1L1NNQwpZQWFhT0NGOXlMWGZ2cDR3S2pMVkdmMmFXV1o1NTEvQUdHZTYrTWNIbTlXSWdnWHlnVW1teDN4dHlWUzZYdGJNCjFBQnV3UmtSTklUR2V2OXc0K1Zrak1mNjFkVUl4cGlXQUN5Z2kwUVRLL2k4R1hIeGx6ZWJrL3hJQzBDUGlIcFoKemVqNDd3Ym9tTmllT3VlTkFIZWFEWnN1TlFDbWZNeHY5SzdWWWJwWUYvcks1S3NqL1FpQit4RzVUUVpQamxRNAo0ZWx5bDMwSG1ONzlWb3JFc05vVlBnWFdmeEs5QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUVS8ycm1tN3dlVUpFMHpmMG1nZGJyaTVLUElqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQU1GOE1hVU9qRwpmQ2pVYzNjMGJVeGhxUTU3MTFxdjZuelExN2FBcUVEUW1TdXg0RVlLQ1AvK2daeWZsdmhCVGFMbGpoUWdqSTNKCmpxdTFzdVY1VzdPZjcyamthaUEreWRQMlBvQURZZUZxYnRPM2FBY3ZKZjhydUFWRktRMXgzSlJsay9FVTVjdEUKSzdnSTk5b0JCcDNrOEE3bVlKa2NEbmZyYm54TDR5L0c4RmUwMHlxa05VelAzNW9vZkRrZnNrR3o4M21HT3A5Qwp0WmpXRzdxNkZoem40YVAyY3RYNkE3SHNyenpNc2NpV3JQZm51SzJDREJoZ3FMSnNPQW1IZnRRU1hBdUMweHpyCjFXcElYbHVRTXJlWkprdnpRdGZnU1dLMDJ4Mk5qOVdxOFZIZnNOd3hRaXJtRkhUY0YvMTc0UzlIby9PMUtMQ2EKVVRLcnIyeHJkQWtFCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://controlplane:6443
  name: kubernetes
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica.
Next, upgrade the deployment to version 1.17 using rolling update and add the annotation message
Updated nginx image to 1.17.
Image: nginx:1.16
Task: Upgrade the version of the deployment to 1.17
controlplane ~ ➜ kubectl create deploy nginx-deploy --image=nginx:1.16 --replicas=1
deployment.apps/nginx-deploy created
controlplane ~ ➜ kubectl set image deploy/nginx-deploy nginx=nginx:1.17
deployment.apps/nginx-deploy image updated
controlplane ~ ➜ kubectl annotate deployment/nginx-deploy kubernetes.io/change-cause="Updated nginx image to 1.17"
deployment.apps/nginx-deploy annotated
controlplane ~ ➜ kubectl rollout history deploy/nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
1 <none>
2 Updated nginx image to 1.17
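To double-check the result, the live image and rollout progress can also be queried directly (a quick sketch):

```shell
# Confirm the pod template now references nginx:1.17
kubectl get deploy nginx-deploy -o jsonpath='{.spec.template.spec.containers[0].image}'

# Block until the rolling update has completed
kubectl rollout status deploy/nginx-deploy
```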
A new deployment called alpha-mysql has been deployed in the alpha namespace. However, the pods are not running. Troubleshoot and fix the issue. The deployment should make use of the persistent volume alpha-pv to be mounted at /var/lib/mysql and should use the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 to make use of an empty root password.
Important: Do not alter the persistent volume.
Troubleshoot and fix the issues
controlplane ~ ➜ kubectl config set-context --current --namespace=alpha
Context "kubernetes-admin@kubernetes" modified.
controlplane ~ ➜ kubectl describe deploy alpha-mysql
Name: alpha-mysql
Namespace: alpha
CreationTimestamp: Sat, 18 Jan 2025 14:18:48 +0000
Labels: app=alpha-mysql
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=alpha-mysql
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=alpha-mysql
Containers:
mysql:
Image: mysql:5.6
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ALLOW_EMPTY_PASSWORD: 1
Mounts:
/var/lib/mysql from mysql-data (rw)
Volumes:
mysql-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-alpha-pvc
ReadOnly: false
Node-Selectors: <none>
Tolerations: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: alpha-mysql-78f449b485 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 20m deployment-controller Scaled up replica set alpha-mysql-78f449b485 to 1
The PVC we need to create must bind to alpha-pv, so check the PV for the values the claim has to match (capacity, access mode, storage class):
controlplane ~ ➜ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
alpha-pv 1Gi RWO Retain Available slow <unset> 20m
controlplane ~ ➜ vim mysql-alpha-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-alpha-pvc
  namespace: alpha
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: slow
controlplane ~ ➜ kubectl apply -f mysql-alpha-pvc.yaml
persistentvolumeclaim/mysql-alpha-pvc created
controlplane ~ ➜ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
alpha-claim Pending slow-storage <unset> 24m
mysql-alpha-pvc Bound alpha-pv 1Gi RWO slow <unset> 3s
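With the claim bound to alpha-pv, the deployment's pending pod should now be able to schedule and start. One way to confirm (a sketch):

```shell
# The deployment should roll out successfully once the PVC is Bound
kubectl -n alpha rollout status deploy/alpha-mysql

# The mysql pod should reach Running
kubectl -n alpha get pods -l app=alpha-mysql
```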
Take the backup of ETCD at the location /opt/etcd-backup.db on the controlplane node.
controlplane ~ ➜ export ETCDCTL_API=3
controlplane ~ ➜ cat /etc/kubernetes/manifests/etcd.yaml | grep file
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
seccompProfile:
controlplane ~ ➜ cat /etc/kubernetes/manifests/etcd.yaml | grep 2379
kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.8.234.12:2379
- --advertise-client-urls=https://192.8.234.12:2379
- --listen-client-urls=https://127.0.0.1:2379,https://192.8.234.12:2379
controlplane ~ ✖ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-backup.db
Snapshot saved at /opt/etcd-backup.db
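The saved snapshot can be inspected to make sure it is usable (note: `etcdctl snapshot status` is deprecated in newer etcd releases in favor of `etcdutl`, so which command works depends on the etcd version installed):

```shell
# Show hash, revision, total keys, and size of the backup
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd-backup.db
```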
Create a pod called secret-1401 in the admin1401 namespace using the busybox image. The container within the pod should be called secret-admin and should sleep for 4800 seconds.
The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.
Pod created correctly?
controlplane ~ ➜ kubectl run secret-1401 --image=busybox -n admin1401 --dry-run=client -o yaml --command -- sleep 4800 > secret-1401.yaml
controlplane ~ ➜ kubectl get secret
No resources found in default namespace.
controlplane ~ ➜ kubectl config set-context --current --namespace=admin1401
Context "kubernetes-admin@kubernetes" modified.
controlplane ~ ➜ kubectl get secret
NAME TYPE DATA AGE
dotfile-secret Opaque 1 31m
controlplane ~ ➜ vim secret-1401.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-1401
  name: secret-1401
  namespace: admin1401
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret
  containers:
  - command:
    - sleep
    - "4800"
    image: busybox
    name: secret-admin
    resources: {}
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
controlplane ~ ➜ kubectl apply -f secret-1401.yaml
pod/secret-1401 created
controlplane ~ ➜ kubectl describe po secret-1401
Name: secret-1401
Namespace: admin1401
Priority: 0
Service Account: default
Node: node01/192.8.234.3
Start Time: Sat, 18 Jan 2025 14:51:01 +0000
Labels: run=secret-1401
Annotations: <none>
Status: Running
IP: 10.244.192.3
IPs:
IP: 10.244.192.3
Containers:
secret-admin:
Container ID: containerd://031011ba545f970190678e0a70cde1203f410760db66443da9401f2c1d0e74d1
Image: busybox
Image ID: docker.io/library/busybox@sha256:a5d0ce49aa801d475da48f8cb163c354ab95cab073cd3c138bd458fc8257fbf1
Port: <none>
Host Port: <none>
Command:
sleep
4800
State: Running
Started: Sat, 18 Jan 2025 14:51:03 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/secret-volume from secret-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g5c97 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
secret-volume:
Type: Secret (a volume populated by a Secret)
SecretName: dotfile-secret
Optional: false
kube-api-access-g5c97:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned admin1401/secret-1401 to node01
Normal Pulling 8s kubelet Pulling image "busybox"
Normal Pulled 8s kubelet Successfully pulled image "busybox" in 320ms (320ms including waiting). Image size: 2167089 bytes.
Normal Created 8s kubelet Created container secret-admin
Normal Started 7s kubelet Started container secret-admin
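Beyond the describe output, the mount itself can be exercised from inside the container (a sketch):

```shell
# List the secret keys projected into the mount path
kubectl -n admin1401 exec secret-1401 -- ls /etc/secret-volume

# A write should fail, since the volume is mounted read-only
kubectl -n admin1401 exec secret-1401 -- touch /etc/secret-volume/test
```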
For reference, search the official docs for "secret volume" (the Secrets documentation).