Deploy a pod.
Name: nginx-pod
Image: nginx:alpine
controlplane ~ ➜ kubectl run nginx-pod --image=nginx:alpine --dry-run=client -o yaml > nginx.yaml
controlplane ~ ➜ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx-pod
name: nginx-pod
spec:
containers:
- image: nginx:alpine
name: nginx-pod
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
controlplane ~ ➜ kubectl apply -f nginx.yaml
pod/nginx-pod created
controlplane ~ ➜ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 2s
Deploy a pod.
Pod Name: messaging
Image: redis:alpine
Labels: tier=msg
controlplane ~ ➜ kubectl run messaging --image=redis:alpine --labels='tier=msg' --dry-run=client -o yaml > messaging.yaml
controlplane ~ ➜ cat messaging.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
tier: msg
name: messaging
spec:
containers:
- image: redis:alpine
name: messaging
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
controlplane ~ ➜ kubectl apply -f messaging.yaml
pod/messaging created
controlplane ~ ➜ kubectl get po
NAME READY STATUS RESTARTS AGE
messaging 1/1 Running 0 26s
nginx-pod 1/1 Running 0 2m19s
Create a namespace.
Namespace: apx-x9984574
controlplane ~ ➜ kubectl create namespace apx-x9984574 --dry-run=client -o yaml > namespace.yaml
controlplane ~ ➜ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
creationTimestamp: null
name: apx-x9984574
spec: {}
status: {}
controlplane ~ ➜ kubectl apply -f namespace.yaml
namespace/apx-x9984574 created
controlplane ~ ✖ kubectl get namespace
NAME STATUS AGE
apx-x9984574 Active 5s
default Active 18m
kube-flannel Active 18m
kube-node-lease Active 18m
kube-public Active 18m
kube-system Active 18m
Save the list of nodes in JSON format to /opt/outputs/nodes-z3444kd9.json.
controlplane ~ ➜ kubectl get node -o json > /opt/outputs/nodes-z3444kd9.json
controlplane ~ ➜ cat /opt/outputs/nodes-z3444kd9.json
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"annotations": {
"flannel.alpha.coreos.com/backend-data": "{\"VNI\":1,\"VtepMAC\":\"c6:62:0d:38:e8:2e\"}",
"flannel.alpha.coreos.com/backend-type": "vxlan",
"flannel.alpha.coreos.com/kube-subnet-manager": "true",
"flannel.alpha.coreos.com/public-ip": "192.168.233.151",
"kubeadhttp://m.alpha.kubernetes.io/cri-socket": "unix:///var/run/containerd/containerd.sock",
"node.alpha.kubernetes.io/ttl": "0",
"volumes.kubernetes.io/controller-managed-attach-detach": "true"
},
"creationTimestamp": "2025-01-13T07:47:09Z",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/arch": "amd64",
"kubernetes.io/hostname": "controlplane",
"kubernetes.io/os": "linux",
"node-role.kubernetes.io/control-plane": "",
"node.kubernetes.io/exclude-from-external-load-balancers": ""
},
"name": "controlplane",
"resourceVersion": "1789",
"uid": "7a7373ec-3253-4728-a419-2d0d3d1b64b3"
},
"spec": {
"podCIDR": "172.17.0.0/24",
"podCIDRs": [
"172.17.0.0/24"
]
},
"status": {
"addresses": [
{
"address": "192.168.233.151",
"type": "InternalIP"
},
{
"address": "controlplane",
"type": "Hostname"
}
],
"allocatable": {
"cpu": "16",
"ephemeral-storage": "712126563583",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "65735884Ki",
"pods": "110"
},
"capacity": {
"cpu": "16",
"ephemeral-storage": "772706776Ki",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "65838284Ki",
"pods": "110"
},
"conditions": [
{
"lastHeartbeatTime": "2025-01-13T07:47:23Z",
"lastTransitionTime": "2025-01-13T07:47:23Z",
"message": "Flannel is running on this node",
"reason": "FlannelIsUp",
"status": "False",
"type": "NetworkUnavailable"
},
{
"lastHeartbeatTime": "2025-01-13T08:04:13Z",
"lastTransitionTime": "2025-01-13T07:47:09Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2025-01-13T08:04:13Z",
"lastTransitionTime": "2025-01-13T07:47:09Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2025-01-13T08:04:13Z",
"lastTransitionTime": "2025-01-13T07:47:09Z",
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2025-01-13T08:04:13Z",
"lastTransitionTime": "2025-01-13T07:47:21Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
],
"daemonEndpoints": {
"kubeletEndpoint": {
"Port": 10250
}
},
"images": [
{
"names": [
"docker.io/kodekloud/fluent-ui-running@sha256:78fd68ba8a79adcd3e58897a933492886200be513076ba37f843008cc0168f81",
"docker.io/kodekloud/fluent-ui-running:latest"
],
"sizeBytes": 389734636
},
{
"names": [
"docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb",
"docker.io/library/nginx:latest"
],
"sizeBytes": 72950530
},
{
"names": [
"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a",
"registry.k8s.io/etcd:3.5.15-0"
],
"sizeBytes": 56909194
},
{
"names": [
"docker.io/weaveworks/weave-kube@sha256:d797338e7beb17222e10757b71400d8471bdbd9be13b5da38ce2ebf597fb4e63",
"docker.io/weaveworks/weave-kube:2.8.1"
],
"sizeBytes": 30924173
},
{
"names": [
"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe",
"registry.k8s.io/kube-proxy:v1.31.0"
],
"sizeBytes": 30207900
},
{
"names": [
"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
"registry.k8s.io/kube-apiserver:v1.31.0"
],
"sizeBytes": 28063421
},
{
"names": [
"docker.io/flannel/flannel@sha256:c951947891d7811a4da6bf6f2f4dcd09e33c6e1eb6a95022f3f621d00ed4615e",
"docker.io/flannel/flannel:v0.23.0"
],
"sizeBytes": 28051548
},
{
"names": [
"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d",
"registry.k8s.io/kube-controller-manager:v1.31.0"
],
"sizeBytes": 26240868
},
{
"names": [
"docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250",
"docker.io/library/nginx:alpine"
],
"sizeBytes": 20506631
},
{
"names": [
"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808",
"registry.k8s.io/kube-scheduler:v1.31.0"
],
"sizeBytes": 20196722
},
{
"names": [
"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
"registry.k8s.io/coredns/coredns:v1.11.1"
],
"sizeBytes": 18182961
},
{
"names": [
"docker.io/library/redis@sha256:1bf97f21f01b0e7bd4b7b34a26d3b9d8086e41e70c10f262e8a9e0b49b5116a0",
"docker.io/library/redis:alpine"
],
"sizeBytes": 17237642
},
{
"names": [
"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
"registry.k8s.io/coredns/coredns:v1.10.1"
],
"sizeBytes": 16190758
},
{
"names": [
"docker.io/weaveworks/weave-npc@sha256:38d3e30a97a2260558f8deb0fc4c079442f7347f27c86660dbfc8ca91674f14c",
"docker.io/weaveworks/weave-npc:2.8.1"
],
"sizeBytes": 12814131
},
{
"names": [
"docker.io/flannel/flannel-cni-plugin@sha256:ca6779c6ad63b77af8a00151cefc08578241197b9a6fe144b0e55484bc52b852",
"docker.io/flannel/flannel-cni-plugin:v1.2.0"
],
"sizeBytes": 3879095
},
{
"names": [
"docker.io/library/busybox@sha256:768e5c6f5cb6db0794eec98dc7a967f40631746c32232b78a3105fb946f3ab83",
"docker.io/library/busybox:latest"
],
"sizeBytes": 2166802
},
{
"names": [
"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a",
"registry.k8s.io/pause:3.10"
],
"sizeBytes": 320368
},
{
"names": [
"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db",
"registry.k8s.io/pause:3.6"
],
"sizeBytes": 301773
}
],
"nodeInfo": {
"architecture": "amd64",
"bootID": "6a4d4838-3dd8-4631-8565-d57ff35ef42f",
"containerRuntimeVersion": "containerd://1.6.26",
"kernelVersion": "5.15.0-1072-gcp",
"kubeProxyVersion": "",
"kubeletVersion": "v1.31.0",
"machineID": "132e3d2451f947fe9214456160254717",
"operatingSystem": "linux",
"osImage": "Ubuntu 22.04.4 LTS",
"systemUUID": "2843aace-db26-8ff7-fc61-bfbf430486b3"
}
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": ""
}
}
Create a service.
Service: messaging-service
Port: 6379
Type: ClusterIP
Use the right labels
controlplane ~ ➜ kubectl expose po messaging --port=6379 --name=messaging-service
service/messaging-service exposed
controlplane ~ ➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 33m
messaging-service ClusterIP 172.20.144.136 <none> 6379/TCP 7s
controlplane ~ ➜ kubectl describe svc messaging-service
Name: messaging-service
Namespace: default
Labels: tier=msg
Annotations: <none>
Selector: tier=msg
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.144.136
IPs: 172.20.144.136
Port: <unset> 6379/TCP
TargetPort: 6379/TCP
Endpoints: 172.17.0.6:6379
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
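For reference, a minimal declarative sketch of the Service that kubectl expose generated here; the selector comes from the pod's tier=msg label, which is what "Use the right labels" refers to:
apiVersion: v1
kind: Service
metadata:
  name: messaging-service
spec:
  type: ClusterIP
  selector:
    tier: msg
  ports:
  - port: 6379
    targetPort: 6379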
Create a deployment.
Name: hr-web-app
Image: kodekloud/webapp-color
Replicas: 2
controlplane ~ ➜ kubectl create deploy hr-web-app --image=kodekloud/webapp-color --replicas=2 --dry-run=client -o yaml > deploy.yaml
controlplane ~ ➜ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: hr-web-app
name: hr-web-app
spec:
replicas: 2
selector:
matchLabels:
app: hr-web-app
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: hr-web-app
spec:
containers:
- image: kodekloud/webapp-color
name: webapp-color
resources: {}
status: {}
controlplane ~ ➜ kubectl apply -f deploy.yaml && kubectl get deploy
deployment.apps/hr-web-app created
NAME READY UP-TO-DATE AVAILABLE AGE
hr-web-app 0/2 2 0 0s
controlplane ~ ➜ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
hr-web-app 2/2 2 2 6s
controlplane ~ ➜ kubectl get po
NAME READY STATUS RESTARTS AGE
hr-web-app-69b94cfc67-5dqsg 1/1 Running 0 9s
hr-web-app-69b94cfc67-lgz6d 1/1 Running 0 9s
messaging 1/1 Running 0 20m
nginx-pod 1/1 Running 0 21m
Create a pod.
Name: static-busybox
Image: busybox
command: sleep 1000
controlplane ~ ✖ kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > static.yaml
controlplane ~ ➜ cat static.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: static-busybox
name: static-busybox
spec:
containers:
- command:
- sleep
- "1000"
image: busybox
name: static-busybox
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
controlplane ~ ➜ kubectl apply -f static.yaml
pod/static-busybox created
Usage:
kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server|client] [--overrides=inline-json]
[--command] -- [COMMAND] [args...] [options]
When the --dry-run option was placed after the command (after the -- separator), it did not take effect.
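Everything after the -- separator is passed to the container as its command and arguments, so any kubectl flags placed there are silently treated as container args instead. A quick illustration (the first invocation is hypothetical, the second is the one used above):
# Wrong: --dry-run and -o become arguments of the container command, so a real pod gets created
kubectl run static-busybox --image=busybox --command -- sleep 1000 --dry-run=client -o yaml

# Right: all kubectl flags come before the -- separator
kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > static.yaml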
Create a pod.
Name: temp-bus
Image Name: redis:alpine
Namespace: finance
controlplane ~ ➜ kubectl get namespace
NAME STATUS AGE
apx-x9984574 Active 24m
default Active 43m
finance Active 35s
kube-flannel Active 43m
kube-node-lease Active 43m
kube-public Active 43m
kube-system Active 43m
controlplane ~ ➜ kubectl run temp-bus --image=redis:alpine -n finance --dry-run=client -o yaml > temp-bus.yaml
controlplane ~ ➜ cat temp-bus.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: temp-bus
name: temp-bus
namespace: finance
spec:
containers:
- image: redis:alpine
name: temp-bus
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
controlplane ~ ✖ kubectl apply -f temp-bus.yaml
pod/temp-bus created
controlplane ~ ➜ kubectl get po -n finance
NAME READY STATUS RESTARTS AGE
temp-bus 1/1 Running 0 6s
Fix the issue with the orange pod.
controlplane ~ ➜ kubectl get po
NAME READY STATUS RESTARTS AGE
hr-web-app-69b94cfc67-5dqsg 1/1 Running 0 9m4s
hr-web-app-69b94cfc67-lgz6d 1/1 Running 0 9m4s
messaging 1/1 Running 0 29m
nginx-pod 1/1 Running 0 30m
orange 0/1 Init:CrashLoopBackOff 2 (24s ago) 39s
static-busybox 1/1 Running 0 3m43s
The Init: prefix in the STATUS column suggests the problem is in an init container.
controlplane ~ ✖ kubectl describe po orange
Name: orange
Namespace: default
Priority: 0
Service Account: default
Node: controlplane/192.168.233.151
Start Time: Mon, 13 Jan 2025 08:32:01 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP: 172.17.0.12
IPs:
IP: 172.17.0.12
Init Containers:
init-myservice:
Container ID: containerd://2c04ac32be2d636f34f41097477547e0e7fc46cf846ef9d22c49a240e0bd24e5
Image: busybox
Image ID: docker.io/library/busybox@sha256:2919d0172f7524b2d8df9e50066a682669e6d170ac0f6a49676d54358fe970b5
Port: <none>
Host Port: <none>
Command:
sh
-c
sleeeep 2;
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 127
Started: Mon, 13 Jan 2025 08:33:26 +0000
Finished: Mon, 13 Jan 2025 08:33:26 +0000
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lq8n (ro)
Containers:
orange-container:
Container ID:
Image: busybox:1.28
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo The app is running! && sleep 3600
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lq8n (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-2lq8n:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m13s default-scheduler Successfully assigned default/orange to controlplane
Normal Pulled 2m12s kubelet Successfully pulled image "busybox" in 159ms (159ms including waiting). Image size: 2167089 bytes.
Normal Pulled 2m11s kubelet Successfully pulled image "busybox" in 137ms (137ms including waiting). Image size: 2167089 bytes.
Normal Pulled 118s kubelet Successfully pulled image "busybox" in 135ms (135ms including waiting). Image size: 2167089 bytes.
Normal Started 89s (x4 over 2m12s) kubelet Started container init-myservice
Normal Pulled 89s kubelet Successfully pulled image "busybox" in 141ms (141ms including waiting). Image size: 2167089 bytes.
Warning BackOff 61s (x6 over 2m11s) kubelet Back-off restarting failed container init-myservice in pod orange_default(5b627570-c7db-4b97-a293-996a1c041996)
Normal Pulling 48s (x5 over 2m12s) kubelet Pulling image "busybox"
Normal Created 48s (x5 over 2m12s) kubelet Created container init-myservice
Normal Pulled 48s kubelet Successfully pulled image "busybox" in 146ms (146ms including waiting). Image size: 2167089 bytes.
Let's check the logs of init-myservice, the orange pod's init container.
controlplane ~ ✖ kubectl logs orange init-myservice
sh: sleeeep: not found
The command seems to be the problem. Looking at the pod's spec:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2025-01-13T08:32:01Z"
name: orange
namespace: default
resourceVersion: "4575"
uid: 5b627570-c7db-4b97-a293-996a1c041996
spec:
containers:
- command:
- sh
- -c
- echo The app is running! && sleep 3600
image: busybox:1.28
imagePullPolicy: IfNotPresent
name: orange-container
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2lq8n
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- command:
- sh
- -c
- sleeeep 2;
image: busybox
imagePullPolicy: Always
name: init-myservice
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2lq8n
readOnly: true
Fix the incorrect command in the init container (sleeeep → sleep).
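A running pod's container command cannot be changed in place, so one way to apply the fix (a sketch; the exact steps are not in the transcript) is to dump the spec, correct the typo, and recreate the pod:
kubectl get pod orange -o yaml > orange.yaml
# fix the init container command: "sleeeep 2;" -> "sleep 2;"
vim orange.yaml
kubectl replace --force -f orange.yaml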
Create a NodePort service.
Name: hr-web-app-service
Type: NodePort
Endpoints: 2
Port: 8080
NodePort: 30082
controlplane ~ ➜ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/hr-web-app-69b94cfc67-5dqsg 1/1 Running 0 18m
pod/hr-web-app-69b94cfc67-lgz6d 1/1 Running 0 18m
pod/messaging 1/1 Running 0 38m
pod/nginx-pod 1/1 Running 0 40m
pod/orange 1/1 Running 0 2m15s
pod/static-busybox 1/1 Running 0 13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 55m
service/messaging-service ClusterIP 172.20.144.136 <none> 6379/TCP 21m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hr-web-app 2/2 2 2 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hr-web-app-69b94cfc67 2 2 2 18m
hr-web-app is a Deployment.
controlplane ~ ✖ kubectl expose deploy hr-web-app --name=hr-web-app-service --type=NodePort --port=8080 --dry-run=client -o yaml > hr-web-app-svc.yaml
controlplane ~ ➜ kubectl apply -f hr-web-app-svc.yaml
service/hr-web-app-service created
controlplane ~ ➜ kubectl edit svc hr-web-app-service
service/hr-web-app-service edited
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"hr-web-app"},"name":"hr-web-app-service","namespace":"default"},"spec":{"ports":[{"port":30082,"protocol":"TCP","targetPort":8080}],"selector":{"app":"hr-web-app"},"type":"NodePort"},"status":{"loadBalancer":{}}}
creationTimestamp: "2025-01-13T08:44:42Z"
labels:
app: hr-web-app
name: hr-web-app-service
namespace: default
resourceVersion: "5118"
uid: f6870333-b2f8-41cd-9d51-6b3bb1248e00
spec:
clusterIP: 172.20.143.133
clusterIPs:
- 172.20.143.133
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30082
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: hr-web-app
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
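kubectl expose has no flag for choosing a specific nodePort, which is why the Service was edited after creation to set 30082. A non-interactive alternative (a sketch, not what was run here) is a JSON patch on the existing port entry:
kubectl patch svc hr-web-app-service --type='json' \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30082}]'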
controlplane ~ ➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hr-web-app-service NodePort 172.20.143.133 <none> 8080:30082/TCP 80s
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 58m
messaging-service ClusterIP 172.20.144.136 <none> 6379/TCP 25m
controlplane ~ ➜ kubectl describe svc hr-web-app-service
Name: hr-web-app-service
Namespace: default
Labels: app=hr-web-app
Annotations: <none>
Selector: app=hr-web-app
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.143.133
IPs: 172.20.143.133
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30082/TCP
Endpoints: 172.17.0.7:8080,172.17.0.8:8080
Session Affinity: None
External Traffic Policy: Cluster
Internal Traffic Policy: Cluster
Events: <none>
Extract the OS image of every node to /opt/outputs/nodes_os_x43kj56.txt using a JSONPath query.
controlplane ~ ➜ kubectl get node
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 63m v1.31.0
controlplane ~ ➜ kubectl get node -o json
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"annotations": {
"flannel.alpha.coreos.com/backend-data": "{\"VNI\":1,\"VtepMAC\":\"c6:62:0d:38:e8:2e\"}",
"flannel.alpha.coreos.com/backend-type": "vxlan",
"flannel.alpha.coreos.com/kube-subnet-manager": "true",
"flannel.alpha.coreos.com/public-ip": "192.168.233.151",
"kubeadhttp://m.alpha.kubernetes.io/cri-socket": "unix:///var/run/containerd/containerd.sock",
"node.alpha.kubernetes.io/ttl": "0",
"volumes.kubernetes.io/controller-managed-attach-detach": "true"
},
"creationTimestamp": "2025-01-13T07:47:09Z",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/arch": "amd64",
"kubernetes.io/hostname": "controlplane",
"kubernetes.io/os": "linux",
"node-role.kubernetes.io/control-plane": "",
"node.kubernetes.io/exclude-from-external-load-balancers": ""
},
"name": "controlplane",
"resourceVersion": "5583",
"uid": "7a7373ec-3253-4728-a419-2d0d3d1b64b3"
},
"spec": {
"podCIDR": "172.17.0.0/24",
"podCIDRs": [
"172.17.0.0/24"
]
},
"status": {
"addresses": [
{
"address": "192.168.233.151",
"type": "InternalIP"
},
{
"address": "controlplane",
"type": "Hostname"
}
],
"allocatable": {
"cpu": "16",
"ephemeral-storage": "712126563583",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "65735884Ki",
"pods": "110"
},
"capacity": {
"cpu": "16",
"ephemeral-storage": "772706776Ki",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "65838284Ki",
"pods": "110"
},
"conditions": [
{
"lastHeartbeatTime": "2025-01-13T07:47:23Z",
"lastTransitionTime": "2025-01-13T07:47:23Z",
"message": "Flannel is running on this node",
"reason": "FlannelIsUp",
"status": "False",
"type": "NetworkUnavailable"
},
{
"lastHeartbeatTime": "2025-01-13T08:50:31Z",
"lastTransitionTime": "2025-01-13T07:47:09Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2025-01-13T08:50:31Z",
"lastTransitionTime": "2025-01-13T07:47:09Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2025-01-13T08:50:31Z",
"lastTransitionTime": "2025-01-13T07:47:09Z",
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2025-01-13T08:50:31Z",
"lastTransitionTime": "2025-01-13T07:47:21Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
],
"daemonEndpoints": {
"kubeletEndpoint": {
"Port": 10250
}
},
"images": [
{
"names": [
"docker.io/kodekloud/fluent-ui-running@sha256:78fd68ba8a79adcd3e58897a933492886200be513076ba37f843008cc0168f81",
"docker.io/kodekloud/fluent-ui-running:latest"
],
"sizeBytes": 389734636
},
{
"names": [
"docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb",
"docker.io/library/nginx:latest"
],
"sizeBytes": 72950530
},
{
"names": [
"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a",
"registry.k8s.io/etcd:3.5.15-0"
],
"sizeBytes": 56909194
},
{
"names": [
"docker.io/kodekloud/webapp-color@sha256:99c3821ea49b89c7a22d3eebab5c2e1ec651452e7675af243485034a72eb1423",
"docker.io/kodekloud/webapp-color:latest"
],
"sizeBytes": 31777918
},
{
"names": [
"docker.io/weaveworks/weave-kube@sha256:d797338e7beb17222e10757b71400d8471bdbd9be13b5da38ce2ebf597fb4e63",
"docker.io/weaveworks/weave-kube:2.8.1"
],
"sizeBytes": 30924173
},
{
"names": [
"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe",
"registry.k8s.io/kube-proxy:v1.31.0"
],
"sizeBytes": 30207900
},
{
"names": [
"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
"registry.k8s.io/kube-apiserver:v1.31.0"
],
"sizeBytes": 28063421
},
{
"names": [
"docker.io/flannel/flannel@sha256:c951947891d7811a4da6bf6f2f4dcd09e33c6e1eb6a95022f3f621d00ed4615e",
"docker.io/flannel/flannel:v0.23.0"
],
"sizeBytes": 28051548
},
{
"names": [
"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d",
"registry.k8s.io/kube-controller-manager:v1.31.0"
],
"sizeBytes": 26240868
},
{
"names": [
"docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250",
"docker.io/library/nginx:alpine"
],
"sizeBytes": 20506631
},
{
"names": [
"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808",
"registry.k8s.io/kube-scheduler:v1.31.0"
],
"sizeBytes": 20196722
},
{
"names": [
"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
"registry.k8s.io/coredns/coredns:v1.11.1"
],
"sizeBytes": 18182961
},
{
"names": [
"docker.io/library/redis@sha256:1bf97f21f01b0e7bd4b7b34a26d3b9d8086e41e70c10f262e8a9e0b49b5116a0",
"docker.io/library/redis:alpine"
],
"sizeBytes": 17237642
},
{
"names": [
"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
"registry.k8s.io/coredns/coredns:v1.10.1"
],
"sizeBytes": 16190758
},
{
"names": [
"docker.io/weaveworks/weave-npc@sha256:38d3e30a97a2260558f8deb0fc4c079442f7347f27c86660dbfc8ca91674f14c",
"docker.io/weaveworks/weave-npc:2.8.1"
],
"sizeBytes": 12814131
},
{
"names": [
"docker.io/flannel/flannel-cni-plugin@sha256:ca6779c6ad63b77af8a00151cefc08578241197b9a6fe144b0e55484bc52b852",
"docker.io/flannel/flannel-cni-plugin:v1.2.0"
],
"sizeBytes": 3879095
},
{
"names": [
"docker.io/library/busybox@sha256:2919d0172f7524b2d8df9e50066a682669e6d170ac0f6a49676d54358fe970b5",
"docker.io/library/busybox:latest"
],
"sizeBytes": 2167089
},
{
"names": [
"docker.io/library/busybox@sha256:768e5c6f5cb6db0794eec98dc7a967f40631746c32232b78a3105fb946f3ab83"
],
"sizeBytes": 2166802
},
{
"names": [
"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47",
"docker.io/library/busybox:1.28"
],
"sizeBytes": 727869
},
{
"names": [
"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a",
"registry.k8s.io/pause:3.10"
],
"sizeBytes": 320368
},
{
"names": [
"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db",
"registry.k8s.io/pause:3.6"
],
"sizeBytes": 301773
}
],
"nodeInfo": {
"architecture": "amd64",
"bootID": "6a4d4838-3dd8-4631-8565-d57ff35ef42f",
"containerRuntimeVersion": "containerd://1.6.26",
"kernelVersion": "5.15.0-1072-gcp",
"kubeProxyVersion": "",
"kubeletVersion": "v1.31.0",
"machineID": "132e3d2451f947fe9214456160254717",
"operatingSystem": "linux",
"osImage": "Ubuntu 22.04.4 LTS",
"systemUUID": "2843aace-db26-8ff7-fc61-bfbf430486b3"
}
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": ""
}
}
controlplane ~ ➜ kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}' > /opt/outputs/nodes_os_x43kj56.txt
controlplane ~ ➜ cat /opt/outputs/nodes_os_x43kj56.txt
Ubuntu 22.04.4 LTS
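The query returns all values space-separated on one line; on a multi-node cluster a range expression (a variant of the same JSONPath, not what was run here) prints one OS image per line:
kubectl get nodes -o jsonpath='{range .items[*]}{.status.nodeInfo.osImage}{"\n"}{end}' > /opt/outputs/nodes_os_x43kj56.txt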
Reference: https://kubernetes.io/docs/reference/kubectl/jsonpath/
Create a PV (PersistentVolume).
Volume name: pv-analytics
Storage: 100Mi
Access mode: ReadWriteMany
Host path: /pv/data-analytics
Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
controlplane ~ ➜ vim pv.yaml
controlplane ~ ➜ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-analytics
spec:
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
hostpath:
path: /pv/data-analytics
controlplane ~ ➜ kubectl apply -f pv.yaml
Error from server (BadRequest): error when creating "pv.yaml": PersistentVolume in version "v1" cannot be handled as a PersistentVolume: strict decoding error: unknown field "spec.hostpath"
camelCase issue: the field name must be hostPath, not hostpath.
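Client-side dry-run never sends the object to the API server, so it cannot catch unknown fields; a server-side dry-run runs the same strict validation without persisting anything and would likely have flagged the typo before the real apply:
kubectl apply -f pv.yaml --dry-run=server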
controlplane ~ ✖ vim pv.yaml
controlplane ~ ➜ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-analytics
spec:
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
hostPath:
path: /pv/data-analytics
controlplane ~ ➜ kubectl apply -f pv.yaml
persistentvolume/pv-analytics created
controlplane ~ ➜ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv-analytics 100Mi RWX Retain Available <unset> 13s
controlplane ~ ➜ kubectl describe pv pv-analytics
Name: pv-analytics
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 100Mi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /pv/data-analytics
HostPathType:
Events: <none>
I got the static-busybox creation question wrong.
kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -oyaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
Because it is a static pod, the manifest had to be placed under /etc/kubernetes/manifests.
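Once the manifest is under /etc/kubernetes/manifests, the kubelet creates the pod directly and registers a mirror pod suffixed with the node name, so the result can be checked like this (assuming the node is controlplane):
kubectl get pod static-busybox-controlplane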