

Kubernetes Basics (Kubernetes with GCP)

Create a Kubernetes cluster on GCP (a ₩300,000 free credit is available for the first 90 days)

 

- Sign up for GCP -> Kubernetes Engine -> Cluster -> Create

 

- Configure GKE Standard

 

- Name the cluster. Selecting the Korean region asia-northeast3 fails due to a quota issue, so use the default us-central1 region for now.

 

- Choose the node name, node count (3), and location

Default node: 2 vCPU / 4 GB memory / 100 GB disk

 

- Using kubectl

 

- Copy the command-line access string and run it in Cloud Shell

For local installation, see: https://cloud.google.com/sdk/docs/install#deb
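The copied access command is typically of this form (the cluster name, zone, and project below are taken from the kubeconfig later in these notes; substitute your own):

```shell
# Fetch credentials for the cluster and merge them into ~/.kube/config,
# so that kubectl can talk to the GKE control plane.
gcloud container clusters get-credentials openrun-cluster \
    --zone us-central1-c --project eco-droplet-344012
```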

 

- Check the cluster configuration (certificate, master server IP, etc.)

@cloudshell:~ (eco-droplet-344012)$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVMRENDQXBTZ0F3SUJBZ0lRTmtFZHBJZEI2MWxIQnRBK0JZNkdkakFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSalpUY3pPVGd5T0MweE16bGhMVFEyTTJRdE9UbGhaaTAxTkRrM056UmhOalEzWmpNdwpJQmNOTWpJd016RXpNVEl4TXpNd1doZ1BNakExTWpBek1EVXhNekV6TXpCYU1DOHhMVEFyQmdOVkJBTVRKR05sCk56TTVPREk0TFRFek9XRXRORFl6WkMwNU9XRm1MVFUwT1RjM05HRTJORGRtTXpDQ0FhSXdEUVlKS29aSWh2Y04KQVFFQkJRQURnZ0dQQURDQ0FZb0NnZ0dCQU12YkM3TS9qWHc2cXJhOUN4OWpPV3RidjZCa2lPb3BFanRDUTRwSwpnNlBlRXdNOU5heWdBMHJzWDlZbGU4anFGRGhZalRRRGcyV1dFUnhCdk0vcU5OVnZpL3FoSCs2NHA4T1padXM1CllpcGRqYkJramFKeUlQMW5tbkNMY2hqMmRsNHh0cllaWGxlSVVwSUl3WU9vK0dRb3A0cjlXeVhHdTlHRHdDTWEKSHZGZ2w0WnNsR0hZc2IxVE5sRjJQN09CcDE5UWVWMG44bWw0NkoxMzAzaENDV1hHdjIvdWZIT2h6dmlmMnZVcQpMOTcyeDg2UEJSeDZzejNTVGlGaWE3RFBrYUtyYUVHYjRvTmw2MDZZT1FNV2NNMjcxUElYaXBZU3E4d0E0QXZMCjI0NVc2bWVjRGtyenVYT2JvQ1JSQ0dmUDhlVDhvaHhHdWc1bUk2WkYyVSt4R0N0Y2xnbVlSZ2JDTW1KUHRlTEEKTEUxQzhLUHNUZWM1cHRYKzJlKzh0eU9tWEFBMlhrOXdYUm1ZREwrcG9PZ05xR1FUdy9UZUFFQzZPdnJMTmx2WgpGVHkydnBCUGFRS1VwSFZ6RFRNaFRubHRpOUlaczRlc213WW8wOXQwcnRXZkFseDVoRExsTHVUK0xtV2xJS2JlCmF6NDNTYlFYQnhYZG8vVitpWEN0VWt5cllRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQWdRd0R3WUQKVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVpdEFvVytxQ2Q3WUM2VFliZ1lZZHlmeXo1L2N3RFFZSgpLb1pJaHZjTkFRRUxCUUFEZ2dHQkFGVWZWak5xSllhUi9EWjdWb1dsNzZtWjJydDYvYUR1d0NMenNRN3NreUlGCnBlSXZjZ096VStpMU9LQ2ZWMmJVTStON2ZweHRHRXpNS0lvOU8wZWxWcFdJSXF2Y1p4ZW5OMEloWGVtNm1aMHQKQ0wxek9vZVVjRXpvSytYMmRPcUF2UHBDTFIrUW1lMmNOK0RrL2F5eGpsb1pXa0NjbXZ0di9qWWxyaE04d05kYwpwdDhBNk1YZG5KVFN4cnBoeW5Cek84WERoa3QrR092U3RBVHo3Q21hcFNLakEvcG1Vb2VGVFVQRVQ2TjVvaDk4Ck5CTzMxbzdKbVMraElxYzltYldsQ0lNUWRtaGhnSTJWcHp5RnNISUlHVG1uQSt1ekZmcG56MWtKYThjR2drbWIKa2lyODRjRWRnY0lwK3RMZGg4THhzdlIvZ2ZiS3ArK2YwRlNBMTUxKzYzVGtUbko2dCsxT2VQMjErdUE1SzFtcgpwMkhkZ2Q2dXZRQVVQbzlvUHBLU0lPaVpibDBRR2ZIWmVNOWtwdTVzM0RyZUxnQjdhNmdTVHNXeHFGdVY3ZDhBCmgycDMwUVkrWGIvaTJtY252NjcxTGNyL2xMVGFya1JndEFocitYNDRlL0dmeFN4a1h1bUV1MlFNajRuYlc0NnkKcHNrUUJKSGxTTW1WSnpXbDNi
aml5QT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://35.202.96.45
  name: gke_eco-droplet-344012_us-central1-c_openrun-cluster
contexts:
- context:
    cluster: gke_eco-droplet-344012_us-central1-c_openrun-cluster
    user: gke_eco-droplet-344012_us-central1-c_openrun-cluster
  name: gke_eco-droplet-344012_us-central1-c_openrun-cluster
current-context: gke_eco-droplet-344012_us-central1-c_openrun-cluster
kind: Config
preferences: {}
users:
- name: gke_eco-droplet-344012_us-central1-c_openrun-cluster
  user:
    auth-provider:
      config:
        access-token: ya29.A0ARrdaM995hO2DLXlmYUX6qwga17VlJ-5lXtRNQIzBAlQLIAQ7JnTEPnPz_jpEIfoQFtOFxuR3QYAkhrkgbeNAQT4n7bcCHAgZq7dUFbOomwzjXyH03506DGDaFzb-QpphJSuAFVyWN-vnNL1U2rX1eH9Uzl0AU6d_ie4PmGLhhgnNKuJ0VgX0rvaDkVvOv-oSY5NpmFLeCDZsCaiXQOOqnB7-3grXfJQpsf5BhXc9oRxtwHvIeaTuE4XNulFiECo4TVhXA
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: "2021-03-31T14:00:29Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
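The certificate-authority-data field above is base64-encoded PEM, so it can be decoded with `base64 -d`. A runnable sketch using a short sample string (the jsonpath query in the comment is an assumption about your kubeconfig layout):

```shell
# base64 -d turns the encoded field back into readable PEM text.
# Small sample: "LS0tLS1CRUdJTg==" decodes to "-----BEGIN",
# which matches the prefix of the certificate data above.
echo "LS0tLS1CRUdJTg==" | base64 -d
echo
# Against a real kubeconfig (assumes kubectl is configured):
# kubectl config view --raw \
#   -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d
```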

 

- Verify that authentication succeeded

GCP Shell

@cloudshell:~ (eco-droplet-344012)$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.108.0.1   <none>        443/TCP   15m

Local

$ kubectl get nodes
NAME                                             STATUS   ROLES    AGE     VERSION
gke-openrun-cluster-openrun-pool-a68955f7-3nxv   Ready    <none>   3h44m   v1.21.6-gke.1503
gke-openrun-cluster-openrun-pool-a68955f7-jlsg   Ready    <none>   3h44m   v1.21.6-gke.1503
gke-openrun-cluster-openrun-pool-a68955f7-l724   Ready    <none>   3h44m   v1.21.6-gke.1503

 

 

K8s commands (Imperative, Declarative)

https://www.udemy.com/course/learn-kubernetes/

https://github.com/kodekloudhub/certified-kubernetes-administrator-course

 

[Pod]

$k explain pod --recursive |less
$k get pod -o wide
$k get deployment nginx-deployment -o yaml > nginx-deployment.yaml
$k run nginx --image=nginx --labels=tier=db,type=dev --dry-run=client -o yaml > nginx-pod.yaml
$k edit pod nginx
$k edit -f nginx-pod.yaml

 

 

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: myapp
spec:
  containers:
  - name: nginx
    image: nginx

 

$k apply -f pod.yaml

$k get po -o wide -w

$k describe pod myapp

$k delete pod myapp

$k edit -f pod.yaml
$k edit pod myapp

 

[Core-Concepts]

 

[ReplicaSets]

$k create replicaset --image=nginx replicaset-nginx --replicas=3 --dry-run=client -o yaml > replicaset-nginx.yaml # note: kubectl create has no replicaset subcommand; in practice, dry-run a deployment and change the kind
$k replace -f replicaset-nginx.yaml
$k scale --replicas=6 -f replicaset-nginx.yaml
$k scale replicaset replicaset-nginx --replicas=5

 

replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    tier: stage
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        app: myapp
        tier: stage
    spec:
      containers:
      - name: nginx
        image: nginx

 

[Deployment]

$k get all

$k create deployment nginx --image=nginx --replicas=3 --dry-run=client -o yaml > nginx-deployment.yaml
$k scale deployment nginx --replicas=3

$k rollout status deployment myapp
$k rollout history deployment myapp

# Create
$k create -f deployment.yaml

# Get
$k get deployment

# Update
$k apply -f deployment.yaml
$k set image deployment myapp "containerName"=nginx:latest

# Status
$k rollout status deployment myapp
$k rollout history deployment myapp

#Rollback
$k rollout undo deployment myapp

 

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    tier: frontend
    app: nginx
spec:
  selector:
    matchLabels:
      name: myapp
  strategy:
    type: RollingUpdate # RollingUpdate is the default; the other option is Recreate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        name: myapp
        type: stage
    spec:
      containers:
      - name: nginx
        image: nginx

 

[Service]

$k create svc nodeport nginx --tcp=80:80 --node-port=30080 --dry-run=client -o yaml > svc.yaml
$k expose deployment nginx-deployment --name=nginx-svc --target-port=8080 --type=NodePort --port=8080 --dry-run=client -o yaml > svc.yaml
$k get endpoints

 

[NodePort]

service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: front-end
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 32767 # 30000 ~ 32767
  selector:
    app: myapp
    type: front-end

 

[ClusterIP]

service-clusterip.yaml

apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: back-end

 

[LoadBalancer]

service-lb.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30004
  selector:
    app: myapp
    type: back-end

 

[MSA]

$docker run -d --name=redis redis
$docker run -d --name=db postgres
$docker run -d --name=vote -p 5000:80 --link redis:redis voting-app
$docker run -d --name=result -p 5001:80 --link db:db result-app
$docker run -d --name=worker --link db:db --link redis:redis worker

 

voting-app

https://github.com/dockersamples/example-voting-app

 


 

voting-app-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: voting-app-pod
  labels: 
    name: voting-app-pod
    app: demo-voting-app
spec:
  containers:
  - name: voting-app
    image: kodekloud/examplevotingapp_vote:v1
    ports:
    - containerPort: 80

 

result-app-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: result-app-pod
  labels:
    name: result-app-pod
    app: demo-voting-app
spec:
  containers:
  - name: result-app
    image: kodekloud/examplevotingapp_result:v1
    ports:
    - containerPort: 80

 

redis-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis-pod
  labels:
    name: redis-pod
    app: demo-voting-app
spec:
  containers:
  - name: redis
    image: redis
    ports:
    - containerPort: 6379

 

postgres-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    name: postgres-pod
    app: demo-voting-app
spec:
  containers:
  - name: postgres
    image: postgres
    ports:
    - containerPort: 5432
    env:
    - name: POSTGRES_USER
      value: "postgres"
    - name: POSTGRES_PASSWORD
      value: "postgres"

 

worker-app-pod.yaml (runs into a problem because of the postgres pod)

apiVersion: v1
kind: Pod
metadata:
  name: worker-app-pod
  labels:
    name: worker-app-pod
    app: demo-voting-app
spec:
  containers:
  - name: worker-app
    image: kodekloud/examplevotingapp_worker

 

redis-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    name: redis-service
    app: demo-voting-app
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-pod
    app: demo-voting-app

 

postgres-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: postgres-service
    app: demo-voting-app
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    name: postgres-pod
    app: demo-voting-app

 

voting-app-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004
  selector:
    name: voting-app-pod
    app: demo-voting-app

 

result-app-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: result-service
  labels:
    name: result-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30005
  selector:
    name: result-app-pod
    app: demo-voting-app

 

https://github.com/mmumshad/kubernetes-example-voting-app-singlefile/blob/master/voting-app.yaml

 


 

 

[Namespace]

$k get ns --no-headers
$k get po --namespace=dev
$k get po -n dev
$k get po --all-namespaces

$k create namespace dev
$k create -f pod.yaml --namespace=dev

$k config set-context --current --namespace=dev
$k config view
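A namespace can also be created declaratively; a minimal manifest (sketch, name matches the dev namespace used above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```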

 

Applying a resource quota per namespace (quota.yaml)

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi

 

[Imperative vs Declarative]

# Imperative Commands
# Create Objects
$k run nginx --image=nginx --labels=tier=front,type=dev
$k create deployment nginx --image=nginx
$k expose deployment nginx --port=80 --type=NodePort --target-port=80

# Update Objects
$k edit deployment nginx
$k scale deployment nginx --replicas=5
$k set image deployment nginx nginx=nginx:1.18


# Declarative
# Create Objects
$k create -f nginx.yaml

# Update Objects
$k replace -f nginx.yaml (--force)
$k delete -f nginx.yaml

 

[Scheduling]

[Manual Scheduling]

$k get po -n kube-system
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01
  containers:
  - image: nginx
    name: nginx
$k replace -f nginx.yaml --force

 

nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: myapp
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: node01

nginx-bind.yaml

apiVersion: v1
kind: Binding
metadata:
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node02
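An existing pod's nodeName cannot be edited in place; one way shown in the referenced course is to POST the Binding as JSON to the API server (a sketch; `<master-ip>` is a placeholder, and the request assumes you have API access from where you run it):

```shell
# POST the Binding object (equivalent to nginx-bind.yaml above) to the
# pod's binding subresource to assign the pod to node02.
curl --header "Content-Type: application/json" --request POST \
  --data '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"nginx"},"target":{"apiVersion":"v1","kind":"Node","name":"node02"}}' \
  http://<master-ip>/api/v1/namespaces/default/pods/nginx/binding/
```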

 

Labels and Selectors

$k get po --show-labels
$k get po --selector app=App1
$k get po -l app=App1
$k get all -l app=App1 --no-headers |wc -l

 

nginx-replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: simple-webapp
  labels:
    app: App1
    function: Front-end
  annotations:
    buildversion: "1.34"
spec:
  replicas: 3
  selector: 
    matchLabels:
      app: App1
  template:
    metadata: 
      labels: 
        app: App1
        function: Front-end
    spec: 
      containers:
      - name: simple-webapp
        image: simple-webapp

 

[Taints and Tolerations]

Equal: key, value, and effect must all match

Exists: key (or effect) must match; specifying a value is an error (the same applies to node affinity)

$k describe node controlplane |grep Taints
$k taint nodes node-name key=value:taint-effect(NoSchedule/PreferNoSchedule/NoExecute)
$k run mosquito --image=nginx --restart=Never
$k explain pod --recursive |less (/)

 

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: app
    operator: Equal
    value: blue
    effect: NoSchedule

 

[Node Selectors]

$k label nodes "node-name" "key":"value"
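A pod then targets the labeled node via nodeSelector; a sketch where the `size=Large` key/value is illustrative and must match a label you actually set with `kubectl label nodes`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  nodeSelector:
    size: Large   # must match a node label, e.g. k label nodes node01 size=Large
  containers:
  - name: nginx
    image: nginx
```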

 

[Node Affinity]

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: blue
  name: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: blue
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In # or NotIn
                values:
                - blue
status: {}

 

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: red
  name: red
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: red
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
status: {}

 

[Taints and Tolerations vs Node Affinity]

Reference: https://minkukjo.github.io/devops/2021/02/17/Kubernetes15-18/

 

[Resource Requirements and Limits]

Defaults: 1 vCPU / 512Mi memory per container (watch out for OOM kills)

pod-definition.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2
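The per-container defaults mentioned above come from a LimitRange object in the namespace; a sketch matching the stated defaults (the name is illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range
spec:
  limits:
  - default:            # limits applied to containers that set none
      cpu: 1
      memory: 512Mi
    defaultRequest:     # requests applied to containers that set none
      cpu: 0.5
      memory: 256Mi
    type: Container
```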

 

https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/

 

Manage Memory, CPU, and API Resources

Production-Grade Container Orchestration

kubernetes.io

 

$k replace -f changed.yaml --force

 

[DaemonSets]

Uses node affinity to place one pod on every node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent

 

$k create -f daemon-set-definition.yaml

$k get daemonsets --all-namespaces
$k get daemonsets -A

$k describe daemonsets monitoring-daemon
$k describe ds monitoring-daemon

$k describe po kube-flannel-ds --namespace=kube-system
$k describe po kube-flannel-ds -n kube-system

$k create deployment daemonset-agent --image=monitoring --replicas=1 --dry-run=client -o yaml > daemonset.yaml

 

`kubectl create daemonset` is not available, so it's recommended to generate a Deployment manifest and edit the YAML into a DaemonSet.
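Converting the generated Deployment manifest is mechanical (a sketch; the name and image come from the dry-run command above):

```yaml
# From the Deployment dry-run output, change kind to DaemonSet and
# delete the replicas, strategy, and status fields.
apiVersion: apps/v1
kind: DaemonSet            # was: Deployment
metadata:
  name: daemonset-agent
spec:
  selector:
    matchLabels:
      app: daemonset-agent
  template:
    metadata:
      labels:
        app: daemonset-agent
    spec:
      containers:
      - name: monitoring
        image: monitoring
```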

 

 

[Static Pods]
