Multi-container log analysis (side note)

[zhangpeng@27ops 02canary]$ kubectl describe pod poller
...
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  56m                  default-scheduler  Successfully assigned default/poller to 27ops.com
  Normal   Pulling    56m                  kubelet            Pulling image "nginx"
  Normal   Pulled     56m                  kubelet            Successfully pulled image "nginx" in 7.027305269s
  Normal   Created    56m                  kubelet            Created container poller
  Normal   Started    56m                  kubelet            Started container poller
  Normal   Started    55m (x4 over 56m)    kubelet            Started container ambassador-container
  Normal   Pulled     54m (x5 over 56m)    kubelet            Container image "haproxy:lts" already present on machine
  Normal   Created    54m (x5 over 56m)    kubelet            Created container ambassador-container
  Warning  BackOff    80s (x254 over 56m)  kubelet            Back-off restarting failed container
[zhangpeng@27ops 02canary]$ 
[zhangpeng@27ops 02canary]$ kubectl logs poller 
error: a container name must be specified for pod poller, choose one of: [poller ambassador-container]
[zhangpeng@27ops 02canary]$ 
[zhangpeng@27ops 02canary]$ 
[zhangpeng@27ops 02canary]$ kubectl logs poller -c poller
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/08/20 18:00:19 [notice] 1#1: using the "epoll" event method
2022/08/20 18:00:19 [notice] 1#1: nginx/1.23.1
2022/08/20 18:00:19 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/08/20 18:00:19 [notice] 1#1: OS: Linux 4.18.0-305.3.1.el8.x86_64
2022/08/20 18:00:19 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/20 18:00:19 [notice] 1#1: start worker processes
2022/08/20 18:00:19 [notice] 1#1: start worker process 31
2022/08/20 18:00:19 [notice] 1#1: start worker process 32
[zhangpeng@27ops 02canary]$ 
[zhangpeng@27ops 02canary]$ 
[zhangpeng@27ops 02canary]$ 
[zhangpeng@27ops 02canary]$ kubectl logs poller -c ambassador-container
[NOTICE]   (1) : haproxy version is 2.6.3-76f187b
[NOTICE]   (1) : path to executable is /usr/local/sbin/haproxy
[ALERT]    (1) : config : Cannot open configuration file/directory /usr/local/etc/haproxy/haproxy.cfg : No such file or directory
[zhangpeng@27ops 02canary]$ 

1. Service, ConfigMap, Sidecar

1. Update the Service nginxsvc in namespace default to expose port 9090.

2. In namespace default, create a ConfigMap named haproxy-config that stores the contents of /opt/CKAD00006/haproxy.cfg.

3. Update the Pod named poller in namespace default: first, add an ambassador container named ambassador-container that uses the haproxy:lts image and exposes port 80.

Solution

1. Update the Service

kubectl config use-context k8s
[root@k8s-master ckad]# kubectl expose pod  nginx --port=80 --target-port=80 --name=nginxsvc
service/nginxsvc exposed
[root@k8s-master ckad]#

#kubectl edit svc nginxsvc
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  namespace: default
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx

2. Create the ConfigMap

kubectl create configmap haproxy-config --from-file=/opt/CKAD00006/haproxy.cfg
[root@k8s-master CKAD00006]# cat cm.yaml
apiVersion: v1
data:
    haproxy.cfg: ""
kind: ConfigMap
metadata:
  name: haproxy-config
[root@k8s-master CKAD00006]#

3. Add the ambassador container

# kubectl get pod poller -o yaml > poller.yaml
# kubectl delete -f poller.yaml
# vi poller.yaml
apiVersion: v1
kind: Pod
metadata:
 labels:
   run: poller
 name: poller
spec:
 containers:
 - image: nginx
   name: poller
 - image: haproxy:lts
   imagePullPolicy: IfNotPresent
   name: ambassador-container
   ports:
   - name: ambassador
     containerPort: 80

# kubectl apply -f poller.yaml
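Note: the earlier logs showed haproxy crash-looping because /usr/local/etc/haproxy/haproxy.cfg was missing. For the ambassador container to actually become Ready, the haproxy-config ConfigMap from step 2 also needs to be mounted at that path. A sketch, with an arbitrary volume name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: poller
  name: poller
spec:
  containers:
  - image: nginx
    name: poller
  - image: haproxy:lts
    imagePullPolicy: IfNotPresent
    name: ambassador-container
    ports:
    - name: ambassador
      containerPort: 80
    volumeMounts:
    - name: haproxy-config        # arbitrary volume name
      mountPath: /usr/local/etc/haproxy
  volumes:
  - name: haproxy-config
    configMap:
      name: haproxy-config       # the ConfigMap created in step 2
```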

2. Canary deployment

Task

The Service chipmunk-service in namespace goshawk points at the 5 Pods created by the Deployment current-chipmunk-deployment.

1. In the same namespace, create an identical Deployment named canary-chipmunk-deployment

graph TB
    subgraph current-chipmunk-deployment
    chipmunk-service-->pod1
    chipmunk-service-->pod2
    ...
    chipmunk-service-->pod5

    end

2. Modify the Deployments so that:

  • at most 10 Pods run in namespace goshawk, and

  • 40% of chipmunk-service traffic goes to the Pods of canary-chipmunk-deployment

graph TB
    subgraph current-chipmunk-deployment
    chipmunk-service-->pod1
    chipmunk-service-->pod2
    ...
    chipmunk-service-->pod5
    end
    subgraph 40% canary-chipmunk-deployment
    chipmunk-service-->podn
    ...
    chipmunk-service-->podn1
    end

Solution:

Create the Service and Pods (environment prep):

kubectl expose deployment webtest3 --port=80 --target-port=80
kubectl create deployment web-v1 --image=nginx --replicas=5
kubectl expose deployment web-v1 --port=80 --target-port=80

Check the Endpoints (kg and k8s are shell aliases in the author's environment):

[zhangpeng@27ops 02canary]$ kg ep
NAME         ENDPOINTS                       AGE
kubernetes   10.0.20.7:6443                  288d
web-v1       10.244.0.54:80,10.244.0.55:80   7m12s
webtest3     10.244.0.17:8000                138d
[zhangpeng@27ops 02canary]$ k8s
NAMESPACE     NAME                                READY   STATUS             RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
ckad00014     api                                 1/1     Running            3          3d3h    10.244.0.42   27ops.com   <none>           <none>
ckad00014     newpod                              1/1     Running            3          3d3h    10.244.0.40   27ops.com   <none>           <none>
ckad00014     proxy                               1/1     Running            3          3d3h    10.244.0.41   27ops.com   <none>           <none>
ckad00014     test-ping                           1/1     Running            3          3d3h    10.244.0.43   27ops.com   <none>           <none>
default       test-pd                             1/1     Running            0          106d    10.244.0.20   27ops.com   <none>           <none>
default       web-v1-5b548dd5f6-zzkj9             1/1     Running            0          11m     10.244.0.54   27ops.com   <none>           <none>
default       web-v2-579f4f796c-ksjxs             1/1     Running            0          8m34s   10.244.0.55   27ops.com   <none>           <none>
default       webtest3-789c5b5cbf-kzwdn           1/1     Running            0          138d    10.244.0.17   27ops.com   <none>           <none>
kube-system   coredns-558bd4d5db-bnh29            1/1     Running            0          288d    10.244.0.3    27ops.com   <none>           <none>
kube-system   coredns-558bd4d5db-pcj24            1/1     Running            0          288d    10.244.0.2    27ops.com   <none>           <none>
kube-system   etcd-27ops.com                      1/1     Running            0          288d    10.0.20.7     27ops.com   <none>           <none>
kube-system   kube-apiserver-27ops.com            1/1     Running            0          288d    10.0.20.7     27ops.com   <none>           <none>
kube-system   kube-controller-manager-27ops.com   1/1     Running            1          288d    10.0.20.7     27ops.com   <none>           <none>
kube-system   kube-flannel-ds-fs6fv               1/1     Running            0          288d    10.0.20.7     27ops.com   <none>           <none>
kube-system   kube-proxy-zw8tr                    1/1     Running            0          288d    10.0.20.7     27ops.com   <none>           <none>
kube-system   kube-scheduler-27ops.com            1/1     Running            1          288d    10.0.20.7     27ops.com   <none>           <none>
[zhangpeng@27ops 02canary]$ 

A Service is associated with Pods via its selector; it has no direct relationship with a Deployment, and nothing in the YAML links the two by name.

In other words, the association is made purely through labels.
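A minimal sketch of that label linkage (the names are illustrative): the Service's spec.selector must equal the Pod template's labels, while the Deployment's name never appears in the Service.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-v1
spec:
  selector:
    app: web-v1        # must match the Pod template labels below
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v1         # never referenced by the Service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web-v1
  template:
    metadata:
      labels:
        app: web-v1    # matched by the Service selector above
    spec:
      containers:
      - image: nginx
        name: nginx
```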

List the labels

[zhangpeng@27ops 02canary]$ kg pod --show-labels -n ckad00014
NAME        READY   STATUS    RESTARTS   AGE    LABELS
api         1/1     Running   3          3d1h   run=api
newpod      1/1     Running   3          3d1h   run=newpod
proxy       1/1     Running   3          3d1h   run=proxy
test-ping   1/1     Running   3          3d1h   run=test-ping
[zhangpeng@27ops 02canary]$ 

Filter Pods by label

[zhangpeng@27ops 02canary]$ kg pod -l run=test-ping -n ckad00014
NAME        READY   STATUS    RESTARTS   AGE
test-ping   1/1     Running   3          3d1h
[zhangpeng@27ops 02canary] 

Dump the existing Deployment to a YAML file

[zhangpeng@27ops 02canary]$ kubectl get deployment web-v1 -oyaml > web-v2.yaml
[zhangpeng@27ops 02canary]$

Edit the YAML file, changing the name and the image version

[zhangpeng@27ops 02canary]$ cat web-v2.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-v1
  name: web-v2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-v1
  template:
    metadata:
      labels:
        app: web-v1
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: Always
        name: nginx
[zhangpeng@27ops 02canary]$ 

Scale up and down

kubectl scale deployment web-v2 --replicas=5
kubectl scale deployment web-v2 --replicas=4
kubectl scale deployment web-v1 --replicas=6
# 10 Pods in total; send 60% of traffic to the current-version Pods
kubectl scale deployment current-chipmunk-deployment --replicas=6 -n goshawk
# send 40% of traffic to the canary-version Pods
kubectl scale deployment canary-chipmunk-deployment --replicas=4 -n goshawk
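Since a Service spreads traffic roughly evenly across all matching Pods, the canary percentage reduces to a replica ratio. A quick sketch of the arithmetic (TOTAL and CANARY_PERCENT are illustrative values):

```shell
# Split a total Pod budget between current and canary versions.
TOTAL=10            # maximum Pods allowed in the namespace
CANARY_PERCENT=40   # desired share of traffic for the canary
CANARY=$(( TOTAL * CANARY_PERCENT / 100 ))  # canary replicas
CURRENT=$(( TOTAL - CANARY ))               # current-version replicas
echo "current=$CURRENT canary=$CANARY"
```

With 6 current and 4 canary replicas behind the same selector, about 40% of requests land on canary Pods.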

3. Update the Deployment configuration

Task

Modify the existing Deployment broker-deployment running in namespace quetzal so that its container:

  • runs as user 30000
  • disallows privilege escalation

You can find the manifest file for broker-deployment at ~/daring-moccasin/broker-deployment.yaml.

Solution

[zhangpeng@27ops 03privileges]$ docker run --help|grep pri
      --cgroupns string                Cgroup namespace to use (host|private)
                                       'private': Run the container in its own private cgroup namespace
  -d, --detach                         Run container in background and print container ID
      --privileged                     Give extended privileges to this container
[zhangpeng@27ops 03privileges]$ 

runAsUser sets the UID the container runs as. Note that "disallow privilege escalation" maps to allowPrivilegeEscalation: false; the privileged flag shown in the docker help above is a different, broader setting (full privileged mode).

        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 30000

yaml

# kubectl edit deployment broker-deployment -n quetzal
[zhangpeng@27ops 03privileges]$ cat broker-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: broker-deployment
  name: broker-deployment
  namespace: quetzal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker-deployment
  template:
    metadata:
      labels:
        app: broker-deployment
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: broker-deployment
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 30000
[zhangpeng@27ops 03privileges]$ 

4. Create a Deployment with an environment variable

Task

In the existing namespace ckad00014, create a Deployment named api running 6 Pod replicas. Specify a single container using the nginx image. Add an environment variable named NGINX_PORT with the value 8000 to the container, then expose port 8000.

Add the following fields:

        env:
        - name: NGINX_PORT
          value: "8000"
        ports:
        - containerPort: 8000
[root@master test]# cat 03/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broker-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker-deploy
  template:
    metadata:
      labels:
        app: broker-deploy
    spec:
      containers:
      - image: nginx
        name: broker-deploy
        securityContext:
          privileged: false
          runAsUser: 1000
[root@master test]# 

Verify

[zhangpeng@27ops 04envdeployment]$ kubectl exec -it -n ckad00014     api-f66c47cbc-gz98j -- sh
# echo $NGINX_PORT
8000
# 
[zhangpeng@27ops 04envdeployment]$ kubectl config use-context k8s
[zhangpeng@27ops 04envdeployment]$ kubectl create deployment api --image=nginx --replicas=6 -n ckad00014 --dry-run=client -o yaml > api.yaml
[zhangpeng@27ops 04envdeployment]$ cat api.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api
  name: api
  namespace: ckad00014
spec:
  replicas: 6
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: NGINX_PORT
          value: "8000"
        ports:
        - containerPort: 8000
[zhangpeng@27ops 04envdeployment]$ 

5. ServiceAccount authorization

Task: A Pod in the Deployment honeybee-deployment in namespace gorilla is logging errors.

  • 1. Check the logs to identify the error message.

The error includes: User "system:serviceaccount:gorilla:default" cannot list resource "pods" ... in the namespace "gorilla"

  • 2. Update the Deployment honeybee-deployment to resolve the error in the Pod logs.

You can find the manifest file for honeybee-deployment at ~/promptescargot/honeybee-deployment.yaml.

Background

Ways to access the API server:

  1. a kubeconfig file
  2. a ServiceAccount token: each ServiceAccount has an associated token, and the client presents that token when calling kube-apiserver.
graph TB

    subgraph components calling the apiserver need RBAC authorization
    apiserver-->kubectl
    apiserver-->kubelet
    apiserver-->kubeproxy
    apiserver-->controller-manager
    apiserver-->program
    apiserver-->pod-deployment
    end

Steps:

Switch to the k8s cluster context

kubectl config use-context k8s

Create the ServiceAccount

kubectl create sa honeybee -n gorilla

Create a Role granting the needed verbs

kubectl create role honeybee --verb=get,list --resource=pods -n gorilla

Bind them with a RoleBinding

 kubectl create rolebinding honeybee --role=honeybee --serviceaccount=gorilla:honeybee -n gorilla

Attach the ServiceAccount to the Pods

kubectl edit deployment honeybee-deployment -n gorilla

…
 spec:
   serviceAccountName: honeybee
   containers:
   - image: nginx
…

Verify

kubectl --as=system:serviceaccount:gorilla:honeybee get pods -n gorilla
[root@master ~]# k --as=system:serviceaccount:ckad0815:mars get pods -n ckad0815
NAME                    READY   STATUS    RESTARTS   AGE
api                     1/1     Running   0          32d
newpod                  1/1     Running   0          32d
proxy                   1/1     Running   0          32d
test-ping               1/1     Running   0          32d
webapp-9d76666c-kdx9x   1/1     Running   0          11m
webapp-9d76666c-q2pbr   1/1     Running   0          40m
webapp-9d76666c-z7tj8   1/1     Running   0          40m
[root@master ~]# k --as=system:serviceaccount:ckad0815:xxxxx get pods -n ckad0815
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:ckad0815:xxxxx" cannot list resource "pods" in API group "" in the namespace "ckad0815"
[root@master ~]# k --as=system:serviceaccount:ckad0815:xxxxx get pods 
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:ckad0815:xxxxx" cannot list resource "pods" in API group "" in the namespace "default"
[root@master ~]# k --as=system:serviceaccount:ckad0815:mars get pods 
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:ckad0815:mars" cannot list resource "pods" in API group "" in the namespace "default"
[root@master ~]#
[root@k8s-master 05]# cat cmd.md
kubectl create sa zhangp -n ckad
kubectl create role zhangp --verb=list,get --resource=pods -n ckad
kubectl create rolebinding zhangp --role=zhangp --serviceaccount=ckad:zhangp -n ckad

kubectl --as=system:serviceaccount:ckad:zhangp get pod -n ckad

kubectl --as=system:serviceaccount:ckad:mars get pods -n ckad
[root@k8s-master 05]#
[root@k8s-master 05]# kubectl create sa zhangp -n ckad
serviceaccount/zhangp created
[root@k8s-master 05]# kubectl create role zhangp --verb=list,get --resource=pods -n ckad
role.rbac.authorization.k8s.io/zhangp created
[root@k8s-master 05]# kubectl create rolebinding zhangp --role=zhangp --serviceaccount=ckad:zhangp -n ckad
rolebinding.rbac.authorization.k8s.io/zhangp created
[root@k8s-master 05]# kubectl --as=system:serviceaccount:ckad:zhangp get pod -n ckad
NAME    READY   STATUS    RESTARTS   AGE
test1   1/1     Running   0          2m15s
[root@k8s-master 05]#

6. ConfigMap

Task

  1. In namespace default, create a ConfigMap named some-config storing the following key/value pair:

key3: value4

  2. In namespace default, create a Pod named nginx-configmap. Specify a single container using the nginx:stable image. Populate a volume with the data stored in the ConfigMap some-config and mount it at the path /some/path.
# kubectl config use-context k8s
# kubectl create configmap some-config --from-literal=key3=value4
[root@k8s-master 06]# cat cm.yaml
apiVersion: v1
data:
  key3: value4
kind: ConfigMap
metadata:
  name: some-config
[root@k8s-master 06]#


# vi nginx-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-configmap
spec:
  containers:
  - name: nginx-configmap
    image: nginx:stable
    volumeMounts:
    - name: config
      mountPath: "/some/path"
  volumes:
  - name: config
    configMap:
      name: some-config

Verify:

kubectl exec -it nginx-configmap -- sh
cd /some/path
ls
cat key3

7. Rolling update and rollback

Task

  1. Update the scaling configuration of the Deployment webapp in namespace ckad00015, setting maxSurge to 10% and maxUnavailable to 4.
  2. Update the Deployment webapp so that the container image lfccncf/nginx uses the version tag 1.13.7.
  3. Roll the Deployment webapp back to the previous revision.

Definitions

maxUnavailable: the maximum number of Pods that may be unavailable during the update (default 25%)
maxSurge: the maximum number of Pods that may be created above the desired replica count (default 25%)
With 10 replicas and the 25% defaults, up to 3 extra new-version Pods may be created (percentages round up),
and up to 2 Pods may be killed/unavailable at once (percentages round down).
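The two percentages round in opposite directions: maxSurge rounds up, maxUnavailable rounds down. A sketch of the integer arithmetic for 10 replicas at the 25% defaults:

```shell
REPLICAS=10
PERCENT=25
# maxSurge as a percentage rounds up (ceiling division)
SURGE=$(( (REPLICAS * PERCENT + 99) / 100 ))
# maxUnavailable as a percentage rounds down (floor division)
UNAVAILABLE=$(( REPLICAS * PERCENT / 100 ))
echo "maxSurge=$SURGE maxUnavailable=$UNAVAILABLE"
```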

Solution

 maxSurge: 10%
 maxUnavailable: 4

Commands

# kubectl config use-context k8s
# kubectl edit deployment webapp -n ckad00015
...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: webapp
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 4
...

Update the image and roll back

kubectl set image deployment webapp nginx=lfccncf/nginx:1.13.7
kubectl rollout undo deployment webapp
[root@k8s-master ckad]# kubectl rollout history deployment myapp-deploy1
deployment.apps/myapp-deploy1
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@k8s-master ckad]#
[root@k8s-master ckad]# k set image deployment myapp-deploy1 myapp=lfccncf/nginx:1.13.7
deployment.apps/myapp-deploy1 image updated
[root@k8s-master ckad]#
[root@k8s-master ckad]# kubectl rollout undo deployment myapp-deploy1
deployment.apps/myapp-deploy1 rolled back
[root@k8s-master ckad]#
[root@k8s-master ckad]#
[root@k8s-master ckad]# kubectl rollout history deployment myapp-deploy1
deployment.apps/myapp-deploy1
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

[root@k8s-master ckad]#

8. PV/PVC usage

Task

To facilitate this, perform the following tasks:

  • Create a file on node sk8s-node-0 at /opt/KDSP00101/data/index.html with the content WEPKEY=7789
  • Create a PersistentVolume named task-pv-volume using hostPath and allocate 2Gi to it, specifying that the volume is at /opt/KDSP00101/data on the cluster's node. The configuration should specify the access mode of ReadWriteOnce. It should define the StorageClass name keys for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume
  • Create a PersistentVolumeClaim named task-pv-claim that requests a volume of at least 200Mi and specifies an access mode of ReadWriteOnce
  • Create a Pod that uses the PersistentVolumeClaim as a volume, with a label app: my-storage-app, mounting the resulting volume to a mountPath /usr/share/nginx/html inside the Pod
You can access sk8s-node-0 by issuing the following command:

[student@node-1]$ ssh sk8s-node-0

Ensure that you return to the base node once you have completed your work on sk8s-node-0.
# switch to the k8s cluster
kubectl config use-context k8s
# log in to the sk8s-node-0 node
ssh sk8s-node-0
# create the file
echo WEPKEY=7789 > /opt/KDSP00101/data/index.html

# vi task-pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: keys
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/KDSP00101/data"
---
# vi task-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: keys
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
---


# vi task-pv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
  labels:
    app: my-storage-app
spec:
  nodeName: 27ops.com
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

9. Pod resource requests

Task

  1. In the existing namespace pod-resources, create a Pod named nginx-resources. Specify a single container using the nginx:stable image.
  2. Specify resource requests of 300m CPU and 1Gi memory for its container.
# kubectl config use-context k8s
# kubectl run nginx-resources --image=nginx:stable --requests=cpu=300m,memory=1Gi -n pod-resources --dry-run=client -o yaml
[root@k8s-master 09]# cat 10.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-resources
spec:
  containers:
  - name: nginx-resources
    image: nginx:stable
    resources:
      requests:
        memory: "1Gi"
        cpu: "300m"

[root@k8s-master 09]# 

10. Secret

Task

  1. In namespace default, create a Secret named another-secret containing the following single key/value pair:

key1: value1

  2. In namespace default, create a Pod named nginx-secret. Specify a single container using the nginx:stable image.

Add an environment variable named COOL_VAEIABLE that uses the value of the Secret key key1.

# kubectl config use-context k8s
# kubectl create secret generic another-secret --from-literal=key1=value1
[root@k8s-master 10]# echo -n 'value1' |base64
dmFsdWUx
[root@k8s-master 10]#

[root@k8s-master 10]# cat 10.yaml
apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: another-secret
[root@k8s-master 10]# 

Pod YAML:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-secret
spec:
  containers:
  - name: nginx-secret
    image: nginx:stable
    env:
    - name: COOL_VAEIABLE
      valueFrom:
        secretKeyRef:
          name: another-secret
          key: key1

11. Pod health checks (liveness probe)

You cannot access an application because of a problem with its liveness probe. The application may be running in any namespace.

  1. Find the corresponding Pod and write its name and namespace to the file /opt/CKAD00011/broken.txt, using the following format:

<namespace>/<podName>

The file /opt/CKAD00011/broken.txt already exists.

  2. Use kubectl get events to obtain the related error events and write them to the file /opt/CKAD00011/error.txt. Use the output format wide; not using the wide output format will reduce your score.

The file /opt/CKAD00011/error.txt already exists.

  3. Fix the failing Pod's liveness probe.
# kubectl config use-context dk8s
# kubectl get pods
# echo <namespace>/<pod-name> > /opt/CKAD00011/broken.txt
# kubectl get events -o wide |grep <pod-name> > /opt/CKAD00011/error.txt
# kubectl get pods <pod-name> -o yaml > probe.yaml
# kubectl delete -f probe.yaml
# vi probe.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10

12. Multi-container Pod (sidecar)

Task

  • Create a deployment named deployment-web in the default namespace, that:

  • Includes a primary lfccncf/busybox:1 container, named logger-123

  • Includes a sidecar lfccncf/fluentd:v0.12 container, named adaptor-dev

  • Mounts a shared volume /tmp/log on both containers, which does not persist when the pod is deleted

  • Instructs the logger-123 container to run the command

    while true; do echo "i luv cncf" >> /tmp/log/input.log; sleep 10; done

    which should output logs to /tmp/log/input.log in plain text format, with example values:

    i luv cncf
    i luv cncf
    i luv cncf

    The adaptor-dev sidecar container should read /tmp/log/input.log and output the data to /tmp/log/output.* in Fluentd JSON format. Note that no knowledge of Fluentd is required to complete this task: all you will need to achieve this is to create the ConfigMap from the spec file provided at /opt/KDMC00102/fluentd-configmap.yaml, and mount that ConfigMap to /fluentd/etc in the adaptor-dev sidecar container


kubectl apply -f /opt/KDMC00102/fluentd-configmap.yaml

Reference examples (roughly what the pieces look like):

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /tmp/log/input.log;
        echo "$(date) INFO $i" >> /tmp/log/2.log;
        i=$((i+1));
        sleep 1;
      done      
    volumeMounts:
    - name: varlog
      mountPath: /tmp/log
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /tmp/log/input.log']
    volumeMounts:
    - name: varlog
      mountPath: /tmp/log
  - name: count-log-2
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /tmp/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /tmp/log
  volumes:
  - name: varlog
    emptyDir: {}
#fluentd-sidecar-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluentd.conf: |
    <source>
      type tail
      format none
      path /tmp/log/input.log
      pos_file /var/log/1.log.pos
      tag fluent.test
    </source>
    <match **>
      type file
      path /tmp/log/output
    </match>
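Putting the pieces together, a sketch of what the task's deployment-web might look like once the ConfigMap is applied. The images, container names, and paths come from the task text; the volume names are arbitrary:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-web
  template:
    metadata:
      labels:
        app: deployment-web
    spec:
      containers:
      - name: logger-123
        image: lfccncf/busybox:1
        args: [/bin/sh, -c, 'while true; do echo "i luv cncf" >> /tmp/log/input.log; sleep 10; done']
        volumeMounts:
        - name: log
          mountPath: /tmp/log
      - name: adaptor-dev
        image: lfccncf/fluentd:v0.12
        volumeMounts:
        - name: log
          mountPath: /tmp/log
        - name: fluentd-config
          mountPath: /fluentd/etc     # fluentd reads its config here
      volumes:
      - name: log
        emptyDir: {}                  # not persisted when the Pod is deleted
      - name: fluentd-config
        configMap:
          name: fluentd-config        # created from the provided spec file
```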

13. Fix a Deployment's image

Task

A Deployment in namespace default is failing because the wrong container image was specified. Find this Deployment and fix the problem.

Solution:

kubectl get pod
kubectl edit deployment <deployment-name>

A running Pod does not support editing its image in place; modify the Deployment's YAML instead, and it will roll out corrected Pods.

14. Update a Deployment and expose it via a Service

Task

1. First, update the Deployment ckad00017-deployment in namespace ckad00017:

   • to run 5 replicas of the Pod

   • and to add the following label to the Pods:

    tier: dmz

2. Then, in namespace ckad00017, create a NodePort Service named rover that exposes the Deployment ckad00017-deployment on TCP port 81.

Solution:

kubectl edit deployment ckad00017-deployment

spec:
  progressDeadlineSeconds: 600
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: webtest3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webtest3
        tier: dmz
kubectl expose deployment ckad00017-deployment --name rover --port=81 --type=NodePort -n ckad00017
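The expose command generates roughly the following Service. This sketch assumes the Deployment's Pods carry the label app: ckad00017-deployment; check the actual labels with --show-labels first:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rover
  namespace: ckad00017
spec:
  type: NodePort
  selector:
    app: ckad00017-deployment   # assumed Pod label
  ports:
  - port: 81
    targetPort: 81
    protocol: TCP
```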

15. Deployment with a ServiceAccount

Task

Update the Deployment in namespace frontend so that it uses the existing ServiceAccount app.

Solution:

kubectl edit deployment frontend
spec:
  serviceAccountName: app
  containers:
  ...
[root@k8s-master 15]# cat 10.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: frontend
  name: frontend
  namespace: ckad
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  strategy: {}
  template:
    metadata:
      labels:
        app: frontend
    spec:
      serviceAccountName: zhangp
      containers:
      - image: nginx
        name: nginx
[root@k8s-master 15]#

16. NetworkPolicy

Task

Update the Pod ckad00018-newpod in namespace ckad00018 so that it uses a NetworkPolicy that only allows traffic between this Pod and the Pods proxy and api.

All necessary NetworkPolicies have already been created.

While completing this task, do not create, modify, or delete any NetworkPolicy; you may only use the existing ones.

List the NetworkPolicies:

kubectl get netpol -nckad00018
kubectl edit networkpolicy  -nckad00018
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: ckad00014
  annotations:
    imageregistry: "https://hub.27ops.com/"
spec:
  podSelector:
    matchLabels:
      run: newpod
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: proxy
        - podSelector:
            matchLabels:
              run: api
  egress:
    - to:
        - podSelector:
            matchLabels:
              run: proxy
        - podSelector:
            matchLabels:
              run: api

17. top

Task

Monitor the Pods running in namespace cpu-stress and write the name of the Pod that consumes the most CPU to the file /opt/CKAD00010/pod.txt.

The file /opt/CKAD00010/pod.txt already exists.

Solution:

kubectl top pod --sort-by='cpu' -n cpu-stress 
echo <pod-name> > /opt/CKAD00010/pod.txt

18. Job

kubectl create job myjob1 --image=busybox --dry-run=client -oyaml -- date 



kubectl create job myjob1 --image=busybox --dry-run=client -oyaml -- echo  $(expr  3 + 2) > 5.yaml



kubectl create cronjob mycjl --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml > 6.yaml 
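The cronjob dry-run above generates roughly this manifest on recent clusters (batch/v1; older clusters emit batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mycjl
spec:
  schedule: '*/1 * * * *'   # every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: busybox
            name: mycjl
          restartPolicy: OnFailure
```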

19. Docker

docker save -o mkdocs.tar.gz mkdocs:v1.1

docker build --rm --tag mkdocs:v1.5 .

docker build -f /path/to/a/Dockerfile .

20. Ingress

[root@master ing]# cat http.yaml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: note-http
spec:
  rules:
    - host: note.27ops.com
      http:
        paths:
        - path: /
          backend:
            serviceName: myapp-cluster1
            servicePort: 80
[root@master ing]# 

21. Configure memory and CPU quotas for a namespace

(dmidecode | grep -i size shows the physical memory size.) A ResourceQuota in the quota-mem-cpu-example namespace sets the following requirements:

  • Every container of every Pod in the namespace must have a memory request and limit, and a CPU request and limit.
  • The memory requests of all Pods in the namespace must not total more than 1 GiB.
  • The memory limits of all Pods in the namespace must not total more than 2 GiB.
  • The CPU requests of all Pods in the namespace must not total more than 1 cpu.
  • The CPU limits of all Pods in the namespace must not total more than 2 cpu.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
[root@k8s-master 21]# k get quota -n ckad
NAME           AGE    REQUEST                                     LIMIT
mem-cpu-demo   7m5s   requests.cpu: 0/1, requests.memory: 0/1Gi   limits.cpu: 0/2, limits.memory: 0/2Gi
[root@k8s-master 21]#
[root@k8s-master 21]#
[root@k8s-master 21]# kg limitranges
No resources found
[root@k8s-master 21]#

[root@k8s-master 21]# kg limits
No resources found
[root@k8s-master 21]#

Configure minimum and maximum CPU constraints for a namespace (LimitRange)
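A sketch of per-container CPU min/max constraints using a LimitRange (the name and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo
spec:
  limits:
  - type: Container
    min:
      cpu: 200m   # every container must request at least this
    max:
      cpu: "1"    # no container may limit above this
```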
