Advanced Kubernetes Usage

一、 Service Probes

For online services, keeping the service stable is the top priority, and handling faulty services quickly — limiting business impact and restoring service fast — has always been a hard problem for development and operations. Kubernetes provides health checks: services detected as faulty are automatically taken out of rotation, and services recover automatically through restarts.

1、 Liveness Probes (livenessProbe)

Used to determine whether a container is alive, i.e. whether the Pod is in the running state. If the liveness probe detects that the container is unhealthy, the kubelet kills the container and decides how to restart it according to the container's restart policy. If a container does not define a liveness probe, the kubelet treats the probe as always returning success. Liveness probes support three mechanisms: ExecAction, TCPSocketAction, and HTTPGetAction.

1.1、 Exec

apiVersion: v1
kind: Pod
metadata:
  name: probe-exec
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/health
        initialDelaySeconds: 5
        timeoutSeconds: 1


# Create
kubectl apply -f  file

1.2、 TCPSocket

apiVersion: v1
kind: Pod
metadata:
  name: probe-tcp
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
      livenessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 1
        tcpSocket:
          port: 80

# Create
kubectl apply -f  file

1.3、 HTTPGet

apiVersion: v1
kind: Pod
metadata:
  name: probe-http
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
      livenessProbe:
        httpGet:
          path: /
          port: 80
          host: 127.0.0.1
          scheme: HTTP
        initialDelaySeconds: 5
        timeoutSeconds: 1

# Create
kubectl apply -f  file

1.4、 Probe Parameters

- failureThreshold: the number of consecutive probe failures required before the probe is considered failed.
- initialDelaySeconds: the number of seconds after the container starts before the first probe runs; if unset, probing starts immediately.
- periodSeconds: how often (in seconds) to run the probe. Defaults to 10 seconds; the minimum is 1.
- successThreshold: after a failure, the minimum number of consecutive successful probes required for the probe to be considered successful again. (For liveness probes this must be 1. The minimum is 1.)
- timeoutSeconds: the timeout for each probe attempt. Defaults to 1 second; the minimum is 1.
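Taken together, these parameters determine how long a failing container keeps running before the kubelet restarts it. A small sketch of the arithmetic — the parameter values below are hypothetical, not taken from the manifests above:

```shell
# Hypothetical probe settings; the defaults are periodSeconds=10, failureThreshold=3.
initialDelaySeconds=5
periodSeconds=10
failureThreshold=3

# Worst case: the initial delay plus failureThreshold consecutive failed probes
# must elapse before the kubelet treats the liveness probe as failed.
restart_after=$(( initialDelaySeconds + failureThreshold * periodSeconds ))
echo "container is restarted after ~${restart_after}s of continuous failure"   # ~35s
```

Tuning periodSeconds and failureThreshold is therefore a trade-off between reacting quickly to real failures and tolerating brief hiccups.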

2、 Readiness Probes (readinessProbe)

Used to determine whether the container is serving normally, i.e. whether the container's Ready condition is True and it can accept requests. If the readiness probe fails, Ready is set to False and the controller removes this Pod's endpoint from the corresponding Service's endpoint list, so no further requests are routed to the Pod until the next successful probe. (The Pod is taken out of rotation: it no longer receives requests and no traffic is forwarded to it.)

2.1、 HTTPGet

Probes whether the current Pod can serve traffic by requesting a URL.

kind: Pod
apiVersion: v1
metadata:
  name: readinessprobe-nginx
  namespace: default
  labels:
    provider: aliyun
    business: pms
    environmental: dev
spec:
  containers:
    - name: readinessprobe-nginx
      image: nginx
      imagePullPolicy: Always
      ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
      readinessProbe:
        httpGet:
          port: 80
          path: /demo.html

# Create
kubectl apply -f  file

2.2、 Exec

Probes whether the service can serve traffic by executing a command.
kind: Pod
apiVersion: v1
metadata:
  name: exec-pod
spec:
  containers:
    - name: nginx
      imagePullPolicy: IfNotPresent
      image: nginx
      readinessProbe:
        exec:
          command:
            - cat
            - /usr/share/nginx/html/index.html

2.3、 TCPSocket

Probes whether the service can serve traffic by checking that a TCP connection to the given port succeeds.


kind: Pod
apiVersion: v1
metadata:
  name: tcp-pod
spec:
  containers:
    - name: nginx
      imagePullPolicy: IfNotPresent
      image: nginx
      readinessProbe:
        tcpSocket:
          port: 80

3、 Lifecycle Hooks

Kubernetes in fact provides lifecycle hooks for our containers — the Pod Hooks. Hooks are executed by the kubelet, either right after a container's process starts or right before it terminates, as part of the container's lifecycle. Hooks can be configured for every container in a Pod.

Kubernetes offers two hook handlers:
- PostStart: executed immediately after the container is created. There is no guarantee that it runs before the container's ENTRYPOINT, and no parameters are passed to the handler. It is mainly used for resource setup and environment preparation. Note that if the hook takes too long or hangs, the container cannot reach the running state.
- PreStop: called immediately before the container is terminated. It is blocking, i.e. synchronous, so it must complete before the call to delete the container can be issued. It is mainly used to shut the application down gracefully, notify other systems, and so on. If the hook hangs during execution, the Pod's phase stays Terminating.

If a PostStart or PreStop hook fails, the container is killed, so hook handlers should be kept as lightweight as possible. In some cases, though, a long-running command is justified, for example saving state before the container stops.

apiVersion: v1
kind: Pod
metadata:
  name: hook-demo1
spec:
  containers:
    - name: hook-demo1
      image: nginx
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
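The manifest above only wires up postStart; a preStop handler is declared the same way under lifecycle. A minimal sketch — this Pod is not from the original text, and `nginx -s quit` is an illustrative graceful-shutdown command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hook-demo2
spec:
  containers:
    - name: hook-demo2
      image: nginx
      lifecycle:
        preStop:
          exec:
            # Ask nginx to finish in-flight requests before the container is removed.
            command: ["/usr/sbin/nginx", "-s", "quit"]
```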

二、 The K8S Monitoring Component metrics-server

1、 Create a User

# metrics-server needs to read data from Kubernetes, so create a user with the required permissions for metrics-server to use.

kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous

2、 Create the Manifest

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
        # mount in tmp so we can safely use from-scratch images and/or read-only containers
        - name: tmp-dir
          emptyDir: {}
        - name: ca-ssl
          hostPath:
            path: /etc/kubernetes/ssl
      containers:
        - name: metrics-server
          image: registry.cn-hangzhou.aliyuncs.com/k8sos/metrics-server:v0.4.1
          imagePullPolicy: IfNotPresent
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --metric-resolution=30s
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --requestheader-username-headers=X-Remote-User
            - --requestheader-group-headers=X-Remote-Group
            - --requestheader-extra-headers-prefix=X-Remote-Extra-
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
            - name: ca-ssl
              mountPath: /etc/kubernetes/ssl
      nodeSelector:
        kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system

2.1、 Apply the Manifest Above to Create the Services

[root@kubernetes-master-01 yunweiwang]# kubectl apply -f components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@kubernetes-master-01 yunweiwang]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-db75d7667-rkgwt           1/1     Running   3          16d
metrics-server-6485844744-c2nl8   1/1     Running   0          12s

[root@kubernetes-master-01 yunweiwang]# kubectl top nodes
NAME                   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kubernetes-master-01   628m         7%     3549Mi          22%
kubernetes-master-02   693m         8%     1876Mi          11%
kubernetes-master-03   482m         6%     1920Mi          12%
kubernetes-node-01     159m         1%     993Mi           6%
kubernetes-node-02     260m         3%     930Mi           5%
kubernetes-node-03     155m         1%     765Mi           4%
kubernetes-node-04     169m         2%     923Mi           5%
kubernetes-node-05     141m         1%     907Mi           5%
kubernetes-node-06     158m         1%     786Mi           20%

三、 HPA Autoscaling

In production, unexpected things always happen. For example, traffic to the company website may suddenly spike, and the Pods created earlier can no longer handle all the requests — while operators cannot watch the service 24 hours a day. By configuring an HPA, Pod replicas are automatically scaled out under high load to spread the traffic, and once traffic returns to normal, the HPA automatically scales the Pod count back down. The HPA scales the number of Pods based on CPU and memory utilization, so to use an HPA you must define the requests parameter.

1.1、 Create an HPA

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 10m
              memory: 50Mi
            requests:
              cpu: 10m
              memory: 50Mi
---
kind: Service
apiVersion: v1
metadata:
  name: svc
  namespace: default
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: docs
  namespace: default
spec:
  # Minimum and maximum number of pods the HPA may scale to
  maxReplicas: 10
  minReplicas: 1
  # The object the HPA scales; the HPA dynamically changes its pod count
  scaleTargetRef:
    kind: Deployment
    name: nginx
    apiVersion: apps/v1
  # Array of metrics to monitor; multiple metric types may coexist
  metrics:
    - type: Resource
      # Core metrics: cpu and memory (the metrics defined in the requests
      # and limits of the scaled pods' containers)
      resource:
        name: cpu
        # CPU threshold
        # Formula: the average utilization (as a percentage) across all target pods.
        # e.g. limit.cpu=1000m and actual usage 500m gives utilization=50%
        # e.g. with deployment.replica=3 and limit.cpu=1000m, if pod1 actually uses
        # cpu=500m, pod2=300m and pod3=600m, then
        # averageUtilization = (500/1000 + 300/1000 + 600/1000)/3
        #                    = (500 + 300 + 600)/(3*1000)
        targetAverageUtilization: 5
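The formula in the comment can be checked with a quick calculation; the usage figures (500m/300m/600m against a 1000m limit) are the hypothetical ones from the comment above:

```shell
# Per-pod CPU usage in millicores, and the per-pod CPU limit.
usage=(500 300 600)
limit=1000

total=0
for u in "${usage[@]}"; do
  total=$(( total + u ))
done

# averageUtilization = sum(usage) / (replicas * limit), expressed as a percentage
replicas=${#usage[@]}
pct=$(( total * 100 / (replicas * limit) ))
echo "averageUtilization = ${pct}%"   # 1400/3000 -> 46%
```

With targetAverageUtilization set to 5 as above, an observed average of 46% would drive the HPA to add replicas until the average drops to the target (or maxReplicas is reached).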

1.2、 Test

[root@kubernetes-master-01 ~]# kubectl run test --rm -it --image=busybox --command -- sh
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -O- -q 10.96.62.55; done

[root@kubernetes-master-01 ~]# kubectl get pods -o wide -w
NAME                     READY   STATUS              RESTARTS   AGE    IP             NODE                 NOMINATED NODE   READINESS GATES
hook-demo1               1/1     Running             0          20h    10.242.112.3   kubernetes-node-02   <none>           <none>
nginx-7bf65b57b6-6sfp5   1/1     Running             0          5m6s   10.240.120.3   kubernetes-node-03   <none>           <none>
tcp-pod                  1/1     Running             0          36m    10.242.0.3     kubernetes-node-05   <none>           <none>
test                     1/1     Running             0          2m3s   10.240.120.4   kubernetes-node-03   <none>           <none>
nginx-7bf65b57b6-2qwn9   0/1     Pending             0          0s     <none>         <none>               <none>           <none>
nginx-7bf65b57b6-2qwn9   0/1     Pending             0          0s     <none>         kubernetes-node-01   <none>           <none>
nginx-7bf65b57b6-2qwn9   0/1     ContainerCreating   0          0s     <none>         kubernetes-node-01   <none>           <none>

[root@kubernetes-master-01 ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
nginx-7bf65b57b6-2qwn9   10m          4Mi
nginx-7bf65b57b6-6sfp5   10m          4Mi
nginx-7bf65b57b6-h8gp2   0m           0Mi
nginx-7bf65b57b6-mb2rz   0m           0Mi

四、 Nginx Ingress

Ingress provides an entry point into the services of a Kubernetes cluster, offering load balancing, SSL termination, and name-based virtual hosting. Ingress controllers commonly used in production include Traefik, Nginx, HAProxy, and Istio. Added in Kubernetes v1.1, Ingress routes HTTP and HTTPS traffic from outside the cluster to Services inside it: traffic flows from the Internet to the Ingress, then to Services, and finally to Pods. Typically the ingress controller is deployed on all Node nodes. An Ingress can be configured with externally reachable URLs, load balancing, and SSL termination, and provides domain-based virtual hosts — but it does not expose arbitrary ports or protocols.

1.1、 Install nginx ingress


# Download the nginx ingress manifest
[root@kubernetes-master-01 ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/baremetal/deploy.yaml
--2020-12-04 15:39:50--  https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/baremetal/deploy.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18331 (18K) [text/plain]
Saving to: 'deploy.yaml'
100%[=========================================>] 18,331  --.-K/s  in 0.005s
2020-12-04 15:39:51 (3.83 MB/s) - 'deploy.yaml.1' saved [18331/18331]


# Install
[root@kubernetes-master-01 ~]# kubectl apply -f deploy.yaml
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
configmap/ingress-nginx-controller configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller unchanged
deployment.apps/ingress-nginx-controller configured
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
serviceaccount/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged

# Check that the installation succeeded
[root@kubernetes-master-01 ~]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-sqcmg        0/1     Completed   0          17d
ingress-nginx-admission-patch-zjd5v         0/1     Completed   0          17d
ingress-nginx-controller-6cc8fd98bd-62wsz   1/1     Running     0          17d

1.2、 TLS-Based Ingress

1.2.1、Create a Certificate

Create a certificate; in production this would be a certificate purchased by the company.
[root@kubernetes-master-01 ~/cert]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
..........+++
.....................................................................+++
e is 65537 (0x10001)

[root@kubernetes-master-01 ~/cert]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=ShangHai/L=ShangHai/O=Ingress/CN=www.test.com
[root@kubernetes-master-01 ~/cert]# ll
total 8
-rw-r--r-- 1 root root 1289 Dec 6 01:06 tls.crt
-rw-r--r-- 1 root root 1679 Dec 6 01:06 tls.key

[root@kubernetes-master-01 ~/cert]# kubectl -n default create secret tls ingress-tls --cert=tls.crt --key=tls.key
secret/ingress-tls created

[root@kubernetes-master-01 ~/cert]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-fvqhj   kubernetes.io/service-account-token   3      6d1h
ingress-tls           kubernetes.io/tls                     2      4s

1.2.2、Define the Ingress

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - secretName: ingress-tls
  rules:
    - host: www.test.com
      http:
        paths:
          - path: /
            backend:
              serviceName: svc
              servicePort: 80

1.2.2.1、 Deploy

[root@kubernetes-master-01 ~]# kubectl apply -f ingress-nginx-tls.yaml
ingress.extensions/ingress-ingress created
[root@kubernetes-master-01 ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.181.29    <none>        80:12616/TCP,443:24030/TCP   12m
ingress-nginx-controller-admission   ClusterIP   10.96.145.183   <none>        443/TCP                      12m
[root@kubernetes-master-01 ~]# kubectl get ingress
NAME              CLASS    HOSTS          ADDRESS   PORTS     AGE
ingress-ingress   <none>   www.test.com             80, 443   20s

五、 Data Persistence

As we know, Pods are made up of containers, and when a container crashes or stops, its data is lost. This means that when building a Kubernetes cluster we have to think about storage, and volumes exist precisely so Pods can persist data. There are many volume types; the four we use most are emptyDir, hostPath, NFS, and cloud storage (Ceph, GlusterFS, ...).

1、 emptyDir

An emptyDir volume is created when the Pod is assigned to a node; Kubernetes automatically allocates a directory on the node, so there is no need to specify a host directory. The directory starts out empty, and when the Pod is removed from the node, the data in the emptyDir is deleted permanently. emptyDir volumes are mainly used as temporary directories for data the application does not need to keep.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-volume-deployment
  namespace: default
  labels:
    app: test-volume-deployment
spec:
  selector:
    matchLabels:
      app: test-volume-pod
  template:
    metadata:
      labels:
        app: test-volume-pod
    spec:
      containers:
        - name: nginx
          image: busybox
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
          volumeMounts:
            - mountPath: /data/
              name: empty
          command: ['/bin/sh', '-c', 'while true; do echo $(date) >> /data/index.html; sleep 2; done']
        - name: os
          imagePullPolicy: IfNotPresent
          image: busybox
          volumeMounts:
            - mountPath: /data/
              name: empty
          command: ['/bin/sh', '-c', 'while true; do echo busybox >> /data/index.html; sleep 2; done']
      volumes:
        - name: empty
          emptyDir: {}

2、 hostPath

A hostPath volume maps a file or directory from the node's filesystem into the Pod. When using a hostPath volume you can also set the type field; supported types include Directory, DirectoryOrCreate, File, FileOrCreate, Socket, CharDevice, and BlockDevice.
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath
  namespace: default
spec:
  volumes:
    - name: html
      hostPath:
        path: /data/pod/volume1/
        type: DirectoryOrCreate
  containers:
    - name: myapp
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/

3、 PV and PVC

A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses them. The API object captures the details of the storage implementation — NFS, iSCSI, or a cloud-provider-specific storage system. A PersistentVolumeClaim (PVC) is a user's request for storage. The PVC usage logic is: define a volume of type PVC in the Pod, specifying its size directly in the definition; the PVC must bind to a matching PV, claiming from available PVs according to its definition, while PVs themselves are created from actual storage. PV and PVC are storage resources abstracted by Kubernetes.

3.1、 NFS

NFS lets us mount an existing share into our Pods. Unlike emptyDir, which is deleted together with its Pod, an NFS volume is only unmounted, not deleted. This means we can prepare the data in advance, the data can be passed between Pods, and an NFS share can be mounted and read/written by several Pods at the same time.
3.1.1、Install NFS on All Nodes
yum install nfs-utils.x86_64 -y
3.1.2、Configure NFS
[root@kubernetes-master-01 nfs]# mkdir -p /nfs/v{1..5}
[root@kubernetes-master-01 nfs]# cat > /etc/exports <<EOF
/nfs/v1 172.16.0.0/16(rw,no_root_squash)
/nfs/v2 172.16.0.0/16(rw,no_root_squash)
/nfs/v3 172.16.0.0/16(rw,no_root_squash)
/nfs/v4 172.16.0.0/16(rw,no_root_squash)
/nfs/v5 172.16.0.0/16(rw,no_root_squash)
EOF
[root@kubernetes-master-01 nfs]# exportfs -arv
exporting 172.16.0.0/16:/nfs/v5
exporting 172.16.0.0/16:/nfs/v4
exporting 172.16.0.0/16:/nfs/v3
exporting 172.16.0.0/16:/nfs/v2
exporting 172.16.0.0/16:/nfs/v1
[root@kubernetes-master-01 nfs]# showmount -e
Export list for kubernetes-master-01:
/nfs/v5 172.16.0.0/16
/nfs/v4 172.16.0.0/16
/nfs/v3 172.16.0.0/16
/nfs/v2 172.16.0.0/16
3.1.3、Create a Pod That Uses NFS
[root@kubernetes-master-01 ~]# cat nfs.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html/
              name: nfs
      volumes:
        - name: nfs
          nfs:
            path: /nfs/v1
            server: 192.168.12.71
[root@kubernetes-master-01 ~]# kubectl get pods -l app=nfs
NAME                   READY   STATUS    RESTARTS   AGE
nfs-5f56db5995-9shkg   1/1     Running   0          24s
nfs-5f56db5995-ht7ww   1/1     Running   0          24s
[root@kubernetes-master-01 ~]# echo "index" > /nfs/v1/index.html
[root@kubernetes-master-01 ~]# kubectl exec -it nfs-5f56db5995-ht7ww -- bash
root@nfs-5f56db5995-ht7ww:/# cd /usr/share/nginx/html/
root@nfs-5f56db5995-ht7ww:/usr/share/nginx/html# ls
index.html

3.2、 PV Access Modes (accessModes)

- ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node.
- ReadOnlyMany (ROX): read-only; the volume can be mounted by many nodes.
- ReadWriteMany (RWX): read-write on many nodes; the volume can be shared read-write across nodes.

Not every storage type supports all three modes; shared (many-node) access in particular is still rare, with NFS being the most common choice. When a PVC binds to a PV, the match is usually made on two conditions: the storage size and the access mode.

3.3、 PV Reclaim Policies (persistentVolumeReclaimPolicy)

- Retain: do not clean up; keep the volume (manual cleanup required).
- Recycle: delete the data, i.e. rm -rf /thevolume/* (supported only by NFS and HostPath).
- Delete: delete the underlying storage resource, e.g. an AWS EBS volume (supported only by AWS EBS, GCE PD, Azure Disk, and Cinder).

3.4、 PV States

- Available: free and not yet bound.
- Bound: already bound to a PVC.
- Released: the PVC has been unbound, but the reclaim policy has not run yet.
- Failed: an error occurred.

3.5、 Create PVs

[root@kubernetes-master-01 ~]# cat pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv001
  labels:
    app: pv001
spec:
  nfs:
    path: /nfs/v2
    server: 192.168.12.71
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv002
  labels:
    app: pv002
spec:
  nfs:
    path: /nfs/v3
    server: 192.168.12.71
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Delete
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv003
  labels:
    app: pv003
spec:
  nfs:
    path: /nfs/v4
    server: 192.168.12.71
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 10Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv004
  labels:
    app: pv004
spec:
  nfs:
    path: /nfs/v5
    server: 192.168.12.71
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 20Gi
3.5.1、View the PVs
[root@kubernetes-master-01 ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                   110m
pv002   5Gi        RWO,RWX        Delete           Available                                   110m
pv003   10Gi       RWO,RWX        Retain           Available                                   110m
pv004   20Gi       RWO,RWX        Retain           Available                                   110m

3.6、 Use the PVs

[root@kubernetes-master-01 ~]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: default
spec:
  accessModes:
    - "ReadWriteMany"
  resources:
    requests:
      storage: "6Gi"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs
  template:
    metadata:
      labels:
        app: nfs
    spec:
      containers:
        - name: nginx
          imagePullPolicy: IfNotPresent
          image: nginx
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: pvc

3.7、 View the PVC

[root@kubernetes-master-01 ~]# kubectl get pvc,pv
NAME                        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc   Bound    pv003    10Gi       RWO,RWX                       12m

NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/pv001   2Gi        RWO,RWX        Retain           Available                                        129m
persistentvolume/pv002   5Gi        RWO,RWX        Delete           Available                                        129m
persistentvolume/pv003   10Gi       RWO,RWX        Retain           Bound       default/pvc                          129m
persistentvolume/pv004   20Gi       RWO,RWX        Retain           Available                                        129m

4、 StorageClass

In a large Kubernetes cluster there may be thousands of PVCs, which means operators would have to create just as many PVs in advance. Moreover, as projects evolve, new PVCs keep being submitted, so operators must keep adding new PVs that satisfy them — otherwise new Pods will fail to be created because their PVCs cannot bind to a PV. On top of that, a given amount of storage requested through a PVC may well not meet an application's varied storage needs, and different applications have different storage performance requirements, such as read/write speed and concurrency. To solve this, Kubernetes introduces another resource object: StorageClass. With StorageClass definitions, administrators can describe storage as classes of resources — fast storage, slow storage, and so on. From the StorageClass description, it is immediately clear what characteristics each kind of storage has, so applications can request storage that fits their needs.
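For reference, a StorageClass object itself is quite small. A hedged sketch matching the NFS provisioner installed below — the provisioner string and the archiveOnDelete parameter are assumptions that must match whatever the deployed provisioner actually registers, not values from the original text:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client                                    # PVCs select this class via storageClassName
provisioner: cluster.local/nfs-client-provisioner     # assumption: must match the running provisioner
reclaimPolicy: Delete                                 # PVs created from this class are deleted with their PVC
parameters:
  archiveOnDelete: "true"                             # assumption: archive the data instead of discarding it
```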

4.1、 Define a StorageClass

# Download helm
wget https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
# Unpack
tar xf helm-v3.3.4-linux-amd64.tar.gz
# Install
mv linux-amd64/helm /usr/local/bin/
# Verify
helm version
# Add the Aliyun chart repository
helm repo add ali-stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# Add the official repository
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
# Add the Azure helm repository
helm repo add azure http://mirror.azure.cn/kubernetes/charts/
# Download the nfs chart
helm pull stable/nfs-client-provisioner
# Install the nfs client
helm install nfs ./
# Example workload
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: default
  name: test-nfs
  labels:
    app: test-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 8Gi
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-nfs-storageclass
  namespace: default
  labels:
    app: test-nfs
spec:
  selector:
    matchLabels:
      app: test-nfs
  template:
    metadata:
      labels:
        app: test-nfs
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: test-nfs
      volumes:
        - name: test-nfs
          persistentVolumeClaim:
            claimName: test-nfs
# Test
[root@instance-gvpb80ao helm]# vim test.yaml
[root@instance-gvpb80ao helm]# kubectl apply -f test.yaml
persistentvolumeclaim/test-nfs created
deployment.apps/test-nfs-storageclass created
[root@instance-gvpb80ao helm]# kubectl get pvc,pv
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-nfs   Bound    pvc-cf0c8175-f145-438a-ba5c-35effc080231   8Gi        RWX            nfs-client     7s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pvc-cf0c8175-f145-438a-ba5c-35effc080231   8Gi        RWX            Delete           Bound    default/test-nfs   nfs-client              6s
[root@instance-gvpb80ao helm]# ll /nfs/v1
total 4
drwxrwxrwx 2 root root 4096 Oct 16 09:56 default-test-nfs-pvc-cf0c8175-f145-438a-ba5c-35effc080231
# Write an index.html to test the network storage.
[root@instance-gvpb80ao helm]# cd /nfs/v1/default-test-nfs-pvc-cf0c8175-f145-438a-ba5c-35effc080231/
[root@instance-gvpb80ao default-test-nfs-pvc-cf0c8175-f145-438a-ba5c-35effc080231]# echo "index" > index.html
[root@instance-gvpb80ao default-test-nfs-pvc-cf0c8175-f145-438a-ba5c-35effc080231]# kubectl get pods -l app=test-nfs
NAME                                     READY   STATUS    RESTARTS   AGE
test-nfs-storageclass-68c8996847-lsxwn   1/1     Running   0          75s
[root@instance-gvpb80ao default-test-nfs-pvc-cf0c8175-f145-438a-ba5c-35effc080231]# kubectl get pods -l app=test-nfs -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test-nfs-storageclass-68c8996847-lsxwn   1/1     Running   0          80s   10.244.0.12   instance-gvpb80ao   <none>           <none>
[root@instance-gvpb80ao default-test-nfs-pvc-cf0c8175-f145-438a-ba5c-35effc080231]# curl 10.244.0.12
index

六、 The K8S Configuration Center

In production you frequently need to change configuration files, and doing so the traditional way not only disrupts the running service but also involves tedious manual steps. To solve this, Kubernetes introduced ConfigMap in version 1.2, which separates an application's configuration from the program itself. This makes applications reusable and lets different configurations enable more flexible behavior. When creating a container, the user packages the application into an image and injects configuration through environment variables or mounted files. ConfigMap and Secret together form the application configuration center in K8S: they solve the configuration-mounting problem and support features such as encryption and hot reloading, making them an extremely useful capability provided by k8s.

1、 Create a configmap

1.1、 From a Config File

[root@kubernetes-master-01 ~]# kubectl create configmap file --from-file=test.yaml
configmap/file created
[root@kubernetes-master-01 ~]# kubectl get configmaps
NAME   DATA   AGE
file   1      7s
[root@kubernetes-master-01 ~]# kubectl describe configmaps file
Name:         file
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
test.yaml:
----
kind: Pod
apiVersion: v1
metadata:
  name: tcp-pod
spec:
  containers:
    - name: nginx
      imagePullPolicy: IfNotPresent
      image: nginx
      readinessProbe:
        tcpSocket:
          port: 80
Events: <none>
[root@kubernetes-master-01 ~]#

1.2、 From a Config Directory

[root@kubernetes-master-01 ~]# mkdir configmap
[root@kubernetes-master-01 ~]# touch configmap/index.html
[root@kubernetes-master-01 ~]# echo "index" > configmap/index.html
[root@kubernetes-master-01 ~]# kubectl create configmap catalog --from-file=configmap/
configmap/catalog created
[root@kubernetes-master-01 ~]# kubectl describe configmaps catalog
Name: catalog
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
index.html:
----
index
Events: <none>
[root@kubernetes-master-01 ~]#

1.3、 From Literal Values

[root@kubernetes-master-01 test]# kubectl create configmap calue --from-literal=key=value --from-literal=key1=value1
configmap/calue created
[root@kubernetes-master-01 test]# kubectl describe configmaps calue
Name: calue
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
key:
----
value
key1:
----
value1
Events: <none>

1.4、 Create a configmap From a Manifest

[root@kubernetes-master-01 configmap]# cat test.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: configmap-yaml
  labels:
    app: configmap
data:
  key: value
  nginx_config: |-
    upstream tomcatserver1 {
        server 192.168.72.49:8081;
    }
    upstream tomcatserver2 {
        server 192.168.72.49:8082;
    }
    server {
        listen 80;
        server_name 8081.max.com;
        location / {
            proxy_pass http://tomcatserver1;
            index index.html index.htm;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
[root@kubernetes-master-01 configmap]# kubectl apply -f test.yaml
configmap/configmap-yaml created
[root@kubernetes-master-01 configmap]# kubectl describe configmaps configmap-yaml
Name:         configmap-yaml
Namespace:    default
Labels:       app=configmap
Annotations:  <none>

Data
====
key:
----
value
nginx_config:
----
upstream tomcatserver1 {
    server 192.168.72.49:8081;
}
upstream tomcatserver2 {
    server 192.168.72.49:8082;
}
server {
    listen 80;
    server_name 8081.max.com;
    location / {
        proxy_pass http://tomcatserver1;
        index index.html index.htm;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Events: <none>

2、 Use a configmap

Usually a configmap is consumed by mounting it into a Pod as a volume.
[root@kubernetes-master-01 configmap]# cat test.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: configmap-yaml
  labels:
    app: configmap
data:
  key: value
  nginx_config: |-
    upstream tomcatserver1 {
        server 192.168.72.49:8081;
    }
    server {
        listen 80;
        server_name 8081.max.com;
        location / {
            proxy_pass http://tomcatserver1;
            index index.html index.htm;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
---
kind: Pod
apiVersion: v1
metadata:
  name: configmap-pod
  labels:
    app: configmap-pod
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/demo
          name: conf
  volumes:
    - name: conf
      configMap:
        name: configmap-yaml
        items:
          - key: nginx_config
            path: nginx_config
          - key: key
            path: key
[root@kubernetes-master-01 configmap]# kubectl apply -f test.yaml
configmap/configmap-yaml unchanged
pod/configmap-pod created
[root@kubernetes-master-01 configmap]# kubectl get pod -l app=configmap-pod
NAME            READY   STATUS    RESTARTS   AGE
configmap-pod   1/1     Running   0          25s
[root@kubernetes-master-01 configmap]# kubectl exec -it configmap-pod -- bash
root@configmap-pod:/# cd /usr/share/nginx/
root@configmap-pod:/usr/share/nginx# ls -l
total 8
drwxrwxrwx 3 root root 4096 Oct  7 12:57 demo
drwxr-xr-x 2 root root 4096 Sep 10 12:33 html
root@configmap-pod:/usr/share/nginx# cd demo/
root@configmap-pod:/usr/share/nginx/demo# ls
key  nginx_config
root@configmap-pod:/usr/share/nginx/demo# cat nginx_config
upstream tomcatserver1 {
    server 192.168.72.49:8081;
}
upstream tomcatserver2 {
    server 192.168.72.49:8082;
}
server {
    listen 80;
    server_name 8081.max.com;
    location / {
        proxy_pass http://tomcatserver1;
        index index.html index.htm;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
root@configmap-pod:/usr/share/nginx/demo# cat key
value
root@configmap-pod:/usr/share/nginx/demo#

2.1、 subPath

mountPath combined with subPath mounts a single key as a file at the given path (this also avoids several configmaps mounted at the same directory overwriting one another).
[root@kubernetes-master-01 configmap]# cat test.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: configmap-yaml
  labels:
    app: configmap
data:
  key: value
  nginx_config: |-
    upstream tomcatserver1 {
        server 192.168.72.49:8081;
    }
    upstream tomcatserver2 {
        server 192.168.72.49:8082;
    }
    server {
        listen 80;
        server_name 8081.max.com;
        location / {
            proxy_pass http://tomcatserver1;
            index index.html index.htm;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
---
kind: Pod
apiVersion: v1
metadata:
  name: configmap-pod
  labels:
    app: configmap-pod
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html/nginx.conf
          name: conf
          subPath: nginx_config
  volumes:
    - name: conf
      configMap:
        name: configmap-yaml
        items:
          - key: nginx_config
            path: nginx_config
[root@kubernetes-master-01 configmap]# kubectl apply -f test.yaml
configmap/configmap-yaml created
pod/configmap-pod created
[root@kubernetes-master-01 configmap]# kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
busybox                                       1/1     Running   5          3d
configmap-pod                                 1/1     Running   0          8s
nfs-nfs-client-provisioner-867cff4fd7-9wm9z   1/1     Running   5          3d4h
nginx-6cf7488b57-bd887                        1/1     Running   5          7d21h
redis-cluster-operator-545d888d7d-7m7fc       1/1     Running   0          2d3h
[root@kubernetes-master-01 configmap]# kubectl exec -it configmap-pod -- bash
root@configmap-pod:/# cd /usr/share/nginx/html/
root@configmap-pod:/usr/share/nginx/html# ls -l
total 12
-rw-r--r-- 1 root root 494 Aug 11 14:50 50x.html
-rw-r--r-- 1 root root 612 Aug 11 14:50 index.html
-rw-r--r-- 1 root root 340 Oct  7 13:32 nginx.conf
root@configmap-pod:/usr/share/nginx/html# cat nginx.conf
upstream tomcatserver1 {
    server 192.168.72.49:8081;
}
upstream tomcatserver2 {
    server 192.168.72.49:8082;
}
server {
    listen 80;
    server_name 8081.max.com;
    location / {
        proxy_pass http://tomcatserver1;
        index index.html index.htm;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

2.2、 configmap Hot Update

[root@kubernetes-master-01 configmap]# cat test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: env-config
          volumeMounts:
            - mountPath: /usr/share/nginx/demo/
              name: config
      volumes:
        - name: config
          configMap:
            name: env-config
            items:
              - key: log_level
                path: log_level
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
[root@kubernetes-master-01 configmap]# kubectl apply -f test.yaml
deployment.apps/my-nginx configured
configmap/env-config unchanged
[root@kubernetes-master-01 configmap]# kubectl exec my-nginx-6947589dc-tbvs5 -- cat /usr/share/nginx/demo/log_level
INFO
[root@kubernetes-master-01 configmap]# kubectl edit configmaps env-config
configmap/env-config edited
[root@kubernetes-master-01 configmap]# kubectl exec my-nginx-6947589dc-tbvs5 -- cat /usr/share/nginx/demo/log_level
DEBUG

Note: only ConfigMap data consumed through a volume mount is hot-updated, and the kubelet refreshes it on its sync period, so a change can take up to a minute or so to appear inside the container. Values injected as environment variables (envFrom above) and files mounted with subPath are NOT updated; those only change after the Pod restarts, e.g. via kubectl rollout restart deployment my-nginx.

3、 Secret

Secrets solve the problem of configuring sensitive data such as passwords, tokens, and keys, without exposing that data in an image or in the Pod spec. A Secret can be consumed as a volume or as environment variables.
There are three types of Secret:
 Service Account: used to access the Kubernetes API; created automatically by Kubernetes and mounted into Pods at /run/secrets/kubernetes.io/serviceaccount;
 Opaque: a base64-encoded Secret, used to store passwords, keys, and similar data;
 kubernetes.io/dockerconfigjson: stores authentication credentials for a private Docker registry.

3.1、 Opaque Secret

Opaque data is a map whose values must be base64-encoded:
[root@kubernetes-master-01 ~]# echo -n "oldboy" | base64
b2xkYm95
[root@kubernetes-master-01 ~]# echo "oldboy123" | base64
b2xkYm95MTIzCg==
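Note the difference between the two commands above: without `-n`, `echo` appends a trailing newline, and that newline gets base64-encoded along with the password (that is where the `Cg==` suffix comes from, and why the Secret later reports 10 bytes for a 9-character password). A quick local check:

```shell
# Without -n, echo emits "oldboy123\n"; the newline is base64-encoded too.
with_newline=$(echo "oldboy123" | base64)
without_newline=$(echo -n "oldboy123" | base64)
echo "$with_newline"     # b2xkYm95MTIzCg==  (ends in the encoded "\n")
echo "$without_newline"  # b2xkYm95MTIz
```

For passwords this stray newline is an easy source of login failures, so `echo -n` (or `printf '%s'`) is the safer habit.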
3.1.1、Write the Secret manifest
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: b2xkYm95MTIzCg==
  username: b2xkYm95
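If you would rather not base64-encode values by hand, the API also accepts a `stringData` field with plain-text values, which the API server encodes into `data` on write. A sketch of the same Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:            # plain text; converted to base64 "data" by the API server
  username: oldboy
  password: oldboy123
```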
3.1.2、Create the Secret resource
[root@kubernetes-master-01 secret]# vim secret.yaml
[root@kubernetes-master-01 secret]# kubectl apply -f secret.yaml
secret/mysecret created
[root@kubernetes-master-01 secret]# kubectl describe secrets mysecret
Name: mysecret
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 10 bytes
username: 6 bytes
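To read the values back, pull the base64 strings out of the Secret (for example with `kubectl get secret mysecret -o jsonpath='{.data.password}'`) and decode them locally:

```shell
# Decode the stored values; the password decodes with a trailing newline
# because it was encoded from an un-flagged echo.
echo "b2xkYm95" | base64 -d; echo       # oldboy
echo "b2xkYm95MTIzCg==" | base64 -d     # oldboy123
```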
3.1.3、Write a test application manifest
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mysecret
spec:
  selector:
    matchLabels:
      app: mysecret
  template:
    metadata:
      labels:
        app: mysecret
    spec:
      containers:
        - name: nginx
          imagePullPolicy: IfNotPresent
          image: nginx
          volumeMounts:
            - name: mysecret
              mountPath: "/opt/secrets"
              readOnly: true
      volumes:
        - name: mysecret
          secret:
            secretName: mysecret
3.1.4、Create the test resource
[root@kubernetes-master-01 secret]# kubectl apply -f secret.yaml
deployment.apps/mysecret created
secret/mysecret unchanged
[root@kubernetes-master-01 secret]# kubectl exec -it mysecret-5bcb897fff-77bn5 -- bash
root@mysecret-5bcb897fff-77bn5:/# cd /opt/secrets/
root@mysecret-5bcb897fff-77bn5:/opt/secrets# ls
password username
root@mysecret-5bcb897fff-77bn5:/opt/secrets# cat username
oldboyroot@mysecret-5bcb897fff-77bn5:/opt/secrets# cat password
oldboy123
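Besides mounting the Secret as a volume, the same keys can be injected as environment variables. A minimal sketch (the Pod name `mysecret-env` and the variable names are made up for this example):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mysecret-env
spec:
  containers:
    - name: nginx
      image: nginx
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
```

Unlike volume mounts, environment variables are fixed at container start and do not pick up later Secret changes.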

3.2、 kubernetes.io/dockerconfigjson

Stores authentication credentials for a private Docker registry.
3.2.1、Create the Secret
export DOCKER_REGISTRY_SERVER=10.0.0.100
export DOCKER_USER=root
export DOCKER_PASSWORD=root@123
export DOCKER_EMAIL=root@123.com
kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
[root@kubernetes-master-01 secret]# export DOCKER_REGISTRY_SERVER=10.0.0.100
[root@kubernetes-master-01 secret]# export DOCKER_USER=root
[root@kubernetes-master-01 secret]# export DOCKER_PASSWORD=root@123
[root@kubernetes-master-01 secret]# export DOCKER_EMAIL=root@123.com
[root@kubernetes-master-01 secret]#
[root@kubernetes-master-01 secret]# kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD --docker-email=$DOCKER_EMAIL
secret/myregistrykey created
[root@kubernetes-master-01 secret]# kubectl describe secret myregistrykey
Name: myregistrykey
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 161 bytes
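Those bytes are a JSON document in the Docker client config format, in which the `auth` field is simply `base64(username:password)`. You can reconstruct its shape locally (values taken from the exports above):

```shell
# auth = base64 of "user:password" -- the same string `docker login` would store.
auth=$(echo -n "root:root@123" | base64)
echo "$auth"   # cm9vdDpyb290QDEyMw==

# Rough shape of the stored .dockerconfigjson:
echo "{\"auths\":{\"10.0.0.100\":{\"username\":\"root\",\"password\":\"root@123\",\"email\":\"root@123.com\",\"auth\":\"$auth\"}}}"
```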
3.2.2、Create a test workload that uses the Secret
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mysecret
spec:
  selector:
    matchLabels:
      app: mysecret
  template:
    metadata:
      labels:
        app: mysecret
    spec:
      containers:
        - name: nginx
          image: nginx    # for a real test, reference an image in the private registry
      imagePullSecrets:   # pod-level field, a sibling of containers
        - name: myregistrykey
3.3、 Service Account

Service Accounts are used to access the Kubernetes API. They are created automatically by Kubernetes and mounted into each Pod at /run/secrets/kubernetes.io/serviceaccount.
[root@kubernetes-master-01 secret]# kubectl exec nginx-6cf7488b57-bd887 -- ls /run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
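Pods use the default ServiceAccount of their namespace unless told otherwise; to run under a dedicated identity, create a ServiceAccount and reference it from the Pod spec. A sketch (the names `demo-sa` and `sa-demo-pod` are made up for this example):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
---
kind: Pod
apiVersion: v1
metadata:
  name: sa-demo-pod
spec:
  serviceAccountName: demo-sa   # token for demo-sa is mounted instead of default
  containers:
    - name: nginx
      image: nginx
```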
Copyright © 高程程, all rights reserved. Powered by GitBook. Revised: 2021-06-16 17:21:29
