04 Pod Introduction and Container Classification


Pod Overview

• The smallest deployable unit
• A collection of containers providing tightly coupled services
• Containers in a Pod share a network namespace and can reach each other via 127.0.0.1
• Pods are ephemeral
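As a minimal illustration of the smallest deployable unit, a single-container Pod manifest could look like the sketch below (the Pod name and image here are placeholders, not from the original notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod      # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.16  # any image works; nginx chosen for illustration
```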

Pod Container Types

• Infrastructure Container: the infra (pause) container
• Maintains the network namespace for the entire Pod
• InitContainers: init containers that perform initialization work before deployment
• Run to completion before the business containers start
• Containers: business containers; these are the ones you mainly work with
• Started in parallel
  1. The infra container starts first; it is transparent to users, but its image is visible with docker images on the node
  2. You should download pause-amd64:3.0, store it in a private registry, and replace the address with the private registry address
  3. Every time a Pod is created, this container is created with it; its job is to put all of the Pod's containers into one namespace
[root@k8s-node1 cfg]# cat /opt/kubernetes/cfg/kubelet.conf 
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

Pod Implementation Mechanisms

1. Shared network
2. Shared storage

Shared Network

# 1. After a Pod is created, the infra container pause-amd64:3.0 is started before any business container
# 2. Each business container is then added to the network namespace of pause-amd64:3.0
# 3. The Pod IP is bound to the pause-amd64:3.0 container, so all containers share one namespace

[root@k8s-node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e6be9e3476c b50b08c36b60 "nginx -g 'daemon of…" 4 seconds ago Up 3 seconds k8s_nginx_nginx-deployment-594cc45b78-dzmcw_default_77001f39-25e7-4e5d-b279-f06aa3c17382_0
c4383d31c17a b50b08c36b60 "nginx -g 'daemon of…" 4 seconds ago Up 3 seconds k8s_nginx_nginx-deployment-594cc45b78-pm688_default_f8f84530-abc1-4c8c-83f3-8d8caab08217_0
d0eea8a485bf 172.17.70.252/base/pause-amd64:3.0 "/pause" 4 seconds ago Up 3 seconds k8s_POD_nginx-deployment-594cc45b78-pm688_default_f8f84530-abc1-4c8c-83f3-8d8caab08217_0
102650cf1d5e 172.17.70.252/base/pause-amd64:3.0 "/pause" 4 seconds ago Up 3 seconds k8s_POD_nginx-deployment-594cc45b78-dzmcw_default_77001f39-25e7-4e5d-b279-f06aa3c17382_0
# Multiple containers in one Pod share a single network namespace
Pod
Container 1: Java
Container 2: Nginx
# Export a running Pod's manifest for editing
[root@k8s-master1 java-demo]# kubectl get pod java-demo-7746bb968c-4hj9k -o yaml > pod.yaml

# Run multiple containers in one Pod; each container is a separate "- name:" entry
[root@k8s-master1 java-demo]# vim pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-pod
  name: my-pod
  namespace: default
spec:
  containers:
  - name: my-nginx
    image: nginx:1.7.9
  - name: my-java
    image: 172.17.70.252/project/java-demo:latest
# Create the Pod
[root@k8s-master1 java-demo]# kubectl apply -f pod.yaml
pod/my-pod created
# READY shows 2 of 2 containers running
[root@k8s-master1 java-demo]# kubectl get pods,deploy -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/my-pod 2/2 Running 0 19s 10.244.1.30 k8s-node2 <none> <none>

# Check the events
[root@k8s-master1 java-demo]# kubectl describe pod my-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/my-pod to k8s-node2
Normal Pulling 2m21s kubelet, k8s-node2 Pulling image "nginx:1.7.9"
Normal Pulled 2m14s kubelet, k8s-node2 Successfully pulled image "nginx:1.7.9"
Normal Created 2m14s kubelet, k8s-node2 Created container my-nginx
Normal Started 2m14s kubelet, k8s-node2 Started container my-nginx
Normal Pulling 2m14s kubelet, k8s-node2 Pulling image "172.17.70.252/project/java-demo:latest"
Normal Pulled 2m14s kubelet, k8s-node2 Successfully pulled image "172.17.70.252/project/java-demo:latest"
Normal Created 2m14s kubelet, k8s-node2 Created container my-java
Normal Started 2m14s kubelet, k8s-node2 Started container my-java

# Check containers on node2: only one infra image was started, meaning the two new containers joined this single pause container's namespace
[root@k8s-node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5701076efaf9 172.17.70.252/project/java-demo "catalina.sh run" 3 minutes ago Up 3 minutes k8s_my-java_my-pod_default_363d3a86-4892-4e5b-bb09-8a421cbb7ad8_0
013fd701fbeb nginx "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes k8s_my-nginx_my-pod_default_363d3a86-4892-4e5b-bb09-8a421cbb7ad8_0
0208eacccdce 172.17.70.252/base/pause-amd64:3.0 "/pause" 3 minutes ago Up 3 minutes k8s_POD_my-pod_default_363d3a86-4892-4e5b-bb09-8a421cbb7ad8_0
# Verify they share the same network namespace
# Multiple containers started in one Pod share the same network namespace: IP, port, MAC

[root@k8s-master1 java-demo]# kubectl exec -it my-pod bash
Defaulting container name to my-nginx.
Use 'kubectl describe pod/my-pod -n default' to see all of the containers in this pod.

[root@k8s-master1 java-demo]# kubectl exec -it my-pod -c my-nginx bash
[root@k8s-master1 java-demo]# kubectl exec -it my-pod -c my-java bash

Shared Storage

1. Pods exist for closely coupled applications
Typical co-location scenarios:
- Two applications interact with each other
- Two applications need to communicate via 127.0.0.1 or a socket
- Two applications call each other frequently
- A common case: nginx proxying a tomcat application on 127.0.0.1:8080
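The nginx-plus-tomcat case above can be sketched as a two-container Pod. The names and images below are illustrative, and in practice the nginx config would still need a proxy_pass pointing at 127.0.0.1:8080:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-proxy       # hypothetical name
spec:
  containers:
  - name: nginx         # proxies to 127.0.0.1:8080 inside the shared netns
    image: nginx:1.16
  - name: tomcat        # listens on 8080; reachable from nginx via localhost
    image: tomcat:8
```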
1. Shared storage cannot use a single storage namespace across containers
2. The operating systems in the images may differ, so sharing one would violate the container model
3. Use data volumes for storage to persist application data
1. Data a Pod may need to persist:
- Temporary data
- Logs
- Business data; the important kind, e.g. MySQL's /data directory

2. Stateful applications that store data must be able to move between nodes

If pod1 on node1 dies, its data must not live only on that node; a storage data volume is needed.
node2 then pulls up a replacement pod2 that mounts the same data volume pod1 used, so no data is lost and work continues.

node1 node2 node3
Volumes
Shared storage
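One common way to realize this node-independent storage is a PersistentVolumeClaim. The claim and Pod below are a sketch with made-up names, assuming a cluster whose storage backend can satisfy the claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data              # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql  # data survives the pod being rescheduled
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data
```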

Containers in the Same Pod Sharing Data

# 1. emptyDir
# 2. Creates an empty volume mounted into the Pod's containers
# 3. When the Pod is deleted, the volume is deleted too
# 4. Use case: data sharing between containers in a Pod

[root@k8s-master demo2]# vim emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-pod2
spec:
  containers:
  - name: write
    image: centos:7
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
    volumeMounts:
    - name: data
      mountPath: /data

  - name: read
    image: centos:7
    command: ["bash","-c","tail -f /data/hello"]
    volumeMounts:
    - name: data
      mountPath: /data

  volumes:
  - name: data
    emptyDir: {}
# Two containers are created: read and write
# All containers in a Pod are scheduled onto the same node
# An empty directory /data is created on that node and mounted into both containers
[root@k8s-master1 java-demo]# kubectl apply -f emptydir.yaml
pod/my-pod2 created

[root@k8s-master1 java-demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-pod 2/2 Running 0 35m 10.244.1.30 k8s-node2 <none> <none>
my-pod2 2/2 Running 0 34s 10.244.0.33 k8s-node1 <none> <none>

# Enter the write container and check
[root@k8s-master1 java-demo]# kubectl exec -it my-pod2 -c write bash
[root@my-pod2 /]# cd /data/
[root@my-pod2 data]# tailf hello
87
88

# Enter the read container and check
[root@k8s-master1 java-demo]# kubectl exec -it my-pod2 -c read bash
[root@my-pod2 data]# tailf hello

# Check read's logs; its foreground process is the tail -f
[root@k8s-master1 java-demo]# kubectl logs my-pod2 -c read

Why Pods Exist

Pod Image Pull Policy

  1. imagePullPolicy
• IfNotPresent: the default; pull only when the image is not present on the host
• Always: pull the image again every time a Pod is created
• Never: the Pod never pulls the image on its own
# In an exported YAML file the default value is IfNotPresent
containers:
- image: nginx:1.16
  imagePullPolicy: IfNotPresent
  name: nginx
  ports:
  - containerPort: 80
    protocol: TCP

Pulling Private Registry Images: Harbor Authentication Credentials

Configure the Registry as Trusted

[root@k8s-node1 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["http://bc437cce.m.daocloud.io"],
  "insecure-registries": ["172.17.70.245"]
}

# After adding or modifying this, restart docker
systemctl restart docker.service
# Push an image to the registry
[root@k8s-node1 ~]# docker login 172.17.70.245


# Upload a tomcat image
[root@k8s-node1 ~]# docker pull tomcat
# docker tag SOURCE_IMAGE[:TAG] 172.17.70.245/project/IMAGE[:TAG]
# docker push 172.17.70.245/project/IMAGE[:TAG]

[root@k8s-node1 ~]# docker push 172.17.70.245/project/tomcat

Configure Kubernetes Authentication

  1. A docker host being logged in does not mean Kubernetes is authenticated; kubelet is not in a logged-in state
  2. Pulling from a private registry requires a credential
# Reference documentation
https://kubernetes.io/zh/docs/concepts/containers/images/

Test Pulling a Private Registry Image from k8s Without Credentials

  1. docker and Kubernetes do not share the same private-registry login credentials
# Create a tomcat.yaml
# The image already exists locally, so change the pull policy to always re-pull
[root@k8s-master demo]# cp mynginx.yaml tomcat-deployment.yaml

[root@k8s-master demo]# vim tomcat-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat
  name: tomcat
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - image: 172.17.70.245/project/tomcat # private image registry
        imagePullPolicy: Always # pull policy
        name: tomcat
        ports:
        - containerPort: 8080 # container port

---

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat
spec:
  type: NodePort # node port assigned at random
  ports:
  - port: 80 # service port
    targetPort: 8080 # container port
  selector:
    app: tomcat
[root@k8s-master demo]# kubectl apply -f tomcat-deployment.yaml 
deployment.apps/tomcat created
service/tomcat-service unchanged
# The image cannot be pulled

[root@k8s-master demo]# kubectl get pods,svc,deploy
NAME READY STATUS RESTARTS AGE
pod/tomcat-d54b746dc-f5pht 0/1 ImagePullBackOff 0 54s
pod/tomcat-d54b746dc-kz9vj 0/1 ImagePullBackOff 0 54s
pod/tomcat-d54b746dc-tqstj 0/1 ImagePullBackOff 0 54s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 24h
service/tomcat-service NodePort 10.0.0.126 <none> 80:31882/TCP 2m9s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tomcat 0/3 3 0 54s

View Pod Events

[root@k8s-master demo]# kubectl describe pod tomcat-d54b746dc-f5pht

Configure the Credential

# Get docker's authentication info
# The same login info exists on every node that has logged in
# It stores the credential for logging in to harbor


[root@k8s-node1 ~]# cat .docker/config.json
{
  "auths": {
    "172.17.70.245": {
      "auth": "YWRtaW46bHg2ODMyODE1Mw=="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.6 (linux)"
  }
}

# base64-encode without line wrapping
[root@k8s-node1 ~]# cat .docker/config.json|base64 -w 0
ewoJImF1dGhzIjogewoJCSIxNzIuMTcuNzAuMjQ1IjogewoJCQkiYXV0aCI6ICJZV1J0YVc0NmJIZzJPRE15T0RFMU13PT0iCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA5LjYgKGxpbnV4KSIKCX0KfQ==
# Secret configuration
# A Pod can only reference ImagePullSecrets in its own namespace, so the Secret must be created per namespace
# Use a Secret to store the authentication info
# registry-pull-secret is the name of the Secret referenced when pulling images

[root@k8s-master ~]# vim registry-pull-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxNzIuMTcuNzAuMjQ1IjogewoJCQkiYXV0aCI6ICJZV1J0YVc0NmJIZzJPRE15T0RFMU13PT0iCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA5LjYgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson
[root@k8s-master ~]# kubectl create -f registry-pull-secret.yaml 
secret/registry-pull-secret created

# If DATA shows 0, the data was not saved successfully

[root@k8s-master ~]# kubectl get secret
NAME TYPE DATA AGE
default-token-m8m58 kubernetes.io/service-account-token 3 24h
registry-pull-secret kubernetes.io/dockerconfigjson 1 76s
# Reference the credential
spec:
  imagePullSecrets:
  - name: registry-pull-secret
  containers:
  - image: 172.17.70.245/project/tomcat
    imagePullPolicy: Always
    name: tomcat
    ports:
    - containerPort: 8080
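As an alternative to adding imagePullSecrets to every Pod spec, the Secret can be attached to the namespace's default ServiceAccount, so Pods in that namespace pick it up automatically. A sketch, assuming the Secret above already exists:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: registry-pull-secret  # applied to every Pod using this ServiceAccount
```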

# Apply
[root@k8s-master demo]# kubectl apply -f tomcat-deployment.yaml
deployment.apps/tomcat configured # there were changes
service/tomcat-service unchanged
# Now the Pods can be created

[root@k8s-master demo]# kubectl get pods,svc,deploy
NAME READY STATUS RESTARTS AGE
pod/tomcat-67c68f7479-c68fx 1/1 Running 0 107s
pod/tomcat-67c68f7479-q8jrx 1/1 Running 0 96s
pod/tomcat-67c68f7479-v27gp 1/1 Running 0 108s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 25h
service/tomcat-service NodePort 10.0.0.126 <none> 80:31882/TCP 39m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tomcat 3/3 3 3 38m

# Test access
http://39.106.100.108:31882
http://123.56.14.192:31882

# Each of the three pods pulls the image once; the registry shows 3 downloads

Pod Resource Limits

  1. Resource requests and limits for Pods and containers
  2. Prevent a container from consuming all of the node's physical resources
# Official example
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
• spec.containers[].resources.limits.cpu
• spec.containers[].resources.limits.memory
• spec.containers[].resources.requests.cpu
• spec.containers[].resources.requests.memory
limits   the hard cap on resources
requests the minimum for creating the pod; if a node does not have these resources free, the pod will not be scheduled there; they must be satisfiable for scheduling
# Implemented via docker's own resource limits

requests: # must be satisfiable when the container is created
  memory: "64Mi" # 64 MiB of memory
  cpu: "250m" # 25% of one CPU core

limits: # maximum usage
  memory: "128Mi" # at most 128 MiB of memory
  cpu: "500m" # at most 50% of one CPU core, i.e. 0.5 CPU
[root@k8s-master demo]# vim pod2.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql:5.6
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
[root@k8s-master demo]# kubectl apply -f pod2.yaml 
pod/frontend created
[root@k8s-master demo]# kubectl get pod
# Two containers are created in one pod: wordpress and mysql
# Images not present locally are downloaded first, then the containers start

NAME READY STATUS RESTARTS AGE
frontend 0/2 ContainerCreating 0 28s
tomcat-67c68f7479-c68fx 1/1 Running 0 56m
tomcat-67c68f7479-q8jrx 1/1 Running 0 56m
tomcat-67c68f7479-v27gp 1/1 Running 0 56m

[root@k8s-master demo]# kubectl describe pod frontend

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/frontend to k8s-node1
Normal Pulling 65s kubelet, k8s-node1 Pulling image "wordpress"
Normal Created 50s kubelet, k8s-node1 Created container wp
Normal Started 50s kubelet, k8s-node1 Started container wp
Normal Pulled 50s kubelet, k8s-node1 Successfully pulled image "wordpress"
Normal Pulling 24s (x3 over 2m37s) kubelet, k8s-node1 Pulling image "mysql"
Normal Created 23s (x3 over 66s) kubelet, k8s-node1 Created container db
Normal Started 23s (x3 over 65s) kubelet, k8s-node1 Started container db
Normal Pulled 23s (x3 over 66s) kubelet, k8s-node1 Successfully pulled image "mysql"
Warning BackOff 2s (x3 over 38s) kubelet, k8s-node1 Back-off restarting failed container
# View pod logs; with two containers in the pod you must choose one
[root@k8s-master demo]# kubectl logs frontend
Error from server (BadRequest): a container name must be specified for pod frontend, choose one of: [db wp]

[root@k8s-master demo]# kubectl logs frontend db

Check Node Capacity and Allocated Resources

# kubectl describe nodes k8s-node1
# Shows the totals across all pods on the node
[root@k8s-master demo]# kubectl describe nodes k8s-node1


# List namespaces
[root@k8s-master demo]# kubectl get ns
NAME STATUS AGE
default Active 26h
kube-node-lease Active 26h
kube-public Active 26h
kube-system Active 26h

Pod Restart Policy

  1. In Kubernetes a Pod is not really "restarted"; it is recreated each time
  2. Jobs and scheduled tasks are one-off and not suited to Always
  3. Always suits web services that run continuously; if one dies, another is pulled up
• Always: always restart the container after it terminates; the default policy.
• OnFailure: restart the container only when it exits abnormally (non-zero exit code).
• Never: never restart the container after it terminates.


spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  restartPolicy: Always
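For the one-off tasks mentioned above, OnFailure is the usual fit: the container is retried only when it fails. A sketch with a hypothetical name and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task         # hypothetical name
spec:
  restartPolicy: OnFailure    # retried only on a non-zero exit code
  containers:
  - name: task
    image: busybox
    args: ["/bin/sh", "-c", "echo done"]  # exits 0, so it is never restarted
```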

Restart After Abnormal Exit

[root@k8s-master demo]# kubectl edit deployment tomcat
restartPolicy: Always # the default policy


# An example
# busybox image: sleep 30 seconds, then exit
[root@k8s-master demo]# vim pod3.yaml

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
[root@k8s-master demo]# kubectl apply -f pod3.yaml 

# RESTARTS shows the restart count: at 32 seconds it has restarted once, so the default pod restart policy is Always
[root@k8s-master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 1 32s
tomcat-67c68f7479-c68fx 1/1 Running 0 108m
tomcat-67c68f7479-q8jrx 1/1 Running 0 108m
tomcat-67c68f7479-v27gp 1/1 Running 0 108m

Change to a Normal Exit with No Restart

[root@k8s-master demo]# vim pod3.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 0
  restartPolicy: Never
[root@k8s-master demo]# kubectl delete -f pod3.yaml 
pod "foo" deleted
[root@k8s-master demo]# kubectl apply -f pod3.yaml
pod/foo created

[root@k8s-master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 0/1 Completed 0 36s
tomcat-67c68f7479-c68fx 1/1 Running 0 113m
tomcat-67c68f7479-q8jrx 1/1 Running 0 113m
tomcat-67c68f7479-v27gp 1/1 Running 0 113m

# Completed: the pod will not be restarted or recreated
# A normal start takes about 25 seconds
# For long-running applications, keep the default value

Delete a Pod via Its YAML File

[root@k8s-master demo]# kubectl delete -f pod3.yaml

Pod Health Checks (Probes)

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
  1. service endpoints are the IP addresses and ports of the pods a service is associated with
  2. The purpose of health checks is to restart a pod when the service inside it has problems

[root@k8s-master ~]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 172.17.70.245:6443,172.17.70.246:6443 28h
tomcat-service 10.244.0.24:8080,10.244.0.25:8080,10.244.1.15:8080 3h56m
  1. There are two types of probes:
1. livenessProbe
If the check fails, the container is killed and handled according to the Pod's restartPolicy.
2. readinessProbe
If the check fails, Kubernetes removes the Pod from the service endpoints.
  1. Probes support three check methods:
1. httpGet
Sends an HTTP request; a status code in the 200-400 range means success.
2. exec
Runs a shell command; an exit code of 0 means success.
3. tcpSocket
Attempts a TCP socket connection; establishing it means success.
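The worked example in this section uses exec; httpGet and tcpSocket probes follow the same shape. A sketch, where the path, port, and timings are assumptions for illustration:

```yaml
containers:
- name: web
  image: nginx:1.16
  livenessProbe:
    httpGet:
      path: /            # any 200-400 response counts as healthy
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 80           # pod is removed from endpoints while this fails
    initialDelaySeconds: 5
    periodSeconds: 10
```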

Health Check Example

[root@k8s-master demo]# vim pod4.yaml 
# cat returns non-zero when the file does not exist
# initialDelaySeconds: 5 # start health checks 5 seconds after the container starts
# periodSeconds: 5 # run the check every 5 seconds

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy;
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
[root@k8s-master demo]# kubectl apply -f pod4.yaml 
pod/liveness-exec created

# RESTARTS already shows 1: the failed probe triggered a restart
[root@k8s-master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 0/1 Completed 0 105m
liveness-exec 1/1 Running 1 13s
tomcat-67c68f7479-c68fx 1/1 Running 0 3h37m
tomcat-67c68f7479-q8jrx 1/1 Running 0 3h37m
tomcat-67c68f7479-v27gp 1/1 Running 0 3h37m

Pod Scheduling Constraints

  1. Schedule pods onto specified nodes
  2. For example, with many nodes, split them by department: department A uses nodes 1-3, department B uses nodes 4-6
  3. The default scheduling rule scores nodes by resource utilization

Pod Workflow

  1. The user's create-pod request from the command line goes to the apiserver,
  2. the apiserver receives the request and writes it into etcd, recording the attributes of the pod the user asked to create,
  3. the scheduler watches and learns from etcd that a new pod needs to be created,
  4. the scheduler decides via its algorithms which node gets the pod and updates etcd, which records the assigned node,
  5. the kubelet watches etcd and learns which pod is bound to its own node,
  6. with the pod's creation info, the kubelet starts the container via docker run and updates the pod's status in etcd,
  7. finally kubectl get pod asks the apiserver for the pod's status from etcd.
  8. If a Deployment was created, its controller is also involved.

Scheduling Constraints

  1. Specified via two fields:
1. nodeName schedules the Pod onto the node with the given name
2. nodeSelector schedules the Pod onto nodes matching the given labels, used to partition resources

Create a Pod on a Specified Node

[root@k8s-master demo]# vim pod5.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: k8s-node2
  containers:
  - name: nginx
    image: nginx:1.16

[root@k8s-master demo]# kubectl apply -f pod5.yaml

# Scheduled to k8s-node2

[root@k8s-master demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
foo 0/1 Completed 0 135m 10.244.1.19 k8s-node2 <none> <none>
pod-example 1/1 Running 0 44s 10.244.1.22 k8s-node2 <none> <none>
tomcat-67c68f7479-c68fx 1/1 Running 0 4h8m 10.244.1.15 k8s-node2 <none> <none>
tomcat-67c68f7479-q8jrx 1/1 Running 0 4h8m 10.244.0.25 k8s-node1 <none> <none>
tomcat-67c68f7479-v27gp 1/1 Running 0 4h8m 10.244.0.24 k8s-node1 <none> <none>

[root@k8s-master demo]# kubectl describe pod pod-example
# The scheduler was bypassed and the pod was created directly; there is no default-scheduler step
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 63s kubelet, k8s-node2 Container image "nginx:1.16" already present on machine
Normal Created 63s kubelet, k8s-node2 Created container nginx
Normal Started 63s kubelet, k8s-node2 Started container nginx

Delete All Pods via YAML at Once

[root@k8s-master demo]# cd /opt/demo/
[root@k8s-master demo]# kubectl delete -f .

nodeSelector: Scheduling by Label

  1. First label the nodes
  2. Both nodes and pods can carry labels
[root@k8s-master demo]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 29h v1.16.0
k8s-node2 Ready <none> 29h v1.16.0

[root@k8s-master demo]# kubectl label nodes k8s-node1 team=a
node/k8s-node1 labeled
[root@k8s-master demo]# kubectl label nodes k8s-node2 team=b
node/k8s-node2 labeled

[root@k8s-master demo]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-node1 Ready <none> 29h v1.16.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,team=a
k8s-node2 Ready <none> 29h v1.16.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,team=b

# The earlier labels are defaults; the trailing team label is the one we added
# Nodes can now be distinguished by team
# env_role: dev
# env_role: prod
[root@k8s-master demo]# vim pod5.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeSelector:
    team: b
  containers:
  - name: nginx
    image: nginx:1.16
[root@k8s-master demo]# kubectl delete -f . 

[root@k8s-master demo]# kubectl apply -f pod5.yaml

[root@k8s-master demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-example 1/1 Running 0 8s 10.244.1.24 k8s-node2 <none> <none>

[root@k8s-master demo]# kubectl describe pod pod-example
# Went through the scheduler
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/pod-example to k8s-node2
Normal Pulled 37s kubelet, k8s-node2 Container image "nginx:1.16" already present on machine
Normal Created 37s kubelet, k8s-node2 Created container nginx
Normal Started 37s kubelet, k8s-node2 Started container nginx

# Development
# Test
# Production environments in one cluster can be scheduled apart by label
# Keep in mind which steps a pod creation goes through

Pod Troubleshooting

  1. Have a clear approach
# Official manual
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/

Pod Status

# STATUS
[root@k8s-master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-example 1/1 Running 0 5m49s

View Pod Events

# Pending: the pod cannot be created for some reason, e.g. slow image download or failed scheduling
# View the pod events
[root@k8s-master demo]# kubectl describe pod "podname"
# With image problems it stays in a pulling image state
# Scheduling failures show up at the earlier Scheduled step
# The order is top to bottom: schedule first, then pull the image, finally start the container

# For slow image pulls, use a mirror accelerator or an internal registry
# For scheduling failures, check whether the nodes can satisfy the creation requirements or whether a label is missing

View Pod Logs

# kubectl logs "podname" -n "namespace"

[root@k8s-master demo]# kubectl logs kube-flannel-ds-amd64-4jkk8 -n kube-system

# Well-built images reduce errors

Enter a Container in the Pod

kubectl describe TYPE/NAME
kubectl logs TYPE/NAME [-c CONTAINER]
kubectl exec POD [-c CONTAINER] -- COMMAND [args...]
[root@k8s-master demo]# kubectl exec -it pod-example bash
root@pod-example:/#
# If the status is Running but the service did not start properly, enter the container to investigate
# The scheduling workflow is very important