13 Replication and Other Controllers


Keeping pods healthy

  1. In a real deployment you want your pods to keep running and stay healthy automatically, without manual intervention.
  2. Don't create pods directly; create a resource such as a ReplicationController or a Deployment and let it manage the pods.
  3. This chapter covers how pods are supervised: how Kubernetes restarts containers automatically and how pods are rescheduled when a node fails.
  4. An application can also stop working without its process crashing, for example a Java process with a memory leak; liveness probes catch such cases.

Liveness probes

# Kubernetes can probe a container with one of three mechanisms (exec and tcpSocket are sketched after this list)

1. httpGet
Sends an HTTP GET request to the container; a response code in the 2xx or 3xx range counts as success.
2. exec
Runs a command inside the container; an exit status of 0 counts as success.
3. tcpSocket
Tries to open a TCP connection to the container's port; the probe succeeds if the connection is established.
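For comparison, a minimal sketch of the other two probe types in a pod spec; the command and file path are illustrative assumptions, not taken from this chapter's manifests:

# exec probe: runs the command in the container; exit status 0 counts as success
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy        # hypothetical file the application is assumed to keep up to date

# tcpSocket probe: succeeds if a TCP connection to the port can be opened
livenessProbe:
  tcpSocket:
    port: 8080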

An HTTP-based liveness probe

# https://github.com/luksa/kubernetes-in-action/tree/master/Chapter04/kubia-unhealthy

[root@k8s-master1 demo]# vim kubia-libeness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: 172.31.228.68/project/kubia-httpget   # an image that will start failing
    name: kubia
    livenessProbe:      # liveness probe
      httpGet:          # httpGet probe type
        path: /         # request path
        port: 8080      # request port

# after the first couple of requests this image starts returning HTTP 500
# after roughly a minute and a half the container gets restarted
# RESTARTS shows the number of restarts
# in this setup the container keeps restarting in an endless loop
[root@k8s-master1 demo]# kubectl create -f kubia-libeness-probe.yaml
pod/kubia-liveness created

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-liveness 1/1 Running 0 61s
[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-liveness 1/1 Running 1 2m1s

Getting the application log of a crashed container

# to see the log of the previous container instead of the current one, add --previous
[root@k8s-master1 demo]# kubectl logs kubia-liveness --previous

Kubia server starting...
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1
Received request from ::ffff:10.244.2.1

Looking at the pod's events

[root@k8s-master1 demo]# kubectl describe pod kubia-liveness

# the previous container terminated with an error, exit code 137
# 137 = 128 + x, where x is the number of the signal that killed the process; here x = 9 (SIGKILL), meaning the process was killed forcibly
# Restart Count shows how many times the container has been restarted
...
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 11 Mar 2020 11:39:53 +0800
Finished: Wed, 11 Mar 2020 11:41:43 +0800
Ready: True
Restart Count: 3
Liveness: http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
...


Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/kubia-liveness to k8s-master1
Normal Created 4m36s (x3 over 8m11s) kubelet, k8s-master1 Created container kubia
Normal Started 4m36s (x3 over 8m11s) kubelet, k8s-master1 Started container kubia
Warning Unhealthy 3m16s (x9 over 7m16s) kubelet, k8s-master1 Liveness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 3m16s (x3 over 6m56s) kubelet, k8s-master1 Container kubia failed liveness probe, will be restarted
Normal Pulling 2m46s (x4 over 8m11s) kubelet, k8s-master1 Pulling image "172.31.228.68/project/kubia-httpget"
Normal Pulled 2m46s (x4 over 8m11s) kubelet, k8s-master1 Successfully pulled image "172.31.228.68/project/kubia-httpget"

# the events show why the container was restarted
# when a container is killed, a completely new container is created; the original container is not restarted

Configuring additional properties of the liveness probe

# http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
# delay    initial delay before the first probe
# timeout  response timeout
# period   probe interval

# delay=0s    probing begins as soon as the container starts
# timeout=1s  the container must respond within 1 second, otherwise the probe counts as a failure
# period=10s  the container is probed every 10 seconds
# failure=3   the container is restarted after 3 consecutive failed probes

# always set an initial delay to account for the application's startup time
# exit code 137 means the process was killed by an external signal (128 + 9); 143 would mean 128 + 15 (SIGTERM)
# the timeout and failure threshold can also be set explicitly, see the sketch after the manifest below
# initialDelaySeconds: 10 # start probing 10 seconds after the container starts
# periodSeconds: 5 # probe every 5 seconds
[root@k8s-master1 demo]# vim kubia-libeness-probe.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: 172.31.228.68/project/kubia-httpget
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5

[root@k8s-master1 demo]# kubectl describe pod kubia-liveness
...
Liveness: http-get http://:8080/ delay=10s timeout=1s period=5s #success=1 #failure=3
...
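The remaining attributes from the probe summary line can be tuned explicitly as well; a minimal sketch, with values chosen only for illustration:

livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10   # wait 10 seconds before the first probe
  periodSeconds: 5          # probe every 5 seconds
  timeoutSeconds: 2         # the probe must get a response within 2 seconds
  failureThreshold: 3       # restart the container after 3 consecutive failures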

Creating a ReplicationController

# a ReplicationController has three essential parts
# 1. a label selector
# 2. a replica count
# 3. a pod template
[root@k8s-master1 demo]# vim kubia-rc.yaml

apiVersion: v1
kind: ReplicationController   # resource type
metadata:
  name: kubia                 # RC name
spec:
  replicas: 3                 # replica count
  selector:
    app: kubia                # pod selector
  template:                   # pod template
    metadata:
      labels:
        app: kubia            # the pod labels in the template must match the label selector
    spec:
      containers:
      - image: 172.31.228.68/project/kubia
        name: kubia
        ports:
        - containerPort: 8080
          protocol: TCP

[root@k8s-master1 demo]# kubectl create -f kubia-rc.yaml

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-dhj7g 1/1 Running 0 33s
kubia-k659d 1/1 Running 0 33s
kubia-l4fpp 1/1 Running 0 33s

# delete one pod, then check again
[root@k8s-master1 demo]# kubectl delete pod kubia-dhj7g

[root@k8s-master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-dhj7g 1/1 Terminating 0 115s # being terminated
kubia-k659d 1/1 Running 0 115s
kubia-l4fpp 1/1 Running 0 115s
kubia-mqk8k 1/1 Running 0 28s # re-created by the RC

# list the rc
[root@k8s-master1 demo]# kubectl get rc
NAME DESIRED CURRENT READY AGE
kubia 3 3 3 2m35s

# show detailed information about the rc
[root@k8s-master1 demo]# kubectl describe rc kubia
Name: kubia
Namespace: default
Selector: app=kubia
Labels: app=kubia
Annotations: <none>
Replicas: 3 current / 3 desired # actual and desired number of pods
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed # number of pods in each state
Pod Template:
Labels: app=kubia
Containers:
kubia:
Image: 172.31.228.68/project/kubia
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 7m51s replication-controller Created pod: kubia-dhj7g
Normal SuccessfulCreate 7m51s replication-controller Created pod: kubia-l4fpp
Normal SuccessfulCreate 7m51s replication-controller Created pod: kubia-k659d
Normal SuccessfulCreate 6m24s replication-controller Created pod: kubia-mqk8k

Testing by taking node2 offline

[root@k8s-master1 demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kubia-k659d 1/1 Running 0 11m 10.244.1.16 k8s-node2 <none> <none>
kubia-l4fpp 1/1 Running 0 11m 10.244.2.16 k8s-master1 <none> <none>
kubia-mqk8k 1/1 Running 0 10m 10.244.0.14 k8s-node1 <none> <none>

# if a node is unreachable for a few minutes, the status of its pods changes to Unknown and the rc creates replacement pods
[root@k8s-master1 demo]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 46h v1.16.0
k8s-node1 Ready <none> 46h v1.16.0
k8s-node2 NotReady <none> 46h v1.16.0

[root@k8s-master1 demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kubia-k659d 1/1 Terminating 0 18m 10.244.1.16 k8s-node2 <none> <none>
kubia-l4fpp 1/1 Running 0 18m 10.244.2.16 k8s-master1 <none> <none>
kubia-mqk8k 1/1 Running 0 16m 10.244.0.14 k8s-node1 <none> <none>
kubia-sfhkc 1/1 Running 0 17s 10.244.2.17 k8s-master1 <none> <none>

Moving pods into and out of the scope of an RC

1. By changing a pod's labels, the pod can be added to or removed from the scope of an rc.

# add an extra label to a pod managed by the rc
# the rc does not care about extra labels; it only cares whether the pod has the labels in its label selector
[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-l4fpp 1/1 Running 0 29m
kubia-mqk8k 1/1 Running 0 27m
kubia-sfhkc 1/1 Running 0 11m

[root@k8s-master1 demo]# kubectl label pod kubia-l4fpp team=A
pod/kubia-l4fpp labeled

[root@k8s-master1 demo]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-l4fpp 1/1 Running 0 29m app=kubia,team=A
kubia-mqk8k 1/1 Running 0 28m app=kubia
kubia-sfhkc 1/1 Running 0 11m app=kubia
# change the label of a pod that is already managed by the rc

[root@k8s-master1 demo]# kubectl label pod kubia-l4fpp app=foo --overwrite
pod/kubia-l4fpp labeled

[root@k8s-master1 demo]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-l4fpp 1/1 Running 0 31m app=foo,team=A # no longer managed by the rc; it can be deleted manually
kubia-mqk8k 1/1 Running 0 30m app=kubia
kubia-sfhkc 1/1 Running 0 13m app=kubia
kubia-wsqgg 1/1 Running 0 4s app=kubia # new pod created by the rc

[root@k8s-master1 demo]# kubectl delete pod kubia-l4fpp

# if you change the rc's label selector, all pods it previously managed fall out of its scope and the rc creates 3 new pods
# don't change the rc's label selector; change the pod template instead

Changing the pod template with kubectl edit

[root@k8s-master1 demo]# kubectl edit rc kubia

...
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kubia
        team: C          # newly added label
...

# 1. this does not affect existing pods
# 2. pods that are deleted and re-created will carry the new label
# 3. changing the container image here would effectively update the pods; better ways to update pods are covered later

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-mqk8k 1/1 Running 0 39m
kubia-sfhkc 1/1 Running 0 22m
kubia-wsqgg 1/1 Running 0 9m2s

[root@k8s-master1 demo]# kubectl delete pod kubia-sfhkc kubia-wsqgg
pod "kubia-sfhkc" deleted
pod "kubia-wsqgg" deleted

[root@k8s-master1 ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2ftc2 1/1 Running 0 40s app=kubia,team=C
kubia-mqk8k 1/1 Running 0 40m app=kubia
kubia-x6xm2 1/1 Running 0 40s app=kubia,team=C

Horizontal scaling with kubectl scale

[root@k8s-master1 demo]# kubectl scale rc kubia --replicas=5
replicationcontroller/kubia scaled

[root@k8s-master1 demo]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2ftc2 1/1 Running 0 4m18s app=kubia,team=C
kubia-cf627 1/1 Running 0 32s app=kubia,team=C
kubia-mqk8k 1/1 Running 0 44m app=kubia
kubia-t2k4h 1/1 Running 0 32s app=kubia,team=C
kubia-x6xm2 1/1 Running 0 4m18s app=kubia,team=C

# change the rc by editing its definition
[root@k8s-master1 demo]# kubectl edit rc kubia
...
spec:
  replicas: 3 # replica count
...

[root@k8s-master1 demo]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2ftc2 1/1 Running 0 7m30s app=kubia,team=C
kubia-mqk8k 1/1 Running 0 47m app=kubia
kubia-x6xm2 1/1 Running 0 7m30s app=kubia,team=C

# declarative scaling: "I want x instances running"

Deleting a ReplicationController

# 1. deleting an rc also deletes the pods it manages
# 2. pass --cascade=false when deleting the rc to keep the pods running
# 3. once the rc is deleted, its pods are no longer managed; a new rc can take them over as long as its label selector matches the pod labels

[root@k8s-master1 demo]# kubectl delete rc kubia --cascade=false

[root@k8s-master1 demo]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2ftc2 1/1 Running 0 13m app=kubia,team=C
kubia-mqk8k 1/1 Running 0 53m app=kubia
kubia-x6xm2 1/1 Running 0 13m app=kubia,team=C

Replacing ReplicationControllers with ReplicaSets

# 1. the ReplicationController was the original component for replicating pods and rescheduling them when a node fails
# 2. the ReplicaSet is the new generation of the ReplicationController and fully replaces it
# 3. you usually don't create ReplicaSets directly; they are created automatically when you create a higher-level Deployment (a minimal sketch follows)
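As a preview, a minimal Deployment sketch that would manage the same pods (Deployments are covered in detail later); creating it makes Kubernetes create a ReplicaSet underneath:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - image: 172.31.228.68/project/kubia
        name: kubia
        ports:
        - containerPort: 8080

# kubectl get rs would then show the ReplicaSet created by the Deployment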

Differences between a ReplicaSet and a ReplicationController

# 1. a ReplicationController selector can only match pods carrying an exact set of labels
# 2. a ReplicaSet selector is more expressive and can combine several label conditions
# 3. a ReplicaSet can also match on a label key alone, e.g. any pod that has an env label regardless of its value (env=*), as sketched below
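A minimal selector sketch for point 3, assuming pods carry an env label with arbitrary values:

selector:
  matchExpressions:
  - key: env
    operator: Exists   # matches every pod that has an env label, whatever its value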

Defining a ReplicaSet

# for reference
[root@k8s-master1 demo]# kubectl explain rs
KIND: ReplicaSet
VERSION: apps/v1

[root@k8s-master1 demo]# kubectl explain rc
KIND: ReplicationController
VERSION: v1


# create a ReplicaSet that takes over the three pods left behind by the deleted rc

[root@k8s-master1 demo]# vim kubia-rs.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:        # the selector now goes under matchLabels
      app: kubia        # label selector
  template:
    metadata:
      labels:           # the pod labels must match the rs selector
        app: kubia
    spec:
      containers:
      - image: 172.31.228.68/project/kubia
        name: kubia
        ports:
        - containerPort: 8080

[root@k8s-master1 demo]# kubectl create -f kubia-rs.yaml
replicaset.apps/kubia created

[root@k8s-master1 demo]# kubectl get rs
NAME DESIRED CURRENT READY AGE
kubia 3 3 3 26s

The ReplicaSet label selector

# the main improvement of the ReplicaSet over the rc is its more expressive label selector

selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - kubia
# four operators are available: In, NotIn, Exists, DoesNotExist
# with multiple expressions, all of them must evaluate to true
# if matchExpressions and matchLabels are both specified, all of the conditions must be true
# from now on always use ReplicaSets, although you will still come across rcs in other people's deployments
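A minimal sketch combining matchLabels with matchExpressions, reusing the app and team labels used earlier in this chapter; a pod is selected only when every condition holds:

selector:
  matchLabels:
    app: kubia       # exact label match
  matchExpressions:
  - key: team
    operator: In     # team must be one of the listed values
    values:
    - A
    - C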

Deleting a ReplicaSet

# deleting the rs also deletes the pods beneath it
[root@k8s-master1 ~]# kubectl delete rs kubia
replicaset.apps "kubia" deleted

DaemonSet: running one pod on each node

1. If a node goes down, the DaemonSet does not re-create its pod on another node.
2. When a new node joins the cluster, the DaemonSet immediately creates a pod on it.
3. A nodeSelector can be specified so the pods run only on certain nodes.
4. A DaemonSet bypasses the scheduler, so its pods run even on nodes that are marked unschedulable.

Creating a DaemonSet

[root@k8s-master2 ssd]# vim Dockerfile

FROM busybox
ENTRYPOINT while true; do echo 'SSD OK'; sleep 5; done

docker build -t ssd-monitor .
docker images
docker tag ssd-monitor 172.31.228.68/project/ssd-monitor
docker push 172.31.228.68/project/ssd-monitor

[root@k8s-master1 demo]# kubectl explain ds
KIND: DaemonSet
VERSION: apps/v1

[root@k8s-master1 demo]# vim kubia-ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:     # node selector: only nodes labeled disk: ssd
        disk: ssd
      containers:
      - image: 172.31.228.68/project/ssd-monitor
        name: main
[root@k8s-master1 demo]# kubectl create -f kubia-ds.yaml 

# no node is labeled disk=ssd yet, so no pods are created
[root@k8s-master1 demo]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ssd-monitor 0 0 0 0 0 disk=ssd 33s

# label two of the nodes
[root@k8s-master1 demo]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 2d v1.16.0
k8s-node1 Ready <none> 2d v1.16.0
k8s-node2 Ready <none> 2d v1.16.0

[root@k8s-master1 demo]# kubectl label node k8s-node1 k8s-node2 disk=ssd
node/k8s-node1 labeled
node/k8s-node2 labeled

[root@k8s-master1 demo]# kubectl get node -L disk
NAME STATUS ROLES AGE VERSION DISK
k8s-master1 Ready <none> 2d v1.16.0
k8s-node1 Ready <none> 2d v1.16.0 ssd
k8s-node2 Ready <none> 2d v1.16.0 ssd
# check again
[root@k8s-master1 demo]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ssd-monitor 2 2 2 2 2 disk=ssd 3m29s

[root@k8s-master1 demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ssd-monitor-9twms 1/1 Running 0 4s 10.244.1.22 k8s-node2 <none> <none>
ssd-monitor-tlrfq 1/1 Running 0 4s 10.244.0.16 k8s-node1 <none> <none>

[root@k8s-master1 demo]# kubectl logs ssd-monitor-9twms
SSD OK
SSD OK
...

# change the label on one node; the DaemonSet pod there is cleanly taken down

[root@k8s-master1 demo]# kubectl label node k8s-node2 disk=hdd --overwrite
node/k8s-node2 labeled

[root@k8s-master1 demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ssd-monitor-9twms 1/1 Terminating 0 74s 10.244.1.22 k8s-node2 <none> <none>
ssd-monitor-tlrfq 1/1 Running 0 74s 10.244.0.16 k8s-node1 <none> <none>

[root@k8s-master1 demo]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ssd-monitor 1 1 1 1 1 disk=ssd 77s
# delete the ds
[root@k8s-master1 demo]# kubectl delete ds ssd-monitor
daemonset.apps "ssd-monitor" deleted

Job: running a pod for a single task

1. Once a Job's pod completes, the container is not restarted.
2. There are one-off tasks (Job) and scheduled tasks (CronJob).
3. Typical use cases: offline data processing, video transcoding and similar batch work.

Creating a Job

[root@k8s-master2 job]# vim Dockerfile 

FROM busybox
ENTRYPOINT echo "$(date) Batch job starting"; sleep 120; echo "$(date) Finished succesfully"

[root@k8s-master2 job]# docker build -t batch-job .
[root@k8s-master2 job]# docker tag batch-job 172.31.228.68/project/batch-job
[root@k8s-master2 job]# docker push 172.31.228.68/project/batch-job
[root@k8s-master1 demo]# kubectl explain job
KIND: Job
VERSION: batch/v1

[root@k8s-master1 demo]# vim kubia-job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - image: 172.31.228.68/project/batch-job
        name: main

[root@k8s-master1 demo]# kubectl create -f kubia-job.yaml
• Always: always restart the container after it terminates; this is the default policy.
• OnFailure: restart the container only when it exits abnormally (non-zero exit code).
• Never: never restart the container after it terminates.

Watching the Job run

[root@k8s-master1 demo]# kubectl get jobs
NAME COMPLETIONS DURATION AGE
batch-job 0/1 50s 50s


[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
batch-job-5tr5g 1/1 Running 0 70s

[root@k8s-master1 demo]# kubectl describe jobs batch-job

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
batch-job-5tr5g 0/1 Completed 0 2m31s

# read the job's log
[root@k8s-master1 demo]# kubectl logs batch-job-5tr5g
Wed Mar 11 10:00:51 UTC 2020 Batch job starting
Wed Mar 11 10:02:51 UTC 2020 Finished succesfully

# delete the job
[root@k8s-master1 demo]# kubectl delete job batch-job
job.batch "batch-job" deleted

Scheduled tasks: CronJob

Creating a CronJob

[root@k8s-master1 demo]# kubectl explain cronjob
KIND: CronJob
VERSION: batch/v1beta1

A scheduled task, just like crontab on Linux.
Use cases: notifications, backups.
The crontab format:
minute hour day-of-month month day-of-week command: column 1 minute 0-59, column 2 hour 0-23, column 3 day of month 1-31, column 4 month 1-12, column 5 day of week 0-7 (0 and 7 both mean Sunday), column 6 the command to run
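A few illustrative schedule strings (examples only, not taken from the original):

# "*/1 * * * *"   every minute (used in the example below)
# "0 3 * * *"     every day at 03:00
# "0 0 * * 0"     every Sunday at midnight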
[root@k8s-master1 demo]# vim cronjob.yaml 

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure


[root@k8s-master1 demo]# kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 <none> 12s

[root@k8s-master1 demo]# kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 31s 58s

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-1583921700-s4rwp 0/1 Completed 0 28s

[root@k8s-master1 demo]# kubectl logs hello-1583921700-s4rwp
Wed Mar 11 10:15:16 UTC 2020
Hello from the Kubernetes cluster

# when the next scheduled time is reached, the job runs again and creates another pod

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-1583921700-s4rwp 0/1 Completed 0 61s
hello-1583921760-sxz8q 0/1 ContainerCreating 0 1s

# delete the cronjob
[root@k8s-master1 demo]# kubectl delete cronjob hello