05 Helm: Application Package Manager for Kubernetes


Why do we need Helm?

  1. An application on Kubernetes is described by a set of resource manifests, such as Deployments and Services.
  2. These manifests are kept in separate files, or collected into one configuration file, and deployed with kubectl apply -f.
  3. If the application consists of only one or a few such services, this way of deploying is good enough.
  4. For a complex application, however, there are many such resource files; a microservice-style application may be made up of ten or even dozens of services.
  5. When the application needs to be updated or rolled back, a large number of resource files have to be modified and maintained, and this way of organizing and managing applications quickly becomes unmanageable (compare the sketch after this list).
  6. In addition, because released application versions are not managed or controlled, maintaining and updating applications on Kubernetes faces several challenges:
    • How to manage these services as a whole
    • How to reuse these resource files efficiently
    • No application-level version management
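A rough sketch of the difference (the file, chart, and release names below are only illustrative):

# Without Helm: every service ships its own pile of manifests,
# applied and tracked by hand
kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

# With Helm: the whole set is packaged as one chart and managed as a unit
helm install myapp ./myapp-chart     # install
helm upgrade myapp ./myapp-chart     # update
helm rollback myapp 1                # roll back to revision 1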

Introduction to Helm

  1. Helm is a package manager for Kubernetes, similar to package managers on Linux such as yum or apt. It makes it easy to deploy pre-packaged YAML manifests to Kubernetes.
  2. Helm has three important concepts (illustrated after this list):

    • helm: a command-line client tool, used mainly to create, package, publish, and manage charts for Kubernetes applications.

    • Chart: the application description, a collection of files describing the related Kubernetes resources.

    • Release: a deployment instance of a chart. Each time a chart is run by Helm, a corresponding release is generated, and the actual running resource objects are created in Kubernetes.
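For example, the same chart can be installed several times, and each installation is its own release (db1 and db2 are example release names; the stable/mysql chart is the one used later in this article):

helm install db1 stable/mysql   # release "db1" created from the mysql chart
helm install db2 stable/mysql   # a second, independent release from the same chart
helm list                       # lists both releases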

Changes in Helm v3

  1. On November 13, 2019, the Helm team released the first stable version of Helm v3.
  2. The main changes in this release are as follows:

Architectural changes

  1. The most visible change is the removal of Tiller.
  2. Release names can be reused in different namespaces.
  3. Charts can be pushed to Docker image registries.
  4. Chart values can be validated with a JSON Schema.

  1. Other changes:
1. To better align with the terminology of other package managers, several Helm CLI commands were renamed:

helm delete was renamed to helm uninstall
helm inspect was renamed to helm show
helm fetch was renamed to helm pull
The old command names still work for now.

2. The helm serve command, used to run a temporary local Chart Repository, was removed.

3. Namespaces are no longer created automatically.
When installing a release into a namespace that does not exist, Helm 2 created the namespace. Helm 3 follows the behavior of other Kubernetes objects and returns an error if the namespace does not exist.

4. requirements.yaml is no longer needed; dependencies are defined directly in Chart.yaml (see the example below).
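A minimal sketch of a Helm 3 dependency declaration (the chart name, version constraint, and repository below are only illustrative, reusing the mirror and mysql chart that appear later in this article):

apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: mysql                                            # name of the dependent chart
    version: "1.6.2"                                       # version constraint
    repository: "http://mirror.azure.cn/kubernetes/charts" # repository holding it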

The Helm Client

Deploying the Helm client

  1. Helm client downloads: https://github.com/helm/helm/releases
  2. Unpack the archive and move the binary into /usr/bin/, then verify it as shown below.
wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar zxvf helm-v3.0.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/
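A quick check that the client is on the PATH (the printed version depends on the release you downloaded):

helm version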

Common Helm commands

Command     Description
create      Create a new chart with the given name
dependency  Manage a chart's dependencies
get         Download extended information of a named release. Subcommands: all, hooks, manifest, notes, values
history     Fetch release history
install     Install a chart
list        List releases
package     Package a chart directory into a chart archive
pull        Download a chart from a remote repository and optionally unpack it locally, e.g. helm pull stable/mysql --untar
repo        Add, list, remove, update, and index chart repositories. Subcommands: add, index, list, remove, update
rollback    Roll back a release to a previous revision
search      Search for charts by keyword. Subcommands: hub, repo
show        Show detailed information about a chart. Subcommands: all, chart, readme, values
status      Display the status of a named release
template    Render chart templates locally
uninstall   Uninstall a release
upgrade     Upgrade a release
version     Print the Helm client version

Configuring Chart repositories (domestic mirrors)

helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update
[root@k8s-master1 ~]# helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories

[root@k8s-master1 ~]# helm repo list
NAME URL
stable http://mirror.azure.cn/kubernetes/charts

[root@k8s-master1 ~]# helm search repo mysql
NAME CHART VERSION APP VERSION DESCRIPTION
stable/mysql 1.6.2 5.7.28 Fast, reliable, scalable, and easy to use open-...
stable/mysqldump 2.6.0 2.4.1 A Helm chart to help backup MySQL databases usi...
stable/prometheus-mysql-exporter 0.5.2 v0.11.0 A Helm chart for prometheus mysql exporter with...
stable/percona 1.2.0 5.7.17 free, fully compatible, enhanced, open source d...
stable/percona-xtradb-cluster 1.0.3 5.7.19 free, fully compatible, enhanced, open source d...
stable/phpmyadmin 4.2.4 4.9.2 phpMyAdmin is an mysql administration frontend
stable/gcloud-sqlproxy 0.6.1 1.11 DEPRECATED Google Cloud SQL Proxy
stable/mariadb 7.3.1 10.3.21 Fast, reliable, scalable, and easy to use open-...
# 查看仓库中所有chart
[root@k8s-master1 ~]# helm search repo stable
[root@k8s-master1 ~]# helm search repo stable |grep swift

Adding multiple repositories

[root@k8s-master1 ~]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts 
"aliyun" has been added to your repositories

[root@k8s-master1 ~]# helm repo list
NAME URL
stable http://mirror.azure.cn/kubernetes/charts
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

[root@k8s-master1 ~]# helm search repo mysql
NAME CHART VERSION APP VERSION DESCRIPTION
aliyun/mysql 0.3.5 Fast, reliable, scalable, and easy to use open-...
stable/mysql 1.6.2 5.7.28 Fast, reliable, scalable, and easy to use open-...
stable/mysqldump 2.6.0 2.4.1 A Helm chart to help backup MySQL databases usi...
stable/prometheus-mysql-exporter 0.5.2 v0.11.0 A Helm chart for prometheus mysql exporter with...
aliyun/percona 0.3.0 free, fully compatible, enhanced, open source d...
aliyun/percona-xtradb-cluster 0.0.2 5.7.19 free, fully compatible, enhanced, open source d...
stable/percona 1.2.0 5.7.17 free, fully compatible, enhanced, open source d...
stable/percona-xtradb-cluster 1.0.3 5.7.19 free, fully compatible, enhanced, open source d...
stable/phpmyadmin 4.2.4 4.9.2 phpMyAdmin is an mysql administration frontend
aliyun/gcloud-sqlproxy 0.2.3 Google Cloud SQL Proxy
aliyun/mariadb 2.1.6 10.1.31 Fast, reliable, scalable, and easy to use open-...
stable/gcloud-sqlproxy 0.6.1 1.11 DEPRECATED Google Cloud SQL Proxy
stable/mariadb 7.3.1 10.3.21 Fast, reliable, scalable, and easy to use open-...

Removing a repository

[root@k8s-master1 ~]# helm repo remove aliyun

Basic Helm usage

  1. This section focuses on three operations:

    • helm install

    • helm upgrade

    • helm rollback

Deploying an application with a chart

# 查找chart 
[root@k8s-master1 ~]# helm search repo
[root@k8s-master1 ~]# helm search repo mysql
# 查看chart信息
[root@k8s-master1 ~]# helm show chart stable/mysql
# values 相当于 模板的变量 yaml文件中的动态字段需要动态传值
[root@k8s-master1 ~]# helm show values stable/mysql
# 安装 db1 是 Release 的名称
# 部署后会弹出 使用信息
[root@k8s-master1 ~]# helm install db1 stable/mysql
# 获取密码
[root@k8s-master1 ~]# kubectl get secret --namespace default db1-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
5R4VprCnNT

# 查看部署状态
[root@k8s-master1 ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
db1 default 1 2019-12-17 11:23:30.533875697 +0800 CST deployed mysql-1.6.2 5.7.28

Checking status

# 查看发布状态
[root@k8s-master1 ~]# helm status db1
# 查看pod状态
[root@k8s-master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
db1-mysql-868b97747b-tnpwk 0/1 Pending 0 9m59s # 等待
metrics-app-7674cfb699-nzmdz 1/1 Running 0 132m
metrics-app-7674cfb699-thjzk 1/1 Running 0 132m
metrics-app-7674cfb699-xvfpx 1/1 Running 0 132m
nfs-client-provisioner-5dd6f66f47-w5t6w 1/1 Running 0 137m
web-6f4b67f8cc-j2ccg 1/1 Running 0 148m
web-6f4b67f8cc-mljnd 1/1 Running 0 148m
# 查看事件
[root@k8s-master1 ~]# kubectl describe pod db1-mysql-868b97747b-tnpwk

# 没有 pvc
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
# Two ways to get the PVC bound:
# 1. Static provisioning (create a matching PV by hand, as done below)
# 2. Dynamic provisioning (a StorageClass, used later in this article)
# 查看pvc
[root@k8s-master1 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db1-mysql Pending 12m
# Inspect the db1-mysql PVC; we need to provide a PV that matches it
[root@k8s-master1 ~]# kubectl describe pvc db1-mysql
Name: db1-mysql
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=db1-mysql
chart=mysql-1.6.2
heritage=Helm
release=db1
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: db1-mysql-868b97747b-tnpwk
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 3m57s (x42 over 14m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
[root@k8s-master1 ~]# kubectl get pvc db1-mysql -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2019-12-17T03:23:30Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: db1-mysql
chart: mysql-1.6.2
heritage: Helm
release: db1
name: db1-mysql
namespace: default
resourceVersion: "18423"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/db1-mysql
uid: 8fc33553-3154-448b-92a5-cff2a7c3b757
spec:
accessModes:
- ReadWriteOnce # can be mounted read-write by a single node
resources:
requests:
storage: 8Gi # requests 8Gi of storage
volumeMode: Filesystem
status:
phase: Pending

Creating a PV

# 模板
https://kubernetes.io/docs/concepts/storage/persistent-volumes/

# Persistent Volumes
# Each PV contains a spec and status, which is the specification and status of the volume.

apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
# nfs主机上创建目录
[root@k8s-node2 ~]# mkdir /ifs/kubernetes/db
# 创建 pv
[root@k8s-master1 pv]# vim pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /ifs/kubernetes/db
    server: 172.17.70.254
[root@k8s-master1 pv]# kubectl apply -f pv.yaml 
persistentvolume/pv0003 created

[root@k8s-master1 pv]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv0003 8Gi RWO Retain Bound default/db1-mysql 70s
persistentvolume/pvc-56346fbf-b298-4573-a0dd-1429325dcb71 16Gi RWO Delete Bound kube-system/prometheus-data-prometheus-0 managed-nfs-storage 151m

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/db1-mysql Bound pv0003 8Gi RWO 25m

[root@k8s-master1 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
db1-mysql-868b97747b-tnpwk 1/1 Running 0 26m

Login test

# 由于网络原因 就不使用官网容器测试了 直接登录测试
# 先拿到密码
[root@k8s-master1 pv]# helm list
[root@k8s-master1 pv]# helm status db1
[root@k8s-master1 pv]# kubectl get secret --namespace default db1-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
5R4VprCnNT

# 登录 mysql
[root@k8s-master1 pv]# kubectl get pods
[root@k8s-master1 pv]# kubectl exec -it db1-mysql-868b97747b-tnpwk bash

root@db1-mysql-868b97747b-tnpwk:/# mysql -uroot -p5R4VprCnNT
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 58
Server version: 5.7.28 MySQL Community Server (GPL)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> create database test;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
5 rows in set (0.01 sec)

# 看下变量 发现有的是空的
root@db1-mysql-868b97747b-tnpwk:/# echo $MYSQL_ROOT_PASSWORD
5R4VprCnNT
root@db1-mysql-868b97747b-tnpwk:/# echo $MYSQL_PORT

root@db1-mysql-868b97747b-tnpwk:/# echo $MYSQL_HOST

Using your own NFS-based dynamic PV provisioning

  1. Customize the chart's deployment options.
  2. The MySQL deployment above did not succeed at first: not every chart runs with its default configuration, and some have environment dependencies, such as a PV.
  3. We therefore need to customize the chart's configuration. During installation there are two ways to pass configuration data (see the combined example after this list):
    • --values (or -f): specify a YAML file with overrides. It can be given multiple times, and the rightmost file takes precedence.
    • --set: specify overrides on the command line. If both are used, --set takes higher precedence.
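The two mechanisms can also be combined. A hypothetical invocation against the stable/mysql chart used in this section (mydb and the password are placeholders):

# values from the file are applied first, then --set overrides any overlapping keys
helm install mydb -f values.yaml --set mysqlRootPassword=MyPass123 stable/mysql

# inspect the values that were actually supplied to the release
helm get values mydb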

Using a values file

  1. First write the variables to be overridden into a file.
[root@k8s-master1 pv]# helm show values stable/mysql > values.yaml   # 后面看默认的位置
# 只保留要配置的地方 比如pvc
# 增加一些配置 如 创建用户 数据库 等
# 查看下自动供给
[root@k8s-master1 ~]# kubectl get sc
NAME PROVISIONER AGE
managed-nfs-storage fuseim.pri/ifs 169m

# 修改配置文件
[root@k8s-master1 pv]# vim values.yaml

## Specify password for root user
## Default: random 10 character string
mysqlRootPassword: testing

## Create a database user
mysqlUser: k8s
## Default: random 10 character string
mysqlPassword: k8s123

## Create a database
mysqlDatabase: k8s

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "managed-nfs-storage"
  accessMode: ReadWriteOnce
  size: 8Gi
# The settings above create a MySQL user named k8s and grant it access to the newly created k8s database, while accepting all of the chart's other defaults.
# 指定配置文件部署
[root@k8s-master1 pv]# helm install db2 -f values.yaml stable/mysql

[root@k8s-master1 pv]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
db1 default 1 2019-12-17 11:23:30.533875697 +0800 CST deployed mysql-1.6.2 5.7.28
db2 default 1 2019-12-17 12:08:36.436489783 +0800 CST deployed mysql-1.6.2 5.7.28

# 直接可以运行
[root@k8s-master1 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
db1-mysql-868b97747b-tnpwk 1/1 Running 0 45m
db2-mysql-76495946b5-7x9jw 1/1 Running 0 44s

# 进入测试
root@db2-mysql-76495946b5-7x9jw:/# mysql -uroot -ptesting
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 19
Server version: 5.7.28 MySQL Community Server (GPL)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| k8s |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.05 sec)

root@db2-mysql-76495946b5-7x9jw:/# mysql -uk8s -pk8s123
# Summary:
# When installing an official chart, some dependencies (such as a PV) may need to be prepared in advance.
# Configuration can be overridden in two ways:
# 1. A custom values.yaml passed with -f
# 2. --set on the command line

Using --set

[root@k8s-master1 pv]# helm install db3 --set persistence.storageClass="managed-nfs-storage" stable/mysql

[root@k8s-master1 pv]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
db1 default 1 2019-12-17 11:23:30.533875697 +0800 CST deployed mysql-1.6.2 5.7.28
db2 default 1 2019-12-17 12:08:36.436489783 +0800 CST deployed mysql-1.6.2 5.7.28
db3 default 1 2019-12-17 12:13:53.856941497 +0800 CST deployed mysql-1.6.2 5.7.28

[root@k8s-master1 pv]# kubectl get pods
NAME READY STATUS RESTARTS AGE
db1-mysql-868b97747b-tnpwk 1/1 Running 0 51m
db2-mysql-76495946b5-7x9jw 1/1 Running 0 6m1s
db3-mysql-59585b7656-vwfvw 1/1 Running 0 40s
# Values passed with --set must follow a syntax that maps onto the structured data in values.yaml.
# How --set expressions correspond to values.yaml structures is sketched below.
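A few illustrative mappings, based on Helm's --set syntax:

# --set name=value             =>  name: value
# --set a=b,c=d                =>  a: b
#                                  c: d
# --set outer.inner=value      =>  outer:
#                                    inner: value
# --set servers[0].port=80     =>  servers:
#                                    - port: 80
# --set name={a,b,c}           =>  name: [a, b, c]
# --set name=value1\,value2    =>  name: "value1,value2"   (escape a literal comma)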

Pulling an entire chart package

# --untar 拉取后直接解压
[root@k8s-master1 pv]# helm pull stable/mysql --untar
[root@k8s-master1 pv]# ls
mysql

[root@k8s-master1 pv]# cd mysql/
[root@k8s-master1 mysql]# ls -l
total 40
-rw-r--r-- 1 root root 502 Dec 17 12:17 Chart.yaml
-rw-r--r-- 1 root root 22284 Dec 17 12:17 README.md
drwxr-xr-x 3 root root 4096 Dec 17 12:17 templates
-rw-r--r-- 1 root root 5646 Dec 17 12:17 values.yaml # 覆盖的就是这个配置文件

# 部署mysql的yaml文件目录
[root@k8s-master1 mysql]# ls -l templates/
total 52
-rw-r--r-- 1 root root 292 Dec 17 12:17 configurationFiles-configmap.yaml
-rw-r--r-- 1 root root 8610 Dec 17 12:17 deployment.yaml
-rw-r--r-- 1 root root 1290 Dec 17 12:17 _helpers.tpl
-rw-r--r-- 1 root root 295 Dec 17 12:17 initializationFiles-configmap.yaml
-rw-r--r-- 1 root root 1797 Dec 17 12:17 NOTES.txt
-rw-r--r-- 1 root root 868 Dec 17 12:17 pvc.yaml
-rw-r--r-- 1 root root 1475 Dec 17 12:17 secrets.yaml
-rw-r--r-- 1 root root 328 Dec 17 12:17 serviceaccount.yaml
-rw-r--r-- 1 root root 800 Dec 17 12:17 servicemonitor.yaml
-rw-r--r-- 1 root root 1104 Dec 17 12:17 svc.yaml
drwxr-xr-x 2 root root 4096 Dec 17 12:17 tests

The helm install command can install from several sources, for example:
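A sketch of the common install sources (the local paths, URL, and release names here are illustrative):

# 1. A chart reference in a configured repository
helm install mydb1 stable/mysql
# 2. A packaged chart archive
helm install mydb2 ./mysql-1.6.2.tgz
# 3. An unpacked chart directory
helm install mydb3 ./mysql
# 4. A full URL to a chart archive
helm install mydb4 https://example.com/charts/mysql-1.6.2.tgz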

Building a Helm chart

Generating the scaffold directory

[root@k8s-master1 ~]# helm create mychart
Creating mychart

[root@k8s-master1 ~]# cd mychart/
[root@k8s-master1 mychart]# ls -l
drwxr-xr-x 2 root root 4096 Dec 17 16:54 charts
-rw-r--r-- 1 root root 905 Dec 17 16:54 Chart.yaml
drwxr-xr-x 3 root root 4096 Dec 17 16:54 templates
-rw-r--r-- 1 root root 1490 Dec 17 16:54 values.yaml
# 启动这个默认自动创建的 mychart 会发现是一个 nginx服务
[root@k8s-master1 ~]# helm install test mychart/

[root@k8s-master1 ~]# kubectl get pods | grep test
test-mychart-b5cd6d7c8-5mz8b 1/1 Running 0 42s

[root@k8s-master1 ~]# kubectl get pods -o wide | grep test
test-mychart-b5cd6d7c8-5mz8b 1/1 Running 0 59s 10.244.2.9 k8s-node2 <none> <none>
[root@k8s-master1 ~]# curl -I 10.244.2.9
HTTP/1.1 200 OK
Server: nginx/1.16.0
Date: Tue, 17 Dec 2019 08:58:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Apr 2019 10:18:21 GMT
Connection: keep-alive
ETag: "5cbee66d-264"
Accept-Ranges: bytes

[root@k8s-master1 ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
db1 default 1 2019-12-17 11:23:30.533875697 +0800 CST deployed mysql-1.6.2 5.7.28
db2 default 1 2019-12-17 12:08:36.436489783 +0800 CST deployed mysql-1.6.2 5.7.28
db3 default 1 2019-12-17 12:13:53.856941497 +0800 CST deployed mysql-1.6.2 5.7.28
test default 1 2019-12-17 16:57:09.278738153 +0800 CST deployed mychart-0.1.0 1.16.0
# Directory contents:
[root@k8s-master1 ~]# tree /root/mychart/
/root/mychart/
├── charts # holds any sub-charts this chart depends on
├── Chart.yaml # basic information about the chart: name, description, version, etc.
├── templates # holds all the YAML template files
│   ├── deployment.yaml
│   ├── _helpers.tpl # template helpers that can be reused throughout the chart
│   ├── ingress.yaml
│   ├── NOTES.txt # help text shown to the user after helm install (how to use the chart, default settings, etc.)
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│   └── test-connection.yaml
└── values.yaml # dynamic values: stores the values for the variables used by the templates in the templates directory
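A minimal Chart.yaml contains little more than the chart's identity; the same fields appear again in the demo chart built at the end of this article:

apiVersion: v2                            # chart API version (v2 for Helm 3)
name: mychart                             # chart name
description: A Helm chart for Kubernetes
type: application
version: 0.1.0                            # chart version
appVersion: 1.16.0                        # version of the application being packaged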

Making a simple chart

Create the files

# 将文件和目录清空
[root@k8s-master1 mychart]# tree
.
├── charts
├── Chart.yaml
├── templates
└── values.yaml
# 创建nginx的 deployment
[root@k8s-master1 templates]# kubectl create deployment mychart --image=nginx:1.16 -o yaml --dry-run > deployment.yaml
# 删除不要的配置
[root@k8s-master1 templates]# vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mychart
  name: mychart
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mychart
  template:
    metadata:
      labels:
        app: mychart
    spec:
      containers:
      - image: nginx:1.16
        name: nginx

Templates

  1. The core of Helm is templating: templated Kubernetes manifest files.
  2. A chart template is essentially a Go template. On top of Go templates, Helm adds quite a lot.
  3. This includes custom metadata, an extended function library, and programming-style constructs such as conditionals and pipelines, all of which make the templates much more expressive.
# Turn it into a dynamic template.
# With a template in place, how do we feed our configuration into it? Through the values file. These two pieces together are the core functionality of a chart.

[root@k8s-master1 templates]# vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: mychart
  template:
    metadata:
      labels:
        app: mychart
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.imageTag }}
        name: nginx
# 加入变量 引入名字需要一致
[root@k8s-master1 templates]# vim ../values.yaml

name: hello
replicas: 3
image: nginx
imageTag: 1.15
# 安装
[root@k8s-master1 ~]# helm install hello mychart/
NAME: hello
LAST DEPLOYED: Tue Dec 17 17:56:12 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# 卸载
[root@k8s-master1 ~]# helm uninstall hello
release "hello" uninstalled

# 查看
[root@k8s-master1 ~]# kubectl get pods -o wide| grep hello
hello-d8ccdfdd8-cw7qh 1/1 Running 0 38s 10.244.2.10 k8s-node2 <none> <none>
hello-d8ccdfdd8-mqh76 1/1 Running 0 38s 10.244.1.9 k8s-node1 <none> <none>
hello-d8ccdfdd8-skdjw 1/1 Running 0 38s 10.244.0.11 k8s-master1 <none> <none>

[root@k8s-master1 ~]# helm list|grep hello
hello default 1 2019-12-17 17:56:12.316160884 +0800 CST deployed mychart-0.1.0 1.16.0

# View the rendered deployment manifests.
# We want to define the frequently changing fields in one place; that is exactly what the chart templates are for.
[root@k8s-master1 ~]# helm get manifest hello

---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
replicas: 3
selector:
matchLabels:
app: mychart
template:
metadata:
labels:
app: mychart
spec:
containers:
- image: nginx:1.15
name: nginx

Upgrading

  1. When you publish a new version of a chart, or when you want to change the configuration of a release, use the helm upgrade command.
# 修改values.yaml
[root@k8s-master1 mychart]# vim values.yaml

name: hello
replicas: 3
image: nginx
imageTag: 1.16

# 升级
[root@k8s-master1 mychart]# helm upgrade hello /root/mychart/
Release "hello" has been upgraded. Happy Helming!
NAME: hello
LAST DEPLOYED: Tue Dec 17 18:04:30 2019
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
[root@k8s-master1 mychart]# kubectl get pods -o wide| grep hello
hello-5b97fb4c85-hssrt 1/1 Running 0 7s 10.244.0.12 k8s-master1 <none> <none>
hello-5b97fb4c85-wlwnj 1/1 Running 0 3s 10.244.1.10 k8s-node1 <none> <none>
hello-5b97fb4c85-x2zp7 1/1 Running 0 4s 10.244.2.11 k8s-node2 <none> <none>
hello-d8ccdfdd8-mqh76 0/1 Terminating 0 8m25s 10.244.1.9 k8s-node1 <none> <none>
hello-d8ccdfdd8-skdjw 1/1 Terminating 0 8m25s 10.244.0.11 k8s-master1 <none> <none>
[root@k8s-master1 mychart]# curl -I 10.244.0.12

Rolling back

  1. If a release does not behave as expected, you can use helm rollback to return to a previous revision.
# 回滚到上一个版本
[root@k8s-master1 mychart]# helm rollback hello
# 回滚到指定版本
[root@k8s-master1 mychart]# helm history hello
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Tue Dec 17 17:56:12 2019 superseded mychart-0.1.0 1.16.0 Install complete
2 Tue Dec 17 18:04:30 2019 superseded mychart-0.1.0 1.16.0 Upgrade complete
3 Tue Dec 17 18:07:56 2019 deployed mychart-0.1.0 1.16.0 Rollback to 1

[root@k8s-master1 mychart]# helm rollback hello 2

Uninstalling

# 卸载
[root@k8s-master1 ~]# helm list
[root@k8s-master1 ~]# helm uninstall hello
release "hello" uninstalled

Packaging

# 可以打包推送的charts仓库共享别人使用。
[root@k8s-master1 ~]# helm package mychart
Successfully packaged chart and saved it to: /root/mychart-0.1.0.tgz

Deep dive into Helm

Chart templates

  1. As noted above, the core of Helm is templating: templated Kubernetes manifest files.
  2. A chart template is essentially a Go template, on top of which Helm adds custom metadata, an extended function library, and programming-style flow control such as conditionals and pipelines, which makes the templates much more expressive.
[root@k8s-master1 ~]# helm create app01
Creating app01
[root@k8s-master1 ~]# cd app01/
# 清除默认生成的模板文件
[root@k8s-master1 templates]# rm -rf *

# 保留变量
# 创建deployment 和 service
[root@k8s-master1 templates]# kubectl create deployment web --image=nginx:1.16 --dry-run -o yaml > deployment.yaml
[root@k8s-master1 templates]# kubectl apply -f deployment.yaml
[root@k8s-master1 templates]# kubectl expose deployment web --port=80 --target-port=80 --dry-run -o yaml > service.yaml
[root@k8s-master1 templates]# kubectl delete deploy web
# 直接部署
[root@k8s-master1 templates]# kubectl apply -f .
deployment.apps/web created
service/web created

[root@k8s-master1 templates]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/db2-mysql-76495946b5-kv76b 1/1 Running 2 7h27m
pod/nfs-client-provisioner-5dd6f66f47-9gb4k 1/1 Running 2 7h31m
pod/web-866f97c649-vrmrm 1/1 Running 0 5s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db2-mysql ClusterIP 10.0.0.159 <none> 3306/TCP 7h27m
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 30h
service/metrics-app ClusterIP 10.0.0.24 <none> 80/TCP 30h
service/web ClusterIP 10.0.0.29 <none> 80/TCP 5s
# helm 部署
[root@k8s-master1 templates]# helm install app01 /root/app01/
NAME: app01
LAST DEPLOYED: Thu Dec 19 08:00:41 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

[root@k8s-master1 templates]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-5dd6f66f47-9gb4k 1/1 Running 3 23h
pod/web-866f97c649-b76cr 1/1 Running 0 5s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 47h
service/metrics-app ClusterIP 10.0.0.24 <none> 80/TCP 46h
service/web ClusterIP 10.0.0.206 <none> 80/TCP 5s

[root@k8s-master1 templates]# curl -I 10.0.0.206
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 19 Dec 2019 00:00:54 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 10:05:00 GMT
Connection: keep-alive
ETag: "5d528b4c-264"
Accept-Ranges: bytes

Using templates dynamically

Using built-in objects

Release.Name      the release name
Release.Namespace the namespace the release is installed into
Release.Service   the service rendering the release (always "Helm")
Release.Revision  the revision number of the release, starting at 1 and incremented on each upgrade
# Deploy another application by changing only the label selector and the image name
# 通过模板渲染
[root@k8s-master1 templates]# vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    chart: {{ .Chart.Name }} # built-in Chart object (fields come from Chart.yaml)
    app: {{ .Release.Name }} # built-in Release object
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }} # defined in values.yaml: replica count
  selector:
    matchLabels:
      app: {{ .Values.label }} # Pod label
  template:
    metadata:
      labels:
        app: {{ .Values.label }} # Pod label
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.imageTag }} # image
        name: {{ .Release.Name }}
[root@k8s-master1 templates]# vim service.yaml 

apiVersion: v1
kind: Service
metadata:
  labels:
    chart: {{ .Chart.Name }}
    app: {{ .Release.Name }}
  name: {{ .Release.Name }}
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    # match the Pod labels
    app: {{ .Values.label }}
# 修改 values.yaml 引用变量的值
[root@k8s-master1 app01]# >values.yaml
[root@k8s-master1 app01]# vim values.yaml

replicas: 3
image: nginx
imageTag: 1.17
label: app01

Debugging and validation

  1. Helm also provides the --dry-run and --debug flags to help you verify that templates are correct.
  2. Passing these flags to helm install prints the computed values and the rendered resource manifests without actually deploying a release (see the sketch below).
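Besides --dry-run, a couple of related checks are useful during chart development (a sketch; web is just an example release name):

# print the computed values and rendered manifests without installing anything
helm install web --dry-run --debug /root/app01/

# render the templates locally without contacting the cluster at all
helm template web /root/app01/

# run static checks against the chart
helm lint /root/app01/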
[root@k8s-master1 app01]# helm install web --dry-run /root/app01/
NAME: web
LAST DEPLOYED: Thu Dec 19 08:44:16 2019
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: app01/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
chart: app01
app: web
name: web
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
# 匹配POD标签
app: app01
---
# Source: app01/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: app01
app: web
name: web
spec:
replicas: 3
selector:
matchLabels:
app: app01
template:
metadata:
labels:
app: app01
spec:
containers:
- image: nginx:1.17
name: web
# 执行验证

[root@k8s-master1 app01]# helm install web /root/app01/
NAME: web
LAST DEPLOYED: Thu Dec 19 08:45:22 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master1 app01]# kubectl get pods,svc,ep
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-5dd6f66f47-9gb4k 1/1 Running 3 24h
pod/web-79dd649678-2q7s9 1/1 Running 0 5s
pod/web-79dd649678-l95t2 1/1 Running 0 5s
pod/web-79dd649678-v9tjv 1/1 Running 0 5s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 47h
service/web ClusterIP 10.0.0.98 <none> 80/TCP 5s

NAME ENDPOINTS AGE
endpoints/kubernetes 172.17.70.251:6443 47h
endpoints/web 10.244.0.43:80,10.244.1.42:80,10.244.2.52:80 5s

[root@k8s-master1 app01]# curl -I 10.0.0.98
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Thu, 19 Dec 2019 00:47:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes
# 升级更新
[root@k8s-master1 app01]# vim values.yaml

replicas: 3
image: nginx
imageTag: 1.16
label: app01

[root@k8s-master1 app01]# helm upgrade web /root/app01/
Release "web" has been upgraded. Happy Helming!
NAME: web
LAST DEPLOYED: Thu Dec 19 08:48:16 2019
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
[root@k8s-master1 app01]# kubectl get pod,svc,ep
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-5dd6f66f47-9gb4k 1/1 Running 3 24h
pod/web-559fb69f57-4gsqj 1/1 Running 0 12s
pod/web-559fb69f57-9hsx2 1/1 Running 0 14s
pod/web-559fb69f57-ghvls 1/1 Running 0 11s
pod/web-79dd649678-l95t2 0/1 Terminating 0 3m7s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 47h
service/metrics-app ClusterIP 10.0.0.24 <none> 80/TCP 47h
service/web ClusterIP 10.0.0.98 <none> 80/TCP 3m7s

NAME ENDPOINTS AGE
endpoints/kubernetes 172.17.70.251:6443 47h
endpoints/web 10.244.0.44:80,10.244.1.43:80,10.244.2.53:80 3m7s # 一组新的pod

[root@k8s-master1 app01]# curl -I 10.0.0.98
HTTP/1.1 200 OK
Server: nginx/1.16.1 # 镜像版本变化
Date: Thu, 19 Dec 2019 00:48:38 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 10:05:00 GMT
Connection: keep-alive
ETag: "5d528b4c-264"
Accept-Ranges: bytes

Using the generic template to deploy a new application

[root@k8s-master1 app01]# vim values.yaml 

replicas: 1
image: lizhenliang/java-demo
imageTag: latest
label: java-demo

[root@k8s-master1 app01]# helm install java-demo /root/app01/
NAME: java-demo
LAST DEPLOYED: Thu Dec 19 08:57:34 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

[root@k8s-master1 app01]# helm install java-demo /root/app01/
NAME: java-demo
LAST DEPLOYED: Thu Dec 19 08:57:34 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master1 app01]# kubectl get pods,svc,ep
NAME READY STATUS RESTARTS AGE
pod/java-demo-89485b486-gfnzd 0/1 ContainerCreating 0 12s
pod/nfs-client-provisioner-5dd6f66f47-9gb4k 1/1 Running 3 24h
pod/web-559fb69f57-4gsqj 1/1 Running 0 9m28s
pod/web-559fb69f57-9hsx2 1/1 Running 0 9m30s
pod/web-559fb69f57-ghvls 1/1 Running 0 9m27s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/java-demo ClusterIP 10.0.0.118 <none> 80/TCP 12s
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 47h
service/metrics-app ClusterIP 10.0.0.24 <none> 80/TCP 47h
service/web ClusterIP 10.0.0.98 <none> 80/TCP 12m

NAME ENDPOINTS AGE
endpoints/kubernetes 172.17.70.251:6443 47h
endpoints/web 10.244.0.44:80,10.244.1.43:80,10.244.2.53:80 12m

Values

  1. The Values object supplies values to the chart templates. Its values come from four sources:
    • the values.yaml file inside the chart package
    • the values.yaml file of a parent chart
    • a custom YAML file passed with -f or --values to helm install or helm upgrade
    • values passed with the --set parameter
  2. Values from the chart's values.yaml can be overridden by a user-supplied values file, which in turn can be overridden by parameters supplied with --set.
# 通过set传值创建
[root@k8s-master1 app01]# helm install web3 --set replicas=1 /root/app01/
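After installing with --set, the values supplied on top of the chart defaults can be checked with helm get values (listed in the command table above); the comment shows the output one would expect here:

helm get values web3
# USER-SUPPLIED VALUES:
# replicas: 1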

Pipelines and functions

# quote 函数增加双引号
chart: {{ quote .Chart.Name }} # 将后面的值作为参数传递给quote函数。

[root@k8s-master1 templates]# helm install web05 --dry-run /root/app01/

...
metadata:
labels:
chart: "app01"
...
# default: fallback values
# The default function lets you specify a fallback value in the template in case a value is not set.
# For example, if you forget to define a value, helm install would fail to create the resource because of the missing field; a default avoids that.
# If test is defined in values.yaml, that value is used; otherwise the default given in the template ("hello") is used.

[root@k8s-master1 templates]# vim deployment.yaml
...
app: {{ quote .Values.label }}
test: {{ default "hello" .Values.test }}
...

[root@k8s-master1 templates]# helm install web05 --dry-run /root/app01/
...
spec:
replicas: 1
selector:
matchLabels:
app: "java-demo"
test: hello
...

Other functions:

Indent: {{ .Values.resources | indent 12 }}
Uppercase: {{ upper .Values.resources }}
Capitalize first letters: {{ title .Values.resources }}
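A small sketch of how these functions combine in a pipeline (assuming .Values.label is set to app01, as in an earlier values.yaml):

# template
labels:
  app: {{ .Values.label | upper | quote }}

# rendered
labels:
  app: "APP01"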

Flow control

The Helm template language provides the following flow-control constructs:

# for handling more complex data and logic
if/else   conditional blocks
with      scope a block to a value
range     loop over lists and maps

if … else

[root@k8s-master1 templates]# vim deployment.yaml 
# If test equals "k8s", the value of devops is 123; otherwise it is 456.
# The condition leaves blank lines in the rendered output, which need to be removed (see below).
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ quote .Values.label }}
test: {{ default "hello" .Values.test }}
{{ if eq .Values.test "k8s"}}
devops: 123
{{ else }}
devops: 456
{{ end }}

[root@k8s-master1 templates]# helm install web05 --dry-run /root/app01/

...
spec:
replicas: 1
selector:
matchLabels:
app: "java-demo"
test: k8s

devops: 123

...
# Remove the blank lines by adding a dash: {{- trims the whitespace (including the newline) before the tag
[root@k8s-master1 templates]# vim deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: {{ quote .Chart.Name }}
app: {{ .Release.Name }}
name: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ quote .Values.label }}
test: {{ default "hello" .Values.test }}
{{- if eq .Values.test "k8s"}}
devops: 123
{{- else }}
devops: 456
{{- end }}
template:
metadata:
labels:
app: {{ .Values.label }}
spec:
containers:
- image: {{ .Values.image }}:{{ .Values.imageTag }}
name: {{ .Release.Name }}


[root@k8s-master1 templates]# helm install web05 --dry-run /root/app01/
...
spec:
replicas: 1
selector:
matchLabels:
app: "java-demo"
test: k8s
devops: 123
template:
metadata:
labels:
app: java-demo
spec:
containers:
- image: lizhenliang/java-demo:latest
name: web05
...

Switching back to the standard example variables

# 使用默认模板
[root@k8s-master1 app01]# cp values.yaml values.yaml_bak
[root@k8s-master1 ~]# helm create /root/app02
[root@k8s-master1 app01]# cp /root/app02/values.yaml .
cp: overwrite ‘./values.yaml’? y
[root@k8s-master1 app01]# vim values.yaml

replicaCount: 1

image:
  repository: nginx
  tag: 1.16
  pullPolicy: IfNotPresent
...
# 修改deployment和service文件的变量引用
[root@k8s-master1 app01]# vim templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
name: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
template:
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
spec:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Release.Name }}

[root@k8s-master1 app01]# vim templates/service.yaml

apiVersion: v1
kind: Service
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
name: {{ .Release.Name }}
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
# 匹配POD标签
app: {{ .Release.Name }}
chart: {{ .Chart.Name }}
# 测试 
[root@k8s-master1 app01]# helm install web05 --dry-run /root/app01/
NAME: web05
LAST DEPLOYED: Thu Dec 19 10:44:59 2019
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: app01/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
chart: app01
app: web05
name: web05
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
# 匹配POD标签
app: web05
chart: app01
---
# Source: app01/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: app01
app: web05
name: web05
spec:
replicas: 1
selector:
matchLabels:
chart: app01
app: web05
template:
metadata:
labels:
chart: app01
app: web05
spec:
containers:
- image: nginx:1.16
name: web05

Conditional resource limits

# Add resource limits to values.yaml
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
# Modify the deployment:
# add a condition so that if values.yaml defines resources they are rendered, and otherwise an empty {} is used.

[root@k8s-master1 app01]# vim templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
name: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
template:
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
spec:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Release.Name }}
{{- if .Values.resources }}
resources:
limits:
cpu: {{ .Values.resources.limits.cpu }}
memory: {{ .Values.resources.limits.memory }}
{{- else }}
resources: {}
{{- end }}
[root@k8s-master1 app01]# helm install web05 --dry-run /root/app01/
NAME: web05
LAST DEPLOYED: Thu Dec 19 11:07:36 2019
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: app01/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
chart: app01
app: web05
name: web05
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
# 匹配POD标签
app: web05
chart: app01
---
# Source: app01/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: app01
app: web05
name: web05
spec:
replicas: 1
selector:
matchLabels:
chart: app01
app: web05
template:
metadata:
labels:
chart: app01
app: web05
spec:
containers:
- image: nginx:1.16
name: web05
resources:
limits:
cpu: 100m
memory: 128Mi
[root@k8s-master1 app01]# kubectl get pods -o wide 
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web05-6bcbfb5fdc-7fwn2 1/1 Running 0 81s 10.244.0.45 k8s-master1 <none> <none>

[root@k8s-master1 app01]# kubectl describe node k8s-master1
...
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default web05-6bcbfb5fdc-7fwn2 100m (5%) 100m (5%) 128Mi (7%) 128Mi (7%) 42s
...

Adding an enable switch

# Add an enable switch: the template tests enable: false | true
resources:
  enable: false
  limits:
    cpu: 100m
    memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
[root@k8s-master1 app01]# vim templates/deployment.yaml 

spec:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Release.Name }}
{{- if .Values.resources.enable }}
resources:
limits:
cpu: {{ .Values.resources.limits.cpu }}
memory: {{ .Values.resources.limits.memory }}
{{- else }}
resources: {}
{{- end }}

Ingress switch

# If ingress is enabled in values.yaml, render the Ingress resource; otherwise skip it.
[root@k8s-master1 templates]# vim ingress.yaml

{{- if .Values.ingress.enabled }}
# everything below is only rendered when .Values.ingress.enabled is true

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /testpath
backend:
serviceName: test
servicePort: 80

{{ end }}

[root@k8s-master1 app01]# vim values.yaml
...
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
...

# Enable it; if enabled is false, the Ingress is not created
ingress:
  enabled: true

[root@k8s-master1 app01]# helm install web /root/app01/
NAME: web
LAST DEPLOYED: Thu Dec 19 11:39:30 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

[root@k8s-master1 app01]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-5dd6f66f47-9gb4k 1/1 Running 3 27h
web-944f58c9c-n2nw9 1/1 Running 0 14s

[root@k8s-master1 app01]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
test-ingress * 80 43s

with: controlling variable scope

[root@k8s-master1 app01]# vim values.yaml

nodeSelector:
  # node labels
  team: a
  gpu: ok
# 正常写法
[root@k8s-master1 ~]# vim /root/app01/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
name: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
template:
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
spec:
{{ if .Values.nodeSelector }}
nodeSelector:
team: {{ .Values.nodeSelector.team }}
gpu: {{ .Values.nodeSelector.gpu }}
{{ end }}
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Release.Name }}
{{- if .Values.resources.enable }}
resources:
limits:
cpu: {{ .Values.resources.limits.cpu }}
memory: {{ .Values.resources.limits.memory }}
{{- else }}
resources: {}
{{- end }}

[root@k8s-master1 app01]# helm install web05 --dry-run /root/app01/
...
# Source: app01/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: app01
app: web05
name: web05
spec:
replicas: 1
selector:
matchLabels:
chart: app01
app: web05
template:
metadata:
labels:
chart: app01
app: web05
spec:
nodeSelector:
team: a
gpu: true
containers:
- image: nginx:1.16
name: web05
resources:
limits:
cpu: 100m
memory: 128Mi
...
# Using with
# Inside the with block, fields are referenced relative to .Values.nodeSelector
spec:
{{- with .Values.nodeSelector }}
nodeSelector:
team: {{ .team }}
gpu: {{ .gpu }}
{{- end }}

...
spec:
nodeSelector:
team: a
gpu: ok
...
# with + toYaml + indentation
# First count how many spaces of indentation are needed.
# | nindent 8 indents by 8 spaces and starts on a new line; indent (without the n) does not add the newline.

spec:
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}

range loops

  1. In the Helm template language, the range keyword is used for looping.
  2. Inside the loop we use a plain ., because the current scope is the loop itself: . refers to the element currently being read.
test:
- 1
- 2
- 3
# Source: app01/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: web05
data:
test: |
1
2
3
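The rendered ConfigMap above could be produced by a template along these lines (a sketch; the file name configmap.yaml and the release name web05 follow the Source comment in the output):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
data:
  test: |
{{- range .Values.test }}
    {{ . }}
{{- end }}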

Variables

  1. Using built-in objects inside a with block
  2. Referencing them directly with $
# Referencing a built-in object inside a with block fails: the scope has changed, so the field is not found
spec:
{{- with .Values.nodeSelector }}
nodeSelector:
app: {{ .Release.Name }}
{{- toYaml . | nindent 8 }}
{{- end }}
# Use $ to reference built-in objects from the root scope inside with
spec:
{{- with .Values.nodeSelector }}
nodeSelector:
app: {{ $.Release.Name }}
{{- toYaml . | nindent 8 }}
{{- end }}

...
nodeSelector:
app: web05
gpu: ok
team: a
...
# Alternatively, assign the value to a variable before entering the with block
spec:
{{- $releaseName := .Release.Name -}}
{{- with .Values.nodeSelector }}
nodeSelector:
app: {{ $releaseName }}
{{- toYaml . | nindent 8 }}
{{- end }}
# Using variables with range
# The plain way, without a loop:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Release.Name }}
env:
- name: TEST
value: "123" # env var values must be quoted strings
# Incorrect reference: iterating with a bare . repeats the env: key and loses the variable names
{{- range .Values.env }}
env:
- name: {{ . }}
value: {{ . }}
{{- end }}

...
env:
- name: -Xmx1g
value: -Xmx1g
env:
- name: JAVA
value: JAVA

...
# Correct reference: destructure each map entry into $key and $val
env:
{{- range $key,$val := .Values.env }}
- name: {{ $key }}
value: {{ $val }}
{{- end }}

...
env:
- name: JAVA_OPTS
value: -Xmx1g
- name: NAME
value: JAVA
...
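The values.yaml entry assumed by this loop is a map rather than a list; judging from the rendered output it would look roughly like this:

env:
  JAVA_OPTS: -Xmx1g
  NAME: JAVA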

Named templates

  1. Named templates are defined with define and included with template. In the templates directory, files starting with an underscore (such as _helpers.tpl) are treated as shared helper templates by default.
  2. Put repeatedly used blocks of code into named templates.
# 定义
[root@k8s-master1 templates]# vim _helpers.tpl

{{- define "name" -}}
{{- .Chart.Name -}}-{{ .Release.Name }}
{{- end -}}
# 引用 
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
chart: {{ .Chart.Name }}
app: {{ .Release.Name }}
name: {{ template "name" . }}
# include, unlike template, can be piped through functions.
# The named template is written flush-left; indentation is added with nindent at the point where it is included.

[root@k8s-master1 templates]# vim _helpers.tpl

{{- define "name" -}}
{{- .Chart.Name -}}-{{ .Release.Name }}
{{- end -}}

{{- define "labels" -}}
app: {{ template "name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
{{- end -}}

# 引入
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
{{- include "labels" . | nindent 4 }}
name: {{ template "name" . }}
...


# 结果
...
# Source: app01/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app01-web05
chart: "app01-0.1.0"
release: "web05"
name: app01-web05
...

Developing your own chart: a Java application example

  1. First create the scaffold:
helm create demo
  2. Modify Chart.yaml and values.yaml, adding the commonly used variables.
  3. In the templates directory, create the YAML files needed to deploy the image, and turn the frequently changing fields into variable references.

Creating the scaffold directory

[root@k8s-master1 opt]# cd /opt/
[root@k8s-master1 opt]# helm create demo
Creating demo

[root@k8s-master1 demo]# cd /opt/demo/
[root@k8s-master1 demo]# mkdir bak
[root@k8s-master1 demo]# cp Chart.yaml values.yaml bak/

# 清理文件注释
[root@k8s-master1 demo]# egrep -v '#|^$' /opt/demo/bak/Chart.yaml > Chart.yaml

Editing Chart.yaml

[root@k8s-master1 demo]# vim Chart.yaml 

apiVersion: v2
name: demo
description: My java demo
type: application
# chart 版本
version: 0.1.0
# app 版本
appVersion: 1.16.0

Editing values.yaml

# 只保留用到的变量 
[root@k8s-master1 demo]# vim values.yaml

# Default values for demo.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
repository: nginx
pullPolicy: IfNotPresent

imagePullSecrets: []

service:
type: ClusterIP
port: 80

ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local

resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi

nodeSelector: {}

tolerations: []

Preparing the application YAML files

[root@k8s-master1 templates]# rm -rf tests/
[root@k8s-master1 templates]# rm -rf serviceaccount.yaml
[root@k8s-master1 templates]# ls
deployment.yaml _helpers.tpl ingress.yaml NOTES.txt service.yaml
# Three shared helper templates
# _helpers.tpl defines helpers used by the service, deployment, and ingress templates
name: {{ include "demo.fullname" . }}

# deployment labels
{{- include "demo.labels" . | nindent 4 }}

# label selector, kept consistent with the service selector
{{- include "demo.selectorLabels" . | nindent 8 }}
# 打包
[root@k8s-master1 opt]# helm package demo/

# 再启动一个
[root@k8s-master1 opt]# cp demo/values.yaml ./
[root@k8s-master1 opt]# helm install web2 -f values.yaml demo-0.1.0.tgz

Using Harbor as a Chart repository