Kubernetes 详解:云原生时代的容器编排与管理
一 Kubernetes 简介及部署方法
1.1 应用部署方式演变
在部署应用程序的方式上,主要经历了三个阶段:
传统部署:互联网早期,会直接将应用程序部署在物理机上
- 优点:简单,不需要其它技术的参与
- 缺点:不能为应用程序定义资源使用边界,很难合理地分配计算资源,而且程序之间容易产生影响
虚拟化部署:可以在一台物理机上运行多个虚拟机,每个虚拟机都是独立的一个环境
- 优点:程序环境不会相互产生影响,提供了一定程度的安全性
- 缺点:增加了操作系统,浪费了部分资源
容器化部署:与虚拟化类似,但是共享了操作系统
容器化部署方式给带来很多的便利,但是也会出现一些问题,比如说:
- 一个容器故障停机了,怎么样让另外一个容器立刻启动去替补停机的容器
- 当并发访问量变大的时候,怎么样做到横向扩展容器数量
1.2 容器编排应用
为了解决这些容器编排问题,就产生了一些容器编排的软件:
- Swarm:Docker自己的容器编排工具
- Mesos:Apache的一个资源统一管控的工具,需要和Marathon结合使用
- Kubernetes:Google开源的容器编排工具
1.3 kubernetes 简介
- 在Docker 作为高级容器引擎快速发展的同时,在Google内部,容器技术已经应用了很多年
- Borg系统运行管理着成千上万的容器应用。
- Kubernetes项目来源于Borg,可以说是集结了Borg设计思想的精华,并且吸收了Borg系统中的经验和教训。
- Kubernetes对计算资源进行了更高层次的抽象,通过将容器进行细致的组合,将最终的应用服务交给用户。
kubernetes的本质是一组服务器集群,它可以在集群的每个节点上运行特定的程序,来对节点中的容器进行管理。目的是实现资源管理的自动化,主要提供了如下的主要功能:
- 自我修复:一旦某一个容器崩溃,能够在1秒钟左右迅速启动新的容器
- 弹性伸缩:可以根据需要,自动对集群中正在运行的容器数量进行调整
- 服务发现:服务可以通过自动发现的形式找到它所依赖的服务
- 负载均衡:如果一个服务启动了多个容器,能够自动实现请求的负载均衡
- 版本回退:如果发现新发布的程序版本有问题,可以立即回退到原来的版本
- 存储编排:可以根据容器自身的需求自动创建存储卷
1.4 K8S的设计架构
1.4.1 K8S各个组件用途
一个kubernetes集群主要是由控制节点(master)和工作节点(node)构成,每个节点上都会安装不同的组件
1 master:集群的控制平面,负责集群的决策
- ApiServer : 资源操作的唯一入口,接收用户输入的命令,提供认证、授权、API注册和发现等机制
- Scheduler : 负责集群资源调度,按照预定的调度策略将Pod调度到相应的node节点上
- ControllerManager : 负责维护集群的状态,比如程序部署安排、故障检测、自动扩展、滚动更新等
- Etcd :负责存储集群中各种资源对象的信息
2 node:集群的数据平面,负责为容器提供运行环境
- kubelet:负责维护容器的生命周期,同时也负责Volume(CVI)和网络(CNI)的管理
- Container runtime:负责镜像管理以及Pod和容器的真正运行(CRI)
- kube-proxy:负责为Service提供cluster内部的服务发现和负载均衡
1.4.2 K8S 各组件之间的调用关系
当我们要运行一个web服务时
kubernetes环境启动之后,master和node都会将自身的信息存储到etcd数据库中
web服务的安装请求会首先被发送到master节点的apiServer组件
apiServer组件会调用scheduler组件来决定到底应该把这个服务安装到哪个node节点上
在此时,它会从etcd中读取各个node节点的信息,然后按照一定的算法进行选择,并将结果告知apiServer
apiServer调用controller-manager去调度Node节点安装web服务
kubelet接收到指令后,会通知docker,然后由docker来启动一个web服务的pod
如果需要访问web服务,就需要通过kube-proxy来对pod产生访问的代理
1.4.3 K8S 的常用名词概念
- Master:集群控制节点,每个集群需要至少一个master节点负责集群的管控
- Node:工作负载节点,由master分配容器到这些node工作节点上,然后由node节点上的容器运行时负责容器的运行
- Pod:kubernetes的最小控制单元,容器都是运行在pod中的,一个pod中可以有1个或者多个容器
- Controller:控制器,通过它来实现对pod的管理,比如启动pod、停止pod、伸缩pod的数量等等
- Service:pod对外服务的统一入口,下面可以维护着同一类的多个pod
- Label:标签,用于对pod进行分类,同一类pod会拥有相同的标签
- NameSpace:命名空间,用来隔离pod的运行环境
1.4.4 k8S的分层架构
- 核心层:Kubernetes最核心的功能,对外提供API构建高层的应用,对内提供插件式应用执行环境
- 应用层:部署(无状态应用、有状态应用、批处理任务、集群应用等)和路由(服务发现、DNS解析等)
- 管理层:系统度量(如基础设施、容器和网络的度量),自动化(如自动扩展、动态Provision等)以及策略管理(RBAC、Quota、PSP、NetworkPolicy等)
- 接口层:kubectl命令行工具、客户端SDK以及集群联邦
- 生态系统:在接口层之上的庞大容器集群管理调度的生态系统,可以划分为两个范畴
- Kubernetes外部:日志、监控、配置管理、CI、CD、Workflow、FaaS、OTS应用、ChatOps等
- Kubernetes内部:CRI、CNI、CVI、镜像仓库、Cloud Provider、集群自身的配置和管理等
二 K8S集群环境搭建
2.1 k8s中容器的管理方式
K8S 集群中容器的管理方式有3种:
containerd
默认情况下,K8S在创建集群时使用的方式
docker
Docker的使用普及率最高,虽然K8S在1.24版本后已经移除了kubelet对docker的直接支持,但仍可以借助cri-dockerd方式来实现集群创建
cri-o
CRI-O的方式是Kubernetes创建容器最直接的一种方式,在创建集群的时候,需要借助于cri-o插件的方式来实现Kubernetes集群的创建。
注意:docker 和 cri-o 这两种方式要对kubelet程序的启动参数进行设置
2.2 k8s 集群部署
2.2.1 k8s 环境部署说明
K8S中文官网:
主机名 | ip | 角色 |
---|---|---|
harbor | 192.168.121.200 | harbor仓库 |
master | 192.168.121.100 | master,k8s集群控制节点 |
node1 | 192.168.121.10 | worker,k8s集群工作节点 |
node2 | 192.168.121.20 | worker,k8s集群工作节点 |
- 所有节点禁用selinux和防火墙
- 所有节点同步时间和解析
- 所有节点安装docker-ce
- 所有节点禁用swap,注意注释掉/etc/fstab文件中的定义
2.2.2 集群环境初始化
2.2.2.1.配置时间同步
在配置 Kubernetes(或任何分布式系统)时间同步时,通常会选一台主机作为“内部时间服务器(NTP Server)”,这台主机本身会先从公网或更上层 NTP 服务器同步时间,然后再让集群中的其他节点作为客户端,同步到这台内部 server,从而保证整个集群时间一致、可靠、高效。
这里我选择harbor作为server
(1)server配置
下载chrony用于时间同步
[root@harbor ~]#yum install chrony
修改配置文件,允许其他主机跟server同步时间
[root@harbor ~]# cat /etc/chrony.conf
# Allow NTP client access from local network.
allow 192.168.121.0/24
(2)client配置
全部下载chrony
[root@master+node1+node2 ~]#yum install chrony
修改配置文件
[root@master+node1+node2 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (https://www.pool.ntp.org/join.html).
#pool 2.rhel.pool.ntp.org iburst
server 192.168.121.200 iburst
查看当前系统通过 chrony 服务同步时间的时间源列表及同步状态
[root@master+node1+node2 ~]# chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.121.200 3 6 377 14 -54us[ -100us] +/- 17ms
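如果想进一步确认同步是否生效,还可以在客户端上查看与时间源的偏差和同步状态(以下仅为检查思路,输出因环境而异):
[root@master+node1+node2 ~]# chronyc tracking #查看与时间源的偏移量及同步状态
[root@master+node1+node2 ~]# timedatectl status #确认System clock synchronized为yes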
2.2.2.2.所有节点禁用swap和设置本地域名解析
]# systemctl mask swap.target
]# swapoff -a
]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Feb 19 17:38:40 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=ddb06c77-c9da-4e92-afd7-53cd76e6a94a /boot xfs defaults 0 0
#/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/cdrom /media iso9660 defaults 0 0
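修改完成后可以快速验证swap是否已全部关闭(仅作检查示例,无输出即表示没有启用的swap):
]# swapon --show #没有任何输出说明swap已关闭
]# free -m | grep -i swap #Swap一行应全部为0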
~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.121.200 reg.timingy.org #这里是你的harbor仓库域名
192.168.121.100 master
192.168.121.10 node1
192.168.121.20 node2
2.2.2.3.所有节点安装docker
~]# vim /etc/yum.repos.d/docker.repo
[docker]
name=docker
baseurl=https://mirrors.aliyun.com/docker-ce/linux/rhel/9/x86_64/stable/
gpgcheck=0
~]# dnf install docker-ce -y
~]# cat /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true
#--iptables=true是 Docker 的一个启动参数,表示让 Docker 自动管理系统的 iptables 规则,用于实现端口映射、容器网络通信等功能。默认开启,一般不要修改,否则可能导致网络功能(如端口转发)失效。
~]# systemctl enable --now docker
2.2.2.4.harbor仓库搭建和设置registry加密传输
[root@harbor packages]# ll
total 3621832
-rw-r--r-- 1 root root 131209386 Aug 23 2024 1panel-v1.10.13-lts-linux-amd64.tar.gz
-rw-r--r-- 1 root root 4505600 Aug 26 2024 busybox-latest.tar.gz
-rw-r--r-- 1 root root 211699200 Aug 26 2024 centos-7.tar.gz
-rw-r--r-- 1 root root 22456832 Aug 26 2024 debian11.tar.gz
-rw-r--r-- 1 root root 693103681 Aug 26 2024 docker-images.tar.gz
-rw-r--r-- 1 root root 57175040 Aug 26 2024 game2048.tar.gz
-rw-r--r-- 1 root root 102946304 Aug 26 2024 haproxy-2.3.tar.gz
-rw-r--r-- 1 root root 738797440 Aug 17 2024 harbor-offline-installer-v2.5.4.tgz
-rw-r--r-- 1 root root 207404032 Aug 26 2024 mario.tar.gz
-rw-r--r-- 1 root root 519596032 Aug 26 2024 mysql-5.7.tar.gz
-rw-r--r-- 1 root root 146568704 Aug 26 2024 nginx-1.23.tar.gz
-rw-r--r-- 1 root root 191849472 Aug 26 2024 nginx-latest.tar.gz
-rw-r--r-- 1 root root 574838784 Aug 26 2024 phpmyadmin-latest.tar.gz
-rw-r--r-- 1 root root 26009088 Aug 17 2024 registry.tag.gz
drwxr-xr-x 2 root root 277 Aug 23 2024 rpm
-rw-r--r-- 1 root root 80572416 Aug 26 2024 ubuntu-latest.tar.gz
#解压Harbor 私有镜像仓库的离线安装包
[root@harbor packages]# tar zxf harbor-offline-installer-v2.5.4.tgz
[root@harbor packages]# mkdir -p /data/certs
生成一个有效期为 365 天的自签名 HTTPS 证书(timingy.org.crt)和对应的私钥(timingy.org.key),该证书可用于域名 reg.timingy.org,私钥不加密,密钥长度 4096 位,使用 SHA-256 签名。
[root@harbor packages]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /data/certs/timingy.org.key --addext "subjectAltName = DNS:reg.timingy.org" -x509 -days 365 -out /data/certs/timingy.org.crt
Common Name (eg, your name or your server's hostname) []:reg.timingy.org #这里域名不能填错
[root@harbor harbor]# ls
common.sh harbor.v2.5.4.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml #复制模板文件为harbor.yml
harbor.yml是 Harbor 的核心配置文件,你通过编辑它来定义 Harbor 的访问域名、是否启用 HTTPS、管理员密码、数据存储位置等关键信息。编辑完成后,运行 ./install.sh即可基于该配置完成 Harbor 的安装部署。
[root@harbor harbor]# vim harbor.yml
hostname: reg.timingy.org #harbor仓库域名
https: #https设置
# https port for harbor, default is 443
port: 443
certificate: /data/certs/timingy.org.crt #公钥位置
private_key: /data/certs/timingy.org.key #私钥位置
harbor_admin_password: 123 #harbor仓库admin用户密码
#在使用 Harbor 离线安装脚本时,显式要求安装并启用 ChartMuseum 组件,用于支持 Helm Chart(Kubernetes 应用包)的存储与管理
[root@harbor harbor]# ./install.sh --with-chartmuseum
#为集群中的多个 Docker 节点配置私有镜像仓库的信任证书,解决 HTTPS 访问时的证书验证问题,保证镜像拉取流程正常。
[root@harbor ~]# for i in 100 200 10 20
> do
> ssh -l root 192.168.121.$i mkdir -p /etc/docker/certs.d/reg.timingy.org
> scp /data/certs/timingy.org.crt root@192.168.121.$i:/etc/docker/certs.d/reg.timingy.org/ca.crt
> done
#设置搭建的harbor仓库为docker默认仓库(所有主机)
~]# vim /etc/docker/daemon.json
{
"registry-mirrors":["https://reg.timingy.org"]
}
#重启docker让配置生效
~]# systemctl restart docker.service
#查看docker信息
~]# docker info
Registry Mirrors:
https://reg.timingy.org/
Harbor 仓库的启动本质上就是通过 docker-compose 按照你配置的参数(源自 harbor.yml)来拉起一组 Docker 容器,组成完整的 Harbor 服务。当你执行 Harbor 的安装脚本 ./install.sh 时,大致经历以下三步:
1. 读取你的配置:harbor.yml
你之前编辑的 harbor.yml 文件是 Harbor 的核心配置文件,用于定义如下内容:
- Harbor 的访问域名(hostname)
- 是否启用 HTTPS,以及证书和私钥路径
- 数据存储目录(data_volume)
- 是否启用 ChartMuseum(用于 Helm Chart 存储)
- 管理员密码等
它决定了 Harbor 的运行方式,例如使用什么域名访问、是否启用加密、数据存放在哪里等。
2. 生成 docker-compose 配置 & 加载 Docker 镜像
install.sh 脚本会根据 harbor.yml 中的配置:
- 自动生成一份 docker-compose.yml 文件(通常在内部目录如 ./make/ 下生成,不直接展示给用户)
- 将 Harbor 所需的各个服务(如 UI、Registry、数据库、Redis、ChartMuseum 等)打包为 Docker 镜像
- 如果你使用的是离线安装包,这些镜像通常已经包含在包中,无需联网下载
- 脚本会将这些镜像通过 docker load 命令加载到本地 Docker 环境中
3. 调用 docker-compose 启动服务
最终,install.sh 会调用类似于下面的命令(或内部等效逻辑)来启动 Harbor 服务:
docker-compose up -d
该命令会根据生成的 docker-compose.yml 配置,以后台模式启动 Harbor 所需的多个容器,这些容器共同构成了一个完整的 Harbor 私有镜像仓库服务,支持镜像管理、用户权限、Helm Chart 存储等功能。
在harbor安装目录下启用harbor仓库
访问测试
点击高级–>继续访问
登录后创建公开项目k8s用于k8s集群搭建
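安装完成后,可以在harbor主机上确认各组件容器是否都处于运行状态,并在任意节点上测试能否登录仓库。以下仅为验证思路示例,docker-compose需在harbor安装目录下执行,登录密码即harbor.yml中设置的admin密码(这里是123):
[root@harbor harbor]# docker-compose ps #各服务STATUS应为Up(healthy)
[root@master ~]# docker login reg.timingy.org #提示Login Succeeded即说明证书和仓库配置正常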
2.2.2.5 安装K8S部署工具
#部署软件仓库,添加K8S源
~]# vim /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm
gpgcheck=0
#安装软件
~]# dnf install kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 -y
2.2.2.6 设置kubectl命令补齐功能
[root@k8s-master ~]# dnf install bash-completion -y
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc
2.2.2.7 在所有节点安装cri-docker
k8s从1.24版本开始移除了dockershim,所以需要安装cri-docker插件才能使用docker
软件下载:
下载docker连接插件及其依赖(让k8s支持docker容器):
所有节点~] #dnf install libcgroup-0.41-19.el8.x86_64.rpm \
> cri-dockerd-0.3.14-3.el8.x86_64.rpm -y
所有节点~]# cat /lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
#指定网络插件名称及基础容器镜像
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=reg.timingy.org/k8s/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
所有节点~]# systemctl daemon-reload
所有节点~]# systemctl enable --now cri-docker
所有节点~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 Aug 20 21:44 /var/run/cri-dockerd.sock #cri-dockerd的套接字文件
2.2.2.8 在master节点拉取K8S所需镜像
方法1.在线拉取
#拉取k8s集群所需要的镜像
[root@k8s-master ~]# kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
#上传镜像到harbor仓库
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" reg.timingy.org/k8s/"$3)}'
[root@k8s-master ~]# docker images | awk '/k8s/{system("docker push "$1":"$2)}'
方法2:离线导入
[root@master k8s-img]# ll
total 650320
-rw-r--r-- 1 root root 84103168 Aug 20 21:55 flannel-0.25.5.tag.gz
-rw-r--r-- 1 root root 581815296 Aug 20 21:55 k8s_docker_images-1.30.tar
-rw-r--r-- 1 root root 4406 Aug 20 21:55 kube-flannel.yml
#导入镜像
[root@master k8s-img]# docker load -i k8s_docker_images-1.30.tar
3d6fa0469044: Loading layer 327.7kB/327.7kB
49626df344c9: Loading layer 40.96kB/40.96kB
945d17be9a3e: Loading layer 2.396MB/2.396MB
4d049f83d9cf: Loading layer 1.536kB/1.536kB
af5aa97ebe6c: Loading layer 2.56kB/2.56kB
ac805962e479: Loading layer 2.56kB/2.56kB
bbb6cacb8c82: Loading layer 2.56kB/2.56kB
2a92d6ac9e4f: Loading layer 1.536kB/1.536kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
f4aee9e53c42: Loading layer 3.072kB/3.072kB
b336e209998f: Loading layer 238.6kB/238.6kB
06ddf169d3f3: Loading layer 1.69MB/1.69MB
c0cb02961a3c: Loading layer 112.9MB/112.9MB
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
7b631378e22a: Loading layer 107.4MB/107.4MB
Loaded image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
62baa24e327e: Loading layer 58.3MB/58.3MB
Loaded image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
3113ebfbe4c2: Loading layer 28.35MB/28.35MB
f76f3fb0cfaa: Loading layer 57.58MB/57.58MB
Loaded image: registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
e023e0e48e6e: Loading layer 327.7kB/327.7kB
6fbdf253bbc2: Loading layer 51.2kB/51.2kB
7bea6b893187: Loading layer 3.205MB/3.205MB
ff5700ec5418: Loading layer 10.24kB/10.24kB
d52f02c6501c: Loading layer 10.24kB/10.24kB
e624a5370eca: Loading layer 10.24kB/10.24kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
d2d7ec0f6756: Loading layer 10.24kB/10.24kB
4cb10dd2545b: Loading layer 225.3kB/225.3kB
aec96fc6d10e: Loading layer 217.1kB/217.1kB
545a68d51bc4: Loading layer 57.16MB/57.16MB
Loaded image: registry.aliyuncs.com/google_containers/coredns:v1.11.1
e3e5579ddd43: Loading layer 746kB/746kB
Loaded image: registry.aliyuncs.com/google_containers/pause:3.9
54ad2ec71039: Loading layer 327.7kB/327.7kB
6fbdf253bbc2: Loading layer 51.2kB/51.2kB
accc3e6808c0: Loading layer 3.205MB/3.205MB
ff5700ec5418: Loading layer 10.24kB/10.24kB
d52f02c6501c: Loading layer 10.24kB/10.24kB
e624a5370eca: Loading layer 10.24kB/10.24kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
d2d7ec0f6756: Loading layer 10.24kB/10.24kB
4cb10dd2545b: Loading layer 225.3kB/225.3kB
a9f9fc6d48ba: Loading layer 2.343MB/2.343MB
b48a138a7d6b: Loading layer 124.2MB/124.2MB
b4b40553581c: Loading layer 20.36MB/20.36MB
Loaded image: registry.aliyuncs.com/google_containers/etcd:3.5.12-0
#打标签
[root@master k8s-img]# docker images | awk '/google/{print $1":"$2}' | awk -F / '{system("docker tag "$0" reg.timingy.org/k8s/"$3)}'
[root@master k8s-img]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
reg.timingy.org/k8s/kube-apiserver v1.30.0 c42f13656d0b 16 months ago 117MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.30.0 c42f13656d0b 16 months ago 117MB
reg.timingy.org/k8s/kube-controller-manager v1.30.0 c7aad43836fa 16 months ago 111MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.30.0 c7aad43836fa 16 months ago 111MB
reg.timingy.org/k8s/kube-scheduler v1.30.0 259c8277fcbb 16 months ago 62MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.30.0 259c8277fcbb 16 months ago 62MB
reg.timingy.org/k8s/kube-proxy v1.30.0 a0bf559e280c 16 months ago 84.7MB
#推送镜像
[root@master k8s-img]# docker images | awk '/timingy/{system("docker push " $1":"$2)}'
2.2.2.9 集群初始化
#执行初始化命令
[root@master k8s-img]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
> --image-repository reg.timingy.org/k8s \
> --kubernetes-version v1.30.0 \
> --cri-socket=unix:///var/run/cri-dockerd.sock
#指定集群配置文件变量
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master k8s-img]# source ~/.bash_profile
#当前节点没有就绪,因为还没有安装网络插件,容器没有运行
[root@master k8s-img]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 3m v1.30.0
[root@master k8s-img]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c677d6c78-7n96p 0/1 Pending 0 3m2s
kube-system coredns-7c677d6c78-jp6c5 0/1 Pending 0 3m2s
kube-system etcd-master 1/1 Running 0 3m16s
kube-system kube-apiserver-master 1/1 Running 0 3m18s
kube-system kube-controller-manager-master 1/1 Running 0 3m16s
kube-system kube-proxy-rjzl9 1/1 Running 0 3m2s
kube-system kube-scheduler-master 1/1 Running 0 3m16s
Note:
在此阶段如果生成的集群token找不到了可以重新生成
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.121.100:6443 --token slx36w.np3pg2xzfhtj8hsr --discovery-token-ca-cert-hash sha256:29389ead6392e0bb1f68adb025e3a6817c9936a26f9140f8a166528e521addb3 --cri-socket=unix:///var/run/cri-dockerd.sock
2.2.2.10 安装flannel网络插件
官方网站:
#下载flannel的yaml部署文件
[root@k8s-master ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
#下载镜像:
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1
#注意:需要先在harbor中建立flannel公开项目
[root@master k8s-img]# docker tag flannel/flannel:v0.25.5 reg.timingy.org/flannel/flannel:v0.25.5
[root@master k8s-img]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 reg.timingy.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
#推送
[root@master k8s-img]# docker push reg.timingy.org/flannel/flannel:v0.25.5
[root@master k8s-img]# docker push reg.timingy.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
#修改yml配置文件中的镜像地址:官方镜像带有docker.io前缀,删掉该前缀即可,docker会从默认仓库(也就是我们配置的harbor仓库)拉取镜像
[root@master k8s-img]# vim kube-flannel.yml
image: flannel/flannel:v0.25.5
image: flannel/flannel-cni-plugin:v1.5.1-flannel1
image: flannel/flannel:v0.25.5
[root@master k8s-img]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
#查看pods运行情况
[root@master k8s-img]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-jqz8p 1/1 Running 0 15s
kube-system coredns-7c677d6c78-7n96p 1/1 Running 0 18m
kube-system coredns-7c677d6c78-jp6c5 1/1 Running 0 18m
kube-system etcd-master 1/1 Running 0 18m
kube-system kube-apiserver-master 1/1 Running 0 18m
kube-system kube-controller-manager-master 1/1 Running 0 18m
kube-system kube-proxy-rjzl9 1/1 Running 0 18m
kube-system kube-scheduler-master 1/1 Running 0 18m
#查看节点是否ready
[root@master k8s-img]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 18m v1.30.0
2.2.2.11 节点扩容
在所有的worker节点中
1 确认部署好以下内容
2 禁用swap
3 安装:
- kubelet-1.30.0
- kubeadm-1.30.0
- kubectl-1.30.0
- docker-ce
- cri-dockerd
4 修改cri-dockerd启动文件添加
- --network-plugin=cni
- --pod-infra-container-image=reg.timingy.org/k8s/pause:3.9
5 启动服务
- kubelet.service
- cri-docker.service
以上信息确认完毕后即可加入集群
[root@master k8s-img]# kubeadm token create --print-join-command
kubeadm join 192.168.121.100:6443 --token p3kfyl.ljipmtsklr21r9ah --discovery-token-ca-cert-hash sha256:e01d3ac26e5c7b3100487dae6e14ce16e49f183b1b35f18cacd2be8006177293
[root@node1 ~]# kubeadm join 192.168.121.100:6443 --token p3kfyl.ljipmtsklr21r9ah --discovery-token-ca-cert-hash sha256:e01d3ac26e5c7b3100487dae6e14ce16e49f183b1b35f18cacd2be8006177293 --cri-socket=unix:///var/run/cri-dockerd.sock
[root@node2 ~]# kubeadm join 192.168.121.100:6443 --token p3kfyl.ljipmtsklr21r9ah --discovery-token-ca-cert-hash sha256:e01d3ac26e5c7b3100487dae6e14ce16e49f183b1b35f18cacd2be8006177293 --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2.50498435s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
在master节点中查看所有node的状态
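参考输出如下(AGE等字段以实际环境为准):
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   30m   v1.30.0
node1    Ready    <none>          2m    v1.30.0
node2    Ready    <none>          2m    v1.30.0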
Note:
所有节点的STATUS为Ready状态,那么恭喜你,你的kubernetes就装好了!!
测试集群运行情况
[root@harbor ~]# cd packages/
[root@harbor packages]# ll
total 3621832
-rw-r--r-- 1 root root 131209386 Aug 23 2024 1panel-v1.10.13-lts-linux-amd64.tar.gz
-rw-r--r-- 1 root root 4505600 Aug 26 2024 busybox-latest.tar.gz
-rw-r--r-- 1 root root 211699200 Aug 26 2024 centos-7.tar.gz
-rw-r--r-- 1 root root 22456832 Aug 26 2024 debian11.tar.gz
-rw-r--r-- 1 root root 693103681 Aug 26 2024 docker-images.tar.gz
-rw-r--r-- 1 root root 57175040 Aug 26 2024 game2048.tar.gz
-rw-r--r-- 1 root root 102946304 Aug 26 2024 haproxy-2.3.tar.gz
drwxr-xr-x 3 root root 180 Aug 20 20:06 harbor
-rw-r--r-- 1 root root 738797440 Aug 17 2024 harbor-offline-installer-v2.5.4.tgz
-rw-r--r-- 1 root root 207404032 Aug 26 2024 mario.tar.gz
-rw-r--r-- 1 root root 519596032 Aug 26 2024 mysql-5.7.tar.gz
-rw-r--r-- 1 root root 146568704 Aug 26 2024 nginx-1.23.tar.gz
-rw-r--r-- 1 root root 191849472 Aug 26 2024 nginx-latest.tar.gz
-rw-r--r-- 1 root root 574838784 Aug 26 2024 phpmyadmin-latest.tar.gz
-rw-r--r-- 1 root root 26009088 Aug 17 2024 registry.tag.gz
drwxr-xr-x 2 root root 277 Aug 23 2024 rpm
-rw-r--r-- 1 root root 80572416 Aug 26 2024 ubuntu-latest.tar.gz
#加载压缩包为镜像
[root@harbor packages]# docker load -i nginx-latest.tar.gz
#打标签并推送
[root@harbor packages]# docker tag nginx:latest reg.timingy.org/library/nginx:latest
[root@harbor packages]# docker push reg.timingy.org/library/nginx:latest
#建立一个pod
[root@master k8s-img]# kubectl run test --image=nginx
pod/test created
#查看pod状态
[root@master k8s-img]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 35s
#删除pod
[root@master k8s-img]# kubectl delete pod test
pod "test" deleted
三 kubernetes 中的资源
3.1 资源管理介绍
- 在kubernetes中,所有的内容都抽象为资源,用户需要通过操作资源来管理kubernetes。
- kubernetes的本质上就是一个集群系统,用户可以在集群中部署各种服务
- 所谓的部署服务,其实就是在kubernetes集群中运行一个个的容器,并将指定的程序跑在容器中。
- kubernetes的最小管理单元是pod而不是容器,只能将容器放在Pod中
- kubernetes一般也不会直接管理Pod,而是通过Pod控制器来管理Pod的
- Pod中服务的访问是由kubernetes提供的Service资源来实现
- Pod中程序的数据需要持久化时,由kubernetes提供的各种存储系统来实现
3.2 资源管理方式
命令式对象管理:直接使用命令去操作kubernetes资源
kubectl run nginx-pod --image=nginx:latest --port=80
命令式对象配置:通过命令配置和配置文件去操作kubernetes资源
kubectl create/patch -f nginx-pod.yaml
声明式对象配置:通过apply命令和配置文件去操作kubernetes资源
kubectl apply -f nginx-pod.yaml
类型 | 适用环境 | 优点 | 缺点 |
---|---|---|---|
命令式对象管理 | 测试 | 简单 | 只能操作活动对象,无法审计、跟踪 |
命令式对象配置 | 开发 | 可以审计、跟踪 | 项目大时,配置文件多,操作麻烦 |
声明式对象配置 | 开发 | 支持目录操作 | 意外情况下难以调试 |
3.2.1 命令式对象管理
kubectl是kubernetes集群的命令行工具,通过它能够对集群本身进行管理,并能够在集群上进行容器化应用的安装部署
kubectl命令的语法如下:
kubectl [command] [type] [name] [flags]
command:指定要对资源执行的操作,例如create、get、delete
type:指定资源类型,比如deployment、pod、service
name:指定资源的名称,名称大小写敏感
flags:指定额外的可选参数
# 查看所有pod
kubectl get pod
# 查看某个pod
kubectl get pod pod_name
# 查看某个pod,以yaml格式展示结果
kubectl get pod pod_name -o yaml
3.2.2 资源类型
kubernetes中所有的内容都抽象为资源
kubectl api-resources
常用资源类型
kubectl 常见命令操作
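这里先列出一组最常用的kubectl操作作为参考(命令本身通用,资源名以实际为准):
kubectl get pod #查看资源
kubectl describe pod pod_name #查看资源的详细信息和事件
kubectl logs pod_name #查看pod中容器的日志
kubectl exec -it pod_name -- /bin/sh #进入pod中的容器
kubectl delete pod pod_name #删除资源
kubectl api-resources #列出集群支持的所有资源类型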
3.2.3 基本命令示例
kubectl的详细说明地址:
[root@master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
#显示集群信息
[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.121.100:6443
CoreDNS is running at https://192.168.121.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
#创建一个webcluster控制器,控制器中pod数量为2
[root@master ~]# kubectl create deployment webcluster --image nginx --replicas 2
#查看控制器
[root@master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 2/2 2 2 22s
#查看资源帮助
[root@master ~]# kubectl explain deployment
GROUP: apps
KIND: Deployment
VERSION: v1
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <ObjectMeta>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <DeploymentSpec>
Specification of the desired behavior of the Deployment.
status <DeploymentStatus>
Most recently observed status of the Deployment.
#查看控制器参数帮助
[root@master ~]# kubectl explain deployment.spec
GROUP: apps
KIND: Deployment
VERSION: v1
FIELD: spec <DeploymentSpec>
DESCRIPTION:
Specification of the desired behavior of the Deployment.
DeploymentSpec is the specification of the desired behavior of the
Deployment.
FIELDS:
minReadySeconds <integer>
Minimum number of seconds for which a newly created pod should be ready
without any of its container crashing, for it to be considered available.
Defaults to 0 (pod will be considered available as soon as it is ready)
paused <boolean>
Indicates that the deployment is paused.
progressDeadlineSeconds <integer>
The maximum time in seconds for a deployment to make progress before it is
considered to be failed. The deployment controller will continue to process
failed deployments and a condition with a ProgressDeadlineExceeded reason
will be surfaced in the deployment status. Note that progress will not be
estimated during the time a deployment is paused. Defaults to 600s.
replicas <integer>
Number of desired pods. This is a pointer to distinguish between explicit
zero and not specified. Defaults to 1.
revisionHistoryLimit <integer>
The number of old ReplicaSets to retain to allow rollback. This is a pointer
to distinguish between explicit zero and not specified. Defaults to 10.
selector <LabelSelector> -required-
Label selector for pods. Existing ReplicaSets whose pods are selected by
this will be the ones affected by this deployment. It must match the pod
template's labels.
strategy <DeploymentStrategy>
The deployment strategy to use to replace existing pods with new ones.
template <PodTemplateSpec> -required-
Template describes the pods that will be created. The only allowed
template.spec.restartPolicy value is "Always".
#编辑控制器配置
[root@master ~]# kubectl edit deployments.apps webcluster
@@@@省略内容@@@@@@
spec:
progressDeadlineSeconds: 600
replicas: 3 #pods数量改为3
@@@@省略内容@@@@@@
#查看控制器
[root@master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 3/3 3 3 4m56s
#利用补丁更改控制器配置
[root@master ~]# kubectl patch deployments.apps webcluster -p '{"spec":{"replicas":4}}'
deployment.apps/webcluster patched
[root@master ~]# kubectl get deployments.apps webcluster
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 4/4 4 4 7m6s
#删除资源
[root@master ~]# kubectl delete deployments.apps webcluster
deployment.apps "webcluster" deleted
[root@master ~]# kubectl get deployments.apps
No resources found in default namespace.
3.2.4 运行和调试命令示例
#拷贝文件到pod中
[root@master ~]# kubectl cp anaconda-ks.cfg nginx:/
[root@master ~]# kubectl exec -it pods/nginx /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/# ls
anaconda-ks.cfg boot docker-entrypoint.d etc lib media opt root sbin sys usr
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
#拷贝pod中的文件到本机
[root@master ~]# kubectl cp nginx:/anaconda-ks.cfg ./
tar: Removing leading `/' from member names
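除了kubectl cp和kubectl exec,调试时还经常用到日志查看和端口转发(示例,pod名以实际为准):
#查看pod中容器的日志
[root@master ~]# kubectl logs pods/nginx
#把pod的80端口临时转发到本机8080,便于在master上直接调试访问
[root@master ~]# kubectl port-forward pods/nginx 8080:80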
3.2.5 高级命令示例
#利用命令生成yaml模板文件
[root@master ~]# kubectl create deployment webcluster --image nginx --dry-run=client -o yaml > webcluster.yml
#利用yaml文件生成资源
(删除不需要的配置后)
[root@master podsManager]# cat webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webcluster
name: webcluster
spec:
replicas: 1
selector:
matchLabels:
app: webcluster
template:
metadata:
labels:
app: webcluster
spec:
containers:
- image: nginx
name: nginx
#利用 YAML 文件定义并创建 Kubernetes 资源
[root@master podsManager]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created
#查看控制器
[root@master podsManager]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 1/1 1 1 6s
#删除资源
[root@master podsManager]# kubectl delete -f webcluster.yml
deployment.apps "webcluster" deleted
#管理资源标签
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 102s run=nginx
[root@master podsManager]# kubectl label pods nginx app=xxy
pod/nginx labeled
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 2m47s app=xxy,run=nginx
#更改标签
[root@master podsManager]# kubectl label pods nginx app=webcluster --overwrite
pod/nginx labeled
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 5m55s app=webcluster,run=nginx
#删除标签
[root@master podsManager]# cat webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webcluster
name: webcluster
spec:
replicas: 2
selector:
matchLabels:
app: webcluster
template:
metadata:
labels:
app: webcluster
spec:
containers:
- image: nginx
name: nginx
[root@master podsManager]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webcluster-7c584f774b-7ncbj 1/1 Running 0 15s app=webcluster,pod-template-hash=7c584f774b
webcluster-7c584f774b-gxktm 1/1 Running 0 15s app=webcluster,pod-template-hash=7c584f774b
#删除pod上的标签
[root@master podsManager]# kubectl label pods webcluster-7c584f774b-7ncbj app-
pod/webcluster-7c584f774b-7ncbj unlabeled
#控制器会重新启动新pod
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webcluster-7c584f774b-52bbq 1/1 Running 0 26s app=webcluster,pod-template-hash=7c584f774b
webcluster-7c584f774b-7ncbj 1/1 Running 0 4m28s pod-template-hash=7c584f774b
webcluster-7c584f774b-gxktm 1/1 Running 0 4m28s app=webcluster,pod-template-hash=7c584f774b
四 pod
4.1 什么是pod
- Pod是可以在Kubernetes中创建和管理的最小可部署计算单元
- 一个Pod代表着集群中运行的一个进程,每个pod都有一个唯一的ip。
- 一个pod类似一个豌豆荚,包含一个或多个容器(通常是docker)
- 多个容器间共享IPC、Network和UTS namespace。
4.1.1 创建自主式pod (生产不推荐)
优点:
灵活性高:
- 可以精确控制 Pod 的各种配置参数,包括容器的镜像、资源限制、环境变量、命令和参数等,满足特定的应用需求。
学习和调试方便:
- 对于学习 Kubernetes 的原理和机制非常有帮助,通过手动创建 Pod 可以深入了解 Pod 的结构和配置方式。在调试问题时,可以更直接地观察和调整 Pod 的设置。
适用于特殊场景:
- 在一些特殊情况下,如进行一次性任务、快速验证概念或在资源受限的环境中进行特定配置时,手动创建 Pod 可能是一种有效的方式。
缺点:
管理复杂:
- 如果需要管理大量的 Pod,手动创建和维护会变得非常繁琐和耗时。难以实现自动化的扩缩容、故障恢复等操作。
缺乏高级功能:
- 无法自动享受 Kubernetes 提供的高级功能,如自动部署、滚动更新、服务发现等。这可能导致应用的部署和管理效率低下。
#查看所有pods(当前namespace)
[root@master podsManager]# kubectl get pods
No resources found in default namespace.
#建立一个名为timingy的pod
[root@master podsManager]# kubectl run timingy --image nginx
pod/timingy created
[root@master podsManager]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy 1/1 Running 0 5s
#显示pod的较为详细的信息
[root@master podsManager]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timingy 1/1 Running 0 14s 10.244.1.5 node1 <none> <none>
4.1.2 利用控制器管理pod(推荐)
高可用性和可靠性:
- 自动故障恢复:如果一个 Pod 失败或被删除,控制器会自动创建新的 Pod 来维持期望的副本数量。确保应用始终处于可用状态,减少因单个 Pod 故障导致的服务中断。
- 健康检查和自愈:可以配置控制器对 Pod 进行健康检查(如存活探针和就绪探针)。如果 Pod 不健康,控制器会采取适当的行动,如重启 Pod 或删除并重新创建它,以保证应用的正常运行。
可扩展性:
- 轻松扩缩容:可以通过简单的命令或配置更改来增加或减少 Pod 的数量,以满足不同的工作负载需求。例如,在高流量期间可以快速扩展以处理更多请求,在低流量期间可以缩容以节省资源。
- 水平自动扩缩容(HPA):可以基于自定义指标(如 CPU 利用率、内存使用情况或应用特定的指标)自动调整 Pod 的数量,实现动态的资源分配和成本优化。
版本管理和更新:
- 滚动更新:对于 Deployment 等控制器,可以执行滚动更新来逐步替换旧版本的 Pod 为新版本,确保应用在更新过程中始终保持可用。可以控制更新的速率和策略,以减少对用户的影响。
- 回滚:如果更新出现问题,可以轻松回滚到上一个稳定版本,保证应用的稳定性和可靠性。
声明式配置:
- 简洁的配置方式:使用 YAML 或 JSON 格式的声明式配置文件来定义应用的部署需求。这种方式使得配置易于理解、维护和版本控制,同时也方便团队协作。
- 期望状态管理:只需要定义应用的期望状态(如副本数量、容器镜像等),控制器会自动调整实际状态与期望状态保持一致。无需手动管理每个 Pod 的创建和删除,提高了管理效率。
服务发现和负载均衡:
- 自动注册和发现:Kubernetes 中的服务(Service)可以自动发现由控制器管理的 Pod,并将流量路由到它们。这使得应用的服务发现和负载均衡变得简单和可靠,无需手动配置负载均衡器。
- 流量分发:可以根据不同的策略(如轮询、随机等)将请求分发到不同的 Pod,提高应用的性能和可用性。
多环境一致性:
- 一致的部署方式:在不同的环境(如开发、测试、生产)中,可以使用相同的控制器和配置来部署应用,确保应用在不同环境中的行为一致。这有助于减少部署差异和错误,提高开发和运维效率。
示例:
#建立控制器并自动运行pod
[root@master ~]# kubectl create deployment timingy --image nginx
deployment.apps/timingy created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy-5bb68ff8f9-swfjk 1/1 Running 0 22s
#为timingy扩容
[root@master ~]# kubectl scale deployment timingy --replicas 6
deployment.apps/timingy scaled
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy-5bb68ff8f9-8gc4z 1/1 Running 0 4m15s
timingy-5bb68ff8f9-hvn2j 1/1 Running 0 4m15s
timingy-5bb68ff8f9-mr48h 1/1 Running 0 4m15s
timingy-5bb68ff8f9-nsf4g 1/1 Running 0 4m15s
timingy-5bb68ff8f9-pnmk2 1/1 Running 0 4m15s
timingy-5bb68ff8f9-swfjk 1/1 Running 0 5m20s
#为timingy缩容
[root@master ~]# kubectl scale deployment timingy --replicas 2
deployment.apps/timingy scaled
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy-5bb68ff8f9-hvn2j 1/1 Running 0 5m5s
timingy-5bb68ff8f9-swfjk 1/1 Running 0 6m10s
4.1.3 应用版本的更新
#利用控制器建立pod
[root@master ~]# kubectl create deployment timingy --image myapp:v1 --replicas 2
deployment.apps/timingy created
#暴露端口
[root@master ~]# kubectl expose deployment timingy --port 80 --target-port 80
service/timingy exposed
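下面curl访问的地址是Service分配到的ClusterIP,可以先查询出来(CLUSTER-IP一列即为访问地址,IP以实际分配为准):
[root@master ~]# kubectl get services timingy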
#访问服务
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
#查看历史版本
[root@master ~]# kubectl rollout history deployment timingy
deployment.apps/timingy
REVISION CHANGE-CAUSE
1 <none>
#更新控制器镜像版本
[root@master ~]# kubectl set image deployments/timingy myapp=myapp:v2
deployment.apps/timingy image updated
#查看历史版本
[root@master ~]# kubectl rollout history deployment timingy
deployment.apps/timingy
REVISION CHANGE-CAUSE
1 <none>
2 <none>
#访问内容测试
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
#版本回滚
[root@master ~]# kubectl rollout undo deployment timingy --to-revision 1
deployment.apps/timingy rolled back
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
#不过还是建议在yaml文件中修改镜像版本
4.1.4 利用yaml文件部署应用
4.1.4.1 用yaml文件部署应用有以下优点
声明式配置:
- 清晰表达期望状态:以声明式的方式描述应用的部署需求,包括副本数量、容器配置、网络设置等。这使得配置易于理解和维护,并且可以方便地查看应用的预期状态。
- 可重复性和版本控制:配置文件可以被版本控制,确保在不同环境中的部署一致性。可以轻松回滚到以前的版本或在不同环境中重复使用相同的配置。
- 团队协作:便于团队成员之间共享和协作,大家可以对配置文件进行审查和修改,提高部署的可靠性和稳定性。
灵活性和可扩展性:
- 丰富的配置选项:可以通过 YAML 文件详细地配置各种 Kubernetes 资源,如 Deployment、Service、ConfigMap、Secret 等。可以根据应用的特定需求进行高度定制化。
- 组合和扩展:可以将多个资源的配置组合在一个或多个 YAML 文件中,实现复杂的应用部署架构。同时,可以轻松地添加新的资源或修改现有资源以满足不断变化的需求。
与工具集成:
- 与 CI/CD 流程集成:可以将 YAML 配置文件与持续集成和持续部署(CI/CD)工具集成,实现自动化的应用部署。例如,可以在代码提交后自动触发部署流程,使用配置文件来部署应用到不同的环境。
- 命令行工具支持:Kubernetes 的命令行工具
kubectl
对 YAML 配置文件有很好的支持,可以方便地应用、更新和删除配置。同时,还可以使用其他工具来验证和分析 YAML 配置文件,确保其正确性和安全性。
4.1.4.2 资源清单参数
参数名称 | 类型 | 参数说明 |
---|---|---|
apiVersion | String | 这里指的是K8S API的版本,目前基本上是v1,可以用kubectl api-versions命令查询 |
kind | String | 这里指的是yaml文件定义的资源类型和角色,比如:Pod |
metadata | Object | 元数据对象,固定值就写metadata |
metadata.name | String | 元数据对象的名字,这里由我们编写,比如命名Pod的名字 |
metadata.namespace | String | 元数据对象的命名空间,由我们自身定义 |
spec | Object | 详细定义对象,固定值就写spec |
spec.containers[] | list | 这里是Spec对象的容器列表定义,是个列表 |
spec.containers[].name | String | 这里定义容器的名字 |
spec.containers[].image | string | 这里定义要用到的镜像名称 |
spec.containers[].imagePullPolicy | String | 定义镜像拉取策略,有三个值可选: (1) Always: 每次都尝试重新拉取镜像 (2) IfNotPresent:如果本地有镜像就使用本地镜像 (3) Never:表示仅使用本地镜像 |
spec.containers[].command[] | list | 指定容器运行时启动的命令,若未指定则运行容器打包时指定的命令 |
spec.containers[].args[] | list | 指定容器运行参数,可以指定多个 |
spec.containers[].workingDir | String | 指定容器工作目录 |
spec.containers[].volumeMounts[] | list | 指定容器内部的存储卷配置 |
spec.containers[].volumeMounts[].name | String | 指定可以被容器挂载的存储卷的名称 |
spec.containers[].volumeMounts[].mountPath | String | 指定可以被容器挂载的存储卷的路径 |
spec.containers[].volumeMounts[].readOnly | String | 设置存储卷路径的读写模式,true或false,默认为读写模式 |
spec.containers[].ports[] | list | 指定容器需要用到的端口列表 |
spec.containers[].ports[].name | String | 指定端口名称 |
spec.containers[].ports[].containerPort | String | 指定容器需要监听的端口号 |
spec.containers[].ports[].hostPort | String | 指定容器所在主机需要监听的端口号,默认跟上面containerPort相同,注意设置了hostPort后同一台主机无法启动该容器的相同副本(因为主机的端口号不能相同,这样会冲突) |
spec.containers[].ports[].protocol | String | 指定端口协议,支持TCP和UDP,默认值为 TCP |
spec.containers[].env[] | list | 指定容器运行前需设置的环境变量列表 |
spec.containers[].env[].name | String | 指定环境变量名称 |
spec.containers[].env[].value | String | 指定环境变量值 |
spec.containers[].resources | Object | 指定资源限制和资源请求的值(这里开始就是设置容器的资源上限) |
spec.containers[].resources.limits | Object | 指定设置容器运行时资源的运行上限 |
spec.containers[].resources.limits.cpu | String | 指定CPU的限制,单位为核心数,1=1000m |
spec.containers[].resources.limits.memory | String | 指定MEM内存的限制,单位为MIB、GiB |
spec.containers[].resources.requests | Object | 指定容器启动和调度时的限制设置 |
spec.containers[].resources.requests.cpu | String | CPU请求,单位为core数,容器启动时初始化可用数量 |
spec.containers[].resources.requests.memory | String | 内存请求,单位为MIB、GIB,容器启动的初始化可用数量 |
spec.restartPolicy | string | 定义Pod的重启策略,默认值为Always. (1)Always: Pod一旦终止运行,无论容器是如何终止的,kubelet服务都将重启它 (2)OnFailure: 只有Pod以非零退出码终止时,kubelet才会重启该容器。如果容器正常结束(退出码为0),则kubelet将不会重启它 (3) Never: Pod终止后,kubelet将退出码报告给Master,不会重启该容器 |
spec.nodeSelector | Object | 定义Node的Label过滤标签,以key:value格式指定 |
spec.imagePullSecrets | Object | 定义pull镜像时使用secret名称,以name:secretkey格式指定 |
spec.hostNetwork | Boolean | 定义是否使用主机网络模式,默认值为false。设置true表示使用宿主机网络,不使用docker网桥,同时设置了true将无法在同一台宿主机 上启动第二个副本 |
4.1.4.3 如何获得资源帮助
kubectl explain pod.spec.containers
4.1.4.4 编写示例
4.1.4.4.1 示例1:运行简单的单个容器pod
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timing #pod标签
name: timinglee #pod名称
spec:
containers:
- image: myapp:v1 #pod镜像
name: timinglee #容器名称
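编写完成后按下面方式创建并确认pod运行(示例):
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created
[root@k8s-master ~]# kubectl get pods -o wide #确认STATUS为Running以及被调度到的节点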
4.1.4.4.2 示例2:运行多个容器pod
**注意:**如果多个容器运行在一个pod中,资源共享的同时,在使用相同资源时也会相互干扰,比如端口
#一个端口干扰示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timing
name: timinglee
spec:
containers:
- image: nginx:latest
name: web1
- image: nginx:latest
name: web2
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee 1/2 Error 1 (14s ago) 18s
#查看日志
[root@k8s-master ~]# kubectl logs timinglee web2
2024/08/31 12:43:20 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2024/08/31 12:43:20 [notice] 1#1: try again to bind() after 500ms
2024/08/31 12:43:20 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
**注意:**在一个pod中开启多个容器时一定要确保容器彼此不能互相干扰
[root@k8s-master ~]# vim pod.yml
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created
apiVersion: v1
kind: Pod
metadata:
labels:
run: timing
name: timinglee
spec:
containers:
- image: nginx:latest
name: web1
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","sleep 1000000"]
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee 2/2 Running 0 19s
4.1.4.4.3 示例3:理解pod间的网络整合
同在一个pod中的容器公用一个网络
[root@master podsManager]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
containers:
- image: myapp:v1
name: myapp1
- image: busyboxplus:latest
name: busyboxplus
command: ["/bin/sh","-c","sleep 1000000"]
[root@master podsManager]# kubectl apply -f pod.yml
pod/test created
[root@master podsManager]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 18s
[root@master podsManager]# kubectl exec test -c busyboxplus -- curl -s localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
可以看到同一个pod里容器共享一个网络
4.1.4.4.4 示例4:端口映射
[root@master podsManager]# cat 1-pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
containers:
- image: myapp:v1
name: myapp1
ports:
- name: http
containerPort: 80
hostPort: 80 #映射端口到被调度的节点的真实网卡ip上
protocol: TCP
[root@master podsManager]# kubectl apply -f 1-pod.yml
pod/test created
#测试
[root@master podsManager]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 69s 10.244.104.48 node2 <none> <none>
[root@master podsManager]# curl node2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
4.1.4.4.5 示例5:如何设定环境变量
[root@master podsManager]# cat 2-pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
containers:
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","echo $NAME;sleep 3000000"]
env:
- name: NAME
value: timinglee
[root@master podsManager]# kubectl apply -f 2-pod.yml
pod/test created
[root@master podsManager]# kubectl logs pods/test busybox
timinglee
4.1.4.4.6 示例6:资源限制
资源限制会影响pod的Qos Class资源优先级,资源优先级分为Guaranteed > Burstable > BestEffort
QoS(Quality of Service)即服务质量
资源设定 | 优先级类型 |
---|---|
资源限定未设定 | BestEffort |
资源限定设定且最大和最小不一致 | Burstable |
资源限定设定且最大和最小一致 | Guaranteed |
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
containers:
- image: myapp:v1
name: myapp
resources:
limits: #pod使用资源的最高限制
cpu: 500m
memory: 100M
requests: #pod期望使用资源量,不能大于limits
cpu: 500m
memory: 100M
root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 3s
[root@k8s-master ~]# kubectl describe pods test
Limits:
cpu: 500m
memory: 100M
Requests:
cpu: 500m
memory: 100M
QoS Class: Guaranteed
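作为对比,如果把requests设置得比limits小(即资源限定设定且最大和最小不一致),QoS Class就会变为Burstable。下面是一个假设的资源配置片段:
    resources:
      limits:
        cpu: 1
        memory: 200M
      requests: #requests小于limits,最大和最小不一致
        cpu: 500m
        memory: 100M
应用后再执行kubectl describe pods,可以看到QoS Class一项变为Burstable。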
4.1.4.4.7 示例7 容器启动管理
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
restartPolicy: Always
containers:
- image: myapp:v1
name: myapp
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 6s 10.244.2.3 k8s-node2 <none> <none>
[root@k8s-node2 ~]# docker rm -f ccac1d64ea81
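由于restartPolicy为Always,容器被强制删除后kubelet会立刻重新拉起它,pod的RESTARTS计数随之加1(验证示例):
[root@k8s-master ~]# kubectl get pods #RESTARTS一列会从0变为1,STATUS仍为Running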
4.1.4.4.8 示例8 选择运行节点
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
nodeSelector:
kubernetes.io/hostname: k8s-node1
restartPolicy: Always
containers:
- image: myapp:v1
name: myapp
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 21s 10.244.1.5 k8s-node1 <none> <none>
4.1.4.4.9 示例9 共享宿主机网络
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timinglee
name: test
spec:
hostNetwork: true #共享宿主机网络
restartPolicy: Always
containers:
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","sleep 100000"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl exec -it pods/test -c busybox -- /bin/sh
/ # ifconfig
cni0 Link encap:Ethernet HWaddr E6:D4:AA:81:12:B4
inet addr:10.244.2.1 Bcast:10.244.2.255 Mask:255.255.255.0
inet6 addr: fe80::e4d4:aaff:fe81:12b4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:6259 errors:0 dropped:0 overruns:0 frame:0
TX packets:6495 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:506704 (494.8 KiB) TX bytes:625439 (610.7 KiB)
docker0 Link encap:Ethernet HWaddr 02:42:99:4A:30:DC
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:0C:29:6A:A8:61
inet addr:172.25.254.20 Bcast:172.25.254.255 Mask:255.255.255.0
inet6 addr: fe80::8ff3:f39c:dc0c:1f0e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:27858 errors:0 dropped:0 overruns:0 frame:0
TX packets:14454 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:26591259 (25.3 MiB) TX bytes:1756895 (1.6 MiB)
flannel.1 Link encap:Ethernet HWaddr EA:36:60:20:12:05
inet addr:10.244.2.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::e836:60ff:fe20:1205/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:40 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:163 errors:0 dropped:0 overruns:0 frame:0
TX packets:163 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13630 (13.3 KiB) TX bytes:13630 (13.3 KiB)
veth9a516531 Link encap:Ethernet HWaddr 7A:92:08:90:DE:B2
inet6 addr: fe80::7892:8ff:fe90:deb2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:6236 errors:0 dropped:0 overruns:0 frame:0
TX packets:6476 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:592532 (578.6 KiB) TX bytes:622765 (608.1 KiB)
/ # exit
默认情况下,K8s 的 Pod 会有独立的网络命名空间(即独立的 IP、网卡、端口等),与宿主机(运行 K8s 的服务器)网络隔离。而通过设置 hostNetwork: true,Pod 会放弃独立网络,直接使用宿主机的网络命名空间,相当于 Pod 内的容器和宿主机"共用一套网卡、IP 和端口"。
4.2 pod的生命周期
4.2.1 INIT 容器
官方文档:
Pod 可以包含多个容器,应用运行在这些容器里面,同时 Pod 也可以有一个或多个先于应用容器启动的 Init 容器。
Init 容器与普通的容器非常像,除了如下两点:
- 它们总是运行到完成
- init 容器不支持 Readiness,因为它们必须在 Pod 就绪之前运行完成,每个 Init 容器必须运行成功,下一个才能够运行。
如果Pod的 Init 容器失败,Kubernetes 会不断地重启该 Pod,直到 Init 容器成功为止。但是,如果 Pod 对应的 restartPolicy 值为 Never,它不会重新启动。
4.2.1.1 INIT 容器的功能
- Init 容器可以包含一些安装过程中应用容器中不存在的实用工具或个性化代码。
- Init 容器可以安全地运行这些工具,避免这些工具导致应用镜像的安全性降低。
- 应用镜像的创建者和部署者可以各自独立工作,而没有必要联合构建一个单独的应用镜像。
- Init 容器能以不同于Pod内应用容器的文件系统视图运行。因此,Init容器可具有访问 Secrets 的权限,而应用容器不能够访问。
- 由于 Init 容器必须在应用容器启动之前运行完成,因此 Init 容器提供了一种机制来阻塞或延迟应用容器的启动,直到满足了一组先决条件。一旦前置条件满足,Pod内的所有的应用容器会并行启动。
4.2.1.2 INIT 容器示例
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: initpod
name: initpod
spec:
containers:
- image: myapp:v1
name: myapp
initContainers:
- name: init-myservice
image: busybox
command: ["sh","-c","until test -e /testfile;do echo wating for myservice; sleep 2;done"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/initpod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 0/1 Init:0/1 0 3s
[root@k8s-master ~]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
[root@k8s-master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 1/1 Running 0 62s
4.2.2 探针
探针是由 kubelet 对容器执行的定期诊断:
- ExecAction:在容器内执行指定命令。如果命令退出时返回码为 0 则认为诊断成功。
- TCPSocketAction:对指定端口上的容器的 IP 地址进行 TCP 检查。如果端口打开,则诊断被认为是成功的。
- HTTPGetAction:对指定的端口和路径上的容器的 IP 地址执行 HTTP Get 请求。如果响应的状态码大于等于200 且小于 400,则诊断被认为是成功的。
每次探测都将获得以下三种结果之一:
- 成功:容器通过了诊断。
- 失败:容器未通过诊断。
- 未知:诊断失败,因此不会采取任何行动。
kubelet 可以选择性地对运行中的容器执行以下三种探针,并做出反应:
- livenessProbe:指示容器是否正在运行。如果存活探测失败,则 kubelet 会杀死容器,并且容器将受到其重启策略的影响。如果容器不提供存活探针,则默认状态为 Success。
- readinessProbe:指示容器是否准备好服务请求。如果就绪探测失败,端点控制器将从与 Pod 匹配的所有 Service 的端点中删除该 Pod 的 IP 地址。初始延迟之前的就绪状态默认为 Failure。如果容器不提供就绪探针,则默认状态为 Success。
- startupProbe: 指示容器中的应用是否已经启动。如果提供了启动探测(startup probe),则禁用所有其他探测,直到它成功为止。如果启动探测失败,kubelet 将杀死容器,容器服从其重启策略进行重启。如果容器没有提供启动探测,则默认状态为成功Success。
ReadinessProbe 与 LivenessProbe 的区别
- ReadinessProbe 当检测失败后,将 Pod 的 IP:Port 从对应的 EndPoint 列表中删除。
- LivenessProbe 当检测失败后,将杀死容器并根据 Pod 的重启策略来决定作出对应的措施
StartupProbe 与 ReadinessProbe、LivenessProbe 的区别
- 如果三个探针同时存在,先执行 StartupProbe 探针,其他两个探针将会被暂时禁用,直到 pod 满足 StartupProbe 探针配置的条件,其他 2 个探针启动,如果不满足按照规则重启容器。
- 另外两种探针在容器启动后,会按照配置持续探测,直到容器消亡才停止;而 StartupProbe 探针只是在容器启动后按照配置满足一次后,不再进行后续的探测。
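文档后面只演示了存活探针和就绪探针,这里补充一个启动探针的假设配置片段,写法与其他探针一致,常用于启动较慢的应用:
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30 #最多允许连续失败30次
      periodSeconds: 2 #每2秒探测一次,即最多给应用约60秒的启动时间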
4.2.2.1 探针实例
4.2.1.1.1 存活探针示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: liveness
name: liveness
spec:
containers:
- image: myapp:v1
name: myapp
livenessProbe:
tcpSocket: #检测端口存在性
port: 8080
initialDelaySeconds: 3 #容器启动后要等待多少秒后就探针开始工作,默认是 0
periodSeconds: 1 #执行探测的时间间隔,默认为 10s
timeoutSeconds: 1 #探针执行检测请求后,等待响应的超时时间,默认为 1s
#测试:
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/liveness created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness 0/1 CrashLoopBackOff 2 (7s ago) 22s
[root@k8s-master ~]# kubectl describe pods
Warning Unhealthy 1s (x9 over 13s) kubelet Liveness probe failed: dial tcp 10.244.2.6:8080: connect: connection refused
4.2.2.1.2 就绪探针示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: readiness
name: readiness
spec:
containers:
- image: myapp:v1
name: myapp
readinessProbe:
httpGet:
path: /test.html
port: 80
initialDelaySeconds: 1
periodSeconds: 3
timeoutSeconds: 1
#测试:
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 5m25s
[root@k8s-master ~]# kubectl describe pods readiness
Warning Unhealthy 26s (x66 over 5m43s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
[root@k8s-master ~]# kubectl describe services readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.171.244
IPs: 10.100.171.244
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: #没有暴露端点,就绪探针探测不通过,不满足暴露条件
Session Affinity: None
Events: <none>
[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 1/1 Running 0 7m49s
[root@k8s-master ~]# kubectl describe services readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.171.244
IPs: 10.100.171.244
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.8:80 #满足条件,端点已暴露
Session Affinity: None
Events: <none>
五 控制器
5.1 什么是控制器
官方文档:
控制器也是管理pod的一种手段
- 自主式pod:pod退出或意外关闭后不会被重新创建
- 控制器管理的 Pod:在控制器的生命周期里,始终要维持 Pod 的副本数目
Pod控制器是管理pod的中间层,使用Pod控制器之后,只需要告诉Pod控制器,想要多少个什么样的Pod就可以了,它会创建出满足条件的Pod并确保每一个Pod资源处于用户期望的目标状态。如果Pod资源在运行中出现故障,它会基于指定策略重新编排Pod
当建立控制器后,会把期望值写入etcd,k8s中的apiserver检索etcd中我们保存的期望状态,并对比pod的当前状态,如果出现差异,控制器会自驱动地立即将实际状态恢复为期望状态
5.2 控制器常用类型
控制器名称 | 控制器用途 |
---|---|
Replication Controller | 比较原始的pod控制器,已经被废弃,由ReplicaSet替代 |
ReplicaSet | ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行 |
Deployment | 一个 Deployment 为 Pod 和 ReplicaSet 提供声明式的更新能力 |
DaemonSet | DaemonSet 确保全部(或指定)节点上运行一个 Pod 的副本 |
StatefulSet | StatefulSet 是用来管理有状态应用的工作负载 API 对象。 |
Job | 执行批处理任务,仅执行一次任务,保证任务的一个或多个Pod成功结束 |
CronJob | Cron Job 创建基于时间调度的 Jobs。 |
HPA全称Horizontal Pod Autoscaler | 根据资源利用率自动调整service中Pod数量,实现Pod水平自动缩放 |
5.3 replicaset控制器
5.3.1 replicaset功能
- ReplicaSet 是下一代的 Replication Controller,官方推荐使用ReplicaSet
- ReplicaSet和Replication Controller的唯一区别是选择器的支持,ReplicaSet支持新的基于集合的选择器需求
- ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行
- 虽然 ReplicaSets 可以独立使用,但今天它主要被Deployments 用作协调 Pod 创建、删除和更新的机制
5.3.2 replicaset参数说明
参数名称 | 字段类型 | 参数说明 |
---|---|---|
spec | Object | 详细定义对象,固定值就写Spec |
spec.replicas | integer | 指定维护pod数量 |
spec.selector | Object | Selector是对pod的标签查询,与pod数量匹配 |
spec.selector.matchLabels | string | 指定Selector查询标签的名称和值,以key:value方式指定 |
spec.template | Object | 指定对pod的描述信息,比如lab标签,运行容器的信息等 |
spec.template.metadata | Object | 指定pod属性 |
spec.template.metadata.labels | string | 指定pod标签 |
spec.template.spec | Object | 详细定义对象 |
spec.template.spec.containers | list | Spec对象的容器列表定义 |
spec.template.spec.containers.name | string | 指定容器名称 |
spec.template.spec.containers.image | string | 指定容器镜像 |
#生成yml文件
[root@k8s-master ~]# kubectl create deployment replicaset --image myapp:v1 --dry-run=client -o yaml > replicaset.yml
[root@k8s-master ~]# vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: replicaset #指定replicaset名称,必须小写,如果出现大写会报错
spec:
replicas: 2 #指定维护pod数量为2
selector: #指定检测匹配方式
matchLabels: #指定匹配方式为匹配标签
app: myapp #指定匹配的标签为app=myapp
template: #模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
[root@k8s-master ~]# kubectl apply -f replicaset.yml
replicaset.apps/replicaset created
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-l4xnr 1/1 Running 0 96s app=myapp
replicaset-t2s5p 1/1 Running 0 96s app=myapp
#replicaset是通过标签匹配pod
[root@k8s-master ~]# kubectl label pod replicaset-l4xnr app=xie --overwrite
pod/replicaset-l4xnr labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-gd5fh 1/1 Running 0 2s app=myapp #新开启的pod
replicaset-l4xnr 1/1 Running 0 3m19s app=xie
replicaset-t2s5p 1/1 Running 0 3m19s app=myapp
#恢复标签后
[root@k8s2 pod]# kubectl label pod replicaset-example-q2sq9 app-
[root@k8s2 pod]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-example-q2sq9 1/1 Running 0 3m14s app=nginx
replicaset-example-th24v 1/1 Running 0 3m14s app=nginx
replicaset-example-w7zpw 1/1 Running 0 3m14s app=nginx
#replicaset自动控制副本数量,pod可以自愈
[root@k8s-master ~]# kubectl delete pods replicaset-t2s5p
pod "replicaset-t2s5p" deleted
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-l4xnr 1/1 Running 0 5m43s app=myapp
replicaset-nxmr9 1/1 Running 0 15s app=myapp
回收资源
[root@k8s2 pod]# kubectl delete -f rs-example.yml
5.4 deployment 控制器
5.4.1 deployment控制器的功能
- 为了更好的解决服务编排的问题,kubernetes在V1.2版本开始,引入了Deployment控制器。
- Deployment控制器并不直接管理pod,而是通过管理ReplicaSet来间接管理Pod
- Deployment管理ReplicaSet,ReplicaSet管理Pod
- Deployment 为 Pod 和 ReplicaSet 提供了一个声明式的定义方法
- 在Deployment中ReplicaSet相当于一个版本
典型的应用场景:
- 用来创建Pod和ReplicaSet
- 滚动更新和回滚
- 扩容和缩容
- 暂停与恢复
5.4.2 deployment控制器示例
#生成yaml文件
[root@k8s-master ~]# kubectl create deployment deployment --image myapp:v1 --dry-run=client -o yaml > deployment.yml
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
#建立pod
root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
#查看pod信息
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-5d886954d4-2ckqw 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-m8gpd 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-s7pws 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-wqnvv 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
5.4.2.1 版本迭代
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-5d886954d4-2ckqw 1/1 Running 0 2m40s 10.244.2.14 k8s-node2 <none> <none>
deployment-5d886954d4-m8gpd 1/1 Running 0 2m40s 10.244.1.17 k8s-node1 <none> <none>
deployment-5d886954d4-s7pws 1/1 Running 0 2m40s 10.244.1.16 k8s-node1 <none> <none>
deployment-5d886954d4-wqnvv 1/1 Running 0 2m40s 10.244.2.15 k8s-node2 <none> <none>
#pod运行容器版本为v1
[root@k8s-master ~]# curl 10.244.2.14
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# kubectl describe deployments.apps deployment
Name: deployment
Namespace: default
CreationTimestamp: Sun, 01 Sep 2024 23:19:10 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=myapp
Replicas: 4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge #默认每次更新25%
#更新容器运行版本
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
minReadySeconds: 5 #最小就绪时间5秒
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v2 #更新为版本2
name: myapp
[root@k8s2 pod]# kubectl apply -f deployment-example.yaml
#更新过程
[root@k8s-master ~]# watch -n1 kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE
deployment-5d886954d4-8kb28 1/1 Running 0 48s
deployment-5d886954d4-8s4h8 1/1 Running 0 49s
deployment-5d886954d4-rclkp 1/1 Running 0 50s
deployment-5d886954d4-tt2hz 1/1 Running 0 50s
deployment-7f4786db9c-g796x 0/1 Pending 0 0s
#测试更新效果
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-7f4786db9c-967fk 1/1 Running 0 10s 10.244.1.26 k8s-node1 <none> <none>
deployment-7f4786db9c-cvb9k 1/1 Running 0 10s 10.244.2.24 k8s-node2 <none> <none>
deployment-7f4786db9c-kgss4 1/1 Running 0 9s 10.244.1.27 k8s-node1 <none> <none>
deployment-7f4786db9c-qts8c 1/1 Running 0 9s 10.244.2.25 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 10.244.1.26
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Note:
更新的过程是先建立一个新版本的RS,由新版本的RS逐步重建pod,然后把老版本RS的副本数缩减为0并保留,以便回滚
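也可以用 kubectl rollout 子命令观察新旧RS的切换并按版本号回滚,下面是基于上文 deployment 的简单示意(版本号以实际 rollout history 输出为准):
kubectl get rs -l app=myapp                                   # 更新后会看到新旧两个RS,旧RS副本数为0
kubectl rollout history deployment deployment                 # 查看版本记录
kubectl rollout undo deployment deployment --to-revision 1    # 按版本号回滚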
5.4.2.2 版本回滚
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1 #回滚到之前版本
name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured
#测试回滚效果
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-5d886954d4-dr74h 1/1 Running 0 8s 10.244.2.26 k8s-node2 <none> <none>
deployment-5d886954d4-thpf9 1/1 Running 0 7s 10.244.1.29 k8s-node1 <none> <none>
deployment-5d886954d4-vmwl9 1/1 Running 0 8s 10.244.1.28 k8s-node1 <none> <none>
deployment-5d886954d4-wprpd 1/1 Running 0 6s 10.244.2.27 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 10.244.2.26
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
5.4.2.3 滚动更新策略
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
  minReadySeconds: 5 #最小就绪时间,新pod就绪后需保持该时长才被视为可用,之后才继续替换下一个旧pod
replicas: 4
strategy: #指定更新策略
rollingUpdate:
      maxSurge: 1 #更新期间pod总数最多可以比期望副本数多1个
      maxUnavailable: 0 #更新期间不可用的pod最多为0个,即始终保持期望数量的pod可用
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
[root@k8s2 pod]# kubectl apply -f deployment-example.yaml
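按上面的配置,期望副本数为4、maxSurge=1、maxUnavailable=0,则滚动更新期间 pod 总数最多为 4+1=5 个,可用 pod 始终不少于 4-0=4 个;再配合 minReadySeconds: 5,每个新 pod 就绪满 5 秒后才会继续替换下一个旧 pod。可以用如下方式触发并观察这一过程(仅作示意):
kubectl set image deployment/deployment myapp=myapp:v2   # 触发滚动更新
watch -n1 kubectl get pods -o wide                        # 观察pod总数与可用数的变化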
5.4.2.4 暂停及恢复
在实际生产环境中我们要做的变更往往不止一处,如果每修改一处就触发一次滚动更新,会产生很多不必要的版本
我们期望的是把所有修改都完成后再一次性触发更新
因此可以先暂停,避免触发不必要的线上更新
[root@k8s2 pod]# kubectl rollout pause deployment deployment-example
[root@k8s2 pod]# vim deployment-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-example
spec:
minReadySeconds: 5
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
replicas: 6
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: nginx
resources:
limits:
cpu: 0.5
memory: 200Mi
requests:
cpu: 0.5
memory: 200Mi
[root@k8s2 pod]# kubectl apply -f deployment-example.yaml
#调整副本数,不受影响
[root@k8s-master ~]# kubectl describe pods deployment-7f4786db9c-8jw22
Name: deployment-7f4786db9c-8jw22
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/172.25.254.10
Start Time: Mon, 02 Sep 2024 00:27:20 +0800
Labels: app=myapp
pod-template-hash=7f4786db9c
Annotations: <none>
Status: Running
IP: 10.244.1.31
IPs:
IP: 10.244.1.31
Controlled By: ReplicaSet/deployment-7f4786db9c
Containers:
myapp:
Container ID: docker://01ad7216e0a8c2674bf17adcc9b071e9bfb951eb294cafa2b8482bb8b4940c1d
Image: myapp:v2
Image ID: docker-pullable://myapp@sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 02 Sep 2024 00:27:21 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mfjjp (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-mfjjp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m22s default-scheduler Successfully assigned default/deployment-7f4786db9c-8jw22 to k8s-node1
Normal Pulled 6m22s kubelet Container image "myapp:v2" already present on machine
Normal Created 6m21s kubelet Created container myapp
Normal Started 6m21s kubelet Started container myapp
#但是更新镜像和修改资源并没有触发更新
[root@k8s2 pod]# kubectl rollout history deployment deployment-example
deployment.apps/deployment-example
REVISION CHANGE-CAUSE
3 <none>
4 <none>
#恢复后开始触发更新
[root@k8s2 pod]# kubectl rollout resume deployment deployment-example
[root@k8s2 pod]# kubectl rollout history deployment deployment-example
deployment.apps/deployment-example
REVISION CHANGE-CAUSE
3 <none>
4 <none>
5 <none>
#回收
[root@k8s2 pod]# kubectl delete -f deployment-example.yaml
5.5 daemonset控制器
5.5.1 daemonset功能
DaemonSet 确保全部(或者某些)节点上运行一个 Pod 的副本。当有节点加入集群时, 也会为他们新增一个 Pod ,当有节点从集群移除时,这些 Pod 也会被回收。删除 DaemonSet 将会删除它创建的所有 Pod
DaemonSet 的典型用法:
- 在每个节点上运行集群存储 DaemonSet,例如 glusterd、ceph。
- 在每个节点上运行日志收集 DaemonSet,例如 fluentd、logstash。
- 在每个节点上运行监控 DaemonSet,例如 Prometheus Node Exporter、zabbix agent等
- 一种简单的用法是为每种类型的 daemon 启动一个覆盖所有节点的 DaemonSet
- 一个稍微复杂的用法是单独对每种 daemon 类型使用多个 DaemonSet,但具有不同的标志, 并且对不同硬件类型具有不同的内存、CPU 要求
5.5.2 daemonset 示例
[root@k8s2 pod]# cat daemonset-example.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemonset-example
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
tolerations: #对于污点节点的容忍
- effect: NoSchedule
operator: Exists
containers:
- name: nginx
image: nginx
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-87h6s 1/1 Running 0 47s 10.244.0.8 k8s-master <none> <none>
daemonset-n4vs4 1/1 Running 0 47s 10.244.2.38 k8s-node2 <none> <none>
daemonset-vhxmq 1/1 Running 0 47s 10.244.1.40 k8s-node1 <none> <none>
#回收
[root@k8s2 pod]# kubectl delete -f daemonset-example.yml
5.6 job 控制器
5.6.1 job控制器功能
Job,主要用于负责批量处理(一次要处理指定数量任务)短暂的一次性(每个任务仅运行一次就结束)任务
Job特点如下:
- 当Job创建的pod执行成功结束时,Job将记录成功结束的pod数量
- 当成功结束的pod达到指定的数量时,Job将完成执行
5.6.2 job 控制器示例:
[root@k8s2 pod]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
completions: 6 #一共完成任务数为6
parallelism: 2 #每次并行完成2个
template:
spec:
containers:
- name: pi
image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]  #计算圆周率小数点后2000位
restartPolicy: Never #关闭后不自动重启
  backoffLimit: 4 #运行失败后最多重试4次
[root@k8s2 pod]# kubectl apply -f job.yml
Note:
关于重启策略设置的说明:
如果指定为OnFailure,则job会在pod出现故障时重启容器
而不是创建pod,failed次数不变
如果指定为Never,则job会在pod出现故障时创建新的pod
并且故障pod不会消失,也不会重启,failed次数加1
如果指定为Always,意味着容器会一直被重启,job任务会被重复执行,因此Job不允许使用Always策略
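job 建立后可以用下面的命令观察并行执行和完成情况,这里沿用上文名为 pi 的 job,仅作示意:
kubectl get jobs                        # COMPLETIONS 会逐步变为 6/6
kubectl get pods -l job-name            # job 创建的 pod 会带有 job-name 标签
kubectl logs job/pi                     # 查看其中一个 pod 计算出的圆周率结果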
5.7 cronjob 控制器
5.7.1 cronjob 控制器功能
- Cron Job 创建基于时间调度的 Jobs。
- CronJob控制器以Job控制器资源为其管控对象,并借助它管理pod资源对象,
- CronJob可以以类似于Linux操作系统的周期性任务作业计划的方式控制其运行时间点及重复运行的方式。
- CronJob可以在特定的时间点(反复的)去运行job任务。
5.7.2 cronjob 控制器 示例
[root@k8s2 pod]# vim cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
[root@k8s2 pod]# kubectl apply -f cronjob.yml
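cronjob 建立后会按 schedule 字段(这里是每分钟一次)周期性地生成 job,可以用下面的命令观察,仅作示意:
kubectl get cronjobs                          # 查看计划任务及最近一次调度时间
kubectl get jobs --watch                      # 每分钟会看到新的 job 生成
kubectl get pods -l job-name --show-labels    # 任务 pod 同样带有 job-name 标签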
六 微服务
6.1 什么是微服务
用控制器来完成集群的工作负载,那么应用如何暴露出去?需要通过微服务暴露出去后才能被访问
- Service是一组提供相同服务的Pod对外开放的接口。
- 借助Service,应用可以实现服务发现和负载均衡。
- service默认只支持4层负载均衡能力,没有7层功能。(可以通过Ingress实现)
6.2 微服务的类型
微服务类型 | 作用描述 |
---|---|
ClusterIP | 默认值,k8s系统给service自动分配的虚拟IP,只能在集群内部访问 |
NodePort | 将Service通过指定的Node上的端口暴露给外部,访问任意一个NodeIP:nodePort都将路由到ClusterIP |
LoadBalancer | 在NodePort的基础上,借助cloud provider创建一个外部的负载均衡器,并将请求转发到 NodeIP:NodePort,此模式只能在云服务器上使用 |
ExternalName | 将服务通过 DNS CNAME 记录方式转发到指定的域名(通过 spec.externalName 设定) |
示例:
#生成控制器文件并建立控制器
[root@k8s-master ~]# kubectl create deployment timinglee --image myapp:v1 --replicas 2 --dry-run=client -o yaml > timinglee.yaml
#生成微服务yaml追加到已有yaml中
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80 --dry-run=client -o yaml >> timinglee.yaml
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: timinglee
name: timinglee
spec:
replicas: 2
selector:
matchLabels:
app: timinglee
template:
metadata:
creationTimestamp: null
labels:
app: timinglee
spec:
containers:
- image: myapp:v1
name: myapp
--- #不同资源间用---隔开
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee
name: timinglee
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee created
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
timinglee ClusterIP 10.99.127.134 <none> 80/TCP 16s
微服务默认使用iptables调度
[root@k8s-master ~]# kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h <none>
timinglee ClusterIP 10.99.127.134 <none> 80/TCP 119s app=timinglee #集群内部IP 134
#可以在防火墙规则中查看到策略信息
[root@k8s-master ~]# iptables -t nat -nL
KUBE-SVC-I7WXYK76FWYNTTGM 6 -- 0.0.0.0/0 10.99.127.134 /* default/timinglee cluster IP */ tcp dpt:80
6.3 ipvs模式
- Service 是由 kube-proxy 组件,加上 iptables 来共同实现的
- kube-proxy 通过 iptables 处理 Service 的过程,需要在宿主机上设置相当多的 iptables 规则,如果宿主机有大量的Pod,不断刷新iptables规则,会消耗大量的CPU资源
- IPVS模式的service,可以使K8s集群支持更多量级的Pod
6.3.1 ipvs模式配置方式
1 在所有节点中安装ipvsadm
[root@k8s-所有节点 pod]# yum install ipvsadm -y
2 修改master节点的代理配置
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
metricsBindAddress: ""
mode: "ipvs" #设置kube-proxy使用ipvs模式
nftables:
3 重启pod,在pod运行时配置文件中采用默认配置,当改变配置文件后已经运行的pod状态不会变化,所以要重启pod
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.254.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.2:9153 Masq 1 0 0
-> 10.244.0.3:9153 Masq 1 0 0
TCP 10.97.59.25:80 rr
-> 10.244.1.17:80 Masq 1 0 0
-> 10.244.2.13:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
Note:
切换ipvs模式后,kube-proxy会在宿主机上添加一个虚拟网卡:kube-ipvs0,并分配所有service IP
[root@k8s-master ~]# ip a | tail
    inet6 fe80::c4fb:e9ff:feee:7d32/64 scope link
       valid_lft forever preferred_lft forever
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether fe:9f:c8:5d:a6:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.59.25/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6.4 微服务类型详解
6.4.1 clusterip
特点:
clusterip模式只能在集群内访问,并对集群内的pod提供健康检测和自动发现功能
示例:
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee
name: timinglee
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: ClusterIP
service创建后集群DNS提供解析
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27827
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 057d9ff344fe9a3a (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A
;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 30 IN A 10.97.59.25
;; Query time: 8 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:44:30 CST 2024
;; MSG SIZE rcvd: 127
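除了 DNS 解析,也可以通过 service 关联的 endpoints 验证自动发现到的后端 pod(以上文的 timinglee 服务为例,仅作示意):
kubectl get endpoints timinglee                  # 列出 service 后端的 pod IP:端口
kubectl describe svc timinglee | grep Endpoints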
6.4.2 ClusterIP中的特殊模式headless
headless(无头服务)
对于无头 Service,不会分配 Cluster IP,kube-proxy 不会处理它们,平台也不会为它们进行负载均衡和路由;集群内访问时通过 DNS 解析直接得到业务 pod 的 IP,所有调度由 DNS 解析完成
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee
name: timinglee
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: ClusterIP
clusterIP: None
[root@k8s-master ~]# kubectl delete -f timinglee.yaml
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
#测试
[root@k8s-master ~]# kubectl get services timinglee
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee ClusterIP None <none> 80/TCP 6s
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51527
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 81f9c97b3f28b3b9 (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A
;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 20 IN A 10.244.2.14 #直接解析到pod上
timinglee.default.svc.cluster.local. 20 IN A 10.244.1.18
;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:58:23 CST 2024
;; MSG SIZE rcvd: 178
#开启一个busyboxplus的pod测试
[root@k8s-master ~]# kubectl run test --image busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # nslookup timinglee-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: timinglee-service
Address 1: 10.244.2.16 10-244-2-16.timinglee-service.default.svc.cluster.local
Address 2: 10.244.2.17 10-244-2-17.timinglee-service.default.svc.cluster.local
Address 3: 10.244.1.22 10-244-1-22.timinglee-service.default.svc.cluster.local
Address 4: 10.244.1.21 10-244-1-21.timinglee-service.default.svc.cluster.local
/ # curl timinglee-service
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee-service/hostname.html
timinglee-c56f584cf-b8t6m
6.4.3 nodeport
NodePort 通过 kube-proxy(iptables/ipvs)在每个集群节点上暴露同一个端口,使外部主机可以通过 节点IP:NodePort 访问集群内的服务
其访问过程为:客户端 --> NodeIP:NodePort --> ClusterIP:Port --> PodIP:targetPort
示例:
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: NodePort
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee-service created
[root@k8s-master ~]# kubectl get services timinglee-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee-service NodePort 10.98.60.22 <none> 80:31771/TCP 8
nodeport在集群节点上绑定端口,一个端口对应一个服务
[root@k8s-master ~]# for i in {1..5}
> do
> curl 172.25.254.100:31771/hostname.html
> done
timinglee-c56f584cf-fjxdk
timinglee-c56f584cf-5m2z5
timinglee-c56f584cf-z2w4d
timinglee-c56f584cf-tt5g6
timinglee-c56f584cf-fjxdk
Note:
nodeport默认端口
nodeport默认端口是30000-32767,超出会报错
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 33333
selector:
app: timinglee
type: NodePort
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
The Service "timinglee-service" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
如果需要使用这个范围以外的端口就需要特殊设定
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=30000-40000
Note:
添加 "--service-node-port-range=" 参数后,端口范围可以自定义
修改后api-server会自动重启,等apiserver正常启动后才能操作集群
apiserver的重启在修改完参数后自动完成,全程不需要人为干预
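apiserver 重启完成后,可以重新应用前面 nodePort 为 33333 的 yaml 来验证端口范围已经生效(仅作示意):
kubectl apply -f timinglee.yaml
kubectl get services timinglee-service        # PORT(S) 一栏应显示 80:33333/TCP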
6.4.4 loadbalancer
云平台会为我们分配vip并实现访问,如果是裸金属主机那么需要metallb来实现ip的分配
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: LoadBalancer
[root@k8s2 service]# kubectl apply -f myapp.yml
默认无法分配外部访问IP
[root@k8s2 service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h
myapp LoadBalancer 10.107.23.134 <pending> 80:32537/TCP 4s
LoadBalancer模式适用云平台,裸金属环境需要安装metallb提供支持
6.4.5 metalLB
官网:
metalLB功能:为LoadBalancer分配vip
部署方式
1.设置ipvs模式
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
strictARP: true
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
2.下载部署文件
[root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
3.修改文件中镜像地址,与harbor仓库路径保持一致
[root@k8s-master ~]# vim metallb-native.yaml
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8
4.上传镜像到harbor
[root@k8s-master ~]# docker pull quay.io/metallb/controller:v0.14.8
[root@k8s-master ~]# docker pull quay.io/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 reg.timinglee.org/metallb/controller:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/controller:v0.14.8
部署服务
[root@k8s2 metallb]# kubectl apply -f metallb-native.yaml
[root@k8s-master ~]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-25nrw 1/1 Running 0 30s
speaker-p94xq 1/1 Running 0 29s
speaker-qmpct 1/1 Running 0 29s
speaker-xh4zh 1/1 Running 0 30s
配置分配地址段
[root@k8s-master ~]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool #地址池名称
namespace: metallb-system
spec:
addresses:
- 172.25.254.50-172.25.254.99 #修改为自己本地地址段
--- #两个不同的kind之间必须用---分隔
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: example
namespace: metallb-system
spec:
ipAddressPools:
- first-pool #使用地址池
[root@k8s-master ~]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
timinglee-service LoadBalancer 10.109.36.123 172.25.254.50 80:31595/TCP 9m9s
#通过分配地址从集群外访问服务
[root@reg ~]# curl 172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
6.4.6 externalname
- 开启services后,不会被分配IP,而是用dns解析CNAME固定域名来解决ip变化问题
- 一般应用于外部业务和pod沟通或外部业务迁移到pod内时
- 在应用向集群迁移的过程中,externalname在过渡阶段就可以起作用了。
- 集群外的资源迁移到集群时,在迁移的过程中ip可能会变化,但是域名+dns解析能完美解决此问题
示例:
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
selector:
app: timinglee
type: ExternalName
externalName: www.timinglee.org
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
[root@k8s-master ~]# kubectl get services timinglee-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee-service ExternalName <none> www.timinglee.org <none> 2m58s
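可以在集群 DNS 中解析该 service 验证效果,预期会得到指向 www.timinglee.org 的 CNAME 记录(集群 DNS 地址沿用上文的 10.96.0.10,仅作示意):
dig timinglee-service.default.svc.cluster.local CNAME @10.96.0.10 +short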
6.5 Ingress-nginx
官网:
6.5.1 ingress-nginx功能
- 一种全局的、为了代理不同后端 Service 而设置的负载均衡服务,支持7层
- Ingress由两部分组成:Ingress controller和Ingress服务
- Ingress Controller 会根据你定义的 Ingress 对象,提供对应的代理能力。
- 业界常用的各种反向代理项目,比如 Nginx、HAProxy、Envoy、Traefik 等,都已经为Kubernetes 专门维护了对应的 Ingress Controller。
6.5.2 部署ingress
6.5.2.1 下载部署文件
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml
上传ingress所需镜像到harbor
[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
6.5.2.2 安装ingress
[root@k8s-master ~]# vim deploy.yaml
445 image: ingress-nginx/controller:v1.11.2
546 image: ingress-nginx/kube-webhook-certgen:v1.4.3
599 image: ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-ggqm6 0/1 Completed 0 82s
ingress-nginx-admission-patch-q4wp2 0/1 Completed 0 82s
ingress-nginx-controller-bb7d8f97c-g2h4p 1/1 Running 0 82s
[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.103.33.148 <none> 80:34512/TCP,443:34727/TCP 108s
ingress-nginx-controller-admission ClusterIP 10.103.183.64 <none> 443/TCP 108s
#修改微服务为loadbalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49 type: LoadBalancer
[root@k8s-master ~]# kubectl -n ingress-nginx get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.33.148 172.25.254.50 80:34512/TCP,443:34727/TCP 4m43s
ingress-nginx-controller-admission ClusterIP 10.103.183.64 <none> 443/TCP 4m43s
Note:
在ingress-nginx-controller中看到的对外IP就是ingress最终对外开放的ip
6.5.2.3 测试ingress
#生成yaml文件
[root@k8s-master ~]# kubectl create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml > timinglee-ingress.yml
[root@k8s-master ~]# vim timinglee-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
ingressClassName: nginx
rules:
- http:
paths:
- backend:
service:
name: timinglee-svc
port:
number: 80
path: /
pathType: Prefix
#pathType 可选 Exact(精确匹配)、Prefix(前缀匹配)、ImplementationSpecific(由 IngressClass 的具体实现决定)
#建立ingress控制器
[root@k8s-master ~]# kubectl apply -f timinglee-ingress.yml
ingress.networking.k8s.io/webserver created
[root@k8s-master ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress nginx * 172.25.254.10 80 8m30s
[root@reg ~]# for n in {1..5}; do curl 172.25.254.50/hostname.html; done
timinglee-c56f584cf-8jhn6
timinglee-c56f584cf-8cwfm
timinglee-c56f584cf-8jhn6
timinglee-c56f584cf-8cwfm
timinglee-c56f584cf-8jhn6
Note:
ingress必须和输出的service资源处于同一namespace
6.5.3 ingress 的高级用法
6.5.3.1 基于路径的访问
1.建立用于测试的控制器myapp
[root@k8s-master app]# kubectl create deployment myapp-v1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yaml
[root@k8s-master app]# kubectl create deployment myapp-v2 --image myapp:v2 --dry-run=client -o yaml > myapp-v2.yaml
[root@k8s-master app]# vim myapp-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp-v1
name: myapp-v1
spec:
replicas: 1
selector:
matchLabels:
app: myapp-v1
strategy: {}
template:
metadata:
labels:
app: myapp-v1
spec:
containers:
- image: myapp:v1
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myapp-v1
name: myapp-v1
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: myapp-v1
[root@k8s-master app]# vim myapp-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp-v2
name: myapp-v2
spec:
replicas: 1
selector:
matchLabels:
app: myapp-v2
template:
metadata:
labels:
app: myapp-v2
spec:
containers:
- image: myapp:v2
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myapp-v2
name: myapp-v2
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: myapp-v2
[root@k8s-master app]# kubectl expose deployment myapp-v1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yaml
[root@k8s-master app]# kubectl expose deployment myapp-v2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yaml
[root@k8s-master app]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29h
myapp-v1 ClusterIP 10.104.84.65 <none> 80/TCP 13s
myapp-v2 ClusterIP 10.105.246.219 <none> 80/TCP 7s
2.建立ingress的yaml
[root@k8s-master app]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: / #访问路径后加任何内容都被定向到/
name: ingress1
spec:
ingressClassName: nginx
rules:
- host: www.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /v1
pathType: Prefix
- backend:
service:
name: myapp-v2
port:
number: 80
path: /v2
pathType: Prefix
#测试:
[root@reg ~]# echo 172.25.254.50 www.timinglee.org >> /etc/hosts
[root@reg ~]# curl www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
#nginx.ingress.kubernetes.io/rewrite-target: / 的功能实现
[root@reg ~]# curl www.timinglee.org/v2/aaaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
6.5.3.2 基于域名的访问
#在测试主机中设定解析
[root@reg ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.250 reg.timinglee.org
172.25.254.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org
# 建立基于域名的yml文件
[root@k8s-master app]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress2
spec:
ingressClassName: nginx
rules:
- host: myappv1.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
- host: myappv2.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
#利用文件建立ingress
[root@k8s-master app]# kubectl apply -f ingress2.yml
ingress.networking.k8s.io/ingress2 created
[root@k8s-master app]# kubectl describe ingress ingress2
Name: ingress2
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myappv1.timinglee.org
/ myapp-v1:80 (10.244.2.31:80)
myappv2.timinglee.org
/ myapp-v2:80 (10.244.2.32:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 21s nginx-ingress-controller Scheduled for sync
#在测试主机中测试
[root@reg ~]# curl myappv1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl myappv2.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
6.5.3.3 建立tls加密
建立证书
[root@k8s-master app]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
#建立加密资源类型secret
[root@k8s-master app]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created
[root@k8s-master app]# kubectl get secrets
NAME TYPE DATA AGE
web-tls-secret kubernetes.io/tls 2 6s
Note:
secret通常在kubernetes中存放敏感数据,它并不是一种加密方式,在后面课程中会有专门讲解
#建立ingress3基于tls认证的yml文件
[root@k8s-master app]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress3
spec:
tls:
- hosts:
- myapp-tls.timinglee.org
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
#测试
[root@reg ~]# curl -k https://myapp-tls.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
6.5.3.4 建立auth认证
#建立认证文件
[root@k8s-master app]# dnf install httpd-tools -y
[root@k8s-master app]# htpasswd -cm auth lee
New password:
Re-type new password:
Adding password for user lee
[root@k8s-master app]# cat auth
lee:$apr1$BohBRkkI$hZzRDfpdtNzue98bFgcU10
#建立认证类型资源
[root@k8s-master app]# kubectl create secret generic auth-web --from-file auth
[root@k8s-master app]# kubectl describe secrets auth-web
Name: auth-web
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
auth: 42 bytes
#建立ingress4基于用户认证的yaml文件
[root@k8s-master app]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
name: ingress4
spec:
tls:
- hosts:
- myapp-tls.timinglee.org
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
#建立ingress4
[root@k8s-master app]# kubectl apply -f ingress4.yml
ingress.networking.k8s.io/ingress4 created
[root@k8s-master app]# kubectl describe ingress ingress4
Name: ingress4
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.timinglee.org
Rules:
Host Path Backends
---- ---- --------
myapp-tls.timinglee.org
/ myapp-v1:80 (10.244.2.31:80)
Annotations: nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 14s nginx-ingress-controller Scheduled for sync
#测试:
[root@reg ~]# curl -k https://myapp-tls.timinglee.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@reg ~]# curl -k https://myapp-tls.timinglee.org -ulee:lee
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
6.5.3.5 rewrite重定向
#指定默认访问的文件到hostname.html上
[root@k8s-master app]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/app-root: /hostname.html
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
name: ingress5
spec:
tls:
- hosts:
- myapp-tls.timinglee.org
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/ingress5 created
[root@k8s-master app]# kubectl describe ingress ingress5
Name: ingress5
Labels: <none>
Namespace: default
Address: 172.25.254.10
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.timinglee.org
Rules:
Host Path Backends
---- ---- --------
myapp-tls.timinglee.org
/ myapp-v1:80 (10.244.2.31:80)
Annotations: nginx.ingress.kubernetes.io/app-root: /hostname.html
nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 2m16s (x2 over 2m54s) nginx-ingress-controller Scheduled for sync
#测试:
[root@reg ~]# curl -Lk https://myapp-tls.timinglee.org -ulee:lee
myapp-v1-7479d6c54d-j9xc6
[root@reg ~]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:lee
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>
#解决重定向路径问题
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
name: ingress6
spec:
tls:
- hosts:
- myapp-tls.timinglee.org
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
- backend:
service:
name: myapp-v1
port:
number: 80
path: /lee(/|$)(.*) #正则表达式匹配/lee/,/lee/abc
pathType: ImplementationSpecific
#测试
[root@reg ~]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:lee
myapp-v1-7479d6c54d-j9xc6
6.6 Canary金丝雀发布
6.6.1 什么是金丝雀发布
金丝雀发布(Canary Release)也称为灰度发布,是一种软件发布策略。
主要目的是在将新版本的软件全面推广到生产环境之前,先在一小部分用户或服务器上进行测试和验证,以降低因新版本引入重大问题而对整个系统造成的影响。
是一种Pod的发布方式。金丝雀发布采取先添加、再删除的方式,保证Pod的总量不低于期望值。并且在更新部分Pod后,暂停更新,当确认新Pod版本运行正常后再进行其他版本的Pod的更新。
6.6.2 Canary发布方式
6.6.2.1 基于header(http包头)灰度
- 通过Annotation扩展
- 创建灰度ingress,配置灰度头部key以及value
- 灰度流量验证完毕后,切换正式ingress到新版本
- 之前我们在做升级时可以通过控制器做滚动更新,默认每次更新25%;利用header灰度可以使升级更为平滑,通过指定的key和value先验证新版本业务是否有问题。
示例:
#建立版本1的ingress
[root@k8s-master app]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
name: myapp-v1-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
[root@k8s-master app]# kubectl describe ingress myapp-v1-ingress
Name: myapp-v1-ingress
Labels: <none>
Namespace: default
Address: 172.25.254.10
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.timinglee.org
/ myapp-v1:80 (10.244.2.31:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 44s (x2 over 73s) nginx-ingress-controller Scheduled for sync
#建立基于header的ingress
[root@k8s-master app]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
name: myapp-v2-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created
[root@k8s-master app]# kubectl describe ingress myapp-v2-ingress
Name: myapp-v2-ingress
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.timinglee.org
/ myapp-v2:80 (10.244.2.32:80)
Annotations: nginx.ingress.kubernetes.io/canary: true
nginx.ingress.kubernetes.io/canary-by-header: version
nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 21s nginx-ingress-controller Scheduled for sync
#测试:
[root@reg ~]# curl myapp.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl -H "version: 2" myapp.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
6.6.2.2 基于权重的灰度发布
- 通过Annotation扩展
- 创建灰度ingress,配置灰度权重以及总权重
- 灰度流量验证完毕后,切换正式ingress到新版本
示例
#基于权重的灰度发布
[root@k8s-master app]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10" #更改权重值
nginx.ingress.kubernetes.io/canary-weight-total: "100"
name: myapp-v2-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created
#测试:
[root@reg ~]# vim check_ingress.sh
#!/bin/bash
v1=0
v2=0
for (( i=0; i<100; i++))
do
response=`curl -s myapp.timinglee.org |grep -c v1`
v1=`expr $v1 + $response`
v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"
[root@reg ~]# sh check_ingress.sh
v1:90, v2:10
#更改完毕权重后继续测试可观察变化
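如果不想反复修改 yaml,也可以直接用 annotate 覆盖权重值,逐步放大灰度比例直至全量切换(仅作示意,数值按实际灰度节奏调整):
kubectl annotate ingress myapp-v2-ingress nginx.ingress.kubernetes.io/canary-weight="50" --overwrite
sh check_ingress.sh        # 预期输出接近 v1:50, v2:50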
七 k8s的存储
7.1 configmap
- configMap用于保存配置数据,以键值对形式存储。
- configMap 资源提供了向 Pod 注入配置数据的方法。
- 镜像和配置文件解耦,以便实现镜像的可移植性和可复用性。
- 受etcd限制,configMap中保存的数据不能超过1MiB
7.1.1 configmap的使用场景
- 填充环境变量的值
- 设置容器内的命令行参数
- 填充卷的配置文件
7.1.2 configmap创建方式
7.1.2.1 字面值创建
[root@k8s-master ~]# kubectl create cm lee-config --from-literal fname=timing --from-literal lname=lee
configmap/lee-config created
[root@k8s-master ~]# kubectl describe cm lee-config
Name: lee-config
Namespace: default
Labels: <none>
Annotations: <none>
Data #键值信息显示
====
fname:
----
timing
lname:
----
lee
BinaryData
====
Events: <none>
7.1.2.2 通过文件创建
[root@k8s-master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114
[root@k8s-master ~]# kubectl create cm lee2-config --from-file /etc/resolv.conf
configmap/lee2-config created
[root@k8s-master ~]# kubectl describe cm lee2-config
Name: lee2-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
resolv.conf:
----
# Generated by NetworkManager
nameserver 114.114.114.114
BinaryData
====
Events: <none>
7.1.2.3 通过目录创建
[root@k8s-master ~]# mkdir leeconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local leeconfig/
[root@k8s-master ~]# kubectl create cm lee3-config --from-file leeconfig/
configmap/lee3-config created
[root@k8s-master ~]# kubectl describe cm lee3-config
Name: lee3-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
fstab:
----
#
# /etc/fstab
# Created by anaconda on Fri Jul 26 13:04:22 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=6577c44f-9c1c-44f9-af56-6d6b505fcfa8 / xfs defaults 0 0
UUID=eec689b4-73d5-4f47-b999-9a585bb6da1d /boot xfs defaults 0 0
UUID=ED00-0E42 /boot/efi vfat umask=0077,shortname=winnt 0 2
#UUID=be2f2006-6072-4c77-83d4-f2ff5e237f9f none swap defaults 0 0
rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
mount /dev/cdrom /rhel9
BinaryData
====
Events: <none>
7.1.2.4 通过yaml文件创建
[root@k8s-master ~]# kubectl create cm lee4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > lee-config.yaml
[root@k8s-master ~]# vim lee-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: lee4-config
data:
  db_host: "172.25.254.100"
db_port: "3306"
[root@k8s-master ~]# kubectl describe cm lee4-config
Name: lee4-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db_host:
----
172.25.254.100
db_port:
----
3306
BinaryData
====
Events: <none>
7.1.2.5 configmap的使用方式
- 通过环境变量的方式直接传递给pod
- 通过pod的 命令行运行方式
- 作为volume的方式挂载到pod内
7.1.2.5.1 使用configmap填充环境变量
#将cm中的内容映射为指定变量
[root@k8s-master ~]# vim testpod1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- env
env:
- name: key1
valueFrom:
configMapKeyRef:
name: lee4-config
key: db_host
- name: key2
valueFrom:
configMapKeyRef:
name: lee4-config
key: db_port
restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f testpod.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
#把cm中的值直接映射为变量
[root@k8s-master ~]# vim testpod2.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- env
envFrom:
- configMapRef:
name: lee4-config
restartPolicy: Never
#查看日志
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
db_port=3306
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
age=18
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
name=lee
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
db_host=172.25.254.100
#在pod命令行中使用变量
[root@k8s-master ~]# vim testpod3.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
    - echo ${db_host} ${db_port} #变量调用需要使用 ${变量名} 的形式
envFrom:
- configMapRef:
name: lee4-config
restartPolicy: Never
#查看日志
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.100 3306
7.1.2.5.2 通过数据卷使用configmap
[root@k8s-master ~]# vim testpod4.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- cat /config/db_host
volumeMounts: #调用卷策略
- name: config-volume #卷名称
mountPath: /config
volumes: #声明卷的配置
- name: config-volume #卷名称
configMap:
name: lee4-config
restartPolicy: Never
#查看日志
[root@k8s-master ~]# kubectl logs testpod
172.25.254.100
7.1.2.5.3 利用configMap填充pod的配置文件
#建立配置文件模板
[root@k8s-master ~]# vim nginx.conf
server {
listen 8000;
server_name _;
root /usr/share/nginx/html;
index index.html;
}
#利用模板生成cm
[root@k8s-master ~]# kubectl create cm nginx-conf --from-file nginx.conf
configmap/nginx-conf created
[root@k8s-master ~]# kubectl describe cm nginx-conf
Name: nginx-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
nginx.conf:
----
server {
listen 8000;
server_name _;
root /usr/share/nginx/html;
index index.html;
}
BinaryData
====
Events: <none>
#建立nginx控制器文件
[root@k8s-master ~]# kubectl create deployment nginx --image nginx:latest --replicas 1 --dry-run=client -o yaml > nginx.yml
#设定nginx.yml中的卷
[root@k8s-master ~]# vim nginx.yml
[root@k8s-master ~]# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:latest
name: nginx
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/conf.d
volumes:
- name: config-volume
configMap:
name: nginx-conf
#测试
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-cz5hd 1/1 Running 0 3m7s 10.244.2.38 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 10.244.2.38:8000
7.1.2.5.4 通过热更新cm修改配置
[root@k8s-master ~]# kubectl edit cm nginx-conf
apiVersion: v1
data:
nginx.conf: |
server {
listen 8080; #端口改为8080
server_name _;
root /usr/share/nginx/html;
index index.html;
}
kind: ConfigMap
metadata:
creationTimestamp: "2024-09-07T02:49:20Z"
name: nginx-conf
namespace: default
resourceVersion: "153055"
uid: 20bee584-2dab-4bd5-9bcb-78318404fa7a
#查看配置文件
[root@k8s-master ~]# kubectl exec pods/nginx-8487c65cfc-cz5hd -- cat /etc/nginx/conf.d/nginx.conf
server {
listen 8080;
server_name _;
root /usr/share/nginx/html;
index index.html;
}
Note:
cm热更新后,卷中挂载的配置文件会自动同步,但nginx不会自动重新加载配置,需要删除pod让控制器重建pod后才会生效
[root@k8s-master ~]# kubectl delete pods nginx-8487c65cfc-cz5hd
pod "nginx-8487c65cfc-cz5hd" deleted
[root@k8s-master ~]# curl 10.244.2.41:8080
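另一种做法是不手动删除 pod,而是让控制器滚动重建,效果相同(仅作示意):
kubectl rollout restart deployment nginx
kubectl get pods -o wide          # 等新 pod 运行后,再用新的 pod IP 访问 8080 端口验证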
7.2 secrets配置管理
7.2.1 secrets的功能介绍
Secret 对象类型用来保存敏感信息,例如密码、OAuth 令牌和 ssh key。
敏感信息放在 secret 中比放在 Pod 的定义或者容器镜像中来说更加安全和灵活
Pod 可以用两种方式使用 secret:
- 作为 volume 中的文件被挂载到 pod 中的一个或者多个容器里。
- 当 kubelet 为 pod 拉取镜像时使用。
Secret的类型:
- Service Account:Kubernetes 自动创建包含访问 API 凭据的 secret,并自动修改 pod 以使用此类型的 secret。
- Opaque:使用base64编码存储信息,可以通过base64 --decode解码获得原始数据,因此安全性弱。
- kubernetes.io/dockerconfigjson:用于存储docker registry的认证信息
7.2.2 secrets的创建
在创建secrets时我们可以用命令的方法或者yaml文件的方法
7.2.2.1从文件创建
[root@k8s-master secrets]# echo -n timinglee > username.txt
[root@k8s-master secrets]# echo -n lee > password.txt
[root@k8s-master secrets]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@k8s-master secrets]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
password.txt: bGVl
username.txt: dGltaW5nbGVl
kind: Secret
metadata:
creationTimestamp: "2024-09-07T07:30:42Z"
name: userlist
namespace: default
resourceVersion: "177216"
uid: 9d76250c-c16b-4520-b6f2-cc6a8ad25594
type: Opaque
7.2.2.2 编写yaml文件
[root@k8s-master secrets]# echo -n timinglee | base64
dGltaW5nbGVl
[root@k8s-master secrets]# echo -n lee | base64
bGVl
[root@k8s-master secrets]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml
[root@k8s-master secrets]# vim userlist.yml
apiVersion: v1
kind: Secret
metadata:
creationTimestamp: null
name: userlist
type: Opaque
data:
username: dGltaW5nbGVl
password: bGVl
[root@k8s-master secrets]# kubectl apply -f userlist.yml
secret/userlist created
[root@k8s-master secrets]# kubectl describe secrets userlist
Name: userlist
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 3 bytes
username: 9 bytes
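secret 中的数据只是 base64 编码而非加密,可以直接取出并解码查看(以上文的 userlist 为例,仅作示意):
kubectl get secret userlist -o jsonpath='{.data.username}' | base64 -d
kubectl get secret userlist -o jsonpath='{.data.password}' | base64 -d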
7.2.3 Secret的使用方法
7.2.3.1 将Secret挂载到Volume中
[root@k8s-master secrets]# kubectl run nginx --image nginx --dry-run=client -o yaml > pod1.yaml
#向固定路径映射
[root@k8s-master secrets]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- name: secrets
mountPath: /secret
readOnly: true
volumes:
- name: secrets
secret:
secretName: userlist
[root@k8s-master secrets]# kubectl apply -f pod1.yaml
pod/nginx created
[root@k8s-master secrets]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password username
root@nginx:/secret# cat password
leeroot@nginx:/secret# cat username
timingleeroot@nginx:/secret#
7.2.3.2 向指定路径映射 secret 密钥
#向指定路径映射
[root@k8s-master secrets]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx1
name: nginx1
spec:
containers:
- image: nginx
name: nginx1
volumeMounts:
- name: secrets
mountPath: /secret
readOnly: true
volumes:
- name: secrets
secret:
secretName: userlist
items:
- key: username
path: my-users/username
[root@k8s-master secrets]# kubectl apply -f pod2.yaml
pod/nginx1 created
[root@k8s-master secrets]# kubectl exec pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username
7.2.3.3 将Secret设置为环境变量
[root@k8s-master secrets]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: busybox
name: busybox
spec:
containers:
- image: busybox
name: busybox
command:
- /bin/sh
- -c
- env
env:
- name: USERNAME
valueFrom:
secretKeyRef:
name: userlist
key: username
- name: PASS
valueFrom:
secretKeyRef:
name: userlist
key: password
restartPolicy: Never
[root@k8s-master secrets]# kubectl apply -f pod3.yaml
pod/busybox created
[root@k8s-master secrets]# kubectl logs pods/busybox
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=busybox
MYAPP_V1_SERVICE_HOST=10.104.84.65
MYAPP_V2_SERVICE_HOST=10.105.246.219
SHLVL=1
HOME=/root
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
USERNAME=timinglee
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
PASS=lee
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
7.2.3.4 存储docker registry的认证信息
建立私有仓库并上传镜像
#登陆仓库
[root@k8s-master secrets]# docker login reg.timinglee.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded
#上传镜像
[root@k8s-master secrets]# docker tag timinglee/game2048:latest reg.timinglee.org/timinglee/game2048:latest
[root@k8s-master secrets]# docker push reg.timinglee.org/timinglee/game2048:latest
The push refers to repository [reg.timinglee.org/timinglee/game2048]
88fca8ae768a: Pushed
6d7504772167: Pushed
192e9fad2abc: Pushed
36e9226e74f8: Pushed
011b303988d2: Pushed
latest: digest: sha256:8a34fb9cb168c420604b6e5d32ca6d412cb0d533a826b313b190535c03fe9390 size: 1364
#建立用于docker认证的secret
[root@k8s-master secrets]# kubectl create secret docker-registry docker-auth --docker-server reg.timinglee.org --docker-username admin --docker-password lee --docker-email timinglee@timinglee.org
secret/docker-auth created
[root@k8s-master secrets]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: game2048
name: game2048
spec:
containers:
- image: reg.timinglee.org/timinglee/game2048:latest
name: game2048
imagePullSecrets: #不设定docker认证时无法下载镜像
- name: docker-auth
[root@k8s-master secrets]# kubectl get pods
NAME READY STATUS RESTARTS AGE
game2048 1/1 Running 0 4s
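除了在每个 pod 中写 imagePullSecrets,也可以把该 secret 绑定到所在 namespace 的默认 serviceaccount 上,之后该 namespace 中的 pod 会自动带上这份认证信息(示意,其中 pod 名 game2048-test 仅为演示):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "docker-auth"}]}'
kubectl run game2048-test --image reg.timinglee.org/timinglee/game2048:latest   # 不写 imagePullSecrets 也能拉取私有镜像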
7.3 volumes配置管理
- 容器中文件在磁盘上是临时存放的,这给容器中运行的特殊应用程序带来一些问题
- 当容器崩溃时,kubelet将重新启动容器,容器中的文件将会丢失,因为容器会以干净的状态重建。
- 当在一个 Pod 中同时运行多个容器时,常常需要在这些容器之间共享文件。
- Kubernetes 卷具有明确的生命周期,与使用它的 Pod 相同
- 卷比 Pod 中运行的任何容器的存活期都长,在容器重新启动时数据也会得到保留
- 当一个 Pod 不再存在时,卷也将不再存在。
- Kubernetes 可以支持许多类型的卷,Pod 也能同时使用任意数量的卷。
- 卷不能挂载到其他卷,也不能与其他卷有硬链接。 Pod 中的每个容器必须独立地指定每个卷的挂载位置。
7.3.1 kubernets支持的卷的类型
官网:
k8s支持的卷的类型如下:
- awsElasticBlockStore 、azureDisk、azureFile、cephfs、cinder、configMap、csi
- downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker
- gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local、
- nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd
- scaleIO、secret、storageos、vsphereVolume
7.3.2 emptyDir卷
功能:
当Pod指定到某个节点上时,首先创建的是一个emptyDir卷,并且只要 Pod 在该节点上运行,卷就一直存在。卷最初是空的。 尽管 Pod 中的容器挂载 emptyDir 卷的路径可能相同也可能不同,但是这些容器都可以读写 emptyDir 卷中相同的文件。 当 Pod 因为某些原因被从节点上删除时,emptyDir 卷中的数据也会永久删除
emptyDir 的使用场景:
- 缓存空间,例如基于磁盘的归并排序。
- 耗时较长的计算任务提供检查点,以便任务能方便地从崩溃前状态恢复执行。
- 在 Web 服务器容器服务数据时,保存内容管理器容器获取的文件。
示例:
[root@k8s-master volumes]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: busyboxplus:latest
name: vm1
command:
- /bin/sh
- -c
- sleep 30000000
volumeMounts:
- mountPath: /cache
name: cache-vol
- image: nginx:latest
name: vm2
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
emptyDir:
medium: Memory
sizeLimit: 100Mi
[root@k8s-master volumes]# kubectl apply -f pod1.yml
#查看pod中卷的使用情况
[root@k8s-master volumes]# kubectl describe pods vol1
#测试效果
[root@k8s-master volumes]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # ls
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo timinglee > index.html
/cache # curl localhost
timinglee
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
7.3.3 hostpath卷
功能:
hostPath 卷能将主机节点文件系统上的文件或目录挂载到您的 Pod 中,不会因为pod关闭而被删除
hostPath 的一些用法
- 运行一个需要访问 Docker 引擎内部机制的容器,挂载 /var/lib/docker 路径。
- 在容器中运行 cAdvisor(监控) 时,以 hostPath 方式挂载 /sys。
- 允许 Pod 指定给定的 hostPath 在运行 Pod 之前是否应该存在,是否应该创建以及应该以什么方式存在
hostPath的安全隐患
- 具有相同配置(例如从 podTemplate 创建)的多个 Pod 会由于节点上文件的不同而在不同节点上有不同的行为。
- 当 Kubernetes 按照计划添加资源感知的调度时,这类调度机制将无法考虑由 hostPath 使用的资源。
- 基础主机上创建的文件或目录只能由 root 用户写入。您需要在 特权容器 中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 hostPath 卷。
示例:
[root@k8s-master volumes]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: nginx:latest
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
hostPath:
path: /data
type: DirectoryOrCreate #当/data目录不存在时自动建立
#测试:
[root@k8s-master volumes]# kubectl apply -f pod2.yml
pod/vol1 created
[root@k8s-master volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1 1/1 Running 0 10s 10.244.2.48 k8s-node2 <none> <none>
[root@k8s-master volumes]# curl 10.244.2.48
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
[root@k8s-node2 ~]# echo timinglee > /data/index.html
[root@k8s-master volumes]# curl 10.244.2.48
timinglee
#当pod被删除后hostPath不会被清理
[root@k8s-master volumes]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-node2 ~]# ls /data/
index.html
7.3.4 nfs卷
NFS 卷允许将一个现有的 NFS 服务器上的目录挂载到 Kubernetes 中的 Pod 中。这对于在多个 Pod 之间共享数据或持久化存储数据非常有用
例如,如果有多个容器需要访问相同的数据集,或者需要将容器中的数据持久保存到外部存储,NFS 卷可以提供一种方便的解决方案。
7.3.4.1 部署一台nfs共享主机并在所有k8s节点中安装nfs-utils
#部署nfs主机
[root@reg ~]# dnf install nfs-utils -y
[root@reg ~]# systemctl enable --now nfs-server.service
[root@reg ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@reg ~]# exportfs -rv
exporting *:/nfsdata
[root@reg ~]# showmount -e
Export list for reg.timinglee.org:
/nfsdata *
#在k8s所有节点中安装nfs-utils
[root@k8s-master & node1 & node2 ~]# dnf install nfs-utils -y
7.3.4.2 部署nfs卷
[root@k8s-master volumes]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: nginx:latest
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
nfs:
server: 172.25.254.250
path: /nfsdata
[root@k8s-master volumes]# kubectl apply -f pod3.yml
pod/vol1 created
#测试
[root@k8s-master volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1 1/1 Running 0 100s 10.244.2.50 k8s-node2 <none> <none>
[root@k8s-master volumes]# curl 10.244.2.50
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
##在nfs主机中
[root@reg ~]# echo timinglee > /nfsdata/index.html
[root@k8s-master volumes]# curl 10.244.2.50
timinglee
7.3.5 PersistentVolume持久卷
7.3.5.1 静态持久卷pv与静态持久卷声明pvc
PersistentVolume(持久卷,简称PV)
pv是集群内由管理员提供的网络存储的一部分。
PV也是集群中的一种资源。是一种volume插件,
但是它的生命周期却是和使用它的Pod相互独立的。
PV这个API对象,捕获了诸如NFS、ISCSI、或其他云存储系统的实现细节
pv有两种提供方式:静态和动态
- 静态PV:集群管理员创建多个PV,它们携带着真实存储的详细信息,它们存在于Kubernetes API中,并可用于存储使用
- 动态PV:当管理员创建的静态PV都不匹配用户的PVC时,集群可能会尝试专门地供给volume给PVC。这种供给基于StorageClass
PersistentVolumeClaim(持久卷声明,简称PVC)
- 是用户的一种存储请求
- 它和Pod类似,Pod消耗Node资源,而PVC消耗PV资源
- Pod能够请求特定的资源(如CPU和内存)。PVC能够请求指定的大小和访问的模式持久卷配置
- PVC与PV的绑定是一对一的映射。没找到匹配的PV,那么PVC会无限期得处于unbound未绑定状态
volumes访问模式
ReadWriteOnce – 该volume只能被单个节点以读写的方式映射
ReadOnlyMany – 该volume可以被多个节点以只读方式映射
ReadWriteMany – 该volume可以被多个节点以读写的方式映射
在命令行中,访问模式可以简写为:
- RWO - ReadWriteOnce
- ROX - ReadOnlyMany
- RWX – ReadWriteMany
volumes回收策略
- Retain:保留,需要手动回收
- Recycle:回收,自动删除卷中数据(在当前版本中已经废弃)
- Delete:删除,相关联的存储资产,如AWS EBS,GCE PD,Azure Disk,or OpenStack Cinder卷都会被删除
注意:
只有NFS和HostPath支持回收利用
AWS EBS,GCE PD,Azure Disk,or OpenStack Cinder卷支持删除操作
volumes状态说明
- Available 卷是一个空闲资源,尚未绑定到任何申领
- Bound 该卷已经绑定到某申领
- Released 所绑定的申领已被删除,但是关联存储资源尚未被集群回收
- Failed 卷的自动回收操作失败
静态pv实例:
#在nfs主机中建立实验目录
[root@reg ~]# mkdir /nfsdata/pv{1..3}
#编写创建pv的yml文件,pv是集群资源,不在任何namespace中
[root@k8s-master pvc]# vim pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv1
server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv2
spec:
capacity:
storage: 15Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv2
server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv3
spec:
capacity:
storage: 25Gi
volumeMode: Filesystem
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv3
server: 172.25.254.250
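#补充:pv.yml 编写完成后需要先创建资源(原文省略了这一步,命令按上文目录与文件名补全,输出从略)
[root@k8s-master pvc]# kubectl apply -f pv.yml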
[root@k8s-master pvc]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv1 5Gi RWO Retain Available nfs <unset> 4m50s
pv2 15Gi RWX Retain Available nfs <unset> 4m50s
pv3 25Gi ROX Retain Available nfs <unset> 4m50s
#建立pvc,pvc是pv使用的申请,需要保证和pod在一个namespace中
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc3
spec:
storageClassName: nfs
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 15Gi
[root@k8s-master pvc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pv1 5Gi RWO nfs <unset> 5s
pvc2 Bound pv2 15Gi RWX nfs <unset> 4s
pvc3 Bound pv3 25Gi ROX nfs <unset> 4s
#在其他namespace中无法应用
[root@k8s-master pvc]# kubectl -n kube-system get pvc
No resources found in kube-system namespace.
在pod中使用pvc
[root@k8s-master pvc]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
name: timinglee
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: vol1
volumes:
- name: vol1
persistentVolumeClaim:
claimName: pvc1
[root@k8s-master pvc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timinglee 1/1 Running 0 83s 10.244.2.54 k8s-node2 <none> <none>
[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
root@timinglee:/# cd /usr/share/nginx/
root@timinglee:/usr/share/nginx# ls
html
root@timinglee:/usr/share/nginx# cd html/
root@timinglee:/usr/share/nginx/html# ls
[root@reg ~]# echo timinglee > /nfsdata/pv1/index.html
[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# cd /usr/share/nginx/html/
root@timinglee:/usr/share/nginx/html# ls
index.html
7.4 存储类storageclass
官网:
7.4.1 StorageClass说明
- StorageClass提供了一种描述存储类(class)的方法,不同的class可能会映射到不同的服务质量等级和备份策略或其他策略等。
- 每个 StorageClass 都包含 provisioner、parameters 和 reclaimPolicy 字段, 这些字段会在StorageClass需要动态分配 PersistentVolume 时会使用到
7.4.2 StorageClass的属性
属性说明:
Provisioner(存储分配器):用来决定使用哪个卷插件分配 PV,该字段必须指定。可以指定内部分配器,也可以指定外部分配器。外部分配器的代码地址为: kubernetes-incubator/external-storage,其中包括NFS和Ceph等。
Reclaim Policy(回收策略):通过reclaimPolicy字段指定创建的Persistent Volume的回收策略,回收策略包括:Delete 或者 Retain,没有指定默认为Delete。
7.4.3 存储分配器NFS Client Provisioner
源码地址:
- NFS Client Provisioner是一个automatic provisioner,使用NFS作为存储,自动创建PV和对应的PVC,本身不提供NFS存储,需要外部先有一套NFS存储服务。
- PV以 ${namespace}-${pvcName}-${pvName}的命名格式提供(在NFS服务器上)
- PV回收的时候以 archived-${namespace}-${pvcName}-${pvName} 的命名格式(在NFS服务器上)
7.4.4 部署NFS Client Provisioner
7.4.4.1 创建sa并授权
[root@k8s-master storageclass]# vim rbac.yml
apiVersion: v1
kind: Namespace
metadata:
name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-client-provisioner
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-client-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
#查看rbac信息
[root@k8s-master storageclass]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get sa
NAME SECRETS AGE
default 0 14s
nfs-client-provisioner 0 14s
7.4.4.2 部署应用
[root@k8s-master storageclass]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
namespace: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 172.25.254.250
- name: NFS_PATH
value: /nfsdata
volumes:
- name: nfs-client-root
nfs:
server: 172.25.254.250
path: /nfsdata
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 86s
7.4.4.3 创建存储类
[root@k8s-master storageclass]# vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
archiveOnDelete: "false"
[root@k8s-master storageclass]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 9s
7.4.4.4 创建pvc
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1G
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-7782a006-381a-440a-addb-e9d659b8fe0b 1Gi RWX nfs-client <unset> 21m
7.4.4.5 创建测试pod
[root@k8s-master storageclass]# vim pod.yml
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: busybox
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/SUCCESS && exit 0 || exit 1"
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
[root@k8s-master storageclass]# kubectl apply -f pod.yml
[root@reg ~]# ls /nfsdata/default-test-claim-pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2/
SUCCESS
7.4.4.6 设置默认存储类
- 在未设定默认存储类时pvc必须指定使用类的名称
- 在设定存储类后创建pvc时可以不用指定storageClassName
#一次性指定多个pvc
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc3
spec:
storageClassName: nfs-client
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 15Gi
[root@k8s-master pvc]# kubectl apply -f pvc.yml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s-master pvc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pvc-25a3c8c5-2797-4240-9270-5c51caa211b8 1Gi RWO nfs-client <unset> 4s
pvc2 Bound pvc-c7f34d1c-c8d3-4e7f-b255-e29297865353 10Gi RWX nfs-client <unset> 4s
pvc3 Bound pvc-5f1086ad-2999-487d-88d2-7104e3e9b221 15Gi ROX nfs-client <unset> 4s
test-claim Bound pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2 1Gi RWX nfs-client <unset> 9m9s
设定默认存储类
[root@k8s-master storageclass]# kubectl edit sc nfs-client
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-client"},"parameters":{"archiveOnDelete":"false"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
storageclass.kubernetes.io/is-default-class: "true" #设定默认存储类
creationTimestamp: "2024-09-07T13:49:10Z"
name: nfs-client
resourceVersion: "218198"
uid: 9eb1e144-3051-4f16-bdec-30c472358028
parameters:
archiveOnDelete: "false"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
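#补充:除了 kubectl edit,也可以用 kubectl patch 直接给存储类加上该注解,两种做法等效(示例命令):
$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'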
#测试,未指定storageClassName参数
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-b96c6983-5a4f-440d-99ec-45c99637f9b5 1Gi RWX nfs-client <unset> 7s
7.5 statefulset控制器
7.5.1 功能特性
- StatefulSet是为了管理有状态服务的问题设计的
- StatefulSet将应用状态抽象成了两种情况:
- 拓扑状态:应用实例必须按照某种顺序启动。新创建的Pod必须和原来Pod的网络标识一样
- 存储状态:应用的多个实例分别绑定了不同存储数据。
- StatefulSet给所有的Pod进行了编号,编号规则是:$(statefulset名称)-$(序号),从0开始。
- Pod被删除后重建,重建Pod的网络标识也不会改变,Pod的拓扑状态按照Pod的“名字+编号”的方式固定下来,并且为每个Pod提供了一个固定且唯一的访问入口,Pod对应的DNS记录。
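#补充:StatefulSet 中每个 Pod 的 DNS 记录形如 <pod名>.<无头服务名>.<namespace>.svc.<集群域名>,以本文后面创建的资源为例(default 命名空间、默认集群域名 cluster.local,仅作示意):
web-0.nginx-svc.default.svc.cluster.local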
7.5.2 StatefulSet的组成部分
- Headless Service:用来定义pod网络标识,生成可解析的DNS记录
- volumeClaimTemplates:创建pvc,指定pvc名称大小,自动创建pvc且pvc由存储类供应。
- StatefulSet:管理pod的控制器,负责按固定的名称和编号有序地创建、删除pod
7.5.3 构建方法
#建立无头服务
[root@k8s-master statefulset]# vim headless.yml
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
[root@k8s-master statefulset]# kubectl apply -f headless.yml
#建立statefulset
[root@k8s-master statefulset]# vim statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx-svc"
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web configured
[root@k8s-master statefulset]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m26s
web-1 1/1 Running 0 3m22s
web-2 1/1 Running 0 3m18s
[root@reg nfsdata]# ls /nfsdata/
default-test-claim-pvc-34b3d968-6c2b-42f9-bbc3-d7a7a02dcbac
default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f
default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c
default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854
7.5.4 测试:
#为每个pod建立index.html文件
[root@reg nfsdata]# echo web-0 > default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f/index.html
[root@reg nfsdata]# echo web-1 > default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c/index.html
[root@reg nfsdata]# echo web-2 > default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854/index.html
#建立测试pod访问web-0~2
[root@k8s-master statefulset]# kubectl run -it testpod --image busyboxplus
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
#删掉重新建立statefulset
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web created
#访问依然不变
[root@k8s-master statefulset]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
7.5.5 statefulset的弹缩
首先,对想要弹缩的 StatefulSet,需要先确认该应用是否适合弹缩
用命令改变副本数
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
通过编辑配置改变副本数
$ kubectl edit statefulsets.apps <stateful-set-name>
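#补充:弹缩过程中可以另开终端观察 Pod 按编号有序创建和回收(示例命令,标签沿用本文模板中的 app=nginx):
$ kubectl get pods -w -l app=nginx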
statefulset有序回收
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl delete pvc --all
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
persistentvolumeclaim "www-web-3" deleted
persistentvolumeclaim "www-web-4" deleted
persistentvolumeclaim "www-web-5" deleted
八 k8s网络通信
8.1 k8s通信整体架构
k8s通过CNI接口接入其他插件来实现网络通讯。目前比较流行的插件有flannel、calico等
CNI插件存放位置:# cat /etc/cni/net.d/10-flannel.conflist
插件使用的解决方案如下
- 虚拟网桥,虚拟网卡,多个容器共用一个虚拟网卡进行通信。
- 多路复用:MacVLAN,多个容器共用一个物理网卡进行通信。
- 硬件交换:SR-IOV,一个物理网卡可以虚拟出多个接口,这个性能最好。
容器间通信:
- 同一个pod内的多个容器间的通信,通过lo回环接口即可实现
- 同一节点的pod之间通过cni网桥转发数据包。
- 不同节点的pod之间的通信需要网络插件支持
pod和service通信: 通过iptables或ipvs实现通信,ipvs取代不了iptables,因为ipvs只能做负载均衡,而做不了nat转换
pod和外网通信:iptables的MASQUERADE
Service与集群外部客户端的通信:ingress、nodeport、loadbalancer
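#补充:kubeadm 部署的集群可以通过 kube-proxy 的 configmap 确认当前使用的是 iptables 还是 ipvs 模式(示例命令,输出因环境而异;mode 为空时默认使用 iptables):
$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode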
8.2 flannel网络插件
插件组成:
插件 | 功能 |
---|---|
VXLAN | 即Virtual Extensible LAN(虚拟可扩展局域网),是Linux本身支持的一种网络虚拟化技术。VXLAN可以完全在内核态实现封装和解封装工作,从而通过“隧道”机制,构建出覆盖网络(Overlay Network) |
VTEP | VXLAN Tunnel End Point(虚拟隧道端点),在Flannel中 VNI的默认值是1,这也是为什么宿主机的VTEP设备都叫flannel.1的原因 |
Cni0 | 网桥设备,每创建一个pod都会创建一对 veth pair。其中一端是pod中的eth0,另一端是Cni0网桥中的端口(网卡) |
Flannel.1 | VXLAN设备(VTEP,虚拟网卡),用来进行 vxlan 报文的处理(封包和解包)。不同node之间的pod数据流量都从overlay设备以隧道的形式发送到对端 |
Flanneld | flannel在每个主机中运行flanneld作为agent,它会为所在主机从集群的网络地址空间中,获取一个小的网段subnet,本主机内所有容器的IP地址都将从中分配。同时Flanneld监听K8s集群数据库,为flannel.1设备提供封装数据时必要的mac、ip等网络数据信息 |
8.2.1 flannel跨主机通信原理
- 当容器发送IP包,通过veth pair 发往cni网桥,再路由到本机的flannel.1设备进行处理。
- VTEP设备之间通过二层数据帧进行通信,源VTEP设备收到原始IP包后,在上面加上一个目的MAC地址,封装成一个内部数据帧,发送给目的VTEP设备。
- 内部数据帧,并不能在宿主机的二层网络传输,Linux内核还需要把它进一步封装成为宿主机的一个普通的数据帧,承载着内部数据帧通过宿主机的eth0进行传输。
- Linux会在内部数据帧前面,加上一个VXLAN头,VXLAN头里有一个重要的标志叫VNI,它是VTEP识别某个数据帧是不是应该归自己处理的重要标识。
- flannel.1设备只知道另一端flannel.1设备的MAC地址,却不知道对应的宿主机地址是什么。在linux内核里面,网络设备进行转发的依据,来自FDB的转发数据库,这个flannel.1网桥对应的FDB信息,是由flanneld进程维护的。
- linux内核在IP包前面再加上二层数据帧头,把目标节点的MAC地址填进去,MAC地址从宿主机的ARP表获取。
- 此时flannel.1设备就可以把这个数据帧从eth0发出去,再经过宿主机网络来到目标节点的eth0设备。目标主机内核网络栈会发现这个数据帧有VXLAN Header,并且VNI为1,Linux内核会对它进行拆包,拿到内部数据帧,根据VNI的值,交给本机flannel.1设备处理,flannel.1拆包,根据路由表发往cni网桥,最后到达目标容器。
#默认网络通信路由
[root@k8s-master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100
#桥接转发数据库
[root@k8s-master ~]# bridge fdb
01:00:5e:00:00:01 dev eth0 self permanent
33:33:00:00:00:01 dev eth0 self permanent
01:00:5e:00:00:fb dev eth0 self permanent
33:33:ff:65:cb:fa dev eth0 self permanent
33:33:00:00:00:fb dev eth0 self permanent
33:33:00:00:00:01 dev docker0 self permanent
01:00:5e:00:00:6a dev docker0 self permanent
33:33:00:00:00:6a dev docker0 self permanent
01:00:5e:00:00:01 dev docker0 self permanent
01:00:5e:00:00:fb dev docker0 self permanent
02:42:76:94:aa:bc dev docker0 vlan 1 master docker0 permanent
02:42:76:94:aa:bc dev docker0 master docker0 permanent
33:33:00:00:00:01 dev kube-ipvs0 self permanent
82:14:17:b1:1d:d0 dev flannel.1 dst 172.25.254.20 self permanent
22:7f:e7:fd:33:77 dev flannel.1 dst 172.25.254.10 self permanent
33:33:00:00:00:01 dev cni0 self permanent
01:00:5e:00:00:6a dev cni0 self permanent
33:33:00:00:00:6a dev cni0 self permanent
01:00:5e:00:00:01 dev cni0 self permanent
33:33:ff:aa:13:2f dev cni0 self permanent
01:00:5e:00:00:fb dev cni0 self permanent
33:33:00:00:00:fb dev cni0 self permanent
0e:49:e3:aa:13:2f dev cni0 vlan 1 master cni0 permanent
0e:49:e3:aa:13:2f dev cni0 master cni0 permanent
7a:1c:2d:5d:0e:9e dev vethf29f1523 master cni0
5e:4e:96:a0:eb:db dev vethf29f1523 vlan 1 master cni0 permanent
5e:4e:96:a0:eb:db dev vethf29f1523 master cni0 permanent
33:33:00:00:00:01 dev vethf29f1523 self permanent
01:00:5e:00:00:01 dev vethf29f1523 self permanent
33:33:ff:a0:eb:db dev vethf29f1523 self permanent
33:33:00:00:00:fb dev vethf29f1523 self permanent
b2:f9:14:9f:71:29 dev veth18ece01e master cni0
3a:05:06:21:bf:7f dev veth18ece01e vlan 1 master cni0 permanent
3a:05:06:21:bf:7f dev veth18ece01e master cni0 permanent
33:33:00:00:00:01 dev veth18ece01e self permanent
01:00:5e:00:00:01 dev veth18ece01e self permanent
33:33:ff:21:bf:7f dev veth18ece01e self permanent
33:33:00:00:00:fb dev veth18ece01e self permanent
#arp列表
[root@k8s-master ~]# arp -n
Address HWtype HWaddress Flags Mask Iface
10.244.0.2 ether 7a:1c:2d:5d:0e:9e C cni0
172.25.254.1 ether 00:50:56:c0:00:08 C eth0
10.244.2.0 ether 82:14:17:b1:1d:d0 CM flannel.1
10.244.1.0 ether 22:7f:e7:fd:33:77 CM flannel.1
172.25.254.20 ether 00:0c:29:6a:a8:61 C eth0
172.25.254.10 ether 00:0c:29:ea:52:cb C eth0
10.244.0.3 ether b2:f9:14:9f:71:29 C cni0
172.25.254.2 ether 00:50:56:fc:e0:b9 C eth0
8.2.2 flannel支持的后端模式
网络模式 | 功能 |
---|---|
vxlan | 报文封装,默认模式 |
DirectRouting | 直接路由,跨网段使用vxlan,同网段使用host-gw模式 |
host-gw | 主机网关,性能好,但只能在二层网络中,不支持跨网络 如果有成千上万的Pod,容易产生广播风暴,不推荐 |
UDP | 性能差,不推荐 |
更改flannel的默认模式
[root@k8s-master ~]# kubectl -n kube-flannel edit cm kube-flannel-cfg
apiVersion: v1
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "host-gw" #更改内容
}
}
#重启pod
[root@k8s-master ~]# kubectl -n kube-flannel delete pod --all
pod "kube-flannel-ds-bk8wp" deleted
pod "kube-flannel-ds-mmftf" deleted
pod "kube-flannel-ds-tmfdn" deleted
[root@k8s-master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 172.25.254.10 dev eth0
10.244.2.0/24 via 172.25.254.20 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100
8.3 calico网络插件
官网:
8.3.1 calico简介:
- 纯三层的转发,中间没有任何的NAT和overlay,转发效率最好。
- Calico 仅依赖三层路由可达。Calico 较少的依赖性使它能适配所有 VM、Container、白盒或者混合环境场景。
8.3.2 calico网络架构
8.3.3 部署calico
删除flannel插件
[root@master k8s-img]# kubectl delete -f kube-flannel.yml
namespace "kube-flannel" deleted
serviceaccount "flannel" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted
删除所有节点上flannel配置文件,避免冲突
[root@all k8s-img]# rm -rf /etc/cni/net.d/10-flannel.conflist
下载部署文件
[root@master calico]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico-typha.yaml -o calico.yaml
下载镜像上传至仓库:
#打标签的过程就省略了
[root@master network]# docker push reg.timingy.org/calico/cni:v3.28.1
The push refers to repository [reg.timingy.org/calico/cni]
5f70bf18a086: Mounted from flannel/flannel
38ba74eb8103: Pushed
6b2e64a0b556: Pushed
v3.28.1: digest: sha256:4bf108485f738856b2a56dbcfb3848c8fb9161b97c967a7cd479a60855e13370 size: 946
[root@master network]# docker push reg.timingy.org/calico/node:v3.28.1
The push refers to repository [reg.timingy.org/calico/node]
3831744e3436: Pushed
v3.28.1: digest: sha256:f72bd42a299e280eed13231cc499b2d9d228ca2f51f6fd599d2f4176049d7880 size: 530
[root@master network]# docker push reg.timingy.org/calico/kube-controllers:v3.28.1
The push refers to repository [reg.timingy.org/calico/kube-controllers]
4f27db678727: Pushed
6b2e64a0b556: Mounted from calico/cni
v3.28.1: digest: sha256:8579fad4baca75ce79644db84d6a1e776a3c3f5674521163e960ccebd7206669 size: 740
[root@master network]# docker push reg.timingy.org/calico/typha:v3.28.1
The push refers to repository [reg.timingy.org/calico/typha]
993f578a98d3: Pushed
6b2e64a0b556: Mounted from calico/kube-controllers
v3.28.1: digest: sha256:093ee2e785b54c2edb64dc68c6b2186ffa5c47aba32948a35ae88acb4f30108f size: 740
更改yml设置
[root@k8s-master calico]# vim calico.yaml
4835 image: calico/cni:v3.28.1 #会从你配置的默认docker仓库(/etc/docker/daemon.json)拉取镜像
4835 image: calico/cni:v3.28.1
4906 image: calico/node:v3.28.1
4932 image: calico/node:v3.28.1
5160 image: calico/kube-controllers:v3.28.1
5249 - image: calico/typha:v3.28.1
4970 - name: CALICO_IPV4POOL_IPIP
4971 value: "Never"
4999 - name: CALICO_IPV4POOL_CIDR
5000 value: "10.244.0.0/16"
5001 - name: CALICO_AUTODETECTION_METHOD
5002 value: "interface=eth0"
[root@master network]# kubectl apply -f calico.yaml
[root@master network]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6849cb478c-vqcj7 1/1 Running 0 31s
calico-node-54jw7 1/1 Running 0 31s
calico-node-knsq5 1/1 Running 0 31s
calico-node-nzmlx 1/1 Running 0 31s
calico-typha-fff9df85f-42n8s 1/1 Running 0 31s
coredns-7c677d6c78-7n96p 1/1 Running 1 (20h ago) 21h
coredns-7c677d6c78-jp6c5 1/1 Running 1 (20h ago) 21h
etcd-master 1/1 Running 1 (20h ago) 21h
kube-apiserver-master 1/1 Running 1 (20h ago) 21h
kube-controller-manager-master 1/1 Running 1 (20h ago) 21h
kube-proxy-qrfjs 1/1 Running 1 (20h ago) 21h
kube-proxy-rjzl9 1/1 Running 1 (20h ago) 21h
kube-proxy-tfwhz 1/1 Running 1 (20h ago) 21h
kube-scheduler-master 1/1 Running 1 (20h ago) 21h
测试:
[root@master network]# kubectl run test --image nginx
pod/test created
[root@master network]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 24s 10.244.166.128 node1 <none> <none>
[root@master network]# curl 10.244.166.128
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
九 k8s调度(Scheduling)
9.1 调度在Kubernetes中的作用
- 调度是指将未调度的Pod自动分配到集群中的节点的过程
- 调度器通过 kubernetes 的 watch 机制来发现集群中新创建且尚未被调度到 Node 上的 Pod
- 调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行
9.2调度原理:
创建Pod
- 用户通过Kubernetes API创建Pod对象,并在其中指定Pod的资源需求、容器镜像等信息。
调度器监视Pod
- Kubernetes调度器监视集群中的未调度Pod对象,并为其选择最佳的节点。
选择节点
- 调度器通过算法选择最佳的节点,并将Pod绑定到该节点上。调度器选择节点的依据包括节点的资源使用情况、Pod的资源需求、亲和性和反亲和性等。
绑定Pod到节点
- 调度器将Pod和节点之间的绑定信息保存在etcd数据库中,以便节点可以获取Pod的调度信息。
节点启动Pod
- 节点定期检查etcd数据库中的Pod调度信息,并启动相应的Pod。如果节点故障或资源不足,调度器会重新调度Pod,并将其绑定到其他节点上运行。
9.3 调度器种类
默认调度器(Default Scheduler):
- 是Kubernetes中的默认调度器,负责对新创建的Pod进行调度,并将Pod调度到合适的节点上。
自定义调度器(Custom Scheduler):
- 是一种自定义的调度器实现,可以根据实际需求来定义调度策略和规则,以实现更灵活和多样化的调度功能。
扩展调度器(Extended Scheduler):
- 是一种支持调度器扩展器的调度器实现,可以通过调度器扩展器来添加自定义的调度规则和策略,以实现更灵活和多样化的调度功能。
kube-scheduler是kubernetes中的默认调度器,在kubernetes运行后会自动在控制节点运行
9.4 常用调度方法
9.4.1 nodename
nodeName 是节点选择约束的最简单方法,但一般不推荐
如果 nodeName 在 PodSpec 中指定了,则它优先于其他的节点选择方法
使用 nodeName 来选择节点的一些限制
- 如果指定的节点不存在,Pod 将无法运行,并且在某些情况下可能会被自动删除。
- 如果指定的节点没有资源来容纳 pod,则pod 调度失败。
- 云环境中的节点名称并非总是可预测或稳定的
实例:
[root@master network]# kubectl run test --image nginx --dry-run=client -o yaml > test.yml
[root@master network]# vim test.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: test
name: test
spec:
containers:
- image: nginx
name: test
[root@master network]# kubectl apply -f test.yml
[root@master network]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 18m 10.244.166.128 node1 <none> <none>
#删除资源
[root@master scheduler]# kubectl delete -f test.yml
pod "test" deleted
#更改文件
[root@master scheduler]# cat test.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: test
name: test
spec:
nodeName: node2 #选择你要调度的节点名称
containers:
- image: nginx
name: test
[root@master scheduler]# kubectl apply -f test.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 6s 10.244.104.1 node2 <none> <none>
**注意:**找不到指定节点时 pod 会一直处于 Pending 状态;nodeName 优先级最高,其他调度方式无效
9.4.2 Nodeselector(通过标签控制节点)
- nodeSelector 是节点选择约束的最简单推荐形式
- 给选择的节点添加标签:
kubectl label nodes k8s-node1 lab=xxy
- 可以给多个节点设定相同标签
示例:
#查看节点标签
[root@master scheduler]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1 Ready <none> 21h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2 Ready <none> 21h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
#设定节点标签
[root@master scheduler]# kubectl label nodes node1 disktype=ssd
node/node1 labeled
[root@master scheduler]# kubectl get nodes node1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node1 Ready <none> 21h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
#调度设置
[root@master scheduler]# cat nodeSelector.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: test
name: test
spec:
nodeSelector:
disktype: ssd
containers:
- image: nginx
name: test
[root@master scheduler]# kubectl apply -f nodeSelector.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 8s 10.244.166.129 node1 <none> <none>
注意:节点标签可以给N个节点加
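#补充:如果需要删除节点标签,在键名后加减号即可(示例):
$ kubectl label nodes node1 disktype-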
使用 nodeName 和 nodeSelector 控制 Pod 调度存在以下主要缺陷:
灵活性不足:
- nodeName 直接硬编码节点名称,一旦节点不可用,Pod 会调度失败
- 无法动态适应集群节点变化,增减节点需手动修改配置
缺乏复杂调度策略:
- 仅能基于节点标签做简单匹配,无法实现资源亲和性、反亲和性等高级策略
- 不能根据节点资源使用率、负载情况动态调度
扩展性问题:
- 当集群规模扩大,节点标签管理复杂时,维护成本显著增加
- 难以应对多维度的调度需求(如同时考虑硬件类型、区域、可用区等)
容错能力弱:
- 使用 nodeName 时,若指定节点故障,Pod 会一直处于 Pending 状态
- 没有重试或自动转移到其他节点的机制
与自动扩缩容不兼容:
- 无法很好地配合集群自动扩缩容(HPA/VPA)工作
- 新扩容节点可能无法被正确选中
这些缺陷使得 nodeName 和 nodeSelector 更适合简单场景,复杂场景通常需要使用 Node Affinity、Pod Affinity 等更高级的调度策略。
9.5 affinity(亲和性)
官方文档 :
9.5.1 亲和与反亲和
- nodeSelector 提供了一种非常简单的方法来将 pod 约束到具有特定标签的节点上。亲和/反亲和功能极大地扩展了你可以表达约束的类型。
- 使用节点上的 pod 的标签来约束,而不是使用节点本身的标签,来允许哪些 pod 可以或者不可以被放置在一起。
9.5.2 nodeAffinity节点亲和
哪个节点满足指定条件,Pod 就调度到哪个节点运行
requiredDuringSchedulingIgnoredDuringExecution 必须满足,但不会影响已经调度
preferredDuringSchedulingIgnoredDuringExecution 倾向满足,在无法满足情况下也会调度pod
- IgnoreDuringExecution 表示如果在Pod运行期间Node的标签发生变化,导致亲和性策略不能满足,则继续运行当前的Pod。
nodeaffinity还支持多种规则匹配条件的配置如
匹配规则 | 功能 |
---|---|
In | label 的值在列表内 |
NotIn | label 的值不在列表内 |
Gt | label 的值大于设置的值,不支持Pod亲和性 |
Lt | label 的值小于设置的值,不支持pod亲和性 |
Exists | 设置的label 存在 |
DoesNotExist | 设置的 label 不存在 |
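除必须满足的 required 规则外,下面给出一个倾向满足(preferred)规则的最小片段,仅作示意:weight 为自行设定的示例值,标签沿用上文的 disktype=ssd:
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd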
nodeAffinity示例
[root@master scheduler]# cat nodeAffinity_test1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: test
name: test
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disktype
operator: In
values:
- ssd
- fc
containers:
- image: nginx
name: test
[root@master scheduler]# kubectl apply -f nodeAffinity_test1.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 31s 10.244.166.130 node1 <none> <none>
[root@master scheduler]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1 Ready <none> 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux #含有指定的键和值
node2 Ready <none> 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
9.5.3 Podaffinity(pod的亲和)
- 哪个节点上运行着符合条件的Pod,新的Pod就调度到哪个节点运行
- podAffinity 主要解决POD可以和哪些POD部署在同一个节点中的问题
- podAntiAffinity主要解决POD不能和哪些POD部署在同一个节点中的问题。它们处理的是Kubernetes集群内部POD和POD之间的关系。
- Pod 间亲和与反亲和在与更高级别的集合(例如 ReplicaSets、StatefulSets、Deployments 等)一起使用时更为有用,可以方便地将一组相互关联的工作负载约束到同一拓扑域中。
- Pod 间亲和与反亲和需要大量的处理,这可能会显著减慢大规模集群中的调度。
Podaffinity示例
#先运行一个pod,记住他的标签是run=test
[root@master scheduler]# kubectl apply -f nodeName.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
test 1/1 Running 0 12s 10.244.104.3 node2 <none> <none> run=test
#节点亲和配置实例
[root@master scheduler]# vim podAffinity_test1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: myappv1
name: myappv1
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: run
operator: In
values:
- test
topologyKey: "kubernetes.io/hostname"
containers:
- image: myapp:v1
name: test
[root@master scheduler]# kubectl apply -f podAffinity_test1.yml
pod/myappv1 created
[root@master scheduler]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
myappv1 1/1 Running 0 6s 10.244.104.4 node2 <none> <none> run=myappv1
test 1/1 Running 0 3m10s 10.244.104.3 node2 <none> <none> run=test
可以看到该Pods根据我们的节点亲和配置被调度到了node2
也可以设置operator: NotIn 让pod被调度到其他节点
9.5.4 Podantiaffinity(pod反亲和)
Podantiaffinity示例
[root@master scheduler]# vim podAntiaffinity_test1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: myappv1
name: myappv1
spec:
affinity:
podAntiAffinity: #反亲和
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: run
operator: In
values:
- test
topologyKey: "kubernetes.io/hostname"
containers:
- image: myapp:v1
name: test
[root@master scheduler]# kubectl apply -f nodeName.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 7s 10.244.104.5 node2 <none> <none>
[root@master scheduler]# kubectl apply -f podAntiaffinity_test1.yml
pod/myappv1 created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myappv1 1/1 Running 0 9s 10.244.166.131 node1 <none> <none>
test 1/1 Running 0 40s 10.244.104.5 node2 <none> <none>
9.6 Taints(污点模式,禁止调度)
- Taints(污点)是Node的一个属性,设置了Taints后,默认Kubernetes是不会将Pod调度到这个Node上
- Kubernetes如果为Pod设置Tolerations(容忍),只要Pod能够容忍Node上的污点,那么Kubernetes就会忽略Node上的污点,就能够(不是必须)把Pod调度过去
- 可以使用命令 kubectl taint 给节点增加一个 taint:
$ kubectl taint nodes <nodename> key=value:effect #命令执行方法
$ kubectl taint nodes node1 key=value:NoSchedule #创建
$ kubectl describe nodes node1 | grep Taints #查询
$ kubectl taint nodes node1 key- #删除
其中[effect] 可取值:
effect值 | 解释 |
---|---|
NoSchedule | POD 不会被调度到标记为 taints 节点 |
PreferNoSchedule | NoSchedule 的软策略版本,尽量不调度到此节点 |
NoExecute | 如该节点内正在运行的 POD 没有对应 Tolerate 设置,会直接被逐出 |
9.6.1 Taints示例
[root@master scheduler]# kubectl create deployment web --image nginx --replicas 2 --dry-run=client -o yaml > taints_test1.yml
[root@master scheduler]# vim taints_test1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web
spec:
replicas: 2
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- image: nginx
name: nginx
[root@master scheduler]# kubectl apply -f taints_test1.yml
deployment.apps/web created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-fg4t7 1/1 Running 0 8s 10.244.104.6 node2 <none> <none>
web-7c56dcdb9b-mpsj6 1/1 Running 0 8s 10.244.166.132 node1 <none> <none>
#设定污点为NoSchedule
[root@master scheduler]# kubectl taint node node1 name=xxy:NoSchedule
node/node1 tainted
#控制器增加pod
[root@master scheduler]# kubectl scale deployment web --replicas 6
deployment.apps/web scaled
#查看调度情况
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-52sht 1/1 Running 0 6s 10.244.104.8 node2 <none> <none>
web-7c56dcdb9b-792zc 1/1 Running 0 6s 10.244.104.10 node2 <none> <none>
web-7c56dcdb9b-8mvc4 1/1 Running 0 6s 10.244.104.7 node2 <none> <none>
web-7c56dcdb9b-fg4t7 1/1 Running 0 5m28s 10.244.104.6 node2 <none> <none>
web-7c56dcdb9b-mpsj6 1/1 Running 0 5m28s 10.244.166.132 node1 <none> <none>
web-7c56dcdb9b-zw6ft 1/1 Running 0 6s 10.244.104.9 node2 <none> <none>
可以看到为node1设置了NoSchedule污点再增加pod,pod不会再被调度到node1,但是已经运行在node1上的pod依然运行
#设定污点为NoExecute
[root@master scheduler]# kubectl taint node node1 name=xxy:NoExecute
node/node1 tainted
[root@master scheduler]# kubectl describe nodes master node1 node2 | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: name=xxy:NoExecute
Taints: <none>
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-52sht 1/1 Running 0 4m4s 10.244.104.8 node2 <none> <none>
web-7c56dcdb9b-792zc 1/1 Running 0 4m4s 10.244.104.10 node2 <none> <none>
web-7c56dcdb9b-8mvc4 1/1 Running 0 4m4s 10.244.104.7 node2 <none> <none>
web-7c56dcdb9b-fg4t7 1/1 Running 0 9m26s 10.244.104.6 node2 <none> <none>
web-7c56dcdb9b-vzkn6 1/1 Running 0 18s 10.244.104.11 node2 <none> <none>
web-7c56dcdb9b-zw6ft 1/1 Running 0 4m4s 10.244.104.9 node2 <none> <none>
设置node1污点为NoExecute后,已经在node1上运行的pod被驱逐,重新调度到了其他节点
#删除污点
[root@master scheduler]# kubectl taint node node1 name-
node/node1 untainted
[root@master scheduler]# kubectl describe nodes master node1 node2 | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: <none>
Taints: <none>
9.6.2 tolerations(污点容忍)
tolerations中定义的key、value、effect,要与node上设置的taint保持一致:
- 如果 operator 是 Equal ,则key与value之间的关系必须相等。
- 如果 operator 是 Exists ,value可以省略
- 如果不指定operator属性,则默认值为Equal。
还有两个特殊值:
- 当不指定key,再配合Exists 就能匹配所有的key与value ,可以容忍所有污点。
- 当不指定effect ,则匹配所有的effect
9.6.3 污点容忍示例:
#设定节点污点
[root@master scheduler]# kubectl taint node node1 nodetype=badnode:PreferNoSchedule
node/node1 tainted
[root@master scheduler]# kubectl taint node node2 nodetype=badnode:NoSchedule
node/node2 tainted
[root@master scheduler]# kubectl describe nodes node1 node2 | grep Taints
Taints: nodetype=badnode:PreferNoSchedule
Taints: nodetype=badnode:NoSchedule
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web
spec:
replicas: 2
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- image: nginx
name: nginx
tolerations:
- operator: Exists #容忍所有污点
tolerations: #容忍effect为PreferNoSchedule的污点
- operator: Exists
effect: PreferNoSchedule
tolerations: #容忍指定kv的NoSchedule污点
- key: nodetype
value: badnode
effect: NoSchedule
注意:三种容忍方式每次测试写一个即可
测试:
1.容忍所有污点
tolerations:
- operator: Exists #容忍所有污点
2.容忍effect为PreferNoSchedule的污点
tolerations: #容忍effect为PreferNoSchedule的污点
- operator: Exists
effect: PreferNoSchedule
3.容忍指定kv的NoSchedule污点
tolerations: #容忍指定kv的NoSchedule污点
- key: nodetype
value: badnode
effect: NoSchedule
十 kubernetes的认证和授权
10.1 kubernetes API 访问控制
Authentication(认证)
- 认证方式现共有8种,可以启用一种或多种认证方式,只要有一种认证方式通过,就不再进行其它方式的认证。通常启用X509 Client Certs和Service Account Tokens两种认证方式。
- Kubernetes集群有两类用户:由Kubernetes管理的Service Accounts(服务账户)和User Accounts(普通账户)。k8s中账号的概念不是我们理解的账号,它并不真的存在,它只是形式上存在。
Authorization(授权)
- 必须经过认证阶段,才到授权请求,根据所有授权策略匹配请求资源属性,决定允许或拒绝请求。授权方式现共有6种,AlwaysDeny、AlwaysAllow、ABAC、RBAC、Webhook、Node。默认集群强制开启RBAC。
Admission Control(准入控制)
- 用于拦截请求的一种方式,运行在认证、授权之后,是权限认证链上的最后一环,对请求API资源对象进行修改和校验。
10.1.1 UserAccount与ServiceAccount
- 用户账户是针对人而言的。 服务账户是针对运行在 pod 中的进程而言的。
- 用户账户是全局性的。 其名称在集群各 namespace 中都是全局唯一的,未来的用户资源不会做 namespace 隔离, 服务账户是 namespace 隔离的。
- 集群的用户账户可能会从企业数据库进行同步,其创建需要特殊权限,并且涉及到复杂的业务流程。 服务账户创建的目的是为了更轻量,允许集群用户为了具体的任务创建服务账户 ( 即权限最小化原则 )。
10.1.1.1 ServiceAccount
服务账户控制器(Service account controller)
- 服务账户管理器管理各命名空间下的服务账户
- 每个活跃的命名空间下存在一个名为 “default” 的服务账户
服务账户准入控制器(Service account admission controller)
- 默认服务账户分配:当 Pod 未显式指定 serviceAccountName 时,自动为其分配当前命名空间中的 default 服务账户。
- 服务账户验证:检查 Pod 指定的服务账户是否存在于当前命名空间中,若不存在则拒绝 Pod 创建请求,防止无效配置。
- 镜像拉取密钥继承:当 Pod 未配置 imagePullSecrets 时,自动继承其关联服务账户中定义的 imagePullSecrets,简化私有镜像仓库的访问配置。
- 自动挂载服务账户凭证:
- 为 Pod 添加一个特殊的 Volume,包含访问 API Server 所需的 token
- 将该 Volume 自动挂载到 Pod 中所有容器的 /var/run/secrets/kubernetes.io/serviceaccount 路径
- 挂载内容包括 token、CA 证书和 namespace 文件
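如果不希望自动挂载这些凭证,可以在 Pod(或 ServiceAccount)中关闭自动挂载,下面是一个最小示意片段(automountServiceAccountToken 为 Kubernetes 原生字段,Pod 名称与镜像仅为示例):
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  automountServiceAccountToken: false
  containers:
  - name: demo
    image: nginx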
10.1.1.2 ServiceAccount示例:
建立名字为timinglee的ServiceAccount
[root@k8s-master ~]# kubectl create sa timinglee
serviceaccount/timinglee created
[root@k8s-master ~]# kubectl describe sa timinglee
Name: timinglee
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
建立secrets
[root@k8s-master ~]# kubectl create secret docker-registry docker-login --docker-username admin --docker-password lee --docker-server reg.timinglee.org --docker-email lee@timinglee.org
secret/docker-login created
[root@k8s-master ~]# kubectl describe secrets docker-login
Name: docker-login
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 119 bytes
将secrets注入到sa中
[root@k8s-master ~]# kubectl edit sa timinglee
apiVersion: v1
imagePullSecrets:
- name: docker-login
kind: ServiceAccount
metadata:
creationTimestamp: "2024-09-08T15:44:04Z"
name: timinglee
namespace: default
resourceVersion: "262259"
uid: 7645a831-9ad1-4ae8-a8a1-aca7b267ea2d
[root@k8s-master ~]# kubectl describe sa timinglee
Name: timinglee
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: docker-login
Mountable secrets: <none>
Tokens: <none>
Events: <none>
建立私有仓库项目,并利用pod访问私有仓库中的镜像
[root@k8s-master auth]# vim example1.yml
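#原文此处未给出 example1.yml 的内容,按后文描述推测,初始版本大致如下(尚未绑定 sa,仅作示意):
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - image: reg.timinglee.org/lee/nginx:latest
    name: testpod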
[root@k8s-master auth]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master auth]# kubectl describe pod testpod
Warning Failed 5s kubelet Failed to pull image "reg.timinglee.org/lee/nginx:latest": Error response from daemon: unauthorized: unauthorized to access repository: lee/nginx, action: pull: unauthorized to access repository: lee/nginx, action: pull
Warning Failed 5s kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 4s) kubelet Back-off pulling image "reg.timinglee.org/lee/nginx:latest"
Warning Failed 3s (x2 over 4s) kubelet Error: ImagePullBackOff
Warning:
在创建pod时镜像下载会受阻,因为从docker私有仓库下载镜像需要认证
pod绑定sa
[root@k8s-master auth]# vim example1.yml
apiVersion: v1
kind: Pod
metadata:
name: testpod
spec:
serviceAccountName: timinglee
containers:
- image: reg.timinglee.org/lee/nginx:latest
name: testpod
[root@k8s-master auth]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master auth]# kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 2s
10.2 认证(在k8s中建立认证用户)
10.2.1 创建UserAccount
[root@master kubernetes]# cd /etc/kubernetes/pki #Kubernetes 集群的公钥基础设施目录
[root@master pki]# openssl genrsa -out timinglee.key 2048 #生成私钥
[root@master pki]# openssl req -new -key timinglee.key -out timinglee.csr -subj "/CN=timinglee" #使用前面生成的私钥创建证书签名请求(CSR)
[root@master pki]# openssl x509 -req -in timinglee.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out timinglee.crt -days 365 #使用 Kubernetes 集群的 CA 根证书对 CSR 进行签名,生成最终的证书
Certificate request self-signature ok
subject=CN = timinglee
[root@master pki]# openssl x509 -in timinglee.crt -text -noout #以文本形式查看证书的详细信息
Certificate:
Data:
Version: 1 (0x0)
Serial Number:
32:a5:fb:e1:5e:30:67:05:dc:af:1d:74:c6:7a:b2:aa:ce:af:be:85
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Aug 21 15:01:16 2025 GMT
Not After : Aug 21 15:01:16 2026 GMT
Subject: CN = timinglee
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a0:52:4d:95:29:33:61:a0:2a:66:a8:24:9c:a8:
3c:34:2d:d2:cc:6f:68:66:b9:f4:e4:88:63:77:f6:
11:89:bb:42:80:cb:f2:4e:f3:de:00:94:bd:90:79:
03:7e:26:cb:99:6f:06:28:27:58:17:27:c0:01:42:
6c:41:57:c3:f2:90:7e:1a:d6:26:32:4c:94:00:80:
d2:8c:ce:42:79:6e:a1:97:48:a6:87:0a:18:7a:e5:
35:6c:9f:84:0c:51:58:a2:57:65:2d:3a:0b:28:18:
d4:76:d3:6d:e3:14:1f:a7:41:f9:ac:95:c0:20:de:
61:67:ba:e4:33:4a:c4:19:19:6c:47:14:8c:87:b5:
d2:67:22:80:06:6c:98:90:5c:ab:77:9e:30:9b:7d:
31:62:cc:fb:e6:a1:8c:2c:71:6e:74:a8:8b:13:55:
d3:28:1b:0d:d7:4b:51:94:4a:7f:36:6d:c5:62:03:
06:8d:32:90:92:f8:bd:80:57:6e:bf:8a:52:f6:af:
09:9b:a0:8b:c5:8a:05:b8:53:f5:23:9c:b9:1e:64:
82:72:ba:7c:90:8e:05:9e:d0:c4:51:b1:f4:37:86:
97:8b:a8:b7:b1:64:05:0f:e5:e2:a6:dc:90:03:80:
4f:4b:c9:9c:c5:e0:1e:c4:e4:c1:b4:a7:9c:7a:7c:
87:09
Exponent: 65537 (0x10001)
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
c4:52:7f:48:36:21:6d:c5:eb:b6:38:98:f2:0e:b1:ac:03:14:
ef:99:f7:c1:74:34:30:56:20:31:3f:66:e2:59:ed:30:79:f4:
fb:67:45:5d:15:b9:1e:13:28:73:8f:1f:f6:8d:58:6a:94:26:
24:85:aa:2b:01:cb:b4:96:28:12:f3:42:97:70:95:f9:e3:fb:
32:79:61:8a:c0:e6:b7:94:97:9c:9c:ea:73:57:88:74:db:7e:
ee:cd:5d:54:46:b2:e2:35:fa:ee:3b:a8:ee:d8:24:fe:87:5e:
36:24:e4:f3:5f:48:08:f9:b0:f1:82:8b:40:74:b2:03:3f:b7:
79:2e:1c:60:fb:18:f9:97:5f:8d:31:78:ff:4f:5d:d6:44:a6:
ff:af:96:e4:c6:b8:52:2f:82:e5:1b:02:f1:5a:ff:7a:15:63:
80:f2:08:ac:89:d1:72:c6:35:c1:a7:c0:00:7a:9f:2e:06:58:
89:ef:64:aa:58:e4:3b:fb:f1:85:7c:39:3b:c4:3d:a1:36:d3:
dd:2c:51:58:87:69:64:89:0d:e3:ea:1f:36:97:e4:92:63:ec:
08:2a:b7:e0:86:14:f2:34:9b:4f:ce:c7:52:7b:dd:b7:0a:2c:
a4:09:29:88:c0:f6:40:7e:10:35:19:66:7f:78:1d:6e:ee:9b:
9b:94:31:bc
补充:
当你用 timinglee 这个用户通过 kubectl 访问集群时,私钥会隐性参与身份认证:
- kubectl 会用你的私钥,对 “访问 API 的请求内容”(如 “获取 Pod 列表”)进行数字签名。
- Kubernetes API Server 收到请求后,会从你提供的证书中提取公钥,验证这个签名:
- 如果签名验证通过 → 证明 “这个请求确实是 timinglee 发送的”(因为只有你的私钥能生成这个签名);
- 如果验证失败 → 说明请求可能被伪造,直接拒绝。
#建立k8s中的用户
[root@master pki]# kubectl config set-credentials timinglee --client-certificate /etc/kubernetes/pki/timinglee.crt --client-key /etc/kubernetes/pki/timinglee.key --embed-certs=true
User "timinglee" set.
#--embed-certs=true用于把用户配置保存到配置文件中,持久化数据,否则重启后用户就不存在了
[root@master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.121.100:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
- name: timinglee
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
#为用户创建集群的安全上下文
[root@master pki]# kubectl config set-context timinglee@kubernetes --cluster kubernetes --user timinglee
Context "timinglee@kubernetes" created.
#切换用户,用户在集群中只有用户身份没有授权
[root@master pki]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master pki]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "timinglee" cannot list resource "pods" in API group "" in the namespace "default"
#切换回集群管理
[root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
#如果需要删除用户
[root@master pki]# kubectl config delete-user timinglee
deleted user timinglee from /etc/kubernetes/admin.conf
10.2.2 RBAC(Role Based Access Control)
10.2.2.1 基于角色访问控制授权:
允许管理员通过Kubernetes API动态配置授权策略。RBAC就是用户通过角色与权限进行关联。
RBAC只有授权,没有拒绝授权,所以只需要定义允许该用户做什么即可
RBAC的三个基本概念
- Subject:被作用者,它表示k8s中的三类主体:user、group、serviceAccount
- Role:角色,它其实是一组规则,定义了一组对 Kubernetes API 对象的操作权限。
- RoleBinding:定义了“被作用者”和“角色”的绑定关系
RBAC包括四种类型:Role、ClusterRole、RoleBinding、ClusterRoleBinding
Role 和 ClusterRole
- Role是一系列权限的集合,Role只能授予单个namespace中资源的访问权限。
- ClusterRole 跟 Role 类似,但是可以在集群中全局使用。
- Kubernetes 还提供了四个预先定义好的 ClusterRole 供用户直接使用:cluster-admin、admin、edit、view
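#补充:预定义的 ClusterRole 可以直接绑定使用,例如把只读的 view 角色授予某个用户(用户名仅为示例):
$ kubectl create clusterrolebinding demo-view --clusterrole view --user demo-user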
10.2.2.2 role授权实施
#生成role的yaml文件
[root@master role]# kubectl create role myrole --dry-run=client --verb=get --resource pods -o yaml > myrole.yml
#修改文件
[root@master role]# cat myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: myrole
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- watch
- list
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- services
verbs:
- get
- watch
- list
- create
- update
- patch
- delete
[root@master role]# kubectl api-resources | less #可以查看资源所属组来填写 - apiGroups: 字段
#创建role
[root@master role]# kubectl apply -f myrole.yml
role.rbac.authorization.k8s.io/myrole created
[root@master role]# kubectl get roles.rbac.authorization.k8s.io
NAME CREATED AT
myrole 2025-08-21T15:34:41Z
[root@master role]# kubectl describe roles.rbac.authorization.k8s.io myrole
Name: myrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list create update patch delete]
services [] [] [get watch list create update patch delete]
#建立角色绑定
[root@master role]# kubectl create rolebinding timinglee --role myrole --namespace default --user timinglee --dry-run=client -o yaml > rolebinding-myrole.yml
[root@master role]# cat rolebinding-myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: timinglee
namespace: default ##角色绑定必须指定namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: myrole
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: timinglee
[root@master role]# kubectl apply -f rolebinding-myrole.yml
rolebinding.rbac.authorization.k8s.io/timinglee created
[root@master role]# kubectl get rolebindings.rbac.authorization.k8s.io
NAME ROLE AGE
timinglee Role/myrole 22s
#切换用户测试授权
[root@master role]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master role]# kubectl get pods
No resources found in default namespace.
[root@master role]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
testpod ClusterIP 10.111.131.206 <none> 80/TCP 6h12m
[root@master role]# kubectl get namespaces #未对namespace资源授权
Error from server (Forbidden): namespaces is forbidden: User "timinglee" cannot list resource "namespaces" in API group "" at the cluster scope
#切换回管理员
[root@master role]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
10.2.2.3 clusterrole授权实施
#建立集群角色
[root@master role]# cat myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: myclusterrole
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
[root@master role]# kubectl apply -f myclusterrole.yml
clusterrole.rbac.authorization.k8s.io/myclusterrole created
[root@master role]# kubectl describe clusterrole myclusterrole
Name: myclusterrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get list watch create update patch delete]
deployments.apps [] [] [get list watch create update patch delete]
#建立集群角色绑定
[root@master role]# kubectl create clusterrolebinding clusterrolebind-myclusterrole --clusterrole myclusterrole --user timinglee --dry-run=client -o yaml > clusterrolebind-myclusterrole.yml
[root@master role]# cat clusterrolebind-myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: clusterrolebind-myclusterrole
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: timinglee
[root@master role]# kubectl apply -f clusterrolebind-myclusterrole.yml
clusterrolebinding.rbac.authorization.k8s.io/clusterrolebind-myclusterrole created
[root@master role]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io clusterrolebind-myclusterrole
Name: clusterrolebind-myclusterrole
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: myclusterrole
Subjects:
Kind Name Namespace
---- ---- ---------
User timinglee
#测试:
[root@master role]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master role]# kubectl get pods -A #可以访问所有namespace的资源
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6849cb478c-vqcj7 1/1 Running 0 4h8m
kube-system calico-node-54jw7 1/1 Running 0 4h8m
kube-system calico-node-knsq5 1/1 Running 0 4h8m
kube-system calico-node-nzmlx 1/1 Running 0 4h8m
kube-system calico-typha-fff9df85f-42n8s 1/1 Running 0 4h8m
kube-system coredns-7c677d6c78-7n96p 1/1 Running 1 (24h ago) 25h
kube-system coredns-7c677d6c78-jp6c5 1/1 Running 1 (24h ago) 25h
kube-system etcd-master 1/1 Running 1 (24h ago) 25h
kube-system kube-apiserver-master 1/1 Running 1 (24h ago) 25h
kube-system kube-controller-manager-master 1/1 Running 2 (72m ago) 72m
kube-system kube-proxy-qrfjs 1/1 Running 1 (24h ago) 25h
kube-system kube-proxy-rjzl9 1/1 Running 1 (24h ago) 25h
kube-system kube-proxy-tfwhz 1/1 Running 1 (24h ago) 25h
kube-system kube-scheduler-master 1/1 Running 1 (24h ago) 25h
[root@master role]# kubectl get deployments.apps -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system calico-kube-controllers 1/1 1 1 4h8m
kube-system calico-typha 1/1 1 1 4h8m
kube-system coredns 2/2 2 2 25h
[root@master role]# kubectl get svc -A #clusterrole未对service资源授权
Error from server (Forbidden): services is forbidden: User "timinglee" cannot list resource "services" in API group "" at the cluster scope
10.2.2.4 服务账户的自动化
服务账户准入控制器(Service account admission controller)
- 如果该 pod 没有 ServiceAccount 设置,将其 ServiceAccount 设为 default。
- 保证 pod 所关联的 ServiceAccount 存在,否则拒绝该 pod。
- 如果 pod 不包含 ImagePullSecrets 设置,那么将 ServiceAccount 中的 ImagePullSecrets 信息添加到 pod 中。
- 将一个包含用于 API 访问的 token 的 volume 添加到 pod 中。
- 将挂载于 /var/run/secrets/kubernetes.io/serviceaccount 的 volumeSource 添加到 pod 下的每个容器中。
服务账户控制器(Service account controller)
- 服务账户管理器管理各命名空间下的服务账户,并且保证每个活跃的命名空间下存在一个名为 “default” 的服务账户