I. Installation Overview

This guide installs Kubernetes 1.12 from offline binary packages. It sets up the basic Kubernetes components only, without certificates, so it is suitable for learning how to build a Kubernetes environment, not for production use.

II. Resources

Kubernetes 1.12 packages:
Download: https://pan.baidu.com/s/15dSgsOVwmmk9gD6tbytTuQ

File name                                Required?
kubernetes-node-linux-amd64.tar.gz       required
kubernetes-server-linux-amd64.tar.gz     required
kubernetes-client-linux-amd64.tar.gz     optional
kubernetes-client-windows-amd64.tar.gz   optional
kubernetes.tar.gz                        optional (for reference)

Servers: four CentOS 7 machines

Server name   IP address
master        192.168.0.5
node1         192.168.0.6
node2         192.168.0.7
node3         192.168.0.8
III. Installation Steps

1. Install etcd (distributed key-value store)
yum install -y etcd.x86_64
Edit the /lib/systemd/system/etcd.service unit file and add the etcd cluster startup parameters to the ExecStart command, as shown below:
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Edit the /etc/etcd/etcd.conf configuration file (this is the master's copy; on the other nodes substitute that machine's own name and IP):
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# cluster peer URL; use this machine's IP
ETCD_LISTEN_PEER_URLS="http://192.168.0.5:2380"
# URLs for external clients; use this machine's IP
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.5:2379,http://0.0.0.0:2379"
# this etcd node's name
ETCD_NAME="master"
# peer URL advertised to the other cluster members; use this machine's IP
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.5:2380"
# client URL advertised to external clients; use this machine's IP
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.5:2379"
# initial cluster member list: every node's name with its IP
ETCD_INITIAL_CLUSTER="master=http://192.168.0.5:2380,node1=http://192.168.0.6:2380,node2=http://192.168.0.7:2380,node3=http://192.168.0.8:2380"
# cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
# initial cluster state; "new" creates a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
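Each of the four machines needs its own etcd.conf, differing only in ETCD_NAME and the local IP, while ETCD_INITIAL_CLUSTER must be identical everywhere. A small sketch that derives the shared member list from the server table in part II (the names and IPs are the ones listed there):

```shell
# Build the shared ETCD_INITIAL_CLUSTER value from the node table.
NODES="master=192.168.0.5 node1=192.168.0.6 node2=192.168.0.7 node3=192.168.0.8"
CLUSTER=""
for n in $NODES; do
  name=${n%%=*}     # part before '=' -> node name
  ip=${n#*=}        # part after '='  -> node IP
  CLUSTER="${CLUSTER:+$CLUSTER,}${name}=http://${ip}:2380"
done
echo "ETCD_INITIAL_CLUSTER=\"$CLUSTER\""
```

Paste the printed line into every node's /etc/etcd/etcd.conf unchanged.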
Enable and start the service at boot:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
Verify the installation.
List the cluster members:
etcdctl member list

Check the cluster health on every node:
etcdctl cluster-health


Problem: the cluster check shows only the local node and no information about the other members.

The cluster settings do not take effect if etcd had already been started once before they were added, because etcd has already initialized its database. Delete all files under /var/lib/etcd/default.etcd/member/ and restart the service on every node. The first restarts may report errors; ignore them until all nodes have been restarted, then check each node's service and start any that did not come up.

2. Install the master
Copy kubernetes-server-linux-amd64.tar.gz to the master server, unpack it, and copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl into /usr/bin/.
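A sketch of that unpack-and-copy step. To stay runnable anywhere, it stages placeholder files in scratch directories; on the real master you would untar the actual archive and copy into /usr/bin/ instead:

```shell
# Stand-in for the unpacked archive layout (the real tarball unpacks to
# kubernetes/server/bin/). On the master, SRC is that directory and DST is /usr/bin.
SRC=$(mktemp -d)/kubernetes/server/bin
DST=$(mktemp -d)
mkdir -p "$SRC"
for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
  touch "$SRC/$bin"                    # placeholder for the real binary
  install -m 0755 "$SRC/$bin" "$DST/"  # copy with executable permissions
done
ls "$DST"
```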

Create the service unit files.
Create kube-apiserver.service:
[sysadmin@master ~]$ cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Create kube-controller-manager.service:
[sysadmin@master ~]$ cat /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Create kube-scheduler.service:
[sysadmin@master ~]$ cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Create the Kubernetes master component configuration files.
Create apiserver:
[sysadmin@master kubernetes]$ cat apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernets/log --v=2"
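One thing worth checking in this file: --service-cluster-ip-range (169.169.0.0/16 here) must not overlap the pod network that flannel will manage in step 4 (172.20.0.0/16). For two /16 ranges, comparing the first two octets is enough; a minimal sketch:

```shell
# Compare the /16 prefixes of the service CIDR and the flannel pod CIDR.
svc_cidr="169.169.0.0/16"   # from --service-cluster-ip-range above
pod_cidr="172.20.0.0/16"    # the network written to etcd for flannel in step 4
prefix16() { echo "${1%%/*}" | cut -d. -f1-2; }
if [ "$(prefix16 "$svc_cidr")" = "$(prefix16 "$pod_cidr")" ]; then
  echo "overlap: pick a different service range"
else
  echo "no overlap"
fi
```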
Create controller-manager:
[sysadmin@master kubernetes]$ cat controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=true --log-dir=/var/log/kubernets/log --v=2"
Create scheduler:
[sysadmin@master kubernetes]$ cat scheduler
KUBE_SCHEDULER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=false --log-dir=/var/log/kubernets/log --v=2"
Enable and start the services at boot:
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
Verify that the master was installed successfully:
[sysadmin@master ~]$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
3. Install the nodes
Copy kubernetes-node-linux-amd64.tar.gz to every node server, unpack it, and copy kubectl, kubelet, and kube-proxy into /usr/bin/.
Create kubelet.service:
[sysadmin@ucentosk8snode1 ~]$ cat /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
# create this directory by hand
WorkingDirectory=-/var/kubeletwork
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Create kube-proxy.service:
[sysadmin@ucentosk8snode1 ~]$ cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service

[Service]
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Create kubelet.kubeconfig:
[sysadmin@ucentosk8snode1 ~]$ cat /etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.0.5:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
Create kubelet:
The configuration files for the other nodes are not repeated here; only the --hostname-override= and --address= values differ, so set them to each node's own IP address.
[sysadmin@ucentosk8snode1 ~]$ cat /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=192.168.0.6 --logtostderr=true --log-dir=/var/log/kubernets/log --v=2 --address=192.168.0.6 --port=10250 --fail-swap-on=false --pod-infra-container-image=zengshaoyong/pod-infrastructure"
Note the pod infrastructure image here: the Google k8s image registry cannot be reached directly from mainland China, so pull a pod base image manually and point --pod-infra-container-image at it (zengshaoyong/pod-infrastructure above); --fail-swap-on=false lets kubelet start on a host with swap enabled.
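Since only the two IP-valued flags change between the nodes, the three per-node files can be produced in one pass. A sketch using the node IPs from the server table in part II:

```shell
# Print the /etc/kubernetes/kubelet contents for a given node IP; only
# --hostname-override and --address vary between nodes.
gen_kubelet_args() {
  ip=$1
  echo "KUBELET_ARGS=\"--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=$ip --logtostderr=true --log-dir=/var/log/kubernets/log --v=2 --address=$ip --port=10250 --fail-swap-on=false --pod-infra-container-image=zengshaoyong/pod-infrastructure\""
}
for ip in 192.168.0.6 192.168.0.7 192.168.0.8; do
  echo "# node $ip:"
  gen_kubelet_args "$ip"   # on each node, this line goes into /etc/kubernetes/kubelet
done
```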

Create proxy:
[sysadmin@ucentosk8snode1 ~]$ cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.0.5:8080 --hostname-override=node1 --v=2 --logtostderr=true --log-dir=/var/log/kubernets/log"
Enable and start the services at boot:
systemctl daemon-reload
systemctl enable kubelet
systemctl enable kube-proxy
systemctl start kubelet
systemctl start kube-proxy
Check that the services started successfully:
systemctl status kubelet
systemctl status kube-proxy
Check the node status:
kubectl get nodes
4. Install the flannel network component
Install flannel on all machines (master and nodes):
yum -y install flannel
Edit the configuration file:
[sysadmin@ucentosk8snode1 ~]$ cat /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/etc/kubernetes/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
Initialize the IP address-range data.
Run the following to seed the network configuration in etcd:
etcdctl set /etc/kubernetes/network/config '{"Network": "172.20.0.0/16"}'

Next, create and start the flannel service, then restart docker, kube-apiserver, kube-controller-manager, and kube-scheduler on the master, and kubelet and kube-proxy on the nodes.
On the master, run:
systemctl daemon-reload
systemctl enable flanneld.service
systemctl start flanneld.service
systemctl restart docker.service
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On each node, run:
systemctl daemon-reload
systemctl enable flanneld.service
systemctl start flanneld.service
systemctl restart docker.service
systemctl restart kubelet.service
systemctl restart kube-proxy.service
Check that the network interfaces are correct.
Run ifconfig and verify that the docker0 and flannel0 interfaces are in the same IP segment:
[sysadmin@ucentosk8snode1 ~]$ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 169.168.93.1  netmask 255.255.255.0  broadcast 169.168.93.255
        inet6 fe80::42:59ff:fe3c:1080  prefixlen 64  scopeid 0x20<link>
        ether 02:42:59:3c:10:80  txqueuelen 0  (Ethernet)
        RX packets 104  bytes 9582 (9.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 191563  bytes 36020318 (34.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 169.168.93.0  netmask 255.255.0.0  destination 169.168.93.0
        inet6 fe80::f6df:f17:7410:cb99  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 136  bytes 18806 (18.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 171  bytes 15494 (15.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
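The "same segment" check can also be scripted: docker0's address must fall inside flannel0's /16 network, so comparing the first two octets is enough. A sketch using the two addresses from the output above (substitute the values ifconfig reports on your host):

```shell
# docker0 and flannel0 addresses as reported by ifconfig.
docker0_ip="169.168.93.1"
flannel0_ip="169.168.93.0"   # flannel0 carries a /16 netmask (255.255.0.0)
prefix16() { echo "$1" | cut -d. -f1-2; }
if [ "$(prefix16 "$docker0_ip")" = "$(prefix16 "$flannel0_ip")" ]; then
  echo "docker0 is inside flannel0's /16: OK"
else
  echo "different segments: fix /run/flannel/docker and restart docker"
fi
```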
If they are not in the same segment, edit the docker service unit file: have it load /run/flannel/docker as an EnvironmentFile and add the parameters shown below to ExecStart, then restart the docker service and check again whether docker0 and flannel0 are in the same segment:
[sysadmin@ucentosk8snode1 ~]$ cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPT_BIP $DOCKER_OPT_IPMASQ $DOCKER_OPT_MTU $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
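For reference, the /run/flannel/docker file that EnvironmentFile points at is generated by flannel's mk-docker-opts.sh helper and typically contains lines like the following (the subnet value here is illustrative; flannel substitutes whatever subnet it leased on each host):

```shell
# Illustrative contents of /run/flannel/docker; the 172.20.45.0/24 subnet is
# an assumed example, not a value from this cluster's actual lease.
DOCKER_OPT_BIP="--bip=172.20.45.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.20.45.1/24 --ip-masq=true --mtu=1472"
```

These are the variables the ExecStart line above expands, which is what keeps docker0 inside the flannel network.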