KubeSphere 3.3.0 Offline Installation Tutorial
2022-07-21 13:00:00 【KubeSphere】
Author: Lao Z, operations architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd., and cloud-native enthusiast. Currently focused on cloud-native operations; his technology stack covers Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.
KubeKey is an open-source, lightweight tool for deploying K8s clusters.

It provides a flexible, fast, and convenient way to install Kubernetes/K3s alone, or to install K8s/K3s together with KubeSphere and other cloud-native add-ons. It is also an effective tool for scaling and upgrading clusters.

KubeKey v2.1.0 introduced the concepts of the manifest and the artifact, which give users a solution for deploying K8s clusters offline.

A manifest is a text file that describes the current K8s cluster information and defines what the artifact should contain.

In the past, users had to prepare the deployment tools, image tar packages, and other related binaries themselves, and the K8s version and images to deploy differed from user to user. Now with KubeKey, a user only needs a manifest file to define what the offline cluster environment requires, then export an artifact from that manifest to complete the preparation. For the offline deployment itself, only KubeKey and the artifact are needed to quickly and simply deploy an image registry and a K8s cluster in the target environment.
KubeKey can generate a manifest file in two ways:

- Use an existing running cluster as the source to generate the manifest file. This is also the officially recommended way; see the offline deployment documentation on the KubeSphere website.
- Write the manifest file by hand based on the template file.
The advantage of the first method is that it reproduces the running environment 1:1, but it requires a cluster to be deployed in advance, which is not flexible, and not everyone has such an environment; for reference, its command is sketched below.
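As a reference only, the first method boils down to a single command run on a machine that can reach the source cluster. A minimal sketch, assuming KubeKey v2.2.x and a valid kubeconfig path:

```bash
# Generate a manifest from an existing running cluster
# (the kubeconfig path is an assumption; adjust to your environment)
$ ./kk create manifest --kubeconfig ~/.kube/config
```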
Therefore, this article follows the official offline documentation but takes the hand-written manifest route to install and deploy in an offline environment.
Knowledge points of this article

- Level: beginner
- Understand the concepts of the manifest and the artifact
- Master how to write a manifest file
- Build an artifact from a manifest
- Deploy KubeSphere and Kubernetes offline
Demo server configuration
| Host name | IP | CPU (cores) | Memory (GB) | System disk (GB) | Data disk (GB) | Purpose |
|---|---|---|---|---|---|---|
| zdevops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible ops control node |
| ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| es-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200 | ElasticSearch |
| harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
| Total | 8 hosts | 22 | 84 | 320 | 2200 | |
Software versions used in the demo environment

- Operating system: CentOS-7.9-x86_64
- KubeSphere: 3.3.0
- Kubernetes: 1.24.1
- KubeKey: v2.2.1
- Ansible: 2.8.20
- Harbor: 2.5.1
Build the offline deployment resources

Download KubeKey
```bash
# Run on the zdevops-master ops server
# Use the China zone download (for when GitHub access is limited)
$ export KKZONE=cn

# Download KubeKey
$ mkdir /data/kubekey
$ cd /data/kubekey/
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
```
Obtain the manifest template

Reference: https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md

There are two reference examples: a simple version and a full version. The simple version is sufficient here.
Obtain the ks-installer image list

```bash
$ wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
```
The images in the list are hosted in public repositories on Docker Hub by default; it is recommended to change the prefix to registry.cn-beijing.aliyuncs.com/kubesphereio.

The complete modified image list is shown in the manifest file below.

Note that of the example-images, only busybox is included; the others are not used in this article.
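If you prefer to rewrite the downloaded list mechanically rather than by hand, here is a minimal sketch; it assumes one image per line with `##`-prefixed section comments, so verify the result before pasting it into the manifest:

```bash
# Map every image onto the registry.cn-beijing.aliyuncs.com/kubesphereio namespace,
# keeping only the final "name:tag" component of each entry
$ grep -v '^##' images-list.txt \
    | sed 's#^.*/#registry.cn-beijing.aliyuncs.com/kubesphereio/#' \
    > images-list-aliyun.txt
```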
Get the operating system dependency package

```bash
$ wget https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso
```

Put the ISO file in the /data/kubekey directory on the server that builds the offline images.
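A quick sanity check that the ISO landed where the manifest expects it (plain shell, nothing KubeKey-specific):

```bash
# The path must match repository.iso.localPath in the manifest below
$ ls -lh /data/kubekey/centos7-rpms-amd64.iso
```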
Generate the manifest file

Based on the files and information above, generate the final manifest.yaml.

Name it ks-v3.3.0-manifest.yaml.
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: "/data/kubekey/centos7-rpms-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.24.1
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
  registry:
    auths: {}
```
Notes on the manifest modifications

- Enable the harbor and docker-compose configuration items; they are used later when building a Harbor registry with KubeKey and pushing images to it.
- In a manifest created by default, the image list is pulled from docker.io; replace the prefix with registry.cn-beijing.aliyuncs.com/kubesphereio.
- If the exported artifact should include operating system dependency files (such as conntrack, chrony, etc.), either configure the download URL of the dependency ISO in .repository.iso.url of the operatingSystems element, or fill .repository.iso.localPath with the local path of a pre-downloaded ISO package and leave the url field empty; both variants are sketched after this list.
- The ISO files can be downloaded from https://github.com/kubesphere/kubekey/releases/tag/v2.2.1.
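For clarity, the two mutually exclusive ways to provide the dependency ISO look like this (a sketch; set exactly one of localPath/url and leave the other empty):

```yaml
# Variant A: use a pre-downloaded ISO from the local filesystem (used in this article)
repository:
  iso:
    localPath: "/data/kubekey/centos7-rpms-amd64.iso"
    url:

# Variant B: let KubeKey download the ISO while exporting the artifact
repository:
  iso:
    localPath:
    url: "https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso"
```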
Export the artifact

```bash
$ export KKZONE=cn
$ ./kk artifact export -m ks-v3.3.0-manifest.yaml -o kubesphere-v3.3.0-artifact.tar.gz
```
About the artifact

- An artifact is a tgz package that contains the image tar packages and related binaries exported from a specified manifest file.
- An artifact can be passed to the KubeKey commands that initialize the image registry, create a cluster, add nodes, or upgrade a cluster; KubeKey automatically unpacks the artifact and uses the unpacked files while executing the command.
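As an illustration, the artifact is passed with the -a flag; a sketch assuming KubeKey v2.2.x and the file names used in this article:

```bash
# Initialize the image registry from the artifact
$ ./kk init registry -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz

# Create a cluster from the artifact (the route used later in this article)
$ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz

# Add nodes or upgrade a cluster from the artifact
$ ./kk add nodes -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
$ ./kk upgrade -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
```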
Package KubeKey

```bash
$ tar zcvf kubekey-v2.2.1.tar.gz kk kubekey-v2.2.1-linux-amd64.tar.gz
```
K8s server initialization

This section covers the initial configuration of the K8s servers in the offline environment.

Configure the Ansible hosts file
```ini
[k8s]
ks-k8s-master-0 ansible_ssh_host=192.168.9.91 host_name=ks-k8s-master-0
ks-k8s-master-1 ansible_ssh_host=192.168.9.92 host_name=ks-k8s-master-1
ks-k8s-master-2 ansible_ssh_host=192.168.9.93 host_name=ks-k8s-master-2

[es]
es-node-0 ansible_ssh_host=192.168.9.95 host_name=es-node-0
es-node-1 ansible_ssh_host=192.168.9.96 host_name=es-node-1
es-node-2 ansible_ssh_host=192.168.9.97 host_name=es-node-2
harbor ansible_ssh_host=192.168.9.89 host_name=harbor

[servers:children]
k8s
es

[servers:vars]
ans[email protected]ywwpTj4bJtYwzpwCqD
```
Check server connectivity
```bash
# Use ansible to check server connectivity
$ cd /data/ansible/ansible-zdevops/inventories/dev/
$ source /opt/ansible2.8/bin/activate
$ ansible -m ping all
```
Initialize server configuration
```bash
# Use ansible-playbook to initialize the server configuration
$ ansible-playbook ../../playbooks/init-base.yaml -l k8s
```
Mount the data disks
- Mount the first data disk
```bash
# Use ansible-playbook to initialize the host data disk
# Note: -e data_disk_path="/data" specifies the mount directory, used to store Docker container data
$ ansible-playbook ../../playbooks/init-disk.yaml -e data_disk_path="/data" -l k8s
```
- Verify the mounts
```bash
# Use ansible to verify that the data disk has been formatted and mounted
$ ansible harbor -m shell -a 'df -h'

# Use ansible to verify that the data disk is configured for automatic mounting
$ ansible harbor -m shell -a 'tail -1 /etc/fstab'
```
Install the K8s system dependency packages

```bash
# Use ansible-playbook to install the Kubernetes system dependency packages
# The playbook includes a switch for GlusterFS storage, enabled by default; set the parameter to false if it is not needed
$ ansible-playbook ../../playbooks/deploy-kubesphere.yaml -e k8s_storage_glusterfs=false -l k8s
```
Install the cluster offline
Transfer offline deployment resources to the deployment node
Transfer the following offline deployment resources to the /data/kubekey directory of the deployment node (usually the first master node):
- KubeKey: kubekey-v2.2.1.tar.gz
- Artifact: kubesphere-v3.3.0-artifact.tar.gz
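A minimal transfer sketch, assuming the resources sit in /data/kubekey on the ops server and the first master node is reachable over SSH:

```bash
# Create the target directory and copy the offline resources to the first master node
$ ssh root@192.168.9.91 'mkdir -p /data/kubekey'
$ scp /data/kubekey/kubekey-v2.2.1.tar.gz root@192.168.9.91:/data/kubekey/
$ scp /data/kubekey/kubesphere-v3.3.0-artifact.tar.gz root@192.168.9.91:/data/kubekey/
```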
Then run the following to unpack KubeKey:

```bash
$ cd /data/kubekey
$ tar xvf kubekey-v2.2.1.tar.gz
```
Create the offline cluster configuration file

- Create the configuration file

```bash
$ ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.24.1 -f config-sample.yaml
```
- Modify the configuration file

```bash
$ vim config-sample.yaml
```
What to modify:

- Adjust the node information to match the actual offline environment.
- Add the registry information for your environment.
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-k8s-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "[email protected]"}
  - {name: ks-k8s-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, password: "[email protected]"}
  - {name: ks-k8s-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, password: "[email protected]"}
  roleGroups:
    etcd:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    control-plane:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    worker:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.zdevops.com.cn
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.1
    clusterName: zdevops.com.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: "harbor"
    auths:
      "registry.zdevops.com.cn":
        username: admin
        password: Harbor12345
    privateRegistry: "registry.zdevops.com.cn"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

# The remaining content is not modified and is omitted here
```
Create projects in Harbor

This article uses a pre-deployed Harbor to store images; for the deployment process, refer to my earlier Harbor installation notes from the "Playing with K8s based on KubeSphere" series.

You can also deploy Harbor automatically with the kk tool; see the official offline deployment documentation.
- Download the project creation script template

```bash
$ curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
```
- Modify the script according to your environment

```bash
#!/usr/bin/env bash

# Harbor registry address
url="https://registry.zdevops.com.cn"

# Harbor user
user="admin"

# Harbor user password
passwd="Harbor12345"

# Project names to create. Normally only kubesphereio is needed;
# two extra projects are created to keep the variable extensible.
harbor_projects=(library kubesphereio kubesphere)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" \
        "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}"
done
```
- Execute the script to create the projects

```bash
$ sh create_project_harbor.sh
```
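To confirm the projects exist, Harbor's v2.0 REST API can be queried directly; a sketch reusing the script's url and credentials:

```bash
# List the projects and show their names
$ curl -s -u "admin:Harbor12345" "https://registry.zdevops.com.cn/api/v2.0/projects" | grep '"name"'
```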
Push the offline images to the Harbor registry

Push the prepared offline images to the Harbor registry. This step is optional, because the images are pushed again when the cluster is created, but it is recommended in order to improve the deployment success rate.

```bash
$ ./kk artifact image push -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
```
Create the cluster and install the OS dependencies

```bash
$ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz --with-packages
```
Parameter description
- config-sample.yaml: the cluster configuration file for the offline environment.
- kubesphere-v3.3.0-artifact.tar.gz: the artifact tar package containing the images.
- --with-packages: specify this option if the operating system dependencies need to be installed.
View the cluster status
```bash
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```
When the installation completes successfully, you will see output like the following:

```
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.9.91:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-06-30 14:30:19
#####################################################
```
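Beyond the installer log, a quick sanity check with standard kubectl on the deployment node:

```bash
# All nodes should report Ready
$ kubectl get nodes -o wide

# All pods should end up Running or Completed
$ kubectl get pods -A
```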
Log in to the web console

Open http://{IP}:30880 and log in with the default account and password admin/P@88w0rd to access the KubeSphere web console and carry out the subsequent configuration.
Summary

Thank you for reading this far. By now you should have picked up the following skills:

- Learned the concepts of the manifest and the artifact
- Understood where to obtain manifest and image resources
- Written a manifest file by hand
- Built an artifact from a manifest
- Deployed KubeSphere and Kubernetes offline
- Automatically created projects in the Harbor registry
- Picked up some Ansible usage tips
With that, we have completed a minimal deployment of KubeSphere and a K8s cluster. But this is just the beginning; there are many more configuration and usage tips to come. Stay tuned...