# Important: the open-source version has been removed from its repository
- The original documentation is gone; mirror docs: https://docs.kubesphere-carryon.top/zh/docs/v3.4/
 
# Troubleshooting methods
- kubectl get nodes: check node/cluster status
- journalctl -u kubelet -f: follow the kubelet logs to investigate cluster-level errors
- kubectl describe pod <pod-name> -n <namespace>: inspect pod events
- kubectl logs <pod-name> -n <namespace>: view pod logs
# KubeSphere 4
# KubeSphere 4.x installation
- Official docs: https://kubesphere.io/zh/docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/02-install-kubernetes-and-kubesphere/
- Note: do not run the installation inside a devtoolset-8 environment.
- Additional note: in most cases an application is independent of the gcc version, but applications that depend on newer library files can be incompatible, nvm for example. (Applications are either statically or dynamically linked; dynamically linked ones may be incompatible, statically linked ones are not. A quick check follows.)
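
A quick way to tell whether a given binary is dynamically or statically linked (and which shared libraries it expects) is ldd; the path below is only an example:
ldd /usr/local/bin/node || echo "statically linked or not a dynamic executable"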
 
# Installing OpenEBS
# Note: this approach requires a proxy for containerd, and configuring that proxy has a pitfall: do not point it at 127.0.0.1:7890, use <node-ip>:7890 instead (a sketch follows).
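
A minimal sketch of that proxy setup as a systemd drop-in for containerd, assuming a proxy listening on 192.168.10.7:7890 (adjust the node IP, port, and NO_PROXY list to your environment):
mkdir -p /etc/systemd/system/containerd.service.d
cat > /etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.10.7:7890"
Environment="HTTPS_PROXY=http://192.168.10.7:7890"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,.svc,.cluster.local"
EOF
systemctl daemon-reload && systemctl restart containerd
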
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
kubectl get pods -n openebs
kubectl get sc
- Note: set this StorageClass as the default StorageClass (a patch command follows).
- Also, a proxy must be available when it is used, because an openebs/linux-utils image is pulled; without it the PV cannot be created.
- Correction: the openebs/linux-utils image is pulled not just on first use but on every provisioning operation; without it the PV cannot be created.
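
One way to mark the OpenEBS local StorageClass as the default; the class name openebs-hostpath is an assumption, check kubectl get sc for the actual name:
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'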
 
# KubeSphere 4.x App Store setup
- For App Store issues, see:
- https://ask.kubesphere.com.cn/forum/d/4530/7
- https://ask.kubesphere.io/forum/d/23922-kubesphere-411-ying-yong-shang-dian-pei-zhi-fang-fa
 
 
# KubeSphere 4.x: cluster not running after a host reboot
- systemctl status containerd || systemctl start containerd — start the container runtime if it is not running
- systemctl status kubelet || systemctl start kubelet — start kubelet if it is not running
- Once both are running, the KubeSphere services become available again.
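
A small idempotent sketch that checks both services and starts whichever is not running:
for svc in containerd kubelet; do
  systemctl is-active --quiet "$svc" || systemctl start "$svc"
done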
 
# Fix for node1:20080 defaulting to HTTPS
- Modify the containerd configuration:
 
[plugins]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
          # the following entries are newly added
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."node1:20080"]
          endpoint = ["http://node1:20080"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."node1:20080".tls]
          insecure_skip_verify = true
- systemctl restart containerd && systemctl restart kubelet
- After the restart, image references of the form node1:20080/*** use HTTP by default instead of the HTTPS used before.
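
To verify, try pulling an image through the registry with crictl; the image path below is only illustrative:
crictl pull node1:20080/library/nginx:latest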
 
# Checking the health of KubeSphere services
- Because Kubernetes forwards traffic with iptables (Calico CNI), port health cannot be checked with netstat.
- Instead, run kubectl get services --all-namespaces and check whether the service on port 30880 is healthy to confirm console availability.
- kubectl get pods -n kubesphere-system plus kubectl logs <pod-name> -n kubesphere-system help with further troubleshooting.
- Reference: https://zhuanlan.zhihu.com/p/75933393
# Important third-party extensions for KubeSphere 4.x
- Service mesh: used by composed (self-built) applications; it depends on WhizardTelemetry Monitoring -> WhizardTelemetry Platform Service. A composed application is one whose Helm deployment files are generated step by step through configuration in the UI.
- DevOps: used for automated application deployment.
- App Store: used to publish your own applications as public apps that other workspaces can also use.
 
# External domain names cannot be resolved
- By default, CoreDNS forwards to the host's /etc/resolv.conf.
- That file may be regenerated by NetworkManager: even if it was edited before, it can be overwritten after a reboot (a sketch for stopping NetworkManager from managing it follows).
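
A minimal sketch for stopping NetworkManager from rewriting /etc/resolv.conf (assuming NetworkManager is what manages DNS on the node):
# add the following under the [main] section of /etc/NetworkManager/NetworkManager.conf
#   dns=none
systemctl restart NetworkManager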
 
# Diagnosis
kubectl get pods -n kube-system | grep coredns
kubectl logs <coredns-pod-name> -n kube-system
# The log shows the domains that fail to resolve
[ERROR] plugin/errors: 2 2031635366030414054.8432922077260765341. HINFO: read udp 10.244.154.189:45172->192.168.10.1:53: i/o timeout
[ERROR] plugin/errors: 2 2031635366030414054.8432922077260765341. HINFO: read udp 10.244.154.189:58516->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 2031635366030414054.8432922077260765341. HINFO: read udp 10.244.154.189:57743->192.168.10.1:53: i/o timeout
[ERROR] plugin/errors: 2 2031635366030414054.8432922077260765341. HINFO: read udp 10.244.154.189:37366->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 2031635366030414054.8432922077260765341. HINFO: read udp 10.244.154.189:39639->192.168.10.1:53: i/o timeout
[ERROR] plugin/errors: 2 2031635366030414054.8432922077260765341. HINFO: read udp 10.244.154.189:47961->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 charts.bitnami.com. A: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 repo.broadcom.com. AAAA: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 repo.huaweicloud.com. A: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 gitee.com. A: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 natm.app. AAAA: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 natm-generator.app. A: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 natm-sso.app. AAAA: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 opensearch-cluster-data.kubesphere-logging-system.svc. A: dial udp [2409:8062:2000:1::1]:53: connect: network is unreachable
# ... the same "network is unreachable" error repeats many more times for these domains
- Check the CoreDNS configuration to see the upstream servers:

kubectl get configmap coredns -n kube-system -o yaml
# The configuration looks like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        # upstream servers: the host's /etc/resolv.conf
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2025-02-23T08:56:44Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "215"
  uid: 028ef4aa-a4ab-40ae-8997-7cf9bced2d22
# Use this command to inspect what DHCP assigned to the interface
nmcli dev show enp7s0
GENERAL.DEVICE:                         enp7s0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         00:E0:4F:10:E5:6F
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     home
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/6
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         192.168.10.7/24
IP4.GATEWAY:                            192.168.10.1
IP4.ROUTE[1]:                           dst = 192.168.10.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 192.168.10.1, mt = 100
IP4.DNS[1]:                             8.8.8.8
IP4.DNS[2]:                             8.8.4.4
IP6.ADDRESS[1]:                         2409:8a62:1213:1d70:34b0:e39d:f096:9afb/64
IP6.ADDRESS[2]:                         fe80::8d93:d849:c0c:2435/64
IP6.GATEWAY:                            fe80::a:1
IP6.ROUTE[1]:                           dst = 2409:8a62:1213:1d70::/64, nh = ::, mt = 100
IP6.ROUTE[2]:                           dst = ::/0, nh = fe80::a:1, mt = 100
IP6.ROUTE[3]:                           dst = fe80::/64, nh = ::, mt = 100
IP6.ROUTE[4]:                           dst = ff00::/8, nh = ::, mt = 256, table=255
IP6.DNS[1]:                             2409:8062:2000:1::1
IP6.DNS[2]:                             2409:8062:2000:1::2
# Solution
# Disable IPv6
# Temporarily:
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
# Permanently:
vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
sysctl -p
reboot
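
After the host-level DNS change, restarting CoreDNS makes it pick up the new /etc/resolv.conf right away (optional, since the reboot above also does this):
kubectl -n kube-system rollout restart deployment coredns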
# Setting up NFS storage
# Install the NFS server
Omitted; see the NFS chapter.
# Configure the NFS clients
- Run on every server that is a node of the Kubernetes cluster:

sudo yum install -y nfs-utils
# Install the NFS provisioner
# 192.168.10.7:/nfs is the export on the NFS server
helm install nfs-subdir-external-provisioner ./nfs-subdir-external-provisioner --set nfs.server=192.168.10.7 --set nfs.path=/nfs
# Check the pod status (it runs in the default namespace by default)
kubectl get pods -n default | grep nfs-subdir
# Check the StorageClass
kubectl get sc # the nfs-client StorageClass should now be listed
# Configure a StorageClass (optional)
# nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: cluster.local/nfs-subdir-external-provisioner  # must match the provisioner name
parameters:
  archiveOnDelete: "false"  # whether to archive (keep) the data when the PVC is deleted
# Install
kubectl apply -f nfs-storageclass.yaml
# Uninstall
kubectl delete -f nfs-storageclass.yaml
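
A quick way to confirm dynamic provisioning works is to create a small PVC against the nfs-client class (class name as reported by kubectl get sc; adjust if yours differs) and check that it becomes Bound:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  storageClassName: nfs-client
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc nfs-test-pvc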
# Service mesh configuration
# Debugging a pod
- Start command (no leading or trailing spaces): /bin/bash
- Start arguments (separate multiple arguments with commas): -c, while true; do sleep 3600; done
- Worked example:

# Taking a real command as the example:
podman run -p 9000:9000 -p 9001:9001 192.168.10.7:20080/library/minio:latest server /data --console-address ":9001"
- The corresponding arguments are: server,/data,--console-address="0.0.0.0:9001"
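
Expressed directly in a pod spec, the same start command / start arguments split looks roughly like this (a sketch; the image and names are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: debug-sleep
spec:
  containers:
  - name: debug
    image: busybox:1.36
    command: ["/bin/sh"]                              # the "start command" field
    args: ["-c", "while true; do sleep 3600; done"]   # the "start arguments" field
EOF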
# Mounting a .env file
- Using the /app/api/.env resource as an example: set the mount path to /app/api/.env and the mount subpath to .env.
- In the ConfigMap mount, set the key name to env and the path to .env, matching the subpath configured above (a sketch follows).
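
Roughly what those UI settings translate to in the pod spec (a sketch; the ConfigMap name api-env and the container details are assumptions):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-mount-demo
spec:
  containers:
  - name: api
    image: busybox:1.36
    command: ["sh", "-c", "cat /app/api/.env && sleep 3600"]
    volumeMounts:
    - name: env-file
      mountPath: /app/api/.env   # mount path
      subPath: .env              # mount subpath
  volumes:
  - name: env-file
    configMap:
      name: api-env
      items:
      - key: env                 # ConfigMap key name
        path: .env               # path, matching the subPath above
EOF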
# DevOps configuration
# Configuring single sign-on
- This was configured once, but the setting can no longer be located; omitted.

# Configuring the publish-over-ssh plugin
- In the kubesphere-devops-system project of system-workspace, get the Jenkins login account and password from the secrets.
- Log in to the Jenkins dashboard from the internal network (it can also be exposed externally if SSO is not used). Under System Settings -> Plugin Manager -> Advanced -> Update Site, change the URL to https://mirrors.huaweicloud.com/jenkins/updates/dynamic-2.346/update-center.json
- Change the signature-check setting: in kubesphere-devops-system, go to Application Workloads -> Workloads -> devops-jenkins, More -> Edit Settings -> Containers -> devops-jenkins, click Edit and add -Dhudson.model.DownloadService.noSignatureCheck=true to JAVA_TOOL_OPTIONS; the container restarts automatically after saving.
- In the Jenkins dashboard, open Plugin Manager; a floating menu at the bottom of the page has a "Check now" button. After that, plugins can be installed directly from the page.
 
# KubeSphere 4.x custom registry configuration
Image registry configuration: each project can be given several ordinary image registries and one default registry. Path: Configuration -> Secrets -> Create -> Image Registry Information -> Validate -> Save.
- Pitfall: Harbor's external address must use the host IP, not a domain name, otherwise KubeSphere cannot use it.
- If Harbor's external address is a domain name, KubeSphere still resolves to and uses that domain even when the registry was added by IP, and it still fails.

Helm repository configuration: each workspace can be given several repositories. Path: Workspace -> App Management -> App Repositories -> Add -> URL -> Validate -> OK.
- Note: the repository address must be an IP address rather than a domain name, otherwise KubeSphere cannot use it, e.g. 192.168.10.7:20080/chartrepo/library
 
# Pitfalls of KubeSphere v4
- The Kubernetes version recommended for v4 is 1.28.1. Since 1.24, Kubernetes uses containerd as the default container runtime; previously it was Docker.
- The containerd runtime can hit obscure problems that cannot be worked around. Note: having both Docker and containerd installed on the same machine gives more room to manoeuvre.
- Alternatively, install with Kubernetes v1.21.5 and use Docker as the container runtime.
- The config-sample.yaml looks like this:
 
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.10.7, internalAddress: 192.168.10.7, user: root, password: "xxxxx"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker # from v1.24 on this would be containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
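
With config-sample.yaml prepared, the cluster is created from it (assuming the kk binary sits in the current directory):
./kk create cluster -f config-sample.yaml
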
Everything below covers KubeSphere 3, is outdated, and is kept only as an archive.
# Backup of important KubeSphere artifact configurations
# nginx configuration backup
events {
  worker_connections 1024;
}
 http {
   resolver 10.96.0.10 valid=60s;  # CoreDNS ClusterIP; valid is the DNS cache refresh time; uses the default search domains
   server {
     listen 80;
     location / {
       root /usr/share/nginx/html;
     }
    location /atm {
                set $backend "http://natm.app.svc.cluster.local:8091";
                proxy_pass $backend;
        }
    location /pdmaner/ {
                proxy_set_header   Host    $http_host;
                set $backend "http://pdmaner.app.svc.cluster.local";
                proxy_pass $backend;
        }
   location /atm/generator {
                set $backend "http://natm-generator.app.svc.cluster.local:8092";
                proxy_pass $backend;
                proxy_cookie_path /atm/generator /atm;
        }
    location /atm/ssoServer {
                set $backend "http://natm-sso.app.svc.cluster.local:8090";
                proxy_pass $backend;
                proxy_cookie_path /atm/ssoServer /atm;
                proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
             # build redirect addresses consistently from $http_host
             proxy_redirect ~^(https?://[^/]+)?(?<path>/.*)$ $scheme://$http_host$path;
        }
     location /natm-ui/ {
                set $backend "http://natm-ui.app.svc.cluster.local";
                proxy_pass $backend;
                proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
     location /difyweb/ {
                set $backend "http://dify-service.automannn.svc.cluster.local:3000";
                proxy_pass $backend;
     }
     location /difyapi/ {
                set $backend "http://dify-service.automannn.svc.cluster.local:5001";
                # rewrite the request path to strip the prefix
                rewrite ^/difyapi/(.*) /$1 break;
                proxy_pass $backend;
     }
     location /difyplugin/ {
                set $backend "http://dify-plugin-daemon.automannn.svc.cluster.local:5002";
                # rewrite the request path to strip the prefix
                rewrite ^/difyplugin/(.*) /$1 break;
                proxy_pass $backend;
     }
    location /miniosvc/ {
                set $backend "http://minio.automannn.svc.cluster.local:9000";
                # rewrite the request path to strip the prefix
                rewrite ^/miniosvc/(.*) /$1 break;
                proxy_pass $backend;
     }
    location /minioconsole/ {
                set $backend "http://minio.automannn.svc.cluster.local:9001";
                # rewrite the request path to strip the prefix
                rewrite ^/minioconsole/(.*) /$1 break;
                proxy_pass $backend;
     }
    location /outline{
                set $backend "http://outline.app.svc.cluster.local:3000";
                # rewrite the request path to strip the prefix
                rewrite ^/outline/(.*) /$1 break;
                proxy_pass $backend;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            
             # build redirect addresses consistently from $http_host
             proxy_redirect ~^(https?://[^/]+)?(?<path>/.*)$ $scheme://$http_host$path;
        }
   }
   server {
     listen 8081;
     location / {
       		set $backend "http://minio.automannn.svc.cluster.local:9001";
                proxy_pass $backend;
                
               proxy_http_version 1.1;
               proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "upgrade";
             proxy_set_header Host $http_host;
             proxy_read_timeout 86400;  # long-lived connection timeout
        
     }
       location /miniosvc/ {
                set $backend "http://minio.automannn.svc.cluster.local:9000";
                # 重写请求路径,去除前缀
                rewrite ^/miniosvc/(.*) /$1 break;
                proxy_pass $backend;
       }
    }
   server {
        listen       8082 ssl;
        server_name  home.automannn.cn;
        ssl_certificate      ./cert/home.automannn.cn.pem;
        ssl_certificate_key  ./cert/home.automannn.cn.key;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;
        location / {
           proxy_pass http://192.168.10.7:10233/;
           proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_redirect off;
        }
    }
 }
# Installing KubeSphere 3.x
# All-in-One installation
The minimum configuration without any optional components is 2 cores, 4 GB RAM and 40 GB disk; with all components enabled, 8 cores and 16 GB RAM are recommended.
# Download KubeKey
# Be sure to run this first; otherwise the download goes through GitHub and is slow or fails
export KKZONE=cn
# Download kubekey; the latest version at the time of writing is 3.0.13, and the latest is recommended
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
# Make it executable
chmod +x kk
# Install with KubeKey
# The newer kk can install KubeSphere 3.4.1
./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.2
./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.4.1
# Verify
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Multi-node cluster installation
- Assume the cluster is laid out as follows:

| IP | Hostname | Role |
|---|---|---|
| 192.168.10.2 | master | control plane, etcd |
| 192.168.10.3 | node1 | worker |
| 192.168.10.4 | node2 | worker |
# Download kubekey
Omitted.
# Create the cluster configuration file
spec:
  hosts:
  - {name: master, address: 192.168.10.2, port: 22, internalAddress: 192.168.10.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.10.3, port: 22, internalAddress: 192.168.10.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.10.4, port: 22, internalAddress: 192.168.10.4, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
  # only needed for high-availability installations
  #controlPlaneEndpoint:
  #  domain: lb.kubesphere.local
  #  address: ""
  #  port: 6443
# Install the cluster
./kk create cluster -f config-sample.yaml
# Verify
After the installation succeeds, the console prints the access address of the instance.
# Offline installation
- The offline environment is as follows:

| IP | Hostname | Role |
|---|---|---|
| 192.168.10.2 | node1 | internet-connected host, used to build the offline package |
| 192.168.10.3 | node2 | offline control-plane node |
| 192.168.10.4 | node3 | offline registry node |
# Download kubekey
Omitted.
# Create the manifest file on the internet-connected host
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.7/centos7-rpms-amd64.iso
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.7/ubuntu-20.04-debs-amd64.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.22.12
  components:
    helm:
      version: v3.9.0
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
   ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
   ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
# Export the artifact
# manifest-sample.yaml is the manifest file configured above
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
# Copy kubekey and the artifact to the offline environment
Omitted.
# Create the offline cluster configuration file
./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.22.12 -f config-sample.yaml
# Edit the cluster configuration file
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.10.3, internalAddress: 192.168.10.3, user: root, password: test123}
  - {name: node1, address: 192.168.10.4, internalAddress: 192.168.10.4, user: root, password: test123}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    # To let kk deploy the image registry automatically, set this host group (deploying the registry separately from the cluster is recommended to reduce mutual interference)
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    # To have kk deploy Harbor, set this to harbor; if it is left unset and kk is asked to create an image registry, docker registry is used by default.
    type: harbor
    # For a kk-deployed Harbor or any other registry that requires login, set the corresponding auths; this is not needed for a docker registry created by kk.
    # Note: when kk deploys Harbor, set this parameter only after Harbor has started.
    #auths:
    #  "dockerhub.kubekey.local":
    #    username: admin
    #    password: Harbor12345
    # The private registry used during cluster deployment
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
# Install the offline image registry
./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
# Create the Harbor projects
Log in to Harbor and set the projects to public.
The project list is as follows:
 kubesphereio
    kubesphere
    calico
    coredns
    openebs
    csiplugin
    minio
    mirrorgooglecontainers
    osixia
    prom
    thanosio
    jimmidyson
    grafana
    elastic
    istio
    jaegertracing
    jenkins
    weaveworks
    openpitrix
    joosthofman
    nginxdemos
    fluent
    kubeedge
# Configure the Harbor settings
...
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
# Install the cluster
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages
# Verify
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Uninstalling KubeSphere
# All-in-One mode
./kk delete cluster
# Cluster mode
./kk delete cluster [-f config-sample.yaml]
# Multi-tenancy
Multi-tenancy has three levels: cluster, workspace, and project.
A project here corresponds to a Kubernetes namespace.
# Cluster
- Can be thought of as the service provider, apart from root.
- Generally there is only one such account (besides root).

# Workspace
- Workspaces are the foundation of the multi-tenant system.
- A workspace typically has one administrator and n operators.
- The administrator creates projects; operators deploy inside them.

# Project
- A project is the same thing as a Kubernetes namespace and provides virtual isolation for resources.
- In the vast majority of cases, operators work inside projects.
 
# Pluggable components
# Enabling pluggable components in an All-in-One installation
- Go to Platform Management -> Cluster Management -> CRDs and search for clusterconfiguration.
- Open the custom resource, click the "more actions" icon of ks-installer, and choose Edit YAML.
- Search for servicemesh and set enabled to true.
- Search for devops and set enabled to true.
- Search for openpitrix and set enabled to true.
- Save and exit.
- Watch the installation with:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Enabling pluggable components in cluster mode
- vim config-sample.yaml
- Set devops.enabled to true in that file.
- Run ./kk create cluster -f config-sample.yaml to create the cluster.
# Uninstalling pluggable components
# Uninstalling the App Store
- Set openpitrix.store.enabled to false.
- Re-run the cluster installation.

# Uninstalling DevOps
- Set devops.enabled to false, then remove the DevOps release and its status entry:

helm uninstall -n kubesphere-devops-system devops
kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "remove", "path": "/status/devops"}]'
kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "replace", "path": "/spec/devops/enabled", "value": false}]'
- Then clean up the remaining DevOps resources:
# Delete all DevOps-related resources
for devops_crd in $(kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io"); do
    for ns in $(kubectl get ns -ojsonpath='{.items..metadata.name}'); do
        for devops_res in $(kubectl get $devops_crd -n $ns -oname); do
            kubectl patch $devops_res -n $ns -p '{"metadata":{"finalizers":[]}}' --type=merge
        done
    done
done
# Delete all DevOps CRDs
kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io" | xargs -I crd_name kubectl delete crd crd_name
# Delete the DevOps namespace
kubectl delete namespace kubesphere-devops-system
# Uninstalling the service mesh
- Set servicemesh.enabled to false.
- Then remove the service mesh components:
 
curl -L https://istio.io/downloadIstio | sh -
istioctl x uninstall --purge
kubectl -n istio-system delete kiali kiali
helm -n istio-system delete kiali-operator
kubectl -n istio-system delete jaeger jaeger
helm -n istio-system delete jaeger-operator
# DevOps
# Common pipeline stages
- checkout scm: pull the source code from the repository
- unit tests: proceed to the next stage only when they pass
- code quality analysis: usually static analysis with SonarQube
- build and push: build the artifact and push the image to the Docker registry
- deploy to the development environment

# Pipeline notes
- A pipeline can be edited through the graphical UI,
- or instantiated quickly from a pipeline definition file (Jenkinsfile).
- KubeSphere ships with templates for common pipelines.
 
# Key pipeline nodes
- The agent type determines the capabilities available to a pipeline node.
 
# Pipeline configuration
# Pipeline parameters
| Name | Type | Value |
|---|---|---|
| REGISTRY | string | registry.cn-hangzhou.aliyuncs.com |
| DOCKERHUB_NAMESPACE | string | automannn-midware |
| APP_NAME | string | the build name of the current application |
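
Inside a pipeline shell step these parameters are typically consumed like this (a sketch; the credentials and tag are placeholders):
docker build -t $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:latest .
docker login -u <user> -p <password> $REGISTRY
docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:latest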
# Setting up an nginx load balancer
# Changing the App Store nginx chart's default configuration (pitfall)
configurationFile: {}
#  nginx.conf: |-
#  http {
#    server {
#      listen 80;
#      location / {
#        root /usr/share/nginx/html;
#      }
#    }
#  }
Change it to:
configurationFile: |-
    # events block
    events {
      worker_connections 1024;
    }
    http {
    server {
      listen 80;
      
      location /natm-ui/ {
                proxy_pass http://192.168.10.7:30255/natm-ui/;
        }
      
      location / {
        root /usr/share/nginx/html;
      }
      
      
    }
    }
# Changing the proxy configuration through the ConfigMap
- After changing the configuration, the containers must be recreated (a restart sketch follows).
- Make sure the nginx.conf content is correct; otherwise baffling, hard-to-diagnose problems appear (the error messages are unrelated to the actual cause).
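
One way to recreate the pods after editing the ConfigMap (the deployment name and namespace are placeholders):
kubectl -n <namespace> rollout restart deployment <nginx-deployment>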
 
# What KubeSphere essentially is
Kubernetes resources + custom resources + Jenkins resources
# How KubeSphere relates to Kubernetes (k8s resources)
# Workspaces and projects
- KubeSphere's multi-tenant system has three levels: cluster, workspace, and project.
- A multi-tenant system is a software architecture in which multiple users (tenants) share the same system or components while each tenant's data and resources stay isolated; such systems are also known as SaaS platforms.

- In KubeSphere, projects and DevOps projects correspond to Kubernetes namespaces.
 
# Pods (container groups)
- A container group corresponds to the Kubernetes Pod resource.
- It is a combination of one or more containers that share storage, network, and namespaces.
 
# App Store (application templates)
- Corresponds to Kubernetes package management, i.e. Helm Chart packages.
- A chart contains all the resource definitions an application needs.
- In KubeSphere, templates and the App Store are scoped to workspaces, not to individual projects.
- The App Store is implemented on top of OpenPitrix.
 
# Applications
- Correspond to Helm Releases.
- A Release is a running instance of a Chart, used to manage application versions and upgrades.
 
# Services
- Correspond to the Kubernetes Service resource.
- A Service is the access entry point of an application; the types are ClusterIP, NodePort, and LoadBalancer.
- ClusterIP gives the Service an internal cluster IP and can only be reached from inside the cluster; for external access, use a NodePort Service instead.
- NodePort exposes the Service on every node's IP at a fixed port, bringing traffic from outside the cluster into the Service.
- LoadBalancer builds on NodePort and exposes the Service through an external load balancer; it is intended for public-cloud environments. A NodePort sketch follows.
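
A minimal NodePort Service sketch (name, selector, and ports are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80          # port on the cluster-internal ClusterIP
    targetPort: 8080  # container port
    nodePort: 30080   # port opened on every node
EOF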
 
 
# Routes
- An Ingress provides routing rules for HTTP(S) traffic and suits scenarios that need HTTP(S) handling. It can be combined with NodePort or ClusterIP Services and can also take the place of a LoadBalancer.
 
# Workloads
- Include Deployments and StatefulSets, corresponding to the Kubernetes Deployment (which manages a ReplicaSet underneath), StatefulSet, and DaemonSet resources.
- These are the three common controller objects for managing how applications are deployed and run.
- Deployment suits stateless applications such as web services and API services.
- StatefulSet suits stateful applications such as databases, message queues, and storage nodes.
- DaemonSet deploys one copy to every node, e.g. a log-collection daemon running on each Node.
 
# Configuration
- Corresponds to the Kubernetes ConfigMap and Secret resources; no further explanation needed.
 
# Storage
- Storage is either static or dynamic: dynamic storage is created through a PVC plus a StorageClass, while static storage is created as a PV that is then bound to a PVC (a PV/PVC sketch follows the snippet below).
- Only the static approach is shown here:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      imagePullSecrets:
        - name: hub-pass              # name of the configured image-pull secret
      containers:
        - name: <container-name>
          image: <image-address>
          volumeMounts:
            - name: <volume-name>
              mountPath: /var/lib/mysql
              subPath: <sub-path>
      volumes:
        - name: <volume-name>
          persistentVolumeClaim:
            claimName: <pvc-name>     # statically created PVC
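
For completeness, a minimal sketch of the statically created PV plus the PVC that the claimName above would reference (the hostPath and sizes are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/demo-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""   # empty string: bind to a statically created PV instead of dynamic provisioning
EOF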
# Custom Kubernetes resources
- Source To Image (S2I): a toolkit and workflow for building reproducible container images from source code.
- Binary to Image (B2I): a toolkit and workflow for building reproducible container images from binary executables (such as JARs, WARs, and binary packages).
- The core idea is a builder image plus a runtime image; the technical details are at https://github.com/openshift/source-to-image
- S2I is an image-building tool open-sourced by Red Hat as part of OpenShift; it provides a templated build scheme so developers can prepare runtime environments (builder images) for different kinds of source code in advance and build and run quickly.
# Jenkins resources
# DevOps projects
- A DevOps project corresponds to a folder in Jenkins.
- It also corresponds to a namespace in Kubernetes, though that namespace contains no resources.

# Pipelines
- Correspond to Jenkins pipelines.
- Note that the official Jenkins image is based on the fairly old 2.319; many plugins are incompatible with it, so install plugins with care, otherwise KubeSphere may break.
- The sshPublisher plugin has been verified to work; download its plugin package to install it.
 
# Custom pipeline steps
- Custom steps also rely on a custom resource, clustersteptemplates; however, a plugin can be used directly without a dedicated step, so the details are omitted here.
- If a plugin has no corresponding step, the graphical editor cannot be used for it, but the Jenkinsfile can still be edited by hand.

# Custom pipeline agents
- The agent is where the whole pipeline, or a specific stage, runs inside the Jenkins environment (the execution host).
- KubeSphere creates agents from PodTemplates and ships four basic agents: base, nodejs, maven, and go.
- In theory, custom agents make it possible to build for any programming environment.
- To define a custom agent, edit the jenkins-casc-config ConfigMap (data.jenkins_user.yaml -> jenkins.clouds.kubernetes.templates), add the configuration below, and wait about 70 seconds for it to take effect:
 
- name: "automannn-mvn-jdk11" # name of the custom Jenkins agent
  label: "automannn maven jdk11" # labels of the custom Jenkins agent; separate multiple labels with spaces
  inheritFrom: "maven" # name of the existing pod template this custom agent inherits from (one of the four built-in ones)
  containers:
  - name: "maven" # name of the container defined in the inherited pod template, i.e. the container being overridden
    image: "kubespheredev/builder-maven:v3.2.0jdk11" # the custom image
# Practical notes
# Restarting the cluster
- Node IPs must stay fixed.
- Either never reboot the nodes, or wait at least 10 minutes after a reboot before doing anything; the cluster starts again on its own.
 
# Error when changing a password
- Error message: Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2
- Fix (either of the following):

set hostNetwork: true on the ks-controller-manager workload (a patch sketch follows)
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io users.iam.kubesphere.io
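
One way to set hostNetwork on the controller manager from the command line (a sketch; editing the Deployment in the console achieves the same thing):
kubectl -n kubesphere-system patch deployment ks-controller-manager --type merge -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'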
# Changing the Maven settings used by pipelines
- Go to Platform Management -> Configuration -> ConfigMaps.
- Search for devops.
- Open Edit Settings on ks-devops-agent and make the change there.
# Configuring a secret for private images
- Open the project (i.e. namespace) that needs to pull the private image.
- Go to Configuration -> Secrets.
- Create a secret of type "Image Registry Information"; the registry address is ip:port, e.g. 192.168.10.7:30002
- In every Deployment that uses the private image, reference the secret:
 
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        kubesphere.io/imagepullsecrets: '{"<container-name>":"hub-pass"}'   # maps the container name to the secret name
    spec:
      imagePullSecrets:
        - name: hub-pass   # name of the configured secret
      containers:
        - name: <container-name>
          image: <image-address>
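
The UI-created secret is equivalent to a docker-registry secret created on the command line (a sketch; the credentials are placeholders):
kubectl -n <project-namespace> create secret docker-registry hub-pass --docker-server=192.168.10.7:30002 --docker-username=<user> --docker-password=<password>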
# Adding a third-party Helm repository
- Go to Workspace -> App Management -> App Repositories (the menu is visible only to users with admin permission).
- Add the repository, e.g.: user:pass@192.168.0.2:30002/chartrepo/automannn
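
For reference, the same repository added with the Helm CLI (a sketch; the http scheme and credentials are assumptions):
helm repo add automannn http://192.168.0.2:30002/chartrepo/automannn --username <user> --password <pass>
helm repo update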