OpenShift v3.11 Configuration Guide

Compiled, organized, and written by snow chuai --- 2020/2/15


1. Topology and Configuration Requirements
1) Topology
    +----------------------+      +-----------------------+      +-----------------------+
    |[openshift.1000cc.net]|      | [compute1.1000cc.net] |      | [compute2.1000cc.net] |
    |     (Master Node)    |      |    (Compute Node)     |      |    (Compute Node)     |
    |     (Infra Node)     |      |                       |      |                       |
    |     (Compute Node)   |      |                       |      |                       |
    +----------+-----------+      +----------+------------+      +----------+------------+
               |192.168.10.11                |192.168.10.12                 |192.168.10.13
    -----------+-----------------------------+------------------------------+--------------
               |192.168.10.14                |192.168.10.15                 |192.168.10.16
    +----------+-----------+      +----------+------------+      +----------+------------+
    | [compute3.1000cc.net]|      |   [srv.1000cc.net]    |      | [keystone.1000cc.net] |
    |    (Compute Node)    |      |     (NFS Server)      |      | (OpenStack Keystone)  |
    |                      |      |     (DNS Server)      |      |                       |
    +----------------------+      +-----------------------+      +-----------------------+
2) Requirements
   - OpenShift (master) node: 16 GB RAM and 4 vCPUs
   - Compute nodes: 8 GB RAM and 1 vCPU
   - OS: RHEL (CentOS) 7.4 or later
   - SELinux enabled on all nodes
   - Add NM_CONTROLLED=yes to the NIC configuration on all nodes
   - A DNS server must be available
   - Use the browser in normal mode; do not use incognito/private mode
2. Install and Configure OpenShift
2.1 Create an account with root privileges on all nodes
[root@$allnode ~]# useradd snow
[root@$allnode ~]# passwd snow
[root@$allnode ~]# echo -e 'Defaults:snow !requiretty\nsnow ALL = (root) NOPASSWD:ALL' | \
tee /etc/sudoers.d/openshift
[root@$allnode ~]# chmod 440 /etc/sudoers.d/openshift
2.2 Install the OpenShift repository and the Docker service on all nodes
[root@$allnode ~]# yum install centos-release-openshift-origin311 epel-release docker git pyOpenSSL -y
[root@$allnode ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://3laho3y3.mirror.aliyuncs.com"]
}
[root@$allnode ~]# systemctl enable --now docker
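Before starting Docker it can pay to sanity-check the mirror configuration, since a malformed /etc/docker/daemon.json prevents the daemon from starting at all. A minimal sketch (the /tmp path is only for illustration; validate, then copy the file into place):

```shell
# Write the registry-mirror config to a scratch file and validate that it is
# well-formed JSON before installing it as /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
    "registry-mirrors": ["https://3laho3y3.mirror.aliyuncs.com"]
}
EOF
python -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

If the validation fails, fix the file before running `systemctl enable --now docker`.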
2.3 Generate an SSH key on the openshift node and distribute it to the other nodes
[root@openshift ~]# su - snow
[snow@openshift ~]$ ssh-keygen -q -N ''
Enter file in which to save the key (/home/snow/.ssh/id_rsa):     # press Enter
[snow@openshift ~]$ 
[snow@openshift ~]$ vim ~/.ssh/config
Host os
    Hostname openshift.1000cc.net
    User snow
Host cn1
    Hostname computenode1.1000cc.net
    User snow
Host cn2
    Hostname computenode2.1000cc.net
    User snow
[snow@openshift ~]$ chmod 600 ~/.ssh/config
[snow@openshift ~]$ ssh-copy-id os
[snow@openshift ~]$ ssh-copy-id cn1
[snow@openshift ~]$ ssh-copy-id cn2
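Before handing the hosts to Ansible, it is worth confirming that passwordless SSH works to every alias. A small sketch; BatchMode makes ssh fail immediately instead of prompting for a password:

```shell
# Fail fast if any host still prompts for a password
for h in os cn1 cn2; do
    ssh -o BatchMode=yes "$h" hostname || echo "ssh to $h failed"
done
```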
2.4 Install the OpenShift cluster from the openshift node with Ansible
1) Install openshift-ansible
[snow@openshift ~]$ sudo yum install openshift-ansible -y
2) Define the Ansible inventory
[snow@openshift ~]$ sudo vim /etc/ansible/hosts
......
......
......
......
# Append the following at the end of the file
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=snow
ansible_become=true
openshift_deployment_type=origin
# Authenticate with htpasswd
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Default subdomain for applications on the openshift node
openshift_master_default_subdomain=apps.1000cc.net
# Network segment allowed for the in-cluster registry (if you change this, the private
# registry address must also be changed when deploying the registry later)
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
openshift.1000cc.net openshift_schedulable=true containerized=false

[etcd]
openshift.1000cc.net

# Assign each host its openshift_node_group_name in the [nodes] section below
[nodes]
openshift.1000cc.net openshift_node_group_name='node-config-master-infra'
computenode1.1000cc.net openshift_node_group_name='node-config-compute'
computenode2.1000cc.net openshift_node_group_name='node-config-compute'

3) Run the prerequisites playbook
[snow@openshift ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
......
......
PLAY RECAP *******************************************************************
computenode1.1000cc.net    : ok=58   changed=21   unreachable=0   failed=0
computenode2.1000cc.net    : ok=58   changed=21   unreachable=0   failed=0
localhost                  : ok=11   changed=0    unreachable=0   failed=0
openshift.1000cc.net       : ok=83   changed=22   unreachable=0   failed=0

INSTALLER STATUS *************************************************************
Initialization  : Complete (0:02:09)
4) Deploy the cluster
[snow@openshift ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
......
......
INSTALLER STATUS **************************************
Initialization               : Complete (0:01:30)
Health Check                 : Complete (0:00:23)
Node Bootstrap Preparation   : Complete (0:08:11)
etcd Install                 : Complete (0:02:09)
Master Install               : Complete (0:10:00)
Master Additional Install    : Complete (0:01:54)
Node Join                    : Complete (0:00:41)
Hosted Install               : Complete (0:02:32)
Cluster Monitoring Operator  : Complete (0:01:12)
Web Console Install          : Complete (0:01:01)
Console Install              : Complete (0:01:00)
metrics-server Install       : Complete (0:00:03)
Service Catalog Install      : Complete (0:13:36)
5) Verify the cluster
[snow@openshift ~]$ oc get nodes
NAME                      STATUS    ROLES          AGE       VERSION
computenode1.1000cc.net   Ready     compute        37m       v1.11.0+d4cacc0
computenode2.1000cc.net   Ready     compute        37m       v1.11.0+d4cacc0
openshift.1000cc.net      Ready     infra,master   44m       v1.11.0+d4cacc0
[snow@openshift ~]$ oc get nodes --show-labels=true
NAME                      STATUS    ROLES          AGE       VERSION           LABELS
computenode1.1000cc.net   Ready     compute        1h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=computenode1.1000cc.net,node-role.kubernetes.io/compute=true
computenode2.1000cc.net   Ready     compute        1h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=computenode2.1000cc.net,node-role.kubernetes.io/compute=true
openshift.1000cc.net      Ready     infra,master   1h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=openshift.1000cc.net,node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true
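The role labels shown above can also drive selectors. A small sketch, filtering by the node-role labels that the node-config-* groups apply:

```shell
# List only compute nodes via the role label applied by node-config-compute
oc get nodes -l node-role.kubernetes.io/compute=true
# List only infra nodes
oc get nodes -l node-role.kubernetes.io/infra=true
```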
3. Add User Accounts
1) On the master node, add a new htpasswd-authenticated account
[snow@openshift ~]$ sudo htpasswd /etc/origin/master/htpasswd lisa
New password: 
Re-type new password: 
Adding password for user lisa
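For scripted setups the same entry can be produced without an interactive prompt: `htpasswd -b` takes the password on the command line, or an htpasswd-format line can be generated with openssl. A sketch; the literal password 'password' is only a placeholder:

```shell
# Generate an htpasswd-format entry (user:hash) non-interactively.
# openssl's -apr1 produces the same APR1/MD5 hash format htpasswd uses.
ENTRY="lisa:$(openssl passwd -apr1 'password')"
echo "$ENTRY"
# Append it to the identity-provider file on the master, e.g.:
#   echo "$ENTRY" | sudo tee -a /etc/origin/master/htpasswd
```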
2) Log in to OpenShift on the master node
[snow@openshift ~]$ oc login
Authentication required for https://openshift.1000cc.net:8443 (openshift)
Username: lisa
Password: 
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
3) Confirm the account
[snow@openshift ~]$ oc whoami
lisa
4) Log out
[snow@openshift ~]$ oc logout
Logged "lisa" out on "https://openshift.1000cc.net:8443"
5) Grant the cluster-admin role to the lisa account
[snow@openshift ~]$ oc login -u system:admin
Logged into "https://openshift.1000cc.net:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
  * default
    kube-public
    kube-service-catalog
    kube-system
    management-infra
    openshift
    openshift-ansible-service-broker
    openshift-console
    openshift-infra
    openshift-logging
    openshift-monitoring
    openshift-node
    openshift-sdn
    openshift-template-service-broker
    openshift-web-console
Using project "default".
[snow@openshift ~]$ oc adm policy add-cluster-role-to-user cluster-admin lisa
cluster role "cluster-admin" added: "lisa"
6) Log in from a browser
[Browser]==>https://openshift.1000cc.net:8443

4. Deploy Applications
1) Log in with oc
[snow@openshift ~]$ oc login
Authentication required for https://openshift.1000cc.net:8443 (openshift)
Username: lisa
Password: 
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
2) Create a test project
[snow@openshift ~]$ oc new-project test-project
Now using project "test-project" on server "https://openshift.1000cc.net:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
to build a new example application in Ruby.
3) Tag an application image from Docker Hub
[snow@openshift ~]$ oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Tag deployment-example:latest set to openshift/deployment-example:v2.
4) Deploy the deployment-example application
[snow@openshift ~]$ oc new-app deployment-example
--> Found image da61bb2 (4 years old) in image stream "test-project/deployment-example" under tag "latest" for "deployment-example"

    * This image will be deployed in deployment config "deployment-example"
    * Port 8080/tcp will be load balanced by service "deployment-example"
      * Other containers can access this service through the hostname "deployment-example"
    * WARNING: Image "test-project/deployment-example:latest" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig.apps.openshift.io "deployment-example" created
    service "deployment-example" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/deployment-example'
    Run 'oc status' to view your app.
5) Verify the deployed deployment-example application
[snow@openshift ~]$ oc status
In project test-project on server https://openshift.1000cc.net:8443

svc/deployment-example - 172.30.180.120:8080
  dc/deployment-example deploys istag/deployment-example:latest
    deployment #1 deployed 46 seconds ago - 1 pod

2 infos identified, use 'oc status --suggest' to see details.
6) Describe the application
[snow@openshift ~]$ oc describe svc/deployment-example
Name:              deployment-example
Namespace:         test-project
Labels:            app=deployment-example
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=deployment-example,deploymentconfig=deployment-example
Type:              ClusterIP
IP:                172.30.180.120
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.129.0.2:8080
Session Affinity:  None
Events:            <none>
7) List the pods
[snow@openshift ~]$ oc get pods
NAME                         READY     STATUS    RESTARTS   AGE
deployment-example-1-zstb9   1/1       Running   0          2m
8) Access the cluster IP to confirm it is working
[snow@openshift ~]$ curl 172.30.180.120:8080
......
......
HTML{height:100%;}
BODY{font-family:Helvetica,Arial;display:flex;display:-webkit-flex;align-items:center;justify-content:center;-webkit-align-items:center;-webkit-box-align:center;-webkit-justify-content:center;height:100%;}
......
......
9) Verify the application in the console






10) Delete the application
[snow@openshift ~]$ oc delete all -l app=deployment-example
deploymentconfig "deployment-example" deleted
pod "deployment-example-1-dv2fh" deleted
service "deployment-example" deleted
[snow@openshift ~]$ oc get pods
No resources found.
11) Switch to the default project
[snow@openshift ~]$ oc get pods -n default
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-vprcz    1/1       Running   0          1h
registry-console-1-rj2dc   1/1       Running   0          1h
router-1-ttx8s             1/1       Running   0          1h

[snow@openshift ~]$ oc project default
Now using project "default" on server "https://openshift.1000cc.net:8443".
5. Add a Compute Node
1) Create the sudo-capable account on the new compute node (same as on the other nodes)
[root@computenode3 ~]# useradd snow
[root@computenode3 ~]# passwd snow
Changing password for user snow.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@computenode3 ~]# echo -e 'Defaults:snow !requiretty\nsnow ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
Defaults:snow !requiretty
snow ALL = (root) NOPASSWD:ALL
[root@computenode3 ~]# chmod 440 /etc/sudoers.d/openshift
2) Add the OpenShift repository and install the Docker service
[root@computenode3 ~]# yum install centos-release-openshift-origin311 epel-release docker git pyOpenSSL -y
[root@computenode3 ~]# systemctl enable --now docker
3) On the master node, add the new host to the SSH config
[snow@openshift ~]$ vim ~/.ssh/config
Host os
    Hostname openshift.1000cc.net
    User snow
Host cn1
    Hostname computenode1.1000cc.net
    User snow
Host cn2
    Hostname computenode2.1000cc.net
    User snow
Host cn3
    Hostname computenode3.1000cc.net
    User snow
[snow@openshift ~]$ ssh-copy-id cn3
4) Append the new node to the Ansible inventory
[snow@openshift ~]$ sudo vim /etc/ansible/hosts
# Append new_nodes to the [OSEv3:children] section
[OSEv3:children]
masters
nodes
etcd
new_nodes
......
......
# Append the following at the very end of the file
[new_nodes]
computenode3.1000cc.net openshift_node_group_name='node-config-infra'
5) Add the new compute node
# Check prerequisites and prepare the environment
[snow@openshift ~]$ oc logout
Logged "lisa" out on "https://openshift.1000cc.net:8443"
[snow@openshift ~]$ oc login -u system:admin
[snow@openshift ~]$ oc whoami
system:admin
[snow@openshift ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
INSTALLER STATUS ***********************************
Initialization  : Complete (0:03:51)
# Add the compute node
[snow@openshift ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml
INSTALLER STATUS **************************************
Initialization              : Complete (0:03:01)
Node Bootstrap Preparation  : Complete (0:20:47)
Node Join                   : Complete (0:00:45)
6) Verify
[snow@openshift ~]$ oc get nodes --show-labels=true
NAME                      STATUS    ROLES          AGE       VERSION           LABELS
computenode1.1000cc.net   Ready     compute        3h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=computenode1.1000cc.net,node-role.kubernetes.io/compute=true
computenode2.1000cc.net   Ready     compute        3h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=computenode2.1000cc.net,node-role.kubernetes.io/compute=true
computenode3.1000cc.net   Ready     infra          4m        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=computenode3.1000cc.net,node-role.kubernetes.io/infra=true
openshift.1000cc.net      Ready     infra,master   4h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=openshift.1000cc.net,node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true
7) Move the new node into the [nodes] section and remove the [new_nodes] entries
[snow@openshift ~]$ sudo vim /etc/ansible/hosts
......
......
[OSEv3:children]
masters
nodes
etcd
new_nodes     # remove new_nodes
......
......
[nodes]
openshift.1000cc.net openshift_node_group_name='node-config-master-infra'
computenode1.1000cc.net openshift_node_group_name='node-config-compute'
computenode2.1000cc.net openshift_node_group_name='node-config-compute'
computenode3.1000cc.net openshift_node_group_name='node-config-infra'

# Remove the [new_nodes] section
[new_nodes]
computenode3.1000cc.net openshift_node_group_name='node-config-infra'
8) To add a master node, proceed as follows
[snow@openshift ~]$ sudo vim /etc/ansible/hosts
[OSEv3:children]
masters
nodes
new_masters
.....
.....
[new_masters]
openshift1.1000cc.net openshift_node_group_name='node-config-master'

[snow@openshift ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
[snow@openshift ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-master/scaleup.yml
# Afterwards, move the new node into the [nodes] section and remove the [new_masters] entries, as in step 7)
6. Using Persistent Storage (NFS)
1) Install and configure NFS
[root@srv ~]# yum install nfs-utils -y
[root@srv ~]# vim /etc/idmapd.conf
# Edit line 5
Domain = 1000cc.net
[root@srv ~]# vim /etc/exports
/sharedisk/nfs *(rw,no_root_squash)
[root@srv ~]# mkdir -p /sharedisk/nfs
[root@srv ~]# systemctl enable --now rpcbind nfs-server
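With rpcbind and nfs-server running, the export can be checked from the server itself or from any node. A quick sketch:

```shell
# Confirm the export is visible; /sharedisk/nfs should appear in the export list
showmount -e srv.1000cc.net
```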
2) Display the default SCC (Security Context Constraints) list
[snow@openshift ~]$ oc get scc
NAME               PRIV      CAPS      SELINUX     RUNASUSER          FSGROUP     SUPGROUP    PRIORITY   READONLYROOTFS   VOLUMES
anyuid             false     []        MustRunAs   RunAsAny           RunAsAny    RunAsAny    10         false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
hostaccess         false     []        MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <none>     false            [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid   false     []        MustRunAs   RunAsAny           RunAsAny    RunAsAny    <none>     false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork        false     []        MustRunAs   MustRunAsRange     MustRunAs   MustRunAs   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
node-exporter      false     []        RunAsAny    RunAsAny           RunAsAny    RunAsAny    <none>     false            [*]
nonroot            false     []        MustRunAs   MustRunAsNonRoot   RunAsAny    RunAsAny    <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged         true      [*]       RunAsAny    RunAsAny           RunAsAny    RunAsAny    <none>     false            [*]
restricted         false     []        MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
3) Add the SCC to a group
[snow@openshift ~]$ oc adm policy add-scc-to-group anyuid system:authenticated
scc "anyuid" added to groups: ["system:authenticated"]
4) Create a PV (Persistent Volume)
[snow@openshift ~]$ vim nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  # PV name
  name: nfs-pv
spec:
  capacity:
    # PV size
    storage: 10Gi
  accessModes:
    # Access mode (rw): [ReadWriteMany (RW from multiple nodes), ReadWriteOnce (RW from one node), ReadOnlyMany (R from multiple nodes)]
    - ReadWriteMany
  # Keep this PV even after the pod is terminated
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /sharedisk/nfs
    server: 192.168.10.15
    readOnly: false
[snow@openshift ~]$ oc create -f nfs-pv.yml
persistentvolume/nfs-pv created
[snow@openshift ~]$ oc get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
nfs-pv    10Gi       RWX            Retain           Available                                      16s
5) Create a PVC (Persistent Volume Claim) as a cluster account
[snow@openshift ~]$ oc login
Authentication required for https://openshift.1000cc.net:8443 (openshift)
Username: lisa
Password: 
Login successful.

You have one project on this server: "test-project"

Using project "test-project".

# If the current project is not test-project, switch to it:
[snow@openshift ~]$ oc project test-project
[snow@openshift ~]$ vim nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # PVC name
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # PVC size
      storage: 1Gi
[snow@openshift ~]$ oc create -f nfs-pvc.yml
persistentvolumeclaim/nfs-pvc created
[snow@openshift ~]$ oc get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound     nfs-pv    10Gi       RWX                           18s
6) Set the SELinux boolean on all compute nodes
[root@computenode1 ~]# setsebool -P virt_use_nfs on
[root@computenode2 ~]# setsebool -P virt_use_nfs on
[root@computenode3 ~]# setsebool -P virt_use_nfs on
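Rather than logging in to each node, the same boolean can be set across every inventory host in one shot with Ansible's seboolean module. A sketch, run from the master against the existing /etc/ansible/hosts inventory:

```shell
# Set virt_use_nfs persistently on all hosts in the [nodes] group
ansible nodes -b -m seboolean -a "name=virt_use_nfs state=yes persistent=yes"
```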
7) As the lisa account, create a pod that mounts the NFS share
[snow@openshift ~]$ vim nginx-nfs.yml
apiVersion: v1
kind: Pod
metadata:
  # Pod name
  name: nginx-nfs
  labels:
    name: nginx-nfs
spec:
  containers:
    - name: nginx-nfs
      image: fedora/nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        # Mount point inside the container
        - name: nfs-share
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-share
      persistentVolumeClaim:
        # Name of the PVC created above
        claimName: nfs-pvc
[snow@openshift ~]$ oc create -f nginx-nfs.yml
pod/nginx-nfs created
[snow@openshift ~]$ oc get pods
NAME        READY     STATUS    RESTARTS   AGE
nginx-nfs   1/1       Running   0          42s
8) Enter the container and verify the mount
[snow@openshift ~]$ oc exec -it nginx-nfs bash
[root@nginx-nfs /]# df /usr/share/nginx/html
Filesystem                   1K-blocks    Used Available Use% Mounted on
192.168.10.15:/sharedisk/nfs  39822464 1551872  38270592   4% /usr/share/nginx/html
9) Create a test page
[root@nginx-nfs /]# echo 'NFS Persistent Storage Test ====== 1000cc.net' > /usr/share/nginx/html/index.html
[root@nginx-nfs /]# exit
exit
10) Get the pod IP
[snow@openshift ~]$ oc describe pod nginx-nfs | grep ^IP
IP:                 10.130.0.4
11) Access the test page
[snow@openshift ~]$ curl 10.130.0.4
NFS Persistent Storage Test ====== 1000cc.net
7. Deploy a Private Registry
1) Remove the default registry (if present)
[snow@openshift ~]$ oc logout
[snow@openshift ~]$ oc login -u system:admin
[snow@openshift ~]$ oc project default
[snow@openshift ~]$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-7fhl2   1/1       Running   0          18m
......
......
[snow@openshift ~]$ oc describe pod docker-registry-1-7fhl2 | grep -A3 'Volumes:'
Volumes:
  registry-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
[snow@openshift ~]$ oc delete all -l docker-registry=default
pod "docker-registry-1-h2cdr" deleted
replicationcontroller "docker-registry-1" deleted
service "docker-registry" deleted
deploymentconfig.apps.openshift.io "docker-registry" deleted
[snow@openshift ~]$ oc delete all -l name=registry-console
pod "registry-console-1-2cg24" deleted
replicationcontroller "registry-console-1" deleted
service "registry-console" deleted
deploymentconfig.apps.openshift.io "registry-console" deleted
[snow@openshift ~]$ oc delete serviceaccount registry
serviceaccount "registry" deleted
[snow@openshift ~]$ oc delete oauthclients cockpit-oauth-client
oauthclient "cockpit-oauth-client" deleted
# If registry-registry-role exists, delete it as well
[snow@openshift ~]$ oc delete clusterrolebindings registry-registry-role
clusterrolebinding.authorization.openshift.io "registry-registry-role" deleted
[snow@openshift ~]$ oc get pods
NAME             READY     STATUS    RESTARTS   AGE
router-1-rq6k2   1/1       Running   0          20h
2) Configure the registry
(1) Verify node status
[snow@openshift ~]$ oc get nodes
NAME                      STATUS    ROLES          AGE       VERSION
computenode1.1000cc.net   Ready     compute        20h       v1.11.0+d4cacc0
computenode2.1000cc.net   Ready     compute        20h       v1.11.0+d4cacc0
computenode3.1000cc.net   Ready     infra          11h       v1.11.0+d4cacc0
openshift.1000cc.net      Ready     infra,master   20h       v1.11.0+d4cacc0
(2) Create a directory to store the images
[snow@openshift ~]$ ssh cn1 "sudo mkdir /var/lib/origin/registry"
[snow@openshift ~]$ ssh cn1 "sudo chown snow. /var/lib/origin/registry"
(3) Grant privileges to the registry service account
[snow@openshift ~]$ oc adm policy add-scc-to-user privileged system:serviceaccount:default:registry
scc "privileged" added to: ["system:serviceaccount:default:registry"]
(4) Deploy the registry
[snow@openshift ~]$ sudo oc adm registry \
--config=/etc/origin/master/admin.kubeconfig \
--service-account=registry \
--mount-host=/var/lib/origin/registry \
--selector='kubernetes.io/hostname=computenode1.1000cc.net' \
--replicas=1
--> Creating registry registry ...
    serviceaccount "registry" created
    clusterrolebinding.authorization.openshift.io "registry-registry-role" created
    deploymentconfig.apps.openshift.io "docker-registry" created
    service "docker-registry" created
--> Success
[snow@openshift ~]$ oc project default
Now using project "default" on server "https://openshift.1000cc.net:8443".

[snow@openshift ~]$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-fp5kw   1/1       Running   0          26s
router-1-rq6k2            1/1       Running   0          20h

[snow@openshift ~]$ sudo oc describe pod docker-registry-1-fp5kw
Name:               docker-registry-1-fp5kw
Namespace:          default
Priority:           0
......
......
  Normal  Created  48s   kubelet, computenode1.1000cc.net  Created container
  Normal  Started  47s   kubelet, computenode1.1000cc.net  Started container
(5) Test
[snow@openshift ~]$ oc login
Authentication required for https://openshift.1000cc.net:8443 (openshift)
Username: lisa
Password: 
Login successful.

[snow@openshift ~]$ oc new-project test-project

# Or switch to test-project if it already exists
[snow@openshift ~]$ oc project test-project
[snow@openshift ~]$ oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
--> Found Docker image c5b6c39 (2 months old) from Docker Hub for "centos/ruby-25-centos7"

    Ruby 2.5
    --------
    Ruby 2.5 available as container is a base platform for building and running various Ruby 2.5 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible.

    Tags: builder, ruby, ruby25, rh-ruby25

    * An image stream tag will be created as "ruby-25-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/sclorg/ruby-ex.git will be created
      * The resulting image will be pushed to image stream tag "ruby-ex:latest"
      * Every time "ruby-25-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-ex"
    * Port 8080/tcp will be load balanced by service "ruby-ex"
      * Other containers can access this service through the hostname "ruby-ex"

--> Creating resources ...
    imagestream.image.openshift.io "ruby-25-centos7" created
    imagestream.image.openshift.io "ruby-ex" created
    buildconfig.build.openshift.io "ruby-ex" created
    deploymentconfig.apps.openshift.io "ruby-ex" created
    service "ruby-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/ruby-ex' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/ruby-ex'
    Run 'oc status' to view your app.
# Watch the build and the push to the registry
[snow@openshift ~]$ oc logs -f bc/ruby-ex
[snow@openshift ~]$ oc status
In project test-project on server https://openshift.1000cc.net:8443

svc/ruby-ex - 172.30.253.178:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/sclorg/ruby-ex.git on istag/ruby-25-centos7:latest
    deployment #1 deployed 22 seconds ago - 1 pod
pod/nginx-nfs runs fedora/nginx

3 infos identified, use 'oc status --suggest' to see details.
[snow@openshift ~]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
nginx-nfs         1/1       Running     0          19m
ruby-ex-1-build   0/1       Completed   0          6m
ruby-ex-1-qbxxt   1/1       Running     0          1m
[snow@openshift ~]$ oc describe service ruby-ex
Name:              ruby-ex
Namespace:         test-project
Labels:            app=ruby-ex
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=ruby-ex,deploymentconfig=ruby-ex
Type:              ClusterIP
IP:                172.30.253.178
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.129.0.7:8080
Session Affinity:  None
Events:            <none>

[snow@openshift ~]$ curl 172.30.253.178:8080
......
......
<section class='container'>
  <hgroup>
    <h1>Welcome to your Ruby application on OpenShift</h1>
  </hgroup>
......
......
</body>
</html>
3) Enable the web UI for the registry
(1) Confirm that the registry-console route exists
[snow@openshift ~]$ oc project default
[snow@openshift ~]$ oc get routes
NAME               HOST/PORT                                  PATH      SERVICES           PORT      TERMINATION   WILDCARD
docker-registry    docker-registry-default.apps.1000cc.net              docker-registry    <all>     passthrough   None
registry-console   registry-console-default.apps.1000cc.net             registry-console   <all>     passthrough   None

# If it does not exist, create it with:
[snow@openshift ~]$ oc create route passthrough --service registry-console --port registry-console -n default
(2) Enable the console
[snow@openshift ~]$ oc new-app -n default --template=registry-console \
-p IMAGE_NAME="docker.io/cockpit/kubernetes:latest" \
-p OPENSHIFT_OAUTH_PROVIDER_URL="https://openshift.1000cc.net:8443" \
-p REGISTRY_HOST=$(oc get route docker-registry -n default --template='{{ .spec.host }}') \
-p COCKPIT_KUBE_URL=$(oc get route registry-console -n default --template='https://{{ .spec.host }}')
--> Deploying template "openshift/registry-console" to project default

     registry-console
     ---------
     Template for deploying registry web console. Requires cluster-admin.

     * With parameters:
        * IMAGE_NAME=docker.io/cockpit/kubernetes:latest
        * OPENSHIFT_OAUTH_PROVIDER_URL=https://openshift.1000cc.net:8443
        * COCKPIT_KUBE_URL=https://registry-console-default.apps.1000cc.net
        * OPENSHIFT_OAUTH_CLIENT_SECRET=userYfvUst80IXsSodOEYqyEU8ypsFFqFxaQ666YKm7LXvxD3k1L6ev5t6Xwe8s1kvH8 # generated
        * OPENSHIFT_OAUTH_CLIENT_ID=cockpit-oauth-client
        * REGISTRY_HOST=docker-registry-default.apps.1000cc.net

--> Creating resources ...
    deploymentconfig.apps.openshift.io "registry-console" created
    service "registry-console" created
    oauthclient.oauth.openshift.io "cockpit-oauth-client" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/registry-console'
    Run 'oc status' to view your app.
[snow@openshift ~]$ oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-fp5kw    1/1       Running   0          1h
registry-console-1-khx2w   1/1       Running   0          1m
router-1-rq6k2             1/1       Running   0          22h

[snow@openshift ~]$ oc get routes
NAME               HOST/PORT                                  PATH      SERVICES           PORT      TERMINATION   WILDCARD
docker-registry    docker-registry-default.apps.1000cc.net              docker-registry              passthrough   None
registry-console   registry-console-default.apps.1000cc.net             registry-console             passthrough   None
(3) Access it (note: the FQDN must resolve to the master (openshift) node's IP)
[Browser]===>https://registry-console-default.apps.1000cc.net
8. Allow External Access
1) Configure the external network and apply it
[snow@openshift ~]$ sudo vim /etc/origin/master/master-config.yaml
......
......
# Around line 139, set the external network range
externalIPNetworkCIDRs:
- 192.168.10.0/24
......
......
[snow@openshift ~]$ sudo /usr/local/bin/master-restart api 2
[snow@openshift ~]$ sudo /usr/local/bin/master-restart controllers 2
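Once 192.168.10.0/24 is allowed by externalIPNetworkCIDRs, a service can be assigned an address from that range directly. A hypothetical example; the service name and the address 192.168.10.100 are placeholders, not values from this guide:

```shell
# Assign an external IP from the permitted CIDR to an existing service
oc patch svc nodejs-ex -n test-project \
  -p '{"spec":{"externalIPs":["192.168.10.100"]}}'
```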
2) Access applications from the external network
(1) Confirm the current account
[snow@openshift ~]$ oc whoami
lisa
[snow@openshift ~]$ oc get project
NAME                                DISPLAY NAME   STATUS
default                                            Active
kube-public                                        Active
kube-service-catalog                               Active
kube-system                                        Active
management-infra                                   Active
openshift                                          Active
openshift-ansible-service-broker                   Active
openshift-console                                  Active
openshift-infra                                    Active
openshift-logging                                  Active
openshift-monitoring                               Active
openshift-node                                     Active
openshift-sdn                                      Active
openshift-template-service-broker                  Active
openshift-web-console                              Active
test-project                                       Active
[snow@openshift ~]$ oc project test-project
Now using project "test-project" on server "https://openshift.1000cc.net:8443".
(2) Deploy nodejs-ex
[snow@openshift ~]$ oc new-app https://github.com/openshift/nodejs-ex
--> Found image 93de123 (16 months old) in image stream "openshift/nodejs" under tag "10" for "nodejs"

    Node.js 10.12.0
    ---------------
    Node.js available as docker container is a base platform for building and running various Node.js applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    Tags: builder, nodejs, nodejs-10.12.0

    * The source repository appears to match: nodejs
    * A source build using source code from https://github.com/openshift/nodejs-ex will be created
      * The resulting image will be pushed to image stream tag "nodejs-ex:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "nodejs-ex"
    * Port 8080/tcp will be load balanced by service "nodejs-ex"
      * Other containers can access this service through the hostname "nodejs-ex"

--> Creating resources ...
    imagestream.image.openshift.io "nodejs-ex" created
    buildconfig.build.openshift.io "nodejs-ex" created
    deploymentconfig.apps.openshift.io "nodejs-ex" created
    service "nodejs-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/nodejs-ex'
    Run 'oc status' to view your app.
(3) Watch the build and confirm
[snow@openshift ~]$ oc logs -f bc/nodejs-ex
......
......
Pushing image docker-registry.default.svc:5000/test-project/nodejs-ex:latest ...
Pushed 0/7 layers, 5% complete
Pushed 1/7 layers, 14% complete
Push successful
[snow@openshift ~]$ oc get pods
NAME                READY     STATUS      RESTARTS   AGE
nginx-nfs           1/1       Running     0          1h
nodejs-ex-1-build   0/1       Completed   0          4m
nodejs-ex-1-p7qnc   1/1       Running     0          1m
ruby-ex-1-build     0/1       Completed   0          1h
ruby-ex-1-qbxxt     1/1       Running     0          1h
(4) Get the cluster IP
[snow@openshift ~]$ oc get svc
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
nodejs-ex   ClusterIP   172.30.124.112   <none>        8080/TCP   5m
ruby-ex     ClusterIP   172.30.253.178   <none>        8080/TCP   1h
(5) Access the cluster IP
[snow@openshift ~]$ curl 172.30.124.112:8080
<footer>
<div class="logo"><a href="https://www.openshift.com/"></a></div>
</footer>
</section>
</body>
</html>
(6) Expose the application to external access
[snow@openshift ~]$ oc expose service nodejs-ex
route.route.openshift.io/nodejs-ex exposed
(7) Show the route
[snow@openshift ~]$ oc get routes
NAME        HOST/PORT                                PATH      SERVICES    PORT       TERMINATION   WILDCARD
nodejs-ex   nodejs-ex-test-project.apps.1000cc.net             nodejs-ex   8080-tcp                 None
(8) DNS setup
[root@srv ~]# vim /var/named/1000cc.db
......
......
openshift  IN  A      192.168.10.11
*.apps     IN  CNAME  openshift.1000cc.net.
......
......

[root@srv ~]# systemctl restart named

(9) Test
[Browser]=====>nodejs-ex-test-project.apps.1000cc.net
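The wildcard record can be verified from any client before opening the browser, by querying the DNS server directly. A sketch:

```shell
# Any name under *.apps should follow the CNAME to openshift.1000cc.net
# and ultimately resolve to the master's address, 192.168.10.11
dig +short nodejs-ex-test-project.apps.1000cc.net @192.168.10.15
```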

9. Authenticate with Keystone
1) Configure and start Keystone
(1) Install the OpenStack repository
[root@keystone ~]# yum install centos-release-openstack-queens -y
[root@keystone ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-OpenStack-queens.repo
(2) Install and initialize MariaDB
[root@keystone ~]# yum --enablerepo=centos-openstack-queens install mariadb-server -y
[root@keystone ~]# vim /etc/my.cnf
# Append the following to the [mysqld] section
character-set-server=utf8
[root@keystone ~]# systemctl enable --now mariadb
[root@keystone ~]# mysql_secure_installation
......
......
(3) Install RabbitMQ and memcached
[root@keystone ~]# yum --enablerepo=epel install rabbitmq-server memcached -y
[root@keystone ~]# systemctl enable --now rabbitmq-server memcached
[root@keystone ~]# rabbitmqctl add_user openstack password
Creating user "openstack" ...
...done.
[root@keystone ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
2) Deploy Keystone
(1) Create the keystone database and grant privileges
[root@keystone ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> exit
Bye
(2) Install the Keystone packages
[root@keystone ~]# yum --enablerepo=centos-openstack-queens,epel install openstack-keystone openstack-utils python-openstackclient httpd mod_wsgi -y
(3) Configure Keystone
[root@keystone ~]# vim /etc/keystone/keystone.conf
# line 605: point to the memcached server
memcache_servers = 192.168.10.16:11211
# line 737: set the database connection
connection = mysql+pymysql://keystone:password@192.168.10.16/keystone
# line 2879: add the following in the [token] section
[token]
provider = fernet
# sync the database
[root@keystone ~]# su -s /bin/bash keystone -c "keystone-manage db_sync"
# initialize the Fernet keys
[root@keystone ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@keystone ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# bootstrap Keystone
[root@keystone ~]# keystone-manage bootstrap --bootstrap-password adminpassword \
--bootstrap-admin-url http://192.168.10.16:5000/v3/ \
--bootstrap-internal-url http://192.168.10.16:5000/v3/ \
--bootstrap-public-url http://192.168.10.16:5000/v3/ \
--bootstrap-region-id RegionOne
[root@keystone ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@keystone ~]# systemctl enable --now httpd
[root@keystone ~]# vim ~/keystonerc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_AUTH_URL=http://192.168.10.16:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone)]\$ '
[root@keystone ~]# chmod 600 ~/keystonerc
[root@keystone ~]# source ~/keystonerc
[root@keystone ~(keystone)]# echo "source ~/keystonerc " >> ~/.bash_profile
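A quick way to confirm the environment file works is to source it and check the variables the openstack client reads. The sketch below uses a temporary copy with the same sample values; on the real host you would simply source ~/keystonerc.

```shell
#!/bin/sh
# Hedged sketch: verify that sourcing keystonerc exports the expected
# OS_* variables. A temp copy is used, not the real ~/keystonerc.
RC=$(mktemp)
cat > "$RC" <<'EOF'
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_AUTH_URL=http://192.168.10.16:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF

. "$RC"
# Without OS_AUTH_URL the openstack client cannot reach Keystone at all
echo "auth url: $OS_AUTH_URL"
rm -f "$RC"
```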
3) Enable Keystone authentication
(1) Configure the OpenShift master node
[snow@openshift ~]$ oc logout
Logged "lisa" out on "https://openshift.1000cc.net:8443"
[snow@openshift ~]$ oc login -u system:admin
Logged into "https://openshift.1000cc.net:8443" as "system:admin" using existing credentials.
[snow@openshift ~]$ sudo vim /etc/origin/master/master-config.yaml
# from line 146, modify as follows
......
......
identityProviders:
- challenge: true
  login: true
  mappingMethod: claim
  name: keystone_auth
  provider:
    apiVersion: v1
    kind: KeystonePasswordIdentityProvider
    domainName: default
    url: http://192.168.10.16:5000
......
......
[snow@openshift ~]$ sudo /usr/local/bin/master-restart api 2
[snow@openshift ~]$ sudo /usr/local/bin/master-restart controllers 2
(2) Create a user on the Keystone node
[root@keystone ~(keystone)]# openstack user create --domain default --password password gzliu
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 42e5c2ace7104b378444424fb8eb4e45 |
| name                | gzliu                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
(3) Test from the OpenShift node
[snow@openshift ~]$ oc login
Authentication required for https://openshift.1000cc.net:8443 (openshift)
Username: gzliu
Password:
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
[snow@openshift ~]$ oc whoami
gzliu
[snow@openshift ~]$ oc new-project myproject
Now using project "myproject" on server "https://openshift.1000cc.net:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
to build a new example application in Ruby.
[snow@openshift ~]$ oc project
Using project "myproject" on server "https://openshift.1000cc.net:8443".
(4) Grant the cluster-admin role to the designated Keystone account
[snow@openshift ~]$ oc whoami
gzliu
[snow@openshift ~]$ oc logout
[snow@openshift ~]$ oc login -u system:admin
[snow@openshift ~]$ oc adm policy add-cluster-role-to-user cluster-admin gzliu
cluster role "cluster-admin" added: "gzliu"
10. Deploying a Router
1) Confirm whether a router already exists
[snow@openshift ~]$ oc logout
[snow@openshift ~]$ oc login -u system:admin
[snow@openshift ~]$ oc project default
[snow@openshift ~]$ oc adm router --dry-run --service-account=router
Router "router" service exists
2) Delete the existing router
[snow@openshift ~]$ oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-fp5kw    1/1       Running   0          2h
registry-console-1-khx2w   1/1       Running   0          43m
router-1-rq6k2             1/1       Running   0          22h
[snow@openshift ~]$ oc delete all -l router
pod "router-1-rq6k2" deleted
replicationcontroller "router-1" deleted
service "router" deleted
deploymentconfig.apps.openshift.io "router" deleted
[snow@openshift ~]$ oc delete serviceaccounts router
serviceaccount "router" deleted
[snow@openshift ~]$ oc delete clusterrolebindings router-router-role
clusterrolebinding.authorization.openshift.io "router-router-role" deleted
3) Re-create an HAProxy router on the designated node openshift.1000cc.net
[snow@openshift ~]$ oc adm router router \
--selector='kubernetes.io/hostname=openshift.1000cc.net' \
--replicas=1 --service-account=router
info: password for stats user admin has been set to 7k0xH2U1I7
--> Creating router router ...
    serviceaccount "router" created
    clusterrolebinding.authorization.openshift.io "router-router-role" created
    deploymentconfig.apps.openshift.io "router" created
    service "router" created
--> Success
[snow@openshift ~]$ oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-fp5kw    1/1       Running   0          2h
registry-console-1-khx2w   1/1       Running   0          50m
router-1-nfj6j             1/1       Running   0          44s
4) Test
[snow@openshift ~]$ oc login
Authentication required for https://openshift.1000cc.net:8443 (openshift)
Username: gzliu
Password:
Login successful.
You have one project on this server: "myproject"
Using project "myproject".
[snow@openshift ~]$ oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
--> Found Docker image c5b6c39 (2 months old) from Docker Hub for "centos/ruby-25-centos7"
Ruby 2.5
    --------
    Ruby 2.5 available as container is a base platform for building and running various Ruby 2.5 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible.
Tags: builder, ruby, ruby25, rh-ruby25
    * An image stream tag will be created as "ruby-25-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/sclorg/ruby-ex.git will be created
    * The resulting image will be pushed to image stream tag "ruby-ex:latest"
    * Every time "ruby-25-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-ex"
    * Port 8080/tcp will be load balanced by service "ruby-ex"
    * Other containers can access this service through the hostname "ruby-ex"
--> Creating resources ...
    imagestream.image.openshift.io "ruby-25-centos7" created
    imagestream.image.openshift.io "ruby-ex" created
    buildconfig.build.openshift.io "ruby-ex" created
    deploymentconfig.apps.openshift.io "ruby-ex" created
    service "ruby-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/ruby-ex' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/ruby-ex'
    Run 'oc status' to view your app.
[snow@openshift ~]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-build   0/1       Completed   0          1m
ruby-ex-1-s6pmw   1/1       Running     0          14s
[snow@openshift ~]$ oc get svc
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
ruby-ex   ClusterIP   172.30.148.189   <none>        8080/TCP   1m
[snow@openshift ~]$ oc expose service ruby-ex
route.route.openshift.io/ruby-ex exposed
[snow@openshift ~]$ oc get routes
NAME      HOST/PORT                           PATH      SERVICES   PORT       TERMINATION   WILDCARD
ruby-ex   ruby-ex-myproject.apps.1000cc.net             ruby-ex    8080-tcp                 None
# Make sure ruby-ex-myproject.apps.1000cc.net resolves (the *.apps wildcard record configured earlier already covers it)
5) Client access test
[Browser] ===> ruby-ex-myproject.apps.1000cc.net

 

If this guide helped you, feel free to leave a tip. ^-^