Kolla provides production-ready containers and deployment tools for operating OpenStack clouds.
Environment Setup
This deployment uses three bare-metal machines, all running Ubuntu 16.04 LTS.
Kubernetes Role | OpenStack Role | RAM | CPUs    | IP Address
Master          | Controller     | 16G | 8 cores | 10.0.0.190
Node1           | Compute1       | 16G | 8 cores | 10.0.0.191
Node2           | Compute2       | 16G | 8 cores | 10.0.0.192
Deploying Kubernetes
We install Kubernetes with kubeadm (version 1.10); the official site's deployment guide can be used as a reference.
Install the latest Kubernetes version and other dependencies:
$curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo -E apt-key add -
$cat <<EOF > kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$sudo cp -aR kubernetes.list /etc/apt/sources.list.d/kubernetes.list
$sudo apt-get update
$sudo apt-get install -y docker.io kubelet kubeadm kubectl
Kubernetes v1.8+ requires system swap to be disabled. If you prefer to keep swap on, you must adjust the kubelet arguments instead; here we simply turn it off:
$swapoff -a && sysctl -w vm.swappiness=0
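To keep swap off across reboots, you can also comment out the swap entries in /etc/fstab; a minimal sketch (the kubelet alternative mentioned above would instead pass --fail-swap-on=false via KUBELET_EXTRA_ARGS in the kubeadm drop-in):
# Comment out any fstab line that mounts swap so it stays off after a reboot
$sudo sed -i '/ swap / s/^/#/' /etc/fstab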
Enable and start the Docker daemon:
$systemctl enable docker && systemctl start docker
Check whether Docker is using the cgroupfs or systemd cgroup driver, and adjust the kubelet arguments to match:
$CGROUP_DRIVER=$(sudo docker info | grep "Cgroup Driver" | awk '{print $3}')
$sudo sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
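As a quick sanity check, you can compare the driver Docker reports against what the kubelet drop-in now pins:
$sudo docker info | grep "Cgroup Driver"
$grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf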
Pass bridged IPv4 traffic to iptables:
$cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$sudo sysctl -p /etc/sysctl.d/k8s.conf
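Note that the net.bridge.* keys only exist while the br_netfilter kernel module is loaded; if sysctl reports them missing, load the module first and persist it across reboots:
$sudo modprobe br_netfilter
$echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf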
Point the kubelet's cluster DNS at the address we will use inside the Kubernetes service CIDR:
$sudo sed -i 's/10.96.0.10/10.3.3.10/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload the kubelet configuration and restart the service:
$sudo systemctl daemon-reload
$sudo systemctl stop kubelet
$sudo systemctl enable kubelet
$sudo systemctl start kubelet
On the master node, initialize the Kubernetes cluster with kubeadm:
$sudo kubeadm init --feature-gates CoreDNS=true --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24
You will see output like the following; use it to join the remaining nodes to the cluster.
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.0.0.181:6443 --token pieol0.2kfzpwhosxuqhe6t --discovery-token-ca-cert-hash sha256:e55b423135642404ffc60bcae4793732f18b4ce2866a8419c87b7dd92724a481
On each of the other nodes, join the cluster with:
$kubeadm join 10.0.0.181:6443 --token pieol0.2kfzpwhosxuqhe6t --discovery-token-ca-cert-hash sha256:e55b423135642404ffc60bcae4793732f18b4ce2866a8419c87b7dd92724a481
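If the token has expired or you lost the kubeadm init output, you can generate a fresh join command on the master (available since kubeadm 1.9):
$sudo kubeadm token create --print-join-command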
Set up the kubeconfig on the master node:
$mkdir -p $HOME/.kube
$sudo -H cp /etc/kubernetes/admin.conf $HOME/.kube/config
$sudo -H chown $(id -u):$(id -g) $HOME/.kube/config
Install a Kubernetes CNI on the master; here we use Canal:
$wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
$kubectl apply -f rbac.yaml
$wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
$sed -i "s@10.244.0.0/16@10.1.0.0/16@" canal.yaml
$kubectl apply -f canal.yaml
Because the Kubernetes master will also act as the OpenStack controller, remove the taint from the master node:
$kubectl taint nodes --all=true node-role.kubernetes.io/master:NoSchedule-
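You can confirm the taint is gone (node1 is the master's hostname in our environment; substitute your own):
$kubectl describe node node1 | grep Taints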
Once the CNI is installed, check that every node and pod is ready:
$kubectl get pod,node -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/canal-297sw 3/3 Running 0 2m
kube-system pod/canal-f82qj 3/3 Running 0 2m
kube-system pod/canal-zxfbk 3/3 Running 0 2m
kube-system pod/coredns-7997f8864c-cglcr 1/1 Running 0 9m
kube-system pod/coredns-7997f8864c-sxf2c 1/1 Running 0 9m
kube-system pod/etcd-node1 1/1 Running 0 8m
kube-system pod/kube-apiserver-node1 1/1 Running 0 8m
kube-system pod/kube-controller-manager-node1 1/1 Running 0 8m
kube-system pod/kube-proxy-6gwws 1/1 Running 0 6m
kube-system pod/kube-proxy-9xbdq 1/1 Running 0 6m
kube-system pod/kube-proxy-z54k4 1/1 Running 0 9m
kube-system pod/kube-scheduler-node1 1/1 Running 0 8m
NAME STATUS ROLES AGE VERSION
node1 Ready master 8m v1.10.3
node2 Ready master 8m v1.10.3
node3 Ready master 8m v1.10.3
Use BusyBox to verify the cluster environment, for example that DNS resolution works:
$kubectl run -i -t $(uuidgen) --image=busybox --restart=Never
$nslookup kubernetes
Server: 10.3.3.10
Address 1: 10.3.3.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.3.3.1 kubernetes.default.svc.cluster.local
Create a Tiller service account for Kubernetes Helm and bind it to the cluster-admin role:
$kubectl create serviceaccount tiller --namespace kube-system
$cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF
Next, install Kubernetes Helm, which we will use to manage the Helm packages:
$curl -L https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$chmod 700 get_helm.sh
$./get_helm.sh
$helm init --service-account tiller
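Before moving on, it is worth confirming that the Tiller deployment rolled out and that the client and server versions match:
$kubectl -n kube-system rollout status deploy/tiller-deploy
$helm version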
Since kolla-kubernetes relies on ansible and git, install ansible on the master node; the other nodes only need python:
$sudo apt-get update && sudo apt-get install -y software-properties-common git python python-pip
$sudo apt-add-repository -y ppa:ansible/ansible
$sudo apt-get update && sudo apt-get install -y ansible
Create a working directory for the steps that follow:
$mkdir kolla-bringup
$cd kolla-bringup
Clone kolla-kubernetes and kolla-ansible:
$git clone http://github.com/openstack/kolla-ansible
$git clone http://github.com/openstack/kolla-kubernetes
$cd kolla-kubernetes
$git checkout 22ed0c232d7666afb6e288001b8814deea664992
$cd ../kolla-ansible
$git checkout origin/stable/pike
$cd ..
Install the packages required by kolla-kubernetes and kolla-ansible with pip:
$sudo pip install -U kolla-ansible/ kolla-kubernetes
Copy the configuration files into /etc:
$cp -Ra kolla-kubernetes/etc/kolla/ /etc
$cp -Ra kolla-kubernetes/etc/kolla-kubernetes/ /etc
The tools pip just installed can now generate the default passwords for us:
$sudo kolla-kubernetes-genpwd
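The generated credentials are written to /etc/kolla/passwords.yml; the keystone_admin_password entry there is the one you will later use to log in to Horizon:
$grep keystone_admin_password /etc/kolla/passwords.yml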
Create a Kubernetes namespace to isolate the Kolla deployment:
$kubectl create namespace kolla
Label the nodes that will serve as controller and compute: here node1 becomes the controller, and node2 and node3 become compute nodes:
$kubectl label node node1 kolla_controller=true
$kubectl label node node2 kolla_compute=true
$kubectl label node node3 kolla_compute=true
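A quick check that the labels were applied:
$kubectl get nodes --show-labels | grep kolla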
Adjust /etc/kolla/globals.yml to match your environment:
Set network_interface in /etc/kolla/globals.yml to the management interface name, e.g. eth1.
Set neutron_external_interface in /etc/kolla/globals.yml to the Neutron external interface name, e.g. eth2.
Then append the remaining OpenStack settings to /etc/kolla/globals.yml:
$cat <<EOF > add-to-globals.yml
kolla_install_type: "source"
tempest_image_alt_id: "{{ tempest_image_id }}"
tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
neutron_plugin_agent: "openvswitch"
api_interface_address: 0.0.0.0
tunnel_interface_address: 0.0.0.0
orchestration_engine: KUBERNETES
memcached_servers: "memcached"
keystone_admin_url: "http://keystone-admin:35357/v3"
keystone_internal_url: "http://keystone-internal:5000/v3"
keystone_public_url: "http://keystone-public:5000/v3"
glance_registry_host: "glance-registry"
neutron_host: "neutron"
keystone_database_address: "mariadb"
glance_database_address: "mariadb"
nova_database_address: "mariadb"
nova_api_database_address: "mariadb"
neutron_database_address: "mariadb"
cinder_database_address: "mariadb"
ironic_database_address: "mariadb"
placement_database_address: "mariadb"
rabbitmq_servers: "rabbitmq"
openstack_logging_debug: "True"
enable_heat: "no"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_rbd: "no"
enable_ceph: "no"
enable_elasticsearch: "no"
enable_kibana: "no"
glance_backend_ceph: "no"
cinder_backend_ceph: "no"
nova_backend_ceph: "no"
EOF
$cat ./add-to-globals.yml | sudo tee -a /etc/kolla/globals.yml
Next, generate the OpenStack configuration files with ansible-playbook:
$ansible-playbook -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla kolla-kubernetes/ansible/site.yml
Use the official script to load the OpenStack passwords into Kubernetes secrets:
$kolla-kubernetes/tools/secret-generator.py create
Use the kollakube tool installed earlier via pip to load the OpenStack configuration files into Kubernetes ConfigMaps:
$kollakube res create configmap \
mariadb keystone horizon rabbitmq memcached nova-api nova-conductor \
nova-scheduler glance-api-haproxy glance-registry-haproxy glance-api \
glance-registry neutron-server neutron-dhcp-agent neutron-l3-agent \
neutron-metadata-agent neutron-openvswitch-agent openvswitch-db-server \
openvswitch-vswitchd nova-libvirt nova-compute nova-consoleauth \
nova-novncproxy nova-novncproxy-haproxy neutron-server-haproxy \
nova-api-haproxy cinder-api cinder-api-haproxy cinder-backup \
cinder-scheduler cinder-volume iscsid tgtd keepalived \
placement-api placement-api-haproxy
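You can verify the ConfigMaps landed in the kolla namespace before building the charts:
$kubectl -n kolla get configmaps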
Build the Kolla Helm charts with the official script:
$kolla-kubernetes/tools/helm_build_all.sh .
Create a cloud.yaml describing the deployment:
$cat <<EOF > cloud.yaml
global:
  kolla:
    all:
      docker_registry: docker.io
      image_tag: "4.0.0"
      kube_logger: false
      external_vip: "192.168.7.105"
      base_distro: "centos"
      install_type: "source"
      tunnel_interface: "docker0"
    keystone:
      all:
        admin_port_external: "true"
        dns_name: "192.168.7.105"
      public:
        all:
          port_external: "true"
    rabbitmq:
      all:
        cookie: 67
    glance:
      api:
        all:
          port_external: "true"
    cinder:
      api:
        all:
          port_external: "true"
      volume_lvm:
        all:
          element_name: cinder-volume
        daemonset:
          lvm_backends:
          - '192.168.7.105': 'cinder-volumes'
    ironic:
      conductor:
        daemonset:
          selector_key: "kolla_conductor"
    nova:
      placement_api:
        all:
          port_external: true
      novncproxy:
        all:
          port: 6080
          port_external: true
    openvswitch:
      all:
        add_port: true
        ext_bridge_name: br-ex
        ext_interface_name: enp1s0f1
        setup_bridge: true
    horizon:
      all:
        port_external: true
EOF
Replace that address with the management interface IP of your environment, e.g. 10.0.0.178:
$sed -i "s@192.168.7.105@10.0.0.178@g" ./cloud.yaml
Replace ext_interface_name with the external interface of your environment, e.g. eth1:
$sed -i "s@enp1s0f1@eth1@g" ./cloud.yaml
Replace the tunnel interface with your management interface, e.g. eth2:
$sed -i "s@docker0@eth2@g" ./cloud.yaml
Create an RBAC binding for OpenStack so that the components Helm starts later can access Kubernetes resources without permission errors (upstream OpenStack has not fixed its RBAC for this yet):
$cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kolla
EOF
Bring up the OpenStack services one at a time with Helm (mariadb, rabbitmq, memcached, and so on):
$helm install --debug kolla-kubernetes/helm/service/mariadb --namespace kolla --name mariadb --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/rabbitmq --namespace kolla --name rabbitmq --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/memcached --namespace kolla --name memcached --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/keystone --namespace kolla --name keystone --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/glance --namespace kolla --name glance --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/cinder-control --namespace kolla --name cinder-control --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/horizon --namespace kolla --name horizon --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/openvswitch --namespace kolla --name openvswitch --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/neutron --namespace kolla --name neutron --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/nova-control --namespace kolla --name nova-control --values ./cloud.yaml
$helm install --debug kolla-kubernetes/helm/service/nova-compute --namespace kolla --name nova-compute --values ./cloud.yaml
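The charts take a while to converge, so it helps to watch the kolla namespace while they come up:
$watch -n 10 kubectl get pod -n kolla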
You may run into a first problem here: the nova-compute init job gets stuck in this state.
nova-api-create-cell-r288q 0/1 Init:2/3 0 5min
This is an upstream issue with no official fix yet. Delete the Helm release first, then patch the Helm template so the Keystone URL uses the correct port:
$helm delete --purge nova-compute
$vim kolla-bringup/kolla-kubernetes/helm/microservice/nova-api-create-simple-cell-job/templates/nova-api-create-cell.yaml
{{- $keystonePort := include "kolla_val_get_str" (dict "key" "port" "searchPath" $keystoneSearchPath "Values" .Values) | default "5000" }}
Rebuild the Helm charts:
$kolla-kubernetes/tools/helm_build_all.sh .
Then install the corrected chart again:
$helm install --debug kolla-kubernetes/helm/service/nova-compute --namespace kolla --name nova-compute --values ./cloud.yaml
Confirm that all services are running correctly:
$ kubectl get pod -n kolla
NAME READY STATUS RESTARTS AGE
cinder-api-5d6fd874b5-tzwlr 3/3 Running 0 11h
cinder-create-db-x552q 0/2 Completed 0 11h
cinder-create-keystone-endpoint-admin-7xhjp 0/1 Completed 0 11h
cinder-create-keystone-endpoint-adminv2-252bg 0/1 Completed 0 11h
cinder-create-keystone-endpoint-adminv3-4zcv7 0/1 Completed 0 11h
cinder-create-keystone-endpoint-internal-s2n77 0/1 Completed 0 11h
cinder-create-keystone-endpoint-internalv2-9wr7f 0/1 Completed 0 11h
cinder-create-keystone-endpoint-internalv3-2srh2 0/1 Completed 0 11h
cinder-create-keystone-endpoint-public-7mksf 0/1 Completed 0 11h
cinder-create-keystone-endpoint-publicv2-pqhms 0/1 Completed 0 11h
cinder-create-keystone-endpoint-publicv3-8z6xg 0/1 Completed 0 11h
cinder-create-keystone-service-4sbp6 0/1 Completed 0 11h
cinder-create-keystone-servicev2-9h88v 0/1 Completed 0 11h
cinder-create-keystone-servicev3-p89wk 0/1 Completed 0 11h
cinder-create-keystone-user-4whfz 0/1 Completed 0 11h
cinder-manage-db-hgppr 0/1 Completed 0 11h
cinder-scheduler-0 1/1 Running 0 11h
glance-api-6f649fbf8d-9hwzn 1/1 Running 0 11h
glance-create-db-76lwc 0/2 Completed 0 11h
glance-create-keystone-endpoint-admin-m4mxm 0/1 Completed 0 11h
glance-create-keystone-endpoint-internal-q9whd 0/1 Completed 0 11h
glance-create-keystone-endpoint-public-stszm 0/1 Completed 0 11h
glance-create-keystone-service-hcznf 0/1 Completed 0 11h
glance-create-keystone-user-9f2g7 0/1 Completed 0 11h
glance-manage-db-ch6rp 0/1 Completed 0 11h
glance-registry-684d9cc765-d5g5p 3/3 Running 0 11h
horizon-7bc45d8df6-8ndt6 1/1 Running 0 11h
keystone-b55d658-4bmpf 1/1 Running 0 12h
keystone-create-db-rb9wq 0/2 Completed 0 12h
keystone-create-endpoints-65m86 0/1 Completed 0 12h
keystone-fernet-setup-job-knfm8 0/1 Completed 0 12h
keystone-manage-db-x4m2n 0/1 Completed 0 12h
mariadb-0 1/1 Running 0 12h
mariadb-init-element-ndbxt 0/1 Completed 0 12h
memcached-7b95fd6b69-v8f4v 2/2 Running 0 12h
neutron-create-db-w2hqk 0/2 Completed 0 11h
neutron-create-keystone-endpoint-admin-hkg8p 0/1 Completed 0 11h
neutron-create-keystone-endpoint-internal-cwzrt 0/1 Completed 0 11h
neutron-create-keystone-endpoint-public-bjzzk 0/1 Completed 0 11h
neutron-create-keystone-service-q7ms9 0/1 Completed 0 11h
neutron-create-keystone-user-zvqnw 0/1 Completed 0 11h
neutron-dhcp-agent-l5qkg 1/1 Running 0 11h
neutron-l3-agent-network-64v9x 1/1 Running 0 11h
neutron-manage-db-5dqkn 0/1 Completed 0 11h
neutron-metadata-agent-network-ttf5v 1/1 Running 0 11h
neutron-openvswitch-agent-network-j6llm 1/1 Running 0 11h
neutron-server-6d74c78c98-xzdd8 3/3 Running 0 11h
nova-api-7d5cf595bc-rxg4k 3/3 Running 0 11h
nova-api-create-cell-r288q 0/1 Completed 0 11h
nova-api-create-db-5w2lg 0/2 Completed 0 11h
nova-api-manage-db-wd4b8 0/1 Completed 0 11h
nova-cell0-create-db-5bz6v 0/2 Completed 0 11h
nova-compute-wn5lb 1/1 Running 0 11h
nova-compute-xkv8r 1/1 Running 0 11h
nova-conductor-0 1/1 Running 0 11h
nova-consoleauth-0 1/1 Running 0 11h
nova-create-db-476gl 0/2 Completed 0 11h
nova-create-keystone-endpoint-admin-xbt8x 0/1 Completed 0 11h
nova-create-keystone-endpoint-internal-58dvx 0/1 Completed 0 11h
nova-create-keystone-endpoint-public-8c56c 0/1 Completed 0 11h
nova-create-keystone-service-jngxg 0/1 Completed 0 11h
nova-create-keystone-user-4gc62 0/1 Completed 0 11h
nova-libvirt-kbcrl 1/1 Running 0 11h
nova-libvirt-n2nnj 1/1 Running 0 11h
nova-novncproxy-79bf74796f-9p7ct 3/3 Running 0 11h
nova-scheduler-0 1/1 Running 0 11h
openvswitch-ovsdb-network-rwwrz 1/1 Running 0 11h
openvswitch-vswitchd-network-6q9w9 1/1 Running 0 11h
placement-api-create-keystone-endpoint-admin-bg9ct 0/1 Completed 0 11h
placement-api-create-keystone-endpoint-internal-v998h 0/1 Completed 0 11h
placement-api-create-keystone-endpoint-public-p6mvf 0/1 Completed 0 11h
placement-api-fc8f68544-rvhwc 1/1 Running 0 11h
placement-create-keystone-service-blj57 0/1 Completed 0 11h
placement-create-keystone-user-tw5k4 0/1 Completed 0 11h
rabbitmq-0 1/1 Running 0 12h
rabbitmq-init-element-gtvgw 0/1 Completed 0 12h
Use the official tool to create the admin openrc file; the generated file is saved in the current user's home directory:
$kolla-kubernetes/tools/build_local_admin_keystonerc.sh ext
Next, install the OpenStack client packages:
$sudo pip install "python-openstackclient"
$sudo pip install "python-neutronclient"
$sudo pip install "python-cinderclient"
Use the openstack CLI to create an image, networks, flavors, and so on:
$source ~/keystonerc_admin
$IMAGE_URL=http://download.cirros-cloud.net/0.3.5/
$IMAGE=cirros-0.3.5-x86_64-disk.img
$IMAGE_NAME=cirros
$IMAGE_TYPE=linux
$EXT_NET_CIDR='172.24.10.1/24'
$EXT_NET_RANGE='start=172.24.10.10,end=172.24.10.200'
$EXT_NET_GATEWAY='172.24.10.1'
$curl -L -o ./${IMAGE} ${IMAGE_URL}/${IMAGE}
$openstack image create --disk-format qcow2 --container-format bare --public \
    --property os_type=${IMAGE_TYPE} --file ./${IMAGE} ${IMAGE_NAME}
$openstack network create --external --provider-physical-network physnet1 \
    --provider-network-type flat public1
$openstack subnet create --no-dhcp \
    --allocation-pool ${EXT_NET_RANGE} --network public1 \
    --subnet-range ${EXT_NET_CIDR} --gateway ${EXT_NET_GATEWAY} public1-subnet
$openstack flavor create --id 1 --ram 512 --disk 1 --vcpus 1 m1.tiny
$openstack flavor create --id 2 --ram 2048 --disk 20 --vcpus 1 m1.small
$openstack flavor create --id 3 --ram 4096 --disk 40 --vcpus 2 m1.medium
$openstack flavor create --id 4 --ram 8192 --disk 80 --vcpus 4 m1.large
$openstack flavor create --id 5 --ram 16384 --disk 160 --vcpus 8 m1.xlarge
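As a quick smoke test, you can boot a tiny instance using the image, flavor, and network created above (the instance name demo1 is arbitrary):
$openstack server create --image cirros --flavor m1.tiny \
    --nic net-id=$(openstack network show public1 -f value -c id) demo1
$openstack server list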
You can now use the OpenStack dashboard through Horizon, logging in with the admin user's username and password:
$kubectl get service -n kolla
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cinder-api ClusterIP 10.3.3.125 10.0.0.182 8776/TCP 12h
glance-api ClusterIP 10.3.3.141 10.0.0.182 9292/TCP 12h
glance-registry ClusterIP 10.3.3.15 <none> 9191/TCP 12h
horizon ClusterIP 10.3.3.11 10.0.0.182 80/TCP 12h
keystone-admin ClusterIP 10.3.3.35 10.0.0.182 35357/TCP 12h
keystone-internal ClusterIP 10.3.3.228 <none> 5000/TCP 12h
keystone-public ClusterIP 10.3.3.124 10.0.0.182 5000/TCP 12h
mariadb ClusterIP 10.3.3.98 <none> 3306/TCP 12h
memcached ClusterIP 10.3.3.140 <none> 11211/TCP 12h
neutron-server ClusterIP 10.3.3.21 10.0.0.182 9696/TCP 12h
nova-api ClusterIP 10.3.3.150 10.0.0.182 8774/TCP 12h
nova-metadata ClusterIP 10.3.3.217 <none> 8775/TCP 12h
nova-novncproxy ClusterIP 10.3.3.4 10.0.0.182 6080/TCP 12h
nova-placement-api ClusterIP 10.3.3.159 10.0.0.182 8780/TCP 12h
rabbitmq ClusterIP 10.3.3.66 <none> 5672/TCP 12h
rabbitmq-mgmt ClusterIP 10.3.3.37 <none> 15672/TCP 12h
$cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=kQHEss3THBmHCQWdqNd2b51U8xRB3hKPH6KD4kx3
export OS_AUTH_URL=http://10.0.0.182:5000/v3
export PS1='[\u@\h \W(keystone_admin)]$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_VOLUME_API_VERSION=2
Tearing down the OpenStack Kolla Kubernetes environment
On the master, remove the OpenStack components with Helm:
$helm install --debug ~/kolla-bringup/kolla-kubernetes/helm/service/nova-cleanup --namespace kolla --name nova-cleanup --values ~/kolla-bringup/cloud.yaml
$helm delete mariadb --purge
$helm delete rabbitmq --purge
$helm delete memcached --purge
$helm delete keystone --purge
$helm delete glance --purge
$helm delete cinder-control --purge
$helm delete horizon --purge
$helm delete openvswitch --purge
$helm delete neutron --purge
$helm delete nova-control --purge
$helm delete nova-compute --purge
$helm delete nova-cell0-create-db-job --purge
$helm delete cinder-volume-lvm --purge
On every node, remove the OpenStack volumes:
$sudo rm -rf /var/lib/kolla/volumes/*
On every node, tear down Kubernetes:
$sudo kubeadm reset
$sudo rm -rf /etc/kolla
$sudo rm -rf /etc/kubernetes
$sudo rm -rf /etc/kolla-kubernetes