kubernetes ssh tunnel debug practice


The flow charts in this article are taken from KT Connect 轻量级云原生测试环境治理工具 (a lightweight cloud-native test-environment governance tool), reorganized and extended with my own experiments.

More and more development teams deploy their products on Kubernetes. Once a service runs on Kubernetes, infrastructure (Infra) and continuous delivery (CD) inevitably come into play. Given such a large and complex architecture, are there existing projects that can help a development team debug quickly on Kubernetes?

For example, routing online (remote) traffic to the local machine (local), or the other way around, sending locally generated test requests to the online environment.

Pain points

Kubernetes natively provides one method, port-forward, which lets the local side access an online service through localhost:<port>.
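As a minimal sketch of this native approach (my-app is a hypothetical Deployment name and 8080 an arbitrary local port, not anything from this article):

```shell
# Forward local port 8080 to port 80 of a Deployment named my-app
# (my-app is a placeholder; substitute your own workload).
kubectl port-forward deploy/my-app 8080:80

# In another terminal, the remote service is now reachable locally:
curl localhost:8080
```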

We should recognize that the port-forward capability Kubernetes natively provides is limited. In a microservice architecture, calls are not only the caller (client) depending on someone else (service-A); there is also someone else (service-B) depending on the caller (client). If we only have the native Kubernetes method to build on, we are forced to deploy the service into a test environment, which is quite troublesome and can hit all kinds of problems, for example:

  • One-way calls: requests can only go from local into Kubernetes; other services on Kubernetes have no way to forward requests back to local.

  • Complex multi-service deployment: once the microservice architecture grows, deployment and configuration become a real problem, requiring SRE or DevOps staff to set up a dedicated test environment.

  • Code changes: once port-forward is used to route local traffic into Kubernetes, code that originally accessed services by service name has to be rewritten to a localhost:<port> form. The more traffic is redirected, the more places in the code change, and the more likely human configuration errors become.
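To illustrate that last point with a hypothetical service-a (the names, path, and ports below are assumptions for illustration, not from this article): a call that uses the in-cluster Service DNS name must be rewritten to hit a forwarded local port instead.

```shell
# Before: in-cluster code reaches the peer via its Service DNS name
curl http://service-a/api/health

# After switching to port-forward, every such call site must change to a
# local address (8081 is an arbitrary example port):
kubectl port-forward svc/service-a 8081:80 &
curl http://localhost:8081/api/health
```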

Solution

This is where Alibaba got creative and built a project called kt-connect. Let's first see how they introduce it themselves.

Manage and Integration with your Kubernetes dev environment more efficient.

OK, that's not entirely clear xD. Simply put, Alibaba built it to solve the three problems developers run into above:

  1. Developers can route traffic on Kubernetes to local for testing.
  2. Developers can send local test requests into Kubernetes for testing.
  3. Developers don't need to change much code; they can keep making requests to services the same way as before (e.g. via the service DNS name).

Now that we've seen the benefits kt-connect brings, let's install it and play with it!

  • A Kubernetes environment (a dev environment with partial kubeconfig permissions, e.g. create, list, delete, deploy, etc.)
  • ssh
  • A heart that loves to debug

install kt-connect dependency packages

Before using kt-connect, we need to install some dependency packages.

install sshuttle

#mac
brew install sshuttle
#linux
pip install sshuttle
#windows
https://rdc-incubators.oss-cn-beijing.aliyuncs.com/stable/ktctl_windows_amd64.tar.gz

install kt-connect

Next, install kt-connect itself.

#mac
curl -OL https://rdc-incubators.oss-cn-beijing.aliyuncs.com/stable/ktctl_darwin_amd64.tar.gz
tar -xzvf ktctl_darwin_amd64.tar.gz
mv ktctl_darwin_amd64 /usr/local/bin/ktctl

#linux
curl -OL https://rdc-incubators.oss-cn-beijing.aliyuncs.com/stable/ktctl_linux_amd64.tar.gz
tar -xzvf ktctl_linux_amd64.tar.gz
mv ktctl_linux_amd64 /usr/local/bin/ktctl

After installation, run the check command to confirm all the dependencies are in place:

ktctl check

1:01PM INF system info darwin-amd64
1:01PM INF checking ssh version
OpenSSH_8.1p1, LibreSSL 2.7.3
1:01PM INF checking ssh version start at pid: 16888
1:01PM INF checking kubectl version
Client Version: v1.18.2
Server Version: v1.17.1+6af3663
1:01PM INF checking kubectl version start at pid: 16890
1:01PM INF checking sshuttle version
1.0.4
1:01PM INF checking sshuttle version start at pid: 16891
1:01PM INF KT Connect is ready, enjoy it!

connect kubernetes in localhost

Let's first test establishing a tunnel between localhost and remote. Be sure to run it with sudo, otherwise kt-connect cannot manipulate the system's network.

sudo ktctl --namespace=default connect

Password:
2:21PM INF Connect Start At 28422
2:21PM INF Client address 192.168.51.191
2:21PM INF deploy shadow deployment kt-connect-daemon-ojbky in namespace default

2:21PM INF pod label: kt=kt-connect-daemon-ojbky
2:21PM INF pod: kt-connect-daemon-ojbky-6484749d95-2zqnl is running,but not ready
2:21PM INF pod: kt-connect-daemon-ojbky-6484749d95-2zqnl is running,but not ready
2:21PM INF pod: kt-connect-daemon-ojbky-6484749d95-2zqnl is running,but not ready
2:21PM INF Shadow pod: kt-connect-daemon-ojbky-6484749d95-2zqnl is ready.
2:21PM INF Fail to get pod cidr from node.Spec.PODCIDR, try to get with pod sample
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
2:21PM INF port-forward start at pid: 28424
Handling connection for 2222
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
client: Connected.
2:21PM INF vpn(sshuttle) start at pid: 28428
2:21PM INF KT proxy start successful
client: warning: closed channel 1 got cmd=TCP_STOP_SENDING len=0
server: warning: closed channel 1 got cmd=TCP_EOF len=0

Once the connection is established, you can see that a tunnel pod has been created on the remote side. This pod is dedicated to the vpn/socks5 connection with the local side (vpn by default).

kubectl get pod -A                                                                           
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       kt-connect-daemon-ojbky-6484749d95-2zqnl   1/1     Running   0          60s
...

Prepare an nginx deployment in the remote environment for testing:

kubectl create deploy nginx --image nginx
deployment.apps/nginx created
kubectl expose deploy nginx --port 80
service/nginx exposed

After installing nginx, you can inspect the result with the following command:

kubectl get svc,deploy
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   24m
service/nginx        ClusterIP   10.99.58.138   <none>        80/TCP    4m25s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kt-connect-daemon-ojbky   1/1     1            1           2m48s
deployment.apps/nginx                     1/1     1            1           4m34s

curl with nginx pod ip in localhost

First, from local, access the nginx service via the pod IP to see whether the service works:

curl 10.32.0.5:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

curl with nginx service ip

Next, from local, access the nginx service via the nginx service virtual IP to see whether the service works:

curl 10.99.58.138:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

curl with nginx service DNS name

Finally, from local, access the nginx service via the cluster DNS name to test whether the service works:

curl nginx.default.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
...

The three tests above show that we can access nginx on the remote Kubernetes cluster from localhost.

exchange remote deployment traffic to local

What if we want to route remote traffic to localhost?

We can use the ktctl exchange <deployment> --expose <remote port>:<local port> command to redirect the remote deployment's traffic to localhost.

In the example below, I start an apache service locally and redirect the traffic that originally went to the nginx on Kubernetes to the local apache.

Start a simple service on localhost!

  docker run -dit --name test-apache -p 80:80 httpd                                                                                                                                   
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
bb79b6b2107f: Pull complete
26694ef5449a: Pull complete
7b85101950dd: Pull complete
da919f2696f2: Pull complete
3ae86ea9f1b9: Pull complete
Digest: sha256:b82fb56847fcbcca9f8f162a3232acb4a302af96b1b2af1c4c3ac45ef0c9b968
Status: Downloaded newer image for httpd:latest
bbe6575b59c5a11d430da4c158894b7740fae8e520c72ac8a71de227fcb59e3d

docker ps                                                                                                                                                                             
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS                NAMES
bbe6575b59c5        httpd               "httpd-foreground"   28 seconds ago      Up 27 seconds       0.0.0.0:80->80/tcp   test-apache

##Test it locally!
curl 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>

Original architecture: user ==========> (remote) nginx
After the redirect:    user ==========> (remote) nginx ==(redirect)==> (localhost) apache

The following command tells the remote side that traffic entering nginx on port 80 should be redirected to local port 80:

sudo ktctl exchange nginx --expose 80:80
2:37PM INF 'KT Connect' not runing, you can only access local app from cluster
2:37PM INF Client address 192.168.51.191
2:37PM INF deploy shadow deployment nginx-kt-rjulr in namespace default

2:37PM INF pod label: kt=nginx-kt-rjulr
2:37PM INF pod: nginx-kt-rjulr-7b478cd45d-c2xw5 is running,but not ready
2:37PM INF pod: nginx-kt-rjulr-7b478cd45d-c2xw5 is running,but not ready
2:37PM INF pod: nginx-kt-rjulr-7b478cd45d-c2xw5 is running,but not ready
2:37PM INF Shadow pod: nginx-kt-rjulr-7b478cd45d-c2xw5 is ready.
2:37PM INF create exchange shadow nginx-kt-rjulr in namespace default
2:37PM INF scale deployment nginx to 0

2:37PM INF  * nginx (0 replicas) success
2:37PM INF remote 10.32.0.4 forward to local 80:80
Forwarding from 127.0.0.1:2204 -> 22
Forwarding from [::1]:2204 -> 22
2:37PM INF exchange port forward to local start at pid: 30915
2:37PM INF redirect request from pod 10.32.0.4 22 to 127.0.0.1:2204 starting

Handling connection for 2204
Warning: Permanently added '[127.0.0.1]:2204' (ECDSA) to the list of known hosts.
2:37PM INF ssh remote port-forward start at pid: 30917

curl with nginx service in remote

From the remote environment, test whether requests to the nginx service get redirected to localhost:

curl 10.99.58.138
<html><body><h1>It works!</h1></body></html>

curl with nginx DNS in remote

Test whether requests via the nginx DNS name on the remote side get redirected to localhost:

curl nginx.default.svc.cluster.local
<html><body><h1>It works!</h1></body></html>

Dashboard

kt-connect also provides a dashboard, though it's not yet clear to me exactly what functionality it offers; let's take a quick look anyway!

Basically, following the official tutorial, first install the RBAC objects so the dashboard can access resources on Kubernetes.

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ktadmin
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ktadmin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ktadmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ktadmin
subjects:
- kind: ServiceAccount
  name: ktadmin
  namespace: default
EOF  
clusterrole.rbac.authorization.k8s.io/ktadmin created
serviceaccount/ktadmin created
clusterrolebinding.rbac.authorization.k8s.io/ktadmin created

Next, continue following the official docs and install the Deployment and Service the dashboard needs.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kt-dashboard
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: kt-dashboard
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kt-dashboard
  name: kt-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kt-dashboard
  template:
    metadata:
      labels:
        app: kt-dashboard
    spec:
      serviceAccount: ktadmin
      containers:
      - image: registry.cn-shanghai.aliyuncs.com/kube-helm/kt-dashboard:stable
        imagePullPolicy: Always
        name: dashboard
        ports:
        - containerPort: 80
      - image: registry.cn-shanghai.aliyuncs.com/kube-helm/kt-controller:stable
        imagePullPolicy: Always
        name: controller
        ports:
        - containerPort: 8000
EOF
service/kt-dashboard created
deployment.apps/kt-dashboard created

Once the RBAC objects, Service, and Deployment are installed, use kubectl to check their status and see which port the kt-connect dashboard is exposed on:

 kubectl get pod,svc
NAME                                           READY   STATUS    RESTARTS   AGE
pod/kt-connect-daemon-lucia-7b97887df4-d6lqx   1/1     Running   0          3m21s
pod/kt-dashboard-68bbc66bc6-skcsp              2/2     Running   0          12m
pod/nginx-f89759699-zjb6b                      1/1     Running   0          9m39s

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kt-dashboard   NodePort    10.100.1.170   <none>        80:30080/TCP   13m
service/kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        90m
service/nginx          ClusterIP   10.99.58.138   <none>        80/TCP         69m

We can access the dashboard through the port the Service reports; the resulting page is shown below.
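Concretely, given the NodePort mapping shown above (80:30080), the dashboard should be reachable on any cluster node (the node IP is left as a placeholder):

```shell
# kt-dashboard is a NodePort Service, so port 30080 is open on every node.
# Replace <node-ip> with the address of one of your cluster nodes.
curl http://<node-ip>:30080
```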


Meng Ze Li
Kubernetes / DevOps / Backend