Investigating with Kubernetes Audit


I just want to see what naughty things you've been doing to my cluster.

When operating a Kubernetes cluster, the administrator hands out accounts to developers, users, and robots. With this many people working on the cluster, is there a way to see who did what to it, so that when things blow up it's easier to find the culprit? (Kidding.)

OK, besides hunting down culprits, audit can also help with debugging, billing, and plenty of things I haven't thought of yet. If you read this article and come up with other uses for audit, please share and discuss!

Let's first look at the audit backends Kubernetes supports.

  • Log backend, which writes events to a disk
  • Webhook backend, which sends events to an external API
  • Dynamic backend, which configures webhook backends through an AuditSink API object.

Kubernetes currently supports these three ways of handling audit data; below we will try each of them once. Before the hands-on examples, let's look at what an audit record can capture.

  1. What happened (What)
  2. When it happened (When)
  3. Who made it happen (Who)

Audit records capture other things as well, but I consider these three the most important; interested readers can consult the official documentation.

Now that we know what Kubernetes audit records, the next question is when an audit record is triggered. This is split into four stages.

  1. RequestReceived

    • Events generated as soon as the audit handler receives the request, and before it is delegated down the handler chain. (In short, the stage at which the apiserver has just received the request.)
  2. ResponseStarted

    • Once the response headers are sent, but before the response body is sent. This stage is only generated for long-running requests (e.g. watch). (I'm not entirely sure about this stage; it looks like only long-running requests such as watch trigger it.)
  3. ResponseComplete

    • The response body has been completed and no more bytes will be sent. (This stage represents the apiserver's response being completed.)
  4. Panic

    • Events generated when a panic occurred. (Triggered only when a panic occurs.)

So far we know that Kubernetes audit records what happened and at which stage it happened. But can we be selective, say, record only pod events, configmap events, or the actions of a specific user?

The answer is yes! Every time I read through a piece of Kubernetes design I'm impressed by how much flexibility the community has thought through; keep learning from open-source projects, because only by standing on the shoulders of giants can you see further. Enough chatter: this kind of classification is called a policy in audit, and a policy rule has one of the following four levels.

  1. None
    • Do not log events matching this rule.
  2. Metadata
    • Log request metadata (user, timestamp, resource, verb, etc.) but not the request body; roughly, keep the headers and let the body go.
  3. Request
    • Log event metadata and the request body, but not the response body.
  4. RequestResponse
    • Log event metadata together with both the request and response bodies.

The official documentation provides an example that we can adapt to our own needs, shown below.

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
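One detail worth calling out: the rules above are evaluated top-down, and the first matching rule decides the event's audit level. A rough Python sketch of that semantics (`match_level` is a hypothetical helper and heavily simplified; the real apiserver also matches users, groups, verbs, namespaces, nonResourceURLs, and more):

```python
# Simplified first-match evaluation of audit policy rules: the first
# rule matching the request decides the level.

def match_level(rules, group, resource, default="Metadata"):
    """Return the level of the first rule matching (group, resource)."""
    for rule in rules:
        # A rule without a "resources" selector matches everything.
        for sel in rule.get("resources", [{}]):
            group_ok = "group" not in sel or sel["group"] == group
            names = sel.get("resources")
            resource_ok = names is None or resource in names
            if group_ok and resource_ok:
                return rule["level"]
    return default

rules = [
    {"level": "RequestResponse",
     "resources": [{"group": "", "resources": ["pods"]}]},
    {"level": "Metadata",
     "resources": [{"group": "", "resources": ["pods/log", "pods/status"]}]},
    {"level": "Metadata"},  # catch-all rule
]

print(match_level(rules, "", "pods"))      # RequestResponse
print(match_level(rules, "", "pods/log"))  # Metadata
```

This also explains why the catch-all `Metadata` rule sits at the bottom of the official example: anything more specific must match first.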

That covers the broad strokes of Kubernetes audit: what gets recorded when an event happens, which stages generate events, and how events can be classified by policy.

With that understood, we can run some experiments. I'll use kubeadm to build an all-in-one Kubernetes cluster, test writing audit records to disk, and also send events out through a webhook for extra processing.

Installing a Test Environment with kubeadm

I won't go into kubeadm installation here; Google has plenty of guides. For this experiment we need to set the audit-related parameters. kubeadm does not yet fully support the audit feature gates, so besides the kubeadm config we also have to set some parameters by hand.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
#featureGates:
#  not support DynamicAuditing
#  DynamicAuditing: true
apiServer:
  extraArgs:
    audit-log-path: /home/ubuntu/audit.log
    audit-policy-file: /etc/kubernetes/addon/audit-policy.yaml
#    not support DynamicAuditing
#    runtime-config=auditregistration.k8s.io/v1alpha1: "true"
#    audit-dynamic-configuration:
  extraVolumes:
  - name: audit
    hostPath: /etc/kubernetes/addon/audit-policy.yaml
    mountPath: /etc/kubernetes/addon/audit-policy.yaml
    readOnly: true
    pathType: File
  - name: audit-log
    hostPath: /home/ubuntu
    mountPath: /home/ubuntu
    pathType: DirectoryOrCreate
cat <<EOF >/etc/kubernetes/addon/audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF

The kubeadm config above sets some of the audit parameters, which kubeadm init passes into the kube-apiserver automatically at startup.

kubeadm init --config admin-apiserver.yml
W0629 07:46:16.046453   86387 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
...

Since this test environment has only one node, we need to untaint the master so pods can be scheduled on it.

kubectl taint node jason-test node-role.kubernetes.io/master-

Once kubeadm has finished and the CNI is set up, we need to edit /etc/kubernetes/manifests/kube-apiserver.yaml; to make the apiserver support audit-dynamic-configuration, the following flags must be added.

    - --audit-dynamic-configuration
    - --feature-gates=DynamicAuditing=true
    - --runtime-config=auditregistration.k8s.io/v1alpha1=true

After the change, we can use kubectl to check whether the auditregistration.k8s.io/v1alpha1 resource is enabled.

kubectl api-resources | grep AuditSink
auditsinks                                     auditregistration.k8s.io       false        AuditSink

With the steps above completed, the Kubernetes audit test environment is ready.

Log Backend

In fact, the steps above already configured the log backend; the location is audit-log-path: /home/ubuntu/audit.log, and whenever audit is triggered the record is written there. Let's look at what Kubernetes stores once this is set up.

tail -f audit.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"caa00b7a-e564-486c-837f-219eade633dd","stage":"RequestReceived","requestURI":"/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s","verb":"get","user":{"username":"system:kube-controller-manager","groups":["system:authenticated"]},"sourceIPs":["10.0.2.4"],"userAgent":"kube-controller-manager/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election","objectRef":{"resource":"leases","namespace":"kube-system","name":"kube-controller-manager","apiGroup":"coordination.k8s.io","apiVersion":"v1"},"requestReceivedTimestamp":"2020-07-04T15:19:21.083898Z","stageTimestamp":"2020-07-04T15:19:21.083898Z"}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"caa00b7a-e564-486c-837f-219eade633dd","stage":"ResponseComplete","requestURI":"/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s","verb":"get","user":{"username":"system:kube-controller-manager","groups":["system:authenticated"]},"sourceIPs":["10.0.2.4"],"userAgent":"kube-controller-manager/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election","objectRef":{"resource":"leases","namespace":"kube-system","name":"kube-controller-manager","apiGroup":"coordination.k8s.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-07-04T15:19:21.083898Z","stageTimestamp":"2020-07-04T15:19:21.085030Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:kube-controller-manager\" of ClusterRole \"system:kube-controller-manager\" to User \"system:kube-controller-manager\""}}
...

You will see many audit records here; the fields to pay attention to are requestURI, verb, user, sourceIPs, and userAgent.

I'll run a simple kubectl command and check whether audit intercepted the request.

kubectl get pod 

Then look at the audit log to see whether this operation was recorded.

cat /home/ubuntu/audit.log | grep 

{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"0595bb26-d8a2-4b59-b3d1-7d538c2e131f","stage":"RequestReceived","requestURI":"/api/v1/namespaces/default/pods?limit=500","verb":"list","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.0.2.4"],"userAgent":"kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8","objectRef":{"resource":"pods","namespace":"default","apiVersion":"v1"},"requestReceivedTimestamp":"2020-07-04T15:26:20.895927Z","stageTimestamp":"2020-07-04T15:26:20.895927Z"}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"0595bb26-d8a2-4b59-b3d1-7d538c2e131f","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/default/pods?limit=500","verb":"list","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.0.2.4"],"userAgent":"kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8","objectRef":{"resource":"pods","namespace":"default","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-07-04T15:26:20.895927Z","stageTimestamp":"2020-07-04T15:26:20.897187Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}

Pulling the key fields out of the audit log gives us:

  1. "requestURI":"/api/v1/namespaces/default/pods?limit=500"
  2. "verb":"list"
  3. "user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]}
  4. "sourceIPs":["10.0.2.4"]
  5. "userAgent":"kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8"

We can clearly see that the user kubernetes-admin performed this operation: the request was /api/v1/namespaces/default/pods?limit=500, the verb was list, and it came from kubectl at 10.0.2.4.

From these fields we can infer that a user at 10.0.2.4 used kubectl to run get pod --namespace default.
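Since audit.log stores one JSON-encoded Event per line, the same analysis can be scripted. A small sketch in Python (`summarize` is a hypothetical helper of mine, not part of Kubernetes):

```python
import json

def summarize(line):
    """Extract the who / what / where fields from one audit.log line."""
    ev = json.loads(line)
    return {
        "user": ev["user"]["username"],
        "verb": ev["verb"],
        "requestURI": ev["requestURI"],
        "sourceIPs": ev.get("sourceIPs", []),
        "userAgent": ev.get("userAgent", ""),
    }

# A trimmed-down event like the ones shown above.
sample = ('{"kind":"Event","stage":"ResponseComplete",'
          '"requestURI":"/api/v1/namespaces/default/pods?limit=500",'
          '"verb":"list","user":{"username":"kubernetes-admin",'
          '"groups":["system:masters","system:authenticated"]},'
          '"sourceIPs":["10.0.2.4"],'
          '"userAgent":"kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8"}')

print(summarize(sample))
```

Running it over the real file is then just a loop over `open('/home/ubuntu/audit.log')`.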

Dynamic Backend

Next let's see what the dynamic backend is about: through the AuditSink API we can configure a corresponding webhook that preprocesses the data before sending it somewhere else.

Kubernetes does not set up the AuditSink API for you by default; it has to be enabled manually. We already enabled it in an earlier section, so let's look back at what was configured.

  1. --audit-dynamic-configuration
  2. --feature-gates=DynamicAuditing=true
  3. --runtime-config=auditregistration.k8s.io/v1alpha1=true

These three settings can be seen in /etc/kubernetes/manifests/kube-apiserver.yaml; if you haven't added them yet, now is the time.

After confirming the configuration, we can use kubectl to check whether the resource is enabled.

kubectl api-resources | grep AuditSink
auditsinks                                     auditregistration.k8s.io       false        AuditSink

Next we deploy a webhook server so Kubernetes audit can send data up to it. Since building the webhook requires writing code and setting up a CA, I'm providing the repo from my experiment for you to try.

It's very easy to use: just run kubectl apply -f pod.yml -f service.yml and the webhook is up.

k apply -f pod.yml -f service.yml
pod/webhook created
service/admissionwebhook created
...

k get pod,svc
NAME          READY   STATUS    RESTARTS   AGE
pod/webhook   1/1     Running   0          7s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/admissionwebhook   ClusterIP   10.97.173.35   <none>        443/TCP   7s
service/kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP   2d14h

With the webhook pod up, we need to create an AuditSink object so audit events send their data to this webhook.

kubectl apply -f auditSink.yml
auditsink.auditregistration.k8s.io/mysink created

This auditSink.yml describes which audit events to capture and where to send the data.

apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: mysink
spec:
  policy:
    level: Metadata
    stages:
    - RequestReceived
  webhook:
    clientConfig:
      service:
        namespace: default
        name: admissionwebhook
        path: /sink
        port: 443
      caBundle: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2RENDQWFRQ0NRRHVXMXNYUVhqUjdEQU5CZ2txaGtpRzl3MEJBUXNGQURBZk1SMHdHd1lEVlFRRERCUkIKWkcxcGMzTnBiMjRnVjJWaWFHOXZheUJEUVRBZ0Z3MHlNREEzTURjd05UQTNOVEJhR0E4ek1ERTVNVEV3T0RBMQpNRGMxTUZvd0h6RWRNQnNHQTFVRUF3d1VRV1J0YVhOemFXOXVJRmRsWW1odmIyc2dRMEV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUMvWjZ1U1UrNlgrR3htN0NuYXR0RXBvUTN0VjJIb01TNGcKeCtGdEV4Q1Nqb0FjTk5wdHI1cjdTVjVIck1zS09wNDhrZFQ5NVI0YytWZUFnOGlqbVdpSEdQSXFyb1dIRys5cQpibzhoM0FZWk9MeWxQVjBUVWVSb0hQSkRmRmZsV054WlFnRHRXa2p0c2VRUVdsVisyVE5KVXF1cGtBMUMvZTJWCnVEckVkRzZTRFZaWDZMYTdTd3ViOXA4UnVaY21TMURrWlB3bFBmMEt2UVp5UHlMUXl4TVRjeVdJM3ZnNzlENWsKcTJPNFB0N080bm9MM0YzRGRyMHU3aGZwQjlJUUFUelZnTWRvQjNPRmV1NjRTZVRydmdmeG1XK1FZNFA0Z05aRQova21TTnNmaklnT202VnF1eEZHTEpsdUE3WlJoQkdadG1ON1N6dktTb21OZjlhTHJkWERSQWdNQkFBRXdEUVlKCktvWklodmNOQVFFTEJRQURnZ0VCQUtpMDBFRUZBdTFKL1M0ejBpVTVUYlJvOXI1WjRTZk5Ub3ZhS1U4bHA3YTAKaUUwWXRSVUdSMjhkRjRrRE12OXp4dTRCYy82N0ptb3g0SGtZMTFIU0RtOXVUUUs0T1dHMGo5MnIvOGY2RlRZMQpNSk1VNUJ0dy90Ym5hV2ZNWE5Xa0R5TnVhQXA1U3hMV1luODE4OWNqM0Zyd2NIL1VoODl4WFhDQnpzQUNOZGNiClRiN3ZKdnBGdWgyOXpNWUJGNTBPNHNnTi9UMWhNbTk3Y2xBbU9OZ1JoSTQ3cVdDamtlNlJKb2I0MnF6Ri8yK0gKK3k3eWlqQktkU2pQOVJOS3VreXM5VlY4eEN5TlpTSUFGYzM1dldMUDFLUEY1aFVTMWo1c0o2V3JncjdGVDU2ZQpZMC9rdGZkcWpSNXJVbWhzUW9GeHhJMzFHY1hjYktWTFNoRldtS2pMMERvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="

From this point on, any event that triggers audit is sent to the webhook service; we can confirm it in the logs via kubectl.

k log -f webhook
...

I0707 05:47:44.205612       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:4b6cb6dd-456b-4b77-9e63-90634a6d93eb Stage:RequestReceived RequestURI:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s Verb:update User:{Username:system:kube-controller-manager UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-controller-manager/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election ObjectRef:0xc000335980 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:42.212444 +0000 UTC StageTimestamp:2020-07-07 05:47:42.212444 +0000 UTC Annotations:map[]}
I0707 05:47:44.206093       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:3d52b965-d65f-4060-b839-8e5ffe874b7f Stage:RequestReceived RequestURI:/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=492314&timeout=8m47s&timeoutSeconds=527&watch=true Verb:watch User:{Username:system:kube-scheduler UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-scheduler/v1.18.5 (linux/amd64) kubernetes/e6503f8/scheduler ObjectRef:0xc000335a80 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:43.363791 +0000 UTC StageTimestamp:2020-07-07 05:47:43.363791 +0000 UTC Annotations:map[]}
I0707 05:47:44.206530       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:d1a20f9d-0204-43ae-8ab1-82b2c09858ed Stage:RequestReceived RequestURI:/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=492314&timeout=5m20s&timeoutSeconds=320&watch=true Verb:watch User:{Username:system:kube-controller-manager UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-controller-manager/v1.18.5 (linux/amd64) kubernetes/e6503f8/shared-informers ObjectRef:0xc000335b80 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:43.610833 +0000 UTC StageTimestamp:2020-07-07 05:47:43.610833 +0000 UTC Annotations:map[]}
I0707 05:47:44.206956       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:5d48dc35-f089-4e76-a2f4-098e5628b055 Stage:RequestReceived RequestURI:/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s Verb:get User:{Username:system:kube-scheduler UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-scheduler/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election ObjectRef:0xc000335c80 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:44.024424 +0000 UTC StageTimestamp:2020-07-07 05:47:44.024424 +0000 UTC Annotations:map[]}
I0707 05:47:44.207483       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:e2b5f015-d0e0-48ab-85a8-c9a42121cdfd Stage:RequestReceived RequestURI:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s Verb:get User:{Username:system:kube-scheduler UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-scheduler/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election ObjectRef:0xc000335d80 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:44.026864 +0000 UTC StageTimestamp:2020-07-07 05:47:44.026864 +0000 UTC Annotations:map[]}
I0707 05:47:44.207995       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:0070ecce-3119-4e44-850d-5914e45dcde6 Stage:RequestReceived RequestURI:/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s Verb:update User:{Username:system:kube-scheduler UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-scheduler/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election ObjectRef:0xc000335e80 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:44.028928 +0000 UTC StageTimestamp:2020-07-07 05:47:44.028928 +0000 UTC Annotations:map[]}
I0707 05:47:44.208582       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:ed450b92-27cc-449d-95cf-c2822fd0e380 Stage:RequestReceived RequestURI:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s Verb:get User:{Username:system:kube-scheduler UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-scheduler/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election ObjectRef:0xc000335f80 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:44.032618 +0000 UTC StageTimestamp:2020-07-07 05:47:44.032618 +0000 UTC Annotations:map[]}
I0707 05:47:44.209296       1 main.go:34] this event is {TypeMeta:{Kind: APIVersion:} Level:Metadata AuditID:9deb9b72-4a27-4a36-b6f8-74e976aee3cf Stage:RequestReceived RequestURI:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s Verb:update User:{Username:system:kube-scheduler UID: Groups:[system:authenticated] Extra:map[]} ImpersonatedUser:nil SourceIPs:[10.0.2.4] UserAgent:kube-scheduler/v1.18.5 (linux/amd64) kubernetes/e6503f8/leader-election ObjectRef:0xc000126080 ResponseStatus:nil RequestObject:nil ResponseObject:nil RequestReceivedTimestamp:2020-07-07 05:47:44.034645 +0000 UTC StageTimestamp:2020-07-07 05:47:44.034645 +0000 UTC Annotations:map[]}
[GIN] 2020/07/07 - 05:47:44 | 200 |     105.108µs |       10.32.0.1 | POST     "/sink?timeout=30s"
[GIN] 2020/07/07 - 05:47:44 | 200 |      97.207µs |       10.32.0.1 | POST     "/sink?timeout=30s"
[GIN] 2020/07/07 - 05:47:44 | 200 |     150.111µs |       10.32.0.1 | POST     "/sink?timeout=30s"
...

The webhook's log shows that it has received plenty of audit events. We can modify the webhook's code so that incoming data is preprocessed and then forwarded elsewhere, for example to raise alerts.
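As a sketch of that preprocessing idea, the hypothetical filter below (`should_alert` and both sets are names I made up for illustration, not part of the repo) flags destructive verbs against sensitive resources:

```python
# Hypothetical preprocessing step inside the webhook: decide which of
# the received audit events deserve an alert. Each event looks like
# the log entries shown earlier.

ALERT_VERBS = {"delete", "deletecollection"}
ALERT_RESOURCES = {"secrets", "configmaps"}

def should_alert(event):
    """Alert on destructive verbs against sensitive resources."""
    ref = event.get("objectRef") or {}
    return (event.get("verb") in ALERT_VERBS
            and ref.get("resource") in ALERT_RESOURCES)

events = [
    {"verb": "list", "objectRef": {"resource": "pods"}},
    {"verb": "delete",
     "objectRef": {"resource": "secrets", "name": "db-password"}},
]

alerts = [e for e in events if should_alert(e)]
print(alerts)  # only the secret deletion survives the filter
```

In a real deployment this decision would sit in the handler that receives the apiserver's POSTs, with the alert going to Slack, PagerDuty, or similar.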

Afterword

Most of the solutions I've seen so far simply use the log backend: record the data on the master node, then have logstash or fluentd filter it and push it into Elasticsearch. With audit recording user behavior, you can see what is happening inside the cluster.
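For reference, a minimal fluentd configuration for that pipeline might look roughly like this (the paths, the elasticsearch host, and the k8s-audit prefix are placeholders of mine, and the output stage assumes fluent-plugin-elasticsearch is installed):

```conf
# Tail the audit log; each line is a JSON-encoded Event.
<source>
  @type tail
  path /home/ubuntu/audit.log
  pos_file /var/log/fluentd/audit.log.pos
  tag k8s.audit
  <parse>
    @type json
  </parse>
</source>

# Drop the noisy RequestReceived stage before shipping.
<filter k8s.audit>
  @type grep
  <exclude>
    key stage
    pattern /RequestReceived/
  </exclude>
</filter>

<match k8s.audit>
  @type elasticsearch
  host elasticsearch.example.com
  port 9200
  logstash_format true
  logstash_prefix k8s-audit
</match>
```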


Meng Ze Li
Kubernetes / DevOps / Backend