Chapter 8. Policy

Once a system is constructed on solid foundations, it must be used correctly to maintain its integrity. Building a sea fort to defend an island from pirates is only half the battle; the other half is posting guards to the watchtower and being prepared to defend it at any time.

Like the orders to the fort’s guards, the policies applied to a cluster define the range of behaviors allowed: for example, which security configuration options a pod must use, which storage and network options are permitted, which container images may run, and any other feature of the workloads.

Policies must be synchronized across clusters and cloud (admission controllers, IAM policy, security sidecars, service mesh, seccomp and AppArmor profiles) and enforced. And policies must target workloads, which raises a question of identity. Can we prove the identity of a workload before giving it privileges?

In this chapter we look at what happens when policies are not enforced, how identity for workloads and operators should be managed, and how the Captain would try to engage with potential holes in our defensive walls.

We will first review different types of policies and discuss the out-of-the-box (OOTB) features of Kubernetes in this area. Then we move on to threat models and common expectations concerning policies, such as auditing. We spend the bulk of the chapter on access control, specifically role-based access control (RBAC), and then investigate the generic handling of policies for Kubernetes, based on projects such as the Open Policy Agent (OPA) and Kyverno.

Defaults

Policy is essential to keeping Kubernetes secure, but by default little is enabled. Configuration mutates with time for most software as new features come out; misconfiguration is a common attack vector, and Kubernetes is no different.

Reusing and extending open source policy configurations for your needs is often safer than rolling your own, and to protect against regressions you should test your infrastructure and security code with tools like conftest before you deploy it; in “Open Policy Agent” we will dive deeper into this topic.
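To give a flavor of what such a test looks like, here is a minimal sketch, assuming an illustrative rule stored in policy/security.rego (conftest’s default policy directory) and a deployment.yaml manifest; the rule itself is made up for this example:

package main

deny[msg] {
  input.kind == "Deployment"
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg := "Deployments must set securityContext.runAsNonRoot"
}

You would then run conftest test deployment.yaml in your pipeline and fail the build on any deny result.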

Figure 8-1 sums up the sentiment nicely. In it, Kubernetes security practitioner Brad Geesaman points out the dangers of not having admission control enabled by default; see also the respective TGIK episode.

Now, what are the defaults that the Captain might be able to exploit, if you’re asleep at the helm?

Figure 8-1. Brad Geesaman sagely reminding us of the dangers of Kubernetes defaults, and the importance of adding admission control

Kubernetes offers out-of-the-box support for some policies, including policies for controlling network traffic, limiting resource usage, and constraining runtime behavior, and most prominently for access control, which we will dive deeper into in “Authentication and Authorization” and “Role-Based Access Control (RBAC)” before we shift our attention to generic policies in “Generic Policy Engines”.

Let’s have a closer look now at the defaults and see what challenges we face.

Access Control Policies

Kubernetes is, concerning authentication and authorization, flexible and extensible. We discuss the details of access control policies in “Authentication and Authorization” and specifically role-based access control (RBAC) in “Role-Based Access Control (RBAC)”.

Now, with the overview on built-in policies in Kubernetes out of the way, what does the threat modeling in the policies space look like? Let’s find out.

Threat Model

The threat model relevant in the context of policies is broad; however, the threats are sometimes subtly hidden within other topics and not explicitly called out. Let’s have a look at some scenarios of past attacks pertinent to the policy space, using examples from the 2016 to 2020 time frame:

  • CVE-2016-5392 describes an attack where the API server (in a multitenant environment) allowed remote authenticated users with knowledge of other project names to obtain sensitive project and user information via vectors related to the watch-cache list.

  • Certain versions of CoreOS Tectonic mount a direct proxy to the cluster at /api/kubernetes/, accessible without authentication and allowing an attacker to connect directly to the API server, as observed in CVE-2018-5256.

  • In CVE-2019-3818, the kube-rbac-proxy container did not honor TLS configurations, allowing for use of insecure ciphers and TLS 1.0. An attacker could target traffic sent over a TLS connection with a weak configuration and potentially break the encryption.

  • In CVE-2019-11245 we see how an attacker could exploit the fact that, with certain kubelet versions, containers that did not specify an explicit runAsUser attempted to run as UID 0 (root) on container restart, or if the image was previously pulled to the node.

  • As per CVE-2019-11247 the Kubernetes API server mistakenly allowed access to a cluster-scoped custom resource if the request was made as if the resource were namespaced. Authorizations for the resource accessed in this manner are enforced using roles and role bindings within the namespace, meaning that a user with access only to a resource in one namespace could create, view, update, or delete the cluster-scoped resource.

  • In CVE-2020-8554 it’s possible for an attacker to person-in-the-middle traffic, which in multitenant environments may mean intercepting traffic to other tenants. The new DenyServiceExternalIPs admission controller was added as a mitigation since there is currently no patch for this issue.

Common Expectations

In the following sections, we review some common expectations (that is, well-established policy-related situations and methods) and how they are addressed by Kubernetes defaults; where no OOTB functions are available, we point to examples that work on top of Kubernetes.

Auditing

Kubernetes comes with auditing built in. In the API server, each request generates an audit event, which is preprocessed according to a policy that states what is recorded and then written to a backend; currently logfiles and webhooks (which send events to an external HTTP API) are supported.

The configurable audit levels range from None (do not record the event) to RequestResponse (record event metadata as well as request and response bodies).

An example policy to capture events on ConfigMaps may look as follows:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Request
    resources:
    - group: ""
      resources: ["configmaps"]

The OOTB auditing features of Kubernetes are a good starting point, and many security and observability vendors offer additional functionality based on them, be it a more convenient interface or integrations with destinations, including but not limited to the following:

As a good practice, enable auditing and try to find the right balance between verbosity (audit level) and retention period.

Authentication and Authorization

If you consider a Kubernetes cluster, there are different types of resources, both in-cluster (such as a pod or a namespace) as well as out-of-cluster (for example, the load balancer of your cloud provider), that a service may provision. In this section we will dive into the topic of defining and checking the access a person or a program requires to access resources necessary to carry out a task.

In the context of access control, when we say authorization we mean the process of checking the permissions concerning a certain action, for example to create or delete a resource, for a given identity. This identity can represent a human user or a program, which we usually refer to as workload identity. Verifying the identity of a subject, human or machine, is called authentication.

Figure 8-2 shows, on a high level, how the access to resources works in a Kubernetes cluster, covering the authentication and authorization steps.

Kubernetes access control overview
Figure 8-2. Kubernetes access control overview (source: Kubernetes documentation)

The first step in the API server is the authentication of the request via one or more of the configured authentication modules such as client certificates, passwords, or JSON Web Tokens (JWT). If an API server cannot authenticate the request, it rejects it with a 401 HTTP status. However, if the authentication succeeds, the API server moves on to the authorization step.

In this step the API server uses one of the configured authorization modules to determine if the access is allowed; it takes the credentials along with the requested path, resource (pod, service, etc.), and verb (create, get, etc.), and if at least one module grants access, the request is allowed. If the authorization fails, a 403 HTTP status code is returned. The most widely used authorization module nowadays is RBAC (see “Role-Based Access Control (RBAC)”).
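You can ask the API server about such authorization decisions directly with kubectl auth can-i; the namespace and the service account in the following sketch are illustrative:

$ kubectl auth can-i create deployments --namespace dev
yes

$ kubectl auth can-i list secrets \
  --as system:serviceaccount:dev:default
no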

In the following sections, we will first review the defaults Kubernetes has, show how those can be attacked, and subsequently discuss how to monitor and defend against attacks in the access control space.

Human Users

Kubernetes does not consider human users as first-class citizens, in contrast to machines (or applications), which are represented by so-called service accounts (see “Service accounts”). In other words, there are no core Kubernetes resources representing human users in Kubernetes proper.

In practice, organizations oftentimes want to map Kubernetes cluster users to existing user directories such as LDAP servers or Azure Active Directory and ideally provide single sign-on (SSO).

As usual, there are two options available: buy or build. If you’re using the Kubernetes distribution of your cloud provider, check the integrations there. If you’re looking into building out SSO yourself, there are a number of open source tools available that allow you to do this:

In addition, there are more complete open source offerings such as Keycloak, supporting a range of use cases from SSO to policy enforcement.

While humans don’t have a native representation in Kubernetes, your workload does.

Workload Identity

In contrast to human users, workloads such as a deployment owning pods are indeed first-class citizens in Kubernetes.

Service accounts

By default, a service account represents the identity of an app in Kubernetes. A service account is a namespaced resource that can be used in the context of a pod to authenticate your app against the API server. Its canonical form is as follows:

system:serviceaccount:NAMESPACE:NAME

As part of the control plane, three controllers jointly implement the service account automation, that is, managing Secrets and tokens:

For example, the default service account in the kube-system namespace would be referred to as system:serviceaccount:kube-system:default and would look something like the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default 1
  namespace: kube-system 2
secrets:
- name: default-token-v9vsm 3
1

The default service account

2

In the kube-system namespace

3

Using the Secret with the name default-token-v9vsm

We saw that the default service account uses a Secret called default-token-v9vsm, so let’s have a look at it with kubectl -n kube-system get secret default-token-v9vsm -o yaml, which yields the following YAML doc (edited to fit):

apiVersion: v1
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: default
  name: default-token-v9vsm
  namespace: kube-system
type: kubernetes.io/service-account-token
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL...==
  namespace: a3ViZS1zeXN0ZW0=
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWk...==

From within the pod, your application can use the data managed by the control plane components described previously. For example, from inside a container, the volume is available at:

~ $ ls -al /var/run/secrets/kubernetes.io/serviceaccount/
total 4
drwxrwxrwt 3 root root  140 Jun 16 11:31 .
drwxr-xr-x 3 root root 4096 Jun 16 11:31 ..
drwxr-xr-x 2 root root  100 Jun 16 11:31 ..2021_06_16_11_31_31.83035518
lrwxrwxrwx 1 root root   31 Jun 16 11:31 ..data -> ..2021_06_16_11_31_31.83035518
lrwxrwxrwx 1 root root   13 Jun 16 11:31 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root   16 Jun 16 11:31 namespace -> ..data/namespace
lrwxrwxrwx 1 root root   12 Jun 16 11:31 token -> ..data/token

The JWT token that the TokenController created is readily available for you:

~ $ cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImtpZCI6InJTT1E1VDlUX1ROZEpRMmZSWi1aVW0yNWVocEh.
...

Service accounts are regularly used as building blocks and can be combined with other mechanisms, such as projected volumes (discussed in Chapter 6) and the kubelet, for workload identity management.

For example, the EKS feature IAM roles for service accounts demonstrates such a combination in action.
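If you want to see the mechanics without any cloud provider involved, the following pod snippet is a minimal sketch of a projected, audience-bound service account token; the audience value, mount path, and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: main
    image: nginx
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          audience: vault            # intended consumer of the token (illustrative)
          expirationSeconds: 3600    # kubelet refreshes the token before expiry
          path: token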

While handy, the service account does not provide a cryptographically strong workload identity out of the box and hence may not be sufficient for certain use cases.

Cryptographically strong identities

Secure Production Identity Framework for Everyone (SPIFFE) is a Cloud Native Computing Foundation (CNCF) project that establishes identities for your workloads. SPIRE is a production-ready reference implementation of the SPIFFE APIs that allows you to perform node and workload attestation; that is, you can automatically assign cryptographically strong identities to resources like pods.

In SPIFFE, a workload is a program deployed using a specific configuration, defined in the context of a trust domain, such as a Kubernetes cluster. The identity of the workload is in the form of a so-called SPIFFE ID, which comes in the general schema shown as follows:

spiffe://trust-domain/workload-identifier
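For example, with SPIRE’s Kubernetes workload attestation, the SPIFFE ID of the default service account in the yolo namespace of a trust domain named prod.acme.org would conventionally look like the following (the path layout is a registration convention, not something mandated by the specification):

spiffe://prod.acme.org/ns/yolo/sa/default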

An SVID (short for SPIFFE Verifiable Identity Document) is the document, for example an X.509 certificate or a JWT token, with which a workload proves its identity to a caller. The SVID is valid if it has been signed by an authority in the trust domain.

If you are not familiar with SPIFFE and want to read up on it, we recommend having a look at the terminology section of the SPIFFE docs.

With this we’ve reached the end of the general authentication and authorization discussion and focus now on a central topic in Kubernetes security: role-based access control.

Role-Based Access Control (RBAC)

Nowadays, the default mechanism for granting humans and workloads access to resources in Kubernetes is role-based access control (RBAC).

We will first review the defaults, then discuss how to understand RBAC using tools to analyze and visualize the relations, and finally we review attacks in this space.

RBAC Recap

In the context of RBAC we use the following terminology:

Allowed actions of an identity on a given resource are called verbs, which come in two flavors: read-only ones (get, list, and watch) and read-write ones (create, update, patch, delete, and deletecollection). Further, the scope of a role can be cluster-wide or limited to a Kubernetes namespace.

By default, Kubernetes comes with privilege escalation prevention. That is, users can create or update a role only if they already have all the permissions contained in the role.

Last but not least, Kubernetes defines a number of default roles you might want to review before defining your own roles (or use them as starting points).

For example, there’s a default cluster role called edit predefined (note that the output has been cut down to fit):

$ kubectl describe clusterrole edit
Name:         edit
Labels:       kubernetes.io/bootstrapping=rbac-defaults
              rbac.authorization.k8s.io/aggregate-to-admin=true
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources     Non-Resource URLs  Resource Names  Verbs
  ---------     -----------------  --------------  -----
  configmaps    []                 []              [create delete ... watch]
  ...

A Simple RBAC Example

In this section, we have a look at a simple RBAC example: assume you want to give a developer joey the permission to view resources of type deployments in the yolo namespace.

Let’s first create a cluster role called view-deploys that defines the actions allowed for the targeted resources with the following command:

$ kubectl create clusterrole view-deploys \
  --verb=get --verb=list \
  --resource=deployments

The preceding command creates a resource with a YAML representation as shown in the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-deploys
rules:
- apiGroups:
  - apps
  resources: 1
  - deployments
  verbs: 2
  - get
  - list
1

The targeted resources of this cluster role

2

The allowed actions when this cluster role is bound

Next, we equip the targeted principal with the cluster role we created in the previous step. This is achieved by the following command that binds the view-deploys cluster role to the user joey:

$ kubectl create rolebinding assign-perm-view-deploys \
  --clusterrole=view-deploys \
  --user=joey \
  --namespace=yolo

When you execute this command you create a resource with a YAML representation like so:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: assign-perm-view-deploys
  namespace: yolo 1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-deploys 2
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: joey 3
1

The scope of the role binding

2

The cluster role we want to use (bind)

3

The targeted principal (subject) to bind the cluster role to

Now, looking at a bunch of YAML code to determine what the permissions are is usually not the way you want to go. Given the graph nature of RBAC, you usually want some visual representation, something akin to what is depicted in Figure 8-3.

For this case it looks pretty straightforward, but alas the reality is much more complicated and messy. Expect to deal with hundreds of roles, bindings, subjects, and actions across core Kubernetes resources as well as custom resource definitions (CRDs).

So, how can you figure out what’s going on, how can you truly understand the RBAC setup in your cluster? As usual, the answer is: additional software.

Example RBAC graph showing what developer `joey` is allowed to do
Figure 8-3. Example RBAC graph showing what developer joey is allowed to do

Authoring RBAC

According to the least privileges principle, you should only grant exactly the permissions necessary to carry out a specific task. But how do you arrive at the exact permissions? Too few means the task will fail, but too much power can yield a field day for attackers. A good way to go about this is to automate it: let’s have a look at a small but powerful tool called audit2rbac that can generate Kubernetes RBAC roles and role bindings covering API requests made by a user.

As a concrete example we’ll use an EKS cluster running in AWS. First, install awslogs and also audit2rbac for your platform.

For the following you need two terminal sessions as we use the first command (awslogs) in a blocking mode.

First, in one terminal session, create the audit log by tailing the CloudWatch output as follows (note, you can also directly pipe into audit2rbac):

$ awslogs get /aws/eks/example/cluster \
  "kube-apiserver-audit*" \
  --no-stream --no-group --watch \
  >> audit-log.json

Now, in another terminal session, execute the kubectl command with the user you want to create the RBAC setting for. In the case shown we’re already logged in as said user, otherwise you can impersonate them with --as.

Let’s say you want to generate the necessary role and binding for listing all the default resources (such as pods, services, etc.) across all namespaces. You would use the following command (note that the output is not shown):

$ kubectl get all -A
...

At this point we should have the audit log in audit-log.json and can use it as an input for audit2rbac as shown in the following. Let’s consume the audit log and create RBAC roles and bindings for a specific user:

$ audit2rbac --user kubernetes-admin \   1
  --filename audit-log.json \ 2
  > list-all.yaml
Opening audit source...
Loading events....
Evaluating API calls...
Generating roles...
Complete!
1

Specify target user for the role binding.

2

Specify the logs to use as an input.

After running the preceding command, the resulting RBAC resources, comprising a cluster role and a cluster role binding that permit the user kubernetes-admin to successfully execute kubectl get all -A, are now available in list-all.yaml (note that the output has been trimmed):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole 1
metadata:
  annotations:
    audit2rbac.liggitt.net/version: v0.8.0
  labels:
    audit2rbac.liggitt.net/generated: "true"
    audit2rbac.liggitt.net/user: kubernetes-admin
  name: audit2rbac:kubernetes-admin
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - replicationcontrollers
  - services
  verbs:
  - get
  - list
  - watch
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 2
metadata:
  annotations:
    audit2rbac.liggitt.net/version: v0.8.0
  labels:
    audit2rbac.liggitt.net/generated: "true"
    audit2rbac.liggitt.net/user: kubernetes-admin
  name: audit2rbac:kubernetes-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: audit2rbac:kubernetes-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes-admin
1

The generated cluster role allowing you to list the default resources across all namespaces

2

The binding, giving the user kubernetes-admin the permissions

Tip

There’s also a krew plug-in called who-can that lets you quickly see which subjects are allowed to perform a given action on a resource.
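For example, to see which subjects may delete deployments in the yolo namespace (names are, as before, illustrative), you would run:

$ kubectl who-can delete deployments --namespace yolo

The output lists the (cluster) role bindings and subjects granting that permission.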

That was some (automagic) entertainment, was it not? Automating the creation of the roles helps you in enforcing least privileges as otherwise the temptation to simply “give access to everything to make it work” is indeed a big one, playing into the hands of the Captain and their greedy crew.

Next up: how to read and understand RBAC in a scalable manner.

Analyzing and Visualizing RBAC

Given their nature, with RBAC you end up with a huge forest of directed acyclic graphs (DAGs), including the subjects, roles, their bindings, and actions. Trying to manually comprehend the connections is almost impossible, so you want to either visualize the graphs and/or use tooling to query for specific paths.

Tip

To address the challenge of discovering RBAC tooling and good practices, we maintain rbac.dev, open to suggestions for additions via issues and pull requests.

For example, let’s assume you would like to perform a static analysis on your RBAC setup. You could consider using krane, a tool that identifies potential security risks and also makes suggestions on how to mitigate those.

To demonstrate RBAC visualization in action, let’s walk through two examples.

The first example to visualize RBAC is a krew plug-in called rbac-view (Figure 8-4) that you can run as follows:

$ kubectl rbac-view
INFO[0000] Getting K8s client
INFO[0000] serving RBAC View and http://localhost:8800
INFO[0010] Building full matrix for json
INFO[0010] Building Matrix for Roles
INFO[0010] Retrieving RoleBindings
INFO[0010] Building Matrix for ClusterRoles
...
Screen shot of the `rbac-view` web interface in action
Figure 8-4. Screenshot of the rbac-view web interface in action

Then you open the link provided, here http://localhost:8800, in a browser and can interactively view and query roles.

The second example is a CLI tool called rback, invented and codeveloped by one of the authors. rback queries RBAC-related information and generates a graph representation of service accounts, (cluster) roles, and the access rules in dot format:

$ kubectl get sa,roles,rolebindings,clusterroles,clusterrolebindings \ 1
  --all-namespaces \ 2
  -o json |
  rback | 3
  dot -Tpng  > rback-output.png 4
1

List the resources to include in the graph.

2

Set the scope (in our case: cluster-wide).

3

Feed the resulting JSON into rback via stdin.

4

Feed the rback output in dot format to the dot program to generate the image rback-output.png.

If you do have dot installed, you will find the output in the file called rback-output.png, which will look something like what is shown in Figure 8-5.

Output of running `rback` against an EKS cluster
Figure 8-5. Output of running rback against an EKS cluster

RBAC-Related Attacks

There are not that many RBAC-related attacks found in the wild, as indicated by CVEs. The basic patterns include:

With the RBAC fun wrapped up, let’s now move on to the topic of generic policy handling and engines for said purpose. The basic idea is that, rather than hardcoding certain policy types and making them part of Kubernetes proper, you have a generic way to define policies and enforce them using one of the many Kubernetes extension mechanisms.

Generic Policy Engines

Let’s discuss generic policy engines that can be used in the context of Kubernetes to define and enforce any kind of policy, from organizational to regulatory ones.

Open Policy Agent

Open Policy Agent (OPA) is a graduated CNCF project that provides a general-purpose policy engine that unifies policy enforcement. The policies in OPA are represented in a high-level declarative language called Rego. It lets you specify policy as code and offers simple APIs to externalize policy decision-making, that is, to move it out of your own software. As you can see in Figure 8-6, OPA decouples policy decision-making from policy enforcement.

When you need to make a policy decision somewhere in your code (service), you’d use the OPA API to query the policy in question. The OPA server takes the current request data (in JSON format) as well as a policy (in Rego format) as input and computes an answer such as “access allowed” or “here is a list of relevant locations.” Note that the answer is not necessarily a binary one; it depends entirely on the rules and data provided and is computed in a deterministic manner.

Let’s look at a concrete example (one of the examples from the Rego online playground). Imagine you want to make sure that every resource has a costcenter label that starts with cccode-, and if that’s not the case the user receives a message that this is missing and cannot proceed (for example, cannot deploy an app).

OPA concept
Figure 8-6. OPA concept

In Rego, the rule would look something like the following (we will get back to this example in “Gatekeeper” in greater detail):

package prod.k8s.acme.org

deny[msg] { 1
  not input.request.object.metadata.labels.costcenter
  msg := "Every resource must have a costcenter label"
}

deny[msg] { 2
  value := input.request.object.metadata.labels.costcenter
  not startswith(value, "cccode-")
  msg := sprintf("Costcenter code must start with `cccode-`; found `%v`", [value])
}
1

Is the costcenter label present?

2

Does the costcenter label start with a certain prefix?

Now, let’s assume someone does a kubectl apply that causes a pod to be created that does not have a label.

As a result of the kubectl command, the API server generates an AdmissionReview resource, shown in the following as a JSON document:

{
    "kind": "AdmissionReview",
    "request": {
        "kind": {
            "kind": "Pod",
            "version": "v1"
        },
        "object": {
            "metadata": {
                "name": "myapp"
            },
            "spec": {
                "containers": [
                    {
                        "image": "nginx",
                        "name": "nginx-frontend"
                    },
                    {
                        "image": "mysql",
                        "name": "mysql-backend"
                    }
                ]
            }
        }
    }
}

With the preceding input, the OPA engine would compute the following output, which in turn would be, for example, fed back by the API server to kubectl and shown to the user on the command line:

{
    "deny": [
        "Every resource must have a costcenter label"
    ]
}
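By the way, you do not need Kubernetes in the loop to obtain this decision. If you run OPA as a server (opa run --server) with the preceding policy loaded and store the AdmissionReview document in a file called input.json, a sketch of the same query against OPA’s REST data API on the default port looks like this:

$ curl -s -X POST http://localhost:8181/v1/data/prod/k8s/acme/org \
  -H 'Content-Type: application/json' \
  -d "{\"input\": $(cat input.json)}"
{"result": {"deny": ["Every resource must have a costcenter label"]}}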

Now, how to rectify the situation and make it work? Simply add a label:

"metadata": {
                "name": "myapp",
                "labels": {
                    "costcenter": "cccode-HQ"
                 }
            },

This should go without saying, but it is always a good idea to test your policies before you deploy them.
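One lightweight way to do that is OPA’s built-in test runner: you put rules whose names start with test_ next to your policy and run opa test. A minimal sketch for the costcenter policy, stored in a hypothetical cc-policy_test.rego, could look as follows:

package prod.k8s.acme.org

test_deny_missing_costcenter {
  deny["Every resource must have a costcenter label"] with input as {
    "request": {"object": {"metadata": {"name": "myapp"}}}
  }
}

test_allow_valid_costcenter {
  count(deny) == 0 with input as {
    "request": {"object": {"metadata": {"name": "myapp",
      "labels": {"costcenter": "cccode-HQ"}}}}
  }
}

Running opa test cc-policy.rego cc-policy_test.rego should then report PASS: 2/2.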

Rego is a little different from what you might be used to, and the best analogue we could come up with is XSLT. If you do decide to adopt Rego, consider internalizing some tips.

Using OPA directly

Using OPA directly on the command line or in the context of an editor is fairly straightforward.

First, let’s see how to evaluate a given input and a policy. You start, as usual, with installing OPA. Given that it’s written in Go, this means a single, self-contained binary.

Next, let’s say we want to use the costcenter example and evaluate it on the command line, assuming you have stored the AdmissionReview resource in a file called input.json and the Rego rules in cc-policy.rego:

$ opa eval \
  --input input.json \ 1
  --data cc-policy.rego \ 2
  --package prod.k8s.acme.org \ 3
  --format pretty 'deny' 4
[
  "Every resource must have a costcenter label"
]
1

Specify the input OPA should use (an AdmissionReview resource).

2

Specify what rules to use (in Rego format).

3

Set the evaluation context.

4

Specify output.
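If you repeat the evaluation with the corrected input from before (the costcenter label added, stored here in a hypothetical input-fixed.json), the deny set comes back empty:

$ opa eval \
  --input input-fixed.json \
  --data cc-policy.rego \
  --package prod.k8s.acme.org \
  --format pretty 'deny'
[]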

That was easy enough! But we can go a step further: how about using OPA/Rego in an editor, for developing new policies?

Interestingly enough, a range of IDEs and editors, from VSCode to vim, are supported (see Figure 8-7).

Screen shot of the Rego plug-in for `vim`
Figure 8-7. Screenshot of the Rego plug-in for vim

In the context of managing OPA policies across a fleet of clusters, you may want to consider evaluating Styra’s Declarative Authorization Service (DAS) offering, an enterprise OPA solution coming with some useful features such as centralized policy management and logging, as well as impact analysis.

Do you have to use Rego directly, though? No, you do not. Let’s discuss alternatives in the context of Kubernetes next.

Gatekeeper

Given that Rego is a DSL and has a learning curve, folks oftentimes wonder if they should use it directly or if there are more Kubernetes-native ways to use OPA. In fact, the Gatekeeper project allows for exactly this.

Tip

If you’re unsure if you should be using Gatekeeper over OPA directly, there are plenty of nice articles available that discuss the topic in greater detail; for example, “Differences Between OPA and Gatekeeper for Kubernetes Admission Control” and “Integrating Open Policy Agent (OPA) With Kubernetes”.

What Gatekeeper does is essentially introduce a separation of concerns: so-called templates represent the policies (encoding the Rego), and as an end user you interface with CRDs that use said templates. An admission controller configured in the API server then takes care of enforcing the policies.

Let’s have a look at how the previous example concerning costcenter labels being required could look with Gatekeeper. We assume that you have installed Gatekeeper already.

First, you create the constraint template, which defines a new CRD called K8sCostcenterLabels, in a file called costcenter_template.yaml:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8scostcenterlabels
spec:
  crd:
    spec:
      names:
        kind: K8sCostcenterLabels
      validation:
        openAPIV3Schema: 1
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package prod.k8s.acme.org

        violation[{"msg": msg}] { 2
          not input.review.object.metadata.labels.costcenter 3
          msg := "Every resource must have a costcenter label"
        }

        violation[{"msg": msg}] { 4
          value := input.review.object.metadata.labels.costcenter
          not startswith(value, "cccode-")
          msg := sprintf("Costcenter code must start with `cccode-`; found `%v`", [value])
        }
1

This defines the schema for the parameters field.

2

This definition checks if the costcenter label is provided or not. Note that each rule contributes individually to the resulting (error) messages.

3

The not keyword in this rule turns an undefined statement into a truthy statement. That is, if any of the keys are missing, this statement is true.

4

In this rule we check if the costcenter label is formatted appropriately. In other words, we require that it must start with cccode-.

Once you have the constraint template defined, you can install it as follows:

$ kubectl apply -f costcenter_template.yaml

To use the costcenter template CRD, you have to define a concrete instance (a custom resource, or CR for short), so put the following in a file called req_cc.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sCostcenterLabels
metadata:
  name: ns-must-have-cc
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]

You then create it using the following command:

$ kubectl apply -f req_cc.yaml

After this command, the Gatekeeper controller knows about the policy and enforces it.

To check that the preceding policy works, try to create a namespace that doesn’t have the label, for example using kubectl apply; you will see an error message containing “Every resource must have a costcenter label” and the resource creation will be denied.
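For a quick smoke test from the command line (the exact wording and formatting of the denial depend on your Gatekeeper version), you could run:

$ kubectl create namespace test-no-cc
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh"
denied the request: [ns-must-have-cc] Every resource must have a costcenter label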

With this you have a basic idea of how Gatekeeper works. Now let’s move on to an alternative way to effectively achieve the same: the CNCF Kyverno project.

Kyverno

Another way to go about managing and enforcing policies is a CNCF project by the name of Kyverno. This project, initiated by Nirmata, is conceptually similar to Gatekeeper. Kyverno works as shown in Figure 8-8: it runs as a dynamic admission controller, supporting both validating and mutating admission webhooks.

Kyverno concept
Figure 8-8. Kyverno concept

So, what’s the difference compared to using Gatekeeper or plain OPA, then? Well, rather than directly or indirectly writing Rego, with Kyverno you define policies declaratively as Kubernetes resources, like the following:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: costcenterlabels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-labels
    match: 1
      resources:
        kinds:
        - Namespace
    validate:
      message: "a label 'costcenter' starting with 'cccode-' is required"
      pattern: 2
        metadata:
          labels:
            costcenter: "cccode-*"
1

Defines what resources to target, in this case namespaces.

2

Defines the expected pattern; if it is not matched, the preceding error message is returned via the webhook to the client.

Does the preceding YAML look familiar? This is our costcenter-labels-are-required example from earlier on.
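To try it out you would apply the policy and then attempt to create an unlabeled namespace; the file name and namespace below are illustrative, and the exact denial message varies between Kyverno versions:

$ kubectl apply -f costcenter_policy.yaml
clusterpolicy.kyverno.io/costcenterlabels created

$ kubectl create namespace test-no-cc
# denied by Kyverno's validating webhook with the message defined in the policy's validate block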

Learn more about getting started from Gaurav Agarwal’s article “Policy as Code on Kubernetes with Kyverno” and watch “Introduction to Kyverno” from David McKay’s excellent Rawkode Live series on YouTube.

Both OPA/Gatekeeper and Kyverno fail open, meaning that if the policy engine service called by the API server webhook is down and hence unable to validate an inbound change, the change will proceed unvalidated. Depending on your requirements this may not be what you want, but the reasoning behind this default is to avoid DoSing your cluster, slowing it down, or potentially bringing down the control plane altogether.
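The fail-open behavior corresponds to failurePolicy: Ignore on the respective webhook configuration; you can inspect it, and switch it to Fail if you prefer to fail closed. The webhook configuration name below is the Gatekeeper default and may differ in your installation, and the output (one value per webhook) is illustrative:

$ kubectl get validatingwebhookconfiguration \
    gatekeeper-validating-webhook-configuration \
    -o jsonpath='{.webhooks[*].failurePolicy}'
Ignore Ignore

Keep in mind that failing closed means resource admission now depends on the policy engine’s availability.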

Both have auditing functionalities as well as a scanning mode that addresses this situation. For a more fine-grained comparison, we recommend you peruse Chip Zoller’s blog post “Kubernetes Policy Comparison: OPA/Gatekeeper vs. Kyverno”.

Let’s now have a further look at other options in this space.

Other Policy Offerings

In this last section on handling policies for and in Kubernetes we review some projects and offerings that you may want to consider using in addition to or as an alternative to the previously discussed ones.

Given that a Kubernetes cluster doesn’t operate in a vacuum but in a certain environment (in the case of managed offerings, the cloud provider of your choice), you may indeed already be using some of the following:

OSO

This is a library for building authorization into your application. It comes with a set of APIs built on top of a declarative policy language called Polar, as well as a debugger and a REPL. With OSO you can express policies like “these types of users can see these sorts of information,” as well as implement role-based access control in your app.

Cilium policy and Calico policy

These extend the functionalities of Kubernetes network policies.

AWS Identity and Access Management (IAM)

This has a range of policies, from identity-based to resource-based to organization-level policies. There are also more specialized offerings; for example, in the context of Amazon EKS, you can define security groups for pods.

Google Identity and Access Management (IAM)

This has a rich and powerful policy model, similar to Kubernetes.

Azure Policy

This allows you to state business-level policies; in addition, Azure offers Azure RBAC for access control purposes.

CrossGuard

Developed by Pulumi, this is described as “Policy as Code,” letting you define and enforce guardrails across cloud providers.

Conclusion

Policy is essential to securing your clusters, and thought is required to map your teams to their groups and roles. Roles that allow transitive access to other service accounts may offer a path to privilege escalation. Also, don’t forget to threat model the impact of credential compromise, and always use 2FA for humans. Last but not least, as usual, automating as much as possible, including policy testing and validation, pays off in the long run.

The wonderful Kubernetes and wider CNCF ecosystem has already provided a wealth of open source solutions, so in our experience it’s usually not a problem to find a tool but to figure out which out of the, say, ten tools available is the best and will still be supported when the Captain’s grandchildren have taken over.

With this we’ve reached the end of the policy chapter and will now turn our attention to the question of what happens if the Captain somehow, despite all of our controls put in place, manages to break in. In other words, we discuss intrusion detection systems (IDS) to detect unexpected activity. Arrrrrrr!