Visualizing and Securing Google Cloud Run for Anthos with Octarine

By Haim Helman 

Cloud Run for Anthos: an Enterprise-Grade Knative Serverless Platform 

Google has announced the general availability of Cloud Run for Anthos, a serverless solution built on the Knative open-source project to enable portability of workloads across platforms. Anthos supports Google Kubernetes Engine (GKE) on Google Cloud, on-premises data centers, and multi-cloud environments. 

Anthos is designed for organizations that embrace Kubernetes for building and running cloud-native workloads and need an enterprise-grade solution for both the public cloud and on-prem. Cloud Run delivers auto-scaling for API-driven microservices across these environments. A core advantage of using Cloud Run on Kubernetes is that it allows organizations to apply the same tools they use for all of their Kubernetes workloads to workloads running on Cloud Run.

Organizations rely on Octarine’s continuous security platform to maintain security and compliance for Kubernetes workloads. Let’s take a look at how Octarine naturally extends protection against threats to Cloud Run workloads in Anthos.

Cloud Run, Istio, and Octarine

Cloud Run relies on Istio’s ingress gateway to route API requests to its ‘Activator’, which in turn forwards them to the relevant workload as defined by the Knative service object. It is possible, but not mandatory, to apply the full Istio service mesh on Knative’s services (Activator, Autoscaler, Controller and Webhook) as well as the workloads themselves.
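To make the flow concrete, the workload itself is described by a Knative Service object. A minimal sketch might look like the following (the service name and sample image here are illustrative, not taken from the deployment described in this post):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld            # illustrative service name
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # Knative sample image
          env:
            - name: TARGET
              value: "Cloud Run for Anthos"
```

When this object is applied, Knative creates the underlying deployment and route; requests arriving at the Istio ingress gateway are forwarded via the Activator to the pods backing this service.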

Octarine takes a service mesh approach to runtime visibility and security for Kubernetes, either by leveraging Istio or by deploying a standalone, lightweight, Envoy-based service mesh. We have found that when using Anthos on GKE, utilizing GCP’s managed Istio is the faster way to go. 

On-prem, where managed Istio is not yet available, Octarine’s standalone service mesh is a quick and easy way to get started with establishing visibility and security that will stay effective if and when Istio is adopted in the cluster later on. 

Securing Cloud Run and Kubernetes in Anthos

As with any platform, the responsibility for securing an Anthos environment is shared between the platform provider (Google Cloud) and the application owners (the user). Anthos, whether running on-prem or in the cloud, is a best-in-class secure platform for running cloud-native applications. It is up to the user to ensure that Kubernetes applications deployed to Anthos are configured in a way that does not introduce weaknesses and new vulnerabilities into that environment.

Since there is no way to guarantee that the application itself cannot be exploited, it is advisable to limit the privileges and network access of workloads to a minimum. The last layer of defense is a network intrusion detection system that looks for anomalies and attack signatures in the cluster’s network traffic. Octarine provides all of these capabilities by introducing a combination of admission controllers and service-mesh-based network security services.
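As one illustration of privilege reduction, a workload’s pod spec can declare a restrictive security context. The sketch below is a generic example (the workload name and image tag are hypothetical), showing the kind of configuration an admission-control policy would typically check for:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cartservice            # illustrative workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cartservice
  template:
    metadata:
      labels:
        app: cartservice
    spec:
      containers:
        - name: server
          image: gcr.io/example/cartservice:latest  # hypothetical image
          securityContext:
            runAsNonRoot: true               # refuse to start as root
            allowPrivilegeEscalation: false  # block setuid-style escalation
            readOnlyRootFilesystem: true     # no writes to the container filesystem
            capabilities:
              drop: ["ALL"]                  # start with zero Linux capabilities
```

A compromised container configured this way has far fewer options for lateral movement or privilege escalation than one running as root with default capabilities.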

Let’s take a look at the “online boutique” application, which we deployed on Cloud Run. The application is composed of several stateless “functions” and one stateful service—the Redis database.

Workload Configuration

We start by examining the workloads that are deployed and look for violations of policy. Octarine uses a predefined policy by default, which covers the security context of pods and containers, service definitions, network policy, and RBAC.

When looking at our workloads we find no violations other than the nginx service, which is exposed via a load balancer. We can either change its service definition or create an exception to our policy:
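If we choose to change the service definition rather than make a policy exception, the fix is to stop exposing the service externally. A sketch of the change (port numbers and selector labels are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP       # was LoadBalancer; the service is no longer reachable from outside
  selector:
    app: nginx          # assumed pod label
  ports:
    - port: 80
      targetPort: 80
```

With `type: ClusterIP`, traffic must enter through the Istio ingress gateway, where it is subject to routing and policy, instead of through a directly provisioned external load balancer.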

Our policy is also set to alert on “exec” API calls which try to run a command in the container. In this case, we see someone (me) ran the “ps” command in a couple of containers:

Network visibility

Now let’s see how our workloads actually behave at runtime.

We can see that there is a lot of traffic between Knative’s activator and autoscaler and our workloads.

Now let’s look specifically at the cart service:

Here, the cart service is receiving HTTP traffic from the autoscaler and activator and sending traffic back to the Redis cart service.

Now that we know what’s going on, we can tighten things up with a network policy.

Network Policy and Service Access Control

One of the advantages of working with Octarine is that we don’t have to figure out what a good network policy should be. We can use Octarine’s policy recommender for that:

Octarine recommends an easy-to-understand “least privilege” network policy definition based on observed activity, without the complexity of Istio RBAC or Kubernetes network policy definitions.
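For readers curious what such a least-privilege policy corresponds to in raw Kubernetes terms, a hand-written equivalent for the cart service might look like the sketch below. The namespace and pod labels (`knative: serving`, `app: activator`, `app: redis-cart`) and port numbers are assumptions for illustration; actual labels depend on the Knative and application versions deployed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cartservice-least-privilege
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: cartservice
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only Knative's activator may call the cart service
    - from:
        - namespaceSelector:
            matchLabels:
              knative: serving        # hypothetical label on the knative-serving namespace
          podSelector:
            matchLabels:
              app: activator
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # The cart service may only talk to its Redis backend
    - to:
        - podSelector:
            matchLabels:
              app: redis-cart
      ports:
        - protocol: TCP
          port: 6379
```

Writing and maintaining such policies by hand for every workload is exactly the kind of toil that a recommendation engine based on observed traffic removes.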

The Bottom Line

Whether you choose to run Anthos on-prem or in the cloud, Cloud Run is probably the simplest and most efficient way to deploy your stateless microservices without worrying about scale or API routing. Even better, it utilizes an open framework, Knative, which in turn uses Kubernetes, the de facto standard in cloud-native orchestration.

This approach makes it possible to apply the same security controls you use for all your cloud-native workloads to this new framework and these new “functions”.

We here at Octarine congratulate Google on the release of Cloud Run for Anthos and are looking forward to helping our customers adopt it, securely!

Contact us today to start securing your serverless applications on Google Anthos and Cloud Run with Octarine!