One of the interesting aspects of moving to a top-down, application-centric way of working is rethinking how we network. Just as the application model first abstracted away physical infrastructure with virtualization and now uses Kubernetes and similar orchestration tools to abstract the underlying virtual machines, networking is shifting from general-purpose routed protocol stacks to software-driven networks that use common protocols to implement application-specific networking functions.
We can see how networking is evolving in Windows Server 2022's introduction of SMB over QUIC as an alternative to conventional VPNs for file sharing between on-premises Azure Stack systems and the Azure public cloud. Similarly, in Kubernetes, technologies such as service mesh provide an application-defined network model that delivers networking with your distributed application as part of the application definition, rather than as a network that merely hosts an application.
A new network layer: application-defined networking
This application-driven network is a logical extension of much of the software-defined networking model underlying the public cloud. However, instead of requiring in-depth knowledge of networking and, most importantly, networking hardware, it is a shift to a higher-level approach where a network is deployed automatically from the intents expressed in policies and rules. This shift away from both the virtual and the physical is essential when working with dynamically self-orchestrating applications that scale up and down as needed, with instances in multiple regions and geographies all part of the same application.
It’s still early days for application-driven networking, but we’re already seeing tools appear in Azure as part of the Kubernetes deployment. One option is, of course, the Open Service Mesh, but there is another set of tools that helps manage the network security of our Kubernetes applications: Network Policy. It manages connectivity between the different components of a Kubernetes application and handles traffic flow between pods.
Network policies in Azure Kubernetes Service
AKS (Azure Kubernetes Service) provides support for network policies through two routes: its own native tool or the community-developed Calico. This second option is perhaps the most interesting, as it gives you a cross-cloud tool that can work not only with AKS, but also your own on-premises Kubernetes, Red Hat’s OpenShift, and many other Kubernetes implementations.
Calico is managed by Tigera, a Kubernetes security and management company. It is an open source implementation of the Kubernetes network policy specification that handles connectivity between workloads and enforces security policies on those connections, adding its own extensions to the basic Kubernetes functions. It is designed to work with a variety of data planes, from eBPF on Linux to Windows Host Networking. This approach makes it ideal for Azure, which offers Kubernetes support for both Linux and Windows containers.
Setting network policies in AKS is important. By default, all pods can send traffic anywhere, and any pod can receive traffic from any other pod. While this isn’t inherently insecure, it does open your cluster to the possibility of compromise: pods hosting back-end services are reachable by anything else running in the cluster. By implementing a network policy, you can ensure that those back-end services are accessible only by your front-end systems, reducing risk by controlling traffic.
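As a sketch of what closing that default-open posture looks like, here is a minimal Kubernetes network policy (the namespace name is illustrative) that denies all ingress traffic to every pod in a namespace, forcing each allowed flow to be opened explicitly:

```yaml
# Default-deny sketch: selects every pod in the namespace
# and allows no ingress traffic at all.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app        # illustrative namespace
spec:
  podSelector: {}          # an empty selector matches all pods
  policyTypes:
    - Ingress              # no ingress rules listed, so all ingress is denied
```

With this in place, traffic only flows where a further policy explicitly allows it.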
Whether you use the native service or Calico, AKS network policies are YAML documents that define the rules used to route traffic between pods. You can make these policies part of the global manifest for your application, defining your network with your application definition. This allows the network to scale with the application, adding or removing pods as AKS responds to load changes (or if you use it with KEDA [Kubernetes-based Event-Driven Autoscaling], as your application responds to events).
Using Calico in Azure Kubernetes Service
Choose your network policy tool when you create a cluster; you cannot change it once the cluster is deployed. There are differences between the native AKS implementation and the Calico support. Both implement the Kubernetes specification and both run on Linux AKS clusters, but only Calico supports Windows containers. It is important to note that while Calico will work in AKS, there is no official Azure support for Calico beyond the existing community options.
Getting started with Calico in AKS is relatively easy. First, create an AKS cluster that uses the Azure Container Networking plugin, which can host both the native AKS Network Policy and Calico. Then set up your virtual network with all the subnets you want to use. With that in place, all you need to do is use the Azure command line to create an AKS cluster, setting your network policy to ‘calico’ instead of ‘azure’. This enables Calico support on both Linux and Windows node pools. If you are using Windows, make sure to register Calico support using the EnableAKSWindowsCalico feature flag via the Azure CLI.
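The CLI steps above might look something like the following sketch; the resource group and cluster names are illustrative placeholders, and the commands assume a subscription where the feature flag is available:

```shell
# Register the Windows Calico feature flag
# (only needed if you plan to run Windows node pools).
az feature register --namespace Microsoft.ContainerService \
  --name EnableAKSWindowsCalico

# Create the cluster with Azure CNI networking and the
# Calico network policy engine instead of the native 'azure' option.
az aks create \
  --resource-group my-rg \
  --name my-aks-cluster \
  --network-plugin azure \
  --network-policy calico
```

Because the policy engine is fixed at creation time, the --network-policy choice here is the one the cluster keeps for its lifetime.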
The Calico team recommends installing the calicoctl management tool for your cluster. There are several installation options: run it as a binary on Windows or Linux, or deploy it as a Kubernetes pod in your cluster. The latter option is probably best for working with AKS, as you can then mix and match Windows and Linux pods in your cluster and manage both from the same Kubernetes environment.
Building and implementing Calico network policies
You create Calico network policies in YAML, setting policies for pods with specific roles. These roles are applied as pod labels when you create the pod, and your rules use a selector to associate a policy with the pods that match your app and role labels. After you create a policy, use kubectl to apply it to your cluster.
Rules are easy enough to define. You can set ingress policies so that specific pods only receive traffic from another set of pods that match a different selector pattern. This way you can ensure, for example, that your application’s back end only receives traffic from your front end, and that your data service responds only when it’s addressed by your back end. The resulting simple set of ingress rules provides isolation between application tiers as part of your application definition. Other options let you define rules for namespaces and roles to ensure separation between production and test pods.
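A sketch of such a tier-isolation rule, using the standard Kubernetes network policy API that both the native AKS engine and Calico enforce (the app and role label values are illustrative):

```yaml
# Hypothetical policy: pods labeled role=backend accept ingress
# only from pods labeled role=frontend in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:             # the pods this policy protects
    matchLabels:
      app: my-app          # illustrative label values
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:     # the only pods allowed to connect
            matchLabels:
              app: my-app
              role: frontend
```

Applied with kubectl, this means any pod without the frontend role, compromised or otherwise, can no longer reach the back end.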
Calico gives you granular control over your application’s network policies. You can manage ports, specific application endpoints, protocols, and even IP versions. A policy can be applied to a specific namespace or globally across your Kubernetes instance. Rules cover both inbound and outbound traffic, allowing you to control the flow in and out of your pods, with policies denying all traffic except what is specifically allowed. With Calico, there is enough flexibility to quickly build complex network security models with a handful of simple YAML files. Just create the YAML you need and use calicoctl to apply your rules.
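As an example of Calico’s extensions beyond the standard Kubernetes API, a cluster-wide default-deny rule might look something like the following sketch, which uses Calico’s GlobalNetworkPolicy resource; exempting the kube-system namespace is an assumption you would tune for your own cluster so that DNS and other cluster services keep working:

```yaml
# Hypothetical Calico global policy: deny all ingress and egress
# across the cluster, except for pods in the kube-system namespace.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all
spec:
  # Calico selector syntax; applies everywhere except kube-system.
  namespaceSelector: kubernetes.io/metadata.name != "kube-system"
  types:
    - Ingress
    - Egress
```

Unlike a namespaced Kubernetes NetworkPolicy, this resource is applied with calicoctl and takes effect across the whole cluster.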
Application-driven networking is an important concept that allows application development teams to determine how their code interacts with the underlying network. Like storage and, thanks to tools such as Kubernetes, compute, the network becomes a substrate that can be managed at the connection level. Network teams no longer need to configure application networks; all they have to do is help define virtual networks and then leave the application-level policies to the application.
If we want to build flexible, modern applications, we need to take advantage of tools like Calico, which make our networks as portable, flexible, and scalable as our code. It may be a change in how we think about networking, but it’s an essential one if we’re to support modern application infrastructures.
Copyright © 2022 IDG Communications, Inc.