This page explains how to control traffic flow between Pods and Services at the Pod level by configuring multi-network network policies that apply only to a designated Pod-network.
As a cluster administrator, you can configure multi-network network policies to control traffic flow between Pods and Services by using firewall rules at the Pod level, which enhances network security and traffic control within your cluster.
To understand how multi-network network policies work, see how Network Policies work with Pod-networks.
Requirements
To use multi-network network policies, your environment must meet the following requirements (a verification sketch follows the list):
- Google Cloud CLI version 459 or later.
- You must have a GKE cluster running one of the following versions:
- 1.28.5-gke.1293000 or later
- 1.29.0-gke.1484000 or later
- Your cluster must use GKE Dataplane V2.
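If you already have a cluster, one way to check both the version and the datapath provider is to describe the cluster. This is a sketch, assuming a cluster named CLUSTER_NAME in LOCATION; clusters that use GKE Dataplane V2 report ADVANCED_DATAPATH as the datapath provider:
# Print the control plane version and the datapath provider.
# ADVANCED_DATAPATH indicates that the cluster uses GKE Dataplane V2.
gcloud container clusters describe CLUSTER_NAME \
    --location=LOCATION \
    --format="value(currentMasterVersion, networkConfig.datapathProvider)"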
Limitations
FQDN network policies and CiliumClusterwide network policies are not supported: if you use either policy type on a Pod connected to multiple networks, the policy affects all of the Pod's connections, not just the networks that the policy references.
Pricing
The following Network Function Optimizer (NFO) features are supported only on clusters in projects that have GKE Enterprise enabled:
- Multi-network support for Pods
- Persistent IP addresses support for Pods (Preview)
- Multi-network network policies (Preview)
- Service Steering support for Pods (Preview)
To understand the charges that apply for enabling Google Kubernetes Engine (GKE) Enterprise edition, see GKE Enterprise Pricing.
Configure multi-network network policies
To use multi-network network policies, do the following (a sketch of these steps follows the list):
- Create a GKE cluster with multi-network enabled.
- Create a node pool and a Pod-network.
- Reference the Pod-network in your workload.
- Create a network policy that references the same Pod-network used by the workload.
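The following sketch outlines those steps end to end. All of the names in it (CLUSTER_NAME, POOL_NAME, LOCATION, blue-vpc, blue-subnet, blue-pod-range, and the sample Pod) are illustrative assumptions rather than values this page defines; substitute your own, and treat this as an outline rather than the complete multi-network setup procedure.
# Step 1: create a cluster with GKE Dataplane V2 and multi-networking enabled.
gcloud container clusters create CLUSTER_NAME \
    --location=LOCATION \
    --enable-dataplane-v2 \
    --enable-multi-networking

# Step 2: create a node pool attached to an additional node network and Pod network.
# The VPC "blue-vpc", its subnet "blue-subnet", and the secondary range
# "blue-pod-range" are assumed to exist already.
gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --additional-node-network network=blue-vpc,subnetwork=blue-subnet \
    --additional-pod-network subnetwork=blue-subnet,pod-ipv4-range=blue-pod-range,max-pods-per-node=32
Inside the cluster, a Network object and its GKENetworkParamSet expose that subnet as the Pod-network that the policies on this page call blue-pod-network, and a workload references it through Pod annotations:
apiVersion: networking.gke.io/v1
kind: Network
metadata:
  name: blue-pod-network
spec:
  type: "L3"
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: blue-params
---
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
metadata:
  name: blue-params
spec:
  vpc: blue-vpc
  vpcSubnet: blue-subnet
  podIPv4Ranges:
    rangeNames:
    - blue-pod-range
---
# A sample workload that attaches a second interface on blue-pod-network.
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  labels:
    app: test-app-2
  annotations:
    networking.gke.io/default-interface: 'eth0'
    networking.gke.io/interfaces: |
      [
        {"interfaceName":"eth0","network":"default"},
        {"interfaceName":"eth1","network":"blue-pod-network"}
      ]
spec:
  containers:
  - name: sample-container
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0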
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Create a network policy
To create a network policy that enforces rules on the same Pod-network as your workload, reference the specific Pod-network in the network policy definition.
To define ingress traffic rules and select target Pods based on labels or other selectors, create a standard Kubernetes network policy.
Save the following sample manifest as sample-ingress-network-policy1.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network # GKE-specific annotation for network selection
spec:
  podSelector:
    matchLabels:
      app: test-app-2 # Selects Pods with the label "app: test-app-2"
  policyTypes:
  - Ingress # Specifies that the policy applies only to incoming traffic
  ingress:
  - from: # Allow incoming traffic only from...
    - podSelector:
        matchLabels:
          app: test-app-1 # ...Pods with the label "app: test-app-1"
Apply the sample-ingress-network-policy1.yaml manifest:
kubectl apply -f sample-ingress-network-policy1.yaml
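To confirm that the policy exists and carries the networking.gke.io/network annotation, you can describe it; the command assumes the default namespace used in the manifest:
# Show the policy's selectors, rules, and annotations.
kubectl describe networkpolicy sample-network-policy --namespace default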
To define egress traffic rules and select target Pods based on labels or other selectors, create a standard Kubernetes network policy.
Save the following sample manifest as sample-egress-network-policy2.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sample-network-policy-2
  namespace: default
  annotations:
    networking.gke.io/network: blue-pod-network # GKE-specific annotation (optional)
spec:
  podSelector:
    matchLabels:
      app: test-app-2
  policyTypes:
  - Egress # Applies only to outgoing traffic
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: test-app-3
Apply the sample-egress-network-policy2.yaml manifest:
kubectl apply -f sample-egress-network-policy2.yaml
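As a quick check that the policy targets the intended Pod-network, you can read the annotation back with a jsonpath query; note that the dots in the annotation key must be escaped:
# Prints "blue-pod-network" if the annotation was applied.
kubectl get networkpolicy sample-network-policy-2 --namespace default \
    -o jsonpath='{.metadata.annotations.networking\.gke\.io/network}'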
Troubleshoot multi-network network policies
If you experience issues with network policies, whether or not they apply to specific Pod-networks, you can diagnose and troubleshoot the problem by running the following commands:
- kubectl get networkpolicy: lists all network policy objects and information about them.
- iptables-save: retrieves and lists all iptables chains for a particular node. You must run this command on the node as root.
- cilium bpf policy get <endpoint-id>: retrieves and lists allowed IP addresses from each endpoint's policy map.
- cilium policy selectors: prints the identities and the associated policies that have selected them.
- cilium identity list: shows mappings from identity to IP address.
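The cilium commands above run inside the per-node GKE Dataplane V2 agent Pods (named anetd) in the kube-system namespace, not on your workstation. A minimal sketch, assuming an agent Pod name of ANETD_POD_NAME:
# Find the Dataplane V2 agent Pod on the node you want to inspect.
kubectl get pods --namespace kube-system -o wide | grep anetd

# Run the cilium commands from inside that Pod, for example:
kubectl exec --namespace kube-system ANETD_POD_NAME -- cilium policy selectors
kubectl exec --namespace kube-system ANETD_POD_NAME -- cilium identity list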