This tutorial shows how to apply privately used public IP (PUPI) addresses to Google Kubernetes Engine (GKE) Pod address blocks.
Privately used public IP (PUPI) addresses are addresses that you can use privately within your Google Cloud Virtual Private Cloud (VPC) network. These IP addresses are not owned by Google; you don't need to own them to use them privately.
Overview
GKE clusters require dedicated IP address ranges for nodes, Pods, and Services. As your infrastructure grows, you might face exhaustion of the standard internal IP address space (RFC 1918). One way to mitigate private RFC 1918 address exhaustion is to use privately used public IP (PUPI) addresses for the GKE Pod CIDR block. PUPIs provide an alternative for your GKE Pod network, reserving private IP addresses for other cluster components.
- Single cluster: If you're creating only one GKE cluster, you can enable PUPIs by enabling privately used external IP address ranges.
- Multiple clusters: If you're working with multiple GKE clusters connected through VPC peering (a common scenario for service providers), you need a more complex configuration. The following diagram shows an example of how a company (producer) offers a managed service using PUPIs with VPC peering to a customer (consumer).
The preceding diagram involves the following considerations:
- Primary CIDR block: A non-PUPI CIDR block that is used for nodes and internal load balancers (ILBs), and that must be non-overlapping across VPCs.
- Producer secondary CIDR block: A PUPI CIDR block that is used for Pods (for example, 45.45.0.0/16).
- Consumer secondary CIDR block: Any other PUPI CIDR block on the customer side (for example, 5.5.0.0/16).
How PUPIs are used in a service provider scenario
The service provider (producer) runs their managed service on a GKE cluster (gke-2) within their VPC (vpc-producer). This cluster uses the PUPI range 45.0.0.0/8 for its Pod IP addresses.
The customer (consumer) also has a GKE cluster (gke-1) in their own VPC (vpc-consumer), using a different PUPI range, 5.0.0.0/8, for its Pod IP addresses.
These two VPCs are connected using VPC peering, but each continues to use standard private IP addresses (RFC 1918) for nodes, services, and internal load balancers.
Ensure communication between VPCs
- Consumer to Producer: By default, the consumer's VPC automatically shares its RFC 1918 routes (but not PUPIs) with the producer. This allows resources in the consumer's VPC to access services in the producer's VPC (typically through internal load balancers).
- Producer to Consumer: For the producer's Pods to reach resources in the consumer's VPC, explicit configuration is needed.
- No Overlap: The producer and consumer must ensure that the consumer's PUPI range doesn't conflict with any IP addresses used in the producer's VPC.
- Export/Import: The producer must enable export of its PUPI routes, and the consumer must enable import of those routes over the peering connection.
Enable communication when PUPI ranges overlap
If the consumer's VPC already uses the same PUPI range as the producer, direct communication from producer Pods isn't possible. Instead, the producer can enable IP address masquerading, where the Pod IP addresses are hidden behind the producer's node IP addresses.
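On Standard clusters, this masquerading behavior is typically governed by the cluster's ip-masq-agent configuration. The following is a minimal sketch that assumes ip-masq-agent runs in the kube-system namespace; the CIDR listed under nonMasqueradeCIDRs is an example value, not a prescribed setting:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Destinations listed here are NOT masqueraded; traffic to any other
    # destination (for example, the peered consumer VPC) is SNATed behind
    # the node IP address.
    nonMasqueradeCIDRs:
    - 45.45.0.0/16   # example producer Pod (PUPI) range
    masqLinkLocal: false
    resyncInterval: 60s
EOF
The prerequisites section later in this document describes when this translation is required.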
The following table shows the default import and export settings for each VPC. You can modify the default VPC peering settings by using the gcloud compute networks peerings update command.
VPC Network | Import settings | Export settings | Notes
---|---|---|---
Producer | Default behavior (turned off): Does not import subnet routes with public IP addresses | Default behavior (turned on): Exports subnet routes with public IP addresses | Flags controlled through service networking.
Consumer | Turned off (default) | Turned on (default) | Typically managed by the customer; not required to be modified through service networking.
These settings result in the following:
- The producer VPC sees all the customer routes.
- The consumer VPC doesn't see the PUPI routes configured on the Pod subnet in the producer VPC.
- Traffic originating from the producer Pods to the vpc-consumer network must be translated behind the node addresses in the producer cluster.
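To change the default import and export behavior shown in the preceding table on an existing peering, you can use the gcloud compute networks peerings update command. The following is a sketch with placeholder names; which flags you set depends on the direction of traffic that you need:
gcloud compute networks peerings update PEERING_NAME \
    --network=producer \
    --project=producer_project \
    --no-export-subnet-routes-with-public-ip \
    --import-subnet-routes-with-public-ip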
Prerequisites
To establish successful communication between VPC Pods and another VPC, ensure the following prerequisites and configurations are met:
- Your GKE cluster must meet the following minimum version requirements (you can check the current version as shown in the sketch after this list):
- Autopilot clusters: GKE version 1.22 or later
- Standard clusters: GKE version 1.14 or later
- Select a PUPI range that is not publicly routable or owned by Google.
- Ensure that the node IP addresses and the primary IP address range of both VPCs don't overlap.
- If you require direct Pod-to-Pod communication between the customer VPC and the managed service, follow these steps:
- Autopilot clusters: SNAT for PUPI traffic is configured for you to ensure Pod-to-Pod communication. You don't require additional configuration.
- Standard clusters: Translate Pod IP addresses to their corresponding node IP addresses by using SNAT. Enable SNAT for PUPI traffic. For more information, see Enable privately used external IP address ranges.
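To confirm that an existing cluster meets the minimum version requirement, you can check its control plane version. This is a sketch; the cluster name and zone are placeholders:
gcloud container clusters describe CLUSTER_NAME \
    --zone=COMPUTE_ZONE \
    --format="value(currentMasterVersion)"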
Configure privately used public IP addresses for GKE clusters
To configure PUPI addresses for GKE clusters:
- Configure two Virtual Private Cloud networks.
- Configure one subnet inside each Virtual Private Cloud network.
- Configure a PUPI address range on a secondary address range in each subnet.
- Establish a Virtual Private Cloud peering relationship between the two Virtual Private Cloud networks with proper import and export settings.
- Inspect the routes within each Virtual Private Cloud.
Costs
In this document, you use the following billable components of Google Cloud:
To generate a cost estimate based on your projected usage, use the pricing calculator.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
- Use VPC-native clusters only.
- Set up the required IPAM specifications for VPC peering.
Set up networks and clusters
Create VPC networks:
Create the following two VPC networks with RFC 1918 addresses as primary ranges for nodes and PUPI ranges for Pods. Assign primary IP address ranges from the RFC 1918 private address space (for example, 10.x.x.x, 172.16.x.x, or 192.168.x.x) to both VPCs. These ranges are typically used for the worker nodes within your GKE clusters. In addition to the internal IP address ranges, designate separate ranges of privately used public IP (PUPI) addresses for each VPC. These PUPI ranges are used exclusively for the Pod IP addresses within their corresponding GKE clusters.
Consumer VPC network: This VPC hosts the GKE cluster where the consumer's applications or workloads run. You can use configuration similar to the following:
Network: consumer
Subnetwork: consumer-subnet
Primary range: 10.129.0.0/24
Service range name and CIDR: consumer-services, 172.16.5.0/24
Pod range name and CIDR: consumer-pods, 5.5.5.0/24
Cluster name: consumer-cluster
Producer VPC network: This VPC hosts the GKE cluster responsible for providing the service that the consumer uses. You can use configuration similar to the following:
Network: producer
Subnetwork: producer-subnet
Primary range: 10.128.0.0/24
Service range name and CIDR: producer-services, 172.16.45.0/24
Pod range name and CIDR: producer-pods, 45.45.45.0/24
Cluster name: producer-cluster
For more information on creating VPC networks, see Create VPC networks.
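A minimal sketch of commands that create the producer network and subnet with the ranges listed above might look like the following; the region is an assumption, and the consumer network and subnet are created the same way with their own names and ranges:
gcloud compute networks create producer \
    --subnet-mode=custom \
    --project=producer_project

gcloud compute networks subnets create producer-subnet \
    --network=producer \
    --region=us-central1 \
    --range=10.128.0.0/24 \
    --secondary-range=producer-services=172.16.45.0/24,producer-pods=45.45.45.0/24 \
    --project=producer_project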
With the VPC networks and subnets created with PUPI ranges in the preceding step, you can create two GKE clusters (producer and consumer).
Create the producer cluster with the producer network and subnet as follows:
gcloud container clusters create PRODUCER_CLUSTER_NAME \
    --enable-ip-alias \
    --network=NETWORK_NAME \
    --subnetwork=SUBNETWORK_NAME \
    --cluster-secondary-range-name=PRODUCER_PODS \
    --services-secondary-range-name=PRODUCER_SERVICES \
    --num-nodes=1 \
    --zone=PRODUCER_ZONE_NAME \
    --project=PRODUCER_PROJECT_NAME
Replace the following:
- PRODUCER_CLUSTER_NAME: the name of the GKE producer cluster.
- PRODUCER_ZONE_NAME: the Compute Engine zone for the cluster.
- PRODUCER_PROJECT_NAME: the ID of the producer project.
- NETWORK_NAME: the name of the producer VPC network.
- SUBNETWORK_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster. If omitted, GKE attempts to use a subnet in the default VPC network in the cluster's region.
- If the secondary range assignment method is managed by GKE:
  - POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If you omit the --cluster-ipv4-cidr option, GKE chooses a /14 range (2^18 addresses) automatically. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses).
  - SERVICES_IP_RANGE: an IP address range in CIDR notation (for example, 10.4.0.0/19) or the size of a CIDR block's subnet mask (for example, /19). This is used to create the subnet's secondary IP address range for Services.
- If the secondary range assignment method is user-managed:
  - PRODUCER_PODS: the name of an existing secondary IP address range in the specified SUBNETWORK_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
  - PRODUCER_SERVICES: the name of an existing secondary IP address range in the specified SUBNETWORK_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Services.
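With the example values used in this tutorial, the producer command might look like the following sketch; the zone and project ID are assumptions, not values defined by the tutorial:
gcloud container clusters create producer-cluster \
    --enable-ip-alias \
    --network=producer \
    --subnetwork=producer-subnet \
    --cluster-secondary-range-name=producer-pods \
    --services-secondary-range-name=producer-services \
    --num-nodes=1 \
    --zone=us-central1-a \
    --project=producer_project
The consumer cluster in the next step follows the same pattern with the consumer network, subnet, and secondary range names.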
Create the consumer cluster with the consumer network and subnet as follows:
gcloud container clusters create CONSUMER_CLUSTER_NAME \
    --enable-ip-alias \
    --network=CONSUMER_NETWORK_NAME \
    --subnetwork=CONSUMER_SUBNETWORK_NAME \
    --cluster-secondary-range-name=CONSUMER_PODS \
    --services-secondary-range-name=CONSUMER_SERVICES \
    --num-nodes=1 \
    --zone=CONSUMER_ZONE \
    --project=CONSUMER_PROJECT
Replace the following:
- CONSUMER_CLUSTER_NAME: the name of the GKE consumer cluster.
- CONSUMER_ZONE: the Compute Engine zone for the cluster.
- CONSUMER_PROJECT: the ID of the consumer project.
- CONSUMER_NETWORK_NAME: the name of the consumer VPC network.
- CONSUMER_SUBNETWORK_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster. If omitted, GKE attempts to use a subnet in the default VPC network in the cluster's region.
- If the secondary range assignment method is managed by GKE:
  - POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If you omit the --cluster-ipv4-cidr option, GKE chooses a /14 range (2^18 addresses) automatically. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and doesn't include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, specify --cluster-ipv4-cidr to prevent conflicts.
  - SERVICES_IP_RANGE: an IP address range in CIDR notation (for example, 10.4.0.0/19) or the size of a CIDR block's subnet mask (for example, /19). This is used to create the subnet's secondary IP address range for Services.
- If the secondary range assignment method is user-managed:
  - CONSUMER_PODS: the name of an existing secondary IP address range in the specified CONSUMER_SUBNETWORK_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
  - CONSUMER_SERVICES: the name of an existing secondary IP address range in the specified CONSUMER_SUBNETWORK_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Services.
For more information on creating clusters, see Creating clusters.
Establish the VPC peering relationship between the consumer network and the producer network as follows:
To connect the consumer network to the producer network, run the following command:
gcloud compute networks peerings create PEERING_NAME \
    --project=consumer_project \
    --network=consumer \
    --peer-network=producer
To connect the producer network to the consumer network, run the following command:
gcloud compute networks peerings create PEERING_NAME \
    --project=producer_project \
    --network=producer \
    --peer-network=consumer \
    --no-export-subnet-routes-with-public-ip \
    --import-subnet-routes-with-public-ip
Replace the following:
- PEERING_NAME: a name for the peering connection. The verification steps that follow use consumer-peer-producer for the consumer-side peering and producer-peer-consumer for the producer-side peering.
By default, the consumer VPC exports its PUPI addresses. When you create the peering from the producer network, you use the following arguments to configure the producer VPC to import PUPI addresses but not export them:
--no-export-subnet-routes-with-public-ip
--import-subnet-routes-with-public-ip
Verify the networks and subnetworks
Verify the producer network:
gcloud compute networks describe producer \
    --project=producer_project
The output is similar to the following:
...
kind: compute#network
name: producer
peerings:
- autoCreateRoutes: true
  exchangeSubnetRoutes: true
  exportCustomRoutes: false
  exportSubnetRoutesWithPublicIp: false
  importCustomRoutes: false
  importSubnetRoutesWithPublicIp: true
  name: producer-peer-consumer
...
Verify the producer subnetwork:
gcloud compute networks subnets describe producer-subnet \
    --project=producer_project \
    --region=producer_region
The output is similar to the following:
...
ipCidrRange: 10.128.0.0/24
kind: compute#subnetwork
name: producer-subnet
...
secondaryIpRanges:
- ipCidrRange: 172.16.45.0/24
  rangeName: producer-services
- ipCidrRange: 45.45.45.0/24
  rangeName: producer-pods
...
Verify the consumer network:
gcloud compute networks describe consumer \
    --project=consumer_project
The output is similar to the following:
...
kind: compute#network
name: consumer
peerings:
- autoCreateRoutes: true
  exchangeSubnetRoutes: true
  exportCustomRoutes: false
  exportSubnetRoutesWithPublicIp: true
  importCustomRoutes: false
  importSubnetRoutesWithPublicIp: false
  name: consumer-peer-producer
...
Verify the consumer subnetwork:
gcloud compute networks subnets describe consumer-subnet \
    --project=consumer_project \
    --region=consumer_region
The output is similar to the following:
...
ipCidrRange: 10.129.0.0/24
kind: compute#subnetwork
name: consumer-subnet
...
secondaryIpRanges:
- ipCidrRange: 172.16.5.0/24
  rangeName: consumer-services
- ipCidrRange: 5.5.5.0/24
  rangeName: consumer-pods
...
Verify the GKE cluster and its resources
Get the consumer cluster credentials:
gcloud container clusters get-credentials consumer-cluster \
    --project=consumer_project \
    --zone=consumer_zone
The output is similar to the following:
...
Fetching cluster endpoint and auth data.
kubeconfig entry generated for consumer-cluster.
...
Install and verify the helloweb application.
Apply the following manifest directly:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    matchLabels:
      app: hello
      tier: web
  template:
    metadata:
      labels:
        app: hello
        tier: web
    spec:
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 200m
EOF
Alternatively, save the manifest as deployment.yaml and apply it:
kubectl apply -f deployment.yaml
Verify the helloweb deployment:
kubectl get deployment helloweb
The output is similar to the following:
...
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
helloweb   1/1     1            1           10s
...
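The overview notes that consumer-to-producer access typically goes through an internal load balancer. As an illustration only (the verification steps that follow don't depend on it), the following sketch exposes helloweb through an internal load balancer by using the GKE internal LoadBalancer Service annotation:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: helloweb-ilb
  annotations:
    # Request an internal load balancer IP address instead of an external one.
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: hello
    tier: web
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
EOF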
Verify the solution
Validate that you successfully created VPC peering:
gcloud compute networks peerings list
The output is similar to the following, which shows the peerings named consumer-peer-producer and producer-peer-consumer:
NAME                    NETWORK   PEER_PROJECT    PEER_NETWORK  STACK_TYPE  PEER_MTU  IMPORT_CUSTOM_ROUTES  EXPORT_CUSTOM_ROUTES  STATE   STATE_DETAILS
consumer-peer-producer  consumer  <project_name>  producer      IPV4_ONLY   1460      False                 False                 ACTIVE  [2023-08-07T13:14:57.178-07:00]: Connected.
producer-peer-consumer  producer  <project_name>  consumer      IPV4_ONLY   1460      False                 False                 ACTIVE  [2023-08-07T13:14:57.178-07:00]: Connected.
Validate that the consumer VPC exports PUPI routes:
gcloud compute networks peerings list-routes consumer-peer-producer \
    --direction=OUTGOING \
    --network=consumer \
    --region=<consumer_region>
The output is similar to the following, which shows all three consumer CIDR blocks:
DEST_RANGE     TYPE                  NEXT_HOP_REGION  PRIORITY  STATUS
10.129.0.0/24  SUBNET_PEERING_ROUTE  us-central1      0         accepted by peer
172.16.5.0/24  SUBNET_PEERING_ROUTE  us-central1      0         accepted by peer
5.5.5.0/24     SUBNET_PEERING_ROUTE  us-central1      0         accepted by peer
Validate the PUPI routes that the producer VPC imported:
gcloud compute networks peerings list-routes producer-peer-consumer \
    --direction=INCOMING \
    --network=producer \
    --region=<producer_region>
The output is similar to the following, which shows all three consumer CIDR blocks:
DEST_RANGE     TYPE                  NEXT_HOP_REGION  PRIORITY  STATUS
10.129.0.0/24  SUBNET_PEERING_ROUTE  us-central1      0         accepted
172.16.5.0/24  SUBNET_PEERING_ROUTE  us-central1      0         accepted
5.5.5.0/24     SUBNET_PEERING_ROUTE  us-central1      0         accepted
Validate that the GKE Pods have a PUPI address:
kubectl get pod -o wide
The output is similar to the following, which shows that the IP addresses of the Pods fall within the 5.5.5.0/24 range:
NAME                        READY   STATUS    RESTARTS   AGE   IP         NODE                                              NOMINATED NODE   READINESS GATES
helloweb-575d78464d-xfklj   1/1     Running   0          28m   5.5.5.16   gke-consumer-cluster-default-pool-e62b6542-dp5f   <none>           <none>
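As an optional cross-VPC check, you can run a short-lived client Pod in the consumer cluster and send a request toward an address in the producer VPC. This is a sketch; PRODUCER_ILB_IP is a placeholder that you would replace with, for example, the internal load balancer address that fronts the producer service:
kubectl run test-client \
    --rm -it \
    --image=busybox:1.36 \
    --restart=Never \
    -- wget -qO- http://PRODUCER_ILB_IP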
What's next
- Read the Configuring private service access guide.
- Read the Getting started with the Service Networking API guide.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.