Commit b4203eb

committed
feat: Added Custom Ingress Network Policy docs
1 parent 3571aeb commit b4203eb

6 files changed: +147 -40 lines changed


vcluster/_fragments/integrations/istio.mdx

Lines changed: 25 additions & 4 deletions
@@ -58,6 +58,27 @@ This configuration:
Only `DestinationRules`, `Gateways`, and `VirtualServices` from `networking.istio.io/v1` API Version are synced to the host clusters. Other kinds are not yet supported.
:::

### Network policies

If you are using [network policies](../../configure/vcluster-yaml/policies/network-policy.mdx), consult the [Ambient and Kubernetes NetworkPolicy](https://istio.io/latest/docs/ambient/usage/networkpolicy/) documentation.
Depending on the underlying configuration, the following policies may be required:

```yaml title="vcluster.yaml"
policies:
  networkPolicy:
    workload:
      ingress:
        - ports:
            # Allow HBONE traffic per https://istio.io/latest/docs/ambient/usage/networkpolicy/
            - port: 15008
              protocol: TCP
        - from:
            # Allow kubelet health probe per https://istio.io/latest/docs/ambient/usage/networkpolicy/
            - ipBlock:
                cidr: 169.254.7.127/32
```

<!--vale off-->
## Route request based on the version label of the app
<Flow id="istio-example">
@@ -69,8 +90,8 @@ Only `DestinationRules`, `Gateways`, and `VirtualServices` from `networking.isti
the `HOST_CONTEXT` is the context of the host cluster, while the `VCLUSTER_CONTEXT` is
the context to access the created vCluster. The `VCLUSTER_HOST_NAMESPACE` is the namespace
where the vCluster is created on the host cluster. The `ISTIO_NAMESPACE` is the namespace to deploy the Istio integration.
The commands in the following steps are automatically updated to follow your
configuration. Set the GatewayAPI version according to
the Istio version / GatewayAPI version matrix above.

<PageVariables HOST_CONTEXT="your-host-context" VCLUSTER_CONTEXT="vcluster-ctx" VCLUSTER_HOST_NAMESPACE="vcluster" GATEWAY_API_VERSION="v1.2.1" ISTIO_NAMESPACE="istio"/>
@@ -132,7 +153,7 @@ Only `DestinationRules`, `Gateways`, and `VirtualServices` from `networking.isti
/>

and label it with `istio.io/dataplane-mode: ambient`:

<InterpolatedCodeBlock
code={`kubectl --context="[[GLOBAL:VCLUSTER_CONTEXT]]" label namespace [[GLOBAL:ISTIO_NAMESPACE]] istio.io/dataplane-mode=ambient`}
language="bash"
@@ -441,7 +462,7 @@ code={`apiVersion: apps/v1
/>

To apply a `DestinationRule` configuration to the virtual cluster specified by the `VCLUSTER_CONTEXT` context, use the following command:

<InterpolatedCodeBlock
code={`kubectl --context="[[GLOBAL:VCLUSTER_CONTEXT]]" create -f destination_rule.yaml`}
language="bash"

vcluster/_fragments/metrics-server.mdx

Lines changed: 10 additions & 9 deletions
@@ -33,19 +33,20 @@ integrations:
    port: 443
```

If you are using [network policies](../configure/vcluster-yaml/policies/network-policy.mdx), add the metrics server pod to the rules that allow the control plane to communicate with it:

```yaml
policies:
  networkPolicy:
    controlPlane:
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: 'kube-system'
              podSelector:
                matchLabels:
                  k8s-app: metrics-server
```
## Config reference

vcluster/_fragments/private-nodes.mdx

Lines changed: 28 additions & 5 deletions
@@ -1,7 +1,7 @@
import PrivateNodeLimitations from './private-nodes-limitations.mdx'

Using private nodes is a tenancy model for vCluster where, instead of sharing the host cluster’s worker nodes, individual worker nodes are joined to a vCluster.
These private nodes act as the vCluster’s worker nodes.

Because these nodes are real Kubernetes nodes, vCluster does not sync any resources to the host cluster as no host cluster worker nodes are used. All workloads run directly on the attached nodes as if they were native to the virtual cluster.
@@ -28,10 +28,10 @@ alt="Overview"

## How private nodes can be provisioned

Private nodes can be provisioned in two different ways:

* **[Manually provisioned](../../../deploy/worker-nodes/private-nodes/join)** - Nodes that were provisioned outside of vCluster. These nodes are joined to vCluster using a vCluster CLI command.
* **[Automatically provisioned](./auto-nodes)** - Nodes that are provisioned on-demand based on the vCluster configuration and resource requirements. vCluster is connected to vCluster Platform and references a node provider defined in vCluster Platform.
@@ -75,4 +75,27 @@ controlPlane:
  enabled: true
```

<PrivateNodeLimitations />

### Network policies

If you are using [network policies](../configure/vcluster-yaml/policies/network-policy.mdx), traffic from private nodes to the vCluster control plane must be allowed:

```yaml title="vcluster.yaml"
privateNodes:
  enabled: true

controlPlane:
  service:
    spec:
      type: LoadBalancer

policies:
  networkPolicy:
    enabled: true
    controlPlane:
      ingress:
        - from:
            # Allow incoming traffic from the load balancer's internal IP address.
            # This example allows incoming traffic from any address; use the
            # load balancer's internal CIDR instead.
            - ipBlock:
                cidr: 0.0.0.0/0
```
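To tighten the example above, the `0.0.0.0/0` block can be replaced with the load balancer's internal range. A sketch, assuming an illustrative internal CIDR of `10.0.0.0/22` (substitute the range your load balancer actually uses):

```yaml title="vcluster.yaml"
policies:
  networkPolicy:
    enabled: true
    controlPlane:
      ingress:
        - from:
            # 10.0.0.0/22 is an illustrative internal load balancer range;
            # replace it with the CIDR of your environment.
            - ipBlock:
                cidr: 10.0.0.0/22
```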

vcluster/configure/vcluster-yaml/networking/replicate-services.mdx

Lines changed: 28 additions & 0 deletions
@@ -49,6 +49,34 @@ In the above example, when you remove the `my-virtual-namespace/my-virtual-servi
`networking.replicateServices.toHost`, the host cluster service `my-host-service` is not automatically deleted from
the host cluster, so you need to delete it manually if you don't want to keep it in the host cluster.

## Network policies

If you are using [network policies](../policies/network-policy.mdx), traffic to or from the replicated services must be allowed:

```yaml title="vcluster.yaml"
policies:
  networkPolicy:
    workload:
      egress:
        - to:
            # Example allowing vCluster workload traffic to all pods in the my-host-namespace namespace.
            # Depending on your use case, a more restrictive pod selector may be used.
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: my-host-namespace

      ingress:
        - from:
            # Example allowing vCluster workload traffic from all pods in the my-host-namespace namespace.
            # Depending on your use case, a more restrictive pod selector may be used.
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: my-host-namespace
```

## Config reference

<ReplicateServices/>

vcluster/configure/vcluster-yaml/policies/admission-control.mdx

Lines changed: 17 additions & 0 deletions
@@ -244,6 +244,23 @@ spec:
admission webhook "validation.gatekeeper.sh" denied the request: you must provide labels: [gatekeeper]
```

## Network policies

If you are using [network policies](./network-policy.mdx), admission webhook traffic from the vCluster control plane must be allowed:

```yaml title="vcluster.yaml"
policies:
  networkPolicy:
    controlPlane:
      egress:
        - to:
            # Allow vCluster control plane traffic to the gatekeeper webhook.
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: gatekeeper-system
              podSelector:
                matchLabels:
                  gatekeeper.sh/operation: webhook
```

## Config reference

vcluster/configure/vcluster-yaml/policies/network-policy.mdx

Lines changed: 39 additions & 22 deletions
@@ -44,24 +44,51 @@ policies:
    enabled: true
```

#### Custom ingress and egress rules {#custom-rules}

Control inbound and outbound traffic with specific ports and IP addresses for the vCluster control plane and workloads:

```yaml title="vcluster.yaml"
policies:
  networkPolicy:
    enabled: true

    workload:
      ingress:
        # Allow ingress from anywhere to specific ports
        - ports:
            - port: 6060
            - port: 444

      egress:
        # Allow egress to a specific address and port
        - to:
            - ipBlock:
                cidr: 172.19.10.23/32
          ports:
            - port: 7777
              protocol: TCP

      publicEgress:
        # Disable the convenience rule that allows common public egress.
        enabled: false

    controlPlane:
      ingress:
        # Allow ingress traffic from anywhere to the virtual cluster control plane API
        - ports:
            - port: 8443

      egress:
        # Allow egress traffic to a specific address
        - to:
            - ipBlock:
                cidr: 172.19.10.23/32
```

:::note
The `ingress` and `egress` config sections accept the same content as the `ingress` and `egress` fields of the Kubernetes [NetworkPolicy resource](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource).
:::
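Because the rule schema matches the NetworkPolicy resource, the `workload` rules above correspond to a standalone manifest roughly like the following sketch. The metadata name, namespace, and empty pod selector are illustrative only, not the names or selectors vCluster actually generates:

```yaml
# Plain Kubernetes NetworkPolicy using the same rule schema as the
# vcluster.yaml workload rules above; names and selectors are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-workload-rules
  namespace: my-namespace
spec:
  podSelector: {}        # selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - port: 6060
        - port: 444
  egress:
    - to:
        - ipBlock:
            cidr: 172.19.10.23/32
      ports:
        - port: 7777
          protocol: TCP
```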

#### Add custom labels {#custom-labels}

Apply labels to generated NetworkPolicies for easier management:
@@ -77,16 +104,6 @@ policies:
      description: "Network isolation for production vCluster"
```

## Project-scoped isolation with Platform {#project-scoped-isolation}

For Platform users needing project-level network boundaries, combine `policies.networkPolicy` with [VirtualClusterTemplates](/platform/administer/templates/create-templates):
