diff --git a/vcluster/_fragments/integrations/istio.mdx b/vcluster/_fragments/integrations/istio.mdx index 8c3181173..0ab4af752 100644 --- a/vcluster/_fragments/integrations/istio.mdx +++ b/vcluster/_fragments/integrations/istio.mdx @@ -58,8 +58,29 @@ This configuration: Only `DestinationRules`, `Gateways`, and `VirtualServices` from `networking.istio.io/v1` API Version are synced to the host clusters. Other kinds are not yet supported. ::: +### Network policies + +If you are using [network policies](../../configure/vcluster-yaml/policies/network-policy.mdx), consult the [Ambient and Kubernetes NetworkPolicy](https://istio.io/latest/docs/ambient/usage/networkpolicy/) documentation. +Depending on the underlying configuration, the following policies may be required: + +```yaml title="vcluster.yaml" +policies: + networkPolicy: + workload: + ingress: + - ports: + # Allow HBONE traffic per https://istio.io/latest/docs/ambient/usage/networkpolicy/ + - port: 15008 + protocol: TCP + - from: + # Allow kubelet health probe per https://istio.io/latest/docs/ambient/usage/networkpolicy/ + - ipBlock: + cidr: 169.254.7.127/32 +``` + ## Route request based on the version label of the app + ### Set up cluster contexts @@ -69,8 +90,8 @@ Only `DestinationRules`, `Gateways`, and `VirtualServices` from `networking.isti the `HOST_CONTEXT` is the context of the host cluster, while the `VCLUSTER_CONTEXT` is the context to access the created vCluster. The `VCLUSTER_HOST_NAMESPACE` is the namespace where the vCluster is created on the host cluster. The `ISTIO_NAMESPACE` is the namespace to deploy the Istio integration. - The commands in the following steps will be automatically updated to follow your - configuration. For the GatewayAPI version, you should set it according to + The commands in the following steps will be automatically updated to follow your + configuration. For the GatewayAPI version, you should set it according to your Istio version / GatewayAPI version matrix above. @@ -132,7 +153,7 @@ Only `DestinationRules`, `Gateways`, and `VirtualServices` from `networking.isti /> and label it with `istio.io/dataplane-mode: ambient`: - + To apply a `DestinationRule` configuration to the virtual cluster specified by the `VCLUSTER_CONTEXT` context, use the following command: - + \ No newline at end of file + + +### Network policies +If you are using [network policies](../configure/vcluster-yaml/policies/network-policy.mdx), private nodes traffic into the virtual cluster control plane must be allowed. + +```yaml title="vcluster.yaml" +privateNodes: + enabled: true + +controlPlane: + service: + spec: + type: LoadBalancer + +policies: + networkPolicy: + enabled: true + controlPlane: + ingress: + - from: + # Allow incoming traffic from the load balancer internal IP address. + # This example is allowing incoming traffic from any address. Load balancer internal CIDR should be used. 
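+        # A sketch with an assumed range: if the internal load balancer allocates IPs from
+        # 10.240.0.0/24, use that CIDR here instead of 0.0.0.0/0.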
+ - ipBlock: + cidr: 0.0.0.0/0 +``` diff --git a/vcluster/configure/vcluster-yaml/control-plane/components/backing-store/database/external.mdx b/vcluster/configure/vcluster-yaml/control-plane/components/backing-store/database/external.mdx index 7c9b74127..fccf3bc88 100644 --- a/vcluster/configure/vcluster-yaml/control-plane/components/backing-store/database/external.mdx +++ b/vcluster/configure/vcluster-yaml/control-plane/components/backing-store/database/external.mdx @@ -2,7 +2,7 @@ title: External database sidebar_label: external sidebar_position: 2 -sidebar_class_name: pro host-nodes private-nodes +sidebar_class_name: pro host-nodes private-nodes description: Configure an external database as the virtual cluster's backing store for enhanced performance and scalability. --- @@ -76,7 +76,7 @@ There are two mutually exclusive options for using an external backing store. Replace `CONNECTION_STRING` with the connection string for your database. Examples: - PostgreSQL: `postgres://username:password@hostname:5432/vcluster-db` -- MySQL: `mysql://root:password@tcp(192.168.86.9:30360)/vcluster` +- MySQL: `mysql://root:password@tcp(192.168.86.9:3306)/vcluster` ### Connector configuration @@ -95,6 +95,35 @@ controlPlane: The virtual cluster must be [connected to the platform](/vcluster/configure/vcluster-yaml/external/platform/api-key) to use the connector. This enables centralized management and monitoring of virtual clusters. ::: +## Network policies +If you are using [network policies](../../../../policies/network-policy.mdx), the virtual cluster control plane traffic to the external database must be allowed. + +```yaml title="vcluster.yaml" +policies: + networkPolicy: + controlPlane: + egress: + # Allow outgoing traffic to the mysql database server. + - to: + - ipBlock: + cidr: 192.168.86.9/32 + ports: + - port: 3306 + protocol: TCP + + # Allow outgoing traffic to the postgres database server private subnets. + - to: + - ipBlock: + cidr: 10.0.0.0/24 + - ipBlock: + cidr: 10.0.1.0/24 + - ipBlock: + cidr: 10.0.2.0/24 + ports: + - port: 5432 + protocol: TCP +``` + ## Config reference diff --git a/vcluster/configure/vcluster-yaml/networking/replicate-services.mdx b/vcluster/configure/vcluster-yaml/networking/replicate-services.mdx index 6ad8bb5e3..c2aa99fcd 100644 --- a/vcluster/configure/vcluster-yaml/networking/replicate-services.mdx +++ b/vcluster/configure/vcluster-yaml/networking/replicate-services.mdx @@ -49,6 +49,32 @@ In the above example, when you remove the `my-virtual-namespace/my-virtual-servi `networking.replicateServices.toHost`, the host cluster service `my-host-service` is not automatically deleted from the host cluster, so you need to delete it manually, if you don't want to keep it in the host cluster. +## Network policies + +If you are using [network policies](../policies/network-policy.mdx), traffic to or from the replicated services must be allowed. + +```yaml title="vcluster.yaml" +policies: + networkPolicy: + workload: + egress: + - to: + # Example allowing vcluster workload traffic to all pods in the my-host-namespace namespace. + # Depending on your use case, a more restrictive pod selector may be used. + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: my-host-namespace + + ingress: + - from: + # Example allowing vcluster workload traffic from all pods in the my-host-namespace namespace. + # Depending on your use case, a more restrictive pod selector may be used. 
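+          # A sketch with a hypothetical label: adding
+          #   podSelector:
+          #     matchLabels:
+          #       app: my-host-service
+          # next to the namespaceSelector limits this rule to that service's pods only.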
+ - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: my-host-namespace + +``` + ## Config reference diff --git a/vcluster/configure/vcluster-yaml/policies/admission-control.mdx b/vcluster/configure/vcluster-yaml/policies/admission-control.mdx index 7eb8767e7..80816e476 100644 --- a/vcluster/configure/vcluster-yaml/policies/admission-control.mdx +++ b/vcluster/configure/vcluster-yaml/policies/admission-control.mdx @@ -3,7 +3,7 @@ title: Central admission control sidebar_label: centralAdmission sidebar_class_name: pro host-nodes sidebar_position: 5 -description: Configuration for ... +description: Configuration for central admission control. --- import AdmissionControl from '../../../_partials/config/policies/centralAdmission.mdx' @@ -27,8 +27,8 @@ Centralized admission control is an advanced feature for cluster admins that hav Central admission webhooks are different from the other policies: -- `LimitRange` and `ResourceQuota` resources are created and enforced on the host cluster. They do not appear as resources in the virtual cluster. -- A user could define `LimitRange` and `ResourceQuota` resources inside the virtual cluster, but they have full control over them and can delete them when needed. +- LimitRange and ResourceQuota resources are created and enforced on the host cluster. They do not appear as resources in the virtual cluster. +- A user could define LimitRange and ResourceQuota resources inside the virtual cluster, but they have full control over them and can delete them when needed. - Webhooks are different because they need to be configured inside the virtual cluster in order for them to be called by the vCluster's API server. - vCluster rewrites these definitions to point to a proxy for a host cluster service that handles webhook requests. The host might also have webhook configurations that use this service. - A user can still install a webhook service or webhook configuration into the virtual cluster outside of this config, but it would run inside the virtual cluster like any other workload. @@ -244,6 +244,23 @@ spec: admission webhook "validation.gatekeeper.sh" denied the request: you must provide labels: [gatekeeper] ``` +## Network policies +If you are using [network policies](./network-policy.mdx), admission webhooks traffic from the vCluster control plane must be allowed. + +```yaml title="vcluster.yaml" +policies: + networkPolicy: + controlPlane: + egress: + - to: + # Allow vcluster control plane traffic to the gatekeeper webhook. + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: gatekeeper-system + podSelector: + matchLabels: + gatekeeper.sh/operation: webhook +``` ## Config reference diff --git a/vcluster/configure/vcluster-yaml/policies/network-policy.mdx b/vcluster/configure/vcluster-yaml/policies/network-policy.mdx index d758e15c4..f3f9d4ebb 100644 --- a/vcluster/configure/vcluster-yaml/policies/network-policy.mdx +++ b/vcluster/configure/vcluster-yaml/policies/network-policy.mdx @@ -15,7 +15,13 @@ import TenancySupport from '../../../_fragments/tenancy-support.mdx'; This feature is disabled by default. ::: -Workloads created by vCluster are able to communicate with other workloads in the host cluster through their cluster IPs. Configure network policies when you want to isolate namespaces and do not want the pods running inside the virtual cluster to have access to other workloads in the host cluster. 
+By default, workloads created by vCluster are able to communicate with other workloads in the host cluster through their cluster IPs. Configure network policies when you want to isolate namespaces and do not want the pods running inside the virtual cluster to have access to other workloads in the host cluster. + +Enabling this creates Kubernetes [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) resources in the host namespace that control how vCluster pods (both control plane and workloads) communicate with each other and with other pods on the host cluster. + +## Prerequisites + +[Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) are implemented by the [network plugin (CNI)](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect. ## Enable network isolation {#enable-isolation} @@ -27,41 +33,206 @@ policies: enabled: true ``` -This creates NetworkPolicies in the host namespace that: +This creates Kubernetes NetworkPolicies resources in the host namespace that: - Allow traffic between pods within the virtual cluster - Block traffic from other namespaces - Permit DNS and API server communication -### Example configurations {#examples} +:::note +The Kubernetes NetworkPolicies resources are managed by vCluster. Manual changes to these resources will be overwritten. +::: -#### Basic isolation {#basic-isolation} +
+ Example of NetworkPolicies resources created in the host namespace +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: vc-work-{name} + namespace: vcluster-{name} + labels: + app: vcluster + chart: vcluster-0.31.0 + heritage: Helm + release: {name} +spec: + # Pod selector matching virtual cluster workloads pods. + podSelector: + matchLabels: + vcluster.loft.sh/managed-by: {name} + policyTypes: + - Egress + - Ingress + egress: + # Allow egress to vcluster DNS and control plane. + - ports: + - port: 1053 + protocol: UDP + - port: 1053 + protocol: TCP + - port: 8443 + protocol: TCP + to: + - podSelector: + matchLabels: + release: {name} + # Allow egress to other vcluster workloads, including coredns when not embedded. + - to: + - podSelector: + matchLabels: + vcluster.loft.sh/managed-by: {name} + # Allow public egress. + - to: + - ipBlock: + cidr: 0.0.0.0/0 + except: + - 100.64.0.0/10 + - 127.0.0.0/8 + - 10.0.0.0/8 + - 172.16.0.0/12 + - 192.168.0.0/16 + ingress: + # Allow ingress from vcluster control plane. + - from: + - podSelector: + matchLabels: + release: {name} + # Allow ingress from other vcluster workloads. + - from: + - podSelector: + matchLabels: + vcluster.loft.sh/managed-by: {name} +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: vc-cp-{name} + namespace: vcluster-{name} + labels: + app: vcluster + chart: vcluster-0.31.0 + heritage: Helm + release: {name} +spec: + # Pod selector matching virtual cluster control plane pods. + podSelector: + matchLabels: + release: {name} + policyTypes: + - Egress + - Ingress + egress: + # Allow egress to host kube-dns. + - to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: 'kube-system' + podSelector: + matchLabels: + k8s-app: kube-dns + # Allow egress to host control plane. + - ports: + - port: 443 + protocol: TCP + - port: 8443 + protocol: TCP + - port: 6443 + protocol: TCP + # Allow egress to vcluster control plane peers, including etcd peers, when using etcd as the backend in HA mode. + - to: + - podSelector: + matchLabels: + release: {name} + # Allow egress connections to vcluster workloads. + - to: + - podSelector: + matchLabels: + vcluster.loft.sh/managed-by: {name} + # Allow egress to vcluster platform. + - to: + - podSelector: + matchLabels: + app: loft + namespaceSelector: {} + ingress: + # Allow ingress from vcluster control plane peers, including etcd peers, when using etcd as the backend in HA mode. + - from: + - podSelector: + matchLabels: + release: {name} + # Allow ingress for vcluster workloads. + - ports: + - port: 1053 + protocol: UDP + - port: 1053 + protocol: TCP + - port: 8443 + protocol: TCP + from: + - podSelector: + matchLabels: + vcluster.loft.sh/managed-by: {name} + # Allow ingress from vcluster snapshot. + - from: + - podSelector: + matchLabels: + app: vcluster-snapshot + # Allow ingress from vcluster platform. + - from: + - podSelector: + matchLabels: + app: loft + namespaceSelector: {} -The simplest configuration enables network isolation with default settings: +``` +
+### Example configurations {#examples} + +#### Custom ingress and egress rules {#custom-rules} +Control inbound and outbound traffic with specific ports and IP addresses for vCluster control plane and workloads: ```yaml title="vcluster.yaml" policies: networkPolicy: enabled: true -``` -#### Custom egress rules {#custom-egress} + workload: + ingress: + # Allow ingress from anywhere to specific ports + - ports: + - port: 6060 + - port: 444 -Control outbound traffic with specific CIDR blocks: + egress: + # Allow egress to a specific address and port + - to: + - ipBlock: + cidr: 172.19.10.23/32 + ports: + - port: 7777 + protocol: TCP -```yaml title="vcluster.yaml" -policies: - networkPolicy: - enabled: true - outgoingConnections: - ipBlock: - cidr: 0.0.0.0/0 - except: - - 169.254.0.0/16 # AWS metadata service - - 10.0.0.0/8 # Private network ranges - - 172.16.0.0/12 - - 192.168.0.0/16 + publicEgress: + # Disable convenience common public egress rule. + enabled: false + + controlPlane: + ingress: + # Allow ingress traffic from anywhere to the virtual cluster control plane api + - ports: + - port: 8443 + + egress: + # Allow egress traffic to a specific address + - to: + - ipBlock: + cidr: 172.19.10.23/32 ``` +:::note +`ingress` and `egress` config sections accept the same content type as [PodNetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/#podnetworkpolicy-resource) +::: + #### Add custom labels {#custom-labels} Apply labels to generated NetworkPolicies for easier management: @@ -114,6 +285,59 @@ This automatically: - Allows communication within the same project - Enforces network boundaries for CI/CD pipelines +## Migration from v0.30 config {#migration} +`workload` and `controlPlane` configuration sections are introduced to allow defining additional ingress/egress rules for the specific components. + +
+```yaml title="vcluster.yaml (v0.30 and earlier)"
+policies:
+  networkPolicy:
+    enabled: true
+
+    extraControlPlaneRules:
+      - ports:
+          - port: 8443
+
+    extraWorkloadRules:
+      - ports:
+          - port: 6060
+
+    outgoingConnections:
+      ipBlock:
+        cidr: 172.19.10.23/32
+```
+
+The equivalent configuration in v0.31:
+```yaml title="vcluster.yaml (v0.31)"
+policies:
+  networkPolicy:
+    enabled: true
+
+    controlPlane:
+      egress:
+        - ports:
+            - port: 8443
+
+    workload:
+      egress:
+        - ports:
+            - port: 6060
+
+      publicEgress:
+        cidr: 172.19.10.23/32
+```
+ ## Config reference +| Deprecated Field | New Field | +| ----------------- | ---------------- | +| `extraControlPlaneRules` | `controlPlane.egress` | +| `extraWorkloadRules` | `workload.egress` | +| `outgoingConnections.ipBlock` | `workload.publicEgress` | + diff --git a/vcluster/third-party-integrations/rancher/install-rancher-integration.mdx b/vcluster/third-party-integrations/rancher/install-rancher-integration.mdx index c1675bb7c..73cd7d814 100644 --- a/vcluster/third-party-integrations/rancher/install-rancher-integration.mdx +++ b/vcluster/third-party-integrations/rancher/install-rancher-integration.mdx @@ -24,6 +24,30 @@ Ensure you have the following prerequisites before installing the vCluster Ranch - A running Rancher local cluster selected for installation. Please note that the docker install is not supported for production use cases. - At least one connected downstream cluster where virtual clusters are to be deployed. +### Network policies +If you are using [network policies](../../configure/vcluster-yaml/policies/network-policy.mdx), Rancher Operator traffic into the virtual cluster control plane must be allowed. + +```yaml title="vcluster.yaml" +policies: + networkPolicy: + controlPlane: + ingress: + # Allow ingress traffic from the rancher-cluster-agent-install job pods. + - from: + - podSelector: + matchExpressions: + - key: batch.kubernetes.io/job-name + operator: Exists + workload: + egress: + # Allow workload egress traffic to the rancher server. + # This example is allowing outgoing traffic to any address. + # Depending on your use case, a more restrictive policy may be used. + - to: + - ipBlock: + cidr: 0.0.0.0/0 +``` + ## Install the vCluster Rancher Operator @@ -62,7 +86,7 @@ Defining `snycLabels` in the operator's helm values allows you to configure the ```yaml syncLabels: - excludeKeys: + excludeKeys: - no-sync/* includeKeys: - some-specific-label @@ -75,7 +99,7 @@ The default fleet workspace is `fleet-default`, but can be overridden by setting ```yaml fleet: - defaultWorkspace: some-other-ws + defaultWorkspace: some-other-ws ``` You can also map projects to workspaces using the kubenetes UID for the Rancher project, mapping to a workspace like so: @@ -83,14 +107,14 @@ You can also map projects to workspaces using the kubenetes UID for the Rancher ```yaml fleet: projectUIDToWorkspaceMappings: - a8732c55-e618-42dc-8f75-0964b0180a79: ws-1 - 98abcc55-8341-c4dc-5531-902348b80321: ws-2 + a8732c55-e618-42dc-8f75-0964b0180a79: ws-1 + 98abcc55-8341-c4dc-5531-902348b80321: ws-2 ``` ## Use the vCluster Rancher Extension UI -Optionally, you can install the vCluster Rancher Extension UI to enable a more tailored user experience in Rancher. +Optionally, you can install the vCluster Rancher Extension UI to enable a more tailored user experience in Rancher. The vCluster Rancher Extension UI allows you to deploy virtual clusters directly from the Rancher user interface. It provides a separate UI for managing virtual clusters and a user experience more tailored to virtual clusters. The extension also labels virtual clusters in the Rancher Cluster Dashboard so you can distinguish them from physical clusters. :::note Requirements @@ -106,7 +130,7 @@ Ensure your environment meets the following version requirements: -Open Rancher and go to **Extensions** in the left navigation. Click the ellipsis menu (`...`) in the top-right corner and select **Manage Repositories**. +Open Rancher and go to **Extensions** in the left navigation. 
Click the ellipsis menu (`...`) in the top-right corner and select **Manage Repositories**. @@ -116,7 +140,7 @@ Click **Create**, enter a name for the repository, and set the **Index URL** to: https://charts.loft.sh/ ``` -Click **Create** to save the repository. +Click **Create** to save the repository.