Enhancement: Support for zVM compute nodes to Hosted Control Plane - vswitch FCP/DASD (#227)
- Code changes for supporting zVM compute nodes for Hosted Control Plane
- Supported network type: vswitch
- Supported disk types: FCP/DASD
- Updated the documentation accordingly
---------
Signed-off-by: root <[email protected]>
Co-authored-by: root <[email protected]>
**docs/run-the-playbooks-for-hypershift.md** (+4 −3 lines)

```diff
@@ -1,13 +1,14 @@
 # Run the Playbooks
 ## Prerequisites
 * Running OCP Cluster ( Management Cluster )
-* KVM host with root user access or user with sudo privileges
+* KVM host with root user access or user with sudo privileges if compute nodes are KVM.
+* zvm host ( bastion ) and nodes if compute nodes are zVM.

 ### Network Prerequisites
 * DNS entry to resolve api.${cluster}.${domain} , api-int.${cluster}.${domain} , *apps.${cluster}.${domain} to a load balancer deployed to redirect incoming traffic to the ingresses pod ( Bastion ).
 * If using dynamic IP for agents, make sure you have entries in DHCP Server for macaddresses you are using in installation to map to IPv4 addresses and along with this DHCP server should make your IPs to use nameserver which you have configured.
 ## Note:
-* As of now we are supporting only macvtap for hypershift Agent based installation.
+* As of now we are supporting only macvtap for hypershift Agent based installation for KVM compute nodes.
```

Later in the same file:

```diff
 ## Step-1: Setup Ansible Vault for Management Cluster Credentials
 * Navigate to the [root folder of the cloned Git repository](https://github.com/IBM/Ansible-OpenShift-Provisioning) in your terminal (`ls` should show [ansible.cfg](https://github.com/IBM/Ansible-OpenShift-Provisioning/blob/main/ansible.cfg)).
-* Update all the variables in Section-16 ( Hypershift ) and Section-3 ( File Server : ip , protocol and iso_mount_dir ) in [all.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/inventories/default/group_vars/all.yaml.template) before running the playbooks.
+* Update variables as per the compute node type (zKVM /zVM) in Section-16 ( Hypershift ) and Section-3 ( File Server : ip , protocol and iso_mount_dir ) in [all.yaml](https://github.com/veera-damisetti/Ansible-OpenShift-Provisioning/blob/main/inventories/default/group_vars/all.yaml.template) before running the playbooks.
 * First playbook to be run is setup_for_hypershift.yaml which will create inventory file for hypershift and will add ssh key to the kvm host.
```
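As a rough illustration of what the updated step asks the user to edit, the relevant parts of `all.yaml` for a zVM compute-node install might look like the sketch below. The key names come from the Section-3 and Section-16 descriptions in these docs; the nesting and all example values are assumptions and should be checked against `all.yaml.template`:

```yaml
# Section-3 ( File Server ) -- values are illustrative placeholders
file_server:
  ip: 192.168.10.201        # assumed example address
  protocol: http
  iso_mount_dir: iso

# Section-16 ( Hypershift ) -- compute_node_type selects the KVM or zVM path
hypershift:
  compute_node_type: zvm    # or zkvm for KVM compute nodes, per the variables table
  bastion_hypershift: 192.168.10.1
```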
**docs/set-variables-group-vars.md** (+10 −1 lines)

```diff
@@ -198,6 +198,7 @@
 ## 16 - Hypershift ( Optional )
 **Variable Name** | **Description** | **Example**
 :--- | :--- | :---
+**hypershift.compute_node_type** | Select the compute node type for HCP , either zKVM or zVM | zvm
 **hypershift.kvm_host** | IPv4 address of KVM host for hypershift <br /> (kvm host where you want to run all oc commands and create VMs)| 192.168.10.1
 **hypershift.kvm_host_user** | User for KVM host | root
 **hypershift.bastion_hypershift** | IPv4 address for bastion of Hosted Cluster | 192.168.10.1
@@ -232,15 +233,23 @@
 **hypershift.asc.iso_url** | Give URL for ISO image | https://... <br /> ...s390x-live.s390x.iso
 **hypershift.asc.root_fs_url** | Give URL for rootfs image | https://... <br /> ... live-rootfs.s390x.img
 **hypershift.asc.mce_namespace** | Namespace where your Multicluster Engine Operator is installed. <br /> Recommended Namespace for MCE is 'multicluster-engine'. <br /> Change this only if MCE is installed in other namespace. | multicluster-engine
+**hypershift.agents_parms.agents_count** | Number of agents for the hosted cluster <br /> The same number of compute nodes will be attached to Hosted Cotrol Plane | 2
 **hypershift.agents_parms.static_ip_parms.static_ip** | true or false - use static IPs for agents using NMState | true
 **hypershift.agents_parms.static_ip_parms.ip** | List of IP addresses for agents | 192.168.10.1
 **hypershift.agents_parms.static_ip_parms.interface** | Interface for agents for configuring NMStateConfig | eth0
-**hypershift.agents_parms.agents_count** | Number of agents for the hosted cluster <br /> The same number of compute nodes will be attached to Hosted Cotrol Plane | 2
 **hypershift.agents_parms.agent_mac_addr** | List of macaddresses for the agents. <br /> Configure in DHCP if you are using dynamic IPs for Agents. | - 52:54:00:ba:d3:f7
 **hypershift.agents_parms.disk_size** | Disk size for agents | 100G
 **hypershift.agents_parms.ram** | RAM for agents | 16384
 **hypershift.agents_parms.vcpus** | vCPUs for agents | 4
 **hypershift.agents_parms.nameserver** | Nameserver to be used for agents | 192.168.10.1
```
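Read together, the `agents_parms` rows in this table describe one nested block in `all.yaml`. A sketch of how those variables might sit together, using the example values from the table (the exact nesting is an assumption and should be verified against `all.yaml.template`):

```yaml
hypershift:
  agents_parms:
    agents_count: 2              # the same number of compute nodes joins the hosted cluster
    static_ip_parms:
      static_ip: true            # false => DHCP must map agent_mac_addr entries to IPs
      ip:
        - 192.168.10.1           # list of agent IP addresses
      interface: eth0
    agent_mac_addr:
      - "52:54:00:ba:d3:f7"
    disk_size: 100G
    ram: 16384
    vcpus: 4
    nameserver: 192.168.10.1
```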