The weight of the buckets is always the sum of their contained items. Why would you want to change that?
All pools always use all OSDs (except when you use device classes). This makes Ceph very flexible with regard to disk usage.
If you want the second pool to use its "own" OSDs you need to assign a separate device class to those OSDs and create a CRUSH rule that uses it.
Just put two buckets of type room (or datacenter if you like) under the root and the hosts under the room buckets.
The second rule would then take the room bucket for site A as starting point. No need for two root buckets.
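In a decompiled CRUSH map such a rule could look roughly like this (the rule name, the id and the bucket name room-a are placeholders, not your actual setup):

rule replicated_site_a {
        id 2
        type replicated
        step take room-a
        step chooseleaf firstn 0 type host
        step emit
}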
You have added a RAW image as a virtual disk to the VM. A raw image does not support snapshots or thin provisioning.
Add a qcow2 image in the directory storage to the VM and it will be thin provisioned and support snapshots.
https://pve.proxmox.com/wiki/Storage
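Something along these lines should allocate a new thin-provisioned qcow2 disk on a directory storage (VM ID, bus/slot, storage name and size are placeholders):

qm set 100 --scsi1 local:32,format=qcow2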
Have you read https://docs.ceph.com/en/latest/install/windows-install/ ?
I have no experience with Ceph on Windows, but it looks like it is limited to the server editions. At least it needs Dokany if you want to use CephFS.
Stop all VMs and CTs on pve03.
Edit /etc/pve/storage.cfg and rename the storage.
Edit all VM and CT config files below /etc/pve/nodes and change the storage name.
Start the VMs and CTs on pve03.
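A hedged sketch of the two edit steps from the shell. The storage name "oldstore", the new name "newstore" and the storage type "dir" are made up; back up the files and double-check the result before starting the guests again:

# Illustrative only: rename storage "oldstore" to "newstore"
sed -i 's/^dir: oldstore$/dir: newstore/' /etc/pve/storage.cfg
sed -i 's/\boldstore:/newstore:/g' /etc/pve/nodes/pve03/qemu-server/*.conf /etc/pve/nodes/pve03/lxc/*.conf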
Have you moved the MONs to new IP addresses?
Is the Ceph cluster working? What does ceph -s tell you?
Have you changed the MON IPs in the ceph.conf of the PBS host?
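The relevant part is the mon_host line in the ceph.conf that the PBS host uses (the addresses below are placeholders, use the new MON IPs of your cluster):

[global]
        mon_host = 192.0.2.11 192.0.2.12 192.0.2.13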
You can create an SDN with a VLAN zone where you can name each VLAN. Within the VM's configuration the named VLAN can then be selected for the vNIC.
With the permission system of Proxmox you can control who is able to use which VLANs for their VMs.
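Roughly, the resulting SDN configuration could look like this (zone name "office", VNet name "mgmt", tag 100 and bridge vmbr0 are made-up examples; it is usually easier to create and apply this via Datacenter -> SDN in the GUI):

# /etc/pve/sdn/zones.cfg
vlan: office
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg
vnet: mgmt
        zone office
        tag 100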
I would also assume that automating a Debian installation and then adding the Proxmox repos and packages would be the easiest way.
Have you looked at https://fai-project.org/ ?
Assign a different device class to the "B" disks and create crush rules that utilize these device classes. Assign the crush rules to the pools and they will only use the OSDs from their device class.
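A minimal sketch with made-up names (OSD IDs, class "classB", rule "rule-classB" and pool "poolB" are placeholders); an already assigned class has to be removed before a new one can be set:

ceph osd crush rm-device-class osd.4 osd.5
ceph osd crush set-device-class classB osd.4 osd.5
ceph osd crush rule create-replicated rule-classB default host classB
ceph osd pool set poolB crush_rule rule-classB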
Use another VLAN ID as PVID for that bridge. We use 4063 for that purpose:
auto vmbr0
iface vmbr0 inet static
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-3 5-99 101-4094
        bridge-pvid 4063
IMHO 80 GB for the WAL alone is way too much. It should be sufficient to have one extra 80 GB chunk of SSD space per HDD OSD, which then contains both DB and WAL. OSDs will automatically put WAL and DB together if you only specify a db-device on creation.
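For example (device paths and the 80 GB size are placeholders; no separate --wal_dev is given, so the WAL ends up on the DB device):

pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 80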