Search results

  1. Finally Cloudbase Init windows servers

    Hi, I'm trying this patch with Proxmox 6.0, but I get the error below when I apply it: patching file /usr/share/perl5/PVE/QemuServer/Cloudinit.pm Hunk #1 FAILED at 37. Hunk #2 succeeded at 149 (offset 12 lines). Hunk #3 succeeded at 241 with fuzz 1 (offset 19 lines). Hunk #4... (see the sketch after this list)
  2. Proxmox Replication feature does not replicate cloud-init drive

    I found the issue!!! The VM has a replication job configured.. with that job in place, the error above happens whenever I try to add the cloud-init drive!! After we delete the replication job, we can add the cloud-init drive successfully.. this is something that should be handled by Proxmox... (see the sketch after this list)
  3. Proxmox Replication feature does not replicate cloud-init drive

    Hi, I'm testing the replication feature of Proxmox 6 using local ZFS on both nodes, and everything works great except for the cloud-init config drive... when I simulate a disaster recovery and try to use the replica VM on node02, the cloud-init drive does not exist on the target zfs-local storage...
  4. VM replication

    I solved the problem!!! I just removed the zpool from node2 and recreated it with the same pool name as node1, without ticking the "ADD STORAGE" option... then I configured the storage at the Datacenter level to be available on both nodes.. after this, the ZFS pool appeared on both... (see the sketch after this list)
  5. VM replication

    Reading the PVE Admin Guide, it says local ZFS is supported for storage replication.. so what am I missing in this setup?
  6. VM replication

    Hi, I'm trying to create a replication job between two PVE 6.0 servers, but I get the error below when I try to create it: 2019-10-25 11:26:01 101-0: start replication job 2019-10-25 11:26:01 101-0: guest => VM 101, running => 0 2019-10-25 11:26:01 101-0: volumes =>...
  7. CEPH does not mark OSD down after node power failure

    After almost 30 minutes the OSDs were marked as DOWN.. why did it take so long? 30 minutes is not acceptable for a cluster with dozens of VMs running... Where can I lower this timeout, and why did it take Ceph so long to mark the OSDs down... (see the sketch after this list)
  8. CEPH does not mark OSD down after node power failure

    It has been almost 20 minutes since the power failure and the OSDs are still UP/IN.. size/min_size is 2/1 (only a test environment), 10 OSDs per node. The environment is: 3 mon nodes, 3 mgr nodes, 2 OSD nodes.
  9. CEPH does not mark OSD down after node power failure

    Hi, I'm doing some testing on a Ceph cluster before putting production VMs onto this environment.. but we are seeing a strange problem.. When I reboot a node (clean OS shutdown) everything works great in the Ceph manager: the node's OSDs become DOWN and everything works as expected.. But if we...
  10. CPU hotplug not working..

    Thanks. Based on this, what is the best practice? Should the guest's vCPU count match what is specified in the physical sockets/cores tab, or doesn't it matter?
  11. Doubt regarding HA fence on Proxmox

    Hi, I use a three-node cluster with Proxmox 6 and Ceph RBD for virtual machines. Is fencing really necessary for virtual machines running on a Ceph RBD cluster? No other shared resources exist.. that's why I'm confused about the fencing requirement on this HA cluster. My own understanding about...
  12. CPU hotplug not working..

    Yes, CPU and memory hotplug are enabled in Options.. This guest is a CentOS 7 with kernel 3.10.XX, which I believe is supported for hotplug!!! Thanks for the explanation regarding sockets/cores vs vCPUs.. I was trying to add sockets/cores online, not vCPUs... this is working now!!! many... (see the sketch after this list)
  13. CPU hotplug not working..

    Hi, I enabled NUMA in the guest hardware, but when I try to increase the vCPU count, the guest CPUs do not change.. Memory hotplug is working great!! just CPU is not!! In the Proxmox GUI, when I increase the CPU count the hardware tab stays in red... and it only clears after a VM shutdown.. Why the CPU...
  14. Can't login to WEB-UI after a node failure

    I did some tests here.. and marking the OSDs down manually works.. I can access the Ceph storage again.. #ceph osd down osd.1 #ceph osd down osd.2 etc... Is this by design, or should they be marked down automatically when a host suffers a complete failure? My understanding is that it should...
  15. Can't login to WEB-UI after a node failure

    Hi, I'm testing a three-node cluster with Ceph and HA, and everything is OK.. but if I power off one node (cold reboot), I can't log in to the web UI anymore until that host comes back online.. also, Ceph, HA, etc. stop working in that state... If I reboot the host normally (clean OS reboot), everything...
  16. Proxmox exempt on virtual machine cluster

    Hi, for storage it's OK, the node just doesn't participate in the Ceph cluster, but how can I do that on the KVM side? I mean, when creating a virtual machine, this host should not be displayed as an available host to accommodate the new virtual machine..
  17. Proxmox exempt on virtual machine cluster

    Hi, is it possible to configure a specific Proxmox node so that it is not offered among the available cluster nodes when creating a virtual machine? This node does not have the RAM/CPU for KVM guests, but I want to use it only for the Ceph cluster (monitor/OSD)... Any tips to accomplish this?
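
For result 1, a generic sketch of checking why a hunk fails; the target file name comes from the output above, but the patch file name, the -p strip level, and the working directory are assumptions that depend on how that patch was generated:

    # preview whether the patch still applies to this PVE version, changing nothing
    patch --dry-run -p1 -d / < cloudbase-init-pve6.patch
    # after a real attempt, rejected hunks are written to a .rej file next to the
    # target, showing exactly which context lines no longer match
    less /usr/share/perl5/PVE/QemuServer/Cloudinit.pm.rej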
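
For result 2, a minimal CLI sketch of the workaround described there (delete the replication job, add the cloud-init drive, then recreate the job); the VM ID 101, job ID 101-0, target node node02 and storage name local-zfs are placeholders:

    # list the configured replication jobs and note the blocking one
    pvesr list
    # remove the job so the cloud-init drive can be added
    pvesr delete 101-0
    # add the cloud-init drive to the VM
    qm set 101 --ide2 local-zfs:cloudinit
    # recreate the replication job afterwards (every 15 minutes)
    pvesr create-local-job 101-0 node02 --schedule "*/15"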
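
For result 4, a rough command-line equivalent of the fix described there, assuming a hypothetical pool name tank, storage ID tank-zfs, and example disks on node2; double-check device names before destroying anything:

    # on node2: recreate the pool under the same name used on node1
    zpool destroy tank
    zpool create tank mirror /dev/sdb /dev/sdc
    # once, at the datacenter level: define the ZFS storage and make it
    # available on both nodes
    pvesm add zfspool tank-zfs --pool tank --content images,rootdir --nodes node1,node2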
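
For results 7-9, a sketch of the Ceph options that usually decide how fast OSDs are marked down, assuming a release with the "ceph config" interface (Proxmox 6 ships Nautilus). One plausible explanation for the long delay with only two OSD hosts: down reports must come from enough distinct hosts, so when a whole host fails the monitors fall back to the report timeout. The values shown are examples, not recommendations:

    # grace period before missed heartbeats let peers report an OSD down (default 20s)
    ceph config get osd osd_heartbeat_grace
    # how many reporters, from distinct hosts by default, are required
    ceph config get mon mon_osd_min_down_reporters
    # fallback: how long the monitors wait for OSD reports before forcing DOWN (default 900s)
    ceph config get mon mon_osd_report_timeout
    # example: lower the fallback timeout in a test cluster
    ceph config set mon mon_osd_report_timeout 300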
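
For results 12 and 13, a sketch of CPU hotplug from the CLI, assuming a hypothetical VM ID 101: the sockets/cores topology is the fixed maximum, and only the vcpus count can be changed while the guest is running:

    # prerequisites: NUMA plus cpu/memory hotplug enabled for the VM
    qm set 101 --numa 1
    qm set 101 --hotplug disk,network,usb,memory,cpu
    # the maximum topology; changing it requires a VM restart
    qm set 101 --sockets 1 --cores 8
    # hot-add vCPUs while the guest runs, up to sockets * cores
    qm set 101 --vcpus 4
    qm set 101 --vcpus 6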