Search results

  1. [TUTORIAL] HOWTO: Scripts to make cloudbase work like cloudinit for your windows based instances

    It does not work on the latest release, Proxmox 7.2-4. Not to mention that after manually patching, a lot of errors show up regarding Cloudinit.pm when trying to regenerate a cloudinit drive for a VM. This is referring to: sub configdrive2_gen_metadata { Password for windows administrator is never...
  2. Urgent/important issue regarding proxmox/ceph storage and kvm virtualization!

    There is no way to recover anything. The bug seems similar, because when running backups I/O and resource usage is probably higher than normal. We did not run any backups, but we did start a remote rsync restoration to a VM; this caused high I/O and other resource usage, which led...
  3. Urgent/important issue regarding proxmox/ceph storage and kvm virtualization!

    Can't be restored; the disks exist but the partition table was wiped. No backup was running, only what I specified in the first post. It should get more priority because this is very alarming and can happen in a production environment.
  4. Urgent/important issue regarding proxmox/ceph storage and kvm virtualization!

    That is what I also thought. Now, the problem is that we are afraid to try again because we have no idea what else could happen, as there are more VMs running inside this cluster. If you have any other idea of what we should check for, please let us know.
  5. Urgent/important issue regarding proxmox/ceph storage and kvm virtualization!

    Hello, thank you for your explanation. Does what you mention apply if this specific VM was created about 1 month ago? It was shut down and started/rebooted several times before starting the migration. What I mean is that the changes were surely committed to the disks, as those were created a long time...
  6. Urgent/important issue regarding proxmox/ceph storage and kvm virtualization!

    Hello, we are running multiple VMs in the following environment: a Proxmox cluster with Ceph storage (block storage; all OSDs are enterprise SSDs, RBD pool replicated 3 times). Ceph version: 15.2.11. All nodes inside the cluster have exactly the following version...
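
    A quick way to verify that all nodes really run identical versions (a minimal sketch; run on each node and compare the outputs):

        # Proxmox package versions on this node
        pveversion -v

        # versions of all running Ceph daemons across the cluster
        ceph versions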
  7. CEPH Network with 2 Switches

    How about LACP, if the switches support it?
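
    For reference, a minimal LACP (802.3ad) bond sketch for /etc/network/interfaces; the NIC names and address are placeholders:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4

        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0

    Note that a single LACP bond spanning two separate switches generally requires the switches to support MLAG or stacking.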
  8. Best option for getting HA to work across two servers?

    I personally do not use or recommend this variant in any important production cluster.
  9. Best option for getting HA to work across two servers?

    I agree, but in total, those are still 3.
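
    If the variant in question is a corosync QDevice (two nodes plus an external vote daemon, which is how you end up with 3 votes), the setup looks roughly like this; the IP is a placeholder:

        # on both cluster nodes
        apt install corosync-qdevice

        # on the external machine that provides the third vote
        apt install corosync-qnetd

        # then, from one of the cluster nodes
        pvecm qdevice setup 192.0.2.50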
  10. A node in multiple clusters

    https://pve.proxmox.com/wiki/High_Availability
  11. A node in multiple clusters

    You need 3 servers (nodes). You can spread all 12 VMs across these 3 nodes as you wish. You do not need 2 clusters.
  12. A node in multiple clusters

    To have a working cluster and HA you need at least 3 nodes, and then you will be able to do what you need. All nodes belong to the same cluster (one cluster).
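
    A minimal sketch of forming such a three-node cluster and putting a guest under HA (cluster name, IP, and VM ID are placeholders):

        # on the first node
        pvecm create mycluster

        # on each of the other two nodes, pointing at the first node's IP
        pvecm add 192.0.2.11

        # check quorum
        pvecm status

        # put VM 100 under HA management
        ha-manager add vm:100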
  13. Node3 will not cluster

    Multicast issues? I would check the networking.
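
    On older clusters where corosync still uses multicast, the usual test from the Proxmox docs is omping between all nodes (node names are placeholders):

        omping -c 10000 -i 0.001 -F -q node1 node2 node3

    Corosync 3 defaults to unicast, so this mainly applies to older releases.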
  14. PVE 5.4 Ceph Cloud Init Issue

    I believe he was talking about the storage itself, not the drive. You can see and change that from Datacenter -> Storage. The Ceph storage, when edited, should have the KRBD option there. Though some of us might notice a performance difference; that's why I will wait until they fix this issue.
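
    The same toggle is visible in /etc/pve/storage.cfg; a sketch of an RBD storage entry with KRBD enabled (storage ID, pool, and monitor addresses are placeholders):

        rbd: ceph-vm
            content images
            krbd 1
            monhost 10.0.0.1 10.0.0.2 10.0.0.3
            pool vm-pool
            username admin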
  15. CEPH cluster. Wanted a comment from other PVE ceph users on ram usage per node.

    What is your Ceph version, and are you using BlueStore? Have a look at osd_memory_target, which I believe defaults to 4 GB in the newest versions. I do believe that the OSDs should not eat more memory than they are told to; correct me if I'm wrong.
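
    A way to inspect and adjust that limit through the Ceph config database (the 6 GiB value is just an example):

        # current value in bytes (4294967296 = 4 GiB is the default)
        ceph config get osd osd_memory_target

        # raise it to 6 GiB for all OSDs
        ceph config set osd osd_memory_target 6442450944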
  16. Cluster Communication

    Post your networking config. I see loss in your outputs, so things are not fine.
  17. help with Proxmox network configuration

    I'd use network one for the Proxmox cluster (corosync), network 2 as you planned, and networks 3 and 4 for the Ceph public/private networks.
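
    The Ceph side of that split would end up in /etc/pve/ceph.conf; the subnets are placeholders:

        [global]
            public_network = 10.0.3.0/24
            cluster_network = 10.0.4.0/24

    Corosync's own network is configured separately, in /etc/pve/corosync.conf.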
  18. RRD update errors

    We had the same issue on 2 clusters, on all freshly installed nodes. As stated above, wait until the time catches up with the timestamp in the RRD database and the errors will be gone.
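
    To see how far ahead the stored timestamp is, one option is rrdtool (assuming it is installed; the node name is a placeholder):

        # last update time in the node's RRD file, as a Unix timestamp
        rrdtool last /var/lib/rrdcached/db/pve2-node/mynode

        # compare with the current time
        date +%s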
  19. PVE 5.4 Ceph Cloud Init Issue

    We expect it to work with KRBD in the near future as well, as some of us might see a performance difference when using KRBD.
