Search results

  1.

    Add OSD - Disk does not show up

    Hi, we had to remove an OSD from a server and we want to add it again, but it does not show up as an available disk, even though the disk itself is visible on the system. Any idea?
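    A hedged sketch (not the confirmed fix from this thread): a disk that previously held an OSD often won't show up as available until its old LVM and partition signatures are wiped. /dev/sdX below is a placeholder for the actual device.

        # Assumption: the disk is no longer in use; zapping destroys its data.
        ceph-volume lvm zap /dev/sdX --destroy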
  2.

    [SOLVED] clock skew detected on mon A, B

    Okay, I was able to fix it:

    * apt purge ntp
    * vi /etc/systemd/timesyncd.conf
    * NTP=<yourservers>
    * timedatectl set-ntp true
    * timedatectl status

    15 minutes later, the skew issue went away. Thanks!
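    A minimal sketch of the steps above, assuming systemd-timesyncd is in use; the server name is a placeholder for <yourservers>. Note that the NTP= entry belongs in the [Time] section of /etc/systemd/timesyncd.conf:

        # /etc/systemd/timesyncd.conf (placeholder server):
        #   [Time]
        #   NTP=ntp.example.com
        apt purge ntp             # remove the conflicting ntp daemon
        timedatectl set-ntp true  # enable systemd-timesyncd
        timedatectl status        # verify synchronization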
  3.

    [SOLVED] clock skew detected on mon A, B

    Hi, we have 3 monitors in our Ceph cluster. Two of them are reporting "clock skew detected on mon A, B"; the third one is green. We have ntpdate installed, and the time seems to be fine. How can we get rid of this error?
  4.

    HA Groups Priorities

    Great, thanks a lot! Does Proxmox decide based on memory usage? For example:

    Node1: 256 GB memory free | Prio 10
    Node2: 100 GB memory free | Prio 100
    Node3: 50 GB memory free | Prio 1000

    Where will a new VM with 20 GB of memory be created?
  5.

    HA Groups Priorities

    Hi there, in Datacenter => HA => Groups it's possible to define priorities. Is 1 the highest priority, or is higher better?

    Node 1: Prio 1
    Node 2: Prio 10

    Which one now has the higher priority?
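    For illustration (not from this thread): in Proxmox VE HA groups a higher number means higher priority, so resources prefer the node with the largest value. A hedged example with placeholder group and node names:

        # node1 (priority 2) is preferred over node2 (priority 1)
        ha-manager groupadd prefer_node1 --nodes "node1:2,node2:1"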
  6.

    Enable HA on all VM's

    Thanks Thomas, we have a 6-node Ceph cluster. None of the VMs have HA enabled at the moment. What happens if a host fails and the VMs have no HA enabled? Can we start them manually on another host, or are they blocked on the failed host?
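    A hedged sketch (not from this thread) of manually recovering a VM from a dead node, assuming shared storage and that the failed node is really offline; node names and the VMID are placeholders. VM configs live in the clustered /etc/pve filesystem, so moving the file reassigns the VM:

        # Only do this if the failed node is definitely down.
        mv /etc/pve/nodes/failednode/qemu-server/100.conf \
           /etc/pve/nodes/survivingnode/qemu-server/100.conf
        qm start 100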
  7.

    Enable HA on all VM's

    Hi guys, how can I define that a newly created VM is HA-enabled?
  8.

    Thick Provisioning to Thin Provisioning

    We're migrating from local storage to Ceph.

    rbd du -p cephpool1
    NAME           PROVISIONED  USED
    vm-100-disk-0  32 GiB       32 GiB
    vm-102-disk-0  40 GiB       40 GiB
    vm-104-disk-0  41 GiB       41 GiB
    vm-107-disk-0  72 GiB       72 GiB
    vm-108-disk-0  41 GiB       41 GiB

    Backup / Restore is...
  9.

    Thick Provisioning to Thin Provisioning

    Any idea if that's possible?
  10.

    Cannot Migrate VM to another Node

    I moved the affected VMs using backup/restore; now they're migrated. Thanks a lot for your help, fabian!
  11.

    Thick Provisioning to Thin Provisioning

    Hi, I have some huge VMs which are thick-provisioned in raw format and which I want to change to thin provisioning. What is the correct way, or is it even possible? Thanks a lot for your advice!
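    A hedged sketch (not the confirmed answer from this thread) of one possible route, assuming the target is a Ceph RBD storage, which is thin-provisioned by design; the VMID, disk key, and storage names are placeholders:

        qm move_disk 100 scsi0 cephpool1 --delete               # move the raw disk onto RBD
        qm set 100 --scsi0 cephpool1:vm-100-disk-0,discard=on   # let the guest release blocks
        # then, inside the guest:
        fstrim -av                                              # trim unused space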
  12.

    Cannot Migrate VM to another Node

    We currently have some 5.4 hosts with local storage and some 6.1 hosts with Ceph. We want to migrate the VMs from 5.4 to the new hosts. The source host has local storage (local-lvm). VM config: local-lvm shows the two disks. All VMs which we can migrate have only one disk attached and no...
  13.

    Cannot Migrate VM to another Node

    I figured out that the two "problem VMs" have two disk images, disk-0 and disk-1, in local-lvm. In the hardware tab of the VM, I can only see disk-0 assigned. When I try to delete disk-1, I get an error that the VM with the ID <> exists. A qm rescan --vmid... does not help; the disk-1 does...
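    A hedged sketch (not the confirmed fix from this thread) for inspecting and removing an orphaned logical volume directly via LVM; the volume group and LV names are placeholders:

        lvs                              # list logical volumes and find the orphan
        lvremove /dev/pve/vm-106-disk-1  # remove it only once nothing references it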
  14.

    Cannot Migrate VM to another Node

    Thanks a lot. We're currently doing a bulk migration of VMs. Most of the VMs are migrated, but we have some VMs which cannot be migrated. All are in local-lvm on the source node, and we try to migrate with the following command: qm migrate 106 <dsthost> -online --with-local-disks...
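    A hedged sketch of a bulk-migration loop (not from this thread); the destination host name is a placeholder, and qm list only covers VMs on the node it runs on:

        for vmid in $(qm list | awk 'NR>1 {print $1}'); do
            qm migrate "$vmid" dsthost --online --with-local-disks
        done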
  15.

    Cannot Migrate VM to another Node

    Thanks for your fast answer! How can I safely drop that?
  16.

    Cannot Migrate VM to another Node

    Hi, we are trying to migrate a VM to another node and it fails with the following error. Any idea?
  17.

    Enable HA on all VM's

    Thanks a lot, we'll try that!
  18.

    Enable HA on all VM's

    Hi, we're migrating >100 VMs from local storage to a Ceph cluster. Is there a command to enable HA on all VMs at once? We have no local storage any more.
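    A hedged sketch (not from this thread): ha-manager can register each VM as an HA resource, and a shell loop over qm list covers every VM on the current node:

        for vmid in $(qm list | awk 'NR>1 {print $1}'); do
            ha-manager add "vm:$vmid"
        done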
  19.

    New Ceph Cluster: degraded data redundancy 256 pgs undersized

    Hi, we have a new Ceph cluster running on 3 nodes. Each node has 6 x 1.96 TB SSDs and 2 x SAS disks for the system. After creating the OSDs and the pool, we get the following warning: degraded data redundancy, 256 pgs undersized. We created a Ceph pool with 256 PGs.

    ceph osd df tree:
    ID CLASS WEIGHT...
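    A hedged diagnostic sketch (not the confirmed resolution of this thread): undersized PGs usually mean the CRUSH rule cannot place the configured number of replicas, so compare the pool's replica settings with what CRUSH can actually map. The pool name is a placeholder:

        ceph osd pool get cephpool1 size      # configured replica count
        ceph osd pool get cephpool1 min_size  # minimum replicas to serve I/O
        ceph pg dump_stuck undersized         # which PGs are short on replicas
        ceph osd crush rule dump              # check the failure domain (host vs. osd)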
