Search results

  1. aaron

    High RAM usage in KVM processes & OOM errors

    Can you show the configs of the VMs in comparison to what memory they are using? `qm config {vmid}`. Some memory overhead is to be expected. But it would be interesting to see how the one with the potentially huge overhead is configured. How much memory do you assign to all guests in total? Do you...
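
    A minimal sketch of how to compare the configured memory against the resident memory of the KVM processes (the vmid 100 is a placeholder):

        # configured memory and ballooning of one guest
        qm config 100 | grep -E '^(memory|balloon)'

        # resident set size of all running KVM processes, largest first
        ps -o pid,rss,args -C kvm --sort=-rss | head
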
  2. aaron

    Tagging systems

    The tags are available in the GUI by now; this thread is more than a year old. Have a look under Datacenter->Options, there are a few settings for how the tags are displayed.
  3. aaron

    [SOLVED] Moving to ZFS Mirror from GRUB installation

    Anything where you can store backups to. A network share on another NAS or other machine for example. Then use the backup functionality to back up your guests. After that you can modify your server and do a new install with ZFS. Then configure that external storage again to get access to the...
  4. aaron

    [SOLVED] Moving to ZFS Mirror from GRUB installation

    Ok, then a reinstall will be necessary. Are the guests' images located in `local-lvm`? Then it is easiest to create backups of the guests, do a reinstall, and restore the guests with a new target storage.
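
    A rough outline of that workflow, with hypothetical storage names (external-nfs as backup target, local-zfs as the new target storage):

        # back up guest 100 to the external storage before the reinstall
        vzdump 100 --storage external-nfs --mode snapshot

        # after the reinstall, restore it onto the new ZFS storage
        qmrestore /mnt/pve/external-nfs/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs
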
  5. aaron

    Ceph Hyperconverged on Blade servers

    For now, if you want to create multiple OSDs / disk, you need to create them with ceph-volume lvm batch --osds-per-device X /dev/nvme…. If you want to build a Ceph cluster with blades, keep in mind that failure domains might be a bit different, depending on what is locally per blade and what is...
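
    A hedged example of that command with a hypothetical device path and OSD count; --report previews the layout without touching the disk:

        # preview, then create 4 OSDs on a single NVMe drive
        ceph-volume lvm batch --report --osds-per-device 4 /dev/nvme0n1
        ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1
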
  6. aaron

    Add Tags to VMs in Server View

    Under Datacenter->Options there are a few options that control the tags and how they are shown.
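
    For reference, tags can also be assigned on the CLI; a small sketch with a placeholder vmid and tag names:

        qm set 100 --tags 'production;web'
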
  7. aaron

    Restoring backups to a different PVE

    Proxmox VE only shows the backups for the respective namespace. If you want to add multiple namespaces, you need one storage config per namespace.
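
    A sketch of what that could look like in /etc/pve/storage.cfg, with placeholder names, address, and fingerprint; both entries point at the same datastore but different namespaces:

        pbs: pbs-prod
            server 192.0.2.10
            datastore store1
            namespace prod
            username backup@pbs
            fingerprint <fingerprint>

        pbs: pbs-test
            server 192.0.2.10
            datastore store1
            namespace test
            username backup@pbs
            fingerprint <fingerprint>
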
  8. aaron

    Pool Ceph in Storage ZFS

    I am confused. Each OSD should be set up on its own physical disk that is present in the node. Can you post the output of the following commands inside code tags? `ceph osd df tree` and `ceph device ls`
  9. aaron

    Mount external CephFS - no fsname in /etc/pve/storage.cfg

    Thanks for the hint. I sent a quick documentation patch.
  10. aaron

    [SOLVED] Moving to ZFS Mirror from GRUB installation

    How is it installed currently? ZFS with a single disk?
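
    Two quick checks that answer that, assuming the default pool name rpool:

        # current pool layout (single disk vs. mirror)
        zpool status rpool

        # how the system boots (GRUB vs. systemd-boot) and which ESPs are in sync
        proxmox-boot-tool status
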
  11. aaron

    Mount external CephFS - no fsname in /etc/pve/storage.cfg

    cephfs: cephfs
        path /mnt/pve/cephfs
        content backup,vztmpl,iso
        fs-name cephfs

    `fs-name` is the parameter that you most likely need :)
  12. aaron

    unexpected full disk

    How large did you configure the disk images for both? In total more than ~16.5 GiB? Do they have snapshots maybe that take up space?
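
    A couple of ways to check that, assuming the default local-lvm (volume group pve) or a ZFS-backed storage:

        # LVM-thin: per-volume usage, including snapshot volumes
        lvs pve

        # ZFS: space consumed by snapshots per dataset
        zfs list -o name,used,usedbysnapshots -r rpool/data
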
  13. aaron

    PBS Encrypted Backups and Deduplication

    As long as the same key is used, deduplication should work well. Different keys produce different data that gets stored on the PBS. Otherwise it wouldn't be good encryption ;)
  14. aaron

    [SOLVED] Separating the network after the fact

    AFAIU there is an additional network? Make sure that a different IP subnet is used. A vmbrX interface is only needed if the guests are supposed to be in that network as well. Otherwise you can configure the IP directly on the interface (enable autostart) or directly on the bond. For Corosync you can...
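
    A sketch of putting the IP directly on the bond in /etc/network/interfaces; NIC names, bond mode, and address are placeholders:

        auto bond0
        iface bond0 inet static
            address 10.10.10.5/24
            bond-slaves eno1 eno2
            bond-mode active-backup
            bond-miimon 100
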
  15. aaron

    How to restore services on a proxmox+ceph cluster when 2 servers fail?

    So, 4 nodes with OSDs, pool is using a size/min_size of 3/2. 2 nodes die. Some PGs will only have one replica. So far, so good. Make sure to have enough space on the OSDs so that Ceph can restore the second replica on either remaining node. While the pool is IO blocked, the VMs won't be able to access...
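
    To verify the replication settings and the remaining capacity (the pool name is a placeholder):

        ceph osd pool get vm-pool size
        ceph osd pool get vm-pool min_size
        ceph osd df
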
  16. aaron

    How to restore services on a proxmox+ceph cluster when 2 servers fail?

    The additional server is for Ceph, not Proxmox VE. You will need 5 MONs in order to survive the loss of two. -> small Proxmox VE node with a Ceph MON on it. If you run the pools with size/min_size 3/2 and lose two nodes, chances are high that some PGs will have lost two replicas. Until Ceph is...
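
    A quick way to confirm how many MONs exist and how many are currently in quorum:

        ceph mon stat
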
  17. aaron

    Is dedup per server or per datastore ??

    Another thing to consider is whether the clients encrypt their backups. Then the encryption key is another separation. If two different encryption keys generated the same chunk, it wouldn't be good encryption. ;)
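
    For context, the client-side key handling looks roughly like this (repository and paths are placeholders); backups made with the same key can deduplicate against each other:

        # create a client encryption key once and reuse it for all backups
        proxmox-backup-client key create /root/pbs-encryption.key
        proxmox-backup-client backup root.pxar:/ \
            --repository backup@pbs@192.0.2.10:store1 \
            --keyfile /root/pbs-encryption.key
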
  18. aaron

    Onboard NVMe controller with ZFS (ZFS is not compatible with disks backed by a hardware RAID controller)

    Ideally, anything with Power-Loss-Protection (PLP); the cheapest ones are just below 300€ by now. With consumer SSDs it is always a bit hit or miss whether they are decent enough. Consumer SSDs are optimized for a desktop workload, where they will see writes happening in short bursts and data...
  19. aaron

    Onboard NVMe controller with ZFS (ZFS is not compatible with disks backed by a hardware RAID controller)

    I hope not for VMs… those drives are terribly slow once their internal write cache is full! That warning is there permanently, we do not try to detect if the disks are connected via a HW RAID controller or not as that would be hard to do reliably.
