Search results

  1. ZFS USED twice as LUSED

    Hi, I noticed that two of my backup VMs use less space internally than is shown on the hypervisor. Here is an example. Inside the VM: /dev/sdb 4.0T 1.8T 2.2T 44% /backup Outside the VM: rpool/data/vm-140-disk-1 2.70T 1.58T 2.70T - VOLSIZE LUSED USED...
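The USED/LUSED gap described above can be inspected directly with `zfs get`. A possible first step (the dataset name is taken from the post; the property list is standard ZFS):

```shell
# Compare logical vs. physical accounting for the zvol from the post above.
# A volblocksize that doesn't match the guest filesystem layout, or RAIDZ
# padding, can push USED well above LUSED.
zfs get volsize,used,logicalused,referenced,compressratio,volblocksize \
    rpool/data/vm-140-disk-1
```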
  2. Delete multiple backups at once.

    I have many backups for many VMs using PBS. Now I want to remove most of them for just one VM and one of its disks. I could click and delete them one by one in the PM or PBS GUI, but I would like to select multiple so I don't have to click thousands of times. Any suggestions?
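One possible CLI route instead of clicking through the GUI, sketched with `proxmox-backup-client` (the repository string and backup group are examples; verify the exact subcommands against your PBS version's man page):

```shell
# Point the client at the datastore (example repository string).
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'

# List the snapshots of one VM's backup group, then prune in bulk,
# e.g. keep only the newest 3 instead of deleting thousands by hand.
proxmox-backup-client snapshot list vm/140
proxmox-backup-client prune vm/140 --keep-last 3
```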
  3. ZFS zvol on HDD locks up VM

    Hmm... if there were no downside, it would be the default setting, I think. I guess you lose disk space then, if the block size in the PM GUI for that datastore is set to 128k instead of 8k. Am I correct? Does anyone see any other downsides? I guess I will do some tests when I have the time and test...
  4. ZFS zvol on HDD locks up VM

    It might be related to the volblocksize of zvols. If you have the time, please match volblocksize to the sector size of the underlying disks, and then also match it in the filesystem you use in your VM. Then run the same tests. But hopefully someone with more experience will join this conversation.
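A hedged sketch of how one might check and align the block sizes the post talks about (the storage ID and dataset name are examples; note that volblocksize is fixed once a zvol exists, so the storage setting only affects newly created disks):

```shell
# What an existing VM disk was created with:
zfs get volblocksize rpool/data/vm-100-disk-0
# Physical and logical sector sizes of the underlying disks:
lsblk -o NAME,PHY-SEC,LOG-SEC
# Default block size for zvols created on this Proxmox ZFS storage from now on:
pvesm set local-zfs --blocksize 8k
```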
  5. KVM change hardware without reboot

    No, I have not. Thank you for pointing me in the right direction and for taking the time. :-)
  6. ZFS zvol on HDD locks up VM

    I'm also interested in this. @udo @fabian @dietmar @tom @LnxBil
  7. KVM change hardware without reboot

    Is there a way to change the CPU or RAM settings of a KVM instance without rebooting it?
  8. pve-zsync: recovering VM?

    I use zfs rename on the target, then copy and, if needed, fix the VM config files from /var/lib/pve-zsync. After the first node is fixed and has no remnants of the old VMs, I set up a job in the opposite direction as well. You can run sync jobs manually, so they do not have to be scheduled (and you can trigger them by hand even if they are).
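A rough sketch of the recovery steps described above (all VMIDs, dataset names, and destinations are illustrative; the exact config filenames under /var/lib/pve-zsync depend on the job, so list the directory first):

```shell
# Rename the replicated dataset to the name the VM config expects
# (source and target names are examples):
zfs rename rpool/data/vm-140-disk-1-rep rpool/data/vm-140-disk-1
# Restore the VM config that pve-zsync keeps alongside the job:
ls /var/lib/pve-zsync/
cp /var/lib/pve-zsync/<saved-config> /etc/pve/qemu-server/140.conf
# Sync jobs can also be run by hand, outside any schedule:
pve-zsync sync --source 140 --dest 192.168.1.2:rpool/data --verbose
```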
  9. [SOLVE] How to stop the "nonexisting" replication job?

    By the time I decided to try this, the stuck job was already gone. It took exactly one hour for it to fail and for the system to move on. :-)
  10. [SOLVE] How to stop the "nonexisting" replication job?

    Hi, I have replication set to every minute. This VM has the QEMU agent, but it is currently not working. When the backup started, and replication as well, the replication never actually started and is stuck in the syncing state: 162-0 Yes local/p35 2020-12-15_21:19:51 pending...
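For reference, a stuck job like this can also be inspected and force-removed from the CLI (job ID 162-0 is the one from the post):

```shell
# Show the state of all replication jobs on this node:
pvesr status
# Force removal of a job whose target is unreachable or inconsistent:
pvesr delete 162-0 --force
```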
  11. KVM vs LXC for web server

    How is the disk cache option set in the KVM VM?
  12. (ZFS) Snapshots put a lot of processes in D state

    If you use KVM VMs and the freezing of the filesystem before a snapshot bothers you, you can disable it by turning off the QEMU agent option on the VM.
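A minimal sketch of that setting change (VMID 140 is an example; the change takes effect after the VM is restarted):

```shell
# Turn off the QEMU guest agent option, so vzdump/snapshot skips fs-freeze:
qm set 140 --agent 0
```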
  13. KVM vs LXC for web server

    Surprising! FYI, I have never had a KVM VM outperform LXC (unless the LXC was limited :-).
  14. PM 6.2 KVM Live migration failed (bug or ?)

    FYI, I live migrated another 50 KVM VMs without issue, including WHM/cPanel/CloudLinux. Live migration only failed for that one VM with two disks. Some time in the future, after I upgrade both nodes to the most recent PM version, I will test again. If it fails again, then it is reproducible, so I will open...
  15. High CPU usage on ZFS

    Hi guys, all your suggestions are nice. However, live migration with LVM takes forever, because it syncs the whole VM, whereas with ZFS and replication it takes just a few seconds. I solved this by installing the Intel microcode, and now the host uses much less CPU, or rather works as expected. :)...
  16. [SOLVED] Upgrade from 6.0 to 6.3 CPU usage increase by 100% :-)

    I fixed it by installing the latest Intel microcode, and now every operation is much faster. Even ZFS CPU usage decreased dramatically.
  17. PM 6.2 KVM Live migration failed (bug or ?)

    Small update: I live migrated 5 more VMs, and all worked. The only one that died was the WHM/cPanel VM with two disks. Maybe it is related to the number of disks, ...
  18. PM 6.2 KVM Live migration failed (bug or ?)

    Live migration of a VM with two disks failed. The VM also died on the source side. Offline migration (as it was dead anyway) worked, and the VM recovered afterwards. I'm attaching the live migration log. Should I report a bug or..? Proxmox Virtual Environment 6.2-12 Virtual Machine 142 (XYZ) on node 'p37' Logs ()...
  19. HA cluster on 2 servers with ZFS

    Ask yourself: do you have quorum in a 2-node cluster when one node dies?
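A way to check, sketched with the standard cluster tools (the expected-votes override is an emergency measure only, not a substitute for a third vote such as a QDevice):

```shell
# With 2 nodes and one down, this will report "Quorate: No":
pvecm status
# Emergency-only: lower the expected vote count so the surviving node can run:
pvecm expected 1
```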
