Search results

  1. K

    [SOLVED] Ransomware protection?

    So, to sum it up: set up a user that has the DataStoreBackup privilege, use it for backups from PVE (or from the backup agent on a physical machine), and obviously do not try to set a pruning schedule on the client machine (PVE or physical) because it will fail. Use pruning rules on the PBS server.
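    The setup described above can be sketched on the PBS host roughly as follows. The datastore name `store1` and user `backup@pbs` are hypothetical; the role is spelled `DatastoreBackup` in current PBS releases:

    ```shell
    # Create a dedicated, low-privilege backup user on the PBS server
    proxmox-backup-manager user create backup@pbs

    # Grant it only the DatastoreBackup role on the target datastore,
    # so it can write and restore backups but cannot prune or remove them
    proxmox-backup-manager acl update /datastore/store1 DatastoreBackup \
        --auth-id backup@pbs
    ```

    Prune/retention rules are then configured on the PBS server itself (e.g. via the datastore's prune options in the web UI), never from the client side.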
  2. K

    [SOLVED] Ransomware protection?

    Thanks a lot. So if I get it right, the DataStoreBackup role allows for backups to be made and restored but not deleted in any way?
  3. K

    [SOLVED] Ransomware protection?

    I have just read the documentation about the new PBS, and I am concerned about ransomware (or worse, attacks carried out by humans, who are far more intelligent than automated ransomware). The backup model is "push", I mean, the client machine (the one that is being backed up) accesses...
  4. K

    Issues when cloning VMs

    Some time has passed, so I don't remember exactly, but I believe that I just waited for the migration task to reach 100% (on the web interface) and then entered a lot of "udevadm trigger" commands in the console. Just "udevadm trigger" and enter, and then again and again until somehow...
  5. K

    Issues when cloning VMs

    I'm experiencing this issue, too. I have 2 virtual disks, one is 134217728 bytes, and it moved properly. The other is 125861888 bytes and I cannot get it to migrate to LVM-THIN, even using "udevadm trigger". On PVE version pve-manager/6.0-4/2a719255 (running kernel: 5.0.15-1-pve) EDIT: It...
  6. K

    Benchmark: ZFS vs mdraid + ext4 + qcow2

    Guletz and mbaldini, thanks for your answers. I am not a ZFS expert; I used it for the first time in Proxmox, because it is Proxmox's default choice for software RAID. I have always used md and ext4 before. What baffles me is the fact that ZFS has so many issues in Proxmox, and that, based on what I...
  7. K

    Benchmark: ZFS vs mdraid + ext4 + qcow2

    I will surely reduce the ZFS ARC, because it's now clear to me that it takes up too much RAM and never gives it back, even under low-memory conditions. It should, the documentation says it does, but it's not true. It does not give up a single byte of RAM. Then I will try to use your...
  8. K

    Benchmark: ZFS vs mdraid + ext4 + qcow2

    I am not overprovisioning (or at least, I believe I am not). I have, for example, a 16 GB server that had, until yesterday: 8 GB of ZFS ARC cache, 3 GB for a VM, 1 GB for the other (so we are at 12 GB total), and I could not start another VM with 1 GB because KVM told me it could not allocate...
  9. K

    Official sollution for SWAP in PM 5.* and ZFS?

    You're right. I will eventually open another topic if needed.
  10. K

    Official sollution for SWAP in PM 5.* and ZFS?

    I don't have space for swap because I set up PVE with its installer, and it leaves no space available. I have just now set the ARC cache to 4 GB max on a 16 GB machine, and I should have more or less 5 GB free (considering the VMs and the ARC). I will now see how it works. I have set...
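    The 4 GB ARC cap mentioned above can be made persistent with a ZFS module option. A minimal sketch (the 4 GiB value, 4294967296 bytes, matches the post; adjust to taste):

    ```shell
    # Cap the ZFS ARC at 4 GiB (value in bytes: 4 * 1024^3)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

    # Apply immediately, without rebooting
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

    # Rebuild the initramfs so the limit also applies at early boot
    update-initramfs -u
    ```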
  11. K

    Official sollution for SWAP in PM 5.* and ZFS?

    Mailinglists, I have tried setting swap use to a minimum (swappiness at zero), and in fact crashes became less frequent, but they did not go away completely. I will now try to limit ARC max to 4 GB (on a 16 GB RAM, 2 TB hard disk server with just 3 small VMs on it) and I hope to get back some RAM to run...
  12. K

    Benchmark: ZFS vs mdraid + ext4 + qcow2

    LnxBil, thanks for your reply. Please bear with me, I am quite desperate because of OOM issues and crashes (slowness is an issue, but not the main one). I made a mistake in saying that I used RAIDZ-1. It's simply RAID1 made using ZFS. I got the term wrong. What I'm trying to accomplish is...
  13. K

    Official sollution for SWAP in PM 5.* and ZFS?

    I am really baffled. I have not enabled dedup. I have just installed PVE from its ISO image, setting up disks to use RAIDZ-1. I am running 5 servers, in 5 different environments. No clustering, no "fanciness" at all. Just simple single servers with local storage. On different hardware, with 16...
  14. K

    Official sollution for SWAP in PM 5.* and ZFS?

    I understand that, performance apart, you managed to "tame" ZFS so it does not crash the host when it eventually ends up eating all of the available RAM. What did you do? I mean, what's your tuning procedure for a freshly installed PVE with ZFS? (I assume you are using RAIDZ-1 as disk...
  15. K

    Official sollution for SWAP in PM 5.* and ZFS?

    Sadly, md RAID is not officially supported, while ZFS is definitely not production ready, IMHO. See this: https://forum.proxmox.com/threads/benchmark-zfs-vs-mdraid-ext4-qcow2.49899/
  16. K

    Benchmark: ZFS vs mdraid + ext4 + qcow2

    After fighting with ZFS's memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
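    A rough sketch of the replacement setup described above. The device names /dev/sdb and /dev/sdc and the storage ID vmstore are assumptions, not taken from the post:

    ```shell
    # Assemble a RAID1 mirror from two spare disks with md
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Plain ext4 on top of the mirror, mounted as a VM image store
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/vmstore
    mount /dev/md0 /mnt/vmstore

    # Register the mount point as a PVE directory storage for qcow2 images
    pvesm add dir vmstore --path /mnt/vmstore --content images
    ```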
  17. K

    [SOLVED] PVE 5.2-1 High (6%) cpu use on idle Linux guests

    It worked! And since I don't have GUI, it's just fine anyway. Thanks. Also, no need to reboot.
  18. K

    [SOLVED] PVE 5.2-1 High (6%) cpu use on idle Linux guests

    I have a quite old Dell R310 with a Xeon X3440 @ 2.53GHz, with the latest Dell BIOS and various firmware updates (they should be patched for Spectre/Meltdown). I have installed the latest PVE and set up 2 Linux VMs (Devuan 2, basically Debian 9 without systemd). I have 5% to 6% CPU always in use...
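    The actual fix that solved this thread is cut off in the excerpts; a commonly cited remedy for constant idle CPU use in headless Linux guests on PVE is disabling the emulated USB tablet pointer, sketched here for a hypothetical VM ID 100:

    ```shell
    # Disable the USB tablet device for VM 100; its emulation can cause
    # periodic host-side work even when the guest is idle. Harmless on
    # headless guests, since the tablet only helps mouse tracking in the GUI.
    qm set 100 --tablet 0
    ```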
  19. K

    Sudden reboots while running backups (maybe ZFS ram issue?)

    Regarding swap on ZFS, I have found this information: https://github.com/zfsonlinux/zfs/wiki/FAQ#using-a-zvol-for-a-swap-device Note that, AFAIK, PVE does not set these parameters as stated in the FAQ.
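    The zvol properties that the linked FAQ recommends for a swap device can be sketched as follows; the pool name rpool and the 4G size are assumptions:

    ```shell
    # Create a zvol with the swap-friendly properties from the ZFS on Linux FAQ:
    # block size matching the page size, cheap compression, synchronous writes,
    # minimal caching, and no automatic snapshots
    zfs create -V 4G -b $(getconf PAGESIZE) \
        -o compression=zle \
        -o logbias=throughput \
        -o sync=always \
        -o primarycache=metadata \
        -o secondarycache=none \
        -o com.sun:auto-snapshot=false \
        rpool/swap

    # Format and enable it as swap
    mkswap -f /dev/zvol/rpool/swap
    swapon /dev/zvol/rpool/swap
    ```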