It seems to me that swappiness is ignored, or that it works in a way no one comprehends (or at least I don't). My much smaller server, with 16 GB RAM and just 4 VMs using about 7 GB RAM total for themselves, is swapping too much. I have 5 GB swapped out with vm.swappiness at 60 (the default) and more...
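For reference, a minimal sketch of how swappiness can be inspected and lowered on a host like this. These are standard Linux sysctl commands; the value 10 is my own arbitrary example, not a Proxmox recommendation, so treat this as a config fragment to adapt:

```shell
# Show the current value (60 is the kernel default).
cat /proc/sys/vm/swappiness

# Lower it at runtime; smaller values make the kernel less eager to swap.
sysctl -w vm.swappiness=10

# Persist the setting across reboots.
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
```

Note that lowering swappiness does not bring already-swapped pages back into RAM; `swapoff -a && swapon -a` forces that, if enough free memory is available.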
I also have a standard installation, only addition I installed is openvpn that I use for management. Everything else is standard PVE, single host with local LVM storage and no ceph.
When this happens, I see that these values are zero and not 1 as they should be...
Happened to me too. Firewall stopped working for all VMs but was still working for the pve host itself. I don't have ceph. I don't know what made the firewall stop working.
This is A VERY VERY VERY BAD BUG.
I will switch to firewall rules configured inside the VMs.
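The posts above don't say exactly which values dropped to zero; my assumption is that they refer to the bridge-netfilter sysctls that pve-firewall depends on to filter bridged VM traffic. A sketch for checking and restoring them (config fragment, verify against your own host before relying on it):

```shell
# pve-firewall needs these at 1 so bridged traffic traverses iptables.
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables

# Re-enable them if they have dropped to 0 (assumption: this matches
# the symptom described above).
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1

# Check the overall firewall status on the PVE host itself.
pve-firewall status
```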
I am modernizing a setup that I inherited, and wanted to P2V the elderly server (17 years old) before it breaks down, so that I can migrate everything to newer software later. It seems that the old hardware will stay in production until I'm able to configure a new server and move everything to it.
I have tried (and rolled back) a P2V of a Win2003R2 server. Everything seemed fine with IDE virtual disks, but when a backup task (snapshot mode) started on the VM, I got a lot of IDE timeouts in the Windows guest log, and then the VM crashed. I found the VM simply switched off. Restarted it, found the...
So, to sum it up: set up a user that has the DatastoreBackup privilege, use it for backups from PVE (or from the backup agent on a physical machine), and obviously do not try to set a pruning schedule on the client machine (PVE or physical), because it will fail. Use pruning rules on the PBS server instead.
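A hedged sketch of that setup on the PBS side. The datastore name `store1` and the user name `backup@pbs` are placeholders of mine, and the exact CLI syntax may differ between PBS versions, so check `proxmox-backup-manager help` before running anything:

```shell
# Create a dedicated backup user on the PBS host.
proxmox-backup-manager user create backup@pbs

# Grant it only the DatastoreBackup role on one datastore, so that a
# compromised client cannot prune or delete existing snapshots.
proxmox-backup-manager acl update /datastore/store1 DatastoreBackup \
    --auth-id backup@pbs
```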
I have just read the documentation about the new PBS, and I am concerned about ransomware (or worse, attacks carried out by humans, who are far more intelligent than automated ransomware). The backup model is "push", I mean, the client machine (the one that is being backed up) accesses...
Some time has passed, so I don't remember exactly, but I believe that I have just waited for the migration task to arrive at 100% (on the web interface) and then I just entered a lot of "udevadm trigger" commands in console. Just "udevadm trigger" and enter and then again and again until somehow...
I'm experiencing this issue, too. I have 2 virtual disks, one is 134217728 bytes, and it moved properly. The other is 125861888 bytes and I cannot get it to migrate to LVM-THIN, even using "udevadm trigger". On PVE version pve-manager/6.0-4/2a719255 (running kernel: 5.0.15-1-pve)
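The repeated manual `udevadm trigger` workaround from the earlier post can be scripted. This is only a sketch of that procedure (the 10-attempt cap and the 1-second pause are arbitrary choices of mine):

```shell
# Re-trigger udev events repeatedly, as in the manual workaround.
for i in $(seq 1 10); do
    udevadm trigger
    sleep 1
done

# udevadm settle waits for the udev event queue to drain, which may be
# a cleaner alternative to blind retries.
udevadm settle
```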
EDIT: It...
Guletz and mbaldini, thanks for your answers. I am not a ZFS expert; I used it for the first time in Proxmox, because it's the default choice there for software RAID. I have always used md and ext4 before. What baffles me is the fact that ZFS has so many issues in Proxmox, and that, based on what I...
I will surely reduce the ZFS ARC, because it's now clear to me that it takes up too much RAM and never gives it back, even under low-memory conditions. It should, and the documentation says it does, but in my experience it does not give up a single byte of RAM. Then I will try to use your...
I am not overprovisioning (or at least, I believe I am not). For example, I have a 16 GB server that had, until yesterday: 8 GB of ZFS ARC cache, 3 GB for one VM, 1 GB for another (so we are at 12 GB total), and I could not start another VM with 1 GB because KVM told me it could not allocate...
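The budget described above, sketched as arithmetic (the ~1 GiB of host overhead for PVE services and the kernel is my assumption):

```shell
# Rough memory budget for the 16 GiB host described above, in GiB.
TOTAL=16
ARC=8              # ZFS ARC cap at the time
VMS=$(( 3 + 1 ))   # the two running VMs
HOST=1             # assumption: PVE services + kernel overhead
FREE=$(( TOTAL - ARC - VMS - HOST ))
echo "$FREE GiB nominally free"
```

Even with a few GiB nominally free, starting a new VM can still fail, because the ARC tends to release memory slowly under sudden pressure while KVM needs its allocation up front.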
I don't have space for swap because I have set up PVE with its installer, and it leaves no space available. I have just now set ARC cache to 4 GB max on a 16 GB machine, and I should have more or less 5 GB free (considering the VMs and the ARC). I will now see how it works. I have set...
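A sketch of capping the ARC at 4 GiB as described. The module parameter and paths are the standard OpenZFS ones; double-check them on your PVE version before applying:

```shell
# zfs_arc_max is expressed in bytes; compute a 4 GiB cap.
ARC_MAX=$(( 4 * 1024 * 1024 * 1024 ))
echo "options zfs zfs_arc_max=$ARC_MAX"
# Put that line in /etc/modprobe.d/zfs.conf, run `update-initramfs -u`,
# and reboot; or apply it at runtime with:
#   echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```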
Mailinglists, I have tried setting swap use to a minimum (swappiness at zero), and in fact crashes became less frequent but did not go away completely. I will now try to limit ARC max to 4 GB (on a server with 16 GB RAM, a 2 TB hard disk, and just 3 small VMs) and I hope to get back some RAM to run...
LnxBil, thanks for your reply.
Please bear with me, I am quite desperate because of OOM issues and crashes (slowness is an issue, but not the main one).
I made a mistake in saying that I used RAIDZ-1. It's simply RAID1 made using ZFS. I got the term wrong.
What I'm trying to accomplish is...
I am really baffled. I have not enabled dedup. I have just installed PVE from its ISO image, setting up disks to use RAIDZ-1. I am running 5 servers, in 5 different environments. No clustering, no "fanciness" at all. Just simple single servers with local storage. On different hardware, with 16...