I tried. I can get the virtual media thanks to a container, but the problem is getting the ISO onto the server. With iDRAC6 I can't access a Samba share.
I think I should ask on a Dell forum.
Hmm, interesting point. I hadn't noticed that pattern and don't reboot my guests often, which is why I haven't responded to provide logs yet.
I'll have to keep an eye on this... the node where I run the most guests is usually above 80% memory usage.
I installed a fresh version of Proxmox Backup Server and added it to my 7-node Ceph-backed production cluster. I set up a test backup job to back up a set of test VMs.
When I ran the backup job, the backups ran for a few GB and then appeared to...
Thanks. startUpdate() comes pretty close and gives me a fast refresh. I still have to wait for **several** load / datachanged events until the cloned guest shows up, but for now it helps.
I want a...
A single vdev? That's the worst case - there is no way to make it slower than that! (Even a RaidZ3 has the same IOPS = same speed...)
My recommendation would have been 7*mirrors --> seven times the IOPS of that construct...
Sorry, I have no trivial...
Find out yourself with your hardware:
apt install sysstat
Start two terminals:
iostat -dx 2
top -H
Increase readers until the load on iostat for every disk is ~90% (or some lower limit to leave headroom for other activities on the same zpool)...
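The ~90% figure above is the %util column at the far right of `iostat -dx` output. A hedged sketch of pulling that column out with awk, using a fabricated sample line in the shape of a device row (your iostat version may have a different number of columns, but %util stays last):

```shell
# Fabricated sample in the shape of an `iostat -dx` device row;
# %util is the last column.
sample='sda 120.0 30.0 61440.0 15360.0 0.00 0.00 0.50 0.15 512.0 6.0 90.0'

# $NF is awk's last field, i.e. %util regardless of the exact column count.
echo "$sample" | awk '{print $NF}'
```

Watching only that number per disk makes it easier to tell when you've hit the ~90% ceiling while adding readers.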
You may want to enable "Backup fleecing", which directs writes during the backup to a local fast image to avoid slowing down guests: https://pve.proxmox.com/wiki/Backup_and_Restore#_backup_modes
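If you want fleecing on for every job rather than per job in the GUI, a hedged sketch of the corresponding /etc/vzdump.conf entry (option names as documented for recent PVE releases; "local-zfs" is a placeholder storage ID - verify both against your version and setup):

```
# /etc/vzdump.conf - assumed syntax, check `man vzdump.conf` on your node
fleecing: enabled=1,storage=local-zfs
```

The fleecing storage should be fast local storage, since it absorbs guest writes for the duration of the backup.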
While it's surely a good idea to disable SSH login for root, I still think that for even better isolation of the qdevice, a setup like the one described by @aaron here is to be preferred...
You must first upgrade to the latest 8.4 (I think it's 8.14.17 at the time of writing) and to Ceph 19.2.3 (Ceph Reef isn't supported in PVE9). This is clearly stated in the upgrade docs [1]; the Ceph upgrade docs are here [2].
Once you are on the latest 8.4.x, and...
This is already available; Proxmox just has to integrate it. Try something like this:
qm guest cmd VMID get-fsinfo | jq
You can see what that looks like here.
I also searched the bug tracker for get-fsinfo (I couldn't remember the command...
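For reference, a hedged sketch of summarizing that output with jq, using a fabricated JSON sample in the shape the QEMU guest agent's guest-get-fsinfo returns (field names like used-bytes/total-bytes are as documented for the agent, but verify against your VM's actual output):

```shell
# Fabricated sample in the shape of `qm guest cmd VMID get-fsinfo` output.
json='[{"name":"sda1","mountpoint":"/","type":"ext4","used-bytes":5368709120,"total-bytes":21474836480}]'

# Per-filesystem usage; hyphenated JSON keys need jq's .["..."] form.
echo "$json" | jq -r '.[] | "\(.mountpoint): \(.["used-bytes"] * 100 / .["total-bytes"])% used"'
```

With the sample above this prints `/: 25% used`, which is roughly the kind of per-guest figure people are asking the GUI to show.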
PVE cannot determine the free disk space inside the VM from the outside easily.
It would have to detect and parse the disk partition table and then read the filesystem metadata.
This stuff is trivial inside the VM, because there you have all the...
PBS itself doesn't care where the hostname/IP comes from; you just can't use the built-in network management API/UI if you want to rely on DHCP.
And for most people, LVM is perfectly sufficient. As a replacement for snapshots, simply use PBS backups. That's how I've handled a lot of former vSphere setups. At the next hardware refresh, they then often migrate towards Ceph. I have...
A small note: if you do that, please don't open the job's live log. Error messages about missing chunks come in so fast that your GUI freezes. So either monitor via the CLI or just let it run.
Of course...
Okay, so your /var directory seems to be taking up quite a bit more space than I think it should. You could try du -h /var | grep -P '^\d+\.?\d*G' to list any directories that take up a GB or more. Maybe there is one major culprit in there ;)
And...
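A quick sanity check of that grep pattern against fabricated `du -h`-style lines (size, tab, path); only entries reported in gigabytes should survive the filter:

```shell
# Fabricated du -h output: one K entry, one G entry, one M entry.
# The pattern ^\d+\.?\d*G keeps only lines whose size column is in G.
printf '512K\t/var/cache\n2.3G\t/var/lib\n45M\t/var/log\n' | grep -P '^\d+\.?\d*G'
```

Note that `grep -P` needs GNU grep with PCRE support, which is standard on PVE/Debian.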
Hi,
When the Proxmox installer partitions your drives (using lvm-thin [1] as the default), it allocates part of your storage to the root partition (in your case 60GB) and creates another pool for VM and CT disk storage. This is why your root...