You need to understand how backup works. Specifically in stop mode:
VM is running
PVE tries to shut down the VM:
If the QEMU Guest Agent is configured and running, the shutdown request is issued through it.
Otherwise, it sends an ACPI shutdown signal, which the guest OS may or...
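As a rough sketch of that decision from the CLI (VMID 100 is a placeholder, not from the original post):

```shell
# Hypothetical VMID 100: check whether the QEMU Guest Agent responds.
# If it does, a shutdown (and a stop-mode backup) goes through the agent;
# otherwise PVE falls back to an ACPI shutdown signal.
if qm agent 100 ping; then
    echo "agent reachable: shutdown goes through the guest agent"
else
    echo "no agent: PVE sends an ACPI shutdown signal instead"
fi
qm shutdown 100 --timeout 120   # wait up to 120s for the guest to power off
```

This is only an illustration of the sequence described above, not the exact code path vzdump uses.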
I mean a "RAID0 of two RAIDz2 vdevs". Something like this:
zpool create tank \
raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14
That's true, as I mentioned above:
That happens...
Good point!! I did assume OP had at least a mirror of NVMe drives for the special device. It should be a 3-way mirror to give it the same redundancy as the HDD part.
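For illustration, adding a 3-way mirrored special vdev to a pool like the one above could look like this (device names are placeholders; keep in mind that a special vdev cannot be removed again from a pool with raidz top-level vdevs):

```shell
# Placeholder device names; the 3-way mirror matches the redundancy
# of the raidz2 data vdevs (two disks may fail without data loss)
zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1
```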
None of those settings will really help to increase performance on your big zpool on verify...
Find out yourself with your hardware:
apt install sysstat
Start two terminals:
iostat -dx 2
top -H
Increase the number of readers until iostat shows ~90% utilization on every disk (or some lower limit, to leave headroom for other activities on the same zpool)...
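One way to generate those parallel readers while watching iostat is fio (assuming it is installed; the directory, sizes, and job count are placeholders to adjust for your pool):

```shell
apt install sysstat fio
# 4 sequential readers against a dataset on the zpool; raise --numjobs
# step by step until iostat shows your utilization target on every disk
fio --name=readers --directory=/tank/test --rw=read --bs=1M \
    --size=1G --numjobs=4 --group_reporting
```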
You must first upgrade to the latest 8.4 (I think it's 8.4.17 at the time of writing) and to Ceph 19.2.3 (Ceph Reef isn't supported in PVE 9). This is clearly stated in the upgrade docs [1]; the Ceph upgrade docs are here [2].
Once you are on the latest 8.4.x, and...
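Before each step you can confirm where you stand:

```shell
pveversion        # should report pve-manager 8.4.x before the PVE 9 upgrade
ceph versions     # all daemons should report 19.2.x (Squid) before upgrading PVE
pve8to9 --full    # built-in checklist for the PVE 8 -> 9 upgrade
```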
Yes, it can be run on a live VM in the sense that the command runs, i.e. it doesn't check whether the given RBD image has an owner/lock, so we could suppose it's safe. As you mention, the Ceph docs don't explicitly say whether it can be run live or not. That said, I...
The problem could be that your "trim" stopped working at some point for some reason (e.g. discard wasn't ticked in the VM's disk configuration), and even if fstrim tries to discard the whole free space, the underlying storage stack will only act on...
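A quick way to check and fix that flag (VMID, bus, and volume name are placeholders for your own setup):

```shell
qm config 100 | grep scsi0    # check whether discard=on is set on the disk
# re-set the disk with discard enabled (placeholder volume name);
# the guest needs a reboot or disk reattach for the new flag to apply
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
# then, inside the guest:
fstrim -av
```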
For Windows to properly report used memory to PVE you need:
- VirtIO Balloon driver and service installed and running (installed and enabled automatically by the VirtIO ISO installer). Don't confuse this with the QEMU Guest Agent, which does different...
IIRC, unfortunately pre/post migration hook scripts aren't implemented yet [1]. The source PVE will have no clue that the VM is no longer running on it.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=1996
Hi @RoxyProxy,
Since questions around pricing, company future, and so on came up, we felt it was appropriate to chime in. First, we'd like to thank the Blockbridge customers earlier in the thread for sharing their thoughts.
To be honest...
At https://packages.debian.org/forky/amd64/zfsutils-linux/filelist I can see there are /usr/bin/zarcstat and /usr/bin/zarcsummary
Maybe those were renamed in the newer version?
What about man zarcstat ?
P.S.
Indeed...
Thanks for the heads up. Pretty sure most were created with Ceph Reef, except a few that got recreated recently with Squid 19.2.3. I'm aware of that bug, but given that I don't use EC pools (the Ceph bug report mentions it seems to only happen on OSD...
Best of luck to you *fingers crossed*. In my client's case I had to rebuild the whole cluster and fix Ceph by manually restoring placement groups, which was a pain.
@fstrankowski I'm fully aware of the risks of an OSD being full and know how to deal with that, but in any case an OSD shouldn't break because of that ;)
Fragmentation definitely has an impact on this, and I'll watch it more closely from now on...
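One way to keep an eye on it is BlueStore's allocator fragmentation score, queried via the admin socket on the node hosting the OSD (osd.0 is a placeholder; 0 means no fragmentation, values approaching 1 mean heavy fragmentation):

```shell
ceph daemon osd.0 bluestore allocator score block
```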
Initially I'd like to raise concerns about the amount of available storage already being in use. By default Ceph doesn't allow more than 80% usage, so you'd have to take precautions really soon with these concerns in mind.
I'd highly...
In short: if Ceph warns you about something, do something about it.
I read the full bug report and found this comment [1]: "This issue seems to mostly affect disks which were heavily fragmented." Mine are, and in fact I have some warnings related...
PVE 8.4.14 + Ceph 19.2.3, 3-node cluster. All disks are PCIe NVMe. Different pools, some with zstd compression enabled.
I'm seeing OSDs crashing lately with the same failure. The journal shows that they are unable to properly run RocksDB with an assert...
@Zappes please link the Bugzilla report you mention here [1]. I would love to be able to pull (not push) encrypted syncs where the source is unencrypted for any reason and the destination PBS must store encrypted backups.
[1]...