I thought that it was true, but after reading some answers in this thread now I have doubts.
The real reason I want to know is because of the full_page_writes parameter in PostgreSQL. Setting it to off improves performance greatly (almost x2), but on a non-COW FS it leaves the DB in a...
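For context, the change I mean is just this line in postgresql.conf (the x2 figure is only what I measured on my own workload, so take it as an example):
full_page_writes = off    # only safe if the filesystem/storage never tears 8K page writes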
Got it now. Thank you for the conversation. Very instructive.
As a last point: in the case of a snapshot backup of a Windows guest with the qemu agent running and calling for a VSS snapshot, can't we say it's an inconsistency-risk-free case like the Linux guest one?
Sorry, I haven't understood this bit. The virtualized disk rests on top of a ZFS filesystem, so in order to update a file in the virtualized disk it ultimately has to be updated on the physical disk by ZFS, and that update is COW.
From my understanding, if the host FS is COW, then all the writes in...
Thank you so much for the fast, nice & detailed answer point by point.
So, in a Linux VM with the qemu agent running, fsfreeze is going to be called like in the container case. Does this make snapshot backups of Linux VMs with the qemu agent running as safe as container snapshot backups?
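For reference, this is roughly what I mean (VM id 100 is just a placeholder): the agent gets enabled on the host, and around the snapshot it effectively does the equivalent of fsfreeze inside the guest:
qm set 100 --agent enabled=1
fsfreeze --freeze /      # before the snapshot is taken
fsfreeze --unfreeze /    # after the snapshot is taken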
EDIT: Also...
Hello,
I'm reading the Proxmox documentation, chapter 16 about backups, and the VM backup snapshot mode section says:
It talks about an inconsistency risk that I don't think is properly explained. At least for me it has left a lot of doubts after reading it (probably because I'm a noob):
What...
I have done it this weekend and for the moment it's working well.
Back up all the ZFS datasets on rpool to a secondary pool, including ROOT/pve-1 (snapshot + send | receive works well for filesystem datasets and zvols; see the sketch below)
Reinstalled Proxmox to the latest available version (with a smaller rpool...
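In case it's useful, a sketch of the send | receive step I mean (the snapshot and target names are placeholders, adapt them):
zfs snapshot -r rpool@migrate
zfs send -Rv rpool@migrate | zfs receive -Fdu secondary-pool/rpool-backup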
It's crazy.
I have deleted all datasets except the base rpool and I still get the same grub error message....
I'm going to do a fresh UEFI installation. Any advice to make configuring the new Proxmox from the old one easier? I guess I can't directly overwrite the /etc/pve folder with the older...
But as I said in my message 13 (that maybe you haven't read because I double posted, sorry):
I mean, zvols can't be rewritten with cp or rsync; they are moved as a block. If I import the zvols again into the new pool (with send/receive, dd or the GUI) I propagate the old properties, and the new pool...
There is also something strange. From the grub rescue console I can't read either the mirrored partition or the BIOS partition.
The rpool is composed of two disks, which are printed like this by grub rescue> ls:
(hd0) (hd0,gpt9) (hd0,gpt2) (hd0,gpt1) (hd1) (hd1,gpt9) (hd1,gpt2)...
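For example, trying to list them from the rescue prompt (this is the kind of command I mean, with the partition numbers from the listing above):
grub rescue> ls (hd0,gpt2)/
grub rescue> ls (hd1,gpt2)/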
You don't have to say sorry. Thanks for your help and time.
I can't add more disks to the system (a modest server on consumer hardware with all SATA ports used). But I have a bigger, secondary pool on the system; I can use it to copy everything from rpool to secondary-pool. Then completely...
I have done the same with gdisk and the outputs are the same. The BIOS boot partition is there, but not the ESP.
I have enough spare disk space. But instead of rsync I have done the following (after changing dnodesize to legacy of course):
zfs snapshot rpool/ROOT/pve-1@grub-error
zfs send -Rv...
I have booted from a Proxmox USB flash drive and confirmed that it isn't a UEFI installation. sda1 is a 1MB "BIOS boot" partition (literally the fdisk type), instead of a 500MB EFI one.
So, there's no option to bypass the grub limitation by booting from systemd-boot...
I think I'm going to copy all the rpool content to a...
Thanks for the empathy hehe...
In the UEFI settings, CSM Support = ENABLED is selected, like it has been all along (CSM support means BIOS/Legacy support mode). With that, the system was booting before the dnode problem, and now it brings me to the GRUB error.
Now, if I change to CSM...
Yes, it's strange.
If I disable CSM in the BIOS the system doesn't boot. Instead, it opens the BIOS setup again. If I tell it to exit without saving, it restarts and opens the BIOS setup again. And that's it.
With CSM enabled and Storage Boot Option Control = UEFI in the BIOS settings I get the GRUB error. So I can't...
I didn't know that grub was installed as a fallback, but in the BIOS, Storage Boot Option Control = UEFI and Other PCI devices = UEFI.
What more can I do? It seems to me that systemd-boot isn't installed. How can I check whether it is?
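To be clear about how I'd check it (I'm not sure these are the canonical ways, just what I understand):
ls /sys/firmware/efi    # only exists if the running system was booted in UEFI mode
efibootmgr -v           # lists the UEFI boot entries (only works when booted via UEFI)
bootctl status          # shows whether systemd-boot is installed on the ESP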
Hello,
I have a Proxmox (v5.4, I think) machine installed on ZFS, including the root filesystem. So, according to the Proxmox wiki, it should have been installed with systemd-boot instead of grub2.
The thing is that the machine has grub2 (I don't know why, I inherited it like that) and after...
Hello everybody,
I have a single-node Proxmox cluster and I want to start a new VM with PostgreSQL and TimescaleDB. After a lot of reading about how to tune ZFS volumes for this purpose, I still have some doubts about the cache options. We have 3 caches: the Proxmox one (ARC), the Linux...
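To make it concrete, the kind of knobs I'm wondering about are like these (the dataset name and the sizes are just hypothetical examples, not recommendations):
zfs set primarycache=metadata rpool/data/vm-100-disk-0           # keep only metadata in ARC for this zvol
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max         # cap ARC at 4 GiB at runtime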
Thanks for your answer wigor,
But at least for the top filesystem block size: does it make sense to make it the same size as the volblocksize when compression is enabled? If someone already knows, it would be cool to know beforehand.
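To make the question concrete (names and sizes are only an example): the zvol has a fixed volblocksize, and I'm asking whether the filesystem created on top of it inside the guest should use a matching block size, e.g.:
zfs get volblocksize rpool/data/vm-100-disk-0    # e.g. 8K, fixed at creation time
mkfs.ext4 -b 4096 /dev/sdb                       # guest FS block size (ext4 caps at 4K on most kernels)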
Hello everybody,
Right now I have a 4TB zvol that is attached to a VM that works as a fileserver over NFS/SAMBA. There are some issues with this zvol configuration (mainly related to volblocksize) and I would like to create a new one with the right config to replace the old one.
This zvol is...
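What I have in mind is creating the replacement zvol with an explicit volblocksize and then moving the data over, something like this (the name is a placeholder and 64K is just an example value, not a recommendation):
zfs create -s -V 4T -o volblocksize=64K rpool/data/vm-100-disk-1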
Thanks guys, I didn't know that this was a thing. I was still in chapter 3, trying to understand the storage. I have done both and now the results are different.
logicalused & logicalreferenced now match the measurement from inside the VM (2.4T)
referenced & usedbydataset show 3.54, which matches exactly...
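For anyone comparing the same thing, the properties I'm looking at come from this (the zvol name is a placeholder):
zfs get used,referenced,logicalused,logicalreferenced,volblocksize rpool/data/vm-100-disk-0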