I have a single machine with proxmox on ZFS, with a single pool containing a three-disk mirror. Backups aside, I would like to implement a simple disaster recovery strategy. The idea is to save the partition scheme of the disks on removable media, and then dump the pool to LTO tape, so as to be able...
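Concretely, I imagine the tape dump working along these lines (pool name from my setup; the snapshot name, `/dev/nst0`, and the use of mbuffer are my assumptions, not a tested procedure):

```shell
# Take a recursive snapshot of the whole pool (snapshot name is arbitrary):
zfs snapshot -r rpool@dr

# Stream the full replication stream to tape; mbuffer smooths out the
# bursty "zfs send" output so the drive can keep streaming:
zfs send -R rpool@dr | mbuffer -s 256k -m 1G -o /dev/nst0
```

Restore would presumably be the reverse direction, `mbuffer -i /dev/nst0 | zfs recv ...`, onto freshly partitioned disks.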
I have a proxmox installation with two bootable disks in a ZFS mirror (set up from the Proxmox installer): sda, sdb. I have removed a faulted disk ("zpool detach rpool sdb2") and now I want to replace it with a new one.
I understand that I have to use the "zpool attach" command. However, with which...
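To make the question concrete, this is the sequence I imagine (device and partition numbers are from my setup and my guesses; on my layout the ZFS partition is number 2, but this varies between installer versions):

```shell
# Copy the partition layout of the healthy disk (sda) to the new disk (sdb),
# then give the new disk unique GUIDs:
sgdisk /dev/sda --replicate=/dev/sdb
sgdisk --randomize-guids /dev/sdb

# Attach the new disk's ZFS partition to the surviving mirror member:
zpool attach rpool /dev/sda2 /dev/sdb2

# Make the new disk bootable again -- on my legacy-BIOS install I assume
# this would be grub-install; newer setups use proxmox-boot-tool instead:
grub-install /dev/sdb
```

But I am not sure this is complete or correct, hence the question.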
I have no experience with hp microservers, but this is how I solved for my case:
1) no swap on ZFS (I don't remember if it was related to this particular failure, but still, don't do it; newer proxmox installers do not do it either).
2) no ZFS disk connected to any kind of pseudo-smart...
Ok, maybe this is a stupid question, but I was wondering...
pve-zsync syncs a snapshot of the zvol containing the VM virtual disk. But since this is done while the VM is running (unlike what vzdump does), isn't it possible that the guest filesystem is "frozen" while it is in some inconsistent...
Well, as I wrote above I had very bad experiences with ZFS on the H310, and finally I gave up and settled for the onboard SATA; this way I had no more problems. However, at least on R220 the I/O is not great with this setup. As always, YMMV.
Nine years later I am resuming this old thread to note that the info message is still very uninformative:
"only 1 backup(s) allowed - please consider to remove old backup files"
I also thought for a long time it was some kind of technical limitation, not a simple proxmox setting.
Maybe it...
@mailinglists: Okay, if swap on zfs is known to be unstable there is really no point in doing in-depth tests as @czechsys recommended. I'll get rid of that swap and consider some alternative. Thanks.
btw:
1) If the current Proxmox installer no longer creates swap on ZFS, does it allow you to...
I've tried this, without disabling the swap:
1. rebooted the host
2. started the guest
3. "sync; echo 3 > /proc/sys/vm/drop_caches" on both the guest and the host
4. started the infamous rsync on the guest
5. after a while, the host rebooted:
[ 823.461934] perf: interrupt took too long (2504...
This is not a cluster. They are three separate, almost identical machines carrying different VMs; their only relationship is that they use pve-zsync for backups (not related to this issue; the problem never manifested during that operation). I am showing you all three machines just to underline...
How often?
Yes, from one of the three affected machines (see below).
They have different specs:
machine1: 16 GB RAM, 8 GB swap, total pool size 4TB (mostly unused)
machine2: 32 GB RAM, 8 GB swap, total pool size 4TB (mostly unused)
machine3: 7 GB RAM, 6 GB swap, total pool size 250GB...
This is a single node, no cluster. It was originally installed from the 5.2-1 ISO image and eventually upgraded, up to the current version 5.4-5. It has basically no customizations.
Since it has swap on ZFS (the installer created it), I tried applying the tweaks described here, but the problem did...
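For reference, the tweaks I applied were along these lines, i.e. the swap-zvol properties commonly recommended in the OpenZFS FAQ (dataset name is the one my installer created; yours may differ):

```shell
# Commonly recommended properties for a swap zvol (OpenZFS FAQ);
# they are meant to avoid memory-pressure feedback loops under swapping,
# but in my case they did not make the reboots go away:
zfs set primarycache=metadata rpool/swap
zfs set secondarycache=none rpool/swap
zfs set compression=zle rpool/swap
zfs set logbias=throughput rpool/swap
zfs set sync=always rpool/swap
zfs set com.sun:auto-snapshot=false rpool/swap
```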
In a system with a single guest running (Ubuntu 16), I have the whole host system (proxmox) suddenly rebooting when the guest performs a local backup with rsync.
The host is Proxmox with kernel 4.15.18-14-pve; it has two disks with ZFS RAID 1.
Apparently there is nothing unusual in the host logs, so...
Hello,
is it possible to use pve-zsync on two different hosts to backup the same (third) machine?
i.e. like this:
- pve-zsync on host B fetches a backup of VM101 from host A
- pve-zsync on host C also fetches a backup of VM101 from host A
?
Thanks
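i.e., concretely, I imagine something like this on the two backup hosts (IP, pool path and job names are invented for the example):

```shell
# On host B: pull VM 101 from host A (10.0.0.1) into a local dataset:
pve-zsync create --source 10.0.0.1:101 --dest tank/backup --name fromA-onB --maxsnap 7

# On host C: a second, independent job against the same source VM:
pve-zsync create --source 10.0.0.1:101 --dest tank/backup --name fromA-onC --maxsnap 7
```

My doubt is whether the two jobs' snapshots on host A would interfere with each other.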
For the record: our Dell PowerEdge R220s produced some pretty disastrous results with proxmox and ZFS when the disks were attached to the PERC H310 controller, even with no RAID configured at the controller level. Basically, it had a tendency to destroy the filesystem at boot, entering the GRUB...
I have received the following mysterious error message in a mail sent by the pve-zsync cron job. What does it mean?
COMMAND:
ssh root@10.0.0.4 -- qm config 501
GET ERROR:
400 Result verification failed
lock: value 'snapshot-delete' does not have a value in the enumeration 'migrate, backup...
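If I read it correctly, `qm config` is returning a lock value ("snapshot-delete") that the API schema does not list as valid, which would suggest a snapshot removal on VM 501 was interrupted and left a stale lock. What I would try, assuming no snapshot operation is actually still running (but please correct me if this is wrong):

```shell
# Check whether the VM is still marked as locked:
qm config 501 | grep '^lock:'

# If nothing is really running anymore, clear the stale lock:
qm unlock 501
```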