Seems to work.
For other users who fail to understand the pct manual syntax (like me :-), here is an example that should work:
pct restore 213 nfsAKlocalDir:backup/vzdump-lxc-213-2023_09_21-02_51_00.tar.zst --rootfs local-lvm:35 --mp0 local-lvm:5000,mp=/srv/storage --mp1...
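In case it helps, the general pattern (storage names, sizes and mount paths below are just placeholders, adjust them to your setup) is roughly:

pct restore <vmid> <backup-storage>:<backup-archive> \
  --rootfs <target-storage>:<size-in-GiB> \
  --mp0 <target-storage>:<size-in-GiB>,mp=</path/inside/ct> \
  --mp1 <target-storage>:<size-in-GiB>,mp=</another/path>

In other words, on restore each volume is given as STORAGE:SIZE, and pct allocates a new volume of that size on that storage.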
Tnx, but now I am not sure what I should do with rootfs.. it should just use what is defined in the backup; I am not sure what it is asking for. Everything I tried did not work.
pct restore 213 nfsAKlocalDir:backup/vzdump-lxc-213-2023_09_21-02_51_00.tar.zst --storage local-lvm --mp0...
I am looking at the manual, but failing. Please help with an example. Maybe @fabian ?
root@ak:~# pct restore 213 nfsAKlocalDir:backup/vzdump-lxc-213-2023_09_21-02_51_00.tar.zst --storage local-lvm --mp0 mp=/srv/storage,size=5000 --mp1 mp=/srv/striped,size=4000
400 Parameter verification failed...
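(For the record, I think the verification fails because --mp0/--mp1 are missing the volume part; with size= alone pct does not know where to allocate the new volume. On restore the volume is written as <storage>:<size-in-GiB>, so the same command with, for example,

--mp0 local-lvm:5000,mp=/srv/storage --mp1 local-lvm:4000,mp=/srv/striped

should get past the parameter verification.)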
Still have no idea how to import this, as there is no option to resize a specific disk upon restore.
This might even be a bug, because the backup was taken with PM to PBS and restored to PM again.
This should work every time, no matter if the source uses ZFS with compression and the destination...
Hi,
I have a container hosted with PM on ZFS (compression on) with multiple disks.
I want to import it with pct and define a new disk size for each of its disks.
If there were only one disk, the command would look something like:
pct restore 212 nfsAKlocalDir:backup/vzdump-lxc-xxxx.tar.zst --storage...
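For a single disk I would expect something along these lines (storage name and size are just placeholders):

pct restore 212 nfsAKlocalDir:backup/vzdump-lxc-xxxx.tar.zst --storage local-lvm --rootfs local-lvm:20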
In general, it helps to reduce IO pressure for the VM by using IO-Threads on the disks (and virtio-scsi-single as IO controller), as that gives the QEMU main thread more time to process some VM events/work.
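For example, switching an existing disk over looks something like this (a sketch; 100 and local-lvm:vm-100-disk-0 are placeholders for the VMID and the existing disk volume):

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1

As far as I know, the controller and iothread changes only take effect after the VM has been fully powered off and started again.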
Guys who experience the issue with fsfreeze locking the filesystem inside the guest: do you by any chance have /tmp on a loop device?
Like:
/dev/loop0 1,2G 2,9M 1,2G 1% /tmp
I vaguely remember that there has been an issue for a few years and the developers are (still?) doing the...
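A quick way to check is something like:

findmnt /tmp   # shows the backing device, e.g. /dev/loop0
losetup -l     # lists active loop devices and their backing files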
There are now at least some Dell R730xd and R630 servers that PM 7 does not work on (PM 6 does).
Can any of the PM devs comment on this thread?
@t.lamprecht ?
It seems related to kernels newer than 5.4. With 5.4 we can use PM 7, but with later kernels we cannot.
It is a Dell R630 server. I think there might be some threads about this on this forum; I will report back if I find a solution in them, but so far nothing has helped.
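If anyone else needs to stay on 5.4 for now and the node boots via proxmox-boot-tool, pinning the old kernel should work along these lines (the version string is only an example, use whichever 5.4 kernel is actually installed):

proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.4.203-1-pve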
All the same disks. The RAID controller is: 86:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC) (rev 01). It might be a HW RAID related issue, but the controller is already on the latest firmware.
We have no way of getting firmware updates from Samsung for this Samsung disk, while Dell and others selling these rebranded disks have working fixes / firmware out. We already have the latest BIOS and firmware installed on the motherboard and the NVMe (LSI 9400) controller, and there is nothing left to update...
Hi guys,
just wanted to let you know that with PM 7, when you plug in a brand new Samsung PM9A3 MZQL21T9HCJR-00A07, one of the current disks disappears (and so on with each disk you add). If you notice before your ZFS raid falls apart, you can get the old devices back with a PCI rescan: echo 1 >...
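(For reference, I believe the rescan meant above is the standard sysfs one:

echo 1 > /sys/bus/pci/rescan

which asks the kernel to rescan the PCI bus and re-discover devices that dropped off.)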
Those threads also stated I should update all nodes to a newer kernel for live migration to work, but I just updated the newer (Intel Scalable) servers to: Linux 6.1.10-1-pve #1 SMP PREEMPT_DYNAMIC PVE 6.1.10-1 (2023-02-07T00:00Z) and live migration works in all directions now.
So unless devs...
Thank you for your answer. Because things change with time, I thought it was best to ask.
So the official response to solving this bug would be: upgrade to the 5.19 kernel. We will do that and report back only if any issues arise.
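For anyone following along, on PVE 7 the 5.19 opt-in kernel should be installable with something like:

apt update
apt install pve-kernel-5.19

followed by a reboot into the new kernel.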