Hi,
you can use the bigger partition. If you set "zpool set autoexpand=on rpool", then your ZFS pool will grow to the new size after the second SSD is replaced.
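A minimal sketch of the whole sequence, assuming the pool is called rpool and the second SSD shows up as /dev/sdb (both names are only placeholders for your setup):

zpool set autoexpand=on rpool                   # allow the pool to grow into new space
zpool replace rpool <old-second-ssd> /dev/sdb   # swap the remaining old SSD
zpool online -e rpool /dev/sdb                  # force the expansion if it doesn't happen automatically
zpool list rpool                                # SIZE should now show the bigger value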
Udo
Hi,
the IO issue I had in https://forum.proxmox.com/threads/io-trouble-wit-zfs-mirror-on-pve7-2-5-15-39-1-pve-bug-soft-lockup-inside-vms.113373/ isn't fixed with the VM-disk parameter aio=threads…
Today I live-migrated a VM with two disks (25G + 75G) to a node, and after that, many (all?) VMs on...
Hi,
I've also set all VMs to "virtio-scsi-single/iothread/threads", and this solved my issue with the current 5.15.x kernel on the AMD server (Zen 2) described in this thread: https://forum.proxmox.com/threads/io-trouble-wit-zfs-mirror-on-pve7-2-5-15-39-1-pve-bug-soft-lockup-inside-vms.113373/
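For reference, a hedged sketch of how those settings can be applied from the CLI - VMID 100, the storage local-zfs and the volume name are placeholders, not taken from the thread:

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1,aio=threads

(qm set with --scsiX rewrites the whole disk line, so the full volume spec has to be given again.)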
Udo
Status Update:
to use our new server in a production-ready shape, there are two (in reality only one) solutions:
- use kernel 5.13 - not really a solution
- use the current 5.15 kernel with the VM settings found by @RolandK (see the sketch below):
scsihw: virtio-scsi-single
scsi0...
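The scsi0 line above is cut off; a hedged example of what such an entry in /etc/pve/qemu-server/<vmid>.conf can look like (storage, volume name and size are placeholders, the relevant parts are the controller type, iothread and aio):

scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,aio=threads,iothread=1,size=32G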
Hi,
status update - with the new kernel pve-kernel-5.15.53-1-pve the same effect happens. While migrating a big VM to that node, an existing VM hangs… see attached screenshot.
After that I installed the 5.13 kernel (pve-kernel-5.13.19-6-pve), and with this kernel I have migrated many VMs...
Hi,
I've tried this kernel for a big issue I have with two new hosts ( https://forum.proxmox.com/threads/io-trouble-wit-zfs-mirror-on-pve7-2-5-15-39-1-pve-bug-soft-lockup-inside-vms.113373/#post-496346 ).
Live migration to a host with this kernel looks better for the first tries (only some...
Hi again,
for the maintenance tomorrow morning, I've canceled the pve7.2 upgrade!
I've live-migrated some VMs from the pve6 host, but first moved all VM disks to Ceph.
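As an aside, a hedged sketch of such a per-disk move to Ceph - VMID 100 and the target storage name ceph-vm are placeholders, not the names used here:

qm move_disk 100 scsi0 ceph-vm --delete 1    # copy the disk to the Ceph storage and drop the old source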
The pve6-host has an AMD EPYC 7542 CPU. The target (new 7.4) has an AMD EPYC 7402P.
All VMs were pingable after migration, and the...
Hi,
status update: the issue is still there, and currently updated hosts are not really production-proof!
After Dell released a new BIOS update within a short time and some new pve updates arrived as well, I was hopeful about using the new hosts for production.
But by migrating a VM to the host I can still kill...
Hi,
due to the issue in
https://forum.proxmox.com/threads/io-trouble-wit-zfs-mirror-on-pve7-2-5-15-39-1-pve-bug-soft-lockup-inside-vms.113373/ (two new AMD hosts are not really usable!)
I would like to switch the async IO default to native (hope it helps). But the man page of datacenter.cfg doesn't...
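The sentence above is cut off, so it isn't clear whether datacenter.cfg offers such a default at all; setting it per disk is a hedged fallback sketch (VMID 100, storage and volume name are placeholders):

qm set 100 --scsi0 local-zfs:vm-100-disk-0,aio=native

Note that aio=native generally requires a cache mode that uses O_DIRECT (e.g. cache=none or directsync).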
Hi,
the "other" server has been running without issues since yesterday evening.
I live-migrated a VM (with approx. 130GB ZFS disks) and the VM hung after the migration. I stopped the process and booted it again - the VM started, and during boot I got the messages: rcu_sched self-detected stall on CPU { 0} - and cpu 1...
The strange thing is that I've now migrated all VMs to another server (same hardware, same software version) and there are no issues.
Tried to reproduce the issue, but still without success (I can't use production VMs for that, and stressing swap doesn't produce much IO).
Even with fio the issue...
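The fio invocation is cut off in the post; purely as a hedged illustration, a random-write load of the kind typically used to try to reproduce such stalls inside a test VM could look like this (file name, size and runtime are arbitrary placeholders):

fio --name=randwrite --filename=/tmp/fio-test --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=300 --time_based --group_reporting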
Perhaps the network device naming of one NIC has changed? (I've seen one such case during an update recently.)
But in that case you should get an error on a manual start too…
Do you have an auto entry for all devices?
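For comparison, a hedged example of what such entries usually look like in /etc/network/interfaces on a PVE host - interface names and addresses are placeholders only:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0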
Udo
Hi,
last week I moved VMs to a freshly installed new cluster node, but it doesn't run well. Most of the VMs have massive trouble doing IO.
After migrating all VM disks to Ceph (local-zfs before), the VMs work.
The big question is: where is the issue? The host kernel?
The system is a Dell R6515 with...
Are you sure that you have only two copies? That would be dangerous…
What is the output of the following command?
# print the size/min_size settings of every pool
for i in $(ceph osd lspools | tr -d ",[0-9]")
do
    ceph osd dump | grep "'$i'"
done
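The output should contain one line per pool, roughly of this form (pool name and numbers are only an illustration) - the interesting fields are size and min_size:

pool 2 'ceph-vm' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 ...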
Udo
Hi,
have you removed two OSDs from different servers, or from one server?
Are all PGs clean and active? I.e. the following command doesn't show any PGs?
ceph pg dump | grep -v active+clean
If all are clean, you can start a deep-scrub on all active+clean PGs:
ceph pg dump | grep active+clean | cut -d' '...
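The command above is cut off in the post; one hedged way to complete that idea (using awk instead of cut, purely as a sketch - verify the PG-id column on your Ceph version first) is:

for pg in $(ceph pg dump 2>/dev/null | grep active+clean | awk '{print $1}')
do
    ceph pg deep-scrub $pg
done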