The problem has occurred again and it looks very similar, but this time on a different interface and a different VM. I have now switched off the firewalls on the VMs and hope that this will make things more stable. Any ideas what else I could try? Here is the stack trace:
[362086.626721] skbuff: skb_under_panic...
Could it be related to IPv6 or dual-stack? In both of the dumps I have, I can see a lot of references to IPv6, like:
[451923.472465] reject_tg6+0xa3/0x100 [ip6t_REJECT]
[451923.478443] ip6t_do_table+0x2e5/0x870 [ip6_tables]
[451923.490508] ip6table_filter_hook+0x1a/0x20 [ip6table_filter]...
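If it really is the ip6t_REJECT path, I suppose one quick test would be to temporarily disable IPv6 inside the affected guest and see whether the panics stop (just a diagnostic step, not a fix):

sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1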
Unfortunately the problem still occurs. In the meantime there were a few months of silence, but now the server has crashed several times within 10 days. So I set up netconsole and captured the following dump:
[451923.338181] skbuff: skb_under_panic: text:ffffffffafcc27db len:74 put:14...
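In case it helps anyone reproduce this, the netconsole setup was roughly the following (IP addresses, interface and MAC are placeholders for my actual hosts):

modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/aa:bb:cc:dd:ee:ff
# on the receiving host (exact flags depend on the netcat variant):
nc -u -l 6666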
Many thanks for the answer. I have to admit that I don't really know anything about kernel development, but I also didn't want to just post in the forum without having done at least a little research first. I'm sorry if that caused confusion. In any case, I use virtio drivers in all VMs. And...
I have also been experiencing kernel crashes since I migrated to Proxmox 7: three times in two weeks. No changes to the hardware compared to Proxmox 4, just a fresh installation. memtest has already been run. HP iLO shows good system health. Unfortunately, my system then becomes inaccessible and does not write any corresponding...
I have the same problem. It suddenly appeared out of nowhere, with no reboot or anything. We have an eight-node cluster, and this only happens when I connect to the first node, which we use as the "Web-UI host". If I connect to the second one, everything works fine. I also restarted...
Additional research substantiated the assumption that the problem is related to the disk geometry. A closer look at the RAID parameters with HP SmartStart showed that the created RAID0 arrays and logical drives have a sectors/track setting of 32. I checked some plain SATA disks and found a...
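For anyone who wants to compare: the reported CHS geometry can be read out with sfdisk (/dev/sda is just an example; the P410i logical drives show up as plain block devices the same way):

sfdisk -g /dev/sda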
Yes, I understand. Thank you. So the interest in debugging the issue is low. I did a last Proxmox 5.1 installation after wiping all disks in a separate system without a RAID controller, to ensure all blocks were wiped. Still the same behavior. I'll attach two screenshots from grub with debug all...
HP P410i. The disks are exported as one RAID0 each, because the P410i is not able to do JBOD. I now understand that this is not supported, but I am still wondering why it worked for a long time and only stopped working now. I need to check whether it is possible to connect the backplane to another HBA...
In the meantime I have zeroed all disks on the server and installed the latest BIOS, RAID controller firmware and disk firmware. Then I did a clean install with the latest Proxmox 5.1 ISO. The installation works fine, but the system is not able to boot; same error as before. Are you interested in debugging this issue? I could...
OK. I'm wondering why it was possible to boot the machine at least 7 times after updates since it was installed (according to our system documentation), but now it won't boot up again and seems to remain "unfixable". When I do a "set debug=all" in grub and then try e.g. an "ls (hd0)", I can...
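For completeness, this is the sequence I type at the grub prompt (hd0 as an example; with debug=all the output scrolls by very quickly):

set debug=all
ls (hd0)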
Thank you. Unfortunately I have no spare disk slots or space on the existing disks to set up a separate boot partition. I'll back up the data now with zfs send. Does the recommendation to have a non-ZFS /boot partition mean that booting from ZFS is no longer recommended? Then I would take to...
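The backup itself will probably be something along these lines (snapshot name, target host and target pool are placeholders):

zfs snapshot -r rpool@pre-reinstall
zfs send -R rpool@pre-reinstall | ssh backuphost zfs receive -dF backuppool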
tried downgrading grub, without success
tried recreating /etc/zfs/zpool.cache and updating the initramfs, without success (commands below)
scrubbed rpool without any errors
reinstalled grub on all devices afterwards
tried to boot from each disk by changing the boot disk
the error is still the same
Any other ideas except...
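For reference, the zpool.cache/initramfs/grub steps were roughly these (pool name rpool as in my setup; /dev/sda stands for each boot disk in turn):

zpool set cachefile=/etc/zfs/zpool.cache rpool
update-initramfs -u -k all
grub-install /dev/sda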
Interesting: using an older Proxmox 4.4 ISO and its rescue boot does not work; it cannot find rpool. If I boot Ubuntu 17.10 and do a "zpool import rpool", everything works fine.
Is there any news on this? I'm having the same problem on an HP DL360 G7 with Proxmox 4.x (latest updates) and RAIDZ1 after a crash. The disks are one RAID0 array each on a P410i controller and are handed to ZFS as whole devices (/dev/sda, /dev/sdb etc.). This had worked fine for a long time, but after the last crash...
Is it possible by now to submit data to InfluxDB using HTTPS and authentication? As I need to traverse untrusted networks, I would prefer this over UDP without authentication.
Thank you!
Hi Manu,
the host is Proxmox 3.x with kernel 2.6.32-42-pve. The guest OS is Debian Jessie with Linux 3.16.0-4-amd64 x86_64. The affected nodes in the cluster are not using OpenVZ, so switching to the 3.10 kernel could be an option. But the cluster consists of 4 Proxmox hosts, 2 of which are using OpenVZ. Is it...
Thank you. I'm aware of the fact that benchmarking with dd is not ideal. But in this case I tried to reproduce the measurements of the customer's R&D department in detail. In this scenario it turned out that a ZVOL with a volblocksize of 128k or higher performed much better than one with 8k...
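Roughly what I compared, for anyone who wants to reproduce it (pool name, ZVOL names and sizes are just examples):

zfs create -V 10G -o volblocksize=8k rpool/bench8k
zfs create -V 10G -o volblocksize=128k rpool/bench128k
dd if=/dev/zero of=/dev/zvol/rpool/bench8k bs=1M count=4096 oflag=direct
dd if=/dev/zero of=/dev/zvol/rpool/bench128k bs=1M count=4096 oflag=direct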
As a clarification: the root cause of the whole investigation was very bad disk I/O performance with KVM on ZFS. It turned out that the customer has 4 TB of data in files of varying sizes, which are exported from a VM via NFS. The performance was below 20 MB/s in most cases, as only a small part of...