Search results

  1. OpenVSwitch produces output as kernel-panic

    Oops, forgot to mention that I see this only on the latest kernel versions, like the ones below: 5.15.35-2-pve, 5.15.35-3-pve. I have switched to 5.13.19-6-pve - no issues at all (see the kernel-pinning sketch after this list).
  2. OpenVSwitch produces output as kernel-panic

    Hello guys, while doing my usual checks today I came across this warning: [ 6.788543] openvswitch: Open vSwitch switching datapath [ 7.137084] device ovs-system entered promiscuous mode [ 7.137552] ------------[ cut here ]------------ [ 7.137555] WARNING: CPU: 3 PID: 2160 at...
  3. [SOLVED] How the reboots are handled when run inside on the containers

    Ah, sorry, no issue at all. Just wondering if this is the right way to do things. I'm 100% sure about VMs, because a VM is a completely separate/emulated process from the host OS, but what about CTs - it seems to work. Just a matter of discussion. If you want, I can close the topic in...
  4. 5.13.19-2-pve kernel bug, maybe, pve-manager/7.1-8/5b267f33

    So yeah, please ignore this thread till I manage to fix the network connection to 1 Gbit/s; then we can check if the problem persists.
  5. 5.13.19-2-pve kernel bug, maybe, pve-manager/7.1-8/5b267f33

    On this node I have two network cards, and I also noticed that the on-board network card has linked at 10 Mbit/s instead of 1 Gbit/s (see the ethtool sketch after this list)
  6. 5.13.19-2-pve kernel bug, maybe, pve-manager/7.1-8/5b267f33

    Not sure how to start this topic, but I noticed strange behavior on one of my systems. Here is the kernel output: [ 242.670592] INFO: task pvescheduler:2292 blocked for more than 120 seconds. [ 242.670600] Tainted: P O 5.13.19-2-pve #1 [ 242.670602] "echo 0 >...
  7. VM I/O errors on all disks

    I ran the test this morning, and it seems to work fine for me. It was the same test I did in the past (trying to download the gitlab-ce package, which is almost 1 GB in size). The error messages are not shown anymore. Because my test is not comprehensive, please ask the other guys to test as well...
  8. VM I/O errors on all disks

    Thanks @Funar, so you can clearly say this is a bug. Not sure how to make this official; I hope some of the Proxmox developers read the forums.
  9. VM I/O errors on all disks

    Yeah, it seems that there is some incompatibility between ZFS and VirtIO Block devices
  10. VM I/O errors on all disks

    Yeah, already did it :-) all good. The issue disappeared! Many thanks. Bear in mind, Windows will BSOD when making such a change. Or at least mine did ;-) Luckily I don't have any production Windows OSs.
  11. VM I/O errors on all disks

    Actually I was wrong: because it uses UUIDs, and the UUID remains the same no matter whether you are using SCSI or VirtIO, booting should not be affected. Will change it and check if the errors still appear. Many thanks! (See the fstab sketch after this list.)
  12. VM I/O errors on all disks

    But this will change how the disks are recognized by the OS, which means the OS would be unable to boot
  13. VM I/O errors on all disks

    I'm not sure what you are talking about. You want me to switch to VirtIO Single, or?
  14. VM I/O errors on all disks

    So maybe the last test we can do is to simulate many small writes, just to check whether the error appears or not (see the fio sketch after this list). Because so far, as mentioned above, the issue comes up only when trying to store/write a big piece of data.
  15. VM I/O errors on all disks

    This morning I tried to limit the network and disk speed to 1 MB/s; the problem still persists. From my point of view it is not about heavy load, but about the amount of data being written to the disk. As you can see, the gitlab installation is one huge package, which contains all the things related to...
  16. VM I/O errors on all disks

    On my side only Ubuntu VMs are affected, but I think this is a shot in the dark - how can this be related to the OS distribution? Here are the details about my hypervisor: root@proxmox-node-1.home.lan:~# pveversion -v proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve) pve-manager: 7.1-6...
  17. VM I/O errors on all disks

    I have the same issue, but only for 1 VM. I have at least tried setting native I/O with no cache (see the qm set sketch after this list), but this makes things even worse - the VM completely freezes. Any other ideas on how to resolve the issue? Thanks in advance.
  18. Received packet on fwln interface with own address as source address

    No, only 1 interface in place. root@proxmox-node-1.home.lan:~# cat /etc/pve/lxc/121.conf #SERV%3A DNS, PIHOLE, LLDP, SNMP, LIGHTHTTP, POSTFIX, NRPE, PUPPET #IP%3A 192.168.10.5 #VLAN%3A 310 #PAT%3A NONE arch: amd64 cores: 2 hostname: pihole.home.lan memory: 768 nameserver: 192.168.0.7...
  19. Received packet on fwln interface with own address as source address

    No one on this one? Did I do something wrong from a configuration perspective?
  20. [SOLVED] How the reboots are handled when run inside on the containers

    Hello guys, I'm working on an Ansible playbook intended to perform so-called "unattended upgrades" of the whole environment. Because the playbook includes actions like setting up scheduled downtime in Nagios, rebooting the server, and waiting till it comes back online, I created a logic (a shell sketch of the reboot-and-wait step follows this list), all of...
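
A note on result 1: if the newer 5.15 kernels misbehave, booting the older kernel can be made persistent. This is a minimal sketch assuming proxmox-boot-tool manages the boot entries on this node and that its kernel pin/unpin subcommands are available (recent PVE 7.x); on plain GRUB setups you would instead pick the older kernel from the boot menu.

    # list the kernels known to the boot loader
    proxmox-boot-tool kernel list
    # pin the known-good kernel so it stays the default across reboots
    proxmox-boot-tool kernel pin 5.13.19-6-pve
    # later, to return to "newest installed kernel wins"
    proxmox-boot-tool kernel unpin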
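
Regarding result 5 (the on-board NIC linking at 10 Mbit/s): a quick way to check what was negotiated and to re-trigger autonegotiation. The interface name eno1 is only an assumption; take the real one from `ip link`.

    # show negotiated speed/duplex and the advertised link modes
    ethtool eno1
    # restart autonegotiation (a bad cable or port is the usual culprit for a 10 Mbit/s link)
    ethtool -r eno1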
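
Regarding results 11-12: whether switching a disk from VirtIO Block (/dev/vdX) to SCSI (/dev/sdX) breaks booting depends on how the guest references its filesystems. If /etc/fstab and the bootloader use UUIDs, the device rename is harmless. A quick check inside the guest; the commands below are generic, not specific to this thread:

    # show filesystem UUIDs for the current disks
    blkid
    # list fstab entries; UUID= or LABEL= entries survive a bus change,
    # /dev/vda-style paths would need to be updated
    grep -vE '^\s*#' /etc/fstab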
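
For the "many small writes" test suggested in result 14, fio can generate that pattern inside the guest. A minimal sketch; the block size, total size and target filename are arbitrary assumptions:

    # random 4 KiB direct writes, 1 GiB total, to a scratch file in the current directory
    fio --name=smallwrites --rw=randwrite --bs=4k --size=1G --ioengine=libaio --direct=1 --numjobs=1 --filename=fio-testfile
    # remove the scratch file afterwards
    rm -f fio-testfile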
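
Result 17 mentions trying "native with no cache"; assuming that refers to the aio=native and cache=none disk options, they can be set from the host with qm set. VM ID 100 and the volume name are placeholders - copy the current volume string from `qm config 100`, and restart the VM for the change to take effect.

    # show the current disk line, e.g. scsi0: local-zfs:vm-100-disk-0,size=32G
    qm config 100 | grep scsi0
    # re-apply the same volume with explicit cache/aio settings
    qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none,aio=native,size=32G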
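
Result 20 describes reboot-and-wait logic in an Ansible playbook. Roughly the same idea sketched in plain shell, with guest.home.lan as a placeholder hostname (Ansible's own ansible.builtin.reboot module wraps this in a single task):

    # trigger the reboot; the SSH session dropping is expected
    ssh root@guest.home.lan reboot || true
    # poll until SSH answers again, giving up after roughly 10 minutes
    for i in $(seq 1 60); do
        sleep 10
        if ssh -o ConnectTimeout=5 -o BatchMode=yes root@guest.home.lan true; then
            echo "guest is back online"
            break
        fi
    done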
