Search results

  1. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Never tried it, because it's OME (OpenManage Enterprise) and not OMSA (OpenManage Server Administrator).
  2. I/O errors since upgrading to PVE 7.1

    Same here. I've upgraded to pve-qemu-kvm_6.1.0-3 and there are no more I/O issues on the VirtIO disks.
  3. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Thank you everyone for your feedback. I've updated the main post with them.
  4. Give VMs client access to 3node full mesh Ceph

    In that case, enp129s0f1 (node3) and enp129s0f0 (node2) should be slaves of vmbr1, and the IP should be configured on vmbr1. Right now, your bridge vmbr1 is not connected to any network interface, so how could it communicate with the outside of your server? In this situation, VMa (on node2) and VMb (on...
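    The fix described above can be sketched in /etc/network/interfaces terms; the interface and bridge names come from the post, but the address is a placeholder and the exact layout must match your mesh:

    ```
    auto enp129s0f0
    iface enp129s0f0 inet manual

    auto vmbr1
    iface vmbr1 inet static
        address 10.0.0.2/24          # placeholder address
        bridge-ports enp129s0f0      # on node2; use enp129s0f1 on node3
        bridge-stp off
        bridge-fd 0
    ```

    With the physical port enslaved to the bridge and the IP moved onto vmbr1, both the host and the guests attached to vmbr1 share that link.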
  5. Help for DELL servers compatibility with Proxmox

    I've got Proxmox on multiple Dell servers (from the 2950 to the R740). The H700 will work without problems; the H310 won't work. You have two choices: either do not use RAID, or use mdadm software RAID. In my opinion, if you have shared storage (iSCSI, NFS, Ceph) you don't need RAID for the system. I've been...
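    A minimal sketch of the mdadm software-RAID route mentioned above — the device names are placeholders, and a RAID1 mirror is only one possible layout:

    ```shell
    # Create a RAID1 mirror from two whole disks (sda/sdb are placeholders)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # Persist the array definition and rebuild the initramfs so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```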
  6. Proxmox VE, PfSense, 1 NIC

    I've done it differently. On the switch: - the port where the box is connected is untagged on VLAN 42. - the port where Proxmox is connected is switchport mode trunk allowed vlan 1,42 with VLAN 1 untagged. On Proxmox: - vmbr0 is not VLAN-aware - the pfSense VM has 2 VirtIO interfaces: the first...
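    On a Cisco-style switch, the two ports described above would look roughly like this (port numbers and descriptions are placeholders; other vendors use different syntax):

    ```
    interface GigabitEthernet0/1
     description uplink-box
     switchport mode access
     switchport access vlan 42
    !
    interface GigabitEthernet0/2
     description uplink-proxmox
     switchport mode trunk
     switchport trunk allowed vlan 1,42
     switchport trunk native vlan 1
    ```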
  7. Proxmox VE 7.1 released!

    Maybe you are affected by the same problem as I was. I had to edit the VM config to set aio=native on all VM disks and switch the VirtIO disks to SCSI. Check: You can validate...
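    One way to apply that change is via qm set; the VM ID (100) and volume name are placeholders, and you can also edit /etc/pve/qemu-server/<vmid>.conf directly:

    ```shell
    # Reattach the disk as SCSI with aio=native (storage/volume names are placeholders)
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native
    # Boot from the SCSI disk instead of the old VirtIO one
    qm set 100 --boot order=scsi0
    ```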
  8. Sporadic Buffer I/O error on device vda1 inside guest,RAW on LVM on top of DRBD

    agent: 1
    balloon: 0
    boot: order=scsi0;ide2
    cores: 4
    cpu: host
    ide2: none,media=cdrom
    memory: 20480
    name: MED-BDD-5
    net0: virtio=46:58:91:52:2A:F7,bridge=vmbr0
    net1: virtio=3E:30:4B:D0:17:29,bridge=vmbr0,link_down=1,tag=21
    numa: 0
    ostype: l26
    scsi0...
  9. Sporadic Buffer I/O error on device vda1 inside guest,RAW on LVM on top of DRBD

    pveversion -v
    proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
    pve-manager: 7.1-5 (running version: 7.1-5/6fe299a0)
    pve-kernel-5.13: 7.1-4
    pve-kernel-helper: 7.1-4
    pve-kernel-5.11: 7.0-10
    pve-kernel-5.4: 6.4-7
    pve-kernel-5.13.19-1-pve: 5.13.19-2
    pve-kernel-5.11.22-7-pve: 5.11.22-12...
  10. Sporadic Buffer I/O error on device vda1 inside guest,RAW on LVM on top of DRBD

    On it! Thank you. Would you like any other information? You are right, the VMs are configured like that:
    #172.X.Y.Z
    agent: 1
    balloon: 0
    boot: cd
    bootdisk: virtio0
    cores: 4
    cpu: host
    ide2: none,media=cdrom
    memory: 20480
    name: MED-BDD-5
    net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
    numa: 0
    ostype...
  11. Sporadic Buffer I/O error on device vda1 inside guest,RAW on LVM on top of DRBD

    Hello everyone, I'm sorry to dig up an old thread, but I'm in exactly this situation after upgrading from 6.4 (everything was fine) to 7.0 (last week) and 7.1 (yesterday). Nothing in the logs of the host. Network/disk/CPU/RAM all OK. But inside the VM (KVM) it's another story. It's independent...
  12. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    The GPG error is usually an IPv6 problem.
  13. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    I just did a new install on an R740 and an R430 with the Proxmox 6.2-1 ISO and OMSA 930. Works perfectly. I also have 930 on an R730, NF500, and PE2950 following this tutorial. For the PE2950 I have the problem that the CMOS battery is not found ... who cares? :p Sorry, I don't have an R710 to test.
  14. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Thanks a lot for your feedback. I'll update the tutorial to add ncurses.
  15. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    You're welcome. I received so much from the community. It was my turn to share ;)
  16. Upgrading proxmox cluster from 5.6 to 6

    My 2 cents after upgrading my cluster from 5.4 to 6: do update fast. Node by node, I followed the tutorial for the Proxmox upgrade as well as Ceph. When I was on mixed versions, corosync was losing synchronization. I had to stop corosync and restart it in the foreground ("corosync -f") on all nodes until...
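    The foreground restart described above would look roughly like this on each affected node — a sketch, not a substitute for the official upgrade guide:

    ```shell
    # Stop the corosync service, then run it in the foreground to watch it resync
    systemctl stop corosync
    corosync -f
    # Once all nodes report quorum again, Ctrl+C and restart the service normally
    systemctl start corosync
    ```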
  17. Storage Idea

    He explained that he can have multiple 1U boxes or 2 old R720XDs. With those he can make an HA setup by spreading the disks between the 2 boxes, enabling Ceph, and making a Proxmox cluster in a 2-node setup.
  18. Storage Idea

    You should use Ceph RBD with a replication target of 2 for the RBD pool. Each Proxmox node of your setup would be both hypervisor AND storage.
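    On Proxmox, such a pool could be created with pveceph; the pool name and min_size below are assumptions, and note that size 2 trades redundancy for capacity:

    ```shell
    # Create an RBD pool with 2 replicas (pool name is a placeholder)
    pveceph pool create vmstore --size 2 --min_size 1
    # Or adjust an existing pool with plain Ceph tooling:
    ceph osd pool set vmstore size 2
    ```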
  19. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    I've finally succeeded in installing OMSA on Proxmox 6.x, and in return for all the information I've found thanks to the community, I wanted to share my findings. Here we go. Be sure to be logged in as root at all times: sudo su. First, be sure to remove OMSA from Proxmox 5.4 before upgrading. apt...
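    A rough sketch of the repository setup such a tutorial typically uses — the exact Dell repository URL, release name, and OMSA version are assumptions and must be checked against the actual tutorial:

    ```shell
    # Add the Dell community OpenManage repository (URL and release are assumptions)
    echo 'deb http://linux.dell.com/repo/community/openmanage/930/bionic bionic main' \
      > /etc/apt/sources.list.d/linux-dell-com.list
    apt update
    # srvadmin-all is the OMSA meta-package
    apt install srvadmin-all
    ```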
  20. pmxcfs segfaults

    My 2 cents:
    [Wed Feb 13 05:24:44 2019] perf: interrupt took too long (4931 > 4920), lowering kernel.perf_event_max_sample_rate to 40500
    [Wed Apr 10 16:24:53 2019] cfs_loop[6168]: segfault at 7f3bad915000 ip 00007f3bad08378a sp 00007f3ba4c323a8 error 4 in[7f3bad000000+195000]
    [Sun...

