Search results

  1. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    I've been facing the same issue (clients using insecure global_id) after upgrading to Octopus 15.2.11 & pve-manager 6.4-8, but my config is not really standard, so I don't know if this report can help. My Proxmox cluster: 4 storage nodes with Ceph installed; 3 compute nodes without the Ceph part deployed...
  2. Network problem bond+vlan+bridge

    Workaround found ... I opened a case at Dell to see if the problem was known ... but nothing in their databases. Debian / Proxmox is not really supported, but they suggested a really clever idea ... add the slaves later. So in interfaces, bond0 has the 2 interfaces on one PCI card, and no...
  3. Network problem bond+vlan+bridge

    Yes, and MTU is not needed in the vmbr confs .... but it works. Yes ... that's the last test I wanted to do: reinstall ifupdown. Seems OK: root@ceph1:~# systemctl status networking ● networking.service - Network initialization Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor...
  4. Network problem bond+vlan+bridge

    ens6xxxx: 2 x 10 G on one PCI card. eno1np0, eno2np1, eno3, eno4: the other card with 2 x 10 G + 2 x 1000BaseT (only 10G tested). My interfaces file: auto lo iface lo inet loopback auto eno1np0 iface eno1np0 inet manual iface eno3 inet manual iface eno4 inet manual auto ens6f0np0 iface...
  5. Network problem bond+vlan+bridge

    Same issue for me with 2 PCI cards with Broadcom Limited BCM57412 NetXtreme-E 10Gb in a Dell R740xd. No bond mixing interfaces between the two PCI cards comes up at boot. The problem seems to be bond0.vlan not coming up .... The stranger thing is that I can configure and run my network config on a living...
  6. iSCSI mount of device with 4k blocs

    I've made some modifications to the code to achieve my goal: changing the sector sizes of some of the virtio disks of a guest. My tests seem OK, but I don't know enough about the code to be sure. My version: pve-manager/6.2-10/a20769ed (running kernel: 5.4.44-2-pve) 1) I introduce a new boolean...
  7. iSCSI mount of device with 4k blocs

    Many tests later ... I think this is not possible. The block_size parameters (logical & physical) must be added to the virtioX: definition .... but they are not implemented in Proxmox. Maybe in a next release .....
  8. iSCSI mount of device with 4k blocs

    In libvirt/QEMU, in the XML you have: <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' io='native'/> <source dev='/dev/eql/seafile1'/> <blockio logical_block_size='4096' physical_block_size='4096'/> <target dev='vdb' bus='virtio'/>...
  9. iSCSI mount of device with 4k blocs

    Sorry, but last week was a bit hard and I was a bit confused ... the iSCSI mount from the EqualLogic is OK, and the raw device (or LUN) comes up as it has been formatted (physical & logical sector of 4096). The problem is when you give this raw device to a guest VM ... within the VM the sector is 512 .... and I...
  10. iSCSI mount of device with 4k blocs

    Good evening, I'm testing Proxmox/Ceph for the migration of a libvirt/KVM configuration. The problem I get is that I use some 'data' disks on guests on my libvirt host that I can't mount on my Proxmox platform ... These disks are iSCSI LUNs from EqualLogic arrays, formatted with 4 k sectors and used...
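
The "add the slaves later" workaround from the bond+vlan+bridge thread (results 2–5) could be sketched in /etc/network/interfaces roughly as below. This is a hypothetical reconstruction: only the interface names come from the snippets, while the bond mode, VLAN ID, address, and the post-up mechanism for attaching the second card's slave are assumptions, since the snippet is truncated before the actual stanza.

```
# Hypothetical sketch of the Dell-suggested workaround:
# enslave only the first PCI card's ports in the bond0 stanza,
# then attach the second card's port after the bond exists.
auto bond0
iface bond0 inet manual
    bond-slaves eno1np0 eno2np1      # both ports on the same PCI card
    bond-mode 802.3ad                # assumed; mode not shown in snippet
    bond-miimon 100
    # second card's port added "later" (placement assumed):
    post-up ip link set ens6f0np0 master bond0

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24            # placeholder address
    bridge-ports bond0.100           # placeholder VLAN on top of the bond
    bridge-stp off
    bridge-fd 0
```

The point of the workaround, per the thread, is that a bond mixing interfaces from two PCI cards fails to come up at boot, so the cross-card slave is only attached once the bond already exists.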
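
The flattened libvirt snippet in result 8 can be reassembled into a well-formed disk fragment; the device path and target name are the poster's, and the `blockio` element is what exposes 4k sectors to the guest (the capability Proxmox was said to lack in result 7):

```xml
<!-- libvirt disk definition passing a raw iSCSI LUN through
     with explicit 4096-byte logical/physical sector sizes -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/eql/seafile1'/>
  <blockio logical_block_size='4096' physical_block_size='4096'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Without the `blockio` element, the guest sees the usual 512-byte emulated sector size, which matches the symptom described in result 9.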
