Recent content by Spiros Pap

  1. performance of SAN nvme storage

    The fio command line arguments are in my initial post. The fio tests are always run from inside a VM on Proxmox. The VM has 4 cores, but I don't think the performance is CPU bound: my cores run at 2.4 GHz (Xeon Gold 6148), and if I only get 70K IOPS, what CPU should I have to reach...
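
    For reference, a 4K random-read fio run of the kind being discussed might look like the sketch below; the real arguments are in the initial post, so the device path, queue depth and job count here are only illustrative placeholders.

        # illustrative 4K random read against a raw block device inside the VM
        fio --name=randread --filename=/dev/vdb --direct=1 --rw=randread \
            --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
            --runtime=60 --time_based --group_reporting

    At a given per-I/O latency, achievable IOPS scale roughly with the outstanding I/O (iodepth x numjobs), which tends to be the ceiling long before CPU clock speed is.
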
  2. performance of SAN nvme storage

    They are quoting numbers like 100 GB/s, 8 million IOPS at 4K (warm) and 2.5M IOPS at 4K read miss... The drives are FCM4 19TB flash modules. The question is how it is possible to be so far from this spec. The hosts are HPE380G10 with Xeon(R) Gold 6148 CPU @ 2.40GHz. They are not the best, but I guess...
  3. performance of SAN nvme storage

    We are using LVM. The storage presents LUNs, which are initialized by LVM, and then Proxmox carves out LVs for each VM disk. But this only applies to disks that are attached to the VM by Proxmox. The other tests are ones where the VM itself connects to the storage via iSCSI or NVMe/TCP, so host...
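
    As a rough sketch of that in-guest path (the portal address, IQN and NQN below are placeholders, not the values from this setup):

        # iSCSI initiator inside the guest
        iscsiadm -m discovery -t sendtargets -p 192.0.2.10
        iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10 --login

        # NVMe/TCP initiator inside the guest
        modprobe nvme-tcp
        nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2000-01.com.example:subsys1
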
  4. performance of SAN nvme storage

    Hi all, I have a Proxmox cluster that utilizes shared LUNs (LVM) from an IBM FlashSystem 9500. The storage is attached to the hosts via 16G FC for main use and via 100G Ethernet for iSCSI/NVMe-over-TCP connections. The storage can supposedly reach 1M+ IOPS. I was doing various tests the other day...
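
    With both FC/iSCSI (multipath) and NVMe/TCP paths to the same array, something like the following can confirm which transport and how many paths each LUN is actually using (generic commands, not output from this cluster):

        multipath -ll       # SCSI LUNs (FC and iSCSI) and their path state
        nvme list-subsys    # NVMe subsystems, controllers and transports (tcp, fc, ...)
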
  5. proxmox migration or clone problem

    Full clone. The storage is the local-lvm.
  6. proxmox migration or clone problem

    Well, I solved it. I rebuilt the cloud-init drives and the VM moved to the other server... There must be a bug somewhere. I still don't understand, though, why I can migrate a VM from cn1 to cn2 but I cannot clone a VM from cn1 to cn2... It should be allowed.
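
    For anyone hitting the same thing, rebuilding a cloud-init drive is roughly the following; VMID 100, the ide2 slot and the local-lvm storage are placeholders in this sketch:

        qm set 100 --delete ide2                # remove the old cloud-init drive
        qm set 100 --ide2 local-lvm:cloudinit   # recreate it on the desired storage
        qm cloudinit dump 100 user              # optionally check the generated user-data
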
  7. proxmox migration or clone problem

    Hi all, Uptime:

        root@cn01:/etc/pve/nodes# uptime
        15:03:04 up 1:21, 2 users, load average: 0.00, 0.00, 0.00
        root@cn2:~# uptime
        15:02:59 up 1 day, 19:33, 1 user, load average: 0.00, 0.03, 0.00
        root@cn2:~# pveversion -v
        proxmox-ve: 8.2.0 (running kernel: 6.5.11-4-pve)
        pve-manager: 8.2.2...
  8. proxmox migration or clone problem

    Hi all, I have a Proxmox test setup. Each server has local storage. I am trying to migrate a VM from one node to another and it fails with the error: ERROR: migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section. Both nodes...
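
    Since the error points at a version mismatch between the nodes, a first step is to compare versions and bring the older target node up to date, roughly as below (standard Proxmox package tooling, shown only as a sketch):

        pveversion -v                    # run on both nodes and compare
        apt update && apt full-upgrade   # on the node that is behind
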
  9. Where does VLAN tagging of the Ethernet frames take place?

    The Ethernet frames from the VM come from the tap adapter. The VLAN tagging of these frames happens when the packets are processed by the bridge. You can see the VLAN tag with:

        bridge vlan show dev tap100i0
        port      vlan-id
        tap100i0  444 PVID Egress Untagged
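
    On a VLAN-aware bridge the same tag can be set or inspected by hand with iproute2; a minimal sketch, assuming the tap port and VLAN 444 from the example above (Proxmox normally programs this itself from the NIC's tag option):

        bridge vlan add dev tap100i0 vid 444 pvid untagged
        bridge vlan show dev tap100i0
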
  10. lvm stuck

    When I do "systemctl status multipathd" I am getting:

        multipathd[3683]: sdbd: path wwid has changed. Refusing to use

    What does this mean? Any ideas what might be causing this?
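
    One way to narrow it down (a sketch, assuming sdbd is still the suspect path and a reasonably recent multipath-tools) is to compare the WWID udev reports now with what multipath has recorded, and only drop and re-add the path if the new WWID is the expected one:

        /lib/udev/scsi_id -g -u -d /dev/sdbd   # WWID the path reports right now
        grep -i <wwid> /etc/multipath/wwids    # WWID multipath has on record
        multipathd del path sdbd               # if appropriate, drop the stale path
        multipathd add path sdbd               # and re-add it for re-evaluation
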
  11. lvm stuck

    Hi all, I am once again in the dire situation where one Proxmox node has problems with one multipathed iSCSI device and has stopped responding to LVM commands, which hang forever. I have tried many things to restore the iSCSI device but I haven't managed to do it. The question is if there...
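
    For what it's worth, when LVM commands hang like this the blockage can usually be narrowed down with generic diagnostics along these lines (a sketch, not specific to this node):

        dmesg | grep -i "blocked for more than"          # kernel tasks stuck on I/O
        ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /^D/'   # processes in uninterruptible sleep
        iscsiadm -m session -P 3                         # state of the iSCSI sessions
        multipath -ll                                    # state of the multipath maps
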
  12. Ceph OSD rebalancing

    Well, this tends to be funny: I did a "ceph osd set-require-min-compat-client luminous", which allowed the OSDs to rebalance, and then a "ceph osd reweight-by-utilization" to make them rebalance. The result was:

        Before:
        ID  CLASS  WEIGHT  REWEIGHT  SIZE  RAW USE  DATA  OMAP  META...
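
    Note that reweight-by-utilization also has a dry-run variant, which is worth running first; a minimal sketch:

        ceph osd test-reweight-by-utilization   # only show the proposed reweights
        ceph osd reweight-by-utilization        # apply them if they look sane
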
  13. Ceph OSD rebalancing

    Well, I chose 2/1 because I usually see RAID5 for SSDs on enterprise storage, but OK, your comment is noted (about the 3/2 size/min_size). I have only one pool on the SSDs, so the target_ratio should really be 100%. The output in my post was part of the "ceph osd df tree" command. Now...
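
    With a single pool on those SSDs, that ratio can be handed to the autoscaler directly; a sketch, with <poolname> as a placeholder:

        ceph osd pool autoscale-status                       # current PG counts and ratios
        ceph osd pool set <poolname> target_size_ratio 1.0   # pool expected to use ~all capacity
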
  14. Ceph OSD rebalancing

    Hi all, I have a setup of 3 Proxmox servers (7.3.6) running Ceph 16.2.9. I have 21 SSD OSDs: 12 * 1.75TB and 9 * 0.83TB. On these OSDs I have one pool with replication 1 (one copy). I have set pg_autoscale_mode to 'on' and the resulting PG count for the pool is 32. My problem is that the OSDs are very...
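
    Besides reweight-by-utilization, Ceph also ships the balancer module, which can even out PG placement via upmap once all clients are at least luminous (the same prerequisite set in the post above); a sketch of enabling it:

        ceph osd set-require-min-compat-client luminous   # upmap needs luminous+ clients
        ceph balancer mode upmap
        ceph balancer on
        ceph balancer status
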
  15. ceph: unable to create OSDs over iscsi

    Well yes, forums are full of advice for Ceph/ZFS that OSDs/disks should rely on local storage. While both Ceph and ZFS are built around the assumption of local disks, for perfectly valid reasons, you can always map those reasons onto your own environment, take your risks and create something that suits...