Playing around with Proxmox and HP MSA P2000

grefabu

Hi,

I have some older hardware here:

3 x HP Proliant DL360p G8
1 x HP MSA P2000

I connected the MSA via SAS and set it up with multipath. After some initial problems it seems to work as expected. I created a shared LVM and can now install VMs and migrate them from one node to another.
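
In case it helps someone, the rough sequence was something like the following; the WWID, VG and storage names are only placeholders, so adapt them to your own setup.

Code:
# reload multipath and check that all paths to the MSA are visible
multipath -r
multipath -ll

# create a PV/VG on the multipath device (the WWID below is a placeholder)
pvcreate /dev/mapper/3600c0ff000example
vgcreate vg_msa /dev/mapper/3600c0ff000example

# add it to Proxmox as a shared LVM storage
pvesm add lvm msa-lvm --vgname vg_msa --shared 1 --content images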

But I have some questions:

1. Is it possible to use a file system with snapshot support in this setup (SAS, multipath)? This could be necessary for another setup.

2. One volume on the MSA still holds some old VMware images. I could mount the volume via vmfs-tools on one node (see the sketch below). I want to test migrating these images to a Proxmox image. How should I do that?
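
The mount was roughly this (the vmfs-tools package provides vmfs-fuse; the device path is just an example):

Code:
apt install vmfs-tools
mkdir -p /mnt/vmfs
# mount the VMFS volume read-only via FUSE (device path is a placeholder)
vmfs-fuse /dev/mapper/3600c0ff000vmwarelun /mnt/vmfs
ls /mnt/vmfs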

Bye

Gregor
 
The primary reference point regarding storage is this page: https://pve.proxmox.com/wiki/Storage

There is no PVE-supported cluster filesystem for shared storage that would provide integrated snapshots at the VM level. You can implement any of the industry-supported clustered file systems (https://en.wikipedia.org/wiki/Clustered_file_system) and then use the qcow2 format for your VM disks. You will then have qcow2 snapshot support integrated with PVE.
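
Once such a filesystem is mounted at the same path on every node, you can add it as a shared directory storage and place qcow2 disks on it, roughly like this (storage ID and path are just examples):

Code:
# the cluster filesystem is assumed to be mounted at /mnt/clusterfs on all nodes
pvesm add dir shared-clusterfs --path /mnt/clusterfs --shared 1 --content images
# VM disks created on this storage in qcow2 format will support snapshots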

For the migration - measure seven times before you cut. Make a copy/backup/snapshot first, and use the available tools and articles on converting a vmdk/vmxf file to a PVE-supported format.
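
The usual route is qemu-img or qm importdisk; a rough example (the VM ID, paths and storage name are placeholders):

Code:
# inspect the source disk first
qemu-img info /mnt/vmfs/somevm/somevm.vmdk

# import it directly as a new disk of an existing PVE VM (ID 100) on a given storage
qm importdisk 100 /mnt/vmfs/somevm/somevm.vmdk msa-lvm

# or convert it manually to qcow2 first
qemu-img convert -p -O qcow2 /mnt/vmfs/somevm/somevm.vmdk /var/lib/vz/images/100/vm-100-disk-0.qcow2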

Good luck


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi,

what you write is right, but it doesn't make me really happy ;-)

So, since I can play around, I will test creating additional volumes on the MSA, present them to the nodes, and put ZFS on them. Then I will try ZFS replication for the VMs. It is not really shared storage, but something close to it :)
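
Roughly what I have in mind, as an untested sketch (pool, device and node names are just examples):

Code:
# one MSA volume per node, presented via multipath (device path is a placeholder)
zpool create -o ashift=12 tank /dev/mapper/3600c0ff000examplevol
pvesm add zfspool local-msa-zfs --pool tank --content images,rootdir

# replication is then configured per VM, e.g. replicate VM 100 to fti-pve002 every 15 minutes
pvesr create-local-job 100-0 fti-pve002 --schedule "*/15"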

Bye

Gregor
 
This is not a fantastic idea - your MSA is already providing a backend RAID of some sort. You are planning to take that giant RAID pool, slice it into pieces, place ZFS on top of that RAID (I think that's frowned upon), and then replicate between slices of the same pool. I expect performance to suffer.

Your best approach is really some sort of clustered filesystem + qcow


 
Sure, you are right, ZFS should talk to the disks directly.

Your best approach is really some sort of clustered filesystem + qcow
But if Proxmox doesn't support one in this configuration, I don't know how...
VMware has VMFS for this case, and Microsoft has CVFS. With a Hyper-V server and CVFS we have a system running at a subsidiary. My hope is to replace it with a Proxmox solution without buying new hardware.
 
Thank you for the information.
So I was able to get a setup with OCFS2 and Proxmox working.

Unfortunately I got some kernel hangs, and I read here in the forum that OCFS2 isn't really stable.
GFS2 could be another candidate; I remember trying it on a test setup years ago with DRBD. Maybe I will test it, even though the information on it isn't great either.
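
For anyone who wants to try OCFS2 anyway, the O2CB side only needs a minimal /etc/ocfs2/cluster.conf on every node, roughly like the following (parameters are tab-indented; cluster name, node names and IPs are only examples):

Code:
cluster:
	node_count = 3
	name = pvecluster

node:
	ip_port = 7777
	ip_address = 192.168.1.11
	number = 0
	name = fti-pve001
	cluster = pvecluster

node:
	ip_port = 7777
	ip_address = 192.168.1.12
	number = 1
	name = fti-pve002
	cluster = pvecluster

node:
	ip_port = 7777
	ip_address = 192.168.1.13
	number = 2
	name = fti-pve003
	cluster = pvecluster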
 
Code:
Jan 18 11:45:40 fti-pve003 kernel: [71362.354217] ------------[ cut here ]------------
Jan 18 11:45:40 fti-pve003 kernel: [71362.354223] kernel BUG at fs/jbd2/journal.c:859!
Jan 18 11:45:40 fti-pve003 kernel: [71362.354269] invalid opcode: 0000 [#1] SMP PTI
Jan 18 11:45:40 fti-pve003 kernel: [71362.354292] CPU: 14 PID: 2360 Comm: jbd2/dm-2-26 Tainted: P          IO      5.13.19-2-pve #1
Jan 18 11:45:40 fti-pve003 kernel: [71362.354325] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 08/20/2012
Jan 18 11:45:40 fti-pve003 kernel: [71362.354351] RIP: 0010:jbd2_journal_next_log_block+0x7b/0x80
Jan 18 11:45:40 fti-pve003 kernel: [71362.354384] Code: 00 75 10 49 8b 84 24 50 03 00 00 49 89 84 24 38 03 00 00 41 c6 44 24 44 00 4c 89 ea 4c 89 e7 e8 fb fe ff ff 41 5c 41 5d 5d c3 <0f> 0b 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 41 56 41 55 41 bd ea ff
Jan 18 11:45:40 fti-pve003 kernel: [71362.354451] RSP: 0018:ffffbbf9cf90fc60 EFLAGS: 00010246
Jan 18 11:45:40 fti-pve003 kernel: [71362.354474] RAX: 0000000000000001 RBX: ffffa0f74cb1d800 RCX: 00000000ffffffff
Jan 18 11:45:40 fti-pve003 kernel: [71362.354502] RDX: 00000000000000ff RSI: ffffbbf9cf90fc80 RDI: ffffa0f7486e1044
Jan 18 11:45:40 fti-pve003 kernel: [71362.354529] RBP: ffffbbf9cf90fc70 R08: 0000000000000001 R09: ffffa0f74816cff0
Jan 18 11:45:40 fti-pve003 kernel: [71362.354557] R10: 0000000000000001 R11: ffffa0f74cb1d600 R12: ffffa0f7486e1000
Jan 18 11:45:40 fti-pve003 kernel: [71362.354584] R13: ffffbbf9cf90fc80 R14: ffffa0f7486e1000 R15: ffffa0f74cb1d800
Jan 18 11:45:40 fti-pve003 kernel: [71362.354612] FS:  0000000000000000(0000) GS:ffffa0f6df800000(0000) knlGS:0000000000000000
Jan 18 11:45:40 fti-pve003 kernel: [71362.354643] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 18 11:45:40 fti-pve003 kernel: [71362.354666] CR2: 00007f3cc27e7000 CR3: 0000001dab610004 CR4: 00000000000626e0
Jan 18 11:45:40 fti-pve003 kernel: [71362.354694] Call Trace:
Jan 18 11:45:40 fti-pve003 kernel: [71362.354709]  jbd2_journal_get_descriptor_buffer+0x38/0x100
Jan 18 11:45:40 fti-pve003 kernel: [71362.354736]  journal_submit_commit_record.part.0+0x3b/0x1f0
Jan 18 11:45:40 fti-pve003 kernel: [71362.354760]  jbd2_journal_commit_transaction+0x1391/0x1910
Jan 18 11:45:40 fti-pve003 kernel: [71362.354786]  kjournald2+0xa9/0x280
Jan 18 11:45:40 fti-pve003 kernel: [71362.354803]  ? wait_woken+0x80/0x80
Jan 18 11:45:40 fti-pve003 kernel: [71362.354822]  ? load_superblock.part.0+0xb0/0xb0
Jan 18 11:45:40 fti-pve003 kernel: [71362.354842]  kthread+0x12b/0x150
Jan 18 11:45:40 fti-pve003 kernel: [71362.354861]  ? set_kthread_struct+0x50/0x50
Jan 18 11:45:40 fti-pve003 kernel: [71362.354881]  ret_from_fork+0x22/0x30
Jan 18 11:45:40 fti-pve003 kernel: [71362.354901] Modules linked in: veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables sctp ip6_udp_tunnel udp_tunnel libcrc32c ocfs2_dlmfs iptable_filter bpfilter bonding tls softdog ocfs2_stack_o2cb ocfs2_dlm ocfs2 ocfs2_nodemanager ocfs2_stackglue quota_tree nfnetlink_log nfnetlink dm_round_robin dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua ipmi_ssif intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mgag200 irqbypass drm_kms_helper crct10dif_pclmul ghash_clmulni_intel cec aesni_intel rc_core i2c_algo_bit crypto_simd cryptd fb_sys_fops syscopyarea rapl sysfillrect sysimgblt intel_cstate acpi_ipmi hpilo serio_raw ipmi_si ioatdma dca pcspkr input_leds ipmi_devintf acpi_power_meter mac_hid ipmi_msghandler zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap drm sunrpc ip_tables x_tables autofs4 usbmouse hid_generic usbkbd usbhid hid
Jan 18 11:45:40 fti-pve003 kernel: [71362.354973]  ses enclosure mpt3sas hpsa uhci_hcd raid_class ehci_pci crc32_pclmul psmouse lpc_ich pata_acpi ehci_hcd scsi_transport_sas tg3
Jan 18 11:45:40 fti-pve003 kernel: [71362.355330] ---[ end trace b9d0f439e13ad957 ]---
 
The installation, configuration and support for either OCFS2 or GFS2 are out of scope for Proxmox support. There might be people here who can share knowledge, but your best bet is to find a community focused on that specific technology. In this case Proxmox is just an application that makes use of a shared file system to place files.

Good luck

 
Now it looks better with GFS2; I followed this hint:
https://gist.github.com/jazzl0ver/8959215eaed6e8367f9a4486a5690809

One thing I've noticed: I don't mount the device on boot, only manually after rebooting a node. When mounting it on boot I got some problems with the machine. Maybe I have to change my mount options in the systemd config.
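
What I still want to try for the boot problem is to order the mount after the cluster services in /etc/fstab, roughly like this (device, mountpoint and the dlm unit name are assumptions on my side):

Code:
# /etc/fstab - wait for the network and the DLM service before mounting the GFS2 volume
/dev/mapper/vg_msa-gfs2  /mnt/gfs2  gfs2  _netdev,noatime,x-systemd.requires=dlm.service  0 0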

Now I can migrate, clone, bulk-migrate and so on without problems.

I will try to convert the vmdk files and see if it works as expected.
 
Thank you for the information.
So I was able to get a setup with OCFS2 and Proxmox working.

Unfortunately I got some kernel hangs, and I read here in the forum that OCFS2 isn't really stable.
GFS2 could be another candidate; I remember trying it on a test setup years ago with DRBD. Maybe I will test it, even though the information on it isn't great either.

This is because ocfs2-tools (8.6.6), currently present in the stable version of Proxmox, does not work correctly on the 5.13 and 5.15 kernels. To make it work, you must install version 8.7.1 of ocfs2-tools, which is in the pve testing repository. Version 8.7.1 works and has already been tested (it should be the default, in my opinion).

Below is my report on the bug found in ocfs2-tools on kernels 5.13/5.15:

Post source: https://forum.proxmox.com/threads/o...r-proxmox-ve-7-x-available.100936/post-457175

I'd like to report this bug to the developers of ocfs2-tools, but I have no idea how to do it, so you are hearing about the problem here first-hand. For those of us who want to use ocfs2-tools, installing version 8.7.1 is the option we have to settle for.
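
For reference, pulling the newer ocfs2-tools from the test repository looks roughly like this (assuming PVE 7 on Debian Bullseye; adjust the suite name for your version):

Code:
# add the pvetest repository
echo "deb http://download.proxmox.com/debian/pve bullseye pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
# install the newer ocfs2-tools from pvetest
apt install ocfs2-tools
# afterwards you may want to remove the pvetest repo again
rm /etc/apt/sources.list.d/pvetest.list && apt update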
 
