Search results

  1. Proxmox 5.4 stops working (ZFS issue?)

    Hi, today the problem occurred again and I had to restart the server. As I wrote in my previous post, I have updated the BIOS to the latest version available and Proxmox is also up to date. The only strange thing is that my kernel is 4.15.18-24-pve and not 4.15.18-52-pve as suggested by...
  2. Proxmox 5.4 stops working (ZFS issue?)

    Hi, as suggested by t.lamprecht, I installed the Intel microcode and updated the BIOS and Proxmox with apt dist-upgrade, but my running kernel is still 4.15.18-24-pve and not 4.15.18-52, as you can see from pveversion -v:

        proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve)
        pve-manager: 5.4-13...
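
    A likely explanation for the symptom above is that the newer kernel package is installed but the node has not been rebooted into it. A minimal check, assuming standard Proxmox/Debian tooling:

        uname -r                            # kernel actually running
        dpkg -l 'pve-kernel-*' | grep ^ii   # pve kernel packages installed
        # if a newer pve-kernel is listed than the one running,
        # a reboot is needed to load it
        reboot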
  3. Proxmox 5.4 stops working (ZFS issue?)

    I forgot to specify that in the GUI, when I had the problem, the IO delay was high, about 18%. Now that the server has no problems, the IO delay is 0.05% - 0.4%. zpool iostat:

                      capacity     operations     bandwidth
        pool        alloc   free   read  write   read  write
        ----------  -----  -----...
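
    To watch pool I/O live while the delay spikes, a periodic per-vdev view helps; a generic ZFS sketch, not taken from the thread:

        zpool iostat -v 5    # per-vdev statistics, refreshed every 5 seconds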
  4. Proxmox 5.4 stops working (ZFS issue?)

    Thank you t.lamprecht, I'll do what you suggested tomorrow night. Now I see this message on the screen:

        [ 1448.513043] kvm [54271]: vcpu1, guest rIP: 0xfffff80250fb6582 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x1 nop

    while in kern.log:

        kernel: [ 7630.723176] perf: interrupt took too long...
  5. Proxmox 5.4 stops working (ZFS issue?)

    Hi, I have a single node with Proxmox 5.4-13, and tonight it stopped working. I had to hard reboot the node... I have three ZFS pools (one for Proxmox in RAID 1, one for my HDD disks in RAIDZ2 and one for my SSD disks in RAIDZ2), all the pools are online and the scrub is OK. pve version is...
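
    The pool health and scrub state described above can be confirmed with standard ZFS commands (a generic sketch):

        zpool status -x   # prints 'all pools are healthy' or lists faulted pools
        zpool status      # full detail, including the result of the last scrub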
  6. Proxmox 6.1 with Ceph and RoCE

    Can we expect the introduction of this feature in future versions? Thank you, wolfgang.
  7. Proxmox 6.1 with Ceph and RoCE

    Hi, in a few weeks I will have to set up a 4-node cluster with Ceph. Each node will have 4 NVMe OSDs and I would like to use RoCE for the Ceph public network and the Ceph cluster network. My network card is a Supermicro AOC-M25G-m4s, essentially a Mellanox ConnectX-4 LX, with four 25Gb ports. I would like to...
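
    For context, Ceph's RDMA messenger is enabled in ceph.conf; a minimal sketch, assuming the ConnectX-4 LX enumerates as mlx5_0 (RDMA support in Ceph of this era was experimental, so treat this as illustrative only):

        [global]
        ms_type = async+rdma
        ms_async_rdma_device_name = mlx5_0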
  8. Firewall multicast rules

    Hi, yes I tested multicast with omping and the multicast and unicast packet loss is 0% (with the firewall disabled). My switches are Cisco Nexus 3064 and the configuration is (VLAN 15 is the management VLAN):

        vlan configuration 15
          ip igmp snooping querier 192.168.15.253
          ip igmp snooping...
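
    The omping test mentioned above is run on all nodes simultaneously; the invocation the Proxmox documentation of the time suggested looks roughly like this (hostnames are placeholders):

        omping -c 10000 -i 0.001 -F -q prx1 prx2 prx3 prx4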
  9. Firewall multicast rules

    Hi, /etc/pve/nodes/prx1/host.fw:

        [OPTIONS]
        enable: 1

        [RULES]
        GROUP managementipmi          # Management IPMI to ManagementVM
        GROUP ceph_private -i ceph23  # Ceph Private Subnet OK
        GROUP ceph_public -i ceph22   # Ceph Public OK
        GROUP migrationvm -i migr21   # MigrationVM Access
        GROUP management -i mgmt20 #...
  10. Firewall multicast rules

    Hi, I added the following rules in the firewall GUI:

        Rule n.1:
          Direction: IN
          Action: ACCEPT
          Source: left blank
          Destination: left blank
          Macro: left blank
          Protocol: udp
          Source Port: left blank
          Destination Port: 5404:5405

        Rule n.2:
          Direction: IN
          Action: ACCEPT
          Source: left blank...
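
    Rules created in the GUI land in the corresponding .fw file; the text form of rule n.1 would be roughly the following (a sketch of the pve-firewall syntax, not a quote from the thread):

        [RULES]
        IN ACCEPT -p udp -dport 5404:5405   # corosync multicast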
  11. Firewall multicast rules

    Hi, I created a cluster of 4 nodes, and now I would like to know which rule I have to add in the firewall GUI to permit multicast traffic on the management subnet (192.168.15.0/24, iface vmbr0)... Thank you very much
  12. OVS Bridge and jumbo frames (mtu=9000)

    Hi, I have read several posts about configuring an OVS bridge with jumbo frames (MTU 9000) but I am still confused, so I have some questions: Is it possible to set mtu=9000 in the GUI when I create/modify an OVS Bridge? Is it possible to set mtu=9000 in the GUI when I create/modify an OVS IntPort...
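
    On releases of this era the MTU was usually set in /etc/network/interfaces rather than in the GUI; a minimal sketch, assuming a physical port named eno1 (depending on the openvswitch/ifupdown version, the keyword is mtu or ovs_mtu):

        auto eno1
        iface eno1 inet manual
            ovs_type OVSPort
            ovs_bridge vmbr1
            mtu 9000

        auto vmbr1
        iface vmbr1 inet manual
            ovs_type OVSBridge
            ovs_ports eno1
            mtu 9000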
  13. FreeNAS 11.1 as SAN (iSCSI)

    Hi, I have a project to use a FreeNAS 11.1 box as a SAN for a cluster of three Proxmox 5.2 nodes. FreeNAS will have 12 disks for VM and container storage: 6 x 8TB HDD; 6 x 480GB SSD. I will use a 960GB NVMe for ZIL and L2ARC. Two SuperDOMs are for boot (RAID 1). In FreeNAS I will create two RAIDZ2 volumes: one...
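
    On the Proxmox side, an iSCSI target exported by FreeNAS is declared in /etc/pve/storage.cfg; a minimal sketch with placeholder portal address and IQN:

        iscsi: freenas-san
            portal 192.168.20.10
            target iqn.2018-01.org.freenas.ctl:proxmox
            content images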
  14. Ceph pool numbers

    Hi, in a Ceph cluster of 3 nodes, each of which has 4 HDD disks and 4 SSD disks, is it better to create a single pool for each VM/CT or is it better to use a small number of larger pools? In other words, is it better to have many small pools or a few larger pools? Thank you very much
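
    One reason pool count matters is placement-group budgeting. Using the rule of thumb of the time (OSDs x 100 / replica count, rounded to a power of two, shared across all pools), this 24-OSD cluster gets roughly:

        24 OSDs x 100 / 3 replicas = 800  ->  ~1024 PGs in total
        # one pool per VM would leave each pool only a handful of PGs;
        # a few larger pools keep pg_num per pool in a sensible range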
  15. Ceph and local cache on SSD

    Hi, initially we will have about 150-180 VMs/CTs, some of which will be very write-intensive (databases). Cluster 2 (Ceph) will use replica 3, so I can lose 2 nodes without losing data. As soon as possible I will add at least one node. Thank you
  16. Ceph and local cache on SSD

    Hello, we are planning to create two clusters: Cluster 1: compute nodes (Proxmox 5.2); Cluster 2: storage nodes (latest version of Ceph) with initially 32 TB of storage on HDDs and at least 4 TB on SSDs for a mixed workload. Some questions are marked in red. Cluster 1: 3 x Compute Node, CPU: 2 x Intel Xeon...
  17. Proxmox 5.1 - ZFS: guidelines

    OK, thank you. Now I'm in the testing phase and my test server has 4 old HDDs: 2 for the OS and the other 2 for storage (mirror). I created a pool with:

        zpool create -f -o ashift=9 -m /zfsFS/zstorage zstorage \
            mirror /dev/disk/by-id/wwn-0x6842b2b05711e7002170862a0e0d42c8...
  18. Proxmox 5.1 - ZFS: guidelines

    Hi wolfgang, to my question n.1 (for VMs (Linux-based, Windows-based, etc.), is it better to use a volume or a dataset/filesystem?) you answered that it is better to use a volume, because the tool chain is optimized for zvols and a filesystem does not support O_DIRECT. Now I have read this Proxmox wiki...
  19. Proxmox 5.1 - ZFS: guidelines

    Thank you wolfgang. Another question: consider the following scenario. I have a pool named storage and within it a volume named VM100. In this volume I have a VM with ID 100. Now, at time t0 I take a snapshot of VM 100 with:

        zfs snapshot storage/VM100@vm100-t0

    Then I continue to use my...
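
    For reference, the usual follow-up operations on such a snapshot (standard ZFS commands, not quoted from the thread):

        zfs list -t snapshot -r storage/VM100                 # list snapshots of the volume
        zfs rollback storage/VM100@vm100-t0                   # discard all changes since t0
        zfs clone storage/VM100@vm100-t0 storage/VM100-test   # writable copy of the t0 state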
