Search results

  1. Proxmox 5.4 stops working (ZFS issue?)

    Hi, I removed swap with "swapoff -a" and now... I hope all goes well... I also noticed the following message in kern.log: Feb 7 14:11:50 dt-prox1 kernel: [57824.528963] perf: interrupt took too long (4912 > 4902), lowering kernel.perf_event_max_sample_rate to 40500 What is it?
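
    That perf message is the kernel throttling its own sampling interrupt rate; it is informational rather than an error. A minimal sketch for inspecting or pinning the rate (the value below is illustrative, not from the thread):

        # Show the ceiling the kernel lowered itself
        sysctl kernel.perf_event_max_sample_rate
        # Optionally pin it lower so the warning stops recurring
        sysctl -w kernel.perf_event_max_sample_rate=20000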
  2. Proxmox 5.4 stops working (ZFS issue?)

    Maybe it is related to the fact that my server uses a swap partition on a ZFS RAID 1 volume, created when I installed Proxmox 5.4?
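
    Swap on a ZFS zvol is a known trigger for hangs under memory pressure, so this suspicion is plausible. A minimal sketch for checking and disabling it (the zvol path is hypothetical):

        swapon --show                    # does swap live on a ZFS zvol?
        swapoff /dev/zvol/rpool/swap     # hypothetical device name
        # then comment out the matching /etc/fstab entry so it stays off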
  3. Proxmox 5.4 stops working (ZFS issue?)

    If it helps, this is arc_summary (part 2): ZFS Tunables: dbuf_cache_hiwater_pct 10 dbuf_cache_lowater_pct 10 dbuf_cache_max_bytes 104857600 dbuf_cache_max_shift 5...
  4. Proxmox 5.4 stops working (ZFS issue?)

    If it helps, this is arc_summary (part 1): ------------------------------------------------------------------------ ZFS Subsystem Report Thu Feb 06 07:44:18 2020 ARC Summary: (HEALTHY) Memory Throttle Count: 0 ARC Misc: Deleted: 13.13M...
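
    If unbounded ARC growth is suspected from these reports, the usual mitigation is capping it. A minimal sketch, assuming an 8 GiB cap (the value is illustrative):

        # /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (8 * 1024^3 bytes)
        echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
        update-initramfs -u    # rebuild the initramfs so the cap applies at boot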
  5. Proxmox 5.4 stops working (ZFS issue?)

    Hi, today the problem occurred again and I had to restart the server. As I wrote in my previous post, I have updated the BIOS to the latest available version, and Proxmox is also up to date. The only strange thing is that my kernel is 4.15.18-24-pve and not 4.15.18-52-pve, as suggested by...
  6. Proxmox 5.4 stops working (ZFS issue?)

    Hi, as suggested by t.lamprecht, I installed the Intel microcode and updated the BIOS and Proxmox with apt dist-upgrade, but my running kernel is still 4.15.18-24-pve and not 4.15.18-52, as you can see from pveversion -v: proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve) pve-manager: 5.4-13...
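
    A point of frequent confusion here: 4.15.18-24-pve is a kernel ABI name, while 4.15.18-52 reads like a package version, so the two need not match, and a kernel pulled in by apt dist-upgrade only runs after a reboot. A minimal sketch for checking:

        uname -r                      # ABI of the kernel actually running
        dpkg -l | grep pve-kernel     # installed kernel packages and their versions
        reboot                        # required before a newly installed kernel runs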
  7. Proxmox 5.4 stops working (ZFS issue?)

    I forgot to mention that in the GUI, when I had the problem, the IO delay was high, about 18%. Now that the server has no problems, the IO delay is 0.05% - 0.4%. zpool iostat: capacity operations bandwidth pool alloc free read write read write ---------- ----- -----...
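
    To tie an IO-delay spike like that to a specific disk, per-vdev statistics are the usual next step. A minimal sketch:

        # Per-vdev throughput every 5 seconds; watch for one device lagging its peers
        zpool iostat -v 5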
  8. Proxmox 5.4 stops working (ZFS issue?)

    Thank you t.lamprecht, I'll do what you suggested tomorrow night. Right now I see this message on the screen: [ 1448.513043] kvm [54271]: vcpu1, guest rIP: 0xfffff80250fb6582 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x1 nop while kern.log shows: kernel: [ 7630.723176] perf: interrupt took too long...
  9. Proxmox 5.4 stops working (ZFS issue?)

    Hi, I have a single node with Proxmox 5.4-13, and tonight it stopped working. I had to hard-reboot the node... I have 3 ZFS pools (one for Proxmox in RAID 1, one for my HDD disks in RAIDZ2 and one for my SSD disks in RAIDZ2); all the pools are online and the scrub is OK. The pve version is...
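
    For a quick health pass across all three pools after a hang like this, something along these lines (the pool name is illustrative):

        zpool status -x     # prints "all pools are healthy" or names the sick one
        zpool scrub rpool   # re-verify checksums on one pool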
  10. Proxmox 6.1 with Ceph and RoCE

    Can we expect this feature to be introduced in a future version? Thank you wolfgang.
  11. Proxmox 6.1 with Ceph and RoCE

    Hi, in a few weeks I will have to set up a 4-node cluster with Ceph. Each node will have 4 NVMe OSDs, and I would like to use RoCE for the Ceph public network and the Ceph cluster network. My network card is a Supermicro AOC-M25G-m4S, practically a Mellanox ConnectX-4 Lx, with 4 x 25Gb ports. I would like to...
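
    For reference, Ceph exposes its RDMA transport through the messenger type. A hedged sketch of what such a configuration looks like (the feature was experimental in Ceph releases of this era, and the mlx5 device name is an assumption for a ConnectX-4 Lx):

        # /etc/ceph/ceph.conf -- experimental RDMA messenger
        [global]
        ms_type = async+rdma
        ms_async_rdma_device_name = mlx5_0    # assumed device name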
  12. Firewall multicast rules

    Hi, yes, I tested multicast with omping and the multicast and unicast packet loss is 0% (with the firewall disabled). My switches are Cisco Nexus 3064 and the configuration is (VLAN 15 is the management VLAN): vlan configuration 15 ip igmp snooping querier 192.168.15.253 ip igmp snooping...
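
    The standard multicast check on Proxmox 5.x clusters is omping run on all nodes at once. A minimal sketch with placeholder addresses:

        # Start the same command on every node; expect close to 0% loss
        omping -c 600 -i 1 -q 192.168.15.1 192.168.15.2 192.168.15.3 192.168.15.4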
  13. Firewall multicast rules

    Hi, /etc/pve/nodes/prx1/host.fw:

        [OPTIONS]
        enable: 1

        [RULES]
        GROUP managementipmi            # Management IPMI to ManagementVM
        GROUP ceph_private -i ceph23    # Ceph Private Subnet OK
        GROUP ceph_public -i ceph22     # Ceph Public OK
        GROUP migrationvm -i migr21     # MigrationVM Access
        GROUP management -i mgmt20 #...
  14. Firewall multicast rules

    Hi, I added the following rules via the firewall GUI:

        Rule n.1: Direction: IN, Action: ACCEPT, Source: left blank, Destination: left blank, Macro: left blank, Protocol: udp, Source Port: left blank, Destination Port: 5404:5405
        Rule n.2: Direction: IN, Action: ACCEPT, Source: left blank...
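
    For comparison, Rule n.1 expressed in the host.fw syntax quoted earlier would look roughly like this (Rule n.2 is truncated in the result, so it is omitted):

        [RULES]
        IN ACCEPT -p udp -dport 5404:5405    # corosync ports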
  15. Firewall multicast rules

    Hi, I created a cluster of 4 nodes; now I would like to know which rule I have to add in the firewall GUI to permit multicast traffic on the management subnet (192.168.15.0/24, iface vmbr0)... Thank you very much
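
    Besides the corosync UDP ports, multicast group management itself uses IGMP, so a rule set for the management subnet might look like this sketch (hedged; the source restriction is an assumption):

        [RULES]
        IN ACCEPT -p udp -dport 5404:5405 -source 192.168.15.0/24   # corosync
        IN ACCEPT -p igmp                                           # IGMP membership reports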
  16. OVS Bridge and jumbo frame (mtu=9000)

    Hi, I have read several posts about configuring an OVS bridge with jumbo frames (mtu 9000) but I am still confused, so I have some questions: Is it possible to set mtu=9000 in the GUI when I create/modify an OVS Bridge? Is it possible to set mtu=9000 in the GUI when I create/modify an OVS IntPort...
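
    Outside the GUI, the MTU can be set on OVS interfaces directly; a minimal sketch, assuming OVS 2.6+ (which supports mtu_request) and illustrative interface names:

        # Ask OVS for MTU 9000 on the bridge-internal interface and the uplink
        ovs-vsctl set interface vmbr0 mtu_request=9000
        ovs-vsctl set interface eno1 mtu_request=9000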
  17. FreeNAS 11.1 as SAN (iSCSI)

    Hi, I have a project to use a FreeNAS 11.1 box as a SAN for a cluster of 3 Proxmox 5.2 nodes. FreeNAS will have 12 disks for VM and container storage: 6 x 8TB HDD and 6 x 480GB SSD. I will use a 960GB NVMe for ZIL and L2ARC. 2 SuperDOMs are for boot (RAID 1). In FreeNAS I will create two RAIDZ2 volumes: one...
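
    On the Proxmox side, such a box ends up as an iSCSI entry in storage.cfg. A hedged sketch with a hypothetical portal address and IQN:

        # /etc/pve/storage.cfg -- values are hypothetical
        iscsi: freenas-san
                portal 192.168.20.10
                target iqn.2005-10.org.freenas.ctl:proxmox
                content none    # don't use LUNs directly; LVM is typically layered on top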
  18. Ceph pool numbers

    Hi, in a Ceph cluster of 3 nodes, each of which has 4 HDD disks and 4 SSD disks, is it better to create a single pool for each VM/CT or to use a small number of larger pools? In other words, is it better to have many small pools or a few larger pools? Thank you very much
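
    The usual sizing input here is placement groups rather than pool count; the rule of thumb is (OSDs x 100) / replica size, rounded to a power of two. A sketch for this 24-OSD, replica-3 cluster (pool names and the HDD/SSD split are illustrative):

        # 24 OSDs * 100 / 3 replicas = 800 -> round to 1024 PGs in total
        ceph osd pool create vm-hdd 512 512 replicated
        ceph osd pool create vm-ssd 512 512 replicated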
  19. Ceph and local cache on SSD

    Hi, initially we will have about 150-180 VMs/CTs, some of which will be very write-intensive (databases). Cluster 2 (Ceph) will use replica 3, so I can lose 2 nodes without losing data. As soon as possible I will add at least one more node. Thank you
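
    One nuance worth checking on such a pool: with size 3 and the default min_size 2, data survives two node failures, but client I/O pauses while fewer than 2 replicas are up. A sketch with an illustrative pool name:

        ceph osd pool set vmpool size 3        # keep 3 copies of each object
        ceph osd pool set vmpool min_size 2    # I/O blocks below 2 live copies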