Hi,
I've now read this post and, as I wrote in this post https://forum.proxmox.com/threads/proxmox-5-4-stops-to-work-zfs-issue.63849/#post-298631
I have exactly the same problem (last time two days ago) and I don't know what to do anymore. I have updated the BIOS and Proxmox (with pve-subscription)...
Hi,
I have disabled swap with "swapoff -a" and now... hopefully all goes well...
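One note in case others follow along: swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab also has to be commented out (a sketch; /dev/pve/swap is just the typical device name on a default PVE install):

```
# /etc/fstab - comment out the swap line so it is not re-enabled at boot
# /dev/pve/swap none swap sw 0 0
```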
I also noticed in kern.log the following message:
Feb 7 14:11:50 dt-prox1 kernel: [57824.528963] perf: interrupt took too long (4912 > 4902), lowering kernel.perf_event_max_sample_rate to 40500
What is it?
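In case it's useful to others reading: that perf message is informational, not an error. The kernel noticed that its perf sampling interrupts were taking longer than the allowed CPU budget and automatically lowered kernel.perf_event_max_sample_rate; it is normally harmless and unrelated to a hang. If the repeated messages bother you, a sysctl can pin the rate explicitly (a sketch; the value 40000 is just an example):

```
# /etc/sysctl.d/99-perf.conf - optional, only quiets the auto-tuning message
kernel.perf_event_max_sample_rate = 40000
```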
If it can help you, this is arc_summary (part 2):
ZFS Tunables:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 104857600
dbuf_cache_max_shift 5...
If it can help you, this is arc_summary (part 1):
------------------------------------------------------------------------
ZFS Subsystem Report Thu Feb 06 07:44:18 2020
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 13.13M...
Hi,
today the problem occurred again and I had to restart the server. As I wrote in my previous post, I have updated the BIOS to the latest version available and Proxmox is also up to date. The only strange thing is that my kernel is 4.15.18-24-pve and not 4.15.18-52-pve as suggested by...
Hi,
as suggested by t.lamprecht, I installed the Intel microcode and updated the BIOS and Proxmox with apt dist-upgrade, but my running kernel is still 4.15.18-24-pve and not 4.15.18-52, as you can see from pveversion -v:
proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve)
pve-manager: 5.4-13...
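One thing I still need to check is whether the newer kernel package is installed but simply not the one booted; a sketch of the commands (package names assume a PVE 5.x host):

```shell
# Which kernel is running vs. which pve-kernel packages are installed
uname -r
dpkg -l 'pve-kernel-4.15*' 2>/dev/null | awk '/^ii/ {print $2}'

# Compare two kernel versions with GNU sort -V: if the target sorts last,
# a newer kernel is installed and a reboot should boot into it
printf '%s\n%s\n' "4.15.18-24" "4.15.18-52" | sort -V | tail -n1
```

If the last command prints the newer version while uname -r still shows the older one, a reboot (and possibly a check of the GRUB default entry) is what's missing.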
I forgot to specify that in the GUI, when I had the problem, the IO delay was high, about 18%. Now that the server has no problems, the IO delay is 0.05% - 0.4%.
zpool iostat
capacity operations bandwidth
pool alloc free read write read write
---------- ----- -----...
Thank you t.lamprecht, I'll do what you suggested tomorrow night.
Now I see this message on the screen:
[ 1448.513043] kvm [54271]: vcpu1, guest rIP: 0xfffff80250fb6582 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x1 nop
while in kern.log:
kernel: [ 7630.723176] perf: interrupt took too long...
Hi,
I have a single node with Proxmox 5.4-13, and tonight it stopped working. I had to hard reboot the node...
I have 3 ZFS pools (one for Proxmox in RAID 1, one for my HDD disks in RAIDZ2 and one for my SSD disks in RAIDZ2); all the pools are online and scrub is OK.
pve version is...
Hi,
in a few weeks I will have to set up a 4-node cluster with Ceph. Each node will have 4 NVMe OSDs, and I would like to use RoCE for the Ceph public network and the Ceph cluster network. My network card is a Supermicro AOC-M25G-m4s, practically a Mellanox ConnectX-4 Lx, with 4 x 25Gb ports. I would like to...
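For the RDMA side, what I had in mind is roughly the ceph.conf fragment below. Note this is only a sketch: the async+rdma messenger is still marked experimental in the Ceph releases shipped with Proxmox, and the device name mlx5_0 is an assumption for a ConnectX-4 Lx:

```
# /etc/ceph/ceph.conf - sketch only; async+rdma is experimental
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0
```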
Hi,
yes, I tested multicast with omping and the multicast and unicast packet loss is 0% (with the firewall disabled). My switches are Cisco Nexus 3064 and the configuration is (VLAN 15 is the management VLAN):
vlan configuration 15
ip igmp snooping querier 192.168.15.253
ip igmp snooping...
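For reference, the omping invocations I used follow the Proxmox cluster documentation; they must be started on all nodes at roughly the same time (the node names are placeholders):

```shell
# Quick multicast test (run simultaneously on every node)
omping -c 600 -i 1 -q node1 node2 node3 node4

# Longer ~10 minute test, useful to catch IGMP snooping timeouts
omping -c 10000 -i 0.001 -F -q node1 node2 node3 node4
```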
Hi,
/etc/pve/nodes/prx1/host.fw
[OPTIONS]
enable: 1
[RULES]
GROUP managementipmi # Management IPMI to ManagementVM
GROUP ceph_private -i ceph23 # Ceph Private Subnet OK
GROUP ceph_public -i ceph22 # Ceph Public OK
GROUP migrationvm -i migr21 # MigrationVM Access
GROUP management -i mgmt20 #...
Hi,
I added the following rules to the firewall gui:
Rules n.1:
Direction: IN
Action: ACCEPT
Source: left blank
Destination: left blank
Macro: left blank
Protocol: udp
Source Port: left blank
Destination Port: 5404:5405
Rules n.2:
Direction: IN
Action: ACCEPT
Source: left blank...
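For anyone finding this later, rule n.1 from the GUI corresponds to the following line in the host firewall file; this is a sketch of rule n.1 only (on PVE 5.x corosync uses UDP ports 5404-5405):

```
[RULES]
IN ACCEPT -p udp -dport 5404:5405 # corosync multicast/unicast
```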
Hi,
I created a cluster of 4 nodes, and now I would like to know which rule I have to add, in the firewall GUI, to permit multicast traffic on the management subnet (192.168.15.0/24, iface vmbr0)...
Thank you very much
Hi,
I have read several posts about configuring an OVS bridge with jumbo frames (MTU 9000), but I am still confused, so I have some questions:
Is it possible to set mtu=9000 in the GUI when I create/modify an OVS Bridge?
Is it possible to set mtu=9000 in the GUI when I create/modify an OVS IntPort...
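In case it helps to make the question concrete, this is the /etc/network/interfaces fragment I expected to end up with if the GUI cannot do it; vmbr1 and eno1 are placeholder names, and the ovs_mtu option depends on the ifupdown scripts shipped with the openvswitch package:

```
# /etc/network/interfaces - OVS bridge with jumbo frames (sketch)
auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno1
    ovs_mtu 9000

allow-vmbr1 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_mtu 9000
```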