Hi SamCarson,
We have two 2-node clusters, one on PVE 5.3 and the other on PVE 6.2, running on HPE hardware (disk controller in HBA mode + 8 HPE enterprise-class SSDs). We encounter many instabilities.
Sometimes an SSD is marked FAULTED by ZFS and the CRC counter increments in the smartctl output.
Sometimes an...
Hi Pavel, Holr,
We use HP DL360 Gen9 servers (8 SAS SSDs + P840ar controller) under Proxmox 5.3 and Proxmox 6.2 in HBA mode and encounter many instabilities.
Sometimes an SSD is marked FAULTED by ZFS and the CRC counter increments in the smartctl output.
Sometimes an SSD is abruptly ejected by the hpsa driver...
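For reference, this is roughly how we check the counters; zpool and smartctl are the stock PVE tools, and /dev/sda is just a placeholder for the suspect disk:

# Show pool health and which vdev is FAULTED, with per-device errors
zpool status -v

# Check the CRC/error counters on a suspect disk (placeholder device)
smartctl -a /dev/sda | grep -iE 'crc|error'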
Hello everyone, and Happy New Year!
When it is not possible to use the motherboard's embedded disk controller, which HBA card is recommended for a ZFS replication setup?
Is there another possible technology, like an NVMe card?
Does anyone have good controller card experience to share?
Thank you
Thank you for your attention
I'm not sure that this fixes the bug, which is why I would like to test it.
The link to the release changelog:
https://sourceforge.net/projects/cciss/files/hpsa-3.0-tarballs/
root@CLIPVE03:~/hpsa-buildir/hpsa-3.4.20/drivers# ls -lah /usr/src/hpsa-3.4.20.188
total 513K...
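For anyone who wants to reproduce the test, this is roughly the build sequence we use; the exact tarball name and directory layout may differ depending on the release you download, so adjust the paths to where the Makefile actually sits:

# Build prerequisites on PVE: toolchain + headers for the running kernel
apt install build-essential pve-headers-$(uname -r)

# Build the hpsa module out-of-tree against the running kernel
# (source unpacked under /usr/src/hpsa-3.4.20.188 as in the listing above)
cd /usr/src/hpsa-3.4.20.188/drivers/scsi
make -C /lib/modules/$(uname -r)/build M=$PWD modules

# Verify the version of the freshly built module before loading it
modinfo ./hpsa.ko | grep -i version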
Hello,
On PVE 5.3 and PVE 6.2 we experience many disk errors with the P840 hardware disk controller and hpsa module versions 3.4.20.170 and lower.
Randomly, the driver does not respond to I/O requests, which results in a disk reset.
A higher version (3.4.20.188) of hpsa seems to fix the problem...
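To compare versions before and after the update, the standard module tools are enough:

# Version of the hpsa module shipped with the running kernel
modinfo hpsa | grep -i version

# Version of the module actually loaded right now
cat /sys/module/hpsa/version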
Hi,
Thank you for your reply.
I think this is not related.
In my understanding, the rsyslog bug does not change the system clock but just puts wrong timestamps in the syslog data lines.
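A quick way to confirm that only the log timestamps are off while the clocks themselves stay correct (standard tools, nothing PVE-specific):

# System clock, RTC and NTP sync status side by side
timedatectl

# Hardware clock directly, to compare against the syslog timestamps
hwclock --show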
Hello,
Last Saturday I saw for the first time a strange system time change behavior on a Proxmox 5.4 server.
At 05:15, systemd reported in syslog a time change to 14 Jun 07:38.
The HA components forced a node reboot after the system watchdog failed.
After the reboot, the system clock and hwclock are good and the cluster is...
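For anyone hitting the same thing, these are roughly the traces we grep for; the "Time has been changed" message comes from systemd, while the exact watchdog lines depend on your HA setup:

# Clock jumps recorded by systemd around the incident
grep -i 'time has been changed' /var/log/syslog*

# Watchdog traces before the forced reboot
grep -iE 'watchdog.*(expired|fail)' /var/log/syslog*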
Hello,
We have already been using Proxmox 5.3 in production for many years and we are very satisfied.
We plan to add a new Proxmox setup based on HA with two nodes + a QDevice, as explained in the wiki.
As the data backend we expect to use a ZFS pool (RAID10) because it offers VM replication.
I'm looking for the...
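For context, this is the kind of layout we have in mind; a minimal sketch where the disk names and the QDevice address are placeholders:

# RAID10-style pool: two mirrored pairs, striped (placeholder disks)
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Register the external QDevice as the extra vote (placeholder IP)
pvecm qdevice setup 192.0.2.10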
Thank you for your reply
It is a good approach but not applicable in our environment.
So in future releases, how will PVE natively reach 10+ Mpps?
Is this a subject on the roadmap?
Is PVE researching a solution based on Open vSwitch, on the Linux kernel, or on a userspace network stack?
We currently use a 2-node cluster on Proxmox 5.4 with local ZFS storage.
VMs are replicated with PVE native replication jobs.
We have two migration cases.
The first one uses offline migration:
the VM is shut down after a fresh replication task and started on the remote node.
The second one uses online...
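To illustrate, the two cases roughly map to these commands; the VM ID (100), the replication job ID and the node name are placeholders:

# Case 1: offline - run the replication job, stop, migrate, start on the target
pvesr run --id 100-0
qm shutdown 100
qm migrate 100 node2
qm start 100    # on node2, after the migration

# Case 2: online - live migration
qm migrate 100 node2 --online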
I am impressed every day by the quality of Proxmox, whose main asset is its stability. Thank you for this work.
Currently, on Proxmox 5.4 we are not able to reach 10G and beyond.
The main reason is the Linux kernel network stack design, which implies high context-switch rates that hurt performance.
The NIC...
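For what it's worth, multiqueue virtio-net is one knob that can spread the interrupt and softirq load over several vCPUs; a sketch where the VM ID and queue count are placeholders:

# Give the virtio NIC one queue per vCPU (placeholder VM ID 100)
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# Inside the guest, verify the queues are active
ethtool -l eth0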
Hello proxmox community,
We plan to migrate hundreds of VMs from our old VMware platform to KVM.
Proxmox appears to be an ideal compromise, but I'm experiencing a CPU load problem.
On the Proxmox host, a single wget download consumes 12% of a single CPU at 11 MBytes/sec (1518-byte packet size).
On Proxmox KVM...
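For reproducibility, this is roughly how the load can be measured with standard Debian tools; hostnames are placeholders:

# Generate a controlled TCP stream instead of wget
iperf3 -s                  # on the receiving host
iperf3 -c pve-host -t 30   # on the sending host

# Per-process CPU usage and context switches during the transfer
pidstat -u -w -p $(pgrep iperf3) 1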