It means you have something that intermittently takes too much time for the pvestatd daemon to query (a mountpoint, or some other info shown in the GUI). Usually slow disks or an unreliable NFS mount.
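A quick way to check is to time the suspect mountpoint from the shell (the path below is just an example, substitute your own storage mount); if it stalls for more than a second or two, pvestatd will show exactly this behaviour:

# time how long the mountpoint takes to answer
time stat -f /mnt/pve/mynfs
time ls /mnt/pve/mynfs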
So I tested all the BIOSes up to 3.1 and still no go. I'm inclined to say something was broken in the kernel rather than in MB support. Is there anything out of the ordinary to try?
EDIT:
Got it working with the 3.4 BIOS.
Under the PCI/PCIe settings there is SR-IOV. This needs to be...
Same problem. I know it worked before (and it was the 3.1 BIOS indeed). The machines are now in production and did not need IOMMU until now. The BIOS has been updated to 3.4 in the meantime. It's an X10DRi-T4+ MB, which has basically the same BIOS. I did all the same things as the OP.
root@blake:~# dmesg | grep -e DMAR -e...
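For completeness, the usual checklist on these hosts is roughly this (a sketch assuming legacy GRUB boot; with systemd-boot the parameters go into /etc/kernel/cmdline instead):

# /etc/default/grub - enable the IOMMU (intel_iommu=on is for Intel boards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# regenerate the boot config and reboot
update-grub

# after the reboot, confirm the IOMMU came up
dmesg | grep -e DMAR -e IOMMU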
If you follow the Ceph upgrade guide exactly, it is all good. I made the upgrade. Yes, it's OK to have some nodes running the older version; it gets reported on the Ceph status page too, but it works. Do the upgrade as the guide says and restart the daemons.
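Roughly, the per-node part boils down to this (a sketch, not a replacement for the official guide; monitors first, then OSDs, one node at a time):

# check which daemons still run the old version
ceph versions

# on each node, after the packages are upgraded
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target

# wait for HEALTH_OK before moving to the next node
ceph -s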
Same here on my node, a little 1L server running NVMe+SSD 1TB ZFS. The machine is just idle, no VMs running. Something seems to add +1 to the load average on top of what already happens. CPU usage is nonexistent.
On my other nodes (44c/88t CPUs) I don't see any difference.
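In case it helps anyone debugging the same thing: processes stuck in uninterruptible sleep count toward the load average even with zero CPU usage, so that is the first thing to look for:

# list processes in D state - each one adds +1 to the load average
ps -eo state,pid,comm | awk '$1=="D"'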
Well, initially it seemed the fix to get the managers running was to:
mkdir /usr/lib/ceph/mgr
But now it complains that modules are not available:
HEALTH_ERR: 10 mgr modules have failed
Module 'balancer' has failed: Not found or unloadable
Module 'crash' has failed: Not found or unloadable
Module...
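Two checks that might narrow this down (the package name is the standard Debian one for the Ceph mgr modules, adjust to your release):

# which modules ceph sees and which are enabled
ceph mgr module ls

# whether the module files actually shipped with the package
dpkg -L ceph-mgr-modules-core | grep /usr/share/ceph/mgr | head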
After the 8.0.4 to 8.1 upgrade my Ceph managers won't start anymore:
Nov 25 05:35:02 quake systemd[1]: Started ceph-mgr@quake.service - Ceph cluster manager daemon.
Nov 25 05:35:02 quake ceph-mgr[166427]: terminate called after throwing an instance of 'std::filesystem::__cxx11::filesystem_error'
Nov...
For me it was an RTL8111/8168/8411 driver issue. I saw quite a few people with the same problem, and it seems to boil down to power management of the network card. If that was turned off, it would keep working normally (at the expense of slightly higher power consumption at idle). I moved on from this...
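One way to turn that power management off (a sketch; the PCI address is an example, take yours from lspci, and other knobs such as ASPM may be the actual culprit on some boards):

# find the Realtek NIC's PCI address
lspci | grep -i ethernet

# disable runtime power management for that device
echo on > /sys/bus/pci/devices/0000:03:00.0/power/control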
I have the same problem, except I'm trying to add storage and get the access denied error. I can connect from the command line with smbclient just fine. Running Proxmox 8.0.4.
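In case it helps, the equivalent of the GUI dialog from the shell would be something like this (storage name, server, share, user and password are placeholders; domain/smbversion may also need to be set depending on the server):

pvesm add cifs mycifs --server 192.168.1.10 --share backup --username smbuser --password 'secret'
pvesm status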
I believe the permission denied error came from you not having quorum - you should have at least 3 nodes so that quorum works. If you have two, as soon as one breaks the whole cluster goes read-only, because the remaining node has no confirmation whether it is "in" or "out". A third node can...
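You can confirm it with the standard commands below (the expected-votes override is only a temporary escape hatch until the other node is back):

# shows whether the cluster currently has quorum and the expected votes
pvecm status

# temporary workaround on a broken 2-node cluster: let one node reach quorum on its own
pvecm expected 1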
Putting this out there as experience - I saw around a 2x worse compression ratio if the guest system used XFS and was put on ZFS storage in Proxmox.
Just making an LVM storage on Proxmox and using ZFS in the guest yielded around 2x better compression. This difference probably comes because of...
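If someone wants to compare on their own pool, ZFS reports the achieved ratio directly (the dataset name is an example):

zfs get compressratio,compression rpool/data/vm-100-disk-0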
I'm running 10GbE Ethernet.
All NVMe drives are in either dual or quad carriers that go into PCIe x8 or x16 slots (using PCIe bifurcation). They either have separate forced cooling or a full aluminium double-sided block heatsink on the whole assembly.
Temps are also monitored to ensure that there is no...
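A quick way to read those NVMe temps, if anyone wants to replicate this (assuming nvme-cli is installed; the device name is an example):

nvme smart-log /dev/nvme0 | grep -i -e temperature -e percentage_used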
With all respect, that was not the question. "Works" is a vague statement. I'm pretty sure it works on USB flash drives. I'm asking if someone has run such a setup on el cheapo drives, 2 drives per node etc., and what the performance on direct IO (no random IO) would look like.
So I'm slowly getting to the point where my small cluster is becoming important enough to need redundancy. I'm already running local storage on ZFS with Samsung entry-level NVMe's and performance is great. But I'm looking at moving my mechanical backup to something more "solid". So as I know that Ceph...
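To make the direct IO question concrete, something along these lines would be the benchmark I'm after (a fio sketch; the target path and sizes are placeholders, don't point it at data you care about):

fio --name=seqwrite --filename=/mnt/test/fio.bin --rw=write --bs=1M --size=4G \
    --direct=1 --ioengine=libaio --iodepth=8 --numjobs=1 --group_reporting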
Cool... I was able to upgrade half of the cluster to the 5.19 kernel and saw that the migration issues between AMD and Intel disappeared. But now, a few days later, it seems the 5.19 kernel has been scrapped and replaced with the 6.1 kernel. Oh well. Nice timing on my side. Restart all the testing.
I just found that my old node's data was still available under the /etc/pve/nodes path. So just to test it out, I created a dummy 1.conf file in the qemu-server subfolder and it immediately added this node to the list of nodes in the web UI. Of course it shows with a question mark. After removing the...
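For anyone hitting the same leftover entries, the stale node can usually be cleaned up like this ("oldnode" is a placeholder; only do this for a node that has really been removed from the cluster):

# remove it from the cluster configuration if it is still listed there
pvecm delnode oldnode

# then drop its leftover directory from the cluster filesystem
rm -r /etc/pve/nodes/oldnode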
I can't help much, as I don't understand what information you are seeking on this matter.
https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
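In short, what that article boils down to is this (it only drops reclaimable caches; the kernel normally manages this on its own, so it is rarely needed):

# flush dirty pages, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches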