Search results

  1. [SOLVED] All nodes with VMs crash during backup task to Proxmox Backup Server

    Not entirely sure if this is saying much of use:
    -- Journal begins at Thu 2021-07-08 11:26:31 PDT, ends at Fri 2021-10-08 08:05:08 PDT. --
    Oct 08 01:35:56 pvenode1 systemd[1]: Starting The Proxmox VE cluster filesystem...
    Oct 08 01:35:56 pvenode1 pmxcfs[6572]: [quorum] crit: quorum_initialize...
  2. [SOLVED] All nodes with VMs crash during backup task to Proxmox Backup Server

    Further investigation suggests a corosync congestion issue with the current layout. Is there a good place to look for logging regarding corosync errors?
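
    For what it's worth, on a stock Proxmox node both corosync and pmxcfs log to the systemd journal, so a reasonable first stop looks like this (the time window is only an example):

      journalctl -u corosync -u pve-cluster --since "1 hour ago"   # cluster stack messages
      corosync-cfgtool -s                                          # link/ring status on the local node
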
  3. [SOLVED] All nodes with VMs crash during backup task to Proxmox Backup Server

    Every node running VMs suddenly reboots during our nightly backup task to a Proxmox Backup Server installation. Package Versions: proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve) pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e) pve-kernel-helper: 7.1-2 pve-kernel-5.11: 7.0-8...
  4. Ceph performance with simple hardware. Slow writing.

    @adriano_da_silva From my experiences with Ceph, I prefer it for the rebalancing and reliability it has offered. I have put a lab ceph setup through hell with a mix of various drives, host capabilities, and even uneven node networking capabilities. The only thing it did not handle well (and...
  5. Flexible number of nodes in a Proxmox cluster

    I want to bring up that Ceph cluster again. Are you expecting there to be OSDs on the nodes you power off to save power? If so Ceph will have to rebalance every time you turn servers off/on, and that will have considerable wear on your storage medium, slow down your storage pools, and cause...
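
    For a planned shutdown, the usual way to keep Ceph from rebalancing every time is the noout flag (a sketch; run from any node with a Ceph admin keyring):

      ceph osd set noout     # down OSDs are not marked "out", so no rebalance starts
      # ... power the node off, do the maintenance, power it back on ...
      ceph osd unset noout   # restore normal recovery behaviour
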
  6. Ceph is not configured to be really HA

    This sounds like a fundamental misunderstanding of Ceph, which is not a Proxmox product.
  7. Proxmox VE 7.0 released!

    Well regardless, I suggest starting your diagnostics with vda1 since it is having I/O errors and buffer issues. It is likely what is causing both your storage issues and the massive amount of IO Wait I can see in the CPU graph. What is vda1?
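
    (vda is the guest kernel's name for its first VirtIO block device, so vda1 is the first partition on it. A quick way to confirm the I/O errors from inside the guest, assuming a util-linux dmesg:)

      dmesg --level=err,warn | grep -i vda   # kernel-side I/O errors on the virtio disk
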
  8. Proxmox VE 7.0 released!

    You've got at least one bad disk there @frox . Time to do some testing and/or replacement.
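
    If it helps, the usual disk testing here is done with smartctl (the device path is a placeholder):

      smartctl -a /dev/sdX        # dump SMART health attributes and error log
      smartctl -t long /dev/sdX   # kick off an extended offline self-test
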
  9. Proxmox VE 7.0 released!

    Upgraded a 4 node production cluster running Ceph with no downtime or issues. I did do the needful and ran the check script, ensured Prox 6.4 was fully updated, and reviewed my systems for any of the known issues/extra steps. For example, 3/4 nodes in this cluster run old boards and therefore...
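
    The check script mentioned here is pve6to7, which ships with an up-to-date Proxmox VE 6.4; a sketch of the pre-upgrade routine:

      apt update && apt dist-upgrade   # make sure 6.4 is fully current first
      pve6to7 --full                   # run the complete upgrade checklist
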
  10. [SOLVED] Datacenter summary jumps

    FYI, this is resolved in Proxmox 7, at least as of pve-manager: 7.0-9.
  11. minimal loaded server radiating unexplainable heat by power supply

    Are you sure the heat isn't coming from somewhere else in the system and just being vented by the power supply? Otherwise, do you have any bulging capacitors on that motherboard?
  12. Dell 2850 with h433 controller scsi

    Depending on the age of the system and the rarity of that hardware at this point, it very well may be that the drivers are not included in the Proxmox installer anymore. You may have to look into drivers for that system and load them into the installer. You could also try putting the controller...
  13. Reduced data availability: 1 pg inactive, 1 pg stale

    Also in case you do not get the indication well enough from the docs, I strongly suggest you do hardware testing on the drive(s) which held the malfunctioning pg(s).
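
    To find which drives actually held a problem pg, the pg can be mapped back to its OSDs (the pg id and OSD id below are placeholders):

      ceph health detail    # lists the affected pg ids
      ceph pg map 1.2f      # shows the up/acting OSD set for that pg
      ceph osd find 3       # locates OSD 3 (host, address) for hardware testing
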
  14. Dell 2850 with h433 controller scsi

    You may need to confirm what hardware you have, and if the storage controller is still functional on an older unit like that. Doing some quick searching I am unable to find an "H433" controller, but based on my testlabs I can say that other storage controllers of the same generation as that...
  15. Proxmox and VM performance are too slow, Linux VM Taking 3-4 hours and Windows VM 7-8 hours to bootup

    @akus I'm glad the SSD cache is helping with that drive. Definitely keep a backup of what's stored on that array though, since I have seen those SMR drives cause issues with cached setups before, notably on my home machine when I tried to make some use of the extra SMR disks I had laying around.
  16. Proxmox and VM performance are too slow, Linux VM Taking 3-4 hours and Windows VM 7-8 hours to bootup

    He may have more than one issue, fair, but having accidentally run VMs on that exact disk model before, I can say that my performance on an otherwise decent system was awful. The IO Wait those disks can pile up when you have a bunch of random writes will make them feel like a failed/failing...
  17. Proxmox and VM performance are too slow, Linux VM Taking 3-4 hours and Windows VM 7-8 hours to bootup

    I know exactly what your problem is. First and foremost, that Seagate ST2000DM005 is a Shingled Magnetic Recording (SMR) disk. These are only good for slow, mostly read-only storage. They have terrible random write speeds due to the technology behind SMR. There is no way to fix this, there are...
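
    The random-write collapse on an SMR drive is easy to reproduce with a short fio run once its CMR cache region is saturated (illustrative parameters; the target path is a placeholder and will be written to):

      fio --name=smr-randwrite --filename=/mnt/smr/fio.dat --size=4G \
          --rw=randwrite --bs=4k --ioengine=libaio --iodepth=16 \
          --direct=1 --runtime=120 --time_based
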
  18. Using HDD pool slows down SSD pool on the same server

    For an actual server storage controller, it can be an issue with command queue depths being filled with commands waiting on the slower HDDs. I've even seen decent server storage controllers choke a bit when there are SSDs and HDDs connected and the HDDs are loaded with a large write task...
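
    On the Linux side, the queue settings involved can at least be inspected per device to see where the contention sits (sda is just an example):

      cat /sys/block/sda/device/queue_depth   # depth negotiated with the controller
      cat /sys/block/sda/queue/nr_requests    # block-layer request queue size
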
