You can use the (very high?) performance drives as an additional pool. You may mix them in with the other drives but it is recommended to have drives of similar performance in each pool.
In order to have more than 1 OSD per disk you partition the disks and use each partition for the whole OSD...
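On recent Ceph releases you may not even need to partition manually; `ceph-volume` can split a device into several LVM-backed OSDs for you. A sketch (the device path is a placeholder, adjust to your hardware):

```shell
# Hypothetical device path -- replace with your actual NVMe disk.
# Creates two LVM-backed OSDs on one physical device in a single step,
# avoiding manual partitioning.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```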
I don't think you will gain anything from having NVMe cache disks when you already have NVMe storage disks. Ceph offers both tiered cache storage and the option of placing the different parts of an OSD (DB + WAL + data) on different disks. In both of those scenarios I think you will...
I have the same issue with live migrations. Since PVE 7.2, live migration hangs my Linux VMs when migrating from an Epyc gen3 "Milan" node, regardless of the target node. I don't see it when migrating from an Epyc gen1 node.
It never happened before 7.2 and kernel 5.15.
If I go back to the 5.13...
I might have gotten something related to this as well, but on small disks. CPU stalls, qemu-agent involved.
See https://forum.proxmox.com/threads/rcu-info-rcu_sched-self-detected-stall-on-cpu.109112/post-469315
I have also gotten this on some machines after the upgrade to PVE 7.2 and kernel 5.15.
[155118.277548] INFO: rcu_sched self-detected stall on CPU
[155118.277575] 0-...: (2 GPs behind) idle=6ef/1/0 softirq=1813405/1813406 fqs=0
[155118.277590] (t=334307 jiffies g=1395437 c=1395436...
Debian 9 VM
1 CPU core spiked sometime during the upgrade and didn't go down for almost 12 hours until reset. Average CPU graph for the last month looks like a hockey stick. Storage is Ceph RBD.
Not much to go on but here's the VM config, slightly redacted.
agent: 1
balloon: 1024
bootdisk...
I have had a couple of VMs misbehaving after upgrading to PVE 7.2. The ones affected were all live migrated from a 7.1 node (node A) to an upgraded and rebooted 7.2 node (node B), and later back to the upgraded and rebooted 7.2 node (node A again).
The symptoms were high CPU usage on at least...
I think your issue is that you have 4 MON nodes, which means that after 2 nodes are down your cluster is no longer quorate: 50% of the cluster is down, and the remaining 50% cannot be certain it is the "surviving" part of the cluster rather than one side of a split-brain scenario.
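The majority rule behind this is simple arithmetic: quorum requires floor(n/2) + 1 monitors. A sketch (plain shell, not a Ceph command):

```shell
# Quorum needs a strict majority of monitors: floor(n/2) + 1.
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n MONs: quorum=$quorum, tolerates $(( n - quorum )) failure(s)"
done
```

Note that 4 monitors tolerate no more failures than 3 do, which is why odd monitor counts are usually recommended.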
I...
Short answer: Yes.
You can have a primary PBS instance with high-performance drives but relatively few backups and a short retention per VM, and one or more secondary PBS instances with larger storage, more backups, and longer retention. I would actually recommend having 2...
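A pull-style sync job on the secondary could be set up along these lines (remote name, host, credentials, and store names are placeholders; the commands follow `proxmox-backup-manager`'s remote and sync-job interface):

```shell
# On the secondary PBS: register the primary as a remote...
proxmox-backup-manager remote create primary-pbs \
    --host 192.0.2.10 --auth-id 'sync@pbs' --password 'secret'
# ...then pull its backups into the larger local datastore on a schedule.
proxmox-backup-manager sync-job create pull-from-primary \
    --store big-store --remote primary-pbs \
    --remote-store fast-store --schedule daily
```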
I get the following errors for each VM:
sync group vm/101 failed - missing field `protected`
This started happening after updating a secondary PBS server to the latest non-enterprise version 2.0.14-1. The primary, and backup source of the sync, is on 1.1.13-3 enterprise.
The primary will be...
Seems like you also need to restart certain PVE services for the UI to show Ceph-related data after disallowing insecure reclaims (unless you reboot). I restarted pvedaemon and pvestatd and that seemed to be enough, but perhaps there are more?
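For reference, the restart I did on the node (standard systemd units on PVE):

```shell
# Restart the API daemon and the status daemon so the GUI picks up
# the new Ceph health state without a reboot.
systemctl restart pvedaemon pvestatd
```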
The SCSI controller should be set to "VirtIO SCSI" and the NIC to "VirtIO". Changing the NIC is pretty easy, but changing the controller can be tricky since it will cause Windows to fail to boot most of the time because of the changed disk paths.
Before you start doing that perhaps you could build a new...
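For reference, both changes can be made with `qm` (VM ID 101 and the bridge name are placeholders; make sure the VirtIO drivers are installed in Windows first):

```shell
# Switch the NIC to VirtIO (the easy part).
qm set 101 --net0 virtio,bridge=vmbr0
# Switch the controller to VirtIO SCSI -- expect Windows boot problems
# unless the drivers are installed and the disk is reattached as scsi0.
qm set 101 --scsihw virtio-scsi-pci
```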
How is your hardware configuration on the VMs? And did you install the VirtIO drivers? Without them Windows VMs perform very poorly in my experience. That would explain the poor IO performance even on SSDs.
I have also gotten this a few times, and the reason has been that some tasks create multi-gigabyte files in the "/var/log/proxmox-backup/tasks/" directory, not that the datastore has been exhausted. In my scenario it is because the datastore is remote and there were network issues causing...
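A quick way to spot the offending task logs (the path is the one from the post; override `TASK_DIR` if your setup differs):

```shell
# Default to the PBS task-log directory; override TASK_DIR for other setups.
TASK_DIR=${TASK_DIR:-/var/log/proxmox-backup/tasks}
# List task logs larger than 1 GiB -- these, not the datastore chunks,
# were what filled the filesystem in my case.
find "$TASK_DIR" -type f -size +1G -exec du -h {} + 2>/dev/null
```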