Hello,
Have had an issue with one, single, live migration of a VM. This VM has been live migrated a few times before without issues, both from and to this same host. Many other VMs live migrate without issues (we've done 1000+ live migrations in...
Never ever use apt upgrade on PVE: always use apt dist-upgrade or its synonym apt full-upgrade, as detailed in the docs you linked.
That said, if you follow those steps apt will update all packages, not just the Ceph ones, which isn't what OP asked...
Although I would set up two clusters, if you really want one cluster just set up corosync links in vlans and place said vlans on the available physical links on each host. It doesn't make sense for those "remote" nodes, as it won't provide any real...
Nice to see this reaching the official documentation!
Maybe OP did set up a vlan for the Ceph Public network with a different IP network from that of other cluster services and can just move the vlan to a different physical nic/bond. Did you...
Ceph Public is the network used to read/write from/to your Ceph OSDs from each PVE host, so you are limited to 1GB/s. Ceph Cluster network is used for OSD replication traffic only. Move Ceph Public to your 10GB nic and there should be an...
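For reference, both networks are set in ceph.conf; a minimal sketch, assuming example subnets (10.10.10.0/24 on the fast nic, 10.10.20.0/24 for replication; adjust to your own addressing):

```ini
# /etc/ceph/ceph.conf (excerpt; subnets are placeholders for illustration)
[global]
    # Client/OSD I/O from each PVE host: put this on the 10GB nic/bond
    public_network = 10.10.10.0/24
    # OSD-to-OSD replication traffic only
    cluster_network = 10.10.20.0/24
```

After changing these, the OSD/MON daemons need to be restarted to pick up the new networks, and the MON addresses must actually live in the public network.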
You can do this with ZFS, albeit manually (not from the webUI). You could also create two 5-way mirrors with 5 disks each, then create a RAID0 with those two vdevs. Something like choosing striping "vertically" or "horizontally". No idea on how it would...
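A sketch of the two layouts with zpool create, using 10 disks and placeholder device names (sda..sdj are hypothetical; use /dev/disk/by-id paths in practice):

```shell
# "Horizontal": five 2-way mirrors, striped together (classic RAID10-style pool)
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh \
  mirror /dev/sdi /dev/sdj

# "Vertical": two 5-way mirrors, striped together (as described above)
zpool create tank \
  mirror /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
```

ZFS always stripes across top-level vdevs, so both pools are "RAID0 over mirrors"; they just trade redundancy per vdev against usable capacity.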
Because it has just 750GB and I bet that all metadata is already cached. Try rebooting the server and running GC again or try running GC on a 140TB datastore on BTRFS. I've done such tests and performance is similar to ZFS. Not to mention that...
IIUC what you propose: that makes little sense, and the setup you describe can be accomplished with local sync jobs (SSD datastore where backups are done, then sync them to HDD for "archival").
The performance issue with HDDs on PBS isn't backup...
Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older...
Sorry for that, but it seems you didn't understand my point either. No need to convince me about anything: use bugzilla to explain your use case to the devs so they can decide what should be improved. I know how PVE's HA works, it's ok for me...
I don't agree with that statement: it's not a bug, it's how HA has always worked and requires shared storage (i.e. Ceph, NFS, CIFS, SAN, local ZFS + replication) [1]. As HA is right now, it's fully the admin's responsibility to check the systems...
A bit of good news to kick off the new year: our pull request addressing the iSCSI DB consistency/compatibility issue has been accepted by the Open-iSCSI maintainers. This means the fix will be included upstream and should make its way into a...
Don't know what you are referring to. Do you have a link to the change you mention? Also, resurrecting a year+ old post without providing details isn't that useful. In the meantime there have been improvements for both restores and verification...
This alone doesn't provide enough information: how many OSDs do you have in each of your hosts? Are they full disks or did you use partitions in each disk?
16 consumer NVMe drives. Any write Ceph does is sync and any drive without PLP will show high latency and, once its cache fills, poor sequential performance. Keep in mind that you have to write to 3 disks and besides data itself it has to write...
A status update on this:
Two corosync parameters that are especially relevant for larger clusters are the "token timeout" and the "consensus timeout". When a node goes offline, corosync (or rather the totem protocol it implements) will need to...
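As a sketch, both parameters live in the totem section of corosync.conf; the values below are illustrative examples, not recommendations, and must fit your cluster size and network:

```ini
# /etc/pve/corosync.conf (excerpt; values are examples only)
totem {
  version: 2
  # token: time in ms to wait for the token before declaring a node lost.
  # corosync also adds token_coefficient (per node above two) on top of
  # this base value, so the effective timeout grows with cluster size.
  token: 10000
  # consensus: time in ms allowed to reach consensus before starting a new
  # membership round; per corosync.conf(5) it defaults to 1.2 * token.
  consensus: 12000
}
```

On PVE, remember to bump the config_version when editing this file so the change propagates to all nodes.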