Yes, they cost more and will get really expensive in the coming months, but second-hand SATA/SAS drives are easy to find and not that costly. In the long run they end up being cheaper, as they don't degrade as fast as consumer ones, so won't need to...
Don't want to start an argument here, but whoever told you that has little idea of what a PVE Ceph mesh cluster is. Linux kernel routing may use like 0.1% of CPU, and FRR may use like 3% CPU while converging or during node boot for a few seconds. If...
Added a comment to that bugzilla post... hopefully someone from Proxmox can chime in on this issue. I'm still unsure if this is an actual bug, or if it's something that I'm doing with the config of the filters... or even something else.
When I need to, I use an AD backend and filter by groups that I create specifically to manage PVE privileges. Never had the need to use nested groups, as the environments I've used this in were not big enough to justify nesting groups or not creating...
For once, AI is right :)
Any consumer drive will have low Ceph performance due to RocksDB and sync writes, but those drives in particular are terrible for anything but PC archiving purposes due to their small SLC cache and very slow QLC NAND...
This lab is using PVE 9.1.4, no subscription (although I don't think there's anything different in the Enterprise repo in this regard). Using nested PVE to test Ceph configs, etc. When installing the nested PVE with the ISO, I chose to pin the network...
Hi,
it is a racy bug that is fixed in qemu-server >= 9.1.3 with:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=b82c2578e7a452dd5119ca31b8847f75f22fe842...
Hello,
Have had an issue with one single live migration of a VM. This VM has been live migrated a few times before without issues, both from and to this same host. Many other VMs live migrate without issues (we've done 1000+ live migrations in...
Never ever use apt upgrade on PVE: always use apt dist-upgrade or its synonym apt full-upgrade, as detailed in the docs you linked.
That said, if you follow those steps apt will update all packages, not just the Ceph ones, which isn't what OP asked...
Although I would set up two clusters, if you really want one cluster just set up corosync links in VLANs and place said VLANs on the available physical links on each host. It doesn't make sense for those "remote" nodes, as it won't provide any real...
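As a minimal sketch of that idea, a dedicated VLAN for one corosync link might look like this in /etc/network/interfaces (the interface name, VLAN tag, and address range here are made-up examples, not anyone's actual values):

```
# /etc/network/interfaces sketch: dedicated VLAN for a corosync link
# eno1, VLAN tag 50 and 10.50.0.0/24 are hypothetical examples
auto eno1.50
iface eno1.50 inet static
    address 10.50.0.11/24
```

The same VLAN would then be carried on the trunk/switch ports toward the other nodes, and the corresponding address referenced as a link in /etc/pve/corosync.conf.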
Nice to see this reaching the official documentation!
Maybe OP did set up a VLAN for the Ceph Public network with a different IP network from that of the other cluster services, and can just move the VLAN to a different physical NIC/bond. Did you...
Ceph Public is the network used to read/write from/to your Ceph OSDs from each PVE host, so you are limited to 1 Gb/s. The Ceph Cluster network is used for OSD replication traffic only. Move Ceph Public to your 10 Gb NIC and there should be an...
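For reference, splitting the two networks comes down to these two settings in ceph.conf; the subnets below are made-up examples, not OP's actual ranges:

```
[global]
    public_network  = 192.168.10.0/24   # client <-> OSD traffic (put this on the 10 Gb NIC)
    cluster_network = 192.168.20.0/24   # OSD <-> OSD replication traffic only
```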
You can do this with ZFS, albeit manually (not from the webUI). You could also create two 5-way mirrors with 5 disks each, then create a RAID0 across those two vdevs. Something like choosing striping "vertically" or "horizontally". No idea on how it would...
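A sketch of that layout from the CLI, with placeholder disk names: ZFS stripes across all top-level vdevs automatically, so putting two mirror vdevs in one pool gives the RAID0 part for free:

```
# Hypothetical pool of 10 disks: two 5-way mirror vdevs, striped by ZFS
zpool create tank \
  mirror /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
```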
Because it has just 750GB and I bet that all metadata is already cached. Try rebooting the server and running GC again, or try running GC on a 140TB datastore on BTRFS. I've done such tests and performance is similar to ZFS. Not to mention that...
IIUC what you propose: that makes little sense, and the setup you describe can be accomplished with local sync jobs (an SSD datastore where backups are done, then sync them to HDD for "archival").
The performance issue with HDDs on PBS isn't backup...
Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older...
Sorry for that, but it seems you didn't understand my point either. No need to convince me of anything: use Bugzilla to explain your use case to the devs so they can decide what should be improved. I know how PVE's HA works; it's ok for me...